Integration Tests? Why? What are those? Why don’t you hire testers?!

Cosmin Vladutu
8 min readNov 13, 2021


Let’s start with some simple definitions:

Martin Fowler said that “Integration tests determine if independently developed units of software work correctly when they are connected to each other.”

“[…] Once all the components or modules are working independently, then we need to check the data flow between the dependent modules is known as integration testing.” — JavaTpoint

“Integration testing is the process of taking individual units, combine them together into the desired system and test it as a group. Integration testing may also mean testing the software along with the dependent systems such as database, message queues, etc.” — Deepsource.io

SmartBear has an article about the generally accepted testing methodology, in which testing is split into:

  • Unit testing
  • Integration testing
  • System testing
  • Acceptance testing
  • Performance testing
  • Security testing
  • Usability testing
  • Compatibility testing

John Kent also tried to split and classify test entities, and you can find his paper here. (I think anyone who writes any kind of automated tests should read it, but please refill your coffee first, since it’s not easy to grasp.)

My take is that a lot of people hold a lot of different opinions, but a few years ago I came to love a quote from Gojko Adzic: “It’s not important what you call it, but what it does” (from “The false dichotomy of tests”).

As you’ve probably already noticed, I have a small passion for testing and product quality (maybe sometimes I care more about quality and testing than about whether the product is actually used by end-users, which might be wrong). I want anyone who comes to the product I work on to enjoy their time, not to dig through Confluence, a wiki, and 20 other sources to understand what the product does, or what a specific flow does in a strange case. All of this should be easy to understand just by looking over the tests.

I remember a quote from Test-Driven Development: An Empirical Evaluation of Agile: “Tests are considered a kind of a live documentation for the production code because tests are always kept in sync with the code as opposed to typical text-based documentation.” That’s another long discussion, and a starting point for it you can find here. Now let’s get back to my opinion, and to make it easy, let’s take an example:

We have a UI/frontend/presentation layer, which we’ll name A; the API/backend, the thing we actually want to test, as B; another, unimportant system (just to make it a little more complex) as C; and our lovely database (let’s say a relational one, just to give it a type, though it’s pretty unimportant for our example) as D.

Well, from my point of view, integration tests are API tests (I hope no one is screaming now). Let’s go a little deeper and zoom in on B. Like any API, it has at least one endpoint that (most probably) calls an application layer (a business logic layer), which uses an infrastructure layer (or not) to get or save information from/into the database in some cases, and in others to call C (using the same infrastructure layer), or maybe both; it doesn’t matter.

The way I see it, you should write unit tests for each layer, so you’ll be confident that all the layers are OK and working as expected, and above them you should write integration tests, which should test the integrity of your system (in our case B, not the entire diagram).

How does it work? Well, it’s pretty simple: you call the endpoint with your desired input and test the output. In theory it sounds easy, doesn’t it? Well, it’s not. At least not the first few times you put this in place for your projects.

First of all, I don’t think you should check ALL the variants of the input and call those integration tests. It’s good for the health of the product, don’t get me wrong, but I don’t think all of that is the responsibility of the integration tests. The way I see it, you should check the obvious inputs: a valid one, an invalid one, a null one, and see whether the system behaves as you expect (check the response codes, that the content is not null, that it deserializes into what you expect, etc.). A big NO is to create, for example, only one test for “bad request” and stuff it with 40 assertions. Make your tests as small and atomic as possible; each test should check only one thing (whatever the type is).

If you test all the variants of inputs and expected outputs, and you test corner cases, in my mind that is acceptance testing. Integration tests should test the communication between all the modules/components of my system, or any other technical thing you can think of, but nothing related to the business: that is the acceptance tests’ job. Example: if an endpoint expects a bool parameter and I don’t pass any value, or I pass an int instead, the integration tests should check that the system behaves how it should. What happens when I pass true or false should be checked in the acceptance tests.
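To make “small and atomic” concrete, here is a minimal xUnit sketch of two such tests, one per behavior. All names here (the URL, TestViewModel, ClientFactory, CreateClientWithDbSetup) are placeholders in the spirit of the article’s later examples, not real project code:

```csharp
public class CreateItemEndpointTests
{
    // Hypothetical endpoint under test.
    private const string RequestUrl = "/api/items";

    [Fact]
    public async Task Post_WithValidBody_ReturnsOk()
    {
        using var factory = new ClientFactory();
        using var client = factory.CreateClientWithDbSetup<MyDbContext>();

        var response = await client.PostAsJsonAsync(RequestUrl, new TestViewModel { Id = 1 });

        // One test, one assertion target: the technical outcome.
        response.StatusCode.Should().Be(HttpStatusCode.OK);
    }

    [Fact]
    public async Task Post_WithNullBody_ReturnsBadRequest()
    {
        using var factory = new ClientFactory();
        using var client = factory.CreateClientWithDbSetup<MyDbContext>();

        var response = await client.PostAsJsonAsync<TestViewModel>(RequestUrl, null);

        response.StatusCode.Should().Be(HttpStatusCode.BadRequest);
    }
}
```

Note there is no business logic asserted here, only that the system reacts correctly to an obvious good and an obvious bad input.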

Now that you know where I stand, let’s continue: all the other systems/dependencies should be fakes, created for the test in the test initializer and destroyed after the test finishes. I don’t think any dependency should be reused between tests. You can do that, and maybe it’s fine in your case, but in 98% of cases I think it’s a bad thing. Even if reusing the same database is sometimes faster, don’t do it! It’s a bad practice and sooner or later you’ll get into trouble. It isn’t worth the risk just so your tests run faster.

In our case study, to create fake responses (stubs) for system C, I propose using WireMock. It’s pretty straightforward: you start a server at a specific location and you just tell it how to respond (with what status and what content).
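With the WireMock.Net package, stubbing system C could look roughly like this (the path and payload are invented for illustration):

```csharp
using WireMock.Server;
using WireMock.RequestBuilders;
using WireMock.ResponseBuilders;

// Start a stub server on a random free port.
var server = WireMockServer.Start();

// Tell it how to respond: which request to match, what status and content to return.
server
    .Given(Request.Create().WithPath("/api/c/status").UsingGet())
    .RespondWith(Response.Create()
        .WithStatusCode(200)
        .WithBodyAsJson(new { status = "ok" }));

// Point B's configuration for system C at server.Url for the duration of the test,
// then tear the stub down when the test finishes.
server.Stop();
```

The key part is that B talks to a real HTTP endpoint, so B’s own HTTP client code is exercised for real; only C is fake.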

For the database, in .NET Core we have three ways to do that (or three that I know of):

  • Use in-memory database
  • Use SQLite
  • Use a real SQL Database, created for each test.

You should use the one that helps you succeed best, but if you use SQL Server in production I strongly suggest you use a real SQL database (even if the fastest option is an in-memory database). To give you a little more context: both the in-memory and SQLite options have limitations. The in-memory provider isn’t a real relational database, so issues like foreign keys that don’t match the appropriate primary key won’t be caught, and if you use multiple schemas you won’t have support for them in your tests. SQLite has a similar support problem: you won’t be able to use it if you have computed columns. So even though in-memory is the fastest and SQLite is only 40–50% slower, I’d still go with real databases, unless I’m pretty sure what I’m doing there and I’m sure my DB is very, very simple. For how to configure each type you’ll find a lot of examples, and I’ll probably write a different article someday.
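As a rough sketch of what the three options look like in an EF Core `DbContextOptionsBuilder` registration (connection strings and names are made up; your setup will differ):

```csharp
// 1) EF Core in-memory provider: fastest, but not relational
//    (no foreign key enforcement, no schema support).
options.UseInMemoryDatabase("TestDb");

// 2) SQLite in-memory: relational, but e.g. computed columns are unsupported.
options.UseSqlite("DataSource=:memory:");

// 3) A real SQL Server database, one throwaway database per test.
var dbName = $"IntegrationTests_{Guid.NewGuid():N}";
options.UseSqlServer($"Server=localhost;Database={dbName};Trusted_Connection=True;");
```

For options 2 and 3, remember to create the schema (EnsureCreated or migrations) before the test and to drop the database afterwards.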

Let’s take xUnit as the testing framework, and I’ll try to explain a little, in theory, how I usually do this:

  1. I create a ClientFactory in which I have a method that creates the HttpClient I’ll use to make the API calls.
  2. That method creates my factory, a new WebApplicationFactory<Startup>(), from which the client is created.

Example:

    public HttpClient CreateClientWithDbSetup<T>(Action<T> configure = null) where T : DbContext
    {
        _factory = new WebApplicationFactory<Startup>();
        var client = _factory.WithWebHostBuilder(builder =>
        {
            builder.ConfigureTestDatabase(configure);
        }).CreateClient();
        return client;
    }

3. ConfigureTestDatabase, as you can imagine, is a custom extension method in which I replace the database connection string and even use the action to seed the database from the test level if needed.
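The article doesn’t show ConfigureTestDatabase itself, but based on that description (swap the connection string, optionally seed via the action), a sketch might look like the following. Treat every name here, especially TestSettings.ConnectionString, as an assumption:

```csharp
public static class TestDatabaseExtensions
{
    public static IWebHostBuilder ConfigureTestDatabase<T>(
        this IWebHostBuilder builder, Action<T> seed = null) where T : DbContext
    {
        return builder.ConfigureTestServices(services =>
        {
            // Replace the app's registered DbContext options with a test connection string.
            var descriptor = services.SingleOrDefault(d =>
                d.ServiceType == typeof(DbContextOptions<T>));
            if (descriptor != null) services.Remove(descriptor);

            services.AddDbContext<T>(options =>
                options.UseSqlServer(TestSettings.ConnectionString)); // hypothetical settings holder

            // Create the test database and run the optional seeding action from the test.
            var sp = services.BuildServiceProvider();
            using var scope = sp.CreateScope();
            var db = scope.ServiceProvider.GetRequiredService<T>();
            db.Database.EnsureCreated();
            seed?.Invoke(db);
            db.SaveChanges();
        });
    }
}
```

This mirrors the service-replacement pattern shown later in the article for the in-memory factory, just with a real connection string.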

4. At the test level I create the client and make the call. If needed, I can also send some “commands” to the database using the action.

    // Arrange
    var viewModel = new TestViewModel { Id = 1 };
    using var factory = new ClientFactory();
    using var httpClient = factory.CreateClientWithDbSetup<MyDbContext>();

    // Act
    var response = await PostToUrl(httpClient, RequestUrl, viewModel);

    // Assert
    response.StatusCode.Should().Be(HttpStatusCode.BadRequest);

The PostToUrl method is unimportant; it just serializes the viewModel and makes the call.
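For completeness, a PostToUrl helper along those lines might look like this (a hedged sketch using System.Text.Json, not the author’s exact code):

```csharp
private static async Task<HttpResponseMessage> PostToUrl<T>(
    HttpClient client, string url, T viewModel)
{
    // Serialize the view model as JSON and POST it to the given URL.
    var json = JsonSerializer.Serialize(viewModel);
    using var content = new StringContent(json, Encoding.UTF8, "application/json");
    return await client.PostAsync(url, content);
}
```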

Takeaways: the client should be created at the test level (or in InitializeAsync if you are using IAsyncLifetime) and destroyed at the end of the test (or in DisposeAsync). You could also pass the database name from the test level if you want more control. The connection string for the test server should live in a test settings file.

If I want to share a context:

  1. I create a CustomWebApplicationFactory<TStartup> that inherits from WebApplicationFactory<TStartup>.
  2. Inside it, I override ConfigureWebHost.

Example with an in-memory database:

    public class CustomWebApplicationFactory<TStartup> : WebApplicationFactory<TStartup>
        where TStartup : class
    {
        public readonly InMemoryDatabaseRoot DbRoot = new InMemoryDatabaseRoot();

        protected override void ConfigureWebHost(IWebHostBuilder builder)
        {
            builder.ConfigureTestServices(services =>
            {
                var serviceProvider = new ServiceCollection()
                    .AddEntityFrameworkInMemoryDatabase()
                    .BuildServiceProvider();

                var descriptor = services.SingleOrDefault(d =>
                    d.ServiceType == typeof(DbContextOptions<MyDbContext>));
                services.Remove(descriptor);

                services.AddDbContext<MyDbContext>(options =>
                {
                    options.UseInMemoryDatabase("InMemoryDb", DbRoot);
                    options.UseInternalServiceProvider(serviceProvider);
                });

                var sp = services.BuildServiceProvider();
                using var scope = sp.CreateScope();
                var scopedServices = scope.ServiceProvider;
                var appDb = scopedServices.GetRequiredService<MyDbContext>();
                appDb.Database.EnsureCreated();
            });
        }
    }

3. On the test level:

    public class MyEndpointsTests
        : IClassFixture<CustomWebApplicationFactory<Startup>>, IAsyncLifetime
    {
        private readonly CustomWebApplicationFactory<Startup> _factory;
        public HttpClient HttpClient;

        public MyEndpointsTests(CustomWebApplicationFactory<Startup> factory)
        {
            _factory = factory;
            HttpClient = factory.CreateClient();
        }

        // IAsyncLifetime members: per-test async setup/teardown.
        public Task InitializeAsync() => Task.CompletedTask;

        public Task DisposeAsync() => Task.CompletedTask;
    }

I strongly suggest NOT going this way if you don’t know what you need this for in your tests and why. You can find more information about sharing contexts in xUnit here.

Keep in mind that if you want to run your tests in parallel, each test should have its own factory/test server and its own database. Nothing should be shared between them.
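One simple way to guarantee that isolation when running in parallel is to derive a unique database name per test; for example (a fragment, assuming your factory setup accepts a connection string):

```csharp
// Each test gets its own test server and its own throwaway database.
var dbName = $"TestDb_{Guid.NewGuid():N}";
var connectionString =
    $"Server=localhost;Database={dbName};Trusted_Connection=True;";

// Pass connectionString down to your factory/ConfigureTestDatabase setup,
// and drop the database in DisposeAsync when the test ends.
```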

A big “oh no, no, no” would be to have a repository layer and replace its real implementation with a fake one, a repo created only for tests. That is not the way to go for any type of testing. First of all, you should test the implementation that goes into production, and second, you shouldn’t write code just so you’ll be able to test. You should write testable code, but that’s all; you should be able to test it without other “helpers” or alternative implementations. If you do this, you’re fooling yourself, because you’re no longer testing the integrity of the system that actually goes to production.

Another big mistake is to have scenarios that span from test to test. Think of something like this: the first test creates the database, the second (using the same database) creates an entity, the third gets that entity by id or lists the entities just created, and the last one deletes the entity.
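Instead, every test should arrange everything it needs itself. A delete test, for example, first seeds the entity it is about to delete, so it works regardless of run order (names here are hypothetical, in the spirit of the earlier examples):

```csharp
[Fact]
public async Task Delete_ExistingItem_ReturnsNoContent()
{
    using var factory = new ClientFactory();
    // Seed the entity this test needs; no other test is involved.
    using var client = factory.CreateClientWithDbSetup<MyDbContext>(
        db => db.Items.Add(new Item { Id = 42 }));

    var response = await client.DeleteAsync("/api/items/42");

    response.StatusCode.Should().Be(HttpStatusCode.NoContent);
}
```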

Conclusions:

  1. Integration tests are API tests that check only the integrity (the technical part) of the system, mocking the external dependencies but keeping the real implementation of the internal ones.
  2. Integration tests should be able to run in parallel and should be independent of each other.
  3. The database/persistence system should be as close as possible to the production one.
  4. If you want to consider acceptance tests as integration tests, you can do that; God… you can call them system tests too. I don’t think anyone cares what you name them, as long as you have good tests that keep you confident your system works as expected. But keep in mind that writing your tests shouldn’t take you five times longer than the actual implementation; if it does, I think you’re either doing something incredibly wrong, or your app is a very high-risk one.
