Integration Testing with Spring: Configuration Logic in Integration Tests

Gil Zilberfeld discusses an anti-pattern in Spring integration tests, where flags inside the tests are used for mocking and for controlling simulators.
This is a short series on using Spring in integration testing and unit testing.
  • Configurations
  • Mocking
  • Testing a REST API
  • A custom configuration
  • Configuration logic

Now that we’ve covered some of Spring’s capabilities, we can explore possibilities beyond simple mocking. Instead of “regular” mocks (which we can set up in the integration tests), we can inject actual simulators. For our purposes, let’s define a simulator as something that has its own logic, which simulates a production component.

Simulators deserve their own topics and, indeed, their own tests. Because they have logic, they need to be tested separately. For the sake of our discussion, let’s assume they are tested and work as we want them to.
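To make the idea concrete, here is a minimal sketch of such a simulator (the name and the decline-limit behavior are hypothetical, just to show that a simulator carries its own logic and is testable on its own):

```java
// Hypothetical simulator: stands in for a payment gateway, with its own
// logic -- it declines any charge above a configured limit.
public class PaymentGatewaySimulator {
    private final long declineAboveCents;

    public PaymentGatewaySimulator(long declineAboveCents) {
        this.declineAboveCents = declineAboveCents;
    }

    /** Returns true when the simulated gateway approves the charge. */
    public boolean charge(long amountCents) {
        return amountCents <= declineAboveCents;
    }
}
```

Because the decline rule is real logic, this class gets its own unit tests before it ever backs an integration test.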

As we’ve already seen, we can inject whatever bean we want with Spring, and it’s best to use the Configuration classes to determine what to inject. A pattern I’ve started seeing seems to misuse the capabilities of Spring’s injection, as well as introduce possible bugs.

The pattern goes like this. In the test class, we inject the simulator as a bean, as we usually do. We also inject a value from a property file:
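The original snippet isn’t reproduced here, but a sketch of what such a test class typically looks like (all names are hypothetical, assuming Spring Boot’s test support) would be:

```java
// Sketch of the pattern (hypothetical names): the simulator is injected
// as a bean, alongside a flag read from a property file.
@SpringBootTest
public class OrderServiceIT {

    @Autowired
    private PaymentGatewaySimulator paymentSimulator;

    // The property-file flag that will decide, at runtime,
    // how the test behaves.
    @Value("${tests.use.simulator}")
    private boolean useSimulator;
}
```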

You can already guess where this is going. Next, we’ll see in the test body this code:
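Again as a hedged sketch (hypothetical names), the test body branches on the injected flag, so a single test runs differently depending on the environment:

```java
// Anti-pattern: logic inside the test, driven by a configuration flag.
@Test
public void chargesTheCustomer() {
    if (useSimulator) {
        // configure the simulator for this scenario...
        orderService.placeOrder(order);
        // ...and assert against the simulator's recorded behavior
    } else {
        orderService.placeOrder(order);
        // assert against the real gateway's sandbox
    }
}
```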

Integration tests are code

As applications grow, we find that we need more diversity in how we use components in the application, and simulators are no different. Even more so if their behavior is somewhat frozen, or if they are hard to configure for the purpose of one test, and then again for the next one. So it may seem that we need this kind of code to control our integration tests.

However, this is not the best way to achieve that. First of all, we’re spreading the configuration code across different places – property files, in combination with the Configuration classes, maybe some Profiles, and even other places. When we keep data all over the place, it’s hard to understand the whole picture, and of course maintenance becomes a nightmare.

In addition, let’s think about the usefulness of the tests. Integration tests are supposed to catch logical errors, and now we’re introducing logic inside them. Having bugs in the integration tests lowers our trust in their results.

Even if we get over this bump, what can we learn from the integration test results? Looking at Jenkins, we’ll see the names of the integration tests, but in what mode did they run? Real or simulator? How was the simulator intended to behave? We need to go into the integration tests themselves, maybe even debug them, to see how they actually ran, and whether the result is what we want.

So, what do we do?

  • Manage configuration data in Configuration classes.
  • If you have a simulator, consider it a separate Configuration.
  • Maybe running with multiple simulators is a cross-application mode that can be better managed under a specific Profile.
  • Separate integration tests that run in different configurations into different integration test classes.
  • Do not check logic flags in tests. Fit them into a consistent configuration setup that is separate from property files.
  • Manage configuration data as code – put it where you expect to find it, in a central place, preferably not in multiple files.
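Putting the advice together, here is a hedged sketch of the configuration-as-code alternative (hypothetical names, assuming Spring profiles): the simulator lives in its own Configuration class, activated by a dedicated profile, and each test class runs in exactly one known configuration.

```java
// The simulator gets its own Configuration, tied to a profile.
@Configuration
@Profile("payment-simulator")
public class PaymentSimulatorConfiguration {

    @Bean
    public PaymentGateway paymentGateway() {
        return new PaymentGatewaySimulator();
    }
}

// A test class that runs only in the simulator configuration --
// no flags, no branching; the class name tells you the mode.
@SpringBootTest
@ActiveProfiles("payment-simulator")
public class OrderServiceSimulatorIT {
    // simulator-only tests
}
```

With this split, the Jenkins report answers the question from above by itself: the test class name says which configuration ran.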
