As we’ve seen already, we don’t have to write every possible unit test: not every unit test will help us in the future, and some may even cause us problems.
Here’s an actual example I encountered. I’ll go through the analysis.
There’s some code that does some operation, and the new functionality the developer wanted to add is a warning dialog before doing that operation. For our purposes we’ll focus on the warning dialog; the operation itself is not important, as it had its own unit tests.
The warning dialog should be displayed only under certain conditions. The question was: should we write unit tests for these conditions?
The analysis starts with noticing the UI smell: anything that has to do with UI usually raises a red flag. Apart from the technical difficulty of checking UI, and the maintainability issues, our unit tests are going to check a warning message. Checking messages smells funny.
The first question to ask about our unit test is: “What kind of bug will it find?”
Turns out it’s a serious one. The warning dialog requirement was added because users had made mistakes. In fact, the product owner said it’s much better to have the dialog presented all the time, even if some of the conditions don’t apply. The team decided to apply the condition anyway.
So we need to write unit tests. There were three conditions where the dialog needed to be presented and two in which it shouldn’t. Are we going for full code coverage?
Are all five unit tests equally valuable?
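To make the analysis concrete, here’s a minimal sketch of the situation. The function name `should_show_warning` and the three condition flags are hypothetical, invented for illustration; the article doesn’t specify the actual conditions.

```python
def should_show_warning(is_destructive: bool,
                        affects_shared_data: bool,
                        user_opted_out: bool) -> bool:
    """Decide whether to present the warning dialog (hypothetical logic)."""
    if user_opted_out:
        return False
    return is_destructive or affects_shared_data


# The three tests where the dialog MUST appear:
assert should_show_warning(True, False, False) is True
assert should_show_warning(False, True, False) is True
assert should_show_warning(True, True, False) is True

# The two tests where the dialog should NOT appear. Note these are
# stricter than the product owner's requirement, which tolerates
# showing the dialog even when it isn't strictly needed.
assert should_show_warning(False, False, False) is False
assert should_show_warning(False, False, True) is False
```

All five pass against this sketch, but as we’ll see, they are not equally valuable.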
The first three unit tests are important: if they break in the future, they will uncover bugs we need to know about.
The other two unit tests, though, are where things get interesting.
Those two unit tests are not important, based on the product owner’s description. I’m not sure if there’s an agreement between the developers and the product owner about writing those two unit tests. Usually, unit tests do not come up in requirement discussions. These unit tests are stricter than the product owner’s requirement.
Let’s take a trip to the future. If there’s a bug somewhere down the line, the unit test will break. The bug it will uncover is that the dialog is presented even when a condition says it shouldn’t be. This is a low-impact bug: the system can and should continue to work as usual. Again, as per the requirement.
But here’s the catch: with this unit test in the automated test suite, that bug will break the build. People (maybe that developer, or statistically someone else) will need to fix it.
They will need to spend time fixing a problem that is not a real problem.
The cost of that not-so-important unit test is not just writing it. It’s not just the fix (which will take longer). It may also block the entire team until the problem is fixed.
I need to confess: I’m a completionist.
If I don’t cover all cases, I feel something is missing. But sometimes it’s OK to leave things out. We need to concentrate on the important, known cases and write unit tests for them, even if that hurts code coverage.
We should avoid writing unit tests for undefined or unimportant behavior. Otherwise, we’re just creating more work for our future selves.