As we’ve seen already, we don’t have to write every possible test, since not every test will help us in the future, and some may even cause us problems.
Here’s an actual example I encountered. I’ll walk through the analysis I did.
There’s some code that performs an operation, and the new functionality the developer wanted to add was a warning dialog shown before that operation runs.
The warning dialog should be displayed only under certain conditions. The question was: should we write tests for these conditions?
The analysis starts with the smell: anything that has to do with UI usually raises a red flag. Apart from the technical difficulty of checking UI, and the maintainability issues that come with it, this is checking a warning message. Checking messages smells funny.
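One common way to sidestep the UI problem is to separate the decision from the dialog itself, so the tests check logic rather than messages. Here’s a minimal sketch of that shape; all names here (OperationRunner, OperationContext, shouldWarn) and the three condition flags are invented for illustration, not the actual project’s code:

```java
// Hypothetical sketch of the code in question; all names are
// invented for illustration.
public class OperationRunner {

    /** The flags the warning decision depends on (stand-ins for the real conditions). */
    public record OperationContext(
            boolean destructive, boolean touchesSharedData, boolean reversible) {}

    /** The dialog should appear only under certain conditions. */
    public boolean shouldWarn(OperationContext ctx) {
        return ctx.destructive() || ctx.touchesSharedData() || !ctx.reversible();
    }
}
```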
The first question to ask is: what kind of bug will this test find? Turns out it’s a serious one. The warning dialog requirement was added because users had made mistakes. In fact, the PO said it would be much better to have the dialog there all the time, even if some of the conditions don’t apply. The team decided to implement the conditions anyway.
So we need to write tests. There were three conditions under which the dialog needed to be presented, and two under which it shouldn’t be.
Are all five tests equal? The first three are important: the bugs they will uncover, if they break in the future, are ones we need to know about.
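Assuming the sketch above, those three important tests might look something like this in JUnit 5. Again, the condition names are stand-ins for the real ones:

```java
// Hypothetical JUnit 5 tests for the three "must warn" conditions.
// Names mirror the sketch above and are invented for illustration.
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class WarningDialogTest {
    private final OperationRunner runner = new OperationRunner();

    @Test
    void warnsBeforeDestructiveOperation() {
        var ctx = new OperationRunner.OperationContext(true, false, true);
        assertTrue(runner.shouldWarn(ctx));
    }

    @Test
    void warnsWhenOperationTouchesSharedData() {
        var ctx = new OperationRunner.OperationContext(false, true, true);
        assertTrue(runner.shouldWarn(ctx));
    }

    @Test
    void warnsWhenOperationIsIrreversible() {
        var ctx = new OperationRunner.OperationContext(false, false, false);
        assertTrue(runner.shouldWarn(ctx));
    }
}
```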
The other two, though, are where things get interesting.
Those two are not important, based on the PO’s description. I’m not sure there was any agreement between the developers and the PO about writing those two tests; usually, unit tests do not come up in requirement discussions. These tests are stricter than the PO’s requirement.
That means that if there’s a bug somewhere down the line, the test will break. The bug it will uncover is that the dialog is presented even when some condition says it shouldn’t be. This is a low-impact bug: the system can and should continue to work as usual. Again, that’s as per the requirement.
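For contrast, here’s what one of those stricter tests might look like, again with the same invented names. Note what a failure means here: the dialog appeared when it didn’t strictly have to, which, per the PO, is acceptable behavior:

```java
// One of the two "should not warn" tests, sketched with invented names.
// This asserts stricter behavior than the PO asked for.
import static org.junit.jupiter.api.Assertions.assertFalse;
import org.junit.jupiter.api.Test;

class NoWarningDialogTest {
    @Test
    void doesNotWarnForSafeReversibleOperation() {
        var ctx = new OperationRunner.OperationContext(false, false, true);
        assertFalse(new OperationRunner().shouldWarn(ctx));
    }
}
```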
But here’s the catch: with this test in the automated test suite, that bug will break the build. People (maybe that developer, or statistically someone else) will need to fix it. Fix a problem that is not a real problem.
The cost of that not-so-important test is not just the cost of writing it. It’s not just the fix (which, by then, will take longer). It’s holding up the entire team until the problem is fixed.
I’m a completionist. If I don’t cover all the cases, I feel something is missing. But sometimes it’s OK to leave things out. We need to concentrate on the important, known cases and write tests for those. We should avoid writing tests for undefined or unimportant behavior. Otherwise, we’re just creating more work for our future selves.