As we’ve seen already, we don’t have to write every unit test possible: not every unit test will help us in the future, and some may even cause us problems.

Here’s an actual example I encountered. I’ll go through the analysis.

There’s some code that performs an operation, and the new functionality the developer wanted to add was a warning dialog before performing that operation. For our purposes we’ll focus on the warning dialog; the operation itself is not important, as it had its own unit tests.

The warning dialog should be displayed only under certain conditions. The question was: should we write unit tests for these conditions?
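To make the discussion concrete, here is a minimal sketch of the shape such code might take, in Java. All the names (OperationRunner, WarningDialog, Context) and the conditions themselves are invented for illustration; the post doesn’t show the actual code.

    // Hypothetical sketch - none of these names or conditions come from the real code.
    interface Context {
        boolean changesWouldBeLost();
        boolean affectsOtherUsers();
        boolean isIrreversible();
    }

    interface WarningDialog {
        boolean confirm(String message); // returns false if the user backs out
    }

    interface Operation {
        void execute(Context context);
    }

    class OperationRunner {
        private final WarningDialog dialog;
        private final Operation operation;

        OperationRunner(WarningDialog dialog, Operation operation) {
            this.dialog = dialog;
            this.operation = operation;
        }

        void run(Context context) {
            // The new functionality: warn before the operation, but only when a condition applies
            if (shouldWarn(context) && !dialog.confirm("Are you sure you want to continue?")) {
                return; // the user backed out
            }
            operation.execute(context);
        }

        private boolean shouldWarn(Context context) {
            // Placeholder conditions standing in for the real ones
            return context.changesWouldBeLost()
                || context.affectsOtherUsers()
                || context.isIrreversible();
        }
    }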

The analysis starts with noticing the UI smell: anything that has to do with UI usually raises a red flag. Apart from the technical difficulty of checking UI and the maintainability issues it brings, our unit tests are going to check a warning message. Checking messages smells funny.

The first question to ask about our unit test is: “What kind of bug will it find?”

It turns out it’s a serious one. The warning dialog requirement was added because users had made mistakes. In fact, the product owner said it’s much better to have the dialog presented all the time, even if some of the conditions don’t apply. The team decided to apply the conditions anyway.

So we need to write unit tests. There were three conditions under which the dialog needed to be presented and two under which it shouldn’t. Are we going for full code coverage?

Are all five unit tests equally valuable?

The first three unit tests are important, and the bugs they will uncover if they break in the future are important to know about.
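As a sketch of those three valuable tests (JUnit 4, hand-rolled fakes, and the same invented names as in the earlier sketch):

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Test doubles for the sketch above.
    class FakeDialog implements WarningDialog {
        boolean shown = false;

        public boolean confirm(String message) {
            shown = true;
            return true; // always proceed; these tests only care whether the dialog appeared
        }
    }

    class FakeOperation implements Operation {
        public void execute(Context context) {
            // the operation has its own unit tests; nothing to check here
        }
    }

    class TestContext implements Context {
        private final boolean lost, shared, irreversible;

        TestContext(boolean lost, boolean shared, boolean irreversible) {
            this.lost = lost;
            this.shared = shared;
            this.irreversible = irreversible;
        }

        public boolean changesWouldBeLost() { return lost; }
        public boolean affectsOtherUsers() { return shared; }
        public boolean isIrreversible() { return irreversible; }
    }

    public class WarningDialogTest {
        private final FakeDialog dialog = new FakeDialog();
        private final OperationRunner runner = new OperationRunner(dialog, new FakeOperation());

        @Test
        public void warnsWhenChangesWouldBeLost() {
            runner.run(new TestContext(true, false, false));
            assertTrue(dialog.shown);
        }

        @Test
        public void warnsWhenOtherUsersAreAffected() {
            runner.run(new TestContext(false, true, false));
            assertTrue(dialog.shown);
        }

        @Test
        public void warnsWhenOperationIsIrreversible() {
            runner.run(new TestContext(false, false, true));
            assertTrue(dialog.shown);
        }
    }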

The other two unit tests, though, are where things get interesting.

Those two unit tests are not important, based on the product owner’s description. I’m not sure there’s an agreement between the developers and the product owner about writing those two unit tests; usually, unit tests don’t come up in requirement discussions. These unit tests are stricter than the product owner’s requirement.
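For contrast, one of the two stricter-than-the-requirement tests might look like this, sitting in the same sketched test class as above (again, everything here is hypothetical):

    @Test
    public void doesNotWarnWhenNoConditionApplies() {
        runner.run(new TestContext(false, false, false));
        assertFalse(dialog.shown); // stricter than what the product owner actually asked for
    }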

Let’s take a trip to the future. If there’s a bug somewhere down the line, the unit test will break. The bug it will uncover is that the dialog is presented even when one of the conditions doesn’t call for it. This is a low-impact bug: the system can and should continue to work as usual. Again, as per the requirement.

But here’s the catch: with this unit test as part of the automated test suite, that bug will break the build. People (maybe that developer, or statistically someone else) will need to fix it.

They will need to spend time fixing a problem that is not a real problem.

The cost of that not-so-important unit test is not just the cost of writing it. It’s not just the fix (which will take longer). It’s also possibly blocking the entire team until the problem is fixed.

I need to confess

I’m a completionist.

If I don’t cover all cases, I feel something is missing. But sometimes, it’s OK to leave things out. We need to concentrate on the important, known cases and write unit tests for them. Even if that hurts code coverage.

We should avoid writing unit tests for undefined behavior, or for unimportant behavior. Otherwise we’re just creating more work for our future selves.


4 Comments

Interested User · June 24, 2015 at 12:18 am

Writing the unit test is worth it IMHO, as it gives you insight into your system that you wouldn’t otherwise have. Perhaps the bug you introduce that fails on one of these other scenarios is so subtle that it has actually crept into the other, more ‘valid’, scenarios.

Maybe I’m not putting this right, but testing should cover both when it should happen and when it should not, as much as possible. The risk otherwise is that the code base is at a lower standard than expected.

Thirdly, there is also a cost if a manual tester finds this bug, and that cost is higher than a unit test would have been. Perhaps feed back to the tester that it’s an OK bug to have? But then six months later a new tester finds the same bug...

    Gil Zilberfeld · June 24, 2015 at 9:08 pm

    Thanks for the comment!

    I try to separate testing from the actual writing of the test and committing it into an automated suite. When you’re writing a test, there’s no added insight: you confirm what you’ve already checked. So the value is not at the time of writing it; it’s in the regression.

    It’s true that if it is found manually, there’s an associated cost. But if this is a low-risk bug, it shouldn’t even be logged. Much like not writing every test possible, we shouldn’t log every bug we find; we’re not planning to fix everything anyway. Using tests to signal an OK bug is crossing the line into a system spec, which I don’t want. I’d like to use tests in a strict mode: to define exactly what the system should and shouldn’t do. Unless the team has a convention of writing assumptions as tests, all the maybes should be left out.

Omer Zak · June 27, 2015 at 12:02 am

Every bug tracking system worth its salt has several classifications of bug severities.

Essentially, one has two levels of bug severities:
– Bugs that need to be fixed now or by a certain date.
– Bugs that can be left to linger on.

I suggest that tests be classified the same way as the corresponding bug severities.

So, if a unit test covering a low-severity bug fails, the failure will be recorded but the software will still be considered to have passed the test suite.

It means that every assert statement needs an extra argument (with a good default value) to indicate the severity of the corresponding assertion failure.

    Gil Zilberfeld · June 27, 2015 at 10:45 am

    Thanks for the comment!

    As a proponent of not even documenting low-level bugs (unless they are going to be fixed on the spot), I think that if a test fails it should be an important test; otherwise, why write it?

    Even if in your organization you document everything, I would still advise against it. Severity changes over time, and it may have changed since I wrote the test (much like code comments).
