Should We Write That Test?


As we’ve seen already, we don’t have to write every possible test. Not every test will help us in the future, and some may even cause us problems.

Here’s an actual example I encountered. I’ll walk through the analysis I did.

There’s some code that performs an operation, and the new functionality the developer wanted to add was a warning dialog before that operation runs.

The warning dialog should be displayed only under certain conditions. The question was: should we write tests for these conditions?

The analysis starts with the smell: anything that has to do with UI usually raises a red flag. Apart from the technical difficulty of checking it, and the maintainability issues, this is checking a warning message. Checking messages smells funny.
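To make the smell concrete, here’s a minimal sketch of the kind of assertion I mean; the WarningDialog class and the message text are made up for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class WarningMessageTest {

    // Minimal stand-in for the real dialog; the class and message are made up.
    static class WarningDialog {
        String message() {
            return "Are you sure you want to run this operation?";
        }
    }

    @Test
    void warningDialogShowsTheRightMessage() {
        // Brittle: any rewording of the copy breaks this test,
        // even though the behavior (warning the user) is unchanged.
        assertEquals("Are you sure you want to run this operation?",
                new WarningDialog().message());
    }
}
```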

The first question to ask is: what kind of bug will this test find? It turns out it’s a serious one. The warning dialog requirement was added because users had made mistakes. In fact, the PO said it’s much better to have the dialog there all the time, even if some of the conditions don’t apply. The team decided to apply the conditions anyway.

So we need to write tests. There were three conditions under which the dialog needed to be presented, and two under which it shouldn’t.
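To make the discussion concrete, here’s a sketch of what those five tests might look like; the shouldShowWarning predicate and the condition names are invented, since the real ones aren’t named here:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class ShowWarningDialogTest {

    // Hypothetical stand-in for the real condition logic.
    static boolean shouldShowWarning(boolean unsavedChanges,
                                     boolean bulkOperation,
                                     boolean adminUser) {
        return unsavedChanges || (bulkOperation && !adminUser);
    }

    // Three conditions under which the dialog should be presented.
    @Test
    void warnsWhenThereAreUnsavedChanges() {
        assertTrue(shouldShowWarning(true, false, true));
    }

    @Test
    void warnsWhenRegularUserRunsBulkOperation() {
        assertTrue(shouldShowWarning(false, true, false));
    }

    @Test
    void warnsWhenRegularUserRunsBulkOperationWithUnsavedChanges() {
        assertTrue(shouldShowWarning(true, true, false));
    }

    // Two conditions under which it shouldn't.
    @Test
    void noWarningForSimpleOperationWithNothingAtRisk() {
        assertFalse(shouldShowWarning(false, false, true));
    }

    @Test
    void noWarningWhenAdminRunsBulkOperation() {
        assertFalse(shouldShowWarning(false, true, true));
    }
}
```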

Are all five tests equal? The first three are important; the bugs they will uncover, if they break in the future, are important to know about.

The other two, though, are where things get interesting.

Those two are not important, based on the PO’s description. I’m not sure there was an agreement between the developers and the PO about writing those two tests; unit tests do not usually come up in requirement discussions. These tests are stricter than the PO’s requirement.

That means that if there’s a bug somewhere down the line, the test will break. The bug it will uncover is that the dialog is presented despite some condition. This is a low-impact bug: the system can and should continue to work as usual. Again, as per the requirement.

But here’s the catch: with this test as part of the automated test suite, that bug will break the build. People (maybe that developer, or statistically someone else) will need to fix it. To fix a problem that is not a real problem.

The cost of that not-so-important test is not just the cost of writing it. It’s not just the fix (which will take longer). It’s holding up the entire team until the problem is fixed.

I’m a completionist. If I don’t cover all cases, I feel something is missing. But sometimes it’s OK to leave things out. We need to concentrate on the important, known cases and write tests for them, and avoid writing tests for undefined or unimportant behavior. Otherwise we’re just creating more work for our future selves.

4 thoughts on “Should We Write That Test?”

  1. Interested User

    Writing the unit test is worth it IMHO, as it gives you insight into your system that you wouldn’t otherwise have. Perhaps the bug you introduce that fails one of these other scenarios is so subtle that it has actually encroached on the other, more ‘valid’ scenarios.

    Maybe I’m not putting this right, but testing should cover both when something should happen and when it should not, as much as possible. The risk otherwise is that the code base is at a lower standard than expected.

    Thirdly, there is also a cost if a manual tester finds this bug, and that cost is higher than a unit test would have been. Perhaps feed back to the tester that it’s an OK bug to have? But then six months later a new tester finds the same bug...

    • Gil Zilberfeld

      Thanks for the comment!

      I try to separate testing from actually writing the test and committing it to an automated suite. When you’re writing a test, there’s no added insight; you confirm what you’ve already checked. So the value is not at the time of writing it, it’s in the regression.

      It’s true that if the bug is found manually, there’s a cost associated. But if it’s a low-risk bug, it shouldn’t even be logged. Much like not writing every possible test, we shouldn’t log every bug we find; we’re not planning to fix everything anyway. Using tests to signal an OK bug is crossing the line into a system spec, which I don’t want. I’d like to use tests in a strict mode: to define exactly what the system should and shouldn’t do. Unless the team has a convention of writing assumptions as tests, all the maybes should be left out.

  2. Every bug tracking system worth its salt has several classifications of bug severities.

    Essentially, one has two levels of bug severities:
    – Bugs that need to be fixed now or by a certain date.
    – Bugs that can be left to linger on.

    I suggest that tests be classified the same way as the corresponding bug severities.

    So, if a unit test covering a low-severity bug fails, the failure will be recorded but the software will still be considered to have passed the test suite.

    It means that every assert statement needs an extra argument (with a good default value) to indicate the severity of the corresponding assertion failure.
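    A rough sketch of how that could look, assuming a custom SeverityAssert helper (nothing standard; JUnit and friends don’t ship this):

    ```java
    import static org.junit.jupiter.api.Assertions.fail;

    enum Severity { BLOCKING, MINOR }

    final class SeverityAssert {
        private SeverityAssert() {}

        // BLOCKING failures fail the build; MINOR failures are only recorded.
        static void assertWithSeverity(boolean condition, String message, Severity severity) {
            if (condition) {
                return;
            }
            if (severity == Severity.BLOCKING) {
                fail(message);
            } else {
                // Stands in for "recorded": a real version might write to a report.
                System.err.println("[MINOR] " + message + " (suite still passes)");
            }
        }

        // The "good default value": unclassified assertions stay strict.
        static void assertWithSeverity(boolean condition, String message) {
            assertWithSeverity(condition, message, Severity.BLOCKING);
        }
    }
    ```

    With something like that in place, the low-importance dialog tests could assert with Severity.MINOR and be recorded without holding up the build.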

    • Gil Zilberfeld

      Thanks for the comment!

      As a proponent of not even documenting low-severity bugs (unless they are going to be fixed on the spot), I’d say that if a test fails, it should be an important test; otherwise, why write it?

      Even if your organization documents everything, I would still advise against it. Severity changes over time, and it may have changed since I wrote the test (much like code comments going stale).
