The Actual Cost of Tests – Part II
You think I was finished? I haven’t even started. The first group of costs associated with testing came from the fact that unit tests are, in fact, code, and as such are susceptible to all the flaws that come with being code.
But that’s not all folks! There are more hidden costs that we pay for in opportunity cost: what we could have done instead. Yet we spend time and money on the following:
Longer test run times
The more tests we add to our test suite, the more time we add to its completion, and the longer the feedback cycle becomes. It may not mean much for a unit test or two (unit tests run quickly), but it becomes apparent with hundreds and thousands of them.
Once we get addicted to feedback from unit tests (the good kind of addiction, mind you), we learn to wait for it before moving on to the next thing. As the test suite’s run time gets longer and the feedback cycle makes us wait, we’ll either decide it’s not worth waiting, or keep waiting instead of moving on to the next task. There are many things we can do to make a test suite run faster, but if we don’t do them, the cost is there, accumulating.
Coupling
However well we write our unit tests and code, we cannot eliminate coupling between them. True, interfaces and abstractions are better than coupling directly to an implementation, but they are just a looser form of coupling, not its absence. And coupling leads to anger, which leads to hate, and finally, suffering.
I mean lock-down.
The time will come when we need to change the code’s interface. At that point, we’ll have to choose between changing the code and its unit tests, or leaving it as-is and not paying that price.
It’s like being stuck in a middle seat on an airplane, needing the bathroom, but wondering whether to bother the person in the aisle seat. Sometimes you don’t. And then you get angry, etc.
People tend to leave code as-is for a reason: changing it is risky and costs too much. When tools make the changes for us, it’s a no-brainer. But when we need to do it manually, the coupling we’ve introduced may push us to leave the code as it is, rather than change it for the better.
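To make the lock-down concrete, here’s a minimal sketch in Python with hypothetical names (`OrderService`, `make_service` are illustrations, not from the article). Every test constructs the object directly, so a change to the constructor signature breaks all of them, even though the behavior under test hasn’t changed:

```python
# Hypothetical example: tests coupled to a constructor signature.

class OrderService:
    def __init__(self, tax_rate):  # v1 signature
        self.tax_rate = tax_rate

    def total(self, amount):
        # Apply tax and round to cents.
        return round(amount * (1 + self.tax_rate), 2)

# Dozens of tests like these repeat the construction detail:
def test_total_applies_tax():
    service = OrderService(0.2)    # coupled to __init__(tax_rate)
    assert service.total(100) == 120.0

def test_total_rounds():
    service = OrderService(0.175)
    assert service.total(10) == 11.75

# If v2 becomes __init__(self, tax_rate, currency), every call site
# above breaks before a single behavior changes. One mitigation:
# funnel construction through a single helper used by all tests,
# so a signature change touches one place instead of hundreds.
def make_service(tax_rate=0.2):
    return OrderService(tax_rate)
```

The helper doesn’t remove the coupling; it concentrates it, which is often the best we can do.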
Then there’s maintenance
We can define test maintenance as the work unit tests force us to re-do that doesn’t add any value. Remember that lock-down? When we change an interface, the tests no longer compile. So we fix unit tests that passed a minute ago.
While this definition doesn’t cover actual functionality changes (when we genuinely need to change the unit tests to test the new functionality), we think of that work as “maintenance” too. Thus, any time we touch our tests, we count it as “non-productive” work that we want to abolish. The fear and loathing of unit test maintenance is a big reason people drop testing.
Maintenance doesn’t just mean re-writing integration and unit tests. Managing builds, test deployments, and the test suites themselves takes a lot of work, which the team sometimes outsources to IT or DevOps or whatever they are called in the organization. We don’t see this work as productive, and it comes directly from having tests.
Those bugs aren’t real
We don’t just pay in writing tests. When a test fails, we rush to see what happened, and many times we find it wasn’t a bug the test caught. It failed because the tests themselves were buggy, or because they were not isolated and independent from each other.
Or because we’re not good at writing tests. (I can help here, by the way).
Badly written unit and integration tests cost a lot of time. When we start writing them, we focus on the effort of writing, and we don’t see how bad tests will haunt us for a long time afterwards. Sometimes far more work is wasted on bad tests than was spent writing them.
How much for that test?
There are visible costs, deferred costs and even imaginary costs. And we pay for those costly unit and integration tests in blood, sweat and extra hours.