This is the 6th post in the Test Attributes series, which started off with the “Test attributes – Introduction” post. If you need training on testing, contact me.

I always hated the word “maintainability” in the context of tests. Tests, like any other code, are maintainable. Unless there comes a time when we decide we can’t take it anymore and the code needs a rewrite, the code is maintainable: we can go and change it, edit it, or replace it.

The same goes for tests. Once we’ve written them, they are maintainable.

So why are we talking about maintainable tests?

The trouble with tests is that they are not considered “real” code. They are not production code.

Developers, starting out on the road to better quality, seem to regard tests not just as extra work, but also as second-class work. All activities that are not directed at running code on a production server or a client computer are regarded as “actors in supporting roles”.

Obviously, writing the tests carries a future cost. And it’s a cost attached to supporting work, which is considered less valuable.

One of the reasons developers are afraid to start writing tests is the accumulated multiplier effect: “Ok, I’m willing to write the tests, which doubles my workload. I know that this code is going to change, and therefore I’ll have to do double the work, many times in the future. Is it worth it?”

Test maintenance IS costly

But not necessarily because of that.

The first change we need to make is a mental one. We need to understand that all our activities, including the “supporting” ones, are first-class. That also includes future test modifications: after all, if we’re going to change the code to support a new requirement, that requirement will need tests as well.

The trick is to keep that effort to a minimum. And we can, because some of that future effort is waste we’re creating right now. The waste happens when the requirements don’t change, but the tests fail anyway, and not because of a bug. We then need to fix the test although there wasn’t a real problem. Re-work.

Here’s a very simple example, taken from the Accuracy attribute post:

[Test]
public void AddTwoPositiveNumbers_GetResult()
{
    PositiveCalculator calculator = new PositiveCalculator();
    Assert.That(calculator.Add(2, 2), Is.EqualTo(4));
}

 
What happens if we decide to rename PositiveCalculator to Calculator? The test will no longer compile, and we’ll have to modify it to make it pass.

Renaming doesn’t seem like much trouble, though – we rely on modern tools to replace the different occurrences. However, this depends heavily on tools and technology. If we do this in C# or in Java, there is not only automation, but also a quick feedback mechanism that catches the problem, so we don’t even feel we’re maintaining the tests.

Imagine getting the compilation error only after two hours of compiling, rather than immediately after making the change. Or only after the automated build cycle. The further we get from automation and quick feedback, the more maintenance looks like a big monster.

Lowering maintenance costs

The general advice is: “Don’t couple your tests to your code”.

There’s a reason I chose this example: tests are always coupled to the code. The level of coupling, and the feedback mechanisms we use, affect how big these “maintenance” tasks are going to be. Here are some tips for lowering the chance of test maintenance.

• Check outputs, not algorithms. Because tests are coupled to the code, the less the test knows about implementation details, the better. Robust tests do not rely on specific method calls inside the code. Instead, they treat the tested system as a black box, even though they may know how it’s internally built. These tests, by the way, are also more readable. (A sketch contrasting the two styles appears after this list.)

• Work against a public interface. Test from the outside and avoid testing internal methods. We want to keep the internal method list (and signatures) inside our black box. If that feels unavoidable, consider extracting the internal method into a new public object. (See the second sketch below.)

• Use the minimal amount of asserts. Being too specific in our assert criteria, especially when verifying method calls on dependencies, can break tests without any benefit. Do we need to know a method was called exactly 5 times, or just that it was called at least once? When it was called, do we need the exact value of its argument, or does a range suffice? Every layer of specificity adds another opportunity for the test to break. Remember that when a test fails, we want information that helps solve the problem. If an assert doesn’t add such information, loosen the criteria. (See the over-specified vs. minimal verification sketch below.)

• Use good refactoring tools. And a good IDE. And work with languages that support these. Otherwise, we’re delaying the feedback on errors and raising the cost of maintenance.

• Use less mocking. Using mocks is like using x-rays: they are very good at what they do, but over-exposure is bad. Mocks couple the test to the code even more, because they let us specify the code’s internal implementation in the test. We’re now relying on the internal algorithm, which can change, and then our test will need fixing.

• Avoid hand-written mocks. The hand-written ones are the worst because, unless they are very simple, it is very easy to copy the behavior of the tested code into them. Frameworks encourage setting the behavior through the interface instead. (The last sketch below shows the difference.)

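To make the first tip concrete, here’s a minimal sketch in the spirit of the calculator example above. The PriceCalculator class and its methods are invented for this illustration, not taken from the series: the first test pins down an intermediate step of the algorithm, while the second only checks the visible output, so an internal refactoring won’t break it.

using NUnit.Framework;

// Minimal production code, invented for this sketch.
public class PriceCalculator
{
    public decimal DiscountRate(decimal amount) => amount > 100 ? 0.1m : 0m;

    public decimal Total(decimal amount) => amount * (1 - DiscountRate(amount));
}

[TestFixture]
public class PriceCalculatorTests
{
    // Fragile: pins down the intermediate step (the discount rate),
    // so changing how the discount is computed breaks the test even
    // when the final price is still correct.
    [Test]
    public void Total_OrderOver100_UsesTenPercentRate()
    {
        PriceCalculator calculator = new PriceCalculator();

        Assert.That(calculator.DiscountRate(200), Is.EqualTo(0.1m));
        Assert.That(calculator.Total(200), Is.EqualTo(180m));
    }

    // Robust: treats the calculator as a black box and checks only
    // the output we actually care about.
    [Test]
    public void Total_OrderOver100_GetsDiscountedPrice()
    {
        PriceCalculator calculator = new PriceCalculator();

        Assert.That(calculator.Total(200), Is.EqualTo(180m));
    }
}
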
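The second tip mentions extracting internals into a new public object. Here’s a rough sketch of what that might look like; OrderProcessor and TaxCalculator are made-up names for the illustration: instead of poking at a private helper, the logic moves to a small object with a public interface of its own that tests can hit directly.

// Before (hypothetical): the tax rule is a private detail of OrderProcessor.
// A test that wants to check the rule can only reach it through Total(),
// or by making CalculateTax public just for the test.
public class OrderProcessor
{
    public decimal Total(decimal amount) => amount + CalculateTax(amount);

    private decimal CalculateTax(decimal amount) => amount * 0.17m;
}

// After: the rule is extracted into its own public object, so it can be
// tested directly, and the processor simply uses it.
public class TaxCalculator
{
    public decimal TaxFor(decimal amount) => amount * 0.17m;
}

public class OrderProcessorWithExtractedTax
{
    private readonly TaxCalculator _tax = new TaxCalculator();

    public decimal Total(decimal amount) => amount + _tax.TaxFor(amount);
}
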
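For the assert tip, here’s a sketch of over-specified vs. minimal verification. It assumes a mocking framework such as Moq; the IMailSender interface and Registration class are invented for the example and are not part of the series.

using Moq;
using NUnit.Framework;

public interface IMailSender
{
    void Send(string message);
}

// Minimal production code, invented for this sketch.
public class Registration
{
    private readonly IMailSender _sender;

    public Registration(IMailSender sender)
    {
        _sender = sender;
    }

    public void Register(string user)
    {
        _sender.Send("Welcome " + user);
    }
}

[TestFixture]
public class RegistrationTests
{
    // Over-specified: exact call count and exact message text.
    // Any harmless change to the wording or to retry logic breaks it.
    [Test]
    public void Register_User_SendsExactWelcomeMail()
    {
        var sender = new Mock<IMailSender>();
        new Registration(sender.Object).Register("Dana");

        sender.Verify(s => s.Send("Welcome Dana"), Times.Exactly(1));
    }

    // Minimal: we only care that some mail went out at all.
    [Test]
    public void Register_User_SendsAMail()
    {
        var sender = new Mock<IMailSender>();
        new Registration(sender.Object).Register("Dana");

        sender.Verify(s => s.Send(It.IsAny<string>()), Times.AtLeastOnce());
    }
}

The first test breaks whenever the wording changes; the second breaks only when the actual requirement (sending a mail at all) is violated.
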
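And for the last tip, a sketch of how a hand-written mock drifts toward duplicating the production logic, compared with setting behavior through the interface with a framework (again Moq, as an assumption; ILoginValidator is invented for the example).

using Moq;

public interface ILoginValidator
{
    bool IsValid(string user, string password);
}

// Hand-written fake: it's tempting to copy the real validation rule into
// the test double, so the rule now lives (and drifts) in two places.
public class FakeLoginValidator : ILoginValidator
{
    public bool IsValid(string user, string password)
    {
        return !string.IsNullOrEmpty(user) && password.Length >= 8;
    }
}

// Framework mock: the behavior is declared per test, through the public
// interface only, with no duplicated logic to maintain.
public static class LoginValidatorMocks
{
    public static ILoginValidator AlwaysValid()
    {
        Mock<ILoginValidator> validator = new Mock<ILoginValidator>();
        validator.Setup(v => v.IsValid(It.IsAny<string>(), It.IsAny<string>()))
                 .Returns(true);
        return validator.Object;
    }
}
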
There’s a saying: Code is a liability, not an asset. Tests are the same – maintenance will not go away completely. But we can lower the cost if we stick to these guidelines.

Next up: Footprint.

For training and coaching on testing and agile, contact me.

Image source: http://www.business2community.com/content-marketing/how-super-mario-would-market-his-plumbing-business-in-2013-0423630#!bnPrC4

