The Metrics: Percentage of New Coverage
When I talk with people who want to start measuring things, it's always coverage. I've already talked about coverage as a metric and why it's not good enough, especially on its own. But there is one case where a coverage metric is actually helpful, if you're not willing to try anything else.
Like new potatoes, the percentage of new coverage has a different flavor. But I digress. And I'm apparently hungry.
What is it?
The percentage of new coverage – how much of new code is covered by tests.
Should I track it?
If you really, really want to measure coverage, and just, only, coverage, then yes. Can I interest you in test cases? No? Ok, let's continue.
Why should I track it?
Let's say you're starting to write a new feature inside your ten-year-old code base. If you check total coverage, you won't see any progress for months – the added percentage is just too small. Check coverage only on the new feature code, and it will start meaning something.
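To make the dilution concrete, here's a quick back-of-the-envelope calculation. The numbers are made up for illustration: a large legacy code base, plus a small, fully tested new feature.

```python
# Hypothetical numbers: a 100,000-line legacy code base at 20% coverage,
# plus a new, fully tested 500-line feature.
legacy_lines, legacy_covered = 100_000, 20_000
new_lines, new_covered = 500, 500

total_before = 100 * legacy_covered / legacy_lines
total_after = 100 * (legacy_covered + new_covered) / (legacy_lines + new_lines)
feature_only = 100 * new_covered / new_lines

# Total coverage barely moves; the feature-only number tells the real story.
print(f"{total_before:.1f}% -> {total_after:.1f}% total, {feature_only:.0f}% on the feature")
```

Total coverage crawls from 20% to about 20.4%, even though the feature itself is at 100%.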
What does the metric really mean?
Coverage means, as always, “my tests exercised this code”. It doesn't tell you what your tests actually did with it. But again, that's true of any coverage metric. The percentage of new coverage means how much of the new code you've covered (obviously). With TDD, and in fact even without it, you should be close to 100% all the time, since you're supposed to add tests for just the code that you add, right?
And if it drops, the feature is not covered, and the developers are pushing tests to “later, when we have time”, which is never. So you keep it up.
How to track it?
With your favorite coverage tool. Only this time you’ll need to do some manual work (or automate some reports), because only you know what’s really “new” in the feature, as opposed to the legacy stuff.
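As a sketch of what that automation could look like, here's one way to compute the number from coverage.py's JSON report (produced by `coverage json`). The report shape below follows coverage.py's documented format; the set of “new” files is an assumption you'd maintain yourself, since only you know what's new:

```python
def new_coverage_percent(report: dict, new_files: set) -> float:
    """Percentage of statements covered, counting only files in new_files.

    `report` is a parsed coverage.py JSON report; `new_files` is the
    hand-maintained list of paths that belong to the new feature.
    """
    covered = total = 0
    for path, data in report["files"].items():
        if path in new_files:
            summary = data["summary"]
            covered += summary["covered_lines"]
            total += summary["num_statements"]
    return 100.0 * covered / total if total else 0.0

# Hand-made example report (shape only - real ones come from
# `coverage json -o coverage.json` and `json.load`):
report = {
    "files": {
        "features/invoicing.py": {"summary": {"covered_lines": 45, "num_statements": 50}},
        "legacy/billing.py":     {"summary": {"covered_lines": 10, "num_statements": 200}},
    }
}
print(new_coverage_percent(report, {"features/invoicing.py"}))  # 90.0
```

The legacy file drags the total down to under 22%, but the new-code number is 90% – which is the one worth asking questions about.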
Is it comparable across teams?
You probably think you know what I’ll say, but let me surprise you: YES!
If you’re a manager you must love me. Today.
Since we want as much new code covered as we can get, we can compare the behavior of different teams working on different new features. If team 1 has started measuring their new code coverage percentage and it's near 100%, and team 2 does the same but is close to 50%, you can actually ask why, and get relevant answers. You might get some “legacy code keeps dragging us down” answers. Or, “the team's not knowledgeable about tests”.
But here’s the catch – improve the code and the team, so you can get higher coverage numbers!
This is a metric that specifically tells you how well the team performs their testing. Which is really what you want to know, and improve if needed.
Remember, if you're really interested in how your team is doing, you'll want to measure more things to get a clear picture.
What’s that, you need help with that?
Contact me. I can help with these things, you know.
Karlo Smid · October 1, 2020 at 6:45 pm
Hi Gil! This is a very interesting approach to measure how code coverage (I will assume line, branch, loop, path) incrementally changes.
I would like your opinion on this comment:
“Because if it drops, the feature is not covered, and the developers are pushing tests to “later when they have time”, which is never. So you keep it up.”
What is the context of feature here? User story, a feature that is part of user story, or a function that makes a feature?
Do you do scenario testing with TDD?
Gil Zilberfeld · October 5, 2020 at 11:26 am
Thanks for the comment Karlo!
The context would be whatever you want it to be, but I would take it to mean “feature”. With user stories, if you do them correctly, you'll get them done with tests as part of their “done” criteria, and after they are done, there's not going to be more coverage on them. The point of the metric is to see if people catch on to writing tests – so you need to give it time. And it's not a standalone thing – if you're doing code and test reviews, you may not need the coverage metric at all. Like I said, it can be done “differently”, but this thing is easy to measure and means something.
As for your second question I need a bit of clarification: what do you mean by scenario testing? I can answer better if I understood your context.