I’m working on an interesting project for a client about showing different, somewhat advanced metrics on their code quality. So I decided to write about metrics: what they mean, how you can track them, and, much more importantly, how to use them. And yes, even though I talked a lot about coverage before, I’m going to mention it again. And again. But we’ll see.
The first metric I’m going to talk about is… number of new tests. Well, that’s not really a surprise based on the title. You’re still impressed, right?
What is it?
The number of new passing tests added between builds or over a period.
Should I track it?
Yes. For a while.
Why should I track it?
At some point in your life, or your team’s life, there comes a time when you’ve had enough of those returning bugs that you find really late and that are a real pain to debug. Then you get a couple of insights: We need automated tests. And we need testable code. And we need code reviews. And by golly, we have arrived!
So people need to start writing and running automated tests. What is the indication of people adding tests? The number of tests! It’s not rocket science, and if it’s easy and simple, you just use it.
And remember, like any important metric, you publish it, visualize it, and talk about it and about the progress the team is making. Because otherwise it’s just a number that doesn’t carry importance.
What does the metric mean?
The simple answer is “do people write tests”. But once you start digging in, you may find interesting stuff. For example, for most teams, the finding would not be “holy velocity, Batman, they are producing more tests than a test factory on drugs”. But hey, let’s say they do. You didn’t expect that, so you can ask more questions: Are those good tests? Are those tests easy to write? Are they cheating me? And if not, why didn’t they start 3 years ago?
Ok, I haven’t seen this happen.
Most of the time the number either doesn’t change, or grows painfully slowly. Then you can ask things like: Why does it take so long to write tests? You’ll find there’s training missing. And knowledge about the system. And that the testability of the application sucks. And then you can do something about it. Like call me for training or coaching. Or do some refactoring in those painful points. And truly understand the meaning of development.
How to track it?
That’s easy. Whenever you build your project and run the tests (I’m assuming you’re using some kind of continuous integration system, like Jenkins, TeamCity, CircleCI or whatever flavor), you can see the number, right there for each build. If you’re running a nightly build, that’s fine too. Weekly not as good, but still works.
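If your CI server doesn’t chart this for you, the number is easy to pull out yourself. Here’s a minimal sketch that counts tests in a JUnit-style XML report (the format Jenkins, TeamCity, and CircleCI all know how to produce) and computes the delta against the previous build. The file paths and function names are my own assumptions, not any particular CI tool’s API:

```python
# Count tests in a JUnit-style XML report and compare against the
# previous build's count. Paths and names here are illustrative --
# point it at whatever report your CI server writes.
import xml.etree.ElementTree as ET

def count_tests(report_path):
    """Sum the 'tests' attribute across all <testsuite> elements."""
    root = ET.parse(report_path).getroot()
    # A report may have a single <testsuite> root or a <testsuites> wrapper.
    suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
    return sum(int(s.get("tests", 0)) for s in suites)

def new_tests(current_report, previous_count):
    """Return (current count, number of tests added since the last build)."""
    current = count_tests(current_report)
    return current, current - previous_count
```

Stash the previous count anywhere durable (a file in the workspace, a build artifact) and you have a per-build “new tests” number to put on the wall.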
You’ve got the number. Show it and discuss it. Ask “what can we do to improve it?”.
Once you see a steady rise in the number of tests, you have achieved one or more goals – people are writing tests on a regular basis, which means they have the know-how and the system doesn’t resist tests. Then you can stop tracking it.
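If you want to make “steady rise” concrete rather than eyeball it, one simple option is to check the average growth per build over a recent window. The window size and threshold below are assumptions you’d tune for your team, not a standard formula:

```python
def steady_rise(counts, min_growth_per_build=1):
    """Rough 'are we still improving?' check: True if the test count
    grew by at least min_growth_per_build on average across the
    observed builds. counts is a chronological list of test counts."""
    if len(counts) < 2:
        return False
    avg_growth = (counts[-1] - counts[0]) / (len(counts) - 1)
    return avg_growth >= min_growth_per_build
```

For example, `steady_rise([100, 103, 103, 110])` averages about 3.3 new tests per build, so the habit looks established; a flat `[100, 100, 101, 100]` does not.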
One more thing. The metric is not comparable across teams, projects or even the same team on the same app one year later. Don’t compare those numbers, it leads to pain, which leads to suffering, and doesn’t end well in most trilogies.
Want to introduce a dashboard for your team? Get in touch!