This series deals with the implementation of a unit testing process in a team or across multiple teams in an organization. Posts in the series include:
Leading Indicators I
Leading Indicators II
Leading Indicators III
We talked about management attention and support, and there’s more leaders can do to help us make the process work.
Turns out the people who care about the metrics are the ones who have the power to facilitate their collection, demand the reports, analyze the patterns and ask for correction plans. Who knew?
The funny thing is that metric collection is a leading indicator in itself. If the process of metric collection, analysis and feedback is actually running, chances are the whole effort will go well, since somebody’s supporting it. If it isn’t (for any combination of reasons), people see that the process is “not that important to managers”, which quickly translates to “it’s not that important to us, or to me”. Even if people care about unit testing, they care more about making sure they don’t get caught on the other things management does care about. Safety first.
Even when the process does go well, an anti-pattern may appear: sticking to the initial plan, rather than changing course over time, which is what we expect leaders to do.
An example of that revolves around our favorite metric: management declares that coverage should increase over time. Since our leaders are wise, they don’t set a minimum coverage threshold, just an expectation that coverage keeps growing. And we’re not covering old code, just new code. Looks innocent enough.
What happens next will surprise you (not). To keep the coverage increasing, people are encouraged to add tests to their code. But since our developers are also wise, they don’t write tests for code where unit tests don’t make sense (like data transformation, or auto-generated code).
So the coverage doesn’t rise. Alas, if that kind of code makes up most of their work, they don’t get “coverage points”, or in some measurement systems, even lose some. Remember safety first? Let the gaming begin. Developers may add tests that are ineffective (or even harmful), just to satisfy the metric.
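To make the gaming concrete, here’s a minimal, hypothetical sketch (the function and test names are mine, not from any real codebase) of what a coverage-satisfying but ineffective test looks like next to an effective one. Both tests execute every line of the function, so a coverage tool scores them identically; only the second one can actually catch a bug.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given discount percentage."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


def test_apply_discount_covered_but_useless():
    # Runs the code under test, so the lines count as "covered",
    # but asserts nothing: a broken formula would still pass.
    apply_discount(100.0, 25.0)


def test_apply_discount_effective():
    # An effective test pins down the expected behavior.
    assert apply_discount(100.0, 25.0) == 75.0
```

This is exactly why raw coverage numbers need the retrospection and feedback discussed below: the metric alone cannot tell these two tests apart.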
The only way to make sure this doesn’t happen is through retrospection, analysis and feedback. I’ve already said that as stakeholders, leadership can make sure these take place. But should they do all that by themselves?
Here comes the next way leadership can help the process: creating forums where learners can keep learning from peers and grow into internal experts. We call those communities of practice: places where practitioners discuss, review, analyze and get feedback, learning from the successes and mistakes of others.
These forums are more likely to be created, and to keep meeting over time, with management support. In these forums, the discussion takes two forms: the tactical level – how to write better tests, refactoring methods, and so on – but also the strategic level – looking at the process itself and its metrics, suggesting course corrections, and following up on their implementation.
Now, the more authority we give these communities, the better. We like self-organizing, self-managing teams. But they still need leadership in order to be created, to keep existing, and to get help when they need external resources and effort.
Combine all these leadership support methods, and we’re on our way to a successful implementation.