This series deals with the implementation of a unit testing process in a team or across multiple teams in an organization. Posts in the series include:
- Leading Indicators I
- Leading Indicators II
- Leading Indicators III
Any process we start to roll out requires management support, if we want it to succeed, anyway.
Inside teams, if the team leader opposes the new process, she will work against it, either actively or secretly. If she’s for it, she’ll mandate it. When the team is independent enough to make its own decisions, the team leader will approve of it doing so and facilitate the team’s success.
When we’re talking about multiple teams and cross-organization processes, it’s not even a question. Not only do we need to make sure the new processes take hold; sometimes we also need to make more resources available if they aren’t there to begin with.
Think about an organization moving from one team writing tests to multiple teams. We need to support all of them at the IT level (enough resources and environments to run the tests), at the branch management level (who works on which branch, and when changes move to the trunk), at the automation level (optimizing build performance), and at the coordination level (what happens when the build goes red).
To make a (very) long story short, it takes management time, attention, and a lot of gentle pushing to allow the process to take effect.
Oh, and there’s one more thing leaders need: Patience.
Regardless of how simple the process is (and unit testing is definitely not simple), patience is a prerequisite. Any process implementation takes time, and we usually see the fruit of our labor well down the line. Add to that a steep learning curve, and the recipe for impatience is complete.
The learning process in unit testing seems short – if you focus on learning the tools. But until people start writing tests regularly and effectively, and see the benefits, it takes weeks, usually a lot more.
If there are constraints and conflicts, it will take even longer. Consider a team working inside a legacy code swamp, with a closed architecture they aren’t allowed to change. Their ability to change the code is constrained, and therefore so is their ability to write tests. That means fewer tests written, and often less effective tests at that (you test what you can, regardless of importance).
Expecting coverage to rise under these conditions, and even more so expecting the number of effective tests to increase, is bound to crash into reality. With failed expectations come disappointment, maybe an angered response, lost faith in the capabilities of the developers, and often calling it quits way too early.
We’ll continue to discuss what else leaders can do in the next post.