This is the 5th and final part of the series about analyzing what we’re coding, and therefore what to test. Last time, we actually got to writing the tests. Let’s do some cleanup.
Step 8: Review
Once we have the tests, it’s time to review both the code and the tests, and we do that on a couple of levels.
At the code level, this is a regular code review – for clarity and readability, coding conventions, code location, names, and so on.
At the functional level, we review the cases we’ve covered. We start by looking back at our test categorization plan from step 3, and see if everything we’d wanted to test is tested.
Since new ideas and implementation details have come up during the process, we need to redo some of the risk analysis on the current state of the code. Do we need to add (or remove) tests? What kind of tests should they be?
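One lightweight way to make this part of the review concrete is to diff the step-3 plan against the tests that actually exist. The sketch below is illustrative only: the category names, test names, and the `coverage_gaps` helper are hypothetical, not from the series.

```python
# Hypothetical sketch: compare the step-3 test categorization plan
# against the tests that were actually implemented.
# All category and test names here are made up for illustration.

planned = {
    "happy path": {"test_valid_order"},
    "boundary": {"test_empty_cart", "test_max_items"},
    "error handling": {"test_payment_declined"},
}

implemented = {
    "test_valid_order",
    "test_empty_cart",
    "test_payment_declined",
}

def coverage_gaps(planned, implemented):
    """Return planned tests that were never written, grouped by category."""
    return {
        category: missing
        for category, tests in planned.items()
        if (missing := tests - implemented)
    }

print(coverage_gaps(planned, implemented))
# {'boundary': {'test_max_items'}}
```

An empty result means the plan is fully covered; anything left over feeds directly into the “do we need to add tests?” discussion.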
We need to see the impact of our changes on the manual testing and manual regression effort. Do we need to re-run tests manually? What kind of effort is required?
None of these reviews is just one person’s work. All reviews are collaborative.
Check both code and tests in, run everything, and ship. Oh, and there’s one more thing we might want to do.
Step 9: Knowledge sharing
What have we learned in the process?
It can be new design patterns, or that certain code requires a massive refactoring. It may be that code in the vicinity of what we tested needs more tests, or that the naming convention we use sucks.
Most of the learning we do comes from the actual work. It’s a shame to accumulate that knowledge and let it go to waste. It may even be risky.
Document what you need, take action items, and make sure to follow up on them. Decide on the format and venue (it can be the retrospective, or a smaller forum, you know what’s best).
Just don’t keep everything to yourself.
And that’s it. On to the next task, and starting the process again.
Also, check out the video of my talk “Unit Testing Strategy”.