Integration Testing with Spring: Configuration Logic in Integration Tests

Gil Zilberfeld talks about an anti-pattern in Spring integration tests, where flags in the tests are used for mocking and controlling simulators
This is a short series on how to use Spring in integration testing and unit testing. Posts in the series: Configurations, Mocking, Testing a REST API, A custom configuration, Configuration logic.

Now that we’ve covered some of Spring’s capabilities, we can explore possibilities beyond simple mocking. Instead of “regular” mocks (which we can set up in the integration tests), we can inject actual simulators. For our purpose, let’s define a simulator as something that has its own logic, which simulates a production component.

Simulators deserve their own topics, and indeed – their own tests. Because they have logic, they need to be tested separately. For the sake of our discussion, let’s assume they are tested and work as we want them to.

As we’ve already seen, we can inject whatever bean we want with Spring, and it’s best to use the Configuration classes to determine what to inject. A pattern I’ve started seeing seems to misuse the capabilities of Spring’s injection, as well as introduce possible bugs.

The pattern goes like this. In the test class, we inject the simulator as a bean, as we usually do. We also inject a value from a property file:
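    // A sketch - PaymentSimulator and the simulator.enabled property
    // are illustrative names, not from the original post
    @Autowired
    private PaymentSimulator simulator;

    @Value("${simulator.enabled}")
    private boolean simulatorEnabled;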

You can already guess where this is going. Next, we’ll see in the test body this code:
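    // The anti-pattern in a nutshell: configuration logic inside the test body
    @Test
    public void paymentIsApproved() {
        if (simulatorEnabled) {
            simulator.approveAllPayments();   // illustrative simulator API
        }
        // the actual scenario and assertions run against either the simulator
        // or the real component, depending on the flag
    }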

Integration tests are code

As applications grow, we find that we need more diversity in how we use components in the application, and simulators are no different. Even more so if they are kind of frozen in terms of behavior. Or maybe they’re hard to configure for the purpose of a single test, and then again for the next one. So it may seem that we need this kind of code to control our integration tests.

However, this is not the best way to achieve it. First of all, we’re spreading the configuration code across different places – the property files, in combination with the Configuration classes, maybe some Profiles, and even other places. When we keep data all over the place, it’s hard to understand the whole picture, and of course maintenance becomes a nightmare.

In addition, let’s think about the usefulness of the tests. Integration tests are supposed to catch logical errors, and now we’re introducing logic inside them. Having bugs in the integration tests lowers our trust in their results.

Even if we get over this bump, what can we learn from the integration test results? Looking at Jenkins, we’ll see the names of the integration tests, but in what mode did they run? Real or simulator? How was the simulator intended to behave? We need to go into the integration tests themselves, maybe even debug them, to see how they actually ran, and if the result is what we want.

So, what do we do?

  • Manage configuration data in Configuration classes.
  • If you have a simulator, consider it a separate Configuration.
  • Maybe running with multiple simulators is a cross-application mode that can be better managed under a specific Profile.
  • Separate integration tests that run in different configurations to different integration test classes.
  • Do not check logic flags in tests. Fit them into a consistent configuration setting mode that is separate from property files.
  • Manage configuration data as code – put it where you expect to find it, in a central place, preferably not in multiple files.

Integration Testing with Spring: A Custom Configuration

Gil Zilberfeld explains another part of Spring’s support for integration tests and configuration classes.
This is a short series on how to use Spring in integration testing and unit testing. Posts in the series: Configurations, Mocking, Testing a REST API, A custom configuration, Configuration logic.

Here’s the situation: we have a couple of configuration classes we use for integration tests. Each of them defines a different set of real and mock objects. Some of the objects have behaviors set on them (using Mockito.when) in the configuration classes.

Now, one of the integration tests needs one of the mock beans in a slightly different setup. Since Spring initializes the mocks once, we have a problem: either the new integration test uses the mock object as-is, or we call Mockito.reset, and then all the other integration tests suffer, since we don’t want to rely on the order of running.

But let’s step back for a minute. How did we get here?

This application is heavily Spring-based. Everything is injected and auto-wired, every small object is a @Bean, and so integration tests rely on configuration classes that contain all the beans needed to complete a flow.

Now, here’s the issue: Everything works fine until some integration tests need a custom configuration that is slightly modified. Such a configuration is expensive to build, and of course, managing multiple configuration sets is very cumbersome. How do we do that?

This is not an ideal situation to begin with, so there are no optimal ways to solve that, but let’s go through the options.

The first solution is to add another configuration class. But when the next @Bean is introduced, we’ll need to add it across all existing configurations. It won’t be long until we are in configuration hell. This process needs to be managed, and while it is the simpler solution, it is not sustainable.

The second solution, offered by Spring, is using the @Import annotation. Much like include files, we put the common beans in the imported configuration class, and the custom configuration classes use @Import to include the common file.
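Something like this, with illustrative class names:

    @Configuration
    @Import(CommonBeansConfiguration.class)
    public class CustomTestConfiguration {
        // only the beans specific to this set of tests go here
    }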

If you’ve worked with any type of include files before, you know what a pain they are to manage. Also, in big projects, we need to make sure that everyone who edits configurations and adds tests knows the guidelines – which @Bean goes where. And then we create hierarchies of imported configurations. We’re back at configuration hell.

Even then, there are still cases where this doesn’t solve the problem completely. We still need to support adding a special bean, right there in the middle of the hierarchy, and to do that we need to shake the entire tree.

The third option is a modified, custom version of a configuration. We can use Java’s inheritance to create a special configuration that extends the “common” configuration class. Then we override just the bean we want.

For example, here’s a configuration class, containing a mock for tests:
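    // An illustrative sketch - the Student bean and its default behavior
    // stand in for the real ones
    @Configuration
    public class MockConfiguration {

        @Bean
        public Student mockStudent() {
            Student student = Mockito.mock(Student.class);
            Mockito.when(student.getName()).thenReturn("John");
            return student;
        }
    }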

Most of our integration tests use this configuration, but we want a special case where we want another mock behavior. We can do the following:
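    // Override just the one bean; every other bean comes from the base class
    @Configuration
    public class OverridingMockConfiguration extends MockConfiguration {

        @Bean
        @Override
        public Student mockStudent() {
            Student student = Mockito.mock(Student.class);
            Mockito.when(student.getName()).thenReturn("Jane");
            return student;
        }
    }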

With that configuration available, our special integration tests can now use the OverridingMockConfiguration. This is easier than setting up a new configuration with all other beans, and less hassle than the other options we went through in terms of management.

However, this is not hassle-free: we need to keep this “special” configuration separate from the “regular” configurations. We don’t want other people and tests using this configuration. Also, this configuration class is susceptible to changes in its base class.

Like I said, there’s no good solution. Each has a maintenance and risk cost that comes with it.

What can reduce these costs? A better architecture. If there are fewer beans to inject, configurations become smaller, and so do their variations. Minimize those, and the configuration costs and types go down.

Clean Code: The Rectangle and the Square – Part II

Gil Zilberfeld explains how clean code should read and convey the relationship between types – using class inheritance, maybe.
This series is about Clean Code, SOLID principles, and all kinds of other cool stuff I talk about in my Clean Code classes. Posts in the series: The Rectangle and the Square Part I, The Rectangle and the Square Part II.

Last time, in the first post in my new clean code series, we discussed how I torment my students with the ol’ Square and Rectangle trick in my Clean Code course, talking about Liskov Substitution Principle (LSP). At the end of the discussion, we got the audience to understand the issue, and go from “you’re doing it wrong” to “so what?”.

So let’s talk about the bigger issue at hand, because it’s not just about clean code.

Remember before clean code, when we had just learned about OOP, inheritance and class derivation?

Inheritance is a feature of a programming language that conjures up a relationship – the derived class is “a kind of” the base class.

Well, it can be, but as we saw, it’s not up to the derived or base class to decide. It’s how the client code uses them that decides if they are treated the same way. Behavior is not just method signature and implementation. It’s also the usage of these methods across the class – what we expect them to do when we call them in combination.

“A kind of” is not really a programming language feature; we decide on the semantics. We frequently interpret this incorrectly, and create hierarchies of classes which we see as “a kind of”, but which really are special cases that need to be treated differently.

Special relations

Ok, there must be another way. Let’s look at another solution to the problem.

Here’s another definition of a Rectangle, where the area calculation behaves consistently, based on the length of the diagonals and the angle between them:
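    // An illustrative sketch: a rectangle defined by its (equal) diagonals
    // and the angle between them
    public class Rectangle {
        private double diagonal;
        private double angle; // between the diagonals, in radians

        public void setDiagonal(double diagonal) {
            this.diagonal = diagonal;
        }

        public void setAngle(double angle) {
            this.angle = angle;
        }

        // Half the product of the diagonals times the sine of the angle
        // between them; in a rectangle, both diagonals are equal
        public double area() {
            return (diagonal * diagonal * Math.sin(angle)) / 2;
        }
    }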

For our area calculation purposes, a Square is equivalent to a Rectangle. It is literally and programmatically “a kind of” a Rectangle:
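    // The only special thing about a square here: its diagonals
    // always cross at a right angle
    public class Square extends Rectangle {
        public Square() {
            setAngle(Math.PI / 2);
        }
    }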

The area calculation code is similar for both Square and Rectangle of course:
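    // Client code, identical for both shapes (the values are illustrative)
    Rectangle rectangle = new Rectangle();
    rectangle.setDiagonal(5);
    rectangle.setAngle(Math.toRadians(60));
    double rectangleArea = rectangle.area(); // 12.5 * sin(60 degrees)

    Rectangle square = new Square();
    square.setDiagonal(5);
    double squareArea = square.area(); // 12.5, since sin(90 degrees) = 1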

Because the Square is a Rectangle, there can be no discrepancy between their behaviors, and that means that LSP approves it – we can replace Square with Rectangle and everything works as expected.

Something still feels wrong about this design.

Clean code is about other people

In a day-to-day conversation, would you describe a rectangle by its diagonals and the angle between them? Most people don’t.

They will look at your Rectangle code and snicker. Some will not even trust you anymore, and you’ll spend lunch eating alone. Width and height are good for most humans, diagonals and sine language – not so much.

That’s because the “width and height” design is how most people think of properties of a rectangle, while the diagonals and angle design is fit for area calculation. The former is about designing from the inside out, regardless of use, while the latter is using the code from an external client with a special use in mind.

There’s a gap between what we build and how it’s used. Plus, it took me a while to find the formula and prove to myself that it is correct; I suspect other people will require the same treatment.

This is where bugs come from.

We translate models all the time – the business, the code, the intended and actual behavior – and things get lost.

Ironically, we think of “class inheritance” as a sure way to describe the same kind of types, while forgetting that it’s really not. We’re using one language where it does not translate correctly to our understanding.

So what do we do?

  • Use ubiquitous language. Same language, models, terms in the code, the design and the requirements.
  • Use implementation inheritance less.
  • Use interface “inheritance” instead. Interfaces have less baggage than implementation.
  • If you do use inheritance, check if LSP still works. If not, you found a loophole. Either the types are not of the same kind, or there’s a translation problem.
  • Review and review again. Other people can confirm or disprove what you’re thinking.

Remember that clean code is about communication with other people. Use their feedback to make sure.

Clean Code: The Rectangle and the Square – Part I

Gil Zilberfeld explains how things that seem similar may not be so in clean code
This series is about Clean Code, SOLID principles, and all kinds of other cool stuff I talk about in my Clean Code classes. Posts in the series: The Rectangle and the Square Part I, The Rectangle and the Square Part II.

In my Clean Code class, I go through this example about the Liskov Substitution Principle (LSP), part of the SOLID material. This example, the rectangle and the square, never fails to stump people, both experienced and less so.

It starts out with the question: Is a square a rectangle? Which of course, everybody knows is true.

I then show an example of a definition of a Rectangle class:
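    // An illustrative version of the class from the course
    public class Rectangle {
        private int width;
        private int height;

        public void setWidth(int width) {
            this.width = width;
        }

        public void setHeight(int height) {
            this.height = height;
        }

        public int area() {
            return width * height;
        }
    }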

Then of a Square which inherits from Rectangle:
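    // A square's sides are equal, so setting one dimension sets both
    public class Square extends Rectangle {
        @Override
        public void setWidth(int width) {
            super.setWidth(width);
            super.setHeight(width);
        }

        @Override
        public void setHeight(int height) {
            super.setWidth(height);
            super.setHeight(height);
        }
    }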

So far, so good. Clean code at its best.

Now let’s look at the client code, using the Rectangle class:
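    // a sketch of the client code
    Rectangle rectangle = new Rectangle();
    rectangle.setWidth(2);
    rectangle.setHeight(5);
    int area = rectangle.area(); // 10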

Which produces an area of 10. Then I describe the same usage of a Square:
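    Rectangle square = new Square();
    square.setWidth(2);
    square.setHeight(5);
    int area = square.area(); // 25 - the last call set both sides to 5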

Which of course produces 25. At this point, everybody understands the results, but something feels wrong. And they can’t really articulate why.

Then somebody says: “That’s not how you calculate the area of a square.”

Now we’re getting somewhere.
But the problem is still not clear, since the code is obviously correct. The design is ok. There is no bug. Clean code, we’ve already said.

What is the correct way to calculate the area of a square?

“Well, you set either the height or the width, not both”.

Which is different than calculating an area of a rectangle.

The Shape Of Things To Come

As I describe in the Clean Code class, LSP requires the ability to substitute the derived class (the Square in our case), with the base class (the Rectangle), and expect the same behavior. Obviously, this does not happen here, because the same behavior leads to a different result.

Here’s the kicker: Behavior doesn’t mean just implementation. And it is not just having the right interfaces.

Behavior also includes operations across methods. For the same inputs and the same method operations, we expect both the rectangle and the square to return the same results. They don’t.

Big deal, so the code is not LSP-compliant. Is the world coming to an end?

No, it’s not. But.

It does mean that we’ll write more code to support the special cases, while we could be writing more generic, clean code. And we’ll need to unit test these other cases as well. Having different things means more code.

But there’s an even bigger issue. We’ll discuss it next time.

Integration Testing with Spring – Testing A REST API

Gil Zilberfeld talks about integration testing a REST API
This is a short series on how to use Spring in integration testing and unit testing. Posts in the series: Configurations, Mocking, Testing a REST API, A custom configuration, Configuration logic.

After we understand how to use mocks in Spring integration tests, let’s take a look at a setup for testing a REST service that uses a dependency we want to mock. API testing is a common integration test scenario, and with those tests, we might need to mock dependencies buried under the API layer. Spring to the rescue.

Our StudentService contains an endpoint like the one below, and we’d like to mock the student in integration tests for both cases (either null or not):
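    // An illustrative sketch - the path and class names stand in
    // for the original post's own
    @RestController
    public class StudentService {

        @Autowired
        private Student student;

        @GetMapping("/student/name")
        public String getStudentName() {
            String name = student.getName();
            if (name == null) {
                return "unknown";
            }
            return name;
        }
    }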

We’ll use the updated configuration from last time, without any behavior setup:
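    @Configuration
    public class MockInjectionConfiguration {

        @Bean
        public Student mockStudent() {
            // a plain mock - no when() setup, so Mockito's defaults apply
            return Mockito.mock(Student.class);
        }
    }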

Mockito’s default is to return null from methods we didn’t set behavior on using when(). That means that when getName() is called, null is returned.

Now we need to set up the integration test class.

Integration tests are a bit more elaborate than regular unit tests, so let’s break the class down. Let’s take a look at the annotation part first:
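    // A sketch of the test class skeleton - the application and
    // configuration class names are illustrative
    @RunWith(SpringRunner.class)
    @SpringBootTest(classes = StudentServiceApplication.class,
            webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
    @ContextConfiguration(classes = MockInjectionConfiguration.class)
    public class StudentAPITests {

        @LocalServerPort
        private int port;

        @Autowired
        private Student mockStudent;

        private TestRestTemplate restTemplate = new TestRestTemplate();
        private String url;

        // the @Before method and the tests below go inside this class
    }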

The @RunWith and @ContextConfiguration annotations are used just like we used them before – selecting Spring as the JUnit runner, and choosing the right configuration. We’ve also let Spring Boot know which application class to run (the one that includes our service), and allowed it to select a random port, using the @SpringBootTest annotation. This port number will be injected into the class using the @LocalServerPort annotation.

In addition, we @Autowired the mockStudent. We need it for the second test (it’s not needed in the first integration test, because we’re using the default setting, returning null).

The @Before method just sets up things for the call:
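    @Before
    public void setUp() {
        // the random port is known only at runtime;
        // the path matches the illustrative endpoint above
        url = "http://localhost:" + port + "/student/name";
    }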

After that, the tests use the TestRestTemplate to invoke the service. The first integration test checks the return value for a non-existent student:
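    @Test
    public void unknownStudentName() {
        String result = restTemplate.getForObject(url, String.class);

        assertEquals("unknown", result);
    }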

Since the mock is just initialized, it will return null as the name, which causes the service to return the “unknown” value – the case we want to cover in our first integration test.

In the second integration test, we also set behavior on the mock for the getName() method:
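    @Test
    public void existingStudentName() {
        // "John" is an illustrative value
        Mockito.when(mockStudent.getName()).thenReturn("John");

        String result = restTemplate.getForObject(url, String.class);

        assertEquals("John", result);
    }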

That’s it so far. We may continue with more on integration tests later.

Integration Testing with Spring – Mocking

Gil Zilberfeld explains how to configure Spring with mocks for integration tests
This is a short series on how to use Spring in integration testing and unit testing. Posts in the series: Configurations, Mocking, Testing a REST API, A custom configuration, Configuration logic.

Let’s continue where we’ve left off – multiple configurations for integration tests.

We use different configurations when we need to inject two different sets of objects – for example, a real one and a mocked one of the same type, for different integration test purposes.

Let’s say we have a REST API that calls some component logic, which then calls the database through a DAO (data access object).

In the first set of test scenarios, I want to mock the database, and test that the logic component works correctly (similar to an isolated unit test, but through the API). In other scenarios I want to make sure the entire flow works, all the way to the database. So I’ll need two separate configurations – one that injects a mocked DAO component, and one that injects the real one.

Note that managing configurations takes some work. We usually don’t have a configuration per test class, so that means we create a test configuration that serves multiple integration tests. The configuration classes need to be maintained and kept light, so they will fit every consumer.

Configuration pitfalls

Let’s look at the configuration from last time for the mock.
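As a reminder, it looked something like this (the Student bean is illustrative):

    @Configuration
    public class MockInjectionConfiguration {

        @Bean
        public Student mockStudent() {
            // behavior is baked into the configuration
            Student student = Mockito.mock(Student.class);
            Mockito.when(student.getName()).thenReturn("John");
            return student;
        }
    }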

Spring injects any object once on startup by default. That means that integration tests share these mocked instances. That’s an issue we need to understand.

The mockStudent in MockInjectionConfiguration that is injected already has behavior set on it. When the first integration test runs, Spring injects it as it is written.

But when a second integration test tries to set behavior by using Mockito.when on getName(), it will add the behavior, not override it. And when we’re using Mockito.verify(), oh the laughs we’ll have…

The solution to this is a convention for writing the tests and the configuration. A better way is to define the configuration like this:
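    @Configuration
    public class MockInjectionConfiguration {

        @Bean
        public Student mockStudent() {
            // a plain mock - each test defines the behavior it needs
            return Mockito.mock(Student.class);
        }
    }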

The injected object is a plain basic mock. Let the integration tests define their own behavior. That means that any integration test can assume that it’s starting from scratch.

But assumptions are for fools. We need to make sure the assumption is correct, and that means that we need to reset the mock manually. For example, we can use Mockito.reset() in a setup method:
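    @Before
    public void resetMocks() {
        // clear any stubbing and recorded interactions from previous tests
        Mockito.reset(mockStudent);
    }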

Without this, mocks can seem to behave erratically. As in, they behave exactly as we tell them, but not how we expect.

But even that maybe too much work for some of us.

If we’re lazy and want to avoid that, Spring Boot can do this for us, if we declare the injection in the test with @MockBean instead of @Autowired. With @MockBean we don’t need a @Configuration class to inject the mocks – Spring automatically injects a fresh mock of the object with every test. For further setup of the mock, you can use either the @Before method or the tests themselves.
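A minimal sketch of how that looks (class and bean names are illustrative):

    @RunWith(SpringRunner.class)
    @SpringBootTest
    public class StudentLogicTests {

        // no @Configuration class needed - Spring Boot replaces the
        // Student bean with a fresh mock for every test
        @MockBean
        private Student mockStudent;

        @Test
        public void studentHasName() {
            Mockito.when(mockStudent.getName()).thenReturn("John");
            // exercise the code that uses the injected Student
        }
    }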

One more thing to remember: using beans in the production code makes them easy to use in integration tests. That ease of use comes with the price of speed.

Even if your tests don’t use mocks or injection, the code-under-test may. Spring is slow in ramp-up and in run-time. If you write unit tests for the code, make sure it is free of beans, and inject the dependencies manually. “Regular” unit tests (the ones that don’t use Spring for injection) run much more quickly. It also makes sense to locate them separately from the integration tests, so they can run separately.

In the final part, we’ll see how we set up a test for a REST API that calls a mock internally.

Integration Testing with Spring – Configurations

Gil Zilberfeld talks about configuration for mocking in integration tests and unit testing
This is a short series on how to use Spring in integration testing and unit testing. Posts in the series: Configurations, Mocking, Testing a REST API, A custom configuration, Configuration logic.

First, a couple of words about Spring in general as a dependency injection framework. One of the best things about Spring is its simplicity of injection. Regardless of where you are, you pop an @Autowired annotation on a class variable (which could be in the test class), and it’s ready for injection. Since it injects the same instance regardless of class, setup for mocks is easy. And regardless of how many layers away the object will be injected, the test has access to it, and doesn’t need to pass it between layers, which requires less code. That is always a good thing.

On the registration side, with @Configuration classes, you can configure exactly and easily what to inject. Spring Boot allows you to do a bit more, with easy bootstrapping of a project, along with a bit more help for unit tests and integration tests.

All these make it easy to use (and abuse) Spring when writing integration tests. Let’s take a look at a couple of scenarios for integration testing.

For our first example, we have a REST API that internally calls a dependency. In integration testing APIs like this, we usually go all the way to the back end, but even then, we might want to mock something at the back end, mostly for controlling the behavior of our integration test.

First, I want to inject a real object I create. In a @Configuration class I put under the test folder, I create this configuration:
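    // An illustrative sketch - Student stands in for any bean we want to inject
    @Configuration
    public class RealInjectionConfiguration {

        @Bean
        public Student realStudent() {
            return new Student();
        }
    }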

Then in the integration test I’ll use the @Configuration class:
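    @RunWith(SpringRunner.class)
    @ContextConfiguration(classes = RealInjectionConfiguration.class)
    public class StudentTests {

        // the instance created in the configuration class
        @Autowired
        private Student student;

        @Test
        public void studentIsInjected() {
            assertNotNull(student);
        }
    }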

That is more of an explanation of how Spring works and, obviously, not that useful. In actual integration testing scenarios I use this for injecting POJOs into internal layers, or for injecting objects that will call dependencies I mock.

If I want to inject a mock instead (let’s say Student wasn’t a POJO), I’ll use a different @Configuration class:
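    // An illustrative sketch, with default behavior set in the configuration
    @Configuration
    public class MockInjectionConfiguration {

        @Bean
        public Student mockStudent() {
            Student student = Mockito.mock(Student.class);
            Mockito.when(student.getName()).thenReturn("John");
            return student;
        }
    }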

And then I can use it in a test:
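    @RunWith(SpringRunner.class)
    @ContextConfiguration(classes = MockInjectionConfiguration.class)
    public class MockedStudentTests {

        @Autowired
        private Student mockStudent;

        @Test
        public void addBehaviorInTest() {
            // "Jane" is an illustrative value
            Mockito.when(mockStudent.getName()).thenReturn("Jane");
            assertEquals("Jane", mockStudent.getName());
        }
    }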

Now that it’s in the test, I can add mocking behavior.

That’s the basic @Configuration stuff. We’ll continue the discussion on configurations and mocking next time.

How TDD Can Conquer The World (And Why It’s Unlikely To Happen)

TDD is unlikely to win

He said: “I asked all my friends, and none of them likes TDD”.

This one I haven’t heard before, although I suspect I should have at some point. Like any practice, TDD has a social side.

I told him to find new friends, which he conveniently ignored. He then continued: “We, developers, want to move forward, build software. TDD slows us down”.

That’s true, TDD slows us down, because it forces us to think, which is important in software.
Why is that something that needs explaining? Over and over again?

Wash Your Hands

Uncle Bob uses the “doctors washing their hands” metaphor for explaining TDD. Today it’s commonplace for doctors to wash their hands before handling patients, but was that always the case? It’s not like all doctors, all over the world, switched to washing their hands one day. It was a continuous process that probably reached a critical point, when doctors realized that it’s good for their patients.

Of course, there was opposition to the idea. You’d probably hear “I need to take care of my patients, this hand-washing thing just slows me down”.

Yet hand-washing has crossed the chasm.
What would it take for TDD to get to a “washing hands” status? Thou must count to three.

Three shall be the number of the counting

  • Education

When people die, that’s a big impact. When people are saved, and live, and catch fewer diseases, that too is a big impact. When we see the correlation (or let’s say, when we’re convinced that there is one), we see motivation to change. However, we don’t always see the impact of getting quality software out the door. So we need to be educated.

It’s leadership’s job to communicate this correlation. Not only that, management needs to understand the business impact of technical debt, so developers make sure the code provides the needed business benefits. Those benefits also translate to regular working expectations.

  • Regulation

Next is regulation. I’m sure hand-washing opponents wouldn’t just jump on board without proper external incentives. As in, “you do it, or you won’t work here again”.

What’s the chance of that happening with software practices? Pretty low. As I’ve written before, I believe the software business is going to be regulated at some point, and badly at that. I don’t see how regulating software development techniques will actually make them work. For example, regulated TDD? First the regulator needs to understand it, then it needs a way to see that developers are conforming to it. Coverage, you say? Good. And then the gaming is on. I don’t see that happening soon, though we’re getting close.

Regardless, external forces saying “do this or you don’t work again” can have that effect.

  • Social pressure

That means swaying the crowds toward good development practices. It is taking place, although at a glacial speed that is really hard to see. Without proper education and/or regulation, developers are incentivized to build quickly, regardless of quality. Without proper training that quality is part of the work, this ain’t gonna happen. Developers will continue to say that “TDD slows them down”. The more people say that, the fewer people will pick it up and run with it.

So the happy path to “TDD taking over the world of software” requires internal, external and social incentives.

Aligning stars is easier. Or maybe it will just take a very long time. And we need to work at it, doing the hard work, day in, day out.

Actual hard work.

Nah, nobody likes that.

Unit Testing Anti-Pattern: Leaky Mocks and Data

Unit testing anti-pattern: Leaking
This series goes through anti-patterns when writing tests. Yes, there are and will be many. Posts in the series: TDD without refactoring, Logic in tests, Misleading tests, Not asserting, Code matching, Data transformation, Asserting on not null, Prefixing test names with “test”.

Unit tests should be isolated from each other. That means that it doesn’t matter if they run in any specific order, alone or in a group, we expect a consistent result. If there’s a reason for failure in the tests it should be a change in functionality.

However, things get tricky if the code, or the tests have dependencies that we can’t get rid of. The tests are no longer unit tests by definition, but if they are valuable, we want to keep them. We still need to take care of the leaks between tests. The anti-pattern is not doing so.

Leakage is not just a beginner problem. It shows up in large organizations, as we start testing bigger flows rather than small classes, and when “other people” start writing tests for areas of code that were initially written by “us”. Our assumptions of what clean-up means may no longer work or, worse, may not be known.

When we encounter these symptoms, they usually point to organizational dysfunctions. Conway’s law works in mysterious ways.

Let’s take mocking for example. In isolated unit tests, we create the mock, configure it, and it dies at the end of the test. Then we create another one for the next test. The tests take care of the clean-up.

The “taking care of” is not really active handling by the tests. It’s just instance management – if we create the mock inside a test, it simply dies at the end of the test. If we create it in the @Before method (or the equivalent, depending on the framework), the instance used in the last test gets overwritten in the current one. In a language like C++, without automatic memory management, we may still see the same effect, although the memory doesn’t just go away.

It seems that we get isolation for free, since we’re not doing anything with the mocks for that isolation we crave.

Don’t get used to those freebies

Let’s say we use Spring for dependency injection. Now we don’t create the mocks manually – they just appear out of thin air. We assume that isolation is also taken care of, out of thin air. But if the mock is injected once, it carries its expectations and behaviors from previous tests, unless it gets reset (e.g. using reset() in Mockito at clean-up time).
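For example, a minimal sketch with JUnit and Mockito (mockStudent stands in for whatever mock is injected):

    @After
    public void cleanUp() {
        // clear stubbing and recorded interactions before the next test runs
        Mockito.reset(mockStudent);
    }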

Mocks are not the only things that need leakage therapy.

As we’re testing flows with external data – database, cache, files, the registry – we need to make sure that data doesn’t leak to the next tests. We need to clean up after the test has completed.

Using Spring’s @Transactional in tests works, but may not be enough. What about setup data? That needs to be cleaned before moving to the next test.

And still that may not be good enough.

What if the test crashes mid-way? Or has some unexpected behavior? We need to be sure we clean up correctly in every case. By the way, being sure is not that simple – we need to know how all the code behaves, from now until forever.

The other option is that each test makes sure it cleans everything it needs before starting. That’s about as simple as the other method.

Some of these things can be automated – base classes that contain these behaviors, clean-up scripts, etc. It comes down to “how the team works”. It assumes that the way we write code and tests is known and understood by everyone. This assumption is hardly true in most teams, and it breaks down over large codebases.

When codebases break, so do the tests.

The successful path to overcoming these problems is knowledge sharing and practice policies. Design reviews, pair programming, code reviews – the things that help create a working development process. If everyone finds their own way to clean up the leaks, it pretty much guarantees that someone, somewhere will assume that “it should work like this”. Or worse – copy the method without understanding what stands behind it. And then they start noticing weird test behavior.

Leaks should be stopped, but defining how is just the first step. The next is making these methods commonly used.


Unit Testing Implementation: The Plan

This series deals with the implementation of a unit testing process in a team, or across multiple teams in an organization. Posts in the series include: Goals, Outcomes, Leading Indicators I, Leading Indicators II, Leading Indicators III, Leadership I, Leadership II, The plan.

So far, we’ve talked about the process itself, our goals and expectations, and what to look for while we’re moving forward. Now it’s time to get to the good stuff.

What does an implementation plan actually look like? A good plan includes these elements:

Training

Remember that when we start, we already have a core team, usually one, who learned the ropes all by themselves. While they can be great ambassadors or mentors, they are usually not trainers. They know what they’ve encountered, and that is usually much less than skilled practitioners and trainers, who’ve seen lots of code and tests.

The other teams, the people who start from scratch, need context, focus and the quickest ramp up in order to get started. In my introductory courses, I introduce tools as well as effective practices of testing – planning, writing, maintaining, working with legacy code, etc. In addition, I expose them to design for testability and TDD. The courses are hands on, so people can practice the different topics.

Environment preparation

Apart from having the tools available on the developer machines, we need a CI server that’s configured to run the tests and report the test run results. We’d also like to have project templates (maven archetypes, makefiles, etc.) available so people won’t need to start from scratch.

All dependencies (libraries, tools, templates, examples) should be available in a central repository. On day one we want people to start committing tests that are run and reported. We don’t want to have them bump into environmental problems and extinguish their motivation.

Coaching

These are sessions (1-2 hours each, tops) where an experienced coach (either external or internal) sits with one or two people and helps them plan test cases, write tests, and review tests for things they are working on. This way we transfer the knowledge of testing, as well as start to create conventions of “this is how we test”. We focus on code that’s being worked on, making it testable and proving it.

If you start out with an external coach, it will scale only up to a point. The idea is to start with a small group that can later become mentors for new people, in a viral way. The ambassadors from the pilot stage can and should support that process.

Communities of practice

We want to continuously improve the way we test, discuss and share our experiences. As we’ve already discussed, there should be forums for discussing and practicing testing. That means we need scheduled time, when people are encouraged to attend, talk about what they did, and learn from others. Test reviews, refactoring together, learning patterns – these meetings breed stronger developers.

These COP meetings are opportunities to discuss the metrics and goals, and to adapt if help is needed. They are engines for learning and improvement. They also send the message from management that testing is important. As time goes by, and fewer coaching sessions are needed, the COP takes over as the main teaching and mentoring tool.

There you have it. In the next posts, I’ll go through a case study of deploying a unit testing plan.