In this series we're taking a look at how to refactor un-refactorable legacy code.
Part I | Part II | Part III | Part IV | Part V | Part VI

Last time we talked about flow analysis of the code. We figured out which cases we want covered. But even for these cases, we don’t know what the expected results are.

It’s worse than that in the real world. In our case, after we’ve drawn the boundary and decided which code stays and which waits around the corner, we’re left with code that seems to have no known side effects.

But what if our code, within the boundary, is more complex and relies on other frameworks or libraries? There could be side effects we don’t see directly in the code (like the calling-method scenario from last time).

Ideally, in the real scenario, we would want to log everything the system does, and from that, deduce what our asserts should be. Luckily, there’s a tool for that.

Enter ApprovalTests

Remember when we talked about characterization tests? Tests we run in order to learn what the system does, so we can turn that behavior into our asserts. ApprovalTests does that, and even creates those asserts for us. Kinda.

The way ApprovalTests works is that we write a couple of tests. When we run them, it creates two text files: the “received” file and the “approved” file. The “approved” file becomes our golden template. Every time we run the test, the regenerated “received” file is compared to the “approved” file. If they are identical, then at least we know the test behaves as it did before (to the best of our knowledge). If the files differ, we’ve broken something. Excellent for refactoring.
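
To make the mechanics concrete, here’s a minimal sketch of a first approval test (the class and content are mine, and the exact file-naming convention may vary between ApprovalTests versions):

import org.approvaltests.Approvals;
import org.junit.Test;

public class GreetingTests {

    @Test
    public void greetingTest() {
        // First run: ApprovalTests writes a "received" file (something like
        // GreetingTests.greetingTest.received.txt) and fails, because no
        // "approved" file exists yet. Once we inspect the output and approve
        // it (copy it to GreetingTests.greetingTest.approved.txt), later runs
        // pass as long as the verified content stays identical.
        Approvals.verify("Hello, approval testing!");
    }
}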

But what do these files actually contain? Whatever we choose to write. The “received” file holds the content we pass to verification, so whatever we collect in the tests and in the code ends up there. The more we write there, the better our comparison between changes becomes. If we write just a bit, we may know that bit hasn’t changed, but not much about other cases.

Once we’re done with our refactoring, we can either throw the tests away or keep them, along with the “approved” file, in our CI. Not ideal, but a lot better than no tests.

You can read more about it on the ApprovalTests site, of course. For now, I’m assuming you have some familiarity with how it works, and I’ll continue from here.

Let’s check out the test

If you look at our tests folder, you’ll see a single test, an ApprovalTests test. I’m using the @DiffReporter annotation, so if anything differs between the “received” and “approved” files, it triggers my diff tool.

Also, you can see there are no asserts of any kind in our test, just a simple:

Approvals.verify("Say Spaghetti!");

Which basically means nothing. For now.
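
By the way, here’s roughly what the whole test class looks like at this point (a sketch; in ApprovalTests for Java the diff reporter is wired up through the @UseReporter(DiffReporter.class) annotation):

import org.approvaltests.Approvals;
import org.approvaltests.reporters.DiffReporter;
import org.approvaltests.reporters.UseReporter;
import org.junit.Test;

// On a mismatch between "received" and "approved",
// DiffReporter launches a diff tool found on the machine
@UseReporter(DiffReporter.class)
public class PastaMakerTests {

    @Test
    public void pastaMakerTest() {
        Approvals.verify("Say Spaghetti!");
    }
}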

Now, if we want to cover our code with tests (or ApprovalTests), we need to collect a lot more logging. We’ve already covered which cases we need to track, and we’ve modified the code so that adding that logging is easy.

Next step? Add a couple of mocks. Remember the Dispenser interface? Let’s create a mock dispenser. We could use a mocking framework, but it’s much easier to create a manual mock. Since only the test uses it, I’ll create it in the test folder. Here it is:

public class MockDispenser implements Dispenser {
    private String log = "";

    // Record every call that crosses the Dispenser boundary
    @Override
    public Ingredient getIngredient(IngredientType ingredient, Place place) {
        log += "getIngredient: ";
        log += "Ingredient Type: " + ingredient.toString();
        log += " Place: " + place.toString();
        log += "\n";

        return new Ingredient(false);
    }

    @Override
    public Ingredient getPasta(PastaType pasta, Place place) {
        log += "getPasta: ";
        log += "Pasta Type: " + pasta.toString();
        log += " Place: " + place.toString();
        log += "\n";

        return new Ingredient(true);
    }

    // The test reads the collected log back through toString()
    @Override
    public String toString() {
        return log;
    }
}

As you can see from the implementation, whenever a method on the mock gets called, it appends the call’s information to the log. And I’ve added a nice toString() override for getting the log out. Since the Dispenser interface is our boundary, we’d like to know everything that goes through it. I’m not doing it here, but I’d also log what I’m returning, if I thought it made sense.
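
If I did want the returns in the log, the change is small. Here’s what getIngredient might look like then (a sketch; the “-> returned” format is my own invention):

    @Override
    public Ingredient getIngredient(IngredientType ingredient, Place place) {
        log += "getIngredient: ";
        log += "Ingredient Type: " + ingredient.toString();
        log += " Place: " + place.toString();

        // Also record what we hand back, so the approved file captures
        // traffic in both directions across the boundary
        log += " -> returned: Ingredient(false)";
        log += "\n";

        return new Ingredient(false);
    }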

Note that the logging doesn’t have to be concentrated in the mocks. You can also spread all kinds of logging through the code itself, then collect it into the test later.
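
One simple way to do that (a sketch; this TestLog class is my invention, not part of the codebase) is a small static collector that the production code writes to and the test reads back:

public class TestLog {
    private static final StringBuilder log = new StringBuilder();

    // Production code calls this at the points we want to track
    public static void write(String message) {
        log.append(message).append("\n");
    }

    // The test appends this to the verified output
    public static String read() {
        return log.toString();
    }

    // Reset between runs so logs don't bleed across tests
    public static void clear() {
        log.setLength(0);
    }
}

You’d sprinkle TestLog.write(...) calls at the interesting points in the code, append TestLog.read() to the verified log in the test, and clear it between runs. Crude, but these are scaffolding tests; the calls come out once the refactoring is done.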

Now that we have logging going on, let’s write a real test. Remember the whole test case analysis? Here’s what the test looks like; it runs many cases (almost like a parameterized test):

public class PastaMakerTests {

    List<Dish> dishes = List.of(
        new Dish(SauceType.Alfredo, PastaType.FreshSpaghetti),
        new Dish(SauceType.Bolognese, PastaType.FreshSpaghetti),
        new Dish(SauceType.Marinara, PastaType.FreshSpaghetti),
        new Dish(SauceType.Pesto, PastaType.FreshSpaghetti),
        new Dish(SauceType.Alfredo, PastaType.Ravioly),
        new Dish(SauceType.Bolognese, PastaType.Ravioly),
        new Dish(SauceType.Marinara, PastaType.Ravioly),
        new Dish(SauceType.Pesto, PastaType.Ravioly)
    );

    @Test
    public void pastaMakerTest() {
        MockDispenser dispenser = new MockDispenser();
        StringBuilder log = new StringBuilder();
        PastaMaker maker = new PastaMaker(dispenser);

        dishes.forEach(dish -> {
            // Log the input, then cook; the mock logs the boundary calls
            log.append(dish.toString());
            maker.cook(dish.sauce, dish.pasta);
        });

        // Everything the mock recorded goes into the same log,
        // and the whole thing is verified against the approved file
        log.append(dispenser.toString());
        Approvals.verify(log);
    }
}

All the dishes are cases we’ve identified and want to run. And here’s the Dish class (again, in the test folder; it’s not part of the production code. Maybe in the future):

public class Dish {

    public SauceType sauce;
    public PastaType pasta;

    public Dish(SauceType sauce, PastaType pasta) {
        this.sauce = sauce;
        this.pasta = pasta;
    }

    @Override
    public String toString() {
        return "Sauce: " + sauce.toString() +
                " Pasta : " + pasta.toString() + "\n";
    }
}

As you can see, I want the dishes to add themselves to the log as well, so now I know what goes into the PastaMaker, in addition to what comes out of it.

Next time: Running the test and exploring the results. It’s going to be a bumpy ride.

 
