Test Automation Days 2020: Safe Refactoring with Characterization Tests


2020 has been a weird year conference-wise. Conferences were canceled, pushed back, or went online. Some went hybrid, getting a smaller audience in while others joined in remotely. It’s a new world out there, and I kind of miss the old one. Conferences are my way to stay in touch with friends I don’t see frequently.

While I’m missing out on the social part of conferences, I’m not going to miss out on everything. On September 24th, there is the Test Automation Days conference in Holland. I can’t be there in person, but I am going to present online on “Safe Refactoring with Characterization Tests”.

Refactoring is near and dear to my heart. Safe refactoring is even better. Leave the cowboy refactoring behind with characterization tests. Want to know more? Register for the conference.
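
In the meantime, here’s a taste. A characterization test pins down what the code currently does, rather than what it should do, giving you a safety net for refactoring. A minimal sketch in JUnit 5, with a made-up LegacyPriceCalculator standing in for your legacy code:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class LegacyPriceCalculatorTest {

    @Test
    void characterizeTotalForThreeItems() {
        // The expected value was copied from the actual output of the
        // current code - not from a spec. If a refactoring changes the
        // behavior, this test fails.
        assertEquals("37.50", new LegacyPriceCalculator().totalFor(3, "12.50"));
    }
}

class LegacyPriceCalculator {
    // Imagine tangled legacy code we don't dare touch without a safety net.
    String totalFor(int quantity, String unitPrice) {
        return new java.math.BigDecimal(unitPrice)
                .multiply(new java.math.BigDecimal(quantity))
                .setScale(2, java.math.RoundingMode.HALF_UP)
                .toString();
    }
}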

And if you’re not coming to the conference, would you like me to write about it, or do a webinar? Let me know in the comments.

Stay safe, and conference even safer.


The Metrics: Number of Ignored Tests

The number of ignored automated tests can tell you how to improve your team's relationship with tests

Time for another metric. This time it’s about ignorance, and not the blissful kind.

Ever since we (or Kent Beck) invented test frameworks, we’ve wanted to ignore tests. For all kinds of reasons, mostly along the lines of: don’t bother me now, I’ve got better things to do. Like spending time debugging.

But that is a temporary inconvenience, and as quickly as we can, we remove the ignor-ation, skipp-ation or disabled-ation of these automated tests and get them working. Right?

Right?

Funny, I seem to be hearing crickets, which is uncommon in an urban, closed and air-conditioned room.

What is it?

The number of skipped / ignored / disabled tests, or whatever term your test framework uses to designate a test that is identified, but not run. Sadly, it is not feasible to also count fully commented-out tests, but if it were, I would recommend tracking those too.
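
For example, here’s what an ignored test looks like in JUnit 5 (the test itself is made up). The framework reports it as “skipped” on every build, which is exactly what this metric counts:

import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

class OrderSortingTest {

    @Disabled("Fails since the date-format change - someone will fix it later, right?")
    @Test
    void ordersAreSortedByDate() {
        // Never runs, so it never fails - and never drifts back into relevance.
    }
}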

Should I track it?

Yes. Otherwise I wouldn’t be spending my Saturday morning telling you about it.

Why should I track it?

Ignored tests are a solution to an immediate pain. And that’s ok, as long as they don’t stay that way. Ideally, ignored automated tests wouldn’t be pushed (or even committed) into our source repository.

The real problem is that they just stay there, drifting out of touch with the real code. When you get back to them, two months or two years later, you start wondering:

  • What does this test do?
  • Why is it ignored?
  • Do I already have a similar test? Does it cover everything this one checks?
  • Why is it failing? Is the test out of date, or the code?
  • What about its comments and its name? Do they describe the wanted behavior, the current behavior, or something else?
  • How long has it been lying there?
  • Is it flaky? It may be passing now, but I’m not sure I can trust it.

And some more time wasters we don’t want hampering us. It’s just not worth it.
The funny thing is how easy it is to fix the problem: make the test work, or throw it away.

What does the metric really mean?

Once again, the metric may point you to problematic tests, but it really shows how much the team cares about the testing effort, and how much they understand the value of automated tests. If you see this number rise and stay above zero for too long, it’s a sign that re-education and focus are needed. If you’ve inherited a code base with 500 ignored automated tests – delete them. Don’t worry, source control remembers everything.

How to track it?

That’s easy – all test frameworks report them. Just plot the metric across consecutive builds and you’ve got some fodder for your retrospectives.
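
If you’d rather pull the number out yourself, here’s a minimal sketch that sums the skipped count from JUnit-style XML reports. The path and file pattern assume Maven Surefire; adjust them for your build tool:

import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

public class SkippedTestCounter {
    public static void main(String[] args) throws Exception {
        long skipped = 0;
        try (DirectoryStream<Path> reports =
                 Files.newDirectoryStream(Paths.get("target/surefire-reports"), "TEST-*.xml")) {
            for (Path report : reports) {
                Element suite = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(report.toFile())
                        .getDocumentElement();
                // JUnit-style reports carry a "skipped" attribute on <testsuite>.
                String count = suite.getAttribute("skipped");
                skipped += count.isEmpty() ? 0 : Long.parseLong(count);
            }
        }
        System.out.println("Skipped / ignored tests: " + skipped);
    }
}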

If you want to know more about how simple metric tracking can impact team behavior, try starting with this one. It brings a lot of interesting discussions to the table.

And, for helping your team get on track, contact me. I can help with these things, you know.

When TDD Is Not The Hammer

Not Hammer Time

I always tell people: When you need to add a new feature, use the TDD way. Write a test first, make small steps, and all will be well.

But sometimes, we need to take other paths.

I was working on a new feature last week. I already had a test in place, and a couple more to follow. I usually write those as comments, so I won’t forget to make them work, or to add the code.

And so I start coding. And move code around. And refactor here and there. And lo and behold, after two hours, I’m still in the same place. That test still doesn’t pass. I took a step back, slept on it, and reverted to the original passing code.

Job too big for one test

I knew where I was going (better than when I started the first time). So while all tests were green, I started to refactor the code, putting in placeholders for the new code. This is similar to coding by intention, or a variant of “fake it till you make it”.
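
To illustrate the placeholder idea (all names here are made up): the public method spells out the intent, while the placeholders throw until they’re filled in, so a failing test points straight at the next missing piece.

public class ReportExporter {

    // The shape of the solution, written as if the parts already existed.
    public String export(String reportData) {
        String header = renderHeader(reportData);
        String body = renderBody(reportData);
        return header + body;
    }

    private String renderHeader(String reportData) {
        throw new UnsupportedOperationException("not implemented yet");
    }

    private String renderBody(String reportData) {
        throw new UnsupportedOperationException("not implemented yet");
    }
}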

Then I went to the test side. I thought about going with TDD for a more internal component, but the way the architecture was built, it didn’t make sense to me. Plus, the original test described the behavior I wanted. I kept the original test.

Next, I went back to the code. I had some placeholders to fill. I knew that some of the code would converge into the same methods. But I was careful not to touch the existing code. I only added code, duplicating (sometimes triplicating) it. I knew there would be time for refactoring when everything worked. I kept running the tests to make sure the old tests still passed.

After an hour or so, I had most of the code in place, and still a failing test. Just filling the placeholders was not enough. I could have reset and started over, but I decided to push through. Half an hour later the test was passing. A few minutes later the additional tests were passing, without adding more code. An hour later, the code was refactored to my satisfaction.

What lessons can we learn?

  • TDD tells us to go in small steps. That is, small working increments of code. But sometimes the jump is too big. Real life is hard.
  • Doing a once-over, even if not completed, did help. It showed me the “design of things to come”.
  • Throwing away the changes helped, and didn’t feel wasteful. I was no longer bound to my changes from the first session.
  • Working without a net (while not all tests are passing) felt weird, even scary. Going in the second time, I already knew it wouldn’t be easy. But having the placeholders helped me feel I was on the right path, as I was making them green.
  • Knowing there was going to be refactoring later also helped. I concentrated on “making it work”, rather than the nagging “how it looks” feeling. I had to fight the urge to consolidate code that looked similar, but I prevailed.
  • In the end, I was happy. The feature worked, I had passing tests, and the code looked better (although not exactly as I envisioned).

So what is the moral of the story? TDD is still my favorite way of coding. But like every tool, it’s not the only one I can use. In reality, it’s not the hammer for every nail out there.

Forget About the Tests in TDD


Ok, don’t kill me. Breathe in and listen.

While working on new material for my TDD course, I got back to one of the original tips I give people about starting TDD: forget the tests.

Doing TDD for the first time is hard. Maybe there’s another way?

Examples

And I’ll give you one right now.

Let’s say you’re writing an algorithm for multiplying a decimal number by 10 (it’s really complicated, trust me). Only the input and output are in string form. Something like:

String result = calculator.multiplyBy10("7");

What do you do first?

The answer is not “start coding”.

You write examples first. On paper, or as comments. Or on someone’s wall.

"1" -> "10"
"1.2" -> "12"
"1.23" -> "12.3"
"0.12" -> "1.2"

I promise you, if you’ve done this, you’re already getting the biggest value of TDD, which is “it makes you think before you code”.

Because you look at those and say: something is missing. And you add:

"0" -> "0"


Because you’re already imagining your code, and see that the zero case is different. And then you think “hmm, but what if…”

"c" -> ?


What happens if there’s a “non-defined state” here? And once you’ve got an answer, like:

"c" -> "error"


You get a better idea of the design you’re going for. And when you have enough examples (and we really do have them), we can start writing tests, or even (I know, I know) start coding.
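
When you do get to the tests, the examples translate almost mechanically. A minimal sketch using JUnit 5 parameterized tests, with a made-up Calculator class and one possible implementation:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class MultiplyBy10Test {

    // Each line below is one of the examples from above.
    @ParameterizedTest
    @CsvSource({
        "1, 10",
        "1.2, 12",
        "1.23, 12.3",
        "0.12, 1.2",
        "0, 0"
    })
    void multipliesDecimalStringsBy10(String input, String expected) {
        assertEquals(expected, new Calculator().multiplyBy10(input));
    }
}

class Calculator {
    // One possible implementation: shift the decimal point with BigDecimal.
    String multiplyBy10(String number) {
        return new java.math.BigDecimal(number)
                .movePointRight(1)
                .stripTrailingZeros()
                .toPlainString();
    }
}

The "c" -> "error" example would become its own test, once you’ve decided what “error” actually means.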

TDD is a lot of things, but when it comes down to it, it’s just another way to write code (only the best one I know). But if you’re worried about jumping over the whole valley, you can start skipping over puddles with examples.

The Metrics: Test Failure Rate

Test failure rate is a metric that indicates the team's attitude toward tests and quality.

Last time we talked about counting the number of tests. That’s a pretty good indicator of how the team is adopting the new practice of writing tests. But how do they treat those tests? Do they see tests as a technique that helps their code quality? Or do they write tests because they follow the scrum master’s orders?

(If there’s a master, there are probably slaves, right? Never mind).

So let’s take a look at our next metric: Rate of failing tests.

Should I track it?

Oh yeah. All the time. Its meaning changes over time, though.

Why should I track it?

There will be failures. That’s why we write tests – to warn us if we’ve broken something. But then we review those failures and discuss them, and we expect results – the overall test failure rate should drop (to zero if possible), and existing test failures should be fixed quickly. Sometimes the review of the failures may itself affect the rate – it could be that flaky tests should be handled differently. But overall, the metric tells us about attitude.

What does the metric mean?

Remember, we’re counting those failures in our CI system. We expect very few test failures, because we expect developers to push tested code. The tests should have run on their machine prior to pushing.

Tests that always pass in CI mean that code quality is respected. Seeing the failure rate go down shows adoption.

To take it further: it’s about respect for other team members and their time. When the test failure rate goes down, the build doesn’t break as much. People don’t waste time fixing it, or waiting for it to be fixed.

In the long run, after the team has adopted automated tests, if the rate doesn’t drop to zero, it’s a sign of the codebase’s quality – the team doesn’t take care of consistently failing or flaky tests. You can then make decisions – remove tests, improve the code, or do something completely different. Your choice.

How to track it?

Easy – any CI tool will show you the number of failures for every build. We want the number to be as close to zero as possible, averaged over time. Put a graph on the wall and review it at the retrospective.
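
The arithmetic behind that graph is simple. A minimal sketch, with made-up numbers standing in for what you’d pull from your CI tool:

public class FailureRate {
    public static void main(String[] args) {
        int[] failed = { 3, 1, 0, 2, 0 };           // failing tests per build
        int[] total = { 120, 121, 124, 124, 130 };  // total tests per build

        double sumOfRates = 0;
        for (int i = 0; i < failed.length; i++) {
            double rate = 100.0 * failed[i] / total[i];
            System.out.printf("Build %d: %.1f%% failing%n", i + 1, rate);
            sumOfRates += rate;
        }
        System.out.printf("Average failure rate: %.1f%%%n", sumOfRates / failed.length);
    }
}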

Is it comparable?

Nope. This is a team-specific metric. Different people, skills, experience and codebases make the numbers different for different teams. There’s no sense in comparing failure rates.

Want help with visualizing and measuring the quality of your code and tests? Contact me!

The Metrics: Number of New Tests


I’m working on an interesting project for a client, showing different, somewhat advanced metrics on their code quality. So I decided to write about metrics: what they mean, how you can track them, and much more importantly, how to use them. And yes, even though I’ve talked a lot about coverage before, I’m going to mention it again. And again. But we’ll see.

The first metric I’m going to talk about is… number of new tests. Well, that’s not really a surprise based on the title. You’re still impressed, right?

What is it?

The number of new passing tests added between builds or over a period.

Should I track it?

Yes. For a while.

Why should I track it?

At some point in your life, or your team’s life, there comes a time when you’ve had enough of those returning bugs you find really late, which are a real pain to debug. Then you get a couple of insights: We need automated tests. And we need testable code. And we need code reviews. And by golly, we have arrived!

So people need to start writing and running automated tests. What is the indication of people adding tests? The number of tests! It’s not rocket science, and since it’s easy and simple, you just use it.

And remember, like any important metric, you publish it, visualize it, and talk about it – and about the progress the team is making. Because otherwise it’s just a number that doesn’t carry importance.

What does the metric mean?

The simple answer is “do people write tests”. But once you start digging in, you may find interesting stuff. For example, for most teams, the findings would not be “holy velocity, Batman, they are producing more tests than a test factory on drugs”. But hey, let’s say they do. You didn’t expect that, so you can ask more questions: Are those good tests? Are those tests easy to write? Are they cheating me? And if not, why didn’t they start 3 years ago?

Ok, I haven’t seen this happen.

Most of the time the number either doesn’t change, or grows painfully slowly. Then you can ask things like: Why does it take so long to write tests? You’ll find there’s training missing. And knowledge about the system. And that the testability of the application sucks. And then you can do something about it. Like call me for training or coaching. Or do some refactoring in those painful points. And truly understand the meaning of development.

How to track it?

That’s easy. Whenever you build your project and run the tests (I’m assuming you’re using some kind of continuous integration system, like Jenkins, TeamCity, CircleCI or whatever flavor), you can see the number right there for each build. If you’re running a nightly build, that’s fine too. Weekly is not as good, but still works.
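
The metric itself is just the delta between consecutive builds. A minimal sketch, with made-up test counts:

public class NewTestsMetric {
    public static void main(String[] args) {
        int[] passingTests = { 100, 104, 104, 111 }; // counts from consecutive builds

        for (int i = 1; i < passingTests.length; i++) {
            System.out.printf("Build %d: %+d new tests%n",
                    i + 1, passingTests[i] - passingTests[i - 1]);
        }
    }
}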

You’ve got the number. Show it and discuss it. Ask “what can we do to improve it?”.

Once you see a steady rise in the number of tests, you have achieved one or more goals – people are writing tests on a regular basis, which means they have the know-how and the system doesn’t resist tests. Then you can stop tracking it.

One more thing. The metric is not comparable across teams, projects or even the same team on the same app one year later. Don’t compare those numbers, it leads to pain, which leads to suffering, and doesn’t end well in most trilogies.

Want to introduce a dashboard for your team? Get in touch!

“Better Estimation & Planning” Recording from PM Day 2020

Agile Estimation and Planning

Conferences are rare these days, like a meteor shower in Animal Crossing (or so my daughter tells me). However, the online conference market has evolved considerably as an alternative.

And so, I was invited to do a session on the agile track of Project Management Day in Kiev. But borders don’t matter now.

It’s on one of my favorite subjects – agile estimation. This is a short one, but if you’re interested, register for the squeeze course. Of course.

Here is the recording.

What Are Micro-Tests?

Micro-tests make incremental development faster

There are unit tests, integration tests, API tests, end-to-end tests and other “test” animals. (No animal was tested for this post).

These types of tests are named for what they test. Unit tests test a code unit – a method, a class, a small set of classes. An API test checks that our API behaves properly.

Micro-tests can be understood the same way: They test a very small portion of code (sort of a unit test). However, that’s not really what they are about.

Speed

They improve our speed of producing working code.

Micro-tests are small tests that test small code. But if you write them incrementally, you build a large piece of code faster. And it works, too. You just don’t waste time writing extra code that gets in the way of adding more code later. You don’t debug.

Think about it: Continuous work of adding code that works.

TDD focuses on adding just enough code – enough to work, and enough for us to focus on. We then move to the next bit. The smaller the bit, the faster we move. We focus on the current small problem, solve it, and move on to the next one.
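
Here’s a minimal sketch of what that looks like (the Slugify example is made up): one tiny test, just enough code to make it pass, then on to the next bit.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class SlugifyTest {

    @Test
    void replacesSpacesWithDashes() {
        assertEquals("hello-world", Slugify.from("hello world"));
    }
}

class Slugify {
    // Just enough code to pass the test above; the next micro-test
    // (lower-casing, trimming, and so on) will push it further.
    static String from(String text) {
        return text.replace(' ', '-');
    }
}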

We don’t debug, because that’s a big waste of our time. If we don’t understand what’s going on, we’re not in micro-land anymore. If we get stuck, we can (and should) revert. We haven’t written much code since we were last green anyway.

The smaller you cut your “features”, the faster you’ll implement “all the features” with micro-tests.

Why Jumping Off A Ship Is A Great Idea


Imagine: You’re on a ship, sailing the endless ocean. You’ve been on this voyage for a while now. You don’t remember when it started, and you’re not sure how much time will pass until you see land again. The weather report says it will be like this for a while. You keep going.

Are you on the “Corona” ship too?

We’re not used to these long periods of time. Remember deadlines? We used to live by them. We checked our velocity by deadlines.

No matter how you measure it, velocity is down. What did we do in the pre-corona days?

“Ah, we’re late again. We were sure we’d have these features ready by now. We better push what we have now, and press forward with more features.

Quality? We’ll take care of that when we have time”.

We’ve been there, and we’ve paid the price. We called it “technical debt”, but let’s call it what it is: short-term wins, paid for by future slowdown.

Jump

So why am I telling you all this?

Well, reality has stepped in and given us a choice. Because of the slowdown, we can do one of two things. One is to use what little time we have to push out what features we can, and throw quality even further out of the window – staying on the ship until the food supply runs out.

Or we can do something surprising – jump.

We can use the slowdown to invest in the future. For example:

  • Automate the things we’ve done a hundred times manually because we didn’t have the time.
  • Pay off some technical debt.
  • Add tests around that mess of a code.
  • Refactor that mess.
  • Learn some new skills (check my online courses).

Technical debt is an economic term. And investing when the market is down makes sense both economically and technologically. When we go back to “normal”, velocity will pick up because your code, your tests, and definitely you, are better.

This is an opportunity. Don’t waste the time we’ve been handed.

Jump.

PM Day Online – Better Estimation


It was great doing a short “Better Estimation and Planning” presentation on the Kiev Project Management Day, which was online this time, considering the coronavirus roaming the streets.

In these times, both planning and estimating the way we used to have changed, and the data we’ve used as the baseline for our future estimates is no longer reliable. As work gets slower and our processes need reinventing, we need to start collecting data and metrics again. I talked about this briefly in the presentation, and I talk about it in depth in my online courses (especially the Better Estimation course, of course).

Remember: take two of my courses and get the 3rd one for 50% off, or take three courses and get the 4th one free of charge!

Here are the slides of the presentation: