“Make it Public” Recording (Hebrew)

Make it public webinar on testability

This webinar recording (1 hour long) is “Make it public and other things developers don’t like to hear”. It’s about how code impacts testability, and how the tester’s role includes making an impact in that regard.

Testability is very important in our effort to create a comprehensible report on the status of our system, and how the code is written has a direct impact. Even code duplication affects it (and I actually show how).
We’ll probably do an English version soon, but until then, why not learn another language?

Enjoy.

And here are the slides:

JUnit – The Next Generation – Webinar recording

JUnit repeater tests from TestinGil webinar

And here’s the JUnit webinar recording. Live is better, but a repeat view is not too shabby as Data and Lore discuss above. 52 minutes of absolute testing deliciousness.

It goes through what makes JUnit 5 special, and more important than the features themselves, how they can help you write better, more organized tests.
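To give one small taste (not taken from the webinar itself – just a minimal sketch of JUnit 5’s @Nested and @DisplayName, with made-up names):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;

// Grouping related cases with @Nested and naming them with @DisplayName
// makes the test report read like a specification.
@DisplayName("A list")
class ListSpecTest {

    @Nested
    @DisplayName("when new")
    class WhenNew {

        @Test
        @DisplayName("is empty")
        void isEmpty() {
            assertTrue(new ArrayList<String>().isEmpty());
        }

        @Test
        @DisplayName("has size zero")
        void hasSizeZero() {
            assertEquals(0, new ArrayList<String>().size());
        }
    }
}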

And who wants more in this day and age?

Enjoy.

The Quality Dashboard – TestCon 2020

The quality dashboard baby boss

This is the “Quality Dashboard” presentation I gave at TestCon 2020. It’s 36 minutes long, so grab a cup of joe and enjoy.

What’s it about, you ask? It’s about my experience building, with a client, a set of quality reports that actually matter – reports where developers can look at the test results, SonarQube and other sources, and improve their code, their tests and their coverage. And where managers can get an overview of the quality picture that actually means something.

And that guinea pig plays a big part in it.

“Safe Refactoring with Characterization Tests” from Test Automation Days 2020


I gave this presentation (22 minutes, so no need to clear your schedule for it) at Test Automation Days 2020. It’s about characterization tests – what they are, how they can help you, and how you can use the ApprovalTests tool (as an example) to cover your untested code with tests. Check it out!
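If you can’t watch right now, here’s the idea in a minimal sketch (LegacyReportFormatter is a made-up legacy class; Approvals.verify comes from the ApprovalTests library):

import org.approvaltests.Approvals;
import org.junit.jupiter.api.Test;

class LegacyReportFormatterTest {

    // A hypothetical legacy class whose behavior we want to pin down before refactoring.
    private final LegacyReportFormatter formatter = new LegacyReportFormatter();

    @Test
    void characterizeCurrentOutput() {
        // Run the untested code on a representative input and capture whatever
        // it produces today. The first run creates a "received" file; once we
        // approve it, any change in output fails the test - a safety net for
        // the refactoring that follows.
        String report = formatter.format("ACME", 3, 19.99);
        Approvals.verify(report);
    }
}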

The Metrics: Percentage of New Coverage

Metric: New code coverage percentage

When I talk with people who want to start measuring stuff, it’s always coverage. I’ve already talked about coverage as a metric and why it’s not good enough, especially on its own. But in one case, there’s something to be said for a coverage metric that is actually helpful – if you’re not willing to try anything else.

Like new potatoes, the percentage of new coverage has a different flavor. But I digress. And I’m apparently hungry.

What is it?

The percentage of new coverage – how much of the new code is covered by tests.

Should I track it?

If you really, really want to measure coverage, and only coverage, then yes. Can I interest you in test cases instead? No? Ok, let’s continue.

Why should I track it?

Let’s say you’re starting to write a new feature inside your ten-year-old codebase. If you check total coverage, it will take months before you start seeing any progress – the added percentage is just too small. Check coverage only on the new feature’s code, and it will start meaning something.

What does the metric really mean?

Coverage means, as always, “my tests exercised this code”. It doesn’t matter what your tests actually did with it. But again, that’s true for any coverage metric. The percentage of new coverage means how much of the new code you’ve covered (obviously). With TDD – and in fact, even without it – you should be close to 100% all the time, since you’re supposed to add tests for the code that you add, right?

Right?

Because if it drops, the new feature is not covered, and the developers are pushing tests to “later, when they have time”, which is never. So keep it up.

How to track it?

With your favorite coverage tool. Only this time you’ll need to do some manual work (or automate some reports), because only you know what’s really “new” in the feature, as opposed to the legacy stuff.
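For example, here’s a rough sketch of that manual work (mine, not part of any tool): it reads a JaCoCo XML report and computes line coverage only for packages under the new feature’s prefix. The report path and package prefix are assumptions you’d adapt.

import java.io.File;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class NewCodeCoverage {

    public static void main(String[] args) throws Exception {
        // Assumptions: the path of the JaCoCo XML report and the package prefix
        // (in JaCoCo's slash-separated form) under which the new feature lives.
        String reportPath = "target/site/jacoco/jacoco.xml";
        String newFeaturePrefix = "com/example/newfeature";

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        // The report references a DTD; don't try to download it.
        factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
        Element report = factory.newDocumentBuilder().parse(new File(reportPath)).getDocumentElement();

        long covered = 0;
        long missed = 0;
        NodeList packages = report.getElementsByTagName("package");
        for (int i = 0; i < packages.getLength(); i++) {
            Element pkg = (Element) packages.item(i);
            if (!pkg.getAttribute("name").startsWith(newFeaturePrefix)) {
                continue; // legacy code - not part of this metric
            }
            NodeList children = pkg.getChildNodes();
            for (int j = 0; j < children.getLength(); j++) {
                if (!(children.item(j) instanceof Element)) {
                    continue;
                }
                Element counter = (Element) children.item(j);
                if ("counter".equals(counter.getTagName()) && "LINE".equals(counter.getAttribute("type"))) {
                    covered += Long.parseLong(counter.getAttribute("covered"));
                    missed += Long.parseLong(counter.getAttribute("missed"));
                }
            }
        }
        double total = covered + missed;
        System.out.printf("New code line coverage: %.1f%%%n", total == 0 ? 0.0 : 100.0 * covered / total);
    }
}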

Is it comparable across teams?

You probably think you know what I’ll say, but let me surprise you: YES!

If you’re a manager you must love me. Today.

Since we want as much of the new code covered as we can get, we can compare the behavior of different teams working on different new features. If team 1 has started measuring their new code coverage percentage and it’s near 100%, while team 2 does the same but is closer to 50%, you can actually ask why and get relevant answers. You might get some “legacy code keeps dragging us down” answers. Or the “team’s not knowledgeable about tests” kind.

But here’s the catch – improve the code and the team, so you can get higher coverage numbers!

This is a metric that specifically tells you how well the team performs their testing. Which is really what you want to know, and improve if needed.

Remember, if you’re really interested in how your team is doing, you’ll want to measure more things to get a clear picture.

What’s that, you need help with that?

Contact me. I can help with these things, you know.

Test Automation Days 2020: Safe Refactoring with Characterization Tests


2020 has been a weird year conference-wise. Conferences were either canceled, pushed back, or went online. Some went hybrid, getting a smaller audience in while others joined remotely. It’s a new world out there, and I kind of miss the old one. Conferences are my way to stay in touch with friends I don’t see frequently.

While I’m missing out on the social part of a conference, I’m not going to miss out on everything. On September 24th, the Test Automation Days conference takes place in Holland. I can’t be there in person, but I am going to present online on “Safe Refactoring with Characterization Tests”.

Refactoring is near and dear to my heart. Safe refactoring is even better. Leave the cowboy refactoring behind with characterization tests. Want to know more? Register for the conference.

And if you’re not coming to the conference, would you like me to write or do a webinar about it? Let me know in the comments.

Stay safe, and conference even safer.


The Metrics: Number of Ignored Tests

The number of ignored automated tests can tell you how to improve your team's relationship with tests

Time for another metric. This time it’s about ignorance, and not the blissful kind.

Ever since we (or Kent Beck) invented test frameworks, we’ve wanted to ignore tests. For all kinds of reasons, mostly revolving around: don’t bother me now, I’ve got better things to do. Like spend time debugging.

But that is a temporary inconvenience, and as quickly as we can, we remove the ignor-ation, skipp-ation or disabled-ation of these automated tests and get them working. Right?

Right?

Funny, I seem to be hearing crickets, which is uncommon in an urban, closed and air-conditioned room.

What is it?

The number of skipped / ignored / disabled tests, or whatever your test framework uses to designate a test that is identified, but not run. Sadly, it is not feasible to also count fully commented-out tests, but if it were, I would recommend tracking those too.
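In JUnit 5, for example, that designation is the @Disabled annotation (JUnit 4 calls it @Ignore). A made-up example:

import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

class DiscountCalculationTest {

    @Test
    @Disabled("Fails since the pricing change - will fix it later") // famous last words
    void appliesSeasonalDiscount() {
        // (hypothetical test body)
    }
}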

Should I track it?

Yes. I wouldn’t be spending my Saturday morning writing about it otherwise.

Why should I track it?

Ignored tests are a solution to an immediate pain. And that’s ok, as long as they are not kept in that state. Ideally, ignored automated tests wouldn’t be pushed (or even committed) into our source repository.

The real problem is they just stay there, drifting out of touch with the real code. When you get back to them, months or years later, you start thinking:

  • What does this test do?
  • Why is it ignored?
  • Do I already have a similar test? Does it check exactly what the other test does?
  • Why is it failing? Is it the test that’s out of date, or the code?
  • What about its comments and its name? Do they describe the desired behavior, the current behavior, or what?
  • How long has it been lying there?
  • Is it flaky? It may be passing now, but I’m not sure I can trust it.

And some more time wasters we don’t want hampering us. It’s just not worth it.
The funny thing is how easy it is to fix the problem: Make it work, or throw it away.

What does the metric really mean?

Once again, the metric may point you to problematic tests, but it really shows how much the team cares about the testing effort, and how much they understand the value of automated tests. If you see it going over zero and staying there too long, it’s a sign that re-education and focus are needed. If you’ve inherited a codebase with 500 ignored automated tests – delete them. Don’t worry, source control remembers everything.

How to track it?

That’s easy – all test frameworks report them. Just plot the metric across consecutive builds and you’ve got some fodder for your retrospectives.
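If you’d rather pull the number yourself than read it off the CI dashboard, here’s a rough sketch that sums the skipped count from Maven Surefire’s XML reports – the directory name and file naming convention are the standard Surefire defaults, so adjust for your setup:

import java.io.File;

import javax.xml.parsers.DocumentBuilderFactory;

public class IgnoredTestCounter {

    public static void main(String[] args) throws Exception {
        // Assumption: standard Surefire XML reports (one TEST-*.xml file per test class).
        File reportsDir = new File("target/surefire-reports");
        File[] reports = reportsDir.listFiles((dir, name) -> name.startsWith("TEST-") && name.endsWith(".xml"));

        long skipped = 0;
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        for (File report : reports == null ? new File[0] : reports) {
            // The root <testsuite> element carries a "skipped" attribute.
            String value = factory.newDocumentBuilder()
                    .parse(report)
                    .getDocumentElement()
                    .getAttribute("skipped");
            skipped += value.isEmpty() ? 0 : Long.parseLong(value);
        }
        System.out.println("Ignored/skipped tests: " + skipped);
    }
}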

If you want to know more about how simple metric tracking can impact team behavior, try starting with this one. It brings a lot of interesting discussions to the table.

And, for helping your team get on track, contact me. I can help with these things, you know.

When TDD Is Not The Hammer

Not Hammer Time

I always tell people: when you need to add a new feature, use the TDD way. Write a test first, take small steps, and all will be well.

But sometimes, we need to take other paths.

I was working on a new feature last week. I already had a test in place, and a couple more to follow. I usually write those as comments, so I won’t forget to make sure they work, or to add the code.

And so I start coding. And move code around. And refactor here and there. And lo and behold, after two hours, I’m still in the same place. That test still doesn’t pass. I took a step back, slept on it, and reverted to the original passing code.

Job too big for one test

I knew where I was going (better than when I started the first time). So while all tests were green, I started to refactor the code, putting placeholders for the new code. This is similar to coding by intention, or a variant of “fake it till you make it”.
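Here’s a rough sketch of what I mean by placeholders (the names are made up, not the actual feature I was working on):

// Coding by intention: the top-level method spells out the steps I intend to have,
// while the placeholder methods keep everything compiling and the existing tests green.
public class OrderExporter {

    public String export(String orderId) {
        return buildHeader(orderId) + buildLines(orderId) + buildFooter(orderId);
    }

    private String buildHeader(String orderId) {
        // Placeholder - the failing test will drive the real implementation.
        return "";
    }

    private String buildLines(String orderId) {
        return "";
    }

    private String buildFooter(String orderId) {
        return "";
    }
}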

Then I went to the test side. I thought about going with TDD for a more internal component, but the way the architecture was built, it didn’t make sense to me. Plus, the original test described the behavior I wanted. I kept the original test.

Next, I went back to the code. I had some placeholders to fill. I knew that some of the code would converge into the same methods. But I was careful not to touch the existing code. I only added code, duplicating (sometimes triplicating) it. I knew there would be time for refactoring once everything worked. I kept running the tests to make sure the old ones still passed.

After an hour or so, I had most of the code in place, and still, a failing test. Just filling the placeholders was not enough. I could have reset and started over. But I decided to push through. Half an hour later the test was passing. A few minutes later the additional tests were passing, without adding more code. An hour later, the code was refactored to my satisfaction.

What lessons can we learn?

  • TDD tells us to go in small steps. That is, small working increments of code. But sometimes the jump is too big. Real life is hard.
  • Doing a once-over, even if not completed, did help. It showed me the “design of things to come”.
  • Throwing away the changes helped, and didn’t feel wasteful. I was no longer bound to my changes from the first session.
  • Working without a net (while not all tests were passing) felt weird (even scary). Going in the second time, I already knew it wouldn’t be easy. But having the placeholders helped me feel I was on the right path, as I was making them green.
  • Knowing there was going to be refactoring later also helped. I concentrated on “making it work”, rather than the nagging “how it looks” feeling. I did have to fight the urge to consolidate code that looked similar, but I prevailed.
  • In the end, I was happy. The feature worked, I had passing tests, and the code looked better (although not exactly as I envisioned).

So what is the moral of the story? TDD is still my favorite way of coding. But like every tool, it’s not the only one I can use. In reality, it’s not the hammer for every nail out there.

Forget About the Tests in TDD

TDD Forget the tests

Ok, don’t kill me. Breathe in and listen.

While working on new material for my TDD course, I got back to one of the original tips I give people about starting TDD: forget the tests.

Doing TDD for the first time is hard. Maybe there’s another way?

Examples

And I’ll give you one right now.

Let’s say you’re writing an algorithm for multiplying a decimal number by 10 (it’s really complicated, trust me). Only the input and output are in string form. Something like:

String result = calculator.multiplyBy10("7");

What do you do first?

The answer is not “start coding”.

You write examples first. On paper, or as comments. Or on someone’s wall.

"1" -> "10"
"1.2" -> "12"
"1.23" -> "12.3"
"0.12" -> "1.2"

I promise you, if you’ve done this, you’re already getting the biggest value of TDD, which is “it makes you think before you code”.

Because you’re looking at those and saying: something is missing. And you add:

"0" -> "0"


Because you’re already imagining your code, and you see that the zero case is different. And then you think “hmm, but what if…”

"c" -> ?


What happens if there’s a “non-defined state” here? And once you’ve got an answer, like:

"c" -> "error"


You get a better idea of the design you’re going for. And when you have enough examples (and we really do have enough here), you can start writing tests, or even (I know, I know) start coding.
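When you do get to the tests, the examples translate almost mechanically. A minimal sketch, using JUnit 5’s parameterized tests (the Calculator class name is an assumption based on the snippet above):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class MultiplyBy10Test {

    private final Calculator calculator = new Calculator();

    @ParameterizedTest
    @CsvSource({
            "1,    10",
            "1.2,  12",
            "1.23, 12.3",
            "0.12, 1.2",
            "0,    0"
    })
    void multipliesDecimalStringsBy10(String input, String expected) {
        // Each example from the list above becomes one test case.
        assertEquals(expected, calculator.multiplyBy10(input));
    }
}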

TDD is a lot of things, but when it comes down to it, it’s just another way to write code (only the best one I know). But if you’re worried about jumping over the whole valley, you can start skipping over puddles with examples.

The Metrics: Test Failure Rate

Test failure rate is a metric that indicates the team's attitude toward tests and quality.

Last time we talked about counting the number of tests. That’s a pretty good indicator of how the team is adopting the new practice of writing tests. But how do they treat those tests? Do they see tests as a technique that helps their code quality? Or do they write tests because they follow the scrum master’s orders?

(If there’s a master, there are probably slaves, right? Never mind).

So let’s take a look at our next metric: Rate of failing tests.

Should I track it?

Oh yeah. All the time. Its meaning changes over time, though.

Why should I track it?

There will be failures. That’s why we write tests – to warn us if we’ve broken something. But then we review those failures and discuss them, and we expect results – the overall test failure rate should drop (to zero if possible), and existing test failures should be fixed quickly. Sometimes the review of the failures may itself affect the rate – it could be that flaky tests should be handled differently. But overall, the metric tells us about attitude.

What does the metric mean?

Remember, we’re counting those failures in our CI system. We expect very few test failures, because we expect developers to push tested code. The tests should have run on their machine prior to pushing.

Tests that always pass in CI mean that code quality is respected. Seeing the failure rate go down shows adoption.

To take it further: it’s about respect for other team members and their time. When the test failure rate goes down, the build doesn’t break as much. People don’t waste time getting it fixed, or waiting for it to be fixed.

In the long run, after the team has adopted automated tests, if the rate doesn’t drop to zero, it’s a sign of the codebase’s quality – the team doesn’t take care of consistently failing or flaky tests. You can then make decisions – remove tests, improve the code, or do something completely different. Your choice.

How to track it?

Easy – any CI tool will show you the number of failures for every build. We want the number to be as close to zero as possible, averaged over time. Put a graph on the wall and review it at the retrospective.
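If you want to compute the trend yourself, it’s only a few lines – a minimal sketch with made-up numbers:

import java.util.List;

public class FailureRateTrend {

    // The average number of failing tests over the last N builds - the line
    // we want trending toward zero.
    static double averageFailures(List<Integer> failuresPerBuild, int lastN) {
        int from = Math.max(0, failuresPerBuild.size() - lastN);
        return failuresPerBuild.subList(from, failuresPerBuild.size())
                .stream()
                .mapToInt(Integer::intValue)
                .average()
                .orElse(0);
    }

    public static void main(String[] args) {
        // Failure counts per consecutive CI build (made-up numbers).
        List<Integer> failuresPerBuild = List.of(4, 3, 3, 1, 0, 2, 0, 0);
        System.out.printf("Average failures over the last 5 builds: %.1f%n",
                averageFailures(failuresPerBuild, 5));
    }
}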

Is it comparable?

Nope. This is a team-specific metric. Different people, skills, experience and codebases make the numbers different for different teams. There’s no sense in comparing failure rates.

Want help with visualizing and measuring the quality of your code and tests? Contact me!