“The Quality Dashboard” from Odyssey 2021


Conferences are cool, even when they take place in the online ether. So I was happy to bring an updated version of the “Quality Dashboard” session to Odyssey 2021. It’s 55 minutes long, so grab some coffee, and don’t let any baby get in your face until you do.

What is it about? It’s the real story (mostly – I don’t invent stuff) of a project I did, implementing quality metrics for REST API teams. It’s interesting because of the process, the thinking about what to measure, and the keys to success. If you want me to help you, contact me.

Here’s the recording, enjoy!

“10 Things They Didn’t Tell You About TDD” – Webinar Recording


Yesss. It’s here. At least the Hebrew version is – I have a feeling we’ll do an English one soon.

What is it? It’s 48 minutes of what you don’t read in introductory TDD books or articles. (Except for Everyday Unit Testing, my book, but you probably knew that.) From what TDD does to your thinking, coding and designing, to what it doesn’t do (hint: magically create great designs). Almost no code examples.

There’s a lot of good stuff in there (I know, I wrote it, and it took a lot of work). So if you speak Hebrew, you can just go ahead and play it. Well, you can play it if you don’t, too, but it won’t be as effective.

Here it is.

Is There Such Thing as “Untestable Code”?


The idea of “untestable code” comes up often when you start implementing any test automation method, especially unit testing.

Suddenly, you discover that code is not testable. It resists.

Actually, it’s not “untestable”. But it’s damn hard to test without changing it.

Here’s an example. Your code does a calculation and saves the result in the database. You want to check that the code does what it’s supposed to. But to do that, you need to set up a database, call the code, query the database to check the result, and wipe it clean before you run the next test. If only our code returned a result we could simply assert on…
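
To make this concrete, here’s a sketch of what such code usually looks like. It’s in Java, and the names are made up for illustration – InvoiceService and its Database gateway are not real APIs:

```java
// A hypothetical sketch of hard-to-test code: the calculation and the
// persistence are welded together, so exercising the logic means
// standing up a real database.
public class InvoiceService {

    // Stand-in for whatever persistence layer the real code talks to.
    static class Database {
        static double load(long invoiceId) { /* SELECT ... */ return 100.0; }
        static void save(long invoiceId, double total) { /* UPDATE ... */ }
    }

    public void applyDiscount(long invoiceId, double percent) {
        double total = Database.load(invoiceId);         // needs the database
        double discounted = total * (1 - percent / 100); // the logic we actually care about
        Database.save(invoiceId, discounted);            // needs the database again
    }
}
```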

It’s not “untestable” code. It is expensive to test. Most code is like that, because most developers, yours truly included, were not taught to write cheaply testable code.

There are consequences: Most of the time, “untestable code”, which is really hard-to-test code, doesn’t get tested. It’s either too costly to write tests for as-is, or risky to change (without tests), or both.

So we declare the code “untestable” and move on, perpetuating the idea that “untestable code” exists.

There are a few things we need to do for that to change.

First, we need to change our language. Remove “untestable” from our vocabulary. When we start calling it “hard-to-test code”, we’re stating that the code is testable, and that it can be written differently to make testing easy.

Then, we need to acknowledge that code, as it was written until now, is hard to test. And we have options:

  • Leave that code and don’t bother to test it
  • Test it (with time, effort, blood and tears)
  • Change it (with time, effort, blood and tears) and test it more easily

We can also decide to start testing new code, which was written with testability in mind, and will be easier to test. And we should train the developers to do that.
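
For contrast, here’s a minimal sketch of the same hypothetical discount calculation from before, extracted into a pure function that just returns a result – the test needs no database, no setup, and no cleanup:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// The same hypothetical logic, written with testability in mind: the
// calculation is a pure function we can simply assert on.
public class DiscountTest {

    static double applyDiscount(double total, double percent) {
        return total * (1 - percent / 100);
    }

    @Test
    void tenPercentOffAHundred() {
        assertEquals(90.0, applyDiscount(100.0, 10.0), 0.0001);
    }
}
```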

Untestable code doesn’t exist. But we can write better, simpler, cheaply-testable code, and make our testing implementation easier.

What I’ve Learned Writing Tests For Someone Else’s Code


I’ve been asked if it’s possible to write unit tests for someone else’s code, after it was written. Not your code, and not as a pair.

Up to a few months ago, I’d have said no. But here I am, after doing it myself, on code I don’t know, and even in a language I don’t have much experience with. TLDR: It’s not optimal, but it is possible.

Here are a few things I’ve learned on the way:

1. Understand the risks – If you don’t understand the code, you might write automated tests that lock in the wrong behavior. If you take this on, the risk exists. So make sure you understand the code, and get an expert chaperone.

2. Decide what kind of automated tests you’re going to write – unit, integration, API. Unit tests may not be simple to write, but are less error prone. Bigger tests are even more complex, and you run the risk of losing your way.

3. Go over the code first – It may be alluring to just jump in, get your hands dirty, and start writing something. But it’s worthwhile to get a comprehensive view of the code first. Understanding what cases are there, what it takes for each to pass through the code, and whether they need testing at all, is essential.

4. Create a test list, with dependencies – What helped me a lot was creating the test list first. With each test, I wrote down the dependencies – external or internal. With the list, I could prioritize what to write first, and decide whether to mock internal calls, or test them within their calling code.

5. Find patterns – Still staying away from the code, I looked for similar test cases – tests that would look alike, use a common setup, or a common resource. These patterns allowed me to categorize the tests into groups and write the automated tests group by group, keeping focus on the pattern (see the sketch after this list). Even more, the patterns allowed me to “foresee” how to design the tests.

6. Refactor the tests as you go – This one is always true, but the iterative nature of the work kept me coming back to refactor the automated tests, where copy-and-paste tests would have landed me in trouble. It’s true to the TDD way, but with the focus on the tests.

7. Understand what you’re doing – This goes back to step 1. If you get stuck, ask questions. I found bugs in the code through failing tests, only to find out that the code had not been in use for a while. Go figure.

8. It’s going to take a while – Understanding the code, learning how to test it, and doing it iteratively takes time. It’s hard to estimate, even after the deep planning I’d done. I was surprised, and I’m pretty good at testing, you know.
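
As an illustration of finding patterns (point 5), here’s a hedged sketch in JUnit 5 – the Order class is a made-up stand-in for real code under test – showing how a group of similar cases can share one setup:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;

// Tests grouped by pattern: cases that share an arrangement live
// together, so the common setup is written exactly once.
public class OrderTest {

    // Hypothetical class under test.
    static class Order {
        private double total;
        void add(double price) { total += price; }
        double total() { return total; }
        boolean qualifiesForFreeShipping() { return total >= 50; }
    }

    Order order;

    @BeforeEach
    void commonSetup() {
        order = new Order();   // the setup every case in this group shares
        order.add(30);
    }

    @Test
    void totalIsAccumulated() {
        assertEquals(30.0, order.total());
    }

    @Nested
    class AboveShippingThreshold {
        @BeforeEach
        void extendSetup() {
            order.add(25);     // this group extends the shared setup
        }

        @Test
        void freeShippingKicksIn() {
            assertTrue(order.qualifiesForFreeShipping());
        }
    }
}
```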

So is it possible? Yes, under some conditions. But writing tests for yourself, as you’re writing the code, hopefully before it, is much better. And with a partner, it’s even more effective.

“Unit Testing for Grown-ups” Webinar Recording


Another month, another webinar recording!

At 52 minutes, this one is about the mindset of testing. Especially when you want to introduce unit testing (or any other process), you need to understand where people are coming from: how developers think, what can help them consider change, and what objections they have (and why). Yes, some objections have substance – they’re not just there to make you go away.

By the way, you should really join these webinars. I talk about things that people sometimes don’t talk about, plus, I like the company!

And here are the slides.

“Make it Public” Recording (Hebrew)


This webinar recording (an hour long) is “Make it public, and other things developers don’t like to hear”. It’s about how code impacts testability, and how the tester’s role includes making an impact in that regard.

Testability is very important in our effort to create a comprehensible report on the status of our system, and how code is written makes a direct impact. Even code duplication does that (and I actually show it).
We’ll probably do an English version soon, but until then, why not learn another language?

Enjoy.

And here are the slides:

JUnit – The Next Generation – Webinar recording


And here’s the JUnit webinar recording. Live is better, but a repeat viewing is not too shabby – Data and Lore would agree. 52 minutes of absolute testing deliciousness.

It goes through what makes JUnit 5 special and, more important than the features themselves, how they can help you write better, more organized tests.
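
As a tiny taste (my own sketch, not material from the webinar), here’s how two JUnit 5 features, @ParameterizedTest and @DisplayName, tidy up repetitive cases:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// @ParameterizedTest collapses copy-and-paste cases into one test,
// and @DisplayName makes the report speak human.
public class JUnit5TasteTest {

    @ParameterizedTest
    @CsvSource({ "1, 1, 2", "2, 3, 5", "-1, 1, 0" })
    @DisplayName("addition works across several inputs")
    void addition(int a, int b, int expected) {
        assertEquals(expected, a + b);
    }
}
```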

And who doesn’t want that in this day and age?

Enjoy.

The Quality Dashboard – TestCon 2020


This is the “Quality Dashboard” presentation I gave at TestCon 2020. It’s 36 minutes long, so grab a cup of joe and enjoy.

What’s it about, you ask? It’s about my experience building, with a client, a set of quality reports that actually matter – where developers can look at the test reports, SonarQube, and other tools, and improve their code, their tests, and coverage. And where managers can get an overview of the quality picture that actually means something.

And that guinea pig plays a big part in it.

“Safe Refactoring with Characterization Tests” from Test Automation Days 2020


I gave this presentation (22 minutes, so no need to clear your schedule for it) at Test Automation Days 2020. It’s about characterization tests – what they are, how they can help you, and how you can use the ApprovalTests tool (as an example) to cover your untested code with tests. Check it out!
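
If you haven’t seen a characterization test before, here’s a minimal sketch using ApprovalTests in Java – the legacyFormat method is a made-up stand-in for untested legacy code:

```java
import org.approvaltests.Approvals;
import org.junit.jupiter.api.Test;

// A characterization test: run the untested legacy code, capture its
// output, and approve the result file once you've eyeballed it. From
// then on, any change in behavior fails the test.
public class LegacyFormatterTest {

    // Hypothetical legacy code we want to lock down before refactoring.
    static String legacyFormat(String name, double amount) {
        return name.toUpperCase() + ": " + String.format("%.2f", amount);
    }

    @Test
    void characterizeFormatter() {
        // The first run writes a "received" file; approve it to lock in
        // the current behavior as the baseline.
        Approvals.verify(legacyFormat("coffee", 3.5));
    }
}
```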

The Metrics: Percentage of New Coverage


When I talk with people who want to start measuring stuff, it’s always coverage. I’ve already talked about coverage as a metric, and why it’s not good enough, especially on its own. But in one case, there’s something to be said for a coverage metric that is actually helpful – if you’re not willing to try anything else, that is.

Like new potatoes, the percentage of new coverage has a different flavor. But I digress. And am apparently hungry.

What is it?

The percentage of new coverage – how much of the new code is covered by tests.
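
Or, as a simple formula (my phrasing of the same thing):

new coverage % = (lines of new code exercised by tests ÷ total lines of new code) × 100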

Should I track it?

If you really, really want to measure coverage, and just, only, coverage – then yes. Can I interest you in test cases instead? No? Ok, let’s continue.

Why should I track it?

Let’s say you’re starting to write a new feature inside your ten-year-old code base. If you check total coverage, you won’t see any progress for months – the added percentage is just too small. (Fully cover 5,000 new lines in a 500,000-line code base, and total coverage moves by about one percentage point.) Check coverage only on the new feature code, and it will start meaning something.

What does the metric really mean?

Coverage means, as always, “my tests exercised this code”. It doesn’t matter what your tests actually did with it. But again, that’s with any coverage. The percentage of new coverage, means how much of the new code you’ve covered (obviously). With TDD, and in fact, even without it, you should be close to 100% all the time, since you’re supposed to add tests just for the code that you add, right?

Right?

Because if it drops, the feature is not covered, and the developers are pushing tests to “later when they have time”, which is never. So you keep it up.

How to track it?

With your favorite coverage tool. Only this time you’ll need to do some manual work (or automate some reports), because only you know what’s really “new” in the feature, as opposed to the legacy stuff.
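
For example, here’s a rough sketch that reads JaCoCo’s XML report and sums line counters only for packages you consider “new”. The report path and the package prefix are assumptions – adapt them to your project:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Sum JaCoCo's package-level LINE counters, but only for the packages
// that belong to the new feature.
public class NewCoverageReport {

    public static void main(String[] args) throws Exception {
        String reportPath = "target/site/jacoco/jacoco.xml"; // assumed default location
        String newCodePrefix = "com/example/newfeature";     // hypothetical "new" package (JaCoCo uses slashes)

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        // JaCoCo's report references a DTD we don't need; skip fetching it.
        factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
        Document doc = factory.newDocumentBuilder()
                .parse(Files.newInputStream(Paths.get(reportPath)));

        long covered = 0, missed = 0;
        NodeList packages = doc.getElementsByTagName("package");
        for (int i = 0; i < packages.getLength(); i++) {
            Element pkg = (Element) packages.item(i);
            if (!pkg.getAttribute("name").startsWith(newCodePrefix)) {
                continue; // legacy stuff - only you know where "new" lives
            }
            NodeList counters = pkg.getElementsByTagName("counter");
            for (int j = 0; j < counters.getLength(); j++) {
                Element counter = (Element) counters.item(j);
                // take only the package-level aggregate, not class/method counters
                if (counter.getParentNode() == pkg
                        && "LINE".equals(counter.getAttribute("type"))) {
                    covered += Long.parseLong(counter.getAttribute("covered"));
                    missed += Long.parseLong(counter.getAttribute("missed"));
                }
            }
        }
        long total = covered + missed;
        System.out.printf("New code line coverage: %.1f%%%n",
                total == 0 ? 0.0 : 100.0 * covered / total);
    }
}
```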

Is it comparable across teams?

You probably think you know what I’ll say, but let me surprise you: YES!

If you’re a manager you must love me. Today.

Since we want as much of the new code covered as we can get, we can compare the behavior of different teams working on different new features. If team 1 measures its new code coverage percentage and it’s near 100%, while team 2 does the same and is closer to 50%, you can actually ask why, and get relevant answers. You might get some “legacy code keeps dragging us down” answers. Or “the team is not knowledgeable about tests”.

But here’s the catch – use those answers to improve the code and the team, so you can get higher coverage numbers!

This is a metric that specifically tells you how well the team performs their testing. Which is really what you want to know, and improve if needed.

Remember: if you’re really interested in how your team is doing, you’ll want to measure more things to get a clear picture.

What’s that, you need help with that?

Contact me. I can help with these things, you know.