Testability is weird. How we feel about our ability to write a test comes not just from the effort of writing it. It also comes from the perceived future maintenance effort. Like I said, weird.

As we’ve already said, everything is testable – it’s just a matter of motivation, effort and skill. Until now, we’ve looked at the effort of writing the test. But it doesn’t end there.

Testability depends on many things, but we can divide it into:

  1. Operability – How easily we can call the tested code
  2. Observability – How easily we can observe the result or effect of the operation
  3. Controllability – How easily we can control the conditions of the test
  4. Reproducibility – How easily we can make sure the code executes repeatably

Today, it’s all about Reproducibility.

Combining the Ingredients of Testability

We like procedural code. We understand and trust it. Less so with code that runs asynchronously. That goes not just for code, but also for processes – think sagas in micro-services. When things go according to plan, we like it. When they can go in different ways, we see this as a complexity we must deal with.

Let’s say we have some messy code with a race condition in it. We usually associate a race condition with something bad. But it doesn’t have to be – it just means that if things happen in a different order, there are going to be different expected (and maybe unexpected) results.
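To make that concrete, here’s a tiny Python sketch (the record and its statuses are made up for illustration). Two workers update the same record, and whichever runs last “wins” – the result depends entirely on timing:

```python
# Two workers race to update the same record.
# Whichever assignment happens last "wins", so the final value
# depends on scheduling order - a race, not necessarily a bug.
import threading

record = {"status": None}

def approve():
    record["status"] = "approved"

def reject():
    record["status"] = "rejected"

t1 = threading.Thread(target=approve)
t2 = threading.Thread(target=reject)
t1.start(); t2.start()
t1.join(); t2.join()

print(record["status"])  # sometimes "approved", sometimes "rejected"
```

Both outcomes may be perfectly valid business-wise – they’re just different.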

Let’s start with the unexpected. We can’t write an automated test for an unexpected result. We can set up conditions to check what happens, but is it worth it? Sometimes, for exploration.

This is where we start thinking about the building effort – how much work does it take to set up the first route? How much for the second? And how much to keep them reproducible?

Setting things up to go exactly the way we want them, in the right order, requires all the previous attributes of testability to be easy to achieve.

We need Operability and Observability – that’s a given. But we may also need Controllability – mock the computer clock, operate queues in a specific manner, dig into database synchronization – all to achieve the order of operations and system state we want.
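Mocking the clock, for example, usually means injecting a time source instead of calling the system clock directly. A minimal sketch (the Session class and its names are invented for illustration):

```python
# Controllability via an injectable clock: the test decides what time it is.
import time

class Session:
    def __init__(self, ttl_seconds, clock=time.time):
        self._clock = clock                       # injectable time source
        self._expires_at = self._clock() + ttl_seconds

    def is_expired(self):
        return self._clock() >= self._expires_at

# In the test we control time completely - no sleeping, no waiting.
fake_now = [1000.0]
session = Session(ttl_seconds=60, clock=lambda: fake_now[0])

assert not session.is_expired()
fake_now[0] += 61                                 # "advance" the clock
assert session.is_expired()
```

The production code pays a small design price (the injected clock) to make the conditions controllable.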

Usually, setting conditions for winning the race is not easy. Sometimes we don’t have the skills or tools. Many times, we’d just say it’s not worth the effort. Which, of course, impacts testability.
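If we do decide it’s worth the effort, one common way is to add synchronization points the test can use to force a specific order. A sketch, continuing the made-up record example (in real code these “traffic lights” would be injected hooks, not hard-coded waits):

```python
# Forcing one specific ordering of the race, so the test is reproducible:
# reject is only allowed to run after approve has finished.
import threading

record = {"status": None}
approved = threading.Event()

def approve():
    record["status"] = "approved"
    approved.set()                   # signal: first step is done

def reject():
    approved.wait()                  # block until approve has finished
    record["status"] = "rejected"

t1 = threading.Thread(target=approve)
t2 = threading.Thread(target=reject)
t2.start(); t1.start()
t1.join(); t2.join()

assert record["status"] == "rejected"   # always, regardless of scheduling
```

That gives us one reproducible route through the race. The other route needs its own setup – and that’s the building effort adding up.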

The Flake Factor

There’s one more question: how much work does it take to make the test behave the same way every time?

An automated test is set up to prove that things work in a certain way all the time, unless conditions have changed. We value the “greens” as much as the “reds”.

When we see greens and reds switching places all the time, we call the test “flaky”. We associate that with extra work: investigating what made it fail this time, reading logs, debugging, finding the reason (or maybe not even finding it). And of course, trust issues: we don’t trust this test, on either the red or the green. Which leaks to other tests too.

(Usually, it’s not the test’s fault. It’s the code that’s flaky. But we continue to shoot the messenger. Anyway.)
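Whoever’s fault it is, the flakiness often comes from the same place: waiting a fixed amount of time for asynchronous work, instead of waiting for the actual result. A sketch (all names invented):

```python
# A classic flaky test: a fixed sleep instead of waiting for the result.
import random
import threading
import time

def start_worker(results, done):
    def work():
        time.sleep(random.uniform(0.01, 0.1))   # simulated variable latency
        results.append("done")
        done.set()
    threading.Thread(target=work).start()

def test_flaky():
    results, done = [], threading.Event()
    start_worker(results, done)
    time.sleep(0.05)                 # hope that's enough - sometimes it isn't
    assert results == ["done"]       # green or red, depending on timing

def test_stable():
    results, done = [], threading.Event()
    start_worker(results, done)
    assert done.wait(timeout=2)      # wait for the signal, not the clock
    assert results == ["done"]
```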

While it’s not a direct cause-and-effect relationship, flakiness lowers testability. If we feel our tests will be flaky in the first place, we rank the testability as low.

We combine the effort of setting up the conditions for each side of the race with our trust (and history, and trauma) in using tests like that, throw everything into a bowl, look at it, and mutter – “that’s untestable”.

Our ability to achieve reproducibility of conditions and results affects our perceived ability to test easily. Testability depends on reproducibility, and reproducibility should be easy to achieve.

Want to learn how to deal with asynchronous code to make it more testable? Check out my Clean Code workshop, where I talk about testability from this perspective.

Check out my API testing and Clean Code workshops, where I discuss testability.
