How to Design an Automated Test Case?
What is test design?
You want to automate tests, but you have several options to check even a single feature or API.
While there is no single right answer for all cases, you might want to consider the following questions before you decide how to design that test.
- What do I want to check?
Is it the behavior of a method? A whole API? An ultimate end-to-end scenario? Sure, I can check the API logic by running it from the UI. But that will probably be more fragile than running an API test: it will cost more to get the same answer.
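To make this concrete, here is a minimal sketch of checking the same logic at the cheapest level that answers the question. The `discount` rule and its values are hypothetical, standing in for whatever behavior the UI or API ultimately exercises.

```python
# A hypothetical business rule: 10% off orders over 100.
def discount(total: float) -> float:
    return total * 0.9 if total > 100 else total

# Unit-level check: fast, no HTTP, no browser, and it fails
# only when the rule itself breaks -- not when the UI does.
assert discount(200) == 180.0
assert discount(50) == 50
```

The same answer could be obtained through the UI, but every extra layer between the test and the rule adds cost and fragility.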
- How can I observe the impact of the test?
Suppose I’m checking an API that adds something to the database. How will I know it worked?
If I’m using a tool that allows me access to the database, that’s a plus in this scenario.
I know, it’s possible to do it with every tool, but it costs. For example, Postman (like other API automation tools) can’t access a database directly. You need to jump through hoops to get there, and that makes it a worse choice in this case. Our test design needs to rely on us being able to observe and analyze the result easily.
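With a tool that can reach the database, the check is direct. A minimal sketch, using an in-memory SQLite database; `add_user` is a hypothetical stand-in for the API under test (in a real test it would be an HTTP call):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

def add_user(name: str) -> None:
    # Stand-in for the API call that adds something to the database.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

add_user("alice")

# Observe the impact where it actually lands: the database row,
# not just the API's "200 OK".
row = conn.execute("SELECT name FROM users").fetchone()
assert row == ("alice",)
```

The point is the last two lines: success is verified at the destination, not inferred from the response.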
- Can I check just what I want?
Let’s say an API can’t work without authentication. I need to pass a token in order for it to work.
But what I’m really interested in is the functionality under the hood. Do I really need to set up a token just to check that?
Or can I automate a non-authenticated test, maybe not even an API test, like a unit test or a test that integrates with the database?
I want my tests lean and focused on what I want to check. I don’t want them to fail on the extras.
- Can I run it the way I want it?
If I want to check an API concurrently (not just for load), I need a tool that lets me do it: either code it myself or use a tool that supports concurrency (like a load runner). If I can’t choose my toolset, it will cost me.
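"Code it myself" can be quite short. A minimal sketch with a thread pool; `reserve_seat` is a hypothetical stand-in for the API call (a real test would issue HTTP requests here), and the behavior under test is correctness under concurrency, not throughput:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

seats = {"left": 1}          # one seat, many takers
lock = threading.Lock()

def reserve_seat(_: int) -> bool:
    # Stand-in for the API under test: no double booking allowed.
    with lock:
        if seats["left"] > 0:
            seats["left"] -= 1
            return True
        return False

# Fire 8 concurrent reservation attempts.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(reserve_seat, range(8)))

# Exactly one caller should win the seat.
assert results.count(True) == 1
```

A load tool can generate the same traffic, but asserting the *outcome* (one winner, seven rejections) is where a programmable toolset pays off.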
- Does this kind of test fit into my development process?
That’s a big one that we sometimes miss. I’ll give you another Postman example (I really like Postman, but this limitation drives me crazy): you can’t tie tests (or requests) to the code they test, i.e. version or label them alongside it. You can export the requests and put them in source control to label them, but that makes the development process awkward and error prone.
When I say “cost” I really mean “we’re going to skip that one”. Either we see that coming and decide not to go down that road, or we pay with maintenance for the tests we write.
Applying this thinking is crucial to create a test suite that serves us for the long run, and tells us what we want to learn from our tests. Cheaply.
And that’s just the tip of the iceberg. In my next webinar, “How to Design Effective API Tests”, I’m going to dive way deeper into that iceberg.