
Flaky Tests: A True Crime Webinar
Everybody’s got flaky API tests. If I didn’t know better, I’d say there’s a conspiracy to spread them around.
But I do know better. I’ve done some digging.
Turns out, there’s a reason flaky tests are so inconsistent. Actually, more than one. I’d say at least seven.
Seven suspects that keep us investigating and debugging our automated tests, and worse, eroding our trust in them.
And these suspects? They come from us. We make assumptions about how our tests should run. Those assumptions break easily. And they leak into our tests.
Oh, you’re using AI to generate tests? You’re not immune. AI comes loaded with assumptions.
And you’re the one doing time.
What’s in the Webinar?
- Why flaky tests aren’t a test problem – they’re an assumptions problem
- The 7 Suspects: a hit list of the flawed assumptions behind unstable API tests – from state pollution to AI indeterminacy
- How to read a flaky test failure like a crime scene and identify which suspect is responsible
- Why AI-generated API tests are creating a new breed of invisible assumptions
- How stable API tests become your best documentation when no one remembers why the code works
Who is this for?
- QA engineers and test architects who keep seeing the same API test fail, pass, fail again — and nobody can explain why
- Developers, testers, and anyone writing or reviewing API tests — especially AI-assisted ones
- Anyone maintaining a test suite they’re slowly losing trust in
About the presenter
Gil Zilberfeld has spent 25+ years in software, with over two decades focused on testing. He runs TestinGil, where he teaches API testing to teams who want their tests to actually mean something. He’s opinionated, practical, and won’t waste your time with theory that doesn’t survive contact with production.
June 3rd, 3PM CEST / 9AM EDT
Join The Investigation!
