Breaking the Test Case Addiction

A Reflection on Expected Behaviour and Misused Test Cases

The Falsehood of “Expected Behaviour”

Following on from my thoughts on “breaking the test case addiction”, there’s an interesting falsehood that what is written in a test case is somehow THE expected behaviour of a system. This comes down to one of the main reasons I believe businesses use and abuse test cases.

There appears to be a pervasive idea that once a test case is written down or documented, it can then be run and verified by another tester, and the system is therefore deemed to be “behaving correctly”. To put it another way, the first person who encounters a feature does all of the thinking, and the next person doesn’t have to, having the luxury of running tests looking for simple expected values.

This is seriously problematic.

Assumptions and Bias in Test Cases

Firstly, a written test case has assumptions and bias baked into it that we can’t possibly understand or evaluate. For example, we don’t know what questions were asked about the system under test. We don’t know what conversations happened to gather the information for this test case, or whether any conversations happened at all. We don’t know how well the author understood the functionality, the acceptance criteria, the tooling used or the environment configuration; the list goes on. We’re hoping the information in the test case is accurate, but we don’t know.
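To make this concrete, here is a minimal sketch of the kind of scripted check I’m describing. Everything in it is invented for illustration: the endpoint, the basket, the customer type and the expected total are all assumptions that someone, at some point, baked into the script. Written as a pytest-style check in Python:

# A hypothetical scripted check. It encodes one person's assumptions
# about what "correct" looks like, then verifies a single value.
import requests

def test_discount_applied_to_basket():
    # Assumptions baked in: this environment, this basket, this customer type.
    response = requests.post(
        "https://staging.example.com/api/basket/total",
        json={"items": [{"sku": "ABC-123", "qty": 2}], "customer": "standard"},
    )
    # Another assumption baked in: that 18.00 is THE expected behaviour.
    # The check passes or fails against that one value and asks no other questions.
    assert response.status_code == 200
    assert response.json()["total"] == 18.00

Running that script tells the next tester whether the system returned 18.00 for that exact basket. It tells them nothing about why 18.00 was chosen, what else was considered, or what the author didn’t know.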

The Illusion of Confidence

Test cases usually have two outcomes: pass or fail. If you run a test case and it “passes”, you may think that your confidence level has increased. Likewise, if it fails, your confidence decreases. I’d argue that regardless of the outcome, your confidence shouldn’t be based on someone else’s test idea. It can be used as a baseline, but ultimately, testing should be about the tester using their experience to evaluate software, not blindly following scripts.

Blind Following: A Childhood Analogy

When I was a child, if I ever got in trouble, I’d blame my brother or my cousin, as we all did. I used to say that one of them told me to do something, which is why I did it. My mom would then say to me, “If they told you to jump off a bridge, would you do that as well?” Following a script blindly is jumping off a bridge and hoping there’s water, not train tracks, below. If this sounds familiar and you find yourself running tests blindly, you need to ask yourself what information you are trying to gain.

A Call for Critical Evaluation

I’m not saying test cases shouldn’t or can’t be run by another tester. I’m just saying be careful about what information you’re getting from them. Are you merely absorbing bias, looking for someone else’s idea of what the system does, or are you performing a critical evaluation of a system for information? If it’s the latter, ask yourself two further questions. Could I perform a better evaluation without running the same test cases? Is the act of running test cases the most beneficial way to gain confidence in a system, or an area of a system, under test?
