Forget flashy - focus on test fundamentals

When I was a teenager, I got very into music. I learnt guitar and later drums as well, and it took me years to figure out what makes certain guitarists and drummers stand out.

Whilst learning guitar, I was always trying to rush ahead to be able to play a solo or learn how to shred and sweep pick. I did exactly the same thing when I got behind a kit—always trying to go as fast as possible and play the most complex, technical fills.

Then, whilst living in China, I joined a house band and got to spend time with other drummers who had years more experience. When they played, they didn’t have to do anything flashy, because they had the fundamentals nailed down. They would much rather focus on timing, texture, timbre, and context than on flashy fills.

They knew their job was to be the backbone of the band and keep the rhythm flowing—not to show off, take centre-stage and leave the band hanging. It takes more than experience to get to that place; it also means parking your ego and melting into the background.

A good drummer will rarely get noticed by the average person, because what they’re playing feels so right that it’s like they’re not even there.


The same can be said for testers. A good tester makes the job look flawless; at times it seems like you could drop them altogether and everything would carry on as it was.

So what are good test fundamentals? And what am I getting at with this analogy? Maybe a story will make it clearer.


A story from my team

My team had started a project that was very integration-heavy. The integrations were all APIs that triggered Azure Logic Apps (a low-code workflow solution offered by Microsoft).

One of the pain points early on was that, fundamentally, Logic Apps don’t support early testing or unit testing. They are built in a drag-and-drop style designer where each step is another no-code function.

As responsible testers, we raised the risk of using such a technology, and since we could move the tests to the API level instead, the stakeholders accepted the risk.
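Moving the checks to the API level can be as simple as calling the workflow's HTTP trigger and asserting on the response. Here is a minimal sketch of that idea, assuming a hypothetical order-sync integration (the endpoint, payload, and field names are all invented for illustration):

```python
def check_order_sync_response(payload: dict) -> list:
    """Return a list of problems found in a (hypothetical) order-sync response.

    An empty list means the response passed every check.
    """
    problems = []
    if payload.get("status") != "Succeeded":
        problems.append("unexpected status: %r" % payload.get("status"))
    if not payload.get("orderId"):
        problems.append("missing orderId")
    return problems

# In a real API-level test, the payload would come from the workflow's
# HTTP trigger, e.g.:
#   payload = requests.post(TRIGGER_URL, json=test_order).json()
# Here we just exercise the checks against a canned response:
sample = {"status": "Succeeded", "orderId": "ORD-123"}
print(check_order_sync_response(sample))  # -> []
```

The point is that the assertions live against the API contract, not the workflow's internal config, so they survive changes to how the logic app is wired up inside.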

One tester, very experienced in automation, refused to be beaten and was determined to find a way to test Logic Apps at the component level.


The over-engineered solution

After a month, the determined tester had actually figured out a way to unit test Logic Apps using NUnit. The tests could be triggered by a code push to any branch and run in a pipeline, just like the unit tests on any coded project.

There was a major issue with this approach, though. Logic Apps are defined by a huge config file that isn’t very readable to humans, and these files had to be updated manually for the tests to cover any change in functionality.

Because of the unreadable nature of these files, it took more effort to understand the config file than to make the change to the logic app itself. The tests were never touched again.

The tests sat there and rotted for months—unmaintainable, unreadable, out of date. The solution was over-engineered for a problem that didn’t need solving.

It was a risk that was already accepted by the people who are paid to assess such risks. That should have been enough.


The trap testers often fall into

Testers who focus too heavily on automation tend to fall into this trap. They dive head-first into the code, when the real question is:

“Can the management of the project accept the risk?”

They turn “people problems” into “technical problems”, because they are comfortable solving code problems.


Back to fundamentals

The fundamentals of testing come down to risk.

We need to highlight risks that have the potential to damage our product or project before customers face them in the wild. We need to find the problems that would otherwise become headlines in the papers.

Automation tools definitely have their place in that work, but diving straight into engineering means missing the information that’s already telling us what we need to know.

In this case, there was no way of changing the architecture, so the right call was simply to accept the risk of lower test coverage.
