Mind The Gap

Testing is all about the gaps.

Ask anyone what a software tester’s job is, and the answer will more than likely be along the lines of ‘to see if a bunch of code does what it is supposed to do or not’.

Ok, I’ve simplified it, but you get the idea. The perception is that it’s basically exercising software to see if it behaves as it should do, in accordance with the requirements.

But testing is more than that. Or it ought to be! It’s about finding the gaps between what a product owner/stakeholder asked for, and what was delivered – from a software behaviour perspective (at functional and non-functional levels) and also from a customer behaviour and usability perspective.

There may be gaps in the requirements that no-one has considered, gaps in the process that the end user will follow, or gaps in the understanding of the data flow between applications. A tester is in a great position to apply critical thinking skills: not just to assess the requirements as stated, but to identify what is missing, ask the awkward questions and feed that information back into the team.

I wonder though if we push this enough within the industry. So much emphasis is on the ‘automation’ side of things that we lose sight of the other areas where testers can (and should) add value. Automated tests are necessary and valuable – we cannot manually regression test everything, and I am not suggesting we should. But these are essentially ‘dumb’ tests, just repeating what they have been written to do. An individual with strong critical thinking and analytical skills can be far more productive spending their time looking for gaps than writing automated tests, yet we seem to value writing automated tests over critical thinking. (See what I did there – a good use of the Agile Manifesto syntax.)

The writing of automated test scenarios is different to the actual automated tests themselves. The scenarios need to be thought out and established before they can be coded, and you need a particular skill to do that (what I term the tester’s mindset). The coding of the tests also requires a particular skill in developing code. Developers are great at writing code – it’s their bread and butter, so would it not make sense for testers to focus on the scenarios and ask the developers to code the tests?
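
To make the split concrete, here is a rough sketch in Python of how it might look: the tester supplies the scenario in plain Given/When/Then language, and a developer translates it into an automated check. The RefundPolicy class, its returns-window rule and the dates are all hypothetical, purely for illustration.

```python
from datetime import date, timedelta


class RefundPolicy:
    """Hypothetical rule: refunds are allowed within 14 days of purchase."""

    RETURNS_WINDOW_DAYS = 14

    def is_refundable(self, purchase_date: date, today: date) -> bool:
        return (today - purchase_date) <= timedelta(days=self.RETURNS_WINDOW_DAYS)


def test_refund_within_returns_window():
    # Given an order placed 5 days ago (the scenario, as the tester wrote it)
    today = date(2024, 6, 15)
    purchase_date = today - timedelta(days=5)

    # When the customer requests a refund
    approved = RefundPolicy().is_refundable(purchase_date, today)

    # Then the refund is approved
    assert approved
```

The tester’s contribution is the three commented lines – the scenario – and that is where the critical thinking lives; turning them into executable code is a separate, largely mechanical exercise.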

I appreciate that there are testers who enjoy coding, so this wouldn’t suit everyone, but if I had to make a choice due to resource constraints, I would rather a tester focus on what needs to be tested, and let someone else code how it is done. After all – in order to automate a scenario, you must first define it.

I expect there will be many who disagree with me, and I welcome other opinions on this. As I state on the home page, this is just my ramblings based on my perception of the industry as it stands, and I could be mistaken, so if you know or feel differently, please do let me know.