Manual Testing is dead – long live Manual Testing!

This post is a little later than planned (by about a week), as I had intended to write it after attending the National Software Testing Conference in London, where I was fortunate enough both to speak and to attend some great talks. I came away with four blog ideas, and this is the first of them.

The demise of manual testing is being discussed in blogs, magazines, conferences, meetups and so on. If you look at job ads, you'd think that manual testing had already died a death and been buried! They all mention 'Automation tester' – as if it's the ONLY thing that testers need to do. So it was refreshing to attend sessions where people took a different view of things.

I've written before about the need for testers to be able to do manual exploratory testing, and it was great to hear Ingo Philipp from Tricentis discuss this in a conference setting. As an industry we need to push testers back towards performing manual exploratory testing, complemented by automated regression testing; otherwise we are going to start missing defects due to deficiencies in overall coverage.

Think about it. An automated test is only as good as the person who wrote it, and only as up to date as when it was last maintained. An automated test cannot make allowances for something that has changed. It cannot stop part-way through and think 'I wonder what happens if I click this button rather than following the process flow'. It cannot look at the number of steps and highlight that the application sucks from a usability perspective. It cannot point out that the colour scheme is unreadable, or that the company logo is the wrong colour/shape/size etc. It can only run the steps it has been coded to do and validate against what it has been told to check for. So a test may pass, as the application displays what was expected, but what if additional text is present that shouldn't be there? The test would pass, and unless anyone manually tested that screen, it would go undetected.
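That "passes despite extra text" point can be sketched in a few lines of Python. This is a hypothetical check, not any real test framework's code; the function, page strings and debug message are invented for illustration:

```python
# Hypothetical automated check: it validates ONLY what it was told to
# check for, so it passes even when unexpected extra text appears.

EXPECTED_MESSAGE = "Order confirmed"

def check_confirmation(page_text: str) -> bool:
    # The check asks one question: is the expected text present?
    return EXPECTED_MESSAGE in page_text

# The screen the check was originally written against:
clean_page = "Order confirmed"
assert check_confirmation(clean_page)      # passes, as intended

# A later build accidentally leaks debug output onto the same screen:
broken_page = "Order confirmed\nDEBUG: user_id=42 card_token=abc123"
assert check_confirmation(broken_page)     # still passes - the leak goes unnoticed
```

A human tester looking at the second screen would spot the debug leak immediately; the automated check, faithfully doing only what it was coded to do, reports green.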

I'm not saying that automated tests are unimportant, far from it, but they have their place within a tester's toolset and are not the only tool available to a tester. Of course you need automated tests in place to follow a continuous integration process, but automated tests cannot cover every possible scenario.

There is another aspect to this as well. Most applications that we develop are going to be used by human beings, so why do we insist on believing they are best tested by code, without any human test coverage as well? Automated tests should cover the repetitive tests and the load and performance tests, but a person, a tester, needs to look at the application and think about the different paths that the end user could take.

The fun is in the thinking. The benefits are derived from the thinking. A tester's brain is needed to assess the tests required, and I despair when I read of really good manual testers with many years' experience who feel that they have to leave the field of testing because they do not have automation experience. I sympathise, as I came from a manual background. Coding holds no interest for me – if it did, I would have become a developer. So my career has been based on testing software and working out how best to do so.

We need to stop this freefall ride into automation oblivion, and look at hiring and supporting multi-skilled testers. If a tester cannot write automated code, does it really matter? Better to have a tester who can look at a requirement and work out what needs testing, than a tester who can code but has no clue how to test something!
Developers can write code, so why not pair a developer with a tester to write the automated tests that the tester defines? If a tester wants to write code, and has the time to do so, then that's great – but we should not be penalising people for knowing how to assess a requirement and define the tests needed (i.e. the core elements of the job), just because they do not have the additional skill of coding automated tests.

I do feel that there are a number of us trying to push back the tide a little to show people the benefits of doing both manual and automated testing, but the more that speakers such as Ingo and myself are getting out there and promoting the benefits of exploratory testing, the better. We need to stop this damaging trend, and ensure that we retain the best skilled testers before they feel undervalued and move on.

Now, who’s with me on this?


10 thoughts on “Manual Testing is dead – long live Manual Testing!”

  1. Hi, Steve. Thank you for the interesting blog post. The ideas you have described are very common these days.
    First of all, the rise of automated testing comes from the increasing complexity of the software under test and the decreasing “time to market” needed to gain an advantage over competitors. As a result, we need to perform testing in a fast, reliable and effective way. On some projects testers do not have enough time to come up with interesting and creative ways of testing – they need to ensure that the product works (at least the basic functionality) before it’s released to production.
    Creative and interesting testing, in my opinion, can and should be applied after a strong and reliable automated testing pipeline is established on the project. It gives us, testers, the freedom to explore, while and after automated tests run the most boring and repetitive part of our job (regression). But it’s completely impossible without skilled testers with coding skills and/or help from the development team.


  2. Thanks for a great post. Most of the time, when testers write automation code, we behave like developers and lose the tester's mindset – for example, we may forget to apply exploratory thinking to automation. As a result, the cases that manual testing would have covered are most probably missed. I believe that manual testing cannot die, since users use the application in their own environments, with their own experience and habits.


  3. I couldn’t agree more with your post.

    My first Test Manager had Seven Rules for testers. (This was so long ago, I’ve been unable to find this on the Net. It ought to be required reading.) One of them said “A Tester tries to break what a simpler, gentler soul (a Developer) has poured their heart and soul into crafting.”

    I once worked on a project whose requirements had been inadequately collected by a consultant, and where there was no-one left in the company who had had anything to do with the project. I devised test cases, worked with the developers, and finally rolled the application out where it was acclaimed as the “best tested application ever” by a range of senior people in the company.

    It lasted two days when it was released into the wild. We found that end users in the real world were trying to use it in ways no-one had bothered to find out about; that there was a range of data items the legacy system collected which were not accommodated in the API and so made the new system refuse to create new data records; and there were a range of test scenarios which no-one had ever anticipated. Automated tests would have confirmed that the system was working as expected when compared with the specification. It was only manual exploratory work that uncovered what was actually wrong with the system.

    The next project aimed to avoid these problems by an extensive programme of requirements gathering from real users. But a new cohort of senior managers decided that this was taking too long and pulled the plug on it after nine months. Of course, none of them had been there whilst we were fixing the first project and cursing those who didn’t do a good enough job in drafting the spec in the first place.


  4. Thanks for helping more people remember the importance of manual testing as part of the quality process. In addition to what you mentioned, I always point out that, when testing brand-new code, manual testing finds more bugs faster than automated testing.

    I can generally find a lot of bugs within the first hour of testing new code, which is faster than writing and debugging new automated tests.


  5. Oh, what a beauty of an article. This is so true. In today’s world, more focus is on tools and technologies than on the human touch. And this is creeping into the testing field as well. Finding a bug is an art, which needs to be developed and encouraged. Instead I see testers frantically trying to learn “how to code” in an attempt to hold on to their jobs.
    More importance is given to automation testing – which is fine, if there are people critically looking at the product. But I see all testers focusing on automation testing, and when a behavioural fault is found at the end of a release cycle, the retrospective action is to improve the automation suite, as opposed to investing time in testing the product.


  6. I’m with you on this absolutely, but I really think we shoot ourselves in the foot when we talk of “manual testing”. This enables and preserves the false dichotomy between “manual testing” and “automated testing”, rather than simply “testing” and “automated checking”.

    The word “manual” does no useful work in positive descriptions of testing. Testing is not something we do with our hands; it’s something we do with our brains, with our observational and critical faculties. It’s not like we’re obliged to say “manual”; there are tons of other options available. Instead of calling it “manual”, we could call it “naturalistic”, “direct”, “unmediated”, “humanistic”, “conscious”, “brainual”, “analytical”, “cognitive“, “experimental”, “exploratory”, “adaptive”, “risk-oriented”, “attentive”, “investigative”, “discovery-focused”… or just “testing”, since testing is all of those things, by turns. “Manual” is a lame and dismissive way to express all of those things.

    —Michael B.


    • I take your point – but this was deliberate, to point out how we still need to refer to the difference between testing performed using our brains and hands on a keyboard, versus coding tests in an automation tool and allowing ourselves to think that this will solve all problems. Let’s see how this continues to rumble on over the next few years 🙂


  7. Nice post. Whilst I enjoy the intellectual exercise of writing automated checks, and the regression test time they save, I find all my juiciest defects by playing.

    Good old fashioned manual testing – there is nothing to beat it.

    Eventually the management layers are going to twig this, but only when they have seen their projects crash and burn, only when they have watched the thieves running out of the door clutching their user details, or their customers stop buying because the big green button is now opaque.

    I may yet be a dinosaur of course, but from what I’ve seen so far robots can’t test, robots only check.


  8. I do agree with your post, and just want to add that both automated and manual testing have their pros and cons. I believe automated testing should be treated as a tool which helps with the repetitive and boring testing tasks, like regression and performance test cases. But automated testing cannot eliminate manual testing, particularly exploratory testing, which is the core part of manual testing and is responsible for finding most of the bugs. Usually, automation and manual testers have different mindsets and should be used according to their natural make-up. For sure, if someone is good enough at both, they can do both jobs, but in my personal experience there is otherwise a high chance of bugs creeping into production.

