My hope for Testing in 2018…

It’s the 1st of January 2018, and at 3pm the rain and grey skies have cleared, and a little blue sky and a few rays of sunshine appear. It’s that little ray of hope in an otherwise grey day that helps make me think of the future, and to wonder where we as an industry will be at the end of the year.

What will we have learned? What will we be doing differently? What new skills and approaches will we have adopted? How will our jobs have evolved?

I have one overriding hope for the testing industry this year, and that is to finally put aside the obsession with just one aspect of the testing craft – ‘Automation’.

There have been so many debates on this already, and to be honest I think it’s time to move on, as the argument is an unnecessary distraction from other things that we should be discussing.

I am going to quote James Bach here (see Testing vs Checking, and the White Paper that you can link to from that page):
“The trouble with “test automation” starts with the words themselves. Testing is a part of the creative and critical work that happens in the design studio, but “automation” encourages people to think of mechanizable assembly-line work done on the factory floor.”

Testing is a craft. It is something that requires thought. It takes skill to identify what needs to be tested and to work out how to go about testing it.

Automation is just the ‘how’, which is fine, but with the focus very much on the ‘how’, we seem to have overlooked the importance of the ‘what’. 

Various comments on LinkedIn by other testing professionals have suggested that this demeans the craft – and I have to agree. Anyone without a testing background, perhaps in a senior management position with budgetary control, may well look at the testing activities and assume it is basically writing code to perform tests. This does not help us to showcase the thought processes that we have to go through to identify what needs to be tested – using risk-based approaches, exploratory testing, story walk-throughs and our own experience in general to work out how to try to break something that hasn’t yet been built.

Test automation looks great on paper – who doesn’t want to save time and get rid of the boring, repetitive work? It’s an easy sell. And in theory, if we can automate a bunch of repeatable tests, then we have time to spend elsewhere. However, this is not always the case. Because we only ever discuss automated tests, senior management can lack visibility of the other types of tests that need to be factored in, not to mention that if you leave an automation pack untouched for any length of time, it will need some work to get it running again, as there are bound to have been changes to the application in the meantime.

Let’s assume we are testing a new web page. The testers do some manual tests and then start to write automated tests to cover the scenarios. Unless they know otherwise, a team can then assume that the job is done – we have repeatable tests, so let’s move on. But the automation that is so often talked about covers just ONE PART of the testing needed – regression.
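
To make the distinction concrete, here is a minimal sketch of the kind of automated regression check described above. It is a hypothetical example in Python using pytest and the requests library (neither tool is mandated by anything in this post, and the URL and expected page content are invented purely for illustration):

```python
# A minimal automated regression check - purely illustrative.
# It re-verifies known expectations and nothing more; it will never
# notice anything it wasn't explicitly told to look for.
import requests

BASE_URL = "https://example.com"  # hypothetical application under test


def test_home_page_loads():
    response = requests.get(f"{BASE_URL}/", timeout=10)
    assert response.status_code == 200


def test_home_page_shows_expected_heading():
    response = requests.get(f"{BASE_URL}/", timeout=10)
    assert "Welcome" in response.text  # assumed page content
```

Useful, but it only ever answers the questions it was given; the thinking about which questions to ask happened (or didn’t) beforehand.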

So – what about performance and load testing, for example? Where do they fit in? Another tool is needed to create load tests, but there is also the critical thinking needed to establish what the acceptable performance benchmarks are for 1 user, 10, 100, 1,000 and so on. And then there is the understanding needed as to how to scale up the load tests – do they all repeat the same scenarios, or do we try to mimic real user behaviour? The running is the last element of a long, thought-driven process.
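
As a rough illustration of how that final ‘running’ part might look once those decisions have been made, here is a sketch of a load-test skeleton in Python using the Locust library. The host, endpoints, task weights and think times are all assumptions made up for this example, not recommendations:

```python
# A minimal Locust load-test sketch; every value here is an assumption.
# Example run (100 simulated users):
#   locust -f loadtest.py --host https://example.com --users 100
from locust import HttpUser, task, between


class BrowsingUser(HttpUser):
    # Simulated "think time" between actions - a guess at user behaviour
    wait_time = between(1, 5)

    @task(3)
    def view_home_page(self):
        self.client.get("/")  # hypothetical endpoint

    @task(1)
    def search(self):
        self.client.get("/search", params={"q": "testing"})  # hypothetical endpoint
```

The script itself is the easy part; deciding what acceptable response times look like at each user level, and whether this mix of tasks resembles real behaviour, is the testing.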

And I haven’t really covered the benefits of exploratory testing. I’ve raised this point in a previous post – automated tests cannot stop part way through and do something different. Not yet anyway – maybe that’s something that machine learning will introduce! But for now, automated tests will just keep doing the same thing over and over again – checking.

This is not testing.

I’ll repeat myself here – testing is the thinking, the investigating, the risk assessment, the planning of what we need to do, looking for things that have been missed by whoever created the requirement – something that they had never considered could happen. After that, it becomes the ‘how’ – what is the best way to perform the tests – as a person using a keyboard to navigate our way round a web application or by writing automated tests to do that for us in a repeatable way.

My wish for 2018 is that we stop making it seem as though testing is all about the automation. It is not. We are far more than writers of testing code, so let’s showcase what we do that adds real value to our organisations.

We are the critical thinkers – let’s be proud of that.

Happy New Year!

 

10 thoughts on “My hope for Testing in 2018…”

  1. I’m becoming fond of the analogy of the test pilot. When a new aeroplane is turned over to test pilots, it has already been tested in terms of flicking all the switches to make sure that they work. But the test pilot then sets out to establish what the aeroplane can do, whether it meets its design brief, whether it is safe to use and under what conditions of mis-use it is still safe to use… and so on. The same goes for software testing. Test automation can cover the basics – does the application work, yes or no? But to find the limits and establish the safe working parameters of the software, you need a human tester with a devious mind.

    • Good post, Steve!

      Hello Robert, I know you are trying to emphasize the importance of good testing, but your airplane testing analogy does not do justice to the aviation testing process. There is so much more to ‘that’ testing than just flicking the switches. Indeed, what the airplane can do, whether it meets the design, whether it’s safe to use and everything else that you have mentioned are all tested long before a pilot formally takes an aircraft for a test flight. The ‘lab’ testing is gruelling enough to test the resilience, fatigue, performance, stress and much more of an aircraft.
      All simulated behavior then gets tested in test flights. That does include flying with bags of potatoes (instead of humans), throwing frozen chickens into running engines, flying to Siberia or Alaska in winter to face snow storms, and so on. That is, testing the behavior that was previously checked using models.

      • Rajesh,

        Yes, I know that I was probably over-simplifying the entire test cycle of any aeroplane before it’s ever rolled out of the hangar door. I suppose what I was trying to do was to make the comparison between automated and manual testing. It’s the difference between testing the plane in controlled conditions where a degree of repetition is possible, and testing in the real world – in this case, the very real world – which requires some degree of imagination as to how to simulate edge cases or just plain unexpected user behaviour.

        And so you are right: manual, exploratory testing takes the test model set out for automated testing and stacks it up against what happens when an app is released into the wild and real users get their hands on it.

  2. Thanks Robert. I love the ‘devious mind’ part. You are right – it takes a different thought process to find out how to break something, I’d just never thought of it as devious. I’m going to use that phrase in future!

  3. Great article. I think in some ways we now hire automated check engineers right off the bat, with their primary work artifact being scripts which check software. These “engineers” simply automate all day long without questioning what they are testing. It’s the what that matters most in software testing, not the how 🙂

    I would like test tools to be so simple to use in 2018 that we spend more time testing and less time implementing.

    Brad

  4. Pingback: Five Blogs – 5 January 2018 – 5blogs

  5. Interesting article, Steve.

    To answer your question, load/performance testing should be identified, planned and estimated during Task Breakdown/User Story Estimation. In fact, these tests should be carried out much earlier by the Dev Team (which includes the Tester) to optimize the code and avoid late detection and nightmares immediately before release.

    I believe every Dev Team should have 1 or 2 Testers with the right mix of experience and skillset. In addition, good Done/Test Closure criteria should cover the relevant effort and time estimation for manual, exploratory and automated tests.

    Lastly, it is a pity if decisions are made by the management/manager alone without involving the whole team or key members of the team.

    Prateek

  6. Pingback: Mis esperanzas para Testing en 2018… – Software Quality and Testing

  7. Pingback: Is Testing as a profession underrated? | Steve Watson - Musings of a Test/Project Manager

  8. Hi Steve – awesome article. I’m seeing the same issues here in NZ: projects focus purely on automation, but they don’t value the planning, questioning and designing of tests, or the effort to understand the end user and what makes a project a success. That’s where we as testers and test practitioners add value. The other issue is that projects that rely heavily on automation don’t invest in maintaining the automation code once a project is delivered. Every production issue that’s found, investigated and fixed degrades the value of test automation if the code is not maintained and kept up to date…
    It’s good to know I’m not the only voice out there saying that automation is another tool in a tester’s toolbox, but it does not and will not replace the innovation and creativity of a good tester…
    Tony
