My hope for Testing in 2018…

It’s the 1st of January 2018, and at 3pm the rain and grey skies have cleared, and a little blue sky and a few rays of sunshine appear. It’s that little ray of hope in an otherwise grey day that helps make me think of the future, and to wonder where we as an industry will be at the end of the year.

What will we have learned? What will we be doing differently? What new skills and approaches will we have adopted? How will our jobs have evolved?

I have one overriding hope for the testing industry this year, and that is to finally put aside the obsession with just one aspect of the testing craft – ‘Automation’.

There have been so many debates on this, and to be honest I think it's time to move on, as it is an unnecessary distraction from other things that we should be discussing.

I am going to quote James Bach here (see Testing vs Checking), and the White Paper that you can link to from that page:
“The trouble with “test automation” starts with the words themselves. Testing is a part of the creative and critical work that happens in the design studio, but “automation” encourages people to think of mechanizable assembly-line work done on the factory floor.”

Testing is a craft. It is something that requires thought. It requires a skill to be able to identify what needs to be tested and how to go about testing.

Automation is just the ‘how’, which is fine, but with the focus very much on the ‘how’, we seem to have overlooked the importance of the ‘what’. 

Various comments on LinkedIn by other testing professionals have suggested that this demeans the craft – and I have to agree. Anyone without a testing background, perhaps someone in a senior management position with budgetary control, may well look at the testing activities and assume it's basically writing code to perform tests. This does not help us to showcase the thought processes that we have to go through to identify what needs to be tested – using risk-based approaches, exploratory testing, story walk-throughs and our own experiences in general to work out how to try to break something that hasn't yet been built.

Test automation looks great on paper – who doesn't want to save time and get rid of the boring, repetitive work? It's an easy sell. And in theory, if we can automate a bunch of repeatable tests, then we have time to spend elsewhere. However, this is not always the case. Because we only ever discuss automated tests, senior management can lack visibility of the other types of tests that need to be factored in, not to mention that if you leave an automation pack untouched for any length of time, it will need some work to get it running again, as there are bound to have been changes to the application in the meantime.

Let's assume we are testing a new web page. The testers do some manual tests and then start to write automated tests to cover the scenarios. Unless they know otherwise, a team can then assume that the job is done – we have repeatable tests, so let's move on. But the automation that is so often talked about covers just ONE PART of the testing needed – regression.

So – what about performance and load testing, for example? Where do they fit in? Another tool is needed to create load tests, but there is also the critical thinking needed to establish what the acceptable performance benchmarks are for 1 user, 10, 100, 1000 and so on. And then there is the understanding needed as to how to scale up the load tests – do they all repeat the same scenarios, or do we try to mimic user behaviour? The running is the last element of a long, thought-driven process.
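As a sketch of what 'mimicking user behaviour' can mean in practice (the scenario names and weights below are invented assumptions, not output from any load tool), a load profile can weight different journeys rather than repeat a single one:

```python
import random

# Hypothetical scenario mix: the names and weights are assumptions a tester
# would establish from real usage data, not values from any specific tool.
SCENARIOS = {
    "browse_catalogue": 0.60,  # most virtual users just browse
    "search": 0.25,
    "add_to_basket": 0.10,
    "checkout": 0.05,          # only a few complete a purchase
}

def build_load_profile(user_count, seed=42):
    """Assign each simulated user a scenario, weighted to mimic real traffic."""
    rng = random.Random(seed)
    names = list(SCENARIOS)
    weights = list(SCENARIOS.values())
    return [rng.choices(names, weights=weights)[0] for _ in range(user_count)]

# The same benchmarking question at each step: 1 user, then 10, 100, 1000.
for users in (1, 10, 100, 1000):
    profile = build_load_profile(users)
    print(f"{users} users -> {profile.count('checkout')} checkout journeys")
```

A fixed seed keeps the workload repeatable between runs, so a performance regression shows up as a timing change rather than a different mix of journeys.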

And I haven’t really covered the benefits of exploratory testing. I’ve raised this point in a previous post – automated tests cannot stop part way through and do something different. Not yet anyway – maybe that’s something that machine learning will introduce! But for now, automated tests will just keep doing the same thing over and over again – checking.

This is not testing.

I’ll repeat myself here – testing is the thinking, the investigating, the risk assessment, the planning of what we need to do, looking for things that have been missed by whoever created the requirement – something that they had never considered could happen. After that, it becomes the ‘how’ – what is the best way to perform the tests – as a person using a keyboard to navigate our way round a web application or by writing automated tests to do that for us in a repeatable way.

My wish for 2018 is that we stop making it seem as though testing is all about the automation. It is not. We are far more than writers of testing code, so let’s showcase what we do that adds real value to our organisations.

We are the critical thinkers – let’s be proud of that.

Happy New Year!



TestExpo – a reflection

I was invited to speak at the Unicom TestExpo conference on Tuesday 31st October at a hotel in Heathrow. I’ve spoken at a number of different events, and I have worked with Unicom before, so knew what to expect.

The day started with everyone together, before breaking into three streams after lunch – Agile, Test and DevOps. It’s a little odd when the big room is half empty as people have moved away to other areas, but it seemed to work ok somehow, although I noticed that people tended to gather at either end of the room, and the middle was a bit empty.

There were some good talks – obviously there were sales-based ones, as is to be expected, but I sat with Mark Winteringham (Software Testing Clinic) and got to see his talk on the Automated Acceptance Testing Paradox, followed by Mark Fewster discussing whether Equivalence Partitioning and Boundary Value Analysis were old hat or not. I have to say that I derived more value from these talks than the sales ones, but that is to be expected to a degree, unless you attend an event specifically to find a tool to solve a problem.

My talk was the last of the day, at 5pm, and I was aware that I was the last person standing between the attendees and the drinks reception – no pressure then! My talk was around the Core Competencies of a Good Tester, something I want to expand on in the future (maybe in a small book at some point). I was aware that whilst we had 2 screens showing the slides, I was in the middle, so I needed to walk up and down the stage to make sure that the people at each end of the room felt included, so apologies to anyone who thought I had issues standing still – it was deliberate!

Without giving too much away, I delivered the talk, asked questions to get responses (to keep people engaged), got a few laughs too, which is always a good sign, and at the end had 4 questions from audience members. Now this can seem scary – you don’t know what someone will ask, and you are on the spot, but thankfully I felt that the answers I gave were of some benefit.

It did make me think, though, about how I see the value in what I deliver, given the preparation time and a day out of the office. One lady in particular had a question, which continued as a conversation with Mark Winteringham and myself afterwards, as she was struggling to make her voice heard as a tester in her team. It reminded me why I set up a global QA Chapter in my organisation, so that testers who felt isolated could talk to others for help. She was so relieved to find that this was not about her – it can be a common theme, and it was at that point that I realised that the value was right there in front of me. Even if just one person got something from my talk, then it was worth it. If I have helped just one person to feel less alone, and to give some guidance and support, then it was worth it.

After the event, and next day too, I received some positive feedback as to how useful the session was, and I want to thank everyone who responded. Gaining feedback helps me to become a better speaker, and to remind me why I am doing so.

This will spur me on to do further talks, and look for ways to give something back to others working within technology. So, if you are reading this, and are stuck with something then reach out, either to me, or to a testing community. There are many out there.

As I said in a previous post – the testing community is such a friendly and helpful one, and I am glad to be a part of it.


Automated testing – the holy grail?

One of the most interesting debates that continues to rear its head is around automated testing, and people's views as to what it is.

It ranges from seeing it as a panacea for all of the testing shortcomings, removing the need for testers as the automated tests do their jobs, through to automated tests being an optional extra if time permits.

I sit somewhere in the middle, not of the view that automated tests can replace people, but seeing the value that having a good set of repeatable automated tests can bring.

So, unpacking that a little, why do I think automated tests are a good thing?

  • Firstly, the concept of using a tool to help us do our jobs more easily has been around since our ancestors started to make flint tools for digging. We advance little without a tool of some sort, but we need to use the right tool, at the right time, for the right reason, otherwise it's a waste of time.
  • Secondly, who wants to repeat the same tests manually over and over again? Been there, done that, and it’s not fun.
  • Thirdly, any time that you save by not manually repeating the same thing means more time to think about and cover different test scenarios.
  • Fourthly, it's an opportunity to learn a new skill – writing code to exercise code! For many people, it's actually a fun challenge, but it also gives us an insight into the mind of a developer. And that can only be an advantage.

The problem we have, though, is that some people (normally budget holders, normally outside of technology) believe that adopting automated tests means that they can dispense with testers. No! That is not the case. You cannot automate what you haven't defined. Someone needs to use analytical thinking skills to determine what the test cases are, and then to test those cases.

The automated test tool is just that – a means to an end. There are skills that are needed to define the tests, and skills needed to write the code to make those tests run. Someone with a background in testing is still needed – whether you call them Quality Engineers, Developers in Test or Testers (I think another blog post on role names is pending!).

Used appropriately, automated tests complement other testing techniques, such as manual exploratory testing. There are different tools for functional UI testing, API testing, non-functional load and performance testing, so don’t be tempted to think that one tool is all that is needed. Selecting the right tool for the job is important.

There is a time element here – investment is needed in the right framework that is sustainable, and familiar to a number of team members, not just one person. It takes time to write automated tests, time to check failed tests, time to fix failed tests, time to maintain the framework and improve it as new ideas and techniques are discovered.

Automated tests cost time to create and maintain – but they save manual effort in execution. Used properly, automated tests can give confidence that:

  • A build has not broken fundamental use cases.
  • Functions built in previous sprints have not been broken (regression tests).
  • The performance of the application meets or exceeds expectations.
  • The application can handle the anticipated load.

I think it’s important to set some expectations here:

  • Automation itself is not the holy grail.
  • It is not a magic wand that gets rid of every problem.
  • It is also not an excuse to remove testing as a function from an organisation.
  • Automated tests cannot stop part way through and go off script, but a manual tester can decide to press the F5 or back button and see what happens. Exploratory testing!
  • Automated tests are a way of executing the same tests over and over again. That’s it!
  • Automated tests complement manual exploratory tests.
  • Automated tests are a tool to help the team.
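That distinction – checking, not testing – can be sketched in a few lines of Python (the page title and function names are invented for illustration; a real version would drive a browser or call an API):

```python
def get_page_title():
    """Stand-in for fetching the page under test; in reality this would
    drive a browser or call an API."""
    return "Welcome to Example Shop"

def check_home_page():
    """A scripted check: it verifies exactly what it was told to, no more."""
    assert get_page_title() == "Welcome to Example Shop"
    return True

# Run it a hundred times and it performs the identical check every time –
# it never pauses, explores, or questions anything outside its script.
print(all(check_home_page() for _ in range(100)))
```

However many times it runs, it will never notice anything it was not told to look for – that remains the job of the person behind it.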

Remember the saying ‘A fool with a tool is still a fool’.

So use the tools wisely.


Mind The Gap

Testing is all about the gaps.

Ask anyone what a software tester’s job is, and the answer will more than likely be along the lines of ‘to see if a bunch of code does what it is supposed to do or not’.

Ok, I’ve simplified it, but you get the idea. The perception is that it’s basically exercising software to see if it behaves as it should do, in accordance with the requirements.

But testing is more than that. Or it ought to be! It’s about finding the gaps between what a product owner/stakeholder asked for, and what was delivered – from a software behaviour perspective (at functional and non-functional levels) and also from a customer behaviour and usability perspective.

There may be gaps in the requirements that no-one has considered, gaps in the process that the end user will follow, or gaps in the understanding of the data flow between applications. A tester is in a great position to apply critical thinking skills to not just assess the requirements in terms of what is stated, but to identify what is missing, ask the awkward questions and feed that information back into the team.

I wonder though if we push this enough within the industry. So much emphasis is on the ‘automation’ side of things that we lose sight of the other areas where testers can (and should) add value. Automated tests are necessary and valuable, as we cannot manually regression test everything, and I am not saying that we should do so. But these are essentially ‘dumb’ tests, just repeating what they have been written to do. An individual with great critical thinking and analytical skills can be far more productive spending their time looking for gaps than writing automated tests, yet we seem to value writing automated tests over critical thinking. (See what I did there – good use of the Agile Manifesto syntax).

The writing of automated test scenarios is different to the coding of the automated tests themselves. The scenarios need to be thought out and established before they can be coded, and you need a particular skill to do that (what I term the tester's mindset). The coding of the tests also requires a particular skill in developing code. Developers are great at writing code – it's their bread and butter – so would it not make sense for testers to focus on the scenarios and ask the developers to code the tests?

I appreciate that there are testers who enjoy coding, so this wouldn’t suit everyone, but if I had to make a choice due to resource constraints, I would rather a tester focus on what needs to be tested, and let someone else code how it is done. After all – in order to automate a scenario, you must first define it.

I expect there will be many who disagree with me, and I welcome other opinions on this. As I state on the home page, this is just my ramblings based on my perception of the industry as it stands, and I could be mistaken, so if you know or feel differently, please do let me know.



Manual Testing is dead – long live Manual Testing!

This posting is a little later than planned (by about a week), as I had intended writing it after attending the National Software Testing Conference in London, where I was fortunate to speak as well as attend some great talks. I came away with 4 blog ideas, and this is the first one of them.

The demise of manual testing is being discussed in blogs, magazines, conferences, meetups etc. If you look at job ads, you'd think that manual testing has already died a death and been buried! They all mention 'Automation Tester' – like it's the ONLY thing that testers need to do. So, it was refreshing to attend sessions where people took a different view of things.

I’ve mentioned before about the need for testers to be able to do manual exploratory testing, and it was great to hear Ingo Philipp from Tricentis discuss this in a conference setting. As an industry we need to push testers back towards performing manual exploratory testing, to be complemented by automated regression testing, otherwise we are going to start missing defects due to deficiencies in the overall coverage.

Think about it. An automated test is only as good as the person who wrote it, and only as up to date as when it was last maintained. An automated test cannot make allowances for something that has changed. It cannot stop part-way through and think 'I wonder what happens if I click this button rather than following the process flow'. It cannot look at the number of steps and highlight that the application sucks from a usability perspective. It cannot point out that the colour scheme is unreadable, or that the company logo is the wrong colour/shape/size etc. It can only run the steps it has been coded to do and validate against what it has been told to check for. So a test may pass, as the application displays what was expected, but what if additional text is present that shouldn't be there? The test would pass, and unless anyone manually tested that screen, it would go undetected.
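That 'extra text' trap is easy to reproduce. In this sketch (the messages are invented), a loose assertion of the kind many automated tests use still passes even though debug output has crept onto the screen, while an exact comparison catches it:

```python
expected = "Order confirmed"
actual = "Order confirmed\nDEBUG: payment stub active"  # text nobody asked for

# Loose check, as many automated tests are written: it still passes.
loose_pass = expected in actual
# Exact check: it fails – but only because someone thought to make it exact.
exact_pass = actual == expected
print(loose_pass, exact_pass)
```

And even an exact check only covers that one field – a human looking at the screen would spot the stray text anywhere it appeared.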

I’m not saying that automated tests are unimportant, far from it, but they have their place within a tester’s toolset and are not the only tool available to a tester. Of course you need automated tests in place to be able to follow a continuous integration process, but automated tests cannot cover every possible scenario.

There is another aspect to this as well. Most applications that we develop are going to be used by human beings, so why do we insist on believing that they are best tested by code alone, without any human test coverage as well? Automated tests should cover the repetitive tests, the load and performance tests, but a person, a tester, needs to look at the application and think about the different paths that the end user could take.

The fun is in the thinking. The benefits are derived from the thinking. A tester’s brain is needed to assess the tests needed, and I despair when I read of really good manual testers with many years’ experience who feel that they have to leave the field of testing because they do not have automation experience. I sympathise, as I came from a manual background. Coding holds no interest for me – if it did, I would have become a developer. So, my career has been based on testing software and working out how best to do so.

We need to stop this freefall ride into automation oblivion, and look at hiring and supporting multi-skilled testers. If a tester cannot write automated code, does it really matter? Better to have a tester who can look at a requirement and work out what needs testing, than a tester who can code but has no clue how to test something!

Developers can write code, so why not pair a developer with a tester to write the automated tests that the tester defines? If a tester wants to write code, and has the time to do so, then that’s great – but we should not be penalising people for knowing how to assess a requirement and define the tests needed (i.e. the core elements of the job), just because they do not have an additional skill in coding automated tests.

I do feel that there are a number of us trying to push back the tide a little to show people the benefits of doing both manual and automated testing, and the more that speakers such as Ingo and myself get out there and promote the benefits of exploratory testing, the better. We need to stop this damaging trend, and ensure that we retain the best skilled testers before they feel undervalued and move on.

Now, who’s with me on this?


The Testing Community

There are a number of things I have been involved with recently which have highlighted the importance and necessity of the Testing Community.

We are fortunate to work in an exciting industry – testing technologies, some ground-breaking and others less so, but what we do has an impact on others, and we like to think that our testing efforts result in better end products.

It’s this mindset which I think also feeds into some of the great collaboration that we see.

In my role, I co-run a Testers Chapter (we refer to it as the QA Chapter – I dislike using the term QA, but that’s for another post). This was created in 2011, with no more than 20 testers from different groups within our organisation. As of May 2017, I have 2 people helping me to organise and run the sessions, and there are 85 invitees from the UK, USA and Europe. It’s staggering, to be honest, to look at the numbers and think that we have that many testers across the globe – many of whom have never met, and are unaware of each other’s existence. It is a fantastic achievement, and something I am proud to have created and fostered. Our internal community has helped many testers to share problems and solutions, ask for general guidance, and not to feel alone in their day jobs. Some of the testers work with others, but there are testers who are alone in a group, and this really serves a need – people to reach out to and ask for help when needed. And that, for me, is why I do this.

Outside of work, there are many other communities of testers here in the UK, and I have started getting involved in mentoring, as part of the Ministry of Testing, and also the BCS Specialist Testing Interest group, as I want to give something back.

Actually, there are a lot of other testers out there who are doing just that. I could name so many who have set up free meetups, training sessions etc. (e.g. Rosie Sherry, Tony Bruce, Abby Bangser, Mark Winteringham, Dan Ashby, Richard Bradshaw to name a few), and what I love about this is how much time and effort people are prepared to give back – for nothing!

We are very lucky to be part of this, and I’d encourage you to get involved in your community – whether it is specific to a particular job discipline such as Testing, Business Analysis, or more general around Agile and delivery. It’s really rewarding to meet new people who you’d never get a chance to meet in your day job, and it’s fun to try out new things too.

If you haven’t connected with any other testers yet, why not do a quick Google search – within 5 minutes you’ll have found something going on near you.

Right, now to ‘walk the walk’ and book the next Testers Chapter….


Beating them at their own game!

This is a post about Google Chrome, the F12 Developer Tools feature and how I felt very smug after using it to get around restrictions!!

I installed an ad-blocker into Chrome, which is the main browser I use, and I noticed that certain sites were showing messages asking me to remove the ad-blocker (like the images below), or sign up for content – neither of which I want to do, to be honest. I have ensured that the sites cannot be identified from the screenshots below, as it is not my intention to draw attention to any specific sites; there are many out there that have these restrictions in place.


There was one particular article I wanted to read, and I could see no reason why I couldn’t do so, seeing as it wasn’t something that was exclusive to this particular site. I could have searched elsewhere, but I was feeling in a less than co-operative mood, so decided to have a play.

Pressing the F12 button and opening the Developer Tools gave me the chance to inspect the element on the page that was blocking the text, by clicking on the button (highlighted in yellow)….

….and then clicking on the blurred area on the page that I wanted to inspect:

This then showed the element in more detail and I could then investigate further.

I found that I had two different choices, depending upon the type of restrictions imposed:

  1. To read the plain text within the Developer Tools pane rather than on the screen, but that meant having to expand every element in order to reach each paragraph:
  2. To try removing the blocker itself in order to read the text on screen as intended by just deleting that line of text.

On one site I had to use option 1 and open each element to read it, as deleting the element actually deleted the text within. On another site I used option 2, and simply deleted the element from the page; the text was then visible with no restrictions.
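The effect of option 2 can be sketched offline with a toy HTML snippet (the class names are invented, and the regex only works on this toy string – on a real page you would use the DevTools element inspector, not regex):

```python
import re

# Toy page: the article text is already in the DOM, merely hidden by an overlay.
page = (
    '<div class="paywall-overlay">Please disable your ad-blocker</div>'
    '<div class="article">The full article text was here all along.</div>'
)

# Option 2 in miniature: delete the blocking element and the content remains.
without_overlay = re.sub(r'<div class="paywall-overlay">.*?</div>', '', page)
print(without_overlay)
```

The point is that the text was delivered to the browser all along; the overlay only sat on top of it.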

I was surprised how easy it was, and I guess that over time website builders will try to make this more difficult to do, but not many people really know about the F12 function, so I feel it my duty to help spread the word a little.

It really is that simple. If you are not sure, just have a play. If you delete things that you didn’t want to, just reload the page and try again. It really is satisfying to beat people at their own game sometimes!