Good testing – luck or judgement?

I get to talk to a lot of testers: not just those in my team, but people in other teams in my organisation, testers at conferences and those I interview. It struck me the other day that there is no real way to tell from a person’s background whether they are going to be a good tester or not. Wouldn’t it be great if we could apply a formula? It would make life so much simpler. But I don’t believe it comes down to training or background; it’s either in-built or not. What I mean is that, in my opinion, good testing cannot be left to luck (although I suppose it can be, if you just happen to find that critical bug without looking for it!). For me it is all about attitude and mindset. Technical ability and a focus on automation seem to be the overriding requirements for many testing roles, and whilst they are important, they are not the ‘be all and end all’.

If we define a good tester as someone who can write automated tests, then we are essentially defining a developer. So why do we need a tester to focus on writing code if developers can do that?

Good testing comes from understanding the application, understanding the requirements (usually from a User Story if we are following Agile), and determining whether the requirement is testable, whether there are any omissions, clashes with other stories, or potential impacts on other applications, and what types of testing are needed in order to prove to the Business Owner that the team have delivered what was requested. Good testing is found in how a tester approaches their job, and I know I have mentioned this before, but it is where a tester adds value to a team. Having another individual who can simply automate the acceptance criteria that a Business Analyst wrote adds no additional value – the team have essentially employed another developer.

A good tester needs to think about the application, and consider it as an end user, ask the questions that no-one else thinks of and be inquisitive. A good tester has to exercise good judgement in determining what to test and how to test it, and whilst this can be learned (to a degree), much of this will come from a person’s character.

If you are a good tester, then luck will play second fiddle to judgement, every time.

Taking Testbash to work

Well, as promised, this is the follow up to my earlier posts about Testbash.

I said that my colleague Bhagya and I were going to use the materials from the excellent ‘Building Quality with Distributed teams’ workshop by Lisa Crispin and Abby Bangser, and run it in-house – and yesterday we did!

It ran for a whole morning – 3 hours – and to say that it was tiring would be an understatement. I really had no idea how much it was going to take out of me, but I felt such a buzz afterwards, as did Bhagya.

We had 23 people involved (one couldn’t make it), enough for two teams. Each team was split into an onsite and an offsite group, with us as Product Owners, and we had two fantastic observers (one was Michael @lopezma) to help, who went upstairs to the meeting rooms we had secured for the ‘offsite’ guys. We had a travel freeze, so we could only stay with the onsite team. Communication between the two halves of each team was therefore paramount! We needed to communicate the timings of planning, sprints and retros to the observers so they could let the offsite groups know.

Each team had a laptop for communicating over Skype, although that in itself caused some issues – exactly mirroring the problems that arise when relying on technology at times.

The objective was simple – to colour in pictures over a 3-sprint period, with the onsite team doing the first estimations. Sprints 1 & 2 would run in succession, and then there was a break before sprint 3 for coffee and a whole-team catch-up. The idea was that, having had 2 iterations and retrospectives, the teams should be learning from their mistakes and assumptions, and putting things in place for the final sprint. Of course we threw in a few curve balls – I changed my mind and said crayons were not good enough, they had to use pens (but only one team had pens), anything brown had to be tan, and then I took a ‘holiday’ for half a sprint (3 1/2 minutes).

After the final sprint, we looked at what had been accepted, and I was obviously a tough taskmaster as I had rejected many pictures for quality issues (going over the lines, leaving gaps etc), but it was all good fun.

The roundup was interesting, with a lot of thoughts shared in the room from both the onsite and offsite groups around the problems they had faced – comms, the offsite group not knowing the full story of what the task was, not having enough work to do, not having any face time with the PO unless requested, etc. So many assumptions had been made, and a lot of people who had been in the offsite group came to realise just how little information we pass to our offsite colleagues, and how hard it makes their job.

We asked for feedback and of the 23 people, 21 learned something new – the other 2 had experienced this before. There were many comments on what had been learned, and some useful feedback as to how we can improve things for next time. One thing I will do is ask for another helper to look after sprint timings, as it was hard to keep us all on track and be a Product Owner at the same time.

Yesterday was one of the best days I have had at work for a long time – making a difference to how we as an organisation can work with distributed teams. Absolutely fantastic! The work continues…

Exploratory testing isn’t just playing!

Exploratory testing is something that has been discussed, put on CVs, and probably put in the syllabus of testing exams, but I wonder if we really understand what it is.

I’ve heard it described as ‘playing’, ‘random testing’, ‘unstructured testing’, and ‘the tests you fit in before you look at a user story’.

Out of those, the only one that gets close is ‘unstructured testing’. The point of performing exploratory testing is to step away from the user story confirmations and acceptance criteria, and as a tester allow your mind to really think about what is being delivered.

The questions we need to ask ourselves are:

  • Does the story make sense on its own?
  • Does the story offer functionality that fits in with the rest of the application? There is no point in it working but then not making any sense when added to the application!
  • Are there any provisions for the edge cases that real users will hit:
    • hitting F5,
    • accidentally pressing the Back button,
    • right mouse clicking,
    • accidentally clicking on a link and then going back to the page,
    • trying to by-pass a process to save time,
    • pasting text into a text or search box rather than typing it in,
    • etc.

You get the idea from the above list.
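
Some of those edge cases can even be pinned down as quick automated checks once you have spotted them. As a small illustration (the `normalise_query` function and its behaviour are invented for this example, not taken from any real application), pasted text often arrives with stray whitespace or newlines that typed text would never contain:

```python
# Hypothetical example: a search box should cope with text that has been
# pasted in rather than typed. Pasted text often carries leading/trailing
# whitespace or embedded newlines from wherever it was copied from.

def normalise_query(raw: str) -> str:
    """Collapse whitespace and newlines in a (possibly pasted) search query."""
    return " ".join(raw.split())

# Typed input passes through unchanged...
assert normalise_query("red shoes") == "red shoes"
# ...while pasted input is cleaned up rather than breaking the search.
assert normalise_query("  red\nshoes\t") == "red shoes"
```

The point is not the code itself, but that an exploratory session is what surfaces the scenario in the first place – the automation only pins down what you have already discovered.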

We don’t often have time to think much about what we do – there are always time pressures, so it makes sense to put aside even just 15 to 20 minutes to step back, and allow your mind to come up with other scenarios that are not part of the acceptance criteria, because no-one can possibly think of everything before the code exists. There will always be something that you spot once the code is delivered and there is something tangible to navigate through.

Based on my own experience, exploratory testing has real benefits and should be actively encouraged within teams.

So, after reading that, do you agree or disagree with my view of what exploratory testing is and whether it is a benefit or not? Feel free to comment, thank you.

Testers have forgotten how to plan tests……

The tester of 2015 cannot seem to plan tests in a general sense, as they are lost without a user story.

I’m being contentious here, but this is a serious subject – it’s not just done to provoke a reaction, although I would welcome a debate whether you agree or disagree with my viewpoint.

Before Agile we worked from large requirement documents. It meant that we were testing something of a reasonable size, and it involved having to plan out our tests to cover all the scenarios. This pretty much covered a complete piece of work from end to end, and as a tester I could plan all the tests needed, identify gaps and add those in as well.

Then came Agile. There are many good things about Agile, and thus far I had only found one thing that is not great (in sprints, people don’t go back and compare actual versus estimated hours for each task, so no lessons about over- or under-estimating can be learned), but over the past 6 months, I have noticed another.

During tester interviews, I ask candidates how they would approach testing a website – what are the things they should be concerned with. What I am looking for is a considered list:

  • UI – text, fonts etc,
  • Field level tests (positive and negative),
  • Tab order,
  • Links to other pages and other sites,
  • Button functions,
  • Page load times,
  • Cross browser tests,
  • Usability tests,
  • etc.
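
The ‘field level tests (positive and negative)’ item is the easiest one to make concrete. As a hedged sketch (the `is_valid_age` validator and its rules below are invented for illustration, not taken from any real site), a positive test feeds the field something it should accept, and a negative test something it should reject:

```python
# Illustrative only: a simple field validator of the kind a website form
# might sit behind. The rules (digits only, value between 1 and 120) are
# assumptions made purely for the sake of the example.

def is_valid_age(value: str) -> bool:
    """Accept an age field: digits only, between 1 and 120 inclusive."""
    if not value.isdigit():
        return False
    return 1 <= int(value) <= 120

# Positive tests - values the field should accept.
assert is_valid_age("1")
assert is_valid_age("42")
assert is_valid_age("120")

# Negative tests - values the field should reject.
assert not is_valid_age("")       # empty input
assert not is_valid_age("0")      # below range
assert not is_valid_age("121")    # above range
assert not is_valid_age("-5")     # isdigit() is False for a signed string
assert not is_valid_age("abc")    # non-numeric
```

None of this needs a user story – it is the kind of general test thinking I am hoping candidates will reach for unprompted.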

Without exception, every candidate starts by saying that it depends on the user story. OK, I get that they work in an Agile way, so I stop them and ask them to do some blue-sky thinking. Imagine there are no requirements (a look of shock appeared on one candidate’s face!) and you are just placed in front of a website. What general tests would you do?

To my surprise and somewhat sadly I feel, they struggle with this concept – the security blanket of a story has been removed and they do not know what to do.

We have created a class of testers who are only able to work from the very narrow world of user stories, and it is detrimental to our industry. I hear testers talk about Exploratory testing and think they know what it is, but I don’t think they do. Exploratory testing is the art of not constraining yourself to a story as it is written, or to its acceptance criteria, but of thinking more broadly. What other tests could you do that no-one else thought of for you? Does the story fit with the others?

If testers can do this, then they can answer a simple question about how to do a general test of a website.

My worry is that if the tester of 2015 can only test against criteria that a BA has provided, where is the value in having a tester at all? Testers are uniquely placed in any development team to think of scenarios that no-one else has even considered. Every tester should be looking to add value every day to their team by thinking more broadly than just what is written in the user story.

If you read this as a tester, I challenge you to write down for a week every test you have thought of that no-one else did, and share them. Let’s encourage each other to regain the skills that we seem to be losing.

Thinking outside the box is good – you just need to try it!!

UKTMF – Food for thought

I attended the UK Test Management Forum yesterday (see previous blog), and I had invited Stephen Janaway from Net-A-Porter to do a talk for us.

His talk was excellent to be honest. It generated so much discussion that we ran out of time (we had 75 minutes allocated), and that is a sign of a great discussion topic. And it wasn’t just 2 people speaking up – most of the 15 people in that session contributed.

The talk was about the future of Test Management – how to manage testers without there being a formal Test Manager role in place in an Agile organisation. It is ironic that many of us spent years working our way up to a role which conceivably may not exist in 5-10 years time! But there is some hope….

Stephen recounted his experience of moving away from a formal management-type role to a coaching role, and it made me really think about the benefits of that type of approach. All the developers, testers and BAs report to one person in a project team, rather than the testers reporting outside, but of course this leads to concerns about non-testers managing testers, how that works in terms of career paths and training, and also maintaining good test standards in each team. And that is where the coach comes in.

It actually sounds like a really interesting role – to be able to work alongside teams who need guidance in doing what they do better, as long as it doesn’t feel like they are being seen as a failing team. Removing the people management from a role can free up the time to really look at the test process. Is it something the tester does, or do the whole team take on test tasks (which they should in an Agile world)? Are tests automated, or performed manually so that regression coverage is patchy? Is non-functional testing covered? What state are the user stories in – are they of good quality, so that the team are delivering what the customer actually wants, to the standard they want? And so on.

Stephen also runs a Chapter for testers (actually I do the same thing in my organisation), as it is a great way to bring together testers from disparate teams to be able to share best practice, do showcases, invite speakers etc. He is really enjoying the role, and it is encouraging to see where we as Test Managers could be progressing towards in the future.

It isn’t often that I have come away from a talk with my head spinning with thoughts and ideas, so thanks Stephen!

And you can read more about Stephen and his experiences at http://www.stephenjanaway.co.uk/

UKTMF

First off, Happy New Year!

It’s been about a month since I last posted something, and with the Christmas holiday I did think it would be better to wait until now before restarting.

So, one full day back at work and the holiday seems a distant memory already, but there is something to look forward to, and that is the UKTMF, for Test Managers.

UKTMF is the UK Test Management Forum, which has been around for 10 years, meeting quarterly in central London, and the 45th session is on Wednesday 28th January from 1.30pm GMT. It was run single-handedly by Paul Gerrard up until last summer, when he asked for volunteers to help. I was one of a number of people who stepped forward, and am now a ‘friend of the forum’, helping out where needed – so I will declare an interest right now.

I do not intend to use this blog to promote attendance by Test Managers at every one of these, as I think you would soon get bored, but there is a specific talk that I would highlight, and that is by Stephen Janaway, on ‘How to Focus On Testing When There Are No Test Managers’.

Over the past few years there has been a trend away from having Test Managers performing the traditional role. In Agile teams, testers often report, along with the other Agile team members, to another role such as the Product Owner. This then leaves us (who have spent years in testing working our way up to Test Manager level) to wonder where our careers will go next. Stephen has been through this and can give us an interesting perspective on how his role has changed, and I hope this will open a discussion as to how we need to adapt to the changing role of the Test Manager, and to look at skills we have that are transferable to other roles within IT.

The cost is £20, plus VAT, which is great value for an afternoon session. There are three parallel talks at 2pm and three more at 3.45pm – you decide on the day which ones to attend. And there are refreshments, and drinks afterwards.

The other talks are equally good, and Joanna Newman will be doing one at the same time as Stephen’s on how to attract, retain and motivate ‘millennials’. I will be very torn on the day as I’d like to go to both!!

If you’d like to book a place, then here is the link: http://www.eventbrite.co.uk/e/test-management-forum-wednesday-28-january-2015-tickets-14988876132

Thanks for reading!

Quality is a team goal.

Quality.

It’s an interesting word to try and define, but that is not the point of this post, so I am going to clarify the terms in which I am using it. In providing software for someone to use, quality means that the application or program does what the user wants it to do, is as free from defects as it realistically can be, is usable, and does not leave the user cross or frustrated, but satisfies their needs. I’m aware people will disagree, so I may take this up in a separate post!

In software delivery teams, Quality is often taken to mean ‘there are no bugs in the code and the acceptance criteria in the user story (I am using Agile as an example here) have been met’. It is also very often left as the Tester’s job in the team to ensure that Quality has been met.

I have a huge problem with this. Quality cannot be tested in, so what is the point in leaving the tester to find out whether something doesn’t work as it should? Is that not the same as leaving the stable door open and then asking the tester to see if the horse is still there, and to raise the alarm if it has gone missing?

The most sensible thing is to make product Quality a TEAM goal. Quality is everyone’s responsibility:

  • The Product Manager must work with the team to promote quality in every task.
  • The Business Analyst, in writing the user story, must work with the business stakeholders to question anything that would appear to make the user journey more complex than it need be, highlighting improvements in the proposals.
  • The Business Analyst must walk through the story with the Developer and Tester to explain what is required and answer any questions, so there is a common understanding (the 3 amigos discussion).
  • Developers must write good code, fully unit tested and integration tested, before performing a demo to the tester.
  • The Tester must exercise tests to cover the delivered code, regression test, perform exploratory testing around the story (are there any potential pitfalls or problem areas?), and execute performance and security tests.
  • The Team then perform a demo to the end user/stakeholder, and there should be no surprises. The delivered software meets their requirements, is robust and does not break!
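
The ‘fully unit tested’ step in that list is cheap to illustrate. Here is a minimal sketch (the `apply_discount` function and its rules are invented for this example, not drawn from any real codebase) of the kind of check a developer would run before the code is ever demoed to the tester:

```python
# Hypothetical example: a developer-level unit test exercised before the
# code reaches the tester. The discount rules here are assumptions made
# purely for illustration.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount; reject nonsense inputs loudly."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("price must be >= 0 and percent in 0-100")
    return round(price * (1 - percent / 100), 2)

# The happy path the story asked for...
assert apply_discount(100.0, 25) == 75.0
# ...and the edges the tester would otherwise be left to find later.
assert apply_discount(100.0, 0) == 100.0
assert apply_discount(100.0, 100) == 0.0
try:
    apply_discount(100.0, 150)
    raise AssertionError("should have raised ValueError")
except ValueError:
    pass
```

When the developer has already caught these, the tester’s time goes into exploratory, regression and non-functional testing – which is where they add value – rather than into finding arithmetic bugs.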

It is basic common sense not to expect the very last person to look at the software to be the only person responsible for the quality of what has been delivered – unless the team likes spending time rewriting code, missing deadlines and seeing a high turnover of bored and frustrated testers!

So, if you are reading this and are one of those testers expected to ‘do quality’ alone, take heart. Things are changing, expectations are changing, and if your organisation will not accept the inevitable, then you have 3 choices: 1) stay and moan, 2) give up and leave, or 3) stay and educate the team. Find a developer and a BA who share your opinions on Quality. Work with them, and use the success to show the rest of the team how much better and quicker the turnaround was without rewriting and rushing to meet the deadline. Offer to work with the more reluctant team members, point out the issues that you would term ‘quality’ issues, and they will start to understand and learn from you.

If you are reading this as a PM, BA or developer, please do not leave the tester in your team to bear the overall responsibility. Remember that in football, if the goalie lets in a goal, it’s a team failure – the midfield and defence failed to do their jobs and left the goalie to do all the work!