Thoughts of a new(ish) Software tester



When should you Stop Testing?

As a testing community, I think we can all agree that, in practice, you can never test everything with every possible scenario.
In theory, if you had infinite time, then maybe you could.  Obviously we don't have infinite time, so we need to know when we're 'Done' as a way to know when to stop.  To be clear, for the purposes of this blog, I'm talking about testing in the context of a manual test environment.  In the real world, one could argue that testing never stops, even once software is in a live production environment.

 

I suppose the question I'm really asking here is 'Where do you draw the line?'  I'm not sure any two testers would draw it in the same place if they were individually asked to 'test' something.  One tester might spend more time looking at security and scalability, whereas another may put more emphasis on functionality and user experience.  If a tester is not considering both of these areas, amongst many, many others, before they begin testing, then their testing is potentially flawed.
'When should you stop testing?' is a vague, subjective question without more context.
There are several factors to consider when answering it, depending on the situation.

 

Some considerations might include:
  • Are you testing a minimum viable product (MVP) or a polished, fully fledged feature?
  • Are you limited by a strict deadline? (I would tackle the riskiest areas first if time restricted my ability to test everything I wanted to)
  • What industry do you work in? (you may be 'forced' to test certain things in certain ways)
  • How much of what you’re testing is covered by automated tests?
In my team we have an 'In scope' and an 'Out of scope' section in our story template, which we agree upon when we kick off our stories.  The Product Owner, Developers and tester(s) are all party to the conversation, so important areas have less chance of being missed.  Sometimes, after the kick-off, I will create a mind map of test ideas and share it with the team.  This often clears up assumptions, leading to a clearer test plan and a more accurate scope list.
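As an invented example, a story adding a discount-code field to a checkout might list 'In scope: code validation, error messages, order total recalculation' and 'Out of scope: checkout performance, the legacy basket page'.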
When I test, the first thing I do is read through the acceptance criteria (AC).  Even though some, if not all, of the AC may have unit or integration tests, I never quite feel comfortable assuming these areas are fully covered until I test them manually and see the results for myself.  This is no reflection on my confidence in the developers; it's more a lack of blind trust in automated tests, and an understanding that each test tends to cover only one specific, unchanging scenario.  I will then move on to the In Scope section written in the kick-off.  The In Scope section is a very good starting point for testing.  However, it can be restrictive, and potentially dangerous, if you treat it as a black and white rule, especially if you plan to do some exploratory testing (ET) later, which I highly recommend.
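To illustrate what I mean by 'one specific unchanging scenario', here is a minimal sketch (the discount function, its name and the values are all made up for illustration, not taken from any real product):

```python
import pytest


def apply_discount(price, code):
    """Hypothetical function under test: 'SAVE10' takes 10% off."""
    return price * 0.9 if code == "SAVE10" else price


def test_save10_takes_ten_percent_off():
    # This check only ever exercises one input: 100.0 with 'SAVE10'.
    # It says nothing about unknown codes, negative prices, rounding,
    # or combinations -- exactly the gaps manual testing can probe.
    assert apply_discount(100.0, "SAVE10") == pytest.approx(90.0)
```

The test passes forever while exercising a single frozen path, which is why a green build alone doesn't make me feel the area is covered.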

 


 

Finally, I would run some ET where necessary (several ET charters may be created, but not necessarily all run).  The very nature of ET is to have a charter to make sure you don't stray too far from the area you want to test.  If an area of discovery would take you too far outside the scope of your current charter, simply create a separate charter for it.  I wouldn't say you should ignore a particular area of curiosity just because it isn't stated as In Scope, or even if it's on the Out of Scope list.  You may also come up with more test ideas that you had not thought of during the story kick-off.
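For example, a charter (a made-up one, using the well-known 'Explore X with Y to discover Z' template) might read: 'Explore the new CSV import with malformed and oversized files to discover how errors are reported to the user.'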
As I've said, there is no black and white answer to the question of when to stop testing.  It's down to the tester and the supporting team to collaborate on what will and will not be tested.  The important thing is that everyone is aware of, and comfortable with, the results.  Hopefully this has given you some food for thought about things to consider before you can confidently say 'I'm Done!'

What NOT to test?

 

This is a question that is often overlooked when testing.  The first thing you would normally consider when you get a new feature is what tests you will run and in what areas.  Why bother thinking about what you won't test?


My answer is given in the context of carrying out exploratory testing on new features, as opposed to scripted or automated testing.  In scripted testing, the test cases are usually explicitly written (normally beforehand), have an expected outcome and a check-box to tick.  Once the tests have been written, there is no need to think about what to test, what not to test, or how to test it.  The 'Tester' (checker) can simply follow the instructions without diversion and tick or cross their boxes, their mind remaining unexpanded.


This discussion also excludes automation (a bit like a computer running the human-scripted tests).  I would expect basic happy path and failure (sad path) cases to be automated.
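As a rough sketch of what I mean (an invented login function, not from any real codebase), the automated happy and sad path checks might look something like this:

```python
import pytest


def login(username, password):
    """Toy example: accept one known credential pair, reject the rest."""
    if not username or not password:
        raise ValueError("username and password are required")
    return username == "alice" and password == "s3cret"


def test_happy_path_valid_credentials():
    assert login("alice", "s3cret") is True


def test_sad_path_wrong_password():
    assert login("alice", "wrong") is False


def test_sad_path_missing_input_raises():
    with pytest.raises(ValueError):
        login("", "s3cret")
```

With the obvious pass and fail cases pinned down like this, the tester's time is freed up for the more interesting, less predictable scenarios.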
This leaves Exploratory Testing.  This is where the world is the tester's oyster and they can let their creative juices flow (this is why we all love testing, right?).  There are no limits on what he or she chooses to test.  It is up to the tester to use their mind (expanding all the while) to work out what to test, and what not to test.




This question can't be properly answered until we have the answers to some further questions.


  • What are the acceptance criteria? (most if not all of these should already be covered by automation)
  • What part of the product is being changed? (and what other parts of the product are touched by this area?)
  • What is the deadline, i.e. how long can you spend testing? (If time is very limited, you might want to stick to risk-based testing and think about the most fundamental areas first, i.e. those which would impact customers most if they did not work)
  • What level of quality (for want of a better word) are customers expecting? (Minimum Viable Product (MVP) or super shiny finished product?)
  • Are 3rd parties involved? (i.e. does the code interact with 3rd parties and does your integration with those need testing?)
  • Are there dependencies? (e.g. does the new feature need to be backwards compatible with older versions of the product)


Talk to the developers to get their opinion on which parts of the product their code is likely to impact.  Don't forget to take what they say with a pinch of salt (https://patterson2a.wordpress.com/2012/07/01/take-it-with-a-pinch-of-salt), as they may be adamant that you can disregard a certain area of the product (they may also be adamant that their code never contains bugs!)


Ask the developers to show you the code and the tests they've written.  You are unlikely to understand it as well as they do, but it shouldn't do you any harm to see it; even better if they can talk you through it.  It may lead on to more test ideas and help answer the question of what not to test.


One method I like to use is to create a mind map for the story or feature (I try to pair on this with a fellow tester using mindmup.com).  The mind map normally contains the areas of the product I am considering testing for the story, and I may include some questions.  I then share this mind map with the rest of the team (developers, testers and product manager), asking for feedback.  Preferably this happens before any development (coding) has begun.  I want feedback on whether they feel the areas I've mentioned are appropriate to test, or whether any of them have no bearing on the story and could be considered not worth testing.
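To make that concrete (another invented example), for a 'password reset' story my top-level branches might be: the reset email itself, token expiry, password strength rules, error messages, and neighbouring areas such as login and account lockout.  The team's feedback might well strike one or two of those branches off as not worth testing.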


Once you have all this information, it is up to you to decide what you will and will not test.  There is no secret formula, but at least you can now make that decision armed with the information you have gathered.