Thoughts of a new(ish) Software tester


The Invisible Gorilla – Book Review

I have recently finished reading ‘The Invisible Gorilla’ by Christopher Chabris and Daniel Simons.
It was quite an interesting read and contained many (perhaps too many) examples of how we make assumptions and fall into traps.  This is largely because we don't analyse our views as thoroughly as we might, and so we can fool ourselves into believing we have all the information we need to make an observation or decision.  Reading this book has made me realise that much of what we see and experience in life, as well as in testing, is not always quite what it appears on closer inspection.  I don't think the concept of WYSIATI (what you see is all there is) is referred to in this book, but it sums up a lot of what the book describes: we must sometimes look beyond the immediately obvious visual information, as what we see is not always the full picture.  One of the reasons we fall for this even when we are aware of the shortcoming is that our initial assessment is right the majority of the time, so we have little reason to believe that sometimes our judgement will be wrong.

Below I've written about some of the main areas covered in the book.
Confidence
Confidence can be mistaken for an indication of the accuracy of someone's statements.  Equally, a statement delivered by an individual with low confidence can come across as less believable, even if they know exactly what they're talking about.  So it's important not to base our trust in others' statements on the confidence of their delivery or demeanour.  It's better to make a decision based on a fully informed assessment of the facts rather than on the opinions of the most confident or most highly ranked.
Familiarity
Another trap we can fall into is believing we know more about a subject than we actually do.  For example, if you were asked whether you know how a bike works you would very likely say 'Yes'.  But if you were asked to describe in technical detail how the brakes or gears work, you would find it a lot harder.  This made me realise that understanding the general concept of how things work by no means makes me an expert on the subject, and it showed me just how much I don't know.
Memory
The illusion of memory is another area covered.  It describes how we often fill in the gaps in our memory with fictional details, and this is not always deliberate.  An account of an event may also change each time it is recalled, even though we may be confident we know exactly what happened each time the story is told.
Correlation and Causation
I believe we can all be guilty of drawing conclusions from associations, where two events happen at the same time or one just before the other.  It seems perfectly natural, almost expected, to make a link between the events where there may not necessarily be one.  Even when two events consistently happen together they may not be causally related; there may be a third event which causes both of the otherwise unrelated events.  Since reading the book I've noticed that news reports make very suspect associations of this kind with very little concrete evidence, using phrases such as 'may be linked' or 'there could be a correlation between'.  I found myself feeling infuriated by this where before I wouldn't have given it a second thought, especially when questionable links are suggested in relation to health and disease, causing unnecessary worry for the public.
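To make the 'third event' point concrete, here is a minimal Python sketch (my own illustration, not an example from the book; the variable names are hypothetical).  A shared hidden driver produces a strong correlation between two series that have no causal link to each other:

```python
import random

rng = random.Random(0)

# A hidden "third event" (say, hot weather) drives both observed series.
# Neither series causes the other, yet they correlate strongly.
hidden = [rng.gauss(0, 1) for _ in range(5000)]
ice_cream_sales = [h + rng.gauss(0, 0.5) for h in hidden]
sunburn_cases = [h + rng.gauss(0, 0.5) for h in hidden]

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

print(f"correlation: {correlation(ice_cream_sales, sunburn_cases):.2f}")
# Prints roughly 0.80, despite there being no causal link between the two.
```

Testers can hit the same trap when two symptoms always appear together: before assuming one causes the other, it is worth looking for a common underlying cause.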
In relation to testing, all of the points made in the book are relevant, and it is a very good idea to keep them in mind when making observations.  This applies not only to observing software but also to observing other people, and to being aware of our own assumptions and interpretations of events.  I highly recommend this book to software testers, as it should make you think differently and take your time rather than relying on your first impressions.

Thinking, Fast and Slow – Book Review

Over the past few weeks I have been reading a book called Thinking, Fast and Slow by Daniel Kahneman.

This book was recommended to me by a good friend of mine who works as a software tester for another company.  He told me that it would change the way I think about how our minds work, and indeed it has.

As the title suggests, Daniel Kahneman describes how our minds are split into two main systems which we use when we think and make decisions.  He refers to these as System 1 and System 2.  System 1 is described as automatic and subconscious: when we feel we are acting on our instincts, gut reactions or hunches, we are said to be using our (fast) System 1 to make these decisions.  When we act on instinct we do not take a step back to analyse the situation before deciding; we just feel that it's right.  Often, this is exactly how we want our minds to work.  For example, if someone throws a ball in your direction you need to make a quick decision as to whether you're going to dodge or catch it.  Not a lot of conscious thought goes into the decision, as there is not enough time to think about what action you will take.

There are other situations where taking your time before acting is much more appropriate.  For example, if you're looking to buy a new car you won't make a quick choice based on looks alone; you will want to think about many aspects such as performance, economy and mileage.  Therefore, before making your choice you will have to use your (slow) conscious System 2.

It is the occasions where we don't feel a decision requires much thought that can lead to errors of judgement.  Something may seem simple and obvious on the face of it, but only when you really apply conscious time and thought do you see things more clearly.  This is a good point to bear in mind, especially when testing software.

One of the mistakes we are prone to make is letting our own personal experience of events bias our view of the probability that those events will happen in the future.  For example, if you have a family history of heart attacks and you are asked what percentage of deaths nationally are caused by heart attacks, the chances are you will overestimate the likelihood compared to someone who has no personal experience of them.  This is known as the Availability heuristic: instances which come to mind (are available) lead us to think that events are more common than they are in reality.
When we have personal experience of a subject we must not let that influence our view of the facts; however, this is easier said than done.  A related idea is the acronym WYSIATI, which stands for 'What you see is all there is'.  We each have our own view of the world, and everyone's view is different.  We often don't look or investigate any further than what we've seen personally, as we don't always believe there is more to see.
We need to train ourselves to think about the bigger picture, as there is often a lot more going on that we don't realise, simply because we haven't paid attention to it.

Below I’ve briefly described a few of the many ideas Daniel talks about in the book which are useful to bear in mind, especially when applied to software testing.

Sample size
One way in which it can be very easy to arrive at an incorrect conclusion is when judgements are made from a small sample.  The sample size should be large enough that natural fluctuations in the results do not skew the overall result.  For example, if you toss a coin 1,000 times, as well as being very bored and worn out, the percentage of times you see heads should not be far from 50%.  However, if you only toss the coin 10 times, you could quite easily see 7 heads out of 10.  We all know that the probability of seeing a head is 50%, but when we don't know the underlying probability we need to choose a sample large enough to wash out these natural fluctuations.
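As a quick illustration (my own sketch, not from the book), here is a small Python simulation of the coin example.  With 10 tosses the observed fraction of heads swings widely between runs; with 1,000 tosses it stays close to 50%:

```python
import random

def heads_fraction(num_tosses: int, seed: int) -> float:
    """Toss a fair coin num_tosses times and return the fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(num_tosses))
    return heads / num_tosses

# Repeat the experiment 20 times at each sample size and show the spread.
for n in (10, 1000):
    fractions = [heads_fraction(n, seed) for seed in range(20)]
    print(f"n={n:4d}: min={min(fractions):.2f}, max={max(fractions):.2f}")
```

The same caution applies to testing: a behaviour seen in a handful of runs may just be noise, so it is worth repeating an observation before drawing a conclusion from it.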

Answering an easier question
When you are trying to answer a difficult question, or one you do not know much about, it can be natural to give an answer based on the knowledge you do have about something linked to the question.  For example, if you are asked 'How happy are you with your life?' you are likely to give an answer based on how you feel about things right now, rather than thinking more objectively about your life as a whole.

Regression to the mean
Sometimes people can misinterpret a correlation between events as causal just because they both occurred at the same time.  Daniel explains that it's important to bear in mind that repeated measurements of a particular quantity tend to form a bell-shaped curve over time, with fewer results at the extremes and the majority falling somewhere between them.  Because of this, extreme results tend to be followed by results closer to the mean, or average: a general regression to the mean.
This can explain why, more often than not, punishing a bad result or low score is followed by an improvement, while rewarding success is followed by a worsening in performance.  This phenomenon can result in the mistaken belief that it was the punishment that caused the improvement, or the reward that led to the deterioration.  This is a great shame, as many employers are not aware of regression to the mean and believe that their punishment of poor performance is always responsible for the subsequent improvement, so they have no reason to change this behaviour.
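A minimal Python sketch (my own illustration, not from the book) makes this visible.  Each score below is a fixed skill plus random luck, and nothing at all is done between the two attempts, yet the worst performers 'improve' and the best 'decline' on average:

```python
import random

rng = random.Random(42)

def score(skill: float) -> float:
    """One performance: underlying skill plus independent random luck."""
    return skill + rng.gauss(0, 1)

skills = [rng.gauss(0, 1) for _ in range(10_000)]
first = [score(s) for s in skills]
second = [score(s) for s in skills]  # no reward or punishment in between

def mean(values):
    return sum(values) / len(values)

worst = [i for i, s in enumerate(first) if s < -1.5]  # "punished" group
best = [i for i, s in enumerate(first) if s > 1.5]    # "rewarded" group

print(f"worst group: {mean([first[i] for i in worst]):.2f} "
      f"-> {mean([second[i] for i in worst]):.2f}")
print(f"best group:  {mean([first[i] for i in best]):.2f} "
      f"-> {mean([second[i] for i in best]):.2f}")
```

No intervention happened between the attempts, yet both extremes moved back towards the average; crediting the change to praise or criticism would be exactly the mistake described above.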

Since reading Thinking, Fast and Slow I've found myself thinking about real-life situations where I can apply the principles described.  Even when you have read the book and know how our minds work, it still seems almost unavoidable that you will fall into many of the traps our brains appear hardwired to make us susceptible to.  But having that knowledge of how our minds work, and of the flaws that exist, can be empowering and should lead to more comprehensive testing.