I have recently finished reading ‘The Invisible Gorilla’ by Christopher Chabris and Daniel Simons.
It was quite an interesting read, containing many (perhaps too many) examples of how we make assumptions and fall into traps. This happens largely because we don’t analyse our views as thoroughly as we might, and so fool ourselves into believing we have all the information we need to make an observation or decision. Reading this book made me realise that much of what we see and experience, in life as well as in testing, is not always quite what it appears on closer inspection.

I don’t think the concept of WYSIATI (what you see is all there is) is referred to in this book, but it sums up a lot of what the book describes. We must sometimes look beyond the immediately obvious visual information, as what we see is not always the full picture. One reason we fall for this even when we are aware of the shortcoming is that our initial assessment is right the majority of the time, so we have little reason to believe that sometimes our judgment will be wrong. Below I’ve written about some of the main areas covered in the book.
The Illusion of Confidence
Confidence can be mistaken for an indication of the accuracy of someone’s statements. Equally, a statement delivered by an individual with low confidence can come across as less believable, even if they know exactly what they’re talking about. So it’s important not to base our confidence in others’ statements on the confidence of their delivery or demeanour. It’s better to make a decision based on a fully informed assessment of the facts rather than on the opinions of the most confident or most highly ranked.
The Illusion of Knowledge
Another trap we can fall into is believing we know more about a subject than we actually do. For example, if you were asked whether you know how a bike works, you would very likely say ‘Yes’. But if you were asked to describe in technical detail how the brakes or gears actually work, you would find it a lot harder. This made me realise that understanding the general concept of how something works by no means makes me an expert on it, and showed me just how much I don’t know.
The Illusion of Memory
The illusion of memory is another area covered, describing how we often fill in the gaps in our memory with fictional details, and not always deliberately. Our recounting of an event may also change each time it is recalled, even though we may be confident we know exactly what happened every time the story is told.
Correlation and Causation
I believe we can all be guilty of drawing conclusions from associations where two events happen at the same time, or one just before the other. It seems perfectly natural, almost expected, to make a link between the events where there may not necessarily be one. Even when two events consistently happen together they may not be causally related; there may be a third event which causes both of the otherwise unrelated events. Since reading the book I’ve noticed that news reports make very suspect associations like these with very little concrete evidence, using phrases such as ‘may be linked’ or ‘there could be a correlation between’. I found myself infuriated by this, where before I wouldn’t have given it a second thought, especially when questionable links are suggested in relation to health and disease, causing unnecessary worry for the public.
In relation to testing, all of the points made in the book are relevant, and it is a very good idea to keep them in mind when making observations, not only of software but also of other people, and to remain aware of our own assumptions and interpretations of events. I highly recommend this book to software testers, as it should make you think differently and take your time rather than relying on first impressions.