What is automation bias and how can you prevent it?

At the heart of good foresight work is exposing and confronting assumptions and biases. Most people are unaware of just how many types of cognitive bias there are – Wikipedia lists 184.

And with the rapid expansion of artificial intelligence (AI) and automation technologies, one bias in particular will likely demand more of our attention – automation bias. That's the tendency to rely excessively on automated systems, to the point where incorrect automated output overrides correct human judgement.

Slave to the machine

We already have a rich history of automation bias to draw on, showing how people trust automated systems over their own judgements. For example, there are numerous cases of people blindly following GPS, like the group of tourists in Australia who drove into the Pacific Ocean.

This sort of thing happens so often in Death Valley, California, that the local rangers have coined the term “death by GPS”. There are many equivalent examples in aviation, where pilots have trusted automated navigation systems even when their own best judgement suggests otherwise.

Exploiting human weaknesses

Stress and time pressures, like those we see every day at work, can make automation bias worse. Under such conditions, and when dealing with complex problems, the human brain tends to favour the path of least resistance. This often leads us to outsource decision-making and discretion to automated systems. I’ve written previously on the wider issues associated with placing faith in ‘black box’ algorithms and how we might embed other kinds of bias in the automation code we create. Automation bias then further amplifies these embedded biases.
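
To make that concrete, here is a minimal sketch, in Python with scikit-learn, of how skewed historical decisions end up encoded in a model's weights. The hiring scenario, features and every number are invented purely for illustration.

```python
# A hypothetical sketch: skewed historical decisions become model weights.
# All data, features and thresholds here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: a genuine skill score. Feature 1: group membership (0 or 1).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Past decisions favoured group 0 regardless of skill, so the 'ground
# truth' labels used for training already carry the bias.
hired = ((skill + 1.5 * (group == 0)) > 0.5).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model learns the group effect alongside the skill effect.
print("weight on skill:", round(model.coef_[0][0], 2))
print("weight on group:", round(model.coef_[0][1], 2))  # negative: the bias was learned
```

Automate decisions on top of a model like this and the historical skew is applied at scale; automation bias then discourages anyone from questioning its output.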

Automation bias versus reality

When we consider how increasing automation bias, driven by the rapid rollout of AI and automation, will play out in the future, we begin to appreciate the risks of letting machines lead human thinking. At the heart of the problem is the fundamental way that AI and automation work – essentially by learning from large sets of data. Computation of this kind bakes in the assumption that the future will broadly resemble the past – that, projected forward, things won't be radically different. There's also the risk that if the training data is flawed, the learning will be flawed too.
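
A toy forecasting example shows that assumption at work. Everything below – the trend, the shock and the numbers – is hypothetical; the point is simply that a model fitted to the past can only project the past forward.

```python
# A toy illustration of the 'future resembles the past' assumption.
# The trend, the shock and every number are hypothetical.
import numpy as np

# Ten years of smooth historical growth...
years = np.arange(2010, 2020)
demand = 100 + 5 * (years - 2010)

# ...captured by a simple trend model learned from that data.
slope, intercept = np.polyfit(years, demand, 1)

# The model confidently projects the trend forward.
print(f"forecast for 2021: {slope * 2021 + intercept:.0f}")  # ~155

# But reality is not obliged to follow the training data: a shock
# (new regulation, a pandemic, a disruptive entrant) breaks the trend.
actual_2021 = 60  # hypothetical regime change
print(f"actual demand in 2021: {actual_2021}")
```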

By placing too much trust in automated decision-making, we risk reducing the rich spectrum of future possibilities to those that are computable. Should we base our decisions about the future on the recommendations of the SatNav? Or should we trust our own judgement and take a different course? Following the automated route leaves less room to address uncertainty, ambiguity, novelty and volatility – all conditions that are very much part of a reality that is rarely stable.

Oversight and symbiosis

What can organisations do to mitigate the risk of automation bias in their human workforce?

Firstly, there needs to be effective and robust governance of AI. This must ensure other types of bias aren’t encoded into algorithms and perpetuated through automation bias. Organisations need strong and considered oversight of both algorithms and training data to better understand why our automated systems behave the way they do.
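
What might such oversight look like in practice? One small, illustrative step is auditing training data for skew before a model ever sees it. The sketch below assumes a hypothetical dataset with a 'group' column and an arbitrary tolerance; real governance would involve far more than a single check.

```python
# A small, illustrative oversight step: audit training data for skew
# before a model is trained on it. The dataset, column names and the
# 0.2 tolerance are all hypothetical.
import pandas as pd

training_data = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "B"],
    "outcome": [1,   1,   0,   0,   0,   1,   0,   0],
})

# Compare historical outcome rates across groups.
rates = training_data.groupby("group")["outcome"].mean()
print(rates)  # A: 0.67, B: 0.20

# Flag the dataset for human review if the gap exceeds a tolerance.
if rates.max() - rates.min() > 0.2:
    print("WARNING: outcome rates differ sharply across groups; review before training")
```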

Secondly, organisations must train and empower human workers to use their uniquely human skills and judgement – intuition, empathy, critical thinking, abstract reasoning and so on. This enhances the automation and makes the whole greater than the sum of its parts, with human and machine working together. Garry Kasparov's 'Centaur Chess' is an example of this principle in action.

For many types of work, it would be better to treat 'the machine' as a trusted advisor rather than a leader. By following automated systems unquestioningly and encouraging people to think like machines, we risk side-lining our unique human skills and leaving ourselves ill-prepared for a messy, chaotic and unpredictable future.
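
One simple pattern for keeping the machine in the advisor's seat is to let it recommend while routing low-confidence cases to a human reviewer. The sketch below is a hypothetical illustration; the recommend() stub and the 0.8 threshold are invented, not drawn from any particular system.

```python
# A sketch of 'machine as advisor, not leader': the system recommends,
# but low-confidence cases are routed to a human. The recommend() stub
# and the 0.8 threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float  # 0.0 to 1.0

def recommend(case_id: str) -> Recommendation:
    # Stand-in for a real model; returns a canned answer for illustration.
    return Recommendation(action="approve", confidence=0.62)

def decide(case_id: str, threshold: float = 0.8) -> str:
    rec = recommend(case_id)
    if rec.confidence >= threshold:
        return f"auto: {rec.action} (confidence {rec.confidence:.2f})"
    # Below the threshold the machine advises and the human decides.
    return f"escalate to human review (machine suggests '{rec.action}')"

print(decide("case-001"))  # escalate to human review (machine suggests 'approve')
```

The design choice here is that automation still does the routine work, but the default for anything uncertain is a human decision – the opposite of automation bias.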
