You may have come across Charles Perrow’s book analysing the social side of technological risk. In ‘Normal Accidents: Living with High-Risk Technologies’, he argues that conventional approaches to ensuring safety (for example, adding more warnings and safeguards) fail. Why? Because the complexity of such systems makes failure inevitable.
Perrow then presents a framework for analysing the risk of accidents – undermining the idea that better management and better-trained employees alone can prevent serious industrial accidents. His book was last updated in 1999, but having recently dipped into it again, I’ve found plenty that’s relevant in this era of hyper-connectivity.
I find myself wondering what Perrow, had he written his book now, would have made of the Internet of Things (IoT) and the increasing use of automation and artificial intelligence (AI). These are technologies that can considerably increase complexity and make systems harder for humans to understand. And this greater interconnectivity, combined with automation, could in some circumstances lead to new categories of accidents emerging.
Today, many aspects of people’s lives are experienced through connected devices, and the trend is only increasing. So we need to prepare for more accidents caused by greater complexity and interconnectedness. But what are the main areas of risk?
The massively diverse and expanding ecosystem of the IoT presents a number of challenges – particularly in the area of security. Many of the new players in the IoT have little experience in cyber security, and a lot of the attacks targeting toys, cars, white goods and factories are ones the security industry has been aware of for many years. Botnets, in particular, exploit the complexity and interconnectedness of today’s internet.
‘Black box’ algorithms
Another potential risk comes from the often opaque ‘black box’ algorithms of AI. Unintended consequences might arise from poorly specified goals that a system optimises towards, or simply from unanticipated interactions between these highly complex pieces of software.
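To make the ‘poorly specified goal’ risk concrete, here is a toy sketch (the function names, numbers and scenario are my own illustration, not any real control system): an optimiser is rewarded only for saving energy, with comfort left out of the goal, so the ‘optimal’ policy it finds is simply to switch the heating off.

```python
# Toy illustration of a poorly specified goal: the objective rewards only
# energy savings and omits comfort, so optimising it produces an unintended
# outcome. All names and values here are hypothetical.

def badly_specified_goal(heating_level: float) -> float:
    """Reward = energy saved. Comfort is missing from the specification."""
    return 1.0 - heating_level  # heating_level in [0, 1]

def comfort(heating_level: float, outside_temp: float = 5.0) -> float:
    """What we actually cared about, but never told the optimiser."""
    indoor_temp = outside_temp + 15.0 * heating_level
    return -abs(21.0 - indoor_temp)  # 0 is ideal; more negative is worse

# A simple grid search stands in for any optimiser chasing the stated goal.
candidates = [i / 10 for i in range(11)]
best = max(candidates, key=badly_specified_goal)

print(f"chosen heating level: {best}")           # heating off entirely
print(f"resulting comfort:    {comfort(best)}")  # far from the ideal of 0
```

The optimiser is doing exactly what it was asked; the accident is in the specification, which is precisely why such failures are hard to prevent with more safeguards alone.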
The ‘flash crashes’ of 2010 and 2016 are examples of accidents arising from automated, high-frequency trading. Meanwhile, incorrectly heating your house because a machine-learning algorithm predicted you would be home is wasteful. Doing the same for the cooling plant of a factory could have a far more significant impact.
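The flash-crash mechanism – simple automated rules interacting into a cascade – can be sketched in a few lines. This is an illustrative toy, not a model of real markets or of the actual 2010/2016 events: a momentum-following bot sells into a price fall, which pushes the price past a second bot’s stop-loss threshold, whose selling then feeds the first bot’s momentum signal.

```python
# Toy feedback loop between two automated strategies (hypothetical rules
# and thresholds): a small shock dies out, but a larger shock from the
# same rules cascades. Illustrative only; not a model of real markets.

def simulate(initial_shock: float, steps: int = 10) -> list:
    price = 100.0
    prices = [price]
    prices.append(price - initial_shock)
    price = prices[-1]
    for _ in range(steps):
        sell_pressure = 0.0
        momentum = prices[-2] - prices[-1]       # size of the recent fall
        if momentum > 0:
            sell_pressure += 0.8 * momentum      # momentum bot sells into the fall
        if price < 98.0:
            sell_pressure += 0.5                 # stop-loss bot dumps below its threshold
        price -= sell_pressure
        prices.append(price)
    return prices

calm = simulate(initial_shock=0.1)   # small dip: momentum decays, stop-loss never fires
crash = simulate(initial_shock=1.0)  # larger dip: stop-loss fires, the fall accelerates
print(f"after small shock: {calm[-1]:.2f}")
print(f"after large shock: {crash[-1]:.2f}")
```

Neither rule is faulty in isolation; the accident emerges from their interaction – the kind of complex, tightly coupled behaviour Perrow’s framework warns about.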
The IoT provides us with an amazing platform for innovation and has the potential to greatly improve our world. It would be a shame to see those future benefits eroded, and the risk of normal accidents increase, because we failed to build in concepts such as risk assessment, ethical oversight, secure design and privacy amid the excitement of rapidly developing technology.
Creating risk management practices for the IoT is no small matter. Many first-generation products and services contain vulnerabilities and elements of bad, risky design. Remember the widespread exploitation of connected devices by the Mirai malware in 2016?
Although designed in a different era, Perrow’s framework provides us with a useful model for considering risk in our complex, automated and hyper-connected world. Its key dimensions – the complexity of a system’s interactions and how tightly its parts are coupled – remain highly relevant today. With careful thought from businesses and regulators, as well as better education for individuals, our future could be one that’s more secure and contains less risk from accidents.
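Perrow’s two dimensions can be sketched as a simple 2×2 classification. The cell labels below are my own shorthand for his quadrants, not his exact wording; the point is that systems that are both complex in their interactions and tightly coupled sit in the quadrant where he argued accidents become ‘normal’.

```python
# A minimal encoding of Perrow's two-dimensional framework. Quadrant labels
# are illustrative shorthand, not Perrow's exact terminology for each cell.

def perrow_quadrant(complex_interactions: bool, tightly_coupled: bool) -> str:
    if complex_interactions and tightly_coupled:
        return "normal-accident prone"        # where Perrow expects accidents
    if complex_interactions:
        return "complex but loosely coupled"  # failures surprising but contained
    if tightly_coupled:
        return "linear but tightly coupled"   # failures fast but predictable
    return "linear and loosely coupled"       # conventional safeguards work well

# A hyper-connected, automated IoT deployment plausibly sits in the first cell.
print(perrow_quadrant(complex_interactions=True, tightly_coupled=True))
```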