Anyone pondering how to make the most of artificial intelligence needs to be clear about exactly what they want it to achieve. They also need to avoid falling into the trap of comparing it to human intelligence.
To think of AI as a ‘clever human’ is to misunderstand it, and what it’s capable of. And that could affect how you develop it and what tasks you give it.
What do I mean? Well, if you’re wondering, for instance, how humans will interact with AI, instead ask yourself: How much does your dog worry about what you think? And how could you possibly know?
When we think about animals being intelligent, we always frame that thought in human terms. So, dogs are ‘loyal’ or ‘obedient’. In other words, they’re ‘dumb humans’.
Spaniels can’t solve equations
But this comparison doesn’t stack up, for two reasons. First, scale. A dog’s brain has about one billion neurons, and 10¹³ synapses connecting them. We have about 100 times more neurons and 1.5×10¹⁴ synapses. (You’ll notice we don’t actually make our brains work as hard as the average Labrador.)
On that basis, 100 dogs should be as intelligent as one person. But that’s ridiculous – 100 cocker spaniels will never solve a set of partial differential equations, no matter how long you give them. They may be cuter than a physics graduate, but their smaller individual brains run into a capability limit. Our larger brain lets us do processing tasks that aren’t possible for a dog.
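The arithmetic behind that comparison can be sanity-checked in a few lines. This is a back-of-the-envelope sketch: the neuron and synapse counts are the rough figures quoted above, not precise measurements.

```python
# Rough brain-scale figures from the text above (approximate, order-of-magnitude only).
dog_neurons = 1e9        # ~1 billion neurons in a dog's brain
dog_synapses = 1e13      # ~10^13 synapses
human_neurons = 100 * dog_neurons   # humans have ~100x more neurons
human_synapses = 1.5e14             # ~1.5 x 10^14 synapses

print(human_neurons / dog_neurons)    # → 100.0
print(human_synapses / dog_synapses)  # → 15.0
```

Even on these crude counts, the synapse ratio (15×) doesn’t track the neuron ratio (100×) — one more hint that intelligence doesn’t simply add up when you multiply brains.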
The comparison also falls down because of context. If your brain is the hardware you do your thinking with, your mind is the operating system – the bit of you that thinks it’s you. It works by manipulating symbols and concepts from the environment, or context, that the mind develops in. We can only interact with dogs at all because our minds have developed in the same physical world as theirs. But many social and cultural concepts also shape our minds, and dogs have many of their own social constructs.
These different contexts mean there are concepts in our thoughts that dogs can never understand. The environment our minds developed in gives us tools for concept-building and problem-solving that let us ponder issues a dog can’t even begin to think about.
Why does this matter? Because we make exactly the same mistake when we think about artificial intelligence. Human intelligence is the only one we understand, so we think of AI as a ‘smart human’. Or ‘like a person thinking very fast’, or ‘like someone doing massive multi-tasking’.
We talk about the scale of AI being like 100 human brains, then make the mistake of thinking it will be as clever as 100 people working together. But really, we can’t understand AI’s thinking any more than a dog can understand ours. A mature AI will be able to solve problems we’ll never be able to understand, and manipulate symbols and concepts we could never hold in our minds. It’s also likely to have developed its mind in a different context to ours, making communication difficult and potentially frustrating.
The more AI exceeds the capability of a human brain, the harder it will be for us to communicate with it on a level footing. An AI that’s too powerful may be poorly suited to the human-scale tasks we want it to do; that computing power would probably be better spread across 100 AIs than packed into one. One of the challenges of developing AI will be finding the sweet spot: powerful enough to be genuinely useful, but not so alien that we can no longer work with it.
So next time you catch yourself wondering about what AI is thinking about, ask yourself how you can possibly know.