This article was first published on Computer Weekly
AI is commonly defined as the ability of a machine to perform tasks associated with intelligent beings. And that’s where our first problem with language appears.
Intelligence is a highly subjective phenomenon. Often the tasks machines struggle with most, such as navigating a busy station, are those people do effortlessly without a great deal of intelligence.
We tend to anthropomorphise AI based on our own understanding of “intelligence” and cultural baggage, such as the portrayal of AI in science fiction.
The American developmental psychologist Howard Gardner, beginning with his 1983 book Frames of Mind, has described nine types of human intelligence – naturalist (nature smart), musical (sound smart), logical-mathematical (number/reasoning smart), existential (life smart), interpersonal (people smart), bodily-kinaesthetic (body smart), linguistic (word smart), intrapersonal (self smart) and spatial (picture smart).
If AI were truly intelligent, it should have equal potential in all these areas, but we instinctively know machines would be better at some than others.
Even when technological progress appears to be made, the language can mask what is actually happening. In the field of affective computing, where machines can both recognise and reflect human emotions, the machine processing of emotions is entirely different from the biological process in people, and from the interpersonal emotional intelligence categorised by Gardner.
So, having established the term “intelligence” can be somewhat problematic in describing what machines can and can’t do, let’s now focus on machine learning – the domain within AI that offers the greatest attraction and benefits to businesses today.
The problem with learning
The idea of learning itself is somewhat loaded. For many, it conjures mental images of our school days and experiences in education.
Here again, the process of machine learning is different – machines draw conclusions from huge quantities of data according to rules set down in algorithms. They have no inherent context, experience, ethics, or culture to draw upon.
Machines simply don’t learn in the same way as people. So for organisations to ensure they get the best out of AI and put in place the necessary governance and oversight, it’s important to break down what machine learning actually means.
To do this, it might help to explore the language around machine learning and ask “what is being taught?” and “who is teaching?”
The answer to the first question essentially boils down to the training data. In supervised machine learning this will either be your own data, validated to ensure it yields the answers you expect, or external data that has been validated by others. In both cases the question to ask is: “is this training data valid and free from prejudice and bias?”
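One way to make that question concrete is to audit the training data before any model sees it. The sketch below is a minimal, hypothetical illustration: it assumes a loan-approval dataset where each record pairs a demographic group with a historical approved/rejected label, and simply compares approval rates across groups. A large gap is exactly the kind of historical prejudice a model would faithfully reproduce.

```python
from collections import Counter

def approval_rate_by_group(records):
    """Compute the historical approval rate for each demographic group.

    Each record is a (group, label) pair, where label is 1 for
    'approved' and 0 for 'rejected'. Large gaps between groups are
    a warning sign that the data encodes past bias.
    """
    totals = Counter()
    approvals = Counter()
    for group, label in records:
        totals[group] += 1
        approvals[group] += label
    return {g: approvals[g] / totals[g] for g in totals}

# Hypothetical historical data: group A was approved twice as often as group B.
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 40 + [("B", 0)] * 60)
print(approval_rate_by_group(history))  # {'A': 0.8, 'B': 0.4}
```

A real audit would go much further (proxy variables, intersectional groups, statistical significance), but even this simple check makes the “free from prejudice and bias?” question testable rather than rhetorical.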
The answer to the second question is: no one. With machine learning, the idea is that you feed in vast amounts of data and the computer applies pre-defined rules to derive meaning, sometimes even refining those rules as it goes.
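To see how little “teaching” is involved, consider a minimal sketch of learning from data (the function and figures below are illustrative, not from the article): fitting the slope of y = m·x by least squares. The “learning” is nothing more than arithmetic over the examples; the machine has no context for what x or y mean.

```python
def fit_slope(xs, ys):
    """Least-squares fit of the rule y = m*x through the origin.

    The derived rule (the slope m) is whatever minimises squared
    error on the supplied data -- no teacher, no understanding,
    just arithmetic over the examples.
    """
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Hypothetical measurements that roughly follow y = 2x.
xs = [1, 2, 3, 4]
ys = [2.1, 3.9, 6.2, 7.8]
m = fit_slope(xs, ys)
print(round(m, 2))  # close to 2
```

Swap in different data and the machine will just as happily derive a different rule, which is why the questions about who chose the data and designed the algorithm matter so much.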
So what needs to be understood is “how has the algorithm been designed and by whom?”, “what specific data will be used?” and “how will this data be chosen?”
For organisations adopting AI, greater emphasis should be placed on who is teaching than what is being taught. In the same way that a bad teacher will deliver worse outcomes for school children than a good one, badly taught AI will lead to bias, discrimination and undesirable outcomes.
Having strong, detailed answers to the questions above will be key.
Don’t be misled by language
Before we cede important decision-making with wide ranging impacts for business and society to AI, we need to make sure we teach machine learning systems to the highest standards possible. This will require oversight from the board and strong governance extending right down to the technical level of algorithm design.
Like any good teacher, those building machine learning applications should take pride in, and responsibility for, creating “good” AI. Legislation and regulations will be required to hold organisations to account for decisions taken by their algorithms.
In the fog of language and bewildering pace of technology change, we must not lose sight of the fact that humans are the teachers of AI. The intelligence in AI is not human-like and requires its own specific forms of care and nurturing. Where AI goes rogue, it will be people who are to blame.
Rob Gear is a futurist at PA Consulting Group