When eliminating emotion can give a business advantage

This article was first published in E&T Magazine

It’s easy to assume people are better than computers at making decisions that require a subjective judgement. Computers have traditionally been binary yes/no machines: very good at making fast decisions based on hard logic when given structured data, but lacking the intuition we so often use to guide us.

AI tools are now giving computers the ability to make those subjective decisions, yet somewhere in our human psyche we persist in thinking we’re better at teasing out the nuances of a situation. So today, AI can subjectively interpret electrocardiogram traces far faster than humans can, but we wouldn’t commit to heart surgery without an experienced human checking the diagnosis first.

What if there are subjective decisions to be made where humans are not only slower than machines but also inferior?

For better or worse, we all have biases, and many are based on the dataset each of us carries around inside us – our experience. In some cases I know I’m being biased by my database of experience, and I can deliberately filter that out before making a decision.

For example, I often interview candidates applying to join PA Consulting. I also happen to dislike novelty socks, because a former colleague who wore them wasn’t particularly pleasant to work with. So, when a candidate wears Homer Simpson socks to an interview, I shudder at the memory of that unpleasant former colleague. Rationally, I know the candidate’s choice of socks has no bearing on their abilities, so of course I deliberately filter this and other obviously irrelevant data out of my decision-making.

More worrying and harder to deal with, though, is weak data that masquerades as logical and is then prioritised because it supports a bias.

For example, I always take lots of notes at meetings because it helps me reflect on the situation afterwards, so I like it when job candidates make notes in interviews. But basing my decision to hire on whether a candidate takes notes isn’t a good idea: it isn’t high-quality data, as for all I know they could be scribbling down a shopping list.

It’s easy to see, then, how AI could actually be a more ethical, superior decision-maker in subjective situations. Perhaps it could be better than a human High Court judge, as it won’t be biased by skin colour or gender. Perhaps it will be a better bank manager, as it won’t be swayed by the old school tie of a loan applicant. And perhaps it will be a better recruiter, as it won’t be influenced by novelty socks.

For AI systems to become better and more ethical decision-makers than humans, we first need to filter out any data that we know would be unethical, or even illegal, to use as the basis for a decision – gender in car insurance applications, for example. You must make sure your organisation isn’t letting AI explore data lakes containing information that would be unethical to use. Enforcing this is a lot easier with AI than with humans.
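As a concrete illustration of that filtering step – a minimal sketch in Python with pandas, where the dataset, column names and values are entirely hypothetical – the protected attribute is removed before the data ever reaches a model:

```python
import pandas as pd

# Hypothetical car-insurance applicant data; columns are illustrative only.
applications = pd.DataFrame({
    "gender": ["F", "M", "F"],
    "age": [34, 41, 29],
    "annual_mileage": [8000, 12000, 6000],
    "claims_last_5_years": [0, 2, 1],
})

# Attributes we know would be unethical, or even illegal, to price on.
PROTECTED_ATTRIBUTES = ["gender"]

# Drop protected columns before the data reaches the model, so the AI
# cannot learn from them even accidentally.
training_data = applications.drop(columns=PROTECTED_ATTRIBUTES)
print(list(training_data.columns))  # ['age', 'annual_mileage', 'claims_last_5_years']
```

Removing the explicit column is only the first step, of course: other fields correlated with it can leak the same information, which is exactly why the auditing described next matters.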

The next step is to ensure any AI system can be easily audited, so you can see when weak data is being over-relied on to draw conclusions. AI isn’t magic: it looks at a lot of data, has no frame of reference beyond the data available, and can’t tell anything about the quality of that data. Analysing a lot of weak data can be useful, but AI could make decisions based on the volume of data rather than its quality. For example, analysing mobile phone signals will say something about traffic flow on city roads, but not as much as cameras that track number plates and can distinguish between a motorbike, a car and a bus.
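One way to approximate such an audit – a sketch only, assuming a scikit-learn model and using synthetic stand-in features – is permutation importance, which shuffles one input at a time and measures how much the model’s performance drops:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for decision data: a mix of strong and weak signals.
X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record how far the score falls -- a
# crude but inspectable measure of what the model actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```

A feature that carries high importance but is known to be weak – the note-taking in interviews, say – is a flag for human review rather than a signal the model should keep amplifying.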

Even with an audit trail, however, it can be hard to tell an AI system which data is high quality and which is low. A recent study of fitness-tracker accuracy by Stanford University found that pulse rates measured by the six market-leading devices were within 5 per cent of the actual pulse, but measured energy expenditure was only between 27 and 90 per cent of the actual figure. Insurance companies have already started to sponsor customers to use these devices, and to use AI to derive business insight from the data. If Stanford’s findings hold, insurance companies can make good use of one subset of that data while drawing false conclusions from the other.
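In practice, this often means attaching explicit reliability scores to each data source and gating what the model is allowed to use – a minimal sketch, with figures echoing the tracker study above and a threshold chosen purely for illustration:

```python
# Hypothetical per-source reliability, informed by validation studies such
# as the Stanford fitness-tracker work cited above.
SOURCE_RELIABILITY = {
    "pulse_rate": 0.95,          # readings within ~5% of a reference device
    "energy_expenditure": 0.50,  # readings between 27 and 90% of the truth
}

MIN_RELIABILITY = 0.90  # illustrative threshold, set by human judgement

# Only expose inputs that meet the bar; the rest are held back for review.
usable_sources = [name for name, score in SOURCE_RELIABILITY.items()
                  if score >= MIN_RELIABILITY]
print(usable_sources)  # ['pulse_rate']
```

The threshold itself is a human judgement call, which is precisely the kind of manual recalibration discussed below.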

Many organisations are looking to use AI agents to mine existing data for new insight. Chief information officers will need to put in place review procedures that audit how AI systems use this data to make decisions and, if necessary, manually recalibrate the prioritisation based on human insight.

Rather than thinking AI is ethically inferior to humans because it has limited experience of being human, we should think of its lack of human emotions as an advantage. AI can make decisions transparently and without the hidden grudges humans are prone to. But to do this, we can’t allow AI to evolve without supervision.

