
AI opens up an ethical minefield for businesses

Management Today

14 February 2020

This article was first published in Management Today

Ethics has always been a delicate issue for businesses, which need to make sure their behaviour aligns with their values and complies with wider societal norms. Traditionally, they have managed ethical dilemmas through careful human oversight and subtle, subjective processes.

The advent and proliferation of Artificial Intelligence (AI) systems has changed that. Automated, algorithmic decision making creates new challenges for businesses because it reduces decision-making time to mere milliseconds, relying on patterns in past data with little, if any, contextual input.

This is now an issue for more and more businesses as AI systems become omnipresent: 66 per cent of financial services institutions, for example, now use them. If the ethical implications of algorithmic decision making are not thought through early and thoroughly, they can damage an organisation's reputation and lead to legal challenges and financial loss.

More than half of the public say they are concerned about algorithms making decisions humans would normally take, and 87 per cent think it is important for organisations to use personal data ethically (though most are unconvinced that they will). That is beginning to affect business decisions: a large US global bank postponed the IPO listing of a Chinese AI 'unicorn' and pioneer in facial recognition because of concerns about the ethical implications.

There are also growing data privacy concerns over the way in which AI systems are trained, which could introduce bias (and give rise to valid claims under the raft of new global data privacy laws), as well as doubts about how well AI algorithms are protected from cyber attack or manipulation by hackers.

That creates a clear need for businesses to think carefully about the ethical aspects of AI and how to avoid commercial or reputational problems arising from its use. They should invest in building a detailed understanding of how algorithmic decision making affects customers, of any ethical consequences that emerge, and of how their AI algorithms are protected. This helps them ensure the algorithms they use are ethical, fair and designed to eliminate bias.

This requires input from across the business: not just the engineering and design teams, but also legal, compliance, cyber security, marketing and corporate social responsibility, plus external parties such as regulatory bodies and independent experts. We work with such diverse teams to ensure ethical considerations are addressed at the design phase of AI systems. Having built this knowledge, businesses are ready to tackle key questions: Is our AI consistent with our values? Does it protect our employees? Does it serve our customers? Does it comply with the law? And, not least, will it make money?

Organisations should also establish anticipatory safeguards. These will prevent or mitigate risk and minimise exposure to damaging legal and economic loss resulting from undetected biases. One simple operational safeguard is to monitor the use in algorithmic decision making of the protected characteristics set out in the Equality Act 2010: age, disability, gender reassignment, race, religion or belief, sex, sexual orientation, marriage and civil partnership, and pregnancy and maternity.

AI algorithms could use these characteristics to discriminate indirectly against applicants for credit, housing, employment and so on. For example, by associating protected characteristics with a lack of past creditworthiness data, an algorithm could infer that parts of the population should not be granted loans based on their race, gender or age.
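
To make this safeguard concrete, here is a minimal sketch of what monitoring protected characteristics in decision outcomes might look like. It applies the common "four-fifths" disparate impact heuristic; the data, column names and threshold are illustrative assumptions, not something prescribed by the Equality Act or by this article.

```python
import pandas as pd

# Hypothetical decision log: each row is one credit application.
# Column names and values are illustrative assumptions.
decisions = pd.DataFrame({
    "age_band": ["18-30", "31-50", "51+", "18-30", "31-50", "51+", "18-30", "51+"],
    "sex":      ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [1, 1, 0, 1, 1, 0, 0, 0],
})

PROTECTED = ["age_band", "sex"]  # characteristics drawn from the Equality Act 2010
THRESHOLD = 0.8                  # the common "four-fifths" disparate impact heuristic

def disparate_impact_report(df, outcome, attributes):
    """Flag protected groups whose approval rate falls well below the best-treated group's."""
    for attr in attributes:
        rates = df.groupby(attr)[outcome].mean()  # approval rate per group
        ratios = rates / rates.max()              # relative to the best-treated group
        for group, ratio in ratios[ratios < THRESHOLD].items():
            print(f"WARNING: {attr}={group} approval ratio {ratio:.2f} is below {THRESHOLD}")

disparate_impact_report(decisions, "approved", PROTECTED)
```

In practice a check like this would run continuously over live decision logs, feeding its alerts into the multidisciplinary review described above.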

Organisations should also be open about their use of AI in areas of high controversy, where consensus is limited and legal frameworks are lacking. These include predictive policing, where data is used to identify targets for police action, and algorithmic price setting in capital markets, which can create the appearance of collusion.

There are also some emerging national, regional and international guidelines on the ethics of AI, such as the OECD's AI Principles. Organisations should make sure they know what guidance is available and understand these early positions on what constitutes acceptable and responsible use of AI.

All this underlines that operating trustworthy AI requires organisations to set a number of ethics checkpoints throughout its design, development, testing and roll-out, at which the business reviews its approach and considers the consequences of adopting particular algorithms. The multidisciplinary teams mentioned above should carry out these checks regularly, alerting decision makers to ethical issues and risks.

It is important to recognise that AI does not introduce new ethical challenges; it reflects the biases and inconsistencies in our historical data and inadvertently amplifies them. As a result, most work on AI ethics today is about preventing negative outcomes: discrimination, unfairness and injustice.

But there is a positive, progressive side to AI ethics. It can, for the first time, uncover the latent biases in our past data and help us design and build a better approach. Organisations that operate ethical, secure and trustworthy AI will be more appealing to the conscious 21st-century consumer.

Yannis Kalfoglou is an AI and ethics expert at PA Consulting.
