PA OPINION

Discovering artificial intelligence: key concerns and how to address them

We live in a world where artificial intelligence (AI) systems are becoming more prevalent in our daily lives, from deciding whether a bank will accept your loan application to recognising your face to unlock your phone. Used correctly, AI can be a force for good, but these powerful systems can also detrimentally impact people’s lives. Recently, the UK Government was forced into a U-turn after an algorithm used to moderate A-Level grades was widely seen to disproportionately penalise students from disadvantaged backgrounds. It’s clear that AI presents both opportunities and challenges, especially in the context of diversity and inclusion.

What are the current major concerns around AI?

There are frequent concerns around accuracy and bias in facial recognition technology, with research finding that such systems struggle to identify women with darker skin tones. The AI Now Institute, a research group at New York University, has even said there’s a diversity crisis in AI, and that the lack of diversity in the field has caused shortcomings in the technology. Yet these concerns haven’t slowed the spread of facial recognition, with airports, shopping centres and law enforcement all adopting such systems – in spite of the potentially life-changing impact of any wrongful application.
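Shortcomings like these only become visible when accuracy is measured separately for each demographic group, rather than as a single headline figure. As a minimal sketch of such a disaggregated evaluation in Python – the group names and numbers below are invented for illustration, not drawn from any real study:

from collections import defaultdict

def accuracy_by_group(records):
    # records: list of (group, predicted_label, true_label) tuples
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Invented figures: a system that looks strong overall can still
# perform far worse for one group.
records = (
    [("group_a", "match", "match")] * 95
    + [("group_a", "no_match", "match")] * 5
    + [("group_b", "match", "match")] * 70
    + [("group_b", "no_match", "match")] * 30
)
print(accuracy_by_group(records))  # {'group_a': 0.95, 'group_b': 0.7}

A single overall accuracy of 82.5 per cent would hide the 25-point gap between the two groups.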

It’s not just facial recognition that’s attracting attention. In 2016, Microsoft’s chatbot Tay was pulled after only 16 hours when it began posting offensive comments learned from the collective output of Twitter. And in 2015, Google launched an AI tool to automatically tag people’s images, only for it to label two individuals as ‘gorillas’ (something the tech giant apologised for and quickly corrected). Such examples highlight why many people question the fairness of AI and the data that drives it.

What are governing bodies and businesses doing to address these concerns?

Governing bodies are starting to legislate against some of this technology. Oregon and New Hampshire in the United States, for example, have banned the use of facial recognition on body cameras for law enforcement. But legislators’ reach is limited, as most AI governance currently sits with business.

Microsoft has recently employed philosophers, creative writers and artists to help its bots avoid making controversial and offensive statements. IBM has implemented independent bias ratings to determine the fairness of its AI systems before they are launched commercially. And Google has published a set of AI principles for designing AI systems and launched a set of summer camps with AI4ALL to increase diversity in AI jobs.
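To make a ‘bias rating’ concrete: one widely used check is the disparate impact ratio, which compares how often two groups receive a favourable decision. Metrics of this kind ship in open-source toolkits such as IBM’s AI Fairness 360; the plain-Python sketch below is illustrative only, and the group names, figures and 0.8 threshold (the common ‘four-fifths’ rule of thumb) are assumptions:

def disparate_impact(outcomes, protected, reference):
    # outcomes maps group -> list of booleans
    # (True = favourable decision, e.g. a loan approved)
    rate = {g: sum(d) / len(d) for g, d in outcomes.items()}
    return rate[protected] / rate[reference]

# Invented figures: 60% approval for group_a vs 36% for group_b.
outcomes = {
    "group_a": [True] * 60 + [False] * 40,
    "group_b": [True] * 36 + [False] * 64,
}
ratio = disparate_impact(outcomes, protected="group_b", reference="group_a")
print(round(ratio, 2))  # 0.6, well below the 0.8 rule-of-thumb threshold

A ratio this far below 0.8 would flag the system for review before any commercial launch.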

So, what more can we do to build fairer AI systems?

A good place to start is ensuring the data sets that power AI systems are representative. For example, if we were going to look at reducing car use in a city and only used data from one suburb, we wouldn’t have a representative data set for the city. It’s the same with building AI – we can’t rely on the assumptions of the developers. Instead, we need to understand the full breadth of the end users.
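Continuing that car-use example, a simple first check is to compare each group’s share of the collected data with its share of the population it is meant to represent. The Python sketch below is hypothetical – the area names and figures are invented:

def representation_gaps(sample_counts, population_shares):
    # sample_counts: group -> number of records in the data set
    # population_shares: group -> expected fraction of the population
    # Returns over-/under-representation in percentage points.
    total = sum(sample_counts.values())
    return {
        group: round(100 * (sample_counts.get(group, 0) / total - share), 1)
        for group, share in population_shares.items()
    }

sample_counts = {"inner_city": 900, "suburbs": 100}      # survey records
population_shares = {"inner_city": 0.4, "suburbs": 0.6}  # census-style shares
print(representation_gaps(sample_counts, population_shares))
# {'inner_city': 50.0, 'suburbs': -50.0}: the suburbs are badly under-sampled

A gap report like this won’t fix a skewed data set, but it makes the skew visible before a model is trained on it.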

A good second step is to adopt Google’s AI principles in the design process. By adhering to them, we can ensure AI systems respect cultural, legal and societal norms and incorporate best practices.

Involving a diverse team in designing and developing AI systems is also important as it ensures we challenge assumptions and remove unconscious bias. There’s still a long way to go to address this challenge, though, as Nesta estimates that women hold less than 14 per cent of AI roles, a shocking statistic considering AI is set to dominate the future.

We can make AI fairer

The growth of AI systems will continue to accelerate. So, it’s vital to put in place the right safeguards and build diversity to ensure these systems have a positive impact on people and society. Governing bodies, businesses and non-profits should all work together to establish industry standards and commit to them. That way, we can ensure AI systems are fair for everyone and lead to a more positive human future.

Contact the AI and automation team

Søren Knudsen

Mark Griep

Andrew Jaminson

Katharine Henley

Lee Howells
