
Preventing artificial deception in the age of AI

By Alwin Magimay

30 March 2023

This article was first published in Computer Weekly

The UK government’s white paper on the regulation of the artificial intelligence (AI) industry confirms the sector’s growing importance. The sector employed more than 50,000 people and contributed £3.7bn to the economy last year, and according to some estimates it could add almost £1tn of economic value by 2035.

However, as AI’s potential applications continue to evolve, with new ones discovered every day, the technology is generating a challenging combination of excitement and apprehension. The recent widespread adoption of ChatGPT is an example of both. Its emergence has led to a significant increase in fear around AI and its misuse. Yet it has also created huge interest: within five days of its launch, more than a million people were using it. By comparison, it took Netflix three and a half years to reach that many users.

Managing the concerns without stifling the potential of AI is the key challenge facing regulators across the world. The US has chosen a hands-off approach, encouraging private sector investment and prioritising AI research and development. China has opted for a centralised system focused on economic development and societal governance. The EU has focused more on regulation, emphasising transparency, accountability, and the protection of human rights. Its proposed new rules would establish standards for AI development and deployment, including strict requirements for high-risk AI applications and biometric data usage, with the aim of building trust in AI while ensuring safety and ethical use.

The UK has adopted what it calls a pro-innovation approach, enabling current regulators to determine how best to adapt existing regulation to the rapid pace of AI development, guided by a set of common principles.

Whichever approach is adopted, a new regulatory mindset will be required to keep up with the pace of change.

A balancing act

A report by the Royal Society, “Machine learning: the power and promise of computers that learn by example”, found that a lack of regulation around AI could lead to unintended consequences, such as biases in decision-making algorithms, job displacement, and privacy violations. However, the same report also highlighted that overregulation could stifle innovation and limit the potential benefits of AI.

Regulation needs to strike a balance between encouraging innovation and managing ethical and societal concerns. It should prioritise transparency, accountability, and fairness, while ensuring that the development and deployment of AI align with ethical values and benefit society as a whole.

Traditionally, a single regulator seemed to be the preferred route, and it is encouraging that the UK government has chosen not to adopt this approach. Instead of giving responsibility for AI governance to a new single regulator, the white paper calls for existing regulators such as the Health and Safety Executive (HSE), the Equality and Human Rights Commission (EHRC), and the Competition and Markets Authority (CMA) to come up with their own approaches that suit the way AI is actually being used in their respective sectors. These regulators will apply existing laws rather than being given new powers. Each will need to recognise that the pace of change is rapid and unpredictable, and that creating a choke point in AI regulation would stifle innovation.

Building public trust

As AI continues to advance, preventing artificial deception through responsible AI development and deployment practices will be vital. A 2021 study by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems found that only 26% of the general public trust AI, while 73% of respondents believe that AI should be regulated. Public trust in AI remains low, and greater transparency, accountability, and ethical consideration in the development and deployment of AI systems will be needed to convince a sceptical public.

This can be achieved by implementing AI systems that provide clear and understandable explanations for their decisions. That should be underpinned by mechanisms to hold those systems accountable for their actions, to ensure they are designed with ethical considerations in mind, and to educate users on the benefits and risks. There is also a need to collaborate with stakeholders to develop responsible and transparent AI systems. If regulators implement these measures, we can increase public trust and ensure AI’s successful integration into society.

A perpetual beta mindset for regulation

Critics have accused the government of taking a light-touch approach to AI regulation, but this criticism assumes that once a policy or framework is determined, it is set in stone. Historically, that has been the case with other disruptive technologies, and experience has shown that being too prescriptive can limit innovation and hinder progress. What is actually needed is a “perpetual beta” mindset, in which regulation continually changes and rapidly adapts to developments in AI, supporting a “test-learn-feedback-and-change” approach.

The white paper needs to develop further its recommendations on how the existing regulators will be able to achieve this. By embracing this agile approach, the government can strike the right balance between regulation and innovation, and ensure that AI is developed and deployed in a responsible, ethical, and beneficial way. The current proposals are a good start, but more creative thinking and investment will be required to achieve truly pro-innovation AI regulation.
