
Steps Organisations Can Take to Counter Adversarial Attacks in AI

Computer Business Review

15 September 2020

This article first appeared in Computer Business Review

The arms race between the security community and malicious actors is nothing new, but the proliferation of AI systems increases the attack surface. In simple terms, AI can be fooled by things that would not fool a human. That means adversarial AI attacks can exploit vulnerabilities in the underlying models with malicious inputs crafted to fool them and cause the system to malfunction. In a real-world example, researchers at Tencent's Keen Security Lab were able to force a Tesla Model S to change lanes by adding stickers to markings on the road. The same class of attack can cause an AI-powered security monitoring tool to generate false positives or, in a worst-case scenario, confuse it so that it allows a genuine attack to progress undetected. Importantly, these AI malfunctions are meaningfully different from traditional software failures and require different responses.
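To make the mechanism concrete, one of the simplest adversarial techniques is the fast gradient sign method (FGSM): each input feature is nudged slightly in the direction that most increases the model's loss. The sketch below assumes a generic Keras classifier; the function name and epsilon value are illustrative placeholders, not a reference to the attack used against the Tesla.

```python
import tensorflow as tf

def fgsm_example(model, x, y_true, epsilon=0.01):
    """Craft an FGSM adversarial input: step each feature slightly in the
    direction that maximises the model's loss (illustrative sketch)."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        prediction = model(x, training=False)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, prediction)
    gradient = tape.gradient(loss, x)
    # A perturbation this small is often imperceptible to a human,
    # yet can be enough to flip the model's prediction.
    return x + epsilon * tf.sign(gradient)
```

A human looking at the perturbed input would typically see no difference, which is exactly why such inputs fool the model without fooling a person.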

Adversarial attacks in AI: a present and growing threat

If not addressed, adversarial attacks can compromise the confidentiality, integrity and availability of AI systems. Worryingly, a recent survey by Microsoft researchers found that 25 of the 28 organisations questioned, drawn from sectors such as healthcare, banking and government, were ill-prepared for attacks on their AI systems and were explicitly looking for guidance. Yet if organisations do not act now, there could be catastrophic consequences for the privacy, security and safety of their assets. They need to focus urgently on three areas: working with regulators, hardening AI systems and establishing a security monitoring capability.

Work with regulators, security communities and AI suppliers to understand upcoming regulations, establish best practice and demarcate roles and responsibilities

Earlier this year the European Commission issued a white paper on the need to get a grip on the malicious use of AI technology. This means there will soon be requirements from industry regulators to ensure that safety, security and privacy threats related to AI systems are mitigated. It is therefore imperative for organisations to work with regulators and AI suppliers to determine roles and responsibilities for securing AI systems and to begin filling the gaps that exist throughout the supply chain. Many smaller AI suppliers are likely to be ill-prepared to comply with these regulations, so larger organisations will need to pass requirements for AI safety and security assurance down the supply chain and mandate them through service level agreements (SLAs).

GDPR has shown that passing on requirements is not a straightforward task, with particular challenges around demarcation of roles and responsibilities.

Even when roles have been established, standardisation and common frameworks are vital for organisations to communicate requirements. Standards bodies such as NIST and ISO/IEC are beginning to establish AI standards for security and privacy. Alignment of these initiatives will help to establish a common way to assess the robustness of any AI system, allowing organisations to mandate compliance with specific industry-leading standards.

Harden AI systems as part of the system development lifecycle

A further complication for organisations is that they may not be building their own AI systems, and in some cases may be unaware of the AI technology embedded in the software or cloud services they use. What is becoming clear is that engineers and business leaders incorrectly assume that the ubiquitous platforms used to build models, such as Keras and TensorFlow, have robustness factored in. They often do not, so AI systems must be hardened during development by injecting adversarial examples into model training and integrating secure coding practices that address these attacks.
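One widely used hardening technique, and the one most directly implied by injecting adversarial examples into training, is adversarial training: perturbed copies of each batch are generated on the fly and the model learns from clean and perturbed examples together. A minimal sketch follows, assuming a TensorFlow/Keras classifier; the function name, epsilon value and FGSM-style perturbation are illustrative assumptions rather than a prescribed implementation.

```python
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def adversarial_training_step(model, optimizer, x_batch, y_batch, epsilon=0.01):
    """One training step on a mix of clean and adversarially perturbed inputs
    (illustrative sketch, assuming an FGSM-style perturbation)."""
    x_batch = tf.convert_to_tensor(x_batch, dtype=tf.float32)

    # Craft perturbed copies of the batch by stepping along the loss gradient
    with tf.GradientTape() as tape:
        tape.watch(x_batch)
        loss = loss_fn(y_batch, model(x_batch, training=False))
    x_adv = x_batch + epsilon * tf.sign(tape.gradient(loss, x_batch))

    # Train on clean and adversarial inputs together so the model learns
    # to classify both correctly
    x_all = tf.concat([x_batch, x_adv], axis=0)
    y_all = tf.concat([y_batch, y_batch], axis=0)
    with tf.GradientTape() as tape:
        loss = loss_fn(y_all, model(x_all, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

Open-source toolkits such as CleverHans and the Adversarial Robustness Toolbox provide more complete attack and defence implementations; the point is that robustness has to be engineered in, because it does not come built into the platform.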

After deployment, the emphasis shifts to security teams to compensate for weaknesses in these systems; for example, they should implement incident response playbooks designed for attacks on AI systems. Security detection and monitoring capability then becomes key to spotting a malicious attack. While systems should be hardened against known adversarial attacks during development, using AI within monitoring tools helps to spot unknown attacks. Failure to harden the AI monitoring tools themselves risks an adversarial attack that causes a tool to misclassify events and could allow a genuine attack to progress undetected.
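As a simple illustration of what spotting unknown attacks can look like in practice, one crude heuristic is to flag predictions whose output distribution is unusually uncertain, since adversarial or out-of-distribution inputs often produce atypical confidence profiles. The sketch below is illustrative only; the entropy threshold and function name are assumptions, not a recommended detector.

```python
import numpy as np

def flag_uncertain_predictions(probabilities, entropy_threshold=1.5):
    """Flag model outputs whose predictive entropy exceeds a threshold.
    High entropy is a crude signal that an input may be adversarial or
    out-of-distribution and should be routed to an analyst (illustrative)."""
    probs = np.clip(np.asarray(probabilities, dtype=float), 1e-12, 1.0)
    entropy = -np.sum(probs * np.log(probs), axis=-1)
    return entropy > entropy_threshold
```

Flagged events would then feed the incident response playbooks described above rather than being silently dropped.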

Establish security monitoring capability with clearly articulated objectives, roles and responsibilities for humans and AI

Clearly articulating hand-off points between humans and AI helps to plug gaps in the system's defences and is a key part of integrating an AI monitoring solution within the team. Security monitoring should not just be about buying the latest tool and treating it as a silver bullet. It is imperative to conduct appropriate assessments to establish the organisation's security maturity and the skills of its security analysts. What we have seen with several clients is that they have AI-based security monitoring tools, but the tools are either not configured correctly or the organisation does not have the personnel to respond to the events they flag.

The best AI tools can respond to and shut down an attack, or reduce dwell time, by prioritising events. Through triage and attribution of incidents, these systems essentially perform the role of a level 1 or level 2 security analyst; personnel with deep expertise are still needed to carry out detailed investigations. Some of our clients have required a whole new analyst skill set around investigating AI-generated alerts. This kind of organisational change goes beyond technology, for example requiring new approaches to HR policies when a malicious or inadvertent cyber incident is attributable to a staff member. By understanding the strengths and limitations of both personnel and AI, organisations can reduce the likelihood of an attack going undetected or unresolved.
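To illustrate the hand-off between AI triage and human analysts, a monitoring pipeline might score each alert and escalate only those above a threshold, leaving the rest to automated level 1/2 handling. The sketch below is purely illustrative; the fields, scoring logic and threshold are assumptions, not a reference to any particular product.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str              # e.g. "endpoint", "network", "ai_model_monitor"
    model_confidence: float  # tool's confidence (0-1) that the event is malicious
    asset_criticality: int   # 1 (low) to 5 (business critical)

def triage(alert: Alert, escalation_threshold: float = 2.5) -> str:
    """Score an alert and decide whether it stays with automated response
    (level 1/2 triage) or is escalated to a human analyst (illustrative)."""
    score = alert.model_confidence * alert.asset_criticality
    return "escalate_to_analyst" if score >= escalation_threshold else "auto_triage"
```

In practice the scoring would draw on far richer context, but even a simple rule like this makes the division of responsibility between tool and analyst explicit.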

Adversarial AI attacks are a present and growing threat to the safety, security and privacy of organisations, third parties and customer assets. To address this, organisations need to integrate AI correctly within their security monitoring capability and work collaboratively with regulators, security communities and suppliers to ensure AI systems are hardened throughout the system development lifecycle.
