
Security think tank: AI cyber attacks will be a step-change for criminals

This article was first published on Computer Weekly

AI and machine learning techniques are said to hold great promise in security, enabling organisations to adopt a predictive IT security stance and automate reactive measures when needed. Is this perception accurate, or is the importance of automation gravely overestimated?

Suffering a cyber attack has long been considered a case of ‘when, not if’, and attacks can have a huge impact on organisations.

In 2018, 2.8 billion consumer data records were exposed in 342 breaches, ranging from credential stuffing to ransomware, at an estimated cost of more than $654bn. In 2019, the figure rose to 4.1 billion exposed records.

While the use of artificial intelligence (AI) and machine learning as a primary offensive tool in cyber attacks is not yet mainstream, its use and capabilities are growing and becoming more sophisticated. In time, cyber criminals will inevitably take advantage of AI, and such a move will heighten threats to digital security and increase the volume and sophistication of cyber attacks.

AI provides multiple opportunities for cyber attacks – from the mundane, such as increasing the speed and volume of attacks, to the sophisticated, such as making attribution and detection harder, impersonating trusted users and creating deepfakes.

Seymour and Tully’s SNAP_R (Social Network Automated Phishing with Reconnaissance) provides an example of a simple but elegant AI-based attack: it uses machine learning to profile Twitter users and automatically generate spear-phishing tweets tailored to each target, achieving click-through rates comparable to hand-crafted spear phishing.

AI’s ability to analyse large amounts of data at pace means many of these attacks are likely to be uniquely tailored to a specific organisation. Highly sophisticated attacks of this kind, executed by professional criminal networks leveraging AI and machine learning, will be mounted at a speed and with a thoroughness that can overwhelm an organisation’s IT security capabilities.

However, AI can also be part of the solution by fighting fire with fire. In 2016, the Defense Advanced Research Projects Agency (Darpa), of the US Department of Defense, held a Cyber Grand Challenge – the world’s first all-machine (no human intervention allowed) cyber hacking tournament.

This was a competition to create automatic defensive systems capable of reasoning about flaws, formulating patches and deploying them on a network in real time. Using this type of combative AI as part of cyber defence will become more commonplace.

One approach to enhancing defences is to use behaviour-based analytics, drawing on the pattern-matching capability of machine learning.

Assuming the appropriate data access consents are in place, the abundance of user behaviour data available from streaming, devices and traditional IT infrastructure gives organisations a sophisticated picture of people’s behaviour.

This includes being able to determine what device a person uses at a particular time (e.g. an iPad at 10pm), what activity they typically perform at that time (e.g. processing emails at 10pm), who they are interacting with (e.g. no video calls at 10pm), and what data they typically access (e.g. no shared-drive access at 10pm).

Such a profile can be built, maintained and updated in real time by a well-trained machine learning system. Any detected deviation from the normal pattern is analysed and can trigger an alert, which in turn may lead to cyber defence mechanisms being deployed.
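To make this concrete, the sketch below shows how a simple behavioural baseline might be modelled with an off-the-shelf anomaly detector. It is a minimal illustration, not a production design: the feature encoding (hour of day, device, activity) and the choice of scikit-learn’s IsolationForest are assumptions, and a real system would use far richer features and continuous retraining on live telemetry.

```python
# Minimal sketch of behaviour-based anomaly detection. The feature
# encoding (hour of day, device id, activity id) and the choice of
# scikit-learn's IsolationForest are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy per-user event history; in practice this would be derived from
# SIEM, endpoint and network telemetry.
baseline = np.array([
    [22, 1, 0],   # 10pm, iPad, processing email
    [22, 1, 0],
    [9,  0, 1],   # 9am, laptop, shared-drive access
    [10, 0, 1],
    [22, 1, 0],
])

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(baseline)

# New event: shared-drive access from an unknown device at 3am.
event = np.array([[3, 2, 1]])
if model.predict(event)[0] == -1:  # -1 marks an outlier
    print("Deviation from normal behaviour pattern: raise alert")
```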

The use of behavioural data is a long-standing practice in traditional SIEM systems, but AI takes it to a different level. There is no need to hand-craft rules in advance to choose the right behavioural data, or to slice time and task patterns to fit a particular threat: the machine learning algorithms do that for you. The solution can also take in data points from peripheral behavioural activity to provide robust evidence of an emerging threat pattern.

For example, a top-tier global bank is using neural networks to predict whether connections to the outside world are legitimate or fake. A fake connection is typically initiated by snoopware on an infected device, or by a link to a drive-by download site, and is often launched as part of a sophisticated botnet campaign such as a banking trojan attack.

The bank trained the neural network, built on a long short-term memory (LSTM) architecture, to examine the URL or domain name used to open each connection and determine its legitimacy. Trained on more than 270,000 phishing URLs, the model achieved a detection rate in excess of 90%, higher than traditional cyber security systems manage against new types of attack, such as botnets.
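For flavour, the sketch below shows the general shape of such a character-level LSTM classifier. It is a minimal illustration, not the bank’s actual model: the vocabulary size, sequence length and layer widths are assumptions, and a real deployment would train on a labelled corpus such as the 270,000 phishing URLs mentioned above.

```python
# Minimal sketch of a character-level LSTM classifier for phishing
# URLs. Illustrative only: vocabulary size, sequence length and layer
# widths are assumptions, not a production configuration.
import numpy as np
from tensorflow.keras import layers, models

MAX_LEN = 128   # pad/truncate URLs to a fixed character length
VOCAB = 100     # size of the printable-character vocabulary

model = models.Sequential([
    layers.Embedding(input_dim=VOCAB, output_dim=32),
    layers.LSTM(64),                        # reads the URL character by character
    layers.Dense(1, activation="sigmoid"),  # outputs P(URL is phishing)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

def encode(url: str) -> np.ndarray:
    """Map each character to an integer id, padded to MAX_LEN."""
    ids = [min(ord(c), VOCAB - 1) for c in url[:MAX_LEN]]
    return np.array(ids + [0] * (MAX_LEN - len(ids)))

# Training would use a labelled set of phishing and benign URLs:
#   X = np.stack([encode(u) for u in urls]); model.fit(X, y, epochs=...)
```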

Be under no illusion: offensive AI is an issue that all organisations must be prepared to deal with sooner rather than later. The time to review your approach and capabilities is now, before you are forced to do so reactively.

A successful strategy needs not only to develop and deploy technical capabilities, but also to change culture, processes and governance to deal with the new approaches that AI will bring to an organisation.

  

Contact the authors

Søren Knudsen, Mark Griep, Andrew Jaminson, Katharine Henley, Lee Howells
