Artificial intelligence: the next evolution in cyber threat detection?

By Paul Whitlock

ISC2

01 June 2023

In the face of unprecedented change, how do we keep pace with new advances and maintain a safe and secure digital environment?

IT systems have evolved over the years from single standalone mainframe computers to highly complex, connected and scalable platforms that run many different services and use a variety of protocols to make it all work. Security approaches have had to change continually to keep up with this evolving landscape and the ever-changing threats.

Early computer systems had minimal, if any, security measures beyond basic authentication and simple logging. In some cases, where the system was a single standalone workstation, the reliance on physical security was enough.

As systems have evolved, there has been an increasing need for better monitoring and threat detection. Early systems used signature-based detection, which relied on matching bit patterns. These could be easily defeated by recompiling the malware to change the patterns and hence its signature (assuming the attacker could determine the pattern being used as a signature within the detection system).
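
To illustrate the idea, a basic signature scanner amounts to little more than searching content for known byte patterns. The sketch below uses made-up signatures (not real malware indicators) to show why even a one-byte change to a compiled binary defeats the match:

# A minimal sketch of signature-based detection, assuming a small table of
# known byte patterns. The "signatures" below are illustrative placeholders,
# not real malware signatures.

KNOWN_SIGNATURES = {
    "example_dropper": bytes.fromhex("deadbeef0042"),  # hypothetical byte pattern
    "example_backdoor": b"EVIL_MARKER_STRING",         # hypothetical marker string
}

def scan_bytes(data: bytes) -> list[str]:
    """Return the names of any known signatures found in the data."""
    return [name for name, pattern in KNOWN_SIGNATURES.items() if pattern in data]

# Demo: the first sample contains a known pattern; the second is a
# "recompiled" variant whose bytes no longer match anything.
sample_a = b"...padding..." + bytes.fromhex("deadbeef0042") + b"...payload..."
sample_b = b"...padding..." + bytes.fromhex("deadbeef0043") + b"...payload..."

print(scan_bytes(sample_a))  # ['example_dropper']
print(scan_bytes(sample_b))  # []  -- a single changed byte evades detection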

Signature detection then evolved into behavioral analysis, which looks more holistically at the computer platform, taking account of file accesses, CPU and memory usage profiles and network interface utilization to identify threats. This was an improvement on signature-based detection, but attackers could design their code to minimize its use of resources and stay under the radar.
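
As a rough sketch of that approach (the baseline figures and fields below are illustrative assumptions, not recommendations), a behavioral monitor compares a process's resource profile against a learned baseline and flags large deviations:

# Minimal sketch of behavioral anomaly detection: compare an observed
# resource profile against a learned baseline and score the deviation.
from dataclasses import dataclass

@dataclass
class Profile:
    cpu_percent: float        # average CPU utilization
    memory_mb: float          # resident memory
    net_kb_per_s: float       # network throughput
    file_opens_per_min: float # file access rate

# Hypothetical baseline learned from normal operation.
BASELINE = Profile(cpu_percent=5.0, memory_mb=200.0, net_kb_per_s=50.0, file_opens_per_min=10.0)

def anomaly_score(observed: Profile, baseline: Profile = BASELINE) -> float:
    """Sum of relative deviations above baseline; higher means more suspicious."""
    score = 0.0
    for field in ("cpu_percent", "memory_mb", "net_kb_per_s", "file_opens_per_min"):
        obs, base = getattr(observed, field), getattr(baseline, field)
        score += max(0.0, (obs - base) / base)
    return score

suspect = Profile(cpu_percent=60.0, memory_mb=1500.0, net_kb_per_s=900.0, file_opens_per_min=400.0)
print("Anomaly score:", round(anomaly_score(suspect), 2))  # well above zero -> flag for review

It also shows the weakness the article notes: code that deliberately keeps each figure close to the baseline will produce a low score and stay under the radar.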

Machine Learning (ML) then improved threat detection by training on large amounts of data to learn patterns of user behavior, file access profiles and network connections, producing a model that could quickly, and hopefully accurately, identify threats hiding in the noise. However, ML is a staged process in which data sets of benign and malicious data are used to train the model, which is then tested and validated to ensure that it accurately predicts threats. As new threats are identified, the model needs to be updated and distributed to remain relevant.
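
A simplified sketch of that staged workflow, using synthetic feature vectors (CPU, memory, network and file-access figures invented for illustration) and the scikit-learn library, might look like this:

# Minimal sketch of the staged ML workflow described above: train a
# classifier on labelled benign/malicious feature vectors, then validate
# it on held-out data. The data here is synthetic, not real telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic features: [cpu, memory, network, file accesses]
benign = rng.normal(loc=[5, 200, 50, 10], scale=[2, 50, 20, 5], size=(500, 4))
malicious = rng.normal(loc=[40, 800, 600, 200], scale=[10, 200, 150, 50], size=(500, 4))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

# Stage 1: train on labelled data. Stage 2: validate on a held-out set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test), target_names=["benign", "malicious"]))

# As new threats emerge, the labelled data set must be refreshed and the
# model retrained and redistributed -- the maintenance burden noted above.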

Artificial intelligence is the next evolution

Artificial Intelligence (AI) is a step beyond ML in that it continually evolves to become better at responding. AI has the potential to improve security not only by detecting known threats that exploit known vulnerabilities, but also by identifying unknown threats that exploit unknown vulnerabilities.

AI uses large data sets and neural networks to make predictions, which makes it more adaptive than machine learning. This is great news for cybersecurity, where threat detection can continually evolve to match the evolution of the technology and the threats. AI also has the advantage that its knowledge and capabilities cannot be easily understood or predicted; because it is continually evolving, it is much harder for threats to avoid detection. However, this also makes it more difficult to determine that it is operating as intended.

The UK National Cyber Security Centre has published a set of machine learning principles, which aim to raise awareness of the potential vulnerabilities of systems that utilize AI and ML; many of the principles are also applicable to systems that use AI for cyber defense.

There is no doubt that AI will bring massive benefits to cybersecurity. However, it is still an emerging technology, so some caution is needed during its adoption. There have been several incidents, outside of cybersecurity, which have highlighted that AI doesn’t always provide the expected result and doesn’t always “do the right thing.” In recent years AI has demonstrated unintended bias and discrimination, or has provided information that is unhelpful or simply wrong. Some of these cases are directly relevant to cybersecurity and should serve as a warning. For example, there is the potential for cyber criminals to influence AI engines over time to weaken their detection and response, and possibly even turn them rogue if they are entrusted with undertaking threat response.

This raises the question of how reliable AI is. Recently there have been significant advances in generally available AI systems such as ChatGPT. These are amazing systems that can do the “heavy lifting” of many tasks. However, they currently cannot replace the human ability to understand and apply knowledge and experience to a situation. They require the question to be presented in the right way and without ambiguity to get the desired result. Even then they can make mistakes and convincingly present incorrect information as fact.

Taking a holistic approach to cybersecurity

AI will undoubtedly improve and do some of the “heavy lifting” on threat detection and cybersecurity, helping organizations focus on improving three key areas as part of a holistic approach to cybersecurity:

Organizations must be secure by design with AI

Supporting the adoption of defense in depth and providing visibility and rapid analysis of data as part of well-designed structures and architectures makes it easier for organizations to take decisions to minimize their vulnerabilities, control their attack surfaces and understand where their weaknesses are. Ultimately, this makes it harder for an adversary to gain a foothold and pivot to gain access to wider systems and information. AI will help by monitoring and learning from the interactions and information flows that take place and highlighting areas for further investigation.

Instil a cyber-savvy mindset in people

Security spans the entire organization at every level, and threats can come from any direction. Through training and awareness an organization greatly improves its security culture; regular exercises can help to embed the knowledge and reduce the chances of an organization being caught out. AI will enable organizations to learn where their weaknesses are and help security specialists to better focus training and exercises to improve their security posture, as well as introduce ‘nudges’ into the interactions people have with technology to remind them to do the right thing.

Adopt processes that encourage compliance

Security is not just about technology; it’s about the whole organization and how it operates. Designing processes to be easier to comply with than to avoid is key to consistent implementation. The same applies to instilling a no-blame culture, so that weaknesses and issues can be quickly identified and steps taken to rectify them and drive continuous improvement. Security assessments, monitoring and meaningful metrics will enable AI to identify areas where improvements can be made. Where possible, external dependencies should be included to enable AI to identify potential weaknesses.

In conclusion

Artificial Intelligence is still in its infancy, but there is no doubt that it will continue to develop and improve over time: transparency in its assessments will get better; regulation of its use will evolve; and costs will drop, making it more accessible and more widely adopted. Within the context of cybersecurity, AI will undoubtedly evolve to make cyber criminals’ lives more difficult, and it will make security monitoring and response more efficient and focused. However, organizations still need to adopt a holistic approach to security and develop their use of AI to support their cyber strategy, harnessing the power of systems, processes and people.

This article was first published by ISC2.
