AI-accelerated cyber-attacks aren’t new, but they are faster
Recent attention around advanced AI‑driven cyber tools such as Claude Mythos has sparked concern well beyond the security community.
Market jitters followed reports that Anthropic’s latest model could identify vulnerabilities that traditional tools had failed to detect. Yet much of the commentary risks obscuring a more important point: AI is not inventing new forms of cyber-attack; it is accelerating existing ones.
In April, the UK AI Security Institute (AISI) published an independent evaluation of Claude Mythos Preview, assessing its cyber‑security capabilities ahead of any wider release. In controlled testing, AISI found that the model could autonomously chain multiple stages of cyber-attack activity, from reconnaissance through to vulnerability exploitation, significantly reducing the human effort required.
Most notably, Mythos became the first AI system to complete AISI’s “The Last Ones” evaluation, a 32‑step corporate network intrusion scenario estimated to take humans around 20 hours.
AISI is explicit about the limits of these findings. The evaluations were conducted in controlled, weakly defended environments, and AISI states it cannot conclude that the same performance would translate to hardened, well‑monitored enterprise systems.
The significance of frontier AI lies less in novelty than in compression. These models materially reduce the time, cost and specialist expertise required to identify and exploit known weaknesses. Poorly patched, misconfigured or weakly monitored environments are therefore exposed more quickly than before.
At the same time, neither AISI nor the UK National Cyber Security Centre (NCSC) suggests that established defences are obsolete or that compromise is inevitable. The NCSC has stressed that while AI makes vulnerability discovery faster and cheaper, established cyber hygiene remains effective.
The fundamentals of compromise have not changed, but the pace of exploitation has.
Speed is now a board-level issue
The most material shift for organizations is therefore one of tempo. Pressure on organizations to patch, detect and respond quickly will only grow as AI‑enabled reconnaissance and exploitation become more widely available.
Boards and executive teams should consider how quickly existing controls operate in practice. Organizations should treat patch time for internet‑facing services and identity platforms as a core risk indicator. Leaders should ask which security controls operate automatically under pressure, and which still depend on manual intervention.
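As an illustration of treating patch time as a measurable risk indicator, the sketch below computes patch latency per service and flags breaches of an agreed remediation window. The service names, dates and seven‑day threshold are entirely hypothetical; real programmes would draw these records from vulnerability management tooling.

```python
from datetime import date

# Hypothetical patch records for internet-facing services:
# (service, patch published, patch deployed). All names and dates are illustrative.
records = [
    ("vpn-gateway",    date(2025, 3, 1), date(2025, 3, 4)),
    ("identity-sso",   date(2025, 3, 2), date(2025, 3, 12)),
    ("public-website", date(2025, 3, 5), date(2025, 3, 6)),
]

def patch_latency_days(records):
    """Days from patch availability to deployment, per service."""
    return {svc: (deployed - published).days for svc, published, deployed in records}

def breaches(records, threshold_days=7):
    """Services whose patch window exceeded the agreed threshold."""
    return {svc: d for svc, d in patch_latency_days(records).items()
            if d > threshold_days}

print(patch_latency_days(records))  # {'vpn-gateway': 3, 'identity-sso': 10, 'public-website': 1}
print(breaches(records))            # {'identity-sso': 10} — breached a 7-day target
```

Reported regularly, a metric like this gives boards a concrete answer to “how quickly do our controls operate in practice?” rather than a qualitative assurance.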
Practical focus areas for leadership
There are several areas that deserve particular attention:
Accelerate remediation where it matters most.
Internet‑facing services and identity systems should be treated as patch‑critical. Long‑lived exceptions and technical debt now carry higher consequences. Where feasible, organizations should move towards automated patching for externally exposed services to reduce reliance on manual change processes and narrow the window for exploitation.
Strengthen identity and privilege controls.
AI‑enabled attackers still rely heavily on credential abuse and privilege escalation. Enforcing least privilege for administrators and service accounts remains one of the highest‑value defences available. Organizations should also isolate high‑risk administrative activity through approaches such as privileged access workstations, reducing the impact of credential compromise.
Improve visibility and detection speed.
Centralized logging, adequate retention, and full visibility across cloud and legacy platforms are essential if organizations are to respond within shrinking timelines.
Rehearse faster‑moving incidents.
Exercises should assume shorter dwell times and earlier executive involvement, including regulator and customer communications, not just technical containment.
Using AI defensively with care
The same capabilities that raise concern can, in principle, be used defensively. Both AISI and NCSC highlight the potential for AI to help suppliers and defenders identify vulnerabilities earlier and at greater scale. These approaches are still maturing and require careful governance, appropriate data access and skilled human oversight.
Organizations that adopt defensive AI prematurely, without assurance or auditability, risk creating new blind spots rather than closing existing ones.
Frontier AI will continue to advance, and attackers will keep experimenting with these tools. Organizations that focus on raising their security baseline, reducing exposure and moving faster than before will remain resilient, even as attackers gain new capabilities. The challenge for leaders is not to be distracted by the hype, but to respond with discipline, realism and pace.
This article was first published in Infosecurity Magazine.