The criminals are winning the AI race
An effective fightback against AI-driven crime requires a united defence, not isolated systems. Banks, authorities, telecom providers and tech platforms must share warnings in real time and act before the fraud reaches consumers. Yet cross-sector collaboration remains too fragmented.
A new arms race is underway in the financial sector. Artificial intelligence has changed the rules of the game – and right now, the criminals are in the lead.
Representatives from banks, technology providers and industry associations took part in a closed panel during Nordic Fintech Week. The consensus was striking: “The criminals are innovating faster than we are.” Europol and Nasdaq confirm this picture. AI is now involved in almost half of all documented fraud attempts – a dramatic rise that continues to accelerate. In Denmark, more than 500 citizens fall victim to cybercrime every day.
AI has made fraud cheaper, more convincing, and frighteningly scalable. Deepfakes and voice cloning are being used to simulate management meetings, create fake identities and stage false customer interactions. In Hong Kong earlier this year, a finance employee was tricked into transferring 25 million dollars after a virtual meeting with deepfake-generated “executives”. The same pattern is now emerging in Denmark, where banks report a steep increase in AI-driven fraud attempts.
Several Danish business leaders have had their faces and voices misused in fake AI videos promoting fraudulent investment schemes. One Danish bank has seen criminals use customers’ cloned voices to gain access to accounts. These are no longer amateur attacks – this is industrialised fraud.
The United Nations, meanwhile, reports that organised networks are running scam centres in Southeast Asia, where thousands of forced labourers conduct AI-powered fraud operations around the clock. Crime has become a global industry – and artificial intelligence is its growth engine. One of the criminals’ greatest advantages is that fraud crosses sectors such as social media, telecoms, banking and investment platforms – while those responding to attacks in these sectors remain poorly coordinated.
Banks are on the frontline. They are both victims and society’s first line of defence. Billion-euro investments in anti–money laundering and cybersecurity have strengthened the banks, but the criminals’ pace of innovation continues to outstrip these developments. The criminals operate professionally – unbound by GDPR, compliance rules or the need to maintain customer trust.
Machine learning works
One major Danish bank has shown that machine learning can make a real difference: it has increased its ability to detect payment fraud by around 60 per cent while halving the number of false positives – a significant improvement. But it is still far from enough. Banks operate under heavy regulation, while criminals enjoy freedom to innovate. There is no easy way around it: only good AI can fight malicious AI.
Despite the progress that has been made, many remain hesitant. A European industry survey shows that fewer than a quarter of financial institutions have implemented AI-based countermeasures against deepfakes and identity fraud. Fear of regulatory pitfalls and reputational risk is slowing innovation – and that unfortunately gives the criminals a head start.
AI must not remain an experiment within innovation departments. Organisations must embed it strategically within their business models and link it closely to risk management and customer protection.
There are, however, signs of movement. A Danish sector taskforce, bringing together banks, authorities and telecom providers, has put forward 18 concrete recommendations: faster blocking of fraudulent websites, freezing suspicious transfers, real-time sharing of fraud patterns and improved data exchange. Several banks are now testing technologies that enable collective data analysis without sharing personal data – using so-called privacy-preserving computation.
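The article mentions privacy-preserving computation only in passing and does not say which technique the banks are testing. One simple form is additive secret sharing, sketched below with invented bank names and fraud counts: each participant splits its private figure into random shares, and only the sector-wide total can ever be reconstructed.

```python
# Illustrative sketch only: additive secret sharing, one simple form of
# privacy-preserving computation. The banks, counts, and protocol details
# here are invented for illustration, not taken from the article.
import random

PRIME = 2**61 - 1  # arithmetic over a large prime field

def make_shares(value, n_parties, rng):
    """Split `value` into n_parties random shares that sum to it modulo PRIME."""
    shares = [rng.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Each bank's confirmed fraud-case count (private; never revealed directly).
fraud_counts = {"bank_a": 412, "bank_b": 97, "bank_c": 233}

rng = random.Random(42)
n = len(fraud_counts)

# Every bank splits its count into one share per participant.
all_shares = {bank: make_shares(count, n, rng)
              for bank, count in fraud_counts.items()}

# Participant i holds one share from each bank and publishes only the sum of
# what it holds; a single share reveals nothing about any bank's count.
partial_sums = [sum(all_shares[bank][i] for bank in fraud_counts) % PRIME
                for i in range(n)]

# Adding the published partial sums reconstructs the sector-wide total,
# and only the total.
total = sum(partial_sums) % PRIME
print(total)  # 742
```

Production systems use hardened protocols (secure multi-party computation frameworks, homomorphic encryption) rather than this bare sketch, but the principle is the same: collective analysis without pooling raw personal data.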
That is the way forward. Effective defence against AI-driven crime demands a united front, not isolated systems. Banks, authorities, telecoms and tech platforms must share warnings in real time and act before the fraud reaches consumers. Yet collaboration across sectors remains too fragmented.
With every day that passes without a coordinated defence, the criminals’ lead grows. Only by cooperating – across competitors and sectors – can we crack the criminals’ algorithms, close the loopholes, and preserve trust in the financial system.
Otherwise, the costs will continue to mount: money lost, identities exploited, and trust eroded in the very institutions meant to protect us.
This article was first published in Danish in Børsen.