Elliot Rose, head of cyber security at PA Consulting, comments in InfoSecurity Magazine’s article on autonomous technology, machine learning and what happens when bots “go bad”.
Elliot says that AI systems suffer from several unresolved vulnerabilities which criminals can exploit to create new opportunities for attacks.
“Machine learning algorithms like those in self-driving cars create an opportunity to cause crashes by presenting the cars with misinformation. Military systems could be misled in a way that could lead to a friendly fire incident,” Elliot notes.
He adds that AI systems are susceptible to attack in a number of ways. “Data poisoning introduces training data that causes a machine learning system to make mistakes,” he continues. “Adversarial attacks provide inputs designed to be misclassified by machine learning systems, such as causing an autonomous vehicle to misclassify a stop sign. Attackers can also exploit flaws in the design of autonomous systems’ goals.”
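The adversarial attack Elliot describes can be illustrated with a minimal sketch. The snippet below uses a toy linear classifier with made-up weights (all names and numbers here are hypothetical, purely for illustration, not from the article or any real system) and applies a fast-gradient-sign-style perturbation: each input feature is nudged slightly in the direction that increases the model's loss, flipping the prediction even though the input has barely changed.

```python
import math

# Hypothetical toy linear classifier standing in for a machine-learning model.
w = [2.0, -3.0, 1.0]   # illustrative model weights
b = 0.0                # bias

def predict(x):
    """Return the model's probability for the positive class (e.g. 'stop sign')."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-score))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

# A clean input, correctly classified as the positive class.
x = [1.0, 0.5, 2.0]
p_clean = predict(x)   # > 0.5, so classified as positive

# Fast-gradient-sign step: for cross-entropy loss with true label y = 1,
# the gradient of the loss w.r.t. input feature i is (p - y) * w[i].
# The attacker shifts every feature by epsilon in that gradient's direction.
y = 1.0
eps = 0.5
grad = [(p_clean - y) * wi for wi in w]
x_adv = [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

p_adv = predict(x_adv)  # < 0.5, so the perturbed input is now misclassified
```

A small, uniformly bounded change to each feature is enough to flip the classification; against image classifiers the same idea produces stop-sign perturbations that are nearly invisible to humans.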
Elliot warns that AI-enabled impersonation is a new threat, with systems now able to mimic individual voices. “Significant progress in developing speech synthesis that learns to imitate individuals’ voices opens up new methods of spreading disinformation and impersonating others,” he explains.
The article goes on to note that just as AI speeds up legitimate activity, it creates opportunities for criminals to increase the effectiveness of their attacks.
Elliot says that spear phishing attacks, which use personalised messages to extract sensitive information or money from individuals, require a significant amount of effort and expertise.
“AI could automate the identification of suitable targets, research their social and professional networks, and then generate messages in the right language. This could enable the mass production of these attacks. AI could also be used to increase the speed at which attackers identify code vulnerabilities and trends,” he says.