AI fuels social engineering but isn't yet revolutionizing hacking
Security leaders in the public and private sectors have fretted for years about the impact of AI on cybercrime, but Intel 471’s report concluded that hackers aren’t rushing to completely overhaul their techniques to incorporate AI.
“Although AI is often touted as a game‑changer for the social‑engineering landscape, in the context of phishing, most threat actors still lean on [phishing-as-a-service] platforms and off‑the‑shelf kits and use AI primarily for content drafting and localization – not for true automation or innovation,” researchers wrote.
The report cited three reasons for this phenomenon: computational limitations, the difficulty of integrating AI into hacking tools and the continued effectiveness of existing tactics.
Incorporating AI into cyberattacks "involves training or configuring models, automating them within an attack infrastructure, integrating them with delivery systems and devising methods to evade detection," Intel 471 said, all of which take time away from hackers' profitable work. As a result, researchers wrote, cybercriminals favor "plug-and-play phishing kits" that "are easier to implement, [are] faster to deploy and have a proven track record of success."
Still, hackers are finding generative AI useful in multiple ways, including audio deepfakes that impersonate executives, AI-powered call centers that automate scams, video deepfakes that fool job interviewers and AI-powered voice bots that solicit victims' multifactor authentication codes and credit-card numbers. Intel 471's report mentioned one call center that used three AI models, including one from Google and another from OpenAI. The report also described a cybercriminal advertising an AI voice bot service that claimed to have successfully stolen data from 10% of victims.
At the moment, however, “there is limited evidence of AI-driven tools circulating in underground markets,” Intel 471 said, “and discussions among threat actors rarely reference the operational use of generative AI.”
The company concluded that "practical adoption" of the technology by cybercriminals "is still in its infancy," with widespread use hinging on "a decrease in the costs of model hosting and the emergence of state-of-the-art AI kits comparable to today's popular PhaaS offers." Still, it also predicted "more deepfake-enabled impersonation calls" targeting business leaders and an AI-fueled disinformation surge "during elections, geopolitical flash points and social justice debates."
Cybersecurity Dive




