Deep fake attacks expected to be next major threat to businesses
Deep fake-driven cyber attacks are set to become more popular in the near future as artificial intelligence (AI) technology becomes more widely used, security experts at Cisco warned this week.
Such attacks could involve fake videos of companies’ CEOs being sent to employees, telling them to conduct wire transfers, for example.
Deep fake technology involves training an AI program on large amounts of data so that it learns how a given individual looks and sounds when saying certain words, including accurate intonation and speech pauses.
“Well, your targets are those that have public personas, because you need lots of training footage to do this,” said Nick Biasini, head of outreach at Cisco Talos. “So it’d be much easier to pick your CEO, go after the CEO, because they’re on video constantly, and they’re talking constantly. You could use that to easily make a video of them that all of a sudden your CEO is calling you, it looks like your CEO sounds like your CEO, and they’re telling you to do a wire transfer.”
“There literally is a threshold of how much data you need to establish a ground truth to model the voiceprint and once that model is sufficient, shove whatever you want through it,” said TK Keanini, VP of security architecture and CTO at Cisco Secure.
Keanini also said that social norms could become “super weird” if such attacks became more popular. He gave the example of a family member calling a loved one: awareness of this kind of attack may mean additional questions have to be asked just to check that the person on the other end is real. In this sense, it’s seen as an evolution of the phishing attacks we know today, with a layer of suspicion attached to communication from specific people.
Fears around the use of deep fake technology in the cyber security landscape have been present for a number of years. Trend Micro revealed that such attacks were on its list of top cyber threats for the future as far back as 2019, when it presented to delegates of CloudSec.
When asked if deep fake use in cyber security was simply a gimmick that would never materialise, Keanini said: “It’s definitely real. It doesn’t take much to fake the backgrounds, it’s not that much further to fake the foreground”.
“And as we move more and more to [hybrid working] collaboration, everybody’s on video conferencing now, so it makes it even easier to launch those types of attacks than it would have been before,” said Biasini.
The pair revealed their expectations during a discussion about emerging cyber threats, chief among which was the idea that social engineering tactics would become more sophisticated and more pervasive.
Speaking at Cisco Live, JJ Cummings, managing principal of threat intelligence and interdiction at Cisco, said that foreign adversaries, specifically, were using increasingly sophisticated social engineering tactics on victims, based on the cases Cisco Talos has seen.
“One of the things that we started to see and one of the groups that we’re tracking, since at least September of 2021, is very directed, very effective social engineering,” said Cummings.
“[It involves] making phone calls to specific strategically targeted individuals within an organisation, convincing those individuals that they’re members of IT, or some support staff, and those individuals are doing one of two things: possibly giving up a password, certainly accepting a multifactor authentication push to their device, letting the bad guy in because the bad guy’s stolen the password.”
Biasini said that social engineering should be one of the biggest concerns for businesses over the coming years, adding that because the security industry is getting better at stopping systems from being exploited, attackers will turn to people instead.
Deep fake technology, he said, is what’s going to make the threat “exponentially worse”, adding: “People have a hard enough time not trusting stuff that they read online; just wait until they’re having to not trust their eyes and their ears when they’re watching people say the things that they’re saying.”
© Future Publishing