Hackers get around AI with flooding, poisoning and social engineering


19 December 2016

Machine learning technologies can help companies spot suspicious user behaviours, malicious software, and fraudulent purchases, but even as the defensive technologies are getting better, attackers are finding ways to get around them.

Many defensive systems need to be tuned, or tune themselves, in order to appropriately respond to possible threats.

Smoke alarms that go off each time someone microwaves popcorn get replaced with less sensitive ones, or are moved farther away from the kitchen. Old-school crowbar-and-ski-mask crooks already know this.

“If there’s a motion detector and I ride my bike by innocently and set off their alarm, and do that every day for a month, they’ll either turn the motion detector off or recalibrate it,” said Steve Grobman, security CTO, Intel. “That gives me the opportunity to break in.”

When the same approach is used against machine learning systems, it is called flooding, he said.

The thing to remember is that cyberdefense is not like, say, predicting the weather.

“If you’re using AI to better track hurricanes, as your accuracy evolves, the laws of physics don’t suddenly say, ‘we’re going to change the way water evaporates’,” he said. “In cybersecurity, there’s a human on the other end who has the objective to make the model fail.”

With flooding, attackers ramp up benign-looking signals, sometimes gradually, until they can slip in under the cover of what now looks like legitimate activity. Alternatively, a distributed denial of service attack can simply tie up defensive resources.
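The bike-past-the-motion-detector trick can be sketched in a few lines. This is a hypothetical toy, not any real product: a detector that recalibrates its baseline from recent activity can be desensitised by an attacker who slowly inflates the background noise until a real spike no longer stands out.

```python
# Toy "flooding" sketch: an adaptive detector alarms on event rates well
# above its recent baseline. All thresholds and numbers are illustrative.

class AdaptiveDetector:
    """Flags any event rate above (recent mean + margin)."""

    def __init__(self, window=10, margin=20):
        self.history = []
        self.window = window
        self.margin = margin

    def observe(self, rate):
        recent = self.history[-self.window:]
        baseline = sum(recent) / len(recent) if recent else 0
        self.history.append(rate)
        return rate > baseline + self.margin  # True = alarm

# Against normal traffic, an attack spike of 60 events trips the alarm.
detector = AdaptiveDetector()
for rate in [10, 12, 11, 9, 10]:
    detector.observe(rate)
assert detector.observe(60) is True

# Flooding: the attacker inflates "normal" a little at a time...
flooded = AdaptiveDetector()
for rate in [10, 20, 30, 40, 50, 55, 58, 60, 60, 60]:
    flooded.observe(rate)
# ...until the same 60-event attack slips under the recalibrated baseline.
assert flooded.observe(60) is False
```

The same desensitisation happens whether a human turns the alarm off or a self-tuning model quietly raises its own threshold.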

To deal with this, companies need to look beyond just data analysis.

Human ingenuity
“We’re dealing with a changing landscape, and machine learning and AI can only go so far in dealing with these issues and we’ll need some human ingenuity,” said Zulfikar Ramzan, CTO, RSA Security. “And it’s not enough to just have a data science background — you need an intersection of data science and domain expertise.”

For now, at least, it takes human expertise to understand that the smoke detector was too close to the microwave, or that the guy riding his bike past the house every night at 02:00 and throwing a rock to set off the alarm is someone to be wary of.

In cybersecurity terms, that requires understanding how the business works, and whether particular changes in behaviour make sense or are cause for suspicion.

Similarly, domain expertise can help defenders spot attempts to manipulate the data sets that are being used to train machine learning systems.

Malware writers might create a large number of legitimate applications that share the characteristics of the malicious software that they plan to write. Rogue employees might adjust their behaviour so that when they carry out their nefarious actions they do not get picked up as suspicious.
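The malware-writer scenario above is a form of training-set poisoning. A minimal sketch, with entirely made-up feature values: if each application is reduced to a single "suspiciousness" score and a nearest-centroid classifier is trained on labelled samples, seeding the benign class with legitimate apps engineered to resemble the planned malware drags the benign centroid toward the malware's feature region.

```python
# Toy training-set poisoning sketch with a nearest-centroid classifier.
# Scores and labels are invented for illustration only.

def centroid(xs):
    return sum(xs) / len(xs)

def classify(x, benign, malicious):
    """Label x by whichever class centroid is nearer."""
    if abs(x - centroid(benign)) < abs(x - centroid(malicious)):
        return "benign"
    return "malicious"

benign = [1.0, 2.0, 1.5, 2.5]   # ordinary applications
malicious = [9.0, 8.5, 9.5]     # known malware samples

# The classifier correctly flags the malware the attacker plans to release.
upcoming_malware = 6.0
assert classify(upcoming_malware, benign, malicious) == "malicious"

# Poisoning: submit many legitimate apps engineered to share the planned
# malware's characteristics (scores near 6.0), all labelled benign.
poisoned_benign = benign + [5.5, 6.0, 6.5] * 5

# The benign centroid has drifted toward the malware's feature region,
# so the real malware is now misclassified.
assert classify(upcoming_malware, poisoned_benign, malicious) == "benign"
```

Real classifiers use far richer features, but the failure mode is the same: the model faithfully learns whatever the training data says, poisoned or not.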

It is the old “garbage in, garbage out” problem, said David Molnar, IEEE member and senior researcher at Microsoft.

Security pros need to have a strategy in place for figuring out whether an attacker is attempting to trick an AI into making wrong decisions, he said. “If you did make the wrong decision based on bad data, how long would it take you to find out?”

Human judgment will play a big role here, said Elizabeth Lawler, CEO and co-founder at security firm Conjur. “There’s no magic bullet here.”

In particular, companies need to be careful not to set up systems, get them running, and then forget about them. “Things drift over time,” she said.

Checking to see whether systems are miscalibrated can be a routine and tedious job, especially if employees have forgotten how the systems work. And companies might not be able to afford multiple systems that approach problems from different directions and check up on one another.
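One way to routinise that checking, sketched here with illustrative numbers and a hypothetical tolerance, is to compare a detector's recent alert rate against its historical baseline and flag divergence in either direction: an alert rate that quietly drops to zero can matter as much as one that explodes.

```python
# Toy drift check: flag when a detector's recent alert rate diverges from
# its historical baseline. The 50% relative tolerance is illustrative.

def alert_rate(alerts, events):
    return alerts / events if events else 0.0

def drift_check(baseline_rate, recent_rate, tolerance=0.5):
    """True when the recent rate has moved more than `tolerance`
    (relative) away from the baseline, in either direction."""
    if baseline_rate == 0:
        return recent_rate > 0
    return abs(recent_rate - baseline_rate) / baseline_rate > tolerance

baseline = alert_rate(alerts=50, events=10_000)            # 0.5% historically

assert drift_check(baseline, alert_rate(60, 10_000)) is False   # normal noise
assert drift_check(baseline, alert_rate(2, 10_000)) is True     # gone quiet
assert drift_check(baseline, alert_rate(400, 10_000)) is True   # alarm storm
```

A check like this does not say why the system drifted, only that a human should go and look, which is exactly the routine job described above.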

This is a good area in which to consider a managed security services provider, she added, one with expertise in those particular systems, and plenty of opportunities to learn the tricks that the bad guys are using to get around them.

“A managed service would be awesome for this particular domain, because you’d have a broader set of data than [from] just one institution’s,” she said.

Old tactics
Although the machine learning systems might be new, the tactics used against them are evolutions of tried-and-true methods, said Dale Meredith, author and cybersecurity trainer at Pluralsight.

“Flooding and poisoning — that’s what they did with routers and firewalls,” he said.

Another old-school technique that will continue to work is social engineering, he added. It does not matter how good the AI is if there is someone in headquarters who can flip a switch and turn it off.

“The users are always going to be the weakest link no matter what we put in place,” he said.


IDG News Service
