Thinking machines and the malware fight back

20 September 2017

Artificial intelligence (AI) and machine learning (ML) have been flavour of the month in technology for some time now, with new, advanced algorithms engaged in processing data for all kinds of applications, from the trivial if interesting, such as Microsoft’s AI-enabled Twitter experiment, to the very serious, notably healthcare and law enforcement.

Having endured the AI winter, the technology is back, even making the pages of the mainstream press, albeit typically as wonder stories – or predictions of the end of work.

The reason you hear about AI so much is that there have been some significant wins in deep learning, which has made practical progress. The reason for that is that the world has moved along and there is now sufficient processing power, storage capacity and data out there, Barry O’Sullivan, UCC

Giant man-made brains and the handing out of P45s aside, the promise of artificial intelligence today is more pedestrian, though far from insignificant. And one area where the techniques of AI and ML are beginning to enjoy widespread adoption is in information security.

Practical applications
Professor Barry O’Sullivan, director of the Insight Centre for Data Analytics at University College Cork, said that AI and ML have practical applications in information security, but that discussion, particularly in the mainstream press, can be misleading.

“We have to be careful about a lot of things [when discussing AI]. There’s two types of hype going on around AI: a hyping of the downsides and negatives – killer robots and job replacement, for instance.”

It is not that AI is not changing the industry, even society, he said; it is that both hype and hysteria obscure the reality of the development of new techniques and of gradual improvements to existing methodologies.

“It’s certainly true that there are different kinds of jobs that can be replaced, but the idea that AI is coming over the hill and will make us all unemployed is overstated,” he said.

“On the flip side, the reason you hear about it so much is that there have been some significant wins in deep learning, which has made practical progress. The reason for that is that the world has moved along and there is now sufficient processing power, storage capacity and data out there.”

AI conference
O’Sullivan was speaking to TechPro from the International Joint Conference on Artificial Intelligence in Australia, the world’s largest AI conference, where he and Michael Madden, senior lecturer in IT at NUI Galway, were giving the conference’s first ever workshop on AI in information security.

Interest levels were high, he said, with researchers from across the globe interested in developing a greater understanding of the techniques that can be deployed.

We use IBM AI as part of our service and it takes a lot of the repetitive work away from our analysts, allowing them to go into deep dives. It’s not going to take over the world, but it’s a new layer of defence and a vital one because the attackers are becoming more and more sophisticated, John Ryan, Zinopy

Industry is no different, and most security firms now offer AI- and ML-based approaches to information security, though the majority of firms do not base their marketing on the fact that they employ the techniques.

“The AI and ML hype is not something we go leading on,” said Paul Hogan, chief technology officer of Ward Solutions.

Threat investigators
Ward Solutions does use AI and ML as part of its operations, however, including to assist the work of threat investigators.

“There’s been a lot of talk about this in the last twelve to eighteen months. There’s [both] a lot of hype and there’s a lot of good work being done,” said Hogan.

According to Hogan, some of what is being described as artificial intelligence is a rehash of what has already been done, but there is also discussion about whether it will be a game changer. The key is to identify what is truly new, he said.

One key area of innovation is behaviour recognition, a technique based on the development of new and more powerful cognitive algorithms. This is qualitatively different from traditional rules-based techniques as it seeks to make sense of behaviour rather than simply work through a checklist.

“A lot of AV [anti-virus] and network products will look at patterns, but failed log-ins happening a thousand times a day being noticed isn’t machine learning, it’s just invoking a rule. On the flip side, we have seen good attempts at ML using advanced maths, and if you look at what they do it’s more than pattern recognition, it’s behaviour recognition,” he said.
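
To make that distinction concrete, the sketch below contrasts a fixed rule with a simple statistical baseline learned from a host’s own history. It is an illustration only: the single feature (failed log-ins per day), the static limit and the z-score cut-off are assumptions for the example, not any vendor’s implementation.

```python
# Illustrative contrast between a static rule and a learned behavioural baseline.
from statistics import mean, stdev

FAILED_LOGIN_RULE_LIMIT = 1000  # the kind of fixed threshold Hogan describes


def rule_based_alert(failed_logins_today: int) -> bool:
    """Fires whenever a hard-coded threshold is crossed; no learning involved."""
    return failed_logins_today > FAILED_LOGIN_RULE_LIMIT


def behaviour_based_alert(history: list[int], failed_logins_today: int,
                          z_cutoff: float = 3.0) -> bool:
    """Fires when today's count is unusual relative to this host's own past,
    so 'normal' is learned from data rather than written as a rule."""
    if len(history) < 2:
        return False  # not enough history to model normal behaviour yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return failed_logins_today != mu
    return abs(failed_logins_today - mu) / sigma > z_cutoff


# 40 failed log-ins never trips the static rule, but it is highly unusual
# for a host that normally sees two or three a day.
print(rule_based_alert(40))                        # False
print(behaviour_based_alert([2, 3, 2, 4, 3], 40))  # True
```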

One really good example of AI is IBM’s Watson. IBM’s QRadar looks at a whole corpus of data, bad IP addresses, white papers and blogs, and effectively acts like an investigator, Paul Hogan, Ward Solutions

“What AV tries to do is look at a piece of malware for a signature, but if it’s a zero day attack or polymorphic, where it changes every so often, it can’t detect that. A cognitive tool can recognise when tools that are not known try to do something that is not normal behaviour.

“The second thing is to use tools that help threat investigators. There is so much information coming in that people can’t cope with it. One really good example of AI is IBM’s Watson. IBM’s QRadar looks at a whole corpus of data, bad IP addresses, white papers and blogs, and effectively acts like an investigator.

“That’s more AI than ML,” he said.

True intelligence
The distinction Hogan raises, that between AI and ML, is not always entirely clear as the two areas are related, but he and others say that it is an important one.

“The important thing about deep learning is that it’s just a technique: it’s an input-output thing, but it tells us nothing about intelligence,” said UCC’s Barry O’Sullivan.

“It’s not intelligence in a Turing sense. For example, it can be used to identify faces, but it doesn’t understand the concept of a face the way a two-year-old child does. That’s not to criticise it, but just to explain what it can do in a cognitive sense.”

Machine learning, in particular, has been doing very well in security, said O’Sullivan.

ML in security
“If you look at Darktrace, they’re a really good example of the power of machine learning coupled with data: they can identify behaviours that are considered abnormal, and that has great power as you don’t have to characterise what an attack looks like.”

Machine learning’s main advantage is its cognitive basis, which puts it well ahead of traditional rules-based techniques.

We’re seeing people investing in tighter controls at the perimeter, not just trying to stop the known attacks but also the unknown attacks. A combination of machine learning, sandboxing and behaviour analytics are the things we see people investing in, Aatish Pattni, Check Point

“In the security industry, one of the big challenges is how you deal with zero day events – attacks you haven’t previously seen. Techniques that are rules-based can be slow to pick it up as someone has to sit down and describe how it works,” he said.
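
A minimal sketch of that idea, learning what normal looks like from data and flagging outliers without ever describing an attack, might use an off-the-shelf anomaly detector. The features and the choice of scikit-learn’s IsolationForest here are assumptions for illustration, not a description of Darktrace’s or any other vendor’s product.

```python
# Unsupervised anomaly detection: fit on normal behaviour only, flag outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-host features: [connections per hour, MB sent out, distinct ports]
normal_traffic = rng.normal(loc=[120, 35, 8], scale=[15, 5, 2], size=(500, 3))

# Learn what "normal" looks like for this network; no attack signatures involved.
model = IsolationForest(contamination=0.02, random_state=0).fit(normal_traffic)

# A previously unseen pattern (e.g. sudden mass exfiltration over odd ports)
# needs no rule describing it in advance.
suspicious = np.array([[400, 900, 45]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 flags normal traffic
```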

John Ryan, chief executive of Zinopy, which also deploys IBM’s Watson-based QRadar as part of its information security solutions, said that both artificial intelligence and machine learning have useful functions in the here and now.

In reality
“It is actually happening. Normally, we hear about these things long before they become real, but this is real. It’s not entirely mainstream yet, but we’re extracting value from it [and] the usefulness can only deepen with time.

“We’re an IBM partner and AI is one of its main strategies for the future of computing in general,” said Ryan.

“We use it as part of our service and it takes a lot of the repetitive work away from our analysts, allowing them to go into deep dives. It’s not going to take over the world, but it’s a new layer of defence and a vital one because the attackers are becoming more and more sophisticated,” he said.

Ryan said that it can also be deployed to understand user behaviour, particularly when it comes to differentiating between typical and atypical activities on the network, and to do so on an individual basis.

“The other real-life area we see it working is [in] the area of user behaviour analysis: attackers often move laterally within an organisation they have penetrated, so we can classify normal and abnormal behaviour,” he said.
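
A toy version of that kind of per-user baselining might look like the following. The event format and the “never-seen-before host” heuristic are assumptions for the example, not Zinopy’s or IBM’s implementation.

```python
# Per-user behavioural baselining: learn which hosts each account normally
# touches, then flag access to machines outside that baseline as possible
# lateral movement.
from collections import defaultdict


def build_baselines(historic_logins):
    """historic_logins: iterable of (user, host) pairs from a quiet period."""
    baseline = defaultdict(set)
    for user, host in historic_logins:
        baseline[user].add(host)
    return baseline


def flag_lateral_movement(baseline, new_logins):
    """Return log-ins to hosts a given user has never touched before."""
    return [(u, h) for u, h in new_logins if h not in baseline.get(u, set())]


history = [("alice", "hr-01"), ("alice", "mail-01"), ("bob", "build-07")]
today = [("alice", "hr-01"), ("alice", "db-payroll-02"), ("bob", "build-07")]

baseline = build_baselines(history)
print(flag_lateral_movement(baseline, today))
# [('alice', 'db-payroll-02')]: an account suddenly reaching a new server
```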

Grave threat
Aatish Pattni, head of threat prevention for northern Europe at Check Point Software Technologies, said that artificial intelligence and machine learning are becoming more common as the security threat is now so grave and businesses, increasingly connected, are taking it seriously.

“We’re seeing people investing in tighter controls at the perimeter, not just trying to stop the known attacks but also the unknown attacks.

“A combination of machine learning, sandboxing and behaviour analytics are the things we see people investing in,” he said.

“In our core technology we have 28 different engines, including three ML engines. We even have an engine to look specifically for ransomware behaviours,” he said.
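
As an illustration of what a behavioural ransomware check can involve, the sketch below flags a process that rewrites many files with encrypted-looking (high-entropy) content in a short window. The thresholds and the event format are assumptions for the example, not Check Point’s engine.

```python
# Behavioural ransomware heuristic: many high-entropy file writes in a short window.
import math
import os
import time
from collections import Counter


def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed data sits close to 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def looks_like_ransomware(file_writes, window_seconds=60,
                          min_files=50, entropy_cutoff=7.5) -> bool:
    """file_writes: list of (timestamp, written_bytes) events for one process."""
    latest = file_writes[-1][0]
    recent = [data for t, data in file_writes if latest - t <= window_seconds]
    high_entropy = [data for data in recent
                    if shannon_entropy(data) > entropy_cutoff]
    return len(high_entropy) >= min_files


# 60 encrypted-looking writes in the same minute trips the heuristic.
now = time.time()
writes = [(now, os.urandom(4096)) for _ in range(60)]
print(looks_like_ransomware(writes))  # True under these illustrative thresholds
```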

There is also a big difference between detection capabilities and prevention capabilities, said Pattni, and this is something that he feels needs to be better understood if businesses are to make the most of AI and ML in security.

“A lot of people are merely looking at detection, but it’s only a small leap to prevention and it provides a lot more security. A preventative solution will stop the attack from happening,” he said.

When AI attacks
One of the great truths of information technology has been the arms race: whether it was the war over software piracy in the past or information security today, the very same technologies are typically available to both sides.

“As we start to use cognitive technology for the defences, the attacker will start using it for attacks,” said John Ryan of Zinopy.

If the possibility of AI-based cyberattacks seems unlikely, Aatish Pattni has some words of warning.

“On the attack side if you look at the everyday threat actor they don’t have the resources, but what worries me is that we do see nation state actors – and they would have those resources.

“It’s almost like the technology from a Formula One car coming to the cars on the road,” he said.

“We will maybe not soon be seeing machine learning as such, but adaptive attacks – attacks that understand the victim – are a very real possibility,” he said.

Higher stakes
Ryan of Zinopy said that one of the issues driving the move toward AI and ML today, particularly as a preventative solution, is that the stakes are higher than they have ever been.

“Financial transactions and banking are key, but the big scary part is the jump between the virtual and the physical. Computers now manage our electricity generation, our hospitals, even right down to the amount of medicine we receive on a drip, which means that they can be susceptible to attack.

“That’s the big fear coming down the line,” said Ryan.

UCC’s O’Sullivan reiterated that the threat is real and that the potential is there for attackers, too, to take advantage of these technologies.

“There are cyber hacking competitions, most academic and research based, and there are two sides: AI is being used to defend against attacks, but there are also teams looking at how to use AI to exploit vulnerabilities.

“We are likely to see more of this and the bad guys will be building AI systems to attack,” he said.
