Here’s how machine learning works best in identifying security threats

Jon Oliver, Trend Micro

2 February 2018

Machine learning (ML) appears to have suddenly emerged in security, and almost as quickly it has assumed the mantle of a new “next-generation” tool to tackle cybercrime. In fact, the story is a little more nuanced than that.

Some well-established security companies, including Trend Micro, have worked with machine learning for more than a decade. Until recently, they tended not to discuss this work openly, mainly because of understandable concerns that the technology, applied on its own, flagged too many false positives.

More recently, two things have happened, and I believe they are correlated. One is the rise of ransomware like CryptoLocker around 2014. The other is the emergence of next-generation security vendors who promote machine learning as the “new” control companies must have in order to tackle advanced threats. Many organisations working with established security companies will, in fact, have been applying machine learning in their solutions for many years.

Ransomware changed the game because it made timing a critical part of malware detection. Other types of malware might try to steal intellectual property or start a spambot. Catching them an hour or so after first infection — having vastly minimised the chance of false positives first — may have been an acceptable trade-off. With ransomware, however, there is no room for manoeuvre. The moment it encrypts files and locks victims out of their data, it starts to cause financial damage and business disruption. Catching it at ‘time zero’ is critical.

“The moment ransomware encrypts files and locks victims out of their data, it starts to cause financial damage and business disruption. Catching it at ‘time zero’ is critical”

Around the same time as ransomware started becoming prominent, ‘next-generation’ vendors began actively promoting machine learning in their endpoint security products. It makes sense to harness artificial systems to recognise malware in a climate where threats are multiplying faster than ever. But getting this right, and minimising false positive errors in the process, is not trivial.

The fact is, machine learning is ideal for tackling those critical ‘time zero’ threats like ransomware, but on its own it still leaves the possibility of false positives. Machine learning is best used after other security methods have been applied and further metadata about the context of the file has been collected. It is excellent for processing files whose context suggests they are more suspicious, such as those that arrive via email, downloads or infected USB sticks. Other security layers, a dynamic whitelist and contextual data can then be used to make sure that the machine learning is given minimal opportunity to mistakenly flag good files as false positives.
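To make the idea concrete, here is a minimal sketch of context-based gating. All names (`should_run_ml`, the origin labels, the whitelist contents) are hypothetical illustrations, not any vendor's actual implementation: a file only reaches the machine learning classifier if it is not on the whitelist and its arrival context looks risky.

```python
# Hypothetical sketch: gate which files reach the ML classifier using
# context metadata and a whitelist, so ML only sees higher-risk files.

# Arrival contexts treated as more suspicious (illustrative labels)
SUSPICIOUS_ORIGINS = {"email_attachment", "web_download", "usb"}

def should_run_ml(file_hash: str, origin: str, whitelist: set) -> bool:
    """Skip ML for whitelisted files; run it only for risky arrival contexts."""
    if file_hash in whitelist:
        return False  # trusted file: zero chance of an ML false positive
    return origin in SUSPICIOUS_ORIGINS

# Illustrative usage with made-up hashes
whitelist = {"abc123"}  # hashes of known-good files
print(should_run_ml("abc123", "email_attachment", whitelist))  # False
print(should_run_ml("f00d99", "email_attachment", whitelist))  # True
print(should_run_ml("f00d99", "preinstalled", whitelist))      # False
```

The design point is that the whitelist check comes first: a good file that is already known never becomes an ML false positive, regardless of how it arrived.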

The volume of good and bad files to scan is increasing exponentially. Clearly, we need to augment our current systems of detection to cope with this level of activity. Historically, malware detection has looked in the rear-view mirror: the industry needed a virus sample before it could develop an antidote. But many malware samples we see today are unique. For example, a new instance of the Cerber ransomware is created every 15 seconds. That cybercriminals consider this effort worthwhile tells us how profitable ransomware must be. The thing is, we have seen a similar effect at work with benign software too. The growth of DevOps and the cloud model means that new versions of legitimate software, such as Google or Dropbox updates, appear on an almost hourly basis.

Driving by looking backwards is impossible when the terrain changes so fast. We need machine learning, and ultimately, artificial intelligence, to change that paradigm by protecting against threats we have not yet seen.

The more extreme marketing hype around machine learning would have you believe that its amazing formulas give no false positives; that machine learning is a magic black box that provides all the security you need in a single layer. Our take is that machine learning is very useful when it interoperates carefully with other layers to mitigate risks like false positives, or to enhance whitelists.

Our product set uses XGen security, which is Trend Micro’s blended approach of defence techniques that includes – but is not limited to – machine learning. When scanning files, the toolset applies traditional techniques to identify known good and known malicious files. Further pruning is done using the context of the files and metadata about the files, leaving a small subset. We process these remaining suspicious files with machine learning, so we only apply it to a fraction of the total number of files.
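The layering described above can be sketched as a simple triage pipeline. This is an illustrative outline, not Trend Micro’s actual implementation: the function names, data shapes and verdict labels are all assumptions. Cheap, high-certainty layers run first, and the machine learning classifier only sees what remains.

```python
# Illustrative layered triage: whitelist, then signatures, then context
# pruning, with ML reserved for the small suspicious remainder.
def triage(files, known_good, known_bad, is_suspicious_context, ml_classify):
    verdicts = {}
    for f in files:
        h = f["hash"]
        if h in known_good:
            verdicts[h] = "clean"          # layer 1: whitelist of known-good hashes
        elif h in known_bad:
            verdicts[h] = "malicious"      # layer 2: traditional signature match
        elif not is_suspicious_context(f):
            verdicts[h] = "clean"          # layer 3: context/metadata pruning
        else:
            verdicts[h] = ml_classify(f)   # layer 4: ML on the remaining subset
    return verdicts

# Toy usage with made-up hashes; the lambda stands in for a trained model
files = [
    {"hash": "a", "origin": "preinstalled"},
    {"hash": "b", "origin": "email"},
    {"hash": "c", "origin": "email"},
]
result = triage(
    files,
    known_good={"a"},
    known_bad={"c"},
    is_suspicious_context=lambda f: f["origin"] == "email",
    ml_classify=lambda f: "malicious",
)
# result == {"a": "clean", "b": "malicious", "c": "malicious"}
```

Note that only file “b” ever reaches the classifier; the earlier layers settle the rest, which is exactly how the false-positive exposure of the ML stage is kept small.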

Gartner has validated our approach, placing Trend Micro highest and furthest in the Leaders quadrant for “ability to execute” and “completeness of vision” in its 2017 Magic Quadrant for Endpoint Protection Platforms.

I am a huge advocate for machine learning, but no one solution will solve all security problems — it never has. We have already seen some cybercriminals experimenting with modifying programs to beat machine learning. That is another reason a defence-in-depth approach matters: a multi-layered defensive posture is far harder to attack and far less likely to let anything malicious slip through.

 

Jonathan Oliver, senior data scientist and director, Trend Micro
