Predictions 2017


15 January 2017

While 2016 could certainly be described as interesting times, this year will see the first real moves on Brexit, the beginning of the Trump Presidency and the countdown to GDPR. With so many uncertainties, we look at some predictions for what to expect in the world of technology and enterprise IT.


AI for all? Oh dear!

There will be consequences to the widespread application of artificial intelligence and machine learning technologies

2016 was something of a momentous year when it came to world events, from the shock Brexit vote to the somewhat unexpected success of Donald Trump’s US presidential bid.


All that notwithstanding, it also saw a rise in data breaches that culminated in the Yahoo admission of a breach in which more than one billion user records were compromised.

At home too, we heard the revelations that Meath County Council had fallen victim to a fraud in which somewhere in the region of €4.3 million disappeared into a Hong Kong bank account, albeit a frozen one.

Notably, these hacks do not appear to involve any frightening new technologies or techniques; rather, they were carefully planned, well-orchestrated attacks that used startlingly familiar methods, such as phishing and social engineering.

However, another major trend in 2016 was so-called ‘fake news’, which dogged the US election. This is where people either just made stuff up or, on finding out that some news story was based on incorrect information or misinterpretation, circulated it anyway.

One assertion by UK-based news outlet The Register is that artificial intelligence (AI) fell within that spectrum of fake news, as it has not only been overblown in reporting, but has been singularly unsuccessful in delivering beyond what it terms ‘novelty’ applications.

“Almost everything you read about AI is fake news. The AI coverage comes from a media willing itself into a mind of a three-year-old child, in order to be impressed,” wrote Andrew Orlowski.

“It is inevitable the technologies and methodologies of AI will become a source of interest, investment and experimentation for the cybercriminal fraternity”

While there may be a grain of truth in Orlowski’s interpretation, what is very real is the fact that AI development has now reached a point where it is being brought to the masses, and that has consequences.

Salesforce declared its ambition to ‘democratise’ AI through its Einstein analytics package.

At the Dreamforce conference, CEO Marc Benioff said “you know the world has been changing,” noting that cloud gives access to the new world of AI technology. The company says that Salesforce Einstein brings machine learning, predictive analytics and natural language processing to the entire Salesforce platform.

The intention behind this, says the company, is for ‘Einstein to help Salesforce customers use AI to change the way they work, making them smarter and enabling them to do their best work’.

“I know that sounds like magic,” said Benioff, “but so did it when we said we were going to give the cloud to everyone.”

Up to now, most AI development took place in the various x-project/Skunkworks/black ops facilities of the likes of IBM, various research organisations and academia. But 2016 also saw the maturing of private companies, particularly in the information security space, applying AI technologies and principles to keep up with the flood of new, sophisticated and combined technologies in malware, advanced persistent threats (APT) and more. The likes of Darktrace, Cylance and others analyse traffic to establish what normal looks like, so that anything abnormal can be further examined to establish whether mitigation steps need to be taken. As with all analytics, the more that can be examined and analysed, the better it gets.
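The core of this “learn what normal looks like” approach can be sketched very simply. The following is an illustration only, not Darktrace’s or any vendor’s actual method, and the traffic figures are invented: a baseline is learned from observed activity, and anything deviating far enough from it is flagged for further examination.

```python
# Minimal sketch of baseline anomaly detection: learn what "normal"
# traffic looks like, then flag anything that deviates sharply from it.
# Illustrative only; real products use far richer models and features.
from statistics import mean, stdev

def find_anomalies(samples, threshold=2.5):
    """Flag samples more than `threshold` standard deviations from
    the learned baseline (a simple z-score test)."""
    mu = mean(samples)
    sigma = stdev(samples)
    return [x for x in samples if sigma and abs(x - mu) / sigma > threshold]

# Hourly outbound connection counts for a host: mostly steady, one spike.
traffic = [102, 98, 110, 95, 105, 99, 101, 97, 2500, 103]
print(find_anomalies(traffic))  # -> [2500]
```

Note that with a sample this small, a single extreme outlier inflates the standard deviation itself, which is why the threshold here is set below the textbook value of 3; production systems use more robust statistics for exactly this reason.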

A group of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), working with machine-learning start-up PatternEx, has developed a system called AI2 that can detect malicious activity with 85% accuracy by reviewing data from more than 3.6 billion lines of logs per day. Another system, from Deep Instinct, claims a 98.8% accuracy level, given certain thresholds of data with which to work.
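Systems of this kind pair automated detection with human feedback: an analyst labels flagged log entries, and a model trained on those labels scores new ones. The toy sketch below illustrates that supervised step with a nearest-centroid classifier; the features, labels and figures are all invented, and bear no relation to AI2’s actual design.

```python
# Toy sketch of the supervised half of an analyst-in-the-loop detector:
# train on analyst-labelled log features, then classify new entries.
# Features and labels are invented for illustration.

def train_centroids(examples):
    """Compute one feature centroid per class from labelled examples:
    [(features, label), ...] -> {label: centroid}."""
    sums, counts = {}, {}
    for feats, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, f in enumerate(feats):
            acc[i] += f
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(centroids, feats):
    """Assign the label of the nearest centroid (squared Euclidean)."""
    return min(centroids,
               key=lambda lab: sum((f - c) ** 2
                                   for f, c in zip(feats, centroids[lab])))

# Invented features per log entry: (failed logins, megabytes sent out)
labelled = [((1, 5), "benign"), ((2, 8), "benign"),
            ((40, 900), "malicious"), ((55, 750), "malicious")]
model = train_centroids(labelled)
print(classify(model, (3, 6)))     # -> benign
print(classify(model, (48, 800)))  # -> malicious
```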

But AI is now seeping out into all sorts of areas, from recruitment to customer service and even online dating — the very democratisation that Salesforce is aiming for.

We have written many times about the phenomenon whereby blackhat hackers mirror the legitimate business world in ways that enable cybercrime-as-a-service. They provide support, testimonials, escrow services and substantial discounts for volume business. Therefore, it is inevitable the technologies and methodologies of AI will become a source of interest, investment and experimentation for the cybercriminal fraternity.

This, in all likelihood, will not mean chatbots and better IVRs to respond to your query as to why your DDoS attack on the local pizza place’s web site has failed after they destroyed your order yet again.

Trend Micro reported during the year that the SIMDA botnet control software already has certain AI capabilities that allow it to perform autonomously to a certain extent. Though this is not yet a widespread scenario, it is likely to become a more common one.

The company asserts that “today’s malware is undoubtedly menacing, but the bulk of it is not self-aware in the way that, say, a game-playing AI with machine learning capabilities is. As such, it requires human guidance and, frequently, the presence of command-and-control infrastructure.”

But there is an inevitability about where all this is going, according to David Palmer, director of technology at the aforementioned Darktrace. This will mean not only AI-powered malware, but a blended, hybrid strategy leveraging various types and media.

“We’ll see coordinated action,” said Palmer. “So, imagine ransomware waiting until it’s spread across a number of areas of the network before it suddenly takes action.”

“I’m convinced we’ll see the extortion of assets as well as data. So, factory equipment, MRI scanners in hospitals, retail equipment — stuff that you’d pay to have back online because you can’t actually function as a business without it. Data’s one thing and you can back that up, but if your machine stops working then you’re not going to be making any more money.”

The sentiments are echoed by Sophos in its Naked Security blog.

“Attacks increasingly bring together multiple technical and social elements, and reflect careful, lengthy probing of the victim organisation’s network. Attackers compromise multiple servers and workstations long before they start to steal data or act aggressively.”

Sophos seems to think that ultimately, there will still be human experts overseeing the efforts, ready to exploit the information gained to the fullest.

“Closely managed by experts, these attacks are strategic, not tactical, and can cause far more damage. This is a very different world to the pre-programmed and automated malware payloads we used to see – patient and evading detection.”

AI, then, could be used to develop autonomous, intelligent malware that can judge a situation and act accordingly, waiting for an optimal set of circumstances in which to strike.

The fact that the legitimate industry has had exclusivity in AI development up to now, partly due to its expense and complexity, is some comfort for those at risk, but from previous experience that is no reason for complacency.

Blackhats have a very good track record of refining and streamlining technologies to leverage purely what they need, producing very lean results.

Sophos states it bleakly — “Cybersecurity companies will come to a rude awakening when it becomes clear that they don’t have a monopoly on machine learning in 2017.”

“Machine learning has done far more than any human could to help the security industry become more predictive and less reactive in the fight against malware. By analysing gigantic datasets and huge catalogues of good and bad files, these systems can recognise patterns that assist information security pros in rooting out never-before-seen threats.”

In 2017, Sophos warns, “advanced cybercriminals will turn the tables and begin leveraging machine learning themselves to cook up new and improved malware to challenge machine learning defences.”

You’ve been warned.

Paul Hearns, associate publisher and editor, Mediateam
