Basic failures undermine security postures


10 February 2016

Paul Hearns

The year just passed was not a good one for enterprise security. It opened with US healthcare company Premera disclosing the loss of 11 million personal data records in March, Ashley Madison followed in July with 37 million, and the same month saw the US Office of Personnel Management admit a breach affecting 21.5 million.

These latter two were particularly interesting because of reports that the two data dumps were cross-referenced, with government employees then targeted for blackmail over their use of the adultery site. Big data indeed.

To put these incidents in perspective, a report from edgescan, a software-as-a-service vulnerability scanner, found that nearly two thirds (63%) of all vulnerabilities discovered in 2015 could have been mitigated through a combination of patch, configuration and component management.

“We are still not maintaining our systems in a secure manner,” says edgescan’s report. “This is not difficult to do but can be time consuming. A major cause of this is a lack of awareness and of an adequate patch management process and policy.”

The report goes on to say that patching and component management issues relate to web applications, frameworks and hosting servers running insecure instances and services. In terms of web applications, many instances of insecure frameworks, plug-ins and components supporting cryptography, HTTP, language processors and parsers contained exploitable vulnerabilities. In addition, the operating systems and services supporting such components also had known vulnerabilities (CVEs).
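To make the component management point concrete, the check involved is simple in principle: compare the versions of the components an application actually runs against the versions with known CVEs. A minimal sketch in Python follows; the inventory is hypothetical, and in practice the advisory data would come from a CVE feed rather than a hand-maintained dictionary.

```python
# Minimal sketch of component management: compare a component inventory
# against known-vulnerable versions. The data here are placeholders;
# in practice the inventory would come from a build manifest and the
# advisories from a CVE feed.

# (name, version) pairs for the components an application ships with (hypothetical)
inventory = [
    ("openssl", "1.0.1e"),
    ("jquery", "1.8.2"),
    ("struts", "2.3.20"),
]

# Known-vulnerable versions mapped to CVE identifiers.
# CVE-2014-0160 (Heartbleed) genuinely affects OpenSSL 1.0.1e.
advisories = {
    ("openssl", "1.0.1e"): ["CVE-2014-0160"],
}

def audit(inventory, advisories):
    """Return every inventoried component with a known vulnerability."""
    findings = []
    for name, version in inventory:
        cves = advisories.get((name, version))
        if cves:
            findings.append((name, version, cves))
    return findings

for name, version, cves in audit(inventory, advisories):
    print("{} {}: known vulnerabilities {}".format(name, version, ", ".join(cves)))
```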

This is staggering. As we all cower before the fear of being the target of an advanced persistent threat (APT), the shadow of a nation state or a spear phishing attack, it would appear as if the majority of organisations are instead doing the equivalent of leaving the back door of the castle not only unlocked but wide open with an invitation to pillage.

The edgescan report says nearly the same proportion of servers (62%) had a cryptographic vulnerability.

“This in effect may result in data privacy and eavesdropping attacks against users’ data,” it said. “This is a cause of concern as our economy relies on privacy and protection of sensitive information for many reasons. Such weaknesses are regularly exploited by both cyber criminals and nation state agents in order to get a competitive edge in business or aid in identity and financial theft.”
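Checking for the most basic of these weaknesses needs no specialist tooling. As a rough illustration, Python’s standard ssl module will report the protocol and cipher a server actually negotiates; the hostname below is a placeholder.

```python
import socket
import ssl

# Rough illustration: connect to a server and report the TLS protocol
# version and cipher suite it actually negotiates. A server that will
# only negotiate an old protocol or a weak cipher is a remediation
# candidate. The hostname below is a placeholder.
HOST = "example.com"
PORT = 443

context = ssl.create_default_context()

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("negotiated protocol:", tls.version())    # e.g. 'TLSv1.2'
        print("negotiated cipher:  ", tls.cipher()[0])  # e.g. 'ECDHE-RSA-AES128-GCM-SHA256'
```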

Added to this, 15.1% of all assets scanned had high or critical risk vulnerabilities, defined as being easily exploitable, remotely exploitable or, in some cases, capable of affecting the application and network layers combined. The root causes were coding errors, configuration flaws and out-of-date or absent patching.

It would appear there is no need for APTs, rooms full of hackers or expensive zero-day vulnerabilities, as most organisations do not patch known vulnerabilities and are so unaware of their actual estate and configurations that vulnerabilities cannot even be identified.

This is borne out by a recent talk from a rather unusual information security professional at the Usenix Enigma conference. Rob Joyce is the head of the Tailored Access Operations group at the US National Security Agency, which means he leads the agency’s foreign hacking team.

“A lot of people think that nation states are running their operations on zero days, but it’s not that common,” said Joyce. “For big corporate networks persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive.”

Bringing this a little closer to home, at the Irish Reporting and Information Security Service conference (IRISScon), the annual report summary said that, based on incident reporting, Irish organisations were failing on the basics of information security — in 2014!

Fast forward to 2015, and the same conference heard Brian Honan, of BH Consulting and director of the IRISS Computer Emergency Response Team (CERT), say that the breakdown of incident causes showed basic measures would have prevented many of the reported attacks — attacks which rose fourfold in 2015 compared to the previous year. Contributory causes included poor passwords, missing patches, known vulnerabilities in applications and web platforms, out-of-date antivirus and a lack of monitoring.

These are the basics of security, warned Honan, and yet they are still being ignored, leaving Irish organisations open to attack and to the threat of both financial and reputational losses.

Why is there such a blind spot when it comes to patching and vulnerability management?

Having been in that situation myself, though admittedly some years ago now, I can say there are all too human reasons for this.

Firstly, there is the fog of information: such a tide of updates from all quarters, warning of various vulnerabilities, attacks, exploits and vectors, that it is often difficult to know where to start. The process of evaluation, essentially triage, to determine which holes must be plugged first is challenging to say the least. Despite many services providing just this value, it remains an ordeal for infosec professionals to determine, on a daily or weekly basis, how best to protect their organisation.
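Even a crude, explicit triage rule beats paralysis. The sketch below shows the idea with made-up findings and a deliberately simple scoring scheme, severity weighted by whether the asset is internet-facing; a real programme would use CVSS scores and asset criticality ratings.

```python
# Minimal sketch of vulnerability triage: rank findings so a team knows
# where to start. The findings and the scoring rule are hypothetical.

findings = [
    {"asset": "www-frontend", "issue": "outdated TLS configuration", "severity": 7.0, "internet_facing": True},
    {"asset": "hr-intranet",  "issue": "missing OS patches",         "severity": 9.0, "internet_facing": False},
    {"asset": "api-gateway",  "issue": "vulnerable framework",       "severity": 8.5, "internet_facing": True},
]

def priority(finding):
    # Double the weight of anything reachable from the internet.
    exposure = 2.0 if finding["internet_facing"] else 1.0
    return finding["severity"] * exposure

# Highest-priority findings first
for f in sorted(findings, key=priority, reverse=True):
    print("{:5.1f}  {:14s}  {}".format(priority(f), f["asset"], f["issue"]))
```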

In the past, tools to handle patch management, vulnerability scanning and general posture checking were often unwieldy, complicated and difficult to use. That is no longer an excuse: tools have evolved and many, like the edgescan service referenced above, now allow the task to be farmed out to experts. So this is increasingly a poor excuse for the oversights reported in the likes of the IRISS security incident reports.

However, another failure that is often highlighted is a lack of active monitoring. Where monitoring is employed at all, it tends to be historical or, at best, passive. Log analysis can identify anomalous activity, but it is by its nature after the fact. Again, in the current market there is really no excuse for this, as any number of solutions provide active monitoring of traffic, activity, transactions and machines to show what is going on in the here and now.
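To illustrate the difference: passive analysis reads logs after the fact, while an active monitor watches them as they are written. The toy Python loop below follows an authentication log and raises an alert when failed logins cluster in a short window; the log path, match string and threshold are all assumptions for illustration.

```python
import os
import time
from collections import deque

# Toy active-monitoring loop: follow an authentication log as it grows,
# like `tail -f`, and alert when failed logins cluster in a short window.
# The log path, match string and threshold are assumptions.
LOG_PATH = "/var/log/auth.log"
WINDOW_SECONDS = 60
THRESHOLD = 10

failures = deque()  # timestamps of recent failed logins

with open(LOG_PATH) as log:
    log.seek(0, os.SEEK_END)  # start at the end: only new entries matter
    while True:
        line = log.readline()
        if not line:
            time.sleep(0.5)
            continue
        if "Failed password" in line:  # sshd's failed-login message
            now = time.time()
            failures.append(now)
            # Discard entries that have aged out of the window
            while failures and now - failures[0] > WINDOW_SECONDS:
                failures.popleft()
            if len(failures) >= THRESHOLD:
                print("ALERT: {} failed logins in the last {}s".format(
                    len(failures), WINDOW_SECONDS))
```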

However, even the best monitored infrastructure is still vulnerable if there is insufficient oversight of, and action taken on, all the monitoring data produced. There is no point in tracking a vulnerability or attack if no one is going to do anything about it — another consistent issue identified in various security incident reports.

 
