Another massive DDoS internet blackout could be imminent

Image: Stockfresh

1 March 2018

A massive internet blackout, similar to the Dyn DNS outage in 2016, could easily happen again, despite relatively low-cost countermeasures, according to a new study out of Harvard University.

The DDoS attack on Dyn took many major web sites offline for most of a day, including Twitter, PayPal, Reddit, Amazon, and Netflix. Millions of compromised IoT devices, belonging to the Mirai botnet, flooded Dyn’s DNS service with up to 1.2Tbps of bogus traffic, making it impossible to respond to genuine DNS requests for their customers’ web sites.

The Dyn attack did not affect the PayPal or Twitter servers in any way, but these sites were unreachable for the vast majority of humans who prefer not to memorise IP addresses when sending money to scammers or posting on social media.

The attackers were not nation-state actors but rather garden-variety criminals with an axe to grind. “The perpetrators were most likely hackers mad at Dyn for helping Brian Krebs identify — and the FBI arrest — two Israeli hackers who were running a DDoS-for-hire ring,” Bruce Schneier wrote at the time.

The growing legion of insecure IoT devices — insecure out of the box, and often unpatchable — means that the next DDoS attack on the domain name system could be much more severe. The centralisation of DNS providers is largely to blame.

When single points of failure fail
DNS was designed to be distributed, but the growing centralisation of DNS creates single points of failure, the authors note. “The attack’s devastating success highlights many of the ways in which a concentrated DNS space with relatively little provider diversification on the part of domain administrators can leave even large firms vulnerable to service disruptions.”

How did we get here, one may ask? It turns out that our decade-long love affair with other people’s computers, also known as the cloud, has resulted in a concentration of internet infrastructure that the designers of DNS never anticipated.

In ye olden days, companies managed their own DNS in house. That tied up people running servers in an office, people who could otherwise be building the next great thing, such as Uber. While older, more established companies are still more likely to host their own DNS, the emergence of cloud as infrastructure means that newer companies are outsourcing everything to the cloud, including DNS.

“The concentration of DNS services into a small number of hands… exposes single points of failure that weren’t present under the more distributed DNS paradigm of yesteryear (one in which enterprises most often hosted their own DNS servers on-site),” said John Bowers, one of the report’s co-authors. “The Dyn attack offers a perfect illustration of this concentration of risk — a single DDoS attack brought down a significant fraction of the internet by targeting a provider used by dozens of high profile web sites and CDNs [content delivery networks].”

The shocking part of this report is that despite the clear danger this concentration poses, too few enterprises have bothered to implement any secondary DNS.

Doomed to repeat
The Dyn attack got a lot of media coverage. Cassandras preached about the need to diversify DNS, but few in the audience bothered to listen, the numbers show. “It seems that the lessons of the Dyn attack were learned primarily by those who suffered from them directly,” the report notes.

Before the 2016 attack, 92.2% of the domains studied used name servers from just one provider. Six months later, in May 2017, that figure had dropped only to 87.3%, and most of the domains that diversified were Dyn customers who had experienced the outage.

Even Dyn itself, now owned by Oracle, offers a secondary DNS service and encourages its customers to use it. In a brief prepared statement, Dyn’s director of architecture Andrew Sullivan told CSO that “web site operators need diversity all through their stack, and to select components like DNS services, web firewalls, and DDoS protection that support diversity.”

One difficulty of diversifying external DNS providers, the report notes, is that external DNS is often bundled with other services, like a CDN and DDoS protection. CloudFlare has more than 15% market share as a DNS provider for the domains studied, yet the company’s DDoS protection service, the report notes, “[makes] it impossible for domains to register DNS name servers managed by other providers.”

The report notes a trend among new domains to use cloud-based platforms that include DNS as one of a suite of service offerings. Amazon AWS can withstand any DDoS attack, you might think, but remember that time a typo by an Amazon employee brought down S3? Both accidents and adversaries threaten single points of failure.

You wouldn’t build a bridge without redundancy, so why would you build your DNS infrastructure without it?

Making DNS redundant
The first thing you should do is figure out what your current setup is, if you don’t already know. Check your name servers, replacing example.com with your own domain:

dig +short NS example.com

“If the names that come back are in your own domain, that means you’re doing it yourself,” said Andy Ellis, CSO of CDN provider Akamai. “You should consider whether that’s the right call; for most companies it isn’t. If you already have a CDN provider, there is a good chance DNS service is available either with your existing contract or as an add-on; that’s a fast way to add, or switch, a provider.”
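Going a step beyond eyeballing the output, a short script can count how many distinct providers sit behind a zone’s NS records. This is only a rough sketch: the name-server hostnames below are hardcoded illustrations (in practice you would feed in the output of dig +short NS for your own domain), and treating the last two DNS labels as "the provider" is a crude heuristic that misbehaves on multi-part TLDs such as co.uk.

```shell
#!/bin/sh
# Illustrative audit: how many distinct providers serve this zone?
# Hardcoded example hosts; in practice use: dig +short NS yourdomain.example
ns_list="ns1.p16.dynect.net pdns1.ultradns.net dns1.registrar-servers.com"

# Crude heuristic: take the last two labels of each host as the provider name.
providers=$(for ns in $ns_list; do
    echo "$ns" | awk -F. '{ print $(NF-1) "." $NF }'
done | sort -u)

echo "$providers"
echo "distinct providers: $(echo "$providers" | wc -l)"
```

A count of one is exactly the single point of failure the Harvard report warns about; with these three hosts the script reports three distinct providers.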

While low-traffic sites typically list only two name servers, DNS permits up to eight. Use them all, Ellis advises, in a 6:2 configuration: six name servers with one provider and two with a second. Organisations wanting additional redundancy can add self-hosting in a 5:2:1 configuration, with one of the eight servers run in house.
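In zone-file terms, a 6:2 split simply means the NS record set names servers at two independent providers. The fragment below is a hypothetical illustration; the provider hostnames are invented, and real delegations must also be configured at the registrar.

```
; Illustrative 6:2 NS record set (hostnames are hypothetical)
example.com.  86400  IN  NS  ns1.primary-dns.example.
example.com.  86400  IN  NS  ns2.primary-dns.example.
example.com.  86400  IN  NS  ns3.primary-dns.example.
example.com.  86400  IN  NS  ns4.primary-dns.example.
example.com.  86400  IN  NS  ns5.primary-dns.example.
example.com.  86400  IN  NS  ns6.primary-dns.example.
example.com.  86400  IN  NS  ns1.secondary-dns.example.
example.com.  86400  IN  NS  ns2.secondary-dns.example.
```

If the primary provider is knocked offline, resolvers fall back to the secondary provider’s two servers and the zone stays reachable.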

What is striking about this problem is that it is hardly new. RFC 2182 laid down the law on secondary DNS best practices in 1997, the report notes. “A major reason for having multiple servers for each zone,” RFC 2182 tells us, “is to allow information from the zone to be available widely and reliably to clients throughout the Internet, that is, throughout the world, even when one server is unavailable or unreachable.”

While some of the RFC suggestions are now out of date — swapping secondary zones with another organisation now seems a bit antiquated — the fundamental principles of avoiding central points of failure and ensuring redundancy haven’t changed. “Provider redundancy both gives you scale, and ensures that issues with one provider don’t take your business offline,” Ellis says.

Diversify, diversify, diversify
Central points of failure on the Internet are a big no-no, especially when any idiot renting a botnet can take major web sites offline for the better part of a day. Mitigating that risk by diversifying your DNS smells a lot like due diligence these days.

“It is not that difficult to do, and it does not cost much, and it is good practice,” Shane Greenstein, professor at Harvard Business School, says. “To be sure, it is a hassle for a very big company, but that is no excuse. All cyber security is a hassle, and this one is pretty minor in comparison to other preventative actions.”



IDG News Service
