Google gives a glimpse of its software-defined data centre networks

23 June 2015

Google has been building its own software-defined data centre networks for 10 years because traditional gear can’t handle the scale of what are essentially warehouse-sized computers.

The company has not said much before about that home-grown infrastructure, but one of its networking chiefs provided some details at the Open Networking Summit and in a blog post.

The current network design, which powers all of Google’s data centres, has a maximum capacity of 1.13 petabits per second. That’s more than 100 times the capacity of the first data centre network Google developed 10 years ago. The network is a hierarchical design with three tiers of switches, all built from the same commodity chips, and it is controlled not by standard protocols but by software that treats all the switches as one.
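
As a rough illustration of what treating all the switches as one means in practice, the toy sketch below shows a central program computing every switch’s forwarding table from a single global view of the topology, rather than each box running its own distributed routing protocol. The switch names, links and code are invented for the example and are not Google’s software.

```python
# Toy illustration of centralised control: one program computes forwarding
# tables for every switch from a single global view of the topology, rather
# than each box running its own distributed routing protocol. The switch
# names and links here are invented for the example.
from collections import deque

# Global view of a tiny made-up fabric: switch -> directly connected switches.
topology = {
    "tor1": ["agg1"],
    "tor2": ["agg1"],
    "agg1": ["tor1", "tor2", "spine1"],
    "spine1": ["agg1"],
}

def next_hops_from(src):
    """Breadth-first search from src; return {destination: next hop}."""
    table, visited = {}, {src}
    queue = deque((nbr, nbr) for nbr in topology[src])
    while queue:
        node, first_hop = queue.popleft()
        if node in visited:
            continue
        visited.add(node)
        table[node] = first_hop
        for nbr in topology[node]:
            queue.append((nbr, first_hop))
    return table

# The "controller" builds every switch's table from the same global view and,
# in a real system, would push the entries down to the hardware.
forwarding_tables = {switch: next_hops_from(switch) for switch in topology}
print(forwarding_tables["tor1"])  # {'agg1': 'agg1', 'tor2': 'agg1', 'spine1': 'agg1'}
```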

Networking is critical in Google’s data centres, where tasks are distributed across pools of computing and storage, said Amin Vahdat, Google Fellow and networking technical lead. The network is what lets Google make the best use of all those components. But the need for network capacity in the company’s data centres has grown so fast that conventional routers and switches can’t keep up.

“The amount of bandwidth that we have to deliver to our servers is outpacing even Moore’s Law,” Vahdat said. Over the past six years, it’s grown by a factor of 50. In addition to keeping up with computing power, the networks will need ever higher performance to take advantage of fast storage technologies using flash and non-volatile memory, he said.
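
For a back-of-the-envelope sense of that comparison (the doubling periods below are common rules of thumb, not figures from the article), Moore’s Law-style growth would multiply capacity by roughly 8 to 16 times over six years, against the 50-fold growth in bandwidth demand Vahdat cited.

```python
# Rough comparison of rule-of-thumb Moore's Law growth with the 50x
# bandwidth growth over six years cited by Vahdat. The doubling periods
# are assumptions, not figures from Google.
years = 6
observed_growth = 50

for doubling_period in (1.5, 2.0):  # assumed years per doubling
    projected = 2 ** (years / doubling_period)
    print(f"Doubling every {doubling_period} years: ~{projected:.0f}x "
          f"vs {observed_growth}x observed demand growth")
```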

Back when Google was using traditional gear from vendors, the size of the network was defined by the biggest router the company could buy. When a bigger one came along, the network had to be rebuilt, Vahdat said. Eventually, that approach stopped working.

“We could not buy, for any price, a data centre network that would meet the requirements of our distributed systems,” Vahdat said. Managing 1,000 individual network boxes made Google’s operations more complex, and replacing a whole data centre’s network was too disruptive.

So the company started building its own networks using generic hardware, centrally controlled by software. It used a so-called Clos topology, a mesh architecture with multiple paths between devices, and equipment built with merchant silicon, the kinds of chips that generic white-box vendors use. The software stack that controls it is Google’s own but works through the open-source OpenFlow protocol.
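
To illustrate the multipath property of a Clos fabric, the short sketch below uses made-up switch counts and link speeds (not Google’s published topology) for a simple two-stage leaf-spine fabric: every spine switch provides an independent two-hop route between any pair of leaves, so capacity and fault tolerance scale with the number of spines.

```python
# Toy two-stage Clos (leaf-spine) fabric. The switch counts and link speed
# are illustrative assumptions, not Google's published topology.
num_spines = 4    # spine switches, each wired to every leaf
num_leaves = 8    # leaf (top-of-rack) switches
link_gbps = 40    # per-link speed

# Every spine offers one two-hop path between any pair of leaves, so traffic
# can be spread across all spines and the fabric survives a spine failure.
equal_cost_paths = num_spines
pairwise_capacity = equal_cost_paths * link_gbps
total_uplink_capacity = num_leaves * num_spines * link_gbps

print(f"{equal_cost_paths} equal-cost paths between any two leaves")
print(f"Up to {pairwise_capacity} Gbit/s between a pair of leaves")
print(f"{total_uplink_capacity} Gbit/s of total leaf-to-spine capacity")
```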

Google started with a project called Firehose 1.0, which it couldn’t implement in production but learned from, Vahdat said. At the time there were no good routing protocols that could take advantage of multiple paths between destinations, and no good open-source networking stacks, so Google developed its own. The company is now using a fifth-generation homegrown network, called Jupiter, with 40-Gigabit Ethernet connections and a hierarchy of top-of-rack, aggregation and spine switches.
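
To give a sense of scale for the two headline figures in the article, the simple arithmetic below relates the 1.13 petabit-per-second capacity quoted earlier to the 40-Gigabit Ethernet links Jupiter uses; it says nothing about Jupiter’s actual port counts or wiring.

```python
# Simple arithmetic relating the figures quoted in the article; this is not
# a description of Jupiter's actual port counts or wiring.
fabric_pbps = 1.13   # quoted maximum capacity of the current design
link_gbps = 40       # 40-Gigabit Ethernet links

fabric_gbps = fabric_pbps * 1_000_000          # petabits/s -> gigabits/s
equivalent_links = fabric_gbps / link_gbps

print(f"{fabric_gbps:,.0f} Gbit/s of total capacity")
print(f"Equivalent to about {equivalent_links:,.0f} links running flat out "
      f"at {link_gbps} Gbit/s")
```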

The design lets Google upgrade its networks without disrupting a data centre’s operation, Vahdat said. “I have to be constantly refreshing my infrastructure, upgrading the network, having the old live with the new.”

Google is now opening up the network technology it took a decade to develop so other developers can use it.

“What we’re really hoping for is that the next great service can leverage this infrastructure and the networking that goes along with it, without having to invent it,” Vahdat said.


Stephen Lawson, IDG News Service
