Microsoft Azure networking speeds up with custom hardware

28 September 2016

Networking among virtual machines in Microsoft Azure is going to get a whole lot faster thanks to some new hardware that Microsoft has rolled out across its fleet of data centres.

The company announced on Monday that it has deployed hundreds of thousands of field-programmable gate arrays (FPGAs) in servers across 15 countries and five continents. The chips have been put to use in a variety of first-party Microsoft services, and they are now starting to accelerate networking on the company’s Azure cloud platform.

Machine learning boost
In addition to improving networking speeds, the FPGAs, which sit on custom Microsoft-designed boards connected to Azure servers, can also be used to accelerate machine-learning tasks and other key cloud functionality. Microsoft has not said exactly what the boards contain, other than that they hold an FPGA, static RAM chips and hardened digital signal processors.

Microsoft’s deployment of the programmable hardware matters because the once-reliable growth in CPU speeds continues to slow. FPGAs can provide an extra boost in processing power for the specific tasks they have been configured to handle, cutting the time it takes to do things like manage the flow of network traffic or translate text.

With Microsoft looking to squeeze every ounce of performance out of the computing hardware and data-centre footprint it already has as it competes with other players in the cloud market, the FPGAs could give the company an advantage.

Accelerated Networking, a new feature that entered beta on Monday, is one example of what the FPGA deployment enables. Between two VMs that both have it enabled, it offers speeds as high as 25Gbps and latency of about 100 microseconds, at no extra charge.
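
As an illustration, here is a minimal sketch of how a customer might request the feature when creating a network interface with the Azure management SDK for Python (azure-mgmt-network). The SDK surface shown is more recent than the beta described in this article, and the subscription, resource group, NIC name, region and subnet ID are all placeholders.

    # Minimal sketch (not Microsoft's internal implementation): create a NIC with
    # Accelerated Networking requested, using the Azure management SDK for Python.
    # Requires: pip install azure-identity azure-mgmt-network
    # All identifiers below are placeholders; both VMs' NICs need the flag set
    # to get the accelerated path between them.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import (
        NetworkInterface,
        NetworkInterfaceIPConfiguration,
        Subnet,
    )

    subscription_id = "<subscription-id>"  # placeholder
    client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

    nic_definition = NetworkInterface(
        location="westeurope",                  # placeholder region
        enable_accelerated_networking=True,     # request the FPGA-backed fast path
        ip_configurations=[
            NetworkInterfaceIPConfiguration(
                name="ipconfig1",
                subnet=Subnet(id="<subnet-resource-id>"),  # placeholder subnet ID
            )
        ],
    )

    # Long-running operation: create (or update) the NIC, then wait for the result.
    poller = client.network_interfaces.begin_create_or_update(
        "my-resource-group",     # placeholder resource group
        "my-accelerated-nic",    # placeholder NIC name
        nic_definition,
    )
    nic = poller.result()
    print("Accelerated networking enabled:", nic.enable_accelerated_networking)

The key detail is the enable_accelerated_networking flag; everything else is ordinary network-interface creation.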

Oracle unveiling
The Accelerated Networking announcement comes after Oracle unveiled its second-generation infrastructure-as-a-service offering at OpenWorld, which also features off-server, software-defined networking to drive improved performance.

Azure CTO Mark Russinovich said the FPGAs were key to helping Azure take full advantage of the networking hardware it has put into its data centres. While that hardware could support 40Gbps speeds, actually moving all of that traffic while applying the software-defined networking rules attached to it took a massive amount of CPU power.

“That’s just not economically viable,” he said in an interview. “Why take those CPUs away from what we can sell to customers in virtual machines, when we could potentially have that off-loaded into FPGA? They could serve that purpose as well as future purposes, and get us familiar with FPGAs in our data centre. It became a pretty clear win for us.”

Codename Catapult
The project is the brainchild of Doug Burger, a distinguished engineer in Microsoft Research’s New Experiences and Technologies (NExT) group. Burger started the FPGA project, codenamed Catapult, in 2010. The team began by working with Bing and then expanded to Azure, and that work led to the second and current design of Microsoft’s FPGA hardware layout.
