Hyperconvergence gathering pace for ‘18

1 December 2017

Hyperconvergence is on a roll.

Enterprises are shifting storage investments from legacy architectures to software-defined systems in an effort to achieve greater agility, easier provisioning and lower administrative costs. Hyperconverged systems, which combine storage, compute and network functionality in a single virtualised solution, are on their radars.

Enterprise interest in hyperconverged systems as potential replacements for legacy SAN and NAS storage systems has in turn inspired major storage vendors to make hyperconvergence plays of their own, acquiring start-ups and building out their offerings.

All that attention has made an impact: the largest segment of software-defined storage is hyperconverged infrastructure (HCI), which boasts a five-year CAGR of 26.6% and revenues that are forecast to hit $7.15 billion (€6 billion) in 2021, according to research firm IDC.

“HCI is the fastest growing market of all the multi-billion-dollar storage segments,” says Eric Burgener, research director for storage at IDC.

The hype
Hyperconverged platforms include a hypervisor for virtualised computing, software-defined storage, and virtualised networking, and they typically run on standard off-the-shelf servers. Multiple nodes can be clustered to create pools of shared compute and storage resources, designed for convenient consumption; companies can start small and grow resources as needed. The use of commodity hardware, supported by a single vendor, yields an infrastructure that’s designed to be more flexible and simpler to manage than traditional enterprise storage infrastructure.

Ease of expansion is a key driver of HCI adoption. “When your business grows and it’s time to expand, you just buy an x86 server with some additional storage in it, you connect it to the rest of the hyperconverged infrastructure, and the software handles all of the load balancing,” Burgener says. “It’s very easy to do that, and it’s a single purchase.”

HCI systems were initially targeted at virtual desktop infrastructure (VDI) and other general-purpose workloads with fairly predictable resource requirements. Over time they have grown from being specialty solutions for VDI into generally scalable platforms for databases, commercial applications, collaboration, file and print services, and more.

Small and midsize enterprises have driven most of the adoption of hyperconverged systems, but that may be changing as the technology matures. One development that is getting the attention of large enterprises is the ability to independently scale the compute and storage capacity, Burgener says.

Disadvantages
“One of the disadvantages of hyperconverged infrastructure, because you buy it all as a single node, is that you really can’t adjust the amount of performance you need versus the amount of capacity,” he says. In a smaller environment, a mismatch between the two rarely matters. But in a large environment, a company can wind up paying for processing power it does not need just to get the storage capacity it does.

The solution is to allow companies to shift their HCI deployment to a disaggregated model as their workloads require it, without having to do a data migration.

“In larger environments, it’s very attractive to be able to independently scale the compute and storage capacity,” Burgener says. With a disaggregated model, “if you have a workload that needs a lot more storage but doesn’t need a lot more performance, then you don’t end up paying for CPUs to get the storage capacity that you need.”
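Burgener’s point is easy to see with rough numbers. The sketch below uses invented node prices, capacities and core counts (not IDC or vendor figures) to compare adding storage capacity with full hyperconverged nodes, where compute comes along whether it is needed or not, against a disaggregated model where storage-only nodes can be added on their own.

# Illustrative only: all prices, capacities and core counts below are assumptions,
# not figures from IDC or any vendor.
HCI_NODE = {"price": 25_000, "tb": 20, "cores": 32}      # compute + storage bought together
STORAGE_NODE = {"price": 12_000, "tb": 20, "cores": 0}   # storage-only node (disaggregated)

def cost_to_add_capacity(extra_tb, node):
    """Nodes needed (rounded up) and total spend to add extra_tb of capacity."""
    nodes = -(-extra_tb // node["tb"])   # ceiling division
    return nodes, nodes * node["price"]

extra_tb = 200  # the workload needs 200 TB more storage, but no more compute

for label, node in (("hyperconverged", HCI_NODE), ("disaggregated", STORAGE_NODE)):
    nodes, spend = cost_to_add_capacity(extra_tb, node)
    print(f"{label:>14}: {nodes} nodes, {spend:,} spend, {nodes * node['cores']} cores bought along")

# hyperconverged: 10 nodes, 250,000 spend, 320 cores bought along
#  disaggregated: 10 nodes, 120,000 spend, 0 cores bought along

On these made-up numbers, meeting a capacity-only requirement with classic hyperconverged nodes costs roughly twice as much and ships 320 processor cores the workload never asked for.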

“One of the things you’re going to see from vendors in 2018 is that they will allow customers to configure their hyperconverged plays either as a true hyperconverged model or as a disaggregated storage model,” he says. “As customers grow larger, they don’t want to lose those guys.”

NVMe over fabrics
A second big development in the HCI world is the ability to create a hyperconverged solution using NVMe over fabrics. Most HCI systems today connect the cluster nodes over Ethernet, which creates a data locality issue as enterprises try to grow their HCI environments. “This is one reason why people don’t buy hyperconverged: When the data set is too big to fit in a single node, and you have to go out to another node to access data, that introduces pretty significant latency,” Burgener says.

Looking ahead, the low latency and high throughput of NVMe over fabrics could go a long way towards eliminating that problem.

“If you can start to attach HCI nodes over NVMe over fabric, and use RDMA—remote direct memory access—now you’re talking about maybe a 5-microsecond latency differential between whether the data is in the same node or has to go talk to another node. And in the big scheme of things, 5 microseconds is nothing,” Burgener says.

NVMe connections could alleviate data locality concerns, assuming everything is sitting within the same campus environment. Addressing those latency concerns will open up hyperconverged infrastructure to workloads with larger data sets, which will be a draw for larger companies, he says.
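Some back-of-the-envelope arithmetic shows why that 5-microsecond figure matters. The latencies below are assumed, illustrative values (roughly 100 microseconds for a local flash read, a few hundred microseconds added for a hop over conventional Ethernet block protocols, and about 5 microseconds added for an NVMe-over-fabrics hop with RDMA), not measurements from any product.

# Rough comparison of local versus remote reads in an HCI cluster.
# All values are assumed, illustrative microsecond figures, not measurements.
LOCAL_FLASH_READ_US = 100     # read served from flash inside the same node
ETHERNET_HOP_US = 300         # assumed extra latency to reach another node over iSCSI/TCP
NVME_OF_RDMA_HOP_US = 5       # assumed extra latency over NVMe over fabrics with RDMA

def remote_read(hop_us):
    remote = LOCAL_FLASH_READ_US + hop_us
    return remote, remote / LOCAL_FLASH_READ_US

for name, hop in (("Ethernet/iSCSI", ETHERNET_HOP_US), ("NVMe-oF + RDMA", NVME_OF_RDMA_HOP_US)):
    total, factor = remote_read(hop)
    print(f"{name}: remote read ~{total} us ({factor:.2f}x local)")

# Ethernet/iSCSI: remote read ~400 us (4.00x local)
# NVMe-oF + RDMA: remote read ~105 us (1.05x local)

On those assumed figures, reaching data on another node over a conventional network multiplies read latency several times over, while an RDMA-connected NVMe fabric makes the remote read nearly indistinguishable from a local one, which is the point Burgener is making.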

“Two of the reasons why [large enterprises] didn’t like to buy HCI in the past are now being addressed with this disaggregated option and NVMe over fabric, which means the larger data set environments could actually be run on this architecture more effectively.”

 

IDG News Service

