Leveraging Windows Server 2016 for hyperconvergence

2 March 2018

With the release of Microsoft Windows Server 2016 a couple of years ago, Microsoft directly entered the hyperconverged infrastructure (HCI) platform space served by organisations like Nutanix, Scale, Cisco, HP, Dell, and others. Unlike those vendors, Microsoft comes at it with a fully software-defined platform rather than hardware and appliances.

The underpinnings
HCI environments are based on the following:

  • Scalable and Shared Compute: The ability to aggregate processing power beyond a traditional two- or four-socket “server” with a finite 24, 32, or 64 cores into an array of four, eight, 16, or more servers whose hundreds of cores can be shared and allocated to workloads as needed.
  • Scalable and Shared Storage: The core storage component of HCI is very similar to the traditional Storage Area Network (SAN) model of the past decade, where dozens of drive subsystems are spanned for high performance and capacity and allocated to workloads as needed.
  • Flexible and Customisable Networking: The networking component of HCI provides virtual networks that isolate traffic and shape communications, optimising workload-to-workload traffic for performance and security.

HCI compute on Windows Server 2016 is based on Hyper-V
A decade ago, there was a constant shootout between Microsoft’s Hyper-V and VMware for virtualisation, but these days the hypervisor itself matters far less. The conversation has shifted from simply running virtual machines to running the entire data centre environment on HCI. While VMware has its own HCI offering, the decision these days largely comes down to the cost of providing core, scalable data centre functionality.

With Microsoft’s HCI riding on top of components that are all included in Windows Server 2016 licensing, the cost per workload and per running virtual machine makes it a low-cost solution.

Hyper-V running on a series of server systems — name brand or white label — forms the backbone of the environment. Host servers can be dynamically added to or removed from the Hyper-V HCI cluster to increase or decrease capacity.
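As a rough sketch of how such a cluster is stood up, the failover clustering cmdlets in Windows Server 2016 can validate a set of hosts, join them into a cluster, and later add capacity. The cluster and node names here are hypothetical, and the commands assume the Failover Clustering feature is installed on each host.

```powershell
# Validate the candidate hosts before clustering (node names are hypothetical)
Test-Cluster -Node "hv-node1", "hv-node2", "hv-node3", "hv-node4"

# Create the Hyper-V failover cluster with no shared storage yet;
# Storage Spaces Direct will supply the storage layer later
New-Cluster -Name "HCI-Cluster" -Node "hv-node1", "hv-node2", "hv-node3", "hv-node4" -NoStorage

# Later, grow capacity dynamically by joining another host to the running cluster
Add-ClusterNode -Cluster "HCI-Cluster" -Name "hv-node5"
```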

Virtual machine instances running on any node in the cluster can be moved to other nodes with no downtime for the workloads, so nodes can be patched, upgraded, updated, removed, or replaced while still providing 24×7 zero-downtime operations.
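A minimal sketch of that zero-downtime maintenance flow, using standard failover clustering cmdlets (the cluster, node, and VM names are hypothetical):

```powershell
# Live-migrate a single VM to another node with no workload downtime
Move-ClusterVirtualMachineRole -Cluster "HCI-Cluster" -Name "sql-vm01" -Node "hv-node2" -MigrationType Live

# Or drain an entire node before patching: its running VMs are
# live-migrated to the remaining nodes automatically
Suspend-ClusterNode -Cluster "HCI-Cluster" -Name "hv-node1" -Drain

# After maintenance, return the node to service
Resume-ClusterNode -Cluster "HCI-Cluster" -Name "hv-node1" -Failback Immediate
```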

Virtual machines running on the latest Windows Server 2016 environment can be Windows systems, Linux systems, or even containers. In the past three years, Microsoft has shaken itself from the “Microsoft only” working environment and now provides equal support for other platforms running on HCI.
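As an illustration of that cross-platform support, a Linux guest can be created on Hyper-V like any Windows VM; the only notable difference is pointing Secure Boot at the UEFI certificate authority template that trusts signed Linux bootloaders. The VM name, path, and sizes below are hypothetical.

```powershell
# Create a Generation 2 VM intended for a Linux guest (names and paths hypothetical)
New-VM -Name "linux01" -MemoryStartupBytes 2GB -Generation 2 `
    -NewVHDPath "C:\ClusterStorage\Volume1\linux01.vhdx" -NewVHDSizeBytes 40GB

# Use the UEFI CA Secure Boot template so signed Linux bootloaders are trusted
Set-VMFirmware -VMName "linux01" -SecureBootTemplate "MicrosoftUEFICertificateAuthority"

Start-VM -Name "linux01"
```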

Software-defined storage in Windows Server 2016
For years we’ve connected servers to storage area networks (SANs), which are merely aggregated and shared hard drives providing storage for network-based workloads. Microsoft has taken the technical concepts of a SAN and built a fully software-defined storage subsystem it calls Storage Spaces Direct.

Hard drives or solid-state drives are combined in various mirroring and striping configurations across multiple physical systems to create a large storage pool. The inclusion of memory caching and solid-state storage provides high IOPS performance, and the distribution across multiple drives and subsystems provides the high availability and redundancy expected of storage systems.

Algorithms built into Storage Spaces Direct prioritise read/write access and caching so the most-demanded data sits on the fastest memory and storage available, while less active data is stored on lower-cost spindle disks, providing a storage layer optimised for both performance and cost.
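Standing up that pooled, tiered storage layer is a short exercise in PowerShell once the cluster exists. This is a sketch of the standard Storage Spaces Direct enablement flow; the volume name and size are hypothetical, and the caching and tiering described above are configured automatically based on the media types found in each node.

```powershell
# Claim all eligible local drives on every cluster node into one
# software-defined pool; cache and capacity tiers are set up automatically
Enable-ClusterStorageSpacesDirect

# Carve a resilient, cluster-wide volume out of the pool
# (friendly name and size are hypothetical)
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMs01" `
    -FileSystem CSVFS_ReFS -Size 2TB
```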

Because the stack is software-defined and software-driven, the underlying storage can be a name-brand storage subsystem, or organisations can run Storage Spaces Direct on their own server and storage configuration. The latter gives OEMs the ability to create lower-cost solutions with all of the optimised disk, redundancy, and caching enterprises need, at a better price point.

Software-defined networking in Windows Server 2016
The final leg of HCI on Windows Server 2016 is the ability to integrate networking between the compute and storage systems. Traditional networking requires servers to communicate through physical network controllers, over hundreds of feet of fibre and wired connections, to and through physical networking appliances. The software-defined networking in Windows Server 2016, by contrast, is contained entirely within the integrated server cluster: virtual networks and virtual switches are all software-controlled right inside the clustered compute and storage systems of the HCI subsystem.
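A minimal sketch of that in-cluster virtual networking, using Hyper-V's virtual switch cmdlets with Switch Embedded Teaming (SET) to converge physical uplinks; the switch, adapter, and VM names are hypothetical.

```powershell
# Create a virtual switch teamed across two physical NICs (SET), so VM,
# storage, and management traffic share converged uplinks (names hypothetical)
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1", "NIC2" -EnableEmbeddedTeaming $true

# Attach a VM to the virtual network; its traffic to sibling VMs on the
# same hosts is switched in software and never touches the physical network
Connect-VMNetworkAdapter -VMName "app-vm01" -SwitchName "SETswitch"
```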

Network communications between a database server, an index server, and an application server never have to leave the virtualised HCI environment. By cutting out the routing of traffic through cables, connectors, wiring closets, and appliances, communication between applications is greatly improved.

The only time network traffic has to leave the HCI environment is to reach external resources or to communicate with users over the internet. Microsoft’s software-defined networking has direct connectivity to traditional top-of-rack switches and can communicate via industry-standard network protocols to route, manage, filter, and shape traffic as needed.

While many organisations still gravitate to hardware vendors that sell ready-built HCI solutions, as many hosting providers have found, IT is about decreasing the cost of providing core infrastructure services. If an organisation can leverage out-of-the-box functionality in Windows Server 2016 to build its own software-defined data centre at a lower cost, that is a solution worth considering.

It is no different from what IT has done for the past two decades: build servers, platforms, and systems itself, optimised for the workloads they support. The options are to move to the cloud and let someone else handle the infrastructure, or to build your own HCI environment in the most cost-effective model possible.

 

IDG News Service
