
Data centre availability should be measured to the edge

(Image: Stockfresh)

1 February 2017

Today’s reliance on data centre services means that measuring availability has become critical to many businesses. However, Schneider Electric says that, based on its research, much of that reliance now falls on what it terms edge computing sites, which are often not included in such evaluations and fall far short of data centre standards.

According to the research, edge computing sites are having a disproportionate effect on the resilience and reliability of digital IT services.

In a white paper entitled “Why Cloud Computing is Requiring us to Rethink Resiliency at the Edge”, Schneider suggests a new methodology is needed, one that measures data centre availability in a way that accounts for the operation of these edge computing sites.

The approach, it says, is based on considering the criticality of every edge site on which a business depends for its IT services, and concludes that greater attention to physical infrastructure in smaller data centres is necessary to improve overall resilience.

“The industry is seeing a change in the way it delivers services to customers”, said Kevin Brown, CTO and SVP, Innovation, IT Division, Schneider Electric. “More businesses are utilising a hybrid-cloud environment in which users in any one company access applications and services that may reside in several data centres, all being of different sizes and with differing levels of availability. This supply chain is only as strong as its weakest link; therefore, the industry has to consider which services are the most business critical and create a secure method for ensuring they remain available to their users.”

The white paper says that larger, centralised, Tier 3 data centres are built to be highly resilient, with multiple levels of redundancy, high standards of security and meticulous monitoring of all critical elements. Further down the chain are smaller regional data centres which, it says, nevertheless have similarly high standards of monitoring and back-up. But at the lowest level, and nearest to the end users, are what Schneider terms ‘micro data centres’, which are often co-located on customers’ premises and are the most susceptible to downtime.

“Smaller data centres are often found on company premises, with little or no security, unorganised racks, no redundancy, no dedicated cooling and little or no DCIM software. These edge sites provide only a minority of the services the business uses but are often of critical importance,” said Brown. “They may include proprietary applications on which the company depends, but also the network infrastructure necessary to connect to outsourced services.”

The white paper proposes that the overall availability of IT services to a business should be calculated as the product of the availability of all data centres providing critical functions.

Although large centralised data centres might have highly resilient uptime figures, typically 99.98% or more, when these are combined with a typical Tier 1 data centre on the edge, whose corresponding benchmark is 99.67%, overall availability is reduced. Further complicating the calculation, the paper argues, is that more people in an organisation may depend on the locally hosted applications, so any downtime at such a site has a relatively high impact on business productivity.
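As a rough illustration of that arithmetic, the sketch below multiplies the availability of each site a service depends on in series, using the article’s benchmark figures; the site labels, and the assumption of strict series dependence, are illustrative rather than drawn from the white paper.

```python
# Illustrative only: compound availability when an IT service depends on
# several sites in series. The percentages are the article's benchmark
# figures; the site labels are hypothetical.

HOURS_PER_YEAR = 8760

sites = {
    "central Tier 3 data centre": 0.9998,     # 99.98% availability
    "edge Tier 1 micro data centre": 0.9967,  # 99.67% availability
}

# If every site must be up for the service to work, availabilities multiply.
overall = 1.0
for availability in sites.values():
    overall *= availability

downtime_hours = (1 - overall) * HOURS_PER_YEAR
print(f"Overall availability: {overall:.4%}")                    # ~99.6501%
print(f"Expected downtime:    {downtime_hours:.1f} hours/year")  # ~30.7
```

On those figures the combined service is noticeably less available than the central data centre on its own, which is the point the paper makes.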

The new scorecard methodology, which factors in systems availability across all sites, combines the number of people impacted, the criticality of each site and annual downtime into a single dashboard that helps identify areas in need of attention.
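A minimal sketch of how such a scorecard might be assembled is shown below; the fields, weighting and example figures are assumptions made for illustration and are not taken from Schneider’s white paper.

```python
# Hypothetical scorecard sketch: the weighting, fields and figures are
# illustrative assumptions, showing how people impacted, site criticality
# and annual downtime could be combined into a single ranked view.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    people_impacted: int          # users who depend on this site
    criticality: int              # 1 (low) .. 5 (business critical)
    annual_downtime_hours: float

def risk_score(site: Site) -> float:
    # Simple illustrative weighting: downtime matters more when more
    # people depend on the site and the site is more critical.
    return site.annual_downtime_hours * site.criticality * site.people_impacted

sites = [
    Site("central Tier 3 data centre", 2000, 5, 1.8),     # ~99.98% uptime
    Site("regional data centre", 800, 4, 8.0),
    Site("on-premises micro data centre", 300, 5, 28.9),  # ~99.67% uptime
]

# Rank sites so the 'dashboard' highlights where attention is needed first.
for site in sorted(sites, key=risk_score, reverse=True):
    print(f"{site.name:32s} risk score = {risk_score(site):,.0f}")
```

Ranked this way, a small on-premises site with modest headline downtime can still top the list once the number of affected users and its criticality are taken into account.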

“We need to rethink the design of the data centre systems at the edge of the network,” said Brown. “As an industry, we have to improve physical security and monitoring, and increase redundancy in power, cooling and networking in micro data centres to improve the overall availability.”

The white paper is available here.

 

TechCentral Reporters
