The future of data centres goes to the edge
9 March 2017
As highlighted in this month’s feature on disappearing data centres (DC), the future of these facilities is far from certain. While their ultimate survival may not be in question, in the foreseeable timeframe their design and utilisation are changing.
Schneider Electric’s senior vice president for innovation and CTO of its IT division thinks that a major part of the future of data centres will be edge facilities that require a different manner of thinking.
Kevin Brown argues that the massive growth in data produced, together with changing patterns of energy consumption, growing populations of people and devices, and tighter carbon controls, will drive the need for greater edge compute capability, much of which currently resides in less than ideal facilities.
Brown cited the example of the Boeing 787 Dreamliner, which is said to produce around 500GB of information per flight. Such information has to be stored and processed, and as more autonomous vehicles and systems come online, there will be a greater need for facilities that can receive, store and make available for processing vast amounts of data without necessarily relying on highly centralised systems.
While the trend towards fewer, more massive data centres is widespread, said Brown, the thinking behind such moves did not anticipate the likes of gaming, the Internet of Things (IoT) and healthcare networks.
In this context, Brown argues that less centralised architectures are necessary in certain instances. He cites Netflix, which found that distributing content from more localised hubs can be cheaper, in overall bandwidth costs, than working from a more centralised architecture.
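The bandwidth argument can be made concrete with a toy cost model (the rates and figures below are illustrative assumptions, not Netflix's actual numbers): the more traffic that is served from nearby regional hubs rather than over long-haul links to a central DC, the lower the total transfer cost.

```python
# Toy comparison of transfer costs: one central DC versus regional hubs.
# All cost figures are hypothetical, chosen only to illustrate the trade-off.

LONG_HAUL_COST = 1.0   # assumed cost per GB served from a distant central DC
LOCAL_COST = 0.2       # assumed cost per GB served from a nearby regional hub

def bandwidth_cost(requests_gb, hub_fraction):
    """Total transfer cost when hub_fraction of traffic is served locally."""
    local = requests_gb * hub_fraction * LOCAL_COST
    central = requests_gb * (1 - hub_fraction) * LONG_HAUL_COST
    return local + central

print(bandwidth_cost(1000, 0.0))            # fully centralised
print(round(bandwidth_cost(1000, 0.9), 2))  # 90% of traffic served at the edge
```

Under these assumed rates, shifting 90% of traffic to local hubs cuts the transfer bill to less than a third of the fully centralised case, which is the shape of the saving Brown describes.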
When properly accommodated, said Brown, edge computing is a “high-performance bridge” to the centralised cloud.
This produces another variation of hybrid infrastructure — “Even the big cloud providers are moving to a hybrid environment,” says Brown, citing the likes of Microsoft, Dropbox and Netflix.
All of this leads to three types of data centres, all of which, he asserts, are mission critical: the centralised DC, the regional DC and the localised or micro DC.
Needs dictate design
The design of these facilities is going to be determined by application needs, he said.
Brown said that while a certain amount of edge computing is already utilised, it often falls far below the facility standards of more centralised DCs, with unsecured racks, little redundancy, poor cable management, a lack of dedicated cooling and little monitoring.
“Moving apps to the cloud makes the ‘edge’ sites and their connection to the cloud, mission critical,” said Brown.
This and other factors are changing the way organisations think about smaller sites and edge computing. The influence of millennials in the market is also significant: they are not willing to put up with lesser capabilities in branch offices or remote sites. As organisations push out data-driven tools and capabilities, their data infrastructures need to be commensurate. Add to this the fact that hybrid environments tend to add, not reduce, complexity, and the need for edge computing to be the equal of anything delivered from a massive, centralised DC becomes critical.
However, another area that requires change is the current perception of failure.
Brown said these developments require a different view of uptime, outage and failure. The current paradigm, according to Brown, says failure is a disruption to any IT equipment within a single data centre. He offered an example: if a streaming media service experiences a failure that triggers an automated failover, is it really a failure if the end user notices no interruption?
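Brown's failover example can be sketched in a few lines (a minimal illustration with hypothetical endpoint names, not any vendor's implementation): a client that silently retries against a healthy replica never surfaces the primary's outage to the end user.

```python
# Toy illustration of Brown's point: an automated failover that masks a
# node outage. Endpoint names and behaviour are hypothetical.

def fetch_with_failover(endpoints, request_fn):
    """Try each replica in order; return the first successful response.

    From the user's perspective the service never 'failed' as long as at
    least one replica answered, even if the primary data centre was down.
    """
    errors = []
    for endpoint in endpoints:
        try:
            return request_fn(endpoint)
        except ConnectionError as exc:
            errors.append((endpoint, exc))  # node down: fail over silently
    raise RuntimeError(f"all replicas failed: {errors}")

# Simulated replicas: the central primary is down, a regional edge node is up.
def request_fn(endpoint):
    if endpoint == "primary-dc":
        raise ConnectionError("primary unreachable")
    return f"stream served by {endpoint}"

print(fetch_with_failover(["primary-dc", "edge-node-eu"], request_fn))
```

Here the primary's outage is real, yet the request succeeds from the edge node, which is exactly the case where the traditional single-DC definition of "failure" stops being useful.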