Data centre evolution: the emerging edge


Edge computing is unlikely to supplant the traditional data centre any time soon, writes ALEX MEEHAN



20 May 2019

Spend any time reading about the Internet of Things and very quickly the concept of edge computing comes into play and it is not hard to understand why.

IoT technology involves putting internet-enabled sensors where they have never been before, creating a new world of connected devices, all of which generate enormous amounts of data. Using conventional computing models, all that data has to be transmitted back to a central hub where it can be processed, managed and used to build up a picture of the physical world.

But what if those IoT sensors are used to generate real time information that is only useful if it can be acted upon in real time? This is where edge computing comes in, pushing the power needed to allow IoT devices to make quick decisions out to the edge of the network and close to where the data is generated.

So far, so good. But what about the traditional data centre at the heart of thousands upon thousands of cloud installations around the world – does edge computing make these redundant? Not necessarily. According to Tom Long, head of technical strategy at Cisco Ireland, the solution to amalgamating these two methodologies lies in playing to the strengths of each.

Centralised versus distributed

“There has been an ebb and flow over the last twenty years between the idea of centralised and distributed computing. Although there have been different phases in this ebb and flow – and each phase has been quite distinct – each phase has reflected the business needs of people and the technology available at that time,” said Long.

There have been four different phases of this ebb and flow. In the 1970s, computing was predominantly centralised on mainframe computers and critical business applications were stored and run on these devices. The mid-1980s saw the emergence of the first wave of distributed computing in the form of the client/server model.

“That worked great with a PC on every desktop and everyone was happy with it. It was seen as an evolution from the mainframe age. But from around 2010 onwards, business computing started to focus on virtualisation and on how to consolidate workloads. The solution was to move them into a data centre to be accessed remotely and cloud computing started to gain widespread adoption,” said Long.

Tom Long, Cisco

The growth of the cloud meant a return to the centralised model, but in an upgraded form that took advantage of the technological progress of the day. Today, according to Long, there is a move back towards distributed computing, and the reason is that once again it suits the business case.

“I think it’s really important to acknowledge the back and forth nature of this evolution because in my experience customers can feel a bit confused by it and wonder why the IT industry keeps changing its mind and rethinking things,” said Long.

Business sense

There are a variety of reasons for this movement of compute power back out to the edge of the network, but they all make business sense and ultimately serve the needs of the enterprise.

“On the journey to the cloud, the ability to run applications and process data was all centralised. With edge computing, what is provided back to the business is the ability to manage, handle and process data close to where it was collected, i.e. ‘close to the pings’,” said Long.

So the obvious question from the business side of the house is: why is that relevant? Why do we need to bring things close to the edge? In Cisco’s view these are reasonable questions, and there are two main answers.

“The first is to do with the growth in IoT, in other words, in devices that have internet-technologies built into them where traditionally it’s been too expensive or prohibitive in other ways to install sensors,” said Long.

“The volume of data being created by these ‘things’ is enormous and it’s difficult for organisations to manage that data. Edge computing allows those devices to have some intelligence and to ‘triage’ that data and decide what to send for storage and what to process locally on the device.”
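As an illustration of that ‘triage’ idea, a hypothetical edge device might apply a simple local rule to each reading, handling routine values on the device and forwarding only the exceptions. The function name, readings and thresholds below are invented for the sketch:

```python
# Hypothetical edge-triage sketch: decide per reading whether to
# handle it locally or forward it to the central data centre.

def triage(reading, normal_range=(10.0, 90.0)):
    """Return 'local' for routine readings, 'forward' for outliers."""
    low, high = normal_range
    if low <= reading <= high:
        return "local"      # aggregate or discard on the device
    return "forward"        # ship upstream for deeper analysis

readings = [42.0, 55.5, 97.2, 3.1, 60.0]
decisions = [triage(r) for r in readings]
print(decisions)  # only the out-of-range readings are forwarded
```

In practice the rule would be richer (models rather than thresholds), but the shape is the same: the decision about where data goes is made at the point of collection.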

The second reason why edge computing is important is to do with speed and latency.

“When IoT devices are generating all this data, much of the time businesses want to be able to act on that data straight away. So if that data has to be sent to a centralised repository for a decision to be made and then sent back out to the edge of the network for some action to take place, then that all takes time. In some cases, it’s just too slow,” said Long.
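To put rough numbers on that round trip, consider the following back-of-the-envelope comparison. The latency figures are illustrative assumptions, not measurements:

```python
# Illustrative latency comparison: acting locally at the edge versus
# sending data to a central site and waiting for the response.

EDGE_PROCESSING_MS = 5     # assumed on-device decision time
WAN_ONE_WAY_MS = 40        # assumed one-way latency to the data centre
CLOUD_PROCESSING_MS = 5    # assumed processing time at the centre

local = EDGE_PROCESSING_MS
round_trip = WAN_ONE_WAY_MS + CLOUD_PROCESSING_MS + WAN_ONE_WAY_MS

print(f"edge decision:  {local} ms")
print(f"cloud decision: {round_trip} ms")  # 85 ms under these assumptions
```

Even with generous network assumptions, the round trip dominates; for control loops that must react in milliseconds, that difference is the whole argument for the edge.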

Use cases

Most of the use cases for edge computing come down to these two factors – the need to handle a large volume of data and the ability to make decisions based on IoT data quickly.

One example can be seen in the retail space. When a shopper walks up to the till to pay for an item with a loyalty card, edge computing could be used to analyse that person’s purchasing history and make a personalised offer to them there and then to encourage more sales.

“This is about using insight to enable personalisation and better user experience,” said Long.
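A hypothetical sketch of that till-side flow: the edge node keeps a local slice of purchase history and produces an offer without a round trip to a central system. The card IDs, products and offer rules are all invented for illustration:

```python
# Hypothetical till-side personalisation: the edge node holds a local
# cache of recent purchase history keyed by loyalty-card ID and picks
# an offer on the spot, with no round trip to a central data centre.

PURCHASE_HISTORY = {                       # local cache, synced periodically
    "card-1234": ["coffee", "coffee", "pastry"],
    "card-5678": ["salad"],
}

def offer_for(card_id):
    """Return a simple offer based on the shopper's most frequent item."""
    history = PURCHASE_HISTORY.get(card_id)
    if not history:
        return "10% off your next visit"   # fallback for unknown shoppers
    favourite = max(set(history), key=history.count)
    return f"Buy one {favourite}, get one free"

print(offer_for("card-1234"))  # offer built from local history
```

The design point is that the cache is local: the offer appears while the shopper is still at the till, and the central system only needs to reconcile history asynchronously.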

Likewise, if an oil rig out at sea detects a leak in a pipe, it makes sense to shut the pipe down immediately rather than wait for that data to be analysed remotely.

But what does a move towards edge computing mean for companies that have invested significant resources in cloud infrastructure? Is this investment redundant moving into the future?

Not necessarily, according to Steven Carlini, vice president of innovation and data centres for the secure power division of Schneider Electric. He argues that edge will be a complementary technology that will coexist with cloud for a long time to come, with different use cases recommending each depending on the need.

“There are many applications and services that typify cloud distributed architecture – apps like Microsoft’s Office 365, for example. When they were rolled out, they were hosted in large centralised data centres,” he said.

“Now, there has been a move to the edge by the internet giants and enterprise customers are asking questions around which applications does it make sense to move to the cloud and which are best kept close to the edge?”

“For those companies that have already invested in data centres, the good news is that there are a lot of applications that can’t be broken out or disaggregated from core processes and you have to keep those on-site. There are a lot of applications that are just too hard to break apart and just can’t be hosted in a cloud environment,” he said.

“The Internet giants have moved into a lot of cities and have housed their data mainly in colocation facilities and the next stage of that over to the edge is actually to put smaller micro data centres all around the enterprises that use them.”

“We’ve even seen situations where Google, AWS Outposts and Microsoft Azure Stack have hosted cloud services directly at their customers’ facilities.”

At the same time that the biggest players have been engaging in this kind of move, enterprise level companies have been trying to offload as many applications as possible to the cloud.

Steven Carlini, Schneider Electric

“But depending on the use case, you don’t want to send all that data to the cloud and then send it back to where it was collected – you want to process all that information locally. So there’s a trend towards moving all that processing closer to where the data is, and there are two reasons for this. The first is latency and the second is expense,” said Carlini.
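The expense side of that argument usually comes down to data reduction: aggregate at the edge and send only summaries upstream. A rough sketch of the idea, with invented sensor values (the achievable reduction depends entirely on the workload):

```python
# Hypothetical edge aggregation: instead of shipping every raw sample
# to the cloud, the edge node sends one summary record per window,
# cutting the volume of data transferred (and hence the cost).

def summarise(window):
    """Collapse a window of raw samples into a single summary record."""
    return {
        "count": len(window),
        "min": min(window),
        "max": max(window),
        "mean": sum(window) / len(window),
    }

raw = [20.1, 20.3, 19.8, 20.0, 35.6, 20.2]  # raw sensor samples
summary = summarise(raw)
print(summary["count"], round(summary["mean"], 2))
```

One summary per window replaces every raw sample on the wire; the raw data can still be retained locally for the cases where deeper, centralised analysis is worth the transfer.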

“What most companies have ended up with today is a data centre where some applications have been moved to the cloud and some have remained on site. Your data centre probably has servers dispersed unevenly across its racks. That may function okay, but it’s probably not working as efficiently as it could from an electrical point of view.”

Because of this, Schneider Electric is helping companies in this position to redesign their infrastructure in the face of developments like the move towards the edge.

“We help them consolidate into a smaller footprint, using the right kind of cooling systems to allow them to operate more effectively,” said Carlini.

Impact field

Marc O’Regan, chief technology officer for Dell Technologies, thinks that the impact of edge computing will be felt further afield than is currently thought.

“Eight out of 10 people you ask to define edge computing will tell you it’s about pushing compute out to the edge of the network and it’s been seen as something mainly of use in manufacturing 4.0 environments and potentially in hospitals and so on,” he said.

“I’d go much wider, deeper and further than that. I think that distributed edge, and distributed architecture in general, is the direction that technology has been going in for some time now. Edge computing is bang at the heart of distributed architecture and of how we’re trying to build technologies that have the horsepower to drive the kind of models we need to execute next generation use cases.”

He points to the speed with which data transfer speeds improved and how big an impact that had in enabling cloud technologies to go mainstream. Edge computing is set to evolve at a similar pace and, he thinks, many of the uses it will be put to have not even been invented yet.

“Data transfer speeds improved dramatically in recent times and that’s how cloud became viable. We saw speeds going from one gigabit to 10 to 40 to 100 gigabits of capacity really quickly, probably within a year and a half or so, and we saw different types of protocols developing to take advantage of that,” he said.

At the same time there has also been an explosion in the rate at which data is created and a big acceleration in compute horsepower. This sets the stage for the era of the edge, he argues.

Marc O’Regan, Dell Technologies

“Ten years ago, around six months after the release of Azure and AWS, hyperscalers started to be used. Around 2011, as an industry, we started using the term Big Data to mean not just focusing on traditional descriptive data – in other words, data about things that happened in the past – but also prescriptive data being generated in real time, and predictive data associated with maths that allows us to predict what is about to happen in the near future,” he said.

“That concept takes us out to the edge and the technology is rapidly catching up to enable that. Although we have good data transfer rates, we also have this massive explosion in data across a huge and diverse range of devices and sensors, travelling across all sorts of networks.”

The challenge lies in putting the compute horsepower necessary to work with this data at the edge, so that data can be gathered, analysed and acted upon in real time.

“That can mean being able to solve a yield problem on a production line in real time, for example. It can give us vital information that can save companies millions of euro in a very short space of time, it can improve processes and taken to extremes, it can save lives,” he said.

“The closer we get to the data, and the less time we have to wait for decisions based on that data, the better.”


