Edge computing: the new network reality
Compute capability at the edge of corporate networks is going to significantly change network design, as it accommodates the likes of the Internet of Things and analytics. How are organisations adapting to this new architecture? ALEX MEEHAN reports
10 July 2018
As artificial intelligence, internet of things and business analytics become more pervasive, it seems likely that edge computing will significantly change network design. But just what is it? The answer seems to depend on the industry concerned.
However, a good example can be found in the telecommunications sector. There, edge computing will see data processing shift from the centre of the telecommunications network out towards the periphery. Instead of gathering data on the periphery of the network and then processing it in the cloud, a growing range of applications will see that processing happen back out on the edge.
“Edge computing brings the application service closer to the user. The way I think of this is that there is a lot of data generated by applications these days, and as we connect more and more things and devices to the network, the volume of data being generated is growing exponentially,” said Dr Csaba Kiss Kallo, head of connectivity, mobility and security portfolio at Vodafone Ireland.
“A lot of that data is only useful where it is collected, and for this reason there is no point in bringing all that data to a centralised cloud and then back down again — it is more effective to keep it close to the application. And the reason why this is important is that it reduces latency and it reduces traffic on the core network and overall it reduces the cost of communication.”
5G technology will be an important enabler of edge computing because it will enable faster transfer speeds, facilitating new use cases and making new technologies feasible.
“5G will enable a multitude of new application types and I can think of a few areas in which it will drive the adoption of edge computing. One is that ultra-high-speed data transfer will enable better video streaming,” he said.
It is thought, for example, that more than 60% of the traffic on the average mobile network is video. That share is expected to grow significantly, with some estimates putting it at around 80% by 2020.
“That growth needs to be catered for, and speed of processing is important. For example, at the moment we know that many people experience buffering problems when they attempt to watch video on a mobile device and, according to statistics from Marx.com, 39% of users stop watching after just one buffering event,” said Dr Kiss Kallo.
“Edge computing can reduce that latency by allowing us to buffer video closer to the user at the 5G base station. This will improve things a lot. Another massive area would be Internet of Things adoption, which will bring huge amounts of scale.”
As more devices ship with SIMs integrated into them, potentially millions will end up connected to the internet. That scale needs to be managed, as all these devices will generate data that requires processing.
“You need to make sure that you are only bringing the data that is necessary to a central processing unit. Otherwise you are better off processing that data at the edge,” said Dr Kiss Kallo.
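The principle Dr Kiss Kallo describes, process locally and forward only what matters, can be sketched in a few lines. This is an illustrative sketch rather than anything Vodafone has described; the anomaly threshold and reading format are assumptions.

```python
# Illustrative edge-side filter: raw readings are summarised locally and only
# aggregates plus anomalous values are forwarded to the central cloud.
# The 30.0 threshold and the numeric reading format are assumptions.

def filter_at_edge(readings, anomaly_threshold=30.0):
    """Return a compact payload for the core network instead of raw data."""
    anomalies = [r for r in readings if r > anomaly_threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings) if readings else 0.0,
        "anomalies": anomalies,  # only unusual values travel upstream
    }

# Four raw readings collapse into one small summary with a single anomaly.
payload = filter_at_edge([21.5, 22.0, 35.2, 21.8])
```

The point is the shape of the traffic: the core network sees one summary dictionary rather than every sensor sample, which is exactly the latency and cost saving the quote describes.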
“Reliability and latency are other reasons why 5G will have a positive effect on edge computing. Take something like drone technology. This is an area where latency is very important as drones operate in real time and need to be able to react to their physical surroundings in real time. Latency has to be really short.”
“With drones you need to have a control system that has in-air coordination and can make sure one won’t collide with another or with other objects. Every millisecond of latency is important in that sort of scenario. If a drone drops out of the sky and into a crowd, for example, that could cause serious injury.”
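The "every millisecond" point can be made concrete with simple arithmetic. The drone speed and latency figures below are assumptions chosen for illustration, not numbers from Vodafone.

```python
# How far a vehicle travels during one round trip of network latency,
# i.e. how far it moves before a remote control decision can take effect.
# The 20 m/s drone speed and the latency figures are assumed for this sketch.

def distance_during_latency(speed_m_per_s, latency_ms):
    """Metres travelled while waiting for a control round trip."""
    return speed_m_per_s * (latency_ms / 1000.0)

cloud_hop = distance_during_latency(20.0, 300.0)  # ~300 ms to a distant cloud
edge_hop = distance_during_latency(20.0, 20.0)    # ~20 ms to a 5G edge site
# roughly 6 m of blind travel via the cloud versus 0.4 m via the edge
```

Six metres is more than enough to reach a crowd; under half a metre usually is not, which is why shaving the round trip down to the edge matters for in-air coordination.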
Latency is one of the areas that people like to talk about when it comes to edge computing, and it’s not hard to see why. While the current generation of mobile technology does a fantastic job of moving data around, it’s not equipped to deal with some of the use cases coming down the line.
According to Andrew McCreath, senior principal with Equinix, this is particularly true with the Internet of Things.
“Increasingly computers are finding their way into all sorts of new places and there are sensors everywhere. For some applications, sending all that data back across a network to be processed is fine but in others time is crucial,” he said.
“I’ve got sensors in my wrist watch and they’re in my running shoes. They’re also increasingly in our cars, on the trains we travel on and elsewhere. All this data is being collected and packaged into useful, and indeed sellable, packets. It all contributes to creating ‘digital moments’ that will be sold on by data brokers, and this is going to be hugely important in, for example, the insurance market going forward.”
According to McCreath, the current motor insurance model is based on very limited data resources. It uses poor data sources and makes many generalised assumptions about people and the risk they pose when it comes to calculating rates.
“But knowing what routes you drive, the weather conditions you live in, your driving style, the condition of the engine in your car – all that data could lead to much more cost-effective insurance models for the consumer. Today’s model is fatally flawed in that, for example, it makes the assumption that a piano teacher in a blue Ford Mondeo is perhaps a lower risk than the estate agent in a Volkswagen Polo.”
“But maybe the Mondeo hasn’t been serviced regularly and has had a brake warning light flashing on the dashboard for weeks? Maybe the driver has an erratic driving style, or indeed, a drink problem?” said McCreath.
Staying with motoring, it is interesting to consider a recent development in edge computing that is seeing more intelligence move out to the periphery of the network. Known as fog computing, this uses artificial intelligence at the edge to help solve the problem of what to do with large amounts of data that need to be moved around when latency is an issue.
“The OpenFog Consortium was founded in 2015 and built out the reference architecture that enables us to share resources at the edge to deploy AI and really develop a lot of value in this near-edge computing model that we call fog computing,” said Matt Vasey, OpenFog Consortium board member and director of IoT Business Development at Microsoft.
“What’s happening right now is you have a couple of technology transformations occurring, such as 5G coming to market quickly and, from a latency and performance perspective, moving the edge and cloud closer together, reducing latency from 300 to 600 milliseconds down to double-digit milliseconds in the near term.”
In addition, a tremendous amount of data is being created near the edge of the network and being able to communicate on a subnet at megabit router speeds will enable companies to tackle new scenarios.
“These scenarios are those like smart cities, autonomous vehicle use-cases, 5G-powered user experiences like for instance at sporting events and finally things like delivery drones and other types of autonomous vehicles all working together. I think what’s happening is that the edge, the fog, and the cloud are beginning to pool resources.”
Vasey offers the following example to help explain what he means. 5G is helping connect devices across the edge so data does not just move north-south but also east-west to other devices nearby.
“Perhaps a camera for instance may not have a huge storage capacity on it but there may be a device nearby that has storage or an AI model that’s running on an edge device may need hardware acceleration, either GPU or FPGA acceleration and by sharing resources at the edge, you’re enabling inference to occur at or near the edge which would normally have had hundreds of milliseconds delay to get to work,” he said.
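Vasey's resource-sharing idea, a constrained camera borrowing storage or acceleration from a neighbour, can be sketched as a simple capability lookup. The device names, capability tags and distances below are invented for the sketch; OpenFog's actual reference architecture is far richer.

```python
# Illustrative fog-style resource sharing: a constrained device picks the
# nearest neighbour that advertises the capability it lacks.
# All device names, capability tags and distances here are invented.

peers = [
    {"name": "camera-01", "caps": set(), "distance_m": 0},
    {"name": "gateway-02", "caps": {"storage"}, "distance_m": 12},
    {"name": "edge-box-03", "caps": {"storage", "gpu"}, "distance_m": 40},
]

def pick_peer(peers, needed_cap):
    """Choose the closest peer offering the needed capability, or None."""
    candidates = [p for p in peers if needed_cap in p["caps"]]
    return min(candidates, key=lambda p: p["distance_m"], default=None)

# The camera offloads AI inference to the nearest GPU-equipped neighbour
# rather than paying hundreds of milliseconds to reach a distant cloud.
helper = pick_peer(peers, "gpu")
```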
“Or take the example of autonomous vehicles. Imagine two vehicles driving on the road and perhaps they’re following a bit too close and something occurs ahead on the road. Currently you’d see the person’s brake light turn on and then you’d do some processing and figure out that I should put my brakes on to start slowing down.”
In a fog computing world, those two cars would actually be in communication with each other, sharing status information about whether their brakes are on and what their camera sees ahead, and they could share those resources and create a composite sensor.
“Maybe the car behind is sharing data from the camera in front of it or maybe it’s getting data from its braking systems in real time, and so by getting that near-real-time connectivity between the cars, you can actually get braking to occur at a much faster rate,” said Vasey.
In such a situation, having the cars connected via 5G to each other as well as to a centralised network would allow for much lower latency when time is clearly of the essence. It could work even when there is no cell tower in the area to facilitate cloud communications.
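The car-to-car scenario can be sketched as a simple decision over a shared status message. The message fields, the gap threshold and the decision rule below are assumptions for illustration; real vehicle-to-vehicle protocols are considerably more involved.

```python
# Illustrative vehicle-to-vehicle exchange: the rear car reacts to the lead
# car's shared brake status instead of waiting to see a brake light and then
# round-tripping through the cloud. Fields and thresholds are assumptions.

def should_brake(peer_status, gap_m, min_safe_gap_m=25.0):
    """Brake if the car ahead is braking or sees an obstacle and the gap is tight."""
    hazard = peer_status["braking"] or peer_status["obstacle_ahead"]
    return hazard and gap_m < min_safe_gap_m

# Status shared over a local 5G link; the decision needs no trip to the cloud.
lead_car = {"braking": True, "obstacle_ahead": False}
brake_now = should_brake(lead_car, gap_m=18.0)
```

Because the decision runs on data exchanged directly between the vehicles, it still works in the no-cell-tower case the article mentions.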
Elsewhere in the industry, commentators are looking to edge computing to play a role in enhancing the effects of digital transformation initiatives. According to Jason Covitz, director of global segment strategy for APC by Schneider Electric, many digital transformation projects are either very bandwidth-intensive, very latency-sensitive or carry regulatory implications.
For these, edge computing could be revolutionary.
“The relevance of edge computing when it comes to digital transformation is that when the cloud was developed it was not meant for these types of applications. It wasn’t meant for heavy-bandwidth applications or for latency-sensitive applications, and it wasn’t meant to handle data sovereignty issues,” he said.
“So, the importance of edge computing is that it makes digital transformation and IoT initiatives highly available and reliable. It allows companies to deploy edge sites in a way that keeps IoT applications reliable and available for them.”
Covitz suggests that as digital transformation initiatives become more common, more core applications will move to the cloud, which is another way of saying they will be hosted in data centres. As a result, data centre downtime will become a bigger issue whenever it occurs.
“The problem is that because of IoT, the applications that are going into data centres are becoming mission-critical. And so businesses need not only to deploy the application closer to the user because of bandwidth, latency and data management issues, but they are also faced with the challenge of making dozens, hundreds or thousands more sites much more available than they’ve been in the past,” he said.
“That’s where Schneider Electric, and in this case specifically APC, comes in — we ensure that those edge computing sites are as close to the availability of tier-three data centres as possible. We can help bring the availability of those sites to a higher level than what’s really been required in the past.”