Choose flexible edge deployments carefully

Since the need for edge applications may shift over time, it's important to find an architecture that works now but is flexible enough to meet future needs.

19 May 2020

Many edge-computing deployments are driven by very specific needs, but since new needs may arise down the road with a different set of edge requirements, IT leaders should adopt edge-computing architectures with flexibility and adaptability in mind.

The fact that all edge-computing systems have certain things in common – complex combinations of hardware, applications, infrastructure software and networking – does not mean they should all have the same design.

Every new project requires highly specialised software and integrated custom networking to deliver on project goals across such diverse applications as industrial control, autonomous transportation, health services, public safety, and energy management. Each use case will have its unique requirements in terms of performance, response times, quantity of data to be gathered and processed, and cost.

Since projecting the return on investment for these systems remains challenging, it is best to choose an architecture that is readily customisable in order to minimise the cost of meeting future edge needs.

Deploying an edge computing system requires a complex combination of servers, applications, infrastructure software and networking. The server runs the software-control systems and analytics to translate, for example, large amounts of IoT data into actionable information. Open APIs are available on both software and networking systems to customise traffic flows. The network is designed to deliver the latency and reliability required within the cost constraints. 
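As a rough illustration of what that analytics layer does, the minimal Python sketch below turns a batch of raw sensor readings into actionable alerts locally; the reading format, threshold and device names are assumptions made for the example, not any vendor's API.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    device_id: str
    temperature_c: float      # hypothetical metric; real deployments vary

TEMP_ALERT_C = 85.0           # assumed alert threshold, for illustration only

def analyse(batch: list[Reading]) -> list[str]:
    """Translate raw IoT readings into actionable information locally."""
    by_device: dict[str, list[float]] = {}
    for r in batch:
        by_device.setdefault(r.device_id, []).append(r.temperature_c)
    alerts = []
    for device, temps in by_device.items():
        if mean(temps) > TEMP_ALERT_C:
            alerts.append(f"{device}: average {mean(temps):.1f} C exceeds limit")
    return alerts

if __name__ == "__main__":
    sample = [Reading("press-01", 82.0), Reading("press-01", 91.5),
              Reading("press-02", 60.2)]
    for alert in analyse(sample):
        print(alert)   # in practice this would trigger a local control action
```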

Given the variables that come into play with each element of an edge system – devices, local network, gateway, computing – making them all work to meet required specifications may require system-integration skills.

Here is a brief look at these elements.

Edge devices

The variety of devices gathering data at the edge of networks is rich. It includes sensors, industrial machines, medical devices, video cameras, actuators, RFID gear, asset monitors, self-order kiosks, PCs, tablets, phones, and a range of smart devices like thermostats, refrigerators, and food preparation systems. Key variables include the number and type of devices, distance from the branch or campus network, amount of data generated, battery life, and security requirements.
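One hypothetical way to make those variables concrete is to record them per device class, as in this illustrative Python structure; the fields and example values are assumptions, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    """Hypothetical record of the per-device variables listed above."""
    device_type: str               # e.g. "vibration sensor", "video camera"
    unit_count: int                # number of devices of this type
    distance_to_network_m: float   # distance from the branch or campus network
    data_rate_kbps: float          # amount of data generated
    battery_powered: bool          # whether battery life constrains the design
    security_level: str            # e.g. "regulated", "internal"

kiosks = DeviceProfile("self-order kiosk", 12, 30.0, 150.0, False, "regulated")
print(kiosks)
```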

Connectivity

With the variety of devices comes a diversity of connectivity needs, from real-time, high-bandwidth control of factory robots to moisture sensors in farm fields that upload small amounts of data at wide intervals. Enterprises need to decide on the most appropriate links for the types of edge devices they have deployed. Networking options include Wi-Fi, Bluetooth, Zigbee, private or public 4G/5G cellular data, and various kinds of wired connectivity, among others. Key variables include the mobility of the devices, the cost of the network service, latency requirements, and whether the data flow is one-way or bi-directional.
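The sketch below is a deliberately simplified, hypothetical Python rule of thumb for matching a device's needs to a link type; the thresholds are assumptions, and a real evaluation would also weigh service cost, coverage and battery constraints.

```python
def suggest_link(mobile: bool, bandwidth_kbps: float,
                 latency_ms: float, bidirectional: bool) -> str:
    """Simplified, illustrative mapping from device needs to a link type."""
    if not mobile and latency_ms < 10 and bandwidth_kbps > 10_000:
        return "wired Ethernet"                 # real-time factory robot control
    if mobile and bandwidth_kbps > 1_000:
        return "private or public 4G/5G"
    if bandwidth_kbps < 50 and not bidirectional:
        return "low-power link such as Zigbee"  # e.g. farm moisture sensors
    return "Wi-Fi"

print(suggest_link(mobile=False, bandwidth_kbps=50_000,
                   latency_ms=5, bidirectional=True))   # -> wired Ethernet
```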

The gateway

Edge gateways are aggregation points for data generated by dispersed devices, but they may also filter, process and store data as well as apply security policies. Key variables include the type of device used, which depends on the needs of the system as a whole. The gateway can be a server, a networking device or a specialised appliance.
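A minimal sketch of that aggregation and filtering role might look like the following hypothetical Python function, which applies a simple allow-list policy and forwards only per-device summaries upstream; local storage and richer processing are omitted.

```python
import json
from statistics import mean

def gateway_batch(readings: list[dict], allowed_devices: set[str]) -> bytes:
    """Aggregate and filter device readings before they leave the site."""
    # Simple security policy: drop readings from devices not on the allow-list.
    permitted = [r for r in readings if r["device_id"] in allowed_devices]
    grouped: dict[str, list[float]] = {}
    for r in permitted:
        grouped.setdefault(r["device_id"], []).append(r["value"])
    # Forward only a compact per-device summary upstream.
    summary = {dev: {"count": len(vals), "avg": round(mean(vals), 2)}
               for dev, vals in grouped.items()}
    return json.dumps(summary).encode()   # payload sent over the WAN link

payload = gateway_batch([{"device_id": "pump-7", "value": 3.2},
                         {"device_id": "pump-7", "value": 3.6},
                         {"device_id": "rogue-device", "value": 9.9}],
                        allowed_devices={"pump-7"})
print(payload)   # b'{"pump-7": {"count": 2, "avg": 3.4}}'
```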

Compute power

This is what processes the data and sends it to the centralised cloud location or data centre via the wide area network. Edge-computing resources can be deployed at the branch or another on-premises location, or consumed as a service from cloud or other service providers.

All four of these elements must work in unison to provide an edge-computing architecture that works, and to achieve that IT organisations must answer a complex set of questions about architecture, performance, security/compliance, application software, supplier options and cost.

Benefits and challenges

The cost and complexity of deployment have slowed early development of edge-computing systems, but the benefits can be significant.

Edge computing is driven by the need for very low latency. The current architecture in which data is sent to a centralised data centre or public cloud and then returned to the edge location for action creates latency of a second or more.  Edge computing can deliver predictable millisecond latency, which is critical for manufacturing, health, and public-safety applications. 
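A back-of-the-envelope comparison shows where that difference comes from; every figure below is an assumed, illustrative value rather than a measurement.

```python
# Illustrative round-trip budget for one sensor event needing a decision.
wan_rtt_ms       = 80    # device site to a distant cloud region and back
cloud_queue_ms   = 400   # queuing and ingestion in the centralised cloud
cloud_compute_ms = 500   # analytics before a decision is returned

edge_rtt_ms      = 2     # device to an on-site edge server and back
edge_compute_ms  = 8     # local analytics on the edge node

print("via central cloud:", wan_rtt_ms + cloud_queue_ms + cloud_compute_ms, "ms")  # 980 ms
print("at the edge:", edge_rtt_ms + edge_compute_ms, "ms")                         # 10 ms
```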

Edge computing can provide real-time data analysis and vastly reduce the amount and frequency of data that must be sent to distant, centralised locations. It can also provide high availability and redundancy, and support security and compliance by keeping sensitive data on local systems rather than exposing it to the internet.
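As a simple illustration of that reduction, suppose a hypothetical sensor is sampled once per second but only a one-minute local average is sent upstream; the payload size is an assumption.

```python
# Assumed figures: one 64-byte reading per second, summarised locally into
# one average per minute before anything is sent to the central location.
bytes_per_record   = 64
raw_per_day        = 24 * 60 * 60 * bytes_per_record   # every reading uploaded
summarised_per_day = 24 * 60 * bytes_per_record        # one summary per minute

print(raw_per_day // summarised_per_day, "times less data sent upstream")  # 60
```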

Options for edge computing

IT organisations have several options as they consider how to build out their edge-computing capabilities. Because edge-computing applications, such as IoT deployments, are highly specific, IT teams need to consider how readily an edge-computing architecture can be customised to their needs.

Many early deployments have been made by sophisticated IT organisations with the skills and resources to deploy edge computing on their own. By doing it themselves, these IT organisations gain the advantage of creating architectures that meet their specific performance, latency and security metrics. But these DIY deployments can be complex, and thus time-consuming and costly.

Many IT departments are leveraging innovative start-ups for technology and larger IT firms for service and support to help jump start their edge-computing deployments. The challenge is to adapt broad edge-computing solutions, especially those from large vendors, to the unique application requirements of individual deployments.

Edge computing can also be delivered as a service from cloud providers, telcos, and other service providers. These organisations are slowly building out low-latency computing services in or near the largest cities in the largest countries.  Organisations within this limited footprint need to consider latency guarantees, cost and the ability to integrate with the specific service offerings.

Recommendations

Edge computing is in its early stages of technological development. Each potential edge-computing application has significantly different requirements, including hardware, software, networking and costs. Architectures should be designed with flexibility and adaptability to meet changing business requirements. Finally, IT organisations should move cautiously with initial deployments, starting with pilot projects that have a clear business case.

IDG News Service
