As the number of connected devices grows, IT processes and functions are increasingly shifting to what is called the “edge.” The edge is essentially the opposite of a centralized system: remote systems that operate as close as possible to the users and services that consume them.
One of the most well-defined examples of operating “at the edge” is in telecommunications. Today, these organizations are having to massively scale and adjust the way they meet consumer needs by expanding the number of edge locations they run and rethinking how they operate them. In this post, I define what operating at the edge means for telco companies and present some of the biggest challenges that come with it -- and how they can be addressed.
First off, let’s look at this from the perspective of a real-world example of what edge deployments look like for telco companies. When a large telecommunications organization rolls out its mobile services, it does so through what are called “virtual network functions” (VNFs) that have to run at the edge. The edge, in this case, is a point of presence (PoP): a location that serves a number of cell phone towers. For years, this has been a large task, but one that was considered manageable. However, the advent of 5G technology is changing that.
What happens with 5G is that the frequency of the signal is so high (compared with 3G or even 4G) that it cannot travel very far before it is attenuated, or becomes weak. So, to deliver the performance that 5G promises, carriers have to install more towers -- which means more points of presence and, correspondingly, more edge locations. Each of these points of presence has a certain amount of compute or processing that needs to happen there -- its virtual network function. When you make a mobile call through the cell tower, the signal is converted into a digital signal and then passes through a piece of software, and all of this happens at the edge. So software has to be deployed at the edge, and the underlying hardware -- the servers, networking, storage, and everything else required to run that function or service -- has to be deployed out there as well.
If this sounds complicated -- just wait. Imagine having hundreds or thousands of these locations spread not only around the country but across the globe. The way these organizations handle it is with a hierarchy: regions, regional PoPs, and then city PoPs. For example, the Bay Area might have 200 PoPs serving different areas: a few dozen for San Jose, a few for San Francisco, and so on. Each one is an edge location where a micro data center has to be deployed, with a rack of servers, compute capability, storage, and networking.
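As a purely illustrative sketch (the site names and numbers below are invented, not real telco data), here is how that region-to-PoP hierarchy might be modeled when building an inventory of edge sites:

```python
from dataclasses import dataclass, field

@dataclass
class MicroDataCenter:
    """One edge PoP: a rack of servers plus storage and networking."""
    name: str
    servers: int
    towers_served: int

@dataclass
class Region:
    """A geographic region containing many city-level PoPs."""
    name: str
    pops: list[MicroDataCenter] = field(default_factory=list)

# Invented example: one region with a couple of city PoPs.
bay_area = Region(
    name="bay-area",
    pops=[
        MicroDataCenter(name="san-jose-pop-01", servers=8, towers_served=40),
        MicroDataCenter(name="san-francisco-pop-07", servers=6, towers_served=25),
    ],
)

print(f"{bay_area.name}: {len(bay_area.pops)} PoPs, "
      f"{sum(p.servers for p in bay_area.pops)} servers total")
```

Multiply that inventory by hundreds of regions and the operational scale of the problem becomes clear.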
With hundreds of micro data centers serving millions of customers across the globe -- and a product that is mission-critical for many of the people using it -- you can imagine the pressure to deliver very high availability. The challenge becomes: how do you provision something new and still ensure uptime? Even more so: how do you keep things operating smoothly day in and day out? These organizations also need an effective way of getting new micro data centers up and running quickly as they grow their networks to stay competitive.
This is where tools like SaaS-managed Kubernetes platforms come in -- that is, solutions designed from the ground up to manage any number of locations remotely and centrally, out of the box, with built-in remote monitoring, central management, and zero-touch operations.
SaaS-managed Kubernetes Explained
Think about the alternative: a company without the ability to remotely manage its edge infrastructure and applications across thousands of locations. If something goes wrong at one of those locations -- say, the point of presence has a network connectivity issue, or one of the servers goes down -- what do you do? Without remote management access, you have to send a person out there. The technician goes into the PoP, opens up the server, figures out what’s going wrong, troubleshoots it, and maybe a few hours later it’s back up and running. That’s a manual, operationally intensive, and costly process.
With SaaS-managed Kubernetes, all of that is built in. In the scenario above, you just log in and bring up a PoP location; the servers are automatically discovered and registered with the central SaaS platform. From there, operations are zero-touch, and teams can diagnose and solve problems much more quickly and with significant cost savings.
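To make the idea of automatic discovery and registration concrete, here is a minimal, purely illustrative Python sketch of what an on-node agent might do: collect a few facts about the server and announce itself to a central management plane. The endpoint URL, payload fields, site identifier, and token handling are assumptions for the example -- they are not any vendor’s actual API.

```python
import json
import platform
import socket
import urllib.request

# Hypothetical endpoint for a central SaaS management plane; real
# products ship their own agents and registration APIs.
MANAGEMENT_PLANE_URL = "https://mgmt.example.com/api/v1/nodes/register"
API_TOKEN = "replace-with-site-token"

def collect_node_facts() -> dict:
    """Gather basic facts about this edge server for registration."""
    return {
        "hostname": socket.gethostname(),
        "os": platform.platform(),
        "arch": platform.machine(),
        "site": "bay-area-pop-042",  # placeholder PoP identifier
    }

def register_node() -> None:
    """Announce this server to the central platform so it can be
    monitored and joined to a cluster without an on-site visit."""
    body = json.dumps(collect_node_facts()).encode("utf-8")
    req = urllib.request.Request(
        MANAGEMENT_PLANE_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print("registration status:", resp.status)

if __name__ == "__main__":
    register_node()
```

The point of the sketch is the workflow, not the code: once every new server phones home on boot, nobody has to drive to a PoP just to tell the platform it exists.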
Edge deployments are increasingly delivered using containers and Kubernetes, with software developers, product engineering, and DevOps teams driving adoption for edge use cases. Platform9 Managed Kubernetes (PMK) is an example of a solution that lets enterprises run Kubernetes-as-a-Service at scale in their edge environments without the operational burden. It delivers fully automated Day-2 edge operations with a 99.9% SLA, using a SaaS Management Plane that remotely monitors, optimizes, and heals your Kubernetes clusters and the underlying infrastructure. With automatic security patches, upgrades, proactive monitoring, troubleshooting, auto-healing, and more -- users can focus on running innovative new applications in their edge environments.
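For a sense of what remote cluster monitoring involves, here is a rough sketch using the official Kubernetes Python client that sweeps a set of edge clusters and flags nodes that are not Ready. The site names and kubeconfig paths are placeholders; this is a manual stand-in for the kind of fleet-wide health check a SaaS management plane runs continuously and automatically, not how any particular product implements it.

```python
from kubernetes import client, config

# Placeholder kubeconfigs, one per edge site.
EDGE_SITES = {
    "san-jose-pop-01": "/etc/kubeconfigs/san-jose-pop-01.yaml",
    "san-francisco-pop-07": "/etc/kubeconfigs/san-francisco-pop-07.yaml",
}

def unhealthy_nodes(kubeconfig_path: str) -> list[str]:
    """Return the names of nodes in a cluster that are not Ready."""
    config.load_kube_config(config_file=kubeconfig_path)
    v1 = client.CoreV1Api()
    bad = []
    for node in v1.list_node().items:
        ready = any(
            c.type == "Ready" and c.status == "True"
            for c in (node.status.conditions or [])
        )
        if not ready:
            bad.append(node.metadata.name)
    return bad

if __name__ == "__main__":
    for site, path in EDGE_SITES.items():
        try:
            bad = unhealthy_nodes(path)
        except Exception as exc:  # e.g., the site is unreachable
            print(f"{site}: could not connect ({exc})")
            continue
        print(f"{site}: " + ("healthy" if not bad else f"NotReady nodes: {bad}"))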
What's Next for the Edge?
In general, Kubernetes edge deployments are in the early phase of the hype cycle and are expected to grow rapidly in 2020. In fact, edge computing is expected to account for a major share of enterprise computing. According to leading analyst firms, there could be more than 20 times as many smart devices at the edge of the network as in conventional IT roles. Furthermore, the share of enterprise-generated data created and processed outside a traditional centralized data center could reach 75 percent by 2025.
The variety of edge applications and the scale at which they are being deployed is mind-boggling. A recent survey highlighted the diversity of use cases being deployed, including edge locations owned by the company (e.g., retail stores, cruise liners, oil and gas rigs, manufacturing facilities) and, in the case of on-premises software companies, their end customers’ data centers. Edge deployments typically need to support heterogeneous locations, remote management, and autonomy at scale; enable developers; and integrate well with public clouds and/or core data centers.
For many companies -- not just in telecommunications but across verticals of all kinds -- living on the edge can be just that. Many challenges need to be addressed, including figuring out consistent, scalable edge operations that can manage dozens or hundreds of micro data centers with low or no touch, usually with no on-site staff and little physical access. But with the right partners and tools for success, moving your technology to the edge can mean driving increased value, speeding up the velocity of innovation, and delivering the kinds of customer experiences that set your organization apart.