Fight Latency at the Edge with Kubernetes-Based Infrastructure – Part II

In the first part of this post, we looked at how Kubernetes addresses the need to mix and match hybrid infrastructures like cloud, edge, and on-premise, as well as how Kubernetes fights latency in edge environments with the help of agility, scalability, and HA features. In this post, we'll take a closer look at how Kubernetes aids overall management by providing better monitoring and observability into its core infrastructure operations, and then examine its health and remediation features.

Centralized Management

There seems to be some confusion with regard to the centralized management of edge infrastructure. Edge computing – or, as some people are calling it, fog computing – is about moving resources closer to the user and hence decentralizing them. So how is this different from the days when we all had our own personal computers? The big difference is that with edge computing, the cloud still manages and defines what needs to be processed; only the actual location of the processing changes.

Today's IoT devices are growing more and more powerful, and while they may be capable of gathering, storing, and processing more data than ever before, this only increases the need for centralized management. As the number of edge computing resources grows, so does the number of data repositories within your ecosystem. Additionally, modern workloads place heavy demands on hardware, and while there are rumors that AWS is making AI chips for Alexa, edge-based AI and ML applications still require access to core infrastructure for the "heavy lifting."

Monitoring at the Edge

To continue from where we left off in the first post – managing a hybrid mix of clouds, devices, and platforms from a single Kubernetes control plane – monitoring is a key aspect; in other words, it's the "glue" that holds it all together. Make no mistake: containers are tricky to monitor, and their ephemeral nature adds a significant layer of complexity to the process. Traditional monitoring methods and tools find it especially difficult to keep up with the unpredictable behavior, increased volume, and shorter life spans of containers.

Monitoring with Kubernetes requires an entirely new approach, as well as a new set of cloud-native tools. Rather than sitting around waiting for a service to break and notify you, tools like Prometheus actively probe each service to make sure it is running, while tools like Grafana give you a graphical representation of the metrics Prometheus collects. Incidentally, both Prometheus and Grafana can be installed on edge devices running stripped-down Kubernetes distributions (like MicroK8s) that are specifically built for the edge, where resources are limited.
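To make this concrete, here is a minimal sketch of a Prometheus scrape configuration that uses Kubernetes service discovery to find pods across a cluster. The job name is our own illustrative choice, and we assume the common convention of pods opting in to scraping via a `prometheus.io/scrape` annotation:

```yaml
# prometheus.yml (sketch) - discover and scrape annotated pods
scrape_configs:
  - job_name: "kubernetes-pods"     # illustrative job name
    kubernetes_sd_configs:
      - role: pod                   # discover every pod via the API server
    relabel_configs:
      # Keep only pods that opt in with the prometheus.io/scrape annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

Because discovery is driven by the Kubernetes API rather than a static host list, the same configuration keeps working as ephemeral containers come and go.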

Monitoring is a key aspect of a successful Kubernetes deployment, especially in terms of security. In a modern distributed environment, you need to know not only what's going on inside your application but also what's going on outside. This means keeping track of all data going to and from the cloud and on-prem data centers, as well as data traversing the network edge. Though it's not a standard out-of-the-box feature, Kubernetes does enable this level of monitoring.

Short of using a managed service like Platform9, it can be challenging not only to view the flow of traffic across your entire hybrid application stack but also to govern that flow – and that governance is especially critical when dealing with microservices and containers. Monitoring is likewise essential for troubleshooting, optimization, and security, and you need a Kubernetes-powered platform to maintain that level of monitoring consistently across the edge, on-prem data centers, multi-cloud, and IoT.
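As one sketch of what "governing the flow" can look like in plain Kubernetes, a NetworkPolicy can restrict which pods may talk to an edge workload. The namespace and label names below are illustrative assumptions, and enforcement requires a CNI plugin that supports network policies:

```yaml
# Sketch: only pods labeled app=edge-gateway may reach app=sensor-api
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-sensor-api     # hypothetical name
  namespace: edge               # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: sensor-api           # the workload being protected
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: edge-gateway # the only permitted client
```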

Kubernetes Health & Remediation

Now that we have a general idea of how monitoring tools detect that an application is broken, it's time to look at how Kubernetes heals it, as well as how it deals with dependencies. Orchestration is about more than just monitoring; it also means dealing with unhealthy containers. Kubernetes has self-healing (or auto-healing) features that include auto-placement, auto-replication, and auto-restart of containers. In practice, this means Kubernetes restarts, replaces, or kills containers that fail or don't respond to health probes, in accordance with user configuration.
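Those health probes are declared right in the pod spec. Here is a minimal sketch of a liveness probe that tells the kubelet to restart a container whose health endpoint stops answering; the pod name, image, and `/healthz` path are illustrative assumptions:

```yaml
# Sketch: restart the container if /healthz stops responding
apiVersion: v1
kind: Pod
metadata:
  name: edge-app              # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25       # illustrative image
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          path: /healthz      # assumed health endpoint
          port: 80
        initialDelaySeconds: 5   # give the app time to start
        periodSeconds: 10        # probe every 10 seconds
```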

While individual pods don't self-heal by default, you can use Kubernetes controllers (such as Deployments and ReplicaSets) or a managed service that provides self-healing capabilities at the cluster level. As for how Kubernetes deals with dependencies in the event of a failure: it decouples software dependencies and combines its self-healing mechanisms with service discovery and request routing, so that every request reaches a healthy pod regardless of failures. This is why self-healing and automatic service discovery are key capabilities to look for in a Kubernetes platform for edge computing, as the sketch below illustrates.
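For illustration, here is a minimal sketch (names, image, and replica count are assumptions) of a Deployment paired with a Service: the Deployment recreates any pod that dies, while the Service's label selector gives clients a stable DNS name that routes only to pods that are currently ready:

```yaml
# Sketch: self-healing replicas plus stable service discovery
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-api            # hypothetical name
spec:
  replicas: 3                 # the controller recreates any pod that fails
  selector:
    matchLabels:
      app: sensor-api
  template:
    metadata:
      labels:
        app: sensor-api
    spec:
      containers:
        - name: api
          image: nginx:1.25   # illustrative image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sensor-api            # stable DNS name for discovery
spec:
  selector:
    app: sensor-api           # routes only to ready pods with this label
  ports:
    - port: 80
```

Clients address `sensor-api` by name and never need to know which replica answers, which is exactly the decoupling of dependencies described above.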

Killer Kubernetes

While there is much talk of a "killer app" that will magically make edge computing irresistible to organizations, others say it has already arrived and is called Kubernetes. Calling Kubernetes an app is a bit like calling the internet a network, and we should remember that we are only scratching the surface of what Kubernetes is actually capable of. Just as Kubernetes happens to excel at infrastructure abstraction as well as container orchestration, its suitability for edge infrastructure also seems almost incidental.

In conclusion, with 5G and ML/AI applications accelerating the rush to put more power at the edge, centralization – and, more importantly, standardization – is more "mission-critical" now than ever before. Kubernetes' stateless, agnostic nature, along with its near-infinite scalability and self-healing mechanisms, makes it an ideal platform to take on the future, regardless of whether that future lies in the cloud, in the fog, on the edge, or in a place that hasn't been invented yet.
