Kubernetes seems to be a winner – one single platform that happens to be ideally suited for both cloud and edge computing. What are the chances? While cloud-native computing is all about flexibility, scalability, and on-demand resources, the edge acts as a conduit between the cloud and the ever-expanding army of IoT devices and their sensors. Simply put, edge computing means processing data close to the device rather than sending it back to a centralized cloud location and waiting for a response, and processing done at the edge of the network is the next big thing in computing. If you’re wondering where Kubernetes fits into the picture, the answer lies in its infrastructure abstraction capabilities.
Kubernetes is great at decoupling stuff, plain and simple. If you’re looking for a platform that’s truly agnostic to infrastructure while also being robust and scalable enough to run containers on a “thick” edge, Kubernetes has a compelling sales pitch. Designed to orchestrate self-contained units of code, Kubernetes doesn’t care where you run it and gives you a common layer of abstraction over a mixed bag of physical resources.
What this means is that you could run hybrid applications across environments made up of cloud resources, on-prem data centers, and edge devices, and still have a single control plane to manage everything with standardized deployments. This is especially useful given the almost alarming growth in the number of devices at the edge, along with the massive amounts of data they produce on a daily basis.
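To make the “single control plane” idea concrete, here is a minimal sketch of how a deployment can be pinned to edge nodes with an ordinary Kubernetes nodeSelector, assuming the edge nodes have first been labeled (all names, labels, and the image reference below are hypothetical):

```yaml
# Assumes edge nodes were labeled beforehand, e.g.:
#   kubectl label node <edge-node-name> node-role.example.com/edge=true
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-filter            # hypothetical edge workload
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sensor-filter
  template:
    metadata:
      labels:
        app: sensor-filter
    spec:
      nodeSelector:
        node-role.example.com/edge: "true"   # schedule only onto labeled edge nodes
      containers:
      - name: filter
        image: registry.example.com/sensor-filter:1.0   # hypothetical image
```

The same manifest format, applied through the same API server, works whether the target node sits in a cloud region, an on-prem rack, or a roadside cabinet – that uniformity is the abstraction being sold here.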
Thick in the Edge
With sensors running practically around the clock, that’s an awful lot of data to keep sending to the cloud and back, and the volume is only increasing. Hence the growing requirement for more processing power at the edge, so that data can be filtered locally before it’s sent to the cloud or an on-prem data center. If you look at the cloud as your “core,” and the IoT platform that manages your devices and their sensors as your “endpoint,” the edge is an intelligent layer in between.
This layer not only provides distributed computing, data persistence and network aggregation, but also pre-processes data with filters and analytics so that only information pertaining to items that require immediate attention is sent to the core. This takes quite a load off the network as well as the resources in the cloud, and leads us to a simple inference: the more storage, compute, and networking power you have at the edge, the more of an edge you have.
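A toy sketch of that pre-processing pattern, in Python: split raw sensor readings into urgent alerts (forwarded to the core immediately) and a compact summary (batched or kept locally). The threshold, field names, and readings here are illustrative assumptions, not part of any real device protocol.

```python
# Minimal sketch of edge-side filtering: forward only out-of-range
# readings to the core, reduce everything else to a small summary.
ALERT_THRESHOLD = 80.0  # assumed limit, e.g. temperature in Celsius

def filter_readings(readings):
    """Split readings into alerts (sent upstream now) and a local summary."""
    alerts = [r for r in readings if r["value"] >= ALERT_THRESHOLD]
    summary = {
        "count": len(readings),
        "max": max(r["value"] for r in readings),
        "mean": sum(r["value"] for r in readings) / len(readings),
    }
    return alerts, summary

readings = [
    {"sensor": "s1", "value": 21.5},
    {"sensor": "s2", "value": 84.0},
    {"sensor": "s3", "value": 19.0},
]
alerts, summary = filter_readings(readings)
# Only the single out-of-range reading crosses the network;
# the other two collapse into a three-field summary.
```

Three readings in, one alert and a few dozen bytes of summary out – multiply that by thousands of sensors reporting around the clock and the bandwidth argument for the edge makes itself.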
As opposed to cloud-native computing, where we’re simplifying and centralizing everything so that it’s all easier to manage, edge computing does exactly the opposite: here we’re decentralizing our resources and spreading them out. While this decreases latency and saves bandwidth, it has a few downsides as well. More resources at the edge mean more moving parts, which in turn translates to more complexity, more management overhead, and a bigger attack surface to protect.
Luckily for us, Kubernetes is not just the king of orchestration, but also of decoupling complexities so that they can be dealt with “separately.” While the learning curve might seem intimidating at first, there are a number of Kubernetes platforms that make the going a lot easier. These include PaaS-type platforms like OpenShift or PKS, Kubernetes distributions from Canonical or Kublr, and managed Kubernetes platforms like Platform9.
The Edge of the Kube
Managed Kubernetes edge platforms take away a lot of the guesswork involved in deploying Kubernetes at the edge, which is no simple task. While Kubernetes is surprisingly well suited to managing complex, decentralized and distributed environments, managing Kubernetes itself is another thing altogether. This is probably where you run into the edge of the Kube: the limit of what you can do without first learning in detail about Docker, Kubernetes, containers, and microservice architecture.
One of the biggest hurdles is getting your edge platform to work with the rest of your infrastructure. In the multi-cloud setups popular in the enterprise right now, this can be especially time-consuming unless your platform is “intelligent” enough to automatically discover the rest of your infrastructure when you plug it in. Autoscaling, one-click cluster deployments, and automatic upgrades are other essential features to look for.
Welcome to the Neighborhood
Where Kubernetes’ capabilities end is literally where the ecosystem of tools surrounding it begins. Successful Kubernetes deployments involve deploying a number of open source tools, the majority of which are backed by the CNCF. These include networking solutions like Flannel, Weave, and Calico that take care of the heavy lifting involved in Kubernetes networking, tools like Gluster and Rook for storage, as well as logging and monitoring tools like Sumo Logic and Prometheus, respectively.
If your platform doesn’t support integration with this ecosystem of tools, the terms “high and dry” or “out in the cold” would best describe your situation. Additionally, microservices need their own networking and governance, which is where service mesh tools like Istio or Consul come in. The Kubernetes ecosystem is not only large but ever-expanding: KubeEdge, for example, is an edge computing framework that extends Kubernetes past containers and edge infrastructure and into the world of actual IoT device management.
In conclusion, edge computing is gaining importance as mission-critical decisions increasingly need to be made in real time at the edge. Organizations are recognizing this need for power at the edge: in addition to new systems being designed with AI functionality built in, a number of IoT devices now support machine learning locally. Lastly, with the move toward 5G and the increasing need to embrace cloud-native technology and containers, you’ll be hard-pressed to find a better option than Kubernetes to manage millions of devices across a multi-tiered mix of resources.