Enterprises are witnessing the emergence of the intelligent edge as a key strategic component of the future network. According to leading analysts, by 2025, 30 percent of workloads will run at the edge, up from only 1 percent today. An increasing share of enterprise-generated data (more than 50 percent for many organizations) will be created outside the traditional data center or the public cloud.
Macro-environment issues such as the coronavirus have also accelerated the need for edge capacity for remote work, telemedicine, and distance learning. Much of the data from this brave new world will stay local, where it is best applied, rather than being sent elsewhere. Low-latency, high-bandwidth applications will run closer to the users and devices they serve, e.g., IoT and AR/VR. Regulatory fiat will also require that much data remain within state boundaries. The application environment for the edge must change as well. Developers accustomed to the public cloud model will demand scalable, easily programmable, responsive infrastructure at the edge.
Edge cloud trends are clear, but they also give rise to new operational challenges. The breadth and scale of the edge make existing architectures insufficient. The cost and complexity of management make traditional virtualization and process solutions far less effective at the edge. Remote management will be required to overcome the lack of on-site technical staff or easy access. Managing these sites' availability and governance will also be critical, as remote sites are inherently less secure and lack the reliable network connectivity of central locations.
Different use cases, for example AI/ML or Big Data analytics, will demand diverse workload types: containers, hypervisors, or bare metal. These different deployment types will require a shared platform to avoid duplicate or single-location solutions that slow adoption.
It may be tempting to try to solve many of these issues via the public cloud model. However, public clouds are designed for tens of homogeneous regions, and existing edge infrastructure is not tethered seamlessly to the public cloud. The same holds true for private cloud implementation blueprints. Enterprises must have an operational plan to avoid the pitfalls of the private cloud, including point solutions that lack agile operations or intelligent platform engineering.
Operating edge clouds at scale, whether using Kubernetes, VMs, or bare metal, requires a fundamentally different architectural approach and model. A SaaS management approach to deployment provides key benefits, including faster time to market, reduced operational costs, zero-touch operations, and shared visibility and control across all edge infrastructure. A centralized SaaS approach also simplifies Day 2 management challenges by providing central management across diverse sites. Our new white paper on Edge Platform challenges and solutions provides guidance on how to navigate this emerging edge opportunity. Download it here.
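The centralized, zero-touch model described above can be sketched in a few lines of Python. This is a minimal illustration of the pattern, not any product's actual API: a central controller holds the desired workload state for every site and computes the actions needed to converge each one, skipping sites whose links are temporarily down. The `EdgeSite` and `Controller` names are assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of centralized edge management; names are illustrative.

@dataclass
class EdgeSite:
    name: str
    reachable: bool = True                        # edge links are often unreliable
    running: dict = field(default_factory=dict)   # workload name -> deployed version

class Controller:
    """Central (SaaS-style) controller holding desired state for all sites."""

    def __init__(self, desired: dict):
        self.desired = desired  # workload name -> target version, shared across sites

    def reconcile(self, site: EdgeSite) -> list:
        """Compute the actions needed to converge one site toward desired state."""
        if not site.reachable:
            return [("retry-later", site.name)]   # tolerate flaky connectivity
        actions = []
        for name, version in self.desired.items():
            if site.running.get(name) != version:
                actions.append(("deploy", name, version))
                site.running[name] = version      # zero-touch: no on-site operator
        return actions
```

For example, `Controller({"telemetry-agent": "1.2"}).reconcile(EdgeSite("store-042"))` yields one `deploy` action, and a second reconcile of the same site yields none. The key design point is that desired state lives centrally and every site converges toward it on its own schedule, which is what makes thousands of heterogeneous, intermittently connected sites manageable without local staff.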
How to Operate Edge Clouds at Scale?

In this whitepaper you’ll learn:
- Why edge infrastructure is not tethered seamlessly to the public cloud
- How to use Kubernetes, VMs or bare metal
- Challenges and solutions to navigate the emerging edge opportunity