As enterprises move towards deploying multi-cloud, hybrid, and edge workloads, we are seeing an evolution in the landscape towards distributed Kubernetes products and solutions managed through cloud-hosted and SaaS control planes.
This paper compares Platform9 Managed Kubernetes (PMK) and Google Anthos, both of which offer the following key capabilities:
Let’s first review the key differences between the two solutions, before drilling into a detailed comparison table. The two most important differences are:
Lock-in comes in several forms:
Platform9 has the least lock-in considering the above factors. Google Anthos runs on-premises and on AWS with limited functionality compared to GKE, and each version of Anthos is tightly coupled with a specific Kubernetes version.
Maturity of the managed Kubernetes service is important because once deployed, the clusters need to be operated in an efficient and risk-averse manner. There also needs to be a minimal set of capabilities and integrations that allows DevOps teams to quickly deploy and get value out of Kubernetes. Here are some factors that help assess the maturity of a managed Kubernetes service:
Platform9 has offered production-ready managed Kubernetes to enterprises since February 2017. Its SaaS-managed control plane can manage thousands of edge sites just as easily as multiple Kubernetes clusters spread across clouds or on-premises data centers. Google Anthos is inconsistent in the functionality it provides on GKE versus on-premises and other clouds.
The following detailed comparison table covers 19 technical and operational categories including deployment & provisioning, application & infrastructure management, and production features such as HA, zero-touch upgrades, and monitoring. The pie charts indicate level of completeness of the corresponding capability in PMK and Google Anthos.
While the promise of Kubernetes is complete portability of workloads, in reality managed Kubernetes services don't always support clusters running just anywhere. Organizations with long-term hybrid or multi-cloud goals must consider how well a managed Kubernetes service supports the cloud providers and infrastructure options that are part of those plans. As opposed to simply attaching Kubernetes clusters for visibility, a fully managed Kubernetes offering includes maintenance, upgrades, deployments, and other capabilities on the supported infrastructure that go beyond providing basic visibility.
A successful Kubernetes platform must be easy to implement and maintain so organizations can leverage containers continuously. This alone is a major barrier that many organizations do not overcome.
Organizations may need to run multiple versions of Kubernetes rather than be forced onto the single, specific version that the managed service provider supports. This is advantageous when teams prefer that test/dev clusters not run the same Kubernetes version as production clusters, or when an organization is currently on a different version of Kubernetes than the one the managed service supports.
For organizations looking to deploy Kubernetes clusters on premises, deploying clusters on bare metal must be a seamless experience in order for them to fully realize the value of Kubernetes and containers. This enables organizations to do away with virtualization licensing costs, the management overhead associated with the virtualization stack, and the performance hit that applications incur because of the hypervisor layer. Kubernetes solutions that include or tightly integrate with bare metal orchestration and automation tools have an advantage, since they enable organizations to run Kubernetes on bare metal without the pain of managing bare metal themselves.
While running containers efficiently is usually the main motivation for adopting Kubernetes, most organizations find that they still need to run and manage VMs. This happens for various reasons: some applications are not designed to run in containers, legacy workloads are harder to containerize, and the effort needed to migrate applications to containers can be too large and expensive, to name a few. This results in the maintenance of separate stacks for VMs and containers, which increases the operational burden on infrastructure teams. Support for VM management alongside Kubernetes containers simplifies this.
A single Kubernetes cluster can scale horizontally to support large sets of workloads. However, running Kubernetes in production requires being able to run multiple Kubernetes clusters, as you will want to fully isolate your dev/test/staging applications from production applications by deploying them on a separate cluster.
A production Kubernetes cluster must be monitored at all times to handle any issues and outages without severely affecting cluster and application availability to users. An enterprise Kubernetes solution must provide this capability out of the box.
As more and more organizations run their business on Kubernetes, IT must ensure that it can support the SLAs the business requires and that Kubernetes remains available to developers and the business for key initiatives. Most organizations require 99.9% uptime.
Running containerized applications on Kubernetes clusters requires having access to a container registry where your application images will be stored. A large enterprise organization will typically want a secure private container registry to store their proprietary application images. An enterprise Kubernetes solution should provide image management capability out of box.
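As a sketch of what using a private registry involves, a cluster typically needs registry credentials stored as a Secret and referenced by workloads; the registry host, secret name, and image below are placeholders, not specific to either product:

```yaml
# Hypothetical example: pulling a proprietary image from a private registry.
apiVersion: v1
kind: Secret
metadata:
  name: private-registry-creds
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded Docker config with registry credentials>
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  imagePullSecrets:
    - name: private-registry-creds   # credentials used to pull the image
  containers:
    - name: app
      image: registry.example.com/team/app:1.0
```

A built-in registry removes the burden of provisioning and securing this infrastructure separately.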
Kubernetes has a large community of contributors, and a new version is released every three months. An enterprise-class solution will support rolling upgrades of clusters, such that the cluster and the cluster API remain available even while the cluster is being upgraded. Additionally, it will provide the ability to roll back to the previous stable version upon failure.
Kubernetes supports multi-tenancy at the cluster level using the namespace abstraction. However, in a multi-cluster environment, you need a higher level multi-tenancy abstraction to supplement Kubernetes multi-tenancy and provide the right level of isolation across different teams of users. It should integrate with Single-Sign On (SSO) solutions most commonly used by enterprises such as Active Directory or ADFS, Okta, and other popular SAML providers.
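To illustrate the cluster-level building blocks that such a higher-level abstraction supplements, a team is typically isolated in its own namespace and granted access via a group asserted by the SSO provider; the namespace and group names below are placeholders:

```yaml
# Illustrative namespace-level isolation for one team.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-devs          # group claim asserted by the SSO/SAML provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                   # built-in role granting read/write within a namespace
  apiGroup: rbac.authorization.k8s.io
```

A multi-cluster tenancy layer extends this pattern across many clusters without hand-maintaining such bindings per cluster.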
An application catalog, backed by Helm, provides easy one-click deployment of a set of pre-packaged applications on top of Kubernetes. It also gives end users a vehicle to build and publish their own applications via the catalog, so that others in their team or organization can deploy them with one click. The application catalog enables organizations to standardize on a set of application deployment recipes or blueprints, avoiding configuration sprawl.
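The "recipes" published to such a catalog are Helm charts. As a minimal sketch, a chart's metadata file (`Chart.yaml`) versions the blueprint independently of the application it deploys; the names and versions below are hypothetical:

```yaml
# Minimal Chart.yaml for a hypothetical internal application
# published to a team catalog.
apiVersion: v2
name: internal-billing-app
description: Deployment blueprint for the billing service
version: 1.2.0        # version of the chart (the blueprint itself)
appVersion: "4.7"     # version of the application the chart deploys
```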
Networking in Kubernetes can get complex and is not trivial to understand. Kubernetes specifies a Container Network Interface (CNI) that enables software-defined networking plugins such as Calico and Flannel to be integrated with Kubernetes clusters. A managed offering should manage the lifecycle of the CNI, help troubleshoot CNI issues, and provide services around the design and implementation of cluster networking.
Load balancers are an important component of Kubernetes clusters - not just for load distribution, but also for Ingress. A complete, production-ready Kubernetes solution should include load balancers that are supported on the underlying infrastructure. It should also manage the lifecycle of the load balancer.
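As an illustration of what the platform must back with real infrastructure, a `LoadBalancer` Service only works if something provisions the load balancer behind it; the names below are placeholders:

```yaml
# Sketch: exposing pods through a LoadBalancer Service. The managed
# platform (or underlying cloud) must provision and maintain the
# actual load balancer that fulfills this request.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web            # pods labeled app=web receive the traffic
  ports:
    - port: 80          # port exposed by the load balancer
      targetPort: 8080  # port the application listens on
```

On public clouds this is fulfilled automatically; on-premises and bare metal, the Kubernetes solution must supply and manage the load balancer itself.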
One of the most critical workloads run by developers is Continuous Integration / Continuous Delivery (CI/CD). A robust CI/CD pipeline is critical to ensure agile development and rapid delivery of new software releases to customers.
Similar to networking, integration with enterprise-grade storage is an essential component of running Kubernetes clusters in production. Kubernetes provides an abstraction called Persistent Volumes (PVs) to hold data persisted by stateful applications. It is important for an enterprise Kubernetes product to map PVs to an actual highly available storage technology. Enterprises will typically want their Kubernetes deployment to integrate with storage solutions they have already deployed, such as NetApp, Pure, or SolidFire, or they may want to integrate with a container-native storage technology such as Portworx.
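From the application's point of view, this mapping is consumed through a PersistentVolumeClaim against a StorageClass that the platform binds to the backend storage; `fast-ha` below is a placeholder class name, not a real product default:

```yaml
# Illustrative claim for persistent storage. The "fast-ha" StorageClass
# is hypothetical and would be mapped by the platform to backend
# storage such as NetApp, Pure, or Portworx.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce     # volume mountable read-write by a single node
  storageClassName: fast-ha
  resources:
    requests:
      storage: 20Gi
```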
Lock-in occurs in different ways. Some of the most common are cloud services that tie organizations to vendors, vendor-specific Kubernetes distributions, architecture, and the skillsets and culture of teams. This is an important factor to consider when adopting a managed Kubernetes service.
Given the complexity of Kubernetes, it is important for the managed Kubernetes service to have been generally available in the market for a reasonable amount of time. Especially for clusters deployed in production environments, experienced support and a reliable, battle-tested product are important factors to consider.
Not every company is ready to go into production right away. Kubernetes is still new and companies need the room to start free, learn, test, and then scale to production on their terms. A 100% free managed service gives users the freedom to start at zero cost and grow at their own pace into more supported options.
Getting the most out of Kubernetes for multi-cloud, edge, and distributed deployments is very complex and challenging. Despite all of the built-in automation features that Kubernetes offers for managing container workloads intelligently, achieving optimal performance, cost and reliability at distributed scale requires careful planning and tuning of Kubernetes environments.
The operational complexity increases exponentially at the large-scale distributed sites that 5G roll-outs or retail chains have to deal with. Challenges grow further when you add requirements for advanced networking, stringent latency and performance needs, central/remote management, and bare metal orchestration.
Unlike Google Anthos, which is essentially an extension of GKE to on-premises environments, Platform9 Managed Kubernetes has been designed from the ground up to centrally manage massively distributed and diverse infrastructure environments. Platform9 has large-scale production deployment experience since 2017 using 100% open-source Kubernetes that is 100% cloud- and infrastructure-agnostic.