Platform9 Managed Kubernetes (PMK) is a fully SaaS-managed Kubernetes platform that makes multi-cluster, multi-cloud Kubernetes management delightfully uncomplicated.
Unlike Google Anthos, we bring up your POC environment in a matter of hours, with far better support for VMware vSphere and Linux KVM, at a fraction of Anthos's price.
Several of Anthos's enterprise features, however, are available only when it is deployed on GKE: multi-cluster Ingress, security/encryption features, service mesh features, usage metering, and auto-scaling. Outside of GKE, Google Anthos lags Platform9 in the depth of its feature integrations. If you are looking for deep, native infrastructure integrations, look beyond Anthos.
For organizations looking to deploy Kubernetes clusters on premises, deploying clusters on bare metal must be a seamless experience in order for them to fully realize the value of Kubernetes and containers.
Running directly on bare metal lets organizations do away with virtualization licensing costs, the management overhead of the virtualization stack, and the performance penalty that applications incur from the hypervisor layer.
Does the Managed Kubernetes solution include or tightly integrate with bare metal orchestration and automation tools? How well does it enable the value of bare metal without additional complexity?
A production Kubernetes cluster must be monitored at all times to handle any issues and outages without severely affecting cluster and application availability to users. An enterprise Kubernetes solution must provide this capability out of the box.
As more and more organizations run their business on Kubernetes, IT must ensure that it can support the SLAs the business requires and that Kubernetes remains available to developers and the business for key initiatives. Most organizations require 99.9% uptime.
Running containerized applications on Kubernetes clusters requires access to a container registry where your application images are stored. A large enterprise organization will typically want a secure private container registry for its proprietary application images. An enterprise Kubernetes solution should provide image management capability out of the box.
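To illustrate how a cluster consumes a private registry, the config fragment below references registry credentials from a pod spec. The registry URL, image, and secret name are hypothetical; the Secret itself would be created beforehand (for example with `kubectl create secret docker-registry`).

```yaml
# Hypothetical example: pull a proprietary image from a private registry.
apiVersion: v1
kind: Pod
metadata:
  name: billing-api
spec:
  containers:
    - name: billing-api
      image: registry.example.com/acme/billing-api:1.4.2  # private image (illustrative)
  imagePullSecrets:
    - name: registry-creds   # Secret holding the registry credentials
```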
Kubernetes has a large community of contributors, and a new minor version is released roughly every three months. An enterprise-class solution will support rolling upgrades of clusters, such that the cluster and the cluster API remain available even while the cluster is being upgraded. Additionally, it will provide the ability to roll back to the previous stable version upon failure.
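A managed service automates rolling upgrades, but the underlying mechanics are worth seeing. On a kubeadm-based cluster, upgrading one worker at a time looks roughly like the sketch below; the node name and version numbers are illustrative, and in practice each node is drained, upgraded, and returned to service before the next one is touched.

```shell
# Sketch of one step of a rolling node upgrade on a kubeadm-based cluster.
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data  # move pods off the node

# Then, on worker-1 itself (illustrative version):
apt-get install -y kubeadm=1.24.x-00 kubelet=1.24.x-00
kubeadm upgrade node        # upgrade the node-local kubelet configuration
systemctl restart kubelet

kubectl uncordon worker-1   # return the node to service before upgrading the next one
```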
It is critical to support concurrent use of multiple Kubernetes versions. This is often required to enable testing and validation of an application on a new version, but also because different applications or teams may have version compatibility constraints.
Kubernetes supports multi-tenancy at the cluster level using the namespace abstraction. However, in a multi-cluster environment, you need a higher-level multi-tenancy abstraction to supplement Kubernetes multi-tenancy and provide the right level of isolation across different teams of users. It should integrate with the Single Sign-On (SSO) solutions most commonly used by enterprises, such as Active Directory/ADFS, Okta, and other popular SAML providers.
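Within a single cluster, team isolation is typically wired up by binding an SSO group to a namespace-scoped role. As a sketch (the namespace and group name are hypothetical, with the group asserted by the identity provider), using Kubernetes' built-in RBAC:

```yaml
# Hypothetical example: give the SSO group "team-payments" edit rights
# in its own namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-payments-edit
  namespace: payments          # the team's isolation boundary
subjects:
  - kind: Group
    name: team-payments        # group name mapped from the identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                   # Kubernetes' built-in aggregate "edit" role
  apiGroup: rbac.authorization.k8s.io
```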
An application catalog built on Helm provides easy one-click deployment of pre-packaged applications on top of Kubernetes. It also gives end users a vehicle to build and publish their own applications via the catalog, for others on their team or in their organization to deploy with one click. The application catalog enables organizations to standardize on a set of application deployment recipes or blueprints, avoiding configuration sprawl.
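Under the hood, such a catalog entry resolves to a Helm chart release. A minimal sketch against a live cluster (the repository, chart, and release names are illustrative):

```shell
# Sketch: deploying a pre-packaged application from a Helm chart repository.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-postgres bitnami/postgresql --namespace data --create-namespace
helm list --namespace data   # verify the release was deployed
```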
Networking in Kubernetes can get complex and is not trivial to understand. Kubernetes specifies a Container Network Interface (CNI) that enables software-defined networking plugins such as Calico and Flannel to be integrated with Kubernetes clusters. A managed offering should manage the lifecycle of the CNI, help troubleshoot CNI issues, and provide services for the design and implementation of cluster networking.
Load balancers are an important component of Kubernetes clusters - not just for load distribution, but also for Ingress. A complete, production-ready Kubernetes solution should include load balancers that are supported on the underlying infrastructure. It should also manage the lifecycle of the load balancer.
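From the application's point of view, the platform's load-balancer integration is consumed through a standard Service of type `LoadBalancer`; the platform (a cloud load balancer, MetalLB on bare metal, etc.) is responsible for fulfilling it. A config sketch with hypothetical names:

```yaml
# Hypothetical example: expose a web frontend through an external load balancer.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: LoadBalancer    # provisioned by the platform's load-balancer integration
  selector:
    app: web-frontend
  ports:
    - port: 80          # external port on the load balancer
      targetPort: 8080  # container port the traffic is forwarded to
```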
One of the most critical workloads developers run is Continuous Integration / Continuous Delivery. A robust CI/CD pipeline is critical to agile development and rapid delivery of new software releases to customers.
Similar to networking, integration with enterprise-grade storage is an essential component of running Kubernetes clusters in production. Kubernetes provides an abstraction called Persistent Volumes (PVs) to hold data persisted by stateful applications. It is important for an enterprise Kubernetes product to map PVs to an actual highly-available storage technology. Enterprises will typically want their Kubernetes deployment to integrate with storage solutions they have already deployed, such as NetApp, Pure, SolidFire, etc., or they may want to integrate with a container-native storage technology such as Portworx.
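In practice this mapping is expressed as a StorageClass backed by the vendor's CSI driver, which applications consume through a PersistentVolumeClaim. A config sketch in which the class name, provisioner, and claim are all hypothetical:

```yaml
# Hypothetical example: a StorageClass backed by an external storage array,
# consumed through a PersistentVolumeClaim.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: csi.example.com      # CSI driver supplied by the storage vendor
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-ssd      # binds the claim to the class above
  resources:
    requests:
      storage: 50Gi
```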
Lock-in occurs in different ways. Some of the most common are cloud services that tie organizations to vendors, vendor-specific Kubernetes distributions, architecture, and the skillsets and culture of teams. This is an important factor to consider when adopting a managed Kubernetes service.
Given the complexity of Kubernetes, it is important for the managed Kubernetes service to have been generally available in the market for a reasonable amount of time. Especially for clusters deployed in production environments, experienced support and a reliable, battle-tested product are important factors to consider.
Not every company is ready to go into production right away. Kubernetes is still new and companies need the room to start free, learn, test, and then scale to production on their terms. A 100% free managed service gives users the freedom to start at zero cost and grow at their own pace into more supported options.
Getting the most out of Kubernetes for multi-cloud, edge, and distributed deployments is very complex and challenging. Despite all of the built-in automation features that Kubernetes offers for managing container workloads intelligently, achieving optimal performance, cost and reliability at distributed scale requires careful planning and tuning of Kubernetes environments.
The operational complexity increases exponentially at the large, distributed scale that 5G roll-outs or retail stores have to deal with. Challenges grow further when you add requirements for advanced networking, stringent latency and performance needs, central/remote management, and bare metal orchestration.
Unlike Google Anthos, which is essentially an extension of GKE to on-premises environments, Platform9 Managed Kubernetes has been designed from the ground up to centrally manage massively distributed and diverse infrastructure environments. Platform9 has run large-scale production deployments since 2017, using 100% open-source Kubernetes that is entirely cloud- and infrastructure-agnostic.