Multi-Cluster Kubernetes Deployments – When and Why?
There are a variety of ways to segment workloads from each other in Kubernetes. You can run them in separate pods or set up different namespaces. But if you want the strongest level of isolation, along with the high availability and performance benefits that it can bring, a multi-cluster Kubernetes deployment is the way to go.
This article explains how multi-cluster Kubernetes works, which use cases it aligns with, and several different approaches to building and managing a multi-cluster deployment.
What Is Multi-Cluster Kubernetes?
A multi-cluster Kubernetes deployment is (as the term implies) one that consists of two or more clusters.
Multi-cluster doesn’t necessarily imply multi-cloud: all Kubernetes clusters in a multi-cluster deployment could run within the same cloud (or the same local data center, if you deploy them outside the cloud). However, because the ability to spread workloads across a wider geographic area is often one of the chief benefits of multi-cluster Kubernetes, it’s common to see this type of deployment within a multi-cloud architecture.
A multi-cluster deployment also does not necessarily have to be managed through a single control plane or interface. Technically, you could set up two different clusters and manage them using totally separate tooling, then call yourself a multi-cluster organization. But that approach is inefficient. It’s much more common to see multi-cluster deployments that are managed through a single platform.
And while a multi-cluster deployment will usually involve multiple control plane nodes (masters) spread across its clusters, that is a side effect rather than the point. Multi-cluster Kubernetes deployments should therefore not be confused with multi-master deployments, in which a single cluster runs several control plane nodes for high availability, or, for that matter, with multi-tenant deployments.
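To make the "single pane of glass" idea a bit more concrete, here is a minimal sketch using the official Python kubernetes client. It assumes nothing more than that each cluster is already registered as a context in your local kubeconfig; it loops over every context and reports how many nodes each cluster has:

```python
# A minimal sketch: enumerate every cluster registered in the local kubeconfig
# and print its node count. Assumes one kubeconfig context per cluster.
from kubernetes import client, config

contexts, active = config.list_kube_config_contexts()
for ctx in contexts:
    name = ctx["name"]
    # Build an API client bound to this specific cluster/context.
    api_client = config.new_client_from_config(context=name)
    nodes = client.CoreV1Api(api_client).list_node().items
    marker = "*" if name == active["name"] else " "
    print(f"{marker} {name}: {len(nodes)} node(s)")
```

Dedicated multi-cluster platforms do far more than this, of course, but the underlying idea is the same: one credential store and one control point for many independent clusters.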
The Benefits of Multi-Cluster Deployments
The benefits of a multi-cluster deployment can be broken down into four main categories: performance, workload isolation, flexibility, and security.
Performance
A multi-cluster deployment may improve Kubernetes performance in several ways:
- Lower latency: Running multiple clusters makes it easier to deploy workloads close to different groups of users. You could run one cluster in the cloud and another in a colocation center that is closer to one of your target demographics, for example. By reducing the geographic distance between your users and your clusters, you reduce latency.
- Availability: Multiple clusters can improve the availability of your workload. You can use one cluster as a failover or backup environment in the event that another cluster fails, for instance (see the health-check sketch after this list). In addition, by spreading clusters between different data centers and/or clouds, you avoid the risk that the failure of a single data center or cloud will disrupt all of your workloads.
- Workload scalability: Running more than one cluster may improve your ability to scale up workloads when necessary. If everything runs in a single cluster, it’s harder to determine which specific workloads need more resources or more replicas, especially if you lack good performance data for specific workloads (which you may if you are only tracking cluster-level health). You are also more likely to run into “noisy neighbor” issues when running everything in a single cluster. And very large clusters can hit the scalability limits that Kubernetes is tested against: currently, no more than 5,000 nodes, 150,000 total pods, and 110 pods per node.
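To illustrate the availability point above, here is a rough sketch of a cross-cluster health check written with the Python kubernetes client. The context names ("primary" and "failover") are hypothetical, and in practice failover is usually driven by DNS or a global load balancer rather than a script, but the sketch shows the basic idea of treating a second cluster as a standby:

```python
# A rough sketch of a cross-cluster health check: probe each cluster and pick
# the first healthy one as the active target. Context names are hypothetical.
from kubernetes import client, config


def cluster_is_healthy(context_name: str) -> bool:
    try:
        api_client = config.new_client_from_config(context=context_name)
        nodes = client.CoreV1Api(api_client).list_node(_request_timeout=5).items
    except Exception:
        return False  # API server unreachable or context not found
    # Healthy if at least one node reports the Ready condition as True.
    return any(
        cond.type == "Ready" and cond.status == "True"
        for node in nodes
        for cond in (node.status.conditions or [])
    )


for ctx in ("primary", "failover"):
    if cluster_is_healthy(ctx):
        print(f"Routing traffic to cluster: {ctx}")
        break
else:
    print("No healthy cluster found")
```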
Workload Isolation
Running different workloads in different clusters provides the maximum possible isolation between workloads. Workloads in separate clusters share no nodes and no control plane, so they cannot contend for each other’s resources, and they cannot communicate with each other unless you explicitly expose services between the clusters.
That said, multi-cluster deployments are certainly not the only way to achieve workload isolation in Kubernetes. You can also set up multiple namespaces within a single cluster, which provide a logical boundary between workloads, and layer resource quotas and network policies on top of them to limit resource contention and traffic between pods. But none of these methods deliver the iron-clad isolation of a multi-cluster deployment.
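For comparison, here is what those in-cluster alternatives look like when scripted with the Python kubernetes client: a dedicated namespace, a ResourceQuota to cap what it can consume, and a default-deny NetworkPolicy to cut off pod-to-pod traffic. The namespace name and quota values are purely illustrative, and the NetworkPolicy only takes effect if your CNI plugin enforces network policies:

```python
# A sketch of single-cluster isolation: a namespace, a quota, and a
# default-deny NetworkPolicy. Names and limits are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context
core_v1 = client.CoreV1Api()
net_v1 = client.NetworkingV1Api()

ns = "team-a"
core_v1.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=ns)))

# Cap how much the namespace can consume, mitigating noisy neighbors.
core_v1.create_namespaced_resource_quota(
    namespace=ns,
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-a-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "4", "requests.memory": "8Gi", "pods": "20"}
        ),
    ),
)

# Deny all ingress and egress for pods in the namespace by default.
net_v1.create_namespaced_network_policy(
    namespace=ns,
    body=client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # selects every pod in the namespace
            policy_types=["Ingress", "Egress"],
        ),
    ),
)
```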
Workload isolation is especially important if you have multiple teams or multiple departments that want to deploy workloads on Kubernetes, but that don’t want to worry about noisy-neighbor or privacy issues. It is also often desirable if your team wants to separate a dev/test environment from production, or if you are experimenting with different Kubernetes settings and don’t want to risk a configuration change that could cause issues for production workloads.
Flexibility
When you run multiple clusters in Kubernetes, you gain fine-grained control over how each cluster is configured. You could use a different version of Kubernetes for each cluster, for instance, or choose a different CNI.
The configuration flexibility of multi-cluster deployments is beneficial if you have an application that depends on a certain setup or version of a tool in the stack. It’s also valuable if you want to test new versions of Kubernetes in an isolated dev/test cluster before upgrading your production clusters to the new version.
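One simple way to keep track of that flexibility is to query the server version of every cluster defined in your kubeconfig. The following sketch (again using the Python kubernetes client) does exactly that, so you can confirm, for example, that a dev/test cluster is on the newer release you are evaluating while production stays on the older one:

```python
# A sketch: report the Kubernetes server version for every kubeconfig context.
from kubernetes import client, config

contexts, _ = config.list_kube_config_contexts()
for ctx in contexts:
    api_client = config.new_client_from_config(context=ctx["name"])
    version = client.VersionApi(api_client).get_code()
    print(f"{ctx['name']}: {version.git_version}")
```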
Security and Compliance
The workload isolation that multi-cluster Kubernetes offers also provides some security and compliance benefits. Strict isolation between workloads mitigates the risk that a security issue in one pod could escalate to impact others.
Here again, a multi-cluster deployment is not the only way to achieve this benefit. There are a variety of other controls in Kubernetes, such as Pod Security admission (which replaced the deprecated PodSecurityPolicy) and network policies, that can help stop security issues from escalating. But at the end of the day, segmenting workloads into different clusters provides the strongest isolation.
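As one concrete example of such an in-cluster control, the sketch below uses the Python kubernetes client to apply the "restricted" Pod Security Standard to a single namespace via Pod Security admission labels. The namespace name is hypothetical, and the labels only have an effect on Kubernetes versions where Pod Security admission is enabled:

```python
# A sketch: enforce (and warn on) the "restricted" Pod Security Standard
# for one namespace. The namespace name is illustrative only.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context
client.CoreV1Api().patch_namespace(
    name="team-a",
    body={
        "metadata": {
            "labels": {
                "pod-security.kubernetes.io/enforce": "restricted",
                "pod-security.kubernetes.io/warn": "restricted",
            }
        }
    },
)
```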
A second factor to consider from a compliance perspective is that running multiple clusters may make it easier to meet certain compliance rules. For example, if you need to keep some workloads on-premises or keep data within a certain geographic region due to regulatory requirements, you can deploy a cluster in a location that addresses those needs, while running other clusters elsewhere.
Single-Cluster vs. Multi-Cluster: How to Decide
The biggest factor in deciding between a single- or multi-cluster deployment is scale. The larger your organization and the greater the number of applications you need to deploy, the more you stand to gain from multiple clusters.
Your approach to dev/test is a factor to consider as well. If you want your development applications to run in the same cluster as production, multiple clusters make less sense. On the other hand, multiple clusters are a good strategy if you want to isolate dev/test from production.
The dispersion of your users is worth thinking about, too. If you have users spread over a large geographic area, multiple clusters may help you reduce latency and improve performance, as noted above.
Finally, consider your Kubernetes management tooling. If you lack an efficient way to manage multiple clusters, a multi-cluster deployment is likely to be more hassle than it is worth. It’s only when you can deploy and manage multiple clusters easily that you can reap the full benefits of a multi-cluster architecture without having management complexity undercut the value that Kubernetes provides.
Approaches to Multi-Cluster Kubernetes
There are several ways to go about setting up and managing a multi-cluster deployment.
DIY Multi-Cluster
The most basic option is the so-called DIY approach, in which you set up and operate each cluster yourself. This method requires the greatest effort, but it gives you maximum flexibility over where the clusters run and how they are managed. You can set up the clusters in virtually any cloud or private data center, and you can manage them using any platform that supports multi-cluster management (which Platform9 does for clusters running on most types of infrastructure and clouds).
Using a Multi-Cluster Distribution
You can also leverage a Kubernetes distribution that is designed for multi-cluster support. Most of the major distributions now do this:
- Anthos: Anthos, Google’s Kubernetes-based hybrid cloud platform, can manage clusters running on multiple clouds as well as on-premises. The management layer is provided by GKE, Google’s Kubernetes distribution.
- EKS Anywhere: Amazon’s recently announced EKS Anywhere platform lets you deploy and manage clusters both in the AWS cloud and on-premises. EKS Anywhere will likely also support clusters in other public clouds in the future, but Amazon has not yet announced concrete plans in this regard.
- Tanzu: Tanzu, VMware’s Kubernetes platform, supports multiple clusters running in any public cloud or on-premises, as long as the clusters conform to CNCF standards.
Using a Multi-Cluster Management Platform
A third approach is to use a Kubernetes management platform that can support multiple clusters.
Platform9 Managed Kubernetes (PMK) is the prime example here. PMK is not a Kubernetes distribution; it doesn’t provide its own implementation of Kubernetes or require you to configure your clusters in a specific way. It supports a variety of different CNIs, storage drivers, and so on.
Instead, PMK lets you deploy clusters on virtually any infrastructure of your choice – whether a public cloud or a private data center – and then manage them all through the Platform9 control plane.
The main advantage of this approach as compared to using a Kubernetes distribution to manage multiple clusters is that you are not tied to specific types of cluster configurations or tooling. With Platform9, you could also configure each cluster in a different way (using different CNIs, for instance) and still manage them centrally via PMK.
You can even leverage Platform9’s multi-version support to deploy a different version of Kubernetes in each cluster. This gives you the flexibility to run whichever version or versions of Kubernetes you need. For example, you can test a newer Kubernetes release in one cluster while keeping your production clusters on a tried-and-true version, and still manage all of those clusters and versions centrally through Platform9.
Conclusion
Multi-cluster Kubernetes offers a range of benefits, especially for teams that need to operate at large scale or that require strict isolation between their workloads. In order to get the most out of a multi-cluster approach, however, it’s critical to select a management platform that allows you to manage multiple clusters efficiently.