6 Key Concepts to Master for a Successful Kubernetes Implementation [Intro to Webinar Series]

People often talk about Kubernetes as if it’s just one tool that is always deployed in a single way. If only!

The reality is that Kubernetes is much more complicated. It’s actually a collection of more than a half-dozen different tools, combined with various third-party services, that together build out a complete application hosting platform. There are also multiple deployment models for Kubernetes, an array of different distributions and a number of core concepts that you have to understand to use it effectively.

Suffice it to say, then, that Kubernetes is much more complicated in practice than it often appears in theory. If you’re just getting started with it, it’s easy to underestimate the many variables surrounding Kubernetes. As a result, you may end up sticking with whichever implementation approach seems simplest or is the “default” for the software ecosystem you’re accustomed to working within. (For instance, if you already use Red Hat Enterprise Linux, maybe you default to OpenShift as your Kubernetes platform because it’s also a Red Hat product.)

To help teams plan a more informed Kubernetes implementation strategy, this article lays out six key concepts to understand before you commit to a particular deployment model or strategy. This blog alone isn’t enough to offer comprehensive guidance on implementation planning, but it highlights some of the most important considerations to weigh so you can avoid short-sighted decisions and common pitfalls as you plan a Kubernetes strategy for the first time.

Interested in more information on Kubernetes in production deployment?

Sign up for an instructive webinar series from Platform9 and key partners:
Enterprise Action Plan – Moving to Production with Kubernetes

Kubernetes architecture

At its core, the Kubernetes architecture is straightforward enough: Kubernetes relies on worker nodes to host applications and one or more “master” nodes (also known as control plane nodes) to manage the workers.

But things are considerably more complicated once you dive deeper. Workloads don’t run directly on the nodes; instead, they are typically hosted in containers (although it’s possible to orchestrate virtual machines with Kubernetes as well via an add-on like KubeVirt). Individual containers are grouped into what are called Pods, and Pods can be segmented by namespaces.
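
To make that grouping concrete, here’s a minimal sketch using the official Kubernetes Python client that creates a namespace and then a single-container Pod inside it. The names and image (“demo”, “hello-pod”, “web”, nginx) are purely illustrative, and the code assumes you have a working kubeconfig:

    # Minimal sketch: create a namespace, then run one container in a Pod.
    # Assumes the official Python client ("pip install kubernetes") and a
    # working kubeconfig; all names below are illustrative.
    from kubernetes import client, config

    config.load_kube_config()  # read cluster credentials from ~/.kube/config
    v1 = client.CoreV1Api()

    # A namespace segments Pods (and other resources) within the cluster.
    v1.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name="demo")))

    # A Pod wraps one or more containers; here, a single nginx container.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="hello-pod"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(name="web", image="nginx:1.25")]
        ),
    )
    v1.create_namespaced_pod(namespace="demo", body=pod)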

Worker nodes and master nodes also run different software components. Worker nodes are powered by kubelet (the Kubernetes node agent), kube-proxy (a network proxy) and a container runtime (see below for more on this), while the master node hosts a key-value store (etcd), an API server (kube-apiserver), a scheduler (kube-scheduler) and a controller manager (kube-controller-manager).
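
If you want to see these pieces on a live cluster, one quick check is to list the nodes and the Pods in the kube-system namespace. A short sketch with the Python client, again assuming a working kubeconfig:

    # Sketch: inspect nodes and control-plane components on an existing cluster.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Each node reports its kubelet version and other details to the API server.
    for node in v1.list_node().items:
        print(node.metadata.name, node.status.node_info.kubelet_version)

    # On self-managed clusters, kube-apiserver, etcd, kube-scheduler and
    # kube-controller-manager typically appear as Pods in kube-system;
    # managed services may hide the control plane from this view.
    for pod in v1.list_namespaced_pod("kube-system").items:
        print(pod.metadata.name)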

And these are just the essential aspects of Kubernetes architecture. For a more detailed breakdown, this Kubernetes architectural guide is useful.

Kubernetes’s underlying technologies

Kubernetes can be deployed on a range of different operating systems and hardware, but it still depends on certain key underlying technologies.

One is Linux. Although it’s possible to run Kubernetes worker nodes on Windows, you’ll need Linux servers to host your master nodes.

Another is the container runtime, the technology that allows Kubernetes to execute containerized applications. Kubernetes is compatible with several major runtimes, including Docker, containerd and CRI-O, but some of the more obscure runtimes that you may find in the wild won’t work with Kubernetes.
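
Because each node reports its runtime (and operating system) to the API server, you can check what an existing cluster is actually running on. A minimal sketch with the Python client, assuming a working kubeconfig:

    # Sketch: report each node's operating system and container runtime.
    from kubernetes import client, config

    config.load_kube_config()
    for node in client.CoreV1Api().list_node().items:
        info = node.status.node_info
        # Prints something like: worker-1  Ubuntu 22.04 LTS  containerd://1.7.2
        print(node.metadata.name, info.os_image, info.container_runtime_version)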

Beyond this, Kubernetes can be integrated with a range of different tools and drivers to support services like logging and persistent storage. As noted below, however, the Kubernetes distribution you use may impose limitations on which integrations are available (or at least, which ones are supported by the distribution vendor).

Kubernetes distributions

Kubernetes itself is an open source project, and you can download and install it yourself at no cost.

However, some organizations opt instead to use a Kubernetes distribution, such as OpenShift, Rancher or Google Kubernetes Engine. Distributions provide Kubernetes as a package that also includes (in most cases) a user-friendly installer and management tools.

The tradeoff is that some distributions are “opinionated” about the way they expect you to do things. For example, whereas Kubernetes itself (as noted above) is compatible with a range of different logging tools, a distribution like OpenShift expects you to use a logging stack based on Elasticsearch, Fluentd and Kibana.

The lesson here is that, while using a distribution may make it easier to set up and deploy Kubernetes, you should be aware of which limitations the distribution places on other tooling. If the distribution you are considering imposes too many constraints but you don’t want the hassle of managing open source Kubernetes on your own, consider instead using a Kubernetes management service like Platform9, which provides the ease-of-use of a Kubernetes distribution without locking you into a certain technology stack.

Deployment models

There are three main ways to deploy Kubernetes (a short sketch after this list shows how the same client tooling connects to a cluster under any of them):

  • “Minified” Kubernetes: You can create a small cluster using a Kubernetes distribution designed for this purpose, such as K3s or MicroK8s. These distributions are aimed primarily at local development, testing, edge and other resource-constrained environments; they’re generally not the right foundation for a full-scale production Kubernetes implementation.
  • Self-managed Kubernetes: You can set up and manage a Kubernetes cluster on your own servers. Those servers could run on-premises or through a cloud provider’s IaaS service, like AWS EC2 or Azure Virtual Machines.
  • Kubernetes-as-a-Service: Most public cloud providers offer Kubernetes as a service, meaning they automate the cluster setup process for you and host the cluster on their own hardware. Google Kubernetes Engine, Azure Kubernetes Service and Amazon Elastic Kubernetes Service are the major offerings in this vein. You can also run some Kubernetes distributions, such as OpenShift, via a Software-as-a-Service model if you wish.
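
Whichever model you pick, client tooling reaches the cluster through a kubeconfig context, so the same code works against a local minified cluster and a cloud-hosted one alike. A minimal sketch, assuming your kubeconfig contains contexts with the hypothetical names “k3s-local” and “cloud-prod”:

    # Sketch: point the same client code at clusters from different deployment
    # models by switching kubeconfig contexts (context names are placeholders).
    from kubernetes import client, config

    for context_name in ("k3s-local", "cloud-prod"):
        config.load_kube_config(context=context_name)
        nodes = client.CoreV1Api().list_node().items
        print(f"{context_name}: {len(nodes)} node(s)")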

To learn more about the pros and cons of different deployment models, check out this Kubernetes deployment guide.

The meaning of “managed Kubernetes”

Speaking of Kubernetes-as-a-Service, some cloud vendors that offer this type of solution refer to it as “managed Kubernetes.” This is a somewhat ambiguous term.

In some contexts, such as when you’re talking about a service like Google Kubernetes Engine, managed Kubernetes mostly means that the work of provisioning the cluster is handled for you. End users are still responsible for monitoring and troubleshooting their clusters on their own.

In other contexts, like Platform9 Managed Kubernetes (PMK), users get a more extensive and proactive management service. (For details on how PMK compares to other Kubernetes deployment options, check out this Kubernetes buyer’s guide.)

In short, “managed Kubernetes” has a range of meanings. Depending on the amount of active support you require, one type of managed Kubernetes service may be better for you than another.

Multi-cloud Kubernetes

Over the past year or so, the idea of deploying Kubernetes clusters that span multiple clouds (and/or on-premises data centers) has grown in popularity. Anthos, a newish Kubernetes-based platform from Google, was designed first and foremost for this purpose. Rancher also actively supports multi-cloud Kubernetes.

There are a variety of pros and cons to running Kubernetes as part of a multi-cloud architecture (this blog post offers some additional guidance). If you’re just getting started with Kubernetes, sticking with a single cloud keeps things simpler; spanning multiple clouds starts to pay off when you already have sprawling environments across clouds or data centers and want to manage them through one platform.
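
One practical upside is that a fleet of clusters, wherever they run, can be inventoried through the same API. A minimal sketch, assuming every cluster (in any cloud or on-premises) is already registered as a context in your local kubeconfig:

    # Sketch: a simple cross-cluster inventory built from kubeconfig contexts.
    from kubernetes import client, config

    contexts, _active = config.list_kube_config_contexts()
    for ctx in contexts:
        name = ctx["name"]
        api_client = config.new_client_from_config(context=name)
        api = client.CoreV1Api(api_client=api_client)
        print(f"{name}: {len(api.list_pod_for_all_namespaces().items)} Pod(s)")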

Conclusion

No matter how you choose to use it, Kubernetes is a complicated platform that can be deployed in a variety of ways. The choices offered by different Kubernetes architectures, distributions and deployment strategies are part of the reason why Kubernetes is so flexible and powerful, but all of the variables and moving pieces also make it more difficult to master Kubernetes.

No matter how you approach your Kubernetes implementation, Platform9 can help. By providing a fully managed Kubernetes solution that is compatible with a range of third-party hosting infrastructures, Platform9 allows you to build your Kubernetes cluster in whichever way makes most sense for your needs, while still offering the support and management you need to keep Kubernetes running smoothly.

Interested in more information on Kubernetes in production deployment?

Sign up for an instructive webinar series from Platform9 and key partners:
Enterprise Action Plan – Moving to Production with Kubernetes
