Best Practices for Production-Grade Kubernetes
Kubernetes is a complex platform that enables highly scalable, efficient use of containers. Operating an enterprise Kubernetes deployment is difficult, and it can be highly problematic for companies that just try to “wing it” and figure out what to do as they go along.
Don’t let that be you. Here are some best practices to employ when running Kubernetes in production.
Deployment
Kubernetes environments are extremely dynamic. Services are deployed and updated frequently. Nodes are added and removed from clusters. Clusters are spun up and down according to workload.
To streamline the ability to deploy and maintain services, keep the following deployment best practices in mind:
- Ensure Open and Flexible Environments
Kubernetes runs on a variety of computing infrastructure, including commodity servers. Existing hardware can be redeployed to run Kubernetes along with newly procured servers. You have your choice of running Kubernetes on bare metal, virtual machines (VMs), or in public clouds. If you already have an established VM environment, running Kubernetes in that environment can be a logical choice. If you’d rather not maintain physical infrastructure, running Kubernetes in a public cloud is a good option and one with low barriers to entry.
- Standardize the Container Build Process
Containers are a key building block of a microservice architecture, and how they’re managed directly impacts the efficiency, reliability, and availability of services running in Kubernetes clusters. The container build process should be automated using a continuous integration/continuous deployment (CI/CD) system. Open source tools such as Jenkins are widely used for CI/CD, and the major public cloud providers offer CI/CD services as well. These tools reduce the workload on developers when deploying a service, allow for automated testing prior to deployment, and can support rollback operations when needed.
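As a rough illustration, the sketch below shows an automated build-test-push pipeline. It uses GitHub Actions syntax purely as an example (a Jenkins pipeline or a cloud provider’s CI service would follow the same pattern), and the registry address, image name, and test script are hypothetical placeholders.

```yaml
# Illustrative CI pipeline: build the container image, test it, push it.
# The registry, image name, and test script below are placeholders.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build container image
        run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - name: Run tests inside the image
        run: docker run --rm registry.example.com/myapp:${{ github.sha }} ./run-tests.sh
      - name: Push image to registry
        run: docker push registry.example.com/myapp:${{ github.sha }}
```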
- Ensure Self-Service
Developers shouldn’t have to coordinate with IT administrators to deploy and monitor services running in Kubernetes clusters. To ensure developers can manage deployments themselves, it’s important to provide tools for deploying and scanning applications. For example, Helm is a package manager for Kubernetes that supports defining, deploying, and upgrading applications, which streamlines application management. Security scanning tools should be in place as well, to help developers identify vulnerabilities in applications before they’re deployed.
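For instance, a Helm chart packages an application’s manifests with metadata and configurable values, so developers can install or upgrade it with a single command. The sketch below shows a minimal Chart.yaml and values.yaml; the chart name, image repository, and replica count are hypothetical.

```yaml
# Chart.yaml -- chart metadata (name and versions are placeholders)
apiVersion: v2
name: myapp
description: Example application chart
version: 0.1.0
appVersion: "1.0.0"
---
# values.yaml -- settings developers can override at install or upgrade time
replicaCount: 2
image:
  repository: registry.example.com/myapp
  tag: "1.0.0"
```

With a chart like this in place, `helm install myapp ./myapp` deploys the application and `helm upgrade` rolls out new versions without developers touching raw manifests.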
- Applications and Storage
With developers and admins alike working across multiple environments, it’s also important to have policies in place that enable efficient use of resources. Consider role-based access control (RBAC) policies, along with resource quotas and limits, to ensure resources are shared fairly and that no single deployment consumes an excessive amount of them.
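One common way to enforce fair resource use is a per-namespace ResourceQuota, sketched below; the namespace name and the specific limits are illustrative.

```yaml
# Caps the total CPU, memory, and pod count a single team's namespace can consume.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"
```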
Operations
There are a number of operations best practices to consider, which will help reduce the operational overhead associated with maintaining Kubernetes clusters.
- Cluster Observability
The idea behind single-pane-of-glass visibility is that all information needed to understand and diagnose the current state of the cluster, deployments, and other components should be available from a single tool. For example, from a single application, administrators should be able to configure monitoring, analyze monitoring data, and specify alerts triggered by that monitoring data. Plan to use a standardized set of monitoring tools for collecting, storing, analyzing, and visualizing performance monitoring data.
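As an example, assuming a Prometheus-based monitoring stack with kube-state-metrics installed, an alerting rule like the following flags pods that restart repeatedly; the metric thresholds are arbitrary placeholders.

```yaml
# Prometheus alerting rule: warn when a container restarts more than
# five times within 15 minutes (thresholds are illustrative).
groups:
  - name: cluster-health
    rules:
      - alert: HighPodRestartRate
        expr: increase(kube_pod_container_status_restarts_total[15m]) > 5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is restarting frequently"
```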
- Scaling
When scaling with Kubernetes, you have the option of scaling the size of a cluster or increasing the number of clusters. When workloads vary widely, the Horizontal Pod Autoscaler can be used to adjust the number of pod replicas in a deployment, while the Cluster Autoscaler adjusts the number of nodes in the cluster. Kubernetes also has a Vertical Pod Autoscaler, but that’s currently in beta release and shouldn’t be used in production. One scaling question you’ll face is whether to run one cluster or multiple clusters. Kubernetes can scale to thousands of nodes and hundreds of thousands of pods, so a single cluster can meet many use cases.
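For reference, a HorizontalPodAutoscaler that scales a Deployment on CPU utilization looks roughly like the sketch below; the target name and replica bounds are hypothetical.

```yaml
# Scales the "myapp" Deployment between 2 and 20 replicas,
# targeting roughly 70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```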
- Governance and Security
To start with, plan to implement granular RBAC policies. These can be used to enforce the principle of “least privilege,” which states that users should have only the permissions they need to perform the tasks assigned to them and no more. Use audit trails to track security-related changes in the system; for example, operations like adding a user or changing permissions should be logged. Also, use encryption to secure communications both within and outside of the cluster.
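A minimal least-privilege example: the Role and RoleBinding below grant a single user read-only access to Deployments in one namespace. The user, namespace, and role names are placeholders.

```yaml
# Read-only access to Deployments in the "team-a" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-reader
  namespace: team-a
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
---
# Bind the role to a single user rather than a broad group.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-deployments
  namespace: team-a
subjects:
  - kind: User
    name: jane@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-reader
  apiGroup: rbac.authorization.k8s.io
```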
It’s a best practice to scan applications for vulnerabilities. In a similar way, you should review vulnerabilities in Kubernetes software and patch as necessary. This is a situation in which it may be beneficial to have multiple clusters, because a patch can be deployed to a single cluster and evaluated before rolling it out to others.
They’re ‘Best’ Practices for a Reason
Kubernetes is a complex platform that enables highly scalable, efficient use of computing and storage resources. But it can also be highly problematic for companies that just try to “wing it” and figure out what to do as they go along.
Don’t let that be you. Following the best practices outlined here will help to ensure that you realize the optimal benefit of your Kubernetes investment. Download the full tech brief, Production Grade Kubernetes best practices checklist.