10 Kubernetes Performance Tips

Overview

This article covers the top 10 Kubernetes performance best practices.

Kubernetes is a scalable and performant engine that orchestrates containers in a server environment. It is highly optimized by default, and it scales nicely in a suitable infrastructure.

It is also less opinionated by default, and there are plenty of customizations for end-users to define. This flexibility allows Kubernetes to cover many different use cases and penetrate the market faster, making it extremely popular.

Since server costs can increase quickly, you have to find ways to increase your infrastructure utilization and reduce costs to get the most out of your environments. In this article, we will give you ten tips for squeezing every bit of efficiency and performance out of your Kubernetes distribution.

Kubernetes Performance Best Practices

1. Define Deployment Resources

Kubernetes orchestrates containers at scale, and the heart of this mechanism is the efficient scheduling of pods into nodes. You can actually help the Kubernetes scheduler do this by specifying resource constraints.

In other words, you can define requests and limits for resources such as CPU, memory, or Linux HugePages.

For example, let’s say you have a Java microservice that acts as an email sender service, so it primarily handles network requests. You can assign the following resource profile:

resources:
  requests:
    memory: 1Gi
    cpu: 250m
  limits:
    memory: 2.5Gi
    cpu: 750m

If you were to implement the same application in Go, you would use a different set of resource constraints (mostly memory-related):

resources:
  requests:
    memory: 128Mi
    cpu: 250m
  limits:
    memory: 1Gi
    cpu: 750m

So what do we mean by CPU and memory? That depends, since there are different kinds of CPUs and different kinds of memory chips, and they vary from provider to provider. Xeon CPUs are better suited for server environments, for example, while DDR4 chips offer higher read/write throughput than older generations. You can also choose CPU-intensive or memory-intensive nodes for specific deployments so that Kubernetes can schedule them appropriately.

Ultimately, when you clearly define the resource requirements in the deployment descriptor, you make it easier for the scheduler to ensure that each resource is allocated to the best available node, which will improve runtime performance.

2. Deploy Clusters Closer to Customers

The geographic location of the cluster nodes that Kubernetes manages is closely related to the latency that clients experience.

For example, nodes that host pods located in Europe will deliver faster DNS resolution and lower latencies for customers in that region.

Before spawning Kubernetes clusters here and there, however, you need to devise a careful plan for handling multi-zone Kubernetes clusters. Each provider limits which zones can be combined to provide the best fault tolerance. For example, AKS supports availability zones only in a specific list of regions, while GKE offers multi-zonal or regional clusters, each with its own list of pros and cons.

It’s important to start locally and then expand if you observe issues with the current traffic or if you offer priority routing for some services. This will give you more time to figure out what the major bottlenecks are when serving content to customers.

3. Choose Better Persistent Storage and Quality of Service

Just as there are different kinds of CPUs and memory chips, there are different kinds of persistent storage hardware. For example, SSDs offer better read/write performance than HDDs, and NVMe SSDs are even better under heavy workloads.

There are many different types of persistent volumes available in Kubernetes, which is very extensible and deliberately unopinionated about storage.

Some Kubernetes providers extend the persistent volume claims schema definitions with Quality of Service levels. That means they prioritize volume read/write quotas for specific deployments, offering better throughput under certain circumstances.

In general, everyone agrees that better hardware offers better performance, but it's important to realize that, in most cases, the improvement is a constant factor. If you were to upgrade from SSD to NVMe, you would see a percentage increase in read/write throughput, but you would not see any difference in network latency.
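
To take advantage of faster disks, you typically request them through a StorageClass. Here is a minimal sketch assuming a GKE cluster with the in-tree GCE persistent disk provisioner; the class name fast-ssd is a placeholder of our own:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd            # hypothetical name for this example
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd              # provision SSD-backed persistent disks
allowVolumeExpansion: true

Any persistent volume claim that sets storageClassName: fast-ssd will then be provisioned on SSD-backed storage.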

4. Configure Node Affinities

Not all nodes run on the same hardware, and not all pods need to run CPU-intensive applications. Specializations for nodes and pods are available in Kubernetes via Node Affinity and Pod Affinity.

When you have nodes that are suitable for CPU-intensive operations, you want to pair them with CPU-intensive applications to maximize efficiency. To do that, you can use the nodeSelector field with a node label.

For example, let’s say that you have two nodes: one with CPUType=HIGHCORE that offers high CPU core count and frequency, and another with MemoryType=HIGHMEMORY that offers the highest and fastest memory available.

The simplest way to schedule a Pod onto the HIGHCORE node is to add the following selector to the spec section of the deployment:

…
nodeSelector:
  CPUType: HIGHCORE

A more expressive (if more verbose) way to do this is to use nodeAffinity under the affinity field in the spec section. There are currently two options:

  • requiredDuringSchedulingIgnoredDuringExecution: This is a hard preference, and the scheduler will deploy the pods only to specific nodes (and nowhere else).
  • preferredDuringSchedulingIgnoredDuringExecution: This is a soft preference, which means that the scheduler will try to deploy to the specified nodes, and if that’s not feasible, it will try to schedule to the next available node.

You can use specific operators for matching node labels, such as In, NotIn, Exists, DoesNotExist, Gt, or Lt. Bear in mind, however, that complex predicates over big lists of labels will slow down the decision-making process in critical scenarios. In other words, keep it simple.
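
For illustration, here is a minimal sketch of the hard variant, reusing the CPUType label from the earlier example (the rest of the Pod spec is assumed):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: CPUType          # node label from the example above
              operator: In
              values:
                - HIGHCORE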

5. Configure Pod Affinity

As we mentioned earlier, Kubernetes also allows you to configure Pod affinity in terms of currently running pods. Simply speaking, you can have certain pods run alongside other pods in the same zone or group of nodes, so that pods which communicate with each other frequently stay close together.

The fields available under podAffinity (within the affinity field of the spec section) are the same as those for nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution. The sole difference is that matchExpressions will assign pods to nodes that already run a pod with the matching label.

Kubernetes also offers a podAntiAffinity field that does the opposite of the above: it will not schedule a pod onto a node that contains specific pods.

In either case, the same advice applies as for nodeAffinity expressions: try to keep the rules simple and logical, with few labels to match (not thousands). It's remarkably easy to produce a non-matching rule that creates more work for the scheduler down the line and thus diminishes overall performance.
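
As a combined sketch, the following fragment asks the scheduler to place a Pod on a node that already runs a pod labeled app: cache, while preferring nodes that do not already run a replica labeled app: my-app; both labels are hypothetical placeholders:

affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - cache             # co-locate with pods carrying this label
        topologyKey: kubernetes.io/hostname
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - my-app          # spread replicas of this app apart
          topologyKey: kubernetes.io/hostname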

6. Configure Taints

The options for customizing Pod scheduling do not end there. When you have thousands of pods, labels, and nodes, it can be very hard to prevent certain pods from landing on certain nodes.

This is where taints come in. Taints denote rules for disallowing things, such as preventing certain pods from being scheduled onto particular nodes. To apply a taint to a specific node, you use the taint option in kubectl. You specify the node name, then a key and a value followed by a taint effect like NoSchedule or NoExecute:

$ kubectl taint nodes backup-node-1 backup-node-1=backups-only:NoSchedule

Interestingly, you can also override this behavior using tolerations. This comes in handy when you have a tainted node where nothing gets scheduled, but now you want just the backup jobs to run there. How do you schedule them? You allow only pods that carry a matching toleration to be scheduled there.

In this example, you would add the following fields in the Pod Spec:

spec:
  tolerations:
    - key: "backup-node-1"
      operator: "Equal"
      value: "backups-only"
      effect: "NoSchedule"

Since this toleration matches the taint, any pod with this spec can be deployed on backup-node-1.

Taints and tolerations give Kubernetes administrators a high level of control, but they require some manual setup before you can make use of them.

7. Build Optimized Images

Naturally, it's best to use container-optimized images so that Kubernetes can pull them faster and run them more efficiently.

What we mean by optimized is that they:

  • Contain only one application or do one thing.
  • Are small, since big images are less portable over the network.
  • Expose endpoints for health and readiness checks so that Kubernetes can take action in case of downtime.
  • Use a container-friendly OS (like Alpine or CoreOS) to be more resistant to misconfiguration.
  • Use multistage builds so that you deploy only the compiled application and not the dev sources that come with it.

Plenty of tools and services let you scan and optimize images on the fly. It's important to keep images up to date and security-assessed at all times.
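
To make the health and readiness point concrete, here is a minimal sketch of HTTP probes; the container name, image, port, and the /healthz and /ready paths are placeholders for whatever your application exposes:

containers:
  - name: email-sender                             # hypothetical container
    image: registry.example.com/email-sender:1.0.0 # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz                             # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:
      httpGet:
        path: /ready                               # assumed readiness endpoint
        port: 8080
      periodSeconds: 5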

8. Configure Pod Priorities

Just because you have configured Node and Pod affinities does not mean that all of the Pods should be scheduled with the same priority. For example, you may have some pods that need to be deployed before others for various reasons.

In that case, Kubernetes offers the option to assign priorities to Pods. First, you need to create a PriorityClass object:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 9999
globalDefault: false
description: "This priority class should be used for smoke testing environments before any other pod gets deployed."

You can create as many priority classes as you want, although it's recommended to have just a few levels (low, medium, and high, for example). A higher value indicates a higher priority. Then, add a priorityClassName under the Pod spec:

priorityClassName: high-priority

That way, you can ensure that your most important deployments are scheduled first whenever necessary.

9. Configure Kubernetes Features

Kubernetes has a feature gate framework that allows administrators to enable or disable features for their environments.

Some of these features could be beneficial for scaling performance, so they are worth investigating:

  • CPUManager: offers basic core affinity features and constrains workloads to specific CPUs.
  • Accelerators: enables GPU support.
  • PodOverhead: accounts for the resources consumed by the Pod infrastructure itself, on top of the container requests and limits.

Among other considerations, it's equally important to manage memory overcommitment so that the kernel doesn't panic on out-of-memory events and processes are killed based on priority.

For example, it’s worth verifying the following system settings:

vm.overcommit_memory=1
vm.panic_on_oom=0

In addition, the kubelet configuration has a pods-per-core setting, which limits the number of pods on a node based on the number of cores that node has. For example, with pods-per-core=10 on a 4-core node, you can have a maximum of 40 pods on that node.
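
As a sketch, the equivalent fields in a kubelet configuration file look like this (the values are examples, not recommendations):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podsPerCore: 10   # cap pods relative to the node's core count
maxPods: 110      # absolute per-node ceiling (110 is the default)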

Many of these optimizations affect a cluster's practical scaling limits, such as the maximum cluster size at which performance stays acceptable (typically latency under 1s) and the maximum number of pods per cluster, though these limits may not be feasible to verify in practice.

10. Optimize the etcd Cluster

Etcd is the brain of Kubernetes, so it’s important to keep it healthy and optimized. Ideally, etcd clusters should be co-located together with the kube-apiserver in order to minimize latency between them. If co-location is not possible, then you should have an optimized routing path with high network throughput between them.

Be careful, though: scaling up an etcd cluster leads to higher availability at the expense of performance, so do not overcommit to extra etcd members if it's not necessary.
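
A quick way to gauge etcd health and latency is etcdctl. Here is a sketch assuming etcd v3 and kubeadm's default certificate paths; adjust the endpoint and paths for your setup:

$ ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    endpoint status -w table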

Summary

For what it’s worth, Kubernetes is very efficient and performant out of the box. The more effort you put into properly configuring your deployments to match your current needs, the more performance you will get in the long run.

You can also make your job easier by utilizing a Kubernetes service like Platform9 Managed Kubernetes, which is already optimized, scales well, and offers solutions for every workload. If you want to try out the platform, you can sign up for the sandbox here.
