Kubernetes in the Enterprise: Top Use Cases


Overview

In this article, you’ll learn about the top use cases for Kubernetes in the enterprise.

Top Use Cases for Kubernetes

Kubernetes has gained popularity for a number of use cases, given its unique features. It’s a suitable platform to run stateless, 12-factor apps, and is easy to integrate into CI/CD pipelines due to its open API. Let’s review some of the key use cases that are primed for taking advantage of Kubernetes-based applications.

(Note also that there are many use cases and tutorials beyond the ones detailed here, available at the Kubernetes website.)

Simple Deployment of Stateless Applications

A very popular stateless app to run on top of Kubernetes is nginx, the open-source web server. Running nginx on Kubernetes requires a deployment YAML file that describes the pod and underlying containers. Here’s an example of a deployment.yaml for nginx:


apiVersion: apps/v1 # for versions before 1.9.0, use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Create this deployment on a cluster based on the YAML file:


kubectl apply -f https://k8s.io/examples/application/deployment.yaml

The command creates two pods, each running a single container, as `kubectl get pods` shows:


NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-1771418926-7o5ns   1/1     Running   0          16h
nginx-deployment-1771418926-r18az   1/1     Running   0          16h

While manually creating these deployment files and creating pods is helpful for learning Kubernetes, it isn’t the easiest way.

An easier way to deploy the same nginx application is to use the Kubernetes package manager, Helm. Using Helm, the deployment looks like this:


helm install docs/examples/nginx

Deploying an app like this is easy; in the example, we skipped over the harder parts, like exposing the web server outside of the cluster, and adding storage to the pod.
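As a sketch of that first harder part, the deployment can be exposed outside the cluster with a Service of type LoadBalancer (the service name here is illustrative, and on bare metal you’d use NodePort instead):

```shell
# Expose the nginx deployment via a cloud load balancer
kubectl expose deployment nginx-deployment \
  --name=nginx-service \
  --type=LoadBalancer \
  --port=80 --target-port=80

# Watch until an external IP is assigned
kubectl get service nginx-service --watch
```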

And this is where Kubernetes is both a little complex to get started with and very explicit about the separation of services and functionality. For example, if we were to add a database connection to this nginx pod to store data for a WordPress-based website, here’s how that would work.

First, we’d need to add a service, to make the database (running outside of the Kubernetes cluster for now) available for consumption in pods:


kind: Service
apiVersion: v1
metadata:
  name: external-mysql-service
spec:
  type: ClusterIP
  ports:
  - port: 3306
    targetPort: 3306
  selector: {}

Since the database doesn’t run on the cluster, we don’t add any pod selectors. But that means Kubernetes doesn’t know where to route the traffic, so we need to create an Endpoints object that tells Kubernetes where to send traffic for this service:


kind: Endpoints
apiVersion: v1
metadata:
  name: external-mysql-service
subsets:
- addresses:
  - ip: 10.240.0.4
  ports:
  - port: 3306

Within the WordPress configuration, you can now add the MySQL database using the metadata name from the example above: external-mysql-service.
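To check that the service and its manually created endpoints line up (the AGE value below is illustrative; exact output depends on your cluster), you can inspect both objects:

```shell
# The service should exist with a ClusterIP but no selector
kubectl describe service external-mysql-service

# The Endpoints object should list the external database address
kubectl get endpoints external-mysql-service
# NAME                     ENDPOINTS         AGE
# external-mysql-service   10.240.0.4:3306   1m
```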

Determining what components your app uses before deployment makes it easier to separate them: into separate containers in a single pod, into different pods, or even into different services, external or on Kubernetes. This separation helps you fully use Kubernetes features like horizontal auto-scaling and self-healing, which only work well if pods adhere to the rules and assumptions Kubernetes makes about data persistence.
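As a sketch of that auto-scaling behavior, a stateless deployment like the nginx example can be autoscaled once the metrics-server add-on is running and the container declares a CPU request (the thresholds here are illustrative):

```shell
# Scale between 2 and 10 replicas, targeting 80% CPU utilization
kubectl autoscale deployment nginx-deployment \
  --min=2 --max=10 --cpu-percent=80

# Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa
```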

Deploy Stateful Data Services

In the previous example, we assumed that the MySQL instance ran outside of Kubernetes. What if we want to put it under Kubernetes’ control and run it as a pod?

The StatefulSet controller and PersistentVolumes help with this; for a single-instance database, a Deployment backed by a PersistentVolumeClaim also works. Let’s look at an example:


apiVersion: v1
kind: Service
metadata:
  name: mysql-on-kubernetes
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0, use apps/v1beta2
kind: Deployment
metadata:
  name: mysql-on-kubernetes
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        # Use a Secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

The file defines a volume mount for /var/lib/mysql and references a PersistentVolumeClaim, mysql-pv-claim, that requests a 20GB volume. The claim can be satisfied by any existing volume that meets the requirements, or by a dynamic provisioner.

This means we need to create a PersistentVolume that satisfies the claim:


kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
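Assuming the two manifests above are saved as mysql-deployment.yaml and mysql-pv.yaml (the filenames are illustrative), creating the stack and checking that the claim binds looks like this:

```shell
# Create the volume first, then the service and deployment
kubectl apply -f mysql-pv.yaml
kubectl apply -f mysql-deployment.yaml

# The claim should report STATUS "Bound" once matched to the volume
kubectl get pvc mysql-pv-claim
kubectl get pods -l app=mysql
```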

Using the same data structure and the same tools, it’s possible to create services on the platform with persistent storage. While it’s a little extra work to re-platform data services like databases onto the Kubernetes platform, it does make sense.

With such a clear separation between application binaries, configuration, and data in the Kubernetes layers, and a distinct separation between a container and the underlying OS, many of the lifecycle issues of the VM world disappear. Managing the lifecycle of these applications has traditionally been hard, labor-intensive work that often requires downtime.

With Kubernetes, by contrast, a new version of the application can be deployed by deploying a new pod (with the new container version) to production. As this only switches the container, and doesn’t need to include the underlying OS and higher-level configuration and data, this is a very fast and lightweight operation.
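For example, rolling out a new container version for the earlier nginx deployment is a single command (the target version here is illustrative):

```shell
# Update the container image; Kubernetes replaces pods one by one
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1

# Follow the rollout, and roll back if something goes wrong
kubectl rollout status deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment
```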

CI/CD Platform with Kubernetes

Kubernetes’ open API brings many advantages to developers. That level of control means developers can integrate Kubernetes into their automated CI/CD workflows with little effort. So even though Kubernetes doesn’t provide any CI/CD features out of the box, it’s very easy to add Kubernetes to a CI/CD pipeline.

Let’s look at Jenkins, the popular CI solution. Running Jenkins as a pod is easy: deploy it via the Kubernetes package manager, Helm.


$ helm install --name my-release stable/jenkins

The more interesting part is driving Kubernetes from within Jenkins. This uses the Kubernetes plugin for Jenkins, which dynamically provisions a Jenkins agent on Kubernetes to run each build. These agents even have a ready-to-go jenkins-slave Docker image.

This setup allows developers to automate their pipeline: each new code commit to git triggers a container build (run on a Jenkins agent), and the resulting image is then pushed to Kubernetes to replace the old version of the app in question via a rolling upgrade.
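The pipeline stage a Jenkins agent runs on each commit might look roughly like this (the registry, image name, and deployment name are hypothetical):

```shell
# Build and publish an image tagged with the commit SHA
docker build -t registry.example.com/myapp:${GIT_COMMIT} .
docker push registry.example.com/myapp:${GIT_COMMIT}

# Point the deployment at the new image to trigger a rolling upgrade
kubectl set image deployment/myapp myapp=registry.example.com/myapp:${GIT_COMMIT}
kubectl rollout status deployment/myapp
```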


There’s More

In the next posts, we’ll dive deeper into the big decisions around how to support Kubernetes in your organization, and best practices for operating Kubernetes at scale.

Can’t wait?

To learn more about Kubernetes in the Enterprise, download the complete guide now.

Platform9
