Tutorial: Dynamic Provisioning of Persistent Storage in Kubernetes with Minikube

In the previous article, we deep-dived into the constructs of Kubernetes storage, and what the different types of storage are good for. We discussed dynamic provisioning, StorageClasses, and CSI external storage. In this article, we set up a simple, private sandbox – using minikube – where we can observe and hack on the inner-workings of Kubernetes storage.


When I recently realized that Minikube ships its own Dynamic Provisioner, which uses, gasp, hostPath, I was delighted. Now I can finally share with anyone an environment capable of fundamentally illustrating – and reproducing – the subtleties of the Kubernetes storage model in an understandable way.

What kind of real-world questions can you answer with Minikube?

Here are a few examples of storage-related questions customers encounter, which can be easily demonstrated and answered in minikube:

  • What happens to a recycled volume claim once the claim and the pod are deleted? Trick question: Recycling isn’t a thing anymore – dynamic provisioners allow you to customize the reclaim logic in any manner you choose.
  • Is there a way to force storage to change permissions of a volume before/after it is mounted?
    Yes, look at various examples in the external-storage repository (more on this later).
  • What happens to a hostPath volume if a directory doesn’t exist yet?
    It’s created by the kubelet.
  • Does the kubelet modify volume permissions for me?
  • What kind of security context settings do I need to use a host network port (often asked for glusterFS)?
  • Is there a distributed filesystem I can run quickly that is fully containerized?
    (yes, NFS, and it runs OOTB in minikube)
  • If I have a NAS available on an IP address, can I just make subdirectories under *that* my default storage class?
    Sure – and yes, you can mount it on your laptop too, as long as the IP is routable.

Why MiniKube?

MiniKube is still the most important tool in the Kubernetes toolbox when it comes to reasoning about Kubernetes’ functionality and the system’s behavior.

Although many people use MiniKube for testing YAML formatting and basic kubectl command syntax, there are very few working MiniKube examples for how to dive into the internals of more intricate Kubernetes constructs — such as storage. Therefore, many people’s first experience debugging a StorageClass or CSI issue winds up, unfortunately, happening in a live production cluster.

After diving into various aspects of how MiniKube’s storage model worked, I realized that many of us use persistent volumes every day without ever having an opportunity to truly understand how Kubernetes storage is implemented in a simple environment which lends itself to playing, hacking, and learning. Hence, I figured I’d write a blog post about the internals of my favorite dynamic storage model: MiniKube (yes, MiniKube)!

Welcome to MiniKube University

The first thing people say when they hear me say “MiniKube” is typically along the lines of “oh, that’s not real, I need a real cluster.” But “real” clusters often mislead you with distributed-systems problems and vendor-specific functionality that have nothing to do with the internals of Kubernetes. Using MiniKube, we can rapidly iterate on kubelet and storage configurations, and actually understand an entire dynamic provisioning system, end-to-end.

So, if you want to deeply understand how Kubernetes provisioning works, read on… no matter how much technical chops you have, MiniKube can always take you to school.

Why a Sandbox for Modeling Storage is Important

Of all the knobs and sliders in a Kubernetes deployment, storage remains the only core aspect of a Kubernetes cluster for which there is no 80/20 solution which “just works” out of the box for most workloads (I’ll justify this assertion in a second). Thus, non-trivial storage policies are eventually going to be an important part of your cluster.

In this post, we’ll take something easy to understand, MiniKube, and use it as a springboard for deeply learning about the internals of the Kubernetes storage model in an environment that is cloud and platform neutral. In the next installment, we’ll go on to extend this model to implement CSI (Container Storage Interface) features which are now GA in Kubernetes 1.13.

Let’s Get Started:

The best way to wrap your head around what Dynamic Provisioning means is to observe how it works without any buzzwords floating around. No hipster distributed filesystem projects, no GCP or EBS volume types, just pure dynamic provisioning.

We’ll set up dynamic provisioning using MiniKube installed locally – no public cloud or CSI required.

In addition to being a good environment for learning about how Kubernetes internals work, MiniKube (with --vm-driver=none) can also be used for performance testing your Kubernetes storage directly on bare metal. These sorts of isolated experiments, which de-couple the cloud infrastructure from the container, can tell you a lot about where your performance bottlenecks are.

Testing your storage model in a sandboxed Kubernetes cluster can be a powerful way to determine whether, at a baseline, you’ve got the right storage model. Large JVM applications can suffer from long startup times due to traditionally large amounts of disk- or GC-related I/O, and this can be deadly on a distributed filesystem. Similarly, high levels of I/O in a database-intensive application that uses any sort of disk-based row locking or sequential scanning can bring an application to its knees. And worst of all, some filesystems simply will not, and cannot, support the semantics of file operations needed by persistent containers.

Ok, so let’s start hacking on this stuff!

Step by Step: Hackable MiniKube in Local Mode

There are two ways to quickly run Kubernetes without an appliance in a reproducible manner:

  • hack/local-up-cluster.sh
  • minikube --vm-driver=none

In this tutorial, we’ll cover the latter, because VirtualBox VMs may not work for storage testing (i.e. real-world storage arrays in your data center or in the cloud probably aren’t mountable on your laptop). Note: If you don’t have a real Linux box, you can use --vm-driver=virtualbox.

Note that you cannot do these volume and dynamic provisioning experiments in their entirety on all platforms. MiniKube is unique in that it’s the only Kubernetes distribution that can be spun up almost instantly, on any hardware, with a completely transparent model that lets you access all components and modify them in real time. Cloud providers often only guarantee that the Kubernetes API will be available, and will obfuscate the internals of master components.

For example, on the Google Kubernetes Engine (GKE), you won’t even be able to directly access your master, or even view its corresponding etcd configuration. Although you may be able to modify kubelet options and restart them in GKE, these modifications might not be supported, and cannot be done on the same node which is running your master (since your master is inaccessible). Thus, although any kube environment will allow you to explore some aspects of storage classes, you won’t be able to see how they’re modified and implemented in all platforms.

Part 1: Setting up a hackable minikube environment

Note: you can jump ahead to part 2 if you’re not actually testing a real application that needs a real amount of CPU or memory allocated.

A quick note on hacking Kubernetes master component deployments:

You can easily modify how your Controller Manager logs control-plane events, and this can be very helpful when investigating the generic volume binding, attachment, and detachment logic. A typical dynamic volume provisioner responds to these events: it notices that PVCs have been created and ultimately creates a PV under the hood, based on the way storage classes are configured. If you want to dive into these internal details (you don’t need to, but it’s particularly interesting to do so, due to the ‘space shuttle’ style of logging in the volume controllers [4]), you can open /etc/kubernetes/manifests/kube-controller-manager.yaml and modify its start-up settings, specifically by adding a “-v=5” option to the KCM container.

Hacking the minikube core services: Static Pods.

The best example of how to hack a master component in minikube is the static pod manifest for the controller manager, which the kubelet continually monitors; the static pod it’s associated with is restarted if the manifest changes. The reason this is implemented by the kubelet, rather than a replication controller, is that replication controllers assume, alas, that the controller manager is already running, which is not the case for a static pod. Similarly, there are static pods for the api-server and other ‘inception’ components. You can see them all under /etc/kubernetes/manifests. Specifically, we’ll look into a modification that’s useful for investigating how storage works, which is the subject of this post :).

vi /etc/kubernetes/manifests/kube-controller-manager.yaml

Now, in the kube-controller-manager (KCM) manifest, make the following modification at the end of the command section to increase logging verbosity:

  - command:
    - kube-controller-manager
    - --address=127.0.0.1
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --client-ca-file=/var/lib/minikube/certs/ca.crt
    - --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt
    - --cluster-signing-key-file=/var/lib/minikube/certs/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
    - --root-ca-file=/var/lib/minikube/certs/ca.crt
    - --service-account-private-key-file=/var/lib/minikube/certs/sa.key
    - --use-service-account-credentials=true
    # Add verbosity here … 
    - -v=5

You only need to do this if you want to understand how Kubernetes works inside of Kubernetes (i.e. from the perspective of how PVCs are bound and unbound, and how the volume controller itself manages communication events that trigger your storage provisioner.)
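Once the kubelet restarts the static pod with the new flag, you can tail the verbose binding logs directly. Here’s a quick sketch – the pod name below follows the usual kube-controller-manager-&lt;node-name&gt; pattern and is an assumption, so adjust it to whatever your cluster actually shows:

  # Find the controller-manager static pod the kubelet created for us
  kubectl -n kube-system get pods | grep kube-controller-manager

  # Tail its logs and watch the PV/PVC binding chatter that -v=5 exposes
  kubectl -n kube-system logs -f kube-controller-manager-minikube | grep -iE 'volume|claim'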

Now, back to the original point of this post: uncovering the internal details of the storage model in your cluster.

Hacking the Minikube docker or CRI version so it works in your cloud.

Running minikube outside of a VM in local mode, on the same hardware you run other things on, is the best way to A/B test a Kubernetes distribution’s behavior against a known constant. It may also be necessary if your workload uses GPUs, SSDs, or other performance-sensitive devices that don’t translate directly into your hypervisor – devices you want to test, modify, and hack against without needing to touch a heavyweight kube cluster that is controlled by some kind of automation (which you often don’t have access to).

In this case, there is a caveat: you’ll want to make sure you install a stable Docker version. minikube is strict about Docker versions, and won’t accept the latest release in all cases. So you need to make sure you have the right Docker version (which may require uninstalling the default Docker version on your Linux machine). For minikube 0.33, docker-ce-17 works great, so here’s how to do that.

sudo yum remove docker docker-common docker-selinux docker-engine
sudo yum install autoconf
sudo yum remove containerd.io
sudo yum remove -y docker-ce-cli
sudo yum install --setopt=obsoletes=0 docker-ce-17.03.2.ce-1.el7.centos.x86_64

When this is done, run systemctl start docker to get your stable Docker daemon up and running.

Assuming you’ve installed minikube from somewhere (it’s easy, just get it from the release page on GitHub), you can run minikube start --vm-driver=none.
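As a rough sketch of what that first run looks like (--vm-driver=none generally wants root, and the exact addon pod names can vary slightly between minikube releases):

  # Local mode drives the host's Docker directly, so run it as root
  sudo minikube start --vm-driver=none

  # minikube bootstraps its dynamic provisioner as an addon; expect to see
  # a 'storage-provisioner' pod and the 'standard' StorageClass
  kubectl -n kube-system get pods
  kubectl get storageclass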

Part 2: Experimenting with the dynamic provisioning model

Create an app

At this point, you have minikube running, and it has already set up storage classes for you. We’ll see how in a minute.

Now, you can create a pod, any pod, which relies on some kind of storage. Since everyone always uses NGINX for these sorts of smoke tests, we’ll do the same – combined with a completely generic PVC. All that really matters is that the volume mount in the pod references a PVC with the name ‘my-pv-claim’, like so:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: my-pv-storage
      persistentVolumeClaim:
        claimName: my-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-pv-storage
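
Assuming you saved the manifest above as, say, nginx-pvc.yaml (the filename is arbitrary), create both objects and watch the claim bind:

  kubectl apply -f nginx-pvc.yaml

  # The claim should go Bound almost immediately, and a matching PV appears
  kubectl get pvc,pv
  kubectl get pod task-pv-pod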

Now you’ve got a pod that is somehow running in a single-node cluster. Since you never set up any kind of storage volume provisioning, your first question is “how did this pod magically get its PVC fulfilled?”

The answer is: StorageClasses. StorageClasses are at the heart of dynamic provisioning. By asking declaratively for a type of storage, your Dynamic Provisioner can decide how to fulfill that storage at runtime. This is what allows your application to run exactly the same on minikube as it would on other clouds or Container Management platforms (like our own Managed Kubernetes.)

But, I’m sure you already knew that. What you probably didn’t know is that minikube works with dynamic provisioning out of the box. And that the way it works is actually by defining a provisioner.

Now, let’s figure out how that PVC got fulfilled

First, run kubectl get pvc in your namespace, to make sure everything is working right:

➜  csi-hacking kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pv-claim   Bound    pvc-ada22e4b-2351-11e9-b86b-fa163ef9a3a4   1Gi        RWO            standard       45h

We can see that the PVC we made was indeed fulfilled with the capacity, 1Gi, that we asked for, and its access mode is ReadWriteOnce. Interestingly, though, its STORAGECLASS is set – but we didn’t set a storage class, did we?

You’re right, we didn’t.

So, this is getting weird. Our PVC’s specification (not its status) was modified somewhere along the way to actually request the ‘standard’ StorageClass. This is done by an admission controller (DefaultStorageClass) running in our cluster, intercepting our requests to the Kubernetes API server and modifying incoming objects. If we had provided a storage class ourselves, the admission controller would not have injected this.
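You can see the injected value for yourself; a quick check (the output shown in the comment is what you’d expect on a stock minikube cluster):

  kubectl get pvc my-pv-claim -o jsonpath='{.spec.storageClassName}'
  # prints: standard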

But there are still mysteries to be revealed here. In addition to not asking for a particular StorageClass on our nginx pod, we also never created a storage class to begin with! What’s going on?

➜  csi-exp kubectl get sc
NAME                 PROVISIONER                AGE
standard (default)   k8s.io/minikube-hostpath   2d

Hack the StorageClass

It turns out that when you start minikube, a StorageClass is created for you. You can easily modify this storage class – as a Kubernetes object, but not as a file. The reason is that it’s bootstrapped for you as part of your minikube installation: you can read the addon file’s contents, but you can only modify the live object inside Kubernetes itself, using “kubectl edit sc standard”.

For example, we can disable the default storage class entirely: after running the edit command, change the storageclass.beta.kubernetes.io/is-default-class: "true" annotation value to "false".
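If you prefer a non-interactive one-liner over kubectl edit, the same change can be sketched with kubectl patch:

  kubectl patch storageclass standard \
    -p '{"metadata":{"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"false"}}}'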

➜  csi-exp kubectl get sc
NAME       PROVISIONER                AGE
standard   k8s.io/minikube-hostpath   22d
 

➜  csi-hacking cat /etc/kubernetes/addons/storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  namespace: kube-system
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: Reconcile

provisioner: k8s.io/minikube-hostpath

So the StorageClass isn’t magical after all. But you already knew that. Now, undo your changes so your cluster behaves normally again.

Let’s look at the internal details of the storage class:

➜  csi-hacking kubectl get sc                 
NAME                 PROVISIONER                AGE
standard (default)   k8s.io/minikube-hostpath   21d
➜  csi-hacking kubectl get sc -o yaml
apiVersion: v1
items:
- apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile"},"name":"standard","namespace":""},"provisioner":"k8s.io/minikube-hostpath"}
      storageclass.beta.kubernetes.io/is-default-class: "true"
    creationTimestamp: "2019-01-08T03:56:44Z"
    labels:
      addonmanager.kubernetes.io/mode: Reconcile
    name: standard
    resourceVersion: "424"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/standard
    uid: 69f175d0-12f9-11e9-a834-fa163ef9a3a4
  provisioner: k8s.io/minikube-hostpath
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

The thing to note here is the provisioner field: k8s.io/minikube-hostpath. So, what exactly is that? Looking into the existing provisioner controller in the minikube codebase, you can see that this value maps to its provisionerName constant:

const provisionerName = "k8s.io/minikube-hostpath"

Meanwhile, “volume.beta.kubernetes.io/storage-provisioner” is a standard annotation (set on the PVC), which any Kubernetes watcher can pull down. Once you check the value of that annotation, if you are indeed the “host-path” provisioner being asked for, you logically go on to do the work of provisioning a volume for that claim.
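You can see this handshake on your own claim; for example (the annotations in the comment are roughly what a stock minikube cluster shows):

  kubectl get pvc my-pv-claim -o jsonpath='{.metadata.annotations}'
  # expect something like:
  #   volume.beta.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath
  #   pv.kubernetes.io/bind-completed: "yes"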

Of course, since it’s easy to intercept requests and create volumes, now is a good time to make the obligatory statement that “you shouldn’t trust any volume controllers running on your cluster” – but that goes without saying, since a volume controller could intercept any and all storage requests and mount malicious content into your pods. Note that “provisioner” is now out of beta, and is a first-class field in the StorageClass object.

Part 3: Now, let’s dive into the inner workings of minikube-hostpath

Now that you understand how the provisioner works, let’s see how we might build our own dynamic provisioner. It’s actually not that hard, and there are utilities out there you can borrow. Again, the key thing to note here is that you don’t need to write a custom volume plugin, or even build a CSI driver, in order to make your own dynamic provisioner.

External-Storage: The beginnings of CSI

The Kubernetes community is currently in the midst of transitioning from the “external-storage” repository (which relies on storage drivers compiled into Kubernetes) to decoupled, CSI-based repositories, which allow individual storage implementations to register their own drivers and implement storage-specific logic out of tree. At that point the external-storage library will mostly be deprecated, although some of its generic functionality will live on, while the main external-storage work happens at the CSI interface level. We will dive into the details of how the CSI-based external provisioners work in the next post in this series.

For now, it’s still important to understand how external-storage works, as most Kubernetes clusters you’ll be using probably implement some kind of in-tree storage somewhere, and this will be the case for at least a year (or forever – remember that hostPath and emptyDir themselves are also in-tree drivers, and will be a part of Kubernetes core for a very long time).

Ultimately, the minikube-hostpath program which runs as part of MiniKube’s dynamic provisioner is a model that you can use for managing storage in your own clusters, and it is the same model that CSI-based PV fulfillment will follow. To see how you would implement the same plugin in a post-CSI environment, you can see the progress being made in the external-provisioner repository, which has an implementation of hostPath as a CSI driver (rather than as a custom controller).
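To make the “simplicity” point concrete, here is a heavily condensed sketch of what minikube’s storage_provisioner.go does with the external-storage controller library. This is abridged and paraphrased rather than the exact source: error handling is trimmed, and the kubernetes-incubator/external-storage lib/controller field names shown reflect that library’s pre-CSI era, so treat it as an illustration.

package storage

import (
    "os"
    "path"

    "github.com/kubernetes-incubator/external-storage/lib/controller"
    "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const provisionerName = "k8s.io/minikube-hostpath"

// hostPathProvisioner carves subdirectories out of pvDir and hands them
// back to the cluster as hostPath PersistentVolumes.
type hostPathProvisioner struct {
    pvDir string
}

// Provision is called by the external-storage controller whenever a PVC
// referencing our storage class needs a volume: make a directory, wrap it
// in a PV object, and let the library submit the PV to the API server.
func (p *hostPathProvisioner) Provision(options controller.VolumeOptions) (*v1.PersistentVolume, error) {
    volPath := path.Join(p.pvDir, options.PVName)
    if err := os.MkdirAll(volPath, 0777); err != nil {
        return nil, err
    }
    return &v1.PersistentVolume{
        ObjectMeta: metav1.ObjectMeta{Name: options.PVName},
        Spec: v1.PersistentVolumeSpec{
            PersistentVolumeReclaimPolicy: options.PersistentVolumeReclaimPolicy,
            AccessModes:                   options.PVC.Spec.AccessModes,
            Capacity: v1.ResourceList{
                v1.ResourceStorage: options.PVC.Spec.Resources.Requests[v1.ResourceStorage],
            },
            PersistentVolumeSource: v1.PersistentVolumeSource{
                HostPath: &v1.HostPathVolumeSource{Path: volPath},
            },
        },
    }, nil
}

// Delete is the inverse: when a released PV's reclaim policy is Delete,
// remove the backing directory.
func (p *hostPathProvisioner) Delete(volume *v1.PersistentVolume) error {
    return os.RemoveAll(volume.Spec.HostPath.Path)
}

The library’s ProvisionController does the watching for you: it notices PVCs whose storage-provisioner annotation names k8s.io/minikube-hostpath, calls Provision, and submits the returned PV to the API server.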

The takeaway from the simplicity of the storage_provisioner.go implementation above is that implementing dynamic storage can be very simple, and you don’t need a Ph.D. in Kubernetes, or CSI, or anything else to do it. Just scrape a field from a watch, and then, if you like the provisioner value, provision a corresponding volume and return the volume object (if you’re using the external-storage wrapper library above, as MiniKube does), or make your own implementation.

Part 4: Dynamic Storage + CSI: How They Affect Your Dev Workflow

Now that you understand dynamic storage, let’s dive into the difference between a dynamic provisioner, which we reviewed above, and what it means to have a dynamic storage plane underneath: CSI.

These two concepts give you a matrix of four different storage models for a Kubernetes cluster. Typically, for security considerations and in order to be able to commoditize your applications, you’ll want to know what the default storage model for your apps is (ideally it will be dynamic, at least at the provisioning level). Ultimately, your development workflow for storage will be decided by which of the four storage models you choose (in general, CSI provisioned + dynamic is the best of both worlds).

  1. CSI provisioned, non-dynamic
  2. CSI provisioned, dynamic
  3. Non-CSI provisioned (in tree), dynamic
  4. Non-CSI provisioned (in tree), non-dynamic

As you can see, there is dynamic provisioning, and there is CSI volume provisioning. A Kubernetes cluster supporting both of these is dynamic on the user end, as well as on the storage end, and can really handle anything. However, you can implement CSI without a dynamic provisioner, and vice versa.

CSI provisioned, non-dynamic:

Example: You write a golang or terminal program which directly calls CSI machinery without any use of dynamic provisioning. In this case, you might use something like GoCSI as a client to communicate with your running CSI service implementation.

CSI provisioned, dynamic:

Example: A filesystem created for you on the fly, by MiniKube, when you create a PVC for a provider from a vendor outside of Kubernetes’ existing in-tree volume plugins. In this case, you would have the following YAML objects in play at some point: a StorageClass that points at a CSI driver, and a PersistentVolumeClaim which doesn’t have any knowledge of the CSI driver. When you create the PVC, it is eventually fulfilled by the StorageClass, which ultimately relies on the CSI provisioner to create the corresponding volume.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-my-vendor
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi-my-vendor

Non CSI provisioned (in tree), dynamic:

As exemplified earlier, you can have completely dynamic storage without CSI: you’re just limited to the filesystem types which Kubernetes supports in-tree, or a flexVolume implementation. Using a PersistentVolumeClaim that is bound (like we did with minikube), or for that matter even using something like emptyDir, demonstrates how volumes can be provisioned dynamically (on-the-fly), with (or without) a PVC.

spec:
  containers:
  - image: my-app:1.0
    name: testing
    volumeMounts:
    - mountPath: /data-shared-with-another-container
      name: mounted-scratch
  volumes:
  - name: mounted-scratch
    emptyDir: {}

Non CSI provisioned (in tree), non-dynamic:

An NFS share that you’ve manually created in your data center and hard-coded as a PersistentVolume; you then create a PVC which points to that volume. Alternatively, your pod itself can declare the volume inline (rather than pointing to a PVC abstraction). For example, you’ll have a snippet like this inside your pod definition; notice that we define NFS inside of the volume definition, specifically creating a volume without any indirection.

volumes:
    - name: nfs-volume
      nfs:
        server: 192.100.200.212
        path: /data
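
If you’d rather go through the PVC abstraction for the same share, the manual pairing looks roughly like this (the names, size, and access mode here are illustrative, not taken from the article):

# A hand-made PV describing the same NFS export
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.100.200.212
    path: /data
---
# A claim that binds to it; the empty storageClassName opts out of
# dynamic provisioning so the claim matches the pre-created PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi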

So there you have it: now you understand the difference between Dynamic Provisioning and the CSI framework, and hopefully have an idea of whether you need one, or both, in your production environment.

Summary: The Four Quadrants of Kubernetes Storage

In summary, if you’re working on a storage solution for your cloud native environment, you’ve got four quadrants: CSI makes your storage plane declarative from a vendor perspective, while Dynamic Provisioning makes storage declarative from a user perspective. CSI alongside Dynamic Provisioning means that developers don’t need to pre-provision storage, and that Kubernetes core doesn’t need to be aware of your storage back end.

The ideal scenario is to have, of course, both CSI based volumes, as well as Dynamic Provisioning.

Looking at it from a glass-half-empty perspective, if you don’t have Dynamic Storage, your developers have to know the PersistentVolume type for their apps, and declare and reference it explicitly when they write their pod specifications. If you don’t have a CSI implementation, it often means you’ll have to do custom yum/apt installations of storage drivers inside your Kubernetes nodes, to have storage libraries that work exactly the way the Kubernetes tree wants them to.


This article originally appeared on The New Stack.
