The Top 5 Most Popular Kubernetes Storage Troubleshooting Issues

Configuring persistent storage in Kubernetes comes with its own issues and challenges. Although the official storage docs are quite extensive, they cover only a fraction of the specific configurations you might need in order to successfully install and provision storage.

From an operator’s point of view, you might encounter issues choosing a CSI driver and setting up a PV. From a developer’s point of view, on the other hand, you might be worried about claiming that PV and consuming it as a mounted drive.

This article aims to offer a detailed explanation of the top 5 most popular Kubernetes storage troubleshooting issues as well as ways to resolve them.

Let’s get started.

1. Help! My Persistent Volume Claim Is Not Bound to the Persistent Volume

One of the most common issues when using PVCs is the inability to bind them to the specified PV. For example, you might receive this event when you try to apply a PVC:

PersistentVolumeClaim is not bound: “pv-demo”

To solve this problem, first make sure that you have created the actual volumes, formatted them, and assigned a PV. For example, you can create disks with specific storage using the disks create command in gcloud:

$ gcloud compute disks create example-pv --size=10GB
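Once the disk exists, you still need a PV object that exposes it to the cluster. A minimal sketch might look like the following (the PV name pv-demo and the fsType here are assumptions based on the example above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: example-pv
    fsType: ext4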

Then, use the kubectl get pvc command to check the claim’s status, and kubectl describe pvc to see the events behind it. For example, you might get a message like this:

Waiting for the volume to be created, either by external provisioner or the system administrator.

In this case, the cluster may not support dynamic provisioning (such as a bare-metal cluster) and will therefore be unable to fulfill the request for a persistent volume and attach it to the container.

If the cluster does support dynamic provisioning, then you need to check the available storage and access modes. A PVC specifies a requested capacity and an access mode. If you request more storage than the PV has available, Kubernetes won’t fulfill your PVC request:

resources:
  requests:
    storage: 10Gi
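You can compare this request against what the cluster actually offers by listing the existing PVs, which shows their capacity and access modes:

$ kubectl get pv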

If you have sufficient storage, then it’s time to check the access modes. Each PV is assigned to a specific storage class depending on the physical characteristics of the volume. For example, in the case of GKE, this happens to be the Google Compute Engine Persistent Disk:

$ kubectl get storageclass

NAME                 PROVISIONER            AGE
standard (default)   kubernetes.io/gce-pd   10d

This storage class currently supports two access modes:

ReadWriteOnce
ReadOnlyMany

If you request an access mode in your PVC that is different from what is supported by the resource provider, K8s will not fulfill the request. For example, you might try to send the following PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: standard
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi

Here, we requested a ReadWriteMany volume from a provisioner that only supports ReadWriteOnce and ReadOnlyMany. Since no available volume offers this access mode, our request will not be fulfilled.
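Switching the claim to one of the supported modes, such as ReadWriteOnce, would allow it to bind:

  accessModes:
  - ReadWriteOnce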

2. Cannot Apply RBAC Rules to Persistent Volumes for Storage

If you enable RBAC rules, then you will need to create specific cluster roles with list/watch permissions for PVs, just as for any other cluster-scoped resource (such as namespaces and nodes). For example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <cluster_role_name>
rules:
- apiGroups: [""]
  resources:
  - nodes
  - persistentvolumes
  - namespaces
  verbs: ["list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources:
  - storageclasses
  verbs: ["list", "watch"]

Then, you can create the relevant ClusterRoleBindings specifying the users or service accounts to bind to the ClusterRole. This way, you can control which users or service accounts can interface with your Persistent Volumes. Note that you don’t need ClusterRoleBindings for PVCs; since those are namespaced resources, you can scope their permissions with Roles and RoleBindings instead.
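For example, a ClusterRoleBinding that grants the role above to a service account might look like the following sketch (the bracketed names are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <cluster_role_binding_name>
subjects:
- kind: ServiceAccount
  name: <service_account_name>
  namespace: <namespace>
roleRef:
  kind: ClusterRole
  name: <cluster_role_name>
  apiGroup: rbac.authorization.k8s.io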

3. How to Get a List of Available CSI Drivers

You can get a list of the registered CSI drivers available in the cluster by issuing the following command:

$ kubectl get csidrivers.storage.k8s.io

NAME                    ATTACHREQUIRED   PODINFOONMOUNT   MODES        AGE
pd.csi.storage.gke.io   true             false            Persistent   11m

In this example, we used a Google Kubernetes Engine (GKE) cluster, which shows one registered CSI driver.

“ATTACHREQUIRED=true” means that K8s will not skip attach operations: it should call attach and wait for the attach operation to complete before mounting the volume.

“PODINFOONMOUNT=false” means that the CSI volume driver does not require additional pod information (such as the pod name or UID) during mount operations.

The MODES field maps to the volumeLifecycleModes beta field, which means that the driver can serve either persistent or ephemeral volumes. You can learn more about the purpose of volumeLifecycleModes in the official Kubernetes CSI documentation.
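These columns correspond to fields on the CSIDriver object itself. A minimal sketch of such an object (assuming the storage.k8s.io/v1beta1 API that exposes the beta volumeLifecycleModes field) looks like this:

apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: pd.csi.storage.gke.io
spec:
  attachRequired: true
  podInfoOnMount: false
  volumeLifecycleModes:
  - Persistent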

Note that querying the CSI drivers is different from querying storage class provisioners. There is no dedicated resource for the latter; provisioners only appear in the PROVISIONER column of kubectl get storageclass, and you will get a “no volume plugin matched” message if you reference a provisioner that does not exist.
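If you need the provisioner names in a script, one option is to extract them with jsonpath (a sketch; adjust the fields to your needs):

$ kubectl get storageclass -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.provisioner}{"\n"}{end}'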

4. Can’t Delete Storage PV or PVC

If you try to delete a PV or PVC and the command gets stuck in the “Terminating” status, check whether the PV that the PVC is attached to is protected or whether there are pods that still use the PVC. This is documented in the “Storage Object in Use Protection” section of the official Kubernetes storage documentation.

You will see a “kubernetes.io/pv-protection” finalizer on PVs and a “kubernetes.io/pvc-protection” finalizer on PVCs, as follows:

$ kubectl get pvc pvc-example -o json | jq -r '.metadata.finalizers[]'

kubernetes.io/pvc-protection

You should first try to delete the pods that still use the PVC (and, through it, the PV). You can query the pods that are associated with a PVC like this:

$ kubectl get po -o json --all-namespaces | jq -j '.items[] | "\(.metadata.namespace), \(.metadata.name), \(.spec.volumes[].persistentVolumeClaim.claimName)\n"' | grep pvc-example

If that doesn’t work, you might want to update the PV or PVC manifest to remove the protection finalizers:

$ kubectl patch pvc pvc-example -p '{"metadata":{"finalizers":null}}'
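The same patch works for a PV that is stuck in Terminating (assuming a PV named pv-example):

$ kubectl patch pv pv-example -p '{"metadata":{"finalizers":null}}'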

5. How to Share a Storage PVC Among Containers

You cannot share a common PVC among pods by default, since each PVC is uniquely bound to a persistent volume (in a one-to-one association) and most volume types only support ReadWriteOnce access. The most straightforward workaround is to run an NFS provisioner in your cluster and share the exported folder among all pods using the same PVC.

Additionally, some provisioners, like Portworx, enable easy installation of shared storage.

You can follow this guide to create an NFS server, expose it as a service, and create a PV that can be shared between pods.

Your PV also needs to support the ReadWriteMany access mode in order to share volumes. You can create such volumes using Google Filestore, since it’s a fully compliant NFS server:

$ gcloud filestore instances create nfs-server \
    --project=[PROJECT_ID] \
    --zone=[ZONE] \
    --tier=STANDARD \
    --file-share=name="export",capacity=100GB \
    --network=name="default",reserved-ip-range="10.0.0.0/29"
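If you need to look up the instance’s IP address later, you can describe the instance (the address appears under its networks section):

$ gcloud filestore instances describe nfs-server --zone=[ZONE]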

Creating the instance gives you the IP address of the Filestore share to use as an NFS server, which you can provide in a PV spec:

nfs:
  path: /export
  server: <IP_ADDRESS>
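Putting it together, a ReadWriteMany PV backed by the Filestore share might look like the following sketch (the name and capacity here are assumptions):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /export
    server: <IP_ADDRESS>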

After you have created PVs that allow ReadWriteMany, multiple pods can reference the same persistentVolumeClaim name and mount it with a volumeMount, as shown below.
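For example, assuming a claim named nfs-pvc that is bound to the PV above, each pod spec would declare something like this (the container image and mount path are placeholders):

spec:
  volumes:
  - name: shared-nfs
    persistentVolumeClaim:
      claimName: nfs-pvc
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared-nfs
      mountPath: /mnt/shared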

Reading List to Further Your Understanding of How Kubernetes Storage Works

There are several detailed articles that can help you further your understanding of how K8s storage works. For example, Jay Vyas’s “Kubernetes Storage: Dynamic Volumes and the Container Storage Interface” and Twain Taylor’s “Storage Considerations as You Migrate to Containers” are excellent resources that can advance your knowledge in this area. As we mentioned above, Portworx abstracts the complexities of managing and provisioning persistent storage in K8s, and we recommend that you try it out. The official CSI documentation site also contains detailed information about the CSI and how to deploy new CSI drivers.
