Pods are Displaying "FailedAttachVolume" or "FailedMount" Errors in the Event Logs.
Problem
Pods are not functioning or running due to PVC mount failures, specifically displaying "FailedAttachVolume" or "FailedMount" errors.
Warning FailedMount 5h32m (x31004 over 6h27m) kubelet MountVolume.WaitForAttach failed for volume "pvc-[pvc-id]" : volume attachment is being deleted
Warning FailedAttachVolume 5h30m (x36 over 6h27m) attachdetach-controller AttachVolume.Attach failed for volume "pvc-[pvc-id]" : volume attachment is being deleted
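Because both events say the volume attachment is being deleted, it can help to inspect the VolumeAttachment objects for the affected PVC directly. A minimal sketch (the PVC ID and object name below are placeholders); a stuck attachment will typically show a deletionTimestamp while finalizers are still present:
$ kubectl get volumeattachments | grep "pvc-[pvc-id]"
$ kubectl describe volumeattachment <volume-attachment-name>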
Environment
- Self-Hosted Private Cloud Director Virtualization – v2025.4 and Higher.
Procedure
- Check the number of pods in the Init state to identify any pod stuck in initialisation. Failed PVC mount attachments can cause pods to remain in an Init state.
$ kubectl get pods -A | grep -i "init"
- Run the following command to identify the storage backend (CSI driver).
$ kubectl get csidrivers
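To map a failing PVC to its backend, you can also check which StorageClass and provisioner it uses (the PVC and namespace names below are placeholders):
$ kubectl get pvc <pvc-name> -n <namespace> -o wide
$ kubectl get storageclass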
- Verify that the CSI driver pods are running. These pods may live in a dedicated namespace or in the kube-system namespace. In the example below, the NetApp backend hosts its storage pods in the trident namespace.
$ kubectl get pods -n <CSI-driver-namespace>
For example:
$ kubectl get pods -n trident
trident trident-controller-pod 0/6 ContainerCreating 0 6h44m
trident trident-node-linux-pod 0/2 CrashLoopBackOff 20 (5h33m ago) 23m
trident trident-node-linux-pod 0/2 CrashLoopBackOff 15 (5d4h ago) 23d
trident trident-node-linux-pod 0/2 CrashLoopBackOff 34 (5d3h ago) 23d
- As Calico is responsible for providing pod networking, review all Calico pods and determine why any of them are in a "CrashLoopBackOff/ContainerCreating/OOMKilled/Pending/Error" state; see the Events section in the output of the command below.
$ kubectl describe pod <pod-name> -n <calico-namespace>
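Calico pods usually run in the kube-system or calico-system namespace. As a quick way to list them, you can filter by the k8s-app=calico-node label that Calico node pods commonly carry (adjust the label if your installation differs):
$ kubectl get pods -A -l k8s-app=calico-node -o wide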
- Get more information on the failure from the CSI driver pod logs using the command:
$ kubectl logs <pod-name> -n <CSI-driver-namespace>
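For pods in CrashLoopBackOff, the logs of the previous container instance often show the original failure. If the pod runs multiple containers (as the Trident pods above do), name one explicitly; the container name below is a placeholder:
$ kubectl logs <pod-name> -n <CSI-driver-namespace> --previous
$ kubectl logs <pod-name> -n <CSI-driver-namespace> -c <container-name>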
- If these steps don't resolve the issue, please contact your Backend Storage Provider or reach out to the Platform9 Support Team for additional assistance.
Most common causes
- The storage backend is unreachable.
- The underlying host does not have sufficient resources to run these pods.
- The CSI driver itself is misconfigured or reporting errors.
- Calico network pods are not working as expected.