Persistent Volume is in a "released" State and Failing to Mount Within the Pod
Problem
You deleted an existing PersistentVolumeClaim (PVC) and are now trying to mount the underlying PersistentVolume (PV) in another pod, but the mount fails. The volume shows a "Released" status, as seen below.
kubectl get pv,pvc
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS   REASON   AGE
pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51   4Gi        RWO            Retain           Released   default/old-claim   test-ebs                5m31s
Environment
- Platform9 Managed Kubernetes - All Versions
Cause
When a PersistentVolumeClaim is deleted and the PersistentVolume's reclaim policy is Retain, the PersistentVolume is not removed; it transitions to the "Released" phase. The volume is not yet available for another claim because its spec.claimRef still references the deleted PVC and the previous claimant's data remains on the volume.
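This behavior is controlled by the reclaim policy on the StorageClass. A minimal sketch of a StorageClass that retains volumes; the class name and provisioner match the output in this article, but the parameters are illustrative:

```yaml
# Hypothetical StorageClass sketch. reclaimPolicy: Retain keeps the PV
# (and its data) after the bound PVC is deleted, leaving the PV Released.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-ebs
provisioner: kubernetes.io/aws-ebs   # in-tree AWS EBS provisioner, as in the PV annotations below
reclaimPolicy: Retain                # the default is Delete, which removes the PV with the PVC
parameters:
  type: gp2                          # assumed EBS volume type
```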
Resolution
- Describe the PersistentVolume and check the ClaimRef. You will see that it still refers to the older PersistentVolumeClaim (old-claim in this case) that has already been deleted.
# kubectl describe pv pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51
Name: pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51
Labels: topology.kubernetes.io/region=us-east-1
topology.kubernetes.io/zone=us-east-1a
Annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner
pv.kubernetes.io/bound-by-controller: yes
pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pv-protection]
StorageClass: test-ebs
Status: Released
Claim: default/old-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 4Gi
- Remove the stale ClaimRef by patching the PersistentVolume with the following command.
# kubectl patch pv pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51 -p '{"spec":{"claimRef": null}}'
persistentvolume/pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51 patched
- Describe the PersistentVolume again and validate that the ClaimRef has been reset.
# kubectl describe pv pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51
Name: pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51
Labels: topology.kubernetes.io/region=us-east-1
topology.kubernetes.io/zone=us-east-1a
Annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner
pv.kubernetes.io/bound-by-controller: yes
pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pv-protection]
StorageClass: test-ebs
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 4Gi
- Validate the status of the PersistentVolume.
# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51   4Gi        RWO            Retain          Available           test-ebs                12m
- Now that the PersistentVolume is in an Available status, a new PersistentVolumeClaim can bind to it, and any pod can then mount that claim.
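To reuse the volume deliberately, you can pre-bind a new PersistentVolumeClaim to it via spec.volumeName. A minimal sketch, assuming a hypothetical claim named new-claim in the default namespace; the capacity, access mode, and storage class must match the PersistentVolume:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: new-claim            # hypothetical name for illustration
  namespace: default
spec:
  storageClassName: test-ebs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  # Binds this claim directly to the released-then-available PV from this article
  volumeName: pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51
```

Apply the manifest with kubectl apply -f, then reference the claim by name in the pod's volumes section.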