PersistentVolume Is in a "Released" State and Fails to Mount in a Pod
Problem
You deleted an existing PersistentVolumeClaim and are trying to use the retained PersistentVolume from another pod, but it fails to mount. The volume shows a "Released" status, as seen below.
kubectl get pv,pvc
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS   REASON   AGE
pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51   4Gi        RWO            Retain           Released   default/old-claim   test-ebs                5m31s

Environment
- Platform9 Managed Kubernetes - All Versions
Cause
When a PersistentVolumeClaim is deleted and the PersistentVolume's reclaim policy is Retain, the volume is not deleted; it moves to the "Released" state instead. It is not yet available for another claim, because the previous claimant's data remains on the volume and the volume's spec.claimRef still references the deleted claim.
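The stale binding is visible in the volume's spec.claimRef field. A representative excerpt for the objects above (the uid and resourceVersion values are placeholders, not taken from this cluster):

# kubectl get pv pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51 -o yaml
spec:
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: old-claim
    namespace: default
    resourceVersion: "..."   # placeholder
    uid: "..."               # placeholder

Because claimRef still names a specific, now-deleted claim, the volume binder will not match the volume to any new claim until the reference is cleared.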
Resolution
- Describe the PersistentVolume and check the ClaimRef. You will see that it refers to the old PersistentVolumeClaim (old-claim in this case) that has already been deleted.
# kubectl describe pv pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51
Name:            pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51
Labels:          topology.kubernetes.io/region=us-east-1
                 topology.kubernetes.io/zone=us-east-1a
Annotations:     kubernetes.io/createdby: aws-ebs-dynamic-provisioner
                 pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    test-ebs
Status:          Released
Claim:           default/old-claim
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        4Gi

- Delete the old ClaimRef by running the following command.
# kubectl patch pv pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51 -p '{"spec":{"claimRef": null}}'
persistentvolume/pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51 patched

- Describe the PersistentVolume again and validate that the ClaimRef has been reset.
# kubectl describe pv pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51
Name:            pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51
Labels:          topology.kubernetes.io/region=us-east-1
                 topology.kubernetes.io/zone=us-east-1a
Annotations:     kubernetes.io/createdby: aws-ebs-dynamic-provisioner
                 pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    test-ebs
Status:          Available
Claim:
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        4Gi

- Validate the status of the PersistentVolume.
# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51   4Gi        RWO            Retain           Available           test-ebs                12m

- Now that the PersistentVolume is in an Available status, a new PersistentVolumeClaim can bind to it, and any pod can then mount the volume through that claim, as shown in the sketch below.
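If the volume should be reused by a specific workload, a new claim can target it explicitly via spec.volumeName. A minimal sketch, assuming a claim named new-claim in the default namespace (the name is illustrative; the storage class, access mode, and size match the volume shown above):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: new-claim                                        # illustrative name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce                                      # matches the PV's RWO access mode
  storageClassName: test-ebs
  volumeName: pvc-37a9d4dc-2f24-403d-ad77-13a203d02c51   # pin the claim to this PV
  resources:
    requests:
      storage: 4Gi                                       # must not exceed the PV's 4Gi capacity

Once the claim is applied (kubectl apply -f new-claim.yaml), the PersistentVolume moves from Available to Bound, and a pod can mount it by referencing new-claim in a persistentVolumeClaim volume source. Keep in mind that with a Retain reclaim policy the previous claimant's data is still on the volume; scrub it first if the new workload should not see it.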