Persist Application PVC Data During SMCP Upgrade
Problem
- Any non-SMCP components deployed on the management plane, including their PVC data, are lost after a management plane upgrade.
Environment
- Self Managed Cloud Platform (SMCP) 5.9 and Higher
- Any PVC-backed custom workload applications.
Cause
The SMCP Management Plane upgrade takes a backup of the Management Plane components, destroys the cluster, creates a new Management Plane cluster at the required version, and then restores the backup.
During the backup process, no data or volumes other than Platform9 components are preserved, so any custom data is lost.
Resolution
The steps below back up the volumes associated with an application pod's PV/PVC.
As an example, a busybox application whose volume is provisioned through a PVC is used. The associated StatefulSet manifest is shown below:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: null
  labels:
    app: busybox
  name: busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: busybox
    spec:
      containers:
      - image: busybox
        name: busybox
        resources: {}
        command: ["/bin/sh"]
        args: ["-c", "sleep 600"]
        volumeMounts:
        - name: bb-volume
          mountPath: /mydata
  volumeClaimTemplates:
  - metadata:
      name: bb-volume
      namespace: default
    spec:
      storageClassName: local-path
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
status: {}
BEFORE UPGRADE
- Perform a graceful shutdown of the pods and wait until they are fully removed:
$ kubectl scale sts busybox --replicas=0
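The removal can be confirmed before touching any data on disk; this is a sketch that assumes the `app: busybox` label from the manifest above:

```shell
# Wait for all busybox pods to terminate before moving volume data.
sts=busybox
kubectl wait --for=delete pod -l app="$sts" --timeout=120s
# Should report "No resources found" once the scale-down completes.
kubectl get pods -l app="$sts"
```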
- Identify the three busybox PVCs and match each name to its directory on disk. On each of the three nodes, move the PVC directory to the top level of /opt:
$ mv /opt/pf9/volumes/csi/pvc-xxx-xxx-xxx /opt
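The PVC-to-directory mapping can be listed with a short loop. This is a sketch: the PVC names follow the StatefulSet naming pattern `<claim-template>-<sts-name>-<ordinal>` from the manifest above, and `.spec.volumeName` on each PVC gives the `pvc-…` directory name under /opt/pf9/volumes/csi:

```shell
# Print which on-disk directory backs each busybox PVC.
for i in 0 1 2; do
  pvc="bb-volume-busybox-$i"
  # The bound PV name matches the directory name on the node.
  pv=$(kubectl get pvc "$pvc" -o jsonpath='{.spec.volumeName}')
  echo "$pvc -> /opt/pf9/volumes/csi/$pv"
done
```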
- Take a backup of all three PVCs and PVs. Repeat the commands below for the other two PVCs and PVs.
$ kubectl get pvc <pvc-name> -o yaml > busybox-0_pvc.yaml
$ kubectl get pv <pv-name> -o yaml > busybox-0_pv.yaml
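The per-replica backups can be scripted in one loop; a sketch under the same naming assumptions as above:

```shell
# Dump the PVC and PV manifests for all three busybox replicas.
for i in 0 1 2; do
  pvc="bb-volume-busybox-$i"
  pv=$(kubectl get pvc "$pvc" -o jsonpath='{.spec.volumeName}')
  kubectl get pvc "$pvc" -o yaml > "busybox-${i}_pvc.yaml"
  kubectl get pv "$pv" -o yaml > "busybox-${i}_pv.yaml"
done
```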
- Edit all three PV backup files generated above and comment out the claimRef section.
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 500Mi
  # claimRef:
  #   ...
- Upgrade the SMCP Management Plane to the desired version.
AFTER UPGRADE
- Move the PVC data on disk back into the original directory on each node. Repeat this step on all three nodes for the respective PVC:
$ mv /opt/pvc-xxx-xxx-xxx /opt/pf9/volumes/csi/
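The move can be looped over every stashed volume directory on a node; a sketch assuming the directories were moved to the top level of /opt in the pre-upgrade step:

```shell
# Return every stashed pvc-* directory to the CSI data path.
src=/opt
dst=/opt/pf9/volumes/csi
for d in "$src"/pvc-*; do
  [ -d "$d" ] && mv "$d" "$dst"/
done
```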
- Re-apply the PVs and PVCs. Start with the PVs that contain the commented-out claimRef, followed by the associated PVCs.
$ kubectl apply -f busybox-0_pv.yaml
$ kubectl get pv pvc-1938ad09-1363-4d3f-ab7d-5cdfe2e6e29b
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pvc-1938ad09-1363-4d3f-ab7d-5cdfe2e6e29b 500Mi RWO Delete Available local-path <unset> 24s
- Apply each PVC after applying its PV; the PVC should show a Bound status. Repeat this for the other PVCs and PVs.
$ kubectl apply -f busybox-0_pvc.yaml
$ kubectl get pvc bb-volume-busybox-0
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
bb-volume-busybox-0 Bound pvc-1938ad09-1363-4d3f-ab7d-5cdfe2e6e29b 500Mi RWO local-path <unset> 2s
- Redeploy the application. The pods should attach to the volumes successfully and the previous data on the volumes will be available.
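The redeploy step can be sketched as scaling the StatefulSet back to its original replica count and checking the restored data; the mount path /mydata comes from the manifest above:

```shell
# Scale the application back up and verify the restored volume contents.
replicas=3
kubectl scale sts busybox --replicas="$replicas"
kubectl rollout status sts/busybox --timeout=180s
# List the restored data inside the first pod's mounted volume.
kubectl exec busybox-0 -- ls /mydata
```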