Changing Management of Volumes in a Shared Storage Backend
Problem
In an environment where volumes are provisioned under a shared storage backend (e.g. NetApp SolidFire, ONTAP, Tintri, etc.), one or more Block Storage hosts will manage the lifecycle of volumes associated with those backends from an OpenStack/Cinder perspective.
In that regard, should a Block Storage host need to be deauthorized, brought down for maintenance, or repurposed to another role, it may be necessary to re-manage the existing volumes under another Block Storage host. Create, update, and delete operations on those volumes will fail if this is not performed.
Note: any existing volumes managed by backend(s) associated with any Block Storage host(s) will otherwise continue to operate uninterrupted, as the iSCSI connections used for I/O from within the VMs are established directly with the backend and do not rely on the Block Storage host(s) being up.
Environment
- Platform9 Managed OpenStack - All versions
- OpenStack CLI
- Cinder
Procedure
- Identify the backend storage pools.
$ cinder get-pools
or
$ openstack volume service list
Example:
$ cinder get-pools
+----------+-----------------------------------------------------------------+
| Property | Value |
+----------+-----------------------------------------------------------------+
| name | 55ad6f9a-33da-4a1f-baa7-f0a15446ac9f@sf-backend#cinder-volumes |
+----------+-----------------------------------------------------------------+
+----------+-----------------------------------------------------------------+
| Property | Value |
+----------+-----------------------------------------------------------------+
| name | 7d180141-369f-4380-b688-2677af266f9b@sf-backend#cinder-volumes |
+----------+-----------------------------------------------------------------+
$ openstack volume service list
+------------------+----------------------------------------------------------------+------+----------+-------+----------------------------+---------+---------------+
| Binary | Host | Zone | Status | State | Updated At | Cluster | Backend State |
+------------------+----------------------------------------------------------------+------+----------+-------+----------------------------+---------+---------------+
| cinder-scheduler | cinder-scheduler-7d6685c78d-vx9j7 | nova | enabled | up | 2025-06-07T00:28:39.000000 | None | None |
| cinder-volume | 55ad6f9a-33da-4a1f-baa7-f0a15446ac9f@sf-backend#cinder-volumes | nova | enabled | up | 2025-06-07T00:28:07.000000 | None | None |
| cinder-volume | 7d180141-369f-4380-b688-2677af266f9b@sf-backend#cinder-volumes | nova | enabled | up | 2025-06-07T00:28:04.000000 | None | None |
+------------------+----------------------------------------------------------------+------+----------+-------+----------------------------+---------+---------------+
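To pull out just the backend host identifiers, the cinder-volume rows of the service listing can be filtered. The sketch below simulates the `-f value` output of `openstack volume service list` with `printf`, using the example values above; on a live deployment, pipe the real command into the same awk filter instead.

```shell
# Simulated `openstack volume service list -f value -c Binary -c Host` output.
# On a live deployment, use the real command in place of the printf:
#   openstack volume service list -f value -c Binary -c Host |
#     awk '$1 == "cinder-volume" { print $2 }'
printf '%s\n' \
  'cinder-scheduler cinder-scheduler-7d6685c78d-vx9j7' \
  'cinder-volume 55ad6f9a-33da-4a1f-baa7-f0a15446ac9f@sf-backend#cinder-volumes' \
  'cinder-volume 7d180141-369f-4380-b688-2677af266f9b@sf-backend#cinder-volumes' |
  awk '$1 == "cinder-volume" { print $2 }'
```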
- From the list of storage pools, identify the host_uuid@backend-name#pool identifier of the current host and of the destination host.
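A pool identifier has the form host_uuid@backend-name#pool. A minimal sketch of splitting it into its parts with POSIX shell parameter expansion, using the example pool name from the listing above:

```shell
# Example pool identifier from the `cinder get-pools` output above
pool='55ad6f9a-33da-4a1f-baa7-f0a15446ac9f@sf-backend#cinder-volumes'

host_uuid="${pool%%@*}"       # everything before the '@'
backend="${pool#*@}"          # strip the host UUID and '@' ...
backend="${backend%%#*}"      # ... then drop everything from '#' on
pool_name="${pool##*#}"       # everything after the last '#'

echo "host:    $host_uuid"
echo "backend: $backend"
echo "pool:    $pool_name"
```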
- Update the host information for all volumes currently managed by the current host so that they are instead managed by the destination host.
The cinder-manage binary is available only on the Platform9 Management Plane and Block Storage hosts.
If running from the control plane:
$ cinder-manage volume update_host --currenthost [CURRENT_HOST_UUID] --newhost [DEST_HOST_UUID]
If running from a Block Storage host:
$ LD_LIBRARY_PATH="/opt/pf9/pf9-cindervolume-base/pf9-lib:/opt/pf9/python/pf9-lib:${LD_LIBRARY_PATH}" PYTHONPATH="/opt/pf9/python/lib/python3.6:/opt/pf9/pf9-cindervolume-base/lib/python3.6/site-packages:${PYTHONPATH}" /opt/pf9/pf9-cindervolume-base/bin/cinder-manage --config-dir /opt/pf9/etc/pf9-cindervolume-base/conf.d volume update_host --currenthost [CURRENT_HOST_UUID] --newhost [DEST_HOST_UUID]
The os-vol-host-attr:host parameter should now be updated for all the volumes to reflect the destination host.
- Optionally, verify that volumes are no longer managed by the previous host:
$ openstack volume list --all-projects --host [HOST_UUID]
or
$ openstack volume list --all-projects -f value -c ID | while read -r vol_id; do
host=$(openstack volume show "$vol_id" -f value -c os-vol-host-attr:host)
echo "$vol_id | $host"
done
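To summarize the result at a glance, the host attributes collected by the loop above can be tallied per host. The sketch below simulates that list of host values with `printf`; on a live deployment, pipe the host attribute of each volume into the same `sort | uniq -c` pipeline instead.

```shell
# Simulated per-volume os-vol-host-attr:host values (as gathered by the
# loop above); replace the printf with the real per-volume lookup on a
# live deployment.
printf '%s\n' \
  '7d180141-369f-4380-b688-2677af266f9b@sf-backend#cinder-volumes' \
  '7d180141-369f-4380-b688-2677af266f9b@sf-backend#cinder-volumes' \
  '7d180141-369f-4380-b688-2677af266f9b@sf-backend#cinder-volumes' |
  sort | uniq -c
```

If the migration succeeded, the previous host's UUID should no longer appear anywhere in the counts.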