OpenStack Volume Deletion Fails
Problem
Deletion of the volume fails with the following error in /var/log/pf9/cindervolume-base.log:
Delete for volume [VOLUME-ID] failed: Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer.
(HTTP 400) (Request-ID: [REQUEST-ID]) ERROR: Unable to delete any of the specified volumes.
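To confirm the failure on the backend, the log can be searched for the error directly; this is a minimal check that assumes shell access to the node running the Cinder services, and if the Request-ID from the CLI error appears in the log it narrows the search to the failing request.
$ grep "Unable to delete" /var/log/pf9/cindervolume-base.log
$ grep "[REQUEST-ID]" /var/log/pf9/cindervolume-base.log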
Environment
- Platform9 Managed OpenStack - v3.6.0 and Higher
- Cinder
Cause
Cinder blocks volume deletion in both of the cases described below.
- Case 1: The volume is stuck in a volume migration.
- Case 2: The volume was not cleanly detached when its VM was deleted.
Diagnosis
Case 1:
- Volume details
$ openstack volume show --fit-width <VOLUME_UUID>
+--------------------------------+--------------------------------------------------+
| Field | Value |
+--------------------------------+--------------------------------------------------+
| attachments | [] |
| availability_zone | [AZ-NAME] |
| bootable | false |
| consistencygroup_id | None |
| created_at | [TIME-STAMP] |
| description | [DESCRIPTION] |
| encrypted | False |
| id | [VOLUME-ID] |
| migration_status | deleting |
| multiattach | False |
| name | [VOLUME-NAME] |
| os-vol-host-attr:host | [HOST-ID] |
| os-vol-mig-status-attr:migstat | deleting |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | [TENANT-ID] |
| properties | attached_mode='rw', readonly='False' |
| replication_status | None |
| size | [SIZE] |
| snapshot_id | None |
| source_volid | None |
| status | deleting |
| type | None |
| updated_at | [TIME-STAMP] |
| user_id | [USER-ID] |
+--------------------------------+--------------------------------------------------+
- Cinder volume base logs
Delete for volume [VOLUME-ID] failed: Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer.
(HTTP 400) (Request-ID: [REQUEST-ID]) ERROR: Unable to delete any of the specified volumes.
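When the full table is not needed, the fields that block deletion can be pulled out with the openstack CLI's standard column and format options; a minimal sketch (for Case 1, status and migration_status both report deleting):
$ openstack volume show <VOLUME_UUID> -c status -c migration_status -c attachments -f value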
Case 2:
- Volume details
$ openstack volume show --fit-width <VOLUME_UUID>
+--------------------------------+--------------------------------------------------+
| Field | Value |
+--------------------------------+--------------------------------------------------+
| attachments | [] |
| availability_zone | [AZ-NAME] |
| bootable | false |
| consistencygroup_id | None |
| created_at | [TIME-STAMP] |
| description | [DESCRIPTION] |
| encrypted | False |
| id | [VOLUME-ID] |
| migration_status | None |
| multiattach | False |
| name | [VOLUME-NAME] |
| os-vol-host-attr:host | [HOST-ID] |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | [TENANT-ID] |
| properties | readonly='False' |
| replication_status | None |
| size | [SIZE] |
| snapshot_id | None |
| source_volid | None |
| status | deleting |
| type | None |
| updated_at | [TIME-STAMP] |
| user_id | [USER-ID] |
+--------------------------------+--------------------------------------------------+
- Cinder volume base logs
Delete for volume [VOLUME-ID] failed: Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer.
(HTTP 400) (Request-ID: [REQUEST-ID]) ERROR: Unable to delete any of the specified volumes.
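Since Case 2 stems from an unclean detach during VM deletion, it is worth confirming that the original instance really is gone before changing any volume state. A minimal check, assuming the instance UUID is still known (<SERVER_UUID> is an illustrative placeholder); an error stating the server does not exist confirms that only the stale volume record remains:
$ openstack server show <SERVER_UUID> -c status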
Resolution
Case 1: For a volume stuck with migration and attach-related flags:
Note: To run cinder commands, install python-cinderclient==9.4.0. The cinder binary is generally available only on the Platform9 Management Plane and on Block Storage hosts.
- Set the state of the volume to "available".
$ cinder reset-state --state available <VOLUME_UUID>
- Reset the migration status of the volume.
$ cinder reset-state --reset-migration-status <VOLUME_UUID>
- Set the attach status to "detached".
$ cinder reset-state --attach-status detached <VOLUME_UUID>
- Delete the volume.
$ cinder delete <VOLUME_UUID>
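If the delete still returns HTTP 400, re-check that the resets actually took effect before retrying; a quick sketch using standard openstack output filters, where the expected values are available and None:
$ openstack volume show <VOLUME_UUID> -c status -c migration_status -f value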
Case 2: For volumes stuck in the deleting state:
- Reset the state of the volume to "available".
$ openstack volume set --state available <VOLUME_UUID>
- Delete the volume.
$ openstack volume delete <VOLUME_UUID>
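If the delete still fails, confirm the reset took effect before retrying (expected value: available):
$ openstack volume show <VOLUME_UUID> -c status -f value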
Validation
Case 1: After applying the fix for a volume with migration and attach metadata:
- Volume status changes from deleting to available after the reset-state command.
- migration_status becomes None.
- Properties such as attached_mode='rw' no longer block deletion.
- The volume is successfully deleted from the system and no longer appears in CLI or UI.
- Verify volume state:
$ openstack volume show <VOLUME_UUID> --fit-width
Expected Output:
No volume with a name or ID of '[VOLUME_UUID]' exists.
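An equivalent check is to confirm the UUID no longer appears in the volume listing; the grep is only a convenience, and no output means the record is gone:
$ openstack volume list | grep <VOLUME_UUID>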
Case 2: After applying the fix for a volume stuck in the deleting state:
- Volume status changes from deleting to available after the state reset.
- The volume is successfully deleted from the system and no longer appears in CLI or UI.
- Verify volume state:
$ openstack volume show <VOLUME_UUID> --fit-width
Expected Output:
No volume with a name or ID of '[VOLUME_UUID]' exists.
Additional Information
- Always confirm that the volume is not in use or attached to a running instance before forcibly resetting its state.
- Resetting states bypasses safety checks and should only be used when metadata inconsistencies block clean-up.
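One way to make that check before resetting state is to inspect the attachments field directly; an empty list ([]) indicates no attachment records remain in Cinder:
$ openstack volume show <VOLUME_UUID> -c attachments -f value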