Storage Service Troubleshooting Guide
Problem
This troubleshooting guide provides clear, actionable steps, explanations of common errors, and best practices so users can quickly and independently resolve storage-related problems, specifically with the Cinder service in Private Cloud Director.
Environment
Private Cloud Director Virtualization - v2025.4 and Higher
Self-Hosted Private Cloud Director Virtualization - v2025.4 and Higher
Component - PCD Storage Service
Deep Dive
Volume Creation Flow
This is the process of provisioning a new block storage device from a storage backend.
1. API Request: A user sends a request to create a volume via the OpenStack CLI, the Private Cloud Director dashboard, or a direct API call. The cinder-api service receives the request, authenticates the user with Keystone, and issues a POST https://<FQDN>/v3/<TENANT_UUID>/volumes request, which creates a Cinder database entry for the volume with a status of creating. The cinder-api pod log sample below shows the POST request, the volume size, and the volume creation request being issued successfully.
Sample logs:
INFO cinder.api.openstack.wsgi [None [REQ-ID] [USER_ID] [TENANT_ID] - - default default] POST https://<FQDN>/v3/<TENANT_UUID>/volumes
INFO cinder.api.v3.volumes [None [REQ-ID] [USER_ID] [TENANT_ID] - - default default] Create volume of 2 GB
INFO cinder.volume.api [None [REQ-ID] [USER_ID] [TENANT_ID] - - default default] Availability Zones retrieved successfully.
INFO cinder.volume.api [None [REQ-ID] [USER_ID] [TENANT_ID] - - default default] Create volume request issued successfully.
2. Cinder-Scheduler: The request is passed to the cinder-scheduler. This component decides where to place the volume using filters such as the capacity filter, the availability zone filter, and many others, choosing which storage backend (e.g., Ceph, LVM) is the best place to create the volume based on size, type, and availability.
3. Volume Service Action: The scheduler sends the request to the pf9-cindervolume-base service responsible for the chosen backend. This service is the worker that uses a specific storage driver to command the backend.
4. Backend Provisioning: The storage backend (the actual storage system) receives the commands and provisions the physical or logical block device. On the underlying Persistent Storage hosts, /var/log/pf9/cindervolume-base.log shows the requested raw volume specification, including the volume name, volume UUID, and volume size.
Sample logs:
INFO cinder.volume.flows.manager.create_volume [[REQ-ID] None service] Volume [VOLUME_UUID]: being created as raw with specification: {'status': 'creating', 'volume_name': 'volume-[VOLUME_UUID]', 'volume_size': 2}
5. Status Update: Once the backend confirms the volume is created, the pf9-cindervolume-base service sends an update request to the Cinder database, changing the volume's status to available. On the underlying Persistent Storage hosts, /var/log/pf9/cindervolume-base.log shows the final status confirming the volume was created.
Sample logs:
INFO cinder.volume.flows.manager.create_volume [[REQ-ID] None service] Volume volume-[VOLUME_UUID] ([VOLUME_UUID]): created successfully
INFO cinder.volume.manager [[REQ-ID] None service] Created volume successfully.
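The same request can be followed across services by its request ID. Below is a minimal sketch of pulling the request ID out of a copied cinder-api log line; the log text and the ID req-9f2c31 are illustrative placeholders matching the sample format above.

```shell
# Illustrative cinder-api log lines in the format shown above; the request
# ID "req-9f2c31" is a made-up placeholder.
logs='INFO cinder.api.openstack.wsgi [None req-9f2c31 user1 tenant1 - - default default] POST https://pcd.example.com/v3/tenant1/volumes
INFO cinder.api.v3.volumes [None req-9f2c31 user1 tenant1 - - default default] Create volume of 2 GB'

# Extract the request ID so the same request can be traced through the
# cinder-scheduler and cindervolume-base logs.
req_id=$(printf '%s\n' "$logs" | grep -oE 'req-[A-Za-z0-9]+' | head -n1)
echo "$req_id"   # prints req-9f2c31
```

The extracted ID can then be grepped in /var/log/pf9/cindervolume-base.log on the Persistent Storage host to follow the provisioning steps.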
Attaching a Volume to VM Flow
This process is a collaboration, primarily between Compute and Block Storage.
1. User Request (via Nova): A user requests to attach an existing, available volume to a specific VM. This request goes to the nova-api-osapi service, not the Cinder API.
2. Nova to Cinder Communication: The pf9-ostackhost service on the host where the VM is running calls the cinder-api to get the connection information for the volume. Once the volume information is received, it attaches the volume, as shown in the /var/log/pf9/ostackhost.log logs.
3. Cinder Prepares the Attachment: The cinder-api passes the request to the pf9-cindervolume-base service. Cinder performs the necessary actions to "reserve" the volume and prepares it for attachment. It then generates the required connection details (e.g., the iSCSI target or Ceph RBD path). Once that succeeds, the /var/log/pf9/cindervolume-base.log logs show an attachment-successful message. The volume status will be "reserved".
4. Cinder Responds to Nova: Cinder sends these connection details back to the pf9-ostackhost (nova-compute) service on the host.
5. Nova Makes the Connection: Once pf9-ostackhost receives the connection info, it uses the host's operating system and hypervisor (e.g., QEMU/KVM) to connect the VM to the storage volume. The volume status will be "attaching".
6. Final Status Update: Once the connection is successful, the pf9-ostackhost service informs Cinder, and Cinder updates the volume's status in its database to in-use and records which VM it is attached to.
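From the client side, this flow is triggered by a single Nova-side command. A minimal sketch, assuming a configured openstack client; the server and volume names are placeholders:

```shell
# Placeholder identifiers; substitute real server/volume names or UUIDs.
SERVER="my-vm"
VOLUME="my-volume"

# Attach goes through Nova (nova-api-osapi), not the Cinder API directly.
openstack server add volume "$SERVER" "$VOLUME"

# Watch the volume move through reserved -> attaching -> in-use.
openstack volume show "$VOLUME" -c status -f value
```

If the attach fails, the request-id in the nova-api-osapi and cinder-api logs ties the failure back to the steps above.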
Volume Deletion Flow
This process is handled primarily by Block Storage.
1. User Request (via Cinder): A user requests to delete an existing volume via the CLI, the Private Cloud Director dashboard, or a direct API call. This DELETE /v3/{project_id}/volumes/{volume_id} request goes to the cinder-api service, which validates the user's authentication token with Keystone, performs a permission check, and changes the volume status in the database to deleting.
2. Further Validation: The Cinder service checks the volume state. If it is available, error, error_restoring, or error_extending, the normal delete operation is performed. If the volume state is in-use (attached), a normal delete is rejected unless the force delete option is used.
3. Cinder Prepares for Delete: The RPC request is routed to the pf9-cindervolume-base service hosting the volume (no scheduler step is needed for delete). The backend driver/manager attempts to terminate connections and detach (best-effort). If connector cleanup fails, the delete may fail with error_deleting. The driver's delete_volume() removes the LUN/target/extent from the storage backend. The /var/log/pf9/cindervolume-base.log shows the volume device mapper being deleted.
4. Cinder Volume Deletion Confirmation: On a successful backend delete, the quotas for volumes and gigabytes are decremented, and /var/log/pf9/cindervolume-base.log shows that the volume was successfully deleted.
5. Final Status Update: The Persistent Storage service pf9-cindervolume-base sends the database update request to the Cinder DB.
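On the client side, the deletion flow is exercised as follows; a minimal sketch assuming a configured openstack client, with the volume name as a placeholder:

```shell
# Placeholder volume name or UUID; substitute the affected volume.
VOLUME="my-volume"

# A normal delete is accepted only from available/error states; detach the
# volume first if it is still in-use.
openstack volume delete "$VOLUME"

# If the volume hangs in "deleting", inspect its state before retrying and
# check the backend logs described above.
openstack volume show "$VOLUME" -c status -f value
```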
Detaching a Volume from a VM Flow
This process is a collaboration, primarily between Compute and Block Storage.
1. User Request (via Nova): A user requests to detach a volume from a VM. This request goes to the nova-api-osapi pod, not directly to Cinder. Extract the volume ID and note the request-id from the pod logs.
2. Nova Initiates Detach Operation: The pf9-ostackhost service on the compute node (where the VM is running) initiates the detach process. It first interacts with the hypervisor to safely remove the disk from the VM. The captured request-id can be traced in these logs.
3. Nova to Cinder Communication: After initiating the detach, pf9-ostackhost calls the cinder-api to terminate the volume connection. The same request-id appears in the cinder-api pod logs.
4. Nova Finalizes Detach on Hypervisor: After receiving confirmation, pf9-ostackhost ensures the disk is fully removed from the VM via the hypervisor (QEMU/KVM). If not already done earlier, this step ensures:
The device is no longer visible inside the VM
The libvirt/QEMU mapping is removed
5. Final Status Update: Once the detachment is fully completed, pf9-ostackhost informs Cinder, and Cinder updates:
Volume status → available
Attachment entry → removed
Run openstack volume show <volume-id> to validate the status.
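The detach flow above mirrors the attach flow on the client side. A minimal sketch, assuming a configured openstack client and placeholder names:

```shell
# Placeholder identifiers; substitute real server/volume names or UUIDs.
SERVER="my-vm"
VOLUME="my-volume"

# Detach is initiated through Nova, mirroring the attach flow.
openstack server remove volume "$SERVER" "$VOLUME"

# After a successful detach the status returns to "available".
openstack volume show "$VOLUME" -c status -f value
```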
Extending a Volume Flow
This process is primarily handled by Block Storage, with optional collaboration from Compute when the volume is attached.
1. User Request (via Cinder): A user requests to extend an existing volume. This request goes to the cinder-api service. Capture the REQ_ID from the cinder-api pod logs.
2. Cinder Validates the Request: The cinder-api validates that:
The new size is greater than the current size
The volume is in a valid state (available, or in-use where supported)
Once validated:
Volume status → extending
Run openstack volume show <volume-id> to validate the status. Also note the hosting Cinder node.
3. Cinder Volume Service Processes Extend: The cinder-api forwards the request to the pf9-cindervolume-base service on the Cinder host where the volume is placed. On the underlying Cinder hosts, /var/log/pf9/cindervolume-base.log shows the volume being resized.
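The extend flow can be driven from the client as follows; a minimal sketch assuming a configured openstack client, with placeholder values:

```shell
# Placeholder volume name; the new size must be larger than the current size.
VOLUME="my-volume"
NEW_SIZE_GB=4

# Request the extend through cinder-api; status moves to "extending".
openstack volume set --size "$NEW_SIZE_GB" "$VOLUME"

# Confirm the new size and that the status returned to available/in-use.
openstack volume show "$VOLUME" -c size -c status
```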
Procedure
Info
Ensure that the openstack and cinder binaries are present on the system.
1. Check that all Cinder volume hosts are enabled and up.
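One way to check this is with the volume service listing, assuming a configured openstack client:

```shell
# Each cinder-volume host should report Status "enabled" and State "up".
openstack volume service list

# Optionally narrow the output to the volume service only.
openstack volume service list --service cinder-volume
```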
2. List all volumes, filter for the affected volume, and gather its details, such as host information, status, and errors.
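A minimal sketch of these lookups, with the volume UUID as a placeholder:

```shell
# Placeholder; substitute the affected volume's UUID.
VOLUME_UUID="<VOLUME_UUID>"

# List volumes across projects and filter for the affected one.
openstack volume list --all-projects --long | grep "$VOLUME_UUID"

# Full details: status, attachments, and (in the admin view) the
# os-vol-host-attr:host field identifying the backing cinder-volume host.
openstack volume show "$VOLUME_UUID"
```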
3. The management plane runs cinder-api and cinder-scheduler pods to provide the volume service. Check that the cinder-api and cinder-scheduler pods are running in the workload region namespace, and review these pods.
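A sketch of the pod check; the namespace and pod name below are deployment-specific placeholders:

```shell
# Placeholder namespace; substitute your workload region namespace.
NS="<workload-region-namespace>"

# Both cinder-api and cinder-scheduler pods should be Running.
kubectl get pods -n "$NS" | grep -E 'cinder-api|cinder-scheduler'

# If a pod is not Running, inspect its logs (placeholder pod name).
POD="cinder-api-xxxxx"
kubectl logs -n "$NS" "$POD"
```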
Info
Step 3 is applicable only for Self-Hosted Private Cloud Director
4. Once the underlying Cinder host is identified, review the pf9-cindervolume-base service status; it should be up and running.
5. Review the /var/log/pf9/cindervolume-base.log logs and check for any errors related to the volume UUID.
If these steps prove insufficient to resolve the issue, reach out to the Platform9 Support Team for additional assistance.
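The host-side checks above can be sketched as follows, using the service name and log path given in this guide; the volume UUID is a placeholder:

```shell
# On the Persistent Storage host identified in the previous steps.
systemctl status pf9-cindervolume-base

# Search the volume service log for errors tied to the affected volume.
VOLUME_UUID="<VOLUME_UUID>"
grep "$VOLUME_UUID" /var/log/pf9/cindervolume-base.log | grep -iE 'error|warn'
```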
Most common causes
Volume Stuck in Creating / Deleting / Detaching State
Volume Attach Failure
Cinder Scheduler Can’t Place Volume
Incorrect storage backend configuration