New Storage Configuration in Cluster Blueprint Is Not Reflected on Persistent Storage Role Host

Problem

Newly added hosts with the persistent storage role applied are stuck in the converging state.

Environment

  • Private Cloud Director Virtualization - v2025.4 and Higher

  • Self-Hosted Private Cloud Director Virtualization - v2025.4 and Higher

  • Component: Blueprint and Persistent Storage

Cause

The Cinder configuration applied from the cluster blueprint is not reflected on the Cinder backend host because syntax errors in cinder.conf prevent the host agent from parsing the file.

Diagnostics

Check the host agent logs for errors like the following:

hostagent.log
pf9_app.py ERROR - pf9-cindervolume-config:get_config failed:  Source contains parsing errors: '/opt/pf9/etc/pf9-cindervolume-base/conf.d/cinder.conf'
        [line 147]: 'volume_dri\n'
session.py ERROR - Bad message, app config or reading current app config. Message : {'opcode': 'heartbeat'}
Traceback (most recent call last):
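The "Source contains parsing errors" message above is raised by Python's configparser when it hits a line that is neither a section header nor a key/value pair (here, the truncated volume_dri entry). As a sketch, you can reproduce the same check the agent performs; the /tmp path and sample content below are illustrative only, and on the affected host you would point the script at /opt/pf9/etc/pf9-cindervolume-base/conf.d/cinder.conf instead:

```shell
# Write a sample config containing a truncated line, then parse it the way
# the agent does (assumption: the agent uses Python's configparser, which
# produces the "Source contains parsing errors" message seen in the log).
cat > /tmp/cinder.conf <<'EOF'
[DEFAULT]
enabled_backends = lvm-backend
volume_dri
EOF
python3 -c "
import configparser
try:
    configparser.ConfigParser().read('/tmp/cinder.conf')
    print('cinder.conf parsed OK')
except configparser.Error as e:
    print('parse error:', type(e).__name__)
"
```

A clean parse ("cinder.conf parsed OK") confirms the file is syntactically valid; a parse error points at the line that must be fixed.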

Resolution

On the new host with the persistent storage role applied, perform the following manual steps to apply the Cinder configuration:

  1. Edit the Cinder configuration file on the Cinder host, /opt/pf9/etc/pf9-cindervolume-base/conf.d/cinder.conf, and fix or remove any truncated line reported in the host agent log (such as the incomplete volume_dri entry above).

  2. Append the new backend sections, as defined in the cluster blueprint, to the end of the file:
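For example (illustrative only — the section name, volume_backend_name, driver, and options must match what is defined in your cluster blueprint; an NFS backend is assumed here):

```ini
# Example backend section — substitute the values from your cluster blueprint.
[nfs-backend-1]
volume_backend_name = nfs-backend-1
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
```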

  3. Update the enabled backends: in the [DEFAULT] section, append the new backends to the enabled_backends option:
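For example, assuming an existing backend named lvm-backend and the illustrative nfs-backend-1 section added above (use your own backend names):

```ini
[DEFAULT]
# Comma-separated list of backend section names; keep the existing entries
# and append the new ones.
enabled_backends = lvm-backend,nfs-backend-1
```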

  4. Restart the pf9-cindervolume-base service:
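Assuming the role runs as a systemd unit of the same name on the host:

```shell
# Restart the Cinder volume service and confirm it comes back healthy.
sudo systemctl restart pf9-cindervolume-base
sudo systemctl status pf9-cindervolume-base --no-pager
```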

Validation

  1. Check that the new backends are mounted on the host:
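For file-based backends such as NFS, the shares should appear in the host's mount table. A quick check (the grep patterns are illustrative; adjust them to your mount paths and filesystem type):

```shell
# Look for the Cinder backend mounts on the host.
mount | grep -i cinder
# For NFS backends, list mounted NFS filesystems with usage.
df -h -t nfs -t nfs4
```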

  2. Check the backend pools:
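One way to check, assuming the OpenStack CLI is installed and admin credentials are loaded in the shell; the new backends should appear as pools:

```shell
# List the Cinder backend storage pools reported by the scheduler.
openstack volume backend pool list
# Equivalent check with the legacy cinder client:
cinder get-pools
```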

  3. If both commands show the mounts and the new backends, the configuration has been applied successfully.
