# New Storage Configuration in Cluster Blueprint Is Not Reflected on Persistent Storage Role Host

### Problem <a href="#problem" id="problem"></a>

Newly added hosts with the persistent storage role applied are stuck in the converging state.

### Environment <a href="#environment" id="environment"></a>

* Private Cloud Director Virtualization - v2025.4 and higher
* Self-Hosted Private Cloud Director Virtualization - v2025.4 and higher
* Component: Blueprint and Persistent Storage

### Cause <a href="#cause" id="cause"></a>

The Cinder configuration applied from the cluster blueprint is not reflected on the Cinder backend host because of a syntax error in `cinder.conf` (in the logs below, a truncated `volume_dri` key).

### Diagnostics <a href="#diagnostics" id="diagnostics"></a>

Check the host agent logs for parsing errors:

{% code title="hostagent.logs" %}

```bash
pf9_app.py ERROR - pf9-cindervolume-config:get_config failed:  Source contains parsing errors: '/opt/pf9/etc/pf9-cindervolume-base/conf.d/cinder.conf'
        [line 147]: 'volume_dri\n'
session.py ERROR - Bad message, app config or reading current app config. Message : {'opcode': 'heartbeat'}
Traceback (most recent call last):
```

{% endcode %}
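The `Source contains parsing errors` message in the log is raised by Python's `configparser`, so the failure can be reproduced outside the host agent with a few lines of shell. The sketch below simulates the broken file with a truncated key; on an affected host, point the parser at `/opt/pf9/etc/pf9-cindervolume-base/conf.d/cinder.conf` instead of the temp file:

```shell
# Simulate the truncated key from the log ('volume_dri' with no value).
cat > /tmp/cinder-broken.conf <<'EOF'
[DEFAULT]
enabled_backends = nfs-backend

[nfs-backend]
volume_dri
EOF

# Parse it the same way the agent's config reader does; a key with no
# '=' and no value raises configparser.ParsingError.
python3 - <<'EOF'
import configparser
try:
    configparser.ConfigParser().read('/tmp/cinder-broken.conf')
except configparser.ParsingError as e:
    print('Parse error detected:', e)
EOF
```

The same one-liner, pointed at the real file, is a quick way to confirm whether `cinder.conf` on a stuck host is syntactically valid.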

### Resolution <a href="#resolution" id="resolution"></a>

On the new host with the persistent storage role applied, perform the following manual steps to apply the Cinder configuration:

1. Edit the Cinder configuration file on the host: `/opt/pf9/etc/pf9-cindervolume-base/conf.d/cinder.conf`
2. Add the new backend sections at the end:

   <pre data-title="Example:"><code>[HOSTNAME]
   volume_driver = cinder.volume.drivers.vmstore.nfs.VmstoreNfsDriver
   volume_backend_name = [VOLUME_NAME]
   nas_host = [HOST_IP]
   vmstore_user = cinder
   nas_share_path = [PATH]
   vmstore_password = [VMSTORE_PASSWORD]
   nfs_mount_options = vers=3,proto=tcp,lookupcache=pos,nolock,noacl
   nfs_snapshot_support = true
   vmstore_rest_address = [VMSTORE_ADDRESS]
   vmstore_qcow2_volumes = true
   vmstore_rest_retry_count = 1
   </code></pre>
3. In the `[DEFAULT]` section, append the new backends to `enabled_backends`:

   <pre data-title="cinder.conf"><code>enabled_backends = [BACKEND_1],[BACKEND_2],[BACKEND_3]
   </code></pre>
4. Restart the pf9-cindervolume-base service:

   ```bash
   sudo systemctl restart pf9-cindervolume-base.service
   ```
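The edit-and-restart flow above can be rehearsed on a scratch copy before touching the live file; the key check is that `cinder.conf` still parses after the edits, since the host agent will otherwise loop on the same parse error. All paths and backend names below are illustrative placeholders, not product defaults:

```shell
CONF=/tmp/cinder-test.conf   # on a real host: /opt/pf9/etc/pf9-cindervolume-base/conf.d/cinder.conf

# Stand-in for the existing config with one backend already enabled.
cat > "$CONF" <<'EOF'
[DEFAULT]
enabled_backends = backend_1
EOF

# Step 2: append the new backend section at the end of the file.
cat >> "$CONF" <<'EOF'

[backend_2]
volume_driver = cinder.volume.drivers.vmstore.nfs.VmstoreNfsDriver
volume_backend_name = backend_2
EOF

# Step 3: append the new backend to enabled_backends in [DEFAULT].
sed -i 's/^enabled_backends = .*/&,backend_2/' "$CONF"

# Sanity check: the file must parse cleanly before restarting the service.
python3 - "$CONF" <<'EOF'
import configparser, sys
cp = configparser.ConfigParser()
cp.read(sys.argv[1])
print('sections:', cp.sections())
print('enabled_backends =', cp['DEFAULT']['enabled_backends'])
EOF
```

If the final parse step fails, fix the reported line before running `systemctl restart`; restarting with a broken file leaves the host in the same converging state.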

### Validation <a href="#validation" id="validation"></a>

1. Check that the new backends are mounted on the host:

   <pre class="language-bash" data-title="Persistent Storage Host"><code class="lang-bash"># Replace the example IPs with the nas_host values of the new backends
   $ mount | grep -E '10.18.8.6|10.18.8.7'
   </code></pre>
2. Check the backend pool list for the new backends:

   <pre class="language-bash"><code class="lang-bash">$ openstack volume backend pool list --long | grep -E '[CINDER_HOSTNAME]'
   </code></pre>
3. If both commands show the mounts and the new backends, the configuration is applied successfully.

