How to Authenticate Ceph Using Libvirtd Secrets
Problem
When creating a new instance on a host where libvirtd secret authentication for Ceph has not been configured, the following traceback is thrown in the logs:
2023-12-11 11:13:32.474 ERROR nova.compute.manager [req-ecdXXXXXXXX8000 User_name] [instance: d56c91ea-XXXXXXXX6d8cbe] Failed to build and run instance: libvirt.libvirtError: Secret not found: no secret with
matching uuid '0c3b58a3-XXXXXXXXXXbaa489'
2023-12-11 11:13:32.474 TRACE nova.compute.manager [instance: d56c91ea-XXXXXXXX6d8cbe] Traceback (most recent call last):
2023-12-11 11:13:32.474 TRACE nova.compute.manager [instance: d56c91ea-XXXXXXXX6d8cbe] File "/opt/pf9/venv/lib/python3.9/site-packages/nova/compute/manager.py", line 2431, in _build_and_run_instance
2023-12-11 11:13:32.474 TRACE nova.compute.manager [instance: d56c91ea-XXXXXXXX6d8cbe] raise libvirtError('virDomainCreateWithFlags() failed')
2023-12-11 11:13:32.474 TRACE nova.compute.manager [instance: d56c91ea-XXXXXXXX6d8cbe] libvirt.libvirtError: Secret not found: no secret with matching uuid '0c3b58a3-a22e-4aa1-a917-999e13baa489'
2023-12-11 11:13:32.474 TRACE nova.compute.manager [instance: d56c91ea-XXXXXXXX6d8cbe]
2023-12-11 11:13:32.567 ERROR os_vif [req-ecdXXXXXXXX8000 User_name] Failed to unplug vif VIFBridge(active=False,address=fa:XXXXXXXc8,bridge_name='xxxxxxx',has_traffic_filtering=True,id=a78fe8ea-67fa-4e41-bf3
9-0ebd5dfb405b,network=Network(4b9xxxxxxxxxxxec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa78fe8ea-67'): pyroute2.netlink.exceptions.NetlinkError: (95, 'Operation not supported')
2023-12-11 11:13:32.567 TRACE os_vif Traceback (most recent call last):
2023-12-11 11:13:32.567 TRACE os_vif File "/opt/pf9/venv/lib/python3.9/site-packages/nova/compute/manager.py", line 2431, in _build_and_run_instance
2023-12-11 11:13:32.567 TRACE os_vif self.driver.spawn(context, instance, image_meta,
2023-12-11 11:13:32.567 TRACE os_vif File "/opt/pf9/venv/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3718, in spawn
2023-12-11 11:13:32.567 TRACE os_vif self._create_domain_and_network(
2023-12-11 11:13:32.567 TRACE os_vif File "/opt/pf9/venv/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 6672, in _create_domain_and_network
2023-12-11 11:13:32.567 TRACE os_vif self._cleanup_failed_start(
2023-12-11 11:13:32.567 TRACE os_vif File "/opt/pf9/venv/lib/python3.9/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2023-12-11 11:13:32.567 TRACE os_vif self.force_reraise()
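The error indicates that libvirt on the compute host has no secret matching the UUID that Nova passes when attaching the Ceph (RBD) backed disk. As a quick check (assuming root shell access on the affected compute host, as in the procedure below), list the secrets known to libvirt; if the UUID from the error message is absent, follow the procedure below.
root@compute# virsh secret-list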
Environment
- Platform9 Managed OpenStack v5.8 and higher.
- Rocky Linux 9.1 and higher.
Procedure
The recommended steps to configure the libvirtd secret are listed below:
Set Up Ceph Client Authentication
1. Create new users for Cinder and Glance. Execute the following commands on the ceph-deploy admin node.
$ ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
$ ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
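To confirm the users were created with the intended capabilities, their entries can be printed from the Ceph auth database (an optional, read-only check):
$ ceph auth get client.cinder
$ ceph auth get client.glance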
2. Add the keyrings for client.cinder and client.glance to the respective nodes and change their ownership as shown below.
$ ceph auth get-or-create client.glance | ssh root@controller sudo tee /etc/ceph/ceph.client.glance.keyring
$ ssh root@controller sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
$ ceph auth get-or-create client.cinder | ssh root@cinder sudo tee /etc/ceph/ceph.client.cinder.keyring
$ ssh root@cinder sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
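As an optional sanity check, verify that the keyrings landed on the right hosts with the expected ownership (assuming the same controller and cinder hostnames used above):
$ ssh root@controller ls -l /etc/ceph/ceph.client.glance.keyring
$ ssh root@cinder ls -l /etc/ceph/ceph.client.cinder.keyring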
3. Nodes running nova-compute (compute nodes) need the keyring file for the nova-compute process. Add the keyring file as shown below.
$ ceph auth get-or-create client.cinder | ssh root@compute sudo tee /etc/ceph/ceph.client.cinder.keyring
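The compute node also needs the raw secret key of the client.cinder user; step 4 below reads it from a temporary file named client.cinder.key. The following command (assuming the compute host is reachable as root@compute, as in the other steps) stores the key in that file:
$ ceph auth get-key client.cinder | ssh root@compute tee client.cinder.key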
4. Add the secret key to libvirt and remove the temporary copy of the key on the compute node as shown below:
root@compute# uuidgen
The output will be a UUID similar to 457eb676-33da-42ec-9a8c-9293d545c337; use this value in the secret definition and commands below.
root@compute# cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457xxxxxxxxxxxxxxc337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
root@compute# virsh secret-define --file secret.xml
Secret 457xxxxxxxxxxxxxxc337 created
root@compute# virsh secret-set-value --secret 457xxxxxxxxxxxxxxc337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
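To verify that the secret is defined and holds the key, list the libvirt secrets and optionally read the value back (note that virsh secret-get-value prints the base64-encoded key, so avoid capturing it in logs):
root@compute# virsh secret-list
root@compute# virsh secret-get-value --secret 457xxxxxxxxxxxxxxc337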
Configure OpenStack to Use Ceph
1. Configuring Glance
To use the Ceph block device (RBD) as the default store, configure Glance as shown below. Edit the following entries in /etc/glance/glance-api.conf under the [DEFAULT] section on the controller node.
——————————————-
default_store = rbd
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8
# show_image_direct_url enables copy-on-write cloning of images
show_image_direct_url = True
rbd_store_ceph_conf = /etc/ceph/ceph.conf
——————————————-
To avoid images getting cached under /var/lib/glance/image-cache/, add the following entry under the [paste_deploy] section in /etc/glance/glance-api.conf.
——————————–
flavor = keystone
——————————–
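After restarting the Glance API service with these settings, you can confirm that newly uploaded images are stored in the Ceph images pool (a sketch; the image name and source file are placeholders, and the rbd command assumes a keyring with access to the images pool is available where it is run):
$ openstack image create --disk-format raw --container-format bare --file cirros.raw cirros-rbd-test
$ rbd -p images ls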
2. Configuring Cinder
OpenStack requires the RBD driver to interact with Ceph block devices, as well as the pool name to use for volumes. On the Cinder node, edit /etc/cinder/cinder.conf and add the following entries under the [DEFAULT] section.
——————————————————–
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
# rbd_secret_uuid must match the libvirt secret UUID created earlier
rbd_secret_uuid = 457xxxxxxxxxxxxxxc337
——————————————————-
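After restarting the Cinder volume service, a quick end-to-end check is to create a small test volume and confirm that its backing RBD image appears in the volumes pool (a sketch; the volume name is a placeholder, and the rbd command assumes a keyring with access to the volumes pool):
$ openstack volume create --size 1 ceph-secret-test
$ rbd -p volumes ls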