Troubleshooting GPU Support
When you configure GPU support or deploy GPU-enabled VMs, you might encounter configuration errors or issues with GPU resource availability. The following troubleshooting information addresses specific problems that have been identified during GPU setup and operations.
Before troubleshooting GPU issues, verify that your GPU model is supported and that you have completed all infrastructure setup steps. Most GPU errors result from incomplete configuration or mismatched settings between hosts and flavors.
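Before you start, it can help to confirm that the host actually sees the GPU and that the NVIDIA driver is loaded; the commands below are a minimal check and assume the NVIDIA driver is already installed on the host.
# List NVIDIA devices on the PCI bus and note their PCI IDs
lspci -nn | grep -i nvidia
# Confirm the driver is loaded and report the GPU model and driver version
nvidia-smi --query-gpu=name,driver_version --format=csv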
vGPU script fails with SR-IOV unbindLock error
When running the vGPU configuration script, you might encounter the following error during SR-IOV configuration.
Step 3: Enable SR-IOV for NVIDIA GPUs
Detecting PCI devices from /sys/bus/pci/devices...
Found the following NVIDIA PCI devices:
Found the following NVIDIA devices: 0000:c1:00.0
Enter the full PCI device IDs (e.g., 0000:17:00.0 0000:18:00.0) to enable sriov, separated by spaces.
Press Enter without input to configure ALL listed NVIDIA GPUs:
No PCI device IDs provided. Configuring all NVIDIA GPUs...
Enabling SR-IOV for 0000:c1:00.0...
Enabling VFs on 0000:c1:00.0
Cannot obtain unbindLock for 0000:c1:00.0
This error occurs when NVIDIA services are holding a lock on the PCI device, preventing SR-IOV configuration.
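Before stopping services, you can optionally check which processes still have the device open; the commands below are a sketch and assume the lsof and fuser utilities are installed on the host.
# List processes with open handles on the NVIDIA device nodes, if any
lsof /dev/nvidia* 2>/dev/null
fuser -v /dev/nvidia* 2>/dev/null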
Prerequisites for this resolution:
- The NVIDIA drivers and license are installed.
- The NVIDIA license server is created and configured.
Resolution steps:
- Stop all NVIDIA-related services that might be holding the device lock:
systemctl stop nvidia-vgpu-mgr
systemctl stop nvidia-persistenced || echo "nvidia-persistenced not running"
systemctl stop nvidia-dcgm || echo "nvidia-dcgm not running"
killall -q nv-hostengine || echo "nv-hostengine not running"
- Remove and rescan the PCI device to reset its state:
echo 1 > /sys/bus/pci/devices/0000:c1:00.0/remove
echo 1 > /sys/bus/pci/rescan
Replace 0000:c1:00.0 with your actual PCI device ID from the error message.
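After the rescan, you can confirm the device is visible again before continuing, for example:
# The device should be listed again after the rescan
lspci -s 0000:c1:00.0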
- Manually enable SR-IOV for the GPU device:
/usr/lib/nvidia/sriov-manage -e 0000:c1:00.0
- Restart the NVIDIA vGPU manager:
systemctl start nvidia-vgpu-mgr
- Verify SR-IOV configuration by checking for virtual functions:
nvidia-smi vgpu
mdevctl list
Expected output after successful resolution:
# nvidia-smi vgpu output
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 570.148.06 Driver Version: 570.148.06 |
|---------------------------------+------------------------------+------------|
| GPU Name | Bus-Id | GPU-Util |
| vGPU ID Name | VM ID VM Name | vGPU-Util |
|=================================+==============================+============|
| 0 NVIDIA L4 | 00000000:C1:00.0 | 0% |
| 3251634321 NVIDIA L4-1A | ef2a... ef2acb59-4400-4... | 0% |
+---------------------------------+------------------------------+------------+
# mdevctl list output
8bfe3867-7918-4da6-b6f9-1a3ff4db030c 0000:c1:02.6 nvidia-918
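If you want to confirm SR-IOV at the PCI level as well, the virtual functions created by sriov-manage should also be visible in sysfs; the paths below assume the PCI device ID from this example.
# Number of virtual functions currently enabled on the physical GPU
cat /sys/bus/pci/devices/0000:c1:00.0/sriov_numvfs
# Each VF appears as a virtfnN symlink under the physical device
ls -l /sys/bus/pci/devices/0000:c1:00.0/ | grep virtfn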
vGPU profiles missing during host configuration
When configuring vGPU on a GPU host, certain vGPU profiles may be missing from the available options, even though your GPU model supports them.
This occurs when existing mediated devices (stale or active) are present in /sys/bus/mdev/devices/. The system prevents selection of any vGPU profile that would create a number of GPU slices less than or equal to the number of existing devices.
For example, if 2 devices are present in /sys/bus/mdev/devices/, any vGPU profile that provides 2 or fewer GPU slices is not listed under GPU hosts for configuration.
Resolution steps:
- Check for existing mediated devices:
ls /sys/bus/mdev/devices/
- Count the number of devices present:
ls /sys/bus/mdev/devices/ | wc -l
- Stop NVIDIA vGPU services:
systemctl stop nvidia-vgpu-mgr
- Remove existing mediated devices (an mdevctl-based alternative is sketched after these steps):
for device in /sys/bus/mdev/devices/*; do
  if [ -d "$device" ]; then
    echo 1 > "$device/remove"
  fi
done
- Restart NVIDIA vGPU services:
systemctl start nvidia-vgpu-mgr
- Verify the devices are cleared:
ls /sys/bus/mdev/devices/
- Return to the GPU host configuration in PCD to see the full list of available vGPU profiles.
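If mdevctl is available on the host, an alternative to the sysfs loop in the steps above is to stop each mediated device by its UUID; this is a sketch that assumes every device reported by mdevctl list is one you want to clear.
# Stop every mediated device reported by mdevctl (the first column is the UUID)
for uuid in $(mdevctl list | awk '{print $1}'); do
  mdevctl stop -u "$uuid"
done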
VM creation fails with GPU model validation error
If you select a GPU model in the VM flavor that doesn't match the GPU configured on the host, the system returns an error during VM creation. This validation ensures that the GPU selected in the flavor aligns with the underlying GPU hardware.
To resolve this issue, verify that the GPU model specified in your flavor matches the GPU model configured on the host.
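If your deployment exposes the standard OpenStack CLI, one way to compare the two sides is to inspect the flavor's GPU-related properties and the GPU model reported on the host; the exact extra-spec key that carries the GPU model depends on your release, so treat this as a sketch.
# Show the flavor's extra specs, including any GPU-related properties
openstack flavor show <flavor-name> -c properties
# On the GPU host, report the installed GPU model
nvidia-smi --query-gpu=name --format=csv,noheader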
GPU host authorization and visibility issues
GPU host does not appear as a listed GPU host
After you run the GPU configuration script and authorize the host, it should appear in the GPU host list, showing its compatibility (passthrough or vGPU), GPU model, and device ID.
If your GPU host does not appear:
- Verify the GPU configuration script ran successfully on the host.
- Confirm you rebooted the host after running the script.
- Check that host authorization completed.
- Ensure that GPU support is enabled in both the host configuration and the cluster.
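On the host itself, a quick way to confirm that the configuration script took effect after the reboot is to check that the NVIDIA vGPU manager is running and that the GPU is visible on the PCI bus; this sketch applies to the vGPU path (for passthrough hosts, only the lspci check is relevant).
# Confirm the vGPU manager service is active
systemctl status nvidia-vgpu-mgr
# Confirm the GPU (and any SR-IOV virtual functions) appear on the PCI bus
lspci -nn | grep -i nvidia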
Script execution requirements
The GPU configuration script requires administrator privileges to run. Only administrators with access to the host and script should execute it. End users or developers requesting GPU resources do not need to run the script themselves.