June 2025 Release
The latest release of Platform9 Private Cloud Director includes new features, usability improvements, and resolved issues to enhance product stability and performance.
New Features
Kubernetes Cluster Support for Self-Hosted Deployments
Private Cloud Director now supports Kubernetes clusters on self-hosted deployments. You can launch and manage Kubernetes clusters directly from the Private Cloud Director UI.
QoS Configuration Support Added for Networking Service
QoS (Quality of Service) is now enabled in the networking service configuration. Users can define and apply QoS policies using the OVN driver for improved bandwidth management.
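As an illustration, and assuming the standard OpenStack Networking CLI is available against your region, a bandwidth-limiting QoS policy can be created and attached to a port roughly as follows (the policy name, rule values, and port name below are placeholders, not PCD defaults):

```bash
# Create a QoS policy and add an egress bandwidth-limit rule (values are examples)
openstack network qos policy create bw-limit-policy
openstack network qos rule create --type bandwidth-limit \
  --max-kbps 10000 --max-burst-kbits 8000 --egress bw-limit-policy

# Attach the policy to an existing port (replace with your own port name or ID)
openstack port set --qos-policy bw-limit-policy my-vm-port
```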
VM High Availability (VMHA) Support for Two-Node Clusters
Private Cloud Director now supports VM High Availability (VMHA) on clusters with a minimum of two hosts. This enhancement enables more flexible HA configurations, especially for environments with limited hardware, and simplifies adoption for customers with smaller or edge clusters.
Beta
Predictive Dynamic Resource Rebalancing (DRR)
Private Cloud Director now introduces a predictive mode for Dynamic Resource Rebalancing (DRR), expanding on the default proactive and reactive models already in use. This enhancement uses historical CPU and RAM data to forecast utilization trends and proactively identify hosts within a cluster that may run out of capacity.
The predictive mode is currently in beta and is not enabled by default. It is not accessible through the UI and must be activated on a per-cluster basis by the Platform9 support team.
If you would like to explore predictive DRR for your virtualized clusters, please get in touch with the Platform9 Support team.
Support for Physical Nodes in Kubernetes
Private Cloud Director now supports creating, scaling, and deleting Kubernetes clusters using physical servers, in addition to virtual machines. You can now create a Kubernetes cluster using one or more physical nodes.
Beta
Application Catalog & Orchestration
You can now use the application catalog to simplify the deployment and management of multi-VM applications. The Private Cloud Director application catalog uses open source Terraform under the hood to enable you to orchestrate complex applications that may involve multiple virtual machines, networks, storage volumes, and other Private Cloud Director objects.
Through this feature, you can:
- Store application templates as Terraform files in GitHub with secure API token access
- Provision one or more applications using the Private Cloud Director UI.
Beta
GPU Acceleration for High-Performance VM and Kubernetes Workloads
You can now run AI/ML, rendering, and simulation workloads directly on the Private Cloud Director console using GPU-enabled VMs and Kubernetes clusters. Administrators can configure passthrough or fractional vGPU modes (beta) for VMs, and enable GPU support for Kubernetes clusters using passthrough, Time Slice, or Multi-Instance GPU (MIG) modes. This feature helps run compute-heavy workloads efficiently without relying on external environments, improving performance for modern high-demand applications.
Enhancements
Core Component Upgrade to 2024.1 (Caracal)
Upgraded the core components of Compute, Storage, Images, and Identity services to the 2024.1 (Caracal) release. This update delivers improved stability, security, and compatibility with the latest features across the console.
Enhanced Observability Metrics
This release introduces improved observability in Private Cloud Director with additional metrics now available at the Cluster, Host, and VM levels. These metrics include:
- CPU, memory, and storage utilization
- CPU throttling
- Network data (received and transmitted)
- Disk IOPS
- Breakdown of allocated vs. used resources
In addition, credential management for Grafana access has been updated. For new installations, Grafana login uses the management plane admin credentials by default. However, after an upgrade from previous releases, Grafana continues to use the default credentials (`admin`/`admin`) or the credentials previously configured by the admin.
Metrics Collection With Prometheus
With this release, Private Cloud Director now utilizes open source Prometheus for collecting and storing resource metrics.
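As a rough sketch, if the Prometheus HTTP API is reachable in your deployment, instant queries can be issued directly against it; the endpoint path is standard Prometheus, while the host, port, and metric name below are placeholders rather than PCD-specific values:

```bash
# Query the Prometheus HTTP API for an instant value (endpoint and metric are illustrative)
curl -s 'http://<prometheus-host>:9090/api/v1/query' \
  --data-urlencode 'query=node_memory_MemAvailable_bytes'
```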
VM Migration Priority with DRR
This release adds support for configuring VM migration priority, which determines when a virtual machine is selected for migration once the DRR service identifies a candidate host for rebalancing. You can read more about DRR and VM Migration Priority here.
Dell Unity Storage Driver Support
Added support for Dell Unity storage systems by including the required `storops` Python package in Storage service deployments. This enhancement simplifies integration and enables out-of-the-box block storage management for Dell Unity within the PCD console.
Support for Limiting Volume Types to Specific Tenants
You can now limit Volume Types to specific tenants on the Private Cloud Director console. To apply this setting, navigate to Storage > Volume Types, select a volume type, and choose Edit Tenants to assign access.
This enhancement improves tenant-level control and ensures better resource segmentation.
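If you prefer the command line, the same restriction can typically be applied through the underlying OpenStack block storage CLI; this is a hedged sketch in which the `gold-tier` volume type and the `engineering` project are illustrative names:

```bash
# Grant the project "engineering" access to the private volume type "gold-tier"
openstack volume type set --project engineering gold-tier

# Inspect the volume type to verify the change
openstack volume type show gold-tier
```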
Hostname Based Management Cluster Creation for Self-hosted Private Cloud Director
In addition to the existing IP based deployment of self-hosted Private Cloud Director management clusters, you can now use hostnames to create these clusters. This enables better interoperability with some external storage providers.
System Requirements Validation for Community Edition Installs
Community Edition installs now include system requirement checks before installation begins. The installer validates available CPU, memory, and disk resources to ensure the system meets minimum requirements. If the system does not meet the expected thresholds, the installation fails early or issues a warning. This enhancement helps avoid failed or degraded installations by identifying insufficient system resources upfront.
Additionally, the requirements to run Community Edition have been lowered to 8 vCPUs and 28 GB of usable memory, from the previous requirements of 16 vCPUs and 32 GB of memory.
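The validation runs inside the installer itself, but a quick manual pre-check along the same lines (a generic sketch, not the installer's exact logic) can be done with standard tools:

```bash
# Rough manual pre-check mirroring what the installer validates (not its exact logic)
echo "vCPUs: $(nproc)"                                   # expect >= 8
free -g | awk '/^Mem:/ {print "Memory: " $2 " GB"}'      # expect >= 28 GB usable
df -h / | awk 'NR==2 {print "Free disk on /: " $4}'      # ensure sufficient free space
```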
Enhanced Deployment Progress Visibility for Community Edition
Community Edition installation now displays real-time deployment progress on the CLI. This enhancement improves user experience by replacing the earlier static message with dynamic updates pulled from per-service deployment tracking. Users can now monitor the status of each service during installation, making the process more intuitive and transparent.
Upgrade Notes
The June 2025 release includes improvements to how cluster hosts are managed. Make sure the following configuration is applied to any hosts that may be missing it.
Enforced Cluster Assignment for Hypervisors
In this release, assigning hypervisors to clusters is now mandatory. The change aims to simplify host configuration and aligns with the multi-cluster feature introduced in the previous release, where cluster selection was optional.
Before upgrading, ensure that every hypervisor host is assigned to a cluster. Navigate to Infrastructure > Cluster Hosts, select a specific host, and then select Edit Roles to add the hypervisor.
VM High Availability (VMHA) Support for Existing Clusters
For older deployments being upgraded to this release, follow these steps to enable VMHA on two-node clusters:
- Disable VMHA before upgrading.
- Re-enable it once the upgrade is complete.
Manual AZ Name Update Required After Upgrade
Starting from the February release, the hypervisor availability zone (AZ) name defaults to the initial cluster name configured in the blueprint. If you are upgrading from this release, first disable VM High Availability (VMHA) in the blueprint. After upgrading to the April/June release, make sure to create your first cluster with the same name as the existing AZ name to align with the new default behavior.
Proxy Configuration for Image Management
Image management services now recognize proxy settings defined in `/etc/environment`. After upgrading, if the file appears incorrectly populated, restart the `pf9-hostagent` service to ensure proxy settings are applied correctly.
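For reference, the proxy variables in `/etc/environment` follow the standard format, and the host agent can be restarted with systemd; the proxy endpoints below are placeholders:

```bash
# Typical proxy entries in /etc/environment (replace with your proxy endpoints)
# http_proxy=http://proxy.example.com:3128
# https_proxy=http://proxy.example.com:3128
# no_proxy=localhost,127.0.0.1

# Restart the host agent so the proxy settings are applied
sudo systemctl restart pf9-hostagent
```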
Bug Fixes
Identity, Storage, and Networking Services
- Fixed an issue affecting `imagelibrary` hosts.
- `BlockDeviceMapping` configurations now allow LUN passthrough devices.
Self-Hosted Deployments
- `rabbitMQ` broker data is now handled correctly.
- `airctl delete-cluster` does not unmount `/var/lib/containerd`.
- `airctl check` timed out after 5 minutes if a node was unreachable. The check now marks the node as `Not Ready` and proceeds with validation on the remaining nodes.
- `airctl` can now use custom SSH ports for management clusters. Previously, the SSH port was hardcoded to 22, blocking access in environments with custom ports.
- `airctl status` incorrectly showed the node as `ready` even after `airctl stop` was executed and services were stopped.
- `airctl-backup` services caused backup and taint operations to fail if the `airctl` host went down. Pods were left in a terminating state, leading to inconsistent behavior.
- Fixed an issue when running `airctl unconfigure-du --force` followed by `airctl delete-cluster`.
PCD User Interface
- The `pcdctl` command now automatically trusts the local Private Cloud Director certificate authority.
- Added support for `--force` via CLI and Force Upload on the Private Cloud Director console.
Kubernetes on Private Cloud Director
- The `byohctl` CLI now runs `apt update` to fetch the dependency of the `ebtables` utility.
Known Limitations
- GPU Passthrough Limitation for VM Creation: When using GPU passthrough mode, only one GPU host configuration is allowed per region.
- GPU VM Creation Fails with `No Valid Host Was Found` Error: You may see the error `No valid host was found. There are not enough hosts available` when creating a VM using GPU passthrough flavors. This can occur if SR-IOV is not enabled for the GPU device. Verify that the GPU supports SR-IOV and enable it before configuring GPU passthrough.
- Cluster Names Must Be Unique Across Regions: Two clusters cannot share the same name across regions within the same tenant.
- Tenant Name Restriction: Spaces are not supported in tenant names. Use only alphanumeric characters, dashes, or underscores.
- Kubernetes Cluster Support Not Available on Upgraded On-Premise Deployments: Kubernetes cluster support is only available on fresh on-premise deployments of the Private Cloud Director console. Deployments upgraded to the 2025.6 release from an older version will not support this feature.
Known Issues
- VM HA does not honor the host liveness traffic network interface configured in the cluster blueprint in this release.
- VM HA and DRR do not support vTPM-enabled VMs. Live migration and evacuation are not possible, so these VMs will not be migrated automatically.
- If you are using NFS as the backend for block storage, set the `image_volume_cache_enabled` flag to `false`. If the flag is set to `true`, creating a VM from a cached image volume may lead to incorrect root disk sizing.
- SSO users are unable to create Heat orchestration stacks at this time.
- The `pcdctl config set` command is not supported for a user with MFA enabled.
- Image upload to encrypted volumes is currently unsupported. Volume encryption only works with empty volumes at this time.
- Currently, rescue mode is only supported for VMs with ephemeral storage. The rescue operation does not work for instances backed by volumes. Users attempting to rescue a volume-backed instance will encounter failures.