June 2025 Release

The latest release of Platform9 Private Cloud Director includes new features, usability improvements, and resolved issues to enhance product stability and performance.

New Features

Kubernetes Cluster Support for Self-Hosted Deployments

Private Cloud Director now supports Kubernetes clusters on self-hosted deployments. You can launch and manage Kubernetes clusters directly from the Private Cloud Director UI.

QoS Configuration Support Added for Networking Service

QoS (Quality of Service) is now enabled in the networking service configuration. Users can define and apply QoS policies using the OVN driver for improved bandwidth management.
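
For illustration, a bandwidth-limit policy could be created and attached to a port with the standard OpenStack networking CLI. This is a minimal sketch assuming CLI access to the region; the policy name, port ID, and rate values below are placeholders.

    # Create a QoS policy and add an egress bandwidth-limit rule (values are examples)
    openstack network qos policy create bw-limit-demo
    openstack network qos rule create --type bandwidth-limit \
        --max-kbps 100000 --max-burst-kbits 10000 --egress bw-limit-demo

    # Attach the policy to an existing port (or to a network with `openstack network set`)
    openstack port set --qos-policy bw-limit-demo <port-id>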

VM High Availability (VMHA) Support for Two-Node Clusters

Private Cloud Director now supports VM High Availability (VMHA) on clusters with a minimum of two hosts. This enhancement enables more flexible HA configurations, especially for environments with limited hardware, and simplifies adoption for customers with smaller or edge clusters.

Beta Predictive Dynamic Resource Rebalancing (DRR)

Private Cloud Director now introduces a predictive mode for Dynamic Resource Rebalancing (DRR), expanding on the default proactive and reactive models already in use. This enhancement uses historical CPU and RAM data to forecast utilization trends and proactively identify hosts within a cluster that may run out of capacity.

The predictive mode is currently in beta and is not enabled by default. It must be activated by the Platform9 support team on a per-cluster basis and is not accessible through the UI.

If you would like to explore predictive DRR for your virtualized clusters, please get in touch with the Platform9 Support team.

Support for Physical Nodes in Kubernetes

Private Cloud Director now supports creating, scaling, and deleting Kubernetes clusters using physical servers, in addition to virtual machines. You can now create a Kubernetes cluster using one or more physical nodes.

Beta Application Catalog & Orchestration

You can now use the application catalog to simplify the deployment and management of multi-VM applications. The Private Cloud Director application catalog uses open source Terraform under the hood, enabling you to orchestrate complex applications that span multiple virtual machines, networks, storage volumes, and other Private Cloud Director objects.

Through this feature, you can:

  • Store application templates as Terraform files in GitHub with secure API token access.
  • Provision one or more applications using the Private Cloud Director UI.
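
As a rough illustration of what a catalog template might contain, the sketch below uses the community OpenStack Terraform provider to declare a single VM on a network. The provider, resource names, and attribute values are assumptions and may differ from what the application catalog expects.

    # main.tf -- minimal example template (all names and values are placeholders)
    terraform {
      required_providers {
        openstack = {
          source = "terraform-provider-openstack/openstack"
        }
      }
    }

    resource "openstack_compute_instance_v2" "app_vm" {
      name        = "demo-app-vm"
      image_name  = "ubuntu-22.04"
      flavor_name = "m1.medium"

      network {
        name = "demo-network"
      }
    }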

Beta GPU Acceleration for High-Performance VM and Kubernetes Workloads

You can now run AI/ML, rendering, and simulation workloads directly on the Private Cloud Director console using GPU-enabled VMs and Kubernetes clusters. Administrators can configure passthrough or fractional vGPU modes (beta) for VMs, and enable GPU support for Kubernetes clusters using passthrough, Time Slice, or Multi-Instance GPU (MIG) modes. This feature helps run compute-heavy workloads efficiently without relying on external environments, improving performance for modern high-demand applications.
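
For the VM passthrough case, GPU capacity is typically requested through flavor extra specs. Below is a minimal sketch using standard OpenStack flavor properties; the alias name and sizing are placeholders, and the exact steps in Private Cloud Director may differ (for example, when configured through the UI).

    # Create a flavor that requests one passthrough GPU via a PCI alias (values are examples)
    openstack flavor create --vcpus 8 --ram 32768 --disk 100 gpu.example
    openstack flavor set gpu.example --property "pci_passthrough:alias"="gpu-a100:1"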

Enhancements

Core Component Upgrade to 2024.1 (Caracal)

Upgraded the core components of Compute, Storage, Images, and Identity services to the 2024.1 (Caracal) release. This update delivers improved stability, security, and compatibility with the latest features across the console.

Enhanced Observability Metrics

This release introduces improved observability in Private Cloud Director with additional metrics now available at the Cluster, Host, and VM levels. These metrics include:

  • CPU, memory, and storage utilization
  • CPU throttling
  • Network data (received and transmitted)
  • Disk IOPS
  • Breakdown of allocated vs. used resources

In addition, credential management for Grafana access has been updated. For new installations, Grafana login uses the management plane admin credentials by default. However, after an upgrade from a previous release, Grafana continues to use the default credentials (admin/admin) or any credentials previously updated manually by the admin.

Metrics Collection With Prometheus

With this release, Private Cloud Director now utilizes open source Prometheus for collecting and storing resource metrics.
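
Because metrics are stored in Prometheus, they can also be queried through the standard Prometheus HTTP API. The sketch below is purely illustrative: the endpoint is a placeholder and the metric name assumes node_exporter-style naming, which may not match the metric names PCD exposes.

    # Query 5-minute average CPU utilization per host from the Prometheus HTTP API (illustrative only)
    curl -G 'http://<prometheus-endpoint>/api/v1/query' \
        --data-urlencode 'query=100 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100'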

VM Migration Priority with DRR

This release adds support for configuring VM migration priority, which determines when a virtual machine is selected for migration once the DRR service identifies a candidate host for rebalancing. You can read more about DRR and VM migration priority here.

Dell Unity Storage Driver Support

Added support for Dell Unity storage systems by including the required storops Python package in Storage service deployments. This enhancement simplifies integration and enables out-of-the-box block storage management for Dell Unity within the PCD console.

Support for Limiting Volume Types to Specific Tenants

You can now limit Volume Types to specific tenants on the Private Cloud Director console. To apply this setting, navigate to Storage > Volume Types, select a volume type, and choose Edit Tenants to assign access.

This enhancement improves tenant-level control and ensures better resource segmentation.
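
The equivalent behavior is also available through the standard block storage CLI, assuming admin credentials; the type and project names below are placeholders.

    # Create a private volume type and grant a specific tenant access to it (names are examples)
    openstack volume type create --private gold-tier
    openstack volume type set --project demo-tenant gold-tier

    # Inspect the volume type
    openstack volume type show gold-tier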

Hostname-Based Management Cluster Creation for Self-Hosted Private Cloud Director

In addition to the existing IP-based deployment of self-hosted Private Cloud Director management clusters, you can now use hostnames to create these clusters. This enables better interoperability with some external storage providers.

System Requirements Validation for Community Edition Installs

Community Edition installs now include system requirement checks before installation begins. The installer validates available CPU, memory, and disk resources to ensure the system meets minimum requirements. If the system does not meet the expected thresholds, the installation fails early or issues a warning. This enhancement helps avoid failed or degraded installations by identifying insufficient system resources upfront.

Additionally, the requirements to run Community Edition have been lowered to 8 vCPUs and 28 GB of usable memory, down from the previous requirement of 16 vCPUs and 32 GB of memory.

Enhanced Deployment Progress Visibility for Community Edition

Community Edition installation now displays real-time deployment progress on the CLI. This enhancement improves user experience by replacing the earlier static message with dynamic updates pulled from per-service deployment tracking. Users can now monitor the status of each service during installation, making the process more intuitive and transparent.

Upgrade Notes

The June 2025 release includes improvements to how cluster hosts are managed. Make sure the following configuration is applied to any hosts that may be missing it.

Enforced Cluster Assignment for Hypervisors

In this release, assigning hypervisors to clusters is now mandatory. The change aims to simplify host configuration and aligns with the multi-cluster feature introduced in the previous release, where cluster selection was optional.

Before upgrading, ensure that every hypervisor host is assigned to a cluster. Navigate to Infrastructure > Cluster Hosts, select a specific host, and then select Edit Roles to add the hypervisor role.

VM High Availability (VMHA) Support for Existing Clusters

To enable VMHA on two-node clusters in older deployments that are being upgraded to this release, follow these steps:

  • Disable VMHA before upgrading.
  • Re-enable it once the upgrade is complete.

Manual AZ Name Update Required After Upgrade

Starting from the February release, the hypervisor availability zone (AZ) name defaults to the initial cluster name configured in the blueprint. If you are upgrading from that release, first disable VM High Availability (VMHA) in the blueprint. After upgrading to the April/June release, create your first cluster with the same name as the existing AZ to align with the new default behavior.

Proxy Configuration for Image Management

Image management services now recognize proxy settings defined in /etc/environment. After upgrading, if the file appears incorrectly populated, restart the pf9-hostagent service to ensure proxy settings are applied correctly.
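
As a minimal sketch, the proxy entries can be added to /etc/environment and the host agent restarted so the settings take effect; the proxy URL below is a placeholder.

    # Add proxy settings to /etc/environment (values are examples)
    echo 'http_proxy=http://proxy.example.com:3128'  | sudo tee -a /etc/environment
    echo 'https_proxy=http://proxy.example.com:3128' | sudo tee -a /etc/environment
    echo 'no_proxy=localhost,127.0.0.1'              | sudo tee -a /etc/environment

    # Restart the host agent so image management picks up the proxy settings
    sudo systemctl restart pf9-hostagent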

Bug Fixes

Identity, Storage, and Networking Services

Fixed Enabled volume encryption configuration to allow users to create encrypted volumes after setting up host-side requirements.

Fixed Volumes can now be uploaded to images without requiring metadata changes.

Fixed an issue where parallel onboarding of multiple nodes failed due to intermittent errors in the hostagent installer script. You can now onboard multiple nodes reliably.

Fixed VMs can now use volumes backed by Fibre Channel-based storage drivers.

Fixed Admins linked to a domain can now view and manage only the resources in their assigned domains.

Fixed Previously, when a host had the image library role assigned and was later deauthorized, it could lead to broken image service endpoints. Private Cloud Director now automatically updates the endpoint and picks one of the active image library hosts.

Fixed Volume BlockDeviceMapping configurations now allow LUN passthrough devices.

Fixed an issue where storage passwords in the volume management service were not decrypted before being sent to the storage drivers.

Fixed Improved the authentication service to make it highly available.

Fixed Volume creation no longer fails when the image-cluster endpoint is not reachable from a different region.

Self-Hosted Deployments

Fixed The self-hosted backup/restore commands now account for the RabbitMQ broker data correctly.

Fixed an issue where you could continue using old credentials after the admin password was reset using the CLI.

Fixed For self-hosted deployments, running airctl delete-cluster no longer unmounts /var/lib/containerd.

Fixed an issue where airctl check timed out after 5 minutes if a node was unreachable. The check now marks the node as Not Ready and proceeds with validation on the remaining nodes.

Fixed Added support in airctl to use custom SSH ports for management clusters. Previously, the SSH port was hardcoded to 22, blocking access in environments with custom ports.

Fixed an issue where airctl status incorrectly showed the node as ready even after airctl stop was executed and services were stopped.

Fixed an issue where lack of HA support for the node-taint and airctl-backup services caused backup and taint operations to fail if the airctl host went down. Pods were left in a terminating state, leading to inconsistent behavior.

Fixed Added a cleanup script to allow complete removal of Community Edition from a node using airctl unconfigure-du --force followed by airctl delete-cluster.

PCD User Interface

Fixed In the previous release, hosts would in some cases appear offline on the Private Cloud Director console even though they were operational. With this release, hosts report health status more accurately.

Fixed Deauthorizing a host in the cluster now removes all assigned roles.

Fixed New clusters can now be created using the same name as a previously deleted cluster.

Fixed Added a pcdctl command to automatically trust the local Private Cloud Director certificate authority.

Fixed Added a Clone VM option to simplify the VM duplication workflow. The action copies VM settings into the creation wizard with options to modify the network, flavor, or security groups.

Fixed Updated the Host Aggregate column on Infrastructure > Cluster Hosts to display all aggregates the host is assigned to, instead of showing only one.

Fixed an issue where linking a Host Aggregate during flavor creation or cloning failed silently, resulting in incomplete configurations.

Fixed DHCP ports can no longer be assigned to a VM's private IP.

Fixed an issue where the lease policy management would always pick Power Off as the end-of-lease action.

Fixed an issue where cluster names could not contain the underscore character.

Fixed an issue where host allocation ratios could not be customized during host aggregate creation.

Fixed Enabled image creation from volumes marked as "in use" by supporting --force via CLI and Force Upload on the Private Cloud Director console.

Fixed an issue where image deletion timed out if the image host was unreachable. Added a pre-check to verify host availability and display an appropriate message to the user.

Fixed an issue where removing all roles from a host also deleted the image library role even though a warning message on the Private Cloud Director console indicated it would be preserved.

Fixed Added an option to clone an existing Flavor to simplify flavor creation without having to manually input configuration values again.

Fixed Added support to mark a physical network as external from the Private Cloud Director console after its creation.

Fixed Updated the VMHA status display on Infrastructure > Clusters to show the actual HA status in addition to the desired state, which was the only state displayed previously.

Fixed Added support for managing migration priority of VMs on the Private Cloud Director console.

Fixed Added support for VM lease management at the tenant level.

Fixed Added support to boot virtual machines directly from volume snapshots on the PCD console.

Kubernetes on Private Cloud Director

Fixed During onboarding of physical hosts on Ubuntu 22, the byohctl CLI now runs apt update to fetch the ebtables utility dependency.

Fixed Improved API server security by replacing self-signed certificates with certificates signed by a known Certificate Authority (CA). You will no longer see Action Required: Trust Cluster Endpoint Certificate message on the Private Cloud Director console.

Known Limitations

  • GPU Passthrough Limitation for VM Creation: When using GPU passthrough mode, only one GPU host configuration is allowed per region.
  • GPU VM Creation Fails with No Valid Host Was Found Error: You may see the error "No valid host was found. There are not enough hosts available" when creating a VM using GPU passthrough flavors. This can occur if SR-IOV is not enabled for the GPU device. Verify that the GPU supports SR-IOV and enable it before configuring GPU passthrough; see the verification sketch after this list.
  • Cluster Names Must Be Unique Across Regions: Two clusters cannot share the same name across regions within the same tenant.
  • Tenant Name Restriction: Spaces are not supported in tenant names. Use only alphanumeric characters, dashes, or underscores.
  • Kubernetes Cluster Support Not Available on Upgraded On-Premise Deployments: Kubernetes cluster support is only available on fresh on-premise deployments of the Private Cloud Director console. Deployments upgraded to the 2025.6 release from an older version will not support this feature.
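
As a quick way to check SR-IOV support on the host, the standard Linux tooling below can be used; the example assumes an NVIDIA GPU, and the PCI address is a placeholder for the actual device.

    # Locate the GPU and inspect its capabilities for SR-IOV support
    lspci | grep -i nvidia
    sudo lspci -vvv -s 0000:3b:00.0 | grep -i "Single Root I/O Virtualization"

    # If SR-IOV is supported, this reports the number of virtual functions the device can expose
    cat /sys/bus/pci/devices/0000:3b:00.0/sriov_totalvfs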

Known Issues

  • VM HA does not honor the host liveness traffic network interface configured in the cluster blueprint in this release.
  • VM HA and DRR do not support vTPM-enabled VMs. Live migration and evacuation are not possible, so these VMs will not be migrated automatically.
  • If you are using NFS as the backend for block storage, set the image_volume_cache_enabled flag to false. If the flag is set to true, creating a VM from a cached image volume may lead to incorrect root disk sizing. A configuration sketch appears after this list.
  • SSO users are unable to create Heat orchestration stacks at this time.
  • The pcdctl config set command is not supported for a user with MFA enabled.
  • Image upload to encrypted volumes is currently unsupported. Volume encryption only works with empty volumes at this time.
  • Currently, rescue mode is only supported for VMs with ephemeral storage. The rescue operation does not work for instances backed by volumes. Users attempting to rescue a volume-backed instance will encounter failures.
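
For reference, the image_volume_cache_enabled flag mentioned above is a per-backend block storage option. The snippet below is a generic cinder.conf-style sketch; the backend section name is a placeholder, and how this setting is applied in Private Cloud Director may differ.

    # cinder.conf-style backend section (section name is an example)
    [nfs-backend]
    image_volume_cache_enabled = false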