
Frequently Asked Questions (FAQ) for VMware users moving to Private Cloud Director

Over the past few months, we’ve had numerous conversations with VMware users exploring alternatives to VMware vSphere Foundation (VVF) and VMware Cloud Foundation (VCF). 

Across these discussions, common questions keep coming up:

  • How easy is VM migration?
  • Is there a vMotion and DRS equivalent?
  • How is existing storage handled?
  • What about Disaster Recovery and High Availability?
  • Can I automate provisioning and integrate with existing tools like ServiceNow?
  • How does monitoring and reporting work?

In this blog, we’ve compiled a set of Frequently Asked Questions (FAQ) from VMware users evaluating Platform9 Private Cloud Director—along with clear, concise answers to help guide your decision-making.

Migration and Disaster Recovery (DR)

Can you migrate a Virtual Machine (VM) that crosses multiple Data Stores or comes from a vVol?

Yes, you can migrate a VM from a vVol. The platform is designed to handle diverse storage scenarios, allowing you to migrate VMs even if the disks come from different data stores and multiple disks are attached to the virtual machine.

How does the migration process handle VMware tools and drivers?

Platform9’s free automated migration tool, vJailbreak, removes VMware Tools and installs the necessary drivers, including oVirt drivers and KVM tools, as part of the migration process. The migration procedure can be scheduled by an admin, allows for the mapping of networks and disks, and supports live migration from a running VMware environment.

How does the platform handle backup and Disaster Recovery (DR) integrations?

Significant effort has gone into integrating with various backup and DR vendors. 

Fully certified backup platforms include:

  • Commvault
  • Storware
  • Trilio
  • Hystax

For Disaster Recovery, Commvault, Storware, and Hystax provide SRM-style disaster recovery, with active data replication to meet specific RPO and RTO requirements at disaster recovery sites.

Networking and Security

Is the cluster bond interface configured manually within the Operating System (OS)?

Today, network bonds must be manually configured at the OS level. Automating this through the platform is planned for 2025.

In addition to the network interface configuration, what else needs to be set up on the servers ahead of platform installation?

Typically, you’ll need to prepare the OS (e.g., Ubuntu 22.04) for the hypervisor, configure network interfaces (including bonds if required), and ensure the disk layout meets the platform’s requirements for storing configuration files. Any specialized interfaces, such as Fibre Channel or custom network connections, must also be configured.

Can the platform operate behind a firewall or in an air-gapped environment?

Yes. If you have a firewall, the platform can work through your proxy. For on-premises installations without internet access, the necessary packages can be cached locally.

What identity providers are supported for SSO into the platform?

The platform supports any SAML 2.0 provider, such as Google G Suite, Okta, or Microsoft ADFS, for Single Sign-On (SSO). Multi-factor authentication (MFA) tokens can also be used.

How are identity groups mapped within the platform?

Identity groups can be mapped to tenants, allowing for the setup of quotas. For instance, DevOps teams can have specific resource quotas assigned.

Storage

What is the purpose of the Storage role? Is it a physical host?

The Storage role is a logical function, not a dedicated physical host. It acts as a management gateway for communicating with persistent storage backends (such as Pure Storage arrays, as discussed in the examples). Compute hosts running VMs still access the storage devices directly. The Storage role simply handles management tasks, such as provisioning and tearing down volumes through API calls.

Is the Storage role similar to a VASA Provider?

Yes, it serves a function similar to that of a VASA provider in VMware environments.

What happens if the host assigned the Storage role goes down?

VMs with mounted storage will continue running without interruption. However, storage management operations—like provisioning new volumes or taking snapshots—will be temporarily unavailable. To avoid this risk, the Storage role can be configured for High Availability by assigning it to multiple hosts (e.g., two hosts managing the same storage array).

How does HA work for the Storage role?

HA operates in an active-passive mode. If the primary host becomes unavailable, a background service automatically redirects management API requests to the designated secondary host, ensuring continuity of storage operations.
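The active-passive behavior described above can be sketched in a few lines of Python. This is an illustrative model only; the class and host names are placeholders, not the platform's actual implementation or API.

```python
# Illustrative sketch of active-passive failover for a storage management
# endpoint; not product code. Management API calls prefer the primary host
# and are redirected to the secondary if the primary is marked down.

class StorageRoleHA:
    """Route storage management calls to the active host; fail over if down."""

    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary
        self.healthy = {primary: True, secondary: True}

    def mark_down(self, host):
        self.healthy[host] = False

    def active_endpoint(self):
        # Active-passive: prefer the primary, fall back to the secondary.
        if self.healthy[self.primary]:
            return self.primary
        if self.healthy[self.secondary]:
            return self.secondary
        raise RuntimeError("no storage-role host available")
```

Note that VMs keep their mounted storage throughout; only the management endpoint moves.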

Are dedicated hosts required for the Storage role?

No, dedicated hosts are not required. The Storage role is a lightweight management function with minimal resource needs. In most cases, the same hosts also serve as hypervisors running workloads.

Can storage retyping (storage migration) be performed on a running VM, or does it require downtime?

Yes, live storage retyping is supported—meaning you can migrate a VM’s storage without downtime, as long as the underlying storage array driver supports it. This includes migrations between volumes on the same array or across different arrays.

High Availability (HA)

Can you provide more details on the HA failover time?

The default failover time is three minutes, but this setting is configurable. A longer default helps prevent unnecessary failovers caused by transient network issues (avoiding split-brain scenarios). An isolation response feature, planned for release later in 2025, will further improve reliability and could enable faster, safer failovers.
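As a rough illustration (not product code), the trade-off between fast failover and split-brain avoidance comes down to requiring a host to miss heartbeats for the full configured window before it is declared failed:

```python
# Illustrative failover decision: a host is only declared failed once it has
# been unreachable continuously for the configured window, so short network
# blips do not trigger an unnecessary failover. Names are placeholders.

FAILOVER_WINDOW_SECONDS = 180  # default of three minutes; configurable

def should_fail_over(missed_heartbeats, heartbeat_interval,
                     window=FAILOVER_WINDOW_SECONDS):
    """Return True once consecutive missed heartbeats span the window."""
    return missed_heartbeats * heartbeat_interval >= window
```

A shorter window fails over faster but risks reacting to transient outages; the planned isolation response would let the window shrink safely.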

Is the HA feature currently limited to clusters of a certain size (e.g., four servers)?

Yes, currently, HA requires a minimum of four servers. Support for two-server HA is expected in the June 2025 release.

Virtual Machine Management and Tenancy

Does the platform offer an equivalent of vMotion?

Yes, the platform supports Live Migration, which allows running VMs to be moved between hypervisors within the same environment, including the VM’s in-memory state. Live migration also works with block storage, providing functionality equivalent to VMware’s Storage vMotion.

Is it possible to search for VMs by name across all tenants, or do you need to know the specific tenant?

Yes, you can search for virtual machines by name across all tenants. In very large-scale environments, this feature might be disabled by default for performance reasons, but it can be enabled as needed.

Are there performance issues if the cross-tenant VM view is left enabled?

For typical environments—such as those with around 6,000 VMs—no performance issues are expected. Minor impact may occur only in extreme cases where the VM-to-hypervisor ratio is unusually high.

Is the ability to limit resources set at the user level or strictly at the tenant level?

Resource limits can be set at the user level, but they are primarily managed at the tenant level. Users belong to tenants, and resource allocations are controlled within each tenant’s overall resource pool.

Can different users within the same tenant have different resource limits assigned to them?

Yes, you can assign compute resource limits (CPU and memory) to individual users within the same tenant. However, user-specific networking quotas are not supported.
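A hypothetical model of how such a check could work is shown below; the function and field names are illustrative, not the platform's API. A request must fit both the user's own CPU/memory limit (if one is set) and the tenant's remaining pool:

```python
# Illustrative quota check; not product code. Per-user limits cover CPU and
# memory only (no per-user networking quotas), nested inside the tenant pool.

def can_allocate(request, user_limit, tenant_remaining):
    """True if the request fits the user's limit (if set) and the tenant pool."""
    for resource in ("cpu", "memory_gb"):
        if user_limit and request[resource] > user_limit.get(resource, float("inf")):
            return False  # exceeds this user's individual limit
        if request[resource] > tenant_remaining[resource]:
            return False  # exceeds what is left in the tenant's pool
    return True
```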

Platform Architecture and Components

Which components of the platform are based on open source?

The platform incorporates several core open-source components from the OpenInfra community (now part of the Linux Foundation) and other open-source projects:

  • Compute, storage, and networking APIs — to ensure compatibility and broad integration support.
  • Monitoring — built on Prometheus and Grafana.

Over time, many components have been significantly modified or replaced to enhance functionality. For example, high availability (HA) and resource balancing are now custom-built implementations.

The goal is to leverage the strengths of open source—such as tenancy models—while delivering a user experience that feels familiar to VMware users.

Is the platform a fork of open source? Does Platform9 contribute back?

Yes, the platform is effectively a fork. This gives Platform9 control over the platform’s direction and the ability to re-architect components where upstream projects were insufficient (such as monitoring and high availability).

Platform9 contributes back to open source where feasible, typically when there’s no major difference in design approach. Some features, like cross-tenant search, have been developed entirely in-house. The platform remains API-compatible with the upstream services it builds upon.

How are platform upgrades handled? Is there downtime?

Upgrades cover both the host agents and the central management control plane:

  • Management plane upgrades require scheduled downtime, during which the UI and API services are temporarily unavailable. Zero-downtime upgrades for the management plane are targeted for a future release in 2025.
  • Host upgrades are performed sequentially, typically taking about ten minutes per host. These upgrades do not disrupt running virtual machines or workloads.
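The sequential host-upgrade behavior can be sketched as follows; `upgrade_host` is a hypothetical stand-in for the per-host agent upgrade step, not a real platform function:

```python
# Illustrative rolling (sequential) upgrade loop; not product code. Hosts are
# upgraded one at a time, so workloads on the other hosts are never affected.

def rolling_upgrade(hosts, upgrade_host):
    """Upgrade hosts one by one; stop at the first failure and report progress."""
    upgraded = []
    for host in hosts:
        if not upgrade_host(host):
            return upgraded, host  # hosts upgraded so far, and the failed host
        upgraded.append(host)
    return upgraded, None
```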

What is Platform9 Express?

Platform9 Express uses Ansible playbooks to automate the deployment process. It allows you to set up the platform through a designated server, avoiding the need for manual configuration through the API or UI.

Monitoring and Reporting

What is the basis for the platform’s monitoring capabilities?

Monitoring is built primarily on Prometheus and Grafana. Exporters collect metrics from across the infrastructure—hosts, storage, networking—as well as from platform components. These metrics power internal features like Dynamic Resource Rebalancing (DRR) and support planned capabilities such as auto-scaling within the application catalog.

Can monitoring and alerting be customized?

Yes. Since the platform uses standard components like Prometheus and Grafana, you can customize alerts and notifications using tools like AlertManager. This allows integrations with services such as Slack, email, and others based on your team’s requirements.
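Because the stack is standard Prometheus, ordinary alerting rules work unchanged. A generic example rule (not product-specific; the metric here assumes a standard node exporter) that Alertmanager could then route to Slack or email:

```yaml
# Generic Prometheus alerting rule; assumes node-exporter memory metrics.
groups:
  - name: host-alerts
    rules:
      - alert: HostHighMemory
        expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) > 0.9
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Memory usage on {{ $labels.instance }} above 90% for 10m"
```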

Are there built-in dashboards and reporting?

Grafana-based dashboards are available. There are plans to expose Grafana links directly from the platform UI for accessing built-in reports and creating custom ones. Enhanced built-in reporting within the UI (covering CPU, memory, storage, networking metrics) is also on the roadmap.

Is monitoring and alerting currently available for individual VMs?

Currently, the focus is on infrastructure-level metrics. Workload-level metrics (including individual VM monitoring and alerting) are targeted for a future release in 2025, with these metrics incorporated into the Prometheus backend.

Automation and Orchestration

Can customers fully automate the VM request and provisioning process using the platform?

Yes, some customers use the platform for self-service and have built custom front-ends or integrated it with existing systems like ServiceNow. Workflows can be set up where provisioning a blueprint triggers an approval process in ServiceNow before being fulfilled by the platform.

Is the automation workflow primarily developer-focused or application-focused?

It is primarily developer-focused. However, many customers integrate ServiceNow as the front-end, routing API calls to the platform backend while maintaining workflow logic and CMDB updates within ServiceNow.

Is there a direct equivalent product to vRA (vRealize Automation) and vRO (vRealize Orchestrator)?

While not a direct one-to-one product replacement, the platform provides the necessary APIs and integrations (e.g., with ServiceNow and Terraform) to achieve similar outcomes for self-service VM lifecycle management (provisioning, operations, and teardown). Terraform is being standardized as the method for managing application stacks (similar to vApp constructs), enabling automation across the platform, storage, IPAM (Infoblox), etc.
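Since the platform is API-compatible with the upstream OpenStack services it builds on, the community OpenStack Terraform provider is one way to express such a stack. The sketch below is illustrative only; the resource, image, flavor, and network names are placeholders:

```hcl
# Hedged sketch using the community OpenStack Terraform provider against an
# OpenStack-compatible API; all names below are illustrative placeholders.
resource "openstack_compute_instance_v2" "web" {
  name        = "web-01"
  image_name  = "ubuntu-22.04"
  flavor_name = "m1.medium"

  network {
    name = "app-network" # placeholder network name
  }
}
```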

Can containers be deployed using the platform? Does it use Kubernetes?

Yes, the platform supports deploying and managing Kubernetes clusters alongside virtualized infrastructure, allowing users to switch between managing VMs and container clusters within the same interface.

Operating System and Drivers

Who manages OS updates and drivers on the hypervisor hosts?

Currently, customers are responsible for managing the underlying Linux OS (e.g., Ubuntu 22.04) and any specific drivers required for their hardware. The long-term vision is for the platform provider to take more responsibility for managing the OS layer.

Are custom OS images required or provided?

Customers currently set up their own servers using a standard OS like Ubuntu 22.04. The ability to use custom ISOs provided by the platform is a future goal but is not available today.

Is there support for newer OS versions like Ubuntu 24.04?

Support for Ubuntu 24.04 is on the roadmap, targeted for mid-2025.

Is there an equivalent to VMware’s Lifecycle Manager for deploying drivers across a cluster?

There isn’t a direct equivalent fully integrated yet. However, the platform can leverage Ironic, an open-source tool, to automate bare-metal provisioning and OS deployment (such as pushing an updated Ubuntu image) across a fleet of servers using PXE boot. This setup can automate OS installation, initial configuration such as networking and bonding, and potentially the installation of platform agents.

Can drivers be updated individually, or does it require a full OS update/re-image?

If automation tools like Ironic are used, updating drivers typically involves deploying a new OS image with the updated drivers. Alternatively, drivers can be manually updated on each host individually, without needing a full re-image.

Are there specific requirements for cloud-init templates used with the platform?

No, there are no proprietary requirements. The platform supports standard cloud-init functionality and syntax. IP addresses can be set manually within the template or assigned dynamically via DHCP from the platform’s software-defined networking stack for specific network segments.
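A minimal standard cloud-init user-data file works as-is; the hostname, user, and key below are illustrative placeholders (with DHCP, no static network configuration is needed at all):

```yaml
#cloud-config
# Standard cloud-init user-data, no platform-specific extensions.
# All names and the SSH key below are illustrative placeholders.
hostname: app-vm-01
users:
  - name: ops
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... ops@example # placeholder key
write_files:
  - path: /etc/motd
    content: |
      Provisioned by cloud-init
```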

Content Libraries

Do content libraries support replicating templates from a single source to multiple locations?

Currently, the content library uses a single storage endpoint. A replication feature is in development, which will allow multiple nodes to act as endpoints with replication handled automatically in the background. For now, replication is limited to a single region. Future enhancements may include support for different content backends or using a globally replicated S3 backend for cross-region content library access.

Authors

  • Chris Jones

    Chris Jones is the Head of Product Marketing at Platform9. He has previously held positions as an Account Executive and Director of Product Management. With over ten years of hands-on experience in the cloud-native infrastructure industry, Chris brings extensive expertise in observability and application performance management. He possesses deep technical knowledge of Kubernetes, OpenStack, and virtualization environments.

  • Kamesh Pemmaraju

    Kamesh leads enterprise product marketing at Platform9. Prior to joining Platform9, he gained several years of product management and marketing experience at Dell, Mirantis, and ZeroStack, focused on delivering open source private and hybrid cloud solutions to enterprises and service providers.
