Executive Summary
Private Cloud Director uses KVM as its hypervisor and QCOW2 as its native disk format. This architectural choice decouples the hypervisor, disk format, and storage layers, eliminating the tight interdependencies that characterize the vSphere stack and providing infrastructure teams with a non-proprietary foundation for their virtual machine workloads. Because every layer in the stack is based on open standards and open-source components, no single vendor controls the format, the tooling, or the path forward.
Background
In a VMware vSphere environment, the hypervisor (ESXi), disk format (VMDK), and storage filesystem (VMFS) form a tightly coupled stack. Each layer depends on the others: ESXi requires VMFS-formatted datastores, VMFS stores VMDKs in formats optimized for that specific filesystem, and vCenter is required to manage all of it. Removing or replacing any single layer breaks the chain.
Private Cloud Director takes a more open approach. KVM, QCOW2, and the underlying storage are independent components. Each can be examined, operated on, or replaced without affecting the others. This bulletin explains how these layers work, where the architectural differences matter, and what that means in practice.
Technical Detail
Stack Comparison
The following table summarizes the key layers of each stack:
| Layer | vSphere Stack | Private Cloud Director Stack |
|---|---|---|
| Hypervisor | ESXi (proprietary, bare-metal only) | KVM (Linux kernel module, open source) |
| Disk format | VMDK (multiple sub-types, VMFS-coupled) | QCOW2 (open, single-file, self-describing) |
| Storage filesystem | VMFS (proprietary clustered filesystem) | Standard Linux filesystems, NFS, block storage via open drivers |
| Management | vCenter (proprietary) | Open APIs, Private Cloud Director management plane |
The critical distinction is the coupling between layers. In the vSphere stack, VMFS serves as a proprietary clustered filesystem that provides distributed locking, allowing multiple ESXi hosts to access the same datastore without data corruption. The disk format depends on this filesystem, and the filesystem depends on the hypervisor. Private Cloud Director solves the same problem without a proprietary filesystem layer. Cinder enforces single-attach at the API level by default, preventing concurrent write access to a volume through orchestration policy rather than filesystem locking. For shared storage scenarios such as NFS-backed datastores used during live migration, coordination is handled at the storage protocol layer through mechanisms like NFSv4 leases. Each layer addresses its own concern independently.
VMDK Structure
A VMware virtual disk is not a single file. Each VMDK consists of a descriptor file (.vmdk) containing metadata about disk geometry, adapter type, and pointers to the data files, plus a separate extent file (-flat.vmdk for thick disks, -delta.vmdk for snapshots). Both files must be present and consistent for the disk to function.
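For illustration, a descriptor file for a thick-provisioned disk looks roughly like the following (all values here are hypothetical, and real descriptors vary by vSphere version):

```
# Disk DescriptorFile
version=1
CID=fb183c20
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 209715200 VMFS "example-flat.vmdk"

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "13054"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
```

The `RW` line is the pointer to the separate extent file; if that file is renamed, moved, or corrupted, the descriptor no longer resolves and the disk is unusable.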
VMware further subdivides VMDKs into provisioning types: thick eager-zeroed (all space pre-allocated and zeroed), thick lazy-zeroed (space allocated but zeroed on first write), and thin provisioned (space allocated on demand). Each type behaves differently at the storage layer and carries different performance and over-commitment tradeoffs. Snapshots introduce additional complexity through delta VMDKs that use a VMFS-specific sparse format.
This multi-file, multi-type architecture means VMDKs cannot be moved outside of a vSphere environment without conversion. The format is a component in a tightly coupled system, not a portable artifact.
QCOW2 Structure
QCOW2 (QEMU Copy-On-Write version 2) is a single, self-describing file. It contains header metadata, a cluster-based data structure, and the disk contents in one file.
Figure 1: QCOW2 File Layout
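The self-describing nature of the format is straightforward to verify: a fixed header at the start of the file declares the format version, cluster size, virtual size, and snapshot count. The following Python sketch parses that fixed header from a synthetic byte string (field layout per the published QEMU qcow2 specification; the example values are fabricated for demonstration):

```python
import struct

# The fixed portion of a QCOW2 header is the first 72 bytes of the file;
# all fields are big-endian, per the QEMU qcow2 specification.
QCOW2_MAGIC = 0x514649FB  # the bytes "QFI\xfb"

def parse_qcow2_header(data: bytes) -> dict:
    (magic, version, backing_file_offset, backing_file_size,
     cluster_bits, size, crypt_method, l1_size, l1_table_offset,
     refcount_table_offset, refcount_table_clusters,
     nb_snapshots, snapshots_offset) = struct.unpack(">IIQIIQIIQQIIQ", data[:72])
    if magic != QCOW2_MAGIC:
        raise ValueError("not a QCOW2 image")
    return {
        "version": version,
        "cluster_size": 1 << cluster_bits,   # cluster_bits=16 -> 64 KiB default
        "virtual_size": size,
        "nb_snapshots": nb_snapshots,
    }

# Build a minimal synthetic header: a 100 GiB version-3 image with the
# default 64 KiB clusters and no internal snapshots.
header = struct.pack(">IIQIIQIIQQIIQ",
                     QCOW2_MAGIC, 3, 0, 0, 16, 100 * 1024**3,
                     0, 0, 0, 0, 0, 0, 0)
info = parse_qcow2_header(header)
print(info)
```

Because everything the format needs is in the file itself, any tool that understands this layout can inspect an image with no hypervisor, management server, or special filesystem present.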
Key characteristics of the format:
Copy-on-write is native. Snapshots are built into the format. Creating a snapshot preserves the current disk state and directs new writes to fresh clusters. No special filesystem feature is required.
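Conceptually, a snapshot behaves like an overlay: reads fall through to the preserved base state until a cluster is rewritten, at which point the new data lands in a fresh cluster owned by the active layer. The toy model below illustrates that redirect-on-write idea (it is not the real on-disk L1/L2 table structure, just the behavior):

```python
class CowOverlay:
    """Toy copy-on-write layer: reads fall through to the base,
    writes go only to the overlay (the 'fresh clusters')."""

    def __init__(self, base: dict):
        self.base = base        # read-only state preserved by the snapshot
        self.overlay = {}       # clusters written after the snapshot

    def read(self, cluster: int) -> bytes:
        # Prefer post-snapshot data; fall through to the base; unwritten
        # clusters read as zeroes, as in a sparse image.
        return self.overlay.get(cluster, self.base.get(cluster, b"\x00"))

    def write(self, cluster: int, data: bytes) -> None:
        self.overlay[cluster] = data   # the base is never modified

snap = {0: b"boot", 1: b"data"}        # state captured by the snapshot
active = CowOverlay(snap)
active.write(1, b"new!")               # redirect-on-write into a fresh cluster
```

After the write, the active layer sees the new data while the snapshot state is untouched, which is why reverting a QCOW2 snapshot is cheap: it simply discards the overlay.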
Thin provisioning is the default behavior. A QCOW2 image created with a 100 GB virtual size may occupy only a few hundred kilobytes on disk until the guest writes data. Space is allocated cluster-by-cluster. Preallocation is available when workloads require it, and for production VMs using block storage volumes, provisioning policy is typically managed at the storage array level.
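The same apparent-size-versus-allocated-size divergence can be observed with any sparse file on a Linux filesystem. A quick illustration (assumes a filesystem with sparse-file support, such as ext4 or XFS; a QCOW2 image adds its own cluster bookkeeping on top of this):

```python
import os
import tempfile

# Create a file with a 1 GiB apparent size but (on a sparse-capable
# filesystem) almost no allocated blocks, then write 4 KiB and watch
# allocation grow on demand.
fd, path = tempfile.mkstemp()
try:
    os.truncate(fd, 1 << 30)              # 1 GiB virtual size, no data yet
    st = os.fstat(fd)
    apparent = st.st_size                 # what the guest sees
    allocated = st.st_blocks * 512        # what the host actually stores
    os.pwrite(fd, b"x" * 4096, 0)         # the guest writes 4 KiB
    os.fsync(fd)
    allocated_after = os.fstat(fd).st_blocks * 512
    print(apparent, allocated, allocated_after)
finally:
    os.close(fd)
    os.unlink(path)
```

QCOW2 formalizes this behavior at the image level, so thin provisioning works the same way regardless of which backing filesystem or block device holds the file.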
No filesystem dependency. QCOW2 functions on ext4, XFS, NFS, or directly on raw block devices. The format handles its own reference counting, snapshot management, and space allocation internally. It does not require a specialized clustered filesystem.
Open tooling. qemu-img can create, convert, snapshot, resize, and inspect QCOW2 images from any Linux command line. In Private Cloud Director, day-to-day operations such as volume resizing are handled through the platform’s APIs, but no part of the underlying toolchain is proprietary or license-gated.
Storage Layer Independence
Private Cloud Director supports local disk, NFS-backed shared storage, and persistent block storage volumes through a set of open Cinder drivers. Enterprise storage arrays from NetApp, Pure Storage, Dell EMC, HPE, Hitachi, and others integrate directly. Importantly, each storage vendor develops and maintains its own Cinder driver. This means driver support, feature coverage, and updates come from the storage provider with direct knowledge of the array’s capabilities, not from a third party. No proprietary filesystem reformatting is required; if a storage platform can present a block device or a filesystem mount, Private Cloud Director can use it.
This contrasts with vSphere, where VMFS must sit between the hypervisor and the storage array. That additional layer introduces its own versioning, its own capacity limits, and its own operational procedures for expansion and recovery.
Operational Considerations
Migration from VMware. Converting from VMDK to QCOW2 is a one-time event during migration. vJailbreak, Platform9’s open source migration tool, handles this conversion automatically. It supports warm migration using VMware Change Block Tracking to minimize cutover downtime, and Storage-Accelerated Copy via XCOPY for Pure Storage and NetApp arrays. After migration, VMs run natively on KVM with QCOW2.
Backup and recovery. Open formats integrate with the broader ecosystem without vendor-specific APIs or agents acting as gatekeepers between administrators and their data. Standard Linux tools and third-party backup solutions (such as Veeam) can operate against Private Cloud Director’s storage without proprietary intermediaries.
Disk operations. Resize, snapshot, and inspect operations are available through Private Cloud Director’s APIs and UI. Because the underlying format is QCOW2, these operations do not depend on a vendor-specific management server for execution.
Encryption at rest. Private Cloud Director supports Cinder volume encryption for data at rest, managed at the volume layer rather than the disk image level. This aligns with the architectural principle of independent layers: encryption is a storage concern handled by the storage subsystem, not a feature embedded in the disk format.
Key Takeaways
- Private Cloud Director’s use of KVM and QCOW2 decouples the hypervisor, disk format, and storage layers, eliminating the tight interdependencies present in the vSphere stack.
- QCOW2 is a single-file, self-describing format with native copy-on-write, built-in thin provisioning, and no filesystem dependency.
- VMDK is not a portable disk format; it is a component in a tightly coupled proprietary system that includes VMFS and ESXi.
- Private Cloud Director’s storage layer integrates with enterprise arrays through open Cinder drivers, with no proprietary filesystem required between the hypervisor and the storage.
- Migration from VMDK to QCOW2 is handled automatically by vJailbreak as a one-time conversion event.