
Storage Basics – Platform9 Private Cloud Director

In this blog, you will learn about storage in Platform9 Private Cloud Director (PCD), including ephemeral (temporary) and persistent options. Discover how Cinder manages block volumes with snapshots and resizing across backends like LVM, SAN, and NAS. Then master storage policies via Cinder Volume Types and explore storage migration in PCD.

Introduction

Storage is a fundamental pillar of any virtualization platform, providing the foundation for virtual machines (VMs), their operating systems, applications, and data. In Platform9 Private Cloud Director (PCD), which leverages open technologies like KVM and Cinder, understanding the available storage options is crucial for designing reliable, performant, and cost-effective private cloud environments.

Platform9 storage options overview

Platform9 PCD offers two primary types of storage for virtual machines, each suited for different use cases:

Ephemeral Storage (Temporary):

  • Nature: This is the default storage option when creating VMs if persistent storage isn’t explicitly chosen. Ephemeral disks are directly tied to the lifecycle of the VM instance.
  • Persistence: Data stored on ephemeral disks is deleted when the associated VM is terminated.
  • Backend: Typically utilizes local/direct-attached storage on the hypervisor host where the VM runs, although it can also be configured to use shared NFS mounts.
  • Use Cases: Ideal for VM operating system boot disks (especially for cloud-native, disposable workloads), temporary scratch space, or situations where data persistence beyond the VM’s life is not required.
  • Limitations: Ephemeral disks generally cannot be resized after creation, cannot be snapshotted independently of the VM, and lack QoS controls; data is lost if the host fails (an NFS backend protects against host failure, but the data is still deleted with the VM). A VM typically has only one primary ephemeral disk. A minimal boot sketch follows this list.
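To make that lifecycle coupling concrete, here is a minimal sketch using the OpenStack SDK, with which PCD's APIs are compatible, that boots a VM whose root disk is ephemeral (created from an image, with no Cinder volume involved). The cloud entry `pcd` and the image, flavor, and network names are placeholders for your environment.

```python
import openstack

# Connect using a "pcd" entry in clouds.yaml (a placeholder name for your PCD region).
conn = openstack.connect(cloud="pcd")

# Placeholder image and flavor; substitute names that exist in your environment.
image = conn.get_image("ubuntu-22.04")
flavor = conn.get_flavor("m1.small")

# Boot a VM whose root disk is ephemeral: the disk is created from the image on the
# hypervisor's ephemeral backend and exists only as long as the server does.
server = conn.create_server(
    name="scratch-worker",
    image=image,
    flavor=flavor,
    network="tenant-net",
    wait=True,
)
print(server.name, server.status)

# Deleting the server also discards its ephemeral disk and any data written to it.
# conn.delete_server(server.id)
```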

Block Storage (Persistent Volumes):

  • Nature: Provides persistent, network-attached block storage volumes managed by Cinder, the integrated block storage service within PCD’s architecture.
  • Persistence: Volumes and their data persist independently of the VM lifecycle. They remain available even after the VM they were attached to is deleted, until the volume itself is explicitly deleted.
  • Backend: Leverages various storage systems (local LVM, SAN arrays via iSCSI/FC, NAS via NFS, etc.) through configurable Cinder storage backend drivers.
  • Use Cases: Recommended for production workloads requiring data persistence, such as databases, application data storage, user directories, and boot volumes for stateful VMs.
  • Features: Supports creating, deleting, attaching to VMs (typically one VM at a time, though multi-attach depends on the backend), detaching, resizing (online or offline depending on context), snapshots, cloning, and defining storage characteristics via Volume Types. A short provisioning sketch follows this list.
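By contrast, a persistent volume is created as an object in its own right and merely attached to a server, so it outlives any VM that uses it. A short provisioning sketch, again with the OpenStack SDK against a hypothetical `pcd` cloud entry (the server name and Volume Type are placeholders):

```python
import openstack

conn = openstack.connect(cloud="pcd")

# Create a 50 GiB persistent volume. "general" is a placeholder; it should match a
# Volume Type defined by your PCD administrator (see the Volume Types section below).
volume = conn.create_volume(size=50, name="app-data", volume_type="general", wait=True)

# Attach it to an existing server; inside the guest it appears as a block device
# (e.g. /dev/vdb) that can be partitioned, formatted, and mounted.
server = conn.get_server("db-server-01")
conn.attach_volume(server, volume)

# Deleting the server later leaves "app-data" and its contents intact; the volume is
# only removed when it is deleted explicitly.
```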

Understanding Block Storage

Platform9 utilizes Cinder to manage persistent block storage. Cinder acts as an abstraction layer, providing a unified API for managing volumes across diverse backend storage systems.

  • Volumes: These are the fundamental units managed by Cinder, analogous to virtual hard disks or LUNs. They appear as standard block devices within the guest OS of the VM they are attached to.
  • Cinder Services: Key Cinder components (API, Scheduler, Volume service) run within the PCD control plane or on designated storage nodes to handle volume requests, decide placement based on policies, and interact with storage backends via drivers.
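That abstraction is visible in the API itself: volumes are listed and managed through the same calls regardless of which backend driver actually stores them. A tiny illustration, again assuming a `pcd` entry in clouds.yaml:

```python
import openstack

conn = openstack.connect(cloud="pcd")

# One API call lists every volume the project can see, regardless of which
# configured backend actually stores the data.
for volume in conn.block_storage.volumes(details=True):
    attached_to = [a["server_id"] for a in volume.attachments]
    print(volume.name, volume.size, volume.status, attached_to)
```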

Storage Backends in PCD: Connecting to Your Storage

How does PCD actually store the data for Cinder volumes? Through configurable storage backends, each managed by a Cinder driver. This provides the flexibility to leverage existing infrastructure or to choose appropriate storage tiers (a short discovery sketch follows the list below):

  • Local Storage (via LVM Driver): You can configure PCD to use the direct-attached disks on hypervisor nodes as a Cinder backend using the LVM (Logical Volume Manager) driver. Each node designated as a block storage node manages its local disks.
  • Pros: Simple setup, potentially lower cost if using existing server disks.
  • Cons: Data is typically siloed to the specific host, offering no inherent redundancy or shared access across hosts (unless an advanced file system is layered on top, outside of basic Cinder LVM). A host failure makes its volumes unavailable, and VMs using these volumes cannot be live-migrated without also migrating their storage.
  • Shared Storage (SAN/NAS/Distributed via Cinder Drivers): This is the more common approach for production environments requiring high availability and advanced features. PCD integrates with a wide array of enterprise storage systems through their specific Cinder drivers. Examples include:
  • Network Attached Storage (NAS): Using NFS shares.
  • Storage Area Networks (SAN): Connecting via iSCSI or Fibre Channel (FC) protocols to storage arrays (e.g., NetApp, Pure Storage, Dell EMC, HPE, IBM).
  • Distributed Storage: Integrating with software-defined storage like Ceph.
  • Pros: Centralized management, high availability (provided by the storage system), shared access enabling VM live migration without storage migration, potential for advanced array features (deduplication, compression, hardware snapshots).
  • Cons: Requires investment in shared storage infrastructure and appropriate networking (Ethernet for NFS/iSCSI, dedicated FC fabric for Fibre Channel).
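However the backends are configured (in Cinder's per-backend settings or through PCD's own management tooling), they are usually surfaced to users indirectly, through the Volume Types described below. A short discovery sketch with the OpenStack SDK shows what a deployment exposes; the `pcd` cloud entry is a placeholder:

```python
import openstack

conn = openstack.connect(cloud="pcd")

# Each Volume Type can carry extra specs such as volume_backend_name, which tell the
# Cinder scheduler which configured backend (LVM, NFS, SAN driver, ...) may serve it.
for vtype in conn.block_storage.types():
    print(vtype.name, vtype.extra_specs)
```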

Connecting PCD to Common Storage Technologies

Platform9 PCD, primarily through its integration with Cinder, can leverage various standard storage technologies present in enterprise data centers:

  • Local Disk Storage: As mentioned, disks directly attached to the hypervisor hosts (using interfaces like SATA, SAS, or high-speed NVMe) can serve as the basis for Ephemeral Storage or be managed by the Cinder LVM driver for block storage. When using local disks, especially for Cinder LVM, implementing redundancy at the host level (e.g., using hardware or software RAID) is crucial, as PCD/Cinder itself doesn’t manage the physical disk redundancy for this backend type.
  • Fibre Channel Storage Area Network (SAN): PCD can integrate seamlessly with FC SANs. This requires hypervisor hosts to have Fibre Channel Host Bus Adapters (HBAs) connected to a dedicated FC switch fabric, which in turn connects to the FC storage array. A specific Cinder driver for the storage array vendor (e.g., drivers for NetApp, Dell EMC, Pure Storage, HPE) is configured in PCD to manage volume provisioning, attachment, and other operations (like offloaded snapshots/clones) on the SAN. This setup benefits from the high throughput and low latency typically associated with FC.
  • FCoE (Fibre Channel over Ethernet): FCoE allows FC traffic to be encapsulated and transported over standard Ethernet networks, usually requiring 10GbE or faster speeds and Converged Network Adapters (CNAs) on the hosts. If the host operating system, the CNA, the network switches, and the storage array all support FCoE, and a corresponding Cinder driver is available and configured for FCoE connectivity, PCD could potentially utilize it as a Cinder backend. Support is highly dependent on the specific hardware and Cinder driver capabilities.
  • Hybrid SAN: These arrays combine different storage media, typically faster SSDs for caching or a performance tier, and slower, higher-capacity HDDs for a capacity tier. PCD interacts with a hybrid SAN through its standard protocol interface (e.g., FC or iSCSI) and the corresponding Cinder driver. The tiering and caching logic is managed within the storage array itself. PCD benefits from the performance characteristics exposed by the array. Cinder Volume Types can potentially be configured (if the driver supports reporting different backend pools) to map to specific performance tiers offered by the hybrid array.
  • NFS File-Based Network Attached Storage (NAS): NFS is a versatile option supported by PCD for both storage types.
  • For Ephemeral Storage: Hypervisor hosts can directly mount an NFS share, and PCD can use this mount point to store ephemeral VM disks.
  • For Block Storage (Cinder): The Cinder NFS driver allows an NFS share to be used as a block storage backend. Cinder typically manages volumes as individual files on the NFS share. This relies on standard IP networking and is often simpler to set up than FC SANs but may have different performance characteristics.

Managing Storage Policies with Volume Types

Analogous to VMware’s Storage Policies, Cinder uses Volume Types to classify and manage storage capabilities. Administrators define Volume Types, associating them with specific “extra specs” (key-value pairs). These specs dictate:

  • Backend Mapping: Linking a Volume Type to one or more specific storage backends (e.g., volume_backend_name=SSD_Pool or volume_backend_name=SATA_Archive).
  • Capabilities: Defining characteristics like Quality of Service (QoS limits for IOPS/bandwidth), thin/thick provisioning, encryption settings, replication status, or other features exposed by the backend driver.

When a user requests a new volume, they select a Volume Type. The Cinder scheduler then uses the extra specs associated with that type to filter and select a suitable backend that matches the required capabilities, ensuring the volume meets the defined policy.
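A minimal sketch of that flow with the OpenStack SDK, covering both the admin side (defining a type) and the user side (requesting a volume of that type); the backend name `SSD_Pool` and the type and volume names are assumptions that must match what is actually configured on your deployment:

```python
import openstack

# Assumes an admin-capable "pcd" entry in clouds.yaml.
conn = openstack.connect(cloud="pcd")

# Admin: define a Volume Type pinned to a backend via its configured backend name.
# "SSD_Pool" is a placeholder; it must match volume_backend_name on a real backend.
ssd_type = conn.block_storage.create_type(
    name="ssd-tier",
    extra_specs={"volume_backend_name": "SSD_Pool"},
)

# User: request a volume of that type. The Cinder scheduler filters backends by the
# type's extra specs and places the volume on one that satisfies the policy.
volume = conn.block_storage.create_volume(
    name="fast-db-data", size=20, volume_type=ssd_type.name
)
volume = conn.block_storage.wait_for_status(volume, status="available")
print(volume.name, volume.status)
```

If no enabled backend matches the type's extra specs, the scheduler cannot place the volume and it ends up in an error state rather than landing on an arbitrary backend.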

Parallels with VMware vVols: Granularity and Policy in PCD Storage

Users familiar with VMware vSphere may recognize conceptual similarities between VMware Virtual Volumes (vVols) and the capabilities offered by Platform9’s Cinder-based block storage. While the underlying technologies (VASA providers vs. Cinder drivers) differ, both aim to provide more granular, policy-driven storage management compared to traditional LUN-based approaches.

  • Granular Management: vVols operate at the level of individual virtual disks (VMDKs), allowing storage operations per disk. Similarly, Cinder manages storage at the volume level. Each Cinder volume is an independent entity attached to a VM, offering finer-grained control than managing large LUNs shared by many VMs.
  • Policy-Driven Control: VMware uses Storage Policy-Based Management (SPBM) to apply storage capabilities (performance, availability) defined in policies to individual vVols/VMDKs. Platform9 achieves a similar outcome using Cinder Volume Types and their associated extra specs. Administrators define types representing different service levels, and users select the appropriate type when creating a volume, ensuring it lands on a backend meeting the policy requirements.
  • Storage Offload: A key benefit of vVols is offloading operations like snapshots and clones to the storage array via the VASA provider, leveraging array efficiency. Cinder drivers for capable storage backends can achieve the same outcome. When you request a snapshot or clone of a Cinder volume residing on an array with efficient snapshot/cloning capabilities, the Cinder driver can instruct the array to perform the operation directly, making it much faster and space-efficient than host-based copying.
  • Abstraction: Both vVols and Cinder provide an abstraction layer. vVols abstract the LUN/filesystem details, presenting Storage Containers. Cinder abstracts the specifics of diverse backend storage systems behind a common API and the concept of Volumes and Volume Types.

While not identical implementations, PCD’s block storage, powered by Cinder, offers analogous benefits to vVols in terms of moving towards more granular, policy-based, and hardware-accelerated storage operations within a virtualized environment.

Volume Operations in PCD

Platform9 provides interfaces (UI, API, CLI) to perform standard volume management tasks, each of which translates into Cinder API calls (a scripted walkthrough follows the list):

  • Create Volume: Specify size, name, description, and crucially, the desired Volume Type. Can be created blank, from an image, or from a snapshot/another volume.
  • Attach/Detach Volume: Connect a volume to a running VM or disconnect it.
  • Create/Delete Snapshot: Capture a point-in-time snapshot of a volume for backup or cloning purposes.
  • Resize Volume: Increase the size of an existing volume (often requires filesystem resize within the guest OS afterwards).
  • Clone Volume: Create a new volume based on an existing volume or snapshot.
  • Delete Volume: Permanently remove a volume and its data (use with caution!).
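A scripted walkthrough of a few of these operations with the OpenStack SDK; the volume names are placeholders, and the snapshot is taken from a detached (available) volume, so the force flag Cinder requires for in-use snapshots is not needed here:

```python
import openstack

conn = openstack.connect(cloud="pcd")

# Start from an existing, detached volume (placeholder name).
volume = conn.block_storage.find_volume("app-data")

# Snapshot: capture a point-in-time copy of the volume.
snap = conn.block_storage.create_snapshot(volume_id=volume.id, name="app-data-snap1")
snap = conn.block_storage.wait_for_status(snap, status="available")

# Clone: create a new volume from the snapshot.
clone = conn.block_storage.create_volume(
    name="app-data-clone", size=volume.size, snapshot_id=snap.id
)
clone = conn.block_storage.wait_for_status(clone, status="available")

# Resize: grow the original volume to 100 GiB. The filesystem inside the guest
# still has to be expanded separately (e.g. growpart / resize2fs).
conn.block_storage.extend_volume(volume, 100)
```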

Storage Migration in PCD

Platform9 supports Storage Live Migration, analogous to VMware’s Storage vMotion. This allows for migrating a VM’s storage while the VM remains running:

  • Ephemeral Storage: Live migration of a VM using local ephemeral storage involves copying the disk data to the destination host’s local storage.
  • Block Storage: Volumes can often be migrated between compatible Cinder backends or undergo “retyping” (changing their associated Volume Type, which may trigger a migration if the new type maps to a different backend). This is useful for moving between storage tiers or performing maintenance on storage backends without VM downtime; a retype sketch follows below.
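Retype is exposed as a volume action; one way to drive it from Python is to hand the OpenStack SDK's authenticated session to python-cinderclient. A minimal admin-side sketch, where the cloud entry, volume name, and target type are placeholders:

```python
import openstack
from cinderclient import client as cinder_client

# Reuse the SDK's authenticated session for python-cinderclient.
conn = openstack.connect(cloud="pcd")
cinder = cinder_client.Client("3", session=conn.session)

# Change the volume's type to "ssd-tier". With migration policy "on-demand", Cinder
# may move the data to a different backend if the new type maps to one; "never"
# only allows retyping that the current backend can satisfy in place.
volume = conn.block_storage.find_volume("app-data")
cinder.volumes.retype(volume.id, "ssd-tier", "on-demand")
```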

Conclusion

Platform9 Private Cloud Director offers a flexible storage architecture, pairing simple ephemeral storage with robust, persistent block storage managed by Cinder. By understanding the differences between ephemeral and block storage, leveraging appropriate Cinder backends (local or shared), and defining granular policies via Volume Types, administrators can tailor the storage environment to meet diverse application requirements for performance, persistence, availability, and cost within their private cloud.

Continue learning

Explore our learning modules and become a Private Cloud Director expert.

Overview & Introduction 

Storage Basics 

Storage Provisioning

Ensuring Uptime

Kubernetes

Optimizing Workloads

LBaaS Networking Basics

Author

  • Chris Jones

    Chris Jones is the Head of Product Marketing at Platform9. He has previously held positions as an Account Executive and Director of Product Management. With over ten years of hands-on experience in the cloud-native infrastructure industry, Chris brings extensive expertise in observability and application performance management. He possesses deep technical knowledge of Kubernetes, OpenStack, and virtualization environments.
