In this blog, you will learn about storage provisioning in Platform9 PCD using Cinder, specifically thin vs. thick provisioning: the benefits, risks, and backend dependencies of each approach, how to configure volume types for efficient storage management, and how to choose the right provisioning method for your needs and system capabilities.
Introduction
How storage for virtual machines (VMs) is allocated, thick versus thin, is a fundamental consideration in any environment. Traditionally, storage was “thick provisioned,” meaning the entire requested disk size was reserved upfront. However, modern systems, including Platform9 Private Cloud Director (PCD), which uses Cinder for its block storage service, often support “thin provisioning,” a more efficient approach that allocates space only as needed. Understanding the differences, benefits, and risks is crucial for effective storage management in PCD.
Thick vs. Thin provisioning in Cinder/PCD
Cinder, the block storage engine within PCD, supports both provisioning concepts, although the availability and behavior depend heavily on the underlying storage backend and its driver:
- Thick Provisioning:
- Concept: When a user requests a volume of a certain size (e.g., 100 GB), Cinder immediately instructs the backend storage system to reserve the entire 100 GB of physical capacity.
- Characteristics: Space is guaranteed to be available for the volume. Initial write performance may be more predictable on some systems as allocation is done upfront. However, if the VM only uses 20 GB of that volume, the remaining 80 GB of physical storage capacity is wasted (allocated but unused).
- Availability in Cinder: Supported by some backends and drivers. For example, the Cinder LVM driver can operate in thick mode, and some SAN drivers allow thick provisioning of LUNs (e.g., the NetApp driver with netapp_lun_space_reservation=enabled); a configuration sketch follows this list.
- Thin Provisioning:
- Concept: When a user requests a 100 GB volume, Cinder creates the logical volume representation of that size, but initially consumes minimal physical space on the backend storage (often just metadata). Physical space is only consumed from the backend pool as the VM writes data into the volume blocks.
- Characteristics: Highly efficient use of storage capacity – you only pay (in terms of physical space) for what you use. Allows for oversubscription, where the total logical size of all thin volumes can exceed the physical capacity of the backend pool.
- Availability in Cinder: Widely supported by many modern storage backends and Cinder drivers. LVM can be configured for thin provisioning (lvm_type=thin). NFS backends often default to thin (sparse files). Many SAN/SDS solutions support or even exclusively use thin provisioning (e.g., Pure Storage).
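To make the distinction concrete, here is a minimal cinder.conf sketch defining one thin and one thick LVM backend plus an NFS backend. The section names, backend names, and volume groups are illustrative, and a real deployment needs additional driver options (target helper, protocol settings, and so on):

```ini
[DEFAULT]
# Each backend gets its own section; the names here are illustrative
enabled_backends = lvm-thin,lvm-thick,nfs-shared

[lvm-thin]
# LVM backend that allocates physical space lazily from a thin pool
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
lvm_type = thin
volume_backend_name = ssd_pool

[lvm-thick]
# LVM backend that reserves the full logical volume size upfront
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes-thick
lvm_type = default
volume_backend_name = hdd_pool

[nfs-shared]
# NFS backend; sparse files give thin-style allocation by default
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
nfs_sparsed_volumes = True
volume_backend_name = nfs_pool
```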
Backend and driver dependency: the deciding factor
It’s critical to understand that Cinder itself doesn’t perform the thin/thick operation; it relies on the configured storage backend and its specific Cinder driver.
- The backend storage system (e.g., a NetApp array, a Pure Storage cluster, local LVM pool) must support the desired provisioning type.
- The Cinder driver for that backend must be able to interact with the backend to request thin or thick allocation and accurately report capabilities and space usage back to Cinder.
- Drivers advertise their capabilities to the Cinder scheduler, indicating whether they support thin provisioning (thin_provisioning_support=True), thick provisioning (thick_provisioning_support=True), or potentially both.
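You can check what each backend is actually advertising with the admin-only pools listing. A quick sketch, assuming admin credentials (exact field names depend on the driver, and the openstack client variant depends on your client version):

```bash
# Ask the Cinder scheduler what each backend pool currently reports.
# Look for thin_provisioning_support / thick_provisioning_support in the
# capabilities, plus the total/free/provisioned capacity figures.
cinder get-pools --detail

# Roughly equivalent with a recent python-openstackclient:
openstack volume backend pool list --long
```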
Configuring provisioning policies via volume types
Administrators control which provisioning types are available to end-users through Volume Types. A Volume Type is a named policy associated with specific capabilities defined via “extra specs” (key-value pairs).
To control provisioning, an administrator might create Volume Types like:
- SSD-Thin: With extra specs like volume_backend_name=ssd_pool and provisioning:type=thin (or capabilities:thin_provisioning_support='<is> True').
- HDD-Thick: With extra specs like volume_backend_name=hdd_pool and provisioning:type=thick (or capabilities:thick_provisioning_support='<is> True').
When a user creates a volume and selects SSD-Thin, the Cinder scheduler looks for a backend named ssd_pool that reports support for thin provisioning and fulfills the request there using thin allocation. (Note: The exact extra spec key, like provisioning:type, can sometimes vary slightly depending on the Cinder driver).
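As a concrete sketch, the two types above could be defined with the standard OpenStack CLI. The backend names ssd_pool and hdd_pool come from the example, and you may need the capabilities-style key instead of provisioning:type, depending on your driver:

```bash
# Admin operations: create the types and attach the extra specs
openstack volume type create SSD-Thin
openstack volume type set SSD-Thin \
  --property volume_backend_name=ssd_pool \
  --property provisioning:type=thin

openstack volume type create HDD-Thick
openstack volume type set HDD-Thick \
  --property volume_backend_name=hdd_pool \
  --property provisioning:type=thick

# End users then simply pick the type when creating a volume
openstack volume create --type SSD-Thin --size 100 app-data
```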
Benefits of Thin provisioning in PCD
When supported and used appropriately, thin provisioning offers compelling advantages:
- Storage Efficiency & Cost Savings: Maximize physical storage utilization by only consuming space for written data. This delays storage capacity purchases and reduces the overall storage footprint.
- Flexibility & Agility: Provide logically large volumes to applications without immediately dedicating expensive physical capacity. This allows growth without upfront over-allocation.
- Faster Volume Creation: Creating a thin volume is typically much faster than creating a thick volume, especially eager-zeroed thick, as no extensive block zeroing or upfront allocation occurs on the backend.
Risks and management of Thin provisioning
The efficiency of thin provisioning comes with essential management responsibilities and risks:
- Over-Allocation/Oversubscription Risk: The main danger is running out of physical space on the backend storage pool. If you’ve provisioned (logically) more storage across all thin volumes than the backend physically contains, and those volumes fill up with data simultaneously, writes will fail once the physical capacity is exhausted. Cinder has settings like max_over_subscription_ratio and reserved_percentage to help manage this at the scheduler level (a configuration sketch follows this list), but they rely on accurate reporting from the driver.
- Monitoring is KEY: Monitoring the actual physical space utilization on the Cinder storage backends/pools is critical. Relying solely on the logical volume sizes reported within PCD/Cinder is insufficient when using thin provisioning with oversubscription. You need external monitoring tools specific to your storage backend (LVM tools, SAN/NAS management interfaces, etc.). Alerts should be configured based on physical pool usage thresholds.
- Performance Considerations: While modern storage systems handle thin provisioning very efficiently, there can sometimes be a marginal performance impact on the very first write to a previously unallocated block within a thin volume. For most workloads, this is negligible. Thick provisioning guarantees the space exists, which might be preferred for extremely latency-sensitive applications where even minor first-write variations are unacceptable, but this usually comes at a significant cost in storage efficiency.
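As a rough illustration of those scheduler-level guard rails, both options can be set per backend in cinder.conf. The section name and values below are illustrative, and they only help if the driver reports provisioned capacity accurately:

```ini
# cinder.conf - per-backend oversubscription settings (illustrative values)
[lvm-thin]
# Allow the total logical size of thin volumes to reach 10x physical capacity
max_over_subscription_ratio = 10.0
# Keep the last 15% of physical capacity out of reach of the scheduler
reserved_percentage = 15
```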
Choosing the right approach
The decision between thin and thick provisioning (when the choice is available via your backend and Volume Types) depends on:
- Backend Capabilities: Does your storage system and its Cinder driver support thin, thick, or both? Some may only offer thin.
- Efficiency Goals: Thin provisioning is superior for maximizing storage utilization.
- Risk Tolerance & Monitoring: Are you prepared to diligently monitor physical backend capacity to prevent out-of-space conditions if using thin provisioning with oversubscription?
- Application Needs: Do specific applications have strict performance requirements that might theoretically favor thick (though this is less common with modern arrays)?
Administrators should configure appropriate Volume Types reflecting the available backend capabilities and desired policies, clearly naming them so users understand the underlying provisioning method (e.g., perf-thin, cap-thick).
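For end users, discovering what a type actually implies is straightforward. For example, assuming a type named perf-thin as suggested above:

```bash
# List the published volume types, then inspect the policy behind one of them
openstack volume type list
openstack volume type show perf-thin -c properties
```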
Cinder driver Thin provisioning support summary
This table summarizes provisioning support for various Cinder drivers, based on data extracted from the OpenStack Cinder Driver Support Matrix and general driver knowledge. Links to the relevant OpenStack Cinder driver documentation are included where available.
| Driver Name | Provisioning Support | Notes | Documentation Link |
|---|---|---|---|
| (Ceph) RBD Driver | Thin Default | Based on matrix extraction; Ceph typically defaults to thin. | docs.openstack.org/…/ceph-rbd-driver.html |
| DataCore Storage Driver (FC, iSCSI) | Unknown | Based on matrix extraction. | docs.openstack.org/…/datacore-volume-driver.html |
| Datera Storage Driver (iSCSI) | Unknown | Based on matrix extraction. | docs.openstack.org/…/datera-volume-driver.html |
| Dell PowerFlex (ScaleIO) Driver (ScaleIO) | Both (Configurable) | Based on matrix extraction; enterprise arrays typically support both. | docs.openstack.org/…/dell-emc-vxflexos-driver.html |
| Dell PowerMax Driver (iSCSI, FC) | Both (Configurable) | Based on matrix extraction; enterprise arrays typically support both. | docs.openstack.org/…/dell-emc-powermax-driver.html |
| Dell PowerStore NFS Driver (NFS) | Thin Default | Based on matrix extraction; NFS typically defaults to thin. | docs.openstack.org/…/dell-emc-powerstore-driver.html |
| Dell PowerStore Driver (iSCSI, FC, NVMe-TCP) | Both (Configurable) | Based on matrix extraction; enterprise arrays typically support both. | docs.openstack.org/…/dell-emc-powerstore-driver.html |
| Dell PowerVault ME Driver (iSCSI, FC) | Unknown | Based on matrix extraction. | docs.openstack.org/…/dell-emc-powervault-me.html |
| HPE Alletra 9k/Primera/3PAR Driver (FC, iSCSI) | Both (Configurable) | Source: General Knowledge. Supports both Thin & Thick provisioning, selected via Volume Type extra specs. | docs.openstack.org/…/hpe-3par-driver.html |
| LVM Driver (iSCSI/FC) | Both (Configurable) | Supports both Thin & Thick. Thin enabled via lvm_type=thin. | docs.openstack.org/…/lvm-volume-driver.html |
| NetApp Unified Driver (NFS, iSCSI, FC) | Both (Configurable) | Generally supports both Thin & Thick, configurable via Volume Types / driver options. | docs.openstack.org/…/netapp-volume-driver.html |
| NFS Driver | Both (Thin Default) | Supports both Thin & Thick. Thin often default (nfs_sparsed_volumes=True). | docs.openstack.org/…/nfs-driver.html |
| Pure Storage Driver (iSCSI, FC, NVMe-oF) | Thin Default | Typically defaults to Thin Provisioning. | docs.openstack.org/…/pure-storage-driver.html |
Note: This table provides a partial summary and relies on potentially incomplete data extraction and general knowledge. Always consult official documentation for definitive support details.
- This table includes information partially extracted from the OpenStack Cinder Driver Support Matrix (https://docs.openstack.org/cinder/latest/reference/support-matrix.html) as of April 21, 2025, supplemented with entries based on general Cinder driver knowledge.
- Provisioning support details are based on common configurations and driver capabilities. Specific behavior might depend on backend settings and Cinder configuration.
- Entries added or significantly informed by general knowledge are called out in the Notes column (e.g., “Source: General Knowledge”).
- Documentation links point to the latest OpenStack Cinder documentation where possible; specific features might vary between OpenStack releases. Links for the LVM and NFS drivers are included as well.
- This summary may still be incomplete or reflect interpretations. The official OpenStack Cinder Support Matrix and specific vendor driver documentation should be consulted directly for the most accurate and comprehensive information on driver capabilities and versions.
Conclusion
Platform9 Private Cloud Director, through its integration with Cinder, provides powerful and flexible storage provisioning options. Thin provisioning offers substantial benefits in storage efficiency and agility, making it a popular choice for many workloads. However, it mandates careful monitoring of the underlying physical storage capacity to mitigate the risks associated with over-allocation. By understanding the capabilities of your Cinder storage backends, configuring appropriate Volume Types, and implementing robust monitoring practices, you can effectively leverage both thin and thick provisioning to meet the diverse storage needs of your private cloud environment.
Continue learning
Explore our eight learning modules and become a Private Cloud Director expert.