We’ve received incredibly positive responses to our recent partnership announcement with SolidFire to integrate their all-flash enterprise storage with our Platform9 cloud management-as-a-Service solution. In a previous post, I discussed the use cases for block storage in a cloud platform such as OpenStack. I also walked through the various options for deploying Cinder block storage in OpenStack and why we chose to work with SolidFire as our first storage partner. In this post, I want to provide more detail on how Platform9 integrates with SolidFire and, more importantly, how we are making it a first-class managed solution in Platform9. We will begin with a walk-through of the Cinder project architecture and how Cinder works to deliver cloud block storage.
The Cinder Architecture
As seen in the diagram above from Avishay Traeger of IBM, the Cinder block storage service is delivered using the following daemons:
- cinder-api – A WSGI app that authenticates and routes requests throughout the Block Storage service. It supports the OpenStack APIs only, although there is a translation that can be done through Nova’s EC2 interface, which calls into the cinder client.
- cinder-scheduler – Schedules and routes requests to the appropriate volume service. Depending upon your configuration, this may be simple round-robin scheduling to the running volume services, or it can be more sophisticated through the use of the Filter Scheduler.
- cinder-volume – Manages Block Storage devices, specifically the back-end devices themselves.
- cinder-backup – Provides a means to back up a Cinder Volume to various backup targets.
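In its simplest configuration, the scheduling step above is just round-robin placement across the running cinder-volume services. Here is a toy sketch of that policy (the backend names are hypothetical; the real scheduler tracks live services via RPC heartbeats and can do much more with the Filter Scheduler):

```shell
#!/bin/sh
# Toy sketch of cinder-scheduler's simplest policy: round-robin volume
# placement across the known cinder-volume services.
pick_backend() {
  # $1 = 1-based request number; remaining args = available backends
  req=$1; shift
  idx=$(( (req - 1) % $# ))   # count backends before shifting them away
  shift "$idx"
  echo "$1"
}

pick_backend 1 volnode-a volnode-b volnode-c   # volnode-a
pick_backend 2 volnode-a volnode-b volnode-c   # volnode-b
pick_backend 4 volnode-a volnode-b volnode-c   # wraps around to volnode-a
```

The Filter Scheduler generalizes this by first filtering backends on capabilities (such as volume-type extra-specs) and then weighing the survivors, typically by free capacity.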
Cinder Workflow Using Commodity Storage
As originally conceived, all Cinder daemons can be hosted on a single Cinder node. Alternatively, all daemons except cinder-volume can be hosted on the Cloud Controller nodes, with the cinder-volume daemon installed on one or more cinder-volume nodes that function as virtual storage arrays (VSAs). These VSAs are typically Linux servers with commodity storage; volumes are created and presented to compute nodes using the Logical Volume Manager (LVM).
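On such a VSA node, the LVM backing store is simply a volume group named “cinder-volumes” that the LVM driver carves logical volumes out of. A minimal setup might look like the following sketch (the device path /dev/sdb is a placeholder for whichever commodity disk the node actually has):

```shell
# Prepare a commodity disk as the backing store for the Cinder LVM driver.
# /dev/sdb is a placeholder; substitute the local disk on your node.
pvcreate /dev/sdb                  # initialize the disk as an LVM physical volume
vgcreate cinder-volumes /dev/sdb   # create the VG that cinder-volume expects
vgs cinder-volumes                 # verify the volume group exists
```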
The following high-level procedure, outlined in the OpenStack Cloud Administrator Guide and visualized in a diagram from Avishay Traeger of IBM, demonstrates how a Cinder volume is attached to a cloud instance when invoked via the command line, the Horizon dashboard, or the OpenStack API:
- A volume is created through the cinder create command, which makes a call to the cinder-api daemon. This command then invokes cinder-volume to create a logical volume (LV) in the volume group (VG) “cinder-volumes.”
- The cinder-scheduler decides which cinder-volume node to use based on either capacity or volume-type information that you can pass to the scheduler through an extra-specs list. A volume type can specify, for example, SSD or SAS drives.
- The volume is attached to an instance through the nova volume-attach command. This command creates a unique iSCSI IQN that is exposed to the compute node.
- The compute node, which runs the instance, now has an active iSCSI session and new local storage (usually a /dev/sdX disk).
- Libvirt uses that local storage as storage for the instance. The instance gets a new disk, usually a /dev/vdX disk.
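End to end, the workflow above maps to just two client commands. A sketch, with placeholder names and IDs:

```shell
# 1. Create a 10 GB volume; cinder-api receives the request and
#    cinder-scheduler picks a cinder-volume service to carve the LV.
cinder create --display-name my-volume 10

# 2. Attach the volume to a running instance; Nova sets up the iSCSI
#    session on the compute node and libvirt exposes the disk to the guest.
nova volume-attach my-instance <volume-id> /dev/vdb
```

Inside the guest, the new disk then appears as the requested /dev/vdX device.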
OpenStack Cinder Using SolidFire Block Storage with Platform9 OpenStack
Cinder also supports drivers that allow volumes to be created and presented using enterprise storage solutions from vendors such as SolidFire. As noted in a previous post, these solutions can be used in place of Linux servers that use commodity storage with LVM.
Note that the architecture and workflow is similar to the commodity storage solution except that the cinder-volume node communicates with the SolidFire array to create volumes that are presented to cloud instances. This is in contrast to the commodity solution where the cinder-volume node functions as a virtual storage array to create and to present Cinder volumes using storage attached to that node. Users can leverage SolidFire features such as per-volume Quality of Service (QoS) by leveraging the volume-type and extra-specs list options in OpenStack.
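For example, SolidFire’s per-volume QoS can be driven entirely through a volume type. A sketch of how that looks from the CLI (the tier name and IOPS values are hypothetical; the qos:* keys are the extra-specs the SolidFire driver reads):

```shell
# Define a volume type that carries SolidFire QoS settings.
cinder type-create solidfire-gold
cinder type-key solidfire-gold set qos:minIOPS=1000 \
                                   qos:maxIOPS=5000 \
                                   qos:burstIOPS=8000

# Volumes created with this type inherit the QoS guarantees:
cinder create --volume-type solidfire-gold --display-name db-vol 100
```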
In the diagram above, all OpenStack components reside with the SolidFire array in the same datacenter. That’s a typical architecture when deploying a standard OpenStack distribution. We had two goals, however, for deploying SolidFire with Platform9:
- We wanted to carry over the benefits that customers currently receive with our cloud management-as-a-Service solution. This meant that deploying and managing SolidFire-based Cinder block volumes had to be simple and as automated as possible.
- We wanted to go beyond the default OpenStack Horizon dashboard by giving users the ability not only to manage Cinder volumes, but also to manage their SolidFire array, all from a single dashboard.
To achieve the first goal, we took an approach similar to how we deployed Glance images for OpenStack. We architected the Platform9 solution so that the cinder-api and cinder-scheduler daemons are hosted with other OpenStack control services off-premises in what we call a customer’s Deployment Unit. To learn more about the Deployment Unit and other aspects of the Platform9 architecture, I recommend that readers take a moment to look at my recent Platform9 architecture blog post. In the case of the cinder-volume daemon, we decided it was optimal to run it on a server in a customer’s on-premises environment.
As mentioned earlier, this is similar to the approach we took with Glance, where a customer authorizes a server in their datacenter to operate as a Glance Image Node. In the same way, a customer chooses an on-premises server to assume the cinder-volume node role that communicates with SolidFire arrays. The workflow for creating and managing a Cinder volume is unchanged, with the exception that the cinder-volume daemon is in a separate location from the cinder-api and cinder-scheduler daemons. The benefit of this approach is that Platform9 will manage all Cinder daemons and services for the customer, including restarting the on-premises cinder-volume daemon if necessary.
From a security perspective, all communication between the cinder-volume daemon and the Deployment Unit is over https. Each customer has a self-signed certificate that is created and used to stamp the cinder-volume daemon so that it can only communicate with that customer’s specific DU. Additionally, user and workload data never leaves the customer’s datacenter.
Typically, configuring storage arrays for use as Cinder block storage in OpenStack is a manual, command-line process. To simplify this for our mutual customers, we’ve made it possible to set up SolidFire arrays using either the Platform9 dashboard or the standard command line.
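Under the hood, pointing cinder-volume at a SolidFire array comes down to a few lines of cinder.conf on the cinder-volume node. A minimal sketch (the cluster address and credentials are placeholders, and the exact driver path can vary by OpenStack release):

```ini
[DEFAULT]
enabled_backends = solidfire

[solidfire]
# SolidFire driver shipped with Cinder (path may vary by release)
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 10.0.0.50          # management VIP of the SolidFire cluster (placeholder)
san_login = admin           # cluster admin account (placeholder)
san_password = secret       # placeholder credential
volume_backend_name = solidfire
```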
Once the SolidFire array is configured, Cinder volumes can be created by Platform9 cloud administrators and made available to end users for attaching to cloud instances. These tasks can also be performed using the command line or the Platform9 dashboard.
As discussed in my earlier post, one of the many reasons we chose to partner with SolidFire is the advanced capabilities they expose through Cinder such as per-volume QoS. As with the initial setup of the array, our plans are to enable the same level of simplicity in configuring these capabilities using the Platform9 dashboard. Over the next few months, we will be providing updates to deliver these enhancements to all our customers.
If you will be at the OpenStack Summit in Tokyo and are interested in learning more about the Cinder project, I encourage you to attend a session that Platform9 and SolidFire will be collaborating on called “Persisting Data In Your Cloud with Cinder Block Storage.” Joining me as session presenters will be Platform9 engineer Arun Sriraman and SolidFire engineer John Griffith. As many of you may know, John is a former Cinder Project Technical Lead and currently serves on that project’s Technical Committee. SolidFire is a great partner, and we look forward to showing even more joint innovations in the future.