Making Platform9 the Best OpenStack Cloud for VMware vSphere

We recently announced General Availability for Platform9 support of VMware vSphere, including 100% interoperability with existing vSphere infrastructure and management tools to make Platform9 the best cloud management solution for vSphere. I highlighted the benefits of our solution in a blog post and indicated that we would provide more technical details in the future. In this post, I will explain how we integrate with vSphere and how we fill the gaps found in a typical OpenStack-with-vSphere deployment.

In a previous blog post, Platform9 lead engineer for VMware integration, Amrish Kapoor, walked through some of the shortcomings in the current vSphere support within OpenStack and proposed some capabilities that Platform9 believes are needed to garner greater adoption of OpenStack among the current VMware install base. As Amrish wrote, vSphere is still the gold standard for hypervisors, and a large number of businesses rely on it to run their IT infrastructure. We believe that delivering capabilities such as dynamic discovery of existing workloads makes Platform9 Managed OpenStack the ideal cloud management platform for running both legacy enterprise and new cloud-native workloads.

One Gateway to Rule Them All

The key component that enables Platform9 to integrate with VMware vSphere is our vSphere Gateway Appliance, which was briefly mentioned in our vSphere support announcement blog post. The vSphere Gateway Appliance (VGA) and its counterpart, the KVM Host Agent, make up the on-premises tier of the Platform9 solution.

As discussed in my Platform9 architecture post, the on-premises tier resides in our customers’ data centers or co-location facilities and communicates with the Deployment Unit tier, where the Platform9 OpenStack control services are hosted. The Deployment Unit and Core Services tiers are hosted off-premises and are the backbone of our cloud-management-as-a-service offering.

The VGA itself is a Linux virtual machine (VM), deployed as an Open Virtual Appliance (OVA) and configured specifically to be a proxy between a customer’s vSphere infrastructure and the OpenStack and Platform9 services running in that customer’s Platform9 Deployment Unit (DU). All communication between the VGA and the DU flows over the following ports:

  • Ports 80 (HTTP) and 443 (HTTPS)
  • Port 5671 (AMQP over SSL)
  • Port 9443 (Protected file download)

If required, the VGA can also utilize a proxy server for communicating with the DU. Additionally, each customer has a self-signed certificate that is created and used to stamp the VGA so that it can communicate only with that customer’s specific DU.
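As a quick sanity check before deploying, a short script can verify that these ports are reachable from the network where the VGA will run. This is a minimal sketch, not a Platform9 tool; the hostname in the usage comment is a placeholder for your own DU endpoint:

```python
import socket

# Ports the VGA needs to reach on the Deployment Unit (per the list above).
REQUIRED_PORTS = {
    80: "HTTP",
    443: "HTTPS",
    5671: "AMQP over SSL",
    9443: "Protected file download",
}

def check_du_connectivity(host, ports=REQUIRED_PORTS, timeout=5.0):
    """Return {port: True/False} indicating TCP reachability of each port."""
    results = {}
    for port in ports:
        try:
            # A successful TCP connect is enough to prove the path is open.
            with socket.create_connection((host, port), timeout=timeout):
                results[port] = True
        except OSError:
            results[port] = False
    return results

# Example (placeholder hostname -- substitute your own DU endpoint):
# check_du_connectivity("yourcompany.example-du.net")
```

Any port reported `False` points at a firewall or proxy rule that needs attention before the VGA can talk to the DU.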

Also note that no user data or workload traffic ever leaves the customer’s on-premises environment, since that traffic is confined to the vSphere cluster and the infrastructure it resides on. Likewise, the vCenter credentials entered when configuring the VGA are used by the VGA but never sent to the DU or Core Services tiers. The VGA is what enables Platform9 to discover and manage a customer’s vSphere infrastructure.

vSphere Management

The groundwork for managing vSphere with OpenStack was laid by VMware with their contribution of the VMwareVCDriver to the OpenStack project. As outlined in my vSphere with OpenStack post, the VMwareVCDriver functions as a translator between the OpenStack APIs and the vCenter APIs. Typically, an OpenStack compute node hosts both the OpenStack Nova compute services and the VMwareVCDriver. This is necessary because the Nova services must run on a Linux server; in a KVM deployment, that server is typically the KVM hypervisor node itself. In the case of vSphere, those Nova services cannot run natively on the vCenter server, so a Linux server must be used to run the Nova services and the VMwareVCDriver. As the mixed-mode OpenStack cloud diagram below illustrates, a Linux-based OpenStack compute node must proxy for the vCenter servers and translate OpenStack commands into the relevant vCenter commands.

With the vSphere Gateway Appliance, we’ve made the installation and configuration of the components needed to integrate vSphere with OpenStack as easy and as painless as possible. We’ve done this by first packaging the relevant Nova services and the VMwareVCDriver into a Linux VM and deploying it as an OVA.

This OVA is already tuned to run vSphere with Platform9 OpenStack and includes additional services for discovering and managing the vSphere infrastructure. Configuration files for the Nova services and the VMwareVCDriver are either preset or presented to the cloud administrator in the form of an intuitive wizard to help mask unneeded complexity.
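For context, the upstream configuration that the VGA automates boils down to a handful of nova.conf settings for the VMwareVCDriver. The snippet below follows the upstream OpenStack documentation for the VMware driver; the values are placeholders, and the exact settings baked into the Platform9 OVA may differ:

```ini
[DEFAULT]
# Tell Nova to use the VMware vCenter driver instead of a local hypervisor.
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
# Connection details for the vCenter server (placeholder values).
host_ip = 10.0.0.10
host_username = administrator@vsphere.local
host_password = secret
# The vSphere cluster this compute node proxies for.
cluster_name = Cluster01
# Optional: restrict which datastores Nova may use.
datastore_regex = openstack-.*
```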

The VGA is fully monitored and managed by Platform9 using a communication agent that is included with the OVA. This allows us to restart the VGA if that is required for any reason. It is important to note here that the VGA itself is completely stateless. As discussed earlier, intra-cluster communication between the vCenter Server and the ESXi hosts it manages is not passed to the VGA, and vSphere-specific metadata is persisted only to the vCenter database and cluster. OpenStack Nova metadata is persisted to the MySQL database that is part of the OpenStack control services deployment in the DU. The fact that the VGA is stateless means a couple of things:

  1. If the VGA is unavailable, the vSphere cluster continues running under the management of the vCenter servers. End users would be unable to provision new resources through Platform9, since the DU would no longer be able to communicate with the on-premises vSphere infrastructure. However, new VMs can still be provisioned and made available using vCenter, where they will later be discovered by Platform9 (more on that below).
  2. If the VGA needs to be restarted or even redeployed, no metadata is lost, since none of it persists in the VGA. The restarted or redeployed VGA simply rediscovers the current infrastructure and runs as normal.

Currently, high-availability of the VGA is provided using the vSphere HA feature since the VGA is deployed as an OVA in a vSphere cluster. This means if the ESXi host on which the VGA is running goes down, the VGA will automatically be restarted on another ESXi host. We are looking into other ways to build in even better high-availability capabilities for the VGA.

Infrastructure Discovery

Most of the vSphere management capabilities discussed above are part of the OpenStack project and available to anyone who chooses to deploy the VMwareVCDriver. As Amrish wrote, however, we at Platform9 believe that important capabilities were missing that would make OpenStack more attractive to users. Specifically, the OpenStack project lacks the ability to discover and manage existing vSphere and KVM environments. This means that users are forced to run OpenStack only in greenfield environments, either migrating their existing infrastructure or running it as a separate silo.

Believing that being able to “pull” existing infrastructure into the Platform9 managed cloud was critical for customer adoption and would address real customer problems, we set out to enable this in our solution. We’ve since added code that enables the KVM Host Agent and the VGA to dynamically discover existing infrastructure and write that metadata to the OpenStack database. The upshot is that the OpenStack control services can then see and manage existing resources as though they were provisioned using OpenStack APIs. This allows us to provide Platform9 customers with some very useful capabilities, particularly in the vSphere context:

  • Discovery of existing clusters and VMs – As mentioned, this allows customers to leverage existing assets instead of being limited to building out brand-new infrastructure. Additionally, customers can choose exactly which clusters they wish to place under Platform9 OpenStack management.
  • Discovery of new clusters and VMs – Platform9 can discover new VMs created through vCenter, outside of OpenStack. This gives customers the flexibility to keep using existing tools and processes if they wish. In this context, we treat the vCenter Servers as the source of truth: we poll them on a regular basis, then translate and write the new metadata to the OpenStack database.
  • Discovery of vSphere templates – Platform9 treats vSphere VM templates as first-class citizens. It discovers any existing VM templates located on managed datastores and “imports” them into the Platform9 Glance image catalog. We don’t actually convert and copy the templates into a Glance repository; rather, we store the metadata in Glance, including pointers to the hosting datastore. These templates are then available for deploying new virtual machines through Platform9 OpenStack. This is important given customers’ existing investment in building vSphere templates.
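To make the polling behavior concrete, here is a minimal, self-contained sketch of the reconciliation step: vCenter is treated as the source of truth, and any VMs it reports that are missing from the OpenStack database are imported as metadata. The function names and record fields are illustrative, not Platform9’s actual code:

```python
def discover_new_vms(vcenter_vms, openstack_vms):
    """Return VM records reported by vCenter but missing from OpenStack.

    Both arguments are iterables of dicts carrying a stable identifier
    (here, the vSphere instance UUID).
    """
    known = {vm["uuid"] for vm in openstack_vms}
    return [vm for vm in vcenter_vms if vm["uuid"] not in known]

def import_vm(db, vm):
    """Persist discovered metadata so the control services can manage the
    VM as though it had been provisioned through the OpenStack APIs."""
    db[vm["uuid"]] = dict(vm)

def poll_once(fetch_vcenter_vms, db):
    """One polling cycle: import everything vCenter knows that the
    OpenStack database doesn't. Returns the newly imported records."""
    new_vms = discover_new_vms(fetch_vcenter_vms(), db.values())
    for vm in new_vms:
        import_vm(db, vm)
    return new_vms
```

A VM created directly in vCenter between two cycles shows up in the next cycle’s result and is written to the database, after which it is managed like any other OpenStack instance.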
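The template import works the same way: only metadata lands in Glance, with a pointer back to the datastore that holds the template. Below is a hypothetical sketch of building such a record; the property keys are illustrative (though `vmdk` and `bare` are standard Glance disk and container formats):

```python
def template_to_glance_record(template):
    """Build a metadata-only Glance image record for a discovered vSphere
    template. No disk bits are copied; the record points back at the
    template's hosting datastore. Field names here are illustrative."""
    return {
        "name": template["name"],
        "disk_format": "vmdk",        # standard Glance disk format for vSphere
        "container_format": "bare",
        "properties": {
            # Pointers back into vSphere, instead of a copy of the template:
            "vmware_template_moid": template["moid"],
            "vmware_datastore": template["datastore"],
        },
    }
```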

We are also doing significant work around OpenStack networking and leveraging the vSwitch and Distributed vSwitch technologies.

With these dynamic discovery capabilities, we have customers who have transformed their legacy virtual infrastructure (KVM or vSphere) into a self-service private cloud in minutes. However, we don’t think these capabilities are useful only for Platform9 customers, but for the entire OpenStack ecosystem. As such, we want to upstream our code that provides these additional capabilities to the OpenStack project.

As a first step, we will be giving a talk at the upcoming OpenStack Summit in Tokyo titled, “Making OpenStack Work In An Existing Environment – Challenges And Solutions.” We encourage all who are interested to attend as we lead a discussion on the value of these capabilities to customer adoption of OpenStack and what additional similar capabilities we would want to add to the project. We welcome feedback and look forward to working with this great community.

If you’re ready to get started with OpenStack on VMware, check out our free trial.
