Comparison of OpenStack Implementation & Consumption Options

You’ve decided to move forward with OpenStack implementation to build a private cloud! Congratulations – you are in good company. According to Wikibon, true private cloud (on-premises and hosted together) is expected to grow from $7 billion in 2015 to $201 billion in 2026 (a 36% CAGR) and to account for 31% of infrastructure spend. At last month’s OpenStack Summit, the OpenStack Foundation said that half of the Fortune 100 use OpenStack. Wal-Mart, AT&T, SAP and Wells Fargo all spoke about how they’re using open source code to build their private clouds.

All that said, there is another equally important decision you need to make: how will you implement and consume your OpenStack private cloud?

There are many choices to make when deploying OpenStack – you may at some point feel like Tom Hanks in his Starbucks rant in You’ve Got Mail. Nonetheless, you’ll want a deep understanding of the private cloud consumption choices available to you, specifically as they pertain to stability, security, and interoperability with your existing enterprise datacenter infrastructure.

OpenStack Summit, most recently held in Austin, drew thousands of cloud professionals and OpenStack contributors, and included breakout sessions covering business cases and in-the-field experience directly from users. One particularly interesting panel explored the pros and cons of the different OpenStack consumption models. The three speakers – Red Hat’s Jonathan Gershater, EMC’s VS Joshi and Platform9’s Cody Hill – joined IDC Research Director Ashish Nadkarni to share their perspectives on OpenStack deployment models such as software-only, appliances, and OpenStack-as-a-service.

Here’s a video of the session, along with a quick recap of the highlights:

What are the OpenStack consumption models?

VS Joshi quickly explained the various options as:

  1. A DIY method, where you download the OpenStack software and build it yourself. This is especially useful for companies that want a high degree of customization in their private cloud and have the engineering staff to handle implementation, support, testing, and so on (a minimal example of the APIs such a team ends up working with appears after this list).
  2. A software-only model, where you buy a distribution from one of the many available vendors who manage, test, and package OpenStack for your architects. This saves your team the effort of creating the back-end services needed to support cloud resources. It is slightly more hands-on than the appliance option and slightly less flexible than complete DIY.
  3. The appliance method, which VS calls “OpenStack in a box” or an “OpenStack-powered rack.” This is an off-the-shelf solution and probably the most hands-off approach to private cloud implementation. However, as Jonathan points out, companies usually invest in an open source solution like OpenStack to develop vendor-agnostic capabilities, and the appliance model reintroduces “lock-in”: scaling means buying more of the same hardware.
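Whichever of these models you choose, the resulting cloud is ultimately consumed through the same OpenStack APIs. As a rough point of reference, here is a minimal sketch using the openstacksdk Python library to boot an instance; the cloud entry, image, flavor, and network names are placeholders for illustration rather than anything tied to a particular distribution.

    # Minimal sketch: boot a VM through the standard OpenStack compute,
    # image, and network APIs (pip install openstacksdk).
    import openstack

    # "private-cloud" is a placeholder entry in clouds.yaml pointing at
    # your Keystone endpoint, whichever deployment model sits behind it.
    conn = openstack.connect(cloud="private-cloud")

    image = conn.image.find_image("cirros")         # placeholder image name
    flavor = conn.compute.find_flavor("m1.small")   # placeholder flavor
    network = conn.network.find_network("app-net")  # placeholder network

    server = conn.compute.create_server(
        name="demo-vm",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.name, server.status)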

At this point, Cody pointed out an important fourth option – OpenStack-as-a-service – that needs to be taken into account when discussing OpenStack deployment scenarios.

Why is OpenStack-as-a-service important?

For many organizations, a greenfield OpenStack deployment is simply not practical; they need to utilize and optimize their existing VMware and KVM environments. With an OpenStack-as-a-service model, companies can onboard their existing networks, vSphere templates, and KVM bridges, and then run and maintain a single distribution.
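To make that concrete, here is a rough sketch, again using the openstacksdk Python library, of what that looks like once existing KVM hosts or vSphere capacity have been onboarded into a hosted control plane: the hypervisors and existing workloads simply show up through the standard OpenStack compute API. The “mycloud” entry and everything else below are placeholders for illustration, not a reference to any particular product’s workflow.

    # Inventory existing capacity and workloads through the standard
    # OpenStack compute API after onboarding (pip install openstacksdk).
    import openstack

    # "mycloud" is a placeholder clouds.yaml entry for the hosted
    # OpenStack control plane.
    conn = openstack.connect(cloud="mycloud")

    # Onboarded KVM hosts (or vSphere capacity surfaced as hypervisors)
    # are listed next to any newly added nodes.
    for hv in conn.compute.hypervisors(details=True):
        print("hypervisor:", hv.name, hv.state)

    # Pre-existing VMs discovered during onboarding appear as regular
    # Nova servers and can be managed like any other instance.
    for server in conn.compute.servers():
        print("server:", server.name, server.status)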

Attendee Questions

The audience had a number of questions for the speakers during the session:

  • Michael from American Airlines offered an interesting perspective: he needs to support the deployment of multiple apps for end users while remaining compliant with SOX, PCI, and HIPAA guidelines. His key challenge is making the OpenStack private cloud work seamlessly with the legacy assets elsewhere in the IT infrastructure. According to Cody, the solution lies in creating a service layer that can manage and integrate the different kinds of hardware, hypervisors, and storage vendors an enterprise like American Airlines may use.
  • Another attendee spoke about his experience implementing OpenStack in a DIY mode, saying it turned out to be quite challenging when the distribution had to be supported by in-house developers. He even joked that their main developer was in such high demand, he had to seek permission before leaving the building!
  • Marcus from Demand Media mentioned they were currently using an early release of Metacloud and wanted to move to running OpenStack in DIY mode. He wanted the panelists’ advice on the best approach to this transition. The answers were brief but hard-hitting: VS suggested they find a friend who’s done it before, Cody recommended focusing on developing a deep talent pool, and Jonathan mentioned they may want to hire specialists.

Spanning the Resource Gap

The session wrapped up with some discussion of how companies should fill the skills gap involved in managing growing OpenStack infrastructure needs. Cody summarized with the notion that a managed OpenStack solution like Platform9 can close the gap because you don’t need an army of OpenStack experts to manage it. Instead, the OpenStack control plane is hosted in the cloud, with deployment, patching, monitoring, and upgrades being the responsibility of Platform9.

Read more about the pros and cons of the different OpenStack implementation models in our “Tech Guide to OpenStack Deployment Models”.

