The Platform9 team is gearing up for the Barcelona OpenStack Summit, and we’re excited to participate! We’ve proposed a handful of talks and would appreciate your vote to include them on the conference agenda. Please see the session abstracts below and vote on the summit website.
See you in Barcelona!
Voting closes on Monday, August 8 at 11:59 PM PT (Tuesday, August 9 at 6:59 AM UTC). Please make sure to cast your votes by then.
Why we chose OpenStack as our hybrid cloud management platform
AVG Technologies is a leader in internet security software, protecting devices for families and businesses worldwide. To support a high-velocity software development model, the AVG team needed to leverage both internal infrastructure and public cloud while accelerating CI/CD and automation through a common API. It was clear to them that OpenStack could fully manage their private infrastructure across VMware and KVM, and integrate with Cisco ACI. But what about the need for a single platform across public and private clouds? Other hybrid CMPs were an option, but they overlap heavily with OpenStack in functionality, and the AVG team preferred the OpenStack API for its compatibility with the DevOps tools they use. So they asked themselves: why not consolidate on OpenStack? This talk will highlight how AVG collaborated within the community to build AWS-specific drivers for OpenStack and deliver a fully hybrid implementation at AVG.
Speakers: Mike Cohen (Cisco), Blake Parker (AVG), Madhura Maskasky (Platform9)
Case Study: Building a fully automated developer workflow using OpenStack on VMware
Alert Logic is a leader in security and compliance solutions for the cloud, providing Security-as-a-Service for on-premises, cloud, and hybrid infrastructures powering enterprises worldwide. The IT team inside Alert Logic standardized on VMware as their private infrastructure of choice for its stability and performance. However, they had to build a significant amount of automation in-house (a Ruby on Rails application) to automate deployment of workloads on a variety of platforms: VMware, AWS, LXC, etc. All of their deployment tools interacted directly with VMware vCenter, and the automation still did not remove the need for end users to have direct access to vCenter for console access and other use cases. This conflicted with Alert Logic’s policy of doing everything as-a-service and enabling a self-service model for users. This talk describes Alert Logic’s use case of building powerful automation with OpenStack on VMware, using Ruby on Rails with Chef to automate their developer workflows.
Speakers: Michael Wiggin (Alert Logic), James Poore (Alert Logic), Madhura Maskasky (Platform9)
Case Study: Running OpenStack on a bare metal provider – Packet.net
Renting hardware instead of owning it is becoming increasingly popular — and providers like Rackspace, SoftLayer, OVH, and Packet enable building private clouds on rented hardware. Using a bare metal provider as the foundation of a private cloud can save you the hassle of owning hardware while allowing for highly automated private cloud environments and strong operational models.
This talk presents a case study of deploying OpenStack on Packet (www.packet.net) and dives into key deployment considerations and technical hurdles. You will learn about the security issues and networking models involved in deploying OpenStack on a public bare metal provider, and the solutions Platform9 (www.platform9.com) employed to overcome some of these challenges. Using public bare metal providers can be beneficial in terms of both cost and flexibility.
Speakers: Zachary Smith (Packet), Roopak Parikh (Platform9)
Virtual Machine HA On OpenStack
Many workloads in today’s datacenters are like Pets: their lifespan is measured in years, and they run mission-critical services that must be up 24×7. Modern, cloud native applications, by contrast, are designed to tolerate failures; the cloud methodology encourages a Cattle approach that treats VMs as commodities. In the near future, Pet workloads will coexist with Cattle for many reasons: (1) during the transition from a datacenter environment to the cloud, a mix of workloads prevails; (2) legacy applications cannot be easily converted to the cloud paradigm. Given the state of enterprise workloads, Virtual Machine High Availability (VM HA) is a must-have capability.
In the OpenStack community, HA solutions typically deploy Pacemaker with Corosync. At Platform9, we chose Consul to implement a VM HA solution and found it easy to adopt and scale. In this talk we present this feature, our experience running Pet VMs with HA in production, and the metrics used to measure the effectiveness of VM HA.
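To illustrate the general idea (this is a simplified sketch, not Platform9’s actual implementation), the core of a Consul-based VM HA loop is deciding which hypervisors have failed from Consul’s health-check data. The function below mimics the payload shape of Consul’s `/v1/health/service/<service>` endpoint; the service name and label scheme are assumptions for the example:

```python
# Hypothetical sketch: pick out hypervisors whose Consul health checks
# are failing, so their VMs can be evacuated elsewhere. Illustrative
# only -- not Platform9's production code.

def failed_hosts(health_entries):
    """Return node names with at least one non-passing Consul check.

    `health_entries` mimics the response of Consul's
    /v1/health/service/<service> endpoint: a list of dicts, each with
    a 'Node' record and a list of 'Checks'.
    """
    failed = []
    for entry in health_entries:
        if any(check["Status"] != "passing" for check in entry["Checks"]):
            failed.append(entry["Node"]["Node"])
    return failed

# Example payload: one healthy and one failed compute node.
entries = [
    {"Node": {"Node": "compute-1"},
     "Checks": [{"Name": "serfHealth", "Status": "passing"}]},
    {"Node": {"Node": "compute-2"},
     "Checks": [{"Name": "serfHealth", "Status": "critical"}]},
]

print(failed_hosts(entries))  # ['compute-2']
```

In a real deployment, a watcher process would poll (or long-poll) Consul for this data and trigger Nova’s evacuate workflow for each failed host.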
How to embrace hybrid cloud without giving up OpenStack
For enterprises running an OpenStack-based private cloud, adding public cloud resources is an attractive proposition. A hybrid cloud offers the pay-as-you-go flexibility of a public cloud combined with the security and privacy of a private cloud. However, complexity poses a barrier to adoption.
In this talk, we show how to use the OpenStack API layer as the unifying interface for hybrid-cloud management. We found that this approach solves a number of use cases for managing workloads on public cloud like AWS. It enables uniform experience, simplicity and portability for cloud tooling and applications.
We have extended the OpenStack control plane by building AWS-specific drivers for Nova, Glance, Cinder, and Neutron. We added Platform9’s brownfield discovery technology to further ease the ingestion of AWS workloads into an OpenStack-driven cloud. This approach can easily be extended to other clouds as well, making the holy grail of multi-cloud a reality.
Speaker: Sachin Manpathak (Platform9)
Panel: Pros and cons of various OpenStack consumption models
Join us for this panel, where we discuss the various ways organizations can reap the benefits of OpenStack. The panel will cover the consumption and delivery models available from different suppliers, such as “roll your own”, appliance, software-only, and OpenStack-as-a-service. We will discuss the pros and cons of each model and the questions every organization should ask before embracing a particular one.
Coordinator: Cody Hill (Platform9)
GlusterFS as storage for OpenStack: Pros and cons and how to get started
GlusterFS is often overlooked as a backend storage solution for OpenStack. Yet it’s the obvious choice over many others for KVM deployments. How can this be? In this discussion I’d like to touch on:
- What use-cases is GlusterFS great for?
- What use-cases is GlusterFS just OK for?
- What use-cases is GlusterFS a bad choice for?
- Performance metrics for: Ephemeral, Block, & Object Storage
- What are the architecture decisions that need to be made when deploying GlusterFS?
- How do you monitor GlusterFS to ensure it is healthy?
- How do you get started?
Speaker: Cody Hill (Platform9)
Case Study: How TechAccelerator used OpenStack as a better alternative to vCloud Director
With vCloud Director approaching the end of its life, enterprises are increasingly looking at alternatives to satisfy their cloud goals atop existing VMware infrastructure investments. One such company is TechAccelerator – a solutions provider that empowers enterprises with an easy-to-use, on-demand, hands-on platform to speed up product adoption and solution integration. Given the rate at which TechAccelerator’s lab environments need to be dynamically spun up and destroyed, it is also important to be able to deploy and clean them up with a single click.
Speaker: Cody Hill (Platform9)
One Cloud. OpenStack. Kubernetes. One Hour. A Managed OpenStack Success Story
OpenStack is hard. At least it has been for some time. It requires resources to manage and expert knowledge on-site, and downtime remains a nagging fear. With the proliferation of containers, cloud teams now have to manage two systems, with 4X (2^2) the worry. Indeed, the “OpenStack or containers?” question consumes further resources in unproductive debate.
No longer. Dell, Platform9, and Midokura have teamed up to deliver a solution that is easy to deploy, leverages SDN, minimizes resource overhead, reduces TCO, and best of all, unifies OpenStack and Kubernetes workload management within a single interface.
OpenStack storage panel featuring customers and field architects
- What types of storage are these and other customers using?
- What workloads are benefitting from OpenStack storage?
- What were some storage design challenges encountered?
- What were some takeaways upon overcoming these challenges?
- What are some DO’s and DON’T’s for OpenStack storage?
- What are the cost savings and efficiency gains using OpenStack Cinder, Manila, and Swift?
- How have customers planned for disasters and backups?
- What public clouds are customers using?
Speakers: Marc Koderer (SAP), Ed Balduf (SolidFire/NetApp), Kapil Arora (NetApp), Akshai Parthasarathy (NetApp), Paavan Shanbhag (Platform9)
Simple scale test for OpenStack control plane
Scale testing the OpenStack controller with physical or virtual hosts is not always feasible and is bounded by the resources available. In this talk, we dive into the various steps we followed to scale test the Platform9 control plane with limited resources.
With almost every OpenStack project, it is possible to create a fake driver that creates the relevant objects either in memory or as simple files on the filesystem. Scale testing the control plane using a fake driver allows creation of many hosts that connect and consume control messages just like real hosts. This makes it possible to evaluate the performance of the various API processes, messaging queues, and databases with thousands of hosts interacting with the OpenStack control plane.
Areas that we will discuss may include enhancements made to the fake driver in Nova, and the implementation of similar fake drivers for Cinder and possibly Neutron. We will also cover performance metrics collected while running OpenStack under various load conditions.
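For context, Nova ships with a built-in fake virt driver, and switching a compute host to it is a one-line configuration change; a minimal sketch (the file path shown is the conventional one):

```ini
# /etc/nova/nova.conf on each simulated compute host
[DEFAULT]
# Use Nova's stock fake virt driver instead of libvirt/KVM.
# Instances exist only as in-memory objects, so many compute
# services can be simulated on modest hardware.
compute_driver = fake.FakeDriver
```

With this in place, each nova-compute process still registers with the message queue and database like a real hypervisor, which is what makes the scale test representative of control-plane load.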
Speakers: Pushkar Acharya (Platform9) and Sachin Manpathak (Platform9)
OpenStack and Kubernetes: One on top of the other, vice-versa, or sideways?
OpenStack and Kubernetes are leading management platforms in their respective areas: the former for general infrastructure and virtualization, the latter for container orchestration. For the growing number of organizations that have decided to take advantage of both, a question arises: what is the best way to architect and integrate the two stacks? Historically, the safe answer seemed to be to deploy Kubernetes clusters “on top” of OpenStack, using solutions such as Heat, Murano, or Magnum. Recently, though, an alternate pattern is emerging: use Kubernetes as the base layer and deploy OpenStack as an “application” on it. What are the advantages of such an approach? Furthermore, a company may have deployed independent OpenStack and Kubernetes stacks for historical or organizational reasons. How can it integrate the two to leverage their respective strengths? Finally, is it possible to use Murano as a unified catalog for both OpenStack and Kubernetes-based applications?
Speaker: Bich Le (Platform9)
Adding dynamic configuration to Kolla Kubernetes
Kolla and Kolla-Kubernetes are the two primary projects that can be used to install OpenStack using containers and the Kubernetes orchestration framework, respectively. While Kolla-Kubernetes is great, it does suffer from a few drawbacks:
- Adding a new service requires changes to the central kolla-kubernetes repository.
- Adding a service also requires creating the configuration, Ansible, and password scripts in one central location.
- There is no easy way to upgrade ‘configuration’ and services.
This talk proposes evolving the kolla-kubernetes architecture into a more service-oriented one by introducing:
a) A new Kolla-Config service that centralizes configuration.
b) A “Service-Manager” per component (Glance, Nova, etc.) that is responsible for generating and updating configuration.
The result is a much more dynamic OpenStack installation and upgrade experience.
Speakers: Roopak Parikh (Platform9), Bich Le (Platform9)
Unified application model for VMs and containers
As containers become as popular for enterprise workloads as VM instances, there is an emergent need for a unified application development paradigm across both domains. In this talk, we describe how we achieved this by creating a new Kubernetes driver for Nova. The driver is initialized with the kubeconfig for the target Kubernetes cluster. During instance provisioning, it automatically converts the given Glance image into a Docker image, injecting the cloud-init customization scripts into the Docker image so they run on startup. When creating an instance, the driver deploys a single service targeting a single pod, with a replication controller of just one replica. Heat was also modified by adding another parameter, the kubeconfig, to the AutoScalingGroup; when this parameter is present, Heat creates an appropriate ReplicationController for the instances instead. Finally, we demonstrate the use of the same Heat template to deploy a VM-based or container-based workload.
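The mapping the abstract describes, one Nova instance backed by a single-replica ReplicationController, can be sketched as a plain manifest builder. The label scheme and container spec below are illustrative assumptions, not the driver’s actual output:

```python
# Hypothetical sketch of the Nova-instance -> Kubernetes mapping the
# talk describes: one instance becomes one ReplicationController with
# exactly one replica. Labels and names are illustrative assumptions.

def rc_manifest(instance_name, docker_image):
    """Build a ReplicationController manifest backing one Nova instance."""
    labels = {"nova-instance": instance_name}
    return {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "metadata": {"name": instance_name, "labels": labels},
        "spec": {
            "replicas": 1,          # one pod per Nova instance
            "selector": labels,
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": instance_name,
                        # Image converted from the Glance image at
                        # provisioning time, cloud-init scripts baked in.
                        "image": docker_image,
                    }],
                },
            },
        },
    }

manifest = rc_manifest("web-01", "registry.example.com/web:latest")
print(manifest["spec"]["replicas"])  # 1
```

A matching single-pod Service manifest would be built the same way, using the same label selector so it targets exactly that instance’s pod.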