We recently announced the beta release of Platform9 Managed Kubernetes. Kubernetes is an open source container orchestration tool, originally designed by Google. The “managed” experience means Platform9 handles all the nitty-gritty details of Kubernetes deployment, configuration, ongoing monitoring, troubleshooting and upgrades. What’s more, container orchestration tools provide built-in support for a number of pain points developers face while building production applications, such as service discovery, load balancing, rolling upgrades, health monitoring and auto-scaling.
We often get asked why we chose Kubernetes over the other popular container orchestration tools. In this series of blog posts, we’ll walk you through the reasons why you may need container orchestration tools in the first place, and then compare the most popular ones: Kubernetes, Docker Swarm and Mesos.
Containers make it easy to run cloud-native applications on physical or virtual infrastructure. They are lighter weight than VMs and make more efficient use of the underlying infrastructure. Containers are designed to make it easy to turn applications on and off to meet fluctuating demand, and to move applications seamlessly between environments or even clouds. While the container runtime APIs meet the needs of managing one container on one host, they are not suited to managing many containers deployed across many hosts. This is where container orchestration tools come in.
Container orchestration tools can manage complex, multi-container applications deployed on a cluster of machines. These tools treat an entire cluster as a single entity for deployment and management, automating everything from initial placement, scheduling and deployment to updates and the health monitoring functions that support scaling and failover.
Capabilities of Container Orchestration Tools
Here are some of the main capabilities that a modern container orchestration platform will typically provide:
Provisioning and Scheduling
Container orchestration tools can provision, schedule and launch containers within the cluster. As part of this, the tool determines the right placement for each container by selecting an appropriate host based on specified constraints such as resource requirements and location affinity. The underlying goal is to maximize utilization of the available resources. Most tools are agnostic to the underlying infrastructure provider and, in theory, should be able to move containers across environments and clouds.
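In Kubernetes, for example, these placement constraints are expressed declaratively on each container. Here is a minimal sketch (the label, image and resource figures are illustrative, not prescriptive):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  nodeSelector:
    disktype: ssd          # location affinity: only hosts labeled disktype=ssd
  containers:
  - name: web
    image: nginx:1.25      # illustrative image
    resources:
      requests:
        cpu: "250m"        # scheduler picks a host with this much spare CPU
        memory: "128Mi"    # ...and this much spare memory
```

The scheduler weighs these constraints against the current state of every host to find a placement that keeps overall utilization high.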
Declarative Configuration
Container orchestration tools can load an application blueprint from a schema defined in YAML or JSON. These are popular formats for defining infrastructure-as-code, similar to OpenStack Heat templates, Puppet manifests or Chef recipes. Defining the blueprint in this manner makes it easy for DevOps teams to edit, share and version the configuration, and provides repeatable deployments across development, testing and production.
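As a concrete sketch, a Kubernetes application blueprint (a Deployment) in YAML might look like the following; the names and image here are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                 # desired number of identical containers
  selector:
    matchLabels:
      app: frontend
  template:                   # pod template stamped out for each replica
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: example/frontend:1.0   # hypothetical image name
        ports:
        - containerPort: 8080
```

Because this file fully describes the desired state, it can be checked into version control and applied unchanged across development, testing and production.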
Health Monitoring
Container orchestration tools track and monitor the health of the containers and hosts in the cluster. If a container crashes, a new one can be spun up quickly. If a host fails, the tool restarts the failed containers on another host. It also runs specified health checks at the appropriate frequency and updates the list of available nodes based on the results. In short, the tool ensures that the current state of the cluster matches the specified configuration.
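In Kubernetes, those health checks are defined per container as probes. A sketch, assuming a hypothetical HTTP health endpoint (the path, port and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx:1.25          # illustrative image
    livenessProbe:             # container is restarted if this check fails
      httpGet:
        path: /healthz         # hypothetical health endpoint
        port: 80
      initialDelaySeconds: 10  # grace period after startup
      periodSeconds: 15        # check frequency
```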
Rolling Upgrades and Rollback
One of the most desired capabilities of an orchestration tool is to perform ‘rolling upgrades’ of the application, where a new version is applied incrementally across the cluster. Traffic is routed appropriately as containers go down temporarily to receive the update. A rolling update guarantees a minimum number of “ready” containers at any point, so that old containers are not all replaced if there aren’t enough healthy new containers to replace them. If the new version doesn’t perform as expected, the orchestration tool can also roll back the applied change.
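In Kubernetes, for instance, these guarantees are tunable on a Deployment. A sketch (the replica count and bounds are illustrative):

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # never take down more than one replica at a time
      maxSurge: 1         # allow at most one extra replica during the update
```

If the new version misbehaves, `kubectl rollout undo deployment/<name>` reverts the Deployment to its previous revision.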
Policies for Placement, Scalability etc.
Container orchestration tools provide a way to define policies for host placement, security, performance and high availability. When configured correctly, container orchestration platforms can enable organizations to deploy and operate containerized application workloads in a secure, reliable and scalable way. For example, an application can be scaled up automatically based on CPU usage of the containers.
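In Kubernetes, that CPU-based policy can be written as a HorizontalPodAutoscaler. A sketch, assuming a Deployment named `frontend` and a 70% CPU target (both illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2                  # never scale below this
  maxReplicas: 10                 # never scale above this
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70    # add replicas when average CPU exceeds 70%
```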
Service Discovery
Since containers encourage a microservices-based architecture, service discovery becomes a critical function. Container orchestration platforms provide it in different ways, e.g. DNS-based or proxy-based, so that, for example, a web application front-end can dynamically discover another microservice or a database.
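As a sketch of the DNS-based approach in Kubernetes, a Service gives a set of containers a stable name (the service name, label and port below are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders-db          # hypothetical service name
spec:
  selector:
    app: orders-db         # routes traffic to pods carrying this label
  ports:
  - port: 5432             # port clients connect to
```

Any container in the cluster can then reach the database simply as `orders-db` via the cluster’s built-in DNS, regardless of which host the backing containers land on.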
Ease of Administration
Container orchestration tools themselves should be easy for administrators to deploy, configure and set up. They should connect to existing IT tools for SSO, RBAC etc. An extensible architecture will connect to external systems such as local or cloud storage, networking systems etc.
This was a brief overview of the importance of choosing the right container orchestration tool to deploy and manage cloud-native applications. In follow-up posts, we’ll discuss and compare some of the popular tools, such as Kubernetes vs Mesos and Kubernetes vs Docker Swarm. Meanwhile, you can try the beta release of Managed Kubernetes today!