Container Orchestration Tools: Compare Kubernetes vs Mesos

This post has been updated; see Kubernetes vs Mesos + Marathon.

In a previous blog we discussed why you may need a container orchestration tool. Continuing the series, in this blog post we’ll give an overview of Kubernetes and Mesos and compare the two.


Overview of Kubernetes

According to the Kubernetes website, “Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.” Kubernetes was built by Google based on their experience running containers in production over the last decade. The diagram below shows the Kubernetes architecture, followed by an explanation of the major components.

Figure: Kubernetes architecture

The major components in a Kubernetes cluster are:

  • Pods – Kubernetes deploys and schedules containers in groups called pods. A pod will typically include 1 to 5 containers that collaborate to provide a service.
  • Flat Networking Space – The default network model in Kubernetes is flat and permits all pods to talk to each other. Containers in the same pod share an IP and can communicate using ports on the localhost address.
  • Labels – Labels are key-value pairs attached to objects and can be used to search and update multiple objects as a single set.
  • Services – Services are endpoints that can be addressed by name and can be connected to pods using label selectors. The service will automatically round-robin requests between the pods. Kubernetes will set up a DNS server for the cluster that watches for new services and allows them to be addressed by name.
  • Replication Controllers – Replication controllers are the way to instantiate pods in Kubernetes. They control and monitor the number of running pods for a service, improving fault tolerance. The example manifest below shows how labels tie a replication controller and a service together.
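
As a minimal sketch (the names, image, and port here are placeholders rather than anything from the original post), a replication controller and a service can be tied together with a label selector like this:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web-rc                # hypothetical name
    spec:
      replicas: 3                 # keep three copies of the pod running
      selector:
        app: web                  # manage any pod carrying this label
      template:
        metadata:
          labels:
            app: web              # label attached to every pod created
        spec:
          containers:
          - name: web
            image: nginx          # placeholder container image
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web-svc               # addressable by name through cluster DNS
    spec:
      selector:
        app: web                  # round-robins requests across matching pods
      ports:
      - port: 80

Creating both objects (for example with kubectl create -f web.yaml) gives other pods a stable web-svc name that the cluster DNS server resolves to the service.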

Overview of Mesos (+Marathon)

Apache Mesos is an open-source cluster manager designed to scale to very large clusters, from hundreds to thousands of hosts. Mesos supports diverse kinds of workloads, such as Hadoop tasks and cloud native applications. The architecture of Mesos is designed around high availability and resilience.

Figure: Mesos architecture

Credit: http://mesos.apache.org/documentation/latest/architecture/

The major components in a Mesos cluster are:

  • Mesos Agent Nodes – Responsible for actually running tasks. All agents submit a list of their available resources to the master. 
  • Mesos Master – The master is responsible for sending tasks to the agents. It maintains a list of available resources and makes “offers” of them to frameworks e.g. Hadoop. The master decides how many resources to offer based on an allocation strategy. There will typically be stand-by master instances to take over in case of a failure.
  • ZooKeeper – Used in elections and for looking up the address of the current master. Multiple instances of ZooKeeper are run to ensure availability and handle failures.
  • Frameworks – Frameworks coordinate with the master to schedule tasks onto agent nodes. Frameworks are composed of two parts:
    • the executor process, which runs on the agents and takes care of running the tasks, and
    • the scheduler, which registers with the master and selects which resources to use based on the master’s offers.

There may be multiple frameworks running on a Mesos cluster for different kinds of tasks. Users wishing to submit jobs interact with frameworks rather than directly with Mesos.
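
Although jobs are submitted through a framework, the master’s read-only HTTP endpoint is a quick way to see what it currently knows about registered frameworks, agents, and their resources. A rough sketch, assuming the classic state endpoint (the master address is illustrative; 5050 is the master’s default port):

    # List registered frameworks, agents, offered resources, and running tasks
    curl http://mesos-master.example.com:5050/state.json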

In the figure above, a Mesos cluster is running with the Marathon framework as the scheduler. The Marathon scheduler uses ZooKeeper to locate the current Mesos master, to which it will submit tasks. Both the Marathon scheduler and the Mesos master have stand-bys ready to take over should the current master become unavailable.

Marathon, created by Mesosphere, is designed to start, monitor and scale long-running applications, including cloud native apps. Clients interact with Marathon through a REST API. Other features include support for health checks and an event stream that can be used to integrate with load-balancers or for analyzing metrics.
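
To make the REST API concrete, here is a hedged sketch of submitting a long-running app with a health check; the app ID, command, and Marathon address are made up for illustration, and $PORT0 refers to the first port Marathon allocates to each task instance:

    # Write an app definition, then POST it to Marathon (default REST port 8080)
    cat > web.json <<'EOF'
    {
      "id": "/web",
      "cmd": "python3 -m http.server $PORT0",
      "cpus": 0.25,
      "mem": 128,
      "instances": 2,
      "healthChecks": [
        {
          "protocol": "HTTP",
          "path": "/",
          "gracePeriodSeconds": 30,
          "intervalSeconds": 10,
          "maxConsecutiveFailures": 3
        }
      ]
    }
    EOF
    curl -X POST -H "Content-Type: application/json" \
         -d @web.json http://marathon.example.com:8080/v2/apps

Marathon keeps two instances of the app running, reschedules any instance whose HTTP health check fails repeatedly, and publishes lifecycle changes on its event stream.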

See the Making Sense of the Container Ecosystem eBook for more details on options for container orchestration.

Comparison Between Kubernetes and Mesos(+Marathon)

For each area below, the Kubernetes approach is listed first and the Mesos (+Marathon) approach second.

Types of Workloads
  • Kubernetes: Cloud native applications.
  • Mesos: Supports diverse kinds of workloads, such as big data and cloud native applications.

Application Definition
  • Kubernetes: Applications are defined through a combination of Pods, Replication Controllers, Replica Sets, Services, and Deployments. As explained in the overview above, a pod is a group of co-located containers and the atomic unit of deployment. Pods do not express dependencies among the individual containers within them. Containers in a single pod are guaranteed to run on the same Kubernetes node.
  • Mesos: An "Application Group" models dependencies as a tree of groups, and components are started in dependency order. Colocation of a group's containers on the same Mesos slave is not supported. A Pod abstraction is on the roadmap, but not yet available.

Application Scalability Constructs
  • Kubernetes: Each application tier is defined as a pod and can be scaled when managed by a Deployment or Replication Controller. Scaling can be manual or automated.
  • Mesos: An individual group can be scaled; its dependents in the tree are scaled along with it.
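
As a hedged illustration of manual scaling (the resource names are hypothetical and carried over from the earlier sketches), Kubernetes changes a replica count through kubectl, while Marathon updates the instances field of an existing app via its REST API:

    # Kubernetes: scale the replication controller behind one application tier
    kubectl scale rc web-rc --replicas=5

    # Marathon: scale the app "/web" by updating its instance count
    curl -X PUT -H "Content-Type: application/json" \
         -d '{"instances": 5}' \
         http://marathon.example.com:8080/v2/apps/web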

High Availability
  • Kubernetes: Pods are distributed among worker nodes. Services are also highly available: unhealthy pods are detected and removed from the load-balancing rotation.
  • Mesos: Applications are distributed among slave nodes.

Load Balancing
  • Kubernetes: Pods are exposed through a Service, which can act as a load balancer.
  • Mesos: Applications can be reached via Mesos-DNS, which can act as a rudimentary load balancer.

Auto-scaling for the Application
  • Kubernetes: Auto-scaling using a simple number-of-pods target is defined declaratively with the API exposed by Replication Controllers. A CPU-utilization-per-pod target is available as of v1.1 in the Scale subresource; other targets are on the roadmap.
  • Mesos:
    • Load-sensitive autoscaling is available as a proof-of-concept application.
    • Rate-sensitive autoscaling is available for Mesosphere’s enterprise customers.
    • Rich metric-based scaling policy.
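
On the Kubernetes side, the CPU-based target described above can also be set up imperatively with kubectl; a minimal sketch, assuming the hypothetical web-rc controller from earlier (the thresholds are placeholders):

    # Keep between 2 and 10 pods, targeting 80% CPU utilization per pod
    kubectl autoscale rc web-rc --min=2 --max=10 --cpu-percent=80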

Rolling Application Upgrades and Rollback
  • Kubernetes: The "Deployment" model supports update strategies, and a strategy similar to the Mesos approach is planned for the future. Health checks test for liveness, i.e. whether the app is responsive.
  • Mesos: The "rolling restarts" model uses an application-defined minimumHealthCapacity (the ratio of nodes serving the new/old application). "Health check" hooks consume a "health" API provided by the application itself.

Logging and Monitoring
  • Kubernetes:
    • Health checks of two kinds: liveness (is the app responsive?) and readiness (is the app responsive, or still busy preparing and not yet able to serve?).
    • Logging: container logs are shipped to an Elasticsearch/Kibana (ELK) service deployed in the cluster.
    • Resource usage monitoring: a Heapster/Grafana/InfluxDB service deployed in the cluster.
    • The logging and monitoring add-ons are part of the official project, and a Sysdig Cloud integration is available.
  • Mesos:
    • Logging: can use ELK.
    • Monitoring: can use external tools.
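
The two Kubernetes probe kinds live in a pod’s container spec; a small sketch (paths, port, and timings are placeholders):

    # Container spec fragment from a pod template
    livenessProbe:              # restart the container if this check fails
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
    readinessProbe:             # keep the pod out of service endpoints until it passes
      httpGet:
        path: /ready
        port: 80
      periodSeconds: 5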

Storage
  • Kubernetes: Two storage APIs. The first provides abstractions for individual storage backends (e.g. NFS, AWS EBS, Ceph, Flocker). The second provides an abstraction for a storage resource request (e.g. 8 GB), which can be fulfilled with different storage backends. Modifying the storage resource used by the Docker daemon on a cluster node requires temporarily removing the node from the cluster.
  • Mesos: A Marathon container can use persistent volumes, but such volumes are local to the node where they are created, so the container must always run on that node. An experimental Flocker integration supports persistent volumes that are not local to one node.
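
The second Kubernetes storage API mentioned above, the resource-request abstraction, looks roughly like this (the claim name and size are placeholders):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: web-data            # hypothetical claim name
    spec:
      accessModes:
      - ReadWriteOnce           # mountable read-write by a single node
      resources:
        requests:
          storage: 8Gi          # the request; any matching backend can satisfy it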

Networking
  • Kubernetes: The networking model lets any pod communicate with any other pod and with any service. The model requires two networks (one for pods, the other for services); neither network is assumed to be, or needs to be, reachable from outside the cluster. The most common way of meeting this requirement is to deploy an overlay network on the cluster nodes.
  • Mesos: Marathon's Docker integration facilitates mapping container ports to host ports, which are a limited resource. A container does not get its own IP by default, but it can if Mesos is integrated with Calico. Even so, multiple containers cannot share a network namespace (i.e. they cannot talk to one another on localhost).

Service Discovery
  • Kubernetes: Pods discover services using intra-cluster DNS.
  • Mesos: Containers can discover services using DNS or a reverse proxy.
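
For example, assuming the default cluster domains on both sides (cluster.local for the Kubernetes DNS add-on, .mesos for Mesos-DNS) and the hypothetical names used earlier, lookups from inside each cluster would look like:

    # Kubernetes: a service named "web-svc" in the "default" namespace
    nslookup web-svc.default.svc.cluster.local

    # Mesos-DNS: a Marathon app with id "/web"
    nslookup web.marathon.mesos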

Performance and Scalability
  • Kubernetes: With the release of 1.2, Kubernetes now supports 1,000-node clusters. Kubernetes scalability is benchmarked against the following Service Level Objectives (SLOs): API responsiveness (99% of all API calls return in less than 1s) and pod startup time (99% of pods and their containers, with pre-pulled images, start within 5s).
  • Mesos: Mesos has been simulated to scale to 50,000 nodes, although it is not clear how far that scale has been pushed in production environments. Mesos can run LXC or Docker containers directly from the Marathon framework, or it can fire up Kubernetes or Docker Swarm (the Docker-branded container manager) and let them do it.

Mesos can work with multiple frameworks, and Kubernetes is one of them: to give you more choice, it is also possible to run Kubernetes on Mesos if that meets your use case.

In a follow-up blog, we’ll compare Kubernetes with Docker Swarm.
