Kubernetes vs Mesos + Marathon

In this updated blog post we’ll compare Kubernetes with the Mesos + Marathon container orchestration solution. We’ll walk you through a high-level discussion of Kubernetes and Mesos with Marathon, look at their features, and cover their pros and cons.

Overview of Kubernetes

According to the Kubernetes website, “Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.” Kubernetes was built by Google based on its experience running containers in production with an internal cluster management system called Borg (and its successor, Omega). The Kubernetes architecture, which draws on this experience, is shown below:

As you can see from the figure above, there are a number of components associated with a Kubernetes cluster. The master node places container workloads in user pods on worker nodes or on the master itself. The other components include:

  • etcd: This component stores configuration data that can be accessed by the Kubernetes master’s API Server over a simple HTTP/JSON API.
  • API Server: This component is the management hub for the Kubernetes master node. It facilitates communication between the various components, thereby maintaining cluster health.
  • Controller Manager: This component ensures that the cluster’s desired state matches the current state by scaling workloads up and down.
  • Scheduler: This component places workloads on appropriate nodes, based on resource availability and any placement constraints.
  • Kubelet: This component receives pod specifications from the API Server and manages pods running in the host.

The following list provides some other common terms associated with Kubernetes:

  • Pods: Kubernetes deploys and schedules containers in groups called pods. Containers in a pod run on the same node and share resources such as filesystems, kernel namespaces, and an IP address.
  • Deployments: These building blocks can be used to create and manage a group of pods. Deployments can be used with a service tier for scaling horizontally or ensuring availability (a minimal example follows this list).
  • Services: These are endpoints that can be addressed by name and can be connected to pods using label selectors. The service will automatically round-robin requests between pods. Kubernetes will set up a DNS server for the cluster that watches for new services and allows them to be addressed by name. Services are the “external face” of your container workloads.
  • Labels: These are key-value pairs attached to objects. They can be used to search and update multiple objects as a single set.
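
To make these building blocks concrete, the sketch below shows a deployment of three pod replicas fronted by a service. This is a minimal illustrative example, not a configuration from this post: the name web, the nginx image, and the ports are all assumptions.

```yaml
# Minimal illustrative example: a deployment of three replicas and a service
# that load-balances traffic to them. Names, image, and ports are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # desired number of pod replicas
  selector:
    matchLabels:
      app: web                # label selector ties the deployment to its pods
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25     # illustrative container image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                  # the service round-robins requests to matching pods
  ports:
  - port: 80
    targetPort: 80
```

Applying this manifest with kubectl apply -f creates both objects; the service is then addressable by name inside the cluster.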

Overview of Mesos + Marathon

Mesos is a distributed kernel that aims to provide dynamic allocation of resources in your datacenter. Imagine that you manage the IT department of a mid-size business. You need to have workloads running on 100 nodes during the day but on only 25 after hours. Mesos can redistribute workloads so that the other 75 nodes can be powered off when they are not needed. Mesos can also provide resource sharing. In the event that one of your nodes fails, workloads can be distributed among the other nodes.

Mesos comes with a number of frameworks, application stacks that use its resource-sharing capabilities. Each framework consists of a scheduler and an executor. Marathon is a framework (or meta-framework) that can launch applications and other frameworks. Marathon can also serve as a container orchestration platform, providing scaling and self-healing for containerized workloads. The figure below shows the architecture of Mesos + Marathon.

There are a number of different components in Mesos and Marathon. The following list provides some common terms:

  • Mesos Master: This type of node enables the sharing of resources across frameworks such as Marathon for container orchestration, Spark for large-scale data processing, and Cassandra for NoSQL databases.
  • Mesos Slave: This type of node runs agents that report available resources to the master.
  • Framework: A framework registers with the Mesos master so that it can receive resource offers from the master and launch tasks on slave nodes.
  • Zookeeper: This component provides a highly available database in which the cluster can keep state, such as which master is currently active.
  • Marathon Scheduler: This component receives resource offers from the Mesos master. Offers list the CPU and memory available on slave nodes.
  • Docker Executor: This component receives tasks from the Marathon scheduler and launches containers on slave nodes (a minimal Marathon application definition is sketched after this list).
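
As a rough counterpart to the Kubernetes example above, the JSON below sketches a Marathon application definition that the Marathon scheduler would turn into tasks for the Docker executor. The application id, image, and resource figures are illustrative assumptions (JSON does not allow comments, so the assumptions are noted here), and the container syntax follows the pre-1.5 format that matches the Marathon versions discussed in this post.

```json
{
  "id": "/web",
  "instances": 3,
  "cpus": 0.5,
  "mem": 256,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx:1.25",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0 }
      ]
    }
  }
}
```

A hostPort of 0 asks Marathon to assign a host port dynamically, which ties into the bridge-mode networking discussed later in the comparison.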

Mesosphere DC/OS

Mesosphere Enterprise DC/OS builds on the Mesos distributed systems kernel, adding container and big data management along with installation tooling, user interfaces, and management and monitoring features. The diagram below shows a high-level architecture of DC/OS.

Source: DC/OS Documentation – Architecture

As shown above, DC/OS comprises package management, container orchestration (derived from Marathon), cluster management (derived from Mesos), and other components. Further details on Mesosphere DC/OS can be found in the DC/OS documentation.

Kubernetes vs. Mesos + Marathon

 

Application Definition

Kubernetes: Applications can be deployed using a combination of pods, deployments, and services. A pod is a group of co-located containers and is the atomic unit of a deployment. A deployment can have replicas across multiple nodes. A service is the “external face” of container workloads and integrates with DNS to round-robin incoming requests.
Mesos + Marathon: From the user’s perspective, an application runs as tasks that Marathon schedules onto slave nodes. To Mesos, an application is a framework, which can be Marathon, Cassandra, Spark, or others. Marathon in turn schedules containers as tasks, which are executed on slave nodes. Marathon 1.4 introduced the concept of pods (similar to Kubernetes), but this is not part of the Marathon core. Nodes can be tagged based on rack, type of attached storage, and so on, and these constraints can be used when launching Docker containers (see the sketch below).
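
The node-tagging constraints mentioned above can be expressed directly in a Marathon application definition. The snippet below is an illustrative sketch: rack_id is an assumed slave attribute and the cmd is a placeholder, while the UNIQUE operator on hostname places each instance on a different node.

```json
{
  "id": "/web",
  "cmd": "sleep 3600",
  "instances": 3,
  "constraints": [
    ["rack_id", "LIKE", "rack-1"],
    ["hostname", "UNIQUE"]
  ]
}
```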
 

Application Scalability Constructs

Kubernetes: Each application tier is defined as a pod and can be scaled when managed by a deployment, which is specified declaratively, e.g., in YAML. Scaling can be manual or automated. Pods are most useful for running co-located and co-administered helper applications, such as log shippers, checkpoint and backup agents, proxies, and adapters, though they can also be used to run vertically integrated application stacks such as LAMP (Apache, MySQL, PHP) or ELK (Elasticsearch, Logstash, Kibana).
Mesos + Marathon: The Mesos CLI or UI can be used. Docker containers can be launched using JSON definitions that specify the repository, resources, number of instances, and command to execute. Scaling up can be done through the Marathon UI, and the Marathon scheduler will distribute containers on slave nodes based on the specified criteria. Autoscaling is supported. Multi-tiered applications can be deployed using application groups.
 

High Availability

Kubernetes: Deployments allow pods to be distributed among nodes to provide HA, thereby tolerating infrastructure or application failures. Load-balanced services detect unhealthy pods and remove them. High availability of Kubernetes itself is also supported: multiple master and worker nodes can be load balanced for requests from kubectl and clients, etcd can be clustered, and API Servers can be replicated.
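
Beyond the default spreading behavior described above, Kubernetes can explicitly force replicas onto different nodes with pod anti-affinity. Anti-affinity is not discussed in this post; the sketch below is an illustrative assumption of how it could be applied to the hypothetical web deployment.

```yaml
# Illustrative: require each "web" replica to land on a different node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web
            topologyKey: kubernetes.io/hostname   # one replica per node
      containers:
      - name: web
        image: nginx:1.25
```
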
Mesos + Marathon: Containers can be scheduled without constraints on node placement, or with each container on a unique node (in which case the number of slave nodes should be at least equal to the number of containers). High availability for Mesos and Marathon is supported using Zookeeper, which provides election of Mesos and Marathon leaders and maintains cluster state.
 

Load Balancing

Kubernetes: Pods are exposed through a service, which acts as a load balancer within the cluster. Typically, an ingress resource is used for external load balancing (a sketch follows below).
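
A minimal ingress resource might look like the sketch below. The hostname, service name, and apiVersion are assumptions; older clusters used extensions/v1beta1 for ingress, while current releases use networking.k8s.io/v1.

```yaml
# Illustrative ingress: route HTTP traffic for web.example.com to the "web" service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```
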
Mesos + Marathon: Host ports can be mapped to multiple container ports, serving as a front end for other applications or end users.
 

Auto-scaling for the Application

Kubernetes: Auto-scaling using a simple number-of-pods target is defined declaratively using deployments. Auto-scaling using resource metrics is also supported (see the sketch below). Resource metrics range from CPU and memory utilization to requests or packets per second, and even custom metrics.
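
Resource-metric autoscaling is expressed with a HorizontalPodAutoscaler. The sketch below, with assumed replica bounds and CPU target, uses the autoscaling/v2 API found in current clusters; the Kubernetes 1.6 era discussed in this post exposed a simpler autoscaling/v1 version.

```yaml
# Illustrative autoscaler: keep average CPU near 60%, scaling "web" between 3 and 10 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
```
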
Mesos + Marathon: Marathon continuously monitors the number of running instances of a Docker container. If one of the containers fails, Marathon reschedules it on another slave node. Auto-scaling using resource metrics is available only through community-supported components.
 

Rolling Application Upgrades and Rollback

Kubernetes: A deployment supports both “rolling-update” and “recreate” strategies. Rolling updates can specify the maximum number of pods that may be unavailable and the maximum number that may be created above the desired count during the update (see the sketch below).
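
These limits map to the maxUnavailable and maxSurge fields of a deployment’s update strategy; the values below are illustrative assumptions.

```yaml
# Illustrative rolling-update strategy: at most one pod down and one extra pod during a rollout.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most one replica may be unavailable
      maxSurge: 1            # at most one replica above the desired count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

A failed rollout can be reverted with kubectl rollout undo deployment/web.
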
Mesos + Marathon: Rolling upgrades of applications are supported using deployments. A failed upgrade can be healed by applying an updated deployment that rolls back the changes.
 

Health Checks

Kubernetes: Health checks are of two kinds: liveness (is the application responsive?) and readiness (is the application responsive but still preparing, and not yet ready to serve traffic?).
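
In Kubernetes, the two kinds of checks are declared as probes on a container. The paths and ports below are assumptions; real values depend on the application.

```yaml
# Illustrative liveness and readiness probes on a single-container pod.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx:1.25
    livenessProbe:            # restart the container if this check keeps failing
      httpGet:
        path: /healthz
        port: 80
      periodSeconds: 10
    readinessProbe:           # withhold traffic until this check passes
      httpGet:
        path: /ready
        port: 80
      periodSeconds: 5
```
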
Mesos + Marathon: Health checks can be specified to run against an application’s tasks using a number of protocols, including HTTP and TCP.
 

Storage

Kubernetes: Two storage APIs are provided. The first offers abstractions for individual storage backends (e.g., NFS, AWS EBS, Ceph, Flocker). The second offers an abstraction for a storage resource request (e.g., 8 GiB), which can be fulfilled by different storage backends (see the sketch below). Modifying the storage resource used by the Docker daemon on a cluster node requires temporarily removing the node from the cluster. Kubernetes offers several types of persistent volumes with block or file support; examples include iSCSI, NFS, FC, Amazon Web Services, Google Cloud Platform, and Microsoft Azure. The emptyDir volume is non-persistent and can be used by a pod’s containers to read and write files.
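
On the Kubernetes side, a storage resource request takes the form of a PersistentVolumeClaim that a pod then mounts. The claim size, image, and mount path below are illustrative assumptions, not values from this post.

```yaml
# Illustrative claim for 8Gi of storage and a MySQL pod that mounts it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi            # a storage request, fulfilled by any matching backend
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: mysql
    image: mysql:8.0
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: changeme         # illustrative only; use a Secret in practice
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data
```
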
Mesos + Marathon: Local persistent volumes (beta) are supported for stateful applications such as MySQL. When needed, tasks can be restarted on the same node using the same volume. The use of external storage, such as Amazon EBS, is also in beta. At present, applications that use external volumes can only be scaled to a single instance, because a volume can attach to only one task at a time.
 

Networking

Kubernetes: The networking model is a flat network that enables all pods to communicate with one another. Network policies specify how pods are allowed to communicate (see the sketch below). The flat network is typically implemented as an overlay. The model requires two CIDRs: one from which pods get their IP addresses, and another for services.
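
A network policy might look like the sketch below, which assumes hypothetical app=web and app=frontend labels: only frontend pods may reach the web pods on port 80. Enforcement requires a network plugin that supports NetworkPolicy.

```yaml
# Illustrative policy: allow ingress to "web" pods only from "frontend" pods on TCP/80.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-web
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 80
```
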
Mesos + Marathon: Networking can be configured in host mode or bridge mode. In host mode, containers use the host’s ports, which can lead to port conflicts on any given host. In bridge mode, container ports are bridged to host ports using port mapping, and host ports can be dynamically assigned at deployment time.
 

Service Discovery

Kubernetes: Services can be found using environment variables or DNS. The kubelet adds a set of environment variables when a pod is run, supporting simple {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables as well as Docker-links-compatible variables. A DNS server is available as an add-on: for each Kubernetes service, it creates a set of DNS records, and with DNS enabled across the cluster, pods can reach services by name.

Mesos + Marathon: Services are automatically assigned DNS records by Mesos-DNS. Services can also be discovered via “named VIPs,” which are DNS records associated with IPs and ports; requests sent to a named VIP are load-balanced.

 

Performance and Scalability

Kubernetes: With the release of 1.6, Kubernetes scales to 5,000-node clusters. Multiple clusters can be used to scale beyond this limit.

Mesos + Marathon: Mesos’ two-tier architecture (with Marathon) makes it very scalable. According to Digital Ocean, Mesos and Marathon clusters have been scaled to 10,000 nodes.

Synopsis

 

Advantages of Kubernetes Over Mesos + Marathon

  • Wide variety of storage options, including on-premises SANs and public clouds; on Mesos + Marathon, external storage such as Amazon EBS is still in beta.
  • Based on extensive experience running Linux containers at Google, and deployed at scale more often among organizations. Kubernetes is also backed by enterprise offerings from both Google (GKE) and Red Hat (OpenShift), whereas Mesosphere’s DC/OS is primarily backed by its creator, Mesosphere, the only commercial distribution.
  • Largest community among container orchestration tools, with over 50,000 commits and 1,200 contributors, compared with over 12,000 commits and roughly 240 contributors for the smaller Mesos + Marathon community.
 

Disadvantages of Kubernetes vs. Mesos + Marathon

  • Lack of single vendor control can complicate a prospective customer’s purchasing decision; the community includes Google, Red Hat, and over 2,000 authors (source: CNCF). Mesosphere’s single-vendor control, by contrast, may allow for accountability with bug fixes and better coordination of feature development.
  • Kubernetes has been built only for container orchestration, based on over 10 years of experience managing Linux containers at Google. Mesos’ two-tier architecture, by contrast, allows other frameworks (workloads) to be deployed, such as Spark, Chronos, and Redis.
  • Kubernetes 1.6 scales to 5,000-node clusters; massive scalability beyond 5,000 nodes requires multiple clusters. A few organizations, such as Apple, Bloomberg, and Netflix, have deployed Mesos at massive scale, greater than 10,000 nodes (source: Mesosphere blog).
 

Common Features

  • Open source projects. Anyone can contribute, but Kubernetes enjoys greater and more diverse community participation.
  • Networking features such as load balancing and DNS.
  • Logging and Monitoring. External tools for Kubernetes include Elasticsearch/Kibana (ELK), sysdig, cAdvisor, Heapster/Grafana/InfluxDB (Reference: Logging and Monitoring for Kubernetes). For Mesos/Marathon, nodes provide logs that can be aggregated and external tools can be used for monitoring. (Reference: Monitoring Logging and Debugging for Mesos/Marathon)
  • Both can overcome constraints of Docker and the Docker API.
  • Autoscaling supported natively.
  • Each uses its own set of management tools. Kubernetes actions can be performed through the kubectl CLI and the Kubernetes Dashboard, while Mesos + Marathon uses multiple interfaces: the Mesos CLI, Mesos UI, Marathon CLI, and Marathon UI.
  • Do-it-yourself installation can be complex for both Kubernetes and Mesos + Marathon. Deployment tools for Kubernetes include kubeadm, kops, kargo, and others (further details in the Kubernetes Deployment Guide). Mesos installation can be complex due to the two-tier architecture with Marathon and the need to set up Zookeeper for cluster management, HAProxy for load balancing, and so on.

Takeaways

A look at the mindshare of Kubernetes vs. Mesos + Marathon shows Kubernetes leading with over 70% share on all metrics: news articles, web searches, publications, and GitHub activity.

Kubernetes offers significant advantages over Mesos + Marathon for three reasons:

  • Much wider adoption by the DevOps and containers community
  • Better scheduling options for pods, useful for complex application stacks
  • Based on over a decade of experience at Google

However, Kubernetes has been known to be difficult to deploy and manage. Platform9’s Managed Kubernetes product can fill this gap by letting organizations focus on deploying microservices on Kubernetes, instead of managing and upgrading a highly available Kubernetes deployment themselves. Further details on these and other deployment models for Kubernetes can be found in The Ultimate Guide to Deploy Kubernetes.

In a follow-up blog, we’ll compare Kubernetes with Amazon ECS. 
