
Developers looking to containerize their traditional virtualized applications into microservices face a conundrum of choices today. The contenders include Kubernetes, Docker Swarm, Mesos, Amazon ECS, and a few others, all touting the same or similar benefits and all vying for the top spot as the most powerful container orchestration platform.

In this series of posts, we try to demystify these alternatives and provide the data points you need to consider before making a selection.

In a previous blog we discussed why you need a container orchestration tool and followed that up with blogs comparing Kubernetes vs Mesos and Kubernetes vs Docker Swarm. Continuing the series, in this blog post we’ll give an overview of and compare Kubernetes vs ECS (Amazon EC2 Container Service).

Overview of Kubernetes

We provided an overview of Kubernetes in a previous blog post comparing Kubernetes with Mesos. For the sake of completeness, we’ll cover it again here.

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Kubernetes was built by Google based on their experience running containers in production over the last decade. See below for a Kubernetes architecture diagram.

[Image: Kubernetes architecture diagram]

The major components in a Kubernetes cluster are:

  • Pods – Kubernetes deploys and schedules containers in groups called pods. A pod will typically include 1 to 5 containers that collaborate to provide a service.
  • Flat Networking Space – The default network model in Kubernetes is flat and permits all pods to talk to each other. Containers in the same pod share an IP and can communicate using ports on the localhost address.
  • Labels – Labels are key-value pairs attached to objects and can be used to search and update multiple objects as a single set.
  • Services – Services are endpoints that can be addressed by name and can be connected to pods using label selectors. The service will automatically round-robin requests between the pods. Kubernetes will set up a DNS server for the cluster that watches for new services and allows them to be addressed by name.
  • Replication Controllers – Replication controllers are the way to instantiate pods in Kubernetes. They control and monitor the number of running pods for a service, improving fault tolerance.
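To make these pieces concrete, here is a minimal sketch of a Replication Controller paired with a Service. The names (`web`) and the nginx image are illustrative, not from the original post:

```yaml
# Replication Controller: keeps 3 replicas of the pod running
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3
  selector:
    app: web          # manages pods carrying this label
  template:
    metadata:
      labels:
        app: web      # label attached to each pod it creates
    spec:
      containers:
      - name: nginx
        image: nginx:1.9
        ports:
        - containerPort: 80
---
# Service: stable endpoint that round-robins requests across matching pods
apiVersion: v1
kind: Service
metadata:
  name: web           # addressable as "web" via the cluster DNS
spec:
  selector:
    app: web          # connects to pods via label selector
  ports:
  - port: 80
```

The Service's label selector ties it to whichever pods the Replication Controller maintains, so scaling replicas up or down requires no change to the Service.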

Overview of Amazon ECS

Amazon EC2 Container Service (ECS) is a container management service from AWS that supports Docker containers and lets you easily run applications on a cluster of Amazon EC2 instances, eliminating the need to install, operate, and scale your own cluster management infrastructure.

Note that containers managed by ECS run exclusively on AWS EC2 instances; there is no support for running containers on infrastructure outside of EC2, whether physical infrastructure or other clouds. In addition, EC2 instances must be created before asking ECS to bring up containers on them. This is a big difference from other container orchestration solutions, which do not lock the user into a particular infrastructure provider. The advantage, of course, is the ability to work with all the other AWS services like Elastic Load Balancing, CloudTrail, and CloudWatch.

[Image: Amazon ECS architecture diagram]


The major components in Amazon ECS are:

    • Task Definition: The task definition is a text file, in JSON format, describing the containers that together form an application. Task definitions specify various parameters for the application e.g. container image repositories, ports, storage etc.
    • Tasks and Scheduler: A task is an instance of a task definition, created at runtime on a container instance within the cluster. The task scheduler is responsible for placing tasks on container instances.
    • Service: A service is a group of tasks that are created and maintained as instances of a task definition. The scheduler maintains the desired count of tasks in the service. A service can optionally run behind a load balancer. The load balancer distributes traffic across the tasks that are associated with the service.
    • Cluster: A cluster is a logical grouping of EC2 instances on which ECS tasks are run.   
    • Container Agent: The container agent runs on each EC2 instance within an ECS cluster. The agent sends telemetry data about the instance’s tasks and resource utilization to Amazon ECS. It will also start and stop tasks based on requests from ECS.
    • Image Repository: Amazon ECS downloads container images from container registries, which may exist within or outside of AWS, such as an accessible private Docker registry or Docker Hub.
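As a sketch, the task definition for a simple web application might look like the following JSON (the family name, image, and resource values are illustrative):

```json
{
  "family": "web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:1.9",
      "cpu": 256,
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80 }
      ]
    }
  ]
}
```

Registering this task definition with ECS and creating a service from it tells the scheduler to keep the desired number of tasks running across the cluster's container instances.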

The following table provides a comparison of Kubernetes with Amazon ECS. You can also download the Making Sense of the Container Ecosystem eBook for more details on options for container orchestration. 

Feature Comparison between Kubernetes and ECS

Deployment Infrastructure
  • Kubernetes: Physical hardware, virtual infrastructure, or public clouds.
  • Amazon ECS: AWS EC2 instances only.

Application Definition
  • Kubernetes: A combination of Pods, Replication Controllers, Replica Sets, Services, and Deployments. As explained in the overview above, a pod is a group of co-located containers and the atomic unit of deployment. Pods do not express dependencies among the individual containers within them. Containers in a single pod are guaranteed to run on a single Kubernetes node.
  • Amazon ECS: An application can span multiple task definitions by combining related containers into their own task definitions, each representing a single component. Task definitions group the containers that serve a common purpose and separate the different components into multiple task definitions.

Application Scalability Constructs
  • Kubernetes: Each application tier is defined as a pod and can be scaled when managed by a Deployment or Replication Controller. The scaling can be manual or automated.
  • Amazon ECS: Task instances can be scaled by updating their task definitions, or the underlying EC2 instances can be scaled based on monitoring alerts.

High Availability
  • Kubernetes: Pods are distributed among worker nodes. The scheduler has alpha support for node affinity: nodes can be labeled with their availability zone, and the scheduler considers those labels during node selection. Services also provide high availability by detecting unhealthy pods and removing them.
  • Amazon ECS: Achieved by using AWS Availability Zones and defining placement policies in the service requirements.

Load Balancing
  • Kubernetes: Pods are exposed through a Service, which can act as a load balancer. A Kubernetes Service is not available outside the cluster; it needs to be exposed using an Ingress.
  • Amazon ECS: Can optionally be configured to use Elastic Load Balancing to distribute traffic evenly across the tasks in a service. Elastic Load Balancing provides two types of load balancers, Application Load Balancers and Classic Load Balancers, and ECS services can use either type.

Auto-Scaling for the Application
  • Kubernetes: Auto-scaling using a simple number-of-pods target is defined declaratively through the API exposed by Replication Controllers. A CPU-utilization-per-pod target is available as of v1.1 in the Scale subresource; other targets are on the roadmap.
  • Amazon ECS: Can optionally be configured to use Service Auto Scaling to adjust the desired task count up or down in response to CloudWatch alarms. Service Auto Scaling is available in all regions that support ECS, and it can be used together with Auto Scaling for EC2 instances to scale both the cluster and the service in response to demand. See the tutorial "Scaling Container Instances with CloudWatch Alarms."

Rolling Application Upgrades and Rollback
  • Kubernetes: The Deployment model keeps a history of updates and allows rollback to a particular version ("Rollback Deployment"). Health checks test for liveness, i.e., whether the app is responsive.
  • Amazon ECS: Update a running service to change the number of tasks it maintains or the task definition its tasks use. An updated Docker image of the application can be deployed by creating a new task definition with that image; the service scheduler uses the minimum healthy percent and maximum percent parameters in the service's deployment configuration to determine the deployment strategy.

Logging and Monitoring
  • Kubernetes: Health checks of two kinds: liveness (is the app responsive) and readiness (is the app responsive but busy preparing and not yet able to serve). Logging: container logs can be shipped to an Elasticsearch/Kibana (ELK) service deployed in the cluster. Resource usage monitoring: a Heapster/Grafana/InfluxDB service deployed in the cluster. Logging and monitoring add-ons are part of the official project, and Sysdig Cloud integration is available.
  • Amazon ECS: AWS CloudWatch can store and analyze logs from the task instances and the Docker daemon. AWS CloudTrail can record all ECS API calls; the recorded information includes the identity of the API caller, the time of the call, the source IP address, the request parameters, and the response elements returned by ECS. ECS also reports average and aggregate CPU and memory utilization of running tasks, grouped by task definition, service, or cluster, through CloudWatch, and CloudWatch alarms can send alerts when containers or clusters need to scale up or down.

Storage
  • Kubernetes: Two storage APIs. The first provides abstractions for individual storage backends (e.g., NFS, AWS EBS, Ceph, Flocker). The second provides an abstraction for a storage resource request (e.g., 8 GB), which can be fulfilled with different storage backends. Modifying the storage resource used by the Docker daemon on a cluster node requires temporarily removing the node from the cluster.
  • Amazon ECS: Data volumes are specified in task definitions, either to provide persistent data volumes for containers, to define an empty, non-persistent data volume and mount it on multiple containers on the same container instance, or to share defined data volumes at different locations on different containers on the same container instance. There is also an option to use Amazon Elastic File System (EFS) to persist data from ECS containers.

Networking
  • Kubernetes: The default networking model lets any pod communicate with other pods and with any service. The model requires two networks (one for pods, the other for services), and neither is assumed or required to be reachable from outside the cluster. The most common way of meeting this requirement is to deploy an overlay network on the cluster nodes; overlay networks like Calico and MidoNet already work with Kubernetes to provide network isolation of pods and services. Another way is to create routes on all cluster nodes instead of encapsulating packets in an overlay (Calico, for example, uses routes).
  • Amazon ECS: Strongly recommends launching container instances inside a VPC to gain more control over the network and access more extensive configuration capabilities; for more information, see "Amazon EC2 and Amazon Virtual Private Cloud" in the Amazon EC2 User Guide for Linux Instances. The task definition also has parameters for network settings.

Service Discovery
  • Kubernetes: Pods discover services using intra-cluster DNS.
  • Amazon ECS: ECS recently started providing basic service discovery, or you can use Consul; an article laying out a reference architecture for service discovery with ECS is available.

Performance and Scalability
  • Kubernetes: With the release of v1.2, Kubernetes supports 1,000-node clusters. Kubernetes scalability is benchmarked against the following Service Level Objectives (SLOs): API responsiveness (99% of all API calls return in less than 1 s) and pod startup time (99% of pods and their containers, with pre-pulled images, start within 5 s).
  • Amazon ECS: There aren't specific performance claims for ECS, but a recent blog post by Amazon CTO Werner Vogels includes some metrics on scaling vs. latency.
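As an illustration of the load-balancing and scaling points above, here is a sketch of the input to ECS's CreateService call, attaching a Classic Load Balancer to a service; the cluster, service, load balancer, and role names are hypothetical:

```json
{
  "cluster": "my-cluster",
  "serviceName": "web-service",
  "taskDefinition": "web-app:1",
  "desiredCount": 3,
  "loadBalancers": [
    {
      "loadBalancerName": "web-elb",
      "containerName": "web",
      "containerPort": 80
    }
  ],
  "role": "ecsServiceRole"
}
```

The service scheduler then keeps three tasks running behind the load balancer, replacing any task that stops or fails its health checks.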

It is worth repeating that Amazon ECS is designed for, and provides maximum value when integrated with, other AWS services such as Elastic Load Balancing, Elastic Block Store, Virtual Private Cloud, IAM, and CloudTrail. This will likely provide a complete solution for running a wide range of containerized applications or services. Kubernetes, on the other hand, is not restricted to any particular kind of infrastructure or a specific provider; in fact, Kubernetes can also easily run on AWS EC2.

In follow-up blog posts we'll compare more tools with Kubernetes. If there's any specific tool you'd like us to compare, please leave a comment below. Meanwhile, check out the free trial of the Managed Kubernetes Solution from Platform9. You can also download the Making Sense of the Container Ecosystem eBook for more details on options for container orchestration.

Vijay Dwivedi
Vijay Dwivedi is a Director of Product Marketing at Platform9 and is passionate about everything Cloud, OpenStack and DevOps.
