The Ultimate Guide To Using Calico, Flannel, Weave and Cilium

Even for experienced Kubernetes users, Kubernetes networking can be an intimidating topic. In this post, we’re going to take a deep dive into the most popular container networking solutions available for Kubernetes. Before we dig into the specifics of each solution, we’ll take a cursory look at Kubernetes networking and the Container Network Interface (CNI) specification. We’ll talk about why networking is a complex problem and how the CNI specification allows specialized solutions to be developed outside of the Kubernetes project.

Once we have a good overview of Kubernetes networking and the CNI specification, we’ll dissect some of the most popular container networking solutions, including Flannel, Calico, Weave, and Cilium. We’ll talk about what makes each solution unique, cover the specific features it provides, and note potential challenges around deploying it. By the end, you’ll have a much better idea of what each networking solution offers and how to choose the right Kubernetes networking solution for your project.

Kubernetes Networking

As we already mentioned, networking is a complex problem. Kubernetes bases its networking at the pod level, where each pod gets its own IP address. The advantage of this model is that you don’t need to create links between pods or map container ports to host ports. You do, however, need to ensure that pods can communicate with other pods, and that nodes can communicate with pods, without requiring Network Address Translation (NAT). A further benefit of this model is its compatibility with the VM networking model: if your application worked in a VM, it’s almost guaranteed to work in a pod running on Kubernetes.
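
To make the model concrete, here is a minimal sketch. The pod name and labels below are hypothetical; the point is that the pod receives a routable cluster IP and is reachable on its container port from any other pod, with no host port mapping or NAT involved.

  # A simple pod serving HTTP on port 80.
  apiVersion: v1
  kind: Pod
  metadata:
    name: web                 # hypothetical name
    labels:
      app: web
  spec:
    containers:
      - name: web
        image: nginx:1.25     # serves HTTP on port 80
        ports:
          - containerPort: 80

Once the pod is running, kubectl get pod web -o wide shows the IP address the cluster assigned to it, and a curl to that IP on port 80 from any other pod in the cluster reaches nginx directly.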

Networking requirements vary greatly, depending on the needs of each application. Rather than address every need in a single solution, Kubernetes abstracts networking away from the core platform, allowing vendors to build specific solutions for different needs and users to plug their chosen networking solution into their clusters.

Why CNI

Linux container technology and container networking continue to evolve to accommodate the needs of applications running in a wide variety of environments. Rather than duplicating the effort of making networking pluggable for every combination of networking solution on one side and container runtime or orchestration platform on the other, the Container Network Interface (CNI) initiative was created to define a standardized, common interface between the container execution layer and the networking layer.

The CNI project is part of the Cloud Native Computing Foundation (CNCF). The CNI specification describes how to configure network interfaces for Linux containers. Its scope is deliberately limited to container network connectivity and to releasing allocated resources when the container goes away. Because of this narrow focus, the CNI specification is simple and widely adopted, with a large number of supported plugins. Many container orchestration frameworks, including Kubernetes, have implemented the specification. If you’d like to learn more about the CNI specification, which runtimes use it, and the third-party plugins that implement it, the CNI GitHub project is a great place to start.
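
To give a feel for the interface itself, here is a minimal sketch of a CNI network configuration of the kind a container runtime reads from /etc/cni/net.d. It uses the reference bridge and host-local plugins from the CNI project; the network name and subnet are hypothetical, and each of the plugins discussed below ships its own configuration rather than this one.

  {
    "cniVersion": "0.4.0",
    "name": "examplenet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
      "type": "host-local",
      "subnet": "10.22.0.0/16",
      "routes": [
        { "dst": "0.0.0.0/0" }
      ]
    }
  }

The type field names the plugin binary the runtime invokes for every pod’s network setup and teardown, and the ipam section delegates IP address management to a second plugin. Because the contract is this small, any conforming plugin can slot into the same place.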

CNI plugins must conform to the standards defined by the CNI specification, and each plugin addresses different aspects of container networking. Identifying and configuring the right plugin, or combination of plugins, to meet your project’s needs is vital. Next, we’re going to take an in-depth look at four of the most popular Kubernetes networking plugins and explore their strengths and weaknesses.

Those plugins are:

  • Flannel
  • Calico
  • Weave
  • Cilium

Flannel

CoreOS created Flannel as one of the first CNI implementations for Kubernetes. As such, it is one of the oldest and most mature CNI plugins available. Due to its simplicity and ease of use, it is also a great entry-level choice for networking your first Kubernetes cluster. Flannel provides basic networking features and requires only a limited amount of administration to set up and maintain.

Flannel runs a simple overlay network across all the nodes of the Kubernetes cluster. It provides networking at Layer 3, the Network Layer of the OSI networking model. Flannel supports VXLAN as its default backend, although you can also configure it to use UDP and host-gw. Some experimental backends like AWS VPC, AliVPC, IPIP, and IPSec are also available, but not officially supported at present.
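
As a rough sketch of how the backend is chosen, the upstream kube-flannel manifest stores Flannel’s settings in a ConfigMap whose net-conf.json key defines the pod network CIDR and the backend type. The CIDR below is the common default seen in many installation guides; the ConfigMap name and namespace follow the upstream manifest and may differ in your cluster.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: kube-flannel-cfg        # name used by the upstream manifest
    namespace: kube-flannel       # older manifests deploy into kube-system
    labels:
      app: flannel
  data:
    net-conf.json: |
      {
        "Network": "10.244.0.0/16",
        "Backend": {
          "Type": "vxlan"
        }
      }

Changing "Type" to "host-gw" or "udp" selects those backends instead; host-gw skips encapsulation entirely but requires the nodes to share a Layer 2 network.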

One of the drawbacks of Flannel is its lack of advanced features, such as the ability to configure network policies and firewalls. Flannel is a great entry-level choice for Kubernetes cluster networking; however, if you are looking for advanced networking features, you may want to consider other CNI options, such as Calico.

Calico

Represented by its mascot, ‘Felix’, Calico is an open-source project created by Tigera. Calico supports a broad set of platforms, including Kubernetes. The Calico project is hosted on GitHub and has extensive, thorough documentation. Tigera also offers Calico in a paid enterprise version, and Platform9 offers Calico as a fully managed solution in the paid version of Platform9 Managed Kubernetes. A managed solution gives you access to all of Calico’s features and capabilities, but without the complexity of having to manage Calico configuration and maintenance.

Calico has emerged as one of the most popular CNI plugins for Kubernetes cluster networking. The project has earned a reputation for being reliable, flexible, and capable of supporting highly performant networks for Kubernetes clusters.

Like Flannel, Calico operates at Layer 3 of the OSI model. In its default configuration, it uses the BGP protocol to exchange routes between nodes, with IP-in-IP encapsulation available for traffic that must cross network boundaries where pod addresses cannot be routed directly. Wherever BGP routing suffices, Calico directs packets natively, without wrapping them in additional layers of encapsulation. This approach improves performance and simplifies troubleshooting network problems compared with more complex backends, like VXLAN.
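
To show where this behavior is controlled, here is a sketch of a Calico IPPool resource of the kind managed with calicoctl; the pool name matches the common default and the CIDR is hypothetical. The ipipMode field decides when IP-in-IP is used, so CrossSubnet keeps traffic unencapsulated within a subnet and only wraps packets that cross subnet boundaries.

  apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    name: default-ipv4-ippool
  spec:
    cidr: 10.48.0.0/16          # hypothetical pod network CIDR
    ipipMode: CrossSubnet       # Always, CrossSubnet, or Never
    vxlanMode: Never            # Calico can use VXLAN instead of IP-in-IP if preferred
    natOutgoing: true           # SNAT traffic leaving the pod network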

Calico’s most valuable feature is its support for network policies. By defining and enforcing network policies, you can prescribe which pods can send and receive traffic and manage security within the network. While Calico is a well-used and capable network tool on its own, its policy management also allows it to pair well with systems like Flannel or Istio, a popular Kubernetes service mesh.
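
Calico enforces the standard Kubernetes NetworkPolicy API and extends it with its own, richer policy resources. As a minimal sketch, the policy below, with a hypothetical namespace, labels, and port, allows only pods labeled app: frontend to reach pods labeled app: api on TCP 8080 and blocks all other ingress to those pods.

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-frontend-to-api   # hypothetical name
    namespace: demo               # hypothetical namespace
  spec:
    podSelector:
      matchLabels:
        app: api                  # the pods this policy protects
    policyTypes:
      - Ingress
    ingress:
      - from:
          - podSelector:
              matchLabels:
                app: frontend     # only these pods may connect
        ports:
          - protocol: TCP
            port: 8080

Calico’s own NetworkPolicy and GlobalNetworkPolicy resources build on this with features such as explicit deny rules, policy ordering, and protection for host endpoints.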

Weave

Weave, or Weave Net, is a full-featured CNI plugin created and supported by Weaveworks. Weave is available from its GitHub repository and the Weaveworks website. Like Calico, Weave is also available in a paid version with a support plan.

Weave creates a mesh overlay between all nodes of a Kubernetes cluster and combines it with a routing component on each node to dynamically route traffic throughout the cluster. By default, Weave routes packets using its fast datapath method, which attempts to send traffic between nodes along the shortest path. The network continually analyzes traffic flow and optimizes routes. A slower method, known as sleeve packet forwarding, serves as the fallback if the fast datapath fails.

Weave includes features such as creating and enforcing network policies and allows you to configure encryption for your entire network. If configured, Weave uses NaCl encryption for sleeve traffic and IPsec ESP encryption for fast datapath traffic. 
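
As a sketch of how encryption is typically enabled in the Kubernetes addon, Weave reads a shared password from the WEAVE_PASSWORD environment variable on the weave container of its DaemonSet, usually sourced from a Secret. The Secret and key names below mirror the documented example; the password value is a placeholder.

  # Secret holding the mesh password; create it before (re)deploying Weave.
  apiVersion: v1
  kind: Secret
  metadata:
    name: weave-passwd
    namespace: kube-system
  type: Opaque
  stringData:
    weave-passwd: "replace-with-a-long-random-string"
  ---
  # Excerpt from the weave-net DaemonSet: only the env stanza of the
  # weave container is shown; the rest of the manifest is unchanged.
  env:
    - name: WEAVE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: weave-passwd
          key: weave-passwd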

Cilium

A relative newcomer to the land of Kubernetes CNI plugins is Cilium. Cilium and its observability tool, Hubble, take advantage of eBPF, a newer technology that runs sandboxed programs inside the Linux kernel and extends the kernel’s capabilities without requiring changes to the kernel source code. Cilium uses eBPF to support more advanced networking and observability features for your Kubernetes cluster.

One of the advantages Cilium offers over other CNI plugins is reduced overhead when managing large networks. While some CNI plugins rely on iptables rules on each Kubernetes node to manage network addressing, Cilium uses eBPF to handle this more efficiently and in a more performant manner. Efficient address lookup becomes critical as your Kubernetes cluster scales to thousands of nodes and tens of thousands of pods.

Cilium offers networking policies that operate at layers 3, 4, and 7 of the OSI networking model. This ability to apply policies at multiple layers affords more flexibility in how you manage ingress and egress traffic within your Kubernetes cluster. While still a relatively new CNI plugin, Cilium may be worth consideration, especially if you require fine-grained security controls or need to reduce lookup latency for very large Kubernetes clusters.
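
As a sketch of what a Layer 7 rule looks like, the CiliumNetworkPolicy below, with hypothetical labels, namespace, port, and path, allows pods labeled app: frontend to call pods labeled app: api on TCP 8080, but only with GET requests to /healthz; other requests to those pods are rejected.

  apiVersion: cilium.io/v2
  kind: CiliumNetworkPolicy
  metadata:
    name: api-l7-policy           # hypothetical name
    namespace: demo               # hypothetical namespace
  spec:
    endpointSelector:
      matchLabels:
        app: api                  # the pods this policy applies to
    ingress:
      - fromEndpoints:
          - matchLabels:
              app: frontend       # Layer 3: who may connect
        toPorts:
          - ports:
              - port: "8080"      # Layer 4: which port
                protocol: TCP
            rules:
              http:
                - method: "GET"   # Layer 7: which HTTP requests
                  path: "/healthz"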

Selecting the Right Solution For Your Project

Depending on your needs, selecting the right CNI plugin to use in your cluster could be very simple, or a little more complicated. If your only requirement is for a basic networking solution, Flannel might be your best choice. While it lacks many advanced features like network policies and encryption, it’s light, fast, and consumes fewer resources than other CNI plugins.

If performance and security through network policies and encryption are paramount, you should consider Calico, Weave, or Cilium, or a hybrid solution like Canal, which combines Flannel and Calico: Flannel provides the basic networking and pairs well with Calico’s best-in-class network policies. Network policies are essential for maintaining a secure cluster, especially given the increased risk of cyberattacks. Calico’s documentation includes a must-read section on adopting a zero-trust network model.

Cilium may offer advantages for large-scale deployments, and takes advantage of eBPF for improved observability and network management efficiencies. Cilium is still a young project, and in the benchmark tests referenced below, it does appear to be more resource-intensive.

Speaking of benchmarking, ITNEXT publishes an annual collection of benchmark results for CNI plugins. The results show similar performance for all of the frameworks we’ve discussed today, and several others. The study runs the benchmark tests on three-node clusters using each plugin’s default configuration and measures throughput, latency, and resource usage, among other metrics.

Fig. 1: CNI plugins benchmark comparison from ITNEXT

The benchmark tests do an excellent job of highlighting the most critical factor related to CNI plugins for your Kubernetes clusters: configuration. The study used the default configuration for each of the tools investigated, and fine-tuning your CNI plugin can provide better results that meet your particular cluster’s needs. In a previous blog post, Platform9’s Co-founder and VP of Product, Madhura Maskasky, discussed Achieving High Performance with Calico.

One of the best things about Kubernetes is the growing global community and the plethora of open-source projects, service providers, and managed platforms that support its growth. What that means for you is that you’re not alone. Many engineers have contended with these same questions and challenges. Besides a rich supply of advice and recommendations on the Internet, in forums, and on Slack channels, you have access to managed services like Platform9.

Managed Kubernetes services like Platform9 simplify the process of creating, managing, and maintaining your Kubernetes infrastructure. These providers also streamline the process of adopting and tuning CNI plugins to meet your specific requirements. Managed Calico is one such offering. Calico has earned a reputation as one of the best-in-class CNI plugins, and when you combine it with Platform9’s Managed Kubernetes platform, you can’t go wrong.
