Kubernetes Networking – An Implementer’s In-depth Guide [Webinar 2 Recap]

You need not be too far into your Kubernetes journey to realize that networking in Kubernetes is, in a word, complicated. Not only is the Kubernetes networking architecture complex, but there is a range of configuration options to sort through for the key components that make up that architecture, such as your CNI implementation.

To provide some guidance on the intricate topic of Kubernetes network architectures and configuration, Platform9 recently collaborated with Tigera, the Kubernetes networking and security company, to host a webinar titled “Networking in the Brave New World of Containers.”

The webinar drew on the insights of three technologists with hands-on experience navigating the complicated landscape of Kubernetes networking:

  • Nicola Kabar, currently a Senior Solution Architect at Tigera (and formerly a lead solution engineer at Docker and Cisco).
  • Peter Fray, Chief Technologist at Platform9.
  • Chris Jones, Product Manager at Platform9.

Below is a summary of key takeaways, along with a link to see the full webinar for yourself.

Framing the Journey to Kubernetes Implementation

The webinar opened with a brief recap by Fray of the journey that businesses take as they implement Kubernetes (K8s), which Platform9 team members discussed in more detail in a webinar last month.

Fray explained that the K8s journey usually begins with an enterprise architect deciding to implement Kubernetes. The first step along that path involves deciding how to deploy K8s: in the cloud or on-premises, and whether or not to run it on virtual machines.
The next major step — and the subject of this webinar — is figuring out your approach to Kubernetes networking. You have to decide which Container Network Interface (CNI) implementation to adopt, and then how to configure it.

Other steps (which future webinars will address) include decisions such as how to manage storage and logging in Kubernetes and how to integrate CI/CD into a Kubernetes cluster.

“Fireside chat” on Kubernetes Networking Considerations

Fray and Kabar then began chatting about the central topic of the webinar, Kubernetes networking considerations. Fray provided an overview of what the CNI is (in essence, a standard specification that defines how container runtimes set up network connectivity for containers). He then explained that there are a number of CNI implementations available for Kubernetes, such as Calico, Weave and Flannel. Fray also explained the role that CNIs play within the larger networking architecture of Kubernetes, including components like kube-proxy, service discovery, ingress handling and service meshes.

With this overview complete, Kabar went on to offer tips on managing Kubernetes’s complex networking architecture. He began by discussing how to choose a pod CIDR (classless inter-domain routing) configuration, including how large the CIDR range should be and whether to keep it private or allow external routing. As he explained, choosing the right CIDR size upfront is important because, unlike many other aspects of Kubernetes, it is very difficult to change your CIDR configuration once your clusters are deployed.
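The sizing trade-off Kabar described can be made concrete with a little arithmetic. The sketch below uses Python's standard `ipaddress` module with placeholder values (a hypothetical `/16` pod CIDR and a `/26` per-node block, which is Calico's default block size); these numbers are illustrative, not recommendations.

```python
import ipaddress

# Hypothetical pod CIDR and per-node block size (Calico defaults to /26 blocks).
pod_cidr = ipaddress.ip_network("10.244.0.0/16")
node_block_prefix = 26

# Number of /26 blocks that fit in the /16 -- roughly how many nodes
# the pod CIDR can serve before addresses run out.
blocks = 2 ** (node_block_prefix - pod_cidr.prefixlen)

# Pod IPs available within each node's block.
pods_per_block = 2 ** (32 - node_block_prefix)

print(f"{pod_cidr} holds {blocks} node blocks of {pods_per_block} pod IPs each")
```

Running this shows a `/16` pod CIDR exhausting at roughly a thousand nodes with the default block size, which is exactly the kind of ceiling that is painful to raise after the cluster is deployed.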

Kabar offered tips on how to choose a service CIDR, too, and — if you use the Calico CNI — which dataplane to enable from among the three available options: Linux iptables, eBPF and Windows Host Networking Service (HNS).
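In Calico, the dataplane choice is expressed through Felix, Calico's per-node agent. The fragment below is an illustrative sketch of switching the default Felix configuration to the eBPF dataplane; consult the Calico documentation for the prerequisites (a compatible kernel, and kube-proxy is typically replaced in eBPF mode).

```yaml
# Illustrative only: switches Calico from the default iptables
# dataplane to the eBPF dataplane.
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  bpfEnabled: true
```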

Whether or not to use network encapsulation was another key topic of discussion. For most deployment types, Calico supports both encapsulated and unencapsulated networks. Unencapsulated networks generally perform better, but they are not available in every environment, and in some setups they may not be ideal from a security perspective.
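With Calico, the encapsulation choice is set per IP pool. The manifest below is a hedged sketch, with placeholder values for the pool name and CIDR: `ipipMode` (and similarly `vxlanMode`) can be `Always`, `CrossSubnet` or `Never`, where `CrossSubnet` encapsulates only traffic that crosses a subnet boundary and `Never` yields a fully unencapsulated network.

```yaml
# Illustrative Calico IPPool; the name, CIDR and block size are placeholders.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.244.0.0/16
  blockSize: 26
  ipipMode: CrossSubnet   # Always | CrossSubnet | Never
  vxlanMode: Never
  natOutgoing: true
```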

Using Calico with Platform9

To ground the discussion of Kubernetes networking configuration, Jones offered an overview of how Platform9 uses Calico to help manage Kubernetes clusters. As he said, Calico is the preferred CNI in Platform9 because of its advanced features, such as security policies and route reflectors, as well as its support for multiple deployment architectures (bare metal, AWS, Azure, Linux and Windows).

“The multi-cloud support is a really big value driver from our perspective,” Jones said. “We want people to be able to run Kubernetes clusters where you want to run them, and Calico really provides us with a way to do that.”

Jones then walked through a live demonstration of deploying a cluster in Platform9 and configuring Calico for it. He displayed the different options that Platform9 provides through its Web interface for managing settings like network range, IP-in-IP encapsulation mode and block size. Although clusters managed via Platform9 also provide full access to Calico’s command-line tooling, much of the configuration work can be handled directly through the Platform9 Web UI.

Jones added that, with Platform9, users get hands-on support for their Calico configuration and management. “We don’t say ‘here’s Calico, now you’re on your own,’” he said. “We help you manage Calico” on an ongoing basis: a 99.9 percent network connectivity SLA, architectural support to help ensure that the Calico configuration you choose fits your business’s needs, and incident management assistance to troubleshoot problems quickly if a networking issue occurs.

Q&A on Kubernetes networking

The webinar concluded with a brief Q&A session in which panelists responded to audience questions about Kubernetes networking, especially as it pertains to Calico.

One audience member asked if it is truly impossible to migrate to a different CNI implementation after your cluster is up and running. Kabar responded that “it is doable, but it is not straightforward.” It could make sense if you are willing to overhaul applications as well as your cluster networking configuration, but he cautioned, “I would avoid it at all costs.” It’s better to get your CNI selection right before you deploy your cluster.

Pointing out that not all hope is lost if you’re using one CNI but want to change, Fray said that some Platform9 clients in that situation decide to deploy a new cluster rather than change the networking configuration of an existing one. If you “treat your clusters like cattle,” Fray said, it’s typically easier on Platform9 to redeploy with the networking configuration you want than to try to change the configuration for an existing cluster.

Kabar also responded to a question about how Calico is different from other CNI implementations. He pointed to four key distinctions: First, the simplicity and highly distributed architecture of Calico, which avoids single points of failure; second, Calico’s performance, which usually beats that of other CNIs in benchmark tests; third, the fact that Calico provides not just networking support but also networking security controls that “extend way beyond” the network policies available natively in Kubernetes; and fourth, the ability of users to obtain enterprise support from Tigera for Calico, an option that is not available with all other CNIs.

Another question dealt with whether it’s possible to use a single network layer for a distributed Kubernetes cluster that runs partly in a private data center and partly in the public cloud. Fray and Kabar said that this is indeed feasible, although Kabar cautioned that it’s easier if you don’t use network encapsulation.

The final question of the webinar was about whether the Kubernetes deployment mode that you choose — whether bare-metal or in the cloud — should be a major consideration when selecting a CNI. Kabar said that it can be in the cloud, because your CNI needs to support whichever networking services the cloud provider builds into its Kubernetes offering. However, if you choose a CNI like Calico, which is compatible with a range of different deployment and hosting options, you don’t need to worry as much about whether your CNI will work with other services.

View the Webinar

For more detail on the topics discussed above, you can view this webinar at any time here. You can also sign up for the remaining six webinars in the series of which this one is a part, which will be broadcast over the coming months.

Platform9
