Kubernetes and container-based environments make running microservice-based applications at scale feasible. The ability to add capacity and replace problematic containers quickly can improve overall system resilience. On the flip side, the dynamic nature of containers makes setting up and managing a networking solution more challenging. This article will explore the Kubernetes network model, networking challenges in container-based environments, and how best to implement and manage networking.
Before we begin, let’s discuss the essential characteristics we need from a networking solution for our container-based environment. We need a way to identify each container within our ecosystem and establish communication between containers. We’ll also need a control plane to apply our specific configuration and manage its implementation. Finally, we need the ability to secure network communications within our environment, ensuring that traffic can reach the resources it needs while protecting the parts of our network that must remain isolated.
The Kubernetes Network Model
A Kubernetes environment is called a cluster. A cluster consists of one or more nodes, and within each node, one or more pods provide the system’s workload capacity. The Kubernetes Network Model defines fundamental networking requirements for interactions between pods within a node and across the cluster’s nodes.
All pods within a cluster can communicate with all other pods, whether they are on the same node or on different nodes, and this communication does not require a NAT service. Nodes also run agents (such as the kubelet) to support the system’s needs, and each of these agents can communicate freely with all pods on the same node.
The Container Network Interface (CNI) is a Cloud Native Computing Foundation (CNCF) project consisting of a specification and a set of libraries and plugins for configuring container networking. Kubernetes’ built-in networking support implements the CNI model but lacks the advanced feature sets available from other providers. Let’s explore CNI in a little more detail, and then we’ll discuss why and when you might need to consider a full-featured provider for your deployments.
The CNI or Container Networking Interface
The objectives behind the CNI are to enable a newly provisioned container to join the network with a unique IP address, send and receive relevant traffic, and release the IP address when its lifecycle ends. The advantage of assigning each container or pod its own IP address is that we don’t need to worry about mapping ports. We can view a collection of pods much as we would a group of virtual machines.
The CNI specification describes a plugin interface that the container runtime can use to configure its networking. When a container is provisioned, the runtime calls the plugin to request an IP address; the orchestrator or runtime specifies the network the container is to join. Once an IP address is assigned, the container is wired up with a network interface and connected through a bridge to the host machine.
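To make this concrete, here is a minimal sketch of a CNI network configuration file of the kind a runtime reads from `/etc/cni/net.d/`. It uses the reference `bridge` plugin with `host-local` IPAM; the network name, bridge name, and subnet are illustrative values, not defaults from the spec.

```json
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

When the runtime invokes the plugin for a new container, the `host-local` IPAM plugin hands out a free address from the configured subnet, and the `bridge` plugin attaches the container’s interface to `cni0` on the host.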
The default Kubernetes network solution may be adequate for a simple cluster, but you’ll find it lacking in larger and more complex deployments. One example where you’ll need a more robust and scalable solution is an edge computing deployment. The adoption of edge computing has been increasing rapidly in recent years. A highly available compute node at the edge improves data analysis, reduces latency, and enhances consumer interactions.
Calico is an open-source project and one of the leading providers of Kubernetes networking. Calico provides the features and scalability to support your edge deployments, and it builds on the network foundation defined by the CNI spec.
Calico for Enterprise-Grade Kubernetes
A new landscape, such as Kubernetes, requires tools explicitly built for cloud-native environments. Calico is an open-source networking and networking security provider for Kubernetes, OpenShift, and other container-based environments.
Calico uses IP networking and BGP for more efficient routing and handling of traffic within the cluster. This design allows the network to scale easily and is highly effective at managing systems with distributed architectures. The use of plain IP networking also makes troubleshooting much more straightforward.
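As a sketch of what this looks like in practice, Calico exposes its BGP settings as a cluster resource. The example below shows a `BGPConfiguration` manifest; the AS number here is an illustrative private ASN, not a required value.

```yaml
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  # Full node-to-node BGP mesh; typically disabled in favor of
  # route reflectors in very large clusters.
  nodeToNodeMeshEnabled: true
  # Example private AS number for the cluster (illustrative).
  asNumber: 64512
```

With this in place, each node advertises its pod routes over BGP, so pod traffic is routed as ordinary IP traffic rather than tunneled.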
In addition to providing a highly scalable and performant network, Calico allows administrators to define network policies that manage traffic flow between specific pods, giving granular control over network traffic. The ability to segment traffic makes it easier to secure your applications and protect your cluster from both malicious and accidental threats.
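As an illustration of this kind of segmentation, the following standard Kubernetes `NetworkPolicy` (which Calico enforces) allows only pods labeled `app: frontend` to reach pods labeled `app: api` on one port. The namespace, labels, and port number are hypothetical values for the example.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: demo
spec:
  # Pods this policy applies to.
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    # Only frontend pods may connect, and only on TCP 8080.
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Because the policy selects the `api` pods for ingress, all other inbound traffic to those pods is denied once it is applied.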
Getting Started and Learning More
Tigera is the organization behind Calico and provides the primary support for the project. For full details, visit Tigera.io.