Prior to the advent of Kubernetes, in the so-called “bare-metal” era, networking was primarily a matter of physical equipment: network interfaces, cables, switches, routers, and other components at Layer 2 of the OSI model. When virtual machines were introduced, along with virtual networks, overlays, and the like, networking shifted from a focus on hardware to software. Then came Kubernetes, in which networking is a high-level, user-defined concern focused on L4 and L7 of the OSI model. Networking now has very little to do with hardware, but it is still software-defined, with virtual network components, models, and policies, an approach also known as “Network as Code.”
Before we dive deeper into Kubernetes networking, we need to start by explaining the components that comprise Kubernetes clusters, which are:
- Worker nodes, which run containerized applications.
- Control plane nodes, which manage worker nodes and pods in the cluster.
Worker nodes have their own virtual eth0 and are connected through the infrastructure network; hence, they have their own IP address. The same goes for control plane nodes. The worker and control plane nodes are also connected through the infrastructure network.
But what about the connections within the worker nodes themselves? We know that pods are in the nodes and containers are in the pods, but how do containers communicate with each other? How can pods on different nodes (or the same node, for that matter) communicate with one another?
The answers to these questions lie in several important aspects of Kubernetes networking, such as:
- Container-to-container communication
- Pod-to-pod communication
- Pod-to-service communication
- External-to-service communication
This can be explained using a simple analogy. Let’s say that you rent two rooms in a hotel that are right next to each other and share a bathroom. The occupants of room #1 can communicate with the people in room #2 through the shared bathroom without the need to leave their respective rooms.
In this case, the two rooms are the containers, while the hotel itself is the pod. The hotel (pod) allows the rooms (containers) to talk to each other through the bathroom, which is the pod’s shared network namespace, reachable via localhost. This is why pods are important in container-to-container communication.
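As a sketch, a Pod manifest with two containers makes this concrete: both containers share the Pod’s network namespace, so the sidecar can reach the web container over localhost. The names, images, and ports below are illustrative assumptions, not a prescribed setup.

```yaml
# Hypothetical two-container Pod: both containers share one network
# namespace, so they talk over localhost without leaving the Pod.
apiVersion: v1
kind: Pod
metadata:
  name: two-rooms
spec:
  containers:
    - name: web                   # "room #1": serves HTTP inside the pod
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: sidecar               # "room #2": reaches "room #1" via localhost
      image: curlimages/curl:8.5.0
      command: ["sh", "-c", "while true; do curl -s http://localhost:80/; sleep 5; done"]
```

Because the containers share one network namespace, they also share one IP address and port space, which is why the sidecar can simply address `localhost`.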
Since pods are spread throughout worker nodes, we also need to address communication between pods on the same or different nodes.
Before getting into that, you need to know the fundamental requirements of the Kubernetes model:
- Every pod should have a unique IP address.
- Every pod should be able to communicate with every other pod on the same node.
- Every pod should be able to communicate with every other pod on other nodes without NAT (Network Address Translation).
Intra-Node Pod-to-Pod Communication
Let’s explain this through our previous analogy. This time, however, the two rooms represent pods, and the hotel represents the node. Now, say that we rented two rooms in a hotel, but we weren’t as lucky this time, and our rooms are not next door to one another – in fact, they aren’t even on the same floor. Needless to say, they do not share a bathroom. So, if I am in room #1, and I want to communicate with my friend in room #2, I will have to:
- Open the door to room #1,
- Take the elevator or the stairs,
- Make sure I am on the right floor,
- Knock on the door to room #2,
- And then start the conversation.
These steps can be mapped in Kubernetes as follows:
| Hotel Analogy | Kubernetes Equivalent |
| --- | --- |
| Door to Room #1 | Virtual eth0 with source IP |
| Elevator or stairs | Root network namespace, bridge network |
| Door to Room #2 | Virtual eth0 with destination IP |
Inter-Node Pod-to-Pod Communication
To explain this one, let’s say that we rented two rooms in a hotel, but this time, they are in two different buildings – room #1 is in building A and room #2 is in building B. The two different buildings represent the two different nodes, their entrances represent the virtual eth0 with their respective source and destination IPs, and the hotel’s front desk or clerk is the routing table that helps you to identify which building, floor, and room number your friend is in so that you can communicate with him or her.
Every node is assigned a unique CIDR block for pod IPs, meaning that each pod has a unique IP that does not conflict with pods on other nodes. Routes are configured for each node’s CIDR block, ensuring that requests go from the source pod to the correct destination pod on another node.
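As an illustration, the per-node allocation is visible on the Node object itself. The node name and CIDR values below are made-up examples; the actual ranges depend on how the cluster was bootstrapped.

```yaml
# Excerpt of a Node object: each node gets its own pod CIDR block,
# so pod IPs never collide across nodes (values are illustrative).
apiVersion: v1
kind: Node
metadata:
  name: worker-1
spec:
  podCIDR: 10.244.1.0/24       # pods on worker-1 get IPs from this range
  podCIDRs:
    - 10.244.1.0/24
```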
The previous analogy works well if we have information about the hotel’s floors and room numbers (which would be fixed, of course), so you might think that we could apply our analogy to the pod IP. Unfortunately, that is not the case. Pods are ephemeral (they come and go), which means that their IP is not always the same – hence, we cannot rely on IP.
To solve this problem, Kubernetes introduced Services. Kubernetes Services manage sets of pods and keep track of any changes to the pods, such as IP changes and the creation or termination of pods. Since a Service is a “doorway” to a set of pods, the Service is assigned a single virtual IP address. This way, a client application that is trying to communicate with these pods only needs to know the Service IP associated with the set of pods.
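A minimal Service manifest makes this concrete: the label selector tracks the (ephemeral) set of pods, while clients only ever see the Service’s stable virtual IP and DNS name. The names, labels, and ports below are illustrative assumptions.

```yaml
# Hypothetical Service: a stable "doorway" to whichever pods currently
# carry the label app=backend, regardless of their ephemeral IPs.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend          # the Service tracks pods matching this label
  ports:
    - port: 80            # stable port on the Service's virtual IP
      targetPort: 8080    # port the selected pods actually listen on
```

Clients inside the cluster can then address `backend:80` and never need to know which pod IPs are behind it at any given moment.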
This is where the ingress network is defined. An ingress network is a collection of policies or rules that allow inbound connections (usually HTTPS) to a Service within a cluster. It can be configured to be reachable externally, to load balance traffic, to terminate SSL/TLS, and to offer name-based virtual hosting. Ingress acts as an entry point to a Kubernetes cluster; it sits in front of multiple Services, much like a router.
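As a sketch, an Ingress resource routing inbound traffic by host and path might look like the following. The host name and Service names are illustrative, and an ingress controller (such as NGINX) must be running in the cluster for these rules to take effect.

```yaml
# Hypothetical Ingress: one entry point routing external HTTP(S)
# traffic to a backing Service by host and path (values illustrative).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: shop.example.com     # name-based virtual hosting
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend   # Service that receives the traffic
                port:
                  number: 80
```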
Types of Kubernetes Networking Services
You want to design your application or Service based on your needs. For example, you might want to expose your Services to the outside world, or you might want to restrict access to only within the cluster. There are four types of Services that can fulfill these needs: ClusterIP, NodePort, LoadBalancer, and ExternalName.
ClusterIP creates a virtual IP inside the cluster that enables communication between different Services, such as communication between frontend and backend Services within the cluster. This is the default ServiceType. It’s a good option if you want to limit your resources to something that can only be accessed by your application, such as a database or an internal accounting application.
NodePort listens on a port on the node and forwards requests to the port on the pod that’s running the application. It exposes the Service on each node’s IP address at a static port. NodePort has a use case similar to ClusterIP’s: limiting access to resources, such as queue systems or Business Intelligence (BI) systems and reports, through an application on each worker node. Be aware that this opens a port for direct communication with the Service on every worker node, which is relatively insecure and generally not advisable.
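A minimal NodePort Service might look like this (names, labels, and port numbers are illustrative assumptions):

```yaml
# Hypothetical NodePort Service: exposes the app on a static port
# (here 30080) on every worker node's IP (values illustrative).
apiVersion: v1
kind: Service
metadata:
  name: reports
spec:
  type: NodePort
  selector:
    app: reports
  ports:
    - port: 80           # cluster-internal port on the Service's virtual IP
      targetPort: 8080   # port the pods listen on
      nodePort: 30080    # static port opened on every node (default range 30000-32767)
```

Clients can then reach the Service at `<any-node-ip>:30080`, which is exactly why this type should be used cautiously.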
LoadBalancer exposes the Service externally using a cloud provider’s load balancer. When you create this type of Service, Kubernetes automatically creates the underlying NodePort and ClusterIP Services, to which the external load balancer routes traffic. This type of Service is suitable for exposing an application with a frontend (like a shopping cart) externally, so that users can interact with it and place orders.
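A LoadBalancer Service differs from the previous types only in its `type` field; the cloud provider does the rest. The names and ports below are illustrative assumptions.

```yaml
# Hypothetical LoadBalancer Service: asks the cloud provider to
# provision an external load balancer in front of the pods.
apiVersion: v1
kind: Service
metadata:
  name: shopping-cart
spec:
  type: LoadBalancer
  selector:
    app: shopping-cart
  ports:
    - port: 443          # port exposed by the external load balancer
      targetPort: 8443   # port the pods listen on
```

Once the provider provisions the load balancer, its external IP or host name appears in the Service’s status, and that is the address end users connect to.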
ExternalName maps the Service to the contents of its externalName field and returns a CNAME record. When you access this Service, you will not receive a ClusterIP; instead, you will get the CNAME record defined in externalName.
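An ExternalName Service is the simplest of the four, since it has no selector or ports; it is purely a DNS alias. The Service name and external host below are illustrative assumptions.

```yaml
# Hypothetical ExternalName Service: DNS lookups for this Service
# return a CNAME to the external host instead of a ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # host the CNAME record points to
```

In-cluster clients can address `external-db` as if it were an internal Service, while DNS transparently redirects them to `db.example.com`.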
Read more about these types: “Understanding Kubernetes LoadBalancer vs NodePort vs Ingress.”
CNI – Kubernetes Container Networking Interface
So how do you implement this network model? Kubernetes does not ship with a built-in solution.
There are a few networking projects that work with Kubernetes, such as Calico, Flannel, and WeaveWorks. These third-party projects must implement the CNI (Container Network Interface), a set of standards that defines how networking software should be developed to solve networking challenges in a container environment. An implementation of the CNI is commonly called a plugin.
Let’s take a look at how this is implemented in WeaveWorks. WeaveWorks deploys an agent (a Service) on each node. The agents communicate with each other to exchange information about the nodes, networks, and pods within them. Each agent stores the topology of nodes and pods, which allows it to know where the pods are, their IP addresses, and which nodes they are on. WeaveWorks creates its own bridge on each node (named “weave”) and assigns an IP address to each pod attached to it. WeaveWorks also makes sure that the pods have the correct route configured for reaching the agent, and the agent then takes care of reaching the other pods.
When a packet is sent from one pod to another pod on a different node, the Weave agent intercepts the packet and identifies that the destination is on a separate node. It then encapsulates the packet in a new packet whose source and destination are the node addresses, and sends it across the network. When the packet arrives on the other node, that node’s Weave agent retrieves it, decapsulates it, and routes it to the correct pod.
More on Kubernetes Networking
Implementing the Kubernetes network model via networking plugins (such as the GCP and AWS provider plugins, Flannel, Calico, or WeaveWorks) is a good start. But what if your microservices expand and become too complex? What if your development teams want to try fault tolerance, canary testing, or weighted traffic splitting (such as having 80% of the traffic go to version #1 of your application, while the remaining 20% goes to version #2)? How about monitoring requests and implementing a set of rules or policies on ingress, egress, or other resources?
This is where a Service Mesh comes in. A Service Mesh captures information about all service-to-service communication and maps it as service metrics (observability) that can be used to harden your communication policy, resulting in more efficient and reliable requests.
Kubernetes Networking can be complex. CNI plugins allow you to focus on high-level, user-defined networking. You do not need to worry about how these Kubernetes resources communicate with each other, as long as you follow the guidelines that are provided by the CNI third-party projects.
When selecting a Service type, you should consider how the application functions and who needs access to it, and you should always begin by granting the least amount of access.
A Service Mesh helps you manage complex applications or microservices and mitigate issues. Observability, together with features such as fault tolerance, traffic shifting, circuit breaking, and canary testing, will help you understand and control your service-to-service communication. You can use this as a guide for implementing your network policies and achieving a resilient and reliable Kubernetes network.