Using MetalLB to add the LoadBalancer Service to Kubernetes Environments

MetalLB is a great project that can add the LoadBalancer service to your Kubernetes Cluster, but why would you want to use it? Let’s take a look.

Your on-premises Kubernetes cluster has just finished deploying and you are ready to get started. One of the first things you may do is follow a guide on setting up your first deployment. Everything is going great until you reach the Service definition: it uses type LoadBalancer, but your on-premises environment has no support for the LoadBalancer service. All of the example applications seem to use one.

This is one of the first use cases you may run into. What is the next step? Are you going to use the NodePort service and manage your own LoadBalancer to distribute traffic? Are you going to expose the ClusterIP network, or manage LoadBalancers that point to ClusterIPs? How will this translate to production where the LoadBalancer service is available?

MetalLB may be the answer you were looking for. It supports both Layer2 and BGP configurations. In this blog we are going to focus on Layer2 with IPv4 using ARP.

MetalLB Documentation: https://metallb.universe.tf/

Let’s take a look at use cases to show why you would want to use it, then we will go over how to set it up, and finish out with how to use it.

MetalLB Use Cases

On-Premises BareMetal Kubernetes Environment

We will start out with a use case that matches our original issue. The on-prem install has finished and we have a working Kubernetes environment, but we are unable to deploy a LoadBalancer service. We have a network that we could expose internally to allow access to ClusterIP services; however, that is not how we are going to do it in production, and we probably want to avoid oversharing network space that end users and developers do not really need access to.

NodePort

What about NodePort? It’s definitely something we could set up and manage; however, the key word here is “manage,” which is something we are trying to get away from. What would this setup look like? We would expose deployments with a Service of type NodePort, which gives us the nodes as endpoints with high-numbered ports, something like 192.168.16.22:30000. For every node exposing this socket, we would need a load balancer outside of Kubernetes to distribute traffic between them. If we are using a static backend list, we need to be able to poll for and apply updates whenever nodes change. That sounds like additional work, and it could result in dropped traffic if the list hasn’t been updated.
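To make that concrete, here is a rough sketch of what the NodePort version of a service might look like (the nodePort value is arbitrary within the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30000 # reachable on every node, e.g. 192.168.16.22:30000

Every node now answers on port 30000, and the external load balancer needs each node's address in its backend list, and it needs to be told whenever that list changes.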

LoadBalancer and MetalLB

What if we could just use the LoadBalancer service? That would allow us to replicate what we are doing in production, just with private IP space. The biggest benefit of this setup is that the service is automated: a user can request the resource and it will automatically be provisioned. We aren’t managing an external load balancer, at least not until we want to expose our services on public IP space.

A common setup may only have a single network space which is available to everyone. The addresses would be auto-assigned in MetalLB, which requires the least amount of internal education around annotations for services. Since an internal network space is being used, we could allocate a large block of addresses so that everyone can have a LoadBalancer service if needed. While this may be great for testing, it is definitely not something we would want to do in production where we are paying for LoadBalancer services through a cloud provider.

We could allocate a smaller block of addresses and rely on Ingress for routing traffic based on DNS/Hostnames and endpoints. This may match what we are doing in production, where we have a single LoadBalancer service and split the traffic with an Ingress Controller and Ingress Resources.
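As a sketch of that pattern, a single Ingress resource can split traffic for multiple hostnames behind one LoadBalancer address (the hostnames and service names here are hypothetical, and this assumes an NGINX Ingress Controller is installed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
spec:
  ingressClassName: nginx
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1
            port:
              number: 80
  - host: app2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2
            port:
              number: 80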

Multiple Networks

MetalLB provides a way to allocate multiple networks. With this setup we would specify which network to provision a LoadBalancer on using annotations. This would allow us to split workloads between different networks. We could expose the production network to all users, then have a set of addresses available for staging and development that only our developers or testers could access.

  annotations:
    metallb.universe.tf/address-pool: production-ips
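Putting that together, a Service that requests an address from the production pool might look like the following sketch (the name and selector are hypothetical; the pool name must match one defined in your MetalLB configuration):

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    metallb.universe.tf/address-pool: production-ips
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80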

Exposing via NAT

Once we have a production range specified, we could expose it to the internet. At this point you could modify your firewall and map a public address to the LoadBalancer service address (more than likely fronting an Ingress Controller or a service mesh) on ports 80/443. This is going to be a manual 1:1 configuration; however, if you are using an Ingress Controller you can support multiple hostnames and endpoints behind it, which means you aren’t reconfiguring the rule every time there is an update.
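What that mapping looks like depends entirely on your firewall. As a rough illustration, on a Linux gateway using iptables the 1:1 DNAT rules might look something like this (203.0.113.10 is a placeholder public address and 192.168.144.1 a placeholder LoadBalancer address):

iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 -j DNAT --to-destination 192.168.144.1:80
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 443 -j DNAT --to-destination 192.168.144.1:443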

Home Lab BareMetal Kubernetes Environment

Maybe you have a use case where you want a small Kubernetes environment but also want access to LoadBalancer services for development, educational purposes, or even just testing. The install and configuration will be similar; however, most of the networking configuration may need to be done virtually unless you have dedicated networking hardware.

Kind even has MetalLB support, in case you want a smaller install for development.

I happen to be using MetalLB in the Kubernetes install on my home lab. My setup has a DHCP-reserved range of about 10 IPs on the base Google Wifi network (192.168.86.10-20), which means the network is available to anyone connected to it. The worker and controller nodes are VMs bridged to the same network. I won’t get into too much detail, but I wanted to show that even with basic network equipment, you can set something like this up locally. If you want more details on how I set this up, feel free to reach out to me on the Platform9 Slack.

If you want to get more advanced, you can even set up NAT from your ISP-assigned address and forward it to a LoadBalancer IP. More than likely you would put an Ingress Controller behind that IP address so you can distribute traffic based on hostnames and endpoints. This would let you host something that is publicly accessible, which opens up the ability to configure DNS and use cert-manager/Let's Encrypt for HTTPS traffic using http01 challenges.
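As a sketch of that last piece, a cert-manager ClusterIssuer for Let's Encrypt http01 challenges typically looks something like this (the email address is a placeholder, and the ingress class must match your controller):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com # placeholder
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress:
          class: nginx # must match your ingress controller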

Requirements for MetalLB

Most applications are deployed with a LoadBalancer service or require an ingress controller/service mesh to receive traffic. With a built-in LoadBalancer implementation that can map external IP addresses directly to your cluster, you can replicate what a cloud provider offers.

If you are setting up a development environment without direct mapping of publicly accessible addresses to private addresses (NAT), MetalLB still gives you the ability to designate a network for “external” traffic into the cluster. While routing traffic to ClusterIPs can work, emulating what you would see with a cloud provider or a public Kubernetes cluster may be more in line with what you want to test. For this you would allocate a network, or subnet, to the MetalLB IP space, much like you would for floating IP addresses in something like OpenStack.

https://metallb.universe.tf/#requirements

  • A Kubernetes cluster running Kubernetes 1.13.0 or later that does not already have network load-balancing functionality.
  • A cluster network configuration that can coexist with MetalLB.
  • Some IPv4 addresses for MetalLB to hand out.
  • When using the L2 operating mode, traffic on port 7946 (TCP and UDP; another port can be configured) must be allowed between nodes, as required by memberlist.
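For example, on nodes using ufw, the memberlist requirement from the last bullet could be satisfied with something like this (substitute your own node subnet):

sudo ufw allow from 192.168.86.0/24 to any port 7946 proto tcp
sudo ufw allow from 192.168.86.0/24 to any port 7946 proto udp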

Installation

Add the Helm Repository and Update

We will add the helm repository and then make sure that all of our repositories are up to date.

helm repo add metallb https://metallb.github.io/metallb
helm repo update

Values.yaml and IP Address Space Configuration

After everything has been updated, we need to configure the IP address range we are going to use. For a specific range, the format is IP-IP; for a whole subnet, use CIDR notation such as IP/24 (or whatever your subnet mask is).

Create a values.yaml file and paste the information below. Make sure you update the addresses range to the correct network. The network will need to be routable from your location to access it once it has been configured.

Basic Configuration

$ cat values.yaml
configInline:
  address-pools:
  - name: default
    protocol: layer2
    addresses:
    - 192.168.86.10-192.168.86.20

Advanced Example with Multiple Address Pools

configInline:
  address-pools:
  - name: production
    protocol: layer2
    addresses:
    - 192.168.144.0/24
  - name: staging
    protocol: layer2
    addresses:
    - 192.168.145.0/24
    auto-assign: false
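Note the auto-assign: false on the staging pool: MetalLB will only hand out those addresses to Services that explicitly request the pool with the metallb.universe.tf/address-pool: staging annotation, while the production pool remains the default for everything else.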

Install MetalLB with Configured Values

After saving the values.yaml file we can finally install MetalLB using Helm.

helm install metallb metallb/metallb -f values.yaml

After installation we can verify the configuration.

$ kubectl get configmap metallb -o yaml
apiVersion: v1
data:
  config: |
    address-pools:
    - addresses:
      - 192.168.86.10-192.168.86.20
      name: default
      protocol: layer2
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: metallb
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2022-02-07T19:42:12Z"
  labels:
    app.kubernetes.io/instance: metallb
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: metallb
    app.kubernetes.io/version: v0.11.0
    helm.sh/chart: metallb-0.11.0
  name: metallb
  namespace: default
  resourceVersion: "8551"
  uid: 5823ae34-399e-4cd4-ab23-0440e00ee510

Our configuration is using a single address range with auto-assign, which means when we create a LoadBalancer service it will automatically pick the next available address. Let’s create a service to see this in action. Save the following in lb-deployment.yaml.

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

Now create the service and deployment.

kubectl create -f lb-deployment.yaml

The service and deployment have been created, which means we can see which IP address was assigned from our MetalLB LoadBalancer pool. In this case we end up with 192.168.86.10 in the EXTERNAL-IP column.

$ kubectl get service nginx
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.21.82.141   192.168.86.10   80:30967/TCP   4m40s

Let’s curl the endpoint and see if everything is working.

$ curl http://192.168.86.10
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Success! We were able to add MetalLB to a Kubernetes Cluster that didn’t have LoadBalancer support, and then we were able to provision a Service using the LoadBalancer type.
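As a side note, if you want a specific address from the pool rather than the next available one, MetalLB also honors the standard spec.loadBalancerIP field on the Service (the address here is just an example from our range):

spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.86.15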

Conclusion and Next Steps

MetalLB is a great tool for environments that do not have native support for the LoadBalancer service in Kubernetes, which is especially true if you are not running on a cloud provider. If you are looking for a way to bring the LoadBalancer service to your environment, MetalLB is a great choice.

PMK also provides a quick and easy way to deploy your clusters with MetalLB enabled. Check out some of our documentation and other posts related to MetalLB:

https://platform9.com/docs/kubernetes/metallb-application-load-balancing#metallb-introduction

https://platform9.com/docs/kubernetes/metallb-addon2
