Load Balancer as a Service (LBaaS)
Private Cloud Director implements Load Balancer as a Service (LBaaS) using Octavia with OVN (Open Virtual Network) as the provider driver. This implementation offers a lightweight and efficient load-balancing solution without the overhead of the traditional virtual machine-based approach.
OVN implements load balancing directly within the OVN distributed router, using OpenFlow rules programmed into Open vSwitch (OVS). This eliminates the need for dedicated load balancer virtual machines.
Prerequisites
Before implementing LBaaS, make sure the following requirements are met. These prerequisites apply only if you plan to use the Private Cloud Director load balancer as a service (LBaaS) implementation to create one or more software-defined load balancers for your applications.
CLI Update
You need to install the Octavia extension to the OpenStack CLI in order to use the LBaaS-specific OpenStack CLI commands. Run the following command on the machine where you want to run the OpenStack CLI to install both packages.
sudo apt install python3-openstackclient python3-octaviaclient -y
Alternatively, run the following command on a machine where you already have the OpenStack CLI running to add only the LBaaS extension.
sudo apt install python3-octaviaclient -y
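To confirm that the Octavia extension is available, you can run any LBaaS command, for example listing load balancers. This is a quick check and assumes your OpenStack credentials are already sourced:
openstack loadbalancer list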
Network Requirements
You will need:
- An internal VXLAN or flat network (physical or virtual) that will be used both by your load balancer instance and by the pool of virtual machines that will run the service and receive client requests (a creation sketch follows this list).
- (Optionally) An external network if you plan to use public (floating) IPs for your load balancer.
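For reference, here is a minimal sketch of creating such an internal network and subnet with the OpenStack CLI. The names web-net and web-subnet and the CIDR are illustrative assumptions, not required values:
# Create a tenant network for the load balancer and the backend VMs
openstack network create web-net

# Add a subnet; pick a CIDR that fits your environment
openstack subnet create web-subnet \
  --network web-net \
  --subnet-range 192.168.10.0/24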
Pool of Virtual Machines
The pool of virtual machines that will run the application requiring load balancing must meet the following requirements (a quick verification sketch follows the list):
- Be running and in an 'active' state
- Have a valid IP address assigned from the same tenant network that you will use to create a new load balancer instance.
- Have your application (for example, web server) running and accessible
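A minimal sketch for verifying these requirements from the CLI; the VM name web-vm-1 and port 8080 are illustrative assumptions:
# Confirm the VM is ACTIVE and note its tenant-network IP address
openstack server show web-vm-1 -c status -c addresses

# Confirm the application answers on its port from within the network
curl http://<VM_IP>:8080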
Router Configuration
The load balancer requires a virtual router to forward traffic between its virtual IP (VIP) and the backend pool of VMs. The router must be configured differently depending on the scenario:
- If the VIP and backend pool of VMs are on the same physical or virtual network, the router simply needs to have a single interface on that network. This allows it to respond to ARP requests for the VIP.
- If the VIP and backend pools are on different networks, the router must have an interface for each network.
- If you plan to use a floating IP to front the VIP of the load balancer, you will need a router that connects the tenant network used by the load balancer and the pool of VMs to the external network serving the floating IPs. You will also need available public (floating) IPs in your quota (a router setup sketch follows this list).
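A minimal sketch of the floating IP scenario, assuming the illustrative names web-router and web-subnet (from the earlier sketch) and an external network named external-net:
# Create a router and attach the tenant subnet used by the VIP and backend VMs
openstack router create web-router
openstack router add subnet web-router web-subnet

# Set the external network as the gateway so floating IPs can be routed
openstack router set web-router --external-gateway external-net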
Benefits of OVN Provider Driver
The OVN provider driver for LBaaS in Private Cloud Director offers several advantages:
Resource Efficiency
- No dedicated virtual machines required for load balancing
Faster Deployment
- Near-instant load balancer creation
- No VM provisioning or boot time
Simplified Management
- No separate management network required
- Integrated with existing OVN infrastructure
Supported LBaaS Configuration
Private Cloud Director currently supports the following configuration options for LBaaS:
Protocol Support
- Supports TCP, UDP, and SCTP protocols
- No Layer-7 (HTTP) load balancing support
- A 1:1 protocol mapping between listeners and pools is required (see the sketch below)
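For example, a UDP listener must be paired with a UDP pool. The sketch below is illustrative; udp-listener and udp-pool are assumed names:
# The pool protocol must match the listener protocol (UDP in this example)
openstack loadbalancer listener create \
  --protocol UDP \
  --protocol-port 53 \
  <LOADBALANCER_NAME> \
  --name udp-listener
openstack loadbalancer pool create \
  --protocol UDP \
  --lb-algorithm SOURCE_IP_PORT \
  --listener udp-listener \
  --name udp-pool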
Load Balancing Algorithm
- Only the SOURCE_IP_PORT algorithm is supported
- ROUND_ROBIN and LEAST_CONNECTIONS algorithms are not currently supported (a limitation of the OVN provider driver)
Health Monitoring
- Supports TCP and UDP-CONNECT protocols
- SCTP health monitoring is not currently supported
IP Version Support
- Mixed IPv4 and IPv6 members not supported
- IPv6 support is not currently fully tested
Create a New Instance of Load Balancer
Core Components
- Load Balancer: Provides a virtual IP (VIP) endpoint to distribute traffic across backend servers (Virtual Machines).
- Listener: Defines protocol and port rules for incoming traffic.
- Pool: A group of backend members that handle requests.
- Member: An individual backend server instance.
- Health Monitor: Continuously checks the health of pool members to ensure high availability.
Create a Load Balancer
First, create a load balancer resource with a virtual IP (VIP) in the specified subnet.
Navigate to Networking → Load Balancers.
Click Create Load Balancer.
Enter a name and select a subnet for the VIP
- The VIP is the single entry point for your load balancer
- It must be created on a subnet where your load balancer will be accessible
- This subnet should be the same tenant network where your backend servers (Virtual Machines) are deployed.
Click Create and wait for the status to become ACTIVE.
Create a Listener
Once the load balancer resource is set up, create the listener. A listener is the component that defines how your load balancer processes incoming requests:
Navigate to Networking → Load Balancers and open the Listeners tab.
Click Create Listener.
Specify a name, protocol, and port
- The name identifies the listener under the load balancer.
- Choose the appropriate protocol (TCP, UDP, SCTP) based on your application's communication requirements.
- Define the port number where incoming traffic should be accepted by the load balancer (e.g., 80 for HTTP, 443 for HTTPS).
Click Create to provision the listener.
Create a Pool
Now, you can create a pool of virtual machines that will handle the client requests from the load balancer.
Navigate to Networking → Load Balancers and go to the Pools tab.
Click Create Pool.
Provide a name, select a protocol, and choose a load-balancing algorithm
- The name helps track multiple pools in your deployment.
- The protocol must match the listener's definition to ensure compatibility with incoming traffic.
- Currently, only the SOURCE_IP_PORT algorithm is supported. It routes traffic consistently based on the source IP address and port, which is helpful for session persistence.
Click Create to create the pool.
Adding Members
Once the pool is created, you can add the member virtual machines to receive client requests. You will provide each virtual machine's IP address, listening port, and subnet.
Navigate to Networking → Load Balancers and go to the Members tab.
Click Add Member.
Provide the Virtual Machine's name, subnet, IP address, and port.
- The name distinguishes this member within the pool.
- The subnet should match the network where the member server resides.
- Enter the Virtual Machine's IP address that will handle requests and the port it is listening on (e.g., port 8080 for a web app).
Click Add Member.
Configuring Health Monitors
Set up health monitoring to ensure that the load balancer periodically checks for the health of the pool of virtual machines. Unhealthy VMs will be skipped to avoid service disruption.
Navigate to Networking → Load Balancers and open the Monitors tab.
Click Create Health Monitor.
Define monitor type and parameters for health checks
- Select the monitor type based on your backend's protocol support (TCP or UDP-CONNECT).
- Set the delay between successive health checks (in seconds), timeout for each check, and retry count to determine when a member is marked unhealthy.
- For example, a delay of 5 seconds, timeout of 3 seconds, and three retries mean a member must fail three checks over 15 seconds to be removed from rotation.
Click Create Monitor to apply health checks to all members in the pool.
Using CLI
You can also use CLI commands to create a load balancer instance.
Create a Load Balancer
First, create a load balancer resource with a virtual IP (VIP) in the specified subnet.
- The VIP is the single entry point for your load balancer
- It must be created on a subnet where your load balancer will be accessible
- This subnet should be the same tenant network where your backend servers (Virtual Machines) are deployed.
openstack loadbalancer create --provider ovn \
  --vip-subnet-id <SUBNET_ID> \
  --name <LOADBALANCER_NAME>
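As in the UI flow, wait for the load balancer's provisioning status to become ACTIVE before creating the listener. A minimal check:
openstack loadbalancer show <LOADBALANCER_NAME> -c provisioning_status -f value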
Create a Listener
Once the load balancer resource is set up, create the listener. A listener is the component that defines how your load balancer processes incoming requests:
- It specifies the protocol (TCP, UDP, or SCTP) and port number
- Acts as a front-end service that receives incoming traffic
- Routes the traffic to the appropriate pool of backend servers (Virtual Machine)
- Example: A TCP listener on port 80 for web traffic
openstack loadbalancer listener create \
  --protocol TCP \
  --protocol-port 80 \
  <LOADBALANCER_NAME> \
  --name <LISTENER_NAME>
Create a Pool
Now you can create a pool of virtual machines that will handle the client requests from the load balancer.
- These VMs must be deployed and running before adding them to the pool
- Every virtual machine in a given pool should provide the same service (e.g., web servers that are part of your application)
- Pool members are identified by their IP address and port
You can also specify the load-balancing algorithm (e.g., SOURCE_IP_PORT) here and associate the pool with the listener.
openstack loadbalancer pool create \
  --protocol TCP \
  --lb-algorithm SOURCE_IP_PORT \
  --listener <LISTENER_NAME> \
  --name <POOL_NAME>
Once the pool is created, you can add the member virtual machines that will receive the client requests. You will do this by providing each virtual machine's IP address, listening port, and subnet.
openstack loadbalancer member create \
  --subnet-id <SUBNET_ID> \
  --address <BACKEND_IP> \
  --protocol-port 80 \
  <POOL_NAME>
Configure Health Monitoring
Set up health monitoring to ensure that the load balancer periodically checks for the health of the pool of virtual machines. Unhealthy VMs will be skipped to avoid service disruption.
openstack loadbalancer healthmonitor create \
  --delay 5 \
  --timeout 3 \
  --max-retries 3 \
  --type TCP \
  <POOL_NAME>
(Optional) Configure Public (Floating) IP
You can use the following commands to expose the load balancer for external access:
Create a public (floating) IP from the external network for the load balancer.
openstack floating ip create <EXTERNAL_NETWORK_NAME>
Retrieve the port ID of the virtual IP associated with the load balancer. This information is needed to link the public (floating) IP.
openstack loadbalancer show <LOADBALANCER_NAME> -c vip_port_id -f value
Then, associate the floating IP with the load balancer port, enabling public access.
openstack floating ip set --port <PORT_ID> <FLOATING_IP_ID>
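The two previous steps can also be combined into a single snippet; this is a convenience sketch, assuming a Bash-compatible shell:
# Capture the VIP port ID and attach the floating IP in one go
VIP_PORT_ID=$(openstack loadbalancer show <LOADBALANCER_NAME> -c vip_port_id -f value)
openstack floating ip set --port "$VIP_PORT_ID" <FLOATING_IP_ID>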
Verification and Testing
Check load balancer status:
openstack loadbalancer list
Check the health status of the virtual machines in the pool to ensure they can handle traffic.
openstack loadbalancer member list <POOL_NAME>
Confirm that the load balancer is operational by sending a test request to the floating IP.
curl http://<FLOATING_IP>
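To exercise the load balancer with several requests, you can loop the test. Note that with the SOURCE_IP_PORT algorithm, repeated requests from the same source IP and port may keep landing on the same backend; this is expected. A quick sketch, assuming a Bash-compatible shell:
# Send several requests and print each response
for i in $(seq 1 5); do
  curl -s http://<FLOATING_IP>
  echo
done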