Load Balancer as a Service (LBaaS)
Private Cloud Director implements Load Balancer as a Service (LBaaS) using Octavia with OVN (Open Virtual Network) as the provider driver. This implementation offers a lightweight and efficient load-balancing solution without the overhead of traditional virtual machine-based approaches.
Private Cloud Director currently uses the open source OVN provider driver for LBaaS instead of the default open source Amphora driver. The OVN driver implements load balancing directly within the OVN distributed router, using OpenFlow rules programmed into Open vSwitch (OVS), eliminating the need for dedicated load balancer virtual machines.
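To confirm that the OVN provider driver is available in your deployment, you can list the registered Octavia providers and their capabilities. This is a sketch; the exact output columns may vary by release:

```shell
# List the Octavia provider drivers registered with the service.
# The ovn driver should appear in this list.
openstack loadbalancer provider list

# Show the algorithms the ovn provider supports;
# expect SOURCE_IP_PORT only.
openstack loadbalancer provider capability list ovn
```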
Prerequisites
Before implementing LBaaS, make sure the LBaaS Prerequisites are met.
Why OVN Provider Driver
The choice of OVN as the provider driver for LBaaS in the Private Cloud Director offers several advantages:
Resource Efficiency
- No dedicated virtual machines required for load balancing
Faster Deployment
- Near-instant load balancer creation
- No VM provisioning or boot time
Simplified Management
- No separate management network required
- Integrated with existing OVN infrastructure
Supported LBaaS Configuration
Private Cloud Director currently supports the following configuration options for LBaaS:
Protocol Support
- Supports TCP, UDP, and SCTP protocols
- No Layer-7 (HTTP) load balancing support
- 1:1 protocol mapping between listeners and pools required
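As a sketch of the 1:1 listener-to-pool mapping, a UDP service needs its own UDP listener and a matching UDP pool. The names, port, and load balancer here are illustrative:

```shell
# A UDP listener and its dedicated UDP pool -- the pool protocol must
# match the listener protocol (no mixing, and no Layer-7/HTTP options).
openstack loadbalancer listener create \
  --protocol UDP \
  --protocol-port 53 \
  my-lb \
  --name dns-listener

openstack loadbalancer pool create \
  --protocol UDP \
  --lb-algorithm SOURCE_IP_PORT \
  --listener dns-listener \
  --name dns-pool
```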
Load Balancing Algorithm
- Only the SOURCE_IP_PORT algorithm is supported.
- ROUND_ROBIN and LEAST_CONNECTIONS algorithms are not currently supported (a limitation of the OVN provider driver).
Health Monitoring
- Supports TCP and UDP-CONNECT protocols
- SCTP health monitoring is not currently supported
IP Version Support
- Mixed IPv4 and IPv6 members not supported
- IPv6 support is not currently fully tested
Create a New Load Balancer Instance
Core Components
- Load Balancer: Provides a virtual IP (VIP) endpoint to distribute traffic across backend servers (Virtual Machines).
- Listener: Defines protocol and port rules for incoming traffic.
- Pool: A group of backend members that handle requests.
- Member: An individual backend server instance.
- Health Monitor: Continuously checks the health of pool members to ensure high availability.
Create a Load Balancer
First, create a load balancer resource with a virtual IP (VIP) in the specified subnet.
Navigate to Networking → Load Balancers.
Click Create Load Balancer.
Enter a name and select a subnet for the VIP
- The VIP is the single entry point for your load balancer
- It must be created on a subnet where your load balancer will be accessible
- This subnet should be the same tenant network where your backend servers (Virtual Machines) are deployed.
Click Create and wait for the status to become ACTIVE.
Create a Listener
Once the load balancer resource is set up, create the listener. A listener is the component that defines how your load balancer processes incoming requests:
Navigate to Networking → Load Balancers and open the Listeners tab.
Click Create Listener.
Specify a name, protocol, and port
- The name identifies the listener under the load balancer.
- Choose the appropriate protocol (TCP, UDP, SCTP) based on your application's communication requirements.
- Define the port number where incoming traffic should be accepted by the load balancer (e.g., 80 for HTTP, 443 for HTTPS).
Click Create to provision the listener.
Create a Pool
Now, you can create a pool of virtual machines that will handle the client requests from the load balancer.
Navigate to Networking → Load Balancers and go to the Pools tab.
Click Create Pool.
Provide a name, select a protocol, and choose a load-balancing algorithm
- The name helps track multiple pools in your deployment.
- The protocol must match the listener's definition to ensure compatibility with incoming traffic.
- Currently, only the SOURCE_IP_PORT algorithm is supported. It routes traffic consistently based on the source IP address and port, which helps with session persistence.
Click Create to create the pool.
Adding Members
Once the pool is created, you can add the member virtual machines to receive client requests. You will provide each virtual machine's IP address, listening port, and subnet.
Navigate to Networking → Load Balancers and go to the Members tab.
Click Add Member.
Provide the virtual machine's name, subnet, IP address, and port.
- The name distinguishes this member within the pool.
- The subnet should match the network where the member server resides.
- Enter the virtual machine's IP address that will handle requests and the port it is listening on (e.g., port 8080 for a web app).
Click Add Member.
Configuring Health Monitors
Set up health monitoring to ensure that the load balancer periodically checks for the health of the pool of virtual machines. Unhealthy VMs will be skipped to avoid service disruption.
Navigate to Networking → Load Balancers and open the Monitors tab.
Click Create Health Monitor.
Define monitor type and parameters for health checks
- Select the monitor type based on your backend's protocol support (TCP or UDP-CONNECT).
- Set the delay between successive health checks (in seconds), timeout for each check, and retry count to determine when a member is marked unhealthy.
- For example, a delay of 5 seconds, timeout of 3 seconds, and three retries mean a member must fail three checks over 15 seconds to be removed from rotation.
Click Create Monitor to apply health checks to all members in the pool.
Using CLI
You can also use CLI commands to create a load balancer instance.
Create a Load Balancer
First, create a load balancer resource with a virtual IP (VIP) in the specified subnet.
- The VIP is the single entry point for your load balancer
- It must be created on a subnet where your load balancer will be accessible
- This subnet should be the same tenant network where your backend servers (Virtual Machines) are deployed.
openstack loadbalancer create --provider ovn \
--vip-subnet-id <SUBNET_ID> \
--name <LOADBALANCER_NAME>
Create a Listener
Once the load balancer resource is set up, create the listener. A listener is the component that defines how your load balancer processes incoming requests:
- It specifies the protocol (TCP, UDP, or SCTP) and port number
- Acts as a front-end service that receives incoming traffic
- Routes the traffic to the appropriate pool of backend servers (Virtual Machine)
- Example: A TCP listener on port 80 for web traffic
openstack loadbalancer listener create \
--protocol TCP \
--protocol-port 80 \
<LOADBALANCER_NAME> \
--name <LISTENER_NAME>
Create a Pool
Now you can create a pool of virtual machines that will handle the client requests from the load balancer.
- These VMs must be deployed and running before adding them to the pool
- Every virtual machine in a given pool should provide the same service (e.g., web servers that are part of your application)
- Pool members are identified by their IP address and port
You can also specify the load-balancing algorithm (e.g., SOURCE_IP_PORT) here and associate the pool with the listener.
openstack loadbalancer pool create \
--protocol TCP \
--lb-algorithm SOURCE_IP_PORT \
--listener <LISTENER_NAME> \
--name <POOL_NAME>
Add Members
Once the pool is created, add the member virtual machines that will receive client requests by providing each virtual machine's IP address, listening port, and subnet.
openstack loadbalancer member create \
--subnet-id <SUBNET_ID> \
--address <BACKEND_IP> \
--protocol-port 80 \
<POOL_NAME>
Configure Health Monitoring
Set up health monitoring to ensure that the load balancer periodically checks for the health of the pool of virtual machines. Unhealthy VMs will be skipped to avoid service disruption.
openstack loadbalancer healthmonitor create \
--delay 5 \
--timeout 3 \
--max-retries 3 \
--type TCP \
<POOL_NAME>
(Optional) Configure Public (Floating) IP
You can use the following commands to expose the load balancer for external access:
Create a public (floating) IP from the external network for the load balancer.
openstack floating ip create <EXTERNAL_NETWORK_NAME>
Retrieve the port ID of the virtual IP associated with the load balancer. This information is needed to link the public (floating) IP.
openstack loadbalancer show <LOADBALANCER_NAME> -c vip_port_id -f value
Then, associate the floating IP with the load balancer port, enabling public access.
openstack floating ip set --port <PORT_ID> <FLOATING_IP_ID>
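The three floating IP steps above can be combined into a single shell sketch using command substitution. The network and load balancer names remain placeholders:

```shell
# Allocate a floating IP on the external network and capture its ID.
FIP_ID=$(openstack floating ip create <EXTERNAL_NETWORK_NAME> -f value -c id)

# Look up the VIP port of the load balancer.
VIP_PORT_ID=$(openstack loadbalancer show <LOADBALANCER_NAME> -c vip_port_id -f value)

# Bind the floating IP to the VIP port, exposing the load balancer publicly.
openstack floating ip set --port "$VIP_PORT_ID" "$FIP_ID"
```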
Verification and Testing
Check load balancer status:
openstack loadbalancer list
Check the health status of the virtual machines in the pool to ensure they can handle traffic.
openstack loadbalancer member list <POOL_NAME>
Confirm that the load balancer is operational by sending a test request to the floating IP.
curl http://<FLOATING_IP>
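Because SOURCE_IP_PORT hashes on the client's source IP and port, repeated requests from the same client may still land on different members, since curl opens a new source port for each connection. A quick sketch to probe this, assuming each backend returns an identifying response:

```shell
# Send several requests; responses may rotate across backend members
# as the source port changes between connections.
for i in 1 2 3 4 5; do
  curl -s http://<FLOATING_IP>/
done
```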