Enable Platform9 DHCP
Platform9 DHCP is the recommended DHCP solution (rather than Whereabouts) for KubeVirt installations. Whereabouts has a known issue during live migration: the virtual machine's IP address changes when the VM is migrated to the target host.
What is Platform9 DHCP?
Platform9 created an alternative to Whereabouts for the KubeVirt use case: a DHCP server runs inside a pod and answers DHCP requests from the virtual machine instance (not the pod, in the case of KubeVirt). The Multus NetworkAttachmentDefinition uses this DHCP server, so there is no need to specify an IPAM plugin. The client/consumer VM needs dhclient installed in order to send DHCP requests.
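Since the guest must request the address itself, one option is to start dhclient from cloud-init if the guest image does not do so automatically. A minimal sketch, assuming the secondary (Multus) interface appears as eth1 inside the guest:

#cloud-config
runcmd:
  # eth1 is an assumption; check the interface name with "ip link" in your image
  - dhclient eth1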
Prerequisites
- KubeVirt
- Luigi (Advanced Networking)
- Kubemacpool (installation steps at the end of this section)
Enabling the Platform9 DHCP Addon:
- Using the Luigi addons, enable the dhcpController addon:
apiVersion: plumber.k8s.pf9.io/v1
kind: NetworkPlugins
metadata:
  name: networkplugins-sample-nosriov
spec:
  plugins:
    ...
    dhcpController: {}
- This creates the dhcp-controller-system namespace with the controller for dnsmasq. You can verify that the controller is running with the command below.
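For example (pod names will vary by release):

kubectl get pods -n dhcp-controller-system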
- Create a NetworkAttachmentDefinition. Example:
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: ovs-br03-config
spec:
  # Add fields here
  ovsConfig:
    - bridgeName: "ovs-br03"
      nodeInterface: "ens5"
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-dnsmasq-test-woipam
  annotations:
    k8s.v1.cni.cncf.io/resourceName: ovs-cni.network.kubevirt.io/ovs-br03
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "ovs",
    "name": "ovs-dnsmasq-test-woipam",
    "bridge": "ovs-br03"
  }'
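A quick way to confirm the definition exists (net-attach-def is the Multus short name for the resource):

kubectl get net-attach-def ovs-dnsmasq-test-woipam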
- To use an image that does not have dhclient installed (or where it cannot be installed), create an additional NetworkAttachmentDefinition with the dhcp IPAM plugin enabled and start the DHCP daemon. Example:
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-dnsmasq-test
  annotations:
    k8s.v1.cni.cncf.io/resourceName: ovs-cni.network.kubevirt.io/ovs-br03
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "ovs",
    "name": "ovs-dnsmasq-test",
    "bridge": "ovs-br03",
    "ipam": {
      "type": "dhcp"
    }
  }'
- Create a DHCPServer (a sketch follows this list). About the fields:
  a. name: name of the DHCPServer. The dnsmasq configuration is generated in a ConfigMap with the same name.
  b. networks: list of all networks that this pod will serve.
    - networkName: name of the NetworkAttachmentDefinition. It should not have the dhcp plugin enabled.
    - interfaceIp: IP address that the pod will be allocated. It must include a prefix length so that the proper routes are added.
    - leaseDuration: how long the offered leases remain valid. Provide it in a format valid for dnsmasq (e.g. 10m, 5h). Defaults to 1h.
    - vlanId: dnsmasq network identifier, used as an identifier while restoring IPs.
    - cidr: range is compulsory; range_start, range_end, and gateway are optional. If range_start and range_end are provided, they are used in place of the default start and end.
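Putting the fields together, a minimal DHCPServer sketch. The kind and apiVersion are inferred from the IPAllocation sample later in this section and may differ in your release; all addresses are placeholders:

apiVersion: dhcp.plumber.k8s.pf9.io/v1alpha1   # inferred from the IPAllocation sample below
kind: DHCPServer
metadata:
  name: dhcpserver-sample
spec:
  networks:
    - networkName: ovs-dnsmasq-test-woipam   # NAD without the dhcp plugin
      interfaceIp: 192.168.15.2/24           # prefix length required for routes
      leaseDuration: 1h
      vlanId: vlan3
      cidr:
        range: 192.168.15.0/24               # compulsory
        range_start: 192.168.15.90           # optional
        range_end: 192.168.15.95             # optional
        gateway: 192.168.15.1                # optional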
- A ConfigMap is generated from the DHCPServer; it holds the conf file for dnsmasq. It can be overridden by creating a valid ConfigMap with the same name as the DHCPServer. For any specific configuration, provide your own ConfigMap with valid dnsmasq.conf parameters. In addition, dhcp-range must be in one of these two formats (an example override follows them):
  - dhcp-range=<start_ip>,<end_ip>,<netmask>,<leasetime>
  - dhcp-range=<vlan_id>,<start_ip>,<end_ip>,<netmask>,<leasetime>
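A sketch of such an override, assuming the DHCPServer above is named dhcpserver-sample. The data key is an assumption; mirror the ConfigMap the controller generated (kubectl get configmap dhcpserver-sample -o yaml) to see the expected shape:

apiVersion: v1
kind: ConfigMap
metadata:
  name: dhcpserver-sample   # must match the DHCPServer name
data:
  dnsmasq.conf: |           # key name is an assumption; match the generated ConfigMap
    dhcp-range=192.168.15.90,192.168.15.95,255.255.255.0,1h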
- Sample VirtualMachine YAML to apply:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: test-ovs-interface-1
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
            - name: ovs-br03
              bridge: {}
        resources:
          requests:
            memory: 1024M
      hostname: myhostname1
      networks:
        - name: default
          pod: {}
        - name: ovs-br03
          multus:
            networkName: ovs-dnsmasq-test-woipam
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/fedora-cloud-container-disk-demo:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
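To apply and reach the VM (the filename is hypothetical; virtctl must be installed):

kubectl apply -f vm-test-ovs-interface-1.yaml
virtctl console test-ovs-interface-1
# inside the guest, "ip addr" should show the lease on the bridged interface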
- Sample StatefulSet YAML to apply:
---
apiVersion: v1
kind: Service
metadata:
  name: headless
spec:
  ports:
    - port: 80
      name: web
  selector:
    app: test2   # must match the StatefulSet pod labels below
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test2
  serviceName: headless
  template:
    metadata:
      labels:
        app: test2
      annotations:
        k8s.v1.cni.cncf.io/networks: '[
          { "name": "ovs-dnsmasq-test-woipam" }
        ]'
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
      containers:
        - name: test2
          image: alpine
          ports:
            - containerPort: 80
              name: web
          command: ["/bin/sh"]
          args:
            - -c
            # the ";" keeps the folded block a valid shell command line
            - >-
              udhcpc -i net1;
              tail -f /dev/null
          securityContext:
            runAsUser: 0
            # allowPrivilegeEscalation: false
            capabilities:
              add: ["NET_ADMIN"]
- An IPAllocation is made for every lease stored in the server. It is used to restore leases back to the DHCPServer; leases are restored only to the vlanId mentioned. The lease expires at leaseExpiry. A sample IPAllocation looks like:
apiVersion: dhcp.plumber.k8s.pf9.io/v1alpha1
kind: IPAllocation
metadata:
  creationTimestamp: "2022-11-09T12:18:58Z"
  generation: 1
  name: 192.168.15.90
  namespace: default
  resourceVersion: "189858"
  uid: 70ee31e1-d3b4-47f0-be92-0eb03cd33d57
spec:
  entityRef: test-ovs-interface-1
  leaseExpiry: "1667998138"
  macAddr: 1e:8d:f0:c4:6c:8e
  vlanId: vlan3
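Allocations can be listed like any other namespaced resource; the plural name here is an assumption following the usual CRD convention:

kubectl get ipallocations -A

Note that leaseExpiry is a Unix timestamp; for example, date -d @1667998138 renders the sample value as a human-readable time.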
- To test the DHCPServer with KubeVirt live migration, connect the hosts with tunnels (a tunnel sketch follows the commands below) and set the MAC address of the bridge:
ovs-vsctl set bridge ovs-br02 other_config:hwaddr=<interface mac address>
# exit the daemon and bring the bridge up
ip link set ovs-br02 up
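One way to connect the bridges across hosts is a VXLAN tunnel; the port name and peer IPs below are placeholders for your environment:

# on host A (host B reachable at 10.0.0.2)
ovs-vsctl add-port ovs-br02 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=10.0.0.2
# on host B (host A reachable at 10.0.0.1)
ovs-vsctl add-port ovs-br02 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=10.0.0.1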
Kubemacpool installation:
- deployment-on-arbitrary-cluster.md in the kubemacpool repository: https://github.com/k8snetworkplumbingwg/kubemacpool
- Using a MAC address pool for virtual machines | OpenShift Virtualization | OpenShift Container Platform 4.7
# Download the release manifest
wget https://raw.githubusercontent.com/k8snetworkplumbingwg/kubemacpool/master/config/release/kubemacpool.yaml
# Generate a random OUI and patch it into the manifest's MAC range
mac_oui=02:`openssl rand -hex 1`:`openssl rand -hex 1`
sed -i "s/02:00:00:00:00:00/$mac_oui:00:00:00/" kubemacpool.yaml
sed -i "s/02:FF:FF:FF:FF:FF/$mac_oui:FF:FF:FF/" kubemacpool.yaml
kubectl apply -f ./kubemacpool.yaml
# Opt the default namespace in to MAC allocation for VirtualMachines
kubectl label namespace default mutatevirtualmachines.kubemacpool.io=allocate
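After applying, the kubemacpool pods should come up; the namespace below matches the upstream release manifest, but verify it against the version you downloaded:

kubectl get pods -n kubemacpool-system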