Enable Platform9 DHCP
Platform9 DHCP is the recommended DHCP IPAM option (over whereabouts) for KubeVirt installations. With whereabouts, the Virtual Machine's IP address changes during a Live Migration when the VM moves to the target host.
What is Platform9 DHCP?
Platform9 created an alternative to whereabouts for the KubeVirt use case: a DHCP server runs inside a pod and answers the DHCP requests coming from the virtual machine instance (not from the pod, in the case of KubeVirt). The Multus NetworkAttachmentDefinition uses this DHCP server, so there is no need to specify an IPAM plugin. The client/consumer VM needs dhclient (or an equivalent DHCP client) to send DHCP requests.
Prerequisites
- KubeVirt
- Luigi (Advanced Networking)
- Kubemacpool
Enabling the Platform9 DHCP Addon:
- Using the Luigi addons, enable the dhcpController addon:
```yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: NetworkPlugins
metadata:
  name: networkplugins-sample-nosriov
spec:
  plugins:
    ...
    dhcpController: {}
```
- This creates the `dhcp-controller-system` namespace with the controller for dnsmasq.
- Create a NetworkAttachmentDefinition. Example:
```yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: ovs-br03-config
spec:
  # Add fields here
  ovsConfig:
    - bridgeName: "ovs-br03"
      nodeInterface: "ens5"
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-dnsmasq-test-woipam
  annotations:
    k8s.v1.cni.cncf.io/resourceName: ovs-cni.network.kubevirt.io/ovs-br03
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "ovs",
    "name": "ovs-dnsmasq-test-woipam",
    "bridge": "ovs-br03"
  }'
```
- To use an image that does not have dhclient installed (and cannot install it), create an additional NetworkAttachmentDefinition with the dhcp plugin enabled and start the dhcp daemon.
- Create a DHCP Server
```yaml
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-dnsmasq-test
  annotations:
    k8s.v1.cni.cncf.io/resourceName: ovs-cni.network.kubevirt.io/ovs-br03
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "ovs",
    "name": "ovs-dnsmasq-test",
    "bridge": "ovs-br03",
    "ipam": {
      "type": "dhcp"
    }
  }'
```
- About the fields:
a. name: Name of the DHCPServer. The dnsmasq configuration is generated in a ConfigMap with the same name.
b. networks: List of all networks that this pod will serve.
  - networkName: Name of the NetworkAttachmentDefinition. It should not have the dhcp plugin enabled.
  - interfaceIp: IP address allocated to the pod. It must include a prefix length to ensure the proper routes are added.
  - leaseDuration: How long the offered leases remain valid. Use a format dnsmasq accepts (e.g. 10m, 5h). Defaults to 1h.
  - vlanId: dnsmasq network identifier. Used as an identifier while restoring IPs.
  - cidr: `range` is compulsory; `range_start`, `range_end`, and `gateway` are optional. If `range_start` and `range_end` are provided, they are used in place of the default start and end.
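Putting these fields together, a DHCPServer object could look like the following sketch. This is an illustration only: the `DHCPServer` kind name and the `dhcp.plumber.k8s.pf9.io/v1alpha1` API group are assumptions inferred from the IPAllocation example later in this document, and all names and addresses are placeholders.

```yaml
apiVersion: dhcp.plumber.k8s.pf9.io/v1alpha1   # assumed, matching the IPAllocation group below
kind: DHCPServer                               # assumed kind name
metadata:
  name: dhcpserver-sample   # dnsmasq config is generated in a ConfigMap of the same name
spec:
  networks:
    - networkName: ovs-dnsmasq-test-woipam   # NAD without the dhcp plugin
      interfaceIp: 192.168.15.2/24           # prefix required so proper routes are added
      leaseDuration: 1h
      vlanId: vlan3
      cidr:
        range: 192.168.15.0/24               # compulsory
        range_start: 192.168.15.100          # optional
        range_end: 192.168.15.200            # optional
        gateway: 192.168.15.1                # optional
```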
- A ConfigMap is generated based on the DHCPServer. This is the conf file for dnsmasq. It can be overridden by creating a valid ConfigMap with the same name as the DHCPServer.
For any specific configuration, you can provide your own ConfigMap with valid dnsmasq.conf parameters. Along with this, dhcp-range must be in one of these two formats:
- dhcp-range=<start_IP>,<end_ip>,<netmask>,<leasetime>
- dhcp-range=<vlanID>,<start_ip>,<end_ip>,<netmask>,<leasetime>
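As an illustration, an override ConfigMap using the second (vlan-tagged) dhcp-range form might look like the sketch below. The names and addresses are placeholders, and the exact data key (`dnsmasq.conf`) is an assumption; supply whatever valid dnsmasq.conf parameters your deployment needs.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dhcpserver-sample   # must match the DHCPServer name to override the generated config
  namespace: default
data:
  dnsmasq.conf: |           # assumed key name
    # second form: <vlanID>,<start_ip>,<end_ip>,<netmask>,<leasetime>
    dhcp-range=vlan3,192.168.15.100,192.168.15.200,255.255.255.0,1h
```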
- Sample VM yaml to apply
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: test-ovs-interface-1
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
            - bridge: {}
              name: ovs-br03
        resources:
          requests:
            memory: 1024M
      hostname: myhostname1
      networks:
        - name: default
          pod: {}
        - name: ovs-br03
          multus:
            networkName: ovs-dnsmasq-test-woipam
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/fedora-cloud-container-disk-demo:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
```
- Sample StatefulSet YAML to apply:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: headless
spec:
  ports:
    - port: 80
      name: web
  selector:
    app: test
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test2
  serviceName: headless
  template:
    metadata:
      labels:
        app: test2
      annotations:
        k8s.v1.cni.cncf.io/networks: '[{ "name": "ovs-dnsmasq-test-woipam" }]'
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
      containers:
        - name: test2
          image: alpine
          ports:
            - containerPort: 80
              name: web
          command: ["/bin/sh"]
          args:
            - -c
            - >-
              udhcpc -i net1;
              tail -f /dev/null
          securityContext:
            runAsUser: 0
            # allowPrivilegeEscalation: false
            capabilities:
              add: ["NET_ADMIN"]
```
- An IPAllocation is created for every lease stored in the server. It is used to restore leases back to the DHCPServer. Leases are only restored to the vlanId mentioned, and a lease expires at `leaseExpiry`. A sample IPAllocation:
```yaml
apiVersion: dhcp.plumber.k8s.pf9.io/v1alpha1
kind: IPAllocation
metadata:
  creationTimestamp: "2022-11-09T12:18:58Z"
  generation: 1
  name: 192.168.15.90
  namespace: default
  resourceVersion: "189858"
  uid: 70ee31e1-d3b4-47f0-be92-0eb03cd33d57
spec:
  entityRef: test-ovs-interface-1
  leaseExpiry: "1667998138"
  macAddr: 1e:8d:f0:c4:6c:8e
  vlanId: vlan3
```
- To test the DHCPServer with KubeVirt Live Migration, connect the hosts with tunnels and set the MAC address of the bridge:
```shell
ovs-vsctl set bridge ovs-br02 other_config:hwaddr=<interface mac address>
# exit the daemon and bring the bridge up
ip link set ovs-br02 up
```
Kubemacpool installation:
- [deployment-on-arbitrary-cluster.md in the k8snetworkplumbingwg/kubemacpool repository](https://github.com/k8snetworkplumbingwg/kubemacpool)
- Using a MAC address pool for virtual machines | OpenShift Virtualization | OpenShift Container Platform 4.7
```shell
wget https://raw.githubusercontent.com/k8snetworkplumbingwg/kubemacpool/master/config/release/kubemacpool.yaml
mac_oui=02:`openssl rand -hex 1`:`openssl rand -hex 1`
sed -i "s/02:00:00:00:00:00/$mac_oui:00:00:00/" kubemacpool.yaml
sed -i "s/02:FF:FF:FF:FF:FF/$mac_oui:FF:FF:FF/" kubemacpool.yaml
kubectl apply -f ./kubemacpool.yaml
kubectl label namespace default mutatevirtualmachines.kubemacpool.io=allocate
```