Luigi Networking Quickstart
Prerequisites
- Kubernetes v1.17+
- Calico CNI
- Privileged Pods
- MetalLB must not be deployed.
- Every worker node intended for IPVLAN/L2 or MACVLAN should have an additional NIC/port that is up and running with no IP address assigned to it; for IPVLAN/L3, the node needs an additional interface with an IP address assigned (see the example below).
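For example, to bring up a spare interface for IPVLAN/L2 or MACVLAN use without assigning an address (eth1 is a placeholder; substitute your actual NIC name):
$ ip link set dev eth1 up
$ ip addr show dev eth1   # interface should be UP with no inet address listed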
Luigi Installation
Download the Luigi Operator definition to your master node, or to any machine from which you can run kubectl commands against this cluster. (A local copy of the manifest is at the end of the document.)
Then install it using the following command:
$ kubectl apply -f luigi-plugins-operator.yaml
Luigi Install Validation
After executing the command above, validate the installation using the following command. Note that some of the pods may take a while to become fully ready.
pf9-0102:~ carlos$ kubectl get all -n luigi-system
NAME READY STATUS RESTARTS AGE
pod/luigi-controller-manager-74bdbf9cc9-2g2tf 2/2 Running 0 41h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/luigi-controller-manager-metrics-service ClusterIP 10.168.113.206 <none> 8443/TCP 41h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/luigi-controller-manager 1/1 1 1 41h
NAME DESIRED CURRENT READY AGE
replicaset.apps/luigi-controller-manager-74bdbf9cc9 1 1 1 41h
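Optionally, confirm that the Luigi CRDs were registered; the grep pattern below matches the plumber.k8s.pf9.io API group used by the manifests later in this guide:
$ kubectl get crds | grep plumber.k8s.pf9.io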
Luigi Networking Plugins
Luigi Networking Plugins Installation (hostPlumber)
The sampleplugins.yaml manifest deploys the CNI plugins. The different components are deployed as DaemonSets and run on every node, masters and workers alike.
Create a namespace hostplumber for the resources related to the hostplumber pod:
kubectl create namespace hostplumber
Apply the following YAML (uncomment as applicable to your environment) to choose and install the required Luigi plugins.
apiVersion: plumber.k8s.pf9.io/v1
kind: NetworkPlugins
metadata:
  name: networkplugins-sample11
spec:
  # Add fields here
  plugins:
    hostPlumber:
      namespace: hostplumber
      #hostPlumberImage: "platform9/luigi-plumber:v0.1.0"
    #nodeFeatureDiscovery: {}
    #multus: {}
    #whereabouts: {}
    # COMMENT/UNCOMMENT - requires all host networking/VFs to be configured first
    #sriov: {}
As you may have observed, all the plugins except hostPlumber are commented out. First, we will install the hostPlumber plugin; later, after creating our first HostNetworkTemplate object, we will revisit this manifest and re-apply it with the rest of the CNI plugins uncommented. hostPlumber is not required, but it provides a mechanism to configure SR-IOV and host networking, as well as to view the node SR-IOV and interface state via a K8s operator. If you’re configuring host networking and SR-IOV VFs through other means, the hostPlumber plugin is not needed and can be skipped.
We deploy this plugin first because the SR-IOV CNI requires VF and network configuration to be in place before the SR-IOV plugin is deployed.
Execute the following command to install the hostPlumber plugin:
$ kubectl apply -f sampleplugins.yaml
Luigi Networking Plugins Install Validation (hostPlumber)
After executing the command above, let’s review our work.
pf9-0102:~ carlos$ kubectl get ds -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
calico-node 4 4 4 4 4 kubernetes.io/os=linux 2d9h
hostconfig-controller-manager 4 4 4 4 4 <none> 41h
Luigi HostNetworkTemplate Object
The HostNetworkTemplate CRD lets us enable the desired number of VFs on our nodes, as well as choose which driver is loaded on the newly enabled VFs.
The nodeSelector field ensures SR-IOV configuration is performed only on SR-IOV capable nodes.
Note: The NFD (Node Feature Discovery) plugin should be installed first in order to leverage the label feature.node.kubernetes.io/network-sriov.capable: "true".
For example:
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: sriovconfig-enp3s0f1-vfio-pci
spec:
  # Add fields here
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
    testlabelA: "123"
  sriovConfig:
  - pfName: enp3s0f1
    numVfs: 8
    vfDriver: vfio-pci
    mtu: 9000
---
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: sriovconfig-enp3s0f0-kernel
spec:
  # Add fields here
  sriovConfig:
  - pfName: enp3s0f0
    numVfs: 4
    vfDriver: ixgbevf
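Save both definitions to a file and apply it like any other manifest (the filename below is just an example):
$ kubectl apply -f sriov-hostnetworktemplates.yaml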
In this definition, we are creating two HostNetworkTemplates: one for SR-IOV using the vfio-pci driver (sriovconfig-enp3s0f1-vfio-pci) and the other for SR-IOV using kernel drivers (sriovconfig-enp3s0f0-kernel). The first will configure VFs only on nodes that meet all three criteria: SR-IOV capable, a PF named enp3s0f1, and the label testlabelA: "123".
The second will attempt to configure 4 VFs with the ixgbevf driver on every node, regardless of labels.
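Once applied, you can verify on a node itself that the VFs were created, assuming shell access to the host (these sysfs paths are standard for SR-IOV capable NICs):
$ cat /sys/class/net/enp3s0f1/device/sriov_numvfs   # should report 8 once the first template is applied
$ lspci | grep -i "virtual function"                # lists the newly created VFs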
It is also possible to merge the two sriovConfig entries for the two interfaces into the same HostNetworkTemplate CRD, rather than creating a separate one for each PF. For example:
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: hostnetworktemplate-kernel-enp3
spec:
  # Add fields here
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
    testlabelA: "123"
  sriovConfig:
  - pfName: enp3s0f1
    numVfs: 8
    vfDriver: ixgbevf
    mtu: 9000
  - pfName: enp3s0f0
    numVfs: 4
    vfDriver: ixgbevf
Configuring via vendor and device ID
The following template will search for all interfaces matching vendor ID 8086 (Intel) and device ID 1528 (a particular NIC model). It will then create 32 VFs on each matching device and bind all of them to vfio-pci (the DPDK driver). This is useful if you don’t know the interface naming scheme or PCI addresses across your hosts, but simply want to target a particular NIC model by vendor and device ID.
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: hostnetworktemplate-1528-dev
spec:
  # Add fields here
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
    testlabelA: "123"
  sriovConfig:
  - vendorId: "8086"
    deviceId: "1528"
    numVfs: 32
    vfDriver: vfio-pci
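If you don’t know a NIC’s vendor and device IDs, lspci can report them on the host (the grep pattern is illustrative); matching devices show the IDs in brackets at the end of the line, e.g. [8086:1528]:
$ lspci -nn | grep -i ethernet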
Configuring via PCI address
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: hostnetworktemplate-sample
spec:
  # Add fields here
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
    testlabelA: "123"
  sriovConfig:
  - pciAddr: "0000:03:00.0"
    numVfs: 32
    vfDriver: vfio-pci
  - pciAddr: "0000:03:00.1"
    numVfs: 32
    vfDriver: vfio-pci
The above will configure 32 VFs on the PF matching PCI address “0000:03:00.0” and 32 VFs on the PF at “0000:03:00.1”, for a total of 64 VFs, and bind each VF to the vfio-pci driver.
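To find the PCI address of a named interface, ethtool reports it as bus-info (run on the host; enp3s0f0 is taken from the examples above):
$ ethtool -i enp3s0f0 | grep bus-info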
HostNetwork CRD
This is not to be confused with the status section of the HostNetworkTemplate CRD. (In later phases, each node will append its nodename and the success/failure of applying the HostNetworkTemplate policy to the Status section.)
The HostNetwork CRD, by contrast, is not created by the user. It is intended to be a read-only CRD; the DaemonSet operator on each node discovers various host settings and populates them into this CRD:
- Created: first, upon the hostPlumber plugin being deployed.
- Updated: after each application of a HostNetworkTemplate CRD.
- For Phase 3 and later: monitored as a periodic task, updating and discovering host changes.
One HostNetwork CRD is automatically created for each node. Its Name corresponds to the node’s name, which in PMK is the IP.
[root@arjunpmk-master ~]$ kubectl get HostNetwork -n luigi-system
NAME AGE
10.128.144.14 28s
10.128.144.43 25s
10.128.237.203 27s
10.128.237.204 26s
Fetching the first Node in -o yaml reveals:
[root@arjunpmk-master ~]$ kubectl get HostNetwork 10.128.237.204 -n luigi-system -o yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetwork
metadata:
  creationTimestamp: "2020-10-20T08:22:58Z"
  generation: 1
  name: 10.128.237.204
  namespace: luigi-system
  resourceVersion: "3751"
  selfLink: /apis/plumber.k8s.pf9.io/v1/namespaces/luigi-system/HostNetworks/10.128.237.204
  uid: 13bfa4ca-2546-43dc-8074-9943f054674e
spec:
  interfaceStatus:
  - currentSriovConfig:
      totalVfs: 63
    deviceId: "1528"
    mac: a0:36:9f:43:54:54
    mtu: 1500
    pciAddr: "0000:03:00.0"
    pfDriver: ixgbe
    pfName: enp3s0f0
    sriovEnabled: true
    vendorId: "8086"
  - currentSriovConfig:
      totalVfs: 63
    deviceId: "1528"
    mac: a0:36:9f:43:54:56
    mtu: 9000
    pciAddr: "0000:03:00.1"
    pfDriver: ixgbe
    pfName: enp3s0f1
    sriovEnabled: true
    vendorId: "8086"
status: {}
The node above is a bare-metal SR-IOV node with two interfaces, enp3s0f0 and enp3s0f1. (A node that happens to be, say, an OpenStack VM would instead report a single eth0 interface using the virtio driver.) For each PF, and for each VF underneath it once VFs have been configured, you can see the device and vendor IDs, PCI address, the driver in use, the MAC address allocated (if assigned to a Pod), VLAN, and other link-level information.
In future phases, more information will be reported, such as IP information, ip-route information, and other networking-related host state.
Validate Luigi HostNetworkTemplate Object (hostPlumber)
pf9-0102:DPDK carlos$ kubectl get hostnetworktemplate
NAME AGE
sriovconfig-enp3s0f0-kernel 13s
sriovconfig-enp3s0f1-vfio-pci 13s
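To confirm that each node actually applied the templates, re-inspect the corresponding HostNetwork object and look at its currentSriovConfig (the node IP below is from the earlier example):
$ kubectl get HostNetwork 10.128.237.204 -n luigi-system -o yaml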
Luigi Networking Plugins Installation (multus, sriov, whereabouts, and nodeFeatureDiscovery)
Let’s revisit sampleplugins.yaml.
Uncomment the rest of the lines as appropriate for your environment and reapply the manifest; this will install Multus, SR-IOV, Whereabouts, and Node Feature Discovery.
$ kubectl apply -f sampleplugins.yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: NetworkPlugins
metadata:
  name: networkplugins-sample11
spec:
  # Add fields here
  plugins:
    hostPlumber:
      namespace: hostplumber
      #hostPlumberImage: "platform9/luigi-plumber:v0.1.0"
    nodeFeatureDiscovery: {}
    multus: {}
    whereabouts: {}
    # COMMENT/UNCOMMENT - requires all host networking/VFs to be configured first
    sriov: {}
Validate Luigi Networking Plugins Installation (multus, sriov, whereabouts, and nodeFeatureDiscovery)
Let’s review our work by listing the new DaemonSets created in the kube-system namespace.
pf9-0102:DPDK carlos$ kubectl get ds -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
calico-node 4 4 4 4 4 kubernetes.io/os=linux 2d10h
hostconfig-controller-manager 4 4 4 4 4 <none> 42h
kube-multus-ds-amd64 4 4 4 4 4 kubernetes.io/arch=amd64 46h
kube-sriov-cni-ds-amd64 4 4 4 4 4 beta.kubernetes.io/arch=amd64 47h
kube-sriov-device-plugin-amd64 4 4 4 4 4 beta.kubernetes.io/arch=amd64 47h
whereabouts 4 4 4 4 4 beta.kubernetes.io/arch=amd64 47h
pf9-0102:DPDK carlos$ kubectl get ds -n node-feature-discovery
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
nfd-worker 3 3 3 3 3 <none> 66s
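With the nfd-worker pods running, SR-IOV capable nodes should now carry the label used by the nodeSelector examples earlier; you can list them with a label selector:
$ kubectl get nodes -l feature.node.kubernetes.io/network-sriov.capable=true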
Note: The kube-sriov-device-plugin-amd64 pods will remain in the ContainerCreating or Pending state until the sriov-config-map has been created. The sriov-config-map is created in the SR-IOV section of this document.