Luigi Networking Quickstart

This document makes the following assumptions: you are running a BareOS PMK 4.5.1 Kubernetes cluster (v1.17+), the Calico CNI is enabled, the privileged pods option is enabled, MetalLB is not installed, and every worker node has an additional NIC/port that is up with no IP assigned (if intended for ipvlan/l2 or macvlan), or an additional interface with an IP (for ipvlan/l3).

Luigi

Luigi Installation

The first step is to download the Luigi Operator definition onto your master node, or wherever you can execute kubectl commands against this cluster. A local copy of the manifest is at the end of this document.

As of 4.5, this is also available through the qbert API. See the Bootstrapping Cluster with Network Operator section.

Then install it using the following command:

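The original command was not preserved in this copy; a typical invocation, assuming the operator definition was saved as luigi-operator.yaml (the filename is illustrative), would be:

```shell
# Apply the Luigi operator definition against the cluster
# (the filename luigi-operator.yaml is illustrative).
kubectl apply -f luigi-operator.yaml
```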

Luigi Install Validation

After executing the command above, let’s review our work. Please note that some of the pods will take time to fully come up; allow a couple of minutes.

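The original validation command was not preserved here; a hedged sketch (the luigi-system namespace is our assumption and may differ in your release):

```shell
# List the operator pods and wait for them to reach the Running state
kubectl get pods -n luigi-system
```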

Luigi Networking Plugins

Luigi Networking Plugins Installation (hostPlumber)

The sampleplugins.yaml manifest deploys (or removes) the different CNI plugins in our K8s cluster. The components are deployed as DaemonSets and run on every node, workers and masters included.

Please use the following manifest, sampleplugins.yaml:

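The manifest body was not preserved in this copy; an illustrative sketch follows, with only hostPlumber enabled and the other plugins commented out. The API group and field names follow Luigi’s conventions but should be verified against the CRDs shipped with your release:

```yaml
# Illustrative sketch of sampleplugins.yaml; verify the apiVersion and
# field names against the Luigi release you installed.
apiVersion: plumber.k8s.pf9.io/v1
kind: NetworkPlugins
metadata:
  name: networkplugins-sample
spec:
  plugins:
    hostPlumber: {}
    # multus: {}
    # sriov: {}
    # whereabouts: {}
    # nodeFeatureDiscovery: {}
```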

As you may have observed, all the plugins are commented out except hostPlumber. First we will install the hostPlumber plugin; later, after creating our first HostNetworkTemplate object, we will revisit this manifest and re-apply it with the rest of the CNI plugins uncommented. hostPlumber is not required, but it provides a mechanism to configure SRIOV and host networking, as well as to view each node’s SRIOV and interface state, via a K8s operator. If you are configuring host networking and SRIOV VFs through other means, the hostPlumber plugin is not needed and can be skipped.

The reason we deploy this first is that the SRIOV CNI requires VF and network configuration to be in place before the SRIOV plugin is deployed.

Execute the following command to install the hostPlumber plugin:

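The command itself was not preserved; assuming the manifest above was saved locally as sampleplugins.yaml, it would typically be:

```shell
# Apply the plugins manifest with only hostPlumber uncommented
kubectl apply -f sampleplugins.yaml
```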

Luigi Networking Plugins Install Validation (hostPlumber)

After executing the command above, let’s review our work.

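The original validation command was not preserved; a hedged sketch (the kube-system namespace and the hostplumber pod-name pattern are our assumptions):

```shell
# The hostPlumber DaemonSet should schedule one pod on every node
kubectl get pods -n kube-system -o wide | grep -i hostplumber
```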

Luigi HostNetworkTemplate Object

As mentioned before, a HostNetworkTemplate lets us set the number of VirtualFunctions enabled on our nodes, as well as which driver is loaded on the newly enabled VirtualFunctions. This release adds the nodeSelector feature, so SRIOV configuration can be applied only to SRIOV-capable nodes, based on labels.

Note: the NFD (Node Feature Discovery) plugin must be installed first in order to leverage the feature.node.kubernetes.io/network-sriov.capable: "true" label.

For example:

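The example manifest was not preserved in this copy; a hedged sketch matching the description that follows (API group and field names follow Luigi’s conventions; the second PF name is a placeholder):

```yaml
# First template: VFs bound to vfio-pci, only on labeled SRIOV-capable nodes
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: sriov-vfio
spec:
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
    testlabelA: "123"
  sriovConfig:
    - pfName: enp3s0f1
      numVfs: 4
      vfDriver: vfio-pci
---
# Second template: kernel-driver VFs, applied to all nodes (no nodeSelector)
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: sriov-kernel
spec:
  sriovConfig:
    - pfName: enp3s0f0    # PF name is illustrative
      numVfs: 4
      vfDriver: ixgbevf
```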

In this definition we create two HostNetworkTemplates: one for SRIOV using the vfio-pci driver, and one for SRIOV using kernel drivers. The first configures VFs only on nodes that meet three conditions: they are SRIOV capable, they have a PF called enp3s0f1, and they carry the label testlabelA: "123".

The second attempts to configure 4 VFs with the ixgbevf driver across all nodes, regardless of their labels.

It is also possible to merge the two sriovConfig entries for the two interfaces into a single HostNetworkTemplate CRD, rather than a separate one for each PF. For example:

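The merged manifest was not preserved; a hedged sketch combining both sriovConfig entries under one CRD (PF names mirror the earlier example and the second one is a placeholder):

```yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: sriov-merged
spec:
  sriovConfig:
    - pfName: enp3s0f1
      numVfs: 4
      vfDriver: vfio-pci
    - pfName: enp3s0f0    # illustrative second PF
      numVfs: 4
      vfDriver: ixgbevf
```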

Configuring via vendor and device ID

The following configuration searches for all interfaces matching vendor ID 8086 (Intel) and device ID 1528 (a particular model of NIC). It then creates 32 VFs on each matching device and binds all of them to vfio-pci (the DPDK driver). This is useful if you don’t know the interface naming scheme or PCI addresses across your hosts, but want to target a particular NIC model by vendor and device ID.

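The manifest was not preserved here; a hedged sketch of a vendor/device-ID selection (field names follow Luigi’s conventions and should be checked against your installed CRDs):

```yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: sriov-by-device-id
spec:
  sriovConfig:
    - vendorId: "8086"    # Intel
      deviceId: "1528"    # specific NIC model
      numVfs: 32
      vfDriver: vfio-pci
```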

Configuring via PCI address

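The manifest was not preserved here; a hedged sketch of PCI-address selection matching the description below (field names should be checked against your installed CRDs):

```yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: sriov-by-pci-addr
spec:
  sriovConfig:
    - pciAddr: "0000:03:00.0"
      numVfs: 32
      vfDriver: vfio-pci
    - pciAddr: "0000:03:00.1"
      numVfs: 32
      vfDriver: vfio-pci
```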

The above configures 32 VFs on the PF matching PCI address “0000:03:00.0” and 32 VFs on PCI address “0000:03:00.1”, for a total of 64 VFs, and binds each VF to the vfio-pci driver.

HostNetwork CRD

This is not to be confused with the status section of the HostNetworkTemplate CRD. (In later phases, each node will append its node name and the success or failure of applying the HostNetworkTemplate policy to the Status section.)

The HostNetwork CRD, which is different, is not created by the user. It is intended to be a read-only CRD; the DaemonSet operator on each node discovers and populates various host settings into it:

  • Created: First upon the HostPlumber plugin being deployed

  • Updated: After each application of the HostNetworkTemplate CRD

  • For Phase3 and later: Monitored as a periodic task, updating and discovering host changes

There will be one HostNetwork CRD automatically created for each node. The Name will correspond to the Node’s name, which in PMK is the IP.

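The original listing command was not preserved; it would typically be:

```shell
# One HostNetwork object exists per node, named after the node (the node IP in PMK)
kubectl get hostnetworks
```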

Fetching the first node with -o yaml reveals:

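The captured output was not preserved in this copy; a heavily abbreviated, illustrative sketch of what it might look like (the node name, PCI address, and exact field names are placeholders):

```yaml
# Abbreviated, illustrative HostNetwork for a virtio-backed VM node
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetwork
metadata:
  name: 10.128.145.11    # node name is the node IP in PMK; illustrative
status:
  interfaceStatus:
    - name: eth0
      pciAddr: "0000:00:03.0"
      pfDriver: virtio-pci
      sriovEnabled: false
```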

This node has just one interface, eth0, using the virtio driver; it happens to be an OpenStack VM. Let’s look at a bare-metal SRIOV node:

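Again, the captured output was not preserved; an abbreviated, illustrative sketch of the status section for one PF with its VFs (names, addresses, and exact field names are placeholders):

```yaml
# Abbreviated, illustrative HostNetwork status for a bare-metal SRIOV node
status:
  interfaceStatus:
    - name: enp3s0f1
      pciAddr: "0000:03:00.1"
      vendorId: "8086"
      deviceId: "1528"
      pfDriver: ixgbe
      sriovEnabled: true
      sriovStatus:
        totalVfs: 64
        numVfs: 4
        vfs:
          - id: 0
            pciAddr: "0000:03:10.1"
            vfDriver: vfio-pci
```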

There are 2 interfaces, each with 4 VFs. For each PF, and each VF underneath it, you can see the device and vendor IDs, PCI address, driver in use, allocated MAC address (if assigned to a pod), VLAN, and other link-level information.

In future phases, more information will be reported, such as IP information, ip-route information, and other networking-related host state.

Validate Luigi HostNetworkTemplate Object (hostPlumber)

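The original validation commands were not preserved; two hedged ways to confirm the VFs were created (the PF name enp3s0f1 comes from the earlier example):

```shell
# From the cluster: check the HostNetwork objects for the newly created VFs
kubectl get hostnetworks -o yaml

# Directly on a worker node: the kernel exposes the configured VF count in sysfs
cat /sys/class/net/enp3s0f1/device/sriov_numvfs
```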

Luigi Networking Plugins Installation (multus, sriov, whereabouts, and nodeFeatureDiscovery)

Let’s revisit sampleplugins.yaml, uncomment the rest of the lines, and reapply the manifest; this will install multus, sriov, whereabouts, and nodeFeatureDiscovery.

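Neither the command nor the updated manifest was preserved here; a hedged sketch, assuming the same sampleplugins.yaml from earlier with every plugin uncommented (field names should be verified against your Luigi release):

```yaml
# sampleplugins.yaml with all plugins enabled (illustrative)
apiVersion: plumber.k8s.pf9.io/v1
kind: NetworkPlugins
metadata:
  name: networkplugins-sample
spec:
  plugins:
    hostPlumber: {}
    multus: {}
    sriov: {}
    whereabouts: {}
    nodeFeatureDiscovery: {}
```

Then reapply it:

```shell
kubectl apply -f sampleplugins.yaml
```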

Validate Luigi Networking Plugins Installation (multus, sriov, whereabouts, and nodeFeatureDiscovery)

Let’s review our work by listing the new daemonsets created in the kube-system namespace.

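The original listing command was not preserved; it would typically be:

```shell
# The new DaemonSets for multus, sriov, whereabouts, and node-feature-discovery
kubectl get daemonsets -n kube-system
```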

Note: The kube-sriov-device-plugin-amd64 pods will remain in the ContainerCreating or Pending state until the sriov-config-map has been created; the sriov-config-map is created in the SRIOV section of this document.

  Last updated by Chris Jones