IPVLAN
The following example shows how to create a Network Attachment Definition of type ipvlan.
Create a Network Attachment Definition of type ipvlan
Copy and paste the following definition, update the values to match your subnet, and apply it to your cluster:
apiVersion"k8s.cni.cncf.io/v1"
kind NetworkAttachmentDefinition
metadata
name whereabouts-ipvlan-conf-1
spec
config'{
"cniVersion""0.3.0",
"name""ipvlan-conf-1",
"type""ipvlan",
"master""eth2",
"mode""l2",
"ipam"
"type""whereabouts"
"range""192.168.70.0/24"
"range_start""192.168.70.20"
"range_end""192.168.70.50"
"gateway""192.168.70.1"
'
The value of the master key refers to the *second NIC* (eth2) on our worker nodes.
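The value of spec.config must be valid JSON, or the CNI plugin will reject the attachment at pod creation time. As a quick sketch (assuming python3 is available on your workstation), the embedded config can be checked locally before applying the manifest:

```shell
# Validate the JSON string that will be embedded in spec.config.
config='{
  "cniVersion": "0.3.0",
  "name": "ipvlan-conf-1",
  "type": "ipvlan",
  "master": "eth2",
  "mode": "l2",
  "ipam": {
    "type": "whereabouts",
    "range": "192.168.70.0/24",
    "range_start": "192.168.70.20",
    "range_end": "192.168.70.50",
    "gateway": "192.168.70.1"
  }
}'
# python3 -m json.tool exits non-zero on malformed JSON
echo "$config" | python3 -m json.tool > /dev/null && echo "spec.config is valid JSON"
```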
Network Attachment Definition Validation, ipvlan type
Let’s validate our work by listing and describing our new Network Attachment Definition.
$ kubectl get net-attach-def
NAME AGE
whereabouts-conf-ipvlan-1 134m
$ kubectl describe net-attach-def whereabouts-conf-ipvlan-1
Name: whereabouts-conf-ipvlan-1
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"k8s.cni.cncf.io/v1","kind":"NetworkAttachmentDefinition","metadata":{"annotations":{},"name":"whereabouts-conf-ipvlan","nam...
API Version: k8s.cni.cncf.io/v1
Kind: NetworkAttachmentDefinition
Metadata:
Creation Timestamp: 2020-09-18T20:25:56Z
Generation: 3
Resource Version: 7036755
Self Link: /apis/k8s.cni.cncf.io/v1/namespaces/default/network-attachment-definitions/whereabouts-conf-ipvlan-1
UID: f40709fc-eb0e-4b8b-a25a-ca21f7e84753
Spec:
Config: { "cniVersion": "0.3.0", "type": "ipvlan", "master": "eth2", "mode": "l2", "ipam": { "type": "whereabouts", "range": "192.168.70.0/24", "range_start": "192.168.70.20", "range_end": "192.168.70.50", "routes": [{"dst": "0.0.0.0/0"}], "gateway": "192.168.70.1" } }
Events: <none>
Create Pods with ipvlan interfaces
# cat pod0-case2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod0-case-02
  annotations:
    k8s.v1.cni.cncf.io/networks: whereabouts-conf-ipvlan-1
spec:
  containers:
  - name: pod0-case-02
    image: docker.io/centos/tools:latest
    command:
      - /sbin/init
# cat pod1-case2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1-case-02
  annotations:
    k8s.v1.cni.cncf.io/networks: whereabouts-conf-ipvlan-1
spec:
  containers:
  - name: pod1-case-02
    image: docker.io/centos/tools:latest
    command:
      - /sbin/init
Deploy the new pods
$ kubectl apply -f pod1-case2.yaml
$ kubectl apply -f pod0-case2.yaml
Pod Definitions with 2 interfaces
For pods to be created with an additional NIC (the same principle applies when adding more than two NICs to a pod), the pod definition must carry a network annotation referencing the Network Attachment Definition(s). Use the pod definitions above to create a testbed.
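As a sketch of the multi-NIC case: Multus accepts a comma-separated list of Network Attachment Definition names in the same annotation, one entry per additional interface. The pod name and the second network name below are hypothetical:

```yaml
metadata:
  name: pod-multi-nic                      # hypothetical pod name
  annotations:
    # Comma-separated list; each name must match an existing
    # NetworkAttachmentDefinition in the pod's namespace.
    k8s.v1.cni.cncf.io/networks: whereabouts-conf-ipvlan-1,whereabouts-conf-ipvlan-2
```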
Validate Pods Creation with ipvlan interfaces
Let’s validate our work by confirming that the pods were created with an additional interface, using the following commands:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod0-case-02 1/1 Running 0 66m 10.135.1.147 192.168.50.14 <none> <none>
pod1-case-02 1/1 Running 0 62m 10.135.1.148 192.168.50.14 <none> <none>
$ kubectl exec -it pod0-case-02 -- ip -d address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 numtxqueues 1 numrxqueues 1
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP group default
link/ether de:ff:c2:57:c6:e7 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
veth numtxqueues 1 numrxqueues 1
inet 10.135.1.147/24 brd 10.135.1.255 scope global eth0
valid_lft forever preferred_lft forever
4: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether fa:16:3e:10:31:25 brd ff:ff:ff:ff:ff:ff promiscuity 0
ipvlan mode l2 numtxqueues 1 numrxqueues 1
inet 192.168.70.20/24 brd 192.168.70.255 scope global net1
valid_lft forever preferred_lft forever
$ kubectl exec -it pod1-case-02 -- ip -d address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 numtxqueues 1 numrxqueues 1
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP group default
link/ether aa:2d:2e:e8:5e:19 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
veth numtxqueues 1 numrxqueues 1
inet 10.135.1.148/24 brd 10.135.1.255 scope global eth0
valid_lft forever preferred_lft forever
4: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether fa:16:3e:10:31:25 brd ff:ff:ff:ff:ff:ff promiscuity 0
ipvlan mode l2 numtxqueues 1 numrxqueues 1
inet 192.168.70.21/24 brd 192.168.70.255 scope global net1
valid_lft forever preferred_lft forever
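Note that pod0 received 192.168.70.20 and pod1 received 192.168.70.21: whereabouts hands out addresses sequentially starting at range_start. A quick back-of-the-envelope check of the example pool's capacity (both bounds are inclusive):

```shell
# Capacity of the example whereabouts pool 192.168.70.20 - 192.168.70.50.
range_start=20   # last octet of range_start 192.168.70.20
range_end=50     # last octet of range_end   192.168.70.50
pool_size=$((range_end - range_start + 1))   # both ends are assignable
echo "the pool can serve up to $pool_size pods"   # prints 31 pods
```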
East-West ipvlan traffic
$ kubectl exec -it pod1-case-02 -- ping -c 5 192.168.70.20
PING 192.168.70.20 (192.168.70.20) 56(84) bytes of data.
64 bytes from 192.168.70.20: icmp_seq=1 ttl=64 time=0.317 ms
64 bytes from 192.168.70.20: icmp_seq=2 ttl=64 time=0.057 ms
64 bytes from 192.168.70.20: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 192.168.70.20: icmp_seq=4 ttl=64 time=0.056 ms
64 bytes from 192.168.70.20: icmp_seq=5 ttl=64 time=0.058 ms
--- 192.168.70.20 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.048/0.107/0.317/0.105 ms
North-South ipvlan traffic
$ kubectl exec -it pod0-case-02 -- ping -c 5 192.168.70.1
PING 192.168.70.1 (192.168.70.1) 56(84) bytes of data.
64 bytes from 192.168.70.1: icmp_seq=1 ttl=64 time=0.893 ms
64 bytes from 192.168.70.1: icmp_seq=2 ttl=64 time=0.589 ms
64 bytes from 192.168.70.1: icmp_seq=3 ttl=64 time=0.573 ms
64 bytes from 192.168.70.1: icmp_seq=4 ttl=64 time=0.590 ms
64 bytes from 192.168.70.1: icmp_seq=5 ttl=64 time=0.537 ms
--- 192.168.70.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.537/0.636/0.893/0.131 ms
$ kubectl exec -it pod1-case-02 -- ping -c 5 192.168.70.1
PING 192.168.70.1 (192.168.70.1) 56(84) bytes of data.
64 bytes from 192.168.70.1: icmp_seq=1 ttl=64 time=0.893 ms
64 bytes from 192.168.70.1: icmp_seq=2 ttl=64 time=0.589 ms
64 bytes from 192.168.70.1: icmp_seq=3 ttl=64 time=0.573 ms
64 bytes from 192.168.70.1: icmp_seq=4 ttl=64 time=0.590 ms
64 bytes from 192.168.70.1: icmp_seq=5 ttl=64 time=0.537 ms
--- 192.168.70.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.537/0.636/0.893/0.131 ms