Kubernetes Master Node in NotReady State With Message "cni plugin not initialized"
Problem
- A Kubernetes master node is showing as NotReady and the describe output for the node reports "cni plugin not initialized".
```
$ kubectl get nodes
NAME      STATUS     ROLES    AGE   VERSION
master1   NotReady   master   34d   v1.21.3
master2   Ready      master   34d   v1.21.3
master3   Ready      master   34d   v1.21.3
```
- A description of the node similarly reports KubeletNotReady due to the CNI plugin being uninitialized.
```
$ kubectl describe node master1
...
Conditions:
  Ready   False   Mon, 13 Jun 2022 21:42:05 +0000   Mon, 13 Jun 2022 21:32:01 +0000   KubeletNotReady   container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
```
Environment
- Platform9 Managed Kubernetes - v5.4 and Higher
- Kubernetes - All v1.21
- Runtime - Containerd
- Container Network Interface - Calico
Cause
Due to a bug in the Platform9 Managed Kubernetes stack, the CNI configuration is not reloaded when a partial restart of the stack takes place.
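One way to confirm which directory containerd expects the CNI configuration in, and to see the runtime's own view of the network plugin, is to inspect the CRI plugin settings and status on the affected node. This is a minimal sketch, assuming a default containerd CRI setup (CNI config under /etc/cni/net.d and binaries under /opt/cni/bin):

```
# Print the CNI section of containerd's effective configuration;
# it lists conf_dir and bin_dir for the CRI plugin.
containerd config dump | grep -A 4 'cni]'

# Ask the runtime for its network status; on an affected node the
# NetworkReady condition is false with reason NetworkPluginNotReady.
crictl info | grep -B 1 -A 3 'NetworkReady'
```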
Resolution
This issue is fixed in pf9-kube-1.23.8-pmk.218 and later releases.
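To check whether a node is running an affected pf9-kube release, the installed version can be queried on the host. This is a sketch, assuming pf9-kube is installed through the host's package manager:

```
# RPM-based hosts
rpm -qa | grep pf9-kube

# Debian/Ubuntu hosts
dpkg -l | grep pf9-kube
```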
Workaround
- Verify that the CNI configuration directory referenced by containerd is not empty on the affected node.
For Calico-based clusters, the directory should contain the following files:
```
# ls -l /etc/cni/net.d/
total 8
-rw-r--r-- 1 root root  730 Jun 16 14:16 10-calico.conflist
-rw------- 1 root root 3094 Jun 16 14:16 calico-kubeconfig
```
- If the files mentioned above are missing, restart the calico-node pod on the affected node (see the scripted sketch after this list).
```
# kubectl get pods -o wide -n kube-system
NAME                READY   STATUS    RESTARTS   AGE    IP               NODE      NOMINATED NODE   READINESS GATES
calico-node-8brm4   1/1     Running   0          3d3h   10.128.142.117   master1   <none>           <none>
calico-node-xwgn4   1/1     Running   0          3d3h   10.128.141.109   master2   <none>           <none>
calico-node-bgmi3   1/1     Running   0          3d3h   10.128.141.109   master3   <none>           <none>

# kubectl delete pod calico-node-8brm4 -n kube-system
```
The Kubernetes nodes should now all show as Ready.
- Verify the status of the node after restarting the calico-node pod.
```
# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   34d   v1.21.3
master2   Ready    master   34d   v1.21.3
master3   Ready    master   34d   v1.21.3
```
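The restart and verification steps above can also be scripted. The sketch below assumes the calico-node DaemonSet carries the standard k8s-app=calico-node label from the Calico manifests, and uses master1 as a placeholder for the affected node:

```
#!/usr/bin/env bash
set -euo pipefail

NODE=master1   # placeholder: replace with the affected node name

# Find the calico-node pod scheduled on the affected node
# (assumes the standard k8s-app=calico-node label).
POD=$(kubectl -n kube-system get pods -l k8s-app=calico-node \
        --field-selector spec.nodeName="${NODE}" -o name)

# Delete the pod so the DaemonSet recreates it and rewrites /etc/cni/net.d/.
kubectl -n kube-system delete "${POD}"

# Wait for the node to report Ready again.
kubectl wait --for=condition=Ready "node/${NODE}" --timeout=180s
```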