Kubernetes Worker Node Reporting NotReady post Kubelet Service Restart
Problem
The worker node is reporting as NotReady.
Reason: KubeletNotReady, Message: container runtime status check may not have completed yet
The following messages are recorded in the kubelet logs of the affected node:
...
I0423 17:01:50.708429 13980 kubelet_node_status.go:486] Recording NodeNotReady event message for node 10.47.1.135
I0423 17:01:50.708435 13980 setters.go:537] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2021-04-23 17:01:50.708421807 -0700 PDT m=+22.448694554 LastTransitionTime:2021-04-23 17:01:50.708421807 -0700 PDT m=+22.448694554 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
I0423 17:01:50.708472 13980 kubelet_node_status.go:486] Recording NodeSchedulable event message for node 10.47.1.135
E0423 17:01:50.725559 13980 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
I0423 17:02:00.746993 13980 kubelet_node_status.go:486] Recording NodeReady event message for node 10.47.1.135
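The gap between the NodeNotReady and NodeReady events can be measured directly from the log timestamps. The sketch below does this against the two sample log lines shown above (the embedded `log` variable is illustrative; in practice you would grep the real kubelet log on the node):

```shell
# Sketch: measure how long the node stayed NotReady by comparing the
# NodeNotReady and NodeReady timestamps from the kubelet log excerpt above.
log='I0423 17:01:50.708429 13980 kubelet_node_status.go:486] Recording NodeNotReady event message for node 10.47.1.135
I0423 17:02:00.746993 13980 kubelet_node_status.go:486] Recording NodeReady event message for node 10.47.1.135'

# Field 2 of a kubelet log line is the timestamp (HH:MM:SS.micros);
# keep only HH:MM:SS for the comparison.
not_ready=$(printf '%s\n' "$log" | awk '/NodeNotReady/ {print substr($2, 1, 8)}')
ready=$(printf '%s\n' "$log" | awk '/NodeReady/ {print substr($2, 1, 8)}')

# GNU date converts a bare time-of-day to epoch seconds (today's date).
gap=$(( $(date -d "$ready" +%s) - $(date -d "$not_ready" +%s) ))
echo "Node was NotReady for ${gap}s"
```

For this excerpt the transition takes about ten seconds, which matches the expected behavior during a kubelet restart.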
Environment
- Platform9 Managed Kubernetes - v4.4 and Higher
Cause
These messages are reported while the pf9-kubelet service is restarting on the node. As the log messages above show, the node transitioned from NotReady back to Ready within about ten seconds.
Resolution
Once the pf9-kubelet service restart completes, the node will again be reported as Ready. Verify the restart time of the pf9-kubelet service on the affected node.
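A quick check of the node's current state might look like the following (a sketch, assuming kubectl access to the cluster; the `status` variable here holds illustrative sample output, and in practice would come from `kubectl get nodes 10.47.1.135 --no-headers`):

```shell
# Illustrative readiness check. The sample line below stands in for real
# output from: kubectl get nodes 10.47.1.135 --no-headers
status='10.47.1.135   Ready    worker   34d   v1.20.5'

# Field 2 of `kubectl get nodes` output is the node's STATUS column.
state=$(printf '%s\n' "$status" | awk '{print $2}')

if [ "$state" = "Ready" ]; then
  echo "node is Ready"
else
  # If the node stays NotReady, inspect the kubelet service on the node,
  # e.g. with: systemctl status pf9-kubelet
  echo "node is $state"
fi
```

If the node has already returned to Ready, no further action is needed.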
Contact Platform9 Support if the node remains NotReady for an extended period.