Cluster Deployment Fails Due To Multiple NICs
Problem
Cluster creation or node attachment fails when a node has multiple network interfaces.
Environment
- Platform9 Managed Kubernetes - v5.3 and Higher
Cause
By default, PMK picks the interface that matches the default gateway and attempts to connect to the Kubernetes API over it, which fails when that interface is not the intended one for cluster traffic.
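To confirm which interface PMK would pick by default, you can inspect the default route on the node with standard iproute2 commands (these are general Linux tools, not PMK-specific); the interface shown in the default route is the one PMK selects, and listing all IPv4 addresses helps identify the NIC you actually want to use:
# ip route show default
# ip -4 addr show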
Resolution
- Before attaching the node to the cluster, stop the PMK stack on the node using:
# systemctl stop pf9-hostagent pf9-nodeletd
# /opt/pf9/nodelet/nodeletd phases stop
- Create the file with the content below and give ownership to the pf9 group (a complete worked example follows this list):
# cat /var/opt/pf9/kube_interface_v4
V4_INTERFACE <NIC_NAME>
# chown root:pf9group /var/opt/pf9/kube_interface_v4
- Now start the PMK stack:
# systemctl start pf9-hostagent
- Now add the node to the cluster using the UI or API calls. The node is onboarded to the cluster with the IP address present on the specified interface when the PMK stack starts.
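For reference, the full sequence with a hypothetical interface name eth1 (an example only; substitute the actual NIC name on your node) would look like the following:
# systemctl stop pf9-hostagent pf9-nodeletd
# /opt/pf9/nodelet/nodeletd phases stop
# echo "V4_INTERFACE eth1" > /var/opt/pf9/kube_interface_v4
# chown root:pf9group /var/opt/pf9/kube_interface_v4
# systemctl start pf9-hostagent
After the node joins the cluster, you can verify that it registered with the address on the chosen interface from any machine with kubectl access:
# kubectl get nodes -o wide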
Note: The /var/opt/pf9/kube_interface_v4 file is not removed when the pf9-kube package is removed. Hence, when migrating a node from one cluster to another, re-imaging a host to use a different interface, or cleaning a host to remove it from Platform9, this file has to be deleted manually.
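To remove the override manually in those cases, delete the file before re-onboarding or cleaning the host:
# rm /var/opt/pf9/kube_interface_v4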
Additional Information
Our Engineering team plans to integrate this option into the Management Plane UI in a future release. Reach out to Platform9 Support for more information.