How To Use Sunpike to Apply Custom Kubelet ConfigMap

Problem

How do you apply a custom kubelet ConfigMap to master/worker nodes on Kubernetes 1.24 and higher using Sunpike, now that support for dynamic kubelet configuration has been deprecated?

Environment

  • Platform9 Edge Cloud - v5.6 and higher.
  • Kubernetes 1.24 and higher.

Procedure

Dynamic kubelet configuration could previously be updated using the Qbert API, as described in the documentation: https://platform9.com/docs/kubernetes/dynamic-kubelet-configuration#dynamic-kubelet-configuration-using-qbert-apis

The custom kubelet configuration is stored in a ConfigMap in the default namespace on the Sunpike host. Nodelet periodically checks the Sunpike host for such custom configuration; if one is found, Nodelet copies it to the Kubernetes host (master/worker) and restarts the kubelet service to apply the change. Nodelet identifies these ConfigMaps by their labels.

Using a combination of the three labels [Cluster, Role, Node], the kubelet ConfigMap can be applied flexibly:

a. To a specific master OR a specific worker.

b. To all masters OR all workers.

c. To a specific set of workers AND a specific set of masters.

The labels used are:

1) Cluster: Assign the cluster label and set its value to the UUID of the cluster, as shown in the PMK UI, to link the ConfigMap to the related cluster:

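For example (a sketch; the exact label key is not preserved in this article, so pf9.io/kubelet-config-cluster and the ConfigMap name below are illustrative assumptions):

    # Link the ConfigMap to a cluster by its UUID (label key assumed).
    kubectl label configmap custom-worker-kubelet-config -n default \
        pf9.io/kubelet-config-cluster=<cluster-uuid-from-PMK-UI>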

2) Role: Assign a master or worker role label to the ConfigMap, to link it to the respective set of master or worker nodes:

a. Label specific to master:

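A sketch, assuming a role label key of pf9.io/kubelet-config-role (illustrative; the original snippet is not preserved):

    # Tag the ConfigMap for master nodes (label key assumed).
    kubectl label configmap custom-master-kubelet-config -n default \
        pf9.io/kubelet-config-role=master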

b. Label specific to worker:

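Similarly for workers, under the same assumption about the label key:

    # Tag the ConfigMap for worker nodes (label key assumed).
    kubectl label configmap custom-worker-kubelet-config -n default \
        pf9.io/kubelet-config-role=worker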

3) Node: ConfigMaps can be created specific to each node individually, or tagged for all nodes, using the labels below:

For a specific node, identified by IP:

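A sketch, assuming a node label key of pf9.io/kubelet-config-node (illustrative), whose value is the node name as listed by kubectl get nodes:

    # Target a single node by its name/IP (label key assumed).
    kubectl label configmap custom-worker-kubelet-config -n default \
        pf9.io/kubelet-config-node=172.20.7.219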

For all the nodes:

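The apply-all label key pf9.io/kubelet-config-node-apply-all is named in the notes below; the value true is an assumption:

    # Target every node of the selected role (label value assumed).
    kubectl label configmap custom-master-kubelet-config -n default \
        pf9.io/kubelet-config-node-apply-all=true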

All three labels, specifying the cluster UUID, the role [master or worker], and the node scope [whether the config applies to a specific node or to all nodes], are mandatory for the new ConfigMap to be picked up in the cluster.

Steps to add/update a new kubelet ConfigMap via Sunpike:

The example below applies changes to the ConfigMap of a worker node's kubelet.

  1. Log in to the sunpike-kube-apiserver container of the respective DU VM, and check the default ConfigMaps created for the master/worker kubelet configuration, i.e., worker-default-kubelet-config.
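
For example, inside the sunpike-kube-apiserver container (a sketch; the original snippet is not preserved):

    # List the default kubelet ConfigMaps in the default namespace.
    kubectl get configmaps -n default | grep kubelet-config
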
  2. Copy the original ConfigMap to a YAML file and update the new YAML to create a new custom ConfigMap. Note that the configuration content must be added under the key “kubelet” in the data section of the ConfigMap.
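
A minimal sketch (the new file name is an assumption):

    # Dump the default worker ConfigMap to use as a template.
    kubectl get configmap worker-default-kubelet-config -n default \
        -o yaml > custom-worker-kubelet-config.yaml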

2.a Add the required customisations to the new ConfigMap YAML file.

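The edited file might look like the sketch below; only the data key “kubelet” is mandated by this article, and everything else (name, parameters) is illustrative:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      # Use a new name; do not overwrite the default ConfigMap.
      name: custom-worker-kubelet-config
      namespace: default
    data:
      # The key MUST remain "kubelet".
      kubelet: |
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        # Example customisation only; set the parameters you need.
        maxPods: 150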

2.b Apply the changes.

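For example (file name as in the sketch above):

    # Create the new ConfigMap from the edited YAML.
    kubectl apply -f custom-worker-kubelet-config.yaml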

DO NOT CHANGE THE KEY “kubelet”! Update the name of the ConfigMap and the required parameters, then create a new ConfigMap using the updated YAML.

  3. Add the required labels for role, node, and cluster UUID, based on your requirement. Two sample scenarios are shown below:

Example 1. Labelling for worker kubelet: for a specific worker node (172.20.7.219) in a cluster.

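A sketch combining the three labels (keys other than pf9.io/kubelet-config-node-apply-all are assumptions, as above):

    # Scope the ConfigMap to one worker node of one cluster.
    kubectl label configmap custom-worker-kubelet-config -n default \
        pf9.io/kubelet-config-cluster=<cluster-uuid> \
        pf9.io/kubelet-config-role=worker \
        pf9.io/kubelet-config-node=172.20.7.219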

When tagging nodes, please note that the node name should match the name shown in "kubectl get nodes".

Example 2. Labelling for master kubelet: for all the master nodes in a cluster.

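Under the same assumptions about the label keys:

    # Scope the ConfigMap to all master nodes of the cluster.
    kubectl label configmap custom-master-kubelet-config -n default \
        pf9.io/kubelet-config-cluster=<cluster-uuid> \
        pf9.io/kubelet-config-role=master \
        pf9.io/kubelet-config-node-apply-all=true
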
  4. The applied changes in the kubelet ConfigMap can be verified by checking the labels associated with the respective ConfigMaps:
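
For example:

    # Confirm the labels attached to the custom ConfigMaps.
    kubectl get configmaps -n default --show-labels | grep kubelet-config
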
  5. On the Kubernetes nodes, validate that the configuration is picked up. Nodelet has a reconcile interval of around 2 minutes, so it may take that long for the kubelet configuration to be applied. Check whether the user kubelet configuration file is created at /var/opt/pf9/kube/user-config/.

Verify that the same file is copied to the default bootstrap configuration file and that the original file is backed up. The bootstrap-config.yaml should have the same content as the kubelet.yaml above.

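For example, on the master/worker node (the kubelet.yaml file name under user-config/ is an assumption):

    # Custom configuration copied down by Nodelet.
    ls -l /var/opt/pf9/kube/user-config/
    # Bootstrap config and its backup.
    ls -l /var/opt/pf9/kube/kubelet-config/
    # The two files should now match.
    diff /var/opt/pf9/kube/user-config/kubelet.yaml \
         /var/opt/pf9/kube/kubelet-config/bootstrap-config.yaml
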
  6. The same can also be verified in the Nodelet logs (/var/log/pf9/nodelet.log).

In both worker and master node logs, look for Nodelet entries showing that the custom kubelet configuration was detected and that the kubelet service was restarted.
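
The exact log lines are not reproduced here; a generic search such as the following can surface the relevant entries:

    # Search Nodelet logs for kubelet configuration activity.
    grep -i kubelet /var/log/pf9/nodelet.log | tail -n 20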

Additional Information

Users can also refer to the actual kubelet configuration file on the node for cluster-specific values such as the cluster DNS. The default kubelet file is located at /var/opt/pf9/kube/kubelet-config/bootstrap-config.yaml.

NOTE:

1) If multiple ConfigMaps match the same labels, Nodelet will pick the first ConfigMap it finds in the API server query. Ensure that only one matching ConfigMap is present for each role/node.

2) Users can update the ConfigMaps at any time. If a custom ConfigMap changes, Nodelet will reapply the change during reconciliation and restart the kubelet service.

3) The default ConfigMaps are for reference only. Only one master and one worker default ConfigMap is created on the Sunpike host, regardless of the number of clusters deployed using the DU.

4) The label “pf9.io/kubelet-config-node-apply-all” overrides other node-specific labels.

5) The same ConfigMap can be used for both master and worker nodes, as long as the appropriate labels are added.

6) This feature is not backported to releases earlier than 1.24. For previous versions, users can continue with the old method of configuring dynamic kubelet.

Upgrade considerations

There is no upgrade path for moving users from dynamic kubelet configuration on Kubernetes < 1.24 to this mechanism. Users will have to manually copy their custom configurations to the required ConfigMaps as stated above. Please contact support for creating the required configurations on Sunpike.
