How To Use Sunpike to Apply Custom Kubelet ConfigMap
Problem
How can a custom kubelet configmap be applied to master/worker nodes on Kubernetes 1.24 and higher using Sunpike, now that support for dynamic kubelet configuration has been deprecated?
Environment
- Platform9 Edge Cloud - v5.6 and higher.
- Kubernetes 1.24 and higher.
Procedure
On Kubernetes versions prior to 1.24, the dynamic kubelet configuration can be updated using the Qbert API, as described in the documentation: https://platform9.com/docs/kubernetes/dynamic-kubelet-configuration#dynamic-kubelet-configuration-using-qbert-apis
On Kubernetes 1.24 and higher, the custom kubelet configuration is stored in a configmap in the default namespace on the Sunpike host. Nodelet periodically checks Sunpike for such custom configurations; if one is found, Nodelet copies it to the Kubernetes host (master/worker) and restarts the kubelet service to apply the change. Nodelet identifies these configmaps by their labels.
Using a combination of three labels [Cluster, Role, Node], the kubelet configmap can be applied flexibly:
a. To a specific master OR to a specific worker.
b. To all masters OR to all workers.
c. To a specific set of workers AND a specific set of masters.
The labels used are:
1) Cluster: Assign the cluster label, with its value set to the UUID of the cluster as shown in the PMK UI, to link the configmap to the related cluster:
pf9.io/kubelet-config-cluster=da5787cc-761e-421b-9042-683989d7b7f5
2) Role: Assign the master or worker role label to the configmap, to link it to the respective set of master or worker nodes:
a. Label specific to master:
pf9.io/kubelet-config-master-default=true
b. Label specific to worker:
pf9.io/kubelet-config-worker-default=true
3) Node: Configmaps can be created specific to each node individually, or tagged for all nodes, using the labels below:
For a specific node, identified by IP:
pf9.io/kubelet-config-node-172.20.7.208=true
For all the nodes:
pf9.io/kubelet-config-node-apply-all=true
All three labels, specifying the cluster UUID, the role [master or worker], and the node scope [whether the config is specific to one node or applies to all], are mandatory for the new configmap to be picked up in the cluster.
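For example, a configmap targeting only the worker node 172.20.7.208 in the cluster above would carry all three labels together (values reused from the examples above):
pf9.io/kubelet-config-cluster=da5787cc-761e-421b-9042-683989d7b7f5
pf9.io/kubelet-config-worker-default=true
pf9.io/kubelet-config-node-172.20.7.208=true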
Steps to add/update the new kubelet configmap via Sunpike:
The example below applies changes to the kubelet configmap of a worker node.
- Log in to the sunpike-kube-apiserver container of the respective DUVM and check the default configmaps created for the master/worker kubelet configuration, i.e., worker-default-kubelet-config and master-default-kubelet-config.
# kubectl get pods -n airctl-1 | grep sunpike-kube-apiserver
# kubectl exec -it sunpike-kube-apiserver-5b9d589b8b-dhmlg -n airctl-1 -- bash
$ kubectl get cm
NAME                            DATA   AGE
kube-root-ca.crt                1      44h
worker-default-kubelet-config   1      21h
master-default-kubelet-config   1      21h
- Copy the original configmap to a yaml file and edit the copy to create a new custom configmap. Note that the content of the configuration needs to be added under the key “kubelet” in the data section of the configmap.
$ kubectl get cm worker-default-kubelet-config -oyaml > new-worker-kubelet-config.yaml
$ kubectl get cm master-default-kubelet-config -oyaml > new-master-kubelet-config.yaml
2.a Add the required customisations to the new configmap yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: new-master-kubelet-config
  namespace: default
data:
  kubelet: |
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    rotateCertificates: true                  # Example customization 1
    featureGates:
      RotateKubeletServerCertificate: true    # Example customization 2
    cgroupDriver: systemd                     # Example customization 3
    serverTLSBootstrap: true                  # Example customization 4
    # ... other configurations
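Optionally, the edited yaml can be sanity-checked before applying it. A client-side dry run (available in recent kubectl versions) verifies that the manifest parses as a valid ConfigMap without creating anything on the Sunpike host; note that it does not validate the embedded KubeletConfiguration itself:
$ kubectl apply --dry-run=client -f new-master-kubelet-config.yaml
configmap/new-master-kubelet-config created (dry run)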
2.b Apply the changes.
$ kubectl apply -f new-worker-kubelet-config.yaml
configmap/new-worker-kubelet-config created
$ kubectl apply -f new-master-kubelet-config.yaml
configmap/new-master-kubelet-config created
DO NOT CHANGE THE KEY “kubelet”! Update the name of the configmap and the required parameters, then create the new configmap from the updated yaml.
- Add the required labels for role, node, and cluster UUID, based on your requirement. Two sample scenarios are shown below:
Example 1. Labelling the worker kubelet configmap: for a specific worker node (172.20.7.219) in a cluster.
$ kubectl label cm new-worker-kubelet-config pf9.io/kubelet-config-worker-default=true
configmap/new-worker-kubelet-config labeled
# In the example below, "172.20.7.219" is the name of the worker node on the cluster
$ kubectl label cm new-worker-kubelet-config pf9.io/kubelet-config-node-172.20.7.219=true
configmap/new-worker-kubelet-config labeled
$ kubectl label cm new-worker-kubelet-config pf9.io/kubelet-config-cluster=a939f88e-97bb-4730-bddf-371d324ed721
configmap/new-worker-kubelet-config labeled
When tagging nodes, note that the node name in the label must match the name shown in "kubectl get nodes".
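To list the node names, the following can be run against the cluster itself (not on the Sunpike host); the NAME column shows the value to use in the node label:
$ kubectl get nodes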
Example 2. Labelling the master kubelet configmap: for all the master nodes in a cluster.
$ kubectl label cm new-master-kubelet-config pf9.io/kubelet-config-master-default=true
configmap/new-master-kubelet-config labeled
# For master, using the kubelet-config-node-apply-all label so that the same
# custom configuration is valid for all master nodes
$ kubectl label cm new-master-kubelet-config pf9.io/kubelet-config-node-apply-all=true
configmap/new-master-kubelet-config labeled
$ kubectl label cm new-master-kubelet-config pf9.io/kubelet-config-cluster=a939f88e-97bb-4730-bddf-371d324ed721
configmap/new-master-kubelet-config labeled
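As a side note, kubectl accepts multiple labels in a single invocation, so the three labels from Example 2 can equivalently be applied in one command:
$ kubectl label cm new-master-kubelet-config pf9.io/kubelet-config-master-default=true pf9.io/kubelet-config-node-apply-all=true pf9.io/kubelet-config-cluster=a939f88e-97bb-4730-bddf-371d324ed721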
- The applied kubelet configmap changes can be verified by checking the labels associated with the respective configmaps:
$ kubectl get configmaps --show-labels
NAME                            DATA   AGE    LABELS
kube-root-ca.crt                1      47h    <none>
worker-default-kubelet-config   1      23h    <none>
master-default-kubelet-config   1      23h    <none>
new-worker-kubelet-config       1      109m   pf9.io/kubelet-config-cluster=a939f88e-97bb-4730-bddf-371d324ed721,pf9.io/kubelet-config-node-172.20.7.219=true,pf9.io/kubelet-config-worker-default=true
new-master-kubelet-config       1      70m    pf9.io/kubelet-config-cluster=a939f88e-97bb-4730-bddf-371d324ed721,pf9.io/kubelet-config-master-default=true,pf9.io/kubelet-config-node-apply-all=true
- On the Kubernetes nodes, validate that the configuration gets picked up. Nodelet reconciles roughly every 2 minutes, so it may take that long for the kubelet configuration to be applied. Check that the user kubelet configuration file has been created at /var/opt/pf9/kube/user-config/, and verify that the same file has been copied to the default bootstrap configuration file with the original backed up. The resulting bootstrap-config.yaml is identical to the kubelet.yaml above:
$ ls -l /var/opt/pf9/kube/user-config/
total 4
-rw-r--r-- 1 pf9 pf9group 829 Nov 24 06:49 kubelet.yaml
$ sudo ls -l /var/opt/pf9/kube/kubelet-config/
total 12
-rw-r--r-- 1 pf9 pf9group 829 Nov 24 06:52 bootstrap-config.yaml
-rw-r--r-- 1 pf9 pf9group 788 Nov 24 06:49 bootstrap-config.yaml.orig
drwxr-xr-x 3 pf9 pf9group 4096 Nov 22 09:01 dynamic-config
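Since bootstrap-config.yaml should now be identical to the user-supplied kubelet.yaml (note the matching file sizes above), a quick diff on the node can confirm the copy; empty output means the files match:
$ sudo diff /var/opt/pf9/kube/user-config/kubelet.yaml /var/opt/pf9/kube/kubelet-config/bootstrap-config.yaml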
- The same can also be verified in the nodelet logs (/var/log/pf9/nodelet.log):
Nodelet log in Worker node:
{"L":"INFO","T":"2022-11-24T06:52:42.455Z","C":"sunpikeutils/sunpikeutils.go:307","M":"Found matching configmap for label pf9.io/kubelet-config-node-172.20.7.219=true: worker-custom-kubelet-config"}
{"L":"INFO","T":"2022-11-24T06:52:42.455Z","C":"sunpikeutils/sunpikeutils.go:505","M":"Using configmap worker-custom-kubelet-config from sunpike for kubelet configuration"}
Nodelet log in Master node:
{"L":"INFO","T":"2022-11-24T06:55:54.970Z","C":"sunpikeutils/sunpikeutils.go:303","M":"Found matching configmap for label pf9.io/kubelet-config-node-apply-all=true: master-custom-kubelet-config"}
{"L":"INFO","T":"2022-11-24T06:55:54.970Z","C":"sunpikeutils/sunpikeutils.go:505","M":"Using configmap master-custom-kubelet-config from sunpike for kubelet configuration"}
Additional Information
Users can also refer to the actual kubelet configuration file on the node for cluster-specific values such as cluster DNS. The default kubelet file is located at /var/opt/pf9/kube/kubelet-config/bootstrap-config.yaml.
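For example, to look up the cluster DNS address in the default file (clusterDNS is the corresponding KubeletConfiguration field), something like the following can be used:
$ sudo grep -A1 clusterDNS /var/opt/pf9/kube/kubelet-config/bootstrap-config.yaml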
NOTE:
1) If multiple configmaps match the same labels, Nodelet will pick the first configmap it finds in the API server query. Ensure that only one matching configmap is present for each role/node (see the example after this list).
2) Configmaps can be updated at any time. If a custom configmap changes, Nodelet will reapply the change during reconciliation and restart the kubelet service.
3) The default configmaps are for reference only. Only one master and one worker default configmap are created on the Sunpike host, regardless of the number of clusters deployed using the DU.
4) The label “pf9.io/kubelet-config-node-apply-all” overrides other node-specific labels.
5) The same configmap can be used for both master and worker nodes, as long as the appropriate labels are added.
6) This feature is not backported to releases below 1.24. On previous versions, users can continue to use dynamic kubelet configuration as before.
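As referenced in note 1 above, if a second configmap accidentally matches the same labels, the duplicate can be detached by removing one of its pf9.io labels instead of deleting it; in kubectl, a trailing dash after a label key removes that label. The configmap name duplicate-worker-kubelet-config below is hypothetical:
$ kubectl label cm duplicate-worker-kubelet-config pf9.io/kubelet-config-cluster-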
Upgrade considerations
There is no upgrade path for moving from dynamic kubelet configuration on Kubernetes < 1.24 to this feature. Users will have to manually copy their custom configurations to the required configmaps as described above. Please contact support for help creating the required configurations on Sunpike.