How to Refresh Your Application’s Service Account Tokens Following the Managed Kubernetes 2.3 Upgrade

If your application makes calls to the Kubernetes API server and uses a Kubernetes Service Account, please read on.

As part of the Managed Kubernetes 2.3 upgrade, we are decoupling the cluster CA from the cluster nodes. This change requires generating new client and server certificates for the entire cluster. In particular, we replace the key used to generate Service Account tokens, and then re-generate the Secrets that contain the Service Account tokens. As of 2.3, the Service Account key remains valid for the lifetime of the cluster.
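
If you want to see the effect of this change, you can look at the token Secret that your Service Account references. The following is a minimal sketch, assuming the default Service Account in the current namespace; the Secret name in the second command is hypothetical, so substitute the name printed by the first command:

[bash]# List the Secret(s) referenced by the Service Account.
kubectl get serviceaccount default -o yaml
# Inspect the re-generated token Secret (replace with the name shown above).
kubectl describe secret default-token-abc12[/bash]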

If your application uses a Service Account, Kubernetes makes the token available to your application’s containers via the file /var/run/secrets/kubernetes.io/serviceaccount/token. However, this file is not updated when the token is re-generated via the API server.
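
For context, an application running in a pod typically reads this file and presents the token to the API server as a bearer token, together with the cluster CA certificate mounted alongside it. Here is a minimal sketch of such a call, run from inside one of your containers; the namespaces listing is just an example request and assumes your Service Account may list namespaces:

[bash]# Read the mounted Service Account token and CA certificate.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT are injected by Kubernetes.
curl --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
  "https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api/v1/namespaces"[/bash]

If the container still holds a token signed with the old key, a call like this will typically fail with 401 Unauthorized, which is the symptom the steps below address.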

To ensure that your application’s token is up to date, you must delete and re-create your application’s pods. Choose the scenario below that matches how you deployed your application:


You deployed your application using a Replication Controller and can tolerate momentary downtime

You can temporarily scale your application to zero pods using the following bash script:

temporary_scale_to_zero.sh
[bash linenum="false"]#!/usr/bin/env bash
# Usage: temporary_scale_to_zero.sh myrc
replicationcontroller=$1
current_replicas=$(kubectl get rc $replicationcontroller -o jsonpath="{.status.replicas}")
kubectl scale rc $replicationcontroller --replicas=0
kubectl scale rc $replicationcontroller --replicas=$current_replicas[/bash]

Example usage:

[bash]temporary_scale_to_zero.sh myrc[/bash]
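After the script completes, the Replication Controller re-creates the pods; you can confirm that the replacement pods are in place, since freshly created pods show a very recent age:

[bash]# The replacement pods should appear with a very recent AGE.
kubectl get pods[/bash]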

You deployed your application using a Replication Controller and cannot tolerate momentary downtime

If you cannot tolerate any downtime, you can simulate an in-place rolling update. The script below first labels the existing Replication Controller and its pod template (so that old and new pods can be told apart), then copies the definition under a new name with the label flipped, and finally performs a rolling update from the existing Replication Controller to the new one.

inplace_rolling_update.sh

[bash linenum="false"]#!/usr/bin/env bash
# Usage: inplace_rolling_update.sh myrc myrc_newname
replicationcontroller=$1
replicationcontroller_newname=$2

kubectl patch rc/$replicationcontroller -p '{"spec":{"selector":{"secrets":"old"},"template":{"metadata":{"labels":{"secrets":"old"}}}}}'

kubectl get rc/$replicationcontroller -o yaml
| sed -e 's/secrets: old/secrets: new/g' -e "0,/name: ${replicationcontroller}/{s/name:${replicationcontroller}/name:${replicationcontroller_newname}/}" -e 's/resourceVersion.*//' | kubectl rolling-update $replicationcontroller --update-period=10s -f -[/bash]

Example usage:

[bash]inplace_rolling_update.sh myrc myrc_newname[/bash]
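
Note that kubectl rolling-update deletes the original Replication Controller once the update completes, so your application is now managed by the Replication Controller with the new name. You can confirm the result as follows:

[bash]# Only the renamed Replication Controller should remain, with fresh pods behind it.
kubectl get rc
kubectl get pods[/bash]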

You deployed your application as one or more stand-alone Pods

If you deployed your application as one or more Pods that are not managed by a Replication Controller, you will need to delete and re-create each of the Pods:

delete_recreate_pod.sh
[bash]#!/usr/bin/env bash
# Usage: delete_recreate_pod.sh mypod
pod=$1
# --force deletes the Pod and re-creates it from the saved definition; the
# resourceVersion is stripped so the object can be re-created cleanly.
kubectl get pod "$pod" -o yaml | sed -e 's/resourceVersion.*//' | kubectl replace --force -f -[/bash]

Example usage:

[bash]delete_recreate_pod.sh mypod[/bash]
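
To double-check that the re-created pod picked up the new token, you can compare the token mounted inside the pod with the token stored in the Service Account’s Secret. Both the pod name and the Secret name below are examples; look the Secret name up via kubectl get serviceaccount:

[bash]# Token as seen from inside the re-created pod (pod name is an example).
kubectl exec mypod -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
# Token currently stored in the Service Account's Secret (name is an example).
kubectl get secret default-token-abc12 -o jsonpath="{.data.token}" | base64 --decode[/bash]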
