Why is it Necessary to Annotate Autoscaler Deployment With IAM Policy in AWS Cluster?
Problem
- The autoscaler pod goes into the CrashLoopBackOff state after a cluster is upgraded or the pf9-kube service is restarted.
- This happens because the autoscaler deployment is missing the AWS IAM role annotation; a corresponding error appears in the autoscaler pod logs (see the verification sketch after this list).
- The autoscaler deployment is recreated after a cluster upgrade or a pf9-kube restart. This is expected behavior to ensure autoscaling keeps working, but any custom changes are lost because the deployment is rebuilt from the default template.
- The question here is: if the node already has the correct IAM role attached at creation time, why does the pod need to be annotated with a separate IAM role?
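As a quick check, the sketch below reads the autoscaler deployment and reports whether the kube2iam role annotation is present on its pod template. It uses the Kubernetes Python client; the deployment name `cluster-autoscaler`, the namespace `kube-system`, and kubeconfig-based authentication are assumptions that may differ in your cluster.

```python
# Sketch: verify whether the autoscaler deployment's pod template carries the
# kube2iam role annotation. Deployment name and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
apps = client.AppsV1Api()

# "cluster-autoscaler" / "kube-system" are assumptions; adjust to your cluster.
dep = apps.read_namespaced_deployment("cluster-autoscaler", "kube-system")
annotations = dep.spec.template.metadata.annotations or {}

role = annotations.get("iam.amazonaws.com/role")
if role:
    print(f"kube2iam role annotation present: {role}")
else:
    print("kube2iam role annotation missing - pods cannot assume an IAM role "
          "through kube2iam and AWS API calls from the autoscaler will fail")
```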
Environment
- Platform9 Managed Kubernetes - All Versions
- AWS
- Autoscaler
Answer
- kube2iam is used in the cluster, which means that pods running on the nodes do not have the same privileges as the nodes themselves. Calls to the EC2 metadata API from the pods are intercepted on each node. Instead of letting the pod authenticate with the node's credentials, kube2iam assumes the role assigned to the pod via its annotation and responds with the temporary credentials obtained from assuming that role.
- The IAM role defined as an annotation on the pod (or on its deployment's pod template) is therefore required for the pod to access AWS resources; a sketch for re-applying the annotation follows this list.
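As an illustration of the answer above, the following sketch re-applies the kube2iam role annotation to the autoscaler deployment's pod template using the Kubernetes Python client. The deployment name, namespace, and role name (`k8s-autoscaler-role`) are placeholders rather than values taken from this article; kube2iam's default annotation key is `iam.amazonaws.com/role`.

```python
# Sketch: re-apply the kube2iam role annotation after the autoscaler deployment
# has been recreated from the default template. Names below are assumptions.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

patch = {
    "spec": {
        "template": {
            "metadata": {
                # kube2iam reads this annotation from the pod, so it must be set
                # on the pod template, not only on the deployment object itself.
                "annotations": {"iam.amazonaws.com/role": "k8s-autoscaler-role"}
            }
        }
    }
}

apps.patch_namespaced_deployment(
    name="cluster-autoscaler",   # placeholder deployment name
    namespace="kube-system",     # placeholder namespace
    body=patch,
)
print("Annotation applied; the deployment will roll its pods to pick it up.")
```

Because the deployment is rebuilt from the default template after a cluster upgrade or pf9-kube restart, the annotation may need to be re-applied after those events.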