AWS Native Clusters
Platform9 supports creating clusters natively in AWS using EC2 instances, known as AWS Native Clusters, as well as importing existing EKS clusters.
AWS Native Clusters
PMK provides native integration with Amazon AWS to create Kubernetes clusters using AWS EC2 instances. In this model, PMK manages the lifecycle of the nodes on EC2 and related AWS services such as Route 53, ELB and EBS. PMK deploys production-ready Kubernetes clusters that can auto-scale based on workload requirements and, compared to EKS, provides the ability to customize the Kubernetes control plane, use custom API server flags and leverage identities external to AWS.
AWS EKS Cluster Import
To help centralize and simplify multi-cluster and hybrid Kubernetes deployments, Platform9 can import existing EKS clusters. Imported clusters have a limited set of functionality compared to AWS Native Clusters; Platform9 does not support lifecycle actions or kubeconfig generation for imported clusters.
A full comparison of AWS Native Clusters vs EKS imports can be found in the EKS vs AWS Native Cluster FAQ.
Pre-requisites for EKS Cluster Imports
AWS Service Account
Platform9 recommends that a service account be created in AWS and an associated Secret Key and Access Key be set up for connecting Platform9 and AWS.
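As a hedged illustration, the following Python sketch uses boto3 to create such a dedicated IAM user and generate its key pair. The user name is a placeholder, not a PMK requirement, and the account running the script needs IAM administrative permissions.

```python
import boto3

# Sketch: create a dedicated IAM user for Platform9 and generate its keys.
# The user name "platform9-svc" is illustrative only.
iam = boto3.client("iam")

iam.create_user(UserName="platform9-svc")

# The Access Key ID and Secret Access Key below are what you enter in PMK.
resp = iam.create_access_key(UserName="platform9-svc")
print("Access Key ID:    ", resp["AccessKey"]["AccessKeyId"])
print("Secret Access Key:", resp["AccessKey"]["SecretAccessKey"])
```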
Access Key and Secret Key
PMK requires that you specify an AWS access key ID and associated secret access key for a single IAM user in your AWS account. The keys are used to import EKS clusters and to perform all cluster actions. The account that owns the Access Key and Secret Key must have access to the AWS EKS API for all List and Describe endpoints, as detailed here: https://docs.aws.amazon.com/eks/latest/APIReference/Welcome.html
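One way to sanity-check the keys before importing is a short boto3 script against the List and Describe endpoints; the region and credential values below are placeholders.

```python
import boto3

# Sketch: confirm the keys can call the EKS List and Describe endpoints.
eks = boto3.client(
    "eks",
    region_name="us-west-2",       # placeholder region
    aws_access_key_id="AKIA...",   # placeholder key
    aws_secret_access_key="...",   # placeholder secret
)

for name in eks.list_clusters()["clusters"]:
    cluster = eks.describe_cluster(name=name)["cluster"]
    print(name, cluster["status"], cluster["endpoint"])
```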
EKS Cluster Permissions
For data collection to function correctly, the AWS user used to import the cluster must be added to the cluster's RBAC ConfigMap, granting the User/Service Account, or a Role that the Service Account is enrolled in, access to the cluster. Specifically, the account requires membership in the system:masters group.
To add the service account used to import the cluster to the EKS cluster, follow the steps outlined by AWS in Provide Access for IAM Users and Roles to Existing EKS Clusters.
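For reference, the same change can be scripted with the official Kubernetes Python client. This is a minimal sketch, assuming your current kubeconfig context already points at the EKS cluster; the account ID and user ARN are placeholders.

```python
import yaml
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Read the aws-auth ConfigMap that controls IAM-to-RBAC mapping on EKS.
cm = v1.read_namespaced_config_map("aws-auth", "kube-system")
map_users = yaml.safe_load(cm.data.get("mapUsers", "[]")) or []

# Grant the import user membership in the system:masters group.
map_users.append({
    "userarn": "arn:aws:iam::111122223333:user/platform9-svc",  # placeholder
    "username": "platform9-svc",
    "groups": ["system:masters"],
})

cm.data["mapUsers"] = yaml.safe_dump(map_users)
v1.replace_namespaced_config_map("aws-auth", "kube-system", cm)
```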
EKS Cluster API Server Access
For PMK to function correctly, the EKS cluster's API server endpoint must be configured as Public or Public+Private. Clusters with a Private-only endpoint will import; however, only the Cluster and Cluster Details dashboards will function.
The Service Account or User that owns the Access Key and Secret Key must have system:masters group access on the EKS clusters that are being imported.
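The endpoint access mode can be checked ahead of time; a minimal boto3 sketch, with the cluster name and region as placeholders:

```python
import boto3

# Sketch: check whether an EKS cluster's API server is publicly reachable.
eks = boto3.client("eks", region_name="us-west-2")
vpc_cfg = eks.describe_cluster(name="my-cluster")["cluster"]["resourcesVpcConfig"]

if vpc_cfg["endpointPublicAccess"]:
    print("Public endpoint enabled: full PMK functionality available")
else:
    print("Private-only endpoint: only Cluster and Cluster Details dashboards will work")
```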
Pre-requisites for AWS Native Clusters
AWS Service Account
Platform9 recommends that a service account be created in AWS and an associated Secret Key and Access Key be set up.
Access Key and Secret Key
PMK requires that you specify an AWS access key ID and associated secret access key for a single IAM user in your AWS account. All credentials are encrypted in the Platform9 SaaS Management Plane.
Route53
Starting with release 5.1 of PMK, Route 53 is optional for new AWS Native Clusters. During cluster creation you can select whether or not to use Route 53. To use Route 53, your account must have at least one domain registered in Route 53 with an associated public hosted zone.
When creating a cluster, the API server and Kubernetes Service fully qualified domain names will be associated with your Route 53 domain. For example, if the hosted zone is associated with the domain name “platform9.systems”, then the API and Service FQDNs will be created with the syntax “xxx.platform9.systems”.
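To confirm a usable public hosted zone exists before enabling Route 53, a minimal boto3 sketch:

```python
import boto3

# Sketch: list the account's public hosted zones, one of which PMK would
# use as the parent domain for the API and Service FQDNs.
r53 = boto3.client("route53")

zones = r53.list_hosted_zones()["HostedZones"]
public_zones = [z for z in zones if not z["Config"]["PrivateZone"]]

for z in public_zones:
    # e.g. "platform9.systems." -> FQDNs like "xxx.platform9.systems"
    print(z["Name"], z["Id"])
```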
AWS IAM Policy: Pre-configured policy
You can download a pre-configured AWS Policy that is limited to the permissions detailed below from here, and apply it to an existing or new credential.
The following permissions are required on your AWS account; a sketch for spot-checking them follows the list.
- ELB Management
- Route 53 DNS Configuration
- Access to two or more Availability Zones within the region
- EC2 Instance Management
- EBS Volume Management
- VPC Management
- EKS API (Read Only)
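As a hedged spot check, IAM's policy simulator can verify a credential against a sample of these actions. The user ARN is a placeholder and the action list below is only a subset of the full policy.

```python
import boto3

# Sketch: simulate a few of the required actions for the PMK credential.
iam = boto3.client("iam")

result = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::111122223333:user/platform9-svc",  # placeholder
    ActionNames=[
        "ec2:RunInstances",
        "elasticloadbalancing:CreateLoadBalancer",
        "route53:ChangeResourceRecordSets",
        "eks:ListClusters",
    ],
)

for r in result["EvaluationResults"]:
    print(r["EvalActionName"], "->", r["EvalDecision"])
```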
Refer to this AWS article for more info on how to create and manage AWS access key ID and secret access key for your AWS account.
AWS Access Overview
The provided credentials will be utilized for creating, deleting and updating the following artifacts:
- VPC (Only if deploying a cluster to a new VPC)
- Subnets in each AZ (Only if deploying a cluster to a new VPC. In an existing VPC, the first subnet of each AZ is used)
- Security Group (For cluster connectivity)
- ELB (For HA Kubernetes API)
- Auto Scaling Groups (For master and worker nodes)
- Route 53 Hosted Zone Record sets (For API and Service FQDNs)
- Launch Configuration (For creating EC2 instances)
- Internet Gateway (For exposing the Kubernetes API with HTTPS)
- Routes (For the Internet Gateway)
- IAM Roles and Instance Profiles (For deployment of highly available etcd and Kubernetes AWS integration)
- EKS API List and Describe
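To review some of the artifacts listed above, you can enumerate, for example, the Auto Scaling Groups and classic ELBs in the region. This sketch simply lists them all, since it makes no assumption about PMK's exact resource naming.

```python
import boto3

# Sketch: enumerate ASGs and classic ELBs so the PMK-created ones
# can be identified by inspection.
region = "us-west-2"  # placeholder region
asg = boto3.client("autoscaling", region_name=region)
elb = boto3.client("elb", region_name=region)

for group in asg.describe_auto_scaling_groups()["AutoScalingGroups"]:
    print("ASG:", group["AutoScalingGroupName"], group["DesiredCapacity"])

for lb in elb.describe_load_balancers()["LoadBalancerDescriptions"]:
    print("ELB:", lb["LoadBalancerName"], lb["DNSName"])
```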
Make sure that the default limits for your region are configured properly
All AWS resources are configured by default with limits. As your usage of Kubernetes on AWS grows, you might run into some of them.
For example, the AWS default limit for the number of VPCs in a region is 5, as stated in the AWS documentation on VPC limits.
To see the default limit values for all your EC2 resources within a given region:
- Log into your AWS console
- Navigate to Services > EC2
- Once in EC2, on the left-hand menu panel, click on Limits
This will show you all default limits for your AWS resources.
Any specific limit can be raised by submitting a ‘Service limit increase’ request with AWS.
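Limits can also be read programmatically through the Service Quotas API. A minimal sketch comparing current VPC usage against the regional quota; the quota code "L-F678F1CE" (VPCs per Region) is taken from AWS's published quota codes, so verify it in your account before relying on it.

```python
import boto3

# Sketch: compare VPCs in use against the region's VPC quota.
region = "us-west-2"  # placeholder region
ec2 = boto3.client("ec2", region_name=region)
quotas = boto3.client("service-quotas", region_name=region)

in_use = len(ec2.describe_vpcs()["Vpcs"])
quota = quotas.get_service_quota(ServiceCode="vpc", QuotaCode="L-F678F1CE")
limit = quota["Quota"]["Value"]

print(f"VPCs in use: {in_use} / limit: {limit:.0f}")
```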