AWS Clusters (Qbert)
Platform9 Managed Kubernetes (PMK) supports creating Kubernetes clusters on AWS by deploying Kubernetes directly onto AWS EC2 instances through the PMK Qbert APIs. This document describes the steps to create such a cluster.
PMK provides native integration with AWS to create Kubernetes clusters on EC2 instances. In this model, PMK manages the lifecycle of the nodes on EC2 and related AWS services such as Route 53, ELB, and EBS. PMK deploys a production-ready Kubernetes cluster that can auto-scale based on workload requirements. Compared to EKS, it also provides the ability to customize the Kubernetes control plane, use custom API server flags, and leverage identities external to AWS.
Pre-requisites for AWS Clusters (Qbert)
AWS Service Account
Platform9 recommends that a service account be created in AWS and an associated Secret Key and Access Key be set up.
Access Key and Secret Key
PMK requires that you specify an AWS access key ID and associated secret access key for a single IAM user in your AWS account. All credentials are encrypted in the Platform9 SaaS Management Plane.
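Before supplying the credential to PMK, it can help to verify locally that you copied the right values. As a minimal sketch (this checks only the shape of the strings, not their validity with AWS; long-term IAM user access key IDs begin with `AKIA` and secret access keys are 40-character strings):

```python
import re

# Sketch: local sanity check of credential format before handing it to PMK.
# Shape check only -- a well-formed pair can still be invalid or lack permissions.

def looks_like_iam_user_credentials(access_key_id: str, secret_access_key: str) -> bool:
    # Long-term IAM user access key IDs are 20 chars starting with "AKIA";
    # secret access keys are 40 characters long.
    return (re.fullmatch(r"AKIA[0-9A-Z]{16}", access_key_id) is not None
            and len(secret_access_key) == 40)

print(looks_like_iam_user_credentials("AKIA" + "A" * 16, "x" * 40))  # True
print(looks_like_iam_user_credentials("not-a-key", "short"))         # False
```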
Route53
Starting with release 5.1 of PMK, Route 53 is optional for new AWS native clusters. During cluster creation you can select whether or not to use Route 53. To use Route 53, your account must have at least one domain registered in Route 53 with an associated public hosted zone.
When creating a cluster, the API server and Kubernetes service fully qualified domain names (FQDNs) are associated with your Route 53 domain. For example, if the hosted zone is associated with the domain name "platform9.systems", the API and service FQDNs are created with the syntax "xxx.platform9.systems".
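The naming scheme can be sketched as follows. Note the record labels (`<cluster>-api`, `<cluster>-service`) are illustrative assumptions, not the exact labels PMK generates; only the pattern of placing records under the hosted zone's domain comes from the documentation above:

```python
# Sketch: how API-server and service FQDNs derive from a Route 53 hosted
# zone domain. The label scheme here is an assumption for illustration.

def cluster_fqdns(cluster_name: str, hosted_zone_domain: str) -> dict:
    """Build example API and service FQDNs under the hosted zone's domain."""
    return {
        "api": f"{cluster_name}-api.{hosted_zone_domain}",
        "service": f"{cluster_name}-service.{hosted_zone_domain}",
    }

fqdns = cluster_fqdns("demo", "platform9.systems")
print(fqdns["api"])      # demo-api.platform9.systems
print(fqdns["service"])  # demo-service.platform9.systems
```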
AWS IAM Policy: Pre-configured policy
You can download a pre-configured AWS Policy that is limited to the permissions detailed below from here, and apply it to an existing or new credential.
The following permissions are required on your AWS account.
- ELB Management
- Route 53 DNS Configuration
- Access to two or more Availability Zones within the region
- EC2 Instance Management
- EBS Volume Management
- VPC Management
- EKS API (Read Only)
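The permission set above can be approximated as an IAM policy document. The actions below are a broad-strokes sketch, not the pre-configured policy Platform9 distributes; download that policy for the authoritative, least-privilege version:

```python
import json

# Sketch: IAM policy document approximating the capabilities listed above.
# Action wildcards are an assumption for illustration; Platform9's downloadable
# pre-configured policy is the authoritative definition.
POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PMKClusterManagement",
            "Effect": "Allow",
            "Action": [
                "ec2:*",                   # EC2 instances, EBS volumes, VPCs
                "elasticloadbalancing:*",  # ELB management
                "route53:*",               # Route 53 DNS configuration
                "autoscaling:*",           # ASGs for master and worker nodes
                "iam:PassRole",            # attach instance profiles to nodes
                "eks:ListClusters",        # EKS API (read only)
                "eks:DescribeCluster",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(POLICY, indent=2))
```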
Refer to this AWS article for more info on how to create and manage AWS access key ID and secret access key for your AWS account.
AWS Access Details
The provided credentials will be utilized for creating, deleting and updating the following artifacts:
- VPC (Only if deploying a cluster to a new VPC)
- Subnets in each AZ (Only if deploying a cluster to a new VPC. In an existing VPC, the first subnet of each AZ is used)
- Security Group (For cluster connectivity)
- ELB (For HA Kubernetes API)
- Auto Scaling Groups (For creation of ASGs for master and worker nodes)
- Route 53 Hosted Zone Record sets (For API and Service FQDNs)
- Launch Configuration (For creating and deleting EC2 instances)
- Internet Gateway (For exposing the Kubernetes API with HTTPS)
- Routes (For the Internet Gateway)
- IAM Roles and Instance Profiles (For deployment of highly available etcd and Kubernetes AWS integration)
- EKS API List and Describe
Configure Default Limits on Region
Make sure that the default limits for your region are configured properly.
All AWS resources are configured by default with limits. As your usage of Kubernetes on AWS grows, you might run into some of them.
For example, the AWS default limit for the number of VPCs in a region is 5, as stated in the AWS documentation on VPC limits.
To see the default limit values for all your EC2 resources within a given region:
- Log into your AWS console
- Navigate to Services > EC2
- Once in EC2, on the left-hand side menu panel, click on Limits
This will show you all default limits for your AWS resources.
Any specific limit can be raised by submitting a ‘Service limit increase’ request with AWS.
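The capacity arithmetic is simple: each cluster deployed to a new VPC consumes one VPC from the regional quota. A minimal sketch (the default limit of 5 is from the AWS VPC documentation cited above; your account's actual limit may differ after a service limit increase):

```python
# Sketch: check whether planned clusters fit within the regional VPC quota.
DEFAULT_VPC_LIMIT = 5  # AWS default per region; can be raised on request

def vpcs_available(existing_vpcs: int, planned_new_vpc_clusters: int,
                   vpc_limit: int = DEFAULT_VPC_LIMIT) -> bool:
    """Each cluster deployed to a new VPC consumes one VPC from the quota."""
    return existing_vpcs + planned_new_vpc_clusters <= vpc_limit

print(vpcs_available(existing_vpcs=3, planned_new_vpc_clusters=2))  # True
print(vpcs_available(existing_vpcs=4, planned_new_vpc_clusters=2))  # False
```

Clusters deployed into an existing VPC do not consume additional VPCs, though they still draw on other quotas (instances, EBS volumes, ELBs).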
The default volume sizes for AWS instances have been increased to 150 GB (primary) and 180 GB (secondary). Volume sizes can be customized by manually creating a new launch template version and setting it as the default.
When updating the AMI for a volume resize, we recommend not using an AMI created from an existing instance that is part of a PMK cluster, since it may carry old disk metadata such as a mounted secondary disk, pf9 components, or a snapshot of the previous instance's data, which can cause provisioning failures.
Consider the points below while creating an AMI from an existing instance:
- Make sure to unmount the secondary volume before creating the AMI
- Make sure you remove pf9 components and any other metadata if present
- The AMI snapshot size should not exceed the instance storage size in the launch template
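Customizing volume sizes means supplying new block device mappings in a launch template version. A minimal sketch of building that payload follows; the device names `/dev/sda1` and `/dev/sdb` and the `gp3` volume type are assumptions for illustration, so match them to your AMI's actual device mappings before use:

```python
# Sketch: BlockDeviceMappings payload for a new launch template version with
# custom primary/secondary volume sizes. Device names and volume type are
# assumptions -- check the AMI's actual root device name before using this.

def block_device_mappings(primary_gb: int = 150, secondary_gb: int = 180) -> list:
    return [
        {"DeviceName": "/dev/sda1",
         "Ebs": {"VolumeSize": primary_gb, "VolumeType": "gp3",
                 "DeleteOnTermination": True}},
        {"DeviceName": "/dev/sdb",
         "Ebs": {"VolumeSize": secondary_gb, "VolumeType": "gp3",
                 "DeleteOnTermination": True}},
    ]

# This payload would be passed as LaunchTemplateData["BlockDeviceMappings"]
# when creating the new launch template version, which is then set as default.
mappings = block_device_mappings()
print(mappings[0]["Ebs"]["VolumeSize"])  # 150
print(mappings[1]["Ebs"]["VolumeSize"])  # 180
```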