Create EKS Clusters w/ Arlon
This document describes the steps to deploy an EKS cluster with two worker nodes using Arlon and ArlonCTL. For background on Arlon, the PMK integration with Arlon, and ArlonCTL, first read GitOps with Arlon and the ArlonCTL Reference.
Prerequisites
The Arlon integration with PMK currently works only with Amazon EKS clusters and PMK native clusters on AWS created using Cluster API.
- Read the PMK Cluster API documentation about Setting up your AWS Account for PMK
- Setup your AWS Cloud Provider using PMK UI
- A working installation of the kubectl CLI. You can download it here: http://pwittrock.github.io/docs/tasks/tools/install-kubectl/
To create EKS clusters using Arlon, the following CLI tools are optional but highly recommended:
- Download and install ClusterCTL
- ArgoCD CLI: https://argo-cd.readthedocs.io/en/stable/cli_installation/
- Git: https://git-scm.com/book/en/v2/Getting-Started-Installing-Git
Download ArlonCTL
Download arlonctl on your machine using this command:
bash <(curl -sL https://arlon-assets.s3-us-west-1.amazonaws.com/arlonCTL/5.7/arlonctl_setup)
This downloads arlonctl to /usr/local/bin/arlonctl.
Currently, the supported operating systems are:
- Linux (amd64)
- MacOS (amd64, arm64)
Once downloaded, make sure that the required prerequisites are met by running: arlonctl verify
Create ArlonCTL Context
An ArlonCTL context captures the properties of your PMK management plane that you would like the ArlonCTL CLI to work with. This command allows you to create and switch between two or more Platform9 management plane instances while using the same ArlonCTL CLI.
Create a context for your ArlonCTL CLI to work with your current PMK management plane instance. This can be done by running:
arlonctl context create <context-name>
To create a new context, you need to supply your PMK management plane FQDN, your PMK username and password.
After the context creation is successful, it can be verified using arlonctl context list, which shows the context currently in use.
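For example, to manage two management planes from the same machine, create a context for each and then list them. The context names below are illustrative:

```shell
# Create a context per PMK management plane (names are illustrative);
# each prompts for the management plane FQDN, username, and password
arlonctl context create prod-plane
arlonctl context create staging-plane

# List all contexts; the output indicates which context is current
arlonctl context list
```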
Context, Kubeconfig & Arlon Logs Storage
All the contexts added by the user are stored in a contexts.json file located at ~/.config/arlon. Sensitive credentials such as passwords are encrypted before being stored in the JSON file. The kubeconfig file for your current PMK management plane is stored in the same location under the name context.config. The logs for all arlonctl commands are stored in ~/.config/arlon/logs.txt.
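When troubleshooting, these files can be inspected directly. A quick sketch using the paths above:

```shell
ls ~/.config/arlon                   # contexts.json, context.config, logs.txt
tail -n 50 ~/.config/arlon/logs.txt  # most recent arlonctl log output
```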
Configure Workspace Git Repository
If the verification succeeds and all the required tools are installed, you are ready for the next step: add a workspace git repository to ArgoCD. This git repo will be the source of truth for all new clusters created using Arlon, and it is where ArlonCTL will add all the files and manifests related to the profiles, bundles, and clusters created using ArlonCTL. To add a repo to ArgoCD, use this command:
argocd repo add $WORKSPACE_REPO_URL --username $User --password $GITHUB_PAT
- $WORKSPACE_REPO_URL is the GitHub URL of the workspace repository.
- $User is your GitHub username.
- $GITHUB_PAT is your GitHub Personal Access Token.
Create Cloud Provider
Once the repo has been added to ArgoCD, proceed to create a new AWS Cloud Provider in PMK UI. After the Cloud Provider is successfully created, list the cloud providers available to ArlonCTL using the following command:
arlonctl cloudprovider list
NAME CLOUDPROVIDER
aws f59538340-aws
Create Cluster Template
A Cluster Template stores the configuration information for your cluster, e.g. the Cloud Provider, the Kubernetes version, the number of nodes in the cluster, etc. A Cluster Template is stored as a manifest file in your git directory. The manifest typically contains multiple related resources that together define an arbitrarily complex cluster. If you make subsequent changes to the Cluster Template, workload clusters originally created from it will automatically acquire the changes.
Note: Cluster Templates only support dynamic profiles today.
Here is an example of a manifest file that we can use to create a Cluster Template. This manifest file defines an EKS cluster with a 'Machine Deployment' component.
Note: The AWS infrastructure provider version used for the manifest file here is v1.5.1, as PMK 5.7 supports this version of Cluster API.
Generate or Copy Manifest
If you'd like to create this manifest file from scratch, run the following command using ClusterCTL. Alternatively, copy the manifest file provided below into your git repository.
clusterctl generate cluster capi-quickstart --flavor eks \
  --kubernetes-version v1.23.10 \
  --control-plane-machine-count=3 \
  --worker-machine-count=2 \
  --infrastructure aws:v1.5.1 \
  > capi-quickstart.yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: AWSManagedControlPlane
    name: capi-quickstart-control-plane
  infrastructureRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: AWSManagedControlPlane
    name: capi-quickstart-control-plane
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: AWSManagedControlPlane
metadata:
  name: capi-quickstart-control-plane
  namespace: default
spec:
  region: ${AWS_REGION}
  sshKeyName: ${AWS_SSH_KEY_NAME}
  version: v1.23.10
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  clusterName: capi-quickstart
  replicas: 2
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: EKSConfigTemplate
          name: capi-quickstart-md-0
      clusterName: capi-quickstart
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate
        name: capi-quickstart-md-0
      version: v1.23.10
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  template:
    spec:
      iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
      instanceType: ${AWS_CONTROL_PLANE_MACHINE_TYPE}
      sshKeyName: ${AWS_SSH_KEY_NAME}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: EKSConfigTemplate
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  template: {}
where:
- $AWS_REGION is the AWS region in which you'd like to create EKS clusters.
- $AWS_SSH_KEY_NAME is the name of your AWS SSH key.
- $AWS_CONTROL_PLANE_MACHINE_TYPE is the machine type for the Kubernetes worker node AWS instances that you'd like to use.
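If you generate the manifest with clusterctl rather than copying it, these variables must be set in your shell first. A sketch with illustrative values:

```shell
export AWS_REGION=us-west-2                     # region to create the EKS cluster in
export AWS_SSH_KEY_NAME=my-ec2-keypair          # name of an existing EC2 key pair
export AWS_CONTROL_PLANE_MACHINE_TYPE=t3.large  # instance type for the nodes
```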
Push the manifest file above to your workspace Git repository.
In a similar way, manifests can be created with AWSMachinePool and AWSManagedMachinePool components using ClusterCTL.
Prepare Git Directory
Before creating a new EKS cluster using your Cluster Template manifest file, the file and its containing directory must first be "prepared" or "prepped" by pf9-arlon. The "prep" phase makes minor changes to the directory and manifest to help Arlon deploy multiple copies of the cluster without naming conflicts.
Use the following command to achieve this:
arlonctl clustertemplate preparegit --repo-url $WORKSPACE_REPO_URL --repo-path <pathToDirectory> [--repo-revision revision] --ducloudprovider $CLOUD_PROVIDER
where:
- $CLOUD_PROVIDER is the cloud provider name from the output of arlonctl cloudprovider list.
- $WORKSPACE_REPO_URL is the workspace repo URL that was added to ArgoCD.
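Putting it together, a typical invocation might look like the following. The repository URL and directory path are illustrative; the cloud provider name comes from the arlonctl cloudprovider list output shown earlier:

```shell
arlonctl clustertemplate preparegit \
  --repo-url https://github.com/example-org/workspace-repo \
  --repo-path clustertemplates/capi-quickstart \
  --ducloudprovider f59538340-aws
```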
Validate Git Directory
After the cluster template manifest directory has been successfully prepared, it must be validated before the Cluster Template is used.
To determine whether a git directory is eligible to serve as a Cluster Template, run the following command:
arlonctl clustertemplate validategit --repo-url $WORKSPACE_REPO_URL --repo-path <pathToDirectory> [--repo-revision revision]
Create Cluster
To create a new EKS cluster from the Cluster Template:
Note: Attaching profiles to the cluster is optional.
arlonctl cluster create --cluster-name <clusterName> --repo-url $WORKSPACE_REPO_URL --repo-path <pathToDirectory> [--output-yaml] [--profile <profileName>] [--repo-revision <repoRevision>]
Once the cluster is deployed, the status of the apps and the cluster deployed by Arlon can be queried by running argocd app list from the ArgoCD CLI. The cluster is fully deployed once those apps are all Synced and Healthy.
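Instead of repeatedly polling argocd app list, the ArgoCD CLI can block until an application reaches the Synced and Healthy state. The application name below is a placeholder:

```shell
argocd app wait <clusterName> --sync --health
```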
Cluster Update
To update the profile of your newly created EKS cluster:
# To add a new profile to the existing cluster
arlonctl cluster ngupdate <clustername> --profile <profilename>
# To delete an existing profile from the existing cluster
arlonctl cluster ngupdate <clustername> --delete-profile
A new Arlon cluster can be created without any profile attached, so the first command above can be used to add a new profile to an existing cluster. An existing profile can likewise be removed from the cluster using the second command; executing it deletes the profile app and all the bundles associated with the profile in ArgoCD.
Delete Cluster
To destroy a cluster:
arlonctl cluster delete <clusterName>
pf9-arlon creates between 2 and 3 ArgoCD application resources to compose a cluster (the 3rd application, called the "profile app", is used when an optional profile is specified at cluster creation time). When you destroy an Arlon cluster, all related ArgoCD applications are cleaned up.
Create Cluster w/ Profile
If you would like to create a new EKS cluster, and configure a default application or a set of applications after the cluster is created, then use the Bundle and the Profile components to achieve this.
Bundle
A bundle is a grouping of data files that produce a set of Kubernetes manifests via a tool. Each bundle is defined using a Kubernetes secret in the Arlon namespace.
Static Bundle: A static bundle embeds the manifest's YAML data itself and is not affected by subsequent changes to that manifest.
Download xenial.yaml from https://github.com/arlonproj/arlon/blob/main/examples/bundles/xenial.yaml. We will create a static bundle using this app:
arlonctl bundle create xenial --tags application --desc "sample app" --from-file xenial.yaml
Dynamic Bundle: A dynamic bundle contains a reference to the manifest data stored in git. When the user updates a dynamic bundle in git, all clusters consuming that bundle (through a profile specified at cluster creation time) will acquire the change.
We will create a dynamic version of the same application, this time using a reference to a git directory containing the YAML. Clone the git repository that was earlier added to ArgoCD, then follow these steps:
cd ${WORKSPACE_REPO}
mkdir -p bundles/xenial
cp xenial.yaml bundles/xenial
git add bundles/xenial
git commit -m "add xenial bundle"
git push origin main
Once this bundle is added to your workspace repository, proceed to create a dynamic bundle.
$WORKSPACE_REPO_URL is the workspace repo url that has earlier been added to ArgoCD.
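The dynamic bundle can then be created by pointing ArlonCTL at the git directory added above. The exact flags may differ across ArlonCTL versions, so treat this as a sketch (the bundle name and flags are modeled on the upstream Arlon CLI, not confirmed for this release):

```shell
arlonctl bundle create xenial-dynamic \
  --tags application \
  --desc "dynamic xenial bundle" \
  --repo-url ${WORKSPACE_REPO_URL} \
  --repo-path bundles/xenial
```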
Profile
A profile expresses a desired configuration for a Kubernetes cluster. It's a collection of bundles (static, dynamic, or a combination). A profile can be static or dynamic. Profiles are stored in git as YAMLs.
Static Profile: When a cluster consumes a static profile at creation time, the set of bundles for the cluster is fixed at that time and does not change over time even when the static bundle is updated.
arlonctl profile create prof-1 --bundles xenial --desc "static profile 1" --tags examples
Dynamic Profile: Any change in the bundles present in the profile will get reflected in the dynamic profile. Any cluster consuming that dynamic profile will be affected by the change, meaning it may lose or acquire new bundles in real time.
arlonctl profile create xenial-dynamic --repo-url ${WORKSPACE_REPO_URL} --repo-base-path profiles --bundles xenial-dynamic --desc "dynamic profile(xenial app)" --tags examples
$WORKSPACE_REPO_URL is the workspace repo URL that has earlier been added to ArgoCD.
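For reference, here is the end-to-end flow described in this document, collected in one place. All names in angle brackets and shell variables are the placeholders defined in the sections above:

```shell
# 1. Prepare and validate the cluster template directory
arlonctl clustertemplate preparegit --repo-url $WORKSPACE_REPO_URL \
  --repo-path <pathToDirectory> --ducloudprovider $CLOUD_PROVIDER
arlonctl clustertemplate validategit --repo-url $WORKSPACE_REPO_URL \
  --repo-path <pathToDirectory>

# 2. Create the cluster, optionally attaching the dynamic profile
arlonctl cluster create --cluster-name <clusterName> \
  --repo-url $WORKSPACE_REPO_URL --repo-path <pathToDirectory> \
  --profile xenial-dynamic

# 3. Watch until all Arlon-created apps are Synced and Healthy
argocd app list
```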