Cluster Creation Using Airctl
Creating a Cluster Using Airctl and the API
In this article, we will discuss how to create a cluster using the internal API. Once airctl has been installed and a configuration file has been created, Docker should be installed and configured. At this point, we can start the management plane and obtain the credentials for the newly created Management Server. Next, ensure a DNS A record exists that maps the Management Server FQDN to the IP of the physical host where the management plane runs. We can then use airctl, an internal script, or the UI to begin the cluster creation process. Remember to use the keystoneEnabled: False setting to disable Keystone authentication for the cluster.
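Before continuing, it is worth confirming that the Management Server FQDN resolves correctly from the node where airctl runs. A minimal check, using the FQDN from this environment and a placeholder IP for illustration, might look like:
$ dig +short airctl-1.pf9.localnet
10.10.3.10
If the lookup returns nothing or the wrong address, correct the A record (or the relevant /etc/hosts entry) before proceeding.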
Airctl includes multiple commands that assist with the creation, configuration, and modification of a cluster. Using the airctl configure-hosts command (run with sudo where required), configure the hosts listed in the airctl-config.yaml file, and then install any additional components needed based on the parameters in the default config file located in the user's home directory (a sample configuration excerpt follows the list below).
- The hostAgentRepo setting should point to /opt/pf9/airctl/hostagent-<release-number>.tar.gz
- The dockerRepo location should point to /opt/pf9/airctl/docker-<release-number>.tar.gz
- The imagesDir should point to /opt/pf9/airctl/imgs
Lastly, ensure the SSH user has passwordless sudo access enabled.
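A minimal sketch of the relevant portion of airctl-config.yaml is shown below. The key names come from the settings above; the exact layout and surrounding keys may differ between releases, so treat this excerpt as illustrative rather than authoritative.
# airctl-config.yaml (excerpt, illustrative)
keystoneEnabled: False
hostAgentRepo: /opt/pf9/airctl/hostagent-<release-number>.tar.gz
dockerRepo: /opt/pf9/airctl/docker-<release-number>.tar.gz
imagesDir: /opt/pf9/airctl/imgs
The passwordless sudo requirement can be verified on each host with sudo -n true, which exits silently when no password prompt is needed.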
Users can place all exported Docker image archives into the /opt/pf9/airctl/imgs folder, from where they are uploaded and shared with the other nodes.
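If an image needs to be added to that folder, docker save can export it as a compressed tarball; the image name below is purely an example.
$ docker save example/image:1.0 | gzip > /opt/pf9/airctl/imgs/example-image.tar.gz
A sample configure-hosts run against two hosts (dav-3-1 and dav-3-2) is shown below: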
$ /opt/pf9/airctl/airctl configure-hosts --config /opt/pf9/airctl/conf/airctl-config.yaml
[dav-3-1] connecting..
[dav-3-2] connecting..
[dav-3-1] connection successful
[dav-3-1] check docker install
[dav-3-2] connection successful
[dav-3-2] check docker install
[dav-3-1] docker already installed
[dav-3-1] adding /etc/pf9/kube_override.env file
[dav-3-2] docker already installed
[dav-3-2] adding /etc/pf9/kube_override.env file
[dav-3-2] uploading image docker-imgs-v-5.1.0-1670019.tar.gz 100% |████████████████████████████████████████| [35s:0s][dav-3-2] succcess
[dav-3-1] uploading image docker-imgs-v-5.1.0-1670019.tar.gz 100% |████████████████████████████████████████| [35s:0s][dav-3-1] succcess
[dav-3-2] docker import docker-imgs-v-5.10.0-3248001.tar.gz
[dav-3-1] docker import docker-imgs-v-5.10.0-3248001.tar.gz
[dav-3-1] docker import successful
[dav-3-1] mkdir -p /var/opt/pf9/yum-repo
[dav-3-1] chown centos /var/opt/pf9/yum-repo
[dav-3-1] uploading image /tmp/hostagent-v-5.10.0-3248001.tar.gz 13% |█████ | [0s:1s][dav-3-2] docker import successful
[dav-3-2] mkdir -p /var/opt/pf9/yum-repo
[dav-3-1] uploading image /tmp/hostagent-v-5.10.0-3248001.tar.gz 18% |███████ | [0s:1s][dav-3-2] chown centos /var/opt/pf9/yum-repo
[dav-3-2] uploading image /tmp/hostagent-v-5.10.0-3248001.tar.gz 80% |████████████████████████████████ | [1s:0s][dav-3-1] succcess
[dav-3-1] tar -xvzf /tmp/hostagent-v-5.10.0-3248001.tar.gz -C /var/opt/pf9/yum-repo
[dav-3-2] uploading image /tmp/hostagent-v-5.10.0-3248001.tar.gz 100% |████████████████████████████████████████| [2s:0s][dav-3-2] succcess
[dav-3-2] tar -xvzf /tmp/hostagent-v-5.10.0-3248001.tar.gz -C /var/opt/pf9/yum-repo
[dav-3-1] mv /var/opt/pf9/yum-repo/hostagent.repo /etc/yum.repos.d/hostagent.repo
[dav-3-1] removing host /etc/hosts entry
[dav-3-1] removed entry from /etc/hosts
[dav-3-1] adding /etc/hosts entry
[dav-3-2] mv /var/opt/pf9/yum-repo/hostagent.repo /etc/yum.repos.d/hostagent.repo
[dav-3-1] added entry to /etc/hosts
[dav-3-1] installing hostagent and authorizing host
[dav-3-2] removing host /etc/hosts entry
[dav-3-2] removed entry from /etc/hosts
[dav-3-2] adding /etc/hosts entry
[dav-3-2] added entry to /etc/hosts
[dav-3-2] installing hostagent and authorizing host
2020/12/11 15:02:07 [DEBUG] PUT https://airctl-1.pf9.localnet:443/resmgr/v1/hosts/473f209d-8b33-4eb6-8cb5-79e2827d5c4c/roles/pf9-kube
2020/12/11 15:02:07 [DEBUG] PUT https://airctl-1.pf9.localnet:443/resmgr/v1/hosts/473f209d-8b33-4eb6-8cb5-79e2827d5c4c/roles/pf9-kube (status: 404): retrying in 10s (15 left)
2020/12/11 15:02:08 [DEBUG] PUT https://airctl-1.pf9.localnet:443/resmgr/v1/hosts/8633ef3b-52df-4bd5-9dbb-fd0418d4561c/roles/pf9-kube
2020/12/11 15:02:08 [DEBUG] PUT https://airctl-1.pf9.localnet:443/resmgr/v1/hosts/8633ef3b-52df-4bd5-9dbb-fd0418d4561c/roles/pf9-kube (status: 404): retrying in 10s (15 left)
2020/12/11 15:02:17 [DEBUG] PUT https://airctl-1.pf9.localnet:443/resmgr/v1/hosts/473f209d-8b33-4eb6-8cb5-79e2827d5c4c/roles/pf9-kube (status: 404): retrying in 20s (14 left)
2020/12/11 15:02:18 [DEBUG] PUT https://airctl-1.pf9.localnet:443/resmgr/v1/hosts/8633ef3b-52df-4bd5-9dbb-fd0418d4561c/roles/pf9-kube (status: 404): retrying in 20s (14 left)
[dav-3-1] host done
[dav-3-2] host done
Once the above configuration completes, run the airctl get-creds command to obtain the Management Server login username and password. Then, browse to https://airctl-1.pf9.localnet to log in to the Management Server. Users can utilize the UI or review the specific API references needed to create a cluster.
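For reference, get-creds can be invoked in the same way as configure-hosts above, assuming your build accepts the same --config flag; it prints the username and password used for the UI login.
$ /opt/pf9/airctl/airctl get-creds --config /opt/pf9/airctl/conf/airctl-config.yaml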
Important Considerations
Cluster creation may present challenges if the system cannot access YUM/APT repositories or if an air-gapped environment is required. Below are several common considerations when performing a local installation.
- It is vital to consider using local YUM/APT repositories. Tools like Foreman provide a self-managed central RPM repository
- Platform9 provides self-contained RPM packages, including their dependencies, as tar.gz files or as part of a YUM/APT repository when using SMCP. These files can either be installed via airctl manage-hosts on each host or from a local YUM/APT repository
- If a local repository is not configured, disable all repositories except for the default Platform9 repos (pf9dockerpo and hostagentrepo), as shown in the example after this list
- Additionally, ensure mirror lookups are disabled using yum-config-manager, as YUM attempts to read from its mirrors as soon as you begin interacting with it
- Nodes should only be onboarded using airctl. The SMCP on-premises management plane UI does not support onboarding nodes to the management plane
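As referenced above, a rough sketch of restricting YUM to the Platform9 repos might look like the following. The repo IDs are taken from the list above and may differ on your hosts, so confirm them with yum repolist first.
$ yum repolist all                                # confirm the exact Platform9 repo IDs
$ yum-config-manager --disable \*                 # disable every configured repository
$ yum-config-manager --enable pf9dockerpo hostagentrepo
With all other repositories disabled, YUM no longer attempts mirror lookups for them when packages are installed from the Platform9 repos.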