Installation
To begin the installation of PEC, run the install.sh command.
The install.sh command requires root permissions to run; alternatively, a user with sudo privileges can run it.
The files are installed under the /opt/pf9/airctl folder. See the Artifacts page for more details on the directory structure.
Configuration file
Airctl relies on two configuration files to define the management stack.
Management Cluster Configuration
Use one of the sample configuration templates below to describe your management cluster.
Sample configuration for a single master cluster:
```yaml
clusterName: airctl-mgmt
sshUser: root
sshPrivateKeyFile: /root/.ssh/id_rsa
nodeletPkg: /opt/pf9/airctl/nodelet/nodelet.tar.gz
allowWorkloadsOnMaster: true
masterIp: 10.128.144.161
masterVipEnabled: false
calicoV4Interface: "interface=eth0"
privileged: true
dns:
  hostsFile: /opt/pf9/airctl/hosts
masterNodes:
  - nodeName: 10.128.144.151
workerNodes:
  - nodeName: 10.128.144.151
  - nodeName: 10.128.145.202
  - nodeName: 10.128.145.63
  - nodeName: 10.128.145.219
```
Sample configuration of a multi-master cluster:
```yaml
clusterName: airctl-mgmt
sshUser: root
sshPrivateKeyFile: /root/.ssh/id_rsa
kubeconfig: /etc/nodelet/airctl-mgmt/certs/admin.kubeconfig
nodeletPkg: /opt/pf9/airctl/nodelet/nodelet.tar.gz
allowWorkloadsOnMaster: true
masterIp: 10.128.144.161
masterVipEnabled: true
masterVipInterface: eth0
masterVipVrouterId: 209
calicoV4Interface: "interface=eth0"
privileged: true
dns:
  hostsFile: /opt/pf9/airctl/hosts
masterNodes:
  - nodeName: 10.128.144.151
  - nodeName: 10.128.145.63
  - nodeName: 10.128.145.219
  - nodeName: 10.128.145.137
  - nodeName: 10.128.145.76
workerNodes:
  - nodeName: 10.128.145.202
  - nodeName: 10.128.145.197
```
Sample configuration of a dual-stack cluster:
```yaml
allowWorkloadsOnMaster: true
calico:
  v4BlockSize: 26
  v4ContainersCidr: 10.20.0.0/22
  v4Interface: interface=eth1
  v4IpIpMode: Always
  v4NATOutgoing: true
  v6BlockSize: 122
  v6ContainersCidr: fd00:101::/116
  v6Interface: interface=eth0
  v6NATOutgoing: true
clusterName: airctl-mgmt
containerRuntime:
  cgroupDriver: systemd
  name: containerd
dns:
  hostsFile: /opt/pf9/airctl/hosts
  corednsHosts:
    - 10.128.145.197 airctl-1-2479689-127.pf9.localnet
    - fc00:1:a:2::297 airctl-1-2479689-127.pf9.localnet
ipv4: true
ipv6: true
k8sApiPort: "443"
masterIp: 10.128.145.189
masterIpV6: fc00:1:a:2::14e
masterNodes:
  - nodeName: 10.128.145.189
masterVipInterface: eth1
mtu: "1440"
nodeletPkg: /opt/pf9/airctl/nodelet/nodelet.tar.gz
privileged: "true"
sshPrivateKeyFile: /home/centos/.ssh/id_rsa
sshUser: centos
userImages:
  - /home/centos/downloads/nodelet-imgs-v-5.6.0-2481858.tar.gz
  - /home/centos/downloads/kubedu-imgs-v-5.6.0-2481858.tar.gz
workerNodes:
```
Airctl automatically adds a couple of required .tar.gz image bundles, which are uploaded and imported into the container runtime of every node in the cluster.
This is required when public image repositories are not available. The bundles are added to the systemImages section when the management cluster is created; the file names are generated based on the airctl version.
The config looks as follows in the cluster spec file. You do not need to add this; it is added automatically.
```yaml
systemImages:
  - /root/kubedu-imgs-v-5.6.0-2161634.tar.gz
  - /root/nodelet-imgs-v-5.6.0-2161634.tar.gz
```
However, if there are custom image bundles that you do want loaded, you may use the following section to specify them:
```yaml
userImages:
  - /root/<img-bundle-1>.tar.gz
  - /root/<img-bundle-2>.tar.gz
```
For more information, see https://github.com/platform9/nodelet/blob/main/nodeletctl/README.md#airgapped-user-image-bundles
Uploading and importing can take considerable time, especially with large image bundles.
Save your configuration file to disk. The steps below assume it is at /root/clusterSpec.yaml.
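Before pointing airctl at the spec, it can help to confirm the file parses as valid YAML. A minimal sketch, assuming python3 with PyYAML is available; the spec content below is an illustrative fragment, not a complete config:

```shell
# Sanity-check that the cluster spec parses as YAML before handing it to airctl.
SPEC=/tmp/clusterSpec.yaml           # illustrative path for this sketch
cat > "$SPEC" <<'EOF'
clusterName: airctl-mgmt
sshUser: root
masterIp: 10.128.144.161
masterNodes:
  - nodeName: 10.128.144.151
EOF
python3 - "$SPEC" <<'PY'
import sys, yaml
spec = yaml.safe_load(open(sys.argv[1]))
assert "clusterName" in spec, "clusterName is required"
print("spec OK:", spec["clusterName"])
PY
```

A parse failure here surfaces as a YAML traceback immediately, rather than partway through cluster creation.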
Now that we are ready to configure the software, begin by creating a config file. The directory /opt/pf9/airctl/conf contains an example configuration file; copy it into your home directory. By default, airctl looks for $HOME/airctl-config.yaml; if the file is elsewhere, pass its location with the --config option.
Add the path to the file in airctl-config.yaml:
```yaml
bootstrapCfgPath: /home/centos/clusterSpec.yaml
```
Then, create the management cluster:
```bash
airctl advanced-ddu create-mgmt --config /opt/pf9/airctl/airctl-config.yaml
```
Next, edit the new config file with your specific settings using the editor of your choice. See the Airctl Reference page for more details on all the config file parameters. We recommend verifying the default values of the following options:
- duFqdn
- vmCpus, vmMemoryMb
- dockerRepo: points to the offline docker bundle, for example /opt/pf9/airctl/docker-v-5.3.0-1675863.tar.gz
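One quick way to review those defaults is to grep them out of the config file. A sketch; the scratch file and values here are illustrative, so on a real system point CONF at your actual airctl-config.yaml:

```shell
# Show the current values of the options called out above.
CONF=/tmp/airctl-config.yaml
cat > "$CONF" <<'EOF'
duFqdn: airctl-1.pf9.localnet
vmCpus: 8
vmMemoryMb: 16384
dockerRepo: /opt/pf9/airctl/docker-v-5.3.0-1675863.tar.gz
EOF
grep -E '^(duFqdn|vmCpus|vmMemoryMb|dockerRepo):' "$CONF"
```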
If implementing an air-gapped system with local registries, please review the Using a Local Docker Registry on Air-Gapped Systems article before proceeding.
Initialization
This initialization step installs docker and registers the containers that are used by the KDU/Management plane. Additionally, it creates a DNS entry based on the configuration file info for the KDU DNS name on the local machine.
Next, verify that docker is working for the current user and that the user can access the docker CLI.
You will need to log out and log back in after this step.
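A quick way to verify docker access is a conditional on `docker info`. A sketch; the `docker` group name is the common default, so adjust if your distro differs:

```shell
# Check whether the current user can reach the docker daemon without sudo.
if docker info >/dev/null 2>&1; then
  echo "docker access: OK"
else
  # Typical fix: sudo usermod -aG docker "$USER", then log out and back in.
  echo "docker access: MISSING"
fi
```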
Starting the Management Plane
Update airctl-config.yaml with any values you'd like to change. It has reasonable defaults, but we all have our preferences.
With the config file updated, we need to prep the system by installing required dependencies such as helm.
```bash
sudo airctl init --config /opt/pf9/airctl/airctl-config.yaml
```
Start the management plane using:
```bash
airctl start --config /opt/pf9/airctl/conf/airctl-config.yaml
```
Wait for the KubeDU services to come up.
The airctl status command reports the state of the DU. Wait until the status command reports the task state as ready.
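Waiting can be scripted as a simple polling loop over the status output. A sketch; `check_status` is a stub standing in for the real `airctl ... status` invocation so the loop runs anywhere:

```shell
# Poll until the status output reports "task state: ready".
check_status() { echo "task state: ready"; }  # stub; replace with the real airctl status command
until check_status | grep -q 'task state: ready'; do
  echo "waiting for management plane..."
  sleep 10
done
echo "management plane is ready"
```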
```
[centos@airctl-host ~]$ /opt/pf9/airctl/airctl --config /opt/pf9/airctl/conf/airctl-config.yaml status
------------- deployment details ---------------
fqdn: airctl-1.pf9.localnet
cluster: airctl-1.pf9.localnet
task state: ready
-------- region service status ----------
task state: ready
desired services: 42
ready services: 41
```
DDU Private Registry (Airgapped)
If the workload clusters managed by the airctl DDU are airgapped, a private registry service is deployed. This behavior has not changed from the CDU. To push an image bundle to the DDU registry:
```bash
sudo airctl advanced-du push-images --img-tar <path to image .tar.gz> --config <path to airctl configuration>
```
Stopping The Management Plane
The management plane can be cleaned up using:
```bash
airctl unconfigure-du --config /opt/pf9/airctl/conf/airctl-config.yaml
```
To delete the management cluster, run:
```bash
airctl advanced-ddu delete-mgmt --config /opt/pf9/airctl/conf/airctl-config.yaml
```
Management Plane Health
You may check the health of the management plane at any time using the status command.
```bash
airctl --config /opt/pf9/airctl/conf/airctl-config.yaml status
```
This reports the number of services deployed and how many are healthy, as well as the overall status of the DU. If fewer than the expected number of services are healthy, you can inspect each individual service via kubectl.
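To surface only the unhealthy pods from a `kubectl get pods -A` listing, an awk filter over the STATUS column works well. A sketch; the here-doc stands in for live kubectl output so it runs anywhere:

```shell
# Print namespace/name and status for pods that are not Running or Completed.
awk '$4 != "Running" && $4 != "Completed" {print $1 "/" $2 " is " $4}' <<'EOF'
kube-system   coredns-abc    1/1   Running            0   5m
pf9-system    resmgr-xyz     0/1   CrashLoopBackOff   3   5m
EOF
```

Against a live cluster, pipe `kubectl get pods -A --no-headers` into the same filter.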
```bash
kubectl get pods -A
```
Host Management
Once the management plane has been deployed, you may add nodes to it. The following fields in the airctl config file help with that:
```yaml
# nodeHostnames points to the list of nodes to manage. This can be either IP addresses or DNS names.
# This list does not need to include all nodes on the DU. It can point to just the
# current list of nodes that need to be operated on.
nodeHostnames:
  - 10.128.240.189
  - 10.128.240.190
  - 10.128.240.191
  - host1.myorganization.localnet
  - host2.myorganization.localnet

# sshUser reflects the user on the nodes used by airctl to log in.
# Airctl expects key-based login on these nodes.
sshUser: centos

# Path to ssh key files.
# sshPublicKeyFile: /home/centos/.ssh/id_rsa.pub
# sshPrivateKeyFile: /home/centos/.ssh/id_rsa

# The following fields point to the paths of the packages that need to be installed on the nodes.
# We install 2 packages and their dependencies: hostagent and docker.
# By default, we expect to install packages on CentOS 7.9.
hostAgentRepo: /opt/pf9/airctl/hostagent-v-5.2.0-1549281.tar.gz
dockerRepo: /opt/pf9/airctl/docker-v-5.2.0-1549281.tar.gz
```
If using RHEL 8.6, remember to use the docker-8-vXXXX.tar.gz and hostagent-8-vXXXX.tar.gz packages for the dockerRepo and hostAgentRepo fields above.
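Before running configure-hosts, a small pre-flight check that the key and package paths from the config actually exist can save a failed run. A sketch with illustrative paths mirroring the example config:

```shell
# Confirm the files referenced by sshPrivateKeyFile, hostAgentRepo and
# dockerRepo exist. Paths are illustrative; adjust to your own config.
for f in /home/centos/.ssh/id_rsa \
         /opt/pf9/airctl/hostagent-v-5.2.0-1549281.tar.gz \
         /opt/pf9/airctl/docker-v-5.2.0-1549281.tar.gz; do
  [ -f "$f" ] && echo "$f: present" || echo "$f: MISSING"
done
# Optionally, also test key-based SSH to one node:
# ssh -i /home/centos/.ssh/id_rsa -o BatchMode=yes centos@10.128.240.189 true
```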
Once the above fields are populated correctly in the airctl config, run the following command to copy the packages to the nodes and authorize those nodes in the airctl management plane.
```bash
airctl --config /opt/pf9/airctl/conf/airctl-config.yaml configure-hosts
```
The configure-hosts command has various flags that may be set to suit your specific environment. Please look at the help text to determine what's appropriate for you.
The above command is idempotent and can be run any number of times. Nodes already authorized in the past are untouched, even if they are missing from the config above.
Host Health
You can look at the status of the hosts with:
```bash
airctl --config /opt/pf9/airctl/conf/airctl-config.yaml host-status
```
Obtain Credentials
Now we acquire the credentials of the newly created DU.
Note: Please update the duUser and duPassword entries in the config file with these values before proceeding. If the default admin password is unchanged, the duPassword field is optional.
Note: If a new password is passed to the airctl start command in this way, the same password must also be passed to the get-creds command to acquire the admin credentials of the DU; otherwise, get-creds fails with an error. Be sure to store this new password in a file, as the admin credentials cannot be acquired if the password is forgotten or lost.
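One way to avoid losing the password is to generate it into a file with restrictive permissions and reference that file in both commands. A sketch; the path is illustrative (prefer somewhere under $HOME in practice):

```shell
# Generate a random admin password into a 600-permission file.
PASSFILE=/tmp/airctl-admin-pass      # illustrative path
umask 077
head -c 18 /dev/urandom | base64 > "$PASSFILE"
# Reuse it for both commands (shown as comments, not run here):
# airctl start     --config /opt/pf9/airctl/conf/airctl-config.yaml --password "$(cat "$PASSFILE")"
# airctl get-creds --config /opt/pf9/airctl/conf/airctl-config.yaml --password "$(cat "$PASSFILE")"
ls -l "$PASSFILE"
```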
```
/opt/pf9/airctl/airctl start --config /opt/pf9/airctl/conf/airctl-config.yaml --password $newPasswd

/opt/pf9/airctl/airctl get-creds --config /opt/pf9/airctl/conf/airctl-config.yaml
fatal error: failed to getting admin credentials: failed to validate airctl password: crypto/bcrypt: hashedPassword is not the hash of the given password

/opt/pf9/airctl/airctl get-creds --config /opt/pf9/airctl/conf/airctl-config.yaml --password $newPasswd
email: admin@airctl.localnet
password: somesecretpassword
```
Accessing the UI/CLI
Accessing the UI or CLI is not possible using the IP address alone. You will need either to update your DNS settings to create an A record, or to know the FQDN and IP of the DU's physical host (management station) where the management plane runs. For testing purposes, you can create an /etc/hosts entry on your local workstation.
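For a quick test setup, the hosts entry can be added idempotently. A sketch using an example FQDN/IP from this guide and a scratch file; on a real workstation, append to /etc/hosts with sudo instead:

```shell
# Add an FQDN -> IP mapping if it is not already present.
IP=10.128.145.189                 # management host IP (example)
FQDN=airctl-1.pf9.localnet        # DU FQDN (example)
HOSTS=/tmp/hosts.example          # use /etc/hosts (with sudo) for real
touch "$HOSTS"
grep -q " $FQDN" "$HOSTS" || echo "$IP $FQDN" >> "$HOSTS"
cat "$HOSTS"
```

Because of the grep guard, running the snippet repeatedly does not duplicate the entry.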
Configure Hosts
At this point, we are ready to onboard new hosts. This section discusses how to prepare multiple hosts to be added to the cluster when you have direct SSH access. Your Platform9 solutions architect can assist you in working through other methods of onboarding hosts.
There are multiple methods to onboard hosts, especially if you do not have direct SSH access. The airctl command provides a subcommand named configure-hosts, which configures multiple hosts and prepares them with the Platform9 agent running and ready to be added to a cluster. The command also helps set up the /etc/hosts entry to point to the Platform9 DU, can optionally install docker, and can pre-cache docker images as needed. See the airctl reference page for complete details on these tasks. The configure-hosts command uses the nodeHostnames option to determine which hosts should be processed.
```yaml
nodeHostnames:
  - 10.128.147.73
  - 10.128.147.185
  - test-pf9-airgap-c7-1676888-610-1.novalocal
```
```
/opt/pf9/airctl/airctl configure-hosts --help
configure the hosts for agent installation, docker installation and image pre-install

Usage:
  airctl configure-hosts [flags]

Flags:
  -h, --help                            help for configure-hosts
      --reset-pf9-managed-docker-conf   remove PF9_MANAGED_DOCKER=false from /etc/pf9/kube_override.env, this would enable PMK to completely manage(install, upgrade) docker as part of the cluster creation
      --skip-docker-img-import          skip import the docker images on each hosts
      --skip-docker-install             skip docker install on each hosts
      --skip-image-renaming             skip image renaming to point to the privateRegistryBase, for e.g. gcr.io/hyperkube will become <privateRegistryBase>/hyperkube
      --skip-yum-repo-install           skip yum repo installation, default to false

Global Flags:
      --config string   config file (default is $HOME/airctl-config.yaml)
      --json            json output for commands (configure-hosts only currently)
      --verbose         print verbose logs to the console
```
Behind the scenes, configure-hosts accomplishes the following:
- Create a yum repo to install Platform9 agents (unless --skip-yum-repo-install is specified)
- Install docker (unless --skip-docker-install is specified)
- Push all the images needed by Platform9 (unless --skip-docker-img-import is specified)
As mentioned earlier, the newer version moves to a central registry and requires you to have up-to-date yum repos; in that case some of the --skip-* flags can be applied, and the process completes much faster.
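Putting that together, on pre-provisioned hosts the relevant skip flags from the help text above can be combined. A runnable sketch; the shell function is a stub standing in for the real airctl binary, so remove it to execute the command for real:

```shell
# Skip docker install and yum-repo setup when hosts already have both.
airctl() { echo "would run: airctl $*"; }   # stub for illustration only
airctl configure-hosts --config /opt/pf9/airctl/conf/airctl-config.yaml \
  --skip-docker-install --skip-yum-repo-install
```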