Installation
Install Base Artifacts
To begin the installation of SMCP, run the downloaded install.sh script.
Running the install.sh script requires root-level privileges; users with sudo permissions can also carry out these commands.
The files are installed under the /opt/pf9/airctl directory. See the Artifacts page for more details on the directory structure.
echo 'export PATH=$PATH:/opt/pf9/airctl' >> ~/.bashrc
source ~/.bashrc
This command will append the directory containing airctl to your PATH environment variable, allowing you to run the command without specifying the full path each time. The source command ensures that the changes take effect immediately in the current shell session.
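If you expect to re-run the setup, a guarded append avoids duplicating the PATH line in ~/.bashrc. This is a minimal sketch: grep -qxF matches the exact line, and the BASHRC variable is only there so the snippet can be exercised against a test file.

```shell
# Append the PATH export to ~/.bashrc only if the exact line is not already there.
BASHRC="${BASHRC:-$HOME/.bashrc}"   # override for testing
LINE='export PATH=$PATH:/opt/pf9/airctl'
grep -qxF "$LINE" "$BASHRC" 2>/dev/null || echo "$LINE" >> "$BASHRC"
```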
Configuration
Airctl relies on two configuration files to define the management stack. Both configuration files can be generated, along with any other required artifacts, using the command:
airctl configure
This is an interactive CLI that takes various user inputs and generates the two necessary configuration files under /opt/pf9/airctl/conf:
/opt/pf9/airctl/conf/airctl-config.yaml
/opt/pf9/airctl/conf/nodelet-bootstrap-config.yaml
You may edit these options by hand if necessary. Please refer here for more information.
The command also lets you download other required and optional artifacts once the config files are generated. You may also download these artifacts at a later point using the command:
airctl artifacts download
Creating the Management Cluster
Once your configuration files are generated and the required artifacts downloaded, we are ready to create the management cluster that will host the management plane. Run the following command:
airctl advanced-ddu create-mgmt --config /opt/pf9/airctl/conf/airctl-config.yaml
This will create a headless Platform9-based Kubernetes cluster. It is considered headless because there is no accompanying management plane to monitor and manage it.
If running in airgapped mode, the cluster creation can take considerable time to upload and import the required container images.
Once the management cluster is created, please validate that the cluster is healthy by running:
kubectl get nodes
You should see all attached nodes listed here, marked Ready. You may find the required kubeconfig at /etc/nodelet/airctl-mgmt/certs/admin.kubeconfig on the node where you ran the airctl create-mgmt command.
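For a non-interactive check, you can point KUBECONFIG at the file above and filter for nodes that are not Ready. This is a sketch that assumes the standard kubectl node listing columns (NAME STATUS ROLES AGE VERSION):

```shell
export KUBECONFIG=/etc/nodelet/airctl-mgmt/certs/admin.kubeconfig
# Print any node whose STATUS column is not exactly "Ready"; exit non-zero if one is found.
kubectl get nodes --no-headers \
  | awk '$2 != "Ready" { print $1 " is " $2; bad = 1 } END { exit bad }'
```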
If implementing an air gapped system with local registries, please review the (Link Removed) article before proceeding.
Initialization
This initialization step installs docker and registers the containers that are used by the Management Server. Additionally, it creates a local DNS entry for the Management Server DNS name, based on the information in the configuration file.
Starting the Management Plane
Update airctl-config.yaml with any values you'd like to change. It has reasonable defaults, but we all have our preferences.
With the config file updated, we need to prep the system. This involves installing required dependencies such as helm.
sudo airctl init --config /opt/pf9/airctl/conf/airctl-config.yaml
Start the management plane using:
airctl start --config /opt/pf9/airctl/conf/airctl-config.yaml
Wait for the Kube Management Server services to come up. The airctl status command reports the state of the Management Server; wait until it reports the task state as ready.
[centos@airctl-host ~]$ airctl --config /opt/pf9/airctl/conf/airctl-config.yaml status
------------- deployment details ---------------
fqdn: airctl-1.pf9.localnet
cluster: airctl-1.pf9.localnet
task state: ready
-------- region service status ----------
task state: ready
desired services: 42
ready services: 41
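For scripted installs you can poll the status output until the deployment is ready. This sketch simply greps for the literal "task state: ready" line from the output shown above:

```shell
# Poll airctl status every 30s until the task state reads ready.
until airctl --config /opt/pf9/airctl/conf/airctl-config.yaml status \
    | grep -q 'task state: ready'; do
  echo "management plane not ready yet; retrying in 30s"
  sleep 30
done
```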
Management Server Private Registry (Airgapped)
If the workload clusters managed by the airctl Management Server are airgapped, a private registry service is deployed. This behavior has not changed from the Management Server. To push an image bundle to the Management Server registry:
sudo airctl advanced-du push-images --img-tar <path to image .tar.gz> --config <path to airctl configuration>
The various image bundles are located at /opt/pf9/airctl/images/.
Stopping The Management Plane
The management plane can be cleaned up using:
airctl unconfigure-du --config /opt/pf9/airctl/conf/airctl-config.yaml
To delete the management cluster, run:
airctl advanced-ddu delete-mgmt --config /opt/pf9/airctl/conf/airctl-config.yaml
Management Plane Health
You may check the health of the management plane at any time using the status command.
airctl --config /opt/pf9/airctl/conf/airctl-config.yaml status
This reports the number of services deployed and how many are healthy, as well as the overall status of the Management Server. If fewer than the expected number of services are healthy, you can inspect each individual service via kubectl.
kubectl get pods -A
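To surface only the problem pods, you can filter the listing. This sketch assumes the default "kubectl get pods -A" column order (NAMESPACE NAME READY STATUS RESTARTS AGE):

```shell
# List pods whose STATUS is neither Running nor Completed.
kubectl get pods -A --no-headers \
  | awk '$4 != "Running" && $4 != "Completed" { print $1 "/" $2 ": " $4 }'
```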
Host Management
Once the management plane has been deployed, you may add nodes to it. The following fields in the airctl config file support that:
# nodeHostnames points to the list of nodes to manage. Entries can be either IP addresses or DNS names.
# This list does not need to include all nodes on the Management Server; it can contain just the nodes that currently need to be operated on.
nodeHostnames:
- 10.128.240.189
- 10.128.240.190
- 10.128.240.191
- host1.myorganization.localnet
- host2.myorganization.localnet
# sshUser is the user airctl uses to log in to the nodes.
# Airctl expects key-based (passwordless) SSH login on these nodes.
sshUser: centos
# path to ssh key files
# sshPublicKeyFile: /home/centos/.ssh/id_rsa.pub
# sshPrivateKeyFile: /home/centos/.ssh/id_rsa
# The following fields point to the paths of the packages that need to be installed on the nodes.
# We install 2 packages and their dependencies: hostagent and docker.
# By default, we expect to install packages on CentOS 7.9.
hostAgentRepo: /opt/pf9/airctl/hostagent-v-5.2.0-1549281.tar.gz
dockerRepo: /opt/pf9/airctl/docker-v-5.2.0-1549281.tar.gz
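Before running configure-hosts, you can verify that key-based SSH works for every node in the list. The snippet below is a sketch: it extracts the nodeHostnames entries with sed (assuming the flat "- value" list shown above) and attempts a non-interactive login to each.

```shell
CONFIG=/opt/pf9/airctl/conf/airctl-config.yaml
# Pull the lines between "nodeHostnames:" and the next top-level key, keep "- " items.
hosts=$(sed -n '/^nodeHostnames:/,/^[^ -]/p' "$CONFIG" | sed -n 's/^- *//p')
for h in $hosts; do
  # BatchMode makes ssh fail instead of prompting when key-based login is not set up.
  ssh -o BatchMode=yes -o ConnectTimeout=5 "centos@$h" true \
    && echo "$h: ssh ok" || echo "$h: ssh FAILED"
done
```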
Once you have the above fields populated correctly in the airctl config, run the following command to copy the packages to the nodes, as well as authorize those nodes in the airctl management plane.
airctl --config /opt/pf9/airctl/conf/airctl-config.yaml configure-hosts
The configure-hosts command has various flags that may be set to suit your specific environment. Please look at the help text to determine what’s appropriate for you.
The above command is idempotent and can be run any number of times. Nodes already authorized in the past are untouched, even if they are missing from the config above.
Host Health
You can look at the status of the hosts with:
airctl --config /opt/pf9/airctl/conf/airctl-config.yaml host-status
Obtain Credentials
Now we acquire the credentials of the newly created Management Server.
Note: Update the duUser and duPassword entries in the config file with the above values before proceeding. If the default admin password is unchanged, the duPassword field is optional.
Note: If a new password is passed to the airctl start command in this way, the same password must be passed to the get-creds command to acquire the admin credentials of the Management Server; otherwise get-creds fails with an error. Be sure to store this new password in a file, as the admin credentials cannot be acquired if the password is forgotten or lost.
airctl start --config /opt/pf9/airctl/conf/airctl-config.yaml --password $newPasswd
airctl get-creds --config /opt/pf9/airctl/conf/airctl-config.yaml
fatal error failed to getting admin credentials failed to validate airctl password crypto/bcrypt hashedPassword is not the hash of the given password
airctl get-creds --config /opt/pf9/airctl/conf/airctl-config.yaml --password $newPasswd
email admin@airctl.localnet
password somesecretpassword
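For automation, the two lines of get-creds output can be parsed into variables. This is a sketch that assumes the "email ..." / "password ..." output format shown above:

```shell
# Capture and split the get-creds output; awk keys off the first word of each line.
creds=$(airctl get-creds --config /opt/pf9/airctl/conf/airctl-config.yaml)
ADMIN_EMAIL=$(echo "$creds" | awk '$1 == "email" { print $2 }')
ADMIN_PASSWORD=$(echo "$creds" | awk '$1 == "password" { print $2 }')
echo "logging in as $ADMIN_EMAIL"
```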
Accessing the UI/CLI
The UI and CLI cannot be accessed by IP address. You will either need to update your DNS settings to create an A record mapping the Management Server FQDN to the IP of the physical host where the management plane runs, or, for testing purposes, create an /etc/hosts entry on your local workstation.
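For a quick test, the /etc/hosts entry can be added as below. The IP and FQDN are hypothetical; substitute the fqdn from your airctl-config.yaml and the Management Server host's IP, and run as root (the HOSTS_FILE variable only exists so the snippet can be tested against a scratch file):

```shell
# Hypothetical values -- substitute your Management Server's IP and FQDN.
IP="10.128.240.100"
FQDN="airctl-1.pf9.localnet"
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"   # override for testing
# Add the mapping only if the FQDN is not already present.
grep -q "$FQDN" "$HOSTS_FILE" || printf '%s %s\n' "$IP" "$FQDN" >> "$HOSTS_FILE"
```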
Configure Hosts
This section discusses how to prepare multiple hosts to be added to the cluster when you have direct SSH access. There are other methods to onboard hosts, especially if you do not have direct SSH access; your Platform9 solutions architect can assist you in working through them. At this point, we are ready to onboard new hosts.
The airctl command has a subcommand named configure-hosts which configures multiple hosts and prepares them, leaving the Platform9 agent running and ready to add to a cluster. The command also helps set up the /etc/hosts entry to point to the Platform9 Management Server, and can optionally install docker. Additionally, it can pre-cache docker images as needed. The airctl page has more complete details on these tasks. The airctl command uses the nodeHostnames option to specify which hosts should be processed.
nodeHostnames:
- 10.128.147.73
- 10.128.147.185
- test-pf9-airgap-c7-1676888-610-1.novalocal
airctl configure-hosts --help
configure the hosts for agent installation, docker installation and image pre-install
Usage:
airctl configure-hosts [flags]
Flags:
-h, --help help for configure-hosts
--reset-pf9-managed-docker-conf remove PF9_MANAGED_DOCKER=false from /etc/pf9/kube_override.env, this would enable PMK to completely manage(install, upgrade) docker as part of the cluster creation
--skip-docker-img-import skip import the docker images on each hosts
--skip-docker-install skip docker install on each hosts
--skip-image-renaming skip image renaming to point to the privateRegistryBase, for e.g. gcr.io/hyperkube will become <privateRegistryBase>/hyperkube
--skip-yum-repo-install skip yum repo installation, default to false
Global Flags:
--config string config file (default is $HOME/airctl-config.yaml)
--json json output for commands (configure-hosts only currently)
--verbose print verbose logs to the console
Behind the scenes, configure-hosts accomplishes the following:
- Creates a yum repo to install Platform9 agents (unless --skip-yum-repo-install is specified). Additionally, this disables the existing yum repos on the node.
- Installs docker (unless --skip-docker-install is specified)
- Pushes all the images needed by Platform9 (unless --skip-docker-img-import is specified)
As mentioned earlier, the newer version moves to a central registry and requires you to have up-to-date yum repos, in which case some of the --skip-xxx flags can be applied and the process is much faster.