Install
This guide outlines the steps for a self-hosted deployment of Private Cloud Director. Before installing, review the Pre-requisites section and confirm that all requirements are met.
Concepts
Management Cluster
As part of the installation process, the Self-Hosted version of Private Cloud Director creates a Kubernetes cluster on the physical servers that you deploy it on. We refer to this cluster as the management cluster. The Private Cloud Director management plane then runs as a set of Kubernetes pods and services on this management cluster.
Nodelet
Nodelet is a software agent that is installed and runs on each management cluster node as a component of Self-hosted Private Cloud Director. The nodelet agent is responsible for functions such as installing and configuring multiple Kubernetes services, including etcd, containerd, Docker, networking, and webhooks.
Infra Region vs Workload Regions
Read Tenants and Regions to understand the concepts of regions and infra region in Private Cloud Director.
Download Installer
airctl is the command-line installer for Self-hosted Private Cloud Director. Run the following commands only on one of the management cluster hosts to download airctl along with the required installer artifacts.
All the following steps should be performed by a non-root user.
Step 1: Download the Installer Script
Run the following command to download the installer script and required artifacts into your home folder:
curl --user-agent "<YOUR_USER_AGENT_KEY>" https://pf9-airctl.s3-accelerate.amazonaws.com/latest/index.txt | awk '{print "curl -sS --user-agent \"<YOUR_USER_AGENT_KEY>\" \"https://pf9-airctl.s3-accelerate.amazonaws.com/latest/" $NF "\" -o ${HOME}/" $NF}' | bash
Replace YOUR_USER_AGENT_KEY in the command with the user agent key you requested from Platform9. For more details, see Pre-requisites.
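To confirm the download succeeded, you can list the installer artifacts in your home directory; install-pcd.sh and version.txt are the files used in the next steps (the full artifact set may include additional files):
ls -l ${HOME}/install-pcd.sh ${HOME}/version.txt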
Step 2: Make the Installer Executable
Set the execute permissions on the installation script.
chmod +x ./install-pcd.sh
Step 3: Run the Installation Script
Execute the installer with the specified version. This runs the installer using the version number found in version.txt.
./install-pcd.sh `cat version.txt`
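The backticks substitute the version string stored in version.txt into the command. For example, if version.txt contained a hypothetical version such as v2025.1, the command above would be equivalent to:
./install-pcd.sh v2025.1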
Step 4: Add airctl to System Path
Add airctl to the system path so you can run it globally by creating a symlink in the /usr/bin folder.
sudo ln -s /opt/pf9/airctl/airctl /usr/bin/airctl
Configure airctl
Run the following command to generate a configuration file, which will be used to deploy the Self-hosted Private Cloud Director management cluster.
You can choose between a single-master or multi-master management cluster, depending on your installation type (POC or production).
/opt/pf9/airctl/airctl configure --du-fqdn pcd.platform9.localnet --external-ip4 10.149.106.15 --ipv4-enabled --master-ips 10.149.106.11,10.149.106.12,10.149.106.13 --master-vip-interface ens3 --master-vip4 10.149.106.16 --storage-provider custom --regions Region1 --worker-ips 10.149.106.14
Following are the input parameters for the command:
du-fqdn - Specify the base FQDN you would like to use for Private Cloud Director, for example pcd.mycompany.com.
external-ip4 - Specify the VIP to be used for the management plane.
ipv4-enabled - Enable the IPv4 networking for the cluster.
master-ips - Specify a comma-separated list of IP addresses for the master nodes of the management cluster. We recommend a minimum of three master nodes for a production environment. These nodes also act as worker nodes.
worker-ips (optional) - To add worker nodes to the management cluster, specify a comma-separated list of their IP addresses here.
master-vip-interface - Specify the name of the network interface to be used for the Virtual IP of master nodes. Note that each master node's default network interface must have this name.
master-vip4 - Specify the Virtual IP to be used for the management cluster. This will be used to serve the management Kubernetes cluster's API server.
master-vip-vrouterid - This is optional. If unspecified, one will be randomly generated; it can be found in the updated cluster spec saved in the directory that contains the state of the management server (see the 'Important Files & Directories' section for the location). It is recommended to specify one if you plan to deploy multiple Kubernetes clusters in the same VLAN, to avoid collisions.
regions - Specify one or more region names, as a space-separated list, for the regions you would like to create in your Private Cloud Director setup. When specifying more than one region, enclose the list in "". The final FQDN for your Private Cloud Director deployment combines your base FQDN and your region name. For example, if your base FQDN is pcd.mycompany.com and you specify a single region named region1, the final FQDN for your deployment will be pcd-region1.mycompany.com.
storage-provider - Specify the CSI storage provider that should be used to store any persistent state for the management cluster. If not specified, the hostpath-provisioner storage provider is selected by default.
- For custom as the storage provider, copy the provider-specific CSI yaml files to /opt/pf9/airctl/conf/ddu/storage/custom/ locally. airctl will configure the dependencies by reading the yaml files at this path and use the provided storage class to provision volumes for Private Cloud Director components. The StorageClass Kubernetes resource supplied here will be renamed to pcd-sc and set as the default storage class on your management cluster.
- Alternatively, you can create the storage provider resources and storage class out of band prior to running airctl start on the management cluster. In this case, ensure that the storage class is named pcd-sc; airctl will then skip reading the yaml files at the custom path. We also recommend first creating a test pod with a persistent volume to verify that connectivity and configuration are correct (see the sketch after this list).
- For non-production environments, or where a custom storage provider is not available or required, select hostpath-provisioner.
nfs-ip, nfs-share - If using hostpath-provisioner as the storage provider, the NFS server IP and mount location must be provided. You should have an NFS server pre-configured before selecting this option.
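Below is a minimal sketch of such a test volume check for the out-of-band case, assuming the management cluster is already running (see 'Deploy Management Cluster' below) and the storage class has been created as pcd-sc; the PVC and pod names are illustrative only:
export KUBECONFIG=/etc/nodelet/airctl-mgmt/certs/admin.kubeconfig
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pcd-sc-test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: pcd-sc
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pcd-sc-test-pod
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo ok > /data/ok && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pcd-sc-test-pvc
EOF
# Confirm the PVC binds and the pod reaches Running, then clean up:
kubectl get pvc pcd-sc-test-pvc
kubectl get pod pcd-sc-test-pod
kubectl delete pod pcd-sc-test-pod && kubectl delete pvc pcd-sc-test-pvc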
If you plan to deploy a single-master management cluster (not recommended for production), use the same master node IP for both the management cluster VIP (master-vip4) and the management plane VIP (external-ip4) above.
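For reference, a POC-style single-master configure command might look like the following sketch. All addresses are illustrative, and the --nfs-ip and --nfs-share flag spellings are assumptions based on the parameter names above; verify them against airctl's help output before use.
/opt/pf9/airctl/airctl configure --du-fqdn pcd.platform9.localnet --external-ip4 10.149.106.11 --ipv4-enabled --master-ips 10.149.106.11 --master-vip-interface ens3 --master-vip4 10.149.106.11 --storage-provider hostpath-provisioner --nfs-ip 10.149.106.20 --nfs-share /srv/pcd --regions Region1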
The airctl configure command generates two configuration file templates:
/opt/pf9/airctl/conf/nodelet-bootstrap-config.yaml – Contains the configuration required to bootstrap the management cluster.
/opt/pf9/airctl/conf/airctl-config.yaml – Contains the configuration for the management plane.
To avoid passing the configuration file as a command-line option when running airctl commands, copy or link /opt/pf9/airctl/conf/airctl-config.yaml into your $HOME directory.
ln -s /opt/pf9/airctl/conf/airctl-config.yaml $HOME/airctl-config.yaml
Proxy Configuration (Optional)
If your environment uses a network proxy, set the required values in the /opt/pf9/airctl/conf/helm_values/kplane.template.yml file as shown below:
cat /opt/pf9/airctl/conf/helm_values/kplane.template.yml | grep proxy
# Sample output:
https_proxy: "http://squid.platform9.horse:3128"
http_proxy: "http://squid.platform9.horse:3128"
no_proxy: "10.149.106.11,10.149.106.12,10.149.106.13,10.149.106.14,10.149.106.15,10.149.106.16,127.0.0.1,10.20.0.0/22,localhost,::1,.svc,.svc.cluster.local,10.21.0.0/16,10.20.0.0/16,.cluster.local,.platform9.localnet,.default.svc"
The list of IP addresses in no_proxy should include the master-ips, worker-ips, external-ip4, and master-vip4, along with any other addresses whose traffic should not be routed via the proxy server.
Also, to ensure that containerd honors the proxy values and allows the creation of the Private Cloud Director management cluster, update the proxy values on all management plane nodes as shown below:
cat /etc/environment
# Sample output:
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/ssl/bin"
HTTP_PROXY="http://squid.platform9.horse:3128"
https_proxy="http://squid.platform9.horse:3128"
http_proxy="http://squid.platform9.horse:3128"
HTTPS_PROXY="http://squid.platform9.horse:3128"
NO_PROXY="10.149.106.11,10.149.106.12,10.149.106.13,10.149.106.14,10.149.106.15,10.149.106.16,127.0.0.1,10.20.0.0/22,localhost,::1,.svc,.svc.cluster.local,10.21.0.0/16,10.20.0.0/16,.cluster.local,.platform9.localnet,.default.svc"
no_proxy="10.149.106.11,10.149.106.12,10.149.106.13,10.149.106.14,10.149.106.15,10.149.106.16,127.0.0.1,10.20.0.0/22,localhost,::1,.svc,.svc.cluster.local,10.21.0.0/16,10.20.0.0/16,.cluster.local,.platform9.localnet,.default.svc"
cat /etc/systemd/system/containerd.service.d/http-proxy.conf
# Sample output:
[Service]
EnvironmentFile=/etc/environment
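After editing the systemd drop-in for containerd, systemd typically needs to reload its configuration and containerd must be restarted for the new environment to take effect. A minimal sketch, to be run on each node:
sudo systemctl daemon-reload
sudo systemctl restart containerd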
Deploy Management Cluster
Next, deploy the management cluster for your Private Cloud Director environment.
Step 1: Run Pre-Checks
Before creating the management cluster, run the following command to perform pre-checks and resolve any issues identified:
airctl check
Optionally, configure AWS Credentials for airctl backup
Before creating the cluster, ensure that the AWS credentials required for S3 backup are available by creating the following file.
Path: /etc/default/airctl-backup
Contents:
AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY>
AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_KEY>
AWS_REGION=<YOUR_AWS_REGION>
AWS_S3_PATH=s3://<YOUR_BUCKET_NAME_/PATH>
When you create this file before cluster deployment, the system automatically creates a Kubernetes secret named aws-credentials in the pf9-utils namespace. You need this secret to upload backups to your S3 bucket.
Without this file, you must manually create or patch the aws-credentials secret in the pf9-utils namespace after cluster creation.
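A hedged sketch of creating that secret manually after cluster creation, assuming the secret carries the same keys as the file above (verify the expected key names for your deployment before relying on this) and using the management cluster's admin kubeconfig shown later in this guide:
export KUBECONFIG=/etc/nodelet/airctl-mgmt/certs/admin.kubeconfig
kubectl -n pf9-utils create secret generic aws-credentials --from-literal=AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY> --from-literal=AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_KEY> --from-literal=AWS_REGION=<YOUR_AWS_REGION> --from-literal=AWS_S3_PATH=s3://<YOUR_BUCKET_NAME_/PATH>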
Step 2: Deploy the Kubernetes Cluster
Run the following command to deploy the Kubernetes cluster:
airctl create-cluster --verbose
Step 3: Validate Cluster Health
Once the cluster is created, verify that it is functioning properly and that all nodes are healthy:
export KUBECONFIG=/etc/nodelet/airctl-mgmt/certs/admin.kubeconfig
kubectl get nodes
# Sample output:
NAME            STATUS   ROLES    AGE     VERSION
10.149.106.11   Ready    master   4m29s   v1.29.2
10.149.106.12   Ready    master   5m41s   v1.29.2
10.149.106.13   Ready    master   5m42s   v1.29.2
Please refer to /var/log/pf9/nodelet.log for troubleshooting any issues with the management cluster creation.
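Optionally, before installing the management plane, you can also confirm that the cluster's system pods are healthy; this assumes the same admin kubeconfig exported above:
kubectl get pods -A
# All pods should eventually reach Running or Completed status.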
Install Management Plane
Now that the management cluster is created, run the following commands to install and configure the Private Cloud Director self-hosted management plane.
airctl start
# Sample output:
INFO pcd-virt management plane creation started
SUCCESS generating certs and config...
SUCCESS setting up base infrastructure...
starting consul...
Secret consul-gossip-encryption-key in namespace default not found, creating new...
INFO kplane setup done, creating management plane
INFO starting pcd-virt deployment...
SUCCESS pcd-virt deployment now complete
INFO pcd-virt management plane created - the services will take a while to start
It may take up to 45 minutes for all services to be deployed.
Monitor Deployment Progress
You can track the progress of the management plane deployment by checking the logs of the du-install pod as a root user:
export KUBECONFIG=/etc/pf9/kube.d/kubeconfigs/admin.yaml
kubectl get pod -A | grep du-install
kubectl logs -n <ns> <pod name> -f
Alternatively, to check specific pods:
kubectl get pods -n foo-kplane | grep du-install
# Sample output:
du-install-foo-bmqqd           0/1   Completed   0   108m
du-install-foo-region1-f7fdw   0/1   Completed   0   98m
Please refer to ~/airctl-logs/airctl.log for logs in case of any issues with the airctl start command.
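For example, to follow the installer logs for the region deployment shown in the sample output above (namespace and pod names will differ in your environment):
kubectl logs -n foo-kplane du-install-foo-region1-f7fdw -f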
Check Management Plane & Region Status
To verify the status of the management plane and its regions, run:
airctl status
Obtain UI Credentials
Once the installation is complete, you can retrieve the credentials for the Private Cloud Director UI by running the following command:
airctl get-creds
If you do not have a working internal DNS that resolves the management plane FQDN to its IP address, you must manually update the /etc/hosts file on your local machine and on any new host you want to add to the management cluster.
Use the following command to check if the necessary entry exists:
cat /etc/hosts | grep foo
# Sample output:
<VIP for externalIP> foo-region1.bar.io
<VIP for externalIP> foo.bar.io
<VIP for externalIP> foo-kplane.bar.io
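If the entries are missing, one way to add them is shown below; substitute your actual management plane VIP and FQDNs for the placeholders:
echo "<VIP for externalIP> foo.bar.io foo-region1.bar.io foo-kplane.bar.io" | sudo tee -a /etc/hosts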
After updating the hosts file, open your web browser and log into the UI using the provided credentials. Then, follow the steps in the Getting Started guide to configure your hosts and set up your Private Cloud Director environment.
Important Files & Directories
The following directories contain various log and state files related to the Private Cloud Director self-hosted deployment.
Directories:
/opt/pf9/airctl – Contains all binaries, offline installers, Docker image tar files, miscellaneous scripts, and configuration files for airctl.
/opt/pf9/pf9-kube – Managed by Nodelet. Stores binaries and scripts used to manage the management cluster.
/etc/nodelet – Contains configuration files for Nodelet and certificates generated by it.
~/airctl-logs/ – Stores all logs related to the deployment.
Files:
/opt/pf9/airctl/conf/airctl-config.yaml – Contains configuration information for the management plane.
/opt/pf9/airctl/conf/nodelet-bootstrap-config.yaml – Stores configuration required to bootstrap the management cluster.
/var/log/pf9/nodelet.log – Log file for troubleshooting issues with management cluster creation.
~/airctl-logs/airctl.log – Log file for troubleshooting issues with management plane creation.