Pre-requisites
This document describes the infrastructure prerequisites to get your Private Cloud Director private cloud up and running. If you're looking to deploy the Self-Hosted version of Private Cloud Director, please follow its Pre-requisites first.
Hypervisor Host Prerequisites
Each physical server or host that you will use as a hypervisor with Private Cloud Director must meet the following requirements:
x86 server - Private Cloud Director only supports x86 server hardware today.
Running the Ubuntu 22.04 LTS (Jammy Jellyfish) AMD64 cloud image. Note: A full server distribution is not required, and the minimal distribution is not supported.
The server must meet the CPU Model Pre-requisites for Hypervisor Hosts.
Each server should have the following minimum resources:
- 8 vCPUs
- 16GB RAM
- 250 GB storage (for the OS, Platform9 installer packages, logs, and VM storage). Note: When using non-ephemeral (Cinder) storage for VMs, 100 GB of storage should be enough.
- `sudo` access enabled for the Administrator to log into the server and install the Platform9 agent.
- Server `hostname` should contain at least one non-numeric character.
- Make sure that the content under `/opt/pf9` is not shared across hosts. Either make this a local directory, or, if using shared storage, make sure that this path mounts to a unique shared storage file share or volume that is not shared with any other host in your Private Cloud Director setup.
- When using the SaaS-hosted deployment model, outbound connectivity (port 443) must be enabled on each server so that the Platform9 agent can connect to the Private Cloud Director SaaS management plane.
- In a multi-domain environment, host onboarding should be done by the Administrator user in the `default` domain, not a secondary domain.
- If planning to use the VM Live Migration feature, follow the Live Migration Prerequisites.
- If planning to use the Virtual Machine High Availability (VM HA) feature, follow the VM HA Prerequisites.
- If planning to use the Dynamic Resource Rebalancing (DRR) feature, follow the DRR Pre-requisites.
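The host requirements above can be sanity-checked with a short script. The following is an illustrative sketch, not a supported tool: the thresholds mirror the minimums listed, and the hostname rule is checked with a small shell function.

```shell
#!/bin/sh
# Illustrative pre-flight checks for a prospective hypervisor host.

# The hostname must contain at least one non-numeric character.
hostname_ok() {
  case "$1" in
    *[!0-9]*) return 0 ;;  # contains a non-digit character: acceptable
    *)        return 1 ;;  # purely numeric: not allowed
  esac
}

hostname_ok "$(hostname)" || echo "WARN: hostname is all-numeric"

# Minimums from the list above: 8 vCPUs, 16 GB RAM.
cpus=$(nproc)
mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
[ "$cpus" -ge 8 ]          || echo "WARN: only $cpus vCPUs (need 8)"
[ "$mem_kb" -ge 16000000 ] || echo "WARN: less than 16 GB RAM"

# /opt/pf9 must not be shared across hosts; flag it if it is a mount point.
if mountpoint -q /opt/pf9 2>/dev/null; then
  echo "NOTE: /opt/pf9 is a mount; ensure it is unique to this host"
fi
```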
Storage Prerequisites
Private Cloud Director supports a wide variety of enterprise storage solutions. Verify that you have access to the administrative console of your storage solution and can look up the required configuration information from it.
- Read more in the Storage Overview article about types of storage supported by Private Cloud Director.
- For block storage, see the list of supported block storage drivers
- Private Cloud Director expects every hypervisor that connects to iSCSI storage to present a unique iSCSI Qualified Name (IQN). Duplicate IQNs can exist across hypervisor hosts when multiple hosts boot with an identical IQN, often because their OS image was cloned. Please refer to the knowledge base article on addressing duplicate IQNs.
- See also: latest compatibility matrix of Cinder storage drivers and devices as maintained by the OpenStack project.
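To spot the duplicate-IQN situation described above, you can collect each host's initiator name and look for repeats. A minimal sketch follows; the IQN values shown are made up, and the file path assumes the standard open-iscsi location.

```shell
#!/bin/sh
# On each hypervisor, the open-iscsi initiator name lives here:
#   grep '^InitiatorName=' /etc/iscsi/initiatorname.iscsi
# Collect that output from all hosts, then check the list for duplicates:

dup_iqns() {
  # Reads one IQN per line on stdin; prints any value seen more than once.
  sort | uniq -d
}

# Example with a cloned (duplicated) IQN:
printf '%s\n' \
  'iqn.2004-10.com.ubuntu:01:aaa111' \
  'iqn.2004-10.com.ubuntu:01:aaa111' \
  'iqn.2004-10.com.ubuntu:01:bbb222' | dup_iqns
# → iqn.2004-10.com.ubuntu:01:aaa111
```

Any line printed by `dup_iqns` is an IQN shared by two or more hosts and must be regenerated before connecting those hosts to iSCSI storage.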
Using Ephemeral Local Storage
If you plan to use Ephemeral Local Storage for VM root disks, sufficient local disk space is required on each hypervisor host to store virtual machine instance files. The recommended minimum storage per hypervisor host in this case is:
- 250GB of local disk space
- This space is used for storing VM images, swap, and temporary storage.
- Ensures sufficient capacity for high-density workloads.
Using Ephemeral Shared Storage or Volume Based Storage
If you plan to use Ephemeral Shared Storage or Block Storage Volumes for VM root disk, then per hypervisor host local disk requirements are significantly lower:
- 95GB of local disk space for operating system and services.
- Virtual machine storage is managed externally, reducing local storage needs.
Partitioning Recommendations
For Volume based storage, partitioning should optimize performance and ensure efficient storage utilization:
- `/` (root) – Minimum 50GB for the operating system and essential services.
- `/var` – Minimum 30GB, especially for logs and temporary files.
- `/home` – Optional; size as needed based on user requirements.
- `/opt` – If using additional services, allocate 15GB+.
- Swap – Recommended 1.5x RAM for best performance.
Networking Prerequisites
All hypervisor hosts should have a minimum of 1 network interface, and ideally 4 network interfaces to provide redundancy in case of a network interface failure. A typical configuration would look like:
- bond0 mapped to two adapters: eth0 and eth1
- bond1 mapped to two adapters: eth2 and eth3
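On Ubuntu 22.04, a bonded layout like the one above might be expressed in netplan roughly as follows. This is a hedged sketch: the interface names, bond mode, and addressing are assumptions you should adapt to your own switches and networks.

```yaml
# /etc/netplan/01-bonds.yaml (illustrative only)
network:
  version: 2
  ethernets:
    eth0: {}
    eth1: {}
    eth2: {}
    eth3: {}
  bonds:
    bond0:                       # e.g. management network
      interfaces: [eth0, eth1]
      parameters:
        mode: 802.3ad            # LACP; must match the switch configuration
      dhcp4: true
    bond1:                       # e.g. workload/storage traffic
      interfaces: [eth2, eth3]
      parameters:
        mode: 802.3ad
      dhcp4: false
```

Apply the configuration with `sudo netplan apply` after reviewing it with `sudo netplan try`.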
Key Networking Decisions
Your key decisions before configuring networking in Private Cloud Director are:
Use of bonded network interfaces (recommended) to ensure availability if a physical network interface fails
Desired network topology and separation:
- Management network
- Workload network (e.g. a VM network)
- Storage network
- Backup/DR network
Use of physical networks vs "virtual" software defined networks:
- A common use case is that external north-south connectivity is available via an existing physical network in your infrastructure; but a group of users may want to use a virtual network that doesn't need to consume ports from this external network
- You may have limitations on the VLANs that are available to use, and may want to expand the logical network range by using an IP overlay such as VXLAN or GENEVE networking
- Groups of users and workloads that have overlapping IP ranges can be isolated easily using virtual networks
External firewall (outside cluster) vs in-cluster firewall
Segregation of traffic can be done within Private Cloud Director if you aren't already using VLAN- or VXLAN-based network segments.
For further reading, see Overview & Architecture.
Outbound Connectivity Requirements
You need to configure outbound access on port 443 from your hosts to the following domain names to ensure they can be onboarded to the Private Cloud Director management plane.
- The Private Cloud Director management plane URL is accessed over port 443.
- For `pcdctl` CLI download on hosts: https://pcdctl.s3.us-west-2.amazonaws.com/pcdctl-setup
- APT sources list for installing packages on the Ubuntu host using `pcdctl prep-node`:
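Outbound reachability can be verified from each host before onboarding. In this sketch, the management plane FQDN is a placeholder that you must replace with your own:

```shell
#!/bin/sh
# Check outbound HTTPS (port 443) to the endpoints listed above.
# Replace the first URL with your actual management plane FQDN.
for url in \
  "https://your-mgmt-plane.example.com" \
  "https://pcdctl.s3.us-west-2.amazonaws.com/pcdctl-setup"
do
  if curl -sS -o /dev/null --connect-timeout 5 "$url"; then
    echo "OK      $url"
  else
    echo "FAILED  $url"
  fi
done
```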
Image Library Prerequisites
The Image Library service manages virtual machine images in the Private Cloud Director environment. To enable its proper operation, the following prerequisites must be met:
- Ensure that port `9494` is allowed; it is used by the Image Library API for image operations.
- The Image Library service must operate with `admin` permissions to read and write image files to persistent storage.
External Connectivity
The hypervisor node to which you've assigned the image library role (the image library node) must have external connectivity so that it is accessible via a browser. This is necessary for:
- Uploading images through the Private Cloud Director UI.
- Verifying and accepting self-signed certificates.
Self-Signed Certificates
The image library node uses self-signed certificates. To enable image uploads from the UI, users need to:
1. Navigate to the image library endpoint in a browser. To find it, click the Access & Security menu -> API Access -> and look for glance-cluster.
2. Accept the insecure certificate when prompted.
Why Self-Signed Certificate?
The self-signed certificate is needed because the image library node secures communication with SSL/TLS and uses a self-generated certificate instead of one from a public CA.
Since browsers and CLI tools trust only publicly verified certificates, users must manually accept the self-signed certificate when accessing the Image Library Admin endpoint.
Similarly, the `--insecure` flag is required for the OpenStack CLI to bypass certificate verification during image uploads.
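For example, an image upload via the OpenStack CLI against the self-signed Image Library endpoint would look roughly like this; the image name and file are illustrative:

```shell
# Upload a qcow2 image, skipping certificate verification for the
# self-signed Image Library endpoint.
openstack --insecure image create \
  --disk-format qcow2 \
  --container-format bare \
  --file ubuntu-22.04-server.qcow2 \
  ubuntu-22.04
```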
Load Balancer As a Service (LBaaS) Prerequisites
These prerequisites only apply if you plan to deploy the Load Balancer as a Service (LBaaS) implementation offered by Private Cloud Director to create one or more software-defined load balancers for your application services.
CLI Update
You need to install the Octavia extension to the OpenStack CLI in order to use the LBaaS-specific OpenStack CLI commands. Run the following command on the machine where you want to run the OpenStack CLI to install both packages.
sudo apt install python3-openstackclient python3-octaviaclient -y
Alternatively, if you already have the OpenStack CLI installed, run the following command to add only the LBaaS extension.
sudo apt install python3-octaviaclient -y
Network Requirements
You will need:
- An internal network (a physical or virtual network) that will be used both by your load balancer instance, and your pool of virtual machines that will run the service and receive client requests.
- (Optionally) An external network if you plan to use public (floating) IPs for your load balancer.
Pool of Virtual Machines
The pool of virtual machines that will run your application that requires load balancing must meet the following requirements:
- Be running and in an 'active' state
- Have a valid IP address assigned from the same tenant network that you will use to create a new load balancer instance.
- Have your application (e.g., web server) running and accessible
Router Configuration
If you plan to use public (floating) IPs for your load balancer, you need:
- A router connecting the tenant network used by the load balancer and the pool of VMs, and your external network.
- Available public (floating) IPs in your quota
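With the prerequisites above in place, creating a basic load balancer typically follows this CLI sequence. All names, subnet identifiers, and member addresses here are illustrative:

```shell
# Create the load balancer on the tenant subnet shared with the VM pool.
openstack loadbalancer create --name web-lb --vip-subnet-id web-subnet

# Add a listener for incoming HTTP traffic.
openstack loadbalancer listener create --name web-listener \
  --protocol HTTP --protocol-port 80 web-lb

# Create a pool behind the listener and register the backend VMs.
openstack loadbalancer pool create --name web-pool \
  --listener web-listener --protocol HTTP --lb-algorithm ROUND_ROBIN

openstack loadbalancer member create --subnet-id web-subnet \
  --address 10.0.0.11 --protocol-port 80 web-pool
openstack loadbalancer member create --subnet-id web-subnet \
  --address 10.0.0.12 --protocol-port 80 web-pool

# Optionally associate a floating IP with the load balancer's VIP port.
```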
Kubernetes Pre-requisites
Read the Kubernetes Pre-requisites for requirements to set up a Kubernetes cluster in Private Cloud Director.