Beginner’s Guide to Private Cloud Director Community Edition

Hey everyone! 👋 So, you’re curious about setting up your own little cloud haven? I’ve got you covered with this simple guide to getting started with Private Cloud Director Community Edition. Think of it as your friendly “let’s build something cool together” walkthrough.

What’s Community Edition?

This version of Private Cloud Director is awesome for testing stuff out or if you’re just starting small. You can deploy it on a bare-metal setup or inside a virtual machine. The infrastructure and workload regions run on the same VM, but a separate hypervisor host is required; that host can itself run as a VM alongside CE. This guide will use a single bare-metal host to run both VMs. Check out the official docs to learn more about PCD Community Edition.

My Home Lab Setup

Since I’m a bit of a hardware geek (I build water-cooled gaming PCs in my spare time!), that’s what we’re using for our example. Here’s the beast:

  • Intel i9 12900k (16 cores, 24 threads)
  • 64 GB RAM
  • 2 TB SSD
  • Nvidia 3090 Ti
My Home Lab Setup

But hey, feel free to use whatever you’ve got lying around. The minimum hardware requirements for a CE host are:

  • 12 CPUs
  • 32GB RAM
  • 100GB local storage

In order to create virtual machines, at least one hypervisor host must be available. The minimum hardware requirements for a hypervisor host are:

  • 8 CPUs
  • 16GB RAM suggested
  • 100GB local storage suggested

Let’s Get Down to Business: Deployment Steps

Okay, here’s the rundown of what we’re going to do:

  1. Bare-Metal Hypervisor Install: First, we’ll install Ubuntu Desktop on our machine (highlighted in red in the diagram below). This will let us use the KVM hypervisor to spin up the virtual machines into which we will deploy Community Edition and our hypervisor host.
Bare-Metal Hypervisor Install
  2. Private Cloud Director Community Edition Install: We will then install Community Edition into the VM highlighted in red.
Private Cloud Director Community Edition Install
  3. Hypervisor Host Onboarding: Finally, we will onboard a hypervisor host that will run the workload virtual machines. And yes, we will take full advantage of nested virtualization to make this happen (see the quick check below).
Hypervisor Host Onboarding
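Since the hypervisor host will itself run inside a VM, nested virtualization must be enabled on the bare-metal host. A quick way to check on Intel CPUs (AMD systems expose /sys/module/kvm_amd/parameters/nested instead):

cat /sys/module/kvm_intel/parameters/nested

An output of Y or 1 means nested virtualization is available.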

This is what everything will look like once we’re done. We will be onboarding a single hypervisor host, but feel free to onboard more (shown with a dotted line below) if you have the resources available.

Hypervisor Host Onboarding 01

Let’s get started!

Bare-Metal Hypervisor Install

Install Ubuntu Desktop on the bare-metal host. We will use the virt-manager GUI to easily manage our virtual machines. Launch virt-manager with the following command.

virt-manager

Bare-Metal Hypervisor Install
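If virt-manager and KVM are not already installed, the packages below are typically all you need; the package names assume a recent Ubuntu release.

sudo apt update

sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients virt-manager

# Optional: manage VMs without sudo (log out and back in afterwards)

sudo usermod -aG libvirt $USER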

There is a known issue with Community Edition when installing on a host with an IP in the 192.168.0.0/16 range due to conflict with internal pod networking. This issue will be resolved in a future release. To set the IP range to a non-conflicting range, navigate to Edit > Connection Details and edit the XML to use the 172.16.122.1/24 subnet.

Connection Details
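For reference, after the edit the <ip> section of the network XML should look roughly like the snippet below; the DHCP range here is an assumption, so adjust it to suit your environment.

<ip address="172.16.122.1" netmask="255.255.255.0">
  <dhcp>
    <range start="172.16.122.2" end="172.16.122.254"/>
  </dhcp>
</ip>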

Next, we will create an Ubuntu Server 22.04 virtual machine to host PCD CE. Navigate to File > New Virtual Machine.

New Virtual Machine

Follow the prompts to create the virtual machine. Ensure that the following resources are assigned to the virtual machine:

  • 16 vCPU
  • 32 GB RAM
  • 100 GB local storage
  • The network configuration that we updated above  
New Virtual Machine Configuration

Launch the VM, and follow the prompts to install Ubuntu Server.

Follow the prompts until you reach the Guided storage configuration screen. For simplicity, uncheck LVM under the storage configuration menu.

Guided storage configuration

If you choose to enable LVM, ensure that the logical volume is expanded to take up the entire physical partition after installation is complete. Installing CE on a volume with less than 50 GB of space will result in failure, even if the underlying partition is larger. 

Below are helpful commands to resize the logical volume. 

Command to expand LVM to take up the entire partition:

sudo lvresize -l +100%FREE /dev/mapper/<logical volume name>

Example:

sudo lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv

Command to resize filesystem to match the logical volume:

sudo resize2fs /dev/mapper/<logical volume name>

Example:

sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
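
To confirm the resize took effect, check that the root filesystem now spans the full volume:

df -h /

sudo lvs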

Private Cloud Director Community Edition Install

Now we are ready to deploy PCD CE to the virtual machine. Launch the VM that we just created. Run the commands below to switch to root and begin the deployment process. 

sudo su -

curl -sfL https://go.pcd.run | bash

The final deployment step is long-running and takes around 45 minutes to complete.

Private Cloud Director Community Edition Install

Once you get to this point, you can monitor the deployment progress by opening another SSH session to the VM and running the following command.

kubectl get pods -n pcd-kplane

SSH session with the VM

As long as you don’t see any failures here, it means that the installation is progressing without any hiccups.

Progressing Without any Hiccups
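If you prefer a live view, watch refreshes the pod list automatically, and filtering on pod phase quickly surfaces anything that is stuck; both commands are optional conveniences.

watch -n 10 kubectl get pods -n pcd-kplane

kubectl get pods -n pcd-kplane --field-selector=status.phase!=Running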

Once the deployment completes, you will be presented with the PCD FQDN and login credentials. 

Next, we will add host entries to our Ubuntu Desktop environment so we can access the PCD UI from there. Replace 172.16.122.183 with the IP address of the VM that we just deployed PCD into. These entries resolve pcd-community.pf9.io and pcd.pf9.io to the correct IP address.

echo "172.16.122.183 pcd-community.pf9.io" | sudo tee -a /etc/hosts

echo "172.16.122.183 pcd.pf9.io" | sudo tee -a /etc/hosts
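
You can confirm the entries resolve as expected with:

getent hosts pcd-community.pf9.io

getent hosts pcd.pf9.io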

From the Ubuntu Desktop environment, navigate to pcd-community.pf9.io in a web browser. If everything has gone well, you will see the PCD login screen. 

Leave the Domain as default, choose ‘Use local credentials’ at the top right, and log in with the credentials provided when the Community Edition install completed.

Community Edition install completes

Hypervisor Host Onboarding

Now we will create a new VM that will serve as the hypervisor host for our workload VMs. Similar to how we created the PCD VM, create another Ubuntu Server VM with the following resources:

  • 8 CPUs
  • 16GB RAM
  • 100GB local storage
Hypervisor Host Onboarding

Back in the PCD UI, navigate to Infrastructure > Cluster Blueprint. Fill out the required fields as shown below and hit Save Blueprint.

Cluster Blueprint

The Network Interface refers to the name of the Ethernet network interface on the PCD host. You can view the network interfaces on your PCD host by running the following command.

ip link show

Network Interface
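If several interfaces are listed and you are unsure which one to use, the interface carrying the default route is usually the right choice:

ip route show default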

Now we will onboard the new hypervisor host onto PCD. Navigate to Infrastructure > Cluster Hosts and click the Add New Hosts button in the top right.

Add New Hosts

Before running the steps displayed, connect to the hypervisor host VM that we just created and add host entries as we previously did for our Ubuntu Desktop environment (see the example after the screenshot below).

Hypervisor Host VM
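For example, assuming the same PCD VM IP as earlier (replace 172.16.122.183 with yours):

echo "172.16.122.183 pcd-community.pf9.io" | sudo tee -a /etc/hosts

echo "172.16.122.183 pcd.pf9.io" | sudo tee -a /etc/hosts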

Next, execute the following commands as root to trust the management plane SSL certificate.

# Export the management plane FQDN

export FQDN=<FQDN-workload-region>

# Get the DU's self-signed certificate

openssl s_client -showcerts -connect $FQDN:443 2>/dev/null </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > du.crt

# Copy it to the trusted store

cp du.crt /usr/local/share/ca-certificates

# Refresh CA certs

update-ca-certificates

# Ensure curl does not complain about the certificate

curl https://$FQDN/keystone/v3

Example:

export FQDN=pcd-community.pf9.io

openssl s_client -showcerts -connect $FQDN:443 2>/dev/null </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > du.crt

cp du.crt /usr/local/share/ca-certificates

update-ca-certificates

Updating certificates in /etc/ssl/certs...

rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL

1 added, 0 removed; done.

Running hooks in /etc/ca-certificates/update.d...

done.

curl https://$FQDN/keystone/v3

{"version": {"id": "v3.14", "status": "stable", "updated": "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": "https://pcd-community.pf9.io/keystone/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}}

Proceed to execute the commands shown in the UI on your hypervisor host VM to onboard it to PCD.

bash <(curl -s https://pcdctl.s3.us-west-2.amazonaws.com/pcdctl-setup)

Hypervisor Host VM 02

For the second command, skip the prompts for Proxy URL and MFA Token by pressing Enter. Enter the Account URL, Username, Region, and Tenant as shown in the UI. Enter your password when prompted.

pcdctl config set -u https://pcd-community.pf9.io -e admin@airctl.localnet -r Community -t service

Hypervisor Host VM 03

Finally, run the third command.

pcdctl prep-node

Hypervisor Host VM 04

Once the host provisioning process completes, you will see the host in the PCD UI under Infrastructure > Cluster Hosts.

Cluster Hosts

Creating a Virtual Machine with Persistent Storage

We will now create persistent storage that can be used by VMs. We will create a Network File System (NFS) share in the Ubuntu Desktop environment that will be made available to PCD VMs.

Install dependencies with the following command.

sudo apt install nfs-kernel-server

Create the directory to be shared and update permissions.

sudo mkdir -p /srv/nfs/shared

sudo chmod 777 /srv/nfs/shared

Update config file with NFS share configuration.

sudo nano /etc/exports

Update the contents of the file to the following. 

/srv/nfs/shared *(rw,no_subtree_check)

The asterisk allows connections from any IP address; this isn’t ideal for real-world scenarios, but we do it here for simplicity. rw allows read-write access, and no_subtree_check disables subtree checking, which avoids issues when files are renamed on the export.
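
After saving the file, you can apply the export table immediately and list the active exports; the service restart in the next step also picks up the change.

sudo exportfs -ra

sudo exportfs -v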

Restart the NFS server and check status.

sudo systemctl restart nfs-kernel-server

sudo systemctl status nfs-kernel-server

Creating a Virtual Machine with Persistent Storage
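As an optional sanity check, you can query the exports from a client such as the hypervisor host VM; showmount ships in the nfs-common package, and 192.168.1.206 stands in for your Ubuntu Desktop’s IP.

showmount -e 192.168.1.206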

We are now ready to connect to this NFS path from PCD. Navigate to Infrastructure > Cluster Blueprint. Under Storage Volume Types, type in a name for the Volume Type and click Add Configuration. 

Add Configuration

Name your volume configuration, select NFS as the Storage Driver, and use the following as the nfs_mount_point. Replace 192.168.1.206 with the IP address of your Ubuntu Desktop that we used to create the NFS share.

192.168.1.206:/srv/nfs/shared

Finally, navigate to Infrastructure > Cluster Hosts, select the host we onboarded a few steps ago, and click Edit Roles. Assign the Hypervisor role to the host by checking the box in the Hypervisor column. Under Persistent Storage, select the NFS configuration we created, then click Update Role Assignment.

Update Role Assignment.

It’s time to spin up a VM on our hypervisor host. First, we upload an image that will be used to create the VM. I am using CirrOS for its small footprint. Navigate to Images > Images and click on the Add Image button in the top right. Select the CirrOS image, make the selections as shown below, and click Add Image.

Add Image

Next, create a virtual network that we will attach to our VM. Navigate to Networks and Security > Virtual Networks and click the Create Network button in the top right. Configure the virtual network as follows.

Virtual Networks
Virtual Networks 02

Finally, navigate to Virtual Machines > Virtual Machines and click the Deploy New VM button in the top right corner. Since we want to create this VM with persistent storage, make selections as shown below to use NFS. 

Deploy New VM
Deploy New VM 02
Deploy New VM 03
Deploy New VM 04

You should now see the VM in your Virtual Machines tab.


Select the VM and click on the Console button to launch into the VM.

Console

Congratulations on making it to the end! We started with a bare-metal Ubuntu installation and deployed a VM in an enterprise-grade workload management solution. Please give PCD CE a spin and let us know how things go.

Author

  • Pushkar Mulay

     
