
Platform9

Beginner’s Guide to Private Cloud Director Community Edition

Hey everyone! 👋 So, you’re curious about setting up your own little cloud haven? I’ve got you covered with this simple guide to getting started with Private Cloud Director Community Edition. Think of it as your friendly “let’s build something cool together” walkthrough.

What’s Community Edition?

This version of Private Cloud Director is awesome for testing stuff out or if you’re just starting small. You can deploy it on a bare-metal setup or inside a virtual machine. The infrastructure and workload regions run on the same VM, and you also need a separate hypervisor host, which can itself run as a VM alongside CE. This guide will use a single bare-metal host to run both VMs. Check out the official docs to learn more about Private Cloud Director Community Edition.

My Home Lab Setup

Since I’m a bit of a hardware geek (I build water-cooled gaming PCs in my spare time!), that’s what we’re using for our example. Here’s the beast:

  • Intel i9 12900k (16 cores, 24 threads)
  • 64 GB RAM
  • 2 TB SSD
  • Nvidia 3090 Ti

But hey, feel free to use whatever you’ve got lying around. The minimum hardware requirements for a CE host are:

  • 8 CPUs
  • 32 GB RAM
  • 100 GB local storage

In order to create virtual machines, at least one hypervisor host must be available. The minimum hardware requirements for a hypervisor host are:

  • 8 CPUs
  • 16 GB RAM (suggested)
  • 100 GB local storage (suggested)
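Before creating any VMs, it's easy to sanity-check a box against these numbers with a short script. This is just a sketch; the thresholds are the CE host minimums above, so lower the RAM check for a hypervisor-only host:

```bash
#!/usr/bin/env bash
# Compare this machine against the CE host minimums (8 CPUs / 32 GB / 100 GB)
cpus=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

echo "CPUs: $cpus  RAM: ${mem_gb}GB  Free disk: ${disk_gb}GB"
[ "$cpus" -ge 8 ]     || echo "WARN: CE host needs at least 8 CPUs"
[ "$mem_gb" -ge 32 ]  || echo "WARN: CE host needs at least 32 GB RAM"
[ "$disk_gb" -ge 100 ] || echo "WARN: CE host needs at least 100 GB free storage"
```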

Let’s Get Down to Business: Deployment Steps

Okay, here’s the rundown of what we’re going to do:

  1. Bare-Metal Hypervisor Install: First, we’ll install Ubuntu Desktop on our machine (highlighted in red in the diagram below). This gives us the KVM hypervisor, which we’ll use to spin up the virtual machines that Community Edition and our hypervisor host will be deployed into.
  2. Private Cloud Director Community Edition Install: We will then install Community Edition into the VM highlighted in red.
  3. Hypervisor Host Onboarding: Finally, we will onboard a hypervisor host that will run the workload virtual machines. And yes, we will take full advantage of nested virtualization to make this happen.

This is what everything will look like once we’re done. We will onboard a single hypervisor host, but feel free to onboard more (shown with dotted lines below) if you have the resources available.


Let’s get started!

Bare-Metal Hypervisor Install

Install Ubuntu Desktop on the bare-metal host. We will use the virt-manager GUI to easily manage our virtual machines. Launch it with the following command.

virt-manager
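If KVM and virt-manager aren't present yet, something like the following sets them up. This is a sketch assuming a stock Ubuntu install; the package names are the standard Ubuntu ones:

```bash
# Install KVM, libvirt, and the virt-manager GUI (standard Ubuntu packages)
sudo apt update
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients virt-manager

# Let your user manage VMs without sudo (takes effect after re-login)
sudo usermod -aG libvirt "$USER"

# A non-zero count means the CPU exposes hardware virtualization
# (vmx = Intel, svm = AMD); nested virtualization depends on this
egrep -c '(vmx|svm)' /proc/cpuinfo
```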

Next, we will create an Ubuntu Server 22.04 virtual machine to host Private Cloud Director CE. Navigate to File > New Virtual Machine.

Follow the prompts to create the virtual machine. Ensure that the following resources are assigned to the virtual machine:

  • 8 vCPU
  • 32 GB RAM
  • 100 GB local storage
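If you'd rather script the VM creation than click through virt-manager, a rough virt-install equivalent looks like this. The VM name, ISO path, and os-variant are assumptions; adjust them to match your own download:

```bash
# 32 GB of RAM expressed in MiB, the unit virt-install expects
RAM_MIB=$((32 * 1024))

virt-install \
  --name pcd-ce \
  --vcpus 8 \
  --memory "$RAM_MIB" \
  --disk size=100 \
  --cdrom ~/Downloads/ubuntu-22.04-live-server-amd64.iso \
  --os-variant ubuntu22.04
```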

Launch the VM, and follow the prompts to install Ubuntu Server.

Follow the prompts until you reach the Guided storage configuration screen. For simplicity, uncheck LVM under the storage configuration menu.


If you choose to enable LVM, ensure that the logical volume is expanded to take up the entire physical partition after installation is complete. Installing CE on a volume with less than 50 GB of space will result in failure, even if the underlying partition is larger. 

Below are helpful commands to resize the logical volume. 

Command to expand LVM to take up the entire partition:

Bash
sudo lvresize -l +100%FREE /dev/mapper/<logical volume name>
Example
sudo lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv

Command to resize filesystem to match the logical volume:

Bash
sudo resize2fs /dev/mapper/<logical volume name>
Example
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

Private Cloud Director Community Edition Install

Now we are ready to deploy Private Cloud Director CE to the virtual machine. Launch the VM that we just created. Run the commands below to switch to root and begin the deployment process. 

Bash
# switch to root user
sudo su -
# start Community Edition installer
curl -sfL https://go.pcd.run | bash

The final deployment step is long-running and takes around 45 minutes to complete.

Once the deployment completes, you will be presented with the Private Cloud Director FQDN and login credentials. 

Next, we will add DNS entries to our Ubuntu Desktop environment so we can access the Private Cloud Director UI from here. Replace ‘172.16.122.183’ in the below example with the IP address of the Private Cloud Director Community Edition VM we just deployed. This entry resolves requests to the correct IP address when attempting to reach pcd-community.pf9.io or pcd.pf9.io.

Bash
echo "172.16.122.183 pcd-community.pf9.io" | sudo tee -a /etc/hosts
echo "172.16.122.183 pcd.pf9.io" | sudo tee -a /etc/hosts
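If you expect to re-run the setup, a slightly more defensive variant appends each entry only when it is missing, so repeated runs never duplicate lines. Run it as root; the IP and the HOSTS_FILE override are illustrative:

```bash
# Run as root (e.g. sudo -i), since /etc/hosts is root-owned. Replace
# PCD_IP with the address printed by your own Community Edition install.
PCD_IP=172.16.122.183
HOSTS_FILE=${HOSTS_FILE:-/etc/hosts}

for name in pcd-community.pf9.io pcd.pf9.io; do
  # append only if the hostname is not already present, so re-runs are safe
  grep -q "$name" "$HOSTS_FILE" || echo "$PCD_IP $name" >> "$HOSTS_FILE"
done
```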

From the Ubuntu Desktop environment, navigate to pcd-community.pf9.io in a web browser. If everything has gone well, you will see the Private Cloud Director login screen. 

Leave the Domain as default, choose ‘Use local credentials’ at the top right, and log in with the credentials provided when the Community Edition install completed.


Hypervisor Host Onboarding

Now we will create a new VM that will serve as the hypervisor host for our workload VMs. Similar to how we created the Private Cloud Director VM, create another Ubuntu Server VM with the following resources:

  • 8 CPUs
  • 16GB RAM
  • 100GB local storage

Back in the Private Cloud Director UI, navigate to Infrastructure > Cluster Blueprint. Fill out the required fields as shown below and hit Save Blueprint.


The Network Interface refers to the name of the Ethernet network interface on the Private Cloud Director host. You can view the network interfaces on your Private Cloud Director host by running the following command.

ip link show
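The `ip link show` output is fairly verbose. To list just the interface names (the value the blueprint field expects, e.g. something like enp1s0), either of these works:

```bash
# Extract just the names from ip's one-line-per-interface output
ip -o link show | awk -F': ' '{print $2}'

# Or simply list /sys/class/net; every entry there is an interface name
ls /sys/class/net
```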

Now we will onboard the new hypervisor host onto Private Cloud Director. Navigate to Infrastructure > Cluster Hosts and click the Add New Hosts button at the top right.


Before running the steps displayed, connect to the hypervisor host VM that we just created and add DNS entries like we previously did for our Ubuntu Desktop environment.


Proceed to execute commands from the UI in your hypervisor host VM to onboard this host to Private Cloud Director.

Bash
bash <(curl -s https://pcdctl.s3.us-west-2.amazonaws.com/pcdctl-setup)

For this next command, skip prompts for Proxy URL and MFA Token by hitting enter. Enter Account URL, Username, Region, and Tenant as shown on the UI. Enter your password when prompted.

Bash
pcdctl config set -u https://pcd-community.pf9.io -e admin@airctl.localnet -r Community -t service

Finally, run this third command.

Bash
pcdctl prep-node

Once the host provisioning process completes, you will see the host in the Private Cloud Director UI under Infrastructure > Cluster Hosts.


Creating a Virtual Machine with Persistent Storage

We will now create persistent storage that can be used by VMs. We will create a Network File System (NFS) share in the Ubuntu Desktop environment that will be made available to Private Cloud Director VMs.

Install dependencies with the following command.

Bash
sudo apt install nfs-kernel-server

Create the directory to be shared and update permissions.

Bash
sudo mkdir -p /srv/nfs/shared
sudo chmod 777 /srv/nfs/shared

Update the config file with the NFS share configuration.

Bash
sudo nano /etc/exports

Update the contents of the file to the following. 

/srv/nfs/shared *(rw,no_subtree_check)

The asterisk allows connections from any IP address. This isn’t ideal for any real-world scenarios, but we do this here for the sake of simplicity. rw allows read-write access.
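If you want to keep the lab share but avoid a wide-open export, you can scope it to your LAN instead. A sketch, assuming your lab network is 192.168.1.0/24 (adjust the CIDR to your own subnet):

```
/srv/nfs/shared 192.168.1.0/24(rw,no_subtree_check)
```

If you tweak the export later, `sudo exportfs -ra` re-reads /etc/exports without a full service restart.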

Restart the NFS server and check status.

Bash
sudo systemctl restart nfs-kernel-server
sudo systemctl status nfs-kernel-server

We are now ready to connect to this NFS path from Private Cloud Director. Navigate to Infrastructure > Cluster Blueprint. Under Storage Volume Types, type in a name for the Volume Type and click Add Configuration. 

Name your volume configuration, select NFS as the Storage Driver, and use the following as the nfs_mount_point. Replace 192.168.1.206 with the IP address of your Ubuntu Desktop that we used to create the NFS share.

192.168.1.206:/srv/nfs/shared

Finally, navigate to Infrastructure > Cluster Hosts, select the host we onboarded a few steps ago, and click Edit Roles. Assign the Hypervisor role to the host by checking the box in the Hypervisor column. Under Persistent Storage, select the NFS configuration we created. Click Update Role Assignment.

It’s time to spin up a VM on our hypervisor host. First, we upload an image that will be used to create the VM. I am using CirrOS for its small footprint. Download that to your Ubuntu desktop. In the user interface, navigate to Images > Images and click on the Add Image button in the top right. Select the CirrOS image, choose the qcow2 image type, and click Add Image.

Next, create a virtual network that we will attach to our VM. Navigate to Networks and Security > Virtual Networks and click on Create Network button on the top right. Configure the virtual network as follows.

Network configuration:

  • Give the network a name
  • Leave the rest of the Network Configurations defaults as set

Subnet configuration:

  • Give the subnet a name
  • Set or leave IPv4 as the default
  • Enter 10.0.0.0/16 for the Network Address CIDR

Leave everything else as set and click Create Network.

Finally, navigate to Virtual Machines > Virtual Machines and click the Deploy New VM button in the top right corner. Use the following steps to deploy the VM.

  • Give the VM a name
  • Choose the cluster
  • Boot the VM from a new 20 GB volume on the NFS volume type
  • Select the CirrOS image and click Next
  • The next screen may offer to attach available volumes to the VM; skip it by clicking Next
  • Choose the m1.tiny.vol flavor
  • Choose the virtual network that was previously created (if that step was skipped, create the virtual network before moving forward with VM deployment)
  • On the final screen of the deployment wizard, leave all of the defaults as-is; since CirrOS is not a cloud-init enabled image, you will not need to set a password during deployment
  • Click Deploy VM and Finish, if needed

You should now see the VM in your Virtual Machines tab.

Select the VM and click on the Console button to launch into the VM.


From here, you can log in using the cirros user and the default password gocubsgo. You can validate that the image was deployed on a 20 GB volume with the following command:

df -h /

The filesystem should show a size of approximately 19.4GB.

Congratulations on making it to the end! We started with a bare metal Ubuntu installation and deployed a VM in an enterprise-grade workload management solution. Please give Private Cloud Director CE a spin and let us know how things go.

Author

  • Pushkar Mulay
