This is the second post in the Automating Private Cloud Director series. In the first post, I covered the Private Cloud Director (PCD) API surface, how authentication works under the hood, and using the CLI. In this post, I’d like to walk through something practical: installing Terraform, pointing it at your PCD environment, and deploying a VM with a network, security group, and persistent volume in a single main.tf.
Private Cloud Director’s core services (Compute, Block Storage, Networking, Identity, Image Library) expose standard OpenStack-compatible APIs. That means the HashiCorp OpenStack Terraform provider works out of the box. There’s no PCD-specific provider to install, no proprietary plugin. If you’ve used Terraform before, this will feel familiar.
For VMware folks: this replaces the vSphere Terraform provider workflow, but uses a standard, community-maintained provider instead of a vendor-specific one. Your Terraform skills carry over; only the provider block changes.
Prerequisites
Before you start, you’ll need:
- A running Private Cloud Director environment (Community Edition works) with at least one hypervisor host
- An image uploaded to the image library (this tutorial uses an image named Ubuntu 22.04)
- A flavor (this tutorial uses an “out of the box” flavor, m1.medium.vol)
- A physical network (this tutorial uses a physical network named vmnet)
- A machine with network access to your PCD environment (your laptop, a jump box, etc.)
Pro tip: All of the code in this blog post is hosted in our Platform9 – Community GitHub organization. Feel free to clone, fork, and submit pull requests to your heart’s desire!
Install Terraform
macOS (Homebrew):
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
Ubuntu/Debian:
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(grep -oP '(?<=UBUNTU_CODENAME=).*' /etc/os-release || lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
For other operating systems, see Terraform’s install instructions.
Verify it’s installed:
terraform -version
Get Your PCD Credentials
Terraform authenticates to PCD using the same credentials as the OpenStack CLI and pcdctl. If you followed the first post in this series, you already have your RC file set up. If not, here’s the quick version:
- Log in to the PCD UI.
- Navigate to Settings > API Access.
- Copy the pcdctl RC contents displayed on the page and save them to a local file (e.g., ~/pcdctlrc).
The file contains environment variables that look like this:
export OS_AUTH_URL=https://<your-pcd-fqdn>/keystone/v3
export OS_PROJECT_NAME=service
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_DOMAIN_NAME=default
export OS_USERNAME=your-admin-user@example.com
export OS_REGION_NAME=RegionOne
export OS_IDENTITY_API_VERSION=3
Add your password and (if your environment uses self-signed certificates) the insecure flag:
# Add these lines to your pcdctlrc file
export OS_PASSWORD=yourpassword
export OS_INSECURE=true
Source the file to load the variables into your shell:
source ~/pcdctlrc
Create a Terraform Project
Create a working directory and an initial main.tf:
mkdir ~/pcd-terraform && cd ~/pcd-terraform
Create main.tf with the OpenStack provider. The provider reads credentials from the environment variables you just sourced, so no secrets go in the file:
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 3.0"
    }
  }
}

provider "openstack" {
  # Credentials come from OS_* environment variables.
  # No values needed here.
}

Initialize the project to download the provider:
terraform init
You should see output like:
Initializing provider plugins...
- Finding terraform-provider-openstack/openstack versions matching "~> 3.0"...
- Installing terraform-provider-openstack/openstack v3.x.x...
Terraform has been successfully initialized!

Verify Connectivity
Before building anything, let’s confirm Terraform can talk to your PCD. Add a data source that looks up your current project:
data "openstack_identity_project_v3" "current" {
  name = "service"
}

output "project_id" {
  value = data.openstack_identity_project_v3.current.id
}

Run:
terraform plan

terraform plan is a dry run. It reads your .tf files, compares the desired state against what currently exists in PCD, and shows you what it would do without actually doing anything. If you see a project ID in the output, you’re authenticated and connected. Remove the data source and output after verifying; they were just for the connectivity check.
Deploy a VM
Now for the real thing. Replace the contents of main.tf with the following. Note: adjust the variable values to match your environment if they differ from the example.
This creates a security group, a bootable volume from an image, and a VM attached to an existing network:
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 3.0"
    }
  }
}

provider "openstack" {}

# --- Variables ---
variable "image_name" {
  default = "Ubuntu 22.04"
}

variable "flavor_name" {
  default = "m1.medium.vol"
}

variable "network_name" {
  default = "vmnet"
}

variable "volume_size" {
  description = "Root volume size in GB"
  default     = 20
}

variable "vm_name" {
  default = "tf-demo-vm"
}

# --- Look up existing resources ---
data "openstack_images_image_v2" "ubuntu" {
  name        = var.image_name
  most_recent = true
}

data "openstack_networking_network_v2" "vmnet" {
  name = var.network_name
}

# --- Security group: allow SSH and ICMP ---
resource "openstack_networking_secgroup_v2" "tf_demo_sg" {
  name        = "tf-demo-sg"
  description = "Allow SSH and ICMP for Terraform demo"
}

resource "openstack_networking_secgroup_rule_v2" "ssh" {
  security_group_id = openstack_networking_secgroup_v2.tf_demo_sg.id
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
}

resource "openstack_networking_secgroup_rule_v2" "icmp" {
  security_group_id = openstack_networking_secgroup_v2.tf_demo_sg.id
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "icmp"
  remote_ip_prefix  = "0.0.0.0/0"
}

# --- Bootable volume from image ---
resource "openstack_blockstorage_volume_v3" "root_vol" {
  name     = "${var.vm_name}-root"
  size     = var.volume_size
  image_id = data.openstack_images_image_v2.ubuntu.id
}

# --- VM instance ---
resource "openstack_compute_instance_v2" "demo_vm" {
  name            = var.vm_name
  flavor_name     = var.flavor_name
  security_groups = [openstack_networking_secgroup_v2.tf_demo_sg.name]

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.root_vol.id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = true
  }

  network {
    name = data.openstack_networking_network_v2.vmnet.name
  }
}

# --- Outputs ---
output "vm_id" {
  value = openstack_compute_instance_v2.demo_vm.id
}

output "vm_ip" {
  value = openstack_compute_instance_v2.demo_vm.access_ip_v4
}

output "security_group_id" {
  value = openstack_networking_secgroup_v2.tf_demo_sg.id
}

output "volume_id" {
  value = openstack_blockstorage_volume_v3.root_vol.id
}

Some notes on what’s happening here:
- Data sources (data blocks) look up existing resources in PCD by name. We’re finding the Ubuntu 22.04 image and the vmnet network rather than creating them. This is how you reference infrastructure that already exists.
- Security group creates a new group with two rules: SSH (TCP 22) and ICMP (ping). The default security group in PCD denies all inbound traffic, so you need at least SSH to access the VM.
- Volume creates a 20 GB bootable volume from the Ubuntu image. This is persistent block storage. If the VM is deleted, the volume can be preserved (we’ve set delete_on_termination = true here for cleanup, but in production you’d set it to false).
- Instance boots from the volume, attaches to vmnet, and applies the security group.
- Variables make everything configurable. You can override any of them without editing the file by passing -var "vm_name=my-other-vm" on the command line.
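The variable overrides mentioned above can also live in a terraform.tfvars file, which Terraform loads automatically from the working directory. A sketch with placeholder values (adjust them for your environment):

```hcl
# terraform.tfvars -- example overrides; all values here are placeholders
image_name   = "Ubuntu 22.04"
flavor_name  = "m1.medium.vol"
network_name = "vmnet"
volume_size  = 40
vm_name      = "web-01"
```

Any variable not set in the file falls back to the default declared in main.tf.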
Ephemeral vs. volume-backed: when to use which
The main.tf above uses a volume-backed VM. That’s the production pattern: the root disk lives on persistent block storage, survives independently of the VM, and can be preserved if the VM is deleted (set delete_on_termination = false). If a hypervisor fails, the VM can be recovered on another host because the volume is accessible from any host in the cluster.
If you’re spinning up dev/test VMs or stateless workloads where the data doesn’t need to persist (or running CE without persistent storage configured), you can use ephemeral storage instead. The config is simpler because you don’t need the bootable volume resource at all.
# --- VM instance ---
resource "openstack_compute_instance_v2" "demo_vm" {
  name            = var.vm_name
  flavor_name     = "m1.medium"
  image_name      = var.image_name
  security_groups = [openstack_networking_secgroup_v2.tf_demo_sg.name]

  network {
    name = data.openstack_networking_network_v2.vmnet.name
  }
}

With this approach, the Compute Service creates an ephemeral root disk from the image on the hypervisor’s local storage. The disk size is determined by the flavor. When the VM is deleted, the ephemeral disk is deleted with it.
There’s a trade-off though – if the hypervisor fails and the ephemeral storage path isn’t on shared storage, the VM can’t be recovered on another host. For anything you care about keeping, use the volume-backed approach.
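To get the keep-the-volume behavior with the volume-backed config shown earlier, the only change is the flag inside block_device (a sketch of that one block):

```hcl
  block_device {
    uuid                  = openstack_blockstorage_volume_v3.root_vol.id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false  # keep the volume if the instance is deleted
  }
```

One caveat: delete_on_termination only governs what happens when the instance itself is deleted through the Compute API. terraform destroy will still remove the volume, because the volume is a resource Terraform manages directly.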
Apply and Verify
First, preview what Terraform will do:
terraform plan

You’ll see output showing the resources to be created (prefixed with +) and the total count of actions at the bottom:
Plan: 5 to add, 0 to change, 0 to destroy.

The five resources are: security group, two security group rules, volume, and instance.
If the plan looks right, apply it:
terraform apply

Terraform shows the plan one more time and asks for confirmation. Type yes. It will create the resources in dependency order (security group and volume first, then the instance that depends on both).
When it’s done, you’ll see the outputs:
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
Outputs:
vm_id = "abc12345-..."
vm_ip = "10.x.x.x"
security_group_id = "def67890-..."
volume_id = "abc12345-..."

Verify in the PCD UI: navigate to Virtual Machines and you should see tf-demo-vm running. Check Volumes and you’ll see the 20 GB root volume. Check Networks and Security > Security Groups and you’ll see tf-demo-sg.
Clean Up
When you’re done, tear everything down:
terraform destroy

Terraform shows what it will delete, asks for confirmation, and removes everything in reverse dependency order. The VM is deleted first, then the volume, then the security group rules and group.
This is one of Terraform’s biggest advantages: your infrastructure is described in a file, and terraform destroy cleanly tears down everything Terraform created. No orphaned volumes, no leftover security groups.
What’s Next
You’ve now got a working Terraform setup that can provision infrastructure on PCD. From here, you can:
- Add more resources. The OpenStack Terraform provider supports networks, subnets, routers, floating IPs, load balancers, DNS zones, and more. Anything you can create in the PCD UI, you can automate with Terraform.
- Use variables files. Put your environment-specific values in a terraform.tfvars file to keep your main.tf clean and reusable across environments.
- Manage state remotely. For team workflows, configure a remote backend (S3, Consul, etc.) so Terraform state is shared and locked.
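For the remote-state item, here is a minimal S3 backend sketch with state locking; the bucket, key, region, and table names are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"     # placeholder bucket name
    key            = "pcd/terraform.tfstate"  # path to the state object
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"        # optional: state locking
  }
}
```

After adding a backend block, run terraform init -migrate-state to move your existing local state to the new backend.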
In the next post in this series, I’ll cover using Ansible with PCD to automate both provisioning and guest OS configuration in a single playbook.
Thanks!