
Automating Private Cloud Director: Ansible for Provisioning and Configuration

This is the third post in the Automating Private Cloud Director series. In the first post, I covered the API surface and authentication. In the second, I walked through deploying a VM with Terraform. In this post, I’d like to cover Ansible.

Where Terraform excels at declaring infrastructure state (“I want this VM, this volume, this network to exist”), Ansible adds something Terraform doesn’t do natively: guest OS configuration. Ansible can provision a VM on PCD, wait for it to come up, SSH into it, install packages, configure services, and apply security hardening, all in a single playbook run. That two-phase workflow (provision infrastructure, then configure the guest) is what makes Ansible especially useful for teams that need repeatable, end-to-end automation.

For VMware folks: this replaces the combination of PowerCLI for VM provisioning and vSphere Guest Agent / Ansible VMware modules for guest configuration.

Prerequisites

Before you start, you’ll need:

  • A running Private Cloud Director (PCD) environment (Community Edition works) with at least one hypervisor host
  • An image uploaded to the image library (this tutorial uses an image named Ubuntu 22.04)
  • A flavor (this tutorial uses an “out of the box” flavor, m1.medium.vol)
  • A physical network (this tutorial uses a physical network named vmnet)
  • An SSH key pair (the playbook will upload your public key to PCD)
  • Your PCD credentials (the RC file from Settings > API Access)
  • A machine with network connectivity to your PCD environment and SSH access to the VMs it creates

Pro tip: All of the code in this blog post is hosted in our Platform9 Community GitHub organization. Feel free to clone, fork, and submit pull requests to your heart’s content!

Install Ansible and the OpenStack Collection

macOS:

Bash
pip3 install ansible openstacksdk
ansible-galaxy collection install openstack.cloud

Ubuntu/Debian:

Bash
sudo apt install -y python3-pip
pip3 install ansible openstacksdk
ansible-galaxy collection install openstack.cloud

For other operating systems, see Ansible’s install instructions.

The openstack.cloud collection provides Ansible modules for interacting with OpenStack-compatible APIs. The openstacksdk Python library is required under the hood. These modules work with PCD because PCD exposes the same APIs.

Verify the collection is installed:

Bash
ansible-galaxy collection list | grep openstack

Authentication

The openstack.cloud modules support two authentication methods: environment variables (the RC file) or a clouds.yaml file. I’ll show both.

Option 1: Environment variables (the RC file)

This is the same approach from the previous posts. Source your RC file:

Bash
source ~/pcdctlrc

When using environment variables, the Ansible modules pick them up automatically. No auth block is needed in the playbook.

Option 2: clouds.yaml

If you prefer a file-based approach (useful when managing multiple PCD environments), create ~/.config/openstack/clouds.yaml:

YAML
clouds:
  pcd:
    auth:
      auth_url: https://<your-pcd-fqdn>/keystone/v3
      username: your-admin-user@example.com
      password: yourpassword
      project_name: service
      user_domain_name: default
      project_domain_name: default
    region_name: RegionOne
    identity_api_version: 3
    verify: false  # Set to true if using proper certificates

Then reference the cloud name in your playbook tasks with cloud: pcd. I’ll use the clouds.yaml approach in the examples below because it keeps the playbook self-contained.
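One advantage of clouds.yaml is that adding a second environment is just another entry under clouds. Here’s a sketch of what that could look like; the pcd-staging name and its auth_url are placeholders for illustration, not values from this tutorial:

```yaml
# Hypothetical second environment alongside the "pcd" entry;
# the cloud name and auth_url below are placeholders.
clouds:
  pcd-staging:
    auth:
      auth_url: https://<your-staging-pcd-fqdn>/keystone/v3
      username: your-admin-user@example.com
      password: yourpassword
      project_name: service
      user_domain_name: default
      project_domain_name: default
    region_name: RegionOne
    identity_api_version: 3
    verify: false
```

Because the playbook in this post parameterizes the cloud name as the cloud_name variable, you could then target the second environment without editing any tasks: ansible-playbook deploy-and-configure.yaml -e cloud_name=pcd-staging.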

The Playbook

Here’s the full playbook. It does three things:

  1. Provisions infrastructure on PCD (security group, key pair, VM)
  2. Waits for the VM to become reachable over SSH
  3. Configures the guest OS (updates packages, installs Nginx, starts the service)

Create a file called deploy-and-configure.yaml:

YAML
---
# Play 1: Provision infrastructure on PCD
- name: Provision a VM on Private Cloud Director
  hosts: localhost
  connection: local
  gather_facts: false

  vars:
    cloud_name: pcd
    vm_name: ansible-demo-vm
    image_name: "Ubuntu 22.04"
    flavor_name: m1.medium.vol
    network_name: vmnet
    key_name: ansible-demo-key
    security_group_name: ansible-demo-sg
    public_key_path: "{{ lookup('env', 'HOME') }}/.ssh/id_rsa.pub"

  tasks:
    - name: Create security group
      openstack.cloud.security_group:
        cloud: "{{ cloud_name }}"
        name: "{{ security_group_name }}"
        description: "Allow SSH and ICMP for Ansible demo"
        state: present

    - name: Allow SSH ingress
      openstack.cloud.security_group_rule:
        cloud: "{{ cloud_name }}"
        security_group: "{{ security_group_name }}"
        protocol: tcp
        port_range_min: 22
        port_range_max: 22
        remote_ip_prefix: 0.0.0.0/0
        direction: ingress
        state: present

    - name: Allow ICMP ingress
      openstack.cloud.security_group_rule:
        cloud: "{{ cloud_name }}"
        security_group: "{{ security_group_name }}"
        protocol: icmp
        remote_ip_prefix: 0.0.0.0/0
        direction: ingress
        state: present

    - name: Upload SSH key pair
      openstack.cloud.keypair:
        cloud: "{{ cloud_name }}"
        name: "{{ key_name }}"
        public_key_file: "{{ public_key_path }}"
        state: present

    - name: Create VM
      openstack.cloud.server:
        cloud: "{{ cloud_name }}"
        name: "{{ vm_name }}"
        image: "{{ image_name }}"
        flavor: "{{ flavor_name }}"
        key_name: "{{ key_name }}"
        security_groups:
          - "{{ security_group_name }}"
        network: "{{ network_name }}"
        boot_from_volume: true
        volume_size: 20
        terminate_volume: true
        wait: true
        timeout: 300
        state: present
      register: vm_result

    - name: Set VM IP fact
      ansible.builtin.set_fact:
        vm_ip: "{{ vm_result.server.addresses[network_name][0].addr }}"

    - name: Show VM IP address
      ansible.builtin.debug:
        msg: "VM created at {{ vm_ip }}"

    - name: Add VM to in-memory inventory
      ansible.builtin.add_host:
        name: "{{ vm_ip }}"
        groups: new_vms
        ansible_user: ubuntu
        ansible_ssh_private_key_file: "{{ lookup('env', 'HOME') }}/.ssh/id_rsa"
        ansible_ssh_common_args: "-o StrictHostKeyChecking=no"

    - name: Wait for SSH to become available
      ansible.builtin.wait_for:
        host: "{{ vm_ip }}"
        port: 22
        delay: 10
        timeout: 300

    - name: Wait for cloud-init to finish
      ansible.builtin.pause:
        seconds: 30

# Play 2: Configure the guest OS
- name: Configure the VM
  hosts: new_vms
  become: true
  gather_facts: true

  tasks:
    - name: Update apt cache
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600

    - name: Install Nginx
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Start and enable Nginx
      ansible.builtin.systemd:
        name: nginx
        state: started
        enabled: true

    - name: Verify Nginx is serving
      ansible.builtin.uri:
        url: "http://localhost"
        status_code: 200

What’s Happening Here

The playbook has two plays, and that’s the key pattern.

Play 1 runs on localhost. It talks to PCD’s API through the openstack.cloud modules to create the security group, upload your SSH public key, and deploy a volume-backed VM. After the VM is created, set_fact extracts the IP address from the server’s addresses dict (keyed by network name). Then add_host adds that IP to an in-memory inventory group called new_vms, and the playbook waits for SSH to come up.

A note on the IP extraction: with volume-backed VMs on provider networks, the access_ipv4 field on the server object isn’t always populated. The addresses dict is the reliable place to find the assigned IP. The syntax vm_result.server.addresses[network_name][0].addr grabs the first address on the network you specified.
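If the network hands out both IPv4 and IPv6 addresses, indexing [0] can grab the wrong one. A slightly more defensive version of the set_fact task is sketched below, assuming the address entries carry the standard version field that OpenStack-style APIs return:

```yaml
# Filter the addresses list down to IPv4 before taking the first entry.
- name: Set VM IP fact (first IPv4 address only)
  ansible.builtin.set_fact:
    vm_ip: >-
      {{ vm_result.server.addresses[network_name]
         | selectattr('version', 'equalto', 4)
         | map(attribute='addr')
         | first }}
```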

There’s also a 30-second pause after the SSH port check. This matters because the SSH daemon can start before cloud-init finishes, so port 22 may be open before the SSH key has been injected into the ubuntu user’s authorized_keys. Without the pause, Play 2 can try to connect and fail with an authentication error.
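A fixed pause is simple but a guess: 30 seconds may be too short on a slow host and wasted time on a fast one. One alternative sketch replaces the pause with a retrying connection check plus an explicit cloud-init wait at the top of Play 2 (this assumes cloud-init is present on the image, as it is on standard Ubuntu cloud images):

```yaml
# Candidate first tasks for Play 2, replacing the fixed 30-second pause.
- name: Wait until key-based SSH auth actually succeeds
  ansible.builtin.wait_for_connection:
    delay: 5
    timeout: 300

- name: Block until cloud-init reports it has finished
  ansible.builtin.command: cloud-init status --wait
  changed_when: false
```

wait_for_connection keeps retrying the real Ansible connection (not just the port), so it only passes once authentication works.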

Play 2 runs on the new_vms group, which now contains the VM that Play 1 just created. This play SSHes into the VM and configures it: updates packages, installs Nginx, starts the service, and verifies it’s responding. become: true elevates to root for package installation.

This two-play pattern is what makes Ansible different from Terraform for this use case. Terraform can create the VM, but it can’t SSH in and configure the guest OS as a native part of its workflow. With Ansible, infrastructure provisioning and guest configuration happen in a single ansible-playbook run.

Run the Playbook

Bash
ansible-playbook deploy-and-configure.yaml

You’ll see output for each task. When it finishes, you should have a running VM with Nginx serving on port 80. Verify in the PCD UI under Virtual Machines.

Teardown

To tear down the VM and its associated resources, create a second playbook called teardown.yaml:

YAML
---
- name: Tear down Ansible demo resources
  hosts: localhost
  connection: local
  gather_facts: false

  vars:
    cloud_name: pcd
    vm_name: ansible-demo-vm
    key_name: ansible-demo-key
    security_group_name: ansible-demo-sg

  tasks:
    - name: Delete VM
      openstack.cloud.server:
        cloud: "{{ cloud_name }}"
        name: "{{ vm_name }}"
        state: absent
        wait: true

    - name: Delete key pair
      openstack.cloud.keypair:
        cloud: "{{ cloud_name }}"
        name: "{{ key_name }}"
        state: absent

    - name: Delete security group
      openstack.cloud.security_group:
        cloud: "{{ cloud_name }}"
        name: "{{ security_group_name }}"
        state: absent

Then run it:

Bash
ansible-playbook teardown.yaml

Using Environment Variables Instead of clouds.yaml

If you’d rather use the RC file approach instead of clouds.yaml, source the file and remove all cloud: parameters from the playbook tasks. The modules will pick up the OS_* environment variables automatically:

YAML
    - name: Create VM
      openstack.cloud.server:
        name: "{{ vm_name }}"
        image: "{{ image_name }}"
        flavor: "{{ flavor_name }}"
        # ... same parameters, no cloud: line

Both approaches work. Use whichever fits your workflow.

For VMware Admins: How This Maps

VMware Workflow → PCD + Ansible Equivalent

  • PowerCLI New-VM + vSphere Guest Agent for guest config → openstack.cloud.server for provisioning + standard Ansible modules for guest config
  • Aria Automation blueprints for provisioning + guest config → Ansible playbooks (single tool for both layers)
  • VM customization specs (hostname, network, domain join) → cloud-init user data or Ansible post-provisioning tasks
  • vCenter templates + linked clones → PCD images + volume-backed VMs
  • Ansible community.vmware collection → Ansible openstack.cloud collection (same Ansible skills, different provider)

If you’re already using Ansible with community.vmware modules today, the transition is straightforward. The playbook structure, inventory patterns, and guest configuration modules are identical. Only the provisioning tasks change from community.vmware.vmware_guest to openstack.cloud.server.
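To make that swap concrete, here’s a rough before/after for the provisioning task alone. The community.vmware parameters are illustrative placeholders, not values from a real environment:

```yaml
# Before: provisioning with community.vmware (placeholder values)
- name: Create VM on vSphere
  community.vmware.vmware_guest:
    hostname: vcenter.example.com        # placeholder vCenter
    name: ansible-demo-vm
    template: ubuntu-22-04-template      # placeholder template
    state: poweredon

# After: the PCD equivalent used in this post
- name: Create VM on PCD
  openstack.cloud.server:
    cloud: pcd
    name: ansible-demo-vm
    image: "Ubuntu 22.04"
    flavor: m1.medium.vol
    network: vmnet
    state: present
```

Everything after provisioning, such as the apt and systemd tasks in Play 2, is unchanged between the two providers.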

What’s Next

You’ve now got a working Ansible setup that can provision infrastructure on PCD and configure the guest OS in a single playbook run. From here, you can:

  • Add more provisioning modules. The openstack.cloud collection includes modules for networks, subnets, routers, floating IPs, volumes, images, and more. Anything you can create in the PCD UI, you can automate with Ansible.
  • Expand guest configuration. The playbook in this post installs Nginx, but Play 2 can do anything Ansible can do: harden SSH, configure firewalls, join a domain, deploy application code, set up monitoring agents. The full library of Ansible built-in modules is available once you’re SSHed in.
  • Use roles for reusable configuration. If you’re configuring the same stack across multiple VMs, Ansible roles let you package tasks, templates, and handlers into reusable units. A “web server” role, a “database” role, a “hardening” role, composed as needed per playbook.
  • Combine with Terraform. Terraform handles the infrastructure state (networks, security groups, volumes) and Ansible handles post-provisioning configuration. Many teams use terraform apply first, then run Ansible against the provisioned inventory. The two tools complement each other.
  • Build dynamic inventory. Instead of using add_host to build in-memory inventory, you can use the openstack.cloud.inventory plugin to dynamically discover VMs in PCD and group them by metadata, network, or naming convention.
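The dynamic-inventory idea above can be sketched as a small plugin config file. Save it as openstack.yaml (the plugin is picky about the filename); the options shown are standard ones from the openstack.cloud.openstack inventory plugin:

```yaml
# openstack.yaml -- dynamic inventory config for the openstack.cloud plugin
plugin: openstack.cloud.openstack
# Pull extra per-server details into hostvars
expand_hostvars: true
# Fail loudly instead of silently returning a partial inventory
fail_on_errors: true
# Only query the "pcd" entry from clouds.yaml
only_clouds:
  - pcd
```

Test it with ansible-inventory -i openstack.yaml --list, or pass it to ansible-playbook -i so plays can target discovered groups instead of add_host output.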

In the next post in this series, I’ll cover the PCD Application Catalog, which lets you wrap Terraform modules in a self-service UI so developers can deploy pre-approved environments without writing any code.

Thanks for reading along with me!

Author

  • Damian Karlson

    Damian leads technical product marketing and community engagement for Private Cloud Director & vJailbreak. Prior to joining Platform9, he had many years at VMware, EMC, and Dell focused on delivering powerful cloud solutions & services.
