Integrating Enterprise ITSM Approval Workflows with Private Cloud Director

Motivation

Enterprise IT teams in manufacturing and other regulated industries need the ability to review, approve, and customize workload provisioning requests made by a broad range of self-service users. These requests may be reviewed to ensure policy conformance, verify resource availability, or apply custom provisioning steps.

Previously, Enterprise IT teams could use VMware vCenter Orchestrator (now called Aria Automation Orchestrator) for this. With recent licensing changes pushing VMware customers to look for alternatives, they also need other ways to achieve a similar level of control and flexibility in their workload provisioning workflows.

Architecture

This document proposes a solution architecture using Platform9’s Private Cloud Director as the private cloud platform, ServiceNow as the Enterprise ITSM platform, and Apache Airflow as a workflow manager to provide extensibility and customizability.

Private Cloud Director (PCD) is Platform9’s private cloud platform, which provides comprehensive, VMware-like management for virtualized environments. PCD exposes a comprehensive API (authentication and authorization via OpenStack Keystone, compute via Nova, storage via Cinder, and networking via Neutron). Because it is compatible with open-source OpenStack APIs, PCD can also be driven by Terraform, Ansible, and other automation tools.
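Because PCD speaks the OpenStack APIs, it can be described to openstacksdk-based tooling (including the Airflow DAG shown later) with a standard clouds.yaml entry. A minimal sketch, where the URL, region, project, and credential values are all placeholders:

```yaml
# clouds.yaml - one entry per PCD region; values below are examples only
clouds:
  pcd-east:
    auth:
      auth_url: https://pcd.example.com/keystone/v3
      username: svc-provisioner
      password: "<secret>"
      project_name: provisioning
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

Any tool built on openstacksdk (or the OpenStack CLI) can then address the environment by name, e.g. `openstack.connect(cloud="pcd-east")`.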

ServiceNow is the leading Enterprise ITSM solution. It includes a self-service portal with a catalog of available options that users can request. This catalog can be extended with objects from third-party systems, such as the virtualization offerings managed by PCD. ServiceNow also includes a built-in workflow system; however, in this architecture we use Apache Airflow to manage the workflow for greater flexibility.

Apache Airflow is used by thousands of businesses worldwide and has matured over the years into a comprehensive workflow manager. Jobs in Airflow are modeled as Directed Acyclic Graphs (DAGs), each composed of individual Tasks sequenced by dependencies. This makes it easy to build sophisticated workflows that span many Tasks, with conditionals, serial or parallel execution, and the ability to resume a run even if an individual Task fails.

System Design

To illustrate this integration, we will demonstrate a provisioning workflow in which:

  1. A user goes to their ServiceNow catalog and requests a provisioning operation.
  2. The catalog queries their private cloud environment and shows the available resources, such as Virtual Machine Images (templates), Flavors (T-shirt-size configurations), and Networks.
  3. The user submits a request, which triggers ServiceNow’s automated validation rules.
  4. Upon successful validation, the request is sent to an approver.
  5. Upon approval, the provisioning operation completes and the user receives the details of the provisioned resources via email.
  6. If approval is denied, the user is notified via email.

Sequence of Events

The diagram below shows the components being used and how they interact, along with the overall task execution flow between ServiceNow and Airflow.

Sequence-of-Events

ServiceNow to Airflow Resource Provisioning Workflow

Initial Form Load: When a user opens the ServiceNow form, an ‘OnLoad’ client script automatically triggers. This script contacts a custom Flask application that reads a pre-configured ‘clouds.yaml’ file to retrieve available PCD (Private Cloud Director) environments. The Flask app returns this list, which populates a dropdown menu for cloud selection.

Cloud Selection: When the user selects a cloud from the dropdown, an ‘OnChange’ client script triggers and calls the Flask application again, this time passing the selected cloud name. The Flask app responds with cloud-specific details (images, networks, flavors, etc.) that populate additional form fields.
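The two client-script calls above can be sketched as a small Flask backend. This is a hypothetical illustration, not the actual app: the endpoint paths and the clouds.yaml location are assumptions, and the per-cloud lookup is stubbed where a real implementation would query PCD via openstacksdk.

```python
import yaml
from flask import Flask, jsonify

CLOUDS_YAML_PATH = "clouds.yaml"  # assumed location of the pre-configured file

app = Flask(__name__)

def _load_clouds():
    # clouds.yaml has a top-level "clouds:" mapping keyed by cloud name
    with open(CLOUDS_YAML_PATH) as f:
        return yaml.safe_load(f).get("clouds", {})

@app.route("/clouds")
def list_clouds():
    # Called by the OnLoad client script to populate the cloud dropdown.
    return jsonify(sorted(_load_clouds()))

@app.route("/clouds/<name>")
def cloud_details(name):
    # Called by the OnChange client script with the selected cloud name.
    # A real implementation would call openstack.connect(cloud=name) and
    # list images, flavors, and networks from the live environment.
    if name not in _load_clouds():
        return jsonify({"error": "unknown cloud"}), 404
    return jsonify({"images": [], "flavors": [], "networks": []})
```

Keeping this lookup in a separate service means ServiceNow never needs PCD credentials; only the Flask app reads clouds.yaml.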

Form Submission and Approval: After filling out all required details, the user clicks “Order Now” to submit the form. The request automatically routes through ServiceNow’s approval logic (configured for auto-approval in this demo).

Data Transfer to Airflow: Upon approval, a ServiceNow Business Rule activates, collecting all form data and creating a JSON payload. This payload is sent via ServiceNow’s REST Message entity to Airflow’s DAG API endpoint, using proper authentication headers (Authorization: Bearer token, Content-Type: application/json).
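For illustration, here is a Python equivalent of what the Business Rule’s REST Message effectively does. The field names mirror the demo form and the Airflow URL is a placeholder; the DAG id matches the code listing below, and the trigger call uses Airflow’s stable REST API, which accepts the form data as the DAG run `conf`.

```python
import json
import requests

def build_payload(form):
    # Field names are assumptions matching the demo catalog item.
    return {
        "cloud": form["cloud"],
        "image": form["image"],
        "flavor": form["flavor"],
        "network": form["network"],
        "user_email": form["user_email"],
    }

def trigger_dag(form, airflow_url, token):
    # POST /api/v1/dags/{dag_id}/dagRuns triggers a run; the payload
    # is wrapped in "conf" and read by the DAG via dag_run.conf.
    return requests.post(
        f"{airflow_url}/api/v1/dags/openstack_vm_creator_approval_flow/dagRuns",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        data=json.dumps({"conf": build_payload(form)}),
    )
```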

Airflow Processing: The Airflow DAG receives the JSON payload and begins executing its tasks sequentially. As part of this process, an approval email is sent to the configured administrator containing two action links: “Approve” and “Deny.”

Final Outcomes:

  • If approved: The system creates the requested resource (VM) in the specified PCD environment, then sends a success notification to the user with the new resource details.
  • If denied: A denial notification is sent to the user, and the workflow terminates.

This creates an end-to-end automated provisioning system that bridges ServiceNow’s user interface with Airflow’s orchestration capabilities and PCD’s resource management.

ServiceNow Form:

User submits the request.

ServiceNow-Form1
ServiceNow-Form2

Supplement: Airflow

Sequence of tasks:

Supplement-Airflow

Approval Flow:

1.  The admin receives an email to approve/deny the request.

Approval-Flow-Request

2. After approval, the resource (a VM in this case) gets created in PCD.

Approval-Flow-Post-approval

3. A success email (or a denial email, if the admin denies the request) is sent back to the user, along with the resource name.

Approval-Flow-Success

Airflow DAG code (Python):

from airflow import DAG
from airflow.operators.python import PythonOperator, BranchPythonOperator
from airflow.operators.email import EmailOperator
from airflow.operators.empty import EmptyOperator
from airflow.utils.trigger_rule import TriggerRule
from datetime import datetime, timedelta
import openstack
import requests
import time
import os
from openstack.config import OpenStackConfig
from dotenv import load_dotenv

load_dotenv()

AIRFLOW_HOST = os.getenv("AIRFLOW_HOST", "localhost")
APPROVAL_SERVER_PORT = os.getenv("APPROVAL_SERVER_PORT", "5000")
APPROVAL_SERVER_URL = f"http://{AIRFLOW_HOST}:{APPROVAL_SERVER_PORT}"
CLOUDS_YAML_PATH = os.getenv("CLOUDS_YAML_PATH")
DEFAULT_CLOUD = os.getenv("DEFAULT_CLOUD")
ADMIN_EMAIL = os.getenv("ADMIN_EMAIL")

default_args = {
    "retries": 0,
    "retry_delay": timedelta(minutes=1)
}

def get_inputs(**context):
    conf = context["dag_run"].conf
    context['ti'].xcom_push(key='input_conf', value=conf)

def send_admin_email(**context):
    conf = context['ti'].xcom_pull(key='input_conf', task_ids='get_input')
    user_email = conf.get("user_email")
    dag_run_id = context["dag_run"].run_id
    approve_url = f"{APPROVAL_SERVER_URL}/approval?dag_run_id={dag_run_id}&status=approve"
    deny_url = f"{APPROVAL_SERVER_URL}/approval?dag_run_id={dag_run_id}&status=deny"

    html_content = f"""
    <h3>VM Creation Approval Request</h3>
    <p>User <b>{user_email}</b> has requested a VM.</p>
    <p><a href="{approve_url}">Approve</a> | <a href="{deny_url}">Deny</a></p>
    """

    # Send email via EmailOperator
    email = EmailOperator(
        task_id='send_admin_email',
        to=ADMIN_EMAIL,
        subject="Airflow Approve Request!",
        html_content=html_content
    )
    return email.execute(context=context)

def wait_for_approval(**context):
    dag_run_id = context["dag_run"].run_id
    timeout = 2 * 60  # 2 minutes
    poll_interval = 15  # seconds
    elapsed = 0

    while elapsed < timeout:
        resp = requests.get(f"{APPROVAL_SERVER_URL}/get_approval", params={"dag_run_id": dag_run_id})
        status = resp.json().get("status")
        if status in ["approve", "deny"]:
            context['ti'].xcom_push(key="approval_status", value=status)
            return
        time.sleep(poll_interval)
        elapsed += poll_interval

    context['ti'].xcom_push(key="approval_status", value="timeout")

def decide_next(**context):
    status = context['ti'].xcom_pull(key="approval_status", task_ids='wait_for_approval')
    if status == "approve":
        return "create_vm"
    else:
        return "send_denial_email"

def create_vm(**context):
    conf = context['ti'].xcom_pull(key='input_conf', task_ids='get_input')
    flavor = conf["flavor"]
    image = conf["image"]
    network = conf["network"]
    cloud = conf.get("cloud", DEFAULT_CLOUD)

    clouds = OpenStackConfig(config_files=[CLOUDS_YAML_PATH])
    cloud_config = clouds.get_one(cloud)
    conn = openstack.connection.Connection(config=cloud_config)

    flavor_obj = conn.compute.find_flavor(flavor)
    image_obj = conn.compute.find_image(image)
    network_obj = conn.network.find_network(network)

    if not all([flavor_obj, image_obj, network_obj]):
        raise Exception("Could not find resources.")

    vm_name = f"airflow-vm-{datetime.now().strftime('%Y%m%d%H%M%S')}"
    server = conn.compute.create_server(
        name=vm_name,
        image_id=image_obj.id,
        flavor_id=flavor_obj.id,
        networks=[{"uuid": network_obj.id}]
    )
    conn.compute.wait_for_server(server)
    context['ti'].xcom_push(key='vm_name', value=vm_name)

def send_success_email(**context):
    conf = context['ti'].xcom_pull(key='input_conf', task_ids='get_input')
    user_email = conf.get("user_email")
    vm_name = context['ti'].xcom_pull(key='vm_name', task_ids='create_vm')

    html_content = f"""
    <h3>VM Created Successfully</h3>
    <p>VM Name: <b>{vm_name}</b></p>
    """

    email = EmailOperator(
        task_id='send_success_email',
        to=user_email,
        subject="Your VM has been created!",
        html_content=html_content
    )
    return email.execute(context=context)

def send_denial_email(**context):
    conf = context['ti'].xcom_pull(key='input_conf', task_ids='get_input')
    user_email = conf.get("user_email")
    html_content = "<h3>VM Creation Denied</h3><p>Admin has denied the request.</p>"

    email = EmailOperator(
        task_id='send_denial_email',
        to=user_email,
        subject="VM Creation Request Denied!",
        html_content=html_content
    )
    return email.execute(context=context)

with DAG(
    dag_id="openstack_vm_creator_approval_flow",
    start_date=datetime(2025, 7, 25),
    schedule=None,
    catchup=False,
    default_args=default_args,
    tags=["openstack", "approval", "vm"]
) as dag:

    get_input = PythonOperator(
        task_id="get_input",
        python_callable=get_inputs
    )

    send_admin_email = PythonOperator(
        task_id="send_admin_email",
        python_callable=send_admin_email
    )

    wait_for_approval = PythonOperator(
        task_id="wait_for_approval",
        python_callable=wait_for_approval
    )

    decide = BranchPythonOperator(
        task_id="decide_next_step",
        python_callable=decide_next
    )

    create_vm = PythonOperator(
        task_id="create_vm",
        python_callable=create_vm
    )

    send_success_email = PythonOperator(
        task_id="send_success_email",
        python_callable=send_success_email,
        trigger_rule=TriggerRule.ALL_SUCCESS
    )

    send_denial_email = PythonOperator(
        task_id="send_denial_email",
        python_callable=send_denial_email,
        trigger_rule=TriggerRule.NONE_FAILED_MIN_ONE_SUCCESS
    )

    end = EmptyOperator(task_id="end", trigger_rule=TriggerRule.ALL_DONE)

    get_input >> send_admin_email >> wait_for_approval >> decide
    decide >> create_vm >> send_success_email >> end
    decide >> send_denial_email >> end
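The DAG above polls an approval server at `APPROVAL_SERVER_URL`, and the Approve/Deny links in the admin email point at the same server. That component is not shown above; here is a minimal sketch of what it could look like. The endpoint paths match the URLs used in the DAG, but the in-memory store is an assumption — a real deployment would persist decisions and authenticate the links.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
_decisions = {}  # dag_run_id -> "approve" | "deny" (in-memory, demo only)

@app.route("/approval")
def record_decision():
    # Target of the Approve/Deny links in the admin email.
    _decisions[request.args["dag_run_id"]] = request.args["status"]
    return "Decision recorded. You can close this tab."

@app.route("/get_approval")
def get_decision():
    # Polled by the DAG's wait_for_approval task every 15 seconds.
    run_id = request.args.get("dag_run_id")
    return jsonify({"status": _decisions.get(run_id, "pending")})
```

Run it with `flask run` on the port configured in `APPROVAL_SERVER_PORT`; until a link is clicked, `/get_approval` reports "pending", which keeps the DAG polling until its timeout.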

Supplement: Video demo

GDrive link for video demo
