# Unable to Create VM on Compute Host - Aggregate Sync Issue

## Problem

When attempting to create a VM on a newly added compute node, the following error is observed in the UI or through the CLI:

```
No valid host was found. There are not enough hosts available.
```

Additionally, the nova-scheduler and nova-api-osapi pod logs show the following messages:

```
$ kubectl logs deploy/nova-scheduler -n <NS>
Got no allocation candidates from the Placement API. This could be due to insufficient resources or a temporary occurrence as compute nodes start up.
```

```
$ kubectl logs deploy/nova-api-osapi -n <NS>

WARNING nova.compute.api [None [REQ_ID] [RESOURCE_UUID] - - default default] Failed to associate [PROVIDER_NAME] with a placement aggregate: No such resource provider [PROVIDER_NAME].. This may be corrected after running nova-manage placement sync_aggregates.: nova.exception.ResourceProviderNotFound: No such resource provider [PROVIDER_NAME].
```

The above log messages indicate that the resource provider mapping for the new or existing compute node is incorrect.
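
If admin CLI access is available, a quick first check is whether Placement knows about the host's resource provider at all. The sketch below assumes the osc-placement CLI plugin is installed; `<HOSTNAME>` stands for the compute node's resource provider name:

```bash
# List the resource provider registered for the affected host
# (requires the osc-placement plugin; returns nothing if the provider is missing)
$ openstack resource provider list --name <HOSTNAME>
```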

## Environment

* Platform9 Private Cloud Director - v2025.4 and Higher
* Self-Hosted Private Cloud Director Virtualization - v2025.4 and Higher
* Component - Compute

## Cause

This issue may occur due to host-aggregate synchronisation problems. Specifically, the compute host may not be properly recognised by the Placement API as part of the aggregate, which leads to the scheduler being unable to locate valid hosts for allocation.
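
One way to confirm this from the CLI, again assuming the osc-placement plugin is available, is to compare Nova's view of the aggregate with Placement's view of the host's resource provider:

```bash
# Nova's view: the hosts the aggregate contains
$ openstack aggregate show <AGGREGATE_NAME_OR_ID>

# Placement's view: the aggregates associated with the host's resource provider
$ openstack resource provider aggregate list <RESOURCE_PROVIDER_UUID>
```

If the host appears in the Nova aggregate but its resource provider shows no matching placement aggregate, the two are out of sync.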

> For SaaS users who do not have access to the DB, an alternative way to identify the affected host is to migrate an existing test VM to the new or existing hosts (see the example below).
>
> For Self-Hosted PCD, this can be validated using the DB steps that follow.
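
For example, a test migration might look like the following (the `--host` option is an assumption; it requires admin rights and a sufficiently recent compute API microversion):

```bash
# Try to live-migrate a disposable test VM to the suspect host;
# a "No valid host" failure points to that host being affected
$ openstack server migrate --live-migration --host <HOSTNAME> <TEST_VM_ID>

# Confirm where the VM actually landed
$ openstack server show <TEST_VM_ID> -c OS-EXT-SRV-ATTR:host -c status
```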

Validate whether the resource mapping is complete by comparing the data of the existing nodes with that of the new node in the placement DB, using the following steps:

1. Log into the database

```bash
$ kubectl exec -it deploy/mysqld-exporter -n <REGION_NAMESPACE> -c mysqld-exporter -- mysql resmgr -u root -p<password>
```

2. Switch to the placement DB

```sql
MySQL [resmgr]> use placement;
```

3. Check whether the resource provider mapping has been created in the *resource\_provider\_aggregates* and *resource\_providers* tables. If the resource details are not mapped correctly, the *created\_at* and *updated\_at* timestamps in the two tables below will be inconsistent with the timestamp at which the node was onboarded.

```sql
MySQL [placement]> select * from  resource_provider_aggregates;
+---------------------+------------+----------------------+--------------+
| created_at          | updated_at | resource_provider_id | aggregate_id |
+---------------------+------------+----------------------+--------------+
| 2025-06-07 19:15:45 | NULL       |                  144 |            3 |
| 2025-06-07 19:15:10 | NULL       |                  151 |            3 |
| 2025-06-09 22:45:13 | NULL       |                 4671 |            3 |
| 2025-06-09 23:33:38 | NULL       |                 4672 |            3 |
| 2025-06-09 23:51:00 | NULL       |                 4673 |            3 |
+---------------------+------------+----------------------+--------------+
```

```sql
MySQL [placement]> select * from resource_providers;
+---------------------+---------------------+------+---------------------------+---------------------+------------+------------------+--------------------+
| created_at          | updated_at          | id   | uuid                      | name                | generation | root_provider_id | parent_provider_id |
+---------------------+---------------------+------+---------------------------+---------------------+------------+------------------+--------------------+
| 2025-05-28 12:42:40 | 2025-06-05 17:23:20 |  144 | [resource_provider_UUID1] | [HOST1.EXAMPLE.COM] |        432 |              144 |               NULL |
| 2025-05-28 12:59:01 | 2025-06-10 17:38:18 |  151 | [resource_provider_UUID2] | [HOST2.EXAMPLE.COM] |        711 |              151 |               NULL |
| 2025-06-09 08:16:12 | 2025-06-10 17:23:21 | 4671 | [resource_provider_UUID3] | [HOST3.EXAMPLE.COM] |       1058 |             4671 |               NULL |
| 2025-06-09 08:16:52 | 2025-06-10 17:38:18 | 4672 | [resource_provider_UUID4] | [HOST4.EXAMPLE.COM] |        951 |             4672 |               NULL |
| 2025-06-09 08:55:40 | 2025-06-10 17:23:20 | 4673 | [resource_provider_UUID5] | [HOST5.EXAMPLE.COM] |        434 |             4673 |               NULL |
+---------------------+---------------------+------+---------------------------+---------------------+------------+------------------+--------------------+
```

In the sample output above, the *updated\_at* timestamp (2025-06-05 17:23:20) in the *resource\_providers* table for id 144 is older than the corresponding *created\_at* timestamp (2025-06-07 19:15:45) in *resource\_provider\_aggregates*, which highlights the discrepancy. In this case, *host1.example.com* is the affected host.
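
Rather than comparing the two tables by eye, a hypothetical query based on the sample schema above can flag affected providers directly:

```sql
-- Flag providers whose record was last updated before their aggregate
-- association was created (the discrepancy described above)
SELECT rp.id, rp.name,
       rp.updated_at AS provider_updated_at,
       rpa.created_at AS aggregate_created_at
FROM resource_providers rp
JOIN resource_provider_aggregates rpa
  ON rpa.resource_provider_id = rp.id
WHERE rp.updated_at < rpa.created_at;
```

Against the sample data above, this returns only id 144 (*host1.example.com*).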

## Resolution

There are two ways to resolve this issue.

**Option 1:**

This is the preferred and simpler way to resolve the issue.

The issue can be resolved by running `nova-manage placement sync_aggregates` from the nova-api-osapi pod, as shown below:

```bash
$ kubectl exec -it deploy/nova-api-osapi -n <NS> -- nova-manage placement sync_aggregates
```
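
To verify that the sync took effect, re-check the provider's aggregate association. The command below assumes the osc-placement plugin and the resource provider UUID identified earlier:

```bash
# The provider should now be associated with the expected placement aggregate
$ openstack resource provider aggregate list <RESOURCE_PROVIDER_UUID>
```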

**Option 2:**

The issue can also be resolved by removing the problematic compute host from its host aggregate and re-adding it. This triggers Nova to re-notify the Placement service, after which VMs can be scheduled to the host successfully.

Steps:

1. List All Aggregates

```bash
$ openstack aggregate list
```

2. Inspect a Specific Aggregate and check whether the problematic host is part of it:

```bash
$ openstack aggregate show <AGGREGATE_NAME_OR_ID>
```

3. Remove the Host from the Aggregate

```bash
$ openstack aggregate remove host <AGGREGATE_NAME_OR_ID> <HOSTNAME>
```

4. Add the Host Back to the Aggregate

```bash
$ openstack aggregate add host <AGGREGATE_NAME_OR_ID> <HOSTNAME>
```

This action re-registers the host within the aggregate and updates its visibility in the Placement API.
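
As a quick end-to-end check, confirm the host is listed in the aggregate again and, optionally, schedule a test VM directly to it. The `<AZ>:<HOSTNAME>` hint is admin-only and assumes the aggregate is exposed as an availability zone:

```bash
# Confirm the host is back in the aggregate
$ openstack aggregate show <AGGREGATE_NAME_OR_ID>

# Optionally pin a test VM to the host (admin only)
$ openstack server create --flavor <FLAVOR> --image <IMAGE> --network <NETWORK> \
    --availability-zone <AZ>:<HOSTNAME> test-vm-aggregate-check
```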

## Additional Information

Ensure that the nova-compute service is running and registered:

```bash
$ openstack compute service list --service nova-compute
```

Check for any errors in the ostackhost log on the hosts:

```bash
/var/log/pf9/ostackhost.log
```
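
For example, to surface recent errors from that log (the patterns are illustrative; adjust as needed):

```bash
# Show the 50 most recent error/traceback lines from the ostackhost log
$ grep -iE 'error|traceback' /var/log/pf9/ostackhost.log | tail -n 50
```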

> For Self-Hosted PCD users, check the following pod logs:

```bash
$ kubectl logs <NOVA_SCHEDULER_POD> -n <REGION_NAMESPACE>
$ kubectl logs <NOVA_API_OSAPI_POD> -n <REGION_NAMESPACE>
$ kubectl logs <NOVA_CONDUCTOR_POD> -n <REGION_NAMESPACE>
$ kubectl logs <PLACEMENT_API_POD> -n <REGION_NAMESPACE>
```

