Kubernetes Worker VM Stuck in Provisioning State
Problem
- The Kubernetes cluster is stuck in the ScalingUp state, and the Machine status is stuck in the Provisioning state.
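The stuck phases can be confirmed from the management plane. A minimal check, assuming the Cluster API objects live in the <SHORTNAME_DEFAULT_SERVICE> namespace:
$ kubectl get clusters,machinedeployments -n <SHORTNAME_DEFAULT_SERVICE>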


Environment
- Private Cloud Director Kubernetes - v2025.4 and higher
- Component - Kubernetes
Cause
- Review the machinedeployments in the <SHORTNAME_DEFAULT_SERVICE> namespace. There are no Ready replicas.
$ kubectl get machinedeployments -n <SHORTNAME_DEFAULT_SERVICE>
NAME                   CLUSTER       REPLICAS   READY   UPDATED   UNAVAILABLE   PHASE       AGE   VERSION
test-no-ssh-3b377fe3   test-no-ssh   1                  1         1             ScalingUp   22m   v1.32.3
- Further checking the machines object, the VM is in the Failed state. The YAML output of the machine resource reports an invalid SSH key name.
$ kubectl get machines -n <SHORTNAME_DEFAULT_SERVICE>
NAME                               CLUSTER       NODENAME   PROVIDERID   PHASE    AGE   VERSION
test-no-ssh-3b377fe3-9xft6-d4h6d   test-no-ssh                           Failed   25m   v1.32.3
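The failing condition can be read from the machine's YAML; substitute the failed machine's name from the listing above:
$ kubectl get machine test-no-ssh-3b377fe3-9xft6-d4h6d -n <SHORTNAME_DEFAULT_SERVICE> -o yaml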
Message: error creating Openstack instance: Bad request with: [POST https://example1.platform9.com/nova/v2.1/<id>/servers], error message: {"badRequest": {"code": 400, "message": "Invalid key_name provided."}}
Reason: InstanceCreateFailed
Severity: Error
Status: False
Resolution
- As a prerequisite, a public SSH key must be added in PCD Virtualization before creating a Kubernetes cluster. The key must be named default; any other name will not work. This key is bootstrapped into the worker nodes and can later be used to access them (see the CLI sketch after this list).
- The key can be created in the UI under Networks and Security > SSH Keys.
- Each user needs an individual SSH key created.
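Where the UI is not convenient, the keypair can also be uploaded with the OpenStack CLI, since PCD exposes the Nova API seen in the error above. A minimal sketch, assuming the client is configured against the PCD region and that ~/.ssh/id_rsa.pub is the public key to upload:
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub default
Once a key named default exists, the failed machine may still need to be deleted so that a replacement is provisioned, for example:
$ kubectl delete machine test-no-ssh-3b377fe3-9xft6-d4h6d -n <SHORTNAME_DEFAULT_SERVICE>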