Provisioning a Kubernetes Cluster on OpenStack Using Cluster API (CAPI)

This is my first blog post. I spent this weekend going deep into CAPI + CAPO + ORC. This guide covers everything - installation, configuration, manifests, and the full flow explained simply. If you want to understand CAPI, this guide is for you.

What Is Cluster API and Why Should You Care?

Managing Kubernetes clusters manually is painful: you create the infrastructure, install the Kubernetes packages, SSH into VMs, run kubeadm init on the control plane, run kubeadm join on each worker, set up a CNI, and so on. Cluster API (CAPI) solves this by treating cluster creation as a Kubernetes-native, declarative operation. Cluster API is a Kubernetes project that lets you manage Kubernetes clusters using Kubernetes-style APIs.

Instead of running commands, you write YAML manifests and apply them to a "management cluster." CAPI controllers read those manifests and provision your cluster automatically — creating VMs, configuring networking, running kubeadm — all without you touching a single VM.

Three core concepts to understand:

  • Management Cluster — A small existing Kubernetes cluster that runs CAPI controllers and manages other clusters
  • Workload Cluster — The cluster CAPI creates for you, where your actual apps run
  • Providers — Plugins that tell CAPI how to talk to specific infrastructure (OpenStack, AWS, GCP, etc.)

Role of controllers:

CAPI installs multiple controllers.

  • CAPI Core — Orchestrates the cluster lifecycle; watches Cluster, Machine, and MachineDeployment objects
  • CAPO (Cluster API Provider OpenStack) — Talks to OpenStack APIs; creates Nova VMs and Octavia load balancers
  • CABPK (Kubeadm Bootstrap Provider) — Generates the cloud-init scripts that run kubeadm on the VMs
  • KCP (Kubeadm Control Plane Provider) — Manages control plane machine lifecycle and scaling

All are installed together via clusterctl.

If you have been reading about CAPO, you may have come across ORC — the OpenStack Resource Controller. ORC is a separate, optional project, not part of CAPO, and must be installed on its own. It lets you manage OpenStack resources — Glance images, Nova keypairs, Neutron networks and routers — declaratively, as Kubernetes custom resources. The idea is that instead of manually uploading your node image to Glance and copying its UUID into your CAPI manifests, you declare an Image custom resource in your management cluster and ORC ensures it exists in OpenStack. In other words, ORC makes sure the resources your CAPI manifests reference exist before the cluster is created.

Step 1 — Prerequisites

Before installing CAPI, you need:

  • A running management cluster (even a single-node kind or minikube cluster works)
  • clusterctl CLI installed
  • OpenStack credentials (your clouds.yaml file)
  • OpenStack environment with:
    • At least one available flavor (e.g., m1.medium)
    • A Glance image with Kubernetes pre-installed (a qcow2 image with kubeadm, kubelet, and kubectl baked in)
    • A provider network created
    • Octavia (Load Balancer service) enabled
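Before going further, it's worth confirming these prerequisites actually exist in your OpenStack project. A quick check with the OpenStack CLI — the names m1.medium and ubuntu-2404-kube are the examples used throughout this guide, so substitute your own:

```shell
# Each command should return a result, not an error.
openstack flavor show m1.medium          # the flavor the nodes will use
openstack image show ubuntu-2404-kube    # the prebuilt Kubernetes image
openstack network list --external        # the provider/external network
openstack loadbalancer provider list     # confirms Octavia is reachable
```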

Install clusterctl:

curl -L https://github.com/kubernetes-sigs/cluster-api/releases/latest/download/clusterctl-linux-amd64 \
  -o clusterctl
chmod +x clusterctl
sudo mv clusterctl /usr/local/bin/

Verify:

clusterctl version

Sample output:

clusterctl version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"1a1852c74072febc3fbf9823c9dd32f4cd6cc747", GitTreeState:"clean", BuildDate:"2026-02-17T17:46:30Z", GoVersion:"go1.24.13", Compiler:"gc", Platform:"linux/amd64"}

Step 2 — Initialize CAPI with OpenStack Provider

clusterctl init --infrastructure openstack:v0.14.1

This installs:

  • CAPI core controllers
  • CAPO (OpenStack infrastructure provider)
  • CABPK (kubeadm bootstrap provider)
  • KCP (kubeadm control plane provider)

Sample Output:

Fetching providers
Installing cert-manager version="v1.19.3"
Waiting for cert-manager to be available...
spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
Installing provider="cluster-api" version="v1.12.4" targetNamespace="capi-system"
spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
Installing provider="bootstrap-kubeadm" version="v1.12.4" targetNamespace="capi-kubeadm-bootstrap-system"
spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
Installing provider="control-plane-kubeadm" version="v1.12.4" targetNamespace="capi-kubeadm-control-plane-system"
spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
Installing provider="infrastructure-openstack" version="v0.14.1" targetNamespace="capo-system"
spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
Your management cluster has been initialized successfully!

You can now create your first workload cluster by running the following:

  clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -

Verify all controllers are running:

kubectl get pods -A | grep -E "capi|capo|cert-manager"
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-5b6b785789-r62sl      1/1     Running   0          42m
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-95df764cf-hh7km   1/1     Running   0          42m
capi-system                         capi-controller-manager-6bbbfd79d8-b2dm4                        1/1     Running   0          42m
capo-system                         capo-controller-manager-6469cf765b-xhv8s                        1/1     Running   0          42m
cert-manager                        cert-manager-77f7698dcf-76hsq                                   1/1     Running   0          43m
cert-manager                        cert-manager-cainjector-557b97dd67-52fnw                        1/1     Running   0          43m
cert-manager                        cert-manager-webhook-5b7654ff4-cmgc8                            1/1     Running   0          43m

Step 3 — Set Environment Variables for Template Generation

clusterctl can generate cluster manifests from a template. You need to export these environment variables first:

export OPENSTACK_CLOUD=openstack # must match the cloud name in clouds.yaml
export OPENSTACK_CLOUD_CACERT_B64=""
export OPENSTACK_CLOUD_YAML_B64=$(base64 -w0 clouds.yaml) # download clouds.yaml from the Horizon dashboard
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=m1.medium # flavor for control plane nodes
export OPENSTACK_NODE_MACHINE_FLAVOR=m1.medium # flavor for worker nodes
export OPENSTACK_IMAGE_NAME=ubuntu-2404-kube # name of the Glance image used for cluster nodes
export OPENSTACK_FAILURE_DOMAIN=nova # availability zone, usually nova
export OPENSTACK_EXTERNAL_NETWORK_ID=5068d3a6-f560-4d79-b834-cd6943dc5de5 # e.g. openstack network list | grep provider, copy the ID
export OPENSTACK_DNS_NAMESERVERS=8.8.8.8
export OPENSTACK_SSH_KEY_NAME="kashi" # name of an SSH keypair already uploaded to OpenStack; it will be injected into the cluster nodes
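A common mistake here is a mangled OPENSTACK_CLOUD_YAML_B64 (wrapped lines, or the wrong file). A quick round-trip check, shown with a throwaway stand-in clouds.yaml — use your real file; the auth_url below is made up for illustration:

```shell
# Stand-in clouds.yaml for illustration only -- use the real one from Horizon.
cat > clouds.yaml <<'EOF'
clouds:
  openstack:
    auth:
      auth_url: http://10.31.0.10:5000
EOF

# Encode with -w0 so base64 emits a single line, then decode to confirm
# the round trip preserves the file.
export OPENSTACK_CLOUD_YAML_B64=$(base64 -w0 clouds.yaml)
echo "$OPENSTACK_CLOUD_YAML_B64" | base64 -d | grep -q '^clouds:' && echo "clouds.yaml encoded OK"
```

The -w0 flag matters: without it, GNU base64 wraps output at 76 columns and the embedded newlines can corrupt the generated secret.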

The image used by CAPI is a prebuilt image with kubeadm, kubelet, kubectl, and the rest of the Kubernetes packages preinstalled.

You can build the image by running the following Makefile:

SHELL := /bin/bash

IMAGE_BUILDER_REPO=https://github.com/kubernetes-sigs/image-builder.git
IMAGE_BUILDER_DIR=image-builder

ifndef IMAGE_NAME
IMAGE_NAME=ubuntu-2404-kube-v1.34.3
$(warning IMAGE_NAME is undefined, using default: $(IMAGE_NAME))
endif

ifndef IMAGE_FILE
IMAGE_FILE=output/ubuntu-2404-kube-v1.34.3/ubuntu-2404-kube-v1.34.3
$(warning IMAGE_FILE is undefined, using default: $(IMAGE_FILE))
endif

OS_IMAGE_NAME=$(IMAGE_NAME)
OS_DISK_FORMAT=qcow2
OS_CONTAINER_FORMAT=bare
OS_VISIBILITY=public
ifndef ADMIN_RC_PATH
ADMIN_RC_PATH=/etc/kolla/admin-openrc.sh
$(warning ADMIN_RC_PATH is undefined, using default: $(ADMIN_RC_PATH))
endif


img-builder-clone:
    if [ ! -d "$(IMAGE_BUILDER_DIR)" ]; then \
        git clone $(IMAGE_BUILDER_REPO); \
    fi

capi-deps: img-builder-clone
    cd $(IMAGE_BUILDER_DIR)/images/capi && make deps-qemu


build-capi: capi-deps
    if [ ! -f "$(IMAGE_BUILDER_DIR)/images/capi/$(IMAGE_FILE)" ]; then \
       cd $(IMAGE_BUILDER_DIR)/images/capi && make build-qemu-ubuntu-2404 &> /dev/null; \
    fi


capi-upload: 
     . $(ADMIN_RC_PATH) && openstack image create $(OS_IMAGE_NAME) \
        --file $(IMAGE_BUILDER_DIR)/images/capi/$(IMAGE_FILE) \
        --disk-format $(OS_DISK_FORMAT) \
        --container-format $(OS_CONTAINER_FORMAT) \
        --public


capi-image: build-capi capi-upload

capi-delete:
     . $(ADMIN_RC_PATH) && openstack image delete $(OS_IMAGE_NAME)

all: capi-image

clean: 
    rm -rf $(IMAGE_BUILDER_DIR)

Run the makefile: make all

You can list the required environment variables to be set by running the following command:

clusterctl generate cluster --infrastructure openstack:v0.14.1 --list-variables capi-quickstart

Sample Output:

Required Variables:
  - KUBERNETES_VERSION
  - OPENSTACK_CLOUD
  - OPENSTACK_CLOUD_CACERT_B64
  - OPENSTACK_CLOUD_YAML_B64
  - OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR
  - OPENSTACK_DNS_NAMESERVERS
  - OPENSTACK_EXTERNAL_NETWORK_ID
  - OPENSTACK_FAILURE_DOMAIN
  - OPENSTACK_IMAGE_NAME
  - OPENSTACK_NODE_MACHINE_FLAVOR
  - OPENSTACK_SSH_KEY_NAME

Optional Variables:
  - CLUSTER_NAME                 (defaults to capi-quickstart)
  - CONTROL_PLANE_MACHINE_COUNT  (defaults to 1)
  - WORKER_MACHINE_COUNT         (defaults to 0)

Step 4 — Generate the Cluster Manifests

clusterctl generate cluster hind-cluster \
  --infrastructure openstack:v0.14.1 \
  --kubernetes-version v1.34.3 \
  --control-plane-machine-count=1 \
  --worker-machine-count=1 > capi-quickstart.yaml

The above command generates cluster manifests. Let's understand the generated manifests.

The Cluster manifest:

apiVersion: cluster.x-k8s.io/v1beta2
kind: Cluster
metadata:
  name: hind-cluster
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    serviceDomain: cluster.local
  controlPlaneRef:
    apiGroup: controlplane.cluster.x-k8s.io
    kind: KubeadmControlPlane
    name: hind-cluster-control-plane
  infrastructureRef:
    apiGroup: infrastructure.cluster.x-k8s.io
    kind: OpenStackCluster
    name: hind-cluster

This is the main object — it ties everything together. It says:

  • Pods get IPs from 192.168.0.0/16
  • Control plane is defined by KubeadmControlPlane
  • OpenStack infrastructure is defined by OpenStackCluster

The OpenStackCluster - OpenStack Infrastructure Definition

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackCluster
metadata:
  name: hind-cluster
  namespace: default
spec:
  apiServerLoadBalancer:
    enabled: true
  externalNetwork:
    id: c926f5e2-830e-43b0-9d06-89641bb8e131
  identityRef:
    cloudName: openstack
    name: hind-cluster-cloud-config
  managedSecurityGroups:
    allNodesSecurityGroupRules:
    - description: Created by cluster-api-provider-openstack - BGP (calico)
      direction: ingress
      etherType: IPv4
      name: BGP (Calico)
      portRangeMax: 179
      portRangeMin: 179
      protocol: tcp
      remoteManagedGroups:
      - controlplane
      - worker
    - description: Created by cluster-api-provider-openstack - IP-in-IP (calico)
      direction: ingress
      etherType: IPv4
      name: IP-in-IP (calico)
      protocol: "4"
      remoteManagedGroups:
      - controlplane
      - worker
  managedSubnets:
  - cidr: 10.6.0.0/24
    dnsNameservers:
    - 8.8.8.8

The CAPO controller in the capo-system namespace reads this manifest and:

  • Verifies that the external network exists, using the ID from the manifest, then creates an internal network, subnet, and router for the cluster and attaches the internal network to the router so the nodes get outbound internet access.
  • Creates security groups for the control plane and worker nodes. These allow only the load balancer to reach the backend nodes, so clients can reach the cluster only through the load balancer IP and the control plane IPs are never exposed directly.
  • Creates an Octavia load balancer for the Kubernetes API endpoint. This is especially useful when you have multiple control plane nodes, since the load balancer spreads API requests across them.
  • Sets the LB IP as the cluster's controlPlaneEndpoint.

The KubeadmControlPlane definition:

apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: KubeadmControlPlane
metadata:
  name: hind-cluster-control-plane
  namespace: default
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      controllerManager:
        extraArgs:
        - name: cloud-provider
          value: external
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
        - name: cloud-provider
          value: external
        - name: provider-id
          value: openstack:///'{{ instance_id }}'
        name: '{{ local_hostname }}'
    joinConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
        - name: cloud-provider
          value: external
        - name: provider-id
          value: openstack:///'{{ instance_id }}'
        name: '{{ local_hostname }}'
  machineTemplate:
    spec:
      infrastructureRef:
        apiGroup: infrastructure.cluster.x-k8s.io
        kind: OpenStackMachineTemplate
        name: hind-cluster-control-plane
  replicas: 1
  version: v1.34.3

cloud-provider: external on kubelet: When kubelet starts, it adds a taint node.cloudprovider.kubernetes.io/uninitialized on the node. No pods schedule here until the OpenStack Cloud Controller Manager (CCM) initializes the node and removes this taint. This flag tells kubelet, "don't try to handle cloud stuff yourself, wait for external CCM."

cloud-provider: external on controller-manager - Disables the built-in cloud loops in kube-controller-manager (node lifecycle, service LoadBalancer, routes). These are handed off to the OpenStack CCM pod. When a load-balancer type service is created, it is the responsibility of the CCM to create an LB for you behind the scenes.
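To see what this hand-off means in practice: once the workload cluster is up with the OpenStack CCM installed, an ordinary Service of type LoadBalancer is enough to get an Octavia LB. A minimal sketch (the app name demo is illustrative):

```yaml
# Applying this in the workload cluster makes the OpenStack CCM create an
# Octavia load balancer whose pool members are the pods selected below.
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
```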

provider-id: openstack:///{{ instance_id }} - A unique ID linking the Kubernetes Node object to the actual Nova VM. {{ instance_id }} is a cloud-init template variable resolved to the real VM UUID at boot time. The OpenStack CCM uses this to fetch instance metadata.
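The provider ID is just a URI-style string, so stripping the openstack:/// prefix recovers the Nova UUID. A tiny shell illustration — the UUID here is fabricated as an example:

```shell
# Example provider ID as it appears on a Node object after cloud-init
# has substituted {{ instance_id }} (UUID made up for illustration).
provider_id="openstack:///f47ac10b-58cc-4372-a567-0e02b2c3d479"

# Strip the scheme prefix to recover the Nova instance UUID.
instance_uuid="${provider_id#openstack:///}"
echo "$instance_uuid"
```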

The OpenStackMachineTemplate for Control Plane:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackMachineTemplate
metadata:
  name: hind-cluster-control-plane
  namespace: default
spec:
  template:
    spec:
      flavor: m1.medium
      image:
        filter:
          name: ubuntu-2404-kube
      sshKeyName: kashi

This tells CAPO which Nova flavor and Glance image to use for control plane VMs. CAPO looks up the image in Glance by name and creates the VMs using the image UUID it finds.

The MachineDeployment — Worker Node Pool

apiVersion: cluster.x-k8s.io/v1beta2
kind: MachineDeployment
metadata:
  name: hind-cluster-md-0
  namespace: default
spec:
  clusterName: hind-cluster
  replicas: 1
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiGroup: bootstrap.cluster.x-k8s.io
          kind: KubeadmConfigTemplate
          name: hind-cluster-md-0
      clusterName: hind-cluster
      failureDomain: nova
      infrastructureRef:
        apiGroup: infrastructure.cluster.x-k8s.io
        kind: OpenStackMachineTemplate
        name: hind-cluster-md-0
      version: v1.34.3

Like a Deployment for VMs. It ensures the desired number of worker nodes exists. It references KubeadmConfigTemplate for the cloud-init script and OpenStackMachineTemplate for VM specs.
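Because it behaves like a Deployment, scaling workers later is a one-liner against the management cluster (MachineDeployments expose the scale subresource):

```shell
# Add two more workers: CAPO creates the Nova VMs and CABPK generates
# the join cloud-init for each.
kubectl scale machinedeployment hind-cluster-md-0 --replicas=3
```

Equivalently, edit replicas in the YAML and re-apply it.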

The KubeadmConfigTemplate — Worker Bootstrap Config

apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: KubeadmConfigTemplate
metadata:
  name: hind-cluster-md-0
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
          - name: cloud-provider
            value: external
          - name: provider-id
            value: openstack:///'{{ instance_id }}'
          name: '{{ local_hostname }}'

Workers only have joinConfiguration — they never run kubeadm init, only kubeadm join. The same cloud-provider: external and provider-id flags are set here for the same reasons as the control plane.

So, when you apply capi-quickstart.yaml, CAPI picks up the Cluster object and CAPO picks up the OpenStackCluster object: it creates the internal network, subnet, and router, and attaches the subnet to the router for internet access (the router gets a port on the provider network as its external gateway). It also creates the security groups and the load balancer for the Kubernetes API server endpoint.

You should see events like these when you describe the OpenStackCluster resource:

Events:
  Type    Reason                         Age   From                  Message
  ----    ------                         ----  ----                  -------
  Normal  Successfulcreatenetwork        54m   openstack-controller  Created network k8s-clusterapi-cluster-default-hind-cluster with id 05371ddd-a37e-43d6-ba6d-58599c4d2b7b
  Normal  Successfulcreatesubnet         54m   openstack-controller  Created subnet k8s-clusterapi-cluster-default-hind-cluster with id ea8186bc-88f7-4cc6-aa27-bf9b5a1154c5
  Normal  Successfulcreaterouter         54m   openstack-controller  Created router k8s-clusterapi-cluster-default-hind-cluster with id 8790e4f0-3b55-4d4e-acf4-8ce37e5719f4
  Normal  Successfulcreatesecuritygroup  54m   openstack-controller  Created security group k8s-cluster-default-hind-cluster-secgroup-controlplane with id e08613ca-135c-4f64-ba67-def6a1ffd7e1
  Normal  Successfulcreatesecuritygroup  54m   openstack-controller  Created security group k8s-cluster-default-hind-cluster-secgroup-worker with id 4ed61b8a-a9b0-4fb8-a3db-af6e457aea8a
  Normal  Successfulcreateloadbalancer   54m   openstack-controller  Created load balancer k8s-clusterapi-cluster-default-hind-cluster-kubeapi with id 421cd8a9-a7ba-4f41-8f03-1cdc4d958358
  Normal  Successfulcreatefloatingip     52m   openstack-controller  Created floating IP 10.31.0.101 with id 433d5f18-34dc-489d-b2ae-19ba2a167f8b
  Normal  Successfulassociatefloatingip  52m   openstack-controller  Associated floating IP 10.31.0.101 with port 6f0f82d9-bc8e-42e1-88eb-65ec33469d6e
  Normal  Successfulcreatelistener       52m   openstack-controller  Created listener k8s-clusterapi-cluster-default-hind-cluster-kubeapi-6443 with id c66b31a9-f9ee-4bfd-b61f-b402c266a8bf
  Normal  Successfulcreatepool           52m   openstack-controller  Created pool k8s-clusterapi-cluster-default-hind-cluster-kubeapi-6443 with id 465f565c-3a84-4b4d-938b-c28dfcc009af
  Normal  Successfulcreatemonitor        52m   openstack-controller  Created monitor k8s-clusterapi-cluster-default-hind-cluster-kubeapi-6443 with id 1349252f-b42b-4393-bab3-9243ff9b282e

Also verify the network, subnet, router, and security groups in your OpenStack cluster:

(kolla-venv) root@control-1:~#  openstack network list
+--------------------------------------+---------------------------------------------+--------------------------------------+
| ID                                   | Name                                        | Subnets                              |
+--------------------------------------+---------------------------------------------+--------------------------------------+
| 05371ddd-a37e-43d6-ba6d-58599c4d2b7b | k8s-clusterapi-cluster-default-hind-cluster | ea8186bc-88f7-4cc6-aa27-bf9b5a1154c5 |
| 40c70b44-1c45-407f-b590-5c019814ef57 | trove-mgmt                                  | 22b30e5a-4cbb-422d-9e9b-6f0ee2736c46 |
| 75bba094-d69a-4060-a271-a5d529915f01 | private-net                                 | e2c43b43-2936-4839-b57c-8c955f44f913 |
| 7e72fa18-46b5-4bfb-96bf-5666df021dec | lb-mgmt-net                                 | 2183795b-8723-490f-aaa9-5bb49f2d9af1 |
| c926f5e2-830e-43b0-9d06-89641bb8e131 | provider                                    | 4073439c-176e-46b7-8f78-9e7debdfef02 |
+--------------------------------------+---------------------------------------------+--------------------------------------+
(kolla-venv) root@control-1:~#  openstack subnet list
 +--------------------------------------+---------------------------------------------+--------------------------------------+-----------------+
| ID                                   | Name                                        | Network                              | Subnet          |
+--------------------------------------+---------------------------------------------+--------------------------------------+-----------------+
| 2183795b-8723-490f-aaa9-5bb49f2d9af1 | lb-mgmt-subnet                              | 7e72fa18-46b5-4bfb-96bf-5666df021dec | 10.1.0.0/24     |
| 22b30e5a-4cbb-422d-9e9b-6f0ee2736c46 | trove-mgmt-subnet                           | 40c70b44-1c45-407f-b590-5c019814ef57 | 10.33.0.0/24    |
| 4073439c-176e-46b7-8f78-9e7debdfef02 | provider-subnet                             | c926f5e2-830e-43b0-9d06-89641bb8e131 | 10.31.0.0/24    |
| e2c43b43-2936-4839-b57c-8c955f44f913 | private-subnet                              | 75bba094-d69a-4060-a271-a5d529915f01 | 192.168.50.0/24 |
| ea8186bc-88f7-4cc6-aa27-bf9b5a1154c5 | k8s-clusterapi-cluster-default-hind-cluster | 05371ddd-a37e-43d6-ba6d-58599c4d2b7b | 10.6.0.0/24     |
+--------------------------------------+---------------------------------------------+--------------------------------------+-----------------+
(kolla-venv) root@control-1:~#  
(kolla-venv) root@control-1:~#  openstack router list 
+--------------------------------------+---------------------------------------------+--------+-------+----------------------------------+-------------+-------+
| ID                                   | Name                                        | Status | State | Project                          | Distributed | HA    |
+--------------------------------------+---------------------------------------------+--------+-------+----------------------------------+-------------+-------+
| 40625a37-d00e-422c-a5b4-577c4d4c2a81 | trove-mgmt-router                           | ACTIVE | UP    | 2c2343fd2bd141e8a18229afefb9c4ff | False       | False |
| 8790e4f0-3b55-4d4e-acf4-8ce37e5719f4 | k8s-clusterapi-cluster-default-hind-cluster | ACTIVE | UP    | 2c2343fd2bd141e8a18229afefb9c4ff | False       | False |
+--------------------------------------+---------------------------------------------+--------+-------+----------------------------------+-------------+-------+
(kolla-venv) root@control-1:~# 
(kolla-venv) root@control-1:~# 
(kolla-venv) root@control-1:~# 
(kolla-venv) root@control-1:~#  openstack router show k8s-clusterapi-cluster-default-hind-cluster
+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                   | Value                                                                                                                                                     |
+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up          | UP                                                                                                                                                        |
| availability_zone_hints |                                                                                                                                                           |
| availability_zones      | nova                                                                                                                                                      |
| created_at              | 2026-03-29T11:26:23Z                                                                                                                                      |
| description             | Created by cluster-api-provider-openstack cluster default-hind-cluster                                                                                    |
| distributed             | False                                                                                                                                                     |
| enable_ndp_proxy        | None                                                                                                                                                      |
| external_gateway_info   | {"network_id": "c926f5e2-830e-43b0-9d06-89641bb8e131", "external_fixed_ips": [{"subnet_id": "4073439c-176e-46b7-8f78-9e7debdfef02", "ip_address":         |
|                         | "10.31.0.246"}], "enable_snat": true}                                                                                                                     |
| flavor_id               | None                                                                                                                                                      |
| ha                      | False                                                                                                                                                     |
| id                      | 8790e4f0-3b55-4d4e-acf4-8ce37e5719f4                                                                                                                      |
| interfaces_info         | [{"port_id": "1e65d210-c1e7-4118-a847-d27b9058077b", "ip_address": "10.6.0.1", "subnet_id": "ea8186bc-88f7-4cc6-aa27-bf9b5a1154c5"}]                      |
| name                    | k8s-clusterapi-cluster-default-hind-cluster                                                                                                               |
| project_id              | 2c2343fd2bd141e8a18229afefb9c4ff                                                                                                                          |
| revision_number         | 3                                                                                                                                                         |
| routes                  |                                                                                                                                                           |
| status                  | ACTIVE                                                                                                                                                    |
| tags                    |                                                                                                                                                           |
| updated_at              | 2026-03-29T11:26:25Z                                                                                                                                      |
+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------+
(kolla-venv) root@control-1:~# 
(kolla-venv) root@control-1:~# 
(kolla-venv) root@control-1:~# 
(kolla-venv) root@control-1:~#  openstack loadbalancer list
 +-------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+------------------+----------+
| id                                  | name                                | project_id                       | vip_address | provisioning_status | operating_status | provider |
+-------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+------------------+----------+
| 421cd8a9-a7ba-4f41-8f03-            | k8s-clusterapi-cluster-default-     | 2c2343fd2bd141e8a18229afefb9c4ff | 10.6.0.107  | ACTIVE              | ONLINE           | amphora  |
| 1cdc4d958358                        | hind-cluster-kubeapi                |                                  |             |                     |                  |          |
+-------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+------------------+----------+
(kolla-venv) root@control-1:~#  
(kolla-venv) root@control-1:~#  openstack  security group list
+--------------------------------------+--------------------------------------------------------+---------------------------+----------------------------------+------+--------+
| ID                                   | Name                                                   | Description               | Project                          | Tags | Shared |
+--------------------------------------+--------------------------------------------------------+---------------------------+----------------------------------+------+--------+
| 188e18b6-cc7e-42d6-8b56-a03cb9af5e5a | lb-421cd8a9-a7ba-4f41-8f03-1cdc4d958358                |                           | f37717d8ea254fdfb12bff38e8834f9d | []   | False  |
| 266e525f-2e15-4f72-9c11-7448c47d9abf | trove-sec-grp                                          | trove-sec-grp             | 2c2343fd2bd141e8a18229afefb9c4ff | []   | False  |
| 3a9617cd-6e92-4c7f-bad5-fd3dfb1996ed | default                                                | Default security group    | 2c2343fd2bd141e8a18229afefb9c4ff | []   | False  |
| 429b9c68-e2db-437c-b4db-bf7f8bad6d60 | default                                                | Default security group    | f37717d8ea254fdfb12bff38e8834f9d | []   | False  |
| 4ed61b8a-a9b0-4fb8-a3db-af6e457aea8a | k8s-cluster-default-hind-cluster-secgroup-worker       | Cluster API managed group | 2c2343fd2bd141e8a18229afefb9c4ff | []   | False  |
| 64ad5172-e754-416b-a228-b1bd9d98e96a | test                                                   | test                      | 2c2343fd2bd141e8a18229afefb9c4ff | []   | False  |
| 80fbc123-07f5-4b7d-bc2c-92891fc96668 | lb-mgmt-sec-grp                                        |                           | f37717d8ea254fdfb12bff38e8834f9d | []   | False  |
| e08613ca-135c-4f64-ba67-def6a1ffd7e1 | k8s-cluster-default-hind-cluster-secgroup-controlplane | Cluster API managed group | 2c2343fd2bd141e8a18229afefb9c4ff | []   | False  |
| e8fc39aa-668a-404a-afe0-98ffa118f4a1 | lb-health-mgr-sec-grp                                  |                           | f37717d8ea254fdfb12bff38e8834f9d | []   | False  |
+--------------------------------------+--------------------------------------------------------+---------------------------+----------------------------------+------+--------+
(kolla-venv) root@control-1:~# 

As you can see, resources created by Cluster API are prefixed with k8s-clusterapi, and the security groups with k8s-cluster.

Also, you can see that a load balancer has been created; it will act as the frontend for the API server on our control plane nodes.
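You can confirm this from the OpenStack side as well. The load balancer name follows the k8s-clusterapi-cluster-&lt;namespace&gt;-&lt;cluster-name&gt; pattern seen in the security group output above:

```shell
# List Octavia load balancers; CAPO names the API-server LB
# k8s-clusterapi-cluster-<namespace>-<cluster-name>-kubeapi
openstack loadbalancer list
openstack loadbalancer show k8s-clusterapi-cluster-default-hind-cluster-kubeapi
```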

Next, the capi-kubeadm-control-plane-controller-manager sees the KubeadmControlPlane object and creates the corresponding KubeadmConfig and Machine objects in the management cluster.

Verify:

kubectl get kubeadmconfig

Output:

NAME                               CLUSTER        DATA SECRET CREATED   AGE
hind-cluster-control-plane-wggsv   hind-cluster   true                  57m
hind-cluster-md-0-xg7gb-qjlvj      hind-cluster   true                  57m

We see two objects: one for the control plane node and one for the worker node.

kubectl get machine

Output:

NAME                               CLUSTER        NODE NAME                          READY     AVAILABLE   UP-TO-DATE   PHASE     AGE   VERSION
hind-cluster-control-plane-wggsv   hind-cluster   hind-cluster-control-plane-wggsv   Unknown   False       True         Running   57m   v1.34.3
hind-cluster-md-0-xg7gb-qjlvj      hind-cluster   hind-cluster-md-0-xg7gb-qjlvj      True      True        True         Running   57m   v1.34.3

The capi-kubeadm-bootstrap-controller-manager does a crucial job: it generates the cloud-init script containing the actual kubeadm commands that run while each node boots. This cloud-init script is stored in a Secret.

These Secrets, created by the bootstrap controller, contain the cloud-init scripts and are referenced by the Machine objects created by the kubeadm control plane controller.
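If you want to see the generated cloud-init yourself, decode the Secret. The Secret name matches the KubeadmConfig name from the earlier output, and CABPK stores the bootstrap data under the value key:

```shell
# The bootstrap data is base64-encoded under the "value" key of the Secret
kubectl get secret hind-cluster-control-plane-wggsv \
  -o jsonpath='{.data.value}' | base64 -d | head -n 30
```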

kubectl get machine hind-cluster-control-plane-wggsv -o yaml
apiVersion: cluster.x-k8s.io/v1beta2
kind: Machine
metadata:
  annotations:
    pre-terminate.delete.hook.machine.cluster.x-k8s.io/kcp-cleanup: ""
  creationTimestamp: "2026-03-29T11:28:04Z"
  finalizers:
  - machine.cluster.x-k8s.io
  generation: 3
  labels:
    cluster.x-k8s.io/cluster-name: hind-cluster
    cluster.x-k8s.io/control-plane: ""
    cluster.x-k8s.io/control-plane-name: hind-cluster-control-plane
  name: hind-cluster-control-plane-wggsv
  namespace: default
  ownerReferences:
  - apiVersion: controlplane.cluster.x-k8s.io/v1beta2
    blockOwnerDeletion: true
    controller: true
    kind: KubeadmControlPlane
    name: hind-cluster-control-plane
    uid: d25ecc23-dd2c-417a-b891-f88eb85225af
  resourceVersion: "15813"
  uid: 4b4725ad-f1ee-4232-8c06-4cd2f14ed025
spec:
  bootstrap:
    configRef:
      apiGroup: bootstrap.cluster.x-k8s.io
      kind: KubeadmConfig
      name: hind-cluster-control-plane-wggsv
    dataSecretName: hind-cluster-control-plane-wggsv
  clusterName: hind-cluster
  deletion:
    nodeDeletionTimeoutSeconds: 10
  failureDomain: nova
  infrastructureRef:
    apiGroup: infrastructure.cluster.x-k8s.io
    kind: OpenStackMachine
    name: hind-cluster-control-plane-wggsv
  providerID: openstack:///3570638e-3d1c-4ccb-b758-86321356ea0d
  readinessGates:
  - conditionType: APIServerPodHealthy
  - conditionType: ControllerManagerPodHealthy
  - conditionType: SchedulerPodHealthy
  - conditionType: EtcdPodHealthy
  - conditionType: EtcdMemberHealthy
  version: v1.34.3

Check the dataSecretName field above: its value matches one of the bootstrap Secrets created by the controller.

The CAPO controller sees the OpenStackMachine object and calls the Nova API to create the VM.

Also, while KubeadmControlPlane manages the control plane nodes, a MachineDeployment manages the worker nodes.
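This split is what makes scaling declarative. For example, to add worker nodes you scale the MachineDeployment on the management cluster. The name hind-cluster-md-0 below is inferred from the machine names in the earlier output; verify yours first:

```shell
# MachineDeployments support the scale subresource, just like Deployments
kubectl get machinedeployment
kubectl scale machinedeployment hind-cluster-md-0 --replicas=3
```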

After applying the Cluster API manifests, get the kubeconfig of the workload cluster using:

clusterctl get kubeconfig hind-cluster > hind-cluster.kubeconfig

Export the kubeconfig and verify the cluster status:

export KUBECONFIG=./hind-cluster.kubeconfig
kubectl get nodes

Output:

NAME                               STATUS     ROLES           AGE   VERSION
hind-cluster-control-plane-wggsv   NotReady   control-plane   13m   v1.34.3
hind-cluster-md-0-xg7gb-qjlvj      NotReady   <none>          11m   v1.34.3

The nodes are in the NotReady state, which is completely expected: we have not installed a CNI yet.

You can install the CNI of your choice; I am going with Calico:

kubectl --kubeconfig=./hind-cluster.kubeconfig \
  apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml

After installing the CNI, nodes should transition to Ready state:

kubectl get nodes
NAME                               STATUS   ROLES           AGE   VERSION
hind-cluster-control-plane-wggsv   Ready    control-plane   70m   v1.34.3
hind-cluster-md-0-xg7gb-qjlvj      Ready    <none>          68m   v1.34.3

Also, remember the role of the OpenStack Cloud Controller Manager discussed above. In the CAPI manifests — specifically in the KubeadmControlPlane under kubeadmConfigSpec, and in the KubeadmConfigTemplate of the MachineDeployment — we set the extra arg cloud-provider to external.
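For reference, the relevant fragment looks roughly like this. This is a sketch, not a full manifest; field paths are abbreviated, and in the older v1beta1 API kubeletExtraArgs is a plain string map instead of a name/value list:

```yaml
# Sketch of the kubeadmConfigSpec fragment that sets cloud-provider=external
# (v1beta2-style name/value args)
kubeadmConfigSpec:
  initConfiguration:
    nodeRegistration:
      kubeletExtraArgs:
        - name: cloud-provider
          value: external
  joinConfiguration:
    nodeRegistration:
      kubeletExtraArgs:
        - name: cloud-provider
          value: external
```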

Because this flag is set, both the worker and control plane nodes register themselves with the taint node.cluster.x-k8s.io/uninitialized:NoSchedule.

So unless you install the OpenStack Cloud Controller Manager, the nodes will stay unschedulable. Let's verify this by logging into the nodes and checking the kubelet service (for example with systemctl status kubelet). You will notice the following flags on both nodes.

On the worker node:

/usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cloud-provider=external --pod-infra-container-image=registry.k8s.io/pause:3.10.1 --provider-id=openstack:///d9ae4dde-b281-421b-ad1b-36a2ee7db110 --register-with-taints=node.cluster.x-k8s.io/uninitialized:NoSchedule

The provider ID here corresponds to the Nova VM UUID; this is how the OpenStack Cloud Controller Manager maps a Kubernetes Node to its Nova server object in OpenStack. Note also that cloud-provider is set to external.
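This mapping is simple string surgery. As a minimal sketch (a hypothetical helper, not actual CCM code), extracting the Nova UUID from a providerID looks like this:

```python
# Sketch: extract the Nova server UUID from a kubelet providerID of the
# form "openstack:///<uuid>", as seen in the kubelet flags above.
from urllib.parse import urlparse

def nova_uuid_from_provider_id(provider_id: str) -> str:
    """Return the Nova server UUID embedded in an OpenStack providerID."""
    parsed = urlparse(provider_id)
    if parsed.scheme != "openstack":
        raise ValueError(f"not an OpenStack providerID: {provider_id}")
    # The path is "/<uuid>"; strip the leading slashes
    return parsed.path.lstrip("/")

print(nova_uuid_from_provider_id("openstack:///d9ae4dde-b281-421b-ad1b-36a2ee7db110"))
# → d9ae4dde-b281-421b-ad1b-36a2ee7db110
```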

On the control plane node:

/usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cloud-provider=external --pod-infra-container-image=registry.k8s.io/pause:3.10.1 --provider-id=openstack:///3570638e-3d1c-4ccb-b758-86321356ea0d

Now check the taints.

For the control plane:

(screenshot: taints on the control plane node)

For the worker node:

(screenshot: taints on the worker node)
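You can also read the taints directly with kubectl instead of logging into the nodes:

```shell
# Show node taints; expect node.cluster.x-k8s.io/uninitialized:NoSchedule
# on both nodes until the OpenStack CCM initializes them
kubectl --kubeconfig=./hind-cluster.kubeconfig get nodes \
  -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```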

So, now let's deploy the OpenStack Cloud Controller Manager. Run the following steps:

wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/master/templates/env.rc -O /tmp/env.rc
chmod +x /tmp/env.rc
echo "Removing old cloud.conf files..."
rm -f /tmp/cloud.conf*
echo "Generating cloud.conf file from clouds.yaml..."
/tmp/env.rc ./clouds.yaml openstack
cp /tmp/cloud.conf* ./cloud.conf
echo "Creating secret in the kube-system ns..."
kubectl --kubeconfig=./hind-cluster.kubeconfig -n kube-system create secret generic cloud-config --from-file=cloud.conf
echo "Deploying cloud controller in the workload cluster..."
kubectl apply --kubeconfig=./hind-cluster.kubeconfig -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
kubectl apply --kubeconfig=./hind-cluster.kubeconfig -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
kubectl apply --kubeconfig=./hind-cluster.kubeconfig -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml

env.rc is the script used to generate the cloud.conf file from clouds.yaml. Download the clouds.yaml file from your OpenStack cloud, pass its path as the first argument to the script, and pass the cloud name (as defined inside clouds.yaml) as the second argument.

Once the cloud controller manager is installed, it removes the taint, and you should see the CoreDNS pods move from Pending to Running.
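You can confirm both effects from the CLI (k8s-app=kube-dns is the default CoreDNS label):

```shell
# The uninitialized taint should be gone once the OpenStack CCM has run
kubectl --kubeconfig=./hind-cluster.kubeconfig describe nodes | grep -i taint
# CoreDNS should move from Pending to Running
kubectl --kubeconfig=./hind-cluster.kubeconfig -n kube-system get pods -l k8s-app=kube-dns
```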

(screenshot: CoreDNS pods in the Running state)

Create a workload and test the cluster:

Now let's create a small nginx deployment and expose it with a LoadBalancer service, to see whether the OpenStack CCM creates a load balancer.

root@hind-cluster-control-plane-wggsv:~# kubectl create deploy nginx --image=nginx
deployment.apps/nginx created
root@hind-cluster-control-plane-wggsv:~# kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
nginx-66686b6766-549cc   0/1     ContainerCreating   0          4s
root@hind-cluster-control-plane-wggsv:~# kubectl expose deploy nginx --type=LoadBalancer --port=80
service/nginx exposed
root@hind-cluster-control-plane-wggsv:~# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        90m
nginx        LoadBalancer   10.99.239.196   <pending>     80:31426/TCP   4s
root@hind-cluster-control-plane-wggsv:~# 

Initially, the external IP will be in the pending state. Describe the service and verify:

root@hind-cluster-control-plane-wggsv:~# kubectl describe svc nginx 
Name:                     nginx
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.239.196
IPs:                      10.99.239.196
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31426/TCP
Endpoints:                192.168.69.193:80
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  12s   service-controller  Ensuring load balancer

Also verify on the OpenStack side:

(kolla-venv) root@control-1:~#  openstack loadbalancer list
+-------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+------------------+----------+
| id                                  | name                                | project_id                       | vip_address | provisioning_status | operating_status | provider |
+-------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+------------------+----------+
| 421cd8a9-a7ba-4f41-8f03-            | k8s-clusterapi-cluster-default-     | 2c2343fd2bd141e8a18229afefb9c4ff | 10.6.0.107  | ACTIVE              | ONLINE           | amphora  |
| 1cdc4d958358                        | hind-cluster-kubeapi                |                                  |             |                     |                  |          |
| 5289ba7f-dc63-4db2-a6a4-            | kube_service_kubernetes_default_ngi | 2c2343fd2bd141e8a18229afefb9c4ff | 10.6.0.203  | PENDING_CREATE      | ONLINE           | amphora  |
| f72d2e86f33d                        | nx                                  |                                  |             |                     |                  |          |
+-------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+------------------+----------+

Finally, after some time, the provisioning status should change to ACTIVE:

(kolla-venv) root@control-1:~#   openstack loadbalancer list
+-------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+------------------+----------+
| id                                  | name                                | project_id                       | vip_address | provisioning_status | operating_status | provider |
+-------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+------------------+----------+
| 421cd8a9-a7ba-4f41-8f03-            | k8s-clusterapi-cluster-default-     | 2c2343fd2bd141e8a18229afefb9c4ff | 10.6.0.107  | ACTIVE              | ONLINE           | amphora  |
| 1cdc4d958358                        | hind-cluster-kubeapi                |                                  |             |                     |                  |          |
| 5289ba7f-dc63-4db2-a6a4-            | kube_service_kubernetes_default_ngi | 2c2343fd2bd141e8a18229afefb9c4ff | 10.6.0.203  | ACTIVE              | ONLINE           | amphora  |
| f72d2e86f33d                        | nx                                  |                                  |             |                     |                  |          |
+-------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+------------------+----------+

And the status of the service should change as follows:

root@hind-cluster-control-plane-wggsv:~# kubectl describe svc nginx 
Name:                     nginx
Namespace:                default
Labels:                   app=nginx
Annotations:              loadbalancer.openstack.org/load-balancer-address: 10.31.0.48
                          loadbalancer.openstack.org/load-balancer-id: 5289ba7f-dc63-4db2-a6a4-f72d2e86f33d
Selector:                 app=nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.239.196
IPs:                      10.99.239.196
LoadBalancer Ingress:     10.31.0.48 (VIP)
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31426/TCP
Endpoints:                192.168.69.193:80
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:
  Type    Reason                Age                 From                Message
  ----    ------                ----                ----                -------
  Normal  EnsuringLoadBalancer  23s (x2 over 117s)  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   23s (x2 over 23s)   service-controller  Ensured load balancer

root@hind-cluster-control-plane-wggsv:~# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        95m
nginx        LoadBalancer   10.99.239.196   10.31.0.48    80:31426/TCP   4m49s

You can see that the service got an IP from the provider network. Let's verify it with curl:

root@hind-cluster-control-plane-wggsv:~# curl  10.31.0.48
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, nginx is successfully installed and working.
Further configuration is required for the web server, reverse proxy, 
API gateway, load balancer, content cache, or other features.</p>

<p>For online documentation and support please refer to
<a href="https://nginx.org/">nginx.org</a>.<br/>
To engage with the community please visit
<a href="https://community.nginx.org/">community.nginx.org</a>.<br/>
For enterprise grade support, professional services, additional 
security features and capabilities please refer to
<a href="https://f5.com/nginx">f5.com/nginx</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@hind-cluster-control-plane-wggsv:~# 

Now let's scale the deployment to 2 replicas and see the load balancing in action.

root@hind-cluster-control-plane-wggsv:~# kubectl scale deploy nginx --replicas=2
deployment.apps/nginx scaled

# In the first replica (via kubectl exec):
root@nginx-66686b6766-549cc:/usr/share/nginx/html# echo "Replica-1" > index.html 
# In the second replica:
root@nginx-66686b6766-mgks5:/usr/share/nginx/html# echo "Replica-2" > index.html 

Verify the load balancing:

root@hind-cluster-control-plane-wggsv:~# curl  10.31.0.48
Replica-2
root@hind-cluster-control-plane-wggsv:~# curl  10.31.0.48
Replica-1
root@hind-cluster-control-plane-wggsv:~# curl  10.31.0.48
Replica-1
root@hind-cluster-control-plane-wggsv:~# curl  10.31.0.48
Replica-1
root@hind-cluster-control-plane-wggsv:~# curl  10.31.0.48
Replica-1
root@hind-cluster-control-plane-wggsv:~# curl  10.31.0.48
Replica-2
root@hind-cluster-control-plane-wggsv:~# 

That's all for now. If you found this helpful or got stuck somewhere, drop a comment below. This stuff took me a long time to piece together — happy to help you shortcut that journey.
