In my last post, I wrote about how we can run an OpenShift IPI install on KVM. In this (relatively shorter) document, I'll talk about my experience scaling out a worker node using a semi-automated method.
We will be breaking this down into 2 steps:
- The Preparation
- The Scaling
The Preparation
- The prerequisite to start is, obviously, a server (a blank VM in KVM, in our case).
- A DNS entry for the new worker in our Antagonist database.
- The sushy webserver listening on port 8000; this is our iDRAC/iLO emulation. More on that here
- And a bit of patience (did I pull off a Robin Sharma here?)
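As a quick reference for the sushy prerequisite, the Redfish emulator from the sushy-tools project can be started like this (a sketch; the libvirt URI and port are assumptions that match my lab, yours may differ):

```shell
# Start the sushy-tools Redfish emulator against the local libvirt
# instance, listening on port 8000 (the port our BMC addresses use).
sushy-emulator --port 8000 --libvirt-uri "qemu:///system"
```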
The Scaling:
Before we start the actual scaling, let me walk through the overall process:
- We create a VM in KVM (this is as good as having a spare bare-metal host)
- We create two manifests and apply them. These will remind you of the install-config.yaml we used here
- Once the manifests are applied, the machine is booted as a result of our BMC magic (via sushy and Redfish) and the installation is kicked off.
- Once the install is complete, the machine (node) joins the cluster.
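The first step above can be sketched with virt-install; the name, sizes, and network here are illustrative assumptions for my lab, not hard requirements:

```shell
# Create a blank, powered-off VM that will act as our spare
# "bare-metal" host (names and sizes are illustrative).
virt-install --name worker3 \
  --memory 16384 --vcpus 8 \
  --disk size=120 \
  --network network=default \
  --os-variant rhel9.0 \
  --pxe --noautoconsole --noreboot
```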
Gather some data:
All (okay, most) of the work will happen in the openshift-machine-api namespace.
Let's check the current nodes that we have:
oc get nodes
NAME          STATUS   ROLES                  AGE     VERSION
master        Ready    control-plane,master   3d20h   v1.27.10+28ed2d7
master2       Ready    control-plane,master   3d20h   v1.27.10+28ed2d7
master3       Ready    control-plane,master   3d20h   v1.27.10+28ed2d7
worker2.lab   Ready    worker                 2d22h   v1.27.10+28ed2d7
- It's important to check the BareMetalHost resources we currently have.
oc get bmh -n openshift-machine-api
NAME      STATE                    CONSUMER                         ONLINE   ERROR   AGE
master    externally provisioned   mycluster-7ln8n-master-0         true             3d21h
master2   externally provisioned   mycluster-7ln8n-master-1         true             3d21h
master3   externally provisioned   mycluster-7ln8n-master-2         true             3d21h
worker2   provisioned              mycluster-7ln8n-worker-0-hpm5j   true             2d22h
- A note on MachineSets. A MachineSet represents a "group" of worker nodes and is used to scale compute (workers). Make note of the name of the MachineSet; this is the one we will be scaling out.
oc get machinesets -n openshift-machine-api
NAME                       DESIRED   CURRENT   READY   AVAILABLE   AGE
mycluster-7ln8n-worker-0   1         1         1       1           3d21h
The manifests:
We need two manifests here, and they are largely inspired by the install-config.yaml we had earlier:
Snippet of install-config.yaml:
- name: worker3
  role: worker
  bmc:
    address: redfish-virtualmedia+http://192.168.122.1:8000/redfish/v1/Systems/aa12a91d-56f9-41f4-b5ea-e5001dae179c
    username: admin
    password: password
  bootMACAddress: 52:54:00:fd:5e:1d
  rootDeviceHints:
    deviceName: /dev/vda
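Both the UUID in the Redfish address and the bootMACAddress come from the VM itself. With libvirt they can be looked up like this (assuming the domain is named worker3, as in my lab):

```shell
# The Systems/<uuid> segment of the BMC address is the libvirt domain UUID
virsh domuuid worker3

# The bootMACAddress is the MAC of the VM's NIC, shown by domiflist
virsh domiflist worker3
```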
Our manifests do exactly what the snippet above did during installation, but split into two parts:
- Creates a BMH
- Creates a Secret to hold the console credentials.
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker3
  namespace: openshift-machine-api
spec:
  online: true
  bootMACAddress: 52:54:00:c4:0e:6c
  bmc:
    address: "redfish-virtualmedia+http://192.168.122.1:8000/redfish/v1/Systems/d9fea2de-6fd9-4a44-99fc-95f32b610407"
    credentialsName: worker3-bmc-secret
  rootDeviceHints:
    deviceName: /dev/vda
---
apiVersion: v1
kind: Secret
metadata:
  name: worker3-bmc-secret
  namespace: openshift-machine-api
type: Opaque
stringData:
  username: admin
  password: password
- Apply the manifests:
oc apply -f .
The wait (Part 1):
After you apply the manifests, the host will transition through 4 states:
- Registering
- Inspecting
- Available
- Provisioned
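You can watch these transitions live with a watch flag (a convenience, not part of the procedure):

```shell
# Watch the BareMetalHost states as the new worker moves through
# Registering -> Inspecting -> Available -> Provisioned
oc get bmh -n openshift-machine-api -w
```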
For brevity we will jump to the "Available" state, as this is the most interesting part. At this point, if you log in to the worker using 'ssh core@IP', you will see that our new worker has been booted from a CoreOS ISO by the installer. This is OpenShift's way of telling us "The system is available, what do you want me to do?" (And this also takes us to the final part.)
And we say we want to scale. This is where the MachineSet name I asked you to make note of comes in (if you haven't fallen asleep by now!). The "oc scale" command uses the MachineSet name to scale the compute, to 2 replicas in our case.
oc scale machineset/mycluster-7ln8n-worker-0 -n openshift-machine-api --replicas=2
Now, when you log in to the worker machine, you will see that the installation has started:
Apr 10 14:41:47 worker3 podman[1179]: 2026-04-10 14:41:47.517 1 DEBUG ironic_coreos_install [-] coreos-installer: Read disk 868.2 MiB/2.4 GiB (35%) _run_install /usr/lib/python3.9/site-packages/ironic_coreos_install.py:271
Apr 10 14:41:48 worker3 ironic-agent[1191]: 2026-04-10 14:41:48.523 1 DEBUG ironic_coreos_install [-] coreos-installer: Read disk 883.6 MiB/2.4 GiB (35%) _run_install /usr/lib/python3.9/site-packages/ironic_coreos_install.py:271
The wait (final part, I promise!):
In about 15-20 minutes, you will see that the machine state has transitioned from "Available" to "Provisioned":
oc get bmh -n openshift-machine-api
NAME      STATE                    CONSUMER                         ONLINE   ERROR   AGE
master    externally provisioned   mycluster-7ln8n-master-0         true             4d6h
master2   externally provisioned   mycluster-7ln8n-master-1         true             4d6h
master3   externally provisioned   mycluster-7ln8n-master-2         true             4d6h
worker2   provisioned              mycluster-7ln8n-worker-0-hpm5j   true             3d7h
worker3   provisioned              mycluster-7ln8n-worker-0-k67h7   true             7h41m
The "oc get nodes" output should also show that the new worker is available for use.
oc get nodes
NAME          STATUS   ROLES                  AGE     VERSION
master        Ready    control-plane,master   5d8h    v1.27.10+28ed2d7
master2       Ready    control-plane,master   5d8h    v1.27.10+28ed2d7
master3       Ready    control-plane,master   5d8h    v1.27.10+28ed2d7
worker3       Ready    worker                 9h      v1.27.10+28ed2d7
worker2.lab   Ready    worker                 4d10h   v1.27.10+28ed2d7
That's all folks!!
While the procedure looks lengthy when explained and documented, it essentially boils down to creating the manifests and running the oc scale command. If you have ever scaled compute manually, you will appreciate how much work this method cuts down!