<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ashish Nair</title>
    <description>The latest articles on DEV Community by Ashish Nair (@ashish_nair_d9b10ba4f8126).</description>
    <link>https://dev.to/ashish_nair_d9b10ba4f8126</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2176306%2F7d07f10c-0efc-4f5d-a8cd-a6ed1a06b7ec.png</url>
      <title>DEV Community: Ashish Nair</title>
      <link>https://dev.to/ashish_nair_d9b10ba4f8126</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ashish_nair_d9b10ba4f8126"/>
    <language>en</language>
    <item>
      <title>Autoscaling in OpenShift with Cluster Autoscaler and Machine Autoscaler</title>
      <dc:creator>Ashish Nair</dc:creator>
      <pubDate>Mon, 27 Apr 2026 04:56:21 +0000</pubDate>
      <link>https://dev.to/ashish_nair_d9b10ba4f8126/autoscaling-in-openshift-with-cluster-autoscaler-and-machine-autoscaler-2i9h</link>
      <guid>https://dev.to/ashish_nair_d9b10ba4f8126/autoscaling-in-openshift-with-cluster-autoscaler-and-machine-autoscaler-2i9h</guid>
      <description>&lt;p&gt;In my previous post, I covered how to scale compute (worker) nodes in OpenShift using a semi-automated approach. While OpenShift handled most of the heavy lifting—such as powering on the node via BMC, installing RHCOS, and joining the cluster—the scaling action itself still required manual intervention.&lt;/p&gt;

&lt;p&gt;This document focuses on removing that manual step altogether. Specifically, it explores how Cluster Autoscaler and Machine Autoscaler work together to enable automatic, workload-driven scaling of compute nodes. While technologies like Cluster API and Machine API provide the underlying framework for managing machine lifecycles, the real decision-making around when to scale happens at the autoscaler layer.&lt;/p&gt;

&lt;p&gt;Before I dive in, let me explain the building blocks in a line each (so you don't think of me as a complete idiot!).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster Autoscaler&lt;/strong&gt; - responsible for observing the state of workloads in the cluster. It continuously monitors pending and unschedulable pods and determines whether adding (or removing) nodes would help satisfy resource requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Machine Autoscaler&lt;/strong&gt; - acts as the bridge between high-level scaling decisions and infrastructure changes. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster API&lt;/strong&gt;: Cluster API is a Kubernetes project that provides a declarative way to create, manage, and scale Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Machine API&lt;/strong&gt;: Red Hat’s opinionated implementation of Cluster API concepts. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MachineSets&lt;/strong&gt;: The equivalent of a ReplicaSet — but for nodes instead of pods. A MachineSet defines how a worker node should be created.&lt;/p&gt;

&lt;p&gt;The way these different components work together is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Cluster Autoscaler watches for unschedulable pods.&lt;/li&gt;
&lt;li&gt;It informs the Machine Autoscaler, which adjusts the MachineSet's replica count within its configured limits. Simply speaking, it runs the equivalent of "oc scale machineset/&amp;lt;name&amp;gt; --replicas=2".
&lt;/li&gt;
&lt;li&gt;Our hardworking employee, the Machine API, prepares the new node (with help from the BMH, of course).&lt;/li&gt;
&lt;li&gt;Pods get scheduled to the new worker node.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We will see how to perform this teeny-tiny set of 4 steps in OpenShift:&lt;/p&gt;

&lt;p&gt;Note: You can switch to the "openshift-machine-api" namespace, as most of the steps will happen there:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc project openshift-machine-api

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's check our current nodes, MachineSets and BMHs (that's BareMetalHosts, not Jasprit Bumrah!)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc get nodes
NAME      STATUS   ROLES                  AGE    VERSION
master    Ready    control-plane,master   8d     v1.27.10+28ed2d7
master2   Ready    control-plane,master   8d     v1.27.10+28ed2d7
master3   Ready    control-plane,master   8d     v1.27.10+28ed2d7
worker1   Ready    worker                 3d7h   v1.27.10+28ed2d7

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc get machinesets
NAME                       DESIRED   CURRENT   READY   AVAILABLE   AGE
mycluster-7ln8n-worker-0   1         1         1       1           8d

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc get bmh
NAME      STATE                    CONSUMER                         ONLINE   ERROR   AGE
master    externally provisioned   mycluster-7ln8n-master-0         true             8d
master2   externally provisioned   mycluster-7ln8n-master-1         true             8d
master3   externally provisioned   mycluster-7ln8n-master-2         true             8d
worker1   provisioned              mycluster-7ln8n-worker-0-qv4cn   true             3d10h

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's create our ClusterAutoscaler and MachineAutoscaler manifests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc apply -f machineautoscaler.yaml 
machineautoscaler.autoscaling.openshift.io/worker-autoscaler created
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc apply -f clusautoscaler.yaml 
clusterautoscaler.autoscaling.openshift.io/default created
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: You can construct the MachineAutoscaler and ClusterAutoscaler manifests from Red Hat's official &lt;a href="https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/machine_management/applying-autoscaling" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
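
&lt;p&gt;For reference, here is a minimal sketch of the pair I applied. Treat the MachineAutoscaler's replica limits and the ClusterAutoscaler's maxNodesTotal as assumptions to tune for your own lab; the ClusterAutoscaler, however, must be named "default":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-autoscaler
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 3                     # assumption: a sane cap for a small lab
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: mycluster-7ln8n-worker-0   # the MachineSet we listed above
---
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default                      # this exact name is required
spec:
  resourceLimits:
    maxNodesTotal: 10                # assumption: adjust to your lab
  scaleDown:
    enabled: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;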

&lt;p&gt;Let's add some load to our cluster. This is just a simple manifest that requests 1Gi of memory per pod and spawns 20+ replicas.&lt;br&gt;
&lt;/p&gt;
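
&lt;p&gt;Something like this, as a sketch; the image and the exact replica count are assumptions, and any container that requests 1Gi per pod will do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: load-test
spec:
  replicas: 20                  # assumption: enough to overflow one worker
  selector:
    matchLabels:
      app: load-test
  template:
    metadata:
      labels:
        app: load-test
    spec:
      containers:
      - name: stress            # matches the container name in the events later
        image: registry.access.redhat.com/ubi9/ubi   # assumption: any small image works
        command: ["sleep", "infinity"]
        resources:
          requests:
            memory: 1Gi         # the request that forces the scale-out
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;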

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc apply -f dep.yaml 
deployment.apps/load-test created
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This manifest has created some load on the current worker node, and the cluster probably needs another node to accommodate all the pods. Hence a lot of them are now in the "Pending" state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;load-test-65777f99f7-458kp                            0/1     Pending   0          2m22s   &amp;lt;none&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;            &lt;/span&gt;&amp;lt;none&amp;gt;    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;span class="gp"&gt;load-test-65777f99f7-4p72l                            1/1     Running   0          2m22s   10.128.2.43       worker1   &amp;lt;none&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;           &lt;/span&gt;&amp;lt;none&amp;gt;
&lt;span class="gp"&gt;load-test-65777f99f7-4v6hq                            0/1     Pending   0          2m22s   &amp;lt;none&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;            &lt;/span&gt;&amp;lt;none&amp;gt;    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;span class="gp"&gt;load-test-65777f99f7-84n79                            1/1     Running   0          2m22s   10.128.2.39       worker1   &amp;lt;none&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;           &lt;/span&gt;&amp;lt;none&amp;gt;
&lt;span class="gp"&gt;load-test-65777f99f7-8955k                            0/1     Pending   0          2m22s   &amp;lt;none&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;            &lt;/span&gt;&amp;lt;none&amp;gt;    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;span class="gp"&gt;load-test-65777f99f7-9h4j6                            0/1     Pending   0          2m22s   &amp;lt;none&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;            &lt;/span&gt;&amp;lt;none&amp;gt;    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our autoscalers have updated our MachineSet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc get machineset
NAME                       DESIRED   CURRENT   READY   AVAILABLE   AGE
mycluster-7ln8n-worker-0   2         2         1       1           8d
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which has started provisioning a new node for us:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc get bmh
NAME      STATE                    CONSUMER                         ONLINE   ERROR   AGE
master    externally provisioned   mycluster-7ln8n-master-0         true             8d
master2   externally provisioned   mycluster-7ln8n-master-1         true             8d
master3   externally provisioned   mycluster-7ln8n-master-2         true             8d
worker2   provisioning             mycluster-7ln8n-worker-0-9vszs   true             12m
worker1   provisioned              mycluster-7ln8n-worker-0-qv4cn   true             3d10h
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In no time (OK, around 20 minutes) you should see your new worker node ready to take workloads. The pending pods will gradually move to this new worker.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master    Ready    control-plane,master   8d      v1.27.10+28ed2d7
master2   Ready    control-plane,master   8d      v1.27.10+28ed2d7
master3   Ready    control-plane,master   8d      v1.27.10+28ed2d7
worker1   Ready    worker                 3d8h    v1.27.10+28ed2d7
worker2   Ready    worker                 3m14s   v1.27.10+28ed2d7
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "oc get pods -o wide" will actually tell you that the workloads are actually moving to this new worker node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;load-test-65777f99f7-458kp                            1/1     Running   0          33m     10.131.0.5        worker2   &amp;lt;none&amp;gt;           
load-test-65777f99f7-4p72l                            1/1     Running   0          33m     10.128.2.43       worker1   &amp;lt;none&amp;gt; 
load-test-65777f99f7-84n79                            1/1     Running   0          33m     10.128.2.39       worker1   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
load-test-65777f99f7-9h4j6                            1/1     Running   0          33m     10.131.0.6        worker2   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
load-test-65777f99f7-ctvc7                            1/1     Running   0          33m     10.128.2.41       worker1   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The autoscalers not only work well for scaling out, they also work pretty well for scaling in.&lt;/p&gt;

&lt;p&gt;To test this out I deleted the deployment I had created earlier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc delete deployment/load-test
deployment.apps "load-test" deleted
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can also check the event logs using 'oc get events'. They tell us the entire flow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;24m                  Normal    Killing                  Pod/load-test-65777f99f7-ctvc7                     Stopping container stress
24m                  Normal    Killing                  Pod/load-test-65777f99f7-hgf2n                     Stopping container stress

13m                  Normal    ScaleDownEmpty           ConfigMap/cluster-autoscaler-status                Scale-down: removing empty node "worker2"
13m                  Normal    ScaleDownEmpty           ConfigMap/cluster-autoscaler-status                Scale-down: empty node worker2 removed
13m                  Normal    DrainProceeds            Machine/mycluster-7ln8n-worker-0-9vszs             Node drain proceeds
13m (x10 over 66m)   Normal    SuccessfulUpdate         MachineAutoscaler/worker-autoscaler                Updated MachineAutoscaler target: openshift-machine-api/mycluster-7ln8n-worker-0
13m                  Normal    Deleted                  Machine/mycluster-7ln8n-worker-0-9vszs             Node "worker2" drained
13m                  Normal    DrainSucceeded           Machine/mycluster-7ln8n-worker-0-9vszs             Node drain succeeded
13m                  Normal    DeprovisioningStarted    BareMetalHost/worker2                              Image deprovisioning started

8m34s                Normal    DeprovisioningComplete   BareMetalHost/worker2                              Image deprovisioning completed
8m33s                Normal    PowerOff                 BareMetalHost/worker2                              Host soft powered off
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In a nutshell, it has:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Killed the containers once the deployment was deleted.&lt;/li&gt;
&lt;li&gt;Scaled the MachineSet back down (we can verify this in the machineset output below).&lt;/li&gt;
&lt;li&gt;Drained the node.&lt;/li&gt;
&lt;li&gt;Deprovisioned and powered off the node.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;user@server1:~/test/manifests$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;oc get machineset &lt;span class="nt"&gt;-n&lt;/span&gt; openshift-machine-api
&lt;span class="go"&gt;NAME                       DESIRED   CURRENT   READY   AVAILABLE   AGE
mycluster-7ln8n-worker-0   1         1         1       1           8d
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;virsh list --all
 Id   Name      State
--------------------------
 1    master    running
 2    master2   running
 3    master3   running
 4    worker1   running
 -    worker2   shut off
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>automation</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Scaling worker nodes in OpenShift with machine-api</title>
      <dc:creator>Ashish Nair</dc:creator>
      <pubDate>Mon, 13 Apr 2026 05:59:39 +0000</pubDate>
      <link>https://dev.to/ashish_nair_d9b10ba4f8126/scaling-worker-nodes-in-openshift-with-machine-api-38f0</link>
      <guid>https://dev.to/ashish_nair_d9b10ba4f8126/scaling-worker-nodes-in-openshift-with-machine-api-38f0</guid>
      <description>&lt;p&gt;In my last post, I wrote about how we can run an Openshift IPI install on &lt;a href="https://dev.to/ashish_nair_d9b10ba4f8126/deploying-openshift-ipi-on-kvm-baremetal-simulation-with-redfish-sushy-2o60"&gt;KVM&lt;/a&gt;. In this document(Which is relatively shorter), I'll talk about my experience in scaling a worker node (a semi-automated method). &lt;/p&gt;

&lt;p&gt;We will be breaking this down into 2 steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Preparation&lt;/li&gt;
&lt;li&gt;The Scaling&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The Preparation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The prerequisite to start is obviously a server (a blank VM in KVM, in our case).&lt;/li&gt;
&lt;li&gt;A DNS entry in our antagonist's database (a sketch follows this list).&lt;/li&gt;
&lt;li&gt;The sushy webserver listening on port 8000; this is our iDRAC/iLO emulation. More on that &lt;a href="https://dev.to/ashish_nair_d9b10ba4f8126/deploying-openshift-ipi-on-kvm-baremetal-simulation-with-redfish-sushy-2o60"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;And a bit of patience (did I pull off a Robin Sharma here?)&lt;/li&gt;
&lt;/ul&gt;
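
&lt;p&gt;For the DNS entry, you can poke our antagonist through libvirt. A sketch, assuming an illustrative hostname and IP for the new worker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# add a DNS record for the new worker to libvirt's "default" network
virsh net-update default add dns-host \
  "&amp;lt;host ip='192.168.122.50'&amp;gt;&amp;lt;hostname&amp;gt;worker3.lab&amp;lt;/hostname&amp;gt;&amp;lt;/host&amp;gt;" \
  --live --config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;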

&lt;h3&gt;
  
  
  The Scaling:
&lt;/h3&gt;

&lt;p&gt;Before we start the scaling, let me walk through the actual process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We create a VM in KVM (this is as good as having a spare baremetal host).&lt;/li&gt;
&lt;li&gt;We create a manifest (actually two) and apply it. This manifest will remind you of the install-config.yaml we used &lt;a href="https://dev.to/ashish_nair_d9b10ba4f8126/deploying-openshift-ipi-on-kvm-baremetal-simulation-with-redfish-sushy-2o60"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Once the manifest is applied, the machine is booted by our BMC magic (via sushy and Redfish) and the installation is kicked off.&lt;/li&gt;
&lt;li&gt;Once the install is complete, the machine (node) joins the cluster.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Gather some data:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;All (okay, most) of the work will happen in the &lt;strong&gt;openshift-machine-api&lt;/strong&gt; namespace.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Let's check the current nodes that we have&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc get nodes
NAME          STATUS   ROLES                  AGE     VERSION
master        Ready    control-plane,master   3d20h   v1.27.10+28ed2d7
master2       Ready    control-plane,master   3d20h   v1.27.10+28ed2d7
master3       Ready    control-plane,master   3d20h   v1.27.10+28ed2d7
worker2.lab   Ready    worker                 2d22h   v1.27.10+28ed2d7
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;It's important to check the Baremetals we currently have.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc get bmh -n openshift-machine-api
NAME      STATE                    CONSUMER                         ONLINE   ERROR   AGE
master    externally provisioned   mycluster-7ln8n-master-0         true             3d21h
master2   externally provisioned   mycluster-7ln8n-master-1         true             3d21h
master3   externally provisioned   mycluster-7ln8n-master-2         true             3d21h
worker2   provisioned              mycluster-7ln8n-worker-0-hpm5j   true             2d22h

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A note on MachineSets.&lt;/strong&gt; MachineSets describe the "group" of worker nodes we have; they are used to scale compute (workers). Make a note of the name of the MachineSet below, as this is the one we will be scaling out.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc get machinesets -n openshift-machine-api
NAME                       DESIRED   CURRENT   READY   AVAILABLE   AGE
mycluster-7ln8n-worker-0   1         1         1       1           3d21h
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The manifests:
&lt;/h3&gt;

&lt;p&gt;We need two manifests here, and they're largely inspired by the install-config.yaml we had earlier:&lt;/p&gt;

&lt;p&gt;Snippet of install-config.yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt; &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker3&lt;/span&gt;
      &lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
      &lt;span class="s"&gt;bmc&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redfish-virtualmedia+http://192.168.122.1:8000/redfish/v1/Systems/aa12a91d-56f9-41f4-b5ea-e5001dae179c&lt;/span&gt;
        &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admin&lt;/span&gt;
        &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
      &lt;span class="na"&gt;bootMACAddress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;52:54:00:fd:5e:1d&lt;/span&gt;
      &lt;span class="na"&gt;rootDeviceHints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;deviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/dev/vda&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our manifests do exactly what the snippet above did during installation, but we split it into two:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;One creates a BMH (BareMetalHost).&lt;/li&gt;
&lt;li&gt;The other creates a Secret to hold the BMC credentials.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metal3.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;BareMetalHost&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker3&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;openshift-machine-api&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;online&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;bootMACAddress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;52:54:00:c4:0e:6c&lt;/span&gt;
  &lt;span class="na"&gt;bmc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;redfish-virtualmedia+http://192.168.122.1:8000/redfish/v1/Systems/d9fea2de-6fd9-4a44-99fc-95f32b610407"&lt;/span&gt;
    &lt;span class="na"&gt;credentialsName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker3-bmc-secret&lt;/span&gt;
  &lt;span class="na"&gt;rootDeviceHints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;deviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/dev/vda&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker3-bmc-secret&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;openshift-machine-api&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admin&lt;/span&gt;
  &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Apply the manifests:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc apply -f .
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The wait (Part 1):
&lt;/h3&gt;

&lt;p&gt;After you apply the manifests, the node will transition through 4 states (you can watch it happen; a sketch follows this list):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Registering&lt;/li&gt;
&lt;li&gt;Inspecting&lt;/li&gt;
&lt;li&gt;Available&lt;/li&gt;
&lt;li&gt;Provisioned&lt;/li&gt;
&lt;/ol&gt;
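
&lt;p&gt;A simple way to watch these transitions as they happen:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc get bmh -n openshift-machine-api -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;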

&lt;p&gt;For brevity we will jump to the "Available" state, as this is the most interesting part. At this point, if you log in to the worker using 'ssh core@IP', you will see that our new worker has been booted from a CoreOS ISO by the installer. This is OpenShift's way of telling us, "The system is available, what do you want me to do?" (And this will also take us to the final part.)&lt;/p&gt;

&lt;p&gt;And we say we want to scale. This is the MachineSet name I asked you to make note of (if you haven't fallen asleep by now!). The "oc scale" command uses the MachineSet name to scale the compute pool, to 2 replicas in our case.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc scale machineset/mycluster-7ln8n-worker-0 -n openshift-machine-api --replicas=2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, when you log in to the worker machine, you will see that the installation has started:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;all [-] coreos-installer: Read disk 868.2 MiB/2.4 GiB (35%) _run_install /usr/lib/python3.9/site-packages/ironic_coreos_install.py:271
Apr 10 14:41:47 worker3 podman[1179]: 2026-04-10 14:41:47.517 1 DEBUG ironic_coreos_install [-] coreos-installer: Read disk 868.2 MiB/2.4 GiB (35%) _run_install /usr/lib/python3.9/site-packages/ironic_coreos_install.py:271
Apr 10 14:41:48 worker3 ironic-agent[1191]: 2026-04-10 14:41:48.523 1 DEBUG ironic_coreos_install [-] coreos-installer: Read disk 883.6 MiB/2.4 GiB (35%) _run_install /usr/lib/python3.9/site-packages/ironic_coreos_install.py:271
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The wait (Final part, I promise!):
&lt;/h3&gt;

&lt;p&gt;In about 15-20 minutes, you will see that the machine's state has transitioned from "Available" to "Provisioned".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc get bmh -n openshift-machine-api
NAME      STATE                    CONSUMER                         ONLINE   ERROR   AGE
master    externally provisioned   mycluster-7ln8n-master-0         true             4d6h
master2   externally provisioned   mycluster-7ln8n-master-1         true             4d6h
master3   externally provisioned   mycluster-7ln8n-master-2         true             4d6h
worker2   provisioned              mycluster-7ln8n-worker-0-hpm5j   true             3d7h
worker3   provisioned              mycluster-7ln8n-worker-0-k67h7   true             7h41m
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "get nodes" should also show it's available for use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;oc get nodes
NAME          STATUS   ROLES                  AGE     VERSION
master        Ready    control-plane,master   5d8h    v1.27.10+28ed2d7
master2       Ready    control-plane,master   5d8h    v1.27.10+28ed2d7
master3       Ready    control-plane,master   5d8h    v1.27.10+28ed2d7
worker3       Ready    worker                 9h      v1.27.10+28ed2d7
worker2.lab   Ready    worker                 4d10h   v1.27.10+28ed2d7
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's all &lt;em&gt;folks&lt;/em&gt;!!&lt;/p&gt;

&lt;p&gt;While the procedure looks lengthy when explained and documented, it is essentially just creating the manifests and running the oc scale command. If you have ever scaled compute manually, you will appreciate how much work this method cuts down!&lt;/p&gt;

</description>
      <category>automation</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Deploying OpenShift IPI on KVM (Baremetal Simulation with Redfish &amp; Sushy)</title>
      <dc:creator>Ashish Nair</dc:creator>
      <pubDate>Tue, 07 Apr 2026 19:04:42 +0000</pubDate>
      <link>https://dev.to/ashish_nair_d9b10ba4f8126/deploying-openshift-ipi-on-kvm-baremetal-simulation-with-redfish-sushy-2o60</link>
      <guid>https://dev.to/ashish_nair_d9b10ba4f8126/deploying-openshift-ipi-on-kvm-baremetal-simulation-with-redfish-sushy-2o60</guid>
      <description>&lt;p&gt;If the article gave you a "Yet Another Openshift Setup Guide"(Sorry YAML, I stole some letters) feel, I don't blame you!(This is an indication of how much free time i have, lol!) While this isn't a typical How-To guide (I lied!) ,I'll tell you why this is different - Openshift doesn't support IPI method of installation on KVM (ironical huh? not supporting their own siblings) but there's a hack that allows you to do it(ofcourse! you can only use it for labs!!)&lt;/p&gt;

&lt;h2&gt;
  
  
  The Layout (okay! Architecture diagram)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipu0xxo9pdclj6qgi7ro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipu0xxo9pdclj6qgi7ro.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The components
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A KVM host (our protagonist!)&lt;/li&gt;
&lt;li&gt;dnsmasq, built into KVM, A.K.A. our antagonist!&lt;/li&gt;
&lt;li&gt;Sushy (not the dish; this is a Redfish emulator)&lt;/li&gt;
&lt;li&gt;VMs (the masters and the workers)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  A note on prerequisites and Hardware requirements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You can check the hardware requirements for OpenShift in Red Hat's or OKD's official documentation &lt;a href="https://docs.okd.io/latest/installing/installing_bare_metal/ipi/ipi-install-prerequisites.html#installation-minimum-resource-requirements_ipi-install-prerequisites" rel="noopener noreferrer"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  1. Assembling components for the Virtual Machines
&lt;/h3&gt;

&lt;h2&gt;
  
  
  Adding disks to our Masters and Workers.
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;qemu-img create &lt;span class="nt"&gt;-f&lt;/span&gt; qcow2 /var/lib/libvirt/images/master-1.qcow2 120G
qemu-img create &lt;span class="nt"&gt;-f&lt;/span&gt; qcow2 /var/lib/libvirt/images/master-2.qcow2 120G
qemu-img create &lt;span class="nt"&gt;-f&lt;/span&gt; qcow2 /var/lib/libvirt/images/master-3.qcow2 120G
qemu-img create &lt;span class="nt"&gt;-f&lt;/span&gt; qcow2 /var/lib/libvirt/images/worker-1.qcow2 120G
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring a Network in Libvirt
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;This is the most &lt;strong&gt;critical&lt;/strong&gt; part of the setup. If this fails, the install will &lt;strong&gt;fail&lt;/strong&gt; and frustrate you to the core!&lt;/li&gt;
&lt;li&gt;Save this into a file, probably default.xml&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs96dgztdxx2yzxd33cvo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs96dgztdxx2yzxd33cvo.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apply it
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;virsh net-define default.xml
virsh net-start default
virsh net-autostart default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating the VMs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;It's time to create the VMs using the KVM console, but don't boot them yet! We will have our installer boot these machines for us via &lt;a href="https://github.com/openstack/sushy-tools" rel="noopener noreferrer"&gt;Redfish and Sushy&lt;/a&gt;. In other words, a poor man's iDRAC/iLO, but only for power management. (A CLI sketch follows this list.)&lt;/li&gt;
&lt;/ul&gt;
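
&lt;p&gt;If you prefer the CLI over the KVM console, here is one way to define a VM without booting it; a sketch, where the sizing is an assumption and the MAC must match the bootMACAddress you will put in install-config.yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# print the domain XML instead of starting an install, then define it
virt-install --name master --memory 16384 --vcpus 8 \
  --disk /var/lib/libvirt/images/master-1.qcow2,format=qcow2 \
  --network network=default,mac=52:54:00:3d:30:b5 \
  --boot uefi --osinfo generic --pxe \
  --print-xml &amp;gt; master.xml
virsh define master.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;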

&lt;h2&gt;
  
  
  Setting up Sushy
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;create a virtual environment to install the Python module
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv ~/sushy-env
&lt;span class="nb"&gt;source&lt;/span&gt; ~/sushy-env/bin/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install the sushy module
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;sushy-tools
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;start sushy
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sushy-emulator &lt;span class="nt"&gt;-i&lt;/span&gt; 192.168.122.1 &lt;span class="nt"&gt;--port&lt;/span&gt; 8000 &lt;span class="nt"&gt;--libvirt-uri&lt;/span&gt; qemu:///system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Testing power on/off of the VMs using the tool we just installed.
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;validate the Redfish API
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://192.168.122.1:8000/redfish/v1/Systems
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Meanwhile, the terminal running the emulator will show something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;sushy-emulator &lt;span class="nt"&gt;-i&lt;/span&gt; 192.168.122.1 &lt;span class="nt"&gt;--port&lt;/span&gt; 8000 &lt;span class="nt"&gt;--libvirt-uri&lt;/span&gt; qemu:///system
&lt;span class="go"&gt; * Serving Flask app 'sushy_tools.emulator.main'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://192.168.122.1:8000
Press CTRL+C to quit
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Validate power control. The curl command we ran above gives you an ID for every system, which might look like 58ec1279-e393-4dba-a7b4-e8ea37c0d6da; replace that ID in the command below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://192.168.122.1:8000/redfish/v1/Systems/&amp;lt;ID&amp;gt;/Actions/ComputerSystem.Reset &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"ResetType": "On"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you check the KVM console, the system will be powered on. Power it off again! (I swear I'm not trying to irritate you!)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Redfish power operations require firmware files under /usr/share/OVMF. My system was missing the secure boot files, so I had to create the soft links below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /usr/share/OVMF/OVMF_VARS_4M.fd /usr/share/OVMF/OVMF_VARS.fd
&lt;span class="nb"&gt;sudo ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /usr/share/OVMF/OVMF_CODE_4M.fd /usr/share/OVMF/OVMF_CODE.fd
&lt;span class="nb"&gt;sudo ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /usr/share/OVMF/OVMF_CODE_4M.secboot.fd /usr/share/OVMF/OVMF_CODE_4M.ms.fd
&lt;span class="nb"&gt;sudo ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /usr/share/OVMF/OVMF_CODE_4M.secboot.fd /usr/share/OVMF/OVMF_CODE.secboot.fd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Building the installer
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Installing the tools to build the installer
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install golang git make gcc g++ libvirt-dev pkg-config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;clone the repo
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/openshift/installer.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;compile the installer with &lt;strong&gt;TAGS=libvirt hack/build.sh&lt;/strong&gt;, A.K.A. The Hack
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd installer
TAGS=libvirt hack/build.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;copy the installer to /usr/local/bin
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cp bin/openshift-install /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  3. The installation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create directories for the install
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir ~/ocp-install
cd ~/ocp-install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;create install-config.yaml
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
baseDomain: lab
metadata:
  name: mycluster

controlPlane:
  name: master
  replicas: 3

compute:
- name: worker
  replicas: 2

networking:
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: 192.168.122.0/24

platform:
  baremetal:
    externalBridge: "virbr0"
    apiVIP: 192.168.122.10
    ingressVIP: 192.168.122.11
    provisioningNetwork: "Disabled"

    hosts:
    - name: master
      role: master
      bmc:
        address: redfish-virtualmedia+http://192.168.122.1:8000/redfish/v1/Systems/&amp;lt;ID&amp;gt;
        username: admin
        password: password
      bootMACAddress: 52:54:00:3d:30:b5
      rootDeviceHints:
        deviceName: /dev/vda
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Note: Populate fields for all your masters and workers, and add the pullSecret (from the Red Hat portal) and sshKey (your SSH public key from your home directory) fields. The Ignition configs (RHCOS's answer to Kickstart) will bake these into the RHCOS ISO, and you should be able to log in to your masters/workers via ssh as core@master/worker using the key you entered above.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kick off the installation (while you're inside ~/ocp-install, which also houses your install-config.yaml)
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openshift-install create cluster --dir . --log-level=DEBUG
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;This will create a bootstrap node in KVM; after the initial phase of the install is complete, it will remove the bootstrap node, boot your masters and workers (via sushy and Redfish) from the RHCOS ISO, and continue with the install.&lt;/p&gt;

&lt;p&gt;Note: you can change the --log-level parameter to INFO if detailing is not your thing (not judging you!)&lt;/p&gt;
&lt;h3&gt;
  
  
  The wait (And also the toughest part)
&lt;/h3&gt;

&lt;p&gt;Yes, this is the toughest part, as the install takes around an hour (or even more) to complete, depending on the resources on your system.&lt;br&gt;
You can log in to one of your nodes (core@master) and tail the bootkube logs to monitor the install; a sketch follows below.&lt;/p&gt;
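
&lt;p&gt;Something along these lines (bootkube.service only exists during the bootstrap phase, so expect it on whichever node is doing the bootstrapping):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh core@master
journalctl -b -f -u bootkube.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;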

&lt;p&gt;Once the install is complete, you should see something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INFO Waiting up to 1h0m0s (until 5:21PM IST) for the cluster at https://api.mycluster.lab:6443 to initialize... 
INFO Checking to see if there is a route at openshift-console/console... 
INFO Install complete!                            
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/user/test/ocp-install2/auth/kubeconfig' 
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.lab 
INFO Login to the console with user: "kubeadmin", and password: "eI2ES-wtGQG-Lgwec-KUNum" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Export the kubeconfig file and access your cluster APIs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export KUBECONFIG=./auth/kubeconfig
oc get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What a waste of my weekend!!&lt;/p&gt;

</description>
      <category>automation</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
