If this article gave you a "Yet Another OpenShift Setup Guide" feel (sorry YAML, I stole some letters), I don't blame you! (This is an indication of how much free time I have, lol!) While this isn't a typical how-to guide (I lied!), I'll tell you why it's different: OpenShift doesn't support the IPI method of installation on KVM (ironic, huh? Not supporting their own siblings), but there's a hack that allows you to do it (of course, you can only use it for labs!)
The Layout (okay! Architecture diagram)
The components
- A KVM host (our protagonist!)
- dnsmasq, built into KVM's default network, A.K.A. our antagonist!
- Sushy (not the dish; this is a Redfish emulator)
- VMs (the masters and the workers)
A note on prerequisites and hardware requirements
- You can check the hardware requirements for OpenShift in Red Hat's or OKD's official documentation here
1. Assembling components for the Virtual Machines
Adding disks to our Masters and Workers.
qemu-img create -f qcow2 /var/lib/libvirt/images/master-1.qcow2 120G
qemu-img create -f qcow2 /var/lib/libvirt/images/master-2.qcow2 120G
qemu-img create -f qcow2 /var/lib/libvirt/images/master-3.qcow2 120G
qemu-img create -f qcow2 /var/lib/libvirt/images/worker-1.qcow2 120G
Configuring a Network in Libvirt
- This is the most critical part of the setup. If this fails, the install will fail and frustrate you to the core!
- Save this into a file, probably default.xml
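I'm not reproducing my exact XML here, but a minimal default network for this setup could look like the sketch below. The DHCP reservations and DNS entries are assumptions on my part: the MAC must match your VM NICs and bootMACAddress, and the api/*.apps names and .lab domain must match your install-config.

```xml
<network xmlns:dnsmasq='http://libvirt.org/schemas/network/dnsmasq/1.0'>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.100' end='192.168.122.200'/>
      <!-- pin each node to a known IP; repeat per master/worker -->
      <host mac='52:54:00:3d:30:b5' name='master-1' ip='192.168.122.101'/>
    </dhcp>
  </ip>
  <dns>
    <!-- the API VIP from install-config.yaml -->
    <host ip='192.168.122.10'>
      <hostname>api.mycluster.lab</hostname>
    </host>
  </dns>
  <!-- wildcard *.apps resolution needs a raw dnsmasq option -->
  <dnsmasq:options>
    <dnsmasq:option value='address=/apps.mycluster.lab/192.168.122.11'/>
  </dnsmasq:options>
</network>
```

The dnsmasq:options passthrough needs a reasonably recent libvirt; adjust the ranges and names to your own lab.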
- Apply it
virsh net-define default.xml
virsh net-start default
virsh net-autostart default
Creating the VMs
- It's time to create the VMs using the KVM console, but don't boot them yet! We will have our installer boot these machines for us via Redfish and Sushy. In other words, a poor man's iDRAC/iLO, but only for power management.
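If you prefer the CLI over the console, the same can be scripted with virt-install. This is a sketch under my assumptions (sizing, the disk from step 1, UEFI boot for the OVMF/Redfish virtual-media path, and a MAC that must match bootMACAddress in your install-config); --print-xml piped into virsh define creates the domain without ever booting it:

```shell
# define master-1 without booting it; the installer will power it on via Redfish
virt-install \
  --name master-1 \
  --memory 16384 \
  --vcpus 4 \
  --disk /var/lib/libvirt/images/master-1.qcow2 \
  --network network=default,mac=52:54:00:3d:30:b5 \
  --boot uefi \
  --osinfo detect=on,require=off \
  --import --noautoconsole \
  --print-xml | virsh define /dev/stdin
```

Repeat for each master and worker, with unique names, disks, and MACs.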
Setting up Sushy
- Create a virtual environment to install the Python module
python3 -m venv ~/sushy-env
source ~/sushy-env/bin/activate
- Install the sushy-tools module
pip install sushy-tools
- Start Sushy
sushy-emulator -i 192.168.122.1 --port 8000 --libvirt-uri qemu:///system
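Running the emulator in the foreground is fine for a lab session; if you'd rather keep it alive across logouts and reboots, a systemd unit is one option. This is my own sketch (the unit name and the /root/sushy-env path are assumptions; point ExecStart at wherever your venv lives):

```ini
# /etc/systemd/system/sushy-emulator.service (hypothetical unit)
[Unit]
Description=Sushy Redfish emulator for libvirt
After=libvirtd.service

[Service]
ExecStart=/root/sushy-env/bin/sushy-emulator -i 192.168.122.1 --port 8000 --libvirt-uri qemu:///system
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then enable it with sudo systemctl enable --now sushy-emulator.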
Starting the emulator will produce an output like:
$ sushy-emulator -i 192.168.122.1 --port 8000 --libvirt-uri qemu:///system
 * Serving Flask app 'sushy_tools.emulator.main'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://192.168.122.1:8000
Press CTRL+C to quit
Testing power on/off of the VMs using the tool we just installed.
- Validate the Redfish API
curl http://192.168.122.1:8000/redfish/v1/Systems
- Validate power control. The curl command above returns an id assigned to every system, something like 58ec1279-e393-4dba-a7b4-e8ea37c0d6da; replace <ID> in the command below with that id.
curl -X POST http://192.168.122.1:8000/redfish/v1/Systems/<ID>/Actions/ComputerSystem.Reset \
-H "Content-Type: application/json" \
-d '{"ResetType": "On"}'
If you check the KVM console, the system will be powered on. Power it off again by sending a ResetType of "ForceOff"! (I swear I'm not trying to irritate you!)
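For convenience, the two curl calls can be wrapped in a tiny shell helper. This is my own sketch (the function names are made up), built around the same Redfish ComputerSystem.Reset action with "On" and "ForceOff":

```shell
BASE="http://192.168.122.1:8000"

# build the Redfish reset URL for a given system id
redfish_reset_url() {
  echo "${BASE}/redfish/v1/Systems/${1}/Actions/ComputerSystem.Reset"
}

# power <ID> <On|ForceOff|GracefulShutdown>
power() {
  curl -s -X POST "$(redfish_reset_url "$1")" \
    -H "Content-Type: application/json" \
    -d "{\"ResetType\": \"$2\"}"
}

# usage (requires the emulator to be running):
#   power 58ec1279-e393-4dba-a7b4-e8ea37c0d6da On
#   power 58ec1279-e393-4dba-a7b4-e8ea37c0d6da ForceOff
```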
- Redfish power operations require firmware files under /usr/share/OVMF. My system was missing the secure boot files, so I had to create the soft links below.
sudo ln -s /usr/share/OVMF/OVMF_VARS_4M.fd /usr/share/OVMF/OVMF_VARS.fd
sudo ln -s /usr/share/OVMF/OVMF_CODE_4M.fd /usr/share/OVMF/OVMF_CODE.fd
sudo ln -s /usr/share/OVMF/OVMF_CODE_4M.secboot.fd /usr/share/OVMF/OVMF_CODE_4M.ms.fd
sudo ln -s /usr/share/OVMF/OVMF_CODE_4M.secboot.fd /usr/share/OVMF/OVMF_CODE.secboot.fd
2. Building the installer
- Install the tools needed to build the installer
sudo apt install golang git make gcc g++ libvirt-dev pkg-config
- clone the repo
git clone https://github.com/openshift/installer.git
- Compile the installer with TAGS=libvirt hack/build.sh, A.K.A. The Hack
cd installer
TAGS=libvirt hack/build.sh
- copy the installer to /usr/local/bin
sudo cp bin/openshift-install /usr/local/bin/
3. The installation
- Create directories for the install
mkdir ~/ocp-install
cd ~/ocp-install
- Create install-config.yaml
apiVersion: v1
baseDomain: lab
metadata:
  name: mycluster
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 2
networking:
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: 192.168.122.0/24
platform:
  baremetal:
    externalBridge: "virbr0"
    apiVIP: 192.168.122.10
    ingressVIP: 192.168.122.11
    provisioningNetwork: "Disabled"
    hosts:
    - name: master
      role: master
      bmc:
        address: redfish-virtualmedia+http://192.168.122.1:8000/redfish/v1/Systems/<ID>
        username: admin
        password: password
      bootMACAddress: 52:54:00:3d:30:b5
      rootDeviceHints:
        deviceName: /dev/vda
Note: populate these fields for all your masters and workers, and add the pullSecret (from the Red Hat portal) and sshKey (the SSH public key from your home directory) fields. The ignition configs (RHCOS's answer to Kickstart) will bake this into the RHCOS ISO, and you should be able to log in to your masters/workers via ssh as core@master or core@worker using the key you entered above.
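For reference, those two extra fields sit at the top level of install-config.yaml and look roughly like this (the values are placeholders; paste your real pull secret and public key):

```yaml
pullSecret: '{"auths":{"cloud.openshift.com":{"auth":"<base64-token>","email":"you@example.com"}}}'
sshKey: 'ssh-ed25519 AAAA... user@kvm-host'
```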
- Kick off the installation (while you're inside ~/ocp-install, which also houses your install-config.yaml)
openshift-install create cluster --dir . --log-level=DEBUG
This will create a bootstrap node in KVM; after the initial phase of the install is complete, it will remove the bootstrap node, boot your masters and workers (via Sushy and Redfish) from the RHCOS ISO, and continue with the install.
Note: you can change the --log-level parameter to INFO if detailing is not your thing (not judging you!)
The wait (And also the toughest part)
Yes, this is the toughest part, as the install takes around an hour or even more to complete depending on the resources you have on your system.
You can log in to the bootstrap node and tail the bootkube logs to monitor the install (bootkube runs there, not on the masters).
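Assuming the bootstrap VM got a lease from the default network, something like this lets you follow along (the unit names are the ones the installer itself suggests when bootstrapping):

```shell
# find the bootstrap node's IP on the libvirt network
virsh net-dhcp-leases default

# follow the bootstrap services; bootkube drives the initial control plane
ssh core@<bootstrap-ip> journalctl -b -f -u release-image.service -u bootkube.service
```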
Once the Install is complete you should see something like:
INFO Waiting up to 1h0m0s (until 5:21PM IST) for the cluster at https://api.mycluster.lab:6443 to initialize...
INFO Checking to see if there is a route at openshift-console/console...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/user/test/ocp-install2/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.lab
INFO Login to the console with user: "kubeadmin", and password: "eI2ES-wtGQG-Lgwec-KUNum"
Export the kubeconfig file and access your cluster APIs:
export KUBECONFIG=./auth/kubeconfig
oc get nodes
What a waste of my weekend!!