Carmine Zaccagnino

Creating a Kubernetes Cluster with Fedora CoreOS 36

In this post I'm going to explain how to create a Kubernetes 1.24 cluster on Fedora CoreOS 36 nodes.

I will assume that, if you're reading this, you already know that Fedora CoreOS is a great choice as a Kubernetes node OS, both because of the ease of maintenance provided by rpm-ostree and Zincati and because of how easy it is to automate provisioning of new nodes with Butane and Ignition.

I believe a new tutorial on this is needed because the existing ones are somewhat outdated and, in my opinion, fail to address some aspects, even though they're really good in many ways (the one linked here, for example, is very good).

In particular, I'm going to show a multi-node cluster that keeps the taint on the control plane and, to give an example that can be directly replicated, I'll provide scripts that aid in creating virtual machines for the nodes using libvirt. You can, of course, apply just Ignition scripts and node configuration steps to your environment if that's what you're looking to get from this tutorial.

Preliminary considerations about container runtimes and SELinux

The aforementioned existing tutorial on creating Kubernetes clusters with Fedora CoreOS chooses to use CRI-O instead of the preinstalled containerd runtime. I believe this choice merits some further attention, even though I agree that it's the correct choice.

CRI-O's strengths are its added features, particularly its integration with SELinux (which that tutorial disables) and the fact that it's the same runtime used by OpenShift. On the other hand, it is slower than containerd, even if only marginally, as the linked paper shows, so one could be tempted to just stick with containerd.

My experience with the exact setup shown in this tutorial is that choosing containerd results in a really unstable, unusable cluster, even if one disables SELinux by overriding /etc/selinux/config in the Ignition file. That's why I'll be showing steps to install and enable CRI-O instead.

I will not be disabling SELinux, on the other hand. From what I've seen, in the current state there's no need to, and I'll gladly take the increased security, even if it means having to troubleshoot the few instances in which it gets overzealous. Doing that with setroubleshootd (which can be installed on Fedora CoreOS with rpm-ostree as part of the setroubleshoot package) isn't hard at all.
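
For example, once the package is layered and the node has been rebooted, recent denials can be summarized with sealert:

sudo rpm-ostree install setroubleshoot
sudo systemctl reboot
# after the reboot, summarize recent SELinux denials:
sudo sealert -a /var/log/audit/audit.log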

Creating a virtualized Kubernetes cluster using Fedora CoreOS VMs

For the full example, we are going to create a three-node cluster with one control plane node and two workers: the least demanding sensible virtualized setup (even though it misses out on HA), one that even a mid-range laptop can run, as long as the host runs a Linux distribution (ideally Red Hat-based) with podman, libvirt and virt-install.

Another prerequisite is an extracted qcow2 image of Fedora CoreOS, which can be downloaded as an XZ archive from the official Fedora CoreOS website, or retrieved with coreos-installer.
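
For example, coreos-installer can download and decompress the latest stable QEMU image in one step:

coreos-installer download -s stable -p qemu -f qcow2.xz --decompress -C ~/Downloads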

The steps have been tested on x86 systems with Fedora Workstation 36 and Red Hat Enterprise Linux 9 as the host operating system.

Create an Ignition script

To initialize the VMs, a JSON Ignition file is used, which can be derived from the following fcos.bu YAML file using Butane:
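
Here's a minimal sketch of that file. It sets up SSH access for the core user; the kernel module and sysctl settings are an assumption on my part, since kubeadm's preflight checks expect them:

variant: fcos
version: 1.4.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - <PASTE YOUR SSH PUBLIC KEY HERE>
storage:
  files:
    - path: /etc/modules-load.d/k8s.conf
      mode: 0644
      contents:
        inline: |
          br_netfilter
    - path: /etc/sysctl.d/k8s.conf
      mode: 0644
      contents:
        inline: |
          net.bridge.bridge-nf-call-iptables = 1
          net.bridge.bridge-nf-call-ip6tables = 1
          net.ipv4.ip_forward = 1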

Before doing that, generate an SSH key pair using ssh-keygen (or locate an existing one you'd like to use for the VMs we're about to create) and replace <PASTE YOUR SSH PUBLIC KEY HERE> in fcos.bu with the public key content, all on one line.

Use the butane OCI container to generate the Ignition file by running:


podman run --interactive --rm \
    quay.io/coreos/butane:release \
    --pretty --strict < fcos.bu > fcos.ign


Take note of the path of the generated fcos.ign file, and set it as the value assigned to the IGN_CONFIG variable in the following file, which I'll refer to as start_fcos.sh from now on:
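
This is a sketch based on the virt-install invocation from the official Fedora CoreOS provisioning documentation; the VM resources and the default libvirt network are assumptions you can adjust:

#!/usr/bin/env bash
# start_fcos.sh: create and boot the VM "node$1" from the FCOS image,
# passing the Ignition config via QEMU's fw_cfg mechanism.
IGN_CONFIG="/path/to/fcos.ign"        # set to the generated Ignition file
IMAGE="/path/to/fedora-coreos.qcow2"  # set to the downloaded qcow2 image
VM_NAME="node$1"

# on SELinux-enforcing hosts, qemu may need permission to read the file:
# chcon --type svirt_home_t "${IGN_CONFIG}"

virt-install --connect qemu:///system \
    --name "${VM_NAME}" --vcpus 2 --memory 2048 \
    --os-variant fedora-coreos-stable --import --graphics none \
    --disk "size=10,backing_store=${IMAGE}" \
    --network network=default \
    --qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=${IGN_CONFIG}"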

In that file, also change the IMAGE variable to the path to the downloaded QCOW2 Fedora CoreOS image (e.g. $HOME/Downloads/fedora-coreos-36.20220505.3.2-qemu.x86_64.qcow2).

In three different terminals (or tmux panes or windows), run ./start_fcos.sh 1, ./start_fcos.sh 2 and ./start_fcos.sh 3 to create three nodes called node1, node2 and node3. The Virtual Machine Manager (virt-manager) GUI tool can be used to manage these three VMs after the initial setup.

Look for lines like


Fedora CoreOS 36.20220505.3.2
Kernel 5.17.5-300.fc36.x86_64 on an x86_64 (ttyS0)
SSH host key: SHA256:70h+0L1lAfXChOpmBH1odArSLRCMJY2v9sOM45XThyM (ECDSA)
SSH host key: SHA256:M3Tcq1tebp+uFKAXqo6kD+PWIzzz03ndwreIEhFR5IQ (ED25519)
SSH host key: SHA256:EnywjztiwTBK9nLy47wj/gaus3wgAflI11j2ckXE0QM (RSA)
enp1s0: 192.168.122.146 fe80::cc38:c8c:7c38:c0fc


in the three terminals and add the IP addresses to /etc/hosts on the host system like in this example:


192.168.122.146 node1
192.168.122.96 node2
192.168.122.87 node3


so that you can access the nodes from the host PC more easily.

Log in via SSH as the core user on the three nodes (for example in three panes in another tmux window) using ssh core@node1, ssh core@node2 and ssh core@node3.

Set the hostname on each node to node1, node2 and node3 by creating the /etc/hostname file on each as root, for example like this on the first node (run the two commands one by one: the output redirection needs to happen inside the root shell, or it will fail with a permission error):


sudo -i # become root
echo "node1" > /etc/hostname # set the hostname


On the first node (which will be the cluster's control plane) install CRI-O and the required tools to create and manage the cluster with:


sudo rpm-ostree install kubelet kubeadm kubectl cri-o


On the other nodes, only the CRI-O runtime, kubelet and kubeadm are needed, and they can be installed with:


sudo rpm-ostree install kubelet kubeadm cri-o


Then reboot using sudo systemctl reboot to apply the hostname change and the newly layered packages (Fedora CoreOS uses OSTree, which only applies changes to system files on reboot, ensuring atomic updates).

After the reboot, log in to all the nodes again and start CRI-O and kubelet using:


sudo systemctl enable --now crio kubelet


Create a clusterconfig.yml on the control plane just like the following (using vi in the SSH console for example):
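
This sketch points kubeadm at CRI-O's socket and sets a Pod subnet for the CNI plugin to use (the exact subnet is an assumption; it just needs not to overlap with your LAN):

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.0
networking:
  podSubnet: 10.244.0.0/16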

and then run:


sudo kubeadm init --config clusterconfig.yml


to initialize the cluster.

After it's done, configure kubectl on the control plane by running the suggested commands:


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


The ~/.kube/config file can be copied to the host system as well to be able to run kubectl commands on the host directly instead of logging in to the control plane via SSH.
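
For example, assuming there's no existing ~/.kube/config on the host that you'd be overwriting:

mkdir -p ~/.kube
scp core@node1:.kube/config ~/.kube/config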

Set up Pod networking with kube-router, which is on the simpler side of CNI plugins without missing out on important features, by running:


kubectl apply -f \
    https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml


Add the other nodes (node2 and node3 in our example) as workers to the cluster by running on each of them (as root, therefore with sudo) the kubeadm join command shown at the end of the kubeadm init output. If you've lost it, you can run the following on the control plane:


kubeadm token create --print-join-command


to have kubeadm generate a new token and get a new join command.
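
The join command has this general shape (the endpoint, token and hash below are placeholders):

sudo kubeadm join 192.168.122.146:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>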

If all was done correctly, the cluster should be ready for action. Confirm this by running kubectl get nodes and verifying that all three nodes are shown as Ready.
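
The output should look something like this (ages and patch versions will vary):

NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   10m   v1.24.0
node2   Ready    <none>          5m    v1.24.0
node3   Ready    <none>          5m    v1.24.0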

You can create a test deployment of three NGINX instances with:


kubectl create deployment test --image nginx --replicas 3


and a Service to be able to access it externally (in a file called testsvc.yml, for example):


apiVersion: v1
kind: Service
metadata:
  name: testsvc
spec:
  type: NodePort
  selector:
    app: test
  ports:
    - port: 80
      nodePort: 30001


The Service can be added to the cluster with kubectl create -f testsvc.yml. It lets us test that everything works correctly: navigate to node1:30001 with a browser or cURL and verify that you get the NGINX welcome page.
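
For example:

curl http://node1:30001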

Next steps

I have a few containers and Kubernetes example files of varying complexity in a GitHub repository (github.com/carzacc/thesis-files); maybe you'll find something of interest there to complement reading the official Kubernetes documentation.

In particular, in the README for directory 3/morra I show an example deployment of a REST API that interacts with a Redis store and a MySQL cluster. For production applications, remember to use namespaces and create accounts that follow the principle of least privilege.

In the past, I've been very happy to write tutorials after having a topic requested or suggested in an email. In those cases, they were Flutter tutorial ideas. The same could happen with Kubernetes as well, so feel free to write to me at carmine@carminezacc.com to ask anything.

I have a blog at carmine.dev, too, where you can also find this post!
