Mahinsha Nazeer

Raspberry Pi K8S Cluster Setup for Home Lab with Cilium

I’m setting up a Kubernetes cluster using Raspberry Pis for home lab testing and learning purposes. The cluster consists of four nodes:

  • Master 1: Raspberry Pi 5 (8GB RAM) 192.168.1.200
  • Master 2: Raspberry Pi 5 (4GB RAM) 192.168.1.201
  • Worker 1: Raspberry Pi 3B+ 192.168.1.202
  • Worker 2: Raspberry Pi 3B 192.168.1.203

I’ve overclocked the Raspberry Pis to enhance performance. Below are the detailed specifications and overclocking configurations.

root@master1:/home/admin# neofetch
root@master1
------------
OS: Debian GNU/Linux 12 (bookworm) aarch64
Host: Raspberry Pi 5 Model B Rev 1.0
Kernel: 6.12.25+rpt-rpi-2712
Uptime: 2 hours
Packages: 1099 (dpkg), 2 (snap)
Shell: bash 5.2.15
CPU: (4) @ 2.700GHz
Memory: 360MiB / 8063MiB

root@master1:/home/admin#

root@master2:/home/admin# neofetch
root@master2
------------
OS: Debian GNU/Linux 12 (bookworm) aarch64
Host: Raspberry Pi 5 Model B Rev 1.0
Kernel: 6.12.25+rpt-rpi-2712
Uptime: 1 hour, 6 mins
Packages: 783 (dpkg)
Shell: bash 5.2.15
CPU: (4) @ 2.700GHz
Memory: 234MiB / 4050MiB

root@master2:/home/admin#

admin@worker1:~ $ sudo su
root@worker1:/home/admin# neofetch
root@worker1
------------
OS: Debian GNU/Linux 12 (bookworm) aarch64
Host: Raspberry Pi 3 Model B Plus Rev 1.3
Kernel: 6.6.74+rpt-rpi-v8
Uptime: 1 min
Packages: 751 (dpkg)
Shell: bash 5.2.15
Resolution: 1024x768
CPU: (4) @ 1.400GHz
Memory: 166MiB / 907MiB

root@worker1:/home/admin#

root@worker2:/home/admin# neofetch
root@worker2
------------
OS: Debian GNU/Linux 12 (bookworm) aarch64
Host: Raspberry Pi 3 Model B Rev 1.2
Kernel: 6.12.25+rpt-rpi-v8
Uptime: 1 hour, 6 mins
Packages: 743 (dpkg)
Shell: bash 5.2.15
CPU: (4) @ 1.400GHz
Memory: 116MiB / 906MiB

root@worker2:/home/admin#

First, we’ll configure master2 (Raspberry Pi 5, 4 GB) as the initial control plane node, while master1, worker1, and worker2 will function as worker nodes.

This setup is intentional — it allows us to explore advanced scenarios in future blog posts, such as:

  • Setting up a new cluster using kubeadm with Cilium.
  • Handling control plane degradation and recovery

Starting simple provides a solid foundation while leaving room to scale and experiment.

1. Preparation on each node

The following commands prepare the OS for installing external components like container runtimes (Docker/containerd) and Kubernetes tools (kubeadm, kubelet, kubectl).

sudo apt update && sudo apt upgrade -y
sudo apt install -y curl apt-transport-https ca-certificates software-properties-common

Enabling cgroups for CPU and memory

Since we are using Raspberry Pi OS, we can simply edit the cmdline.txt file in the firmware location to enable cgroups.

sudo nano /boot/firmware/cmdline.txt

#append the following parameters to the end of the existing single line
ipv6.disable=1 cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
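
The change only takes effect after a reboot. A quick way to confirm the kernel actually picked up the new parameters afterwards:

sudo reboot

# after the node comes back up
cat /proc/cmdline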

Now disable swap in all nodes.

Disabling swap on Kubernetes nodes is a common practice due to its potential to negatively impact performance and predictability, especially for containerized workloads. Swap, while it allows a system to use more memory than available RAM, accesses data from the disk, which is significantly slower than RAM. This can lead to performance degradation, increased latency, and unpredictable behaviour, particularly under load.

Disabling swap is considered mandatory because Kubernetes expects it for stable and predictable performance:

  • Kubernetes relies on the kubelet to manage resources, assuming memory is fully available.
  • Swap can cause nodes to become slow or unresponsive as critical processes get swapped out.
  • Disabling swap ensures the scheduler and kubelet have accurate memory availability data.
  • By default, the kubelet fails to start if swap is enabled, unless it is explicitly configured to tolerate swap (which is not recommended).

sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo systemctl disable dphys-swapfile

#or

sudo swapoff -a
sudo apt purge -y dphys-swapfile

#verify using 
free -mh
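
If your system defines swap in /etc/fstab rather than through dphys-swapfile, that entry should also be commented out so swap stays off after a reboot (a generic sketch; not needed on stock Raspberry Pi OS):

# comment out any swap entries in /etc/fstab
sudo sed -i '/ swap / s/^/#/' /etc/fstab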

Enable required kernel modules:

Enabling overlay and br_netfilter kernel modules is essential for Kubernetes networking:

overlay: Provides the OverlayFS union filesystem that container runtimes use to assemble container root filesystems from layered images.

br_netfilter: Allows Linux bridge firewall rules to inspect and filter bridged network traffic, which is crucial for Kubernetes network policies and packet forwarding.

sudo modprobe overlay
sudo modprobe br_netfilter    

Adding overlay and br_netfilter to /etc/modules-load.d/k8s.conf

Adding overlay and br_netfilter to /etc/modules-load.d/k8s.conf ensures these kernel modules:

  • Load automatically at boot time
  • Maintain persistent availability without manual intervention after reboot

This guarantees Kubernetes networking components relying on these modules are always operational, avoiding network failures after node restarts.

vi /etc/modules-load.d/k8s.conf

overlay
br_netfilter
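
If you prefer a non-interactive approach, the same file can be created with a heredoc, and you can confirm the modules are actually loaded:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# verify the modules are loaded
lsmod | grep -e overlay -e br_netfilter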


k8s.conf

Configure sysctl:

The sysctl settings in /etc/sysctl.d/k8s.conf enable essential kernel networking features for Kubernetes:

net.bridge.bridge-nf-call-ip6tables = 1

Ensures IPv6 bridged network traffic is passed through ip6tables for filtering and firewall rules.

net.bridge.bridge-nf-call-iptables = 1

Ensures IPv4 bridged network traffic is processed by iptables, allowing Kubernetes to enforce network policies and firewall rules on pod traffic.

net.ipv4.ip_forward = 1

Enables IP forwarding at the kernel level, allowing packets to be routed between different network interfaces — critical for pod-to-pod communication across nodes.

These settings are required for Kubernetes networking to function properly and to enforce network security policies.

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
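
To confirm the settings took effect without a reboot, the values can be read back directly; all three keys should report 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward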


sysctl.d/k8s.conf

2. Hostname and IP configuration in the router.

To ensure network stability within the Kubernetes cluster, it’s essential to assign static IP addresses to all nodes. In my setup, I have four Raspberry Pi devices connected to my home router via Wi-Fi.

While it’s widely recommended to use wired (RJ45) Ethernet connections for Kubernetes clusters, mainly for improved reliability and lower latency, I’ve opted to use Wi-Fi. Despite the general concerns, I’ve tested this configuration extensively under various conditions and, so far, have encountered no issues.

To maintain consistent IP addressing, I reserved the IPs for each Raspberry Pi in the router’s DHCP settings. This ensures the cluster nodes remain discoverable and reachable, even after reboots or network interruptions.

I am using a Netlink Router at home. Please find my current configuration in the following screenshot. If you are using similar models, you can refer to the following configuration:

Once this is configured, we have to edit the hosts file on each node and add the following entries:

vi /etc/hosts

#add the following
192.168.1.200 master1
192.168.1.201 master2
192.168.1.202 worker1
192.168.1.203 worker2
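
A quick sanity check that name resolution and connectivity work from every node before continuing:

# run from any node
getent hosts master2
ping -c 2 worker1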

3. Installing Container Runtime.

In Kubernetes, the smallest deployable unit is the Pod, which usually runs one or more containers. To run these containers, Kubernetes relies on a container runtime.

Initially, Kubernetes was designed to work closely with Docker, which was the most widely adopted container engine at the time. However, over time, Kubernetes evolved, and the need to support multiple container runtimes became clear.

To address this, the Open Container Initiative (OCI) was introduced — a set of industry standards that define how containers should be built and executed. In line with OCI, Kubernetes implemented the Container Runtime Interface (CRI), which allows it to interact with any OCI-compliant runtime, such as containerd or CRI-O, in a standardised way.

Today, Docker is no longer a single, unified project. It consists of several components, including Docker-CLI, Docker-Compose, Docker Network, runc, and containerd. While containerd complies with the OCI standard and is capable of running containers, it does not natively support the CRI that Kubernetes requires.

To bridge this gap, Kubernetes initially provided a compatibility layer called dockershim, allowing it to communicate with the Docker Engine. However, dockershim has since been deprecated and removed from Kubernetes, and its role is now filled by the externally maintained project CRI-Dockerd.

In this project, we are using the Docker Engine as the container runtime for Kubernetes, integrated via CRI-Dockerd.

Let’s move on and see how to install Docker and configure it as a container runtime for Kubernetes using CRI-Dockerd. Currently, Mirantis is managing the CRI-Dockerd project, and I followed their official documentation for configuring CRI-Dockerd on my Raspberry Pi. Please note that we have to configure the container runtime in all nodes for setting up the cluster successfully.

Installing Docker on all nodes

Debian

We’ll begin by installing Docker on all machines in the cluster. If you’re using a Debian-based system (such as Ubuntu or Raspberry Pi OS), you can use the following commands regardless of the system architecture:

for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
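
The Debian packages start and enable the Docker service automatically, but it does no harm to confirm that before moving on:

sudo systemctl enable --now docker
docker --version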

Once the Docker installation is complete, run the following command to verify that it is working correctly:

docker run hello-world

The following is the expected output:


docker run hello-world

Installing CRI-Dockerd on all nodes

You can refer to the following documentation for more details regarding CRI-Dockerd configuration:

Install

In this setup, I’m using Raspberry Pi devices, which are based on the ARM64 architecture. Instead of using a package manager, I’m downloading the CRI-Dockerd binary directly from the official release page to ensure compatibility and control over the installed version.

Releases · Mirantis/cri-dockerd

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.17/cri-dockerd-0.3.17.arm64.tgz

#after downloading, extract the archive and move the binary to /usr/local/bin

tar -xvf cri-dockerd-0.3.17.arm64.tgz
cd cri-dockerd
sudo mv cri-dockerd /usr/local/bin/
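
The release tarball only contains the cri-dockerd binary, so the systemd units still have to be installed before cri-docker.service exists. The sketch below assumes the unit files are published under packaging/systemd in the Mirantis/cri-dockerd repository (as described in its documentation) and adjusts the binary path to /usr/local/bin:

wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
sudo mv cri-docker.service cri-docker.socket /etc/systemd/system/
# the packaged unit expects the binary in /usr/bin; point it at /usr/local/bin instead
sudo sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
sudo systemctl daemon-reload
sudo systemctl enable --now cri-docker.socket
sudo systemctl enable --now cri-docker.service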

Once the installation is completed, run the following command to verify that the service is running:

sudo systemctl status cri-docker.service

If the configuration is correct, running the systemctl command should return an output indicating that the service is active and running.


node1


node2 (master)


node 3


node 4

4. Installing Kubeadm

Installing kubeadm

In this setup, we will be using Kubeadm version 1.29. This specific version is chosen intentionally, as we are planning a follow-up blog focused on Kubeadm upgrade procedures. Starting with v1.29 provides a practical baseline for demonstrating version upgrades.

If you prefer to install the latest available version, you can refer to the official Kubernetes documentation linked above. Otherwise, to proceed with Kubeadm v1.29, follow the version-specific installation steps documented here.

Installing kubeadm

sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

#if you are facing any issues with the installation and you want to reinstall the packages, please unhold them first.

sudo apt-mark unhold kubelet kubeadm kubectl
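
A quick check on each node that the pinned v1.29 packages were installed:

kubeadm version -o short
kubectl version --client
kubelet --version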


kubeadm, kubelet and kubectl installation completed

Follow the installation process on all nodes. Once the installation is completed, enable the kubelet on all nodes:

sudo systemctl enable --now kubelet

The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do.

5. Cgroup Driver Configuration

Why does the cgroup driver configuration matter in a kubeadm installation?

During a Kubernetes cluster setup with Kubeadm, it is essential to ensure that the cgroup driver used by the container runtime (e.g., Docker, containerd) matches the driver expected by Kubelet, the Kubernetes node agent.

Cgroups (control groups) are used by the Linux kernel to limit and isolate resource usage (CPU, memory, etc.) of process groups. Kubernetes relies on this mechanism to enforce resource limits on containers.

There are two common cgroup drivers:

  • cgroupfs — a native driver provided by the Linux kernel
  • systemd — integrates with the systemd init system for resource management

Why this matters:

If the cgroup drivers between Docker/containerd and Kubelet are mismatched, Kubelet will fail to start, or you may encounter node stability issues.

Best practice:

Kubernetes now recommends using the systemd cgroup driver for better compatibility and integration with modern Linux distributions.

You can verify and configure the cgroup driver to ensure consistency before initialising the cluster with kubeadm. Refer to the following documentation for more details. In our case, no additional configuration is required.

Configuring a cgroup driver
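
If you want to confirm what Docker is using before running kubeadm init, this one-liner reports the active driver. On a cgroup v2 system such as Raspberry Pi OS Bookworm it should report systemd, which matches the kubelet's default since v1.22:

docker info | grep -i "cgroup driver"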

6. Initialising the master node

Creating a cluster with kubeadm

When you run kubeadm init, Kubernetes performs several critical actions to bootstrap the control plane:

  1. Generates PKI certificates. Secure communication between components is enabled by generating TLS certificates under /etc/kubernetes/pki.
  2. Bootstraps the kubeconfig files. Configuration files required by kubectl and system components are created under /etc/kubernetes/ and $HOME/.kube/config.
  3. Starts the control plane components. These static pods run on the master node, are defined under /etc/kubernetes/manifests/, and are created and managed by the kubelet:
     • kube-apiserver
     • kube-controller-manager
     • kube-scheduler
  4. Initialises the etcd database. An embedded or external etcd key-value store is initialised to persist cluster state.
  5. Sets up the bootstrap token and join configuration. A token and join command are generated to allow worker nodes to securely join the cluster.
  6. Outputs post-initialisation instructions. The command displays steps to:
     • Configure kubectl access
     • Install a Pod network add-on (required before scheduling workloads)
     • Join worker nodes using the kubeadm join command

Now run the following command on master2 to initialise the control plane:

sudo kubeadm init --cri-socket=unix:///var/run/cri-dockerd.sock

After the control plane has been initialised, you need to configure kubectl to communicate with the cluster. The following commands handle that setup:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

export KUBECONFIG=/etc/kubernetes/admin.conf

The export command sets the environment variable so that kubectl uses the admin configuration directly from the system path.

If the control plane initialisation is successful, you’ll see a message similar to the one shown in the screenshot below. This message confirms that the Kubernetes control plane is up and running.

Additionally, a bootstrap token will be generated. This token, along with the control plane IP and certificate hash, is required for joining worker nodes to the cluster.

You’ll also receive instructions to:

  • Set up your kubectl config
  • Install a pod network add-on
  • Use the kubeadm join command on worker nodes


kubeadm init

Note: You must use the CRI socket that corresponds to the container runtime you are using. Refer to the table below for the appropriate configuration:


Different runtimes and socket path
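
For reference, these are the CRI socket paths the kubeadm documentation lists for the common runtimes (the cri-dockerd one is used throughout this post):

containerd                         unix:///var/run/containerd/containerd.sock
CRI-O                              unix:///var/run/crio/crio.sock
Docker Engine (via cri-dockerd)    unix:///var/run/cri-dockerd.sock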

7. Cilium configuration

In a Kubernetes cluster, a CNI (Container Network Interface) plugin is required to handle pod networking. Cilium is one of the most modern and powerful CNIs available today, and it’s quickly becoming the industry standard. Cilium is built on eBPF (extended Berkeley Packet Filter), enabling highly efficient, kernel-level networking, security, and observability without relying on iptables. It offers low latency, high throughput, and reduced CPU overhead compared to traditional CNIs, making it ideal for both cloud and bare-metal environments. Cilium supports identity-based security policies, allowing fine-grained access control across microservices, independent of IP addresses. Built-in Hubble provides deep visibility into service-to-service communication, DNS resolution, and flow logs — all without installing extra agents.

It integrates tightly with Kubernetes, supporting:

  • Network policies
  • Load balancing
  • Service mesh (via Cilium Service Mesh)

Cilium Quick Installation - Cilium 1.17.4 documentation

The installation of Cilium involves two key steps: first, installing the Cilium CLI, and then deploying Cilium itself. We begin by installing the Cilium CLI.

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=arm64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

You can now run the following command to check whether the installation was successful:

cilium version

Now use cilium-cli to install Cilium into the Kubernetes cluster:

cilium install --version 1.17.4

Run the following command to check the status of CNI pods:

kubectl get pods -n kube-system -l k8s-app=cilium
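
The Cilium CLI also ships a built-in health check that waits until every component reports ready, which is handy on slow nodes like the Pi 3s:

cilium status --wait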


Cilium daemonsets

Now that the configuration is complete, you can run the following command on each worker node to join it to the initialised Kubernetes cluster. You will get this command in the output of kubeadm init on the master node:

sudo kubeadm join 192.168.1.201:6443 \
  --token 7zk7fr.uyvo27b37piu9yaq \
  --discovery-token-ca-cert-hash sha256:eae000c72c5801b5bbd5bcdc569112a216df32e4f4ac5bccd3fac9d0caf45c8b \
  --cri-socket=unix:///var/run/cri-dockerd.sock
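
If you lose this command or the bootstrap token expires (tokens are valid for 24 hours by default), a fresh join command can be printed from the control plane node:

sudo kubeadm token create --print-join-command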

Note: Cilium needs to be installed only once, from the control plane node. It runs as a DaemonSet, which automatically deploys Cilium agents on all nodes, including workers.

Running cilium install on a worker node that doesn’t have access to the API server will result in a “connection refused” error.

Once the nodes have successfully joined the cluster, you can verify their status by running the following command from the control plane node:


kubectl get nodes

I successfully completed the Kubernetes cluster setup, but I missed capturing a screenshot at the time, as it was quite late and I was too tired. The next day, when I attempted to take the screenshot, I found that one of the worker nodes was unresponsive. The Raspberry Pi was not powering up, and I suspect a hardware failure.

However, the current status of the cluster with the remaining nodes can be seen in the screenshot below.

I hope this guide has been helpful. Please feel free to share any feedback or ask questions — I’m happy to assist with any challenges you encounter.

I’m also planning to acquire a new single-board computer (SBC), other than the Raspberry Pi, that will better suit my AI projects. As a result, the next blog post may be delayed.

Thanks for reading, cheers!

Troubleshooting commands:

API server port: 6443

# Check whether something is listening on the API server port (6443)
netstat -tlnp | grep 6443

# With kubeadm, the API server runs as a static pod rather than a systemd unit,
# so check it through kubectl instead of systemctl
kubectl -n kube-system get pods -l component=kube-apiserver

# Query the API server health endpoint
curl -k https://192.168.1.201:6443/healthz

# Test raw connectivity to the API server port from another node
telnet 192.168.1.201 6443

# Tear down a failed kubeadm setup so you can start over
sudo kubeadm reset -f

# See which process is holding the kubelet port
sudo lsof -i :10250

# Start the kubelet if it is not running
sudo systemctl start kubelet

# Follow the container runtime logs (containerd backs Docker Engine in this setup)
journalctl -u containerd -f
