Most people interact with Kubernetes through the cloud, and never get physically closer to the cluster than their own laptop. Using a cloud cluster through a web browser or terminal is fine, but there is something more intimate and rewarding about deploying a Kubernetes cluster on bare metal. More rewarding still is physically unplugging a node and watching Kubernetes rebalance the workloads.
What You'll Need
The total cost is roughly €300, depending on what you already own, how many nodes you choose to have, and whether you get a case to wrap everything up nicely.
- 4x Raspberry Pi 4 (or Pi 5) with 4GB of RAM or more. With 4GB per board there's enough headroom to run workloads on top; the 8GB model is nice to have but not necessary.
- 4x microSD cards (32GB). The OS, container images, and Kubernetes binaries all live on these, and 32GB gives you comfortable room.
- A multi-port USB-C charger (at least 15W per port). Each Pi 4 draws up to 15W under load, so a 4-port USB-C PD GaN charger powers the whole cluster from a single wall outlet.
- 4x USB-C cables.
- An ethernet switch.
- 4x ethernet cables.
- A microSD card reader, a monitor (the Pi 4 has two micro-HDMI ports), and a USB keyboard. You'll only need the monitor and keyboard briefly during initial setup; once SSH is configured, everything happens remotely from your laptop.
- A cluster case. Something like the UCTRONICS Raspberry Pi cluster enclosure keeps your four boards stacked neatly with proper airflow. Optional but recommended, and it looks great on a desk.
The Architecture We're Building
Before we start plugging things in, let's understand what we're building and why it's shaped this way.
Pi 1 has a dual role. Its Wi-Fi interface connects to your home router and the internet. Its ethernet interface connects to a private switch where the three worker nodes live. Pi 1 acts as a gateway. It performs NAT (Network Address Translation) so the workers can reach the internet through it, the same way your home router lets your devices reach the internet through its single public IP.
Why this topology? The workers live on an isolated 10.0.0.0/24 subnet. This is deliberate. It mirrors how production clusters work where nodes sit in a private network and communicate over a dedicated fabric, not over the same Wi-Fi your phone uses. It also means your cluster won't interfere with other devices on your home network.
Phase 1: Flash the Operating System
We'll use Ubuntu Server (64-bit, aarch64 edition) on all four Pis.
Writing the SD Cards
Use the Raspberry Pi Imager to flash each card. Before writing, click the gear icon to pre-configure each node:
For every node:
- Enable SSH with public-key authentication. Generate an ed25519 key pair on your laptop (`ssh-keygen -t ed25519`) if you don't have one, and paste the public key here.
- Set the username.

Set unique hostnames for each node:
- Pi 1: `k8s-control` (this will be our control plane and gateway)
- Pi 2: `worker-1`
- Pi 3: `worker-2`
- Pi 4: `worker-3`
Flash each card and insert them into the Pis.
Phase 2: Network Setup
This phase is the foundation everything else depends on. If the nodes can't reliably talk to each other, nothing that follows will work.
Step 1. Configure the Gateway Node (Pi 1)
Boot Pi 1 with a monitor and keyboard attached. First, set up the Wi-Fi interface so it connects to your home router, and give the ethernet interface a static IP that will serve as the gateway for the cluster network.
Create a netplan configuration at /etc/netplan/01-cluster.yaml:
network:
  version: 2
  wifis:
    wlan0:
      dhcp4: true
      access-points:
        "YOUR_WIFI_SSID":
          password: "your-wifi-password"
  ethernets:
    eth0:
      addresses:
        - 10.0.0.1/24
Apply it:
sudo netplan apply
Now Pi 1 has two interfaces: wlan0 gets an IP from your home router (internet access), and eth0 is statically set to 10.0.0.1 (the cluster's gateway address).
Why 10.0.0.0/24? We make sure the range doesn't collide with the home network (which is probably 192.168.x.x).
Step 2. Enable NAT (Network Address Translation)
The workers will live on the 10.0.0.0/24 network, but that network has no direct route to the internet. Pi 1 needs to act as a router.
# Enable IP forwarding
# This tells the Linux kernel to route packets
# between interfaces instead of dropping them
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/ip-forward.conf
sudo sysctl --system
# Set up NAT masquerade
# Rewrite source IPs from 10.0.0.x to Pi 1's
# Wi-Fi address so the internet sees them as coming from Pi 1
sudo apt-get install -y iptables-persistent
sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
sudo netfilter-persistent save
Without this step, the workers could talk to each other and to Pi 1, but any attempt to reach the internet (which they'll need to pull container images) would silently fail.
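Before moving on, it's worth a quick sanity check on Pi 1 that both pieces took effect (interface names assume the wlan0/eth0 defaults used above):

```shell
# Should print: net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward

# The verbose listing should show a MASQUERADE rule with wlan0 in the "out" column
sudo iptables -t nat -L POSTROUTING -n -v
```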
Step 3. Run a DHCP Server on Pi 1
Instead of manually configuring a static IP on each worker, we'll run a DHCP server on Pi 1 that hands out addresses automatically. More importantly, we'll bind specific IPs to specific MAC addresses so that each worker always gets the same IP. Kubernetes nodes register with their IP, so you don't want those changing.
sudo apt-get install -y isc-dhcp-server
Edit /etc/dhcp/dhcpd.conf:
# The subnet the DHCP server manages
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.10 10.0.0.50;                      # dynamic range for any new devices
  option routers 10.0.0.1;                        # point clients to Pi 1 as gateway
  option domain-name-servers 8.8.8.8, 1.1.1.1;    # public DNS servers
}
# Static leases. Each worker always gets the same IP
# Find your Pi's MAC with: ip link show eth0
host worker-1 {
  hardware ethernet XX:XX:XX:XX:XX:XX;  # replace with Pi 2's eth0 MAC
  fixed-address 10.0.0.11;
}
host worker-2 {
  hardware ethernet XX:XX:XX:XX:XX:XX;  # replace with Pi 3's eth0 MAC
  fixed-address 10.0.0.12;
}
host worker-3 {
  hardware ethernet XX:XX:XX:XX:XX:XX;  # replace with Pi 4's eth0 MAC
  fixed-address 10.0.0.13;
}
Tell the DHCP server to only listen on eth0 (we don't want it interfering with your home network's Wi-Fi). Edit /etc/default/isc-dhcp-server:
INTERFACESv4="eth0"
Now, there's a common boot-order problem: the DHCP server can start before the ethernet interface has its static IP assigned. When that happens, the server sees no valid address on eth0 and crashes. To fix this, create a systemd drop-in that makes the DHCP service wait for eth0 to be ready:
sudo mkdir -p /etc/systemd/system/isc-dhcp-server.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/isc-dhcp-server.service.d/wait-for-eth0.conf
[Unit]
After=sys-subsystem-net-devices-eth0.device
Wants=sys-subsystem-net-devices-eth0.device
[Service]
ExecStartPre=/bin/sh -c 'i=0; while [ $i -lt 30 ]; do ip -4 addr show eth0 | grep -q inet && exit 0; sleep 1; i=$((i+1)); done; exit 1'
Restart=on-failure
RestartSec=5
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now isc-dhcp-server
The ExecStartPre script polls for up to 30 seconds waiting for eth0 to have an IPv4 address. Restart=on-failure is a safety net: if something else goes wrong, systemd will retry instead of leaving you with no DHCP.
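If the service still refuses to start, these commands usually pinpoint the problem (standard Ubuntu/systemd tooling; dhcpd's -t flag validates the config without serving):

```shell
# Is the service up?
systemctl status isc-dhcp-server --no-pager

# Recent log lines usually name the offending config line
journalctl -u isc-dhcp-server -n 20 --no-pager

# Validate dhcpd.conf syntax without starting the server
sudo dhcpd -t -cf /etc/dhcp/dhcpd.conf
```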
Step 4. Boot the Workers and Verify
Plug all three workers into the switch, power them on, and give them a minute to boot and request DHCP leases. From Pi 1, verify:
# Check DHCP leases were assigned
cat /var/lib/dhcp/dhcpd.leases
# Ping each worker
ping -c 2 10.0.0.11
ping -c 2 10.0.0.12
ping -c 2 10.0.0.13
# Verify workers can reach the internet through Pi 1's NAT
ssh ubuntu@10.0.0.11 'ping -c 2 8.8.8.8'
Step 5. SSH Config on Your Laptop
You'll be managing this cluster from your laptop, not from a monitor plugged into Pi 1. The workers aren't directly reachable from your laptop (they're on the 10.0.0.0/24 subnet, which your laptop doesn't know about), so we use Pi 1 as a jump host.
Add this to ~/.ssh/config on your laptop:
Host k8s-control
  HostName <Pi 1's Wi-Fi IP>
  User ubuntu
  IdentityFile ~/.ssh/id_ed25519

Host worker-1
  HostName 10.0.0.11
  User ubuntu
  IdentityFile ~/.ssh/id_ed25519
  ProxyJump k8s-control

Host worker-2
  HostName 10.0.0.12
  User ubuntu
  IdentityFile ~/.ssh/id_ed25519
  ProxyJump k8s-control

Host worker-3
  HostName 10.0.0.13
  User ubuntu
  IdentityFile ~/.ssh/id_ed25519
  ProxyJump k8s-control
Now ssh worker-2 from your laptop will automatically tunnel through Pi 1. The ProxyJump directive tells SSH to first connect to k8s-control and then hop to the worker from there.
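With that config in place the hop is invisible in day-to-day use; for example (manifest.yaml is just a stand-in file name):

```shell
# Runs hostname on worker-2, tunneled through Pi 1
ssh worker-2 hostname

# scp honors the same config, so file copies hop through Pi 1 too
scp manifest.yaml worker-1:/home/ubuntu/
```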
Phase 3: Kubernetes Prerequisites
Before installing Kubernetes, the nodes need a few changes. Run everything in this phase on all four Pis.
Step 1. Disable Swap
sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab
Why: Swap lets the OS spill memory to disk when RAM is full. That's useful in general, but it undermines Kubernetes resource management. Disabling swap means a container that exceeds its memory limit gets OOMKilled immediately, which is easier to debug and more honest. Recent Kubernetes versions (1.28+) do support swap, but configuring it properly doesn't belong in a getting-started guide.
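A quick check that swap is off now and won't come back after a reboot:

```shell
# No output means no active swap devices
swapon --show

# The Swap row should show 0B used and 0B total
free -h

# Confirm the fstab entry really was removed
grep swap /etc/fstab || echo "no swap entries left"
```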
Step 2. Load Required Kernel Modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
Why overlay? Containerd (the container runtime) uses the OverlayFS filesystem driver to efficiently layer container images. Each container image is made of read-only layers stacked on top of each other with a thin writable layer on top. Without the overlay module, containerd can't create containers.
Why br_netfilter? Kubernetes networking relies on Linux bridges to connect containers. By default, traffic traversing a bridge bypasses iptables rules. The br_netfilter module makes bridged traffic visible to iptables, which is essential because Kubernetes Services (ClusterIP, NodePort) are implemented entirely as iptables rules by kube-proxy.
Step 3. Set Sysctl Parameters
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
These are the runtime settings that complement the kernel modules above. bridge-nf-call-iptables enables the actual filtering we just made possible with br_netfilter. ip_forward allows packets to be routed between network interfaces. Without this, a pod on one node can't communicate with a pod on another node, because the kernel would refuse to forward the traffic.
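To verify the modules and sysctls on each node after applying the above:

```shell
# Both modules should be listed
lsmod | grep -E 'overlay|br_netfilter'

# All three should report 1
sysctl net.bridge.bridge-nf-call-iptables \
       net.bridge.bridge-nf-call-ip6tables \
       net.ipv4.ip_forward
```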
Phase 4: Container Runtime
Kubernetes doesn't run containers itself. It delegates that to a container runtime. We'll use containerd, which is the industry standard.
Run this on every node:
sudo apt-get update && sudo apt-get install -y containerd
# Generate the default config file
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# Enable the systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
Why SystemdCgroup = true? Linux cgroups are what enforce resource limits on containers. There are two ways to manage cgroups: the legacy cgroupfs driver and the systemd driver. The kubelet defaults to the systemd driver, so containerd must use the same one.
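You can confirm the driver change stuck and containerd is healthy:

```shell
# Expect: SystemdCgroup = true
grep 'SystemdCgroup' /etc/containerd/config.toml

# Should print "active"
systemctl is-active containerd

# containerd's own CLI should report matching client/server versions
sudo ctr version
```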
Phase 5: Installing Kubernetes
Step 1. Add the Kubernetes Repository (All Nodes)
Kubernetes packages aren't in Ubuntu's default repos. We need to add the official pkgs.k8s.io repository.
# Import the signing key, this lets apt verify that packages
# actually come from the Kubernetes project and haven't been tampered with
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key \
| sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Add the repo
# The signed-by clause scopes the key to this repo only,
# so it can't be used to authenticate packages from other sources
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' \
| sudo tee /etc/apt/sources.list.d/kubernetes.list
Step 2. Install and Pin the Packages (All Nodes)
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
What each package does:
kubelet is the node agent. It runs on every node as a systemd service, watches the API server for pod assignments, and tells containerd to start or stop containers accordingly. It's the bridge between Kubernetes (the abstract orchestration layer) and containerd (the thing that actually runs containers).
kubeadm is the cluster bootstrapper; think of it as the Kubernetes installer. It generates certificates, brings up the control plane, and handles joining nodes.
kubectl is the CLI you use to interact with the cluster.
Why apt-mark hold? Kubernetes components must be upgraded in a specific order. If apt-get upgrade quietly bumped kubectl to v1.33 while your kubelet is still on v1.32, you'd have a version skew that could cause subtle breakage.
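A quick way to confirm the hold is in place and that versions match across nodes:

```shell
# Should list kubelet, kubeadm and kubectl
apt-mark showhold

# Run on every node; the versions should agree
kubeadm version -o short
kubectl version --client
kubelet --version
```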
Phase 6: Initializing the Cluster
This is where four independent Linux machines become a Kubernetes cluster.
Step 1. Initialize the Control Plane (Pi 1 Only)
sudo kubeadm init \
--apiserver-advertise-address=10.0.0.1 \
--pod-network-cidr=10.244.0.0/16
--apiserver-advertise-address=10.0.0.1 tells the API server to bind to Pi 1's ethernet interface. Without this flag, kubeadm might auto-detect the Wi-Fi interface instead, and the workers (which are on the ethernet subnet) wouldn't be able to reach the API server.
--pod-network-cidr=10.244.0.0/16 reserves this IP range for pod-to-pod networking. Every pod in the cluster gets an IP from this range. It's important that this doesn't overlap with your node subnet (10.0.0.0/24) or the default Kubernetes service CIDR (10.96.0.0/12). The 10.244.0.0/16 range is the conventional default for Flannel, the CNI plugin we'll install next.
When this command finishes, kubeadm will have done several things: generated all the TLS certificates under /etc/kubernetes/pki/, written static pod manifests for etcd, the API server, the controller manager, and the scheduler into /etc/kubernetes/manifests/, and started the kubelet which launched all of them as containers.
Save the kubeadm join command from the output, we'll need it in Step 4 to add the workers.
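If you misplace it (or come back later — bootstrap tokens expire after 24 hours by default), you can generate a fresh one on Pi 1:

```shell
# Prints a complete, ready-to-paste kubeadm join command with a new token
sudo kubeadm token create --print-join-command
```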
Step 2. Configure kubectl (Pi 1)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubeadm init generated an admin kubeconfig file that contains a client certificate and the cluster CA. This copies it to your home directory so you can run kubectl as a normal user. Without this, you'd need sudo for every kubectl command.
Test it:
kubectl get nodes
Step 3. Install a CNI Plugin (Flannel)
A CNI (Container Network Interface) plugin is what gives pods their IP addresses and enables pod-to-pod communication across nodes. Without one, the kubelet reports the node as NotReady because the networking requirement isn't satisfied.
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
This command creates a DaemonSet that automatically runs one pod on every node. So as workers join, they'll each get a Flannel pod that configures their local networking.
Flannel is one of the simplest CNI plugins. It creates a VXLAN overlay network that tunnels pod traffic between nodes through the existing ethernet network. Each node gets a /24 slice of the 10.244.0.0/16 range and Flannel handles the routing between them.
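Once Flannel is running you can see both the DaemonSet pods and the per-node subnet allocations (the kube-flannel namespace matches recent upstream manifests; older ones used kube-system):

```shell
# One flannel pod per node
kubectl get pods -n kube-flannel -o wide

# Each node's /24 slice of 10.244.0.0/16
kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR
```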
Step 4. Join the Workers (Pi 2, Pi 3, Pi 4)
SSH into each worker and run the join command from Step 1:
sudo kubeadm join 10.0.0.1:6443 \
--token <your-token> \
--discovery-token-ca-cert-hash sha256:<your-hash>
What happens during a join: the worker contacts the API server at 10.0.0.1:6443, verifies the server's TLS certificate against the hash you provided, sends a certificate signing request which the control plane auto-approves and then starts its kubelet with the signed certificate. The kubelet registers the node with the API server and Kubernetes begins scheduling the Flannel DaemonSet pod onto it.
Step 5. Verify the Cluster
Back on Pi 1:
kubectl get nodes -o wide
After about a minute (while Flannel pulls its container images), all four nodes should show Ready.
Check that all system pods are running:
kubectl get pods -A
Note: The control plane node has a taint (node-role.kubernetes.io/control-plane:NoSchedule) that prevents regular workloads from being scheduled on it.
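On a small homelab you may prefer to let Pi 1 run regular workloads as well. Removing the taint is optional and reversible; the trailing minus is what removes it:

```shell
# Allow ordinary pods to be scheduled on the control plane node
kubectl taint nodes k8s-control node-role.kubernetes.io/control-plane:NoSchedule-
```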
What You Have Now
Congratulations! You have a functioning Kubernetes cluster. Specifically:
A control plane (Pi 1) running etcd, the API server, the controller manager and the scheduler.
Three worker nodes (Pi 2–4) running kubelet and kube-proxy, ready to accept workloads.
An overlay network (Flannel) that gives every pod a unique IP and routes traffic between them, even across nodes.
Cluster DNS (CoreDNS) so pods can find each other by name instead of IP.
Where to Go Next
This cluster is a foundation. Some next steps to consider:
Add a metrics server. Then you'll be able to check node and pod resource usage with commands like kubectl top nodes.
Deploy a real application. Samples like Amazon's retail store app or Google's microservices demo are good places to start.
Try breaking things. Drain a node (kubectl drain), scale a deployment to more replicas than the cluster can handle, set resource limits and watch OOMKill, delete a pod and watch its controller recreate it.
Set up an Ingress controller. Install something like Traefik or ingress-nginx to route external HTTP traffic to services inside the cluster. This is how production clusters expose web applications.
Install a GitOps tool. Tools like ArgoCD let you declare your desired cluster state in a Git repository and have it automatically applied.
Add persistent storage. Explore local-path-provisioner or NFS to give your pods storage that survives pod restarts. Stateful workloads (databases, message queues) need this.
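As a concrete starting point for the "try breaking things" item above, here's a minimal drain-and-recover exercise (the deployment name and image are just examples):

```shell
# Deploy something with a few replicas
kubectl create deployment web --image=nginx --replicas=3
kubectl get pods -o wide    # note which nodes they landed on

# Evict everything from worker-2; its pods reschedule elsewhere
kubectl drain worker-2 --ignore-daemonsets
kubectl get pods -o wide

# Bring the node back into rotation
kubectl uncordon worker-2
```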
Epilogue
It may look overwhelming, but in reality it's nothing more than a weekend project. It's the closest you'll get to hands-on Kubernetes experience. Pun intended. Have fun!




