DEV Community

Purushotam Adhikari

Building a Kubernetes Cluster on Raspberry Pi for Home Lab Learning

Learning Kubernetes can be intimidating, especially when you're trying to understand concepts like container orchestration, networking, and distributed systems. While cloud platforms offer managed Kubernetes services, there's something powerful about building your own cluster from scratch. Enter the Raspberry Pi – an affordable, energy-efficient way to create a real Kubernetes cluster right on your desk.

In this comprehensive guide, we'll walk through building a multi-node Kubernetes cluster using Raspberry Pi devices. You'll gain hands-on experience with cluster architecture, networking, and deployment patterns that directly translate to production environments.

Why Raspberry Pi for Kubernetes?

Before diving into the technical details, let's understand why Raspberry Pi makes an excellent platform for learning Kubernetes:

Cost-Effective: A 3-node cluster costs around $200-300, compared to thousands for equivalent server hardware.

Low Power Consumption: The entire cluster uses less power than a single laptop, making it perfect for 24/7 home lab operation.

Real Hardware Experience: Unlike virtual machines, you're working with actual network interfaces, storage, and distributed hardware challenges.

ARM Architecture Learning: Gain experience with ARM-based containers, which are increasingly important in edge computing and mobile deployments.

Physical Debugging: When things go wrong, you can see blinking LEDs, check physical connections, and understand infrastructure in a tangible way.

Hardware Requirements

For this project, you'll need the following components:

Essential Components

  • 3-4 Raspberry Pi 4 (4GB RAM minimum): These will serve as your cluster nodes
  • High-quality microSD cards (32GB+, Class 10): One per Pi for the operating system
  • Network switch (5+ ports): To connect all Pis to your home network
  • Ethernet cables: One per Pi plus one for your router connection
  • USB-C power supplies: One per Pi, or a multi-port USB charging station
  • Raspberry Pi cases: Optional but recommended for protection and heat dissipation

Optional Enhancements

  • Cluster case or rack: Keeps everything organized and looks professional
  • Cooling fans: Prevents thermal throttling during heavy workloads
  • External USB storage: For persistent volumes and better I/O performance
  • PoE HATs: If you have a PoE switch, eliminates individual power supplies

Total Investment

Expect to spend $200-400 depending on your choices. This might seem significant initially, but it's far less expensive than cloud costs for equivalent learning time, and you own the hardware permanently.

Network Architecture Planning

Before assembling hardware, let's plan our network topology. A typical home lab Kubernetes cluster follows this pattern:

Internet → Router → Switch → Raspberry Pis
                  ↓
            Your Development Machine

IP Address Planning

Reserve a static IP range for your cluster. For example:

  • pi-master: 192.168.1.100
  • pi-worker-1: 192.168.1.101
  • pi-worker-2: 192.168.1.102
  • pi-worker-3: 192.168.1.103 (if using 4 nodes)

This static addressing simplifies cluster management and makes troubleshooting much easier.
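With the addresses reserved, it also helps to make each node reachable by name. If you don't run local DNS, a minimal sketch (assuming the hostnames and addresses above) is to append the mapping to `/etc/hosts` on every node and on your development machine:

```shell
# Map planned static IPs to hostnames so nodes can address
# each other by name without a local DNS server
cat <<EOF | sudo tee -a /etc/hosts
192.168.1.100 pi-master
192.168.1.101 pi-worker-1
192.168.1.102 pi-worker-2
192.168.1.103 pi-worker-3
EOF
```

This keeps kubeadm output and `kubectl get nodes` readable, and join commands work the same whether you use names or IPs.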

Network Requirements

  • All nodes must be on the same subnet
  • Each node needs internet access for pulling container images
  • Consider VLAN isolation if you have advanced networking gear

Operating System Setup

We'll use Ubuntu Server 22.04 LTS for ARM64, which provides excellent Kubernetes support and regular security updates.

Flashing the OS

Download the Ubuntu Server 22.04 LTS ARM64 image and use the Raspberry Pi Imager:

  1. Install Raspberry Pi Imager from the official website
  2. Select Ubuntu Server 22.04 LTS (64-bit) as your OS
  3. Configure advanced options:

    • Enable SSH with your public key
    • Set username/password
    • Configure WiFi (backup connectivity)
    • Set hostname (pi-master, pi-worker-1, etc.)
  4. Flash each microSD card with the appropriate hostname
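Once every card is flashed and the Pis have booted, a quick loop confirms they are all reachable over SSH. This is a sketch assuming key-based SSH and the `ubuntu` username (substitute whatever username you configured in the Imager):

```shell
# Verify SSH reachability of every node before proceeding
for ip in 192.168.1.100 192.168.1.101 192.168.1.102 192.168.1.103; do
  ssh -o ConnectTimeout=5 "ubuntu@$ip" hostname \
    || echo "unreachable: $ip"
done
```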

First Boot Configuration

After flashing, insert the microSD cards and power on each Pi. SSH into each node and perform initial setup:

# Update system packages
sudo apt update && sudo apt upgrade -y

# Install essential tools
sudo apt install -y curl wget git vim htop

# Set up static IP (edit netplan configuration)
sudo nano /etc/netplan/50-cloud-init.yaml

Configure the netplan file for static IP:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses:
        - 192.168.1.100/24  # Change per node
      routes:               # 'gateway4' is deprecated in newer netplan
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]

Apply the network configuration:

sudo netplan apply
sudo reboot
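After the reboot, it's worth confirming the static address and default route actually took effect (assuming the interface is `eth0`, as in the netplan file above):

```shell
# Confirm the static IP is bound and the gateway route exists
ip -4 addr show eth0
ip route show default
```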

System Optimization for Kubernetes

Kubernetes requires specific system configurations that aren't enabled by default:

# Enable cgroup memory and cpu
sudo nano /boot/firmware/cmdline.txt
# Append to the END of the existing single line (do not create a new line):
# cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

# Disable swap (Kubernetes requirement)
# Note: dphys-swapfile is a Raspberry Pi OS tool; on Ubuntu Server,
# disable any swap like this instead:
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Load required kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Configure sysctl parameters
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system
sudo reboot
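Because these settings only matter if they persist, verify them after the reboot before moving on:

```shell
# Confirm the kernel modules and sysctl values survived the reboot
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
```

Both sysctl values should report `1`; if the modules are missing, re-check `/etc/modules-load.d/k8s.conf`.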

Container Runtime Installation

Kubernetes needs a container runtime. We'll use containerd, which is the current standard and provides excellent performance on ARM architecture.

Installing containerd

Run these commands on all nodes:

# Add Docker's official GPG key and repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo "deb [arch=arm64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install containerd
sudo apt update
sudo apt install -y containerd.io

# Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# Enable SystemdCgroup (required for Kubernetes)
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Restart and enable containerd
sudo systemctl restart containerd
sudo systemctl enable containerd

Verification

Verify containerd is running correctly:

sudo systemctl status containerd
sudo ctr version

Kubernetes Installation

Now we'll install Kubernetes components using kubeadm, kubelet, and kubectl.

Adding Kubernetes Repository

Execute on all nodes:

# Add Kubernetes signing key (the keyrings directory may not exist yet)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add Kubernetes repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install Kubernetes components
sudo apt update
sudo apt install -y kubelet kubeadm kubectl

# Hold packages to prevent automatic updates
sudo apt-mark hold kubelet kubeadm kubectl

Initialize the Master Node

On your designated master node (pi-master), initialize the cluster:

sudo kubeadm init \
  --apiserver-advertise-address=192.168.1.100 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12

This process takes 5-10 minutes. Upon completion, you'll see a join command – save it! You'll need it to add worker nodes.

Configure kubectl Access

Set up kubectl for your user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify the master node is ready:

kubectl get nodes
kubectl get pods --all-namespaces

Network Plugin Installation

Kubernetes requires a Container Network Interface (CNI) plugin for pod networking. We'll use Flannel, which works excellently on ARM architecture and is simple to configure.

Installing Flannel

# Download Flannel manifest
curl -sSL https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml -o kube-flannel.yml

# Apply Flannel configuration
kubectl apply -f kube-flannel.yml

# Verify Flannel pods are running
kubectl get pods -n kube-flannel

Wait for all Flannel pods to reach "Running" status before proceeding.
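Rather than polling by hand, kubectl can block until the pods are ready. A sketch, assuming the `kube-flannel` namespace used by recent Flannel manifests:

```shell
# Wait up to two minutes for every Flannel pod to report Ready
kubectl wait pods --all -n kube-flannel \
  --for=condition=Ready --timeout=120s
```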

Adding Worker Nodes

Now we'll join the worker nodes to our cluster using the join command from the master node initialization.

Joining Worker Nodes

On each worker node (pi-worker-1, pi-worker-2, etc.), run the join command you saved earlier:

sudo kubeadm join 192.168.1.100:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

If you lost the join command, generate a new one from the master:

kubeadm token create --print-join-command

Verification

From the master node, verify all nodes joined successfully:

kubectl get nodes
kubectl get pods --all-namespaces

You should see all nodes in "Ready" status. If any nodes show "NotReady", check the logs:

kubectl describe node <node-name>
journalctl -u kubelet -f

Storage Configuration

Raspberry Pi clusters need proper storage configuration for persistent volumes. We'll set up both local storage and NFS options.

Local Storage Class

Create a local storage class for applications that need fast, local disk access:

# local-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Apply the storage class:

kubectl apply -f local-storage.yaml
kubectl get storageclass

Persistent Volume Example

Create a persistent volume on one of your worker nodes:

# local-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/local-storage
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - pi-worker-1
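A pod consumes this volume through a PersistentVolumeClaim. A minimal sketch (the claim name `local-claim` is illustrative) that should bind to the PV above once a pod using it is scheduled:

```shell
# Create a claim against the local-storage class; with
# WaitForFirstConsumer it stays Pending until a pod uses it
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Gi
EOF
```

Remember that the directory `/mnt/local-storage` must already exist on pi-worker-1, since the no-provisioner storage class does not create it for you.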

Deploying Your First Application

Let's deploy a sample application to verify everything works correctly. We'll use nginx with a persistent volume.

Nginx Deployment

# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP

Deploy the application:

kubectl apply -f nginx-deployment.yaml
kubectl get deployments
kubectl get pods -o wide
kubectl get services

Testing the Deployment

Create a temporary pod to test connectivity:

kubectl run test-pod --image=busybox --rm -it --restart=Never -- sh

# Inside the test pod:
wget -qO- http://nginx-service
exit
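The ClusterIP service is only reachable from inside the cluster. To hit nginx from your development machine, one simple option is a second service of type NodePort, sketched here (the service name `nginx-nodeport` is illustrative):

```shell
# Expose the deployment on a high port of every node
kubectl expose deployment nginx-deployment --name=nginx-nodeport \
  --type=NodePort --port=80

# Note the assigned port (30000-32767), then browse to
# http://<any-node-ip>:<node-port>
kubectl get service nginx-nodeport
```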

Monitoring and Observability

A production-ready cluster needs monitoring. Let's set up basic monitoring with metrics-server and optional Prometheus.

Installing Metrics Server

# Download metrics-server manifest
curl -sSL https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -o metrics-server.yaml

# Edit to add --kubelet-insecure-tls flag for self-signed certificates
kubectl apply -f metrics-server.yaml

# Verify installation
kubectl get pods -n kube-system | grep metrics-server
kubectl top nodes
kubectl top pods
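If editing the manifest by hand feels fiddly, the same flag can be added after applying with a JSON patch. This is a sketch against the stock manifest, where the deployment and container are both named `metrics-server`:

```shell
# Append --kubelet-insecure-tls to the metrics-server container args
kubectl patch deployment metrics-server -n kube-system --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'
```

Give the pod a minute or two to restart before expecting `kubectl top` to return data.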

Basic Monitoring Commands

Monitor your cluster health with these essential commands:

# Node resource usage
kubectl top nodes

# Pod resource usage across all namespaces
kubectl top pods --all-namespaces

# Cluster events
kubectl get events --sort-by='.lastTimestamp'

# Service status
kubectl get all --all-namespaces

Useful Management Tools

Kubernetes Dashboard (Optional)

For a web-based management interface:

# Deploy dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Create admin user
kubectl create serviceaccount dashboard-admin-sa
kubectl create clusterrolebinding dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=default:dashboard-admin-sa

# Get access token
kubectl create token dashboard-admin-sa

# Port forward to access dashboard
kubectl proxy --address='0.0.0.0' --accept-hosts='^*$'

Access the dashboard at: http://your-master-ip:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Helm Package Manager

Install Helm for easier application management:

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version
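As a first Helm exercise, you might install a chart from a public repository. A sketch using the Bitnami repo; since this is an arm64 cluster, check that the chart's images support arm64 before installing:

```shell
# Add a chart repository and install a release from it
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-nginx bitnami/nginx

# List installed releases
helm list
```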

Troubleshooting Common Issues

Node Not Ready

# Check node status
kubectl describe node <node-name>

# Check kubelet logs
sudo journalctl -u kubelet -f

# Common fixes
sudo systemctl restart kubelet
sudo systemctl restart containerd

Pod Scheduling Issues

# Check resource constraints
kubectl describe pod <pod-name>

# View node resources
kubectl describe nodes

# Check for taints (jq is not installed by default: sudo apt install -y jq)
kubectl get nodes -o json | jq '.items[].spec.taints'

Networking Problems

# Verify Flannel is running
kubectl get pods -n kube-flannel

# Check pod network connectivity
kubectl exec -it <pod-name> -- ping <another-pod-ip>

# Restart networking
kubectl delete pods -n kube-flannel --all

Performance Issues

# Monitor resource usage
kubectl top nodes
kubectl top pods --all-namespaces

# Check for thermal throttling
vcgencmd measure_temp
vcgencmd get_throttled
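`vcgencmd` runs on each Pi itself; to sweep temperatures across the whole cluster from your workstation, a small loop over SSH works. A sketch assuming the hostnames from earlier and key-based SSH (on Ubuntu, `vcgencmd` is provided by the `libraspberrypi-bin` package if it's missing):

```shell
# Report the SoC temperature of every node in one pass
for host in pi-master pi-worker-1 pi-worker-2 pi-worker-3; do
  echo -n "$host: "
  ssh "$host" vcgencmd measure_temp
done
```

Sustained temperatures above roughly 80°C trigger throttling, which is a common cause of mysteriously slow Pi clusters; add cooling before tuning anything else.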

Security Considerations

Basic Security Hardening

# Audit cluster-wide permissions rather than deleting defaults
# (removing built-in bindings like cluster-admin can lock you out)
kubectl get clusterrolebindings

# Create limited service accounts
kubectl create serviceaccount limited-user
kubectl create rolebinding limited-user --clusterrole=view --serviceaccount=default:limited-user

Network Security

Note that Flannel by itself does not enforce NetworkPolicy objects; for rules like the one below to take effect, you need a policy-capable CNI such as Calico (or Canal, which combines Calico policy with Flannel networking).

# Example Network Policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
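A blanket deny-all also blocks DNS lookups, so it's usually paired with narrowly scoped allow rules. A sketch permitting egress to kube-dns (again, only enforced if your CNI supports NetworkPolicy):

```shell
# Allow all pods in the namespace to reach DNS in kube-system
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
EOF
```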

Maintenance and Updates

Regular Maintenance Tasks

# Update system packages (monthly)
sudo apt update && sudo apt upgrade -y

# Clean up unused images
sudo crictl rmi --prune

# Check cluster health (componentstatuses is deprecated;
# the /readyz endpoint is the modern health check)
kubectl get --raw='/readyz?verbose'
kubectl get nodes
kubectl get pods --all-namespaces

Kubernetes Updates

Plan for quarterly Kubernetes updates:

# Check current version
kubectl version

# Plan upgrade
sudo kubeadm upgrade plan

# Upgrade master node
sudo kubeadm upgrade apply v1.28.x

# Upgrade worker nodes
sudo kubeadm upgrade node
sudo systemctl restart kubelet
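Before upgrading each worker, drain it so workloads reschedule onto the remaining nodes, then uncordon it afterwards. The standard kubeadm flow, run from the master, per worker:

```shell
# Evict workloads from the node before upgrading it
kubectl drain pi-worker-1 --ignore-daemonsets --delete-emptydir-data

# ...run the upgrade steps on pi-worker-1, then readmit it:
kubectl uncordon pi-worker-1
```

With three or more workers, draining one node at a time keeps your replicated applications available throughout the upgrade.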

Learning Exercises and Next Steps

Now that your cluster is running, try these hands-on exercises:

Beginner Exercises

  1. Deploy a multi-tier application (web frontend, API backend, database)
  2. Configure resource limits and requests for different workloads
  3. Set up ingress controllers for external traffic management
  4. Implement secrets management for sensitive configuration data
  5. Create persistent volumes for stateful applications
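As a starting point for the secrets exercise, here is a sketch that creates a generic secret and inspects it (the name `demo-credentials` and the values are illustrative):

```shell
# Create a secret from literal key/value pairs
kubectl create secret generic demo-credentials \
  --from-literal=username=admin \
  --from-literal=password='change-me'

# Values are base64-encoded, not encrypted, in the stored object
kubectl get secret demo-credentials -o yaml
```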

Intermediate Challenges

  1. Deploy Prometheus and Grafana for comprehensive monitoring
  2. Set up automated backups using Velero
  3. Implement GitOps workflows with ArgoCD or Flux
  4. Configure service mesh with Istio (resource intensive)
  5. Set up disaster recovery procedures

Advanced Projects

  1. Multi-cluster management with cluster API
  2. Custom operators for application lifecycle management
  3. Edge computing scenarios with ARM-specific optimizations
  4. IoT data processing pipelines leveraging the Pi's GPIO capabilities
  5. Machine learning workloads using TensorFlow Lite

Cost Analysis and ROI

Let's break down the economics of your home lab investment:

Initial Investment

  • Hardware: $200-400
  • Time setup: 8-16 hours
  • Learning materials: $0-100

Ongoing Costs

  • Electricity: ~$30/year (24/7 operation)
  • Internet bandwidth: minimal impact
  • Maintenance time: 2-4 hours/month

Learning Value

  • Kubernetes certification prep: $300+ value
  • Production experience: invaluable
  • Interview preparation: significant career impact
  • Side project platform: enables additional learning

Production Readiness Checklist

While this cluster is perfect for learning, consider these factors for production use:

High Availability

  • [ ] Multiple master nodes (3 or 5)
  • [ ] External etcd cluster
  • [ ] Load balancer for API server
  • [ ] Backup and restore procedures

Security

  • [ ] Certificate management
  • [ ] Network policies
  • [ ] Pod security standards
  • [ ] Regular security updates

Monitoring and Logging

  • [ ] Comprehensive monitoring stack
  • [ ] Centralized logging
  • [ ] Alerting configuration
  • [ ] Performance baselines

Conclusion

Building a Kubernetes cluster on Raspberry Pi provides an unmatched learning experience that bridges the gap between theoretical knowledge and practical expertise. You've created a real distributed system that demonstrates core Kubernetes concepts while remaining affordable and manageable.

The skills you've developed – cluster architecture, networking, troubleshooting, and application deployment – translate directly to production environments. Whether you're preparing for certifications, building portfolio projects, or simply satisfying curiosity about container orchestration, this cluster serves as an excellent foundation.

Remember that learning Kubernetes is a journey, not a destination. Your Pi cluster will evolve as you experiment with new technologies, deploy different applications, and encounter real-world challenges. The hands-on experience you gain from managing physical infrastructure will make you a more well-rounded engineer.

Keep experimenting, keep learning, and most importantly, have fun with your new Kubernetes cluster! The investment in time and hardware pays dividends in career growth and technical understanding.


Have you built your own Raspberry Pi Kubernetes cluster? Share your experiences and challenges in the comments below! What applications are you running on your home lab cluster?
