Keeping your cluster up-to-date is crucial for security, stability, and access to new features. In this blog, we’ll take a closer look at our home lab Kubernetes cluster running on Raspberry Pi devices. The setup includes 1 master node and 3 worker nodes, and it has been running smoothly for over four months without any major issues.
k8s home lab configuration — nodes
master2 — master node / control plane
master1 — worker node 3
worker1 — worker node 1
worker2 — worker node 2
k8s home lab configuration — system components
As part of regular maintenance and to stay up-to-date with the latest features and security patches, we decided to upgrade the cluster from Kubernetes v1.29 to v1.34. Alongside the upgrade, we’ll also walk through the process of backing up the ETCD datastore, which is crucial for fault tolerance and disaster recovery.
Here’s what we’ll cover:
- ⬆️ Step-by-step guide to upgrading Kubernetes components.
- 🛡️ Safely backing up ETCD in the master node.
- ✅ Post-upgrade verification.
A. Backing Up ETCD
Before diving into the upgrade process, it’s essential to create a backup of ETCD, the key-value store that holds all cluster state data. This step is crucial because if anything goes wrong during the upgrade — whether it’s a misconfiguration or a failed component — we need a reliable way to restore the cluster to its current stable state.
Think of the ETCD backup as your safety net. It ensures that even in the worst-case scenario, you won’t lose your cluster’s configuration, workloads, or networking setup.
Run the following command to confirm the ETCD pod name running on your control plane:
kubectl get pods -n kube-system | grep etcd
k8s configuration — ETCD setup
If your control plane (master node) runs etcd as a static pod (the default in kubeadm-based clusters, including Raspberry Pi setups) and the etcdctl binary is available on the host, you can run etcdctl directly on the master node, because the etcd certificates and data directory live locally under /etc/kubernetes/pki/etcd/ and /var/lib/etcd/. In that case, take the backup with:
sudo ETCDCTL_API=3 etcdctl snapshot save /var/lib/etcd/backup.db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key
If ETCD runs as a container (inside the pod) or you don’t have the etcdctl binary available on the host, then you’ll need to exec into the pod to run the backup from inside the container.
In my setup, there is no etcdctl binary available on the host, so I need to exec into the etcd pod and run the backup from inside the container.
kubectl exec -n kube-system <etcd-pod-name> -- \
etcdctl snapshot save /var/lib/etcd/backup.db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key
#replace <etcd-pod-name> with the name of your etcd pod (e.g. etcd-master2 from the kubectl get pods output above)
Now, verify the snapshot using the following command:
kubectl exec -n kube-system <etcd-pod-name> -- \
etcdctl snapshot status /var/lib/etcd/backup.db -w table
In the above screenshot, you can see a deprecation warning: etcd v3.6+ deprecates etcdctl snapshot status and recommends etcdutl snapshot status instead. However, in my kubeadm static pod setup, the etcdutl binary is not included in the etcd container; only etcdctl exists inside the pod. The warning is simply a deprecation notice, not a failure, so you can safely continue using etcdctl snapshot status.
Since I hit this deprecation warning while using etcdctl, I checked the etcd version running in the pod with the command below. Even though etcd v3.6+ recommends etcdutl for snapshot management (etcdutl is a newer utility introduced as a replacement for certain offline snapshot operations), the etcdctl bundled in the kubeadm etcd pod is fully sufficient for backup and restore.
kubectl exec -n kube-system etcd-master2 -- etcdctl version
In some scenarios, the etcd backup file may reside only inside the etcd pod and not in a host-accessible directory.
This can happen if the host does not have etcdctl installed, so the snapshot had to be created from inside the pod. In such cases, you can copy the backup file out of the pod to a host-accessible directory, for example with kubectl cp (which relies on tar being present inside the container):
kubectl cp kube-system/etcd-master2:/tmp/backup.tar.gz ./backup.tar.gz
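If the etcd image turns out to be too minimal for kubectl cp, a hedged alternative is to stream the single file over stdout. This sketch assumes the pod is named etcd-master2, that the snapshot was saved to /tmp/backup.db inside the pod, and that the image provides cat:
kubectl exec -n kube-system etcd-master2 -- cat /tmp/backup.db > ./backup.db
ls -lh ./backup.db   #quick size sanity check on the local copy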
In my setup, I created the etcd snapshot directly on the host in a host-accessible directory (for example, /var/lib/etcd/backup.db). Since the backup is already on the host, it can be copied or archived without using kubectl cp.
sudo cp /var/lib/etcd/backup.db ~/backup.db
sudo chown $USER:$USER ~/backup.db
ls -l ~/backup.db
Additionally, it’s a good practice to create a copy of the backup on a remote server. This ensures an extra layer of safety and helps protect against data loss in case of hardware failure or accidental deletion.
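For example, a couple of hedged options (the remote host name, user, and destination path here are placeholders you would replace with your own):
scp ~/backup.db user@backup-host:/srv/k8s-backups/etcd-backup-$(date +%F).db
#or keep a rolling copy in sync
rsync -avz ~/backup.db user@backup-host:/srv/k8s-backups/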
B. Upgrading the cluster
Since we have backed up etcd, we can safely proceed with the kubeadm upgrade for our Raspberry Pi Kubernetes cluster. Let’s go through a step-by-step process to upgrade the cluster from v1.29 → v1.34.
Kubernetes officially supports upgrading one minor version at a time, which means that to move from 1.29 to 1.34, you need to perform incremental upgrades:
Step 1: v1.29 → v1.30
Step 2: v1.30 → v1.31
Step 3: v1.31 → v1.32
Step 4: v1.32 → v1.33
Step 5: v1.33 → v1.34
Please refer to the following URL for k8s official documentation on skew policy:
Step 1: Pre-Checks for upgrading the cluster
Before upgrading the cluster, it’s essential to ensure that the cluster is healthy and stable. This helps prevent issues during the upgrade and ensures a smooth transition between versions.
kubectl version
kubectl get nodes
kubectl get cs
kubectl get pods -n kube-system
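As an optional extra, a small sketch like the one below can surface problems before you start; it only assumes kubectl access from the control plane:
#list any node whose STATUS is not exactly "Ready"
#(a cordoned node shows "Ready,SchedulingDisabled" and will be flagged too)
kubectl get nodes --no-headers | awk '$2 != "Ready" {print $1 " is " $2}'
#list pods that are neither Running nor Succeeded (Pending/Failed/Unknown)
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded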
Step 2: Upgrade kubeadm on the Master Node
You can also refer to the following documentation for more details regarding kubeadm upgrade:
To upgrade to v1.30, ensure that the directory /etc/apt/keyrings exists. If it doesn’t, create it before running the curl command (see the note below). Afterwards, update the package repository to target v1.30.
Each Kubernetes minor release repository (v1.29, v1.30, v1.31, etc.) may use a different GPG key. Using the key from an older version, such as v1.29, can cause package verification failures when installing packages from the v1.30 repository.
To prevent such errors, always use the GPG key and repository provided specifically for the minor version you are upgrading to.
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Before upgrading kubeadm, ensure your package lists are up to date. Then install the required version using the following commands:
sudo apt-mark unhold kubelet kubeadm kubectl && \
sudo apt-get update
sudo apt-get install -y kubeadm=1.30.14-1.1 kubelet=1.30.14-1.1 kubectl=1.30.14-1.1
sudo apt-mark hold kubelet kubeadm kubectl
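As a quick sanity check, you can confirm the three packages are pinned again so a routine apt upgrade cannot bump them unexpectedly:
apt-mark showhold
#should list kubeadm, kubectl and kubelet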
If you want to upgrade to a specific Kubernetes version, you can use the following command to list all available minor versions in the corresponding repository:
apt-cache madison kubeadm
available packages in the repo
For comprehensive guidance on upgrading your Kubernetes cluster to v1.30, refer to the official Kubernetes documentation:
This resource provides detailed, step-by-step instructions tailored for clusters managed with kubeadm, ensuring a smooth and efficient upgrade process.
Once the upgrade is complete, verify the kubeadm version using the following command:
kubeadm version
kubectl version --client
kubelet --version
Step 3: Plan the Upgrade
After upgrading kubeadm, it’s important to check the upgrade path and verify the versions of all cluster components. This ensures that all parts of your Kubernetes cluster are compatible and running the expected versions.
sudo kubeadm upgrade plan
It will display:
- The current cluster version and control plane version.
- The available target versions you can upgrade to.
- The versions of kubelet and kubectl on your nodes.
- Any required steps or notes for a smooth upgrade.
kubeadm upgrade plan output — 1
kubeadm upgrade plan output — 2
Step 4: Upgrade the Control Plane
Once the upgrade plan is reviewed and all changes are understood, you can proceed with upgrading the cluster. Use the kubeadm upgrade apply command, which is provided in the output of kubeadm upgrade plan.
sudo kubeadm upgrade apply v1.30.14
The upgrade process may take some time, depending on the size and complexity of your cluster. Once it completes successfully, you should see a message similar to the following:
Now, verify the control plane using the following commands:
#rebooting the master node or restarting the kubelet is highly recommended
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl get nodes
kubectl get pods -n kube-system
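If you want an extra check beyond kubectl get nodes, the sketch below lists the image tags of the control-plane static pods; it assumes the default kubeadm labels (tier=control-plane) are present:
kubectl get pods -n kube-system -l tier=control-plane \
  -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image
#the kube-apiserver, controller-manager and scheduler images should now show the new tag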
Step 5: Upgrade Worker Nodes
Now repeat the same steps from 'Step 2: Upgrade kubeadm on the Master Node' on each worker node (master1, worker1, and worker2). Workers do not control the cluster state, so running kubeadm upgrade plan there doesn't provide meaningful information. First, drain the worker node from the control plane so that all workloads are evicted from it.
ssh to controlplane (master2)
-------------------------------
#step 1:
kubectl drain <worker-node> --ignore-daemonsets --delete-emptydir-data
***
kubectl drain master1 --ignore-daemonsets --delete-emptydir-data
***
draining resources from worker node (master1)
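To double-check the drain before touching the node, you can confirm that only DaemonSet-managed pods (CNI, kube-proxy, and so on) remain scheduled there; this assumes the drained node is master1:
kubectl get pods -A -o wide --field-selector spec.nodeName=master1
kubectl get node master1   #STATUS should read Ready,SchedulingDisabled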
Now you can repeat the steps we followed on the control plane to upgrade the version:
ssh to master1 node
-------------------------------
#step 1:
sudo rm -rf /etc/apt/sources.list.d/kubernetes.list
#step 2:
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
#step 3:
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
#step 4:
sudo apt-mark unhold kubelet kubeadm kubectl && \
sudo apt-get update
sudo apt-get install -y kubeadm=1.30.14-1.1 kubelet=1.30.14-1.1 kubectl=1.30.14-1.1
sudo apt-mark hold kubelet kubeadm kubectl
#step 5:
kubeadm version
#step 6:
#rebooting the node or restarting the kubelet is highly recommended
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl get nodes
kubectl get pods -n kube-system
#step 7:
sudo kubeadm upgrade node
upgrading worker node — host ‘master1’ is a worker node here (master2 — master node / control plane, master1 — worker node 3, worker1 — worker node 1, worker2 — worker node 2)
Also verify the nodes from the control plane:
Now run the following command to make the node schedulable again:
kubectl uncordon master1
#run this command in the control plane.
The kubectl uncordon command marks a node as schedulable again after it has been drained.
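A quick way to confirm the uncordon took effect (again assuming the node is master1):
kubectl get node master1
#the SchedulingDisabled flag should no longer appear in the STATUS column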
Similarly, we can upgrade the other two worker nodes. The commands are listed below:
worker1:
ssh to controlplane (master2)
-------------------------------
#step 1:
kubectl drain <worker-node> --ignore-daemonsets --delete-emptydir-data
***
kubectl drain worker1 --ignore-daemonsets --delete-emptydir-data
***
ssh to worker1 node
#step 1:
sudo rm -rf /etc/apt/sources.list.d/kubernetes.list
#step 2:
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
#step 3:
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
#step 4:
sudo apt-mark unhold kubelet kubeadm kubectl && \
sudo apt-get update
sudo apt-get install -y kubeadm=1.30.14-1.1 kubelet=1.30.14-1.1 kubectl=1.30.14-1.1
sudo apt-mark hold kubelet kubeadm kubectl
#step 5:
kubeadm version
#step 6:
#rebooting the node or restarting the kubelet is highly recommended
sudo systemctl daemon-reload
sudo systemctl restart kubelet
#step 7:
sudo kubeadm upgrade node
ssh to controlplane (master2)
-------------------------------
#step 1:
kubectl uncordon worker1
kubectl get nodes
kubectl get pods -n kube-system
worker2:
ssh to controlplane (master2)
-------------------------------
#step 1:
kubectl drain <worker-node> --ignore-daemonsets --delete-emptydir-data
***
kubectl drain worker2 --ignore-daemonsets --delete-emptydir-data
***
ssh to worker2 node
-------------------------------
#step 1:
sudo rm -rf /etc/apt/sources.list.d/kubernetes.list
#step 2:
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
#step 3:
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
#step 4:
sudo apt-mark unhold kubelet kubeadm kubectl && \
sudo apt-get update
sudo apt-get install -y kubeadm=1.30.14-1.1 kubelet=1.30.14-1.1 kubectl=1.30.14-1.1
sudo apt-mark hold kubelet kubeadm kubectl
#step 5:
kubeadm version
#step 6:
#rebooting the node or restarting the kubelet is highly recommended
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl get nodes
kubectl get pods -n kube-system
#step 7:
sudo kubeadm upgrade node
ssh to controlplane (master2)
-------------------------------
#step 1:
kubectl uncordon worker2
All nodes have now been successfully upgraded to v1.30.14, as shown in the following screenshot.
Step 6: Upgrading to v1.34
Following the same process, we'll now upgrade the cluster from v1.30.14 to v1.31, and then continue hop by hop through the remaining versions:
Step 1: v1.29.15 → v1.30.14-1.1
Step 2: v1.30.14-1.1 → v1.31.13-1.1
Step 3: v1.31.13-1.1 → v1.32.9
Step 4: v1.32.9 → v1.33.5
Step 5: v1.33.5 → v1.34.1
On the control plane (master2), use the following steps to upgrade to version 1.31:
ssh to controlplane (master2)
#step 1:
sudo rm -rf /etc/apt/sources.list.d/kubernetes.list
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
#step 2:
sudo apt update
apt-cache madison kubeadm
#step 3:
sudo apt-mark unhold kubelet kubeadm kubectl && \
sudo apt-get update
sudo apt-get install -y kubeadm kubelet kubectl #installs the latest version available in the v1.31 repo
sudo apt-mark hold kubelet kubeadm kubectl
#step 4:
kubeadm version
kubectl version --client
kubelet --version
#step 5:
sudo kubeadm upgrade plan
#step 6:
sudo kubeadm upgrade apply <version>
#I used the following command
sudo kubeadm upgrade apply v1.31.13
#step 7:
#rebooting the master node or restarting the kubelet is highly recommended
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl get nodes
kubectl get pods -n kube-system
Now let's set up the worker nodes:
ssh to controlplane (master2)
-------------------------------
#step 1:
kubectl drain <worker-node> --ignore-daemonsets --delete-emptydir-data
ssh to <worker-node> node
-------------------------------
#step 1:
sudo rm -rf /etc/apt/sources.list.d/kubernetes.list
#step 2:
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
#step 3:
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
#step 4:
sudo apt update
apt-cache madison kubeadm
sudo apt-mark unhold kubelet kubeadm kubectl && \
sudo apt-get install -y kubeadm kubelet kubectl #installs the latest version available in the v1.31 repo
sudo apt-mark hold kubelet kubeadm kubectl
#step 5:
kubeadm version
#step 6:
sudo kubeadm upgrade node
#step 7:
#rebooting the node or restarting the kubelet is highly recommended
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl get nodes
kubectl get pods -n kube-system
ssh to controlplane (master2)
-------------------------------
#step 1:
kubectl uncordon <worker-node>
Follow the same steps to upgrade sequentially through the remaining versions. The process remains identical — the only change is the repository you add in Step 2, where the Kubernetes repository is updated.
Simply replace it with the corresponding repository for each target version before proceeding with the upgrade (a small helper sketch follows the per-version snippets below).
For v1.32:
------------
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
For v1.33:
------------
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
For v1.34:
------------
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
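If you prefer not to retype these two commands for every hop and every node, a small hypothetical helper like the one below can switch the repo in one go (the function name and the --yes overwrite flag for gpg are my additions, not from the official docs):
set_k8s_repo() {
  local minor="$1"   #e.g. v1.32
  sudo mkdir -p -m 755 /etc/apt/keyrings
  curl -fsSL "https://pkgs.k8s.io/core:/stable:/${minor}/deb/Release.key" \
    | sudo gpg --dearmor --yes -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
  echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${minor}/deb/ /" \
    | sudo tee /etc/apt/sources.list.d/kubernetes.list
  sudo apt-get update
}
#usage example on a node that is about to be upgraded:
set_k8s_repo v1.32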
So far, so good. If you ever need to revert the changes, you can restore etcd from the backup file by following the steps below.
#in a kubeadm cluster the API server and etcd run as static pods, not systemd services,
#so stop them by moving their manifests out of the way (the kubelet then stops the pods)
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /etc/kubernetes/manifests/etcd.yaml /tmp/
#restore into a fresh data directory (the restore fails if the target already holds data);
#this assumes etcdctl is available on the host
sudo ETCDCTL_API=3 etcdctl snapshot restore /var/lib/etcd/backup.db --data-dir /var/lib/etcd-from-backup
#point the etcd manifest's hostPath at /var/lib/etcd-from-backup (or swap the directories),
#then move both manifests back so the kubelet restarts the pods
sudo mv /tmp/kube-apiserver.yaml /tmp/etcd.yaml /etc/kubernetes/manifests/
#now verify the restore using the following commands
sudo ETCDCTL_API=3 etcdctl endpoint health \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key
kubectl get nodes
kubectl get pods --all-namespaces
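Whether after the final hop or after a restore, one last check worth running from the control plane is to confirm that every node reports the expected kubelet version:
kubectl get nodes -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion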
We have now successfully completed the cluster upgrade. 🎉
If you have any questions, feel free to reach out — I’m not a specialist, but I’ll be happy to research and share the most accurate updates I can find.