Unni P

Originally published at iamunnip.hashnode.dev
Building a Kubernetes v1.29 Cluster using kubeadm

In this article, we will look at how we can set up a three-node Kubernetes v1.29 cluster using kubeadm

Introduction

  • kubeadm is a tool used to create Kubernetes clusters

  • It automates the creation of Kubernetes clusters by bootstrapping the control plane, joining the nodes, etc.

  • It follows the Kubernetes release cycle

  • It is an open-source tool maintained by the Kubernetes community

Prerequisites

  • Create three Ubuntu 22.04 LTS instances, one each for the control plane, node-1 and node-2

  • Each instance should have a minimum of 2 CPUs and 2 GB RAM

  • Networking must be enabled between the instances

  • The required ports must be allowed between the instances (the defaults are listed below)

  • Swap must be disabled on the instances
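
For reference, these are the default ports used by the Kubernetes components, as listed in the official documentation; the Calico add-on installed later also needs its own ports open (for example, BGP on 179/tcp)

# control-plane

6443/tcp          Kubernetes API server
2379-2380/tcp     etcd server client API
10250/tcp         kubelet API
10257/tcp         kube-controller-manager
10259/tcp         kube-scheduler

# node-1 and node-2

10250/tcp         kubelet API
30000-32767/tcp   NodePort Services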

Initial Configuration

Set up unique hostnames on the control plane, node-1 and node-2

Once the hostnames are set, log out from the current session and log back in to reflect the changes

# control-plane

sudo hostnamectl set-hostname control-plane
# node-1

sudo hostnamectl set-hostname node-1
# node-2

sudo hostnamectl set-hostname node-2
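
Optionally, verify that the new hostname took effect; hostnamectl without arguments prints the current status

# control-plane, node-1 and node-2

hostnamectl status
  # the "Static hostname" field should show the new name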

Update the hosts file on the control plane, node-1 and node-2 to enable communication via hostnames

# control-plane, node-1 and node-2

sudo vi /etc/hosts

172.31.81.34 control-plane
172.31.81.93 node-1
172.31.90.71 node-2
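
The IP addresses above are from my environment, so replace them with the private IPs of your own instances. A quick connectivity check from the control plane

# control-plane

ping -c 2 node-1

ping -c 2 node-2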

Disable swap on the control plane, node-1 and node-2, and if a swap entry is present in the fstab file, comment out the line

Kubernetes has supported swap since v1.22, and since v1.28 swap is supported only with cgroup v2. The kubelet's NodeSwap feature gate is beta but disabled by default. You must disable swap unless the kubelet is explicitly configured to use it.

# control-plane, node-1 and node-2

sudo swapoff -a

sudo vi /etc/fstab
  # comment out swap entry
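
Verify that swap is off; swapon should print nothing and free should report 0B of swap

# control-plane, node-1 and node-2

swapon --show

free -h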

To use containerd as our container runtime on the control plane, node-1 and node-2, we first need to load some kernel modules and modify a few system settings

# control-plane, node-1 and node-2

cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay

sudo modprobe br_netfilter
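
Confirm that both modules are loaded

# control-plane, node-1 and node-2

lsmod | grep -e overlay -e br_netfilter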
# control-plane, node-1 and node-2

cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system
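
The sysctl settings can be queried directly to confirm they were applied

# control-plane, node-1 and node-2

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables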

Installation

Once the kernel modules are loaded and the system settings are applied, we can install the containerd runtime on the control plane, node-1 and node-2

# control-plane, node-1 and node-2

sudo apt update

sudo apt install -y containerd
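
Verify the installation; the exact containerd version may differ depending on the Ubuntu package repositories

# control-plane, node-1 and node-2

containerd --version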

Once the packages are installed, generate a default configuration file for containerd on the control plane, node-1 and node-2

# control-plane, node-1 and node-2

sudo mkdir -p /etc/containerd

sudo containerd config default | sudo tee /etc/containerd/config.toml

Change the SystemdCgroup value to true in the containerd configuration file and restart the service, so that containerd uses the systemd cgroup driver that kubeadm configures for the kubelet by default

# control-plane, node-1 and node-2

sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

sudo systemctl restart containerd
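
Double-check that the change was applied

# control-plane, node-1 and node-2

grep SystemdCgroup /etc/containerd/config.toml
  # SystemdCgroup = true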

We need to install some prerequisite packages on the control plane, node-1 and node-2 for configuring the Kubernetes package repository

# control-plane, node-1 and node-2

sudo apt update

sudo apt install -y apt-transport-https ca-certificates curl gpg

Download the public signing key for the Kubernetes package repository on the control plane, node-1 and node-2

# control-plane, node-1 and node-2

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
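
If the command above fails because /etc/apt/keyrings does not exist (it is only present by default on Debian 12, Ubuntu 22.04 and newer), create the directory and re-run the curl command

# control-plane, node-1 and node-2

sudo mkdir -p -m 755 /etc/apt/keyrings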

Add the appropriate Kubernetes apt repository on the control plane, node-1 and node-2

# control-plane, node-1 and node-2

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Install the kubeadm, kubelet and kubectl packages and hold their versions on the control plane, node-1 and node-2 to prevent unintended upgrades

# control-plane, node-1 and node-2

sudo apt update

sudo apt install -y kubeadm=1.29.0-1.1 kubelet=1.29.0-1.1 kubectl=1.29.0-1.1

sudo apt-mark hold kubeadm kubelet kubectl
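
Confirm the installed versions and that the packages are held

# control-plane, node-1 and node-2

kubeadm version

kubectl version --client

kubelet --version

apt-mark showhold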

Initialize the cluster using kubeadm on the control plane. The pod network CIDR of 192.168.0.0/16 matches the default pool used by the Calico manifest we will apply later

# control-plane

sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.29.0

Once the initialization is complete, set up kubectl access to the cluster on the control plane

# control-plane

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
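
Verify that kubectl can reach the cluster

# control-plane

kubectl cluster-info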

Verify the cluster status by listing the nodes

The control plane node is in a NotReady state because we haven't installed a network add-on yet

# control-plane

kubectl get nodes
NAME            STATUS     ROLES           AGE   VERSION
control-plane   NotReady   control-plane   45s   v1.29.0

kubectl get nodes -o wide
NAME            STATUS     ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
control-plane   NotReady   control-plane   52s   v1.29.0   172.31.81.34   <none>        Ubuntu 22.04.3 LTS   6.2.0-1012-aws   containerd://1.7.2

Install the Calico network add-on in the cluster and verify the status of the nodes

# control-plane

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/calico.yaml
# control-plane

kubectl -n kube-system get pods
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7c968b5878-x5trl   1/1     Running   0          46s
calico-node-grrf4                          1/1     Running   0          46s
coredns-76f75df574-cdcj2                   1/1     Running   0          4m19s
coredns-76f75df574-z4gxg                   1/1     Running   0          4m19s
etcd-control-plane                         1/1     Running   0          4m32s
kube-apiserver-control-plane               1/1     Running   0          4m34s
kube-controller-manager-control-plane      1/1     Running   0          4m32s
kube-proxy-78gqq                           1/1     Running   0          4m19s
kube-scheduler-control-plane               1/1     Running   0          4m32s

kubectl get nodes
NAME            STATUS   ROLES           AGE     VERSION
control-plane   Ready    control-plane   4m53s   v1.29.0

Once networking is up, join the worker nodes to the cluster

Get the join command from the control plane using kubeadm

# control-plane

kubeadm token create --print-join-command
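
By default, the generated token is valid for 24 hours; existing tokens can be listed on the control plane

# control-plane

kubeadm token list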

Once the join command is retrieved from the control plane, execute it on node-1 and node-2

# node-1 and node-2

sudo kubeadm join 172.31.81.34:6443 --token kvzidi.g65h3s8psp2h3dc6 --discovery-token-ca-cert-hash sha256:56c208595372c1073b47fa47e8de65922812a6ec322d938bd5ac64d8966c1f27

Verify the cluster; all the nodes should now be in a Ready state

# control-plane

kubectl get nodes
NAME            STATUS   ROLES           AGE     VERSION
control-plane   Ready    control-plane   7m50s   v1.29.0
node-1          Ready    <none>          76s     v1.29.0
node-2          Ready    <none>          79s     v1.29.0

kubectl get nodes -o wide
NAME            STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
control-plane   Ready    control-plane   8m12s   v1.29.0   172.31.81.34   <none>        Ubuntu 22.04.3 LTS   6.2.0-1012-aws   containerd://1.7.2
node-1          Ready    <none>          98s     v1.29.0   172.31.81.93   <none>        Ubuntu 22.04.3 LTS   6.2.0-1012-aws   containerd://1.7.2
node-2          Ready    <none>          101s    v1.29.0   172.31.90.71   <none>        Ubuntu 22.04.3 LTS   6.2.0-1012-aws   containerd://1.7.2

Application Deployment

Deploy an Nginx pod, expose it as a ClusterIP service and verify its status

# control-plane

kubectl run nginx --image=nginx --port=80 --expose
service/nginx created
pod/nginx created

kubectl get pods nginx -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          34s   192.168.247.1   node-2   <none>           <none>

kubectl get svc nginx
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.102.86.253   <none>        80/TCP    56s
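
Optionally, verify in-cluster access to the service via its DNS name from a temporary pod; busybox is just one convenient image for this

# control-plane

kubectl run tmp --image=busybox --rm -it --restart=Never -- wget -qO- http://nginx
  # should print the Nginx welcome page HTML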

Access the Nginx default page using port forwarding on the control plane. Note that kubectl port-forward runs in the foreground, so execute the curl command from a second terminal session.

# control-plane

kubectl port-forward svc/nginx 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

curl -i http://localhost:8080
HTTP/1.1 200 OK
Server: nginx/1.25.3
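
Finally, clean up the test resources

# control-plane

kubectl delete pod nginx

kubectl delete service nginx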

That's all for now

Reference

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
