Install A Kubernetes Cluster | How to setup a basic Kubernetes Cluster using KubeADM

In this tutorial, we're going to set up our first cluster using KubeADM. KubeADM is probably the most popular tool for setting up a production-ready cluster, and this is the start of our production-ready series.

Don't use this one in production because it's going to have only one control plane.

In the next tutorial of this series, we will add more control planes, and that's when it will be production-ready. This blog is an extension of Drew's playlist, kindly check it out.

Introduction

Kubeadm is a powerful tool that simplifies Kubernetes cluster setup. It provides best-practice defaults while ensuring a secure and production-ready environment. In this guide, we will walk through setting up a Kubernetes cluster using Kubeadm, discuss networking options, security considerations, common pitfalls, and next steps for deploying workloads.

SCENE 1: Pre-requisites

  • Note: We're using VirtualBox for these tutorials, so there's no need to install anything on physical servers. Unless you're using managed servers, this should be enough to get you going!

1. Setting up the VMs
We're going to start with 1 control plane node and 3 worker nodes (they are all set up using the Ubuntu tutorial). The only difference between these 4 VMs and the tutorial is that they are not set up using RAID and there is no swap partition. We only have a root partition and a boot partition. That's it.

VM_k8s_nodes

Check out the official Kubernetes docs to keep up with the prerequisites for creating a cluster using kubeadm.
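For example, the kubeadm prerequisites ask you to verify that every node has a unique MAC address and product_uuid. A quick way to check on each VM:

# list network interfaces and their MAC addresses
ip link show
# print this machine's product_uuid
sudo cat /sys/class/dmi/id/product_uuid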

2. Time to play with the Terminal(s)

We are using 4 nodes altogether (the author of the blog has ADHD, hence they often recommend multi-tasking). To view and use them in sync, follow the TMUX tutorial. Once you've followed that tutorial, your terminal screen should look like this:

TMUX Screen

All of them are synchronized by panes so we can run the same commands as we need to across all four machines.

3. Looking at the KubeADM docs

Head over to the official KubeADM docs to set our systems up accordingly.

Checking all of these will help us install the requirements efficiently. Let's go!

SCENE 2: Installing a Container Runtime (ContainerD)

One thing that's gonna be common in every tutorial and installation is pre-requisites, which will appear again and again, especially when we're setting up a Kubernetes cluster. So bear with me, because I'm dropping another pre-req, but it's important not to skip any.

Step 1 Install and Configure Pre-requisites for containerD

To manually enable IPv4 packet forwarding:

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF

Paste that into the terminal.

To make sure it's applied now and persists across reboots, run:

# Apply sysctl params without reboot
sudo sysctl --system

Finally, run this to verify that the above worked and the value is set to 1.
Run:

sysctl net.ipv4.ip_forward

Make sure the swap is off.

Run the following to open fstab:

sudo vim /etc/fstab

and comment out the /swap.img line by prefixing it with # (so it reads #/swap.img ...).

Now, run:

sudo swapoff -a

That will turn the swap off. Check by running free; the Swap row should show 0 across.
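For a quick confirmation:

# the Swap line should show 0 for total, used and free
free -h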

Now we're ready for KubeADM to run. If you don't do these things, you'll face issues running KubeADM.

Step 2 Install ContainerD

I am bad at pointing out locations in real life, LOL, but I will try to point out the when and where of the documentation. Kindly bear with me and don't come for me in the comment section 😂

Head over to: containerD > getting started with containerD GitHub repo > containerD release notes > scroll down to find the assets and choose the one for your machine > don't double click; instead, copy the link address of the release, and we're going to paste it into the terminal.

Phew😮‍💨

Run wget with the link address you copied and give it a second to download containerD.
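For example, for the 1.6.2 amd64 release used below (substitute the exact URL and version you copied from the releases page):

wget https://github.com/containerd/containerd/releases/download/v1.6.2/containerd-1.6.2-linux-amd64.tar.gz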

Run this to extract it under /usr/local:

  • switch to the root user so the remaining commands have the permissions they need:
sudo su
  • then Run:
$ tar Cxzvf /usr/local containerd-1.6.2-linux-amd64.tar.gz

Common FAQ

If you intend to start containerd via systemd, you should also download the containerd.service unit file from https://raw.githubusercontent.com/containerd/containerd/main/containerd.service into /usr/local/lib/systemd/system/containerd.service, and run the following commands:

Run

wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service -O /usr/lib/systemd/system/containerd.service

Run

systemctl daemon-reload
systemctl enable --now containerd

Run to see if everything is loaded and active.

systemctl status containerd

Step 3 Install runc

runc is a CLI tool for spawning and running containers on Linux according to the OCI specification.

Download the right version: for my machine it's runc.amd64 (don't double click again; copy the link address and paste it into the terminal).

  • Copy the link address and RUN:
wget https://github.com/opencontainers/runc/releases/download/v1.2.6/runc.amd64
  • Install runc
$ install -m 755 runc.amd64 /usr/local/sbin/runc
  • RUN runc to check if it's installed.
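A quick way to confirm it responds:

# prints the installed runc version
runc --version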

Step 4 Installing CNI Plugin

Run

$ mkdir -p /opt/cni/bin

Head Over to Getting started with ContainerD > Step 3 Install CNI Plugin > Releases > Choose a plugin for your system > copy the link address for your plugin, mine is Plugin Linux-amd64

Run

wget https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz

Run

$ tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

Make sure you're grabbing the latest version of each of these downloads.

Step 5 Looking out for config.toml

With step 4 done, it's time to give containerD a valid configuration file, config.toml.

ContainerD uses a configuration file located in /etc/containerd/config.toml for specifying daemon level options.

The default configuration can be generated via containerd config default > /etc/containerd/config.toml

Let's roll into our terminal to explore:

Run

mkdir /etc/containerd

Run

containerd config default > /etc/containerd/config.toml

Run

vim /etc/containerd/config.toml

You've found the right file if you land in the world of TOML.
We'll configure it further in a moment. Exit out of the file and head on to step 6.

Step 6 Configuring the systemd cgroup driver

To use the systemd cgroup driver in /etc/containerd/config.toml with runc, head over to the config.toml file.

Run

vim /etc/containerd/config.toml

Find [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc] in the config.toml

Further inside this plugin, find [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
and in here you'll find sub-options, choose SystemdCgroup.

Change SystemdCgroup = false to SystemdCgroup = true.
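After the edit, that part of config.toml should look roughly like this (only the SystemdCgroup line changes; the other options keep their defaults):

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true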

  • The systemd cgroup driver is recommended if you use cgroup v2.

How to check?

Run

stat -fc %T /sys/fs/cgroup

It reflects the exact status: if it prints cgroup2fs, you're on cgroup v2.

Let's take a deep breath along with the nodes, we've come a long way 😮‍💨

Run

systemctl restart containerd

and you take a pause🍻...

Let's check the status

Run

systemctl status containerd

If everything is active ✅
then everything is up and running. HOORAYYYY!!!

SCENE 3: Install KubeADM, Kubelet and Kubectl

Now that we've done our warm-up, the workout should be easy. Hope so!!
It's pretty straightforward now. Hope SO!!

Step 1 Updating the apt packages needed to use the Kubernetes apt repository

Let's Run

apt-get update

Run

# apt-transport-https may be a dummy package; if so, you can skip that package
apt-get install -y apt-transport-https ca-certificates curl gpg

Step 2 Download the public signing key for the Kubernetes package repositories:

Run

# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Step 3 Add the appropriate Kubernetes apt repository
Run

# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

While following this tutorial, I would highly suggest keeping the official documentation open alongside so you always pick up the right version.

Step 4 Update the apt package index, install kubelet, kubeadm and kubectl, and pin their version:

Run

apt-get update

Run

apt-get install -y kubelet kubeadm kubectl

Run

apt-mark hold kubelet kubeadm kubectl

Kubelet: the node agent responsible for actually running containers on the nodes.
KubeADM: the tool responsible for creating (bootstrapping) the cluster.
KubeCtl: the CLI responsible for interacting with the Kubernetes API.
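A quick sanity check that all three landed and are on the version you expect:

kubeadm version
kubectl version --client
kubelet --version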

SCENE 4: Creating a CLUSTER finally

Initializing your control plane node

We want to set up a control plane and then add some worker nodes to it, and we want a Highly Available (HA) control plane. For that we're going to use a --control-plane-endpoint. If we were in the cloud, we could point it at the IP of a load balancer.

We are not in the cloud. We are setting this up on our machines at home, and we still need a control-plane endpoint.

Thankfully, KubeVIP enters the chat✨

KubeVIP is basically a tool that gives us a virtual IP and load balancing for the control plane without needing a cloud load balancer.

Preparing for HA control plane with KubeVIP

We are going to use KubeVIP documentation.

We need to do this as part of setting up KubeADM.

Note:
If you're not worried about High Availability (which you should be), then you can go ahead and skip this section, but I recommend doing it, because if you don't, you'll have to set up external load balancers to manage the incoming traffic. In fact, you'd have to reconfigure the cluster, because you won't have this --control-plane-endpoint flag.

Let's get going fellas 🤖🤖🤖

Step 1: Generating a Manifest

" In order to create an easier experience of consuming the various functionality within kube-vip, we can use the kube-vip container itself to generate our static Pod manifest. We do this by running the kube-vip image as a container and passing in the various flags for the capabilities we want to enable. " -says the docs

We need to set a VIP address to be used for the control plane.

Here I use an IP that's free. You can use any IP that's free and won't conflict with anything on your network.

Run

ip a

I'm gonna use this :

export VIP=xxx.xxx.x.xxx

Then I need to export an interface name.

export INTERFACE=interface_name

This is the interface on the control plane that will announce the VIP.
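For example, on my setup these could look like the following (enp0s3 is just a typical VirtualBox interface name used for illustration; use whatever ip a showed for your control plane NIC, and an unused address on your subnet):

export VIP=192.168.0.200
export INTERFACE=enp0s3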

Get the latest version of the kube-vip release by parsing the GitHub API. This step requires that jq and curl are installed.

apt install jq -y
KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")
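You can confirm the lookup worked before moving on:

# should print the latest kube-vip release name
echo $KVVERSION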

We don't need to do this kube-vip setup on the other nodes because they're not control plane nodes, so the kube-vip setup is done in the control plane's pane only.

Step 2: Creating a Manifest

With the input values now set, we can pull and run the kube-vip image supplying it the desired flags and values.

For containerd, run the below command:

alias kube-vip="ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION; ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip"

Step 3: ARP

Make sure the location exists:

mkdir -p /etc/kubernetes/manifests

Run

kube-vip manifest pod \
    --interface $INTERFACE \
    --address $VIP \
    --controlplane \
    --services \
    --arp \
    --leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml

Jump back to official docs > Initializing the control plane

  1. If you have plans to upgrade this single control-plane kubeadm cluster to high availability you should specify the --control-plane-endpoint to set the shared endpoint for all control-plane nodes.

Run

# 192.168.0.200 is the VIP from the kube-vip step
kubeadm init --control-plane-endpoint 192.168.0.200

2. Choose a Pod network add-on, and verify whether it requires any arguments to be passed to kubeadm init. Depending on which third-party provider you choose, you might need to set the --pod-network-cidr to a provider-specific value.

Head over to Calico > https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises > Manifest

Download the Calico networking manifest for the Kubernetes API datastore.

Run

curl https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/calico.yaml -O


Explaining POD-CIDR

A CIDR (Classless Inter-Domain Routing) block is a way of assigning IP addresses out of a larger range. Your pods draw their addresses from a POD-CIDR: if we want the pods to have a network, we need to provide a CIDR for them.
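As a sketch, if your chosen network add-on expects a specific pod CIDR, you pass it at init time (10.244.0.0/16 below is only an illustrative range; check your add-on's docs for the value it expects, and keep the other flags from this guide):

# example only: hand the pod CIDR your add-on expects to kubeadm init
kubeadm init --pod-network-cidr 10.244.0.0/16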

Step 4: Setting up a DNS record

Set up a record in our host file.

Head over to vim /etc/hosts

Write 192.168.0.200 kube-api-server (i.e. x.x.x.x domain) in the file.
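So the relevant line in /etc/hosts ends up looking like this:

192.168.0.200   kube-api-server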

Run

apt install iputils-ping
ping kube-api-server (ping Domain)

Step 5: Considerations about apiserver-advertise-address and ControlPlaneEndpoint

To set the --apiserver-advertise-address for this particular API server:

kubeadm init  --control-plane-endpoint DOMAIN --apiserver-advertise-address NODE_IP

My DOMAIN: 192.168.0.200
My NODE_IP: 192.168.0.201
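Putting those values together, the init command for this setup looks like this (the note below explains the catch with using the raw VIP as the endpoint):

kubeadm init --control-plane-endpoint 192.168.0.200 --apiserver-advertise-address 192.168.0.201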

NOTE:
Using the raw IP (192.168.0.200) for the control plane endpoint is technically wrong. You can use it and it will work, not a problem. But if that IP changes later, you will no longer be able to access your cluster even if you change the IP in your kubeconfig. The reason is that the IP given to the control-plane-endpoint flag is encoded into the certificates generated when the Kubernetes cluster is spun up. So if the IP changes and you update it in your kubeconfig, accessing the cluster will just give you an error saying the IP doesn't exist. We will configure that later.

Hit enter on the above command and KubeADM will initialize the cluster.

Okayyyyyyyyyyy!!
The cluster should be up and running; if there's an error, there's a good chance something went wrong somewhere in the way you followed along. Run kubeadm reset and follow the process again to fix it.

Running this will point your kubeconfig at the admin config, so you won't have to move anything around:

export KUBECONFIG=/etc/kubernetes/admin.conf
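If you'd rather run kubectl as a regular user instead of root, the standard alternative from the kubeadm init output is to copy the admin config into your home directory:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config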

Let's deploy Pod Network into the cluster

Head over to https://kubernetes.io/docs/concepts/cluster-administration/addons/ to look at the options.

We will be sticking to Calico anyways.

Run

kubectl apply -f calico.yaml

Run

kubectl get -f calico.yaml

Run

kubectl get po -n kube-system

FINALE: Adding the worker nodes to the cluster

Go to

vim /etc/hosts

quickly remove the x.x.x.x domain entry from the control plane, then, with all four panes synchronized again, add the same line

x.x.x.x domain

on all 4 nodes and save it.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.200:6443 --token ztr6zr7.vk8q7x8wvdacfonh \
    --discovery-token-ca-cert-hash sha256:5c3d9d4ab685067731953ecbd8d4369e26e6b926d105289250ca47a3c8b10f2a

After pasting this into all 4 panes, get rid of the command in the control plane's pane and let it run only on the three worker nodes.

Now all 3 of them will start joining the cluster.

The token and hash come from your own kubeadm init output. They will be different, so check and paste yours.

Run

kubectl get nodes

cluster ready
We have all of our nodes running and ready. We're good to go~~

WE GOT A CLUSTER. THAT'S YOUR CLUSTER SET-UP AND RUNNING🚀

In theory, this is production-ready, but we're going to add some more advanced tooling so that it can actually be used in production.

But bravo
if you stuck with it to the end!!

See you in the next one.
