DEV Community

Sai Simha Reddy
Mastering On-Premise Kubernetes: A Comprehensive Guide to Setting Up Your Own Cluster

Welcome to our guide on setting up a Kubernetes cluster on-premises! In this blog, we'll walk you through the process of implementing a Kubernetes cluster using one master node and three worker nodes, all running on Rocky Linux virtual servers. By following along, you'll gain the skills and knowledge needed to deploy and manage your own Kubernetes infrastructure in an on-premises environment. Let's dive in!

Before getting started:

What is Kubernetes?

Before we dive into the nitty-gritty of setting up our Kubernetes cluster on-premises, let's take a moment to understand what Kubernetes is and why it's such a powerful tool for container orchestration.

Kubernetes, often abbreviated as K8s, is an open-source platform designed to automate the deployment, scaling, and management of containerized applications. Originally developed by Google, Kubernetes has quickly become the de facto standard for container orchestration, offering features such as automated scaling, load balancing, and self-healing capabilities.

With Kubernetes, you can abstract away the underlying infrastructure and focus on deploying your applications in a consistent and efficient manner, regardless of whether they're running on-premises, in the cloud, or in a hybrid environment.
Now that we have a basic understanding of Kubernetes, let's roll up our sleeves and get started with setting up our on-premises cluster using Rocky Linux virtual servers.


Let's get started:

Pre-requisites:

4 Rocky Linux servers (this guide uses Rocky Linux 9). Each node should have at least 2 CPUs and 2 GB of RAM, the minimum kubeadm expects for a control-plane node.

Let's Start:

1. Add the hostnames of all four servers to the "/etc/hosts" file

10.168.100.21 master.kubernetes.com
10.168.100.22 worker1.kubernetes.com
10.168.100.23 worker2.kubernetes.com
10.168.100.24 worker3.kubernetes.com

(Note: replace these entries with your actual IP addresses and hostnames; kubernetes.com is used here purely for demonstration.)
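Since the same entries go on every node, the edit is easy to script. The sketch below appends them to a scratch copy so it is safe to dry-run; set HOSTS_FILE=/etc/hosts when running it for real:

```shell
# Append the cluster name mappings. HOSTS_FILE defaults to a scratch copy
# so this is safe to try anywhere; point it at /etc/hosts on real nodes.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.demo}"
cat >> "$HOSTS_FILE" <<'EOF'
10.168.100.21 master.kubernetes.com
10.168.100.22 worker1.kubernetes.com
10.168.100.23 worker2.kubernetes.com
10.168.100.24 worker3.kubernetes.com
EOF
grep -c 'kubernetes.com' "$HOSTS_FILE"   # 4 entries on a fresh file
```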

2. Disable the firewall service on all four nodes

If you are using the firewalld service:

  systemctl stop firewalld
  systemctl disable firewalld

If you are using the iptables service:

  systemctl stop iptables
  systemctl disable iptables
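If disabling the firewall is not acceptable in your environment, you can instead open just the ports Kubernetes needs (the port list comes from the upstream kubeadm documentation). This sketch only prints the firewalld commands for the control-plane node, so you can review them before piping to sh; workers need 10250 and the NodePort range 30000-32767 instead:

```shell
# Control-plane ports: 6443 API server, 2379-2380 etcd, 10250 kubelet,
# 10259 kube-scheduler, 10257 kube-controller-manager.
for port in 6443 2379-2380 10250 10259 10257; do
  echo "firewall-cmd --permanent --add-port=${port}/tcp"
done
echo "firewall-cmd --reload"
```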

3. Register the overlay and br_netfilter kernel modules so they load at boot

 sudo tee /etc/modules-load.d/containerd.conf <<EOF
 overlay
 br_netfilter
 EOF

4. Load the kernel modules into the running kernel immediately, without rebooting

 sudo modprobe overlay
 sudo modprobe br_netfilter

5. Configure the kernel networking parameters Kubernetes requires

 sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
 net.bridge.bridge-nf-call-ip6tables = 1
 net.bridge.bridge-nf-call-iptables = 1
 net.ipv4.ip_forward = 1
 EOF

 sudo sysctl --system
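After "sysctl --system", it is worth reading the values back to confirm they took effect; each should report 1. A minimal check (the bridge keys only exist once br_netfilter from step 4 is loaded):

```shell
# Read the IPv4 forwarding flag straight from /proc; it prints 1 when
# forwarding is enabled, 0 otherwise.
cat /proc/sys/net/ipv4/ip_forward

# The bridge settings can be checked the same way once br_netfilter is loaded:
#   sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```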

6. Deactivate all swap partitions or swap files that are currently in use

 sudo swapoff -a
 sudo sed -i '/swap/ s/^/#/' /etc/fstab
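The sed command comments out every fstab line that mentions swap, so the partition stays off after the reboot in step 8. The sketch below demonstrates it on a scratch copy with a made-up swap entry (the device names are hypothetical):

```shell
# Build a scratch fstab with a hypothetical swap entry, then apply the
# same sed used above and show the commented-out result.
FSTAB=/tmp/fstab.demo
printf '%s\n' \
  'UUID=1234-abcd /         xfs  defaults 0 0' \
  '/dev/mapper/rl-swap none swap defaults 0 0' > "$FSTAB"
sed -i '/swap/ s/^/#/' "$FSTAB"
grep '^#' "$FSTAB"   # -> #/dev/mapper/rl-swap none swap defaults 0 0
```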

7. Disable SELinux

 sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
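To see what that sed actually does, here it is applied to a scratch copy of a typical config file. The change only takes effect at the reboot in the next step; "sudo setenforce 0" would switch SELinux to permissive mode immediately in the meantime:

```shell
# Scratch copy of a typical /etc/selinux/config, then the same sed as above.
SEL=/tmp/selinux-config.demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$SEL"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$SEL"
grep '^SELINUX=' "$SEL"   # -> SELINUX=disabled
```

Note that the anchored pattern only rewrites the SELINUX= line; SELINUXTYPE= is left untouched.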

8. Reboot all the nodes once to apply the above changes

 sudo reboot

9. Add the Docker repository to the yum configuration on all the nodes (install yum-utils first if yum-config-manager is not available)

 yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

10. Download and install the container runtime on all the nodes

  yum install -y docker-ce docker-ce-cli containerd.io

11. Configure containerd to use systemd cgroups

  containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1

  sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
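This matters because kubeadm defaults the kubelet to the systemd cgroup driver, so containerd has to match or pods will fail to start. The sketch below shows the sed acting on a minimal fragment of the generated config (the real config.toml is much larger):

```shell
# Minimal fragment mimicking the runc options section of config.toml.
TOML=/tmp/config.toml.demo
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = false' > "$TOML"
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' "$TOML"
grep SystemdCgroup "$TOML"   # ->   SystemdCgroup = true
```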

12. Start and enable the container runtime services

  systemctl start containerd
  systemctl enable containerd

  systemctl start docker
  systemctl enable docker

13. Add the Kubernetes repository to the yum configuration on all the nodes

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

14. Download and install kubelet, kubeadm, and kubectl on all the nodes

 yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

(The --disableexcludes flag is needed because the repo file above deliberately excludes these packages from routine yum updates.)

15. Start and enable the kubelet service

 systemctl start kubelet
 systemctl enable kubelet

16. Initialize the Kubernetes cluster with custom configuration (run the below command on the master node)

kubeadm init --apiserver-advertise-address=10.168.100.21 --pod-network-cidr=172.100.0.0/16

Note: --apiserver-advertise-address should be the IP address of the master node in our K8s cluster. You can assign your own CIDR for --pod-network-cidr.

The cluster will be initialized, and all control plane components, such as kube-apiserver, etcd, kube-scheduler, and kube-controller-manager, will be set up.

As soon as the cluster is initialized, it prints the command the worker nodes use to join the cluster. In my case:

kubeadm join 10.168.100.21:6443 --token koyi1v.hoq6ssx0dxy6k5ji --discovery-token-ca-cert-hash sha256:fc5066a9e04652e4e6f67fa9e49749b76c6f9873df382e9b9ac47d747877bd7d

Save this command. The token expires after 24 hours by default; you can print a fresh join command at any time with "kubeadm token create --print-join-command" on the master node.

17. Set up the Kubernetes configuration for user access (on the master node)

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

18. Join the worker nodes to the cluster

Run the join command printed in step 16 on every worker node:

  kubeadm join 10.168.100.21:6443 --token koyi1v.hoq6ssx0dxy6k5ji --discovery-token-ca-cert-hash sha256:fc5066a9e04652e4e6f67fa9e49749b76c6f9873df382e9b9ac47d747877bd7d

Check the status on the master node using "kubectl get nodes". It will list all the worker nodes, but their status will be NotReady, since we have not set up the cluster network yet.

19. Install a pod network add-on using the Calico network (on the master node)

  curl https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/calico.yaml -O

If you are using a pod CIDR other than 192.168.0.0/16, uncomment the CALICO_IPV4POOL_CIDR variable in the manifest (calico.yaml) and set it to the same value as your chosen pod CIDR (in my case, 172.100.0.0/16).
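One way to make that edit non-interactively is with sed. The sketch below runs against a two-line excerpt of the manifest rather than the full file; the indentation matches the v3.27 manifest, but check your own copy before running it against the real calico.yaml:

```shell
# Excerpt of the commented-out pool setting as it appears in calico.yaml.
CAL=/tmp/calico-snippet.yaml
printf '%s\n' \
  '            # - name: CALICO_IPV4POOL_CIDR' \
  '            #   value: "192.168.0.0/16"' > "$CAL"
# Uncomment the pair and substitute our pod CIDR, preserving YAML indentation.
sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "172.100.0.0/16"|' "$CAL"
cat "$CAL"
```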

Apply the manifest using the following command.

   kubectl apply -f calico.yaml

Now the pod network and network policy are deployed in our cluster. The nodes use this pod network to communicate with each other.

20. Check that the pods and nodes are running (on the master node)

kubectl get pods -n kube-system
kubectl get nodes

calico-kube-controllers-68cdf756d9-fjfjp   1/1     Running   2 (11d ago)   12d
calico-node-2frrp                          1/1     Running   2 (11d ago)   12d
calico-node-4hnfk                          1/1     Running   1 (11d ago)   12d
calico-node-kf7kf                          1/1     Running   1 (11d ago)   12d
calico-node-svvlh                          1/1     Running   1 (11d ago)   12d
coredns-5dd5756b68-8cqq8                   1/1     Running   1 (11d ago)   12d
coredns-5dd5756b68-x8hrw                   1/1     Running   1 (11d ago)   12d
etcd-aekmlpcik8sm212                       1/1     Running   2 (11d ago)   12d
kube-apiserver-aekmlpcik8sm212             1/1     Running   2 (11d ago)   12d
kube-controller-manager-aekmlpcik8sm212    1/1     Running   4 (11d ago)   12d
kube-proxy-2ffxk                           1/1     Running   1 (11d ago)   12d
kube-proxy-kbfj2                           1/1     Running   1 (11d ago)   12d
kube-proxy-m42v5                           1/1     Running   2 (11d ago)   12d
kube-proxy-v5dsq                           1/1     Running   1 (11d ago)   12d
kube-scheduler-aekmlpcik8sm212             1/1     Running   5 (11d ago)   12d


aekmlpcik8sm212   Ready    control-plane   12d   v1.28.7
aekmlpcik8sw213   Ready    <none>          12d   v1.28.7
aekmlpcik8sw214   Ready    <none>          12d   v1.28.7
aekmlpcik8sw215   Ready    <none>          12d   v1.28.7

The output above shows that every component in our cluster is healthy and running.

Bravo! Our Kubernetes cluster has been set up successfully.
