Setting Up a Kubernetes Bare Metal Cluster on Ubuntu

In this guide, we will walk through setting up a Kubernetes cluster on bare metal servers running Ubuntu: installing the necessary tools, configuring the master and worker nodes, and setting up networking and storage.


Machine Requirements

To set up the Kubernetes cluster, you will need the following machines:

  • 1 Master Node on Ubuntu
  • 3 Worker Nodes on Ubuntu
  • 1 NFS host on Ubuntu

Step 1: Install SSH Server and Net-tools

First, ensure that an SSH server and net-tools are installed on all machines. This lets you manage the servers remotely and inspect their network configuration.



sudo apt update
sudo apt install openssh-server
sudo systemctl status ssh
sudo apt install net-tools


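To confirm that remote management works, try connecting to each machine and checking its interfaces (the user and host below are placeholders):

ssh <user>@<node_ip>
ifconfig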

Step 2: Cluster Volume Setup

Objectives:

  • Install NFS Server on a separate host
  • Install NFS client on all Kubernetes cluster nodes
  • Test if all clients are connected to the NFS host

NFS Installation

i. On NFS Host Machine

Update system packages and install the NFS server:



sudo apt update 
sudo apt install nfs-kernel-server 
sudo mkdir -p /mnt/nfs_share 
sudo chown -R nobody:nogroup /mnt/nfs_share/ 
sudo chmod 777 /mnt/nfs_share/ 
sudo vim /etc/exports



ii. Grant Access to Clients

Add an export entry to /etc/exports granting access to the subnet that contains your cluster nodes, then reload the exports, open the firewall, and create a couple of test files:


# Line to add in /etc/exports (the subnet must cover all cluster nodes)
/mnt/nfs_share <nfs_host_ip>/24(rw,sync,no_subtree_check)

sudo exportfs -a
sudo systemctl restart nfs-kernel-server
sudo ufw allow from <nfs_host_ip>/24 to any port nfs
sudo ufw enable
sudo ufw status
cd /mnt/nfs_share/
touch sample1.txt sample2.txt


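To double-check the export on the server side, list the active exports and their options:

sudo exportfs -v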

iii. & iv. On NFS Client Machines

Install the NFS client and mount the shared directory:



sudo apt update  
sudo apt install nfs-common 
sudo mkdir -p /mnt/nfs_clientshare 
sudo mount <nfs_host_ip>:/mnt/nfs_share /mnt/nfs_clientshare 
ls -l /mnt/nfs_clientshare/ 


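You can confirm a client sees the export with showmount -e <nfs_host_ip> (provided by nfs-common). Note that the mount above does not persist across reboots; one option is an /etc/fstab entry along these lines (a sketch, using the same share and mount point as above):

<nfs_host_ip>:/mnt/nfs_share  /mnt/nfs_clientshare  nfs  defaults,_netdev  0  0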

Step 3: Master Node Setup

i. Assign Static IP and Update Machine

If the machine's IP address is not static, assign one and update the machine:



sudo vim /etc/netplan/00-installer-config.yaml

network:
  ethernets:
    enp0s3:
      dhcp4: true
    enp0s8:
      addresses: [<master_node_ip>/20]
  version: 2



Apply the changes, reboot, then update the machine (rebooting again if required):


sudo netplan apply
sudo shutdown -r now
sudo -i
sudo apt update
sudo apt -y full-upgrade
[ -f /var/run/reboot-required ] && sudo reboot -f


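After the reboot, you can confirm the static address took effect:

ip addr show enp0s8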

ii. Install Kubernetes Tools

Install kubelet, kubeadm, and kubectl:



sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt -y install vim git curl wget kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
kubectl version --client && kubeadm version



iii. Disable Swap

Kubernetes requires swap to be disabled on every node:



sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab 
sudo swapoff -a 
sudo mount -a 
free -h



Load the required kernel modules and configure the networking sysctls:



sudo modprobe overlay 
sudo modprobe br_netfilter 
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1 
net.ipv4.ip_forward = 1 
EOF
sudo sysctl --system 


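To verify the modules are loaded and the sysctl settings are active:

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables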

iv. Install CRI-O

Install the CRI-O container runtime. Adjust OS and CRIO_VERSION to match your Ubuntu release and Kubernetes minor version (ideally, CRI-O's minor version should track the Kubernetes version installed above):



sudo apt update && sudo apt upgrade 
sudo systemctl reboot
OS=xUbuntu_20.04 
CRIO_VERSION=1.23 
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list 
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list 
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION/$OS/Release.key | sudo apt-key add - 
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key add - 
sudo apt update 
sudo apt install cri-o cri-o-runc 
sudo systemctl enable crio.service 
sudo systemctl start crio.service 
systemctl status crio 
sudo apt install cri-tools 
sudo crictl info



v. Install Podman

Install Podman, an alternative to Docker:



sudo apt-get -y update
sudo apt-get -y install podman



vi. Initialize Master Node

Enable kubelet and pull Kubernetes container images:



sudo lsmod | grep br_netfilter 
sudo systemctl enable kubelet 
sudo kubeadm config images pull --cri-socket unix:///var/run/crio/crio.sock


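You can list the images CRI-O has pulled to confirm the step succeeded:

sudo crictl images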

vii. Spin Up Master Node Pods

Initialize the master node:



sudo kubeadm init --apiserver-advertise-address=<master_node_ip> --pod-network-cidr=192.168.0.0/16 --cri-socket unix:///var/run/crio/crio.sock --upload-certs --control-plane-endpoint=<master_node_ip> --v=5



Point kubectl at the cluster's admin kubeconfig:



export KUBECONFIG=/etc/kubernetes/admin.conf 


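Alternatively, to run kubectl as a regular user, follow the standard steps kubeadm prints after initialization:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config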

viii. Install Network Plugins

Install the Calico network plugin:



kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml 
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml 


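Calico takes a minute or two to come up. Watch its pods and check that the node eventually reports Ready:

watch kubectl get pods -n calico-system
kubectl get nodes -o wide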

ix. Install Helm

Install Helm for managing Kubernetes applications:



sudo snap install helm --classic



x. Install NFS Subdirectory External Provisioner Helm Chart

Install the NFS subdirectory external provisioner:



helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/ 
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=<nfs_host_ip> --set nfs.path=/mnt/nfs_share


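The chart creates a storage class named nfs-client. Verify it exists; optionally, you can also mark it as the cluster default:

kubectl get storageclass
kubectl patch storageclass nfs-client -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'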

xi. Install and Configure External IP Service

Create a config.yaml file for MetalLB, which assigns external IPs to LoadBalancer services:



vim config.yaml



Add the following configuration (replace xxx with your IP range):



apiVersion: v1 
kind: ConfigMap 
metadata: 
  namespace: metallb-system 
  name: config 
data: 
  config: | 
    address-pools: 
    - name: my-ip-space 
      protocol: layer2 
      addresses: 
      - xxx.xxx.xxx.210-xxx.xxx.xxx.250



Apply the configuration:



kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml 
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml 
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" 
kubectl get configmap kube-proxy -n kube-system -o yaml | sed -e "s/strictARP: false/strictARP: true/" | kubectl apply -f - -n kube-system 
kubectl apply -f config.yaml


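A quick way to confirm MetalLB hands out addresses is to expose a throwaway deployment (nginx-test here is an arbitrary name) and check that its service receives an external IP from your pool:

kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --type=LoadBalancer --port=80
kubectl get svc nginx-test
kubectl delete deployment,svc nginx-test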

Step 4: Worker Node Setup

Follow steps i to v of the master node setup on each worker node. Then, on the master node, generate a join command:



kubectl get nodes 
kubeadm token generate 
kubeadm token create <token> --print-join-command 



Run the join command on each worker node:



sudo kubeadm join <master_node_ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>


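Back on the master node, confirm the workers have joined and report Ready. Labeling them as workers is optional and purely cosmetic:

kubectl get nodes -o wide
kubectl label node <worker_node_name> node-role.kubernetes.io/worker=worker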

Step 5: Install Ingress via Helm

Install the NGINX Ingress controller:



kubectl create namespace ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.replicaCount=3 \
  --set controller.nodeSelector."kubernetes\.io/os"=linux


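The ingress controller's service should receive an external IP from MetalLB:

kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx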

Step 6: Create Persistent Volume Profiles

Create a Persistent Volume:



vim ClusterPersistentVolume.yaml



Add the following configuration. Because the volume is backed by the NFS share, it uses an nfs source pointing at the host and path exported earlier:



apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-pv
spec:
  storageClassName: nfs-client
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <nfs_host_ip>
    path: /mnt/nfs_share



Apply the configuration:



kubectl get sc 
kubectl apply -f ClusterPersistentVolume.yaml
kubectl get pv 



Create a Persistent Volume Claim:



vim ClusterPersistentVolumeClaim.yaml 



Add the following configuration:



apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-pvc
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 25Gi



Apply the configuration:



kubectl create namespace dev
kubectl apply -f ClusterPersistentVolumeClaim.yaml -n dev
kubectl get pvc -n dev


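To confirm the claim binds and is writable, you can run a throwaway pod against it (a minimal sketch; pvc-test and the busybox image are arbitrary choices). After applying it, the file should appear under /mnt/nfs_share on the NFS host (possibly in a provisioner-created subdirectory):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
  namespace: dev
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - mountPath: /data
          name: nfs-vol
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: dev-pvc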

By following these steps, you will have a fully functional Kubernetes cluster running on bare metal servers. This setup provides a robust environment for deploying and managing containerized applications.
