Ajeet Singh Raina

How to Install a Kubernetes Cluster on Ubuntu 20.04 with kubeadm in 2023

Kubernetes has become the de facto standard for container orchestration, providing a scalable and resilient platform for managing containerized applications. In this tutorial, we will walk through the process of installing a Kubernetes cluster on Ubuntu 20.04 using Kubeadm, a popular tool for bootstrapping Kubernetes clusters.

Prerequisites:

Before we begin, make sure you have the following prerequisites in place:

  • A machine running Ubuntu 20.04 with a minimum of 2GB RAM and 2 CPUs.
  • Docker installed on the machine. You can refer to the official Docker documentation for installation instructions.
  • A user with sudo privileges to run administrative commands.
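
To confirm the machine meets the memory and CPU requirements, you can run a quick check. Note that kubeadm's preflight checks also expect swap to be disabled, so it is worth turning it off up front:

free -h          # total memory should be at least 2G
nproc            # should print 2 or more
sudo swapoff -a  # kubeadm's preflight checks fail if swap is enabled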

Step 1. Add the Kubernetes apt repository:

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
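
Note that the signed-by path in this entry must already exist. If it does not, create the keyring file first; a minimal sketch, assuming curl and gpg are installed:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg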

Step 2. Add the Google Cloud apt repository key

This command adds the Google Cloud apt repository key to the system's keyring. When you run it, cURL retrieves the GPG (GNU Privacy Guard) key from the specified URL, and "apt-key add" imports it into the keyring maintained by APT (Advanced Package Tool).

The GPG key is needed to authenticate and verify the integrity of the packages provided by the Kubernetes apt repository: with the key in the keyring, APT can verify package signatures during installation, which helps maintain the security and integrity of the system. Note that the command must be run with sudo privileges, and that apt-key is deprecated in newer apt releases; the signed-by keyring approach from Step 1 is the modern alternative.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
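
You can confirm that the key was imported by listing the keys APT trusts:

sudo apt-key list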

Step 3. Add the Repository with apt-add-repository

sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

Step 4. Install Kubernetes Tools

Next, we'll install the necessary tools for setting up the Kubernetes cluster: kubelet (the node agent), kubeadm (the cluster bootstrapper), and kubectl (the command-line client). The apt-mark hold command pins their versions so that automatic upgrades do not unexpectedly break the cluster.

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
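
To verify the installation, you can print the versions that were installed:

kubeadm version
kubectl version --client
kubelet --version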

If you run into repository or GPG errors with these commands, remove the existing Kubernetes package repository file:

sudo rm /etc/apt/sources.list.d/kubernetes.list

Re-add the Kubernetes package repository:

echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Download and import the GPG key:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Update the package lists:

sudo apt update

If you still encounter a GPG error, try running the following command instead:

sudo apt-get update --allow-unauthenticated

This command will update the package lists while bypassing the GPG signature verification.

Once the package update is successful, you can bootstrap the cluster. Initialize the control plane with kubeadm init, replacing 34.125.55.80 with your own node's IP address. Here, --apiserver-advertise-address and --control-plane-endpoint set the address the API server advertises and the stable endpoint for the cluster, --upload-certs uploads the control-plane certificates so that additional control-plane nodes can join, and --pod-network-cidr reserves the address range used for pods.

sudo kubeadm init --apiserver-advertise-address=34.125.55.80 --control-plane-endpoint=34.125.55.80 --upload-certs --pod-network-cidr 10.5.0.0/16
[init] Using Kubernetes version: v1.27.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0525 16:39:31.061355   14307 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
W0525 16:39:40.584704   14307 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local kubezilla-master] and IPs [10.96.0.1 34.125.55.80]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubezilla-master localhost] and IPs [34.125.55.80 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubezilla-master localhost] and IPs [34.125.55.80 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0525 16:39:52.655868   14307 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.003422 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
261bc43097bd432310dbef7793291880d2f9016e9d1f0d4b275cfff35e96333b
[mark-control-plane] Marking the node kubezilla-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kubezilla-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: orbf61.yliqtq6wsgwsb30m
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 34.125.55.80:6443 --token orbfXXXXb30m \
        --discovery-token-ca-cert-hash sha256:adf1173b4db078XXXXXXe9a9f0c3cf99343abc062560fa \
        --control-plane --certificate-key 261bc43097bd4XXXXXXXff35e96333b

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 34.125.55.80:6443 --token orbf61.yliqtq6wsgwsb30m \
        --discovery-token-ca-cert-hash sha256:adf1173b4db078963bed306b98d42c8b45a56ae9a9f0c3cf99343abc062560fa 

Step 5. Set Up the Cluster for Your User

To start using the Kubernetes cluster, you need to set up the kubeconfig file for your user. Run the following commands in the terminal (the export line is the alternative for the root user; a regular user only needs the first three):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
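
To confirm that kubectl can reach the cluster, you can run:

kubectl cluster-info
kubectl get nodes

The control-plane node may report a NotReady status until the pod network add-on from the next step is installed.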

Step 6. Install a Pod Network Add-on

Kubernetes requires a pod network add-on to enable communication between pods across different nodes. In this tutorial, we'll use Kube-router as the pod network add-on. Run the following command:

sudo kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml

configmap/kube-router-cfg created
daemonset.apps/kube-router created
serviceaccount/kube-router created
clusterrole.rbac.authorization.k8s.io/kube-router created
clusterrolebinding.rbac.authorization.k8s.io/kube-router created
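
You can confirm the add-on is up by listing its pods:

sudo kubectl get pods -n kube-system | grep kube-router

Once the kube-router pod reports Running, the node should move to the Ready state.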

Step 7. Check the Status of the Kubernetes Cluster

The command sudo kubectl get componentstatus is used to check the status of Kubernetes components in the cluster. When executed with superuser privileges (sudo), it provides an overview of the health and availability of key components in the Kubernetes control plane.

Running this command will display information about various Kubernetes components, including their names, status, and associated details. The output typically includes components such as the scheduler, controller-manager, and etcd.

Here's an example of the output you might see:

sudo kubectl get componentstatus
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""} 

In this example, the controller-manager and scheduler components are reported as healthy, and etcd is healthy as well, with additional detail in its status message.

By checking the component status, you can verify that the core Kubernetes components are running properly, ensuring the stability and functionality of your cluster.
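
Since ComponentStatus is deprecated (as the warning in the output notes), a more future-proof check is to query the API server's health endpoint directly:

kubectl get --raw='/readyz?verbose'

This prints the result of every readiness check the API server performs.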

Listing the running Pods

sudo kubectl get po -A | grep kube
kube-system   coredns-5d78c9869d-5wlw2                   1/1     Running   0          2m43s
kube-system   coredns-5d78c9869d-lmln9                   1/1     Running   0          2m43s
kube-system   etcd-kubezilla-master                      1/1     Running   0          2m55s
kube-system   kube-apiserver-kubezilla-master            1/1     Running   0          2m55s
kube-system   kube-controller-manager-kubezilla-master   1/1     Running   0          2m59s
kube-system   kube-proxy-skcmt                           1/1     Running   0          2m43s
kube-system   kube-router-pjcnr                          1/1     Running   0          15s
kube-system   kube-scheduler-kubezilla-master            1/1     Running   0          2m55s

In this output:

  • The NAMESPACE column indicates the namespace in which the pods are running. In this case, all the pods belong to the kube-system namespace, which is where Kubernetes system components reside.

  • The NAME column displays the names of the pods.

  • The READY column shows the number of containers that are ready out of the total number of containers in the pod. For example, 1/1 means one container is ready out of one container in the pod.

  • The STATUS column represents the current status of the pod. In this case, all pods are reported as "Running," indicating they are up and operational.

  • The RESTARTS column indicates the number of times the pod has been restarted.

  • The AGE column provides the elapsed time since the pod was created.

Based on the output, you can see that the Kubernetes system components, including CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-router, and kube-scheduler, are all running without any issues. This confirms that the core components of your Kubernetes cluster are up and functioning properly.
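
If any pod is not in the Running state, you can inspect it with kubectl describe and check its logs; replace the placeholder with the actual pod name:

sudo kubectl describe pod <pod-name> -n kube-system
sudo kubectl logs <pod-name> -n kube-system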

Step 8: Join Worker Nodes to the Cluster

To add worker nodes to the cluster, run the join command that kubeadm init printed at the end of its output on each worker node. The command should look similar to the following:

kubeadm join 34.125.55.80:6443 --token orbf61.yliqtq6wsgwsb30m \
        --discovery-token-ca-cert-hash sha256:adf1173b4db078963bed306b98d42c8b45a56ae9a9f0c3cf99343abc062560fa 
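
If you no longer have the original output, or the token has expired (bootstrap tokens are valid for 24 hours by default), you can generate a fresh join command on the control-plane node:

sudo kubeadm token create --print-join-command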

Step 9. Listing the Nodes

kubectl get nodes

If everything is set up correctly, you should see the master node and worker nodes listed successfully.
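
The output should look similar to the following; the node names, ages, and the worker entry are illustrative, based on the control-plane node created earlier:

NAME               STATUS   ROLES           AGE   VERSION
kubezilla-master   Ready    control-plane   10m   v1.27.2
node-1             Ready    <none>          2m    v1.27.2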
