Michael Rudloff

Deploy Kubernetes from scratch

Here I want to go through the deployment of a Kubernetes Cluster from scratch.

I am cross-posting this from my website k8stuff.com btw. :)

When I say from scratch – I mean by using kubeadm. So not scratch scratch – you know what I mean.

Anyway, let’s start with the Bill of Materials (BOM):

  • Ubuntu 20.04.5 LTS
  • Kubernetes 1.23.0
  • Containerd 1.6.8
  • 3 Nodes (1 Master, 2 Worker Nodes)

Why not the latest and greatest? Apart from the obvious (not wanting to play guinea pig for the vendors), Kubernetes itself changes a lot between releases.

Plus, gotta leave room for a post about upgrades, right 🙂 ?

I am not going to explain what Kubernetes is here – I presume you already know that and just want to get going with your own cluster.

Personally I am running several clusters – one in VMware Workstation, as well as online. My personal playground is either Linode or Vultr.

Here I will be deploying a Kubernetes Cluster in Vultr.

Let’s get started.

1. Ubuntu Server Deployment

If you want to receive $100 credit and start your own Vultr instances – click HERE

I will be deploying three nodes of the Cloud Compute kind. No reason other than that they are the cheapest.

I will go with the High Frequency option, i.e. Xeon CPUs and NVMe SSD.

I select the UK (as I am based in the UK) and choose Ubuntu.

I am not doing a lot with the cluster – in fact, once this blog post is finished I will bin it again – so I am happy to give it a bit of performance. Plus, I don’t want to run into any performance issues while creating the cluster.

So here I am going with

  • 3 vCPUs
  • 8GB Memory
  • 256GB Storage


I also create a Virtual Private Cloud (VPC) network, so all communication between the nodes will use private IPs.

And as mentioned – we need three.

Now it’s just a matter of waiting. Once the servers were up and running, I ended up with three servers:

  • Master / 10.8.96.3
  • Worker01 / 10.8.96.4
  • Worker02 / 10.8.96.5

The next thing to do is to check that you have all the essential packages and updates. If you use Vultr, the servers come with packages like kernel headers and the latest upgrades already installed.

It is still worth checking that you got the right version of Ubuntu deployed:

root@master:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.5 LTS
Release:    20.04
Codename:   focal

Next, we need to create some host entries – I am not using DNS in THIS lab, so I am going to cheat and use the hosts file. You will also notice going forward that I am not using sudo – again, this is a lab that gets wiped once this article is finished, so out of convenience I am using root everywhere.

If you do this in a live environment – you should get slapped 🙂

I also disable the firewall here on all nodes – again, something I only do in lab scenarios (see the note after the output below for what you would do instead).

root@master:~# ufw status
Status: active

To                         Action      From
--                         ------      ----
22                         ALLOW       Anywhere                  
22 (v6)                    ALLOW       Anywhere (v6)             

root@master:~# ufw disable
Firewall stopped and disabled on system startup
root@master:~# ufw status
Status: inactive
root@master:~# 
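
In a real environment you would keep ufw enabled and only open the ports Kubernetes needs. Roughly, based on the upstream Kubernetes port list (a sketch – adjust to your setup; worker nodes mainly need 10250/tcp and the NodePort range), the control plane would look like this:

ufw allow 22/tcp          # SSH
ufw allow 6443/tcp        # Kubernetes API server
ufw allow 2379:2380/tcp   # etcd server client API
ufw allow 10250/tcp       # kubelet API
ufw allow 30000:32767/tcp # NodePort services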

Anyway, unless otherwise specified, do this on ALL three nodes.

Add all three nodes to the hosts file:

# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.debian.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
#     /etc/cloud/cloud.cfg or cloud-config from user-data
#
127.0.0.1 localhost
10.8.96.3 master
10.8.96.4 worker01
10.8.96.5 worker02
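
If you want to script this rather than edit the file by hand, something like the following should append the entries (a sketch using my IPs – adjust to yours, and note the manage_etc_hosts warning in the file header if you need the change to survive a rebuild):

cat <<'EOF' >> /etc/hosts
10.8.96.3 master
10.8.96.4 worker01
10.8.96.5 worker02
EOF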

Next we disable SWAP

root@master:~# swapoff -a

And comment it out in /etc/fstab so swap stays off after a reboot:

# /etc/fstab: static file system information.
# 
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/vda1 during curtin installation
/dev/disk/by-uuid/176ccbe8-961e-4b77-bbd0-40bee15e7fa5 / ext4 defaults 0 1
#/swapfile swap swap defaults 0 0
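
If you prefer a one-liner, something like this should comment out the swap entry (a sketch – review /etc/fstab afterwards to make sure it did what you expect):

sed -i '/ swap / s/^#*/#/' /etc/fstab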

Next we need to enable some kernel modules for our k8s container runtime – so we create a file called containerd.conf under /etc/modules-load.d/ and add the following to it:

  • overlay
  • br_netfilter

root@master:~# cat /etc/modules-load.d/containerd.conf
overlay
br_netfilter
root@master:~# 
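
For reference, a quick way to create that file on each node (a sketch):

cat <<'EOF' > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF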

YOU ARE STILL DOING THIS ON ALL SERVERS, RIGHT?

Now load the newly added kernel modules. To explain: overlay is the overlay filesystem module that containerd uses for container image layers, and br_netfilter makes bridged traffic visible to iptables, which enables transparent masquerading.

root@master:~# modprobe overlay
root@master:~# modprobe br_netfilter
root@master:~# 

Next we need to add some kernel parameters for Kubernetes. So in /etc/sysctl.d/kubernetes.conf you will be adding:

  • net.bridge.bridge-nf-call-ip6tables = 1
  • net.bridge.bridge-nf-call-iptables = 1
  • net.ipv4.ip_forward = 1

root@worker02:~# cat  /etc/sysctl.d/kubernetes.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
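
Again, a quick way to create the file in one go (a sketch):

cat <<'EOF' > /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF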

And load them (again, on all three servers, right?):

root@worker02:/etc/sysctl.d# sysctl --system

You should get an output that shows that the reload was successful

* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
.
<snip>
.
* Applying /usr/lib/sysctl.d/protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.conf ...

That’s it for the server prep. Next we will be installing the container runtime.

2. Installing Container Runtime

Now that we have our servers ready to go, it’s time to deploy the container runtime used by Kubernetes.

First we need some dependencies:

  • curl
  • gnupg2
  • software-properties-common
  • apt-transport-https
  • ca-certificates

There is a good chance those packages already exist on the machines you are working with.

Again, you do this on all three machines.

root@master:~# apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
Reading package lists... Done
Building dependency tree       
Reading state information... Done
ca-certificates is already the newest version (20211016~20.04.1).
curl is already the newest version (7.68.0-1ubuntu2.13).
software-properties-common is already the newest version (0.99.9.8).
software-properties-common set to manually installed.
gnupg2 is already the newest version (2.2.19-3ubuntu2.2).
The following NEW packages will be installed:
  apt-transport-https
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 1,704 B of archives.
After this operation, 162 kB of additional disk space will be used.
Get:1 http://gb.clouds.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.9 [1,704 B]
Fetched 1,704 B in 0s (164 kB/s)                
Selecting previously unselected package apt-transport-https.
(Reading database ... 118809 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_2.0.9_all.deb ...
Unpacking apt-transport-https (2.0.9) ...
Setting up apt-transport-https (2.0.9) ...
root@master:~# 

Next we enable the Docker repo – the containerd.io package is published there.

root@master:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/docker.gpg
root@master:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Hit:1 http://gb.clouds.archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://gb.clouds.archive.ubuntu.com/ubuntu focal-updates InRelease 
Hit:3 http://gb.clouds.archive.ubuntu.com/ubuntu focal-backports InRelease
Hit:4 http://security.ubuntu.com/ubuntu focal-security InRelease         
Get:5 https://download.docker.com/linux/ubuntu focal InRelease [57.7 kB]                             
Get:6 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages [18.5 kB]                 
Hit:7 https://apprepo.vultr.com/ubuntu universal InRelease
Fetched 76.2 kB in 0s (222 kB/s)               
Reading package lists... Done
root@master:~# 

And install containerd

apt update
apt install -y containerd.io
root@worker02:/etc/sysctl.d# apt update
Hit:1 http://gb.clouds.archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://gb.clouds.archive.ubuntu.com/ubuntu focal-updates InRelease                                    
Hit:3 http://gb.clouds.archive.ubuntu.com/ubuntu focal-backports InRelease                                  
Hit:4 http://security.ubuntu.com/ubuntu focal-security InRelease                                             
Hit:5 https://download.docker.com/linux/ubuntu focal InRelease                                               
Hit:6 https://apprepo.vultr.com/ubuntu universal InRelease                             
Reading package lists... Done
Building dependency tree       
Reading state information... Done
All packages are up to date.
root@worker02:/etc/sysctl.d# apt install -y containerd.io
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  containerd.io
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 28.1 MB of archives.
After this operation, 127 MB of additional disk space will be used.
Get:1 https://download.docker.com/linux/ubuntu focal/stable amd64 containerd.io amd64 1.6.8-1 [28.1 MB]
Fetched 28.1 MB in 0s (130 MB/s)       
Selecting previously unselected package containerd.io.
(Reading database ... 118813 files and directories currently installed.)
Preparing to unpack .../containerd.io_1.6.8-1_amd64.deb ...
Unpacking containerd.io (1.6.8-1) ...
Setting up containerd.io (1.6.8-1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Processing triggers for man-db (2.9.1-1) ...
root@worker02:/etc/sysctl.d# 

Now we need to configure containerd to use systemd as the cgroup driver. This is a kubelet requirement, and the deployment will fail if it is not done.

containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

Again, do this on all three nodes

root@master:~# containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
root@master:~# sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
root@master:~# 
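
A quick way to verify the change took effect (a sketch):

grep SystemdCgroup /etc/containerd/config.toml   # should now show: SystemdCgroup = true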

Next we restart containerd (so it picks up the new config) and enable it on startup:

root@master:~# systemctl restart containerd
root@master:~# systemctl enable containerd
root@master:~# 

3. Install Kubernetes 1.23.0

Next we are adding the Kubernetes repo

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
root@master:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
root@master:~# apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Hit:1 http://gb.clouds.archive.ubuntu.com/ubuntu focal InRelease
Get:2 http://gb.clouds.archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]      
Get:3 http://gb.clouds.archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]    
Get:4 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]              
Hit:5 https://download.docker.com/linux/ubuntu focal InRelease                                                       
Hit:7 https://apprepo.vultr.com/ubuntu universal InRelease                           
Get:6 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [9,383 B]
Get:8 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [60.1 kB]
Fetched 405 kB in 1s (655 kB/s)     
Reading package lists... Done
root@master:~# 

Next we install Kubernetes. As mentioned, I want to run 1.23.0 for now, so when downloading the packages I make sure to specify the required version. Without this, apt will install the very latest available version.

apt-get install kubelet=1.23.0-00 kubectl=1.23.0-00 kubeadm=1.23.0-00 -y
apt-mark hold kubelet kubeadm kubectl
root@worker02:/etc/sysctl.d# apt-get install kubelet=1.23.0-00 kubectl=1.23.0-00 kubeadm=1.23.0-00 -y
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  conntrack cri-tools ebtables kubernetes-cni socat
Suggested packages:
  nftables
The following NEW packages will be installed:
  conntrack cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
Need to get 80.4 MB of archives.
After this operation, 341 MB of additional disk space will be used.
Get:1 http://gb.clouds.archive.ubuntu.com/ubuntu focal/main amd64 conntrack amd64 1:1.4.5-2 [30.3 kB]
Get:2 http://gb.clouds.archive.ubuntu.com/ubuntu focal/main amd64 ebtables amd64 2.0.11-3build1 [80.3 kB]
Get:3 http://gb.clouds.archive.ubuntu.com/ubuntu focal/main amd64 socat amd64 1.7.3.3-2 [323 kB]
Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.25.0-00 [17.9 MB]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 1.1.1-00 [25.0 MB]
Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.23.0-00 [19.5 MB]
Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.23.0-00 [8,932 kB]
Get:8 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.23.0-00 [8,588 kB]
Fetched 80.4 MB in 3s (24.0 MB/s)  
Selecting previously unselected package conntrack.
(Reading database ... 118829 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.5-2_amd64.deb ...
Unpacking conntrack (1:1.4.5-2) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.25.0-00_amd64.deb ...
Unpacking cri-tools (1.25.0-00) ...
Selecting previously unselected package ebtables.
Preparing to unpack .../2-ebtables_2.0.11-3build1_amd64.deb ...
Unpacking ebtables (2.0.11-3build1) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../3-kubernetes-cni_1.1.1-00_amd64.deb ...
Unpacking kubernetes-cni (1.1.1-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../4-socat_1.7.3.3-2_amd64.deb ...
Unpacking socat (1.7.3.3-2) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../5-kubelet_1.23.0-00_amd64.deb ...
Unpacking kubelet (1.23.0-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../6-kubectl_1.23.0-00_amd64.deb ...
Unpacking kubectl (1.23.0-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../7-kubeadm_1.23.0-00_amd64.deb ...
Unpacking kubeadm (1.23.0-00) ...
Setting up conntrack (1:1.4.5-2) ...
Setting up kubectl (1.23.0-00) ...
Setting up ebtables (2.0.11-3build1) ...
Setting up socat (1.7.3.3-2) ...
Setting up cri-tools (1.25.0-00) ...
Setting up kubernetes-cni (1.1.1-00) ...
Setting up kubelet (1.23.0-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubeadm (1.23.0-00) ...
Processing triggers for man-db (2.9.1-1) ...
root@worker02:/etc/sysctl.d# apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
root@worker02:/etc/sysctl.d# 
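
Before initialising anything, it is worth a quick sanity check that the pinned versions actually landed (a sketch):

kubeadm version
kubectl version --client
kubelet --version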

Now that we have all binaries deployed, we can use kubeadm to initialise the cluster.

DO THIS ON THE MASTER ONLY!

(specify the correct master node name – here I simply use mine, which is master)

kubeadm init --control-plane-endpoint=master

You should see a success message:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join master:6443 --token b2yxxn.23372jp3m8ezp5eh \
    --discovery-token-ca-cert-hash sha256:7f95e6d1eb8efddde83c6af9276634b8670f7872877dace343a0c8158c29d12d \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join master:6443 --token b2yxxn.23372jp3m8ezp5eh \
    --discovery-token-ca-cert-hash sha256:7f95e6d1eb8efddde83c6af9276634b8670f7872877dace343a0c8158c29d12d 

Still on the MASTER, we follow the instructions from the output. First, create the default kubeconfig in your profile:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Next I am creating an alias for kubectl – simply ‘k’.

root@master:~# alias k=kubectl
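
The alias only lives in the current shell. If you want it (plus tab completion for the alias) across logins, the snippets from the kubectl documentation can be appended to your shell profile – a sketch:

echo 'source <(kubectl completion bash)' >> ~/.bashrc
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc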

First test – check the cluster status

root@master:~# k cluster-info
Kubernetes control plane is running at https://master:6443
CoreDNS is running at https://master:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@master:~# 

Awesome. Let’s check the nodes

root@master:~# k get nodes
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   5m50s   v1.23.0
root@master:~# 


Awesome. So our Master node is up but not ready – this is expected. It will be ready once we have configured the network.

Now we can join the worker nodes. The command used to join worker nodes can be found at the very end of the kubeadm init output. For example:

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join master:6443 --token b2yxxn.23372jp3m8ezp5eh \
    --discovery-token-ca-cert-hash sha256:7f95e6d1eb8efddde83c6af9276634b8670f7872877dace343a0c8158c29d12d 

The great thing is that it automatically adds the required token to the command.
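
If you lose this output, or the bootstrap token expires (tokens are valid for 24 hours by default), you can generate a fresh join command on the master at any time:

kubeadm token create --print-join-command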

Run this now on all servers designated as worker nodes.

root@worker01:~# kubeadm join master:6443 --token b2yxxn.23372jp3m8ezp5eh \
> --discovery-token-ca-cert-hash sha256:7f95e6d1eb8efddde83c6af9276634b8670f7872877dace343a0c8158c29d12d
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1003 12:03:31.574496   31417 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@worker01:~# 

You should see the message that the node has joined the cluster.

When you check with the kubectl get nodes command again on the master, you should see all three nodes now – again, NotReady is expected at this stage.

root@master:~# k get nodes
NAME       STATUS     ROLES                  AGE     VERSION
master     NotReady   control-plane,master   9m43s   v1.23.0
worker01   NotReady   <none>                 38s     v1.23.0
worker02   NotReady   <none>                 32s     v1.23.0
root@master:~# 

Last step to get our cluster up and running – the Container Network Interface, or CNI for short.

4. Install Kubernetes CNI

A CNI, or Container Network Interface, is required to deal with everything, well, networking.

Here in this example I am using Calico. No real reason other than that it is straightforward and can literally be deployed in a few clicks (well, commands).

First, we download the required YAML from Tigera and apply it:

curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O
kubectl apply -f calico.yaml

This is a great test, as we are now deploying our very first YAML into our cluster.

Again, do this on the **MASTER** only.

root@master:~# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
root@master:~# 

Now we can monitor the deployment of our Calico pods

root@master:~# k get pods -A
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-66966888c4-8cmpz   0/1     ContainerCreating   0          17s
kube-system   calico-node-94zhx                          0/1     Running             0          17s
kube-system   calico-node-9f5bh                          0/1     Running             0          17s
kube-system   calico-node-njjnj                          0/1     Running             0          17s
kube-system   coredns-64897985d-5xqnn                    0/1     ContainerCreating   0          18m
kube-system   coredns-64897985d-9zsrl                    0/1     ContainerCreating   0          18m
kube-system   etcd-master                                1/1     Running             0          18m
kube-system   kube-apiserver-master                      1/1     Running             0          18m
kube-system   kube-controller-manager-master             1/1     Running             0          18m
kube-system   kube-proxy-lw4zr                           1/1     Running             0          9m45s
kube-system   kube-proxy-q5jk9                           1/1     Running             0          9m51s
kube-system   kube-proxy-w6c7b                           1/1     Running             0          18m
kube-system   kube-scheduler-master                      1/1     Running             0          18m
root@master:~# 
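
Rather than re-running the command, you can also let kubectl watch for changes (a sketch):

kubectl get pods -A -w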

And they should all eventually come online:

root@master:~# k get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-66966888c4-8cmpz   1/1     Running   0          42s
kube-system   calico-node-94zhx                          1/1     Running   0          42s
kube-system   calico-node-9f5bh                          1/1     Running   0          42s
kube-system   calico-node-njjnj                          1/1     Running   0          42s
kube-system   coredns-64897985d-5xqnn                    1/1     Running   0          19m
kube-system   coredns-64897985d-9zsrl                    1/1     Running   0          19m
kube-system   etcd-master                                1/1     Running   0          19m
kube-system   kube-apiserver-master                      1/1     Running   0          19m
kube-system   kube-controller-manager-master             1/1     Running   0          19m
kube-system   kube-proxy-lw4zr                           1/1     Running   0          10m
kube-system   kube-proxy-q5jk9                           1/1     Running   0          10m
kube-system   kube-proxy-w6c7b                           1/1     Running   0          19m
kube-system   kube-scheduler-master                      1/1     Running   0          19m
root@master:~# 

With a CNI deployed, your nodes should come online as well:

root@master:~# k get nodes
NAME       STATUS   ROLES                  AGE   VERSION
master     Ready    control-plane,master   19m   v1.23.0
worker01   Ready    <none>                 10m   v1.23.0
worker02   Ready    <none>                 10m   v1.23.0

That’s it. Your Kubernetes cluster is up and running. The last part of this series is a simple test to ensure all components are working.

5. Test Kubernetes Install

The easiest way to test the deployment is to simply run a random image. Here I am just deploying an nginx pod.

root@master:~# k create deployment nginx --image=nginx
deployment.apps/nginx created
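
If you want to wait for the deployment to finish rolling out instead of polling manually, kubectl can block until it is ready (a sketch):

k rollout status deployment/nginx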

Next we make sure the pod is up and running

root@master:~# k get pods -A -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
default       nginx-85b98978db-wj9q2                     1/1     Running   0          46s   192.168.30.66    worker02   <none>           <none>
kube-system   calico-kube-controllers-66966888c4-8cmpz   1/1     Running   0          30m   192.168.219.66   master     <none>           <none>
kube-system   calico-node-94zhx                          1/1     Running   0          30m   10.8.96.4        worker01   <none>           <none>
kube-system   calico-node-9f5bh                          1/1     Running   0          30m   10.8.96.3        master     <none>           <none>
kube-system   calico-node-njjnj                          1/1     Running   0          30m   10.8.96.5        worker02   <none>           <none>
kube-system   coredns-64897985d-5xqnn                    1/1     Running   0          48m   192.168.219.67   master     <none>           <none>
kube-system   coredns-64897985d-9zsrl                    1/1     Running   0          48m   192.168.219.65   master     <none>           <none>
kube-system   etcd-master                                1/1     Running   0          48m   10.8.96.3        master     <none>           <none>
kube-system   kube-apiserver-master                      1/1     Running   0          48m   10.8.96.3        master     <none>           <none>
kube-system   kube-controller-manager-master             1/1     Running   0          48m   10.8.96.3        master     <none>           <none>
kube-system   kube-proxy-lw4zr                           1/1     Running   0          39m   10.8.96.5        worker02   <none>           <none>
kube-system   kube-proxy-q5jk9                           1/1     Running   0          39m   10.8.96.4        worker01   <none>           <none>
kube-system   kube-proxy-w6c7b                           1/1     Running   0          48m   10.8.96.3        master     <none>           <none>
kube-system   kube-scheduler-master                      1/1     Running   0          48m   10.8.96.3        master     <none>           <none>

When you scroll the text box to the right, you can see the node the pod is deployed on as well as its pod IP address.

If we now curl the pod’s IP on the port nginx is listening on (80), we should get a response:

root@master:~# curl http://192.168.30.66
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

If you are wondering WHY it gets a response – the CNI – Calico – actually creates routing entries on the nodes so you can connect directly to pods.

From Calico: “Calico by default creates a BGP mesh between all nodes of the cluster and broadcasts the routes for container networks to all worker nodes.”

But as a further test we can expose the pod. First, just to prove that right now we cannot reach the nginx page via the node IP:

root@master:~# curl http://worker02
curl: (7) Failed to connect to worker02 port 80: Connection refused

So let’s expose its port 80:

root@master:~# k expose pod nginx-85b98978db-wj9q2 --type=NodePort --port=80
service/nginx-85b98978db-wj9q2 exposed
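
As an aside: exposing the pod directly works for a quick test, but the pod name changes whenever the pod is recreated. In practice you would usually expose the deployment instead, so the service keeps working across pod restarts – a sketch (this creates a service named nginx):

k expose deployment nginx --type=NodePort --port=80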

Next, let’s check which NodePort we were given:

root@master:~# k get svc -A
NAMESPACE     NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes               ClusterIP   10.96.0.1      <none>        443/TCP                  67m
default       nginx-85b98978db-wj9q2   NodePort    10.111.19.98   <none>        80:31408/TCP             23s
kube-system   kube-dns                 ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   67m

You can see our NodePort here is **31408**.

Now when we specify the worker node with the NodePort we should be able to connect to our nginx pod

root@master:~# curl http://worker02:31408
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

So we know the networking works. The last test I want to do is scaling. Right now there is only one pod – let’s scale it up to two:

root@master:~# k scale deployment nginx --replicas=2
deployment.apps/nginx scaled
root@master:~# 
root@master:~# k get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       nginx-85b98978db-jl8w8                     1/1     Running   0          15s
default       nginx-85b98978db-wj9q2                     1/1     Running   0          21m
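
Once you are happy everything works, you can clean up the test resources (a sketch – the service name matches the pod we exposed earlier):

k delete deployment nginx
k delete svc nginx-85b98978db-wj9q2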

That's it - you now have a fully working Kubernetes cluster running on Ubuntu with containerd as the runtime.
