Set up a Kubernetes master-slave architecture using kubeadm

In this post, we will go into the details of how to install Kubernetes using kubeadm. Kubernetes can be set up step by step on your own, or with one of the many installation tools and stacks available. We are going to use kubeadm.

According to the kubeadm documentation, kubeadm is a tool built to provide kubeadm init and kubeadm join as best-practice “fast paths” for creating Kubernetes clusters.

kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design, it cares only about bootstrapping, not about provisioning machines.

We can use kubeadm for creating a production-grade Kubernetes environment.
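At a high level, the entire flow boils down to two commands: kubeadm init on the master and kubeadm join on each worker. The outline below is only a preview; the concrete invocations, flags, and the generated token appear in the steps that follow.

#on the master: bootstrap the control plane
kubeadm init

#on each worker: join the cluster using the token printed by init
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>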

We will build a Kubernetes setup with one master node and two worker nodes.

Let us assume that we have three Ubuntu Linux machines named master, worker1, and worker2 on the same network. For practice, you can create three VMs in VirtualBox or in the cloud. The VMs must be reachable from each other. As the names suggest, we will add the necessary configuration to the master machine to make it a Kubernetes master node, and then connect worker1 and worker2 to it.

Do the following on all three machines. Follow the Ubuntu instructions; use the CentOS instructions instead only if your machines run CentOS.
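Before installing anything, note that kubeadm's preflight checks expect swap to be disabled and bridged traffic to be visible to iptables. A minimal preparation sketch, based on the standard kubeadm prerequisites (run on all three machines; the swap issue also comes up again in Step 3):

#disable swap now, and comment out swap entries so it stays off after reboot
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

#make sure bridged traffic is visible to iptables
sudo modprobe br_netfilter
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system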

Step 1: Installing Docker as the container runtime

Installation in Ubuntu

#update the repository
sudo apt-get update

#uninstall any old Docker software
sudo apt-get remove docker docker-engine docker.io

#Install docker
sudo apt install docker.io

#Start Docker now and enable it to start at boot
sudo systemctl start docker
sudo systemctl enable docker

#verify docker installation
docker container ls
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS               NAMES

Installation in CentOS 7 (only when required)

#removing existing docker
sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

sudo yum remove docker*

#adding repository
sudo yum install -y yum-utils
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

#install docker engine
sudo yum install -y docker-ce docker-ce-cli containerd.io

#Start Docker now and enable it to start at boot
sudo systemctl start docker
sudo systemctl enable docker

#verify docker installation
docker container ls
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS               NAMES

By default, kubeadm uses Docker as the container runtime. If a machine has both Docker and another container runtime such as containerd installed, Docker takes precedence. If you don't specify a container runtime, kubeadm automatically detects the installed one by scanning the default Unix domain sockets (you can also point it at a specific socket, as shown below):

  • Docker: **/var/run/docker.sock**
  • containerd: **/run/containerd/containerd.sock**
  • CRI-O: **/var/run/crio/crio.sock**
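If you would rather be explicit than rely on auto-detection, kubeadm init (and kubeadm join) accept a --cri-socket flag. A sketch; adjust the socket path to the runtime you actually want:

sudo kubeadm init --cri-socket /var/run/docker.sock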

Step 2: Installing the kubeadm tool

Installation in Ubuntu

#add the required repository for kubeadm
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

#update the repository
sudo apt-get update

#installing kubelet, kubeadm and kubectl
sudo apt-get install -y kubelet kubeadm kubectl

#setting apt-mark
sudo apt-mark hold kubelet kubeadm kubectl

apt-mark changes whether a package is marked as having been automatically installed. hold marks a package as held back, which prevents it from being automatically installed, upgraded, or removed.
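You can inspect or release the hold later with the other standard apt-mark subcommands, for example:

#list packages currently held back
sudo apt-mark showhold

#release the hold before a deliberate upgrade
sudo apt-mark unhold kubelet kubeadm kubectl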

Restart the kubelet if required

systemctl daemon-reload
systemctl restart kubelet

Installation in CentOS 7 (only when required)

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet

Perform the following step only on the master node.

Step 3: Initializing the control plane (making the node the master)

kubeadm init initializes this machine as the master.
kubeadm init first runs a series of preflight checks to ensure that the machine is ready to run Kubernetes. These checks expose warnings and exit on errors. kubeadm init then downloads and installs the cluster control plane components. This may take several minutes.

Take care that the Pod network does not overlap with any of the host networks: you are likely to see problems if there is any overlap. We will specify a private CIDR for the Pods to be created in.
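To check for overlap before choosing a CIDR, you can list the networks your hosts already use, for example:

#the Pod CIDR we pass below (10.13.0.0/16) must not appear among these
ip route
ip addr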

kubeadm init --pod-network-cidr 10.13.0.0/16

If you are not building a production environment and your master machine has only one CPU (the recommended minimum is 2), the CPU preflight check will fail. In that case, run the following instead:

#run 'swapoff -a' first if the preflight check complains about swap (testing or practice only)
kubeadm init --ignore-preflight-errors=NumCPU --pod-network-cidr 10.13.0.0/16

The output should look like this:

[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadm-cp localhost] and IPs [10.138.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadm-cp localhost] and IPs [10.138.0.4 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadm-cp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.501735 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-X.Y" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeadm-cp" as an annotation
[mark-control-plane] Marking the node kubeadm-cp as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubeadm-cp as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: <token>
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  /docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

As seen in the output above, we need to run the following commands as a regular user to start using the cluster.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
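You can then confirm that kubectl can reach the new cluster:

#should print the control plane and CoreDNS endpoints
kubectl cluster-info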

If you are the root user, run:

export KUBECONFIG=/etc/kubernetes/admin.conf
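This export only lasts for the current shell session; to make it permanent, you could append it to the shell profile, for example:

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc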

The machine is now initialized as the master.

If we check the nodes, we will see only the master node.

kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
ip-172-31-45-165   Ready    master   48s   v1.18.5

Step 4: Installing a Pod network add-on

You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other.
Cluster DNS (CoreDNS) will not start up before a network is installed.

We will use **Calico** as our CNI plugin.

Calico is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer.

Calico will automatically detect which IP address range to use for pod IPs based on the value provided via the --pod-network-cidr flag or via kubeadm's configuration.
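If the auto-detection does not match your setup, you can download the manifest and set the pool explicitly before applying it. A sketch:

curl -O https://docs.projectcalico.org/v3.14/manifests/calico.yaml
#in calico.yaml, uncomment the CALICO_IPV4POOL_CIDR environment variable
#and set it to the CIDR passed to kubeadm init:
#  - name: CALICO_IPV4POOL_CIDR
#    value: "10.13.0.0/16"
kubectl apply -f calico.yaml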

Run the following:

kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml

Calico is now installed as our CNI plugin.
We can verify that CoreDNS and the Calico Pods are running fine with the command:

kubectl get pods --all-namespaces
NAMESPACE     NAME                                                    READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-76d4774d89-5rz9t                1/1     Running   0          70s
kube-system   calico-node-lj8cp                                       1/1     Running   0          70s
kube-system   coredns-66bff467f8-h7k7r                                1/1     Running   0          2m23s
kube-system   coredns-66bff467f8-rqdcj                                1/1     Running   0          2m23s
kube-system   etcd-dlesptrnapp3v.jdadelivers.com                      1/1     Running   0          2m33s
kube-system   kube-apiserver-dlesptrnapp3v.jdadelivers.com            1/1     Running   0          2m33s
kube-system   kube-controller-manager-dlesptrnapp3v.jdadelivers.com   1/1     Running   0          2m33s
kube-system   kube-proxy-4b4pj                                        1/1     Running   0          2m23s
kube-system   kube-scheduler-dlesptrnapp3v.jdadelivers.com            1/1     Running   0          2m33s

Perform Step 5 only on the worker nodes, and run it on both of them.

Step 5: Join the worker nodes to the master node

Now, on the worker nodes, run the kubeadm join command that you obtained when running kubeadm init on the master machine/control-plane host.

kubeadm join 172.31.45.165:6443 --token jjyon1.bfztgwfuxavldki2 --discovery-token-ca-cert-hash sha256:29b6281be8bd08fd2d993e322f8037491f32006d925a4cd96cd1568563d12e5f 

You will see output like the following:

W0715 03:14:44.854420   13222 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


If you are not running as root, run the command with sudo:

sudo kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
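The token embedded in the join command expires after 24 hours by default. If you need to join a worker later, you can print a fresh join command on the master:

kubeadm token create --print-join-command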

Step 6: Verifying from the master/control-plane machine that the nodes have joined

kubectl get nodes
NAME               STATUS   ROLES    AGE     VERSION
ip-172-31-33-64    Ready    <none>   119s    v1.18.5
ip-172-31-38-44    Ready    <none>   116s    v1.18.5
ip-172-31-45-165   Ready    master   5m30s   v1.18.5
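The worker nodes show <none> under ROLES because kubeadm only labels the control-plane node. Optionally, you can label the workers yourself; this is purely cosmetic, and the node names below are taken from the output above:

kubectl label node ip-172-31-33-64 node-role.kubernetes.io/worker=
kubectl label node ip-172-31-38-44 node-role.kubernetes.io/worker=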

Thus we have set up a Kubernetes master-slave architecture using kubeadm.
