MohammedBanabila

Set up a multi-node Kubernetes cluster using kubeadm

In this lab we deploy a control plane node and worker nodes on a cloud provider (AWS in this example), using EC2 instances running Ubuntu.

Steps:

  1. create a VPC and subnets; in this lab the VPC uses CIDR block 77.132.0.0/16 with 3 public subnets of block size /24
  2. deploy 3 EC2 instances, one per node, and decide which one will be the control plane and which will be the worker nodes
  3. create 2 security groups, one for the control plane node and one for the worker nodes
  4. set up a network access control list (NACL)
  5. associate Elastic IPs with those instances
  6. attach an internet gateway to the VPC
  7. create a route table and associate it with the subnets
  8. once the infrastructure (the EC2 instances) is up and running, continue with the node setup below

note:
on the control plane node we open the ports needed by its components

apiserver            port 6443
etcd                 ports 2379-2380
scheduler            port 10257
controller-manager   port 10259
SSH access to the control plane node on port 22 is allowed only from my IP address (/32)

and on the worker nodes

kubelet              port 10250
kube-proxy           port 10256
NodePort services    ports 30000-32767
SSH access to the worker nodes on port 22 is allowed only from my IP address (/32)

We add those ports to the control plane security group and the worker security group respectively.

Before starting the setup, first update and upgrade the packages on every node using sudo apt update -y && sudo apt upgrade -y
Steps to set up the control plane (master) node:

  1. disable swap
  2. update kernel params
  3. install a container runtime (containerd)
  4. install runc
  5. install the CNI plugins
  6. install kubeadm, kubelet, kubectl and check their versions
  7. initialize the control plane node using kubeadm; the printed output contains the token that lets you join the worker nodes
  8. install the Calico CNI plugin
  9. use kubeadm join on the workers to join the master

Optional: you can change the hostname on the nodes, e.g. to worker1 and worker2 for the worker nodes (a sketch follows just below).
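
If you want the new hostname to survive a reboot, a minimal sketch using hostnamectl (standard on Ubuntu); run the matching command on each node, the names below match this lab:

sudo hostnamectl set-hostname master    # on the control plane node
sudo hostnamectl set-hostname worker1   # on worker node 1
sudo hostnamectl set-hostname worker2   # on worker node 2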

Steps to set up the worker nodes:

  1. disable swap
  2. update kernel params
  3. install a container runtime (containerd)
  4. install runc
  5. install the CNI plugins
  6. install kubeadm, kubelet, kubectl and check their versions
  7. use kubeadm join to join the master

Note: copy the kubeconfig from the control plane to the worker nodes at ~/.kube/config

On the control plane node:
cd ~/.kube/
cat config and copy all of its contents
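
Instead of copying and pasting by hand, a minimal sketch using scp from your workstation; the ubuntu user, the .pem file names (matching the mykey1/mykey2 key pairs in the Pulumi code) and the <...> addresses are placeholders:

scp -i mykey1.pem ubuntu@<master-eip>:~/.kube/config ./config
scp -i mykey2.pem ./config ubuntu@<worker1-eip>:~/.kube/config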

Note:
You can provision the infrastructure in the console or as infrastructure as code.
In this lab, I used Pulumi with Python for provisioning the resources.
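
The program below reads its inputs from the Pulumi stack config via cfg1.require(...). A minimal sketch of setting those keys; the subnet CIDRs, AMI ID, instance type and your public IP are illustrative placeholders:

pulumi config set block1 77.132.0.0/16
pulumi config set cidr1 77.132.100.0/24
pulumi config set cidr2 77.132.101.0/24
pulumi config set cidr3 77.132.102.0/24
pulumi config set any_ipv4_traffic 0.0.0.0/0
pulumi config set myips <your-public-ip>/32
pulumi config set ami <ubuntu-ami-id>
pulumi config set instance-type t2.medium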

"""An AWS Python Pulumi program"""
import pulumi , pulumi_aws as aws , json

cfg1=pulumi.Config()

vpc1=aws.ec2.Vpc(
"vpc1",
aws.ec2.VpcArgs(
cidr_block=cfg1.require(key='block1'),
tags={
"Name": "vpc1"
}
)
)

intgw1=aws.ec2.InternetGateway(
"intgw1",
aws.ec2.InternetGatewayArgs(
vpc_id=vpc1.id,
tags={
"Name": "intgw1"
}
)
)

zones="us-east-1a"
publicsubnets=["subnet1","subnet2","subnet3"]
cidr1=cfg1.require(key="cidr1")
cidr2=cfg1.require(key="cidr2")
cidr3=cfg1.require(key="cidr3")

cidrs=[ cidr1 , cidr2 , cidr3 ]

for allsubnets in range(len(publicsubnets)):
    publicsubnets[allsubnets]=aws.ec2.Subnet(
        publicsubnets[allsubnets],
        aws.ec2.SubnetArgs(
            vpc_id=vpc1.id,
            cidr_block=cidrs[allsubnets],
            map_public_ip_on_launch=False,
            availability_zone=zones,
            tags={
                "Name" : publicsubnets[allsubnets]
            }
        )
    )

table1=aws.ec2.RouteTable(
"table1",
aws.ec2.RouteTableArgs(
vpc_id=vpc1.id,
routes=[
aws.ec2.RouteTableRouteArgs(
cidr_block=cfg1.require(key="any_ipv4_traffic"),
gateway_id=intgw1.id
)
],
tags={
"Name" : "table1"
}
)
)

associate1=aws.ec2.RouteTableAssociation(
"associate1",
aws.ec2.RouteTableAssociationArgs(
subnet_id=publicsubnets[0].id,
route_table_id=table1.id
)
)
associate2=aws.ec2.RouteTableAssociation(
"associate2",
aws.ec2.RouteTableAssociationArgs(
subnet_id=publicsubnets[1].id,
route_table_id=table1.id
)

)

associate3=aws.ec2.RouteTableAssociation(
"associate3",
aws.ec2.RouteTableAssociationArgs(
subnet_id=publicsubnets[2].id,
route_table_id=table1.id
)

)

ingress_traffic=[
aws.ec2.NetworkAclIngressArgs(
from_port=22,
to_port=22,
protocol="tcp",
cidr_block=cfg1.require(key="myips"),
icmp_code=0,
icmp_type=0,
action="allow",
rule_no=100
),
aws.ec2.NetworkAclIngressArgs(
from_port=22,
to_port=22,
protocol="tcp",
cidr_block=cfg1.require(key="any_ipv4_traffic"),
icmp_code=0,
icmp_type=0,
action="deny",
rule_no=101
),

aws.ec2.NetworkAclIngressArgs(
  from_port=0,
  to_port=0,
  protocol="-1",
  cidr_block=cfg1.require(key="any_ipv4_traffic"),
  icmp_code=0,
  icmp_type=0,
  action="allow",
  rule_no=200
)

]

egress_traffic=[
aws.ec2.NetworkAclEgressArgs(
from_port=22,
to_port=22,
protocol="tcp",
cidr_block=cfg1.require(key="myips"),
icmp_code=0,
icmp_type=0,
action="allow",
rule_no=100
),
aws.ec2.NetworkAclEgressArgs(
from_port=22,
to_port=22,
protocol="tcp",
cidr_block=cfg1.require(key="any_ipv4_traffic"),
icmp_code=0,
icmp_type=0,
action="deny",
rule_no=101
),

aws.ec2.NetworkAclEgressArgs(
  from_port=0,
  to_port=0,
  protocol="-1",
  cidr_block=cfg1.require(key="any_ipv4_traffic"),
  icmp_code=0,
  icmp_type=0,
  action="allow",
  rule_no=200
)

]

nacls1=aws.ec2.NetworkAcl(
"nacls1",
aws.ec2.NetworkAclArgs(
vpc_id=vpc1.id,
ingress=ingress_traffic,
egress=egress_traffic,
tags={
"Name" : "nacls1"
},
)
)

nacllink1=aws.ec2.NetworkAclAssociation(
"nacllink1",
aws.ec2.NetworkAclAssociationArgs(
network_acl_id=nacls1.id,
subnet_id=publicsubnets[0].id
)
)

nacllink2=aws.ec2.NetworkAclAssociation(
"nacllink2",
aws.ec2.NetworkAclAssociationArgs(
network_acl_id=nacls1.id,
subnet_id=publicsubnets[1].id
)
)

nacllink3=aws.ec2.NetworkAclAssociation(
"nacllink3",
aws.ec2.NetworkAclAssociationArgs(
network_acl_id=nacls1.id,
subnet_id=publicsubnets[2].id
)
)

masteringress=[
aws.ec2.SecurityGroupIngressArgs(
from_port=22,
to_port=22,
protocol="tcp",
cidr_blocks=[cfg1.require(key="myips")],

),
aws.ec2.SecurityGroupIngressArgs(
from_port=6443,
to_port=6443,
protocol="tcp",
cidr_blocks=[vpc1.cidr_block]
),
aws.ec2.SecurityGroupIngressArgs(
from_port=2379,
to_port=2380,
protocol="tcp",
cidr_blocks=[vpc1.cidr_block]

),

aws.ec2.SecurityGroupIngressArgs(
from_port=10249,
to_port=10260,
protocol="tcp",
cidr_blocks=[vpc1.cidr_block]
)
]
masteregress=[
aws.ec2.SecurityGroupEgressArgs(
from_port=0,
to_port=0,
protocol="-1",
cidr_blocks=[cfg1.require(key="any_ipv4_traffic")]
),
]

mastersecurity=aws.ec2.SecurityGroup(
"mastersecurity",
aws.ec2.SecurityGroupArgs(
vpc_id=vpc1.id,
name="mastersecurity",
ingress=masteringress,
egress=masteregress,
tags={
"Name" : "mastersecurity"
}
)
)

workeringress=[
aws.ec2.SecurityGroupIngressArgs(
from_port=22,
to_port=22,
protocol="tcp",
cidr_blocks=[cfg1.require(key="myips")]
),

aws.ec2.SecurityGroupIngressArgs(
from_port=10250,
to_port=10250,
protocol="tcp",
cidr_blocks=[vpc1.cidr_block]
),
aws.ec2.SecurityGroupIngressArgs(
from_port=10256,
to_port=10256,
protocol="tcp",
cidr_blocks=[vpc1.cidr_block]
),
aws.ec2.SecurityGroupIngressArgs(
from_port=30000,
to_port=32767,
protocol="tcp",
cidr_blocks=[vpc1.cidr_block]
),
]
workeregress=[
aws.ec2.SecurityGroupEgressArgs(
from_port=0,
to_port=0,
protocol="-1",
cidr_blocks=[cfg1.require(key="any_ipv4_traffic")]

),
]

workersecurity=aws.ec2.SecurityGroup(
"workersecurity",
aws.ec2.SecurityGroupArgs(
vpc_id=vpc1.id,
name="workersecurity",
ingress=workeringress,
egress=workeregress,
tags={
"Name" : "workersecurity"
}
)
)

master=aws.ec2.Instance(
"master",
aws.ec2.InstanceArgs(
ami=cfg1.require(key='ami'),
instance_type=cfg1.require(key='instance-type'),
vpc_security_group_ids=[mastersecurity.id],
subnet_id=publicsubnets[0].id,
availability_zone=zones,
key_name="mykey1",
tags={
"Name" : "master"
},
ebs_block_devices=[
aws.ec2.InstanceEbsBlockDeviceArgs(
device_name="/dev/sdm",
volume_size=8,
volume_type="gp3"
)
],
)
)

worker1=aws.ec2.Instance(
"worker1",
aws.ec2.InstanceArgs(
ami=cfg1.require(key='ami'),
instance_type=cfg1.require(key='instance-type'),
vpc_security_group_ids=[workersecurity.id],
subnet_id=publicsubnets[1].id,
availability_zone=zones,
key_name="mykey2",
tags={
"Name" : "worker1"
},
ebs_block_devices=[
aws.ec2.InstanceEbsBlockDeviceArgs(
device_name="/dev/sdb",
volume_size=8,
volume_type="gp3"
)
],
)
)

worker2=aws.ec2.Instance(
"worker2",
aws.ec2.InstanceArgs(
ami=cfg1.require(key='ami'),
instance_type=cfg1.require(key='instance-type'),
vpc_security_group_ids=[workersecurity.id],
subnet_id=publicsubnets[2].id,
key_name="mykey2",
tags={
"Name" : "worker2"
},
ebs_block_devices=[
aws.ec2.InstanceEbsBlockDeviceArgs(
device_name="/dev/sdc",
volume_size=8,
volume_type="gp3"
)
],
)
)

eips=[ "eip1" , "eip2" , "eip3" ]
for alleips in range(len(eips)):
    eips[alleips]=aws.ec2.Eip(
        eips[alleips],
        aws.ec2.EipArgs(
            domain="vpc",
            tags={
                "Name" : eips[alleips]
            }
        )
    )

eiplink1=aws.ec2.EipAssociation(
"eiplink1",
aws.ec2.EipAssociationArgs(
allocation_id=eips[0].id,
instance_id=master.id
)
)

eiplink2=aws.ec2.EipAssociation(
"eiplink2",
aws.ec2.EipAssociationArgs(
allocation_id=eips[1].id,
instance_id=worker1.id
)
)

eiplink3=aws.ec2.EipAssociation(
"eiplink3",
aws.ec2.EipAssociationArgs(
allocation_id=eips[2].id,
instance_id=worker2.id
)
)

pulumi.export("master_eip" , value=eips[0].public_ip )

pulumi.export("worker1_eip", value=eips[1].public_ip )

pulumi.export("worker2_eip", value=eips[2].public_ip )

pulumi.export("master_private_ip", value=master.private_ip)

pulumi.export("worker1_private_ip" , value=worker1.private_ip)

pulumi.export( "worker2_private_ip" , value=worker2.private_ip )

================================================================

Notes:
create 3 shell scripts, one for the control plane node and one for each worker node

example:
master.sh for control plane node
worker1.sh for worker node 1
worker2.sh for worker node 2

Give execute permission to those scripts:

sudo chmod +x master.sh

sudo chmod +x worker1.sh
sudo chmod +x worker2.sh

Under master.sh :

#!/bin/bash

# disable swap

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# update kernel params

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
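
# optional check: confirm the modules are loaded and the sysctl values took effect
lsmod | grep -e overlay -e br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward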

# install container runtime: containerd 1.7.27

curl -LO https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz

sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz

curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

sudo systemctl daemon-reload
sudo systemctl enable containerd --now
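
# optional check: containerd should now be active
sudo systemctl is-active containerd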

# install runc

curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64

sudo install -m 755 runc.amd64 /usr/local/sbin/runc

# install cni plugins

curl -LO "https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz"

sudo mkdir -p /opt/cni/bin

sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# install kubeadm, kubelet, kubectl

sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm version
kubelet --version
kubectl version --client

# configure crictl to work with containerd

sudo crictl config runtime-endpoint unix:///run/containerd/containerd.sock
sudo chmod 777 -R /var/run/containerd/

# initialize the control plane node using kubeadm

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=77.132.100.6 --node-name master

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
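
# the control plane node will report NotReady until the Calico CNI below is installed
kubectl get nodes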

sudo chmod 777 .kube/
sudo chmod 777 -R /etc/kubernetes/

export KUBECONFIG=/etc/kubernetes/admin.conf

# install the Calico CNI

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/tigera-operator.yaml

curl -o custom-resources.yaml https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/custom-resources.yaml

kubectl apply -f custom-resources.yaml
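
# optional: watch the Calico pods come up before joining the workers
watch kubectl get pods -n calico-system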

# on the control plane, print the join command for the workers:

sudo kubeadm token create --print-join-command >> join-master.sh
sudo chmod +x join-master.sh

# the printed join command (run it on each worker node) looks like this:
sudo kubeadm join 77.132.100.6:6443 --token 26qu35.wtps7hdhutk22nt8 \
--discovery-token-ca-cert-hash sha256:4c0cf2ce4c52d2b1e767056ffd7e2196358ef95c508f1a1abbe83e706a273538

Under worker1.sh :

#!/bin/bash

# disable swap

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# update kernel params

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

# install container runtime: containerd 1.7.27

curl -LO https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz

sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz

curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

sudo systemctl daemon-reload
sudo systemctl enable containerd --now

# install runc

curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64

sudo install -m 755 runc.amd64 /usr/local/sbin/runc

# install cni plugins

curl -LO "https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz"

sudo mkdir -p /opt/cni/bin

sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# install kubeadm, kubelet, kubectl

sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm version
kubelet --version
kubectl version --client

# configure crictl to work with containerd

sudo crictl config runtime-endpoint unix:///run/containerd/containerd.sock
sudo chmod 777 -R /var/run/containerd/

sudo hostname worker1

# to use kubectl from this worker, the admin kubeconfig must first be copied over from the control plane (see Note above); these lines assume it is available at /etc/kubernetes/admin.conf
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

sudo chmod 777 .kube/
sudo chmod 777 -R /etc/kubernetes/

sudo kubeadm join 77.132.100.6:6443 --token 26qu35.wtps7hdhutk22nt8 \
--discovery-token-ca-cert-hash sha256:4c0cf2ce4c52d2b1e767056ffd7e2196358ef95c508f1a1abbe83e706a273538

=======================
Under worker2.sh

#!/bin/bash

# disable swap

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# update kernel params

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

# install container runtime: containerd 1.7.27

curl -LO https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz

sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz

curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

sudo systemctl daemon-reload
sudo systemctl enable containerd --now

# install runc

curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64

sudo install -m 755 runc.amd64 /usr/local/sbin/runc

# install cni plugins

curl -LO "https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz"

sudo mkdir -p /opt/cni/bin

sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# install kubeadm, kubelet, kubectl

sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm version
kubelet --version
kubectl version --client

# configure crictl to work with containerd

sudo crictl config runtime-endpoint unix:///run/containerd/containerd.sock
sudo chmod 777 -R /var/run/containerd/

sudo hostname worker2

# to use kubectl from this worker, the admin kubeconfig must first be copied over from the control plane (see Note above); these lines assume it is available at /etc/kubernetes/admin.conf
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

sudo chmod 777 .kube/
sudo chmod 777 -R /etc/kubernetes/

sudo kubeadm join 77.132.100.6:6443 --token 26qu35.wtps7hdhutk22nt8 \
--discovery-token-ca-cert-hash sha256:4c0cf2ce4c52d2b1e767056ffd7e2196358ef95c508f1a1abbe83e706a273538
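
After both workers have joined, verify from the control plane node that all three nodes are registered and eventually become Ready:

kubectl get nodes -o wide
kubectl get pods -A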

Notes:

  1. remember that not all CNI plugins support network policies;
    you can use the Calico or Cilium plugins

  2. kubeadm, kubelet and kubectl should be installed at the same package version;
    in this lab I used the v1.31 repository, which installed 1.31.7 (a pinning sketch follows after the references)
    references:
    https://kubernetes.io/docs/reference/networking/ports-and-protocols/
    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
    https://github.com/opencontainers/runc/releases/
    https://github.com/containerd/containerd/releases
    https://github.com/opencontainers/runc
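
If you want to pin all three packages to the same patch release instead of installing the latest 1.31.x, a minimal sketch; the exact Debian revision suffix is illustrative, so list what is available first with apt-cache madison:

apt-cache madison kubeadm
sudo apt-get install -y kubelet=1.31.7-1.1 kubeadm=1.31.7-1.1 kubectl=1.31.7-1.1
sudo apt-mark hold kubelet kubeadm kubectl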

CKA 2024-2025 series:
https://www.youtube.com/watch?v=6_gMoe7Ik8k&list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC

Note: for --apiserver-advertise-address= use the private IP address of the control plane node, e.g. 77.132.100.6
