I have 3 VMs (a master node and 2 worker nodes).
Plan
Here are the steps we will follow to set it up.
Setup K8s
Preparation
Before doing anything, we need to:
===All Nodes===
- Adding a static IP to our Ubuntu instance
go to the /etc/netplan folder
cd /etc/netplan
ls
edit the file in there; in my case it is 50-cloud-init.yaml
back it up before editing, just in case we need to restore it
sudo cp 50-cloud-init.yaml 50-cloud-init.yaml.bak
sudo nano 50-cloud-init.yaml
network:
  version: 2
  ethernets:
    ens18:
      addresses: [192.168.***.***/24]
      nameservers:
        addresses: [192.168.***.***]
      routes:
        - to: default
          via: 192.168.***.***
Current IP: can be any free IP on the subnet, but it is easiest to stick with the current one.
Nameserver: usually 0.0.0.0 or the default gateway; type resolvectl to check.
Default gateway: usually the router IP; check with:
route -n
Kernel IP routing table
Destination     Gateway          Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.**.***   0.0.0.0         UG    0      0        0 ens18
Apply the new change
sudo netplan try
then press Enter to accept it
To test it
ping 0.0.0.0
ping google.com
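You can also confirm the address and default route took effect with iproute2 (ens18 is my interface name; substitute yours):

```shell
# Show the IPv4 address assigned to the interface
ip -4 addr show dev ens18
# Show the default route (should list your gateway)
ip route show default
```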
- Disable swap
Check whether swap is enabled by typing free -m; if the Swap row shows 0, it is already disabled
sudo swapoff -a
sudo nano /etc/fstab
comment out this line
# UUID=1d3c29bb-d730-4ad0-a659-45b25f60c37d none swap sw 0 0
then sudo reboot
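If you prefer to script this instead of editing /etc/fstab by hand, here is a minimal sketch; it comments out any line whose filesystem type is swap (check the result before rebooting):

```shell
# Turn off swap immediately
sudo swapoff -a
# Comment out every swap entry in /etc/fstab (a .bak backup is kept)
sudo sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^\([^#]\)/#\1/' /etc/fstab
# Verify: the Swap row should show 0 after reboot
free -m
```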
- Enabling IP forwarding in /etc/sysctl.conf
sudo nano /etc/sysctl.conf
# uncomment this line
net.ipv4.ip_forward=1
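The change only takes effect after the kernel parameters are reloaded (or after a reboot); to apply and verify it right away:

```shell
# Reload sysctl settings from /etc/sysctl.conf and /etc/sysctl.d/*
sudo sysctl --system
# Should print net.ipv4.ip_forward = 1 if the line was uncommented
sysctl net.ipv4.ip_forward
```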
- Enabling br_netfilter
type sudo nano /etc/modules-load.d/k8s.conf
and add
br_netfilter
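The k8s.conf entry only loads the module at boot; to load it immediately and confirm it is active:

```shell
# Load the br_netfilter module now
sudo modprobe br_netfilter
# Should list the module if it loaded successfully
lsmod | grep br_netfilter
```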
Install containerd
===All Nodes===
- Install a container runtime(containerd)
sudo apt update && sudo apt install -y containerd
- Configure the systemd cgroup driver
check the init system type
ps -p 1
PID TTY TIME CMD
1 ? 00:00:02 systemd
configure containerd to use the systemd cgroup driver
cd /etc/containerd
# create it if it does not exist
sudo mkdir -p /etc/containerd
containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
# you might need to change
# sandbox_image = "registry.k8s.io/pause:3.8" to
# sandbox_image = "registry.k8s.io/pause:3.10" (the latest version)
sudo systemctl restart containerd
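A quick sanity check that the sed rewrite above actually flipped the flag and that containerd came back up:

```shell
# Should contain: SystemdCgroup = true
grep 'SystemdCgroup' /etc/containerd/config.toml
# Should print: active
systemctl is-active containerd
```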
Install kubeadm
- Install kubeadm
===All Nodes===
sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
See the official documentation.
- Kubectl autocomplete
source <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
alias k=kubectl
complete -o default -F __start_kubectl k
- Init kubeadm
=== Master Node ===
sudo kubeadm init --apiserver-advertise-address <master_node_ip> --pod-network-cidr "10.244.0.0/16" --upload-certs
set up kubectl for the admin user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Set up pod network plugin
===Master Node===
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
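The master node reports NotReady until the network add-on is running; you can watch it come up (kube-flannel is the namespace the manifest above creates):

```shell
# Flannel pods should reach Running status
kubectl get pods -n kube-flannel
# The node should flip to Ready once flannel is up
kubectl get nodes
```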
- Join node
=== Worker Node ===
sudo kubeadm join 192.168.**.***:6443 --token 05ntql.idv214eptmcihi5t \
--discovery-token-ca-cert-hash sha256:3c...
You can type kubeadm token create --print-join-command
on the master node to see the command
Check all nodes in the cluster with kubectl get nodes
Reset kubeadm
===All Nodes===
If you have a problem with the control plane or a node, such as an IP change or another error that requires resetting the cluster, here is an easy way to do it.
Restart node
Sometimes it is necessary to restart K8s nodes
1. Drain the node that you want to restart: kubectl drain <node-name>
2. Restart the node
3. Allow new pods to be scheduled on it: kubectl uncordon <node-name>
4. Verify: kubectl get nodes
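The four steps above can be sketched as a small script (NODE is a placeholder; --ignore-daemonsets is usually required because DaemonSet pods cannot be evicted):

```shell
#!/bin/sh
NODE=worker-1   # placeholder: the node you want to restart
# 1. Evict regular pods and mark the node unschedulable
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data
# 2. Reboot the machine itself; once it is back up:
kubectl uncordon "$NODE"
# 4. Verify the node is Ready again
kubectl get nodes
```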
Reset kubeadm
sudo kubeadm reset
Cleanup of K8s
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
Cleanup of dependency
sudo apt-get autoremove
Cleanup of ~/.kube
sudo rm -rf ~/.kube
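kubeadm reset does not remove everything; on a default install these directories may also be left behind (double-check the paths before running this, since it is destructive):

```shell
# Leftover state from kubeadm, etcd, kubelet, and CNI
sudo rm -rf /etc/kubernetes /var/lib/etcd /var/lib/kubelet /etc/cni/net.d
```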
Read the official document
=== Done ===
How to edit the route table
To view the route table
ip route show
# or
ip route list
To add a route to a network via a gateway on a specific interface (e.g. eth0)
ip route add <network>/<netmask> via <gateway> dev <interface>
# example
sudo ip route add 192.168.2.0/24 via 192.168.2.254 dev eth0
To add a default route
ip route add default via 192.168.1.254
To delete route from table
ip route delete 192.168.1.0/24 dev eth0
# or
ip route delete default
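A handy companion to ip route show is ip route get, which asks the kernel which route it would actually pick for a given destination:

```shell
# Shows the selected route, source address, and outgoing interface
ip route get 8.8.8.8
```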
Leave a comment if you have any questions.
===========
Please keep in touch
Portfolio
Linkedin
Github
Youtube