In my previous article, I walked through Kubernetes fundamentals with a small hands-on exercise using Docker Desktop. In this article, I will show how to set up a basic Kubernetes cluster on AWS EC2 using only free tier components. We will use kubeadm to set up the cluster.
Prerequisites
- AWS Free Tier Account Access. Use this link to set it up.
- Basic knowledge of AWS
- Basic knowledge of Linux
Standing up two EC2 instances
We will be using two Ubuntu t2.micro instances for this purpose. Create the instances as described below.
- As a first step, create a security group with the below inbound rules. I am enabling all ICMP and TCP traffic between servers in the same security group. Also, enable SSH from all sources if you want to use an SSH client to connect to the servers.
After creating the security group, click on Launch instance in the instances screen. I am calling the first instance kubemaster and the second instance node01.
Select the t2.micro instance type and proceed without a key pair. Kubernetes recommends at least 2 CPUs and 2 GB RAM for the nodes. However, as this is just a basic cluster setup, I am going with t2.micro instances. You can also choose to create a key pair if you want to connect to the instances using an SSH client; I will be using EC2 Instance Connect to connect to the servers.
- Click Edit on the network settings and select the same subnet for both instances. I am selecting the first subnet from the list. This ensures that the network is open between these two instances. Select the security group we created in the first step.
- Click Advanced network configuration and set two consecutive primary IP addresses for kubemaster and node01. Please make sure the IP addresses are within the subnet range. I am selecting 172.31.32.10 for kubemaster and 172.31.32.11 for node01.
- Leave the other fields as they are and launch the instance. Repeat the same steps for the node01 instance.
- Make sure both instances are in the running state, then select one instance at a time and click the Connect button at the top of the page to connect using EC2 Instance Connect. Alternatively, you can connect using the SSH client of your choice with your SSH key. If you prefer scripting the setup instead of using the console, a rough AWS CLI equivalent is sketched below.
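The console steps above can also be scripted. Below is a minimal AWS CLI sketch of the same setup; the vpc-xxxx, subnet-xxxx, sg-xxxx and ami-xxxx IDs are placeholders you would replace with values from your own account.
# Create the security group (replace vpc-xxxx with your VPC ID)
aws ec2 create-security-group --group-name k8s-cluster-sg --description "k8s cluster security group" --vpc-id vpc-xxxx
# Allow all ICMP and TCP traffic between members of the same security group, and SSH from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-xxxx --protocol icmp --port -1 --source-group sg-xxxx
aws ec2 authorize-security-group-ingress --group-id sg-xxxx --protocol tcp --port 0-65535 --source-group sg-xxxx
aws ec2 authorize-security-group-ingress --group-id sg-xxxx --protocol tcp --port 22 --cidr 0.0.0.0/0
# Launch kubemaster with the fixed private IP 172.31.32.10 (repeat with 172.31.32.11 for node01)
aws ec2 run-instances --image-id ami-xxxx --instance-type t2.micro --subnet-id subnet-xxxx --security-group-ids sg-xxxx --private-ip-address 172.31.32.10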
Setting up Kubernetes cluster
Setting up instance hostname and hosts file
As a first step, we will set the hostname on each of these two EC2 instances and update the /etc/hosts file so that both machines can resolve each other by name.
On kubemaster instance:
root@ip-172-31-32-10:~# hostnamectl set-hostname kubemaster
root@ip-172-31-32-10:~# vi /etc/hosts
Enter the below entries in the hosts file and save. Use your specific IP addresses.
127.0.0.1 localhost kubemaster
172.31.32.10 kubemaster
172.31.32.11 node01
On node01 instance:
root@ip-172-31-32-11:~# hostnamectl set-hostname node01
root@ip-172-31-32-11:~# vi /etc/hosts
Enter the below entries in the hosts file and save. Use your specific IP addresses.
127.0.0.1 localhost node01
172.31.32.11 node01
172.31.32.10 kubemaster
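To confirm that name resolution and connectivity work between the two instances (ICMP is open within the security group we created earlier), you can ping each host by name:
# From kubemaster
ping -c 3 node01
# From node01
ping -c 3 kubemaster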
Installing Container Runtime
We will be using Docker Engine for our purpose. Run the below commands on both the kubemaster and node01 instances.
- Forwarding IPv4 and letting iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
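Optionally, verify that the kernel modules are loaded and the sysctl parameters took effect before moving on:
# Both modules should be listed
lsmod | grep -e overlay -e br_netfilter
# All three values should be 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward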
- Installing Docker Engine
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce
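Docker Engine should now be installed and running. A quick check (the hello-world run assumes the instance has outbound internet access to pull the image):
# The docker service should be active (running)
sudo systemctl status docker --no-pager
sudo docker run hello-world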
- Installing cri-dockerd. This might take a few minutes to complete.
git clone https://github.com/Mirantis/cri-dockerd.git
# Run these commands as root
###Install GO###
wget https://storage.googleapis.com/golang/getgo/installer_linux
chmod +x ./installer_linux
./installer_linux
source ~/.bash_profile
cd cri-dockerd
mkdir bin
go build -o bin/cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
cp -a packaging/systemd/* /etc/systemd/system
sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
systemctl daemon-reload
systemctl enable cri-docker.service
systemctl enable --now cri-docker.socket
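Before moving on, confirm that the cri-dockerd socket is up, since we will point kubeadm at it later:
# The socket should be active and present at /var/run/cri-dockerd.sock
systemctl status cri-docker.socket --no-pager
ls -l /var/run/cri-dockerd.sock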
Installing kubeadm, kubelet and kubectl
These commands need to be run on both kubemaster and node01.
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
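You can verify the installation on both nodes; kubeadm, kubelet and kubectl should all report the same minor version (v1.27.x at the time of writing):
kubeadm version
kubelet --version
kubectl version --client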
Initializing the controlplane using kubeadm
These commands need to be run only on the kubemaster instance.
kubeadm init --pod-network-cidr=192.16.0.0/16 --apiserver-advertise-address=172.31.32.10 --cri-socket=unix:///var/run/cri-dockerd.sock --ignore-preflight-errors=NumCPU,Mem
Here I am suppressing the preflight errors caused by the low CPU and memory of the t2.micro instance, as our aim is just to bring up a bare minimum cluster. Once the above command completes, the controlplane will be initialized and kubeadm will print detailed output.
From the output, please keep a note of the kubeadm join command somewhere for setting up node01 later.
Run the below commands from the kubeadm init output to set up the kubectl configuration.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now you can see that the controlplane and its components are running.
root@ip-172-31-32-10:~# kubectl get nodes
NAME         STATUS     ROLES           AGE     VERSION
kubemaster   NotReady   control-plane   4m46s   v1.27.2
root@ip-172-31-32-10:~# kubectl get pods -A
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-5d78c9869d-67skv             0/1     Pending   0          4m34s
kube-system   coredns-5d78c9869d-xsck7             0/1     Pending   0          4m34s
kube-system   etcd-kubemaster                      1/1     Running   0          4m48s
kube-system   kube-apiserver-kubemaster            1/1     Running   0          4m51s
kube-system   kube-controller-manager-kubemaster   1/1     Running   0          4m51s
kube-system   kube-proxy-7qfxs                     1/1     Running   0          4m34s
kube-system   kube-scheduler-kubemaster            1/1     Running   0          4m46s
root@ip-172-31-32-10:~#
Installing pod network using weave-net
Run the below command using kubectl
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
We need to apply the pod network CIDR we provided during kubeadm init to the Weave Net configuration. Run the below command and edit the daemonset as shown: add an environment variable named IPALLOC_RANGE with the value 192.16.0.0/16. Please make sure you are editing the container named 'weave'.
kubectl edit daemonset weave-net -n kube-system
spec:
  containers:
  - command:
    - /home/weave/launch.sh
    env:
    - name: IPALLOC_RANGE
      value: 192.16.0.0/16
    - name: INIT_CONTAINER
      value: "true"
    - name: HOSTNAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    image: weaveworks/weave-kube:latest
    imagePullPolicy: Always
    name: weave
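Saving the edit triggers a rolling update of the daemonset. You can confirm that the Weave Net pod on kubemaster comes up and the node eventually turns Ready (the pods carry the name=weave-net label from the published manifest):
kubectl get pods -n kube-system -l name=weave-net -o wide
kubectl get nodes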
Joining the node01 instance to the cluster.
In the node01 terminal, use the kubeadm join command we copied earlier to join node01 to the cluster. At the end of the command, add --cri-socket=unix:///var/run/cri-dockerd.sock to select Docker (via cri-dockerd) as the container runtime.
root@ip-172-31-32-11:~# kubeadm join 172.31.32.10:6443 --token pew42g.8wcpqxe3c2avc3ky --discovery-token-ca-cert-hash sha256:4fb3037e7e98599763dae8f2aa62deac27139c1af96cb8b8e482590ebaaeb45c --cri-socket=unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
We have successfully set up a two node cluster in AWS EC2.
root@ip-172-31-32-10:~# kubectl get nodes
NAME         STATUS   ROLES           AGE     VERSION
kubemaster   Ready    control-plane   62m     v1.27.2
node01       Ready    <none>          2m56s   v1.27.2
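As a final sanity check, you can schedule a simple test pod and confirm it lands on node01, since the controlplane keeps its default NoSchedule taint. The pod name test-nginx here is just an example:
kubectl run test-nginx --image=nginx
kubectl get pods -o wide
# Clean up the test pod afterwards
kubectl delete pod test-nginx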
Please note that since we used the lowest capacity EC2 instances for the cluster, we should expect low performance from this Kubernetes cluster. But this exercise gives a good feel for how to set up a cluster using kubeadm.