Lisa Ellington
Manual Kubernetes Cluster Deployment. My step-by-step story.

When I first decided to deploy a Kubernetes cluster manually, I have to admit, it felt intimidating. Even with years of tech experience, starting from zero and setting up each component myself looked like a big task. But as I went through the steps, I realized that building my cluster from the ground up gave me a deep understanding of Kubernetes. This story will show you exactly how I did it. I followed a hands-on path using kubeadm and set this up across a few cloud VMs, but the same process works on any Linux servers.

Disclosure: This content was produced with AI technology support and may feature companies I have affiliations with.

I will walk you through everything I did: preparing each node, setting up the container engine, installing key Kubernetes tools, getting networking working, checking if everything is healthy, and launching my first app. If you want to get certified or really learn how Kubernetes ticks, I believe this step-by-step process is one of the best ways to do it.


Prerequisites and Environment Setup

Before I got started, I checked my setup:

  • I made sure to have at least two servers. I needed one control plane node (some folks call this the master), and at least one worker. I wanted to be practical, so I used two VMs.
  • I installed Ubuntu 22.04 on every VM. The master got at least 2 vCPUs and 2GB RAM (kubeadm's preflight checks refuse to initialize a control plane with fewer than 2 CPUs); the worker got 1 vCPU and 2GB RAM.
  • I gave each machine a static private IP. I checked to be sure my cluster’s pod network range would not overlap with these IPs.
  • I tested that all my nodes could “see” each other over the network across all needed Kubernetes ports.
  • I set up firewalls and cloud security so Kubernetes traffic (like port 6443 for the API and node ports 30000-32767) would not get blocked.
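To confirm ports are actually reachable without installing anything extra, I sometimes use bash's built-in /dev/tcp. This is just a sketch; the hosts and ports below are placeholders for my own layout, and you would run it from each node against its peers:

```shell
# Rough connectivity probe using bash's /dev/tcp (no extra tools needed).
# The IPs and ports are examples; substitute your own node addresses.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}

# API server and kubelet ports, for example:
check_port 127.0.0.1 6443
check_port 127.0.0.1 10250
```

A "closed" result before kubeadm runs is expected; what matters is that the checks succeed between nodes once the firewall rules are in place.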

I usually spin up my environments on Google Cloud or VirtualBox if I test locally, but honestly, any provider works fine, or even a couple of Raspberry Pis if you want.


Preparing the Nodes

Manually installing Kubernetes always starts with cleaning up and tuning your operating systems. I logged into each machine with a user that had sudo rights.

Set Hostnames and Hosts File

To keep things clear, I gave each node its own name.

sudo hostnamectl set-hostname k8s-master    # run on the control plane node
sudo hostnamectl set-hostname k8s-worker1   # run on the worker

Then, I edited /etc/hosts on every node and added lines like this so each node would resolve the others’ names.

10.0.0.10  k8s-master
10.0.0.11  k8s-worker1
10.0.0.12  k8s-worker2
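A quick way I double-check the hosts file afterwards is resolving each name with getent; the hostnames here match my example cluster, so adjust the list to yours:

```shell
# Verify that every cluster hostname resolves after editing /etc/hosts.
for host in k8s-master k8s-worker1 k8s-worker2; do
  if getent hosts "$host" >/dev/null; then
    echo "OK: $host"
  else
    echo "MISSING: $host"
  fi
done
```

Any "MISSING" line means that node's /etc/hosts entry still needs fixing.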

Disable Swap

I ran into issues the first time I forgot to do this. The kubelet refuses to start by default while swap is enabled.

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
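To confirm swap really stayed off (especially after a reboot), I count the entries in /proc/swaps; it always has a header line, so on a correctly prepared node the count below should be 0:

```shell
# /proc/swaps always contains one header line; anything beyond it is active swap.
active_swaps=$(($(wc -l < /proc/swaps) - 1))
echo "active swap devices: ${active_swaps}"
```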

Load Required Kernel Modules

Kubernetes needs some networking support to be enabled in the Linux kernel. I ran these commands:

sudo modprobe overlay
sudo modprobe br_netfilter

sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
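One thing the commands above miss: modprobe does not survive a reboot. The usual companion step (it is also what the official kubeadm install docs do) is listing the modules in /etc/modules-load.d/. The sketch below writes to a scratch file so it can run anywhere; on a real node, point it at /etc/modules-load.d/k8s.conf via sudo tee:

```shell
# Persist the required kernel modules across reboots.
# Scratch path for illustration; use /etc/modules-load.d/k8s.conf (sudo tee) for real.
MODULES_CONF=$(mktemp)
printf 'overlay\nbr_netfilter\n' > "$MODULES_CONF"
cat "$MODULES_CONF"
```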

Installing the Container Runtime

Kubernetes does not run containers itself. It needs a container runtime. These days, containerd is a safe default and works great.

Installing containerd

I set this up by running:

sudo apt-get update
sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

I learned the hard way to check the config file. In /etc/containerd/config.toml, I made sure this part existed:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
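Rather than hunting through the file by hand, the flag can be flipped with sed. The sketch below works on a scratch copy of the relevant lines so it is safe to try anywhere; on a real node I would run the same sed (with sudo and -i) against /etc/containerd/config.toml:

```shell
# Flip SystemdCgroup in a scratch copy of the containerd config.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$CONFIG"
grep 'SystemdCgroup' "$CONFIG"
```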

Then, to finish:

sudo systemctl restart containerd
sudo systemctl enable containerd

Remove Docker if Present

For me, Docker was pre-installed on some older VMs. Since Kubernetes dropped its built-in Docker integration (the dockershim) in v1.24 and now expects a CRI runtime like containerd or CRI-O, I removed Docker where it was present to avoid conflicts.
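On Debian/Ubuntu the cleanup boiled down to something like this. The package names are examples and vary by distro and by how Docker was originally installed, so treat it as a checklist rather than a recipe; anything flagged here should be removed (for example with sudo apt-get remove -y <pkg>):

```shell
# Check for distro-packaged Docker before relying on standalone containerd.
for pkg in docker.io docker-doc docker-compose podman-docker; do
  if dpkg -s "$pkg" >/dev/null 2>&1; then
    echo "REMOVE: $pkg"
  else
    echo "$pkg not installed"
  fi
done
```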


Installing kubeadm, kubelet, and kubectl

These three tools do the heavy lifting: kubeadm bootstraps the cluster, the kubelet runs workloads on every node, and kubectl is the admin CLI. Strictly speaking, kubectl is only needed wherever you manage the cluster from, but I installed all three everywhere for convenience.

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /usr/share/keyrings/kubernetes-archive-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet=1.33.2-1.1 kubeadm=1.33.2-1.1 kubectl=1.33.2-1.1

sudo apt-mark hold kubelet kubeadm kubectl

Initialize the Kubernetes Control Plane

The most crucial part happens on my k8s-master node. This is where the brains of the cluster come online.

Configure Network Settings

I ran this one more time on the control plane before starting:

sudo sysctl --system

Initialize with kubeadm

The pod network CIDR is important. It has to match what the CNI plugin expects, and it must not overlap with your node or service networks. Calico's default pool is 192.168.0.0/16, so I used:

sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --cri-socket=unix:///run/containerd/containerd.sock

The output showed me lots of details. It checked my system, created TLS certs, grabbed container images, and started the control plane. At the end, it gave me two things:

  • Exact steps to set up kubectl for my own user
  • A command that looks like kubeadm join ... for bringing worker nodes into the cluster

Configure kubectl

I set up kubectl as my normal (non-root) user so I did not have to log in as root every time.

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join Worker Nodes to the Cluster

On each worker machine, I pasted the full kubeadm join line that I got earlier.

sudo kubeadm join <MASTER_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>

Each node ran checks and then securely joined the cluster. If it failed, I found that double-checking firewalls or the join token did the trick.


Setting Up Cluster Networking (CNI)

After the workers joined, they showed up as "NotReady." It turns out Kubernetes needs a network (CNI) plugin before nodes report Ready and pods can talk to each other.

Installing Calico

I like Calico. It is solid and has good docs. On the control plane, I ran:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Sometimes I needed a different network CIDR, so I downloaded the YAML file, edited the CALICO_IPV4POOL_CIDR line, and then applied it.
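The edit itself is one sed away. Here is a sketch against a scratch snippet of the manifest (note that in recent Calico versions the CALICO_IPV4POOL_CIDR block ships commented out, so check the downloaded file first); the replacement CIDR 10.244.0.0/16 is just an example:

```shell
# Swap the pod CIDR in a scratch copy of the Calico manifest snippet.
# A real run would curl calico.yaml first and kubectl apply afterwards.
MANIFEST=$(mktemp)
cat > "$MANIFEST" <<'EOF'
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
EOF
sed -i 's#value: "192.168.0.0/16"#value: "10.244.0.0/16"#' "$MANIFEST"
grep -A1 CALICO_IPV4POOL_CIDR "$MANIFEST"
```

Whatever CIDR ends up here must be the same one passed to kubeadm init as --pod-network-cidr.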

Monitor Pod Status

I always kept an eye on the system pods to see when they were up and running:

kubectl get pods -n kube-system

When Calico pods and coredns were running, my nodes finally showed “Ready”:

kubectl get nodes

This moment always feels good.


(Optional) Deploy the Metrics Server

To see pod and node CPU or memory usage with kubectl top, I needed the metrics server. Setting this up was fast.

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
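One snag I hit on kubeadm clusters: the metrics-server pod can stay unready because the kubelets' serving certificates are not signed by the cluster CA. A common lab-only workaround (it skips TLS verification, so do not use it in production) is adding the --kubelet-insecure-tls flag:

```shell
# Lab-only workaround: let metrics-server skip kubelet cert verification.
kubectl patch -n kube-system deployment metrics-server --type=json \
  -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'
```

The cleaner fix is enabling serverTLSBootstrap so kubelet certificates are signed by the cluster CA.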

After a minute, I checked the pod status, then ran:

kubectl top nodes
kubectl top pods

Suddenly, I could see real-time stats for everything.


Deploy Your First Application

Once my cluster was running, I wanted to launch something real. I started with NGINX.

Example: Deploying NGINX

I created a deployment and exposed it using a NodePort.

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort

To get the port number, I ran:

kubectl get svc nginx

Then, I opened my browser and visited http://<WorkerNodeIP>:<NodePort>. When the NGINX welcome page appeared, I knew everything had worked.


Cluster Management Tips

  • I make sure kubeadm, kubelet, and kubectl match in version on all nodes.
  • I protect my environments by using secure SSH keys, updating firewalls, and locking down the admin.conf file.
  • For backup, I regularly save the /etc/kubernetes directory, mainly admin.conf and the PKI folder.
  • I always watch the health of my cluster using kubectl get pods -n kube-system and kubectl get nodes.
  • I read the Kubernetes docs before doing upgrades or changes to make sure I avoid problems.
  • If I want to manage the cluster from my laptop, I copy admin.conf from the control plane to ~/.kube/config on the laptop.
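For the backup tip, my routine boils down to a dated tar over /etc/kubernetes. The sketch below builds a stand-in directory so it runs anywhere; on a control plane node, set SRC=/etc/kubernetes and run the tar with sudo:

```shell
# Archive the cluster's kubeconfig and PKI material with a dated name.
SRC=$(mktemp -d)            # stand-in for /etc/kubernetes
mkdir -p "$SRC/pki"
echo "placeholder" > "$SRC/admin.conf"
echo "placeholder" > "$SRC/pki/ca.crt"

BACKUP="$(mktemp -d)/k8s-backup-$(date +%Y%m%d).tar.gz"
tar -czf "$BACKUP" -C "$SRC" .
tar -tzf "$BACKUP"
```

Restoring is the reverse: untar into /etc/kubernetes before re-running kubeadm or restarting the kubelet.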

At this point, I often find that keeping track of different cloud provider tools and learning paths can get overwhelming, especially when scaling up beyond personal projects. This is where platforms like Canvas Cloud AI can make a difference. It lets you experiment visually with cloud architectures (including Kubernetes) and offers guided hands-on scenarios, tailored templates, and side-by-side comparisons of services across AWS, Azure, GCP, and OCI. The free learning paths, cheat sheets, and embeddable resources are especially helpful both for new learners and for those expanding their cloud skills portfolio.


FAQ

How is manual Kubernetes installation different from using managed solutions?

In my experience, building Kubernetes by hand teaches you every layer of the system. With managed services (like GKE or EKS), or when using tools like minikube, all the hard stuff is already done. But when I face a real cluster issue, I am always glad I did a manual install at least once. It really helps with troubleshooting, understanding network plugins, and it gave me the confidence to pursue Kubernetes certifications.

Can I use Docker as my container runtime?

I used Docker for years, but Kubernetes removed its built-in Docker integration (the dockershim) in version 1.24. Now I use containerd for every new cluster. If you really need Docker, you must run an external adapter called cri-dockerd, but I recommend switching to containerd or CRI-O. It works better and keeps things simple.

What if I lose my Kubernetes ‘kubeadm join’ command?

This happened to me when I closed my terminal too early. Not a problem! On the control plane, I ran:

kubeadm token create --print-join-command

That gave me a fresh join command with a new token for the next worker node.

My nodes stay in NotReady status after joining. Why?

Every time my nodes were stuck as "NotReady," it was almost always because networking was not configured yet. I double-checked that I installed the CNI plugin (like Calico) from the control plane node. I also checked firewalls to make sure pod and node ports were open.


By going through this manual Kubernetes cluster deployment, I gained real knowledge of Kubernetes’ inner workings. It made troubleshooting less scary, and I now feel prepared to handle more advanced or production setups. I really recommend doing it at least once for anyone working with Kubernetes.
