When using managed platforms like Google Kubernetes Engine (GKE), there is plenty you never have to worry about because the platform takes care of it for you! But have you ever paused to consider what machinery operates beneath that simple surface? Perhaps wondered how you might do it on your own, or at least how to go about trying?
My personal motivation for this project was simple: I wanted to try this myself to truly understand Kubernetes at its core. Moving past the abstraction of managed services, I set out to peel back the layers and inspect the individual components. This approach involves setting up standard Google Compute Engine (GCE) virtual machines, manually installing and configuring every component, and wiring the networking together ourselves. This deliberate, hands-on process builds the foundational knowledge needed to effectively troubleshoot, optimize, and better understand our own clusters.
Prerequisites
Before we dive in, ensure you have the following:
- A Google Cloud Platform (GCP) account.
- The gcloud CLI installed and authenticated on your local machine.
Step 1: Provision the Base Instance
We start by creating a “seed,” or base, instance. This VM serves as our foundation where we install all necessary software, providing a clear template for how the Kubernetes nodes/VMs are constructed. The E2 series is often chosen because it offers some of the most cost-optimized VMs on GCP, providing a performance-to-cost balance suitable for a Kubernetes control plane. We use the e2-standard-2 machine type (2 vCPUs, 8 GB of RAM) because kubeadm requires at least 2 vCPUs and 2 GB of RAM for a control-plane node to run comfortably.
Run the following command to create your VM:
gcloud compute instances create k8s-seed \
--zone=us-central1-a \
--machine-type=e2-standard-2 \
--image-project=ubuntu-os-cloud \
--image-family=ubuntu-2204-lts \
--boot-disk-size=50GB
Once created, SSH into the machine:
gcloud compute ssh k8s-seed --zone=us-central1-a
Step 2: Configure the OS
Kubernetes has specific requirements for the underlying Linux OS. We need to load specific kernel modules and tweak network settings to allow Kubernetes to manipulate traffic correctly.
1. Load Kernel Modules
These modules allow Kubernetes to manipulate network traffic for Pods and Services.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
2. Network Bridge Settings
Ensure bridged traffic is passed to iptables for correct filtering.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
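To confirm the settings took effect, you can read the values straight back out of /proc. This is just a quick spot check; note that the bridge keys only exist once br_netfilter is actually loaded:

```shell
# Spot-check the applied values; each should print 1 on the prepared VM
cat /proc/sys/net/ipv4/ip_forward

# The bridge keys appear only after br_netfilter is loaded
for f in /proc/sys/net/bridge/bridge-nf-call-iptables /proc/sys/net/bridge/bridge-nf-call-ip6tables; do
  [ -f "$f" ] && cat "$f" || echo "$f not present (is br_netfilter loaded?)"
done
```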
3. Disable Swap
This is critical. By default, the kubelet will refuse to start if swap is enabled.
sudo swapoff -a
# Edit fstab to prevent swap from turning on after reboot
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
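If you want to see exactly what that sed command does before pointing it at the real /etc/fstab, here is a safe dry run against a throwaway copy (the swap entry below is a made-up example):

```shell
# Build a fake fstab with a swap entry to demonstrate the edit safely
cat <<'EOF' > /tmp/fstab.demo
UUID=1234-abcd / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# Same substitution as above: comment out any line containing " swap "
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo

# The swap line now starts with '#' and is ignored at boot; the root line is untouched
cat /tmp/fstab.demo
```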
Step 3: Install the Runtime (Containerd)
Kubernetes needs a container runtime to launch pods. We will use containerd.
We choose containerd because it is the industry-standard core container runtime and is lightweight. Kubernetes now defaults to containerd; direct support for Docker Engine as a runtime (the dockershim) was removed from Kubernetes in v1.24.
1. Add Docker’s Apt Repository
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
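For reference, on the amd64 Ubuntu 22.04 image we created above (codename “jammy”), that repository line expands to roughly the following. This is purely illustrative, shown as a plain echo rather than something you need to run:

```shell
# Illustrative expansion: arch comes from dpkg, codename from /etc/os-release
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable"
```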
2. Install Containerd
sudo apt-get update
sudo apt-get install -y containerd.io
3. Configure Systemd Cgroups
Kubernetes recommends using systemd as the cgroup driver.
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
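The sed edit above flips a single key buried deep in containerd's default config. To see exactly what changes, here is the same substitution run against a minimal stand-in for the relevant TOML section (a sketch, not the full generated file):

```shell
# Minimal stand-in for the runc options section of containerd's default config
cat <<'EOF' > /tmp/containerd-demo.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# Same substitution as above
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/containerd-demo.toml

# The value now reads true
grep SystemdCgroup /tmp/containerd-demo.toml
```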
Step 4: Install Kubernetes Tools
Now we install the “big three”:
- kubeadm: The bootstrapper.
- kubelet: The node agent.
- kubectl: The CLI.
# Add Kubernetes repo
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.35/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
# Install
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Note: We use apt-mark hold to prevent automatic updates from breaking the cluster.
Step 5: Initialize the Cluster
At this point, our “seed” machine is fully prepped. We will use it now to initialize the Control Plane.
1. Run Init
We specify a pod network CIDR that is compatible with our chosen networking plugin (Calico).
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
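As an aside, the same setting can be expressed in a kubeadm configuration file, which scales better once you start customizing more of the cluster. The sketch below assumes the v1beta3 API and an arbitrary file name; run kubeadm config print init-defaults to check which apiVersion your kubeadm expects, then pass the file with sudo kubeadm init --config /tmp/kubeadm-config.yaml instead of the flag above:

```shell
# Write a minimal ClusterConfiguration equivalent to --pod-network-cidr=192.168.0.0/16
cat <<'EOF' > /tmp/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16
EOF
```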
2. Configure Kubectl
To run commands against your new cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Step 6: Install Networking (Calico)
The node will remain NotReady, and Pods cannot communicate, until a Container Network Interface (CNI) plugin is installed. We’ll use Calico.
Calico offers high-performance networking (it can run without encapsulation or overlays) and provides flexible network policy enforcement via a distributed firewall.
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml
Run kubectl get nodes; within a minute or so, you should see your node transition from NotReady to Ready.
What if I’d Just Used GKE?
To appreciate the work we just did, it’s worth noting that you could have achieved a fully managed, production-hardened equivalent with a single Google Kubernetes Engine (GKE) command:
gcloud container clusters create k8s-easy \
--zone us-central1-a \
--machine-type e2-standard-2 \
--num-nodes 3
This single command provisions the control plane (managed by Google), creates worker nodes, configures networking, and sets up authentication. But by building part of it manually, you now understand how those components work together.
What’s Next?
Congratulations! You have successfully built a functional Kubernetes control plane from scratch.
By building the cluster “the kinda hard way,” you’ve moved beyond being a user of managed services to understanding more about the components that orchestrate modern containerized applications. This foundation is key to troubleshooting, optimizing, and scaling production-grade Kubernetes environments. But what if you want more than a control-plane node? As a next step, think about how you might join worker nodes to your existing cluster.