<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Duncan</title>
    <description>The latest articles on DEV Community by Duncan (@alchemicduncan).</description>
    <link>https://dev.to/alchemicduncan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2666802%2Fe9d83bd9-11a4-4dc2-aec4-7ec1d58a394a.jpg</url>
      <title>DEV Community: Duncan</title>
      <link>https://dev.to/alchemicduncan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alchemicduncan"/>
    <language>en</language>
    <item>
      <title>Misadventures in Kubernetes: Autoscaling Workers</title>
      <dc:creator>Duncan</dc:creator>
      <pubDate>Sun, 10 May 2026 01:31:11 +0000</pubDate>
      <link>https://dev.to/alchemicduncan/misadventures-in-kubernetes-autoscaling-workers-2p8o</link>
      <guid>https://dev.to/alchemicduncan/misadventures-in-kubernetes-autoscaling-workers-2p8o</guid>
      <description>&lt;p&gt;So we’ve already done a few things for setting up our own custom cluster. We’ve manually configured a Kubernetes Control Plane and joined worker nodes by hand. While that was a great way for us to learn the components, let’s be honest: setting up every server by hand is just not scalable for a real production environment. Or for creating a more resilient cluster as well! Really the main issue boils down to how we set up the initial pool of worker nodes.&lt;/p&gt;

&lt;p&gt;If the setup I’m describing isn’t familiar, you should first read our &lt;a href="https://dev.to/@alchemicduncan/kubernetes-the-kinda-hard-way-creating-workers-45c493715547"&gt;prior post&lt;/a&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem with Manual Nodes
&lt;/h2&gt;

&lt;p&gt;Right now, our cluster is static, and honestly, it’s a bit of a liability. If worker-1 decides to take an unscheduled vacation and crashes, it’s just gone. Our capacity takes a hit, and we stay a node short until someone notices and manually provisions a replacement. In a modern setup, having to SSH into every new VM just to run a join command isn’t just tedious, it’s a bottleneck that keeps us from scaling when it actually matters. It also keeps us from getting the kind of resilience and elasticity we expected Kubernetes to give us in the first place, no!?&lt;/p&gt;

&lt;p&gt;There are a few things that we need our cluster to be able to do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic Joining: New nodes should just join the cluster the moment they boot up, no human needed.&lt;/li&gt;
&lt;li&gt;Self-Healing: If a node dies, the system should recognize the loss and automatically spin up a healthy replacement.&lt;/li&gt;
&lt;li&gt;Smart Scaling: The cluster needs to get bigger when the load increases and shrink back down when things quieten down to save money.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Startup Scripts
&lt;/h2&gt;

&lt;p&gt;The key to our automation is ensuring a new VM joins the cluster automatically when it boots. We can’t be there to SSH in and run kubeadm join every time.&lt;/p&gt;

&lt;p&gt;To achieve this, we will use a GCP Startup Script to run the join command for us.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Generate a Permanent Token
&lt;/h3&gt;

&lt;p&gt;Standard kubeadm tokens expire after 24 hours. For an autoscaling group that might keep creating nodes for months, we need a token that never expires.&lt;/p&gt;

&lt;p&gt;On your Control Plane, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubeadm token create &lt;span class="nt"&gt;--print-join-command&lt;/span&gt; &lt;span class="nt"&gt;--ttl&lt;/span&gt; 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the output command, as you’ll need it for the next step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Create an Instance Template
&lt;/h3&gt;

&lt;p&gt;An Instance Template tells GCP how to build a VM by specifying the image, machine type, and scripts to run. Note that the &lt;em&gt;k8s-node-family&lt;/em&gt; referenced here is a custom image we built in the previous part of this series.&lt;/p&gt;

&lt;p&gt;We use the --metadata startup-script flag to inject our join command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute instance-templates create k8s-worker-template &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--image-family&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;k8s-node-family &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--machine-type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;e2-standard-2 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;k8s-worker &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--metadata&lt;/span&gt; startup-script&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'#! /bin/bash &amp;lt;PASTE_YOUR_JOIN_COMMAND_HERE&amp;gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
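
&lt;p&gt;For reference, here’s a rough sketch of what that startup script looks like once the join command is pasted in. The IP, token, and hash are placeholders; use the real output from Step 1 (GCE startup scripts already run as root, so no sudo is needed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;#! /bin/bash
# Placeholder sketch: replace the line below with the real output of
# `kubeadm token create --print-join-command --ttl 0`
kubeadm join 10.128.0.x:6443 --token &amp;lt;token&amp;gt; --discovery-token-ca-cert-hash sha256:&amp;lt;hash&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;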



&lt;h3&gt;
  
  
  Step 3: Create the Managed Instance Group (MIG)
&lt;/h3&gt;

&lt;p&gt;We will create a Regional MIG. This means GCP will spread our nodes across multiple zones, such as us-central1-a, b, and c, for high availability.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute instance-groups managed create k8s-worker-mig &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--template&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;k8s-worker-template &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GCP will immediately spin up 1 node. It will boot, run the startup script, and join your cluster automatically.&lt;/p&gt;
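
&lt;p&gt;If you want to see that happen, hop onto the Control Plane (or use a local kubectl) and watch the new node register; MIG instances get auto-generated names, so expect something like k8s-worker-mig-xxxx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Watch for the MIG-created instance to register and go Ready
kubectl get nodes -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;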

&lt;h3&gt;
  
  
  Step 4: Enable Autoscaling
&lt;/h3&gt;

&lt;p&gt;Now we tell GCP to watch the CPU usage of these nodes. If the average CPU usage exceeds 60%, it will add more nodes for us (up to 5).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute instance-groups managed set-autoscaling k8s-worker-mig &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--max-num-replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--min-num-replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--target-cpu-utilization&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.60 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
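
&lt;p&gt;As a quick sanity check (optional), describing the MIG shows the autoscaling policy we just attached:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute instance-groups managed describe k8s-worker-mig \
  --region=us-central1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;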



&lt;h2&gt;
  
  
  Stress Testing the Autoscaler
&lt;/h2&gt;

&lt;p&gt;Let’s prove it works. We’ll use a local kubectl connection to create artificial load.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Create a Load Generator:&lt;/strong&gt; We’ll deploy a simple busybox container that does nothing but run an infinite loop to burn CPU.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create deployment load-generator &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;busybox &lt;span class="nt"&gt;--&lt;/span&gt; /bin/sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"while true; do :; done"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. &lt;strong&gt;Request CPU:&lt;/strong&gt; This step is absolutely critical. We must tell Kubernetes exactly how much CPU each pod requires; without these resource requests, the node is never considered to be at capacity, and the scale-up behaviour we want to observe never triggers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;set &lt;/span&gt;resources deployment load-generator &lt;span class="nt"&gt;--requests&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;cpu&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;200m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. &lt;strong&gt;Scale the Load:&lt;/strong&gt; Scale the deployment to 20 replicas.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl scale deployment load-generator &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4. &lt;strong&gt;Watch the Magic:&lt;/strong&gt; Open two terminal windows. In one, watch your nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes &lt;span class="nt"&gt;-w&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the other, watch GCP instances:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute instance-groups managed list-instances k8s-worker-mig &lt;span class="nt"&gt;--region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see the single node fill up, pods go into Pending state, and the GCP Autoscaler provision new VMs to handle the load.&lt;/p&gt;
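
&lt;p&gt;Once you’ve watched the scale-up happen, you can peek at the pods still waiting for capacity and then tear the experiment down so the group can shrink back to one node (a quick illustrative clean-up, nothing more):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List pods still waiting for capacity
kubectl get pods --field-selector=status.phase=Pending

# Clean up; the MIG scales back down once CPU usage drops
kubectl delete deployment load-generator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;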

&lt;h2&gt;
  
  
  The Easy Way (GKE)
&lt;/h2&gt;

&lt;p&gt;It’s always worth pausing to appreciate just how much effort we put into this. If you were using Google Kubernetes Engine (GKE), this entire process could have been replaced by a single “easy button” command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud container clusters create k8s-easy-cluster &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--zone&lt;/span&gt; us-central1-a &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--num-nodes&lt;/span&gt; 3 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--machine-type&lt;/span&gt; e2-medium
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While GKE is powerful, understanding the “hard way” makes us better operators because we know exactly which components to investigate when things go wrong.&lt;/p&gt;

&lt;p&gt;Ready for the next challenge? Follow along in the next part of the series: upgrading your control plane without downtime!&lt;/p&gt;

&lt;h2&gt;
  
  
  Are We Done? (Or Just Getting Started?)
&lt;/h2&gt;

&lt;p&gt;At this stage, we have a functional Kubernetes cluster on Compute Engine that includes self-healing and auto-scaling capabilities. This setup provides control over the operating system, kernel, and networking without GKE management fees, demonstrating that automation can be integrated into a manual build. However, this configuration is merely the foundation for a much broader architectural exploration. What are some things that you might want this cluster to do that it doesn’t already?&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Deeper
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The Original Blueprint: This entire series is inspired by Kelsey Hightower’s foundational guide, &lt;a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="noopener noreferrer"&gt;Kubernetes The Hard Way&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Official Documentation: Get familiar with the source of truth for all components: &lt;a href="https://kubernetes.io/docs/" rel="noopener noreferrer"&gt;Kubernetes.io Documentation&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Official GKE Documentation: Access the complete guide for &lt;a href="https://cloud.google.com/kubernetes-engine/docs?utm_campaign=CDR_0x586bd5d5_default_b479225639&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Google Kubernetes Engine (GKE) Documentation&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>gcp</category>
      <category>devops</category>
    </item>
    <item>
      <title>Misadventures in Kubernetes: Creating Workers</title>
      <dc:creator>Duncan</dc:creator>
      <pubDate>Sun, 10 May 2026 01:31:00 +0000</pubDate>
      <link>https://dev.to/alchemicduncan/misadventures-in-kubernetes-creating-workers-29bb</link>
      <guid>https://dev.to/alchemicduncan/misadventures-in-kubernetes-creating-workers-29bb</guid>
      <description>&lt;p&gt;So we have built a control plan from scratch. That’s kinda useful and was also kinda hard to make? But, really what we want to make is a proper, functioning, multi-node cluster that can actually run our containerized applications. A control plane by itself is just the brain; we need the muscle (the worker nodes) to do the actual work.&lt;/p&gt;

&lt;p&gt;When we set up the Control Plane, we had to manually configure every single detail of the seed VM, line by line. While that was an important exercise in understanding the components, trying to repeat those steps for 10, 50, or 100 worker nodes would quickly become a nightmare. This part of the series is about making the process scalable by creating a “Golden Image” from our pre-configured machine, allowing us to rapidly provision identical workers and efficiently join them to our existing cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Golden Image
&lt;/h2&gt;

&lt;p&gt;The core principle here is to stop doing repetitive manual configuration. We already spent the time getting the operating system tuned exactly right when we set up the control node. Remember all those steps? We loaded the overlay and br_netfilter kernel modules, tweaked the network bridge settings for iptables, and, critically, disabled swap entirely because the kubelet won’t run otherwise. We also installed containerd and configured it to use systemd as the cgroup driver.&lt;/p&gt;

&lt;p&gt;That entire stack of prerequisites, the OS configuration, and the necessary Kubernetes binaries (kubeadm, kubelet, kubectl) are now perfectly baked into our k8s-seed VM. A &lt;strong&gt;Golden Image&lt;/strong&gt; is simply a frozen, ready-to-use snapshot of that perfect setup. Instead of repeating those steps on every new node, we use this image as the template to rapidly spin up as many identical worker nodes as we need. This moves us from a manual, configuration-based process to an automated, image-based provisioning process, which is the foundational step toward true cloud-native infrastructure.&lt;/p&gt;
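
&lt;p&gt;Before freezing anything, it doesn’t hurt to sanity-check the seed VM one last time. This is just an illustrative checklist, not part of the build itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# On k8s-seed: confirm the prerequisites are baked in
swapon --show                           # no output means swap is off
lsmod | grep -E 'overlay|br_netfilter'  # kernel modules loaded
systemctl is-active containerd          # should print "active"
kubeadm version -o short                # binaries installed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;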

&lt;h2&gt;
  
  
  Step 1: Generalize the Seed Instance
&lt;/h2&gt;

&lt;p&gt;Before we can use our k8s-seed VM as a template, we need to “seal” it. Every running Linux machine has unique identifiers — a machine ID and specific network logs. If we duplicate these across multiple nodes, the Kubernetes Control Plane will be confused, seeing multiple machines with the same identity. We need to clear these unique details so that when a new VM boots from the image, it generates its own fresh identity and network configuration.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;SSH into the seed machine:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute ssh k8s-seed &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. &lt;strong&gt;Clean up unique identifiers:&lt;/strong&gt; Run the following command to remove the machine ID and reset cloud-init logs. This ensures that when new VMs boot from this image, they generate their own fresh identity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;cloud-init clean &lt;span class="nt"&gt;--seed&lt;/span&gt; &lt;span class="nt"&gt;--logs&lt;/span&gt; &lt;span class="nt"&gt;--machine-id&lt;/span&gt;
&lt;span class="nb"&gt;exit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. &lt;strong&gt;Stop the instance:&lt;/strong&gt; Back on your local machine, stop the VM so we can snapshot its disk.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute instances stop k8s-seed &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Create the Custom Image
&lt;/h2&gt;

&lt;p&gt;With the seed machine prepped and stopped, we now create a Google Compute Engine image from that disk. This image, k8s-node-image-v1, will serve as the foundation for every node in our cluster, whether they eventually become a control plane member or a worker.&lt;/p&gt;

&lt;p&gt;Run the following command to create the image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute images create k8s-node-image-v1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-disk&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;k8s-seed &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-disk-zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--family&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;k8s-node-family
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; We added the --family flag so we can easily reference the “latest” version of this image later without needing to know the exact version name. This is a great cloud-native practice for image management!&lt;/p&gt;
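
&lt;p&gt;As a quick illustration of why the family matters, you can ask GCP which image the family currently resolves to; if you ever bake a hypothetical k8s-node-image-v2 into the same family, this command (and any template that references the family) would start pointing at it automatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute images describe-from-family k8s-node-family
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;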

&lt;h2&gt;
  
  
  Step 3: Provision Worker Nodes
&lt;/h2&gt;

&lt;p&gt;Now for the easy part. Because the heavy lifting of configuration is done, spinning up new workers is quick and painless. We can create two identical worker nodes, pre-loaded with all the necessary Kubernetes prerequisites. This process takes minutes, a stark contrast to the hours spent manually configuring the seed VM when we set up the control node.&lt;/p&gt;

&lt;p&gt;Run these commands to create worker-1 and worker-2 using the image family we just created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute instances create worker-1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--image-family&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;k8s-node-family &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;k8s-worker

gcloud compute instances create worker-2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--image-family&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;k8s-node-family &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;k8s-worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Join the Cluster
&lt;/h2&gt;

&lt;p&gt;Your worker VMs are running, but they are just generic machines with Kubernetes components installed; they don’t know they belong to a cluster yet. This is where the magic of the kubeadm join command comes in. The worker’s kubelet agent needs to securely authenticate and register with the Control Plane’s API server. The join command provides a bootstrap token that authenticates the new node and a CA certificate hash that lets the worker verify it is talking to the right control plane, effectively making it a true member of the cluster.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Locate your join command:&lt;/strong&gt; Find the kubeadm join command you saved from the output of kubeadm init when we set up the control node. It looks something like the first block below.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Join the workers:&lt;/strong&gt; SSH into each worker and run that command with sudo, as shown in the second block.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubeadm &lt;span class="nb"&gt;join &lt;/span&gt;10.128.0.x:6443 &lt;span class="nt"&gt;--token&lt;/span&gt; &amp;lt;token&amp;gt; &lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:&amp;lt;&lt;span class="nb"&gt;hash&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. &lt;strong&gt;Join worker-1, then worker-2:&lt;/strong&gt; SSH in, run your join command with sudo, and then repeat the same steps for worker-2.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute ssh worker-1 &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a
&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm &lt;span class="nb"&gt;join&lt;/span&gt; ... &lt;span class="c"&gt;# Paste your command here&lt;/span&gt;
&lt;span class="nb"&gt;exit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Verify the Cluster
&lt;/h2&gt;

&lt;p&gt;Head back to your &lt;strong&gt;Control Plane&lt;/strong&gt; to verify that everyone has checked in. This is the satisfying moment where all that hard work pays off and you see your multi-node cluster come alive.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output similar to this, with all nodes showing a Ready status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME            STATUS   ROLES           AGE   VERSION
control-plane   Ready    control-plane   10m   v1.35.0
worker-1        Ready    &amp;lt;none&amp;gt;          2m    v1.35.0
worker-2        Ready    &amp;lt;none&amp;gt;          1m    v1.35.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Easy Way (GKE)
&lt;/h2&gt;

&lt;p&gt;It’s always worth pausing to appreciate just how much effort we put into this. Throughout this series, we’ve manually provisioned VMs, installed binaries, configured systemd services, and joined nodes — all to get a basic Kubernetes cluster running.&lt;/p&gt;

&lt;p&gt;If you were using &lt;strong&gt;Google Kubernetes Engine (GKE)&lt;/strong&gt;, this entire process (for both the control plane and worker nodes) could have been replaced by a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud container clusters create k8s-easy-cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--zone&lt;/span&gt; us-central1-a &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--num-nodes&lt;/span&gt; 3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--machine-type&lt;/span&gt; e2-medium
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What’s Next?
&lt;/h2&gt;

&lt;p&gt;You now have a multi-node cluster, which is a huge win. But there’s a big problem: your cluster is still static. If worker-1 crashes, it’s gone forever and your capacity is reduced until you manually replace it. If your application load increases, you have to manually provision worker-3, SSH in, and run the kubeadm join command. Future-proofing your infrastructure means moving beyond these manual interventions.&lt;/p&gt;

&lt;p&gt;To achieve a truly “cloud-native” and resilient cluster, we need to automate the lifecycle of our worker nodes, ensuring they can be recreated and scaled without manual effort. That’s something worth thinking about, and something we just might touch on in a future post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Deeper
&lt;/h2&gt;

&lt;p&gt;The journey through “Kubernetes the Kinda Hard Way” is fundamentally about building foundational knowledge. Ready to solidify your understanding and tackle advanced concepts? Check out these resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Original Blueprint:&lt;/strong&gt; This entire series is inspired by Kelsey Hightower’s foundational guide, &lt;a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="noopener noreferrer"&gt;&lt;strong&gt;Kubernetes The Hard Way&lt;/strong&gt;&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Official Documentation:&lt;/strong&gt; Get familiar with the source of truth for all components: &lt;a href="https://kubernetes.io/docs/" rel="noopener noreferrer"&gt;&lt;strong&gt;Kubernetes.io Documentation&lt;/strong&gt;&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>gcp</category>
      <category>devops</category>
    </item>
    <item>
      <title>Misadventures in Kubernetes: Provisioning the Control Plane</title>
      <dc:creator>Duncan</dc:creator>
      <pubDate>Sun, 10 May 2026 01:30:29 +0000</pubDate>
      <link>https://dev.to/alchemicduncan/misadventures-in-kubernetes-provisioning-the-control-plane-12g5</link>
      <guid>https://dev.to/alchemicduncan/misadventures-in-kubernetes-provisioning-the-control-plane-12g5</guid>
      <description>&lt;p&gt;When using managed platforms like Google Kubernetes Engine (GKE), there are many things that you don’t have to worry about that it takes care of for you! But have you ever paused to consider what machinery operates beneath that simple surface? Perhaps wondered how you might be able to do it on your own, or how to at least go about trying to do it?&lt;/p&gt;

&lt;p&gt;My personal motivation for this project was simple: I wanted to give this a try myself to truly understand Kubernetes at its core. Moving past the abstraction of managed services, I set out to peel back the layers and inspect the core components. This approach involves setting up standard Google Compute Engine (GCE) Virtual Machines, manually installing and configuring every component, and wiring the networking together ourselves. This deliberate, hands-on process helps build the foundational knowledge necessary to effectively troubleshoot, optimize, and better understand our own clusters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we dive in, ensure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Google Cloud Platform (GCP) account.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://docs.cloud.google.com/sdk/docs/install-sdk" rel="noopener noreferrer"&gt;gcloud CLI&lt;/a&gt; installed and authenticated on your local machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Provision the Base Instance
&lt;/h2&gt;

&lt;p&gt;We start by creating a “seed” or a base instance. This VM serves as our foundation where we install all necessary software, providing a clear template for how the Kubernetes nodes/VMs are constructed. The E2 series is often chosen because it offers the most cost-optimized VMs on GCP, providing a performance-to-cost balance suitable for a Kubernetes control plane. We are using the e2-standard-2 machine type because Kubernetes requires at least 2 vCPUs and 2GB of RAM to run comfortably.&lt;/p&gt;

&lt;p&gt;Run the following command to create your VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute instances create k8s-seed &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--machine-type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;e2-standard-2 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image-project&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ubuntu-os-cloud &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image-family&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ubuntu-2204-lts &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--boot-disk-size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;50GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once created, SSH into the machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute ssh k8s-seed &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Configure the OS
&lt;/h2&gt;

&lt;p&gt;Kubernetes has specific requirements for the underlying Linux OS. We need to load specific kernel modules and tweak network settings to allow Kubernetes to manipulate traffic correctly.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Load Kernel Modules
&lt;/h3&gt;

&lt;p&gt;These modules allow Kubernetes to manipulate network traffic for Pods and Services.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe overlay
&lt;span class="nb"&gt;sudo &lt;/span&gt;modprobe br_netfilter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Network Bridge Settings
&lt;/h3&gt;

&lt;p&gt;Ensure bridged traffic is passed to iptables for correct filtering.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;sysctl &lt;span class="nt"&gt;--system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Disable Swap
&lt;/h3&gt;

&lt;p&gt;This is critical. The Kubelet will fail to start if swap is enabled.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;swapoff &lt;span class="nt"&gt;-a&lt;/span&gt;
&lt;span class="c"&gt;# Edit fstab to prevent swap from turning on after reboot&lt;/span&gt;
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/ swap / s/^\(.*\)$/#\1/g'&lt;/span&gt; /etc/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Install the Runtime (Containerd)
&lt;/h2&gt;

&lt;p&gt;Kubernetes needs a container runtime to launch pods. We will use containerd.&lt;/p&gt;

&lt;p&gt;We choose containerd because it is the lightweight, industry-standard core container runtime. Most Kubernetes platforms now default to containerd, and Kubernetes removed its built-in Docker Engine integration (dockershim) in v1.24.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Add Docker’s Apt Repository
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; ca-certificates curl gnupg
&lt;span class="nb"&gt;sudo install&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 0755 &lt;span class="nt"&gt;-d&lt;/span&gt; /etc/apt/keyrings
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://download.docker.com/linux/ubuntu/gpg | &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/docker.gpg
&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;a+r /etc/apt/keyrings/docker.gpg

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"deb [arch=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;dpkg &lt;span class="nt"&gt;--print-architecture&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; /etc/os-release &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$VERSION_CODENAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; stable"&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/docker.list &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Install Containerd
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; containerd.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Configure Systemd Cgroups
&lt;/h3&gt;

&lt;p&gt;Kubernetes recommends using systemd as the cgroup driver.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/containerd
&lt;span class="nb"&gt;sudo &lt;/span&gt;containerd config default | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/containerd/config.toml
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/SystemdCgroup = false/SystemdCgroup = true/'&lt;/span&gt; /etc/containerd/config.toml
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Install Kubernetes Tools
&lt;/h2&gt;

&lt;p&gt;Now we install the “big three”:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kubeadm: The bootstrapper.&lt;/li&gt;
&lt;li&gt;kubelet: The node agent.&lt;/li&gt;
&lt;li&gt;kubectl: The CLI.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add Kubernetes repo&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; apt-transport-https ca-certificates curl gpg

curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://pkgs.k8s.io/core:/stable:/v1.35/deb/Release.key | &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/kubernetes-apt-keyring.gpg

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /'&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/kubernetes.list

&lt;span class="c"&gt;# Install&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; kubelet kubeadm kubectl
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-mark hold kubelet kubeadm kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; We use apt-mark hold to prevent automatic updates from breaking the cluster.&lt;/p&gt;
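
&lt;p&gt;A quick, optional check that the hold took effect and that the tools are actually installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-mark showhold        # should list kubeadm, kubectl, kubelet
kubeadm version -o short
kubectl version --client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;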

&lt;h2&gt;
  
  
  Step 5: Initialize the Cluster
&lt;/h2&gt;

&lt;p&gt;At this point, our “seed” machine is fully prepped. We will use it now to initialize the Control Plane.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Run Init
&lt;/h3&gt;

&lt;p&gt;We specify a pod network CIDR that is compatible with our chosen networking plugin (Calico).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm init &lt;span class="nt"&gt;--pod-network-cidr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;192.168.0.0/16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
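
&lt;p&gt;The tail end of the init output prints a kubeadm join command; copy it somewhere safe, because that is how worker nodes will authenticate to this control plane later in the series. It looks roughly like this (the IP, token, and hash are placeholders for whatever your init run prints):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Printed at the end of kubeadm init; save it for joining workers
kubeadm join 10.128.0.x:6443 --token &amp;lt;token&amp;gt; \
    --discovery-token-ca-cert-hash sha256:&amp;lt;hash&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;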



&lt;h3&gt;
  
  
  2. Configure Kubectl
&lt;/h3&gt;

&lt;p&gt;To run commands against your new cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 6: Install Networking (Calico)
&lt;/h2&gt;

&lt;p&gt;Nodes cannot communicate until a Container Network Interface (CNI) is installed. We’ll use Calico.&lt;/p&gt;

&lt;p&gt;Calico offers high-performance networking (it can run without encapsulation or overlays) and provides a distributed firewall for flexible network policy enforcement.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run kubectl get nodes and within a minute, you should see your node transition to Ready.&lt;/p&gt;
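
&lt;p&gt;The output should look something like this once the Calico pods are up (the node name is your VM’s hostname, and the exact version depends on what apt installed from the v1.35 repo):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME       STATUS   ROLES           AGE   VERSION
k8s-seed   Ready    control-plane   5m    v1.35.x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;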

&lt;h2&gt;
  
  
  What if I’d Just Used GKE?
&lt;/h2&gt;

&lt;p&gt;To appreciate the work we just did, it’s worth noting that you could have achieved a fully managed, production-hardened equivalent with a single Google Kubernetes Engine (GKE) command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud container clusters create k8s-easy &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--zone&lt;/span&gt; us-central1-a &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--machine-type&lt;/span&gt; e2-standard-2 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--num-nodes&lt;/span&gt; 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This single command provisions the control plane (managed by Google), creates worker nodes, configures networking, and sets up authentication. But by building part of it manually, you now understand &lt;em&gt;how&lt;/em&gt; those components work together.&lt;/p&gt;
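
&lt;p&gt;If you do take the GKE route, the usual follow-up is to pull credentials into your local kubeconfig so kubectl talks to the new cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud container clusters get-credentials k8s-easy --zone us-central1-a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;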

&lt;h2&gt;
  
  
  What’s Next?
&lt;/h2&gt;

&lt;p&gt;Congratulations! You have successfully built a functional Kubernetes control plane from scratch.&lt;/p&gt;

&lt;p&gt;By building the cluster “the kinda hard way,” you’ve moved beyond being a user of managed services to understanding more about the components that orchestrate modern containerized applications. This foundation is key to troubleshooting, optimizing, and scaling production-grade Kubernetes environments. But what if you want more than just a control plane node? Think about how you might create worker nodes and join them to your existing cluster.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>gcp</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
