Vivian Chiamaka Okose

Posted on • Originally published at vivianokose.hashnode.dev

I Ran 9 Kubernetes Labs on a Local KIND Cluster — Here Is Everything I Learned

By Vivian Chiamaka Okose | DevOps Engineer


If you have been putting off learning Kubernetes because it feels overwhelming, this post is for you.

This week I completed 9 hands-on labs covering the core building blocks of Kubernetes: Pods, ReplicaSets, Deployments, the Horizontal Pod Autoscaler, health probes, and all three Service types. I ran everything locally on a KIND (Kubernetes IN Docker) cluster inside WSL Ubuntu on Windows, with zero cloud costs.

Here is exactly what I did, what broke, and what I now understand that I did not before.


The Setup: KIND on WSL

Before I could run a single kubectl command, I needed a cluster. I chose KIND because it runs Kubernetes entirely inside Docker containers on your local machine. If you already have Docker, you are most of the way there.

# Install KIND
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Create the cluster
kind create cluster --name k8s-labs
kubectl get nodes

Within two minutes I had a real Kubernetes node showing Ready. That felt good.


Assignment 1: Pods, ReplicaSets, and Deployments

Lab 1: Your First Pod

A Pod is the atomic unit in Kubernetes. Everything else in the system is built around it. I created one in two ways.

The imperative way (fast, not repeatable):

kubectl run nginx-pod --image=nginx
kubectl get pods

The declarative way (YAML, production standard):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
kubectl apply -f nginx-pod.yaml
kubectl describe pod nginx-pod
kubectl logs nginx-pod
kubectl exec -it nginx-pod -- /bin/bash

The key insight: YAML is how you work in production because it is version-controllable, reviewable, and reproducible. The imperative method is fine for quick experiments but you would never use it to manage a real system.


Lab 2: ReplicaSets — the First Sign of Real Resilience

A single Pod has a problem: if it crashes or is deleted, it is gone. You would have to manually recreate it. That is not acceptable in production.

A ReplicaSet fixes this by ensuring a fixed number of Pod replicas are always running.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21.1

I applied this, confirmed 3 Pods were running, then deleted one:

kubectl delete pod nginx-replicaset-xxxxx
kubectl get pods

Before I could even blink, Kubernetes had already created a replacement. That moment is when auto-healing stops being a concept and becomes something you have actually seen.

I also scaled from 3 to 5 by changing replicas: 3 to replicas: 5 and reapplying. Two extra Pods appeared immediately.

The catch: ReplicaSets do not support rolling updates. That is where Deployments come in.


Lab 3: Deployments — Zero Downtime Updates

A Deployment manages a ReplicaSet and adds the features you actually need for production: rolling updates and rollback.

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0

This config means: spin up 1 new Pod before killing an old one, and never let the number of available Pods drop below the desired count. That is zero downtime.
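For context, here is roughly where that strategy block sits in a full Deployment manifest. This is a sketch assembled from the names used in this post (nginx-deployment, the app: nginx label, and the nginx:1.21.1 image), not the exact file from the lab:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  # Rolling-update settings live at the same level as replicas and selector
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow 1 extra Pod during the rollout
      maxUnavailable: 0  # never drop below the desired replica count
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21.1
          ports:
            - containerPort: 80
```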

To update the nginx image:

kubectl set image deployment/nginx-deployment nginx=nginx:1.23.0
kubectl rollout status deployment/nginx-deployment

To roll back instantly:

kubectl rollout undo deployment/nginx-deployment
kubectl rollout history deployment/nginx-deployment

The rollout history command showed me the revision log like a git blame for infrastructure. That is powerful.


Assignment 2: Auto-Scaling and Health Management

Lab 4: HPA — Let Kubernetes Decide How Many Pods You Need

The Horizontal Pod Autoscaler reads CPU (or memory) metrics and scales your Deployment up or down automatically.

For this to work on KIND, I needed to install the metrics-server with a flag for local clusters:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl patch deployment metrics-server -n kube-system --type=json \
  -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'

Then I created an HPA:

kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=5

And in a second terminal, I hammered it with a load generator:

kubectl run -i --tty load-generator --rm --image=busybox:1.28 \
  --restart=Never -- /bin/sh -c \
  "while sleep 0.01; do wget -q -O- http://php-apache; done"

Watching kubectl get hpa --watch while the load generator ran was one of those moments that makes DevOps genuinely fun. The Pod count went from 1 to 5. When I killed the load generator, it scaled back down on its own.

This is why autoscaling matters: you pay for what you use, and your app handles spikes without anyone manually touching it.
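The kubectl autoscale one-liner also has a declarative equivalent, which is what you would commit to version control. A sketch using the autoscaling/v2 API and the same php-apache Deployment; note the HPA can only compute CPU utilization if the target Deployment's containers declare resources.requests.cpu:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  # Which workload to scale
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50  # target 50% of the CPU request
```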


Lab 5: Readiness Probes — the Traffic Gate

A readiness probe answers the question: "Is this Pod ready to receive user traffic?"

readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 3

Without a readiness probe, Kubernetes sends traffic to a Pod the moment it starts, even if the application inside is still warming up. That causes 502 errors during deployments.

With a readiness probe, the Pod only enters the Service's endpoint list after passing the health check. You get clean deployments with no cold-start errors reaching users.
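For placement, the readinessProbe block from above goes on the container entry inside the Pod template, next to the image and ports. A minimal sketch, assuming the plain nginx image from the earlier labs:

```yaml
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      # Pod joins the Service's endpoints only after this check passes
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 5
        failureThreshold: 3
```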


Lab 6: Liveness Probes — the Self-Healing Mechanism

A liveness probe answers a different question: "Is this running container still healthy?"

livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 3

To see it in action, I deleted the nginx index.html file inside a running Pod:

kubectl exec -it <pod-name> -- rm /usr/share/nginx/html/index.html
kubectl get pods --watch

After 3 consecutive failed probes (30 seconds), Kubernetes restarted the container. The RESTARTS counter went from 0 to 1. No manual action required.

The real-world case for this: imagine an app that hits a deadlock or runs out of memory and becomes unresponsive but the process is still technically running. Without a liveness probe, that Pod sits there broken forever. With one, Kubernetes restarts it within minutes.
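For that deadlock scenario, an HTTP probe is not the only option. Kubernetes also supports exec-based liveness probes, which run a command inside the container. A sketch with a hypothetical /tmp/healthy file that the app would touch periodically while healthy (the file path and mechanism are illustrative, not from the lab):

```yaml
livenessProbe:
  exec:
    command:
      - cat
      - /tmp/healthy   # hypothetical file the app refreshes while it is healthy
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 3
```

If the app deadlocks and stops refreshing the file, cat eventually fails and Kubernetes restarts the container, just as the HTTP probe did when index.html went missing.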


Assignment 3: Services and Networking

This assignment was the one that changed how I think about Kubernetes networking.

Lab 7: ClusterIP — Stable Internal Communication

Pod IPs are ephemeral. Every time a Pod restarts or is replaced, it gets a new IP. If another service is hard-coding that IP, it breaks.

A ClusterIP Service solves this by providing:

  • A stable virtual IP that never changes
  • A stable DNS name (e.g. nginx-svc.default.svc.cluster.local)
  • Automatic load balancing across all Ready Pods

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP

I proved it works by launching a busybox Pod and making requests from inside the cluster:

kubectl run tester --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec -it tester -- sh -c "wget -qO- http://nginx-svc | head -5"
kubectl exec -it tester -- nslookup nginx-svc

The DNS lookup resolved to the ClusterIP and the request succeeded. That is microservice communication working exactly as it should.


Lab 8: NodePort — Opening the Door from Outside

NodePort opens a specific port (between 30000-32767) on every node, making your app reachable from outside the cluster.

spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
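Expanded into a full manifest, that fragment looks roughly like this. A sketch using the nginx-nodeport name from the port-forward command and assuming the same app: nginx selector as the ClusterIP lab:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx        # routes to the nginx Pods from the earlier labs
  ports:
    - port: 80        # the Service's internal port
      targetPort: 80  # the container port
      nodePort: 30080 # opened on every node, must be in 30000-32767
```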

On KIND, the node's IP is a Docker-internal address that is not directly reachable from the host, so I used port-forward instead:

kubectl port-forward svc/nginx-nodeport 8080:80

Opening http://localhost:8080 in the browser showed the NGINX welcome page. External access confirmed.
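An alternative to port-forward on KIND is to map the NodePort through to the host when you create the cluster, using extraPortMappings in the KIND config. This only works at cluster-creation time, not on an existing cluster; a sketch matching the nodePort 30080 used above:

```yaml
# kind-config.yaml (hypothetical filename)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080  # the NodePort inside the cluster
        hostPort: 30080       # exposed on the host machine
        protocol: TCP
```

Then create the cluster with kind create cluster --name k8s-labs --config kind-config.yaml, and http://localhost:30080 reaches the NodePort directly.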


Lab 9: LoadBalancer — the Cloud-Native Way to Go External

In a real cloud environment (Azure, AWS, GCP), a LoadBalancer Service automatically provisions a public IP address and routes internet traffic to your Pods. In KIND, there is no cloud behind it, so I used MetalLB to simulate this.

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml

After configuring an IP address pool from the Docker bridge subnet, I created a LoadBalancer Service and got an external IP assigned from that pool:
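The pool configuration itself looks roughly like this. A sketch assuming the Docker "kind" network uses the common 172.18.0.0/16 subnet (check yours with docker network inspect kind) and a range consistent with the 172.18.255.200 address in the output below:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-pool
  namespace: metallb-system
spec:
  addresses:
    # A slice of the Docker bridge subnet not used by KIND nodes
    - 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kind-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - kind-pool
```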

kubectl get svc nginx-loadbalancer
# NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
# nginx-loadbalancer   LoadBalancer   10.96.xxx.xxx  172.18.255.200   80:xxxxx/TCP

A curl against that external IP returned the NGINX welcome page. In Azure, that IP would be a real public internet address. MetalLB let me understand the behaviour without spending a penny on cloud resources.


What I Would Tell Anyone Starting Kubernetes

Start with Pods. Not because they are what you use in production (you use Deployments), but because understanding what a Pod is makes every other concept click faster.

Break things on purpose. Delete a Pod. Break a readiness probe. Kill a container. The fastest way to understand Kubernetes is to watch it recover from failure.

Health probes are not optional. Every production Deployment needs both a readiness probe and a liveness probe. Not having them is like deploying blind.

Services exist because Pod IPs are unreliable. Once you understand that one sentence, all three service types make sense immediately.

KIND is excellent for local learning. You get a real Kubernetes cluster with no cloud bill. The only gotchas are the metrics-server TLS flag and needing MetalLB for LoadBalancer behaviour.


What Is Next

Next up: Kubernetes Ingress, ConfigMaps, Secrets, and persistent storage. Each week I am writing up what I actually did, what broke, and what I learned from it.

If you are on a similar path, follow along. And if you are already deep into Kubernetes, I would love to know what concept took the longest to click for you.


Vivian Chiamaka Okose
DevOps Engineer
LinkedIn | GitHub
