Okay, so a few days ago I sat down and told myself — today I will finally understand Kubernetes. Not just know the name, not just say "yes, yes, K8s" in a meeting and nod. Actually understand it.
I had been using Docker for a while. Build an image, run a container, push to a registry. That part I knew. But then people kept asking things like — "how do you handle scaling?" or "what happens when a container dies?" And I would just smile and say "hmm, good point."
So I started reading, set up a small cluster on my laptop, and spent a few hours just playing around. Here is what I learned, told the way I wish someone had told me.
The Problem Docker Doesn't Solve
Docker gives you a nice box to put your app in. You know how it works, you know what's inside, you can run it anywhere. Wonderful.
But the moment you move from your single laptop to actual servers — or even to a few containers talking to each other — questions start piling up:
Where do these containers actually run? What happens when one dies at 3am? How do they find each other on the network? How do I run three copies of the same service?
Kubernetes is basically the answer to all of those questions at once. It is an open-source platform that runs your containers, watches over them, restarts them when they die, and lets them talk to each other. You just tell it what you want, and it figures out the how.
Think of it as an operating system — but for your whole data center instead of one machine.
The Words You Need to Know
Before anything makes sense, there are a few terms. I was scared of these at first because they sound very technical. But they are actually simple ideas with fancy names.
| Term | What it means |
|---|---|
| Cluster | All the machines working together as one system — your "data center." |
| Node | A single machine (real or virtual) inside the cluster that actually runs your apps. |
| Pod | The smallest thing K8s manages. Usually one container. Treat it as temporary — it can die anytime. |
| Deployment | Says "I want 3 copies of this pod always running." Handles restarts and updates for you. |
| Service | A stable address for your pods. Even if pods come and go, the Service address stays the same. |
| Namespace | A folder inside your cluster. Useful for keeping dev, staging, prod separate. |
The Restaurant That Made It Click
I read a lot of explanations. The one that finally made sense to me was a restaurant comparison. Let me tell it my own way.
Imagine a restaurant. The whole restaurant is your Cluster. The manager sitting in the back — who decides which cook does what, who watches the orders, who keeps track of everything — that is the Control Plane. The actual cooking stations (grill, prep, fryer) are the Nodes. Each dish being cooked right now is a Pod.
Now here is the interesting part. When you order a "Chicken Burger" from the menu, you don't say "I want Chef Ramesh specifically to make it." You just say Chicken Burger. The menu item is always there, even if a cook leaves and a new one joins.
That menu item is the Service. It gives you a stable name to call, and it figures out which pod (cook) to send the work to. Even when pods die and new ones come up, the Service name stays the same.
Important: Never talk to a Pod directly by its IP address. Pods are temporary — their IP can change or disappear. Always use a Service as the stable middleman.
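To make that concrete, here is a minimal Service manifest. This is a sketch, not production config: the name and the `app: chicken-burger` label are made up to match the restaurant story, and it assumes your pods carry that label.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: chicken-burger    # the stable "menu item" name
spec:
  selector:
    app: chicken-burger   # routes to any pod carrying this label
  ports:
    - port: 80            # port the Service listens on
      targetPort: 8080    # port the container actually serves on
```

Any pod matching the selector becomes a backend. When pods die and new ones appear, the Service keeps routing to whatever currently matches, which is exactly why you call the Service and never a pod's IP.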
How Traffic Actually Flows
```
┌─────────────────────────────────────────────────────┐
│                       CLUSTER                       │
│  ┌──────────────┐     ┌───────────────────────────┐ │
│  │ Control Plane│────▶│       Worker Nodes        │ │
│  │  scheduler   │     │ ┌─────┐ ┌─────┐ ┌─────┐   │ │
│  │  etcd        │     │ │ Pod │ │ Pod │ │ Pod │   │ │
│  │  controllers │     │ └─────┘ └─────┘ └─────┘   │ │
│  └──────────────┘     └───────────────────────────┘ │
└─────────────────────────────────────────────────────┘

Client ──▶ Service ──▶ Pod(s) ──▶ Container(s)
```
Your request starts from outside, hits a Service, and the Service decides which Pod handles it. You never call a Pod's IP directly.
kubectl — The Remote Control
To talk to your cluster, you use a command-line tool called kubectl. Think of it as a remote control. You press buttons, and the cluster does things.
One thing that confused me early: kubectl cannot create a cluster. It can only talk to one that already exists. This is like — the TV remote can change channels, but it cannot build the TV.
To create a cluster on your laptop for learning, you use a separate tool. The two most popular ones are kind and minikube.
| Tool | What it does | Best for |
|---|---|---|
| `kind` | Runs K8s nodes as Docker containers. Very light. | Quick setup, CI, already using Docker |
| `minikube` | Spins up a local VM or Docker cluster with extra addons. | Built-in dashboard, more feature-rich for learning |
I used kind because it was the easiest to start. One command and you have a working cluster:
```shell
# Create a cluster named k8-learning
kind create cluster --name k8-learning

# Check what's in it
kubectl get nodes
kubectl get pods
```
When I ran kubectl get nodes, I saw only one node called k8-learning-control-plane. I panicked a little — "where are the other nodes?" But this is normal. With kind's default setup, you get one node that does everything. Fine for learning.
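If you later want something that looks more like a real multi-machine cluster, kind accepts a config file. A small sketch (the filename is my choice) that adds two worker nodes:

```yaml
# multi-node.yaml — one control plane plus two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

You pass it at creation time: `kind create cluster --name k8-learning --config multi-node.yaml`. For a first day, though, the single default node is plenty.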
Writing My First Pod
A Pod is just a YAML file describing what you want to run. Here is the simplest possible one:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  namespace: default
spec:
  containers:
    - name: app
      image: nginx:stable
```
Apply it and check on it:
```shell
kubectl apply -f pod.yaml
kubectl get pods
kubectl get pods -o wide
```
A few seconds later, your nginx container is running inside the cluster.
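When a pod does not come up, or you just want to see what it is doing, these are the two commands I kept reaching for (using the `hello-pod` name from above):

```shell
# Events, image pulls, restart reasons
kubectl describe pod hello-pod

# Whatever the container is printing to stdout
kubectl logs hello-pod
```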
The Namespace Trap: I kept running `kubectl get pods` and seeing nothing, even after creating pods. The reason was namespaces. If you create something in a namespace called `production` and forget to add `-n production` to your command, K8s looks in the default namespace instead and finds nothing. Always specify the namespace.
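The commands that got me out of that trap (`production` here is just an example namespace name):

```shell
# Look in a specific namespace
kubectl get pods -n production

# Look in every namespace at once
kubectl get pods -A

# Or set a default namespace so you stop forgetting
kubectl config set-context --current --namespace=production
```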
Don't Use Bare Pods. Use Deployments.
Here is a mistake I almost made. I was creating bare Pods and thinking "okay, I'm done." But bare Pods are not safe for real apps.
If a bare Pod dies, it just... stays dead. Nothing brings it back. You have to do it manually. That is terrible for any real service.
A Deployment is the right way. You tell it "I want 3 copies of this pod always running." If one dies, the Deployment notices and starts a new one. If you want to update the image, it does a rolling update — takes down the old ones slowly, brings up new ones, no downtime.
Bare Pods are for quick tests. Deployments are for anything you actually care about.
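Here is what that looks like for the same nginx image. A minimal sketch, with names of my own choosing; `replicas: 3` is the "always keep 3 copies" part:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 3                 # always keep 3 pods running
  selector:
    matchLabels:
      app: hello
  template:                   # the pod template the Deployment stamps out
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: app
          image: nginx:stable
```

Kill one of the three pods and the Deployment quietly creates a replacement. Change the image in the spec and re-apply, and it rolls the update out pod by pod.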
Mistakes I (Almost) Made
❌ Treating Pods like permanent things. They are not. They die, they get restarted, their IP changes. Never rely on a Pod being there.
❌ Calling Pod IPs directly. Pod IPs are temporary. Use a Service as a stable address. Always.
❌ Forgetting `-n` for namespaces. "Why can't I see my pod?!" Because you're looking in the wrong namespace.
❌ Putting too many containers in one Pod. One main container per Pod is the rule. Only combine containers if they truly need to share memory or files, like an app and a logging sidecar.
❌ Thinking kubectl creates clusters. It doesn't. `kind` or `minikube` creates the cluster. `kubectl` just talks to it.
Want to Try It Yourself?
1. Install the tools
On macOS: brew install kubectl and brew install kind. On Linux, follow the official docs — both have simple install scripts.
2. Create a cluster
```shell
kind create cluster --name k8-learning
```
Wait a minute. You now have a working Kubernetes cluster on your laptop.
3. Explore it
```shell
kubectl cluster-info
kubectl get nodes
kubectl get nodes -o wide
kubectl get pods
```
4. Run a pod
```shell
kubectl run nginx --image=nginx:stable --restart=Never
kubectl get pods -o wide
```
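To actually see nginx answer a request, you can forward a local port to the pod (8080 is an arbitrary local port; pick any free one):

```shell
kubectl port-forward pod/nginx 8080:80

# then, in another terminal:
curl http://localhost:8080
```

The curl should come back with nginx's default welcome page, served from inside your cluster.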
5. Clean up
```shell
kind delete cluster --name k8-learning
```
Everything gone, laptop happy.
What's Next
This was my first real day with Kubernetes. I went from "I know the name" to "I can actually run things and understand what is happening." That felt good.
The next things I want to learn: how Deployments handle rolling updates, how Services work in different modes (ClusterIP, NodePort, LoadBalancer), and eventually how to set up something close to a real production setup.
If you are a backend developer and you have been putting off learning this — don't. It is not as scary as it looks from outside. Set up kind, run a pod, break something, fix it. That is how it clicks.
Thank you for reading. 🙏
A small note: This is the beginning of a series where I write about things as I learn them — no expert voice, no pretending I know everything. Just a developer figuring things out and writing it down. Kubernetes is the current chapter. More blogs covering what I learn next will be coming soon — Deployments, Services, real configs, and whatever breaks along the way. Follow along if you are on a similar journey. 🚀