Kubernetes from Zero to Hero – Part 0

Why I'm Finally Learning This (And Why You Might Be Ready Too)


I'm Varun Verma—a full-stack developer with a backend bias, currently building microservices at scale. I've spent the last few years getting comfortable with containers, APIs, and the general chaos of distributed systems. This is the start of my Kubernetes learning journey, documented in real time.


Docker made me feel like a better engineer.

Spin up a container, mount my code, hit the endpoint, see the logs scroll in real time. Everything contained. Everything predictable. When something broke, I knew exactly where to look. One container, one process, one set of logs. Clean.

For a while, that was enough.

I built services that talked to Postgres, ran Redis sidecars for caching, hooked up RabbitMQ for async jobs. Docker Compose became my best friend—define the stack, run docker-compose up, and suddenly I had a working system on my laptop. No more "works on my machine" conversations. Just containers doing their thing.
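
For context, a stack like that looked roughly like this. It's a minimal sketch rather than my actual setup; the service names, images, ports, and credentials are placeholders:

```yaml
# docker-compose.yml -- minimal sketch of a typical stack
# (service names, images, ports, and credentials are placeholders)
services:
  api:
    build: .
    ports:
      - "8080:8080"                  # the one service exposed on my laptop
    environment:
      DATABASE_URL: "postgres://app:app@db:5432/app"
      REDIS_URL: "redis://cache:6379"
      AMQP_URL: "amqp://guest:guest@queue:5672"
    depends_on:
      - db
      - cache
      - queue
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
  cache:
    image: redis:7
  queue:
    image: rabbitmq:3-management
```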

But then the projects got bigger. And something started to shift.


It started innocently. A feature that touched three services instead of one. User request hits the API gateway, which calls the auth service, which validates tokens against a separate identity store, which then passes the request to the core service, which publishes an event to a message queue, which triggers a notification worker.

One feature. Six hops. Six sets of logs.

The first time I had to debug why a notification wasn't sending, I spent two hours just figuring out where the request died. Was it the gateway timing out? Did the auth service reject it silently? Did the message queue drop it? Did the worker crash and restart before I could see the error?

I had terminal windows tiled across my screen like some kind of debugging war room. docker logs -f gateway. docker logs -f auth. docker logs -f notifications. Tab. Tab. Tab. Scroll. Scroll. Nothing.

Eventually I found it—a misconfigured environment variable in the notification worker that pointed to the wrong queue. Twenty seconds to fix. Two hours to find.

That was the first crack.


Then came the networking headaches.

Services needed to find each other. In Docker Compose, you get service names as hostnames out of the box. Fine. But what happens when you want to run the same setup across multiple machines? What about when one service needs to expose a port externally while another stays internal? What about health checks and automatic restarts?

I found myself writing longer and longer Compose files. Environment variables everywhere. Port mappings accumulating like technical debt. depends_on chains that kind of worked, except they only waited for containers to start, not for the services inside them to actually be ready. So I added health checks. Then retry logic in my application code. Then backoff strategies.
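
The readiness workaround looked roughly like this. Again, a sketch rather than my real config; the pg_isready command and the timings are illustrative:

```yaml
# Making depends_on wait for actual readiness, not just container start
# (the pg_isready check and the timings here are illustrative)
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d app"]
      interval: 5s
      timeout: 3s
      retries: 5
  api:
    build: .
    depends_on:
      db:
        condition: service_healthy    # wait for the health check to pass
```

The condition: service_healthy form only exists in the long depends_on syntax, which is part of why those files kept growing.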

I was no longer writing features. I was writing infrastructure workarounds.

And the worst part? It was all manual. I was the orchestrator. I was the load balancer. I was the service mesh. Every decision about how services talked to each other lived in my head and in scattered YAML files across different repos.


The breaking point came during a production incident at work.

We had a service that processed payment webhooks. It was getting hammered—way more traffic than we expected. The container kept restarting because it ran out of memory. Every time it restarted, we lost the requests that were in flight. The payment provider had retry logic, thankfully, but we were still dropping events.

The fix wasn't complicated—scale up the service, maybe add a queue in front of it. But in our setup, "scaling" meant manually spinning up more containers, updating the load balancer config, making sure the new instances registered themselves, and hoping we didn't mess up the environment variables in the process.

We got through it. But that night, I sat there thinking: there has to be something better. Some system that just handles this. Knows when a service is struggling. Spins up more instances automatically. Routes traffic away from unhealthy ones. Restarts failed processes without human intervention.

Something that manages the chaos instead of me.


I'd heard of Kubernetes, of course. Everyone has. It was one of those words that floated around in tech conversations—alongside "service mesh" and "cloud native" and other terms I nodded along to without fully understanding.

For a long time, I actively avoided it. Kubernetes felt like overkill. A massive hammer for what seemed like small nails. I read blog posts about YAML sprawl and complexity explosion and thought, "My Docker Compose setup is fine. I don't need a whole orchestration platform."

And honestly? For simpler projects, that was true. Still is.

But the projects weren't staying simple. The microservices kept multiplying. The debugging sessions kept getting longer. The mental overhead of keeping track of which container talked to which service through which network on which port was eating into time I should have spent building actual features.

I wasn't looking for Kubernetes because it was trendy. I was looking for it because I was tired.


So here's where I am: starting from zero.

I don't know how pods work. I don't really understand what a deployment is or why I'd want a service versus an ingress. The control plane is a mystery. kubectl feels like learning a new language.

But I'm going to figure it out. Slowly. Deliberately. And I'm going to write about it as I go.

This isn't going to be a polished tutorial written by someone who already knows all the answers. It's going to be messy. I'll document the confusion, the mental models that turned out to be wrong, the "aha" moments when something finally clicked.

I'm not going to rush to YAML files and command-line incantations. I want to understand what Kubernetes is doing before I start copying configurations from Stack Overflow. Why does it exist? What problems was it designed to solve? How do the pieces fit together conceptually before I worry about how to wire them technically?

Here's my promise: Every post in this series will focus on understanding before implementation. No assumed knowledge. No skipped steps. If I struggled with something, I'll tell you. If a mental model helped me finally get it, I'll share it.

If you've been staring at your Docker Compose files, wondering why everything feels like it's held together with duct tape and environment variables—you're in the right place.

If you've been avoiding Kubernetes because it seems overwhelming—same. Let's figure it out together.

Part 1 is coming. We'll start with the basics: what even is an orchestrator, and why do distributed systems need one?

No YAML yet. Just concepts. Just understanding.

See you there.
