Docker isn’t dead, Kubernetes didn’t “kill” it, and Swarm… well, let’s talk. Here’s the no-fluff guide to when you should (and shouldn’t) use each tool.

Remember when everyone on Twitter screamed “Kubernetes killed Docker!” back in 2022? Yeah, that was about as accurate as saying Dark Souls killed Minecraft. Different games, different bosses, different pain. What really happened: Kubernetes v1.24 quietly removed Dockershim, the shim layer that let it talk to Docker Engine. Since then, Kubernetes uses CRI runtimes like containerd or CRI-O under the hood.
But here’s the kicker: Docker is still very much alive, just playing a slightly different role. Think of it like your favorite band member who left the group but still writes the songs. Docker builds your images, containerd actually runs them, and Kubernetes… well, it’s the slightly overbearing manager scheduling every show, deciding who plays where, and sometimes canceling your tour if you mess up RBAC.
Meme moment (placeholder): Nobody: … Me deploying a static blog with full Kubernetes Ingress + HPA.
TLDR:
- Docker = your local dev buddy and image builder.
- Kubernetes = the production orchestrator that does the heavy lifting.
- Swarm = mostly in maintenance mode, but still hanging around like that guildmate who logs in once a month.
- Compose = still clutch for local stacks and small projects.
If you’ve ever asked “Do I need both Docker and Kubernetes?” or “Did K8s drop Docker completely?” this guide is the brutally honest coffee-chat answer. No fluff, no vendor hype, just straight dev talk.
Build vs Run vs Orchestrate
Here’s the deal: a lot of devs smash “Docker” and “Kubernetes” into the same brain folder, but they’re not even playing the same class. Let’s split it out clean:
- Docker (build): Your toolbox for packaging apps into containers. You write a `Dockerfile`, run `docker build`, and out pops an image. Think of Docker as the chef prepping meals and putting them neatly into takeout boxes.
- Container runtime (run): This is the actual engine pulling the images and spinning up containers. Today, that’s usually containerd or CRI-O, not Docker Engine. These are the stoves actually heating the food. Kubernetes talks directly to them.
- Kubernetes (orchestrate): The bossy restaurant manager screaming “Table 4 needs more pods!” It doesn’t cook or prep food. It just tells the stoves what to do, scales them up when 200 people suddenly walk in, and sometimes locks you out of the kitchen with RBAC when you fat-finger a config.
So the pipeline looks like this:
Docker builds → container runtime runs → Kubernetes orchestrates.
If you’re just hacking together a side project, Docker alone is usually enough. The moment you need to scale, roll out updates without downtime, or survive 3 a.m. pager duty, that’s when Kubernetes steps in.
Here’s the funny part: people still ask “Do I need Docker if I’m using Kubernetes?” Yes, because you still need to build images somehow, and Docker is still the most common way. It’s like asking if you need a keyboard when you’ve got a gaming PC. Sure, you could technically use something else (BuildKit, Podman, Bazel), but Docker is the default muscle memory for most devs.
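If that still feels abstract, here’s the whole pipeline in a handful of commands. Everything below uses made-up names (the registry, image, and deployment are placeholders), so treat it as a sketch, not your runbook:

```bash
# Build and push an image (image name and registry are made up)
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0

# Kubernetes schedules it; containerd/CRI-O on the nodes actually pulls and runs it
kubectl create deployment myapp --image=registry.example.com/myapp:1.0
kubectl scale deployment myapp --replicas=3
```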
Dockershim removal → CRI
Back in Kubernetes v1.24, the community dropped a bomb: Dockershim is gone.
Cue the headlines: “Kubernetes drops Docker support!” and dev Twitter collectively losing its mind. Reality? Way less dramatic.
Here’s what actually happened:
- Dockershim was a translation layer that let Kubernetes talk to the Docker Engine like it was any other Container Runtime Interface (CRI).
- Problem is, Docker Engine never implemented CRI itself. It had extra moving parts, so Dockershim existed to bridge the gap. That bridge turned into tech debt.
- The Kubernetes maintainers finally said: “You know what? Let’s stop duct-taping this thing.”
So now Kubernetes talks directly to containerd or CRI-O, runtimes that were designed for CRI from day one.
What breaks?
- Any tooling that assumed `docker` was running on your cluster nodes. Example: if you were SSH’ing into nodes and typing `docker ps` like it’s 2016, sorry, that’s not gonna work.
- Custom log drivers, some monitoring agents, or DIY scripts that hooked into Docker’s API.
What doesn’t break?
- Your images. Docker-built images still work because they follow the OCI image spec. Doesn’t matter if it’s Docker, Podman, or Buildah: if it’s OCI, containerd can run it.
- Your `docker build` workflow for dev. Local development didn’t magically change overnight.
Myth busted
Kubernetes didn’t “drop Docker.” It dropped the middleman (Dockershim) and now runs on runtimes that were built for the job. Think of it like when your favorite game ditched Games for Windows Live (RIP) and just used Steam directly. The game still works, arguably better; you just lost a janky extra launcher.
Real-world tip: migrating a node from Docker Engine → containerd usually goes smoother than expected. The scariest part is the first `kubectl get nodes` after the swap… but spoiler: your pods still show up.
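For the curious, here’s roughly what that swap looks like on a kubeadm-managed node. Exact flags, paths, and package names vary by distro and Kubernetes version, and the node name below is a placeholder, so treat this as a sketch:

```bash
# Rough sketch for one kubeadm-managed node; verify flags and paths for your distro
kubectl drain my-node --ignore-daemonsets        # "my-node" is a placeholder

# On the node itself:
sudo systemctl stop kubelet
sudo systemctl disable --now docker

# Make sure containerd's CRI plugin is enabled (Docker's packaging often ships it disabled)
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo systemctl restart containerd

# Point kubelet at containerd's socket, e.g. via /var/lib/kubelet/kubeadm-flags.env:
#   --container-runtime-endpoint=unix:///run/containerd/containerd.sock
sudo systemctl restart kubelet

# Back on your workstation:
kubectl uncordon my-node
kubectl get nodes -o wide     # CONTAINER-RUNTIME column should now say containerd://...
```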
Local dev stacks
Okay, so you’ve got containers running in prod. But what about local dev? This is where things get spicy, because your options all feel like different difficulty settings in a roguelike.
Docker Compose
Old reliable. You spin up a `docker-compose.yml`, run `docker compose up`, and suddenly you’ve got Postgres, Redis, and your app running locally. It’s fast, simple, and the learning curve is about as friendly as Stardew Valley.
When to use it:
- Side projects, prototypes, local dev environments.
- When you don’t need to simulate Kubernetes; you just want your stack up.
- Great for teams that don’t want to spend two weeks writing Helm charts just to test a login form.
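For reference, a minimal Compose file for that kind of stack looks something like this. Service names, images, ports, and credentials are all placeholders:

```bash
# Minimal local stack; images, ports, and credentials are placeholders
cat > docker-compose.yml <<'EOF'
services:
  app:
    build: .
    ports:
      - "8080:8080"
    depends_on: [db, cache]
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/app
      REDIS_URL: redis://cache:6379
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
  cache:
    image: redis:7
EOF

docker compose up -d    # whole stack in the background; docker compose down to clean up
```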
Kind (Kubernetes in Docker)
This is basically Kubernetes running inside Docker containers. Sounds cursed, but it works surprisingly well. It’s the closest you can get to real Kubernetes locally without sacrificing your laptop battery to the cluster gods.
When to use it:
- Testing real Kubernetes manifests before shipping to prod.
- CI pipelines that need ephemeral clusters.
- When you want your local setup to feel almost identical to prod without full-on VMs.
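The whole workflow fits in a handful of commands. The cluster name and manifest directory below are just examples:

```bash
# Throwaway cluster for testing real manifests
kind create cluster --name dev
kubectl cluster-info --context kind-dev

kubectl apply -f k8s/            # the same manifests you'd ship to prod

kind delete cluster --name dev   # tear it down when you're done
```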
Minikube
The granddaddy of local K8s. It runs a lightweight VM with Kubernetes baked in. Slower to spin up than kind, but still useful if you want something “closer to metal.”
When to use it:
- If your prod clusters run with specific drivers or you need to test node-level configs.
- Developers who like the “classic” approach.
- Folks who don’t mind waiting a bit for startup.
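A typical session looks something like this; the driver and sizing depend entirely on your OS and hardware, so the values here are just examples:

```bash
# Sizing values are examples; minikube picks a driver automatically if you don't specify one
minikube start --cpus=4 --memory=4096
minikube status
kubectl get nodes
minikube stop
```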
Dev take
Here’s the honest truth:
- Compose is the easiest on your sanity.
- Kind is the most practical for K8s dev work.
- Minikube is… well, kind of like that Linux distro you respect but don’t install anymore.
At the end of the day, pick the one that matches your tolerance for yak-shaving. Some devs want a cluster on their laptop, others just want `docker compose up` to work so they can code. Both are valid.

Ops playbook
Once you cross the line from “I just need my app running” into “oh crap, real users are here”, you start touching the sharp edges of Kubernetes. This is the ops playbook: the stuff you’ll be forced to care about whether you like it or not.
HPA (Horizontal Pod Autoscaler)
In Docker land, scaling usually means running `docker compose up --scale web=3`. Simple. In Kubernetes, the HPA watches metrics (CPU, memory, custom ones if you’re fancy) and automatically spins pods up or down. Sounds magical until you realize your app isn’t stateless and your users are suddenly playing musical chairs with their sessions.
Pro tip: test HPA before you brag about being “auto-scaled.” Watching pods spin up is fun until Redis becomes the bottleneck.
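If you just want to kick the tires, the quickest route is the imperative one. The deployment name and thresholds below are placeholders, and you need metrics-server installed or the HPA will just sit there reporting `<unknown>`:

```bash
# Quick-and-dirty HPA on an existing Deployment (name and thresholds are placeholders)
kubectl autoscale deployment myapp --cpu-percent=70 --min=2 --max=10

# Watch it react while you run a load test against the service
kubectl get hpa myapp --watch
```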
Ingress
Docker’s version of this is `-p 8080:80` and you’re done. Kubernetes? Welcome to the world of Ingress controllers (NGINX, Traefik, Istio if you’re a masochist). These let you route traffic with rules, TLS, and all the grown-up stuff. But brace yourself: your first Ingress config error will feel like debugging firewall rules in the dark.
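For the record, a bare-bones Ingress looks something like this. The host, service name, and ingress class are placeholders, and it assumes an NGINX ingress controller is already installed:

```bash
# Route one host to one Service (names and host are placeholders)
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
EOF
```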
RBAC (Role-Based Access Control)
With Docker, everyone with access can just run containers. Kubernetes? Nope. You need RBAC to say who can do what. It’s powerful… and also the reason half of dev Slack is filled with “Why does kubectl say Forbidden again??” messages.
Mini-dev story: first time I hit RBAC, I spent two hours thinking my cluster was broken. Nope, I just didn’t have permission to list pods. 10/10 humbling experience.
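Two things would have saved those two hours: `kubectl auth can-i`, and knowing what a minimal Role looks like. Here’s a sketch with made-up names and a made-up namespace:

```bash
# First, check what you're actually allowed to do before blaming the cluster
kubectl auth can-i list pods --namespace dev

# Minimal Role + RoleBinding granting read-only pod access (names are made up)
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: jane@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```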
Secrets
In Compose, secrets usually live in a `.env` file that you try to remember not to commit to GitHub. In Kubernetes, you’ve got Secrets objects, which at least feel more official. They’re still base64-encoded strings (not encryption, folks), but at least they integrate with Vault/SealedSecrets/whatever secret manager your ops team swears by.
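Here’s the basic flow, with obviously fake values, plus the party trick that proves base64 isn’t encryption:

```bash
# Create a Secret from literals (values are obviously fake)
kubectl create secret generic app-secrets \
  --from-literal=DATABASE_PASSWORD=changeme \
  --from-literal=API_KEY=not-a-real-key

# Proof that base64 is not encryption:
kubectl get secret app-secrets -o jsonpath='{.data.DATABASE_PASSWORD}' | base64 --decode
```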
TL;DR of ops reality
- Scaling: cool but tricky.
- Routing: works, but expect hair loss.
- Permissions: RBAC is both your best friend and your worst enemy.
- Secrets: not perfect, but better than `.env` roulette.
Kubernetes gives you serious power tools, but like power tools, they will also happily cut your hand off if you’re careless.
When not to use K8s
Here’s the part no one on LinkedIn wants to admit: you probably don’t need Kubernetes.
There, I said it.
Kubernetes shines when you’re running dozens (or hundreds) of services at scale, across multiple nodes, and you need rolling updates, self-healing, and fine-grained control. But if you’re a solo dev shipping a side project or an indie team running a SaaS with a few thousand users, Kubernetes can be like bringing a mech suit to a knife fight.
Small apps and indie projects
If your app is just a couple of containers (say, a Node.js API + Postgres), Kubernetes is overkill. You’ll waste days writing YAML that could’ve been `docker compose up`. Those days? They could’ve been spent actually coding features.
Dev story: the Flask app
I once watched a team spin up a three-node Kubernetes cluster to host… a single Flask app with 12 daily users. By the time the Helm chart worked, the project’s stakeholders had already lost interest. Classic “we did it for the tech” energy.
Cron jobs and scripts
Got a script that runs once a night? Don’t wrap it in a CronJob manifest and keep a whole cluster around just to run it. Use systemd timers, use cron, or literally just run it on a cheap VM. Kubernetes doesn’t make the job fancier; it just adds more ways for it to fail.
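For comparison, here’s the entire “deployment” for the boring approach: one crontab entry on a cheap VM. The script path is hypothetical:

```
# Runs every night at 02:30 and appends output to a log file
30 2 * * * /opt/scripts/nightly-cleanup.sh >> /var/log/nightly-cleanup.log 2>&1
```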
The ops tax
Kubernetes comes with an ops tax. Clusters break, YAML gets messy, networking gremlins sneak in. If you don’t have an ops team (or the time to moonlight as one), you’re better off with simpler tooling.
Bottom line: unless you’re solving scale problems, Kubernetes is often more trouble than it’s worth. Sometimes Compose or even bare Docker is the smarter play.
FAQs
Did Kubernetes drop Docker?
No. It dropped Dockershim in v1.24. Docker-built images still work because they follow the OCI spec.
Do I need both?
In practice, yes. Docker builds the images; Kubernetes schedules and orchestrates them (with containerd doing the actual running).
Can I still use Swarm?
Sure… but it’s basically on life support. Fine for legacy, not for new projects.
What about cloud (AKS/EKS/GKE)?
Managed K8s removes some pain (control plane), but you still own worker nodes, RBAC, networking, and all that YAML.

Fast experiment
Want proof that Dockershim’s removal isn’t a doomsday event? Try this:
- Take a worker node running Docker Engine.
- Swap it to containerd (most distros already default here).
- Deploy the same workload.
What changes?
- Image pulls: about the same.
- Pod startup: maybe a hair faster.
- Resource use: containerd is leaner, but not night-and-day.
The real difference is ops muscle memory: no more `docker ps` on nodes. You’ll need `ctr` or `crictl`. That’s the part that bites at 3 a.m. when something’s on fire.
Lesson: your apps won’t care, but your tooling might.
Conclusion
Docker builds, runtimes run, Kubernetes orchestrates. Simple as that.
Most devs don’t need K8s for side projects; it shines when you’ve got real scale and ops headaches. Swarm is legacy, Compose is still clutch.
Hot take: Kubernetes is overused. Use it when you need resilience and scaling, not just to feel “enterprise.”
The future? Lighter orchestrators, more serverless, and maybe fewer weekends lost to debugging YAML. Until then, K8s is here to stay; just make sure you actually need it before diving in.