Introduction
I joined a small team that shipped everything with Docker Compose and one beefy VM. We were proud — containers, immutable images, fast deploys. At first, this looked fine… until it wasn’t.
This is not a tutorial on manifests. It’s a set of real mistakes, decisions, and trade-offs I lived through while moving from Docker-only workflows to Kubernetes. Here’s what we learned the hard way.
The Trigger
Traffic spiked after a marketing campaign. Our single VM hit CPU and memory limits. Services crashed, logs disappeared, and the restart logic was messy.
We chose Kubernetes because it promised automatic restarts, scaling, and better deployment control. That decision exposed a dozen hidden assumptions.
What Went Wrong (and why it stung)
Mistaking Kubernetes for a bigger Docker
We treated Kubernetes as "Docker, but bigger". We containerized everything and expected the platform to be magic.
Reality: containers are just a packaging unit. Kubernetes adds scheduling, networking, and state management. If your app assumes a local filesystem, in-memory cache, or single-instance behavior, moving to k8s breaks it.
No resource requests or limits
We set no CPU or memory requests, so the scheduler had no idea what each pod needed and packed them onto nodes that then OOM-killed them. At first, it looked fine (pods started) until the node became unstable.
Health checks that meant nothing
We copied readiness and liveness checks that only verified the HTTP port was open. They kept passing while the app was stuck in a busy loop or leaking memory.
Treating images as mutable
Using the "latest" tag and rebuilding without changing the tag led to inconsistent deployments: the same tag could point at different code on different nodes, and rollbacks were a guessing game.
Expecting Docker Compose parity
Docker Compose and Kubernetes look similar on paper. But behaviors like networking, service discovery, and volume handling differ enough to surprise you in production.
What We Tried (and why some things failed)
- Rewriting everything at once. We tried to convert every service to Kubernetes in one sprint; the team was overwhelmed and debugging was a nightmare.
- Turning on every feature immediately. PodDisruptionBudgets, NetworkPolicies, PodSecurityPolicies, and custom resource controllers, all at once. We spent more time on infra than on app problems.
- Building images on developers' laptops. Different Docker versions, leftover caches, and local volumes made images non-reproducible.
What Actually Worked
Start small: use Docker locally, Kubernetes for staging and prod
We kept Docker Compose for local dev so developers could iterate fast, and saved Kubernetes for staging and production, where multi-node scheduling, resource management, and the rest of the orchestration benefits actually pay off.
Incremental migration
- Move one service at a time
- Keep the same external contracts
- Run integration tests under k8s early
That reduced blast radius and made debugging manageable.
Make containers explicitly measurable
- Always set resource requests and reasonable limits
- Add memory/CPU budgets and observe them in staging
This prevented noisy neighbors and unexpected OOM kills.
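To make that concrete, here is a minimal sketch of requests and limits on a Deployment. The image name and the numbers are illustrative, not our production values; the right figures only come from measuring in staging.

```yaml
# Illustrative Deployment: requests tell the scheduler what to reserve,
# limits cap what the container may use. Tune both from real measurements.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:3f9c2d1   # hypothetical image, pinned by git SHA
          resources:
            requests:
              cpu: 250m          # reserved on the node at scheduling time
              memory: 256Mi
            limits:
              cpu: "1"           # CPU is throttled above this
              memory: 512Mi      # the container is OOM-killed above this
```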
Health checks that actually test behavior
- Readiness should reflect whether the app can serve traffic
- Liveness should detect unrecoverable states
- Add lightweight internal endpoints that validate essential dependencies
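For an HTTP service, here is a hedged sketch of what those probes can look like. The /readyz and /healthz paths are hypothetical endpoints your app has to actually implement, with meaningful checks behind them.

```yaml
# Fragment of a container spec. The endpoints are hypothetical:
# /readyz should verify the app can really serve traffic (essential deps reachable),
# /healthz should stay cheap and only fail on unrecoverable states.
containers:
  - name: api
    image: registry.example.com/api:3f9c2d1
    ports:
      - containerPort: 8080
    readinessProbe:            # failing this removes the pod from Service endpoints
      httpGet:
        path: /readyz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:             # failing this restarts the container
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
      failureThreshold: 3
```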
Immutable, reproducible images
- Tag images with CI build IDs or git SHAs
- Use a single CI pipeline to build/publish images
- Scan images for vulnerabilities as part of the pipeline
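As one possible shape for that pipeline, here is a minimal sketch assuming GitHub Actions and Docker's official build-and-push action. The registry and image name are placeholders, and any CI system that exposes the commit SHA works just as well.

```yaml
# Hypothetical GitHub Actions workflow: build once in CI, tag with the commit SHA, push.
name: build-image
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: registry.example.com        # placeholder registry
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: registry.example.com/api:${{ github.sha }}   # never "latest"
```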
Use managed k8s early if you can
We used a managed control plane so we didn't have to debug the kube-apiserver, manage etcd backups, or run control plane upgrades ourselves. It let us focus on the apps.
Logging and debugging
- Centralize logs with a logging sidecar or a node-level agent such as Fluentd (a minimal sidecar sketch follows this list)
- Use ephemeral debugging containers (kubectl debug) instead of SSHing into nodes
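To make the sidecar idea concrete, here is a minimal sketch of the pattern. The busybox container tailing a shared volume stands in for a real log shipper such as Fluent Bit, and the paths and names are illustrative.

```yaml
# Sidecar sketch: the app writes logs to a shared emptyDir, a second container ships them.
# busybox + tail is only a stand-in for a real log shipper forwarding to your backend.
apiVersion: v1
kind: Pod
metadata:
  name: api-with-log-sidecar
spec:
  containers:
    - name: api
      image: registry.example.com/api:3f9c2d1
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
    - name: log-shipper
      image: busybox:1.36
      command: ["sh", "-c", "tail -n +1 -F /var/log/app/app.log"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: app-logs
      emptyDir: {}
```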
Minimal useful abstractions
Start with Deployments, Services, and ConfigMaps. Add StatefulSets and CRDs only when you need persistent state or operators.
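For a typical stateless service, those three objects cover most of what you need. Here is a sketch of the Service and ConfigMap that would sit next to a Deployment like the one sketched earlier; names and values are illustrative.

```yaml
# Illustrative Service + ConfigMap for a Deployment labelled app: api.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 80          # cluster-internal port
      targetPort: 8080  # container port
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
data:
  LOG_LEVEL: "info"
  FEATURE_FLAGS: "checkout-v2"
```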
Trade-offs (real choices, not dogma)
Simplicity vs. Control: Docker Compose is simple and fast for single-host dev. Kubernetes gives you control at the cost of complexity.
Managed vs. Self-hosted: Managed k8s reduces operational burden but can hide limits and cost structures. Self-hosting gives flexibility but requires ops maturity.
Early adoption vs. incremental migration: Moving everything too early slows feature delivery. Waiting too long risks scaling disasters.
We chose incremental migration and paid the complexity tax only where it bought reliability.
Mistakes to Avoid (most teams miss these)
- Not defining resource requests/limits
- Betting on the platform to make your app scalable without code changes
- Using the "latest" image tag in production
- Treating Kubernetes as a drop-in replacement for a VM
- Skipping proper health checks and observability
- Over-privileged containers and permissive RBAC by default (see the securityContext sketch after this list)
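On that last point, here is a minimal sketch of a least-privilege container securityContext. The values are a sensible starting default, not a policy recommendation for every workload.

```yaml
# Illustrative least-privilege defaults: run unprivileged and add back only what you need.
apiVersion: v1
kind: Pod
metadata:
  name: api-least-privilege
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
  containers:
    - name: api
      image: registry.example.com/api:3f9c2d1
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```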
Final Takeaway
Kubernetes is powerful, but it's an operational shift, not a magic bullet. Docker and containers give you reproducible packaging; Kubernetes gives you scheduling, networking, and lifecycle management — if you adapt your apps and ops to it.
Here’s what we learned the hard way: start small, make your containers predictable, and don't mistake orchestration for automatic scalability.
If you're a junior developer: keep using Docker to learn container basics, but treat Kubernetes as an ops platform that needs planning. Our single-VM setup looked fine at first… until it wasn't, and the remediation cost was far higher than the migration would have cost if we'd been more intentional from the start.
Use the system that matches your current problems, and plan the migration points where orchestration value outweighs complexity.
