I swear every time I open a Dockerfile, I get the same feeling as when I open my fridge at night: I know something cursed is in there, I just don’t know where it’s hiding.
Dockerfiles are like the “draw the rest of the owl” meme for developers. One minute everything is fine, the next you’re staring at a 3GB image that takes longer to build than Baldur’s Gate 3’s character creator.
Anyway, this whole article started because last week, during a perfectly normal deploy, the container image ballooned from 900MB to 1.7GB after merging one “small change.” And by “small,” I mean someone added a single RUN command that installed half of Ubuntu.
A senior dev did that. A good senior dev. A senior dev who I would trust with my production database but apparently not with apt-get.
And it reminded me of something I keep relearning: nobody actually knows how to write Dockerfiles. Not fully. Not consistently. Not even the people who speak about containers at conferences.
Let’s talk about the mistakes even experienced devs make: the ones we pretend we don’t make but absolutely do. And yes, I’m calling myself out too.
The “Why Is My Docker Image 3GB?” Syndrome
The funniest Docker bug I ever saw involved a friend (let’s call him “Ryan,” because that’s actually his name) who built a Python API and wondered why the image was 3GB.
He was using Alpine.
Alpine.
The whole point is that it’s tiny.
Turns out he COPY’d his entire repo. Including the .git directory. And the node_modules from a frontend folder. And a folder called old_logs with something like 82MB of gzipped logs because “we might need these someday.”
I’ve done this too. Not the logs thing (okay, yes, the logs thing), but mostly the part where I just throw in COPY . . because I’m too lazy to think about paths.
We treat Docker contexts like the mystery box from Deal or No Deal: “I don’t know what’s in here, but the container can have it.”
The real fix?
Be aggressively explicit.
COPY only what you need. Use a .dockerignore like your job depends on it. Because it does. Eventually.
And yes, one time I .dockerignored too much and the app couldn’t find the /app folder at all, so it built an empty image that technically “deployed fine” but served nothing. That was fun. I spent an hour debugging silence.
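For the record, here’s roughly what “aggressively explicit” looks like. The paths and file names below are made up for illustration; swap in your own.

```
# .dockerignore: keep the junk out of the build context
.git
node_modules
old_logs/
*.log
```

And in the Dockerfile, copy things by name instead of YOLO-ing the whole repo:

```dockerfile
# Only what the app actually needs, instead of COPY . .
COPY package.json package-lock.json ./
COPY src/ ./src/
```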
The RUN Command That Destroyed Everyone’s Will to Live
Here’s a sentence I never want to hear again:
“It works on my machine, but Docker is being weird.”
Docker isn’t being weird. You are being weird.
Specifically: that RUN command you copy-pasted from a StackOverflow answer from 2017 that includes three package installs, a mkdir, two environment configs, and a wget to some random GitHub URL that you didn’t pin to a commit.
I once saw a RUN chain that looked like someone rage-typed every command they remembered from Linux 101:
```dockerfile
RUN apt-get update && \
    apt-get install -y python3 && \
    pip install pip && \
    rm -rf /var/lib/apt/lists/*
```
Why was it installing pip when pip was already inside python3?
Why was it installing python3 when the base image already included it?
Why was the cleanup command written twice?
The answer, of course:
“I copied it from another project and didn’t think about it.”
Ah yes, the dev equivalent of inheriting a cursed sword with unknown enchantments.
The real fix is boring: split your RUN commands intentionally. Think about layers. Actually read the Dockerfile reference once in your life. And pin your package versions unless you enjoy lottery-style builds.
But yeah… I still forget this. Especially when all RUN commands become one giant YAML-flavored omelette.
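If you want the boring version spelled out, here’s a sketch. The package name and version are placeholders, not a recommendation; the point is the shape of the layer.

```dockerfile
# One deliberate layer: update, install pinned packages, and clean up together,
# so the apt lists never get baked into a layer of their own.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        curl=7.88.1-10+deb12u5 \
        ca-certificates && \
    rm -rf /var/lib/apt/lists/*
```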
ENV Variables: The Silent Killers
If you’ve ever accidentally leaked secrets into a container image, don’t worry: you’re not alone. You’re just part of a very prestigious club of tired developers who said:
“I’ll just stick this ENV var here for now, it’s fine.”
Spoiler: it was not fine.
My personal favorite mistake was setting an ENV variable like this:
ENV NODE_ENV=development
in the production Dockerfile.
Guess what happened?
Everything used prod configs except the part of the app that looked at NODE_ENV and decided:
“Oh we’re in dev mode? Awesome. Let’s skip caching, turn off optimizations, and log everything.”
We didn’t notice for months. Literally months. We just thought AWS was “slow this quarter.”
The day we spotted this, someone on the team said, “I feel like I’ve been gaslit by a config value,” and honestly, I still think about that sentence.
Takeaway?
Double-check your ENV. Then triple-check.
Also: put them all in one place. If your Dockerfile has ENV sprinkled around like confetti, you are living dangerously.
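If it helps, here’s a sketch of “all in one place.” NODE_ENV=production is my assumption for a production Dockerfile, and PORT is just an example; adjust to taste.

```dockerfile
# Keep environment defaults in one visible block near the top of the file.
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV} \
    PORT=3000

# And no secrets: anything you bake in with ENV lives in the image history forever.
```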

The Multi-Stage Build That Turned Into The 12-Stage Build of Pain
Multi-stage builds are incredible… for the first two stages. After that, it starts to feel like one of those puzzle games where you need the red key to open the blue door to grab the yellow key to go back and grab the red door handle.
I once reviewed a Dockerfile with eight stages. And the thing is, it wasn’t even doing anything fancy. It was building a Go binary, then copying it, then building a frontend, then copying that, then doing something mysterious with an Alpine image that nobody really understood.
The author told me:
“I keep the old stages in case we need them later.”
This is not VSCode.
A Dockerfile is not the place for your emotional attachments.
Delete stages you don’t need.
Don’t hoard old build steps.
This is not Pokémon; you don’t need to catch all the base images.
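For contrast, the Go-binary case from that review fits comfortably in two stages. This is a sketch with assumed base images and paths (golang:1.22, a distroless runtime, main package at the repo root), not a drop-in file.

```dockerfile
# Stage 1: build the binary with the full toolchain.
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Stage 2: ship only the binary. No toolchain, no hoarded stages.
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```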
Meanwhile I once accidentally nested a build stage inside another stage by indenting it wrong. The container still built. And worked. Somehow. And I decided that was a sign from the universe to never touch it again.
Sometimes the worst mistake is fear of cleaning things up.
Caching: The Final Boss
Docker caching is like fighting a Dark Souls boss blindfolded. You think you understand it, you think you’re winning, you think you’ve “mastered” it… and then suddenly the build starts running a full dependency install on every rebuild and you’re like:
“What did I do to deserve this?”
I promise you:
Everyone breaks caching. Everyone.
Even senior devs. Especially senior devs.
One misordered COPY and suddenly your build goes from 8 seconds to 8 minutes.
A personal horror story: I once put COPY . . before installing npm deps.
It was one line. One tiny line. One innocent-looking line.
And it invalidated the dependency cache on every single source change.
You ever watch someone run a dev build on a laptop while Slack, Chrome, and VSCode are open? The fans go full fighter jet. That laptop could lift off.
My team basically did that to our pipeline.
The fix is stupidly simple:
install dependencies first, copy source later, keep the cache sacred.
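In Node terms, that ordering looks something like this (the image tag, paths, and start command are all assumptions; adjust for your app):

```dockerfile
FROM node:20-alpine
WORKDIR /app

# Manifests first: this layer stays cached until package*.json actually changes.
COPY package.json package-lock.json ./
RUN npm ci

# Source changes land here and no longer nuke the dependency layer above.
COPY . .
CMD ["node", "server.js"]
```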
But honestly?
Half the time I still mess this up. Usually on days when my coffee tastes suspiciously weak.
Conclusion: Dockerfiles Are Just Vibes, and That’s the Problem
Here’s the wild thing: Docker is one of the most-used tools in modern development… and yet half of us are just out here freestyling our Dockerfiles like jazz musicians.
The truth is:
Dockerfiles are less about knowing the “right way” and more about knowing how badly things can go if you’re sloppy.
You learn the rules by breaking them.
You get better by shipping cursed images.
You refine your craft by Googling “why is Docker slow” at night.
And honestly? That’s okay. Containers are an ecosystem built on mistakes, just very efficiently packaged ones.
So if you’re reading this thinking, “Oh god, I’ve done all of these,” congratulations: you’re officially a real developer. The kind who has scars. And stories. And a Dockerfile somewhere that you refuse to touch because you’re afraid looking at it will summon a demon.
Anyway, I’d love to hear your horror stories. What’s the worst Dockerfile you’ve ever seen? Or made?
(If you send me one where you installed curl three times in a row, I will treasure it.)
Optional Resources
- Dockerfile best practices: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
- Multi-stage builds explained well: https://docs.docker.com/build/building/multi-stage/