Most Docker tutorials end at the win.
"Look, smaller image! Ship it!" And then you're left alone at 11pm wondering why your perfectly optimized co...
The part about what Alpine takes away is the part most optimization guides skip entirely. They show you the before/after numbers and stop there.
I hit the Alpine compatibility wall recently in a specific way: I added Grype (a CVE scanner) to a Docker image as part of a security tool I'm building. The install script works fine on Debian-based images. On Alpine, it fails silently in ways that are genuinely confusing to debug — different shell, different package manager, missing glibc. The image was smaller. The scanner didn't work.
The trade-off you're describing — Alpine is not a smaller Debian, it's a different OS that happens to look similar on the surface — is the one that costs people real time when they hit it at 11pm in production.
The multi-stage build approach is the right call for most cases. You get the build-time dependencies in a fat image, the runtime in a lean one, and you're not fighting Alpine's compatibility surface for things that were never designed with musl in mind.
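For anyone who hasn't used the pattern: a minimal sketch for a Node app (image tags, paths, and the build script name are illustrative, not taken from the article):

```dockerfile
# Build stage: full image with the toolchain for native deps and the build step
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: lean image, production dependencies only
FROM node:20-slim
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/server.js"]
```

The build stage can stay fat without penalty; only the final stage's layers ship.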
Good article for being honest about both sides. The 1.58GB to 186MB number is real, and so is the debugging session that follows.
The silent failure point is what gets me. A broken app throws an error and you chase it. A scanner that quietly stops scanning gives you a false green light. That is genuinely a worse outcome than the image not building at all. I had not thought about the security tooling angle when I wrote this, and it changes the stakes of the conversation. Thanks for adding that, it made the article better than it was.
That reframe — "a scanner that silently stops scanning is worse than an image that doesn't build" — is exactly right, and it points to a general pattern worth naming: invisible regressions are more expensive than visible failures.
A build error has a clear cost. You see it, you fix it, you move on. A tool that appears to work but doesn't has a hidden cost that compounds — every day it's running you're accumulating false confidence.
I ended up solving the Grype-on-Alpine problem the way you'd expect after reading your article: multi-stage build with a Debian base for the scanner install, Alpine-derived runtime for everything else. The image is larger than pure Alpine, smaller than pure Debian, and the scanner actually scans. Which is the whole point.
The optimization metric that matters isn't MB. It's "does the thing still do the thing."
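In Dockerfile terms, the shape of that solve looks roughly like this. A sketch, not the exact build: the install command is Anchore's published script for Grype, and the final RUN is a build-time smoke test, which is the part that turns a silent runtime failure into a loud build failure:

```dockerfile
# Scanner stage: Debian base, so the install script gets bash, curl, and glibc
FROM debian:bookworm-slim AS scanner
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates \
 && curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh \
    | sh -s -- -b /usr/local/bin

# Runtime stage: Alpine-derived, as described above
FROM alpine:3.19
COPY --from=scanner /usr/local/bin/grype /usr/local/bin/grype
# Run the tool once at build time: if the binary can't execute on musl,
# the build fails loudly here instead of the scanner silently no-op'ing in prod
RUN grype version
```

Whether a copied binary actually runs on musl depends on how it was compiled, which is exactly what that last RUN verifies before anything ships.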
@alvarito1983, I like that framing a lot. Invisible regressions being more costly than visible failures captures the real risk here.
The Grype example makes it clearer because things can look correct while failing quietly in the background.
The multi-stage + mixed base approach you mentioned feels like the practical middle ground for real production use cases.
Exactly. The failure mode isn't "it's broken," it's "it looks fine." That's what makes it dangerous — no one opens a ticket for a scanner that runs green every night while missing half its signatures.
Multi-stage with mixed bases ended up being my default for anything that links against glibc (scanners, some observability agents, a few CLIs). Alpine for the final layer where it's safe, Debian slim for the stages that actually need to work. The image grows, but "grows" is a measurable cost. "Silently degraded" isn't.
The real lesson for me was stopping treating Alpine as the default and start treating it as a choice — one that needs justification per-binary, not per-project.
shaving off a gig of bloat feels like magic until you realize you deleted the actual magic too. i’ve done this with my firebase builds where i try to get too lean and end up breaking a random node dependency. for the small apps i’m building in cursor, getting that image size down is a huge win for cold starts — but only if the thing actually boots. austin taught me: just start the thing, but maybe check the logs before you celebrate too hard.
The cold start angle is one I did not cover and honestly should have. Smaller image means faster pull means faster spin-up — but only if the runtime is intact. Firebase functions have their own compatibility surface on top of all this. "Check the logs before you celebrate" should be the tagline for this entire article. Glad it landed.
Worth flagging for anyone running data or ML workloads: Alpine and musl will ruin your day. Pre-built PyPI wheels for numpy, pandas, scipy all expect glibc, so on Alpine everything recompiles from source (when it compiles at all). For anything with C bindings, python:3.x-slim saves you hours of debugging for maybe 100MB more.

@valentin_monteiro this is an important addition, especially for ML and data workloads.
I focused on Node in the article, but Python changes the risk level because so many core libraries depend on compiled binaries.
That glibc vs musl issue is exactly where things start breaking in ways that are not obvious during build time.
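To make the comparison concrete, a sketch (version tags illustrative; wheel coverage shifts over time, and some projects now publish musllinux wheels, but coverage for C-extension packages remains much thinner than manylinux):

```dockerfile
# Debian slim: pip resolves prebuilt manylinux (glibc) wheels; installs quickly
FROM python:3.12-slim
RUN pip install --no-cache-dir numpy pandas

# The same install on python:3.12-alpine can fall back to source builds,
# which means adding a compiler toolchain just to produce
# what the slim image gets prebuilt:
# FROM python:3.12-alpine
# RUN apk add --no-cache build-base \
#  && pip install --no-cache-dir numpy pandas
```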
1.58GB down to 186MB is a dramatic reduction — what was the biggest single win? Multi-stage builds, Alpine base, or something less obvious?
The "what I actually broke" framing is the most valuable part of posts like this. Size optimization often trades off against debuggability — are you still able to attach debuggers or do you need to rebuild with symbols for incident response?
Also: how does this affect your CI/CD pipeline speed? Larger images mean slower pushes to registry and slower pulls on deployment, which can matter more than the disk savings in high-frequency deployment scenarios.
@motedb, good questions. The biggest win came from multi-stage builds rather than Alpine alone.
Alpine reduced runtime size, but separation of build and runtime did most of the work.
On debuggability, you are right: smaller images reduce noise but also remove tools you often need during incidents. That trade-off is easy to underestimate.
CI/CD impact shows up more at scale where pull time starts to matter in frequent deployments.
been there at 11pm - Alpine's musl libc wiped out my C extension deps. spent 2 hours tracking why python-magic was silently failing. size win is real but it's a cognitive tax on future-you
@itskondrat that “cognitive tax on future-you” is exactly what makes this trade-off painful.
The hardest part is not the size reduction itself, but the silent failures that only show up later in production when context is already gone.