When I first started deploying Flask apps, I thought a Dockerfile was just some ritual incantation you copy-pasted from Stack Overflow — slap it in, cross your fingers, and pray the CI pipeline doesn’t blow up at 7:58 PM on a Friday.
Spoiler: it always blew up.
I remember the time — clearly — when I pushed what I thought was a simple /health endpoint, only to get this glorious stack trace:
"
ModuleNotFoundError: No module named 'flask'
"
On the production deploy.
Turns out, I’d forgotten to COPY requirements.txt before running pip install. Yep. One line. Downstream chaos. And a Slack message from my CTO with just a single fire emoji.
We’ve all been there.
Especially juniors.
A junior I was mentoring last year — team of six, scrappy startup vibes — asked me why their Docker build took 12 minutes every single time, even when they changed one .py file.
We opened the Dockerfile.
They were doing COPY . . before installing dependencies.
No .dockerignore.
No layer strategy.
Just raw, unfiltered hope.
Yeah. I’ve been there.
Here’s the thing: writing a solid Dockerfile for a Python Flask app isn’t about memorizing syntax. It’s about why each line exists. It’s about empathy for the next person (or future-you-at-midnight) who has to debug a 900MB image with 43 layers and no idea where gunicorn even came from.
So let’s walk through this — not like a textbook. Like a senior dev unwinding after a long week, coffee in hand, still mildly annoyed at a merge conflict in docker-compose.yml.
--
📦 Multi-Stage Builds — Why They Shrink Your Image
Look —
Nobody needs a 1.2GB Docker image for a 200-line Flask app.
That’s not efficiency. That’s negligence.
And I don’t mean to scare you —
But that image size? It’s not just slower deploys. It’s higher cloud bills. More attack surface. More things to go wrong.
The usual culprit? Building everything in one stage.
You install pip, python-dev, gcc, build some C extensions — great.
But then you ship all of it to production?
No.
Use multi-stage builds. Separate the build environment from the runtime.
You don’t need wheel or cryptography build tools when you're just serving /api/users.
Like this:
# Stage 1: builder. Install dependencies here (gunicorn belongs in requirements.txt).
FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

# Stage 2: runtime. Copy only the installed packages and the app code.
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /root/.local /root/.local
COPY app.py .
ENV PATH=/root/.local/bin:$PATH
CMD ["gunicorn", "app:app"]
The first stage builds everything. The second? Copies only what’s needed.
No build deps. No cache. No bloat.
I did this on a project — user-service-flask, 300 lines, nothing fancy — switched to multi-stage.
Image went from 900MB to 110MB.
The DevOps lead sent me a chai coupon. (And a GIF of a dancing goat.)
Worth every second.
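Want to check your own numbers? A quick sanity check (the image name here is just an example):
docker build -t user-service-flask .
docker images user-service-flask
docker history user-service-flask
docker history shows what each layer costs, which makes the bloat easy to spot.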
- Smaller images = faster deployments
- Fewer packages = fewer CVEs
- Cleaner layers = easier debugging
--
🧠 Layer Caching — Why Your Build Shouldn’t Redo Everything
Docker caching is powerful — but fragile.
It works until one wrong line breaks the whole chain.
Here’s what kills it:
Changing a .py file and watching your entire pip install rerun.
Why? Because you COPY . . too early.
Docker caches layers until a command changes.
So if you copy your whole app before installing dependencies, any file change nukes the cache.
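The cache-killing version looks like this:
COPY . .
RUN pip install -r requirements.txt
Every .py edit invalidates the COPY . . layer, so the pip install below it reruns from scratch.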
But reverse it? Copy requirements.txt first — then install — then copy the rest?
Boom. Cache stays warm unless dependencies change.
⚡ Correct Order Matters
# Dependencies first: this layer stays cached until requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# App code last: editing a .py file only rebuilds from here down
COPY . .
This order isn’t trivial. It’s critical.
I once spent two hours debugging why CI builds took 8 minutes.
Turned out someone moved COPY . . to the top.
One line.
Eight minutes.
And yes — we reverted it in a hotfix.
With a commit message like “DO NOT TOUCH ORDER - SERIOUSLY”.
🗑️ Use .dockerignore
Seriously.
How many times have I seen __pycache__, .git, or — God help us — .env files in production images?
Too many.
Create a .dockerignore. It’s not optional.
It’s survival.
Like this:
__pycache__
*.pyc
.git
.env
.dockerignore
Dockerfile
README.md
It’s .gitignore, but for Docker. (And honestly? Easier to mess up.)
Don’t ship what you don’t need.
Attackers love finding .env files.
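If you’re paranoid (good instinct), audit what actually shipped (image name hypothetical):
docker run --rm user-service-flask:latest ls -la
If a .env shows up in that listing, fix your .dockerignore before an attacker reads the file for you.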
"Efficient Dockerfiles aren’t written—they’re iterated. Every layer is a trade-off between speed, size, and clarity."
--
🐍 Python-Specific Optimizations — Why pip Flags and Dev Servers Bite
Python in Docker? Full of footguns.
Ignore them, and you’ll get weird ImportErrors. Or slow starts. Or Permission denied on .cache/pip.
Not fun.
🐍 Use --no-cache-dir and --user
pip caches downloaded packages by default.
But in Docker? That cache just gets baked into a layer and never reused.
So why waste the space?
Do this:
RUN pip install --no-cache-dir -r requirements.txt
Faster build. Smaller layer.
And use --user.
Avoids needing root. Installs to /root/.local or /home/user/.local.
Cleaner. Safer.
But — and this is important — don’t just slap --user everywhere.
Test it. Some old packages don’t like it. (Looking at you, pycrypto.)
📄 Don’t Use pip install . Without Reason
Your Flask app isn’t a library.
It’s a service.
So unless you’re publishing it to PyPI or using editable installs in dev, just COPY the code and run.
No need to pip install . and complicate the environment.
Yeah, I learned this the hard way — had a junior add pip install . to the Dockerfile, and we spent a day debugging why gunicorn couldn’t find the module.
Turned out the package name in setup.py didn’t match the import.
Classic.
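A hypothetical version of that mismatch, just to make it concrete:
# setup.py
from setuptools import setup, find_packages
setup(
    name='user-service-flask',   # the distribution name pip sees
    packages=find_packages(),    # the import name is whatever the package folder is called
)
Gunicorn needs the import name (something like user_service.app:app), not the name= value. Two different namespaces, one lost day.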
🚀 Use Gunicorn, Not flask run --host=0.0.0.0
I get it. flask run is easy. It works locally.
But in prod? Single-threaded. No worker management. Crashes under real load.
So use Gunicorn. Always.
First, get Gunicorn in there (pin it in requirements.txt rather than installing ad hoc):
pip install gunicorn
Then:
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:5000", "app:app"]
Four workers. Binds to all interfaces. Handles concurrency. (The Gunicorn docs suggest (2 × CPU cores) + 1 workers as a starting point.)
I learned this the hard way — our app passed all local tests.
Then we got 50 concurrent users during a demo.
Down it went.
Not a great look.
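Once the basics work, a couple of flags worth knowing (these values are just my defaults, tune them for your app):
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:5000", "--timeout", "60", "--access-logfile", "-", "app:app"]
--access-logfile - sends access logs to stdout, which is exactly where Docker wants them.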
--
🔐 Security & Permissions — Why Running as Root Is a Bad Idea
Here’s a fun fact:
Over 60% of public Docker images run as root.
That means — if someone breaks in — they already have root access inside the container.
And if the host is misconfigured?
Boom. Full system compromise.
Not ideal.
So stop running as root.
Create a user:
FROM python:3.11-slim
# Create a non-root user
RUN adduser --disabled-password --gecos '' appuser
# Switch to the user
USER appuser
WORKDIR /home/appuser
COPY --chown=appuser requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt
COPY --chown=appuser . .
ENV PATH=/home/appuser/.local/bin:$PATH
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:5000", "app:app"]
Now your app runs as appuser.
Even if exploited, damage is limited.
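Quick way to confirm it worked (image name hypothetical):
docker run --rm user-service-flask:latest whoami
If that prints anything other than appuser, something upstream is off.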
Also — pin your Python version.
Don’t use python:3.
Use python:3.11-slim.
Why? Because python:3 could be 3.12 tomorrow. And your app might break.
Update intentionally. Not by surprise.
And scan your images.
Use Trivy or Snyk — or docker scout, which replaced the now-deprecated docker scan.
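With Trivy, for example (image name hypothetical):
trivy image user-service-flask:latest
It lists known CVEs per package, with severities, before you ever ship.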
One known CVE in urllib3 once cost my old team two days of patching.
We found it in staging.
Could’ve been caught at build time.
Would’ve saved so much time.
--
⚙️ Environment Variables & Configuration — Why Hardcoding Fails
Your Flask app needs a secret key. A DB URL. Maybe Redis.
Hardcode them?
Please don’t.
Use environment variables. Always.
Like this:
import os
from flask import Flask
app = Flask(__name__)
app.config['SECRET_KEY'] = os.getenv('SECRET_KEY', 'dev-key-for-local')
Then set via docker run -e SECRET_KEY=… or docker-compose.yml.
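For example (image tag hypothetical; the secret comes from your shell environment, not the image):
docker run -e SECRET_KEY="$SECRET_KEY" -p 5000:5000 user-service-flask:latest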
Never bake secrets into the image.
Not even for "quick testing".
I once saw someone commit a .env file with a production DB password.
We rotated three databases that night.
Not fun.
Use python-decouple or python-dotenv in dev — but exclude .env from Docker.
And FLASK_ENV=production?
Honestly? I skip it. (Flask dropped FLASK_ENV in 2.3 anyway; FLASK_DEBUG replaced it.)
In production, I just run Gunicorn and move on.
Flask’s dev-mode conveniences have no business anywhere near prod.
This is part of Dockerfile best practices for Python Flask apps: config at runtime, not build time.
--
🟩 Final Thoughts
An efficient Dockerfile isn’t just about passing CI.
It’s about intentionality.
Every line should answer:
Why is this here?
When does it change?
What breaks if it fails?
I used to treat Dockerfiles like scripts — just plumbing to make the app run.
Now I see them as contracts.
Between dev and ops.
Between today’s build and tomorrow’s incident.
When you follow these Dockerfile best practices for Python Flask apps, you’re not just shipping code.
You’re shipping confidence.
Confidence the image is small. Secure. Reproducible.
Confidence that a junior won’t break staging with a print() statement.
And honestly — that’s more valuable than any new feature.
Because in real-world development?
It’s not the flashy frontend or the clever algorithm that wins.
It’s the quiet, reliable backend that just… works.
So next time you write a Dockerfile, ask:
Is this efficient?
Is it maintainable?
Would I trust this to run unattended for two years?
If yes — you’re doing it right.
--
❓ Frequently Asked Questions
Should I use Alpine Linux for smaller Flask images?
Alpine is small — yeah. But it uses musl libc instead of glibc. Some Python packages (like cryptography or psycopg2) hate it. You’ll fight build errors for hours. Not worth the few MB saved. Stick with python:3.11-slim unless you really need Alpine. (And even then — think twice.)
How do I handle database migrations in Docker?
Run migrations in CI/CD — or use an entrypoint script that applies them safely. Never auto-migrate on startup in prod. Race conditions happen. Use flask db upgrade in a job, not in CMD. I once had two pods migrate at once — chaos. Learned my lesson.
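A sketch of the job-style approach (image name and env vars are placeholders, and flask db upgrade assumes Flask-Migrate):
docker run --rm -e DATABASE_URL="$DATABASE_URL" user-service-flask:latest flask db upgrade
Run that once per deploy, before the new containers roll out.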
Can I use Docker Compose for Flask apps?
Absolutely. Docker Compose is perfect for local dev with Redis, PostgreSQL, Celery. Just don’t bake secrets into docker-compose.yml. Use environment variables. It’s how you scale these Dockerfile best practices from a single Python Flask container to real, multi-service systems.
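A local-dev sketch of that stack (service names, image versions, and credentials are all placeholders; the dummy creds are for local dev only):
services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      DATABASE_URL: postgresql://app:app@db:5432/app
      REDIS_URL: redis://redis:6379/0
    depends_on:
      - db
      - redis
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
  redis:
    image: redis:7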
