When working with Docker images in production, size matters a lot.
Large images mean longer build times, slower deployments, and even higher storage costs in container registries like GCP Artifact Registry or Docker Hub.
In one of our projects, the image size grew beyond 1.9 GB, and we optimized it down to just 495 MB — a 75% reduction. Here’s how we did it.
Bonus: common interview prep questions and answers are included at the end.
🔹 Why Docker Image Size Matters
- ⏱ Faster builds & CI/CD pipelines
- 📦 Less storage usage in registries
- 🚀 Faster deployments & scaling
- 💰 Lower cloud costs
- 🔒 Smaller attack surface
🔹 Our Starting Point
We started with this basic Dockerfile:
```dockerfile
FROM google/cloud-sdk:latest
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python3", "app.py"]
```
❌ Problems with this approach
- `google/cloud-sdk:latest` is ~3 GB 😱
- Copying everything at once invalidates the Docker cache → slow builds
- No `.dockerignore` → unnecessary files (e.g., `.git`, `__pycache__`, `.vscode/`) got copied
- `pip install` cached dependencies → bloated layers
🔹 Optimized Approach
We switched to a smaller base image and installed only what we need.
✅ Optimized Dockerfile
```dockerfile
FROM python:3.12-slim

WORKDIR /app

# Install the gcloud SDK (instead of using the ~3 GB google/cloud-sdk image)
RUN apt-get update && \
    apt-get install -y apt-transport-https ca-certificates gnupg curl && \
    echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
    curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg && \
    apt-get update && \
    apt-get install -y google-cloud-sdk && \
    rm -rf /var/lib/apt/lists/*

# Copy only requirements first (for caching)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

CMD ["python3", "app.py"]
```
🔹 Why This Works
- **Slim Base Image**: `python:3.12-slim` is just ~124 MB vs `google/cloud-sdk:latest` at ~3 GB. We add only the `gcloud` tools we need (`gsutil`, `gcloud sql`, etc.).
- **Cache-Friendly Layering**: `COPY requirements.txt` comes first, so `pip install` is cached until dependencies change. App code changes don’t trigger a dependency reinstall.
- **`.dockerignore` Optimization**: a `.dockerignore` file prevents unnecessary files from being copied into the image:
  ```
  __pycache__/
  *.pyc
  *.pyo
  *.pyd
  .Python
  .git
  .vscode
  .DS_Store
  ```
  This keeps the build context small and the final image clean.
- **No Pip Cache**: `pip install --no-cache-dir` avoids leaving hundreds of MB in cached wheels.
- **APT Cleanup**: `rm -rf /var/lib/apt/lists/*` keeps layers lean (see the sketch after this list).
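One detail that trips people up: cleanup only shrinks the image if it happens in the same `RUN` instruction as the install, because each `RUN` commits its own immutable layer. A minimal sketch of the difference (`curl` here is just a placeholder package):

```dockerfile
# ❌ No savings: the apt lists are already committed in the first layer;
# the second RUN merely hides them in a later layer.
RUN apt-get update && apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*

# ✅ Savings: install and cleanup share one layer, so the deleted
# files are never committed to the image at all.
RUN apt-get update && apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*
```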
📊 Before vs After
| Version | Size | Notes |
|---|---|---|
| testv1.5.0 | 1.96 GB | Full `google/cloud-sdk`, no cleanup |
| v1.6.9 | 1.36 GB | Slim base, partial optimization |
| testv1.4.1 | 495 MB | Slim base + caching + `.dockerignore` + cleanup |
💡 Result: 75% reduction in size 🚀
- 3x faster builds
- Faster pushes/pulls from Artifact Registry
- Lower cloud storage costs
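If you want to see exactly which instructions carry the weight, `docker history` prints the size of every layer (again, `myapp:slim` is just an example tag):

```bash
# Per-layer sizes make the largest offenders easy to spot
docker history myapp:slim
```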
🔹 Common Interview Questions on Docker Image Optimization
If you’re preparing for DevOps/SRE interviews, expect questions like:
Q1. How do you reduce Docker image size?
- Use slim base images.
- Use multi-stage builds.
- Add a `.dockerignore`.
- Remove build-time dependencies.
- Clean caches (`pip`, `apt`).
Q2. Why copy `requirements.txt` separately before app code?
- To leverage Docker’s layer caching → only reinstall deps when requirements change.
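A minimal sketch of the two orderings (same `requirements.txt`/`app.py` layout as above):

```dockerfile
# ❌ Any code change invalidates the COPY layer and everything after it,
# so dependencies are reinstalled on every build.
COPY . .
RUN pip install -r requirements.txt

# ✅ The pip layer is reused until requirements.txt itself changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
```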
Q3. What’s the role of `.dockerignore`?
- Prevents unnecessary files (like `.git`, logs, caches) from being copied → smaller, cleaner image.
Q4. When should you use multi-stage builds?
- When compiling dependencies (Go, Java, C extensions).
- First stage: build artifacts.
- Second stage: copy only binaries/libs into a minimal runtime image.
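A minimal multi-stage sketch for a Python app with compiled dependencies; it assumes the same `requirements.txt`/`app.py` layout as above, and the `/wheels` path is just a convention:

```dockerfile
# Stage 1: build wheels in a full image that ships compilers
FROM python:3.12 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

# Stage 2: install the prebuilt wheels into a slim runtime image
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/* && rm -rf /wheels
COPY . .
CMD ["python3", "app.py"]
```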
Q5. Why not use `google/cloud-sdk:latest` directly?
- It’s huge (~3 GB).
- Installing only the required gcloud components on `python:slim` keeps the image much smaller.
🚀 Final Thoughts
Optimizing Dockerfiles is not just about saving space — it impacts CI/CD speed, developer productivity, and cloud costs.
By combining slim images, layer caching, .dockerignore, and minimal installs, you can cut image sizes dramatically.