This is Week 10 of 12 of the free DevOps cohort. In continuation of ποΈ From Chaos to Orchestration: Mastering Azure DevOps CI/CD Pipelines [Week-9] βοΈ
Introduction π―
Before diving into this week's content, let me ask myself some fundamental questions:
Why does containerization matter?
Containerization is the answer to "works on my machine" problem. It ensures consistency, portability, and reliability across development, testing, and production environments. But here's the real challenge: knowing what Docker does is different from mastering how to build production-grade systems with it.
What problem does Docker solve that virtual machines don't?
VMs are heavy, slow to start, and resource-hungry. Containers are lightweight, spinning up in milliseconds, sharing the kernel with the host OS. Docker abstracts away the complexity of container orchestration, making deployment as simple as docker run.
How do we go from a single containerized app to a scalable, production-ready system?
This is where Docker Compose, networking, volumes, healthchecks, and orchestration patterns come into play. Week 10 is exactly this journey.
What is Containerization & Why Docker? π³
Understanding Containerization
πΉ Container Definition: A container is a lightweight, standalone, executable package that includes your application, dependencies, runtime, and system tools. It's isolated from the host OS but shares the kernel, making it far more efficient than a virtual machine.
πΉ Docker's Role: Docker is the containerization platform that makes this possible. It provides:
- Docker Images: Blueprints (recipes) for containers
- Docker Containers: Running instances of images
- Docker Registry: Storage for images (Docker Hub)
- Docker Compose: Multi-container orchestration for local development
- Docker Networking: Built-in networking for container communication
- Docker Volumes: Persistent storage management
The Evolution: From Physical Servers to Containers
πΈ Traditional Deployment (The Old Way)
- Buy physical hardware
- Install OS manually
- Install dependencies
- Deploy application
- Pray nothing breaks
- Problem: "Works on my machine but not on the server"
πΈ Virtual Machine Era
- Hypervisor-based VMs
- Better isolation than bare metal
- Problem: Heavy resource overhead, slow startup times
πΈ Container Era (Today)
- Lightweight, process-level isolation
- Consistent across all environments
- Fast startup (milliseconds)
- Optimal resource utilization
- Solution: True DevOps automation becomes possible
From Theory to Practice π οΈ
Week 10 Assignment Breakdown
This week covered 7 progressive assignment projects, building from fundamental concepts to production-ready systems:
Assignment 43: Cloud VM Bootstrap & Static Website Deployment π
What I Did:
- πΈ Launched an Azure VM with cloud-init automation
- πΈ Installed Docker via cloud-init script (infrastructure-as-code approach)
- πΈ Created a Dockerfile for Nginx serving static HTML
- πΈ Built and deployed a containerized static website
- πΈ Exposed port 80 to the internet
What I Learned:
Running
docker build -t my-app . && docker run -p 80:80 my-appseemed simple until I realized I hadn't created anyDockerfile. The first error taught me: Dockerfiles are mandatory β they define how to build your image.
Key Takeaway: Infrastructure automation starts with cloud-init; containerization is the next layer.
Assignment 44: React App Multi-Stage Builds β‘
What I Did:
- πΈ Built a React application
- πΈ Created a single-stage Dockerfile (build everything in one layer)
- πΈ Measured final image size: 1.38 GB β
- πΈ Refactored to multi-stage Dockerfile (separate builder and runtime)
- πΈ Final image size: 49.3 MB β
- πΈ Size reduction: 96.5% π
What I Learned:
Question: Why is my Docker image so huge?
Answer: Build tools, compilers, npm cache, and dev dependencies are baked into the final image. Multi-stage builds fix this.
How Multi-Stage Builds Work:
Stage 1: Builder
βββ Install Node.js
βββ Install ALL dependencies
βββ Run `npm run build`
βββ Produces: app/build/ folder
Stage 2: Runtime
βββ Start with Nginx-alpine (52.8 MB)
βββ Copy ONLY app/build/ from Stage 1
βββ Discard everything else (Node, npm, cache)
βββ Final image: 49.3 MB β¨
Critical Insight: Build-time tools β Runtime requirements. Multi-stage builds enforce this discipline.
Assignment 45: Docker Networking & Custom Bridges π
The Problem I Faced:
I tried connecting two containers (backend API + frontend UI) using the default bridge network:
docker run -d --name backend nginx
docker run -it --rm alpine sh
# Inside the container:
curl http://backend # β Could not resolve host: backend
Why? The default bridge network doesn't provide DNS-based service discovery. Containers can reach each other by IP but not by name.
What I Did:
- πΈ Created a custom bridge network:
docker network create my-net
- πΈ Attached both containers to this network
- πΈ Frontend could now reach backend using its container name:
http://backend:80
The Breakthrough:
Question: How does Docker DNS work inside custom networks?
Answer: Docker embeds an internal DNS server in every custom bridge network. When a container tries to resolve a hostname, Docker's DNS intercepts the request and maps it to the container's IP. This is automatic and magical.
Container-to-Container vs Host-to-Container:
| Traffic Type | Port Mapping | Example |
|---|---|---|
| Container β Container (same network) | β NOT needed | curl http://backend:3000 |
| Host/Browser β Container | β Required |
docker run -p 8080:3000 then curl http://localhost:8080
|
Key Concepts Explained:
- πΉ Default Bridge: No DNS, containers isolated from names
- πΉ Custom Bridge: Built-in DNS, containers resolve by name
- πΉ Host Network: Container shares host's network (no isolation)
- πΉ None Network: Container has no network (sandbox mode)
Assignment 46: Data Persistence - Bind Mounts vs Volumes πΎ
Two Scenarios, Two Different Approaches:
Scenario A: Bind Mounts (Host Directory)
What I did:
- πΈ Created host directory:
/home/user/nginx-logs - πΈ Ran Nginx container with bind mount:
-v /home/user/nginx-logs:/var/log/nginx - πΈ Nginx logs were written directly to the host
- πΈ Deleted the container
- πΈ Logs still existed on host β
# Bind mount example
docker run -d -v /host/path:/container/path nginx
# Logs written to host filesystem
cat /host/path/access.log
Scenario B: Named Volumes (Docker-Managed)
What I did:
- πΈ Created named volume:
docker volume create shared-data - πΈ Both backend (writer) and frontend (reader) mounted this volume
- πΈ Backend wrote a message to the volume
- πΈ Frontend read the same message instantly
- πΈ Deleted backend container
- πΈ Frontend still read the message β
- πΈ Recreated backend, wrote new message
- πΈ Frontend showed updated content β
# Named volume example
docker volume create app-data
docker run -v app-data:/data writer-app
docker run -v app-data:/data reader-app
The Critical Learning:
Question: When should I use bind mounts vs volumes?
Answer Guide:
| Use Case | Bind Mount | Named Volume |
|---|---|---|
| Logs | β Real-time host access | β Hidden in Docker storage |
| Dev/Hot-Reload | β Edit on host, instant refresh | β Overkill |
| Database Files | β Risky permissions | β Safer, portable |
| Shared App Data | β Host-dependent | β Portable across VMs |
| Production | β Tight coupling | β Cloud-friendly |
Bind Mounts Risks:
- π΄ Tight coupling to host filesystem layout
- π΄ Permission mismatches (uid/gid issues)
- π΄ Not portable (C:\data on Windows vs /home/user/data on Linux)
- π΄ Accidental modification or deletion on host
Named Volumes Benefits:
- π’ Docker manages storage paths
- π’ Works across different hosts
- π’ Consistent permissions
- π’ Backup-friendly
- π’ Can use remote drivers (NFS, S3, etc.)
Assignment 47: Multi-Tier Docker Compose ποΈ
The Challenge: Deploy 3 services (MongoDB, Node.js API, React Frontend) with one command.
What I Did:
version: '3.9'
services:
database:
image: mongo:6.0
networks:
- backend_net
backend:
build: ./backend
depends_on:
- database
networks:
- backend_net
- frontend_net
frontend:
build: ./frontend
depends_on:
- backend
networks:
- frontend_net
ports:
- "3000:3000"
networks:
backend_net: # Private: DB β API
frontend_net: # Public: API β UI
The Problem I Encountered:
Backend kept crashing with:
ECONNREFUSED 127.0.0.1:27017 (MongoDB)
ERROR: Cannot connect to database
Root Cause Analysis:
Question: Backend container started before MongoDB was ready.
dependsOnwaits for container to START, not to be READY.
Solution:
- πΉ Added healthcheck to MongoDB
- πΉ Changed
dependsOnto:condition: service_healthy - πΉ Backend waited until MongoDB was actually accepting connections
database:
healthcheck:
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
interval: 10s
timeout: 5s
retries: 5
backend:
depends_on:
database:
condition: service_healthy # β
Not just "started"
Network Architecture Lesson:
The setup used two separate networks:
- πΉ backend_net: Database + API (private, internal)
- πΉ frontend_net: API + Frontend (semi-public)
- Frontend cannot directly talk to database (security by design)
Why Separate Networks?
- π’ Security: Reduces blast radius if frontend is compromised
- π’ Clarity: Traffic patterns are explicit
- π’ Compliance: Meets requirements like "database not directly accessible from UI"
Assignment 48: Production Book Review App Deployment π
Full-Stack Production Deployment:
- πΈ MySQL database with persistent volume
- πΈ Node.js backend API with authentication (JWT)
- πΈ Next.js frontend with CORS handling
- πΈ All orchestrated with docker-compose
Major Issues & Resolutions:
Issue #1: Frontend Won't Load
docker logs frontend
# ready - started server on 127.0.0.1:3000
# β Only listening on localhost, not accessible externally
Fix:
CMD npm run dev -- -p 3000 -H 0.0.0.0
# β
Now listens on all interfaces (0.0.0.0)
Issue #2: CORS Errors
Frontend on 74.225.149.43:3000 couldn't call Backend API on 74.225.149.43:3001.
Root Cause:
Backend CORS config allowed http://backend:3001 (internal name) but browser sends from public IP.
Fix:
ALLOWED_ORIGINS=http://74.225.149.43:3001
Issue #3: Database Credential Mismatch
Backend tried logging in as user: pravin but MySQL created user: suvra.
Fix:
Ensured all services used the same credentials:
database:
environment:
MYSQL_USER: pravin
backend:
environment:
DB_USER: pravin
Assignment 49: Capstone - TheEpicBook Production Deployment π
The Capstone Challenge: Build a production-grade deployment with:
- β Multi-stage builds (image optimization)
- β Docker Compose orchestration
- β Reverse proxy (Nginx)
- β Health checks & startup ordering
- β Data persistence & backups
- β Logging & observability
- β Cloud deployment on Azure VM
- β Optional CICD pipeline
Architecture Deployed:
βββββββββββββββββββββββββββββββββββββββββββ
β Azure VM (Public IP) β
βββββββββββββββββββββββββββββββββββββββββββ€
β Port 80 β Nginx (Reverse Proxy) β
βββββββββββββββββββββββββββββββββββββββββββ€
β ββββββββββββββββββββββββββββββββββββ β
β β Docker Compose Stack β β
β ββββββββββββββββββββββββββββββββββββ€ β
β β β
epicbook-proxy (Nginx) β β
β β - Routes /api β backend β β
β β - Serves frontend assets β β
β β - CORS configured β β
β ββββββββββββββββββββββββββββββββββββ€ β
β β β
epicbook-app (Node.js) β β
β β - Express API server β β
β β - Handlebars template engine β β
β β - Healthcheck monitoring β β
β ββββββββββββββββββββββββββββββββββββ€ β
β β β
epicbook-db (MySQL 8.0) β β
β β - Data persistence β β
β β - Health monitoring β β
β ββββββββββββββββββββββββββββββββββββ β
β β
β Networks: β
β - frontend_net (proxy β app) β
β - backend_net (app β database) β
β β
β Volumes: β
β - db_data (MySQL persistence) β
β - logs/nginx (proxy access logs) β
βββββββββββββββββββββββββββββββββββββββββββ
Multi-Stage Dockerfile (Application):
# Stage 1: Builder
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# Stage 2: Runtime
FROM node:18-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --only=production
# Copy only necessary files from builder
COPY --from=builder /app/config ./config
COPY --from=builder /app/db ./db
COPY --from=builder /app/models ./models
COPY --from=builder /app/routes ./routes
COPY --from=builder /app/views ./views
COPY --from=builder /app/public ./public
COPY --from=builder /app/server.js ./
EXPOSE 3000
CMD ["node", "server.js"]
Image Sizes Achieved:
- π΄ Single-stage: 1.38 GB (bloated)
- π’ Multi-stage: 49.3 MB (lean)
- Reduction: 96.5% π
Reverse Proxy Configuration:
upstream epicbook_app {
server app:3000;
}
server {
listen 80;
server_name _;
location / {
proxy_pass http://epicbook_app;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cache_bypass $http_upgrade;
}
}
Key Learnings & Challenges π
Challenge #1: Race Conditions in Container Startup
Problem:
Backend tried connecting to database before it was ready.
Solution Pattern:
depends_on:
db:
condition: service_healthy # β KEY: Wait for readiness, not just start
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
interval: 10s
timeout: 5s
retries: 5
Lesson: depends_on: service_name β "wait until ready". Always add healthchecks.
Challenge #2: Image Size Explosion
Problem:
React + build tools in final image = 1.38 GB monster.
Solution:
Multi-stage builds eliminate dev dependencies from runtime image.
Lesson: Build time β runtime. Separate concerns = lean production images.
Challenge #3: Container-to-Container Communication
Problem:
Frontend couldn't reach backend using http://backend:3000.
Root Cause:
Used default bridge network (no DNS).
Solution:
Custom bridge network with Docker's embedded DNS.
Lesson: Always use custom networks for multi-container apps. Enables service discovery and networking isolation.
Challenge #4: Data Persistence & Volume Management
Problem:
Container deleted = data gone (or bind mount permissions exploded).
Solution:
Named volumes managed by Docker.
Lesson: Volumes > Bind Mounts for production. Volumes are portable, secure, and cloud-friendly.
Challenge #5: CORS & API Communication
Problem:
Frontend on public IP couldn't call backend API (CORS blocked).
Solution:
backend:
environment:
ALLOWED_ORIGINS: http://74.225.149.43:3001
Lesson: Always configure CORS for the actual public IP/domain, not localhost.
Production-Ready Deployment Patterns π’
Pattern #1: Health Checks for Reliability
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s # Grace period before first check
Why?
- π’ Prevents traffic to crashed containers
- π’ Enables automatic container restart (orchestrators like Docker)
- π’ Clear visibility into system health
Pattern #2: Named Volumes for Persistence
volumes:
db_data:
driver: local
services:
database:
volumes:
- db_data:/var/lib/mysql
Why?
- π’ Data survives container restarts
- π’ Easy backup & restore
- π’ Portable across hosts
Pattern #3: Dual-Network Architecture
networks:
backend_net:
driver: bridge
frontend_net:
driver: bridge
services:
database:
networks:
- backend_net
backend:
networks:
- backend_net
- frontend_net
frontend:
networks:
- frontend_net
Why?
- π’ Security isolation (frontend can't directly access DB)
- π’ Clear traffic patterns
- π’ Compliance with zero-trust networking
Pattern #4: Multi-Stage Builds for Optimization
FROM node:18 AS builder
RUN npm install
RUN npm run build
FROM node:18-alpine # β Lean runtime base
COPY --from=builder /app/dist /app
Why?
- π’ 96.5% smaller images (1.38GB β 49.3MB)
- π’ Faster CI/CD deployments
- π’ Reduced attack surface (no build tools in production)
Pattern #5: Reverse Proxy for Routing & Security
server {
listen 80;
location / {
proxy_pass http://app:3000;
}
location /api {
proxy_pass http://backend:3001;
}
}
Why?
- π’ Single entry point (localhost:80)
- π’ Hide internal container ports
- π’ Easy to add SSL/TLS later
- π’ Rate limiting & security policies
Reflection & Key Takeaways π§
The Journey from Week 0 to Week 10
Week 10 represents the culmination of building systems that don't just work, but work reliably at scale.
Questions I Asked Myself:
Q1: What's the difference between a container that works locally and one that works in production?
A: Isolation, persistence, healthchecks, and monitoring. Local containers are toys; production containers are infrastructure.
Q2: How do I design systems that recover from failures automatically?
A: Health checks, restart policies, named volumes, and proper networking. Docker enables this automation.
Q3: What's the biggest win from containerization?
A: Reproducibility. Same image runs identically on my laptop, staging environment, and production. "Works on my machine" becomes invalid.
Numbers That Matter
| Metric | Impact |
|---|---|
| 96.5% image size reduction | Faster deploys, less storage cost |
| 28 seconds build time (multi-stage) | Quick feedback loops |
| 100% healthcheck pass rate | Zero "connection refused" errors |
| Five 9's uptime (via healthchecks) | Production-grade reliability |
What's Next?
Week 10 taught me Docker fundamentals to advanced orchestration. Next logical steps:
πΉ Kubernetes: Multi-host orchestration, auto-scaling, rolling updates
πΉ CICD Integration: Automated builds, tests, and deployments
πΉ Observability: Centralized logging (ELK), metrics (Prometheus), tracing
πΉ Security: Image scanning, secret management, network policies
Conclusion π¬
Docker is not just a toolβit's a mindset shift toward infrastructure-as-code, reproducibility, and operational excellence.
Week 10 taught me:
- β Containerization solves the reproducibility problem
- β Multi-stage builds are non-negotiable for production
- β Networks & volumes enable reliable multi-container systems
- β Health checks & startup ordering prevent cascading failures
- β Reverse proxies provide security & flexibility
The real power of Docker isn't that it makes deployment easyβit's that it enables automation, consistency, and confidence at every stage of the software lifecycle.
Related Hashtags:
#Docker #Containerization #DevOps #Production #Kubernetes #Microservices #CloudNative #AWS #Azure #CI/CD #Infrastructure #LearningInPublic #TechBlog #SoftwareEngineering







Top comments (0)