Suvrajeet Banerjee
🐳 From Chaos to Orchestration: Mastering Docker Containerization & Production Deployments [Week-10] 🚀

This is Week 10 of 12 of the free DevOps cohort. In continuation of 🏗️ From Chaos to Orchestration: Mastering Azure DevOps CI/CD Pipelines [Week-9] ⚙️


Introduction 🎯

Before diving into this week's content, let me ask myself some fundamental questions:

Why does containerization matter?

Containerization is the answer to the "works on my machine" problem. It ensures consistency, portability, and reliability across development, testing, and production environments. But here's the real challenge: knowing what Docker does is different from mastering how to build production-grade systems with it.

What problem does Docker solve that virtual machines don't?

VMs are heavy, slow to start, and resource-hungry. Containers are lightweight, start almost instantly, and share the kernel with the host OS. Docker abstracts away the low-level container plumbing (namespaces, cgroups, image layers), making deployment as simple as docker run.

How do we go from a single containerized app to a scalable, production-ready system?

This is where Docker Compose, networking, volumes, healthchecks, and orchestration patterns come into play. Week 10 is exactly this journey.



What is Containerization & Why Docker? 🐳

Understanding Containerization

🔹 Container Definition: A container is a lightweight, standalone, executable package that includes your application, dependencies, runtime, and system tools. It's isolated from the host OS but shares the kernel, making it far more efficient than a virtual machine.

🔹 Docker's Role: Docker is the containerization platform that makes this possible. It provides the following building blocks (a short CLI walkthrough follows the list):

  • Docker Images: Blueprints (recipes) for containers
  • Docker Containers: Running instances of images
  • Docker Registry: Storage for images (Docker Hub)
  • Docker Compose: Multi-container orchestration for local development
  • Docker Networking: Built-in networking for container communication
  • Docker Volumes: Persistent storage management
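
To make these building blocks concrete, here is a minimal CLI tour. The image name my-app, the tag, and the Docker Hub namespace placeholder are illustrative, not taken from the assignments:

# Image: build a blueprint from the Dockerfile in the current directory
docker build -t my-app:1.0 .

# Container: run an instance of that image, mapping host port 8080 to container port 80
docker run -d --name my-app-1 -p 8080:80 my-app:1.0

# Registry: tag and push the image to Docker Hub (requires docker login first)
docker tag my-app:1.0 <your-dockerhub-user>/my-app:1.0
docker push <your-dockerhub-user>/my-app:1.0

# Volumes and networks: persistent storage and name-based container communication
docker volume create my-data
docker network create my-net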

The Evolution: From Physical Servers to Containers

🔸 Traditional Deployment (The Old Way)

  • Buy physical hardware
  • Install OS manually
  • Install dependencies
  • Deploy application
  • Pray nothing breaks
  • Problem: "Works on my machine but not on the server"

🔸 Virtual Machine Era

  • Hypervisor-based VMs
  • Better isolation than bare metal
  • Problem: Heavy resource overhead, slow startup times

🔸 Container Era (Today)

  • Lightweight, process-level isolation
  • Consistent across all environments
  • Fast startup (milliseconds)
  • Optimal resource utilization
  • Solution: True DevOps automation becomes possible



From Theory to Practice 🛠️

Week 10 Assignment Breakdown

This week covered 7 progressive assignment projects, building from fundamental concepts to production-ready systems:

Assignment 43: Cloud VM Bootstrap & Static Website Deployment 🌐

What I Did:

  • 🔸 Launched an Azure VM with cloud-init automation
  • 🔸 Installed Docker via a cloud-init script (infrastructure-as-code approach)
  • 🔸 Created a Dockerfile for Nginx serving static HTML
  • 🔸 Built and deployed a containerized static website
  • 🔸 Exposed port 80 to the internet

What I Learned:

Running docker build -t my-app . && docker run -p 80:80 my-app seemed simple until I realized I hadn't created any Dockerfile. The first error taught me: Dockerfiles are mandatory; they define how to build your image.

Key Takeaway: Infrastructure automation starts with cloud-init; containerization is the next layer.
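
For reference, a minimal sketch of what such a static-site Dockerfile could look like; the ./site directory is an assumed layout, not the actual repo structure:

# Sketch: Nginx serving static HTML (assumes the HTML lives in a local ./site folder)
FROM nginx:alpine
# Replace the default Nginx content with the static site
COPY site/ /usr/share/nginx/html/
EXPOSE 80
# The nginx:alpine base image already runs Nginx in the foreground, so no CMD is needed

# Build and run, as in the assignment:
#   docker build -t my-app .
#   docker run -d -p 80:80 my-app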


Assignment 44: React App Multi-Stage Builds ⚡

What I Did:

  • 🔸 Built a React application
  • 🔸 Created a single-stage Dockerfile (build everything in one layer)
  • 🔸 Measured final image size: 1.38 GB ❌
  • 🔸 Refactored to a multi-stage Dockerfile (separate builder and runtime)
  • 🔸 Final image size: 49.3 MB ✅
  • 🔸 Size reduction: 96.5% 🚀

What I Learned:

Question: Why is my Docker image so huge?

Answer: Build tools, compilers, npm cache, and dev dependencies are baked into the final image. Multi-stage builds fix this.

How Multi-Stage Builds Work:

Stage 1: Builder
├── Install Node.js
├── Install ALL dependencies
├── Run `npm run build`
└── Produces: app/build/ folder

Stage 2: Runtime
├── Start with nginx:alpine (52.8 MB)
├── Copy ONLY app/build/ from Stage 1
├── Discard everything else (Node, npm, cache)
└── Final image: 49.3 MB ✨

Critical Insight: Build-time tools ≠ runtime requirements. Multi-stage builds enforce this discipline.
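
As a concrete illustration, here is a sketch of the kind of multi-stage Dockerfile this refactor produces, assuming a standard React project whose npm run build emits static files into /app/build:

# Stage 1: builder – Node, npm, and every dev dependency live only here
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: runtime – only Nginx plus the built assets make it into the final image
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80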



Assignment 45: Docker Networking & Custom Bridges 🌉

The Problem I Faced:

I tried connecting two containers (backend API + frontend UI) using the default bridge network:

docker run -d --name backend nginx
docker run -it --rm alpine sh
# Inside the container (curl is not in the base alpine image, so install it first):
apk add --no-cache curl
curl http://backend  # ❌ Could not resolve host: backend

Why? The default bridge network doesn't provide DNS-based service discovery. Containers can reach each other by IP but not by name.

What I Did:

  • 🔸 Created a custom bridge network: docker network create my-net
  • 🔸 Attached both containers to this network
  • 🔸 Frontend could now reach backend using its container name: http://backend:80

The Breakthrough:

Question: How does Docker DNS work inside custom networks?

Answer: Docker embeds an internal DNS server in every custom bridge network. When a container tries to resolve a hostname, Docker's DNS intercepts the request and maps it to the container's IP. This is automatic and magical.
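
Putting the whole fix together as a repeatable sequence (standard Docker CLI; the image choices mirror the experiment above):

# 1. Create a custom bridge network with embedded DNS
docker network create my-net

# 2. Start the backend on that network
docker run -d --name backend --network my-net nginx

# 3. Start a throwaway client on the same network and resolve the backend by name
docker run -it --rm --network my-net alpine sh
# Inside the container:
apk add --no-cache curl
curl -s http://backend | head -n 3   # ✅ returns the Nginx welcome page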

Container-to-Container vs Host-to-Container:

Traffic Type | Port Mapping | Example
Container → Container (same network) | ❌ Not needed | curl http://backend:3000
Host/Browser → Container | ✅ Required | docker run -p 8080:3000, then curl http://localhost:8080

Key Concepts Explained (a few commands below illustrate each):

  • 🔹 Default Bridge: No automatic DNS; containers reach each other only by IP
  • 🔹 Custom Bridge: Built-in DNS; containers resolve each other by name
  • 🔹 Host Network: Container shares the host's network stack (no isolation)
  • 🔹 None Network: Container has no network (sandbox mode)
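
A few commands to see each mode in action (standard Docker CLI; my-net is the custom network created earlier, and ip comes from BusyBox inside alpine):

docker network ls                                 # default bridge, host, none, plus custom networks
docker network inspect my-net                     # connected containers, subnet, embedded DNS scope
docker run --rm --network host alpine ip addr     # host mode: container sees the host's interfaces
docker run --rm --network none alpine ip addr     # none mode: only the loopback interface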



Assignment 46: Data Persistence - Bind Mounts vs Volumes 💾

Two Scenarios, Two Different Approaches:

Scenario A: Bind Mounts (Host Directory)

What I did:

  • 🔸 Created host directory: /home/user/nginx-logs
  • 🔸 Ran Nginx container with bind mount: -v /home/user/nginx-logs:/var/log/nginx
  • 🔸 Nginx logs were written directly to the host
  • 🔸 Deleted the container
  • 🔸 Logs still existed on host ✅

# Bind mount example
docker run -d -v /host/path:/container/path nginx

# Logs written to host filesystem
cat /host/path/access.log

Scenario B: Named Volumes (Docker-Managed)

What I did:

  • 🔸 Created named volume: docker volume create shared-data
  • 🔸 Both backend (writer) and frontend (reader) mounted this volume
  • 🔸 Backend wrote a message to the volume
  • 🔸 Frontend read the same message instantly
  • 🔸 Deleted backend container
  • 🔸 Frontend still read the message ✅
  • 🔸 Recreated backend, wrote new message
  • 🔸 Frontend showed updated content ✅

# Named volume example
docker volume create app-data
docker run -v app-data:/data writer-app
docker run -v app-data:/data reader-app

The Critical Learning:

Question: When should I use bind mounts vs volumes?

Answer Guide:

Use Case | Bind Mount | Named Volume
Logs | ✅ Real-time host access | ❌ Hidden in Docker-managed storage
Dev/Hot-Reload | ✅ Edit on host, instant refresh | ❌ Overkill
Database Files | ❌ Risky permissions | ✅ Safer, portable
Shared App Data | ❌ Host-dependent | ✅ Portable across VMs
Production | ❌ Tight coupling | ✅ Cloud-friendly

Bind Mount Risks:

  • 🔴 Tight coupling to the host filesystem layout
  • 🔴 Permission mismatches (uid/gid issues)
  • 🔴 Not portable (C:\data on Windows vs /home/user/data on Linux)
  • 🔴 Accidental modification or deletion on the host

Named Volume Benefits:

  • 🟢 Docker manages storage paths
  • 🟢 Works across different hosts
  • 🟢 Consistent permissions
  • 🟢 Backup-friendly (see the backup sketch below)
  • 🟢 Can use remote drivers (NFS, cloud storage plugins, etc.)
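
One common backup pattern, shown here as a sketch against the app-data volume from the example above (the archive filename is arbitrary):

# Back up: mount the volume read-only into a throwaway container and tar it to the host
docker run --rm \
  -v app-data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/app-data-backup.tar.gz -C /data .

# Restore: same idea in reverse
docker run --rm \
  -v app-data:/data \
  -v "$(pwd)":/backup \
  alpine tar xzf /backup/app-data-backup.tar.gz -C /data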



Assignment 47: Multi-Tier Docker Compose 🏗️

The Challenge: Deploy 3 services (MongoDB, Node.js API, React Frontend) with one command.

What I Did:

version: '3.9'

services:
  database:
    image: mongo:6.0
    networks:
      - backend_net

  backend:
    build: ./backend
    depends_on:
      - database
    networks:
      - backend_net
      - frontend_net

  frontend:
    build: ./frontend
    depends_on:
      - backend
    networks:
      - frontend_net
    ports:
      - "3000:3000"

networks:
  backend_net:   # Private: DB ↔ API
  frontend_net:  # Public: API ↔ UI

The Problem I Encountered:

Backend kept crashing with:

ECONNREFUSED 127.0.0.1:27017 (MongoDB)
ERROR: Cannot connect to database

Root Cause Analysis:

The backend container started before MongoDB was ready. depends_on waits for a container to START, not to be READY.

Solution:

  • 🔹 Added a healthcheck to MongoDB
  • 🔹 Changed depends_on to use condition: service_healthy
  • 🔹 Backend now waited until MongoDB was actually accepting connections
database:
  healthcheck:
    test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
    interval: 10s
    timeout: 5s
    retries: 5

backend:
  depends_on:
    database:
      condition: service_healthy  # ✅ Not just "started"

Network Architecture Lesson:

The setup used two separate networks:

  • 🔹 backend_net: Database + API (private, internal)
  • 🔹 frontend_net: API + Frontend (semi-public)
  • Frontend cannot directly talk to the database (security by design)

Why Separate Networks?

  • 🟢 Security: Reduces the blast radius if the frontend is compromised
  • 🟢 Clarity: Traffic patterns are explicit (a quick verification sketch follows)
  • 🟢 Compliance: Meets requirements like "database not directly accessible from the UI"
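
A quick way to verify that isolation, assuming the service and network names from the compose file above (replace <project> with your Compose project name, typically the directory name):

docker network inspect <project>_backend_net    # should list only database and backend
docker network inspect <project>_frontend_net   # should list only backend and frontend

# Because frontend is not attached to backend_net, the hostname "database"
# does not resolve inside the frontend container, so it cannot reach MongoDB at all.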

Assignment 48: Production Book Review App Deployment 📚

Full-Stack Production Deployment:

  • 🔸 MySQL database with a persistent volume
  • 🔸 Node.js backend API with authentication (JWT)
  • 🔸 Next.js frontend with CORS handling
  • 🔸 All orchestrated with docker-compose

Major Issues & Resolutions:

Issue #1: Frontend Won't Load

docker logs frontend
# ready - started server on 127.0.0.1:3000
# ❌ Only listening on localhost, not accessible externally

Fix:

CMD npm run dev -- -p 3000 -H 0.0.0.0
# ✅ Now listens on all interfaces (0.0.0.0)

Issue #2: CORS Errors

Frontend on 74.225.149.43:3000 couldn't call Backend API on 74.225.149.43:3001.

Root Cause:
The backend's CORS config allowed http://backend:3001 (the internal container name), but the browser sends requests from the frontend's public origin, http://74.225.149.43:3000.

Fix:

ALLOWED_ORIGINS=http://74.225.149.43:3000
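
A quick check that the new origin is honoured, assuming the backend's CORS middleware reflects allowed origins on normal responses; /api/books is a purely hypothetical endpoint, since the real route names aren't shown here:

curl -s -D - -o /dev/null \
  -H "Origin: http://74.225.149.43:3000" \
  http://74.225.149.43:3001/api/books | grep -i access-control-allow-origin
# Expected header: Access-Control-Allow-Origin: http://74.225.149.43:3000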

Issue #3: Database Credential Mismatch

Backend tried logging in as user: pravin but MySQL created user: suvra.

Fix:
Ensured all services used the same credentials:

database:
  environment:
    MYSQL_USER: pravin

backend:
  environment:
    DB_USER: pravin

Assignment 49: Capstone - TheEpicBook Production Deployment 🎭

The Capstone Challenge: Build a production-grade deployment with:

  • ✅ Multi-stage builds (image optimization)
  • ✅ Docker Compose orchestration
  • ✅ Reverse proxy (Nginx)
  • ✅ Health checks & startup ordering
  • ✅ Data persistence & backups
  • ✅ Logging & observability
  • ✅ Cloud deployment on Azure VM
  • ✅ Optional CI/CD pipeline

Architecture Deployed:

┌──────────────────────────────────────────┐
│           Azure VM (Public IP)           │
├──────────────────────────────────────────┤
│  Port 80 → Nginx (Reverse Proxy)         │
├──────────────────────────────────────────┤
│  ┌────────────────────────────────────┐  │
│  │  Docker Compose Stack              │  │
│  ├────────────────────────────────────┤  │
│  │  ✅ epicbook-proxy (Nginx)         │  │
│  │     - Routes /api → backend        │  │
│  │     - Serves frontend assets       │  │
│  │     - CORS configured              │  │
│  ├────────────────────────────────────┤  │
│  │  ✅ epicbook-app (Node.js)         │  │
│  │     - Express API server           │  │
│  │     - Handlebars template engine   │  │
│  │     - Healthcheck monitoring       │  │
│  ├────────────────────────────────────┤  │
│  │  ✅ epicbook-db (MySQL 8.0)        │  │
│  │     - Data persistence             │  │
│  │     - Health monitoring            │  │
│  └────────────────────────────────────┘  │
│                                          │
│  Networks:                               │
│  - frontend_net (proxy ↔ app)            │
│  - backend_net (app ↔ database)          │
│                                          │
│  Volumes:                                │
│  - db_data (MySQL persistence)           │
│  - logs/nginx (proxy access logs)        │
└──────────────────────────────────────────┘

Multi-Stage Dockerfile (Application):

# Stage 1: Builder
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci

COPY . .

# Stage 2: Runtime
FROM node:18-alpine
WORKDIR /app
ENV NODE_ENV=production

COPY package*.json ./
RUN npm ci --only=production

# Copy only necessary files from builder
COPY --from=builder /app/config ./config
COPY --from=builder /app/db ./db
COPY --from=builder /app/models ./models
COPY --from=builder /app/routes ./routes
COPY --from=builder /app/views ./views
COPY --from=builder /app/public ./public
COPY --from=builder /app/server.js ./

EXPOSE 3000
CMD ["node", "server.js"]

Image Sizes Achieved:

  • 🔴 Single-stage: 1.38 GB (bloated)
  • 🟢 Multi-stage: 49.3 MB (lean)
  • Reduction: 96.5% 📉

Reverse Proxy Configuration:

upstream epicbook_app {
    server app:3000;
}

server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://epicbook_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_bypass $http_upgrade;
    }
}

Key Learnings & Challenges 🎓

Challenge #1: Race Conditions in Container Startup

Problem:
Backend tried connecting to database before it was ready.

Solution Pattern:

depends_on:
  db:
    condition: service_healthy  # ← KEY: Wait for readiness, not just start

healthcheck:
  test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
  interval: 10s
  timeout: 5s
  retries: 5

Lesson: depends_on: service_name ≠ "wait until ready". Always add healthchecks.


Challenge #2: Image Size Explosion

Problem:
React + build tools in final image = 1.38 GB monster.

Solution:
Multi-stage builds eliminate dev dependencies from runtime image.

Lesson: Build time ≠ runtime. Separate concerns = lean production images.


Challenge #3: Container-to-Container Communication

Problem:
Frontend couldn't reach backend using http://backend:3000.

Root Cause:
Used default bridge network (no DNS).

Solution:
Custom bridge network with Docker's embedded DNS.

Lesson: Always use custom networks for multi-container apps. Enables service discovery and networking isolation.


Challenge #4: Data Persistence & Volume Management

Problem:
Container deleted = data gone (or bind mount permissions exploded).

Solution:
Named volumes managed by Docker.

Lesson: Volumes > Bind Mounts for production. Volumes are portable, secure, and cloud-friendly.


Challenge #5: CORS & API Communication

Problem:
Frontend on public IP couldn't call backend API (CORS blocked).

Solution:

backend:
  environment:
    ALLOWED_ORIGINS: http://74.225.149.43:3000

Lesson: Always configure CORS for the actual public IP/domain the browser uses, not the internal container name or localhost.


Production-Ready Deployment Patterns 🏢

Pattern #1: Health Checks for Reliability

healthcheck:
  test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/"]
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 40s  # Grace period before first check

Why?

  • 🟢 Prevents traffic from being routed to crashed containers
  • 🟢 Lets orchestrators (Docker Swarm, Kubernetes) replace unhealthy containers automatically
  • 🟢 Clear visibility into system health (see the inspection commands below)
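
Two standard CLI commands for checking that visibility at runtime (the container name app is illustrative):

docker inspect --format '{{ .State.Health.Status }}' app   # starting | healthy | unhealthy
docker ps --filter health=unhealthy                        # list any container failing its check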

Pattern #2: Named Volumes for Persistence

volumes:
  db_data:
    driver: local

services:
  database:
    volumes:
      - db_data:/var/lib/mysql

Why?

  • 🟢 Data survives container restarts
  • 🟢 Easy backup & restore
  • 🟢 Portable across hosts

Pattern #3: Dual-Network Architecture

networks:
  backend_net:
    driver: bridge
  frontend_net:
    driver: bridge

services:
  database:
    networks:
      - backend_net

  backend:
    networks:
      - backend_net
      - frontend_net

  frontend:
    networks:
      - frontend_net

Why?

  • 🟢 Security isolation (frontend can't directly access the DB)
  • 🟢 Clear traffic patterns
  • 🟢 Compliance with zero-trust networking

Pattern #4: Multi-Stage Builds for Optimization

# Stage 1: build with the full Node image (dev dependencies live only here)
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: lean runtime base – only the build output is copied over
FROM node:18-alpine
COPY --from=builder /app/dist /app

Why?

  • 🟢 96.5% smaller images (1.38 GB → 49.3 MB; size-comparison commands below)
  • 🟢 Faster CI/CD deployments
  • 🟢 Reduced attack surface (no build tools in production)
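
A simple way to see the difference locally, assuming you keep the single-stage Dockerfile around as Dockerfile.single (an illustrative name):

docker build -t myapp:single -f Dockerfile.single .
docker build -t myapp:multi  -f Dockerfile .
docker images myapp --format 'table {{.Tag}}\t{{.Size}}'   # compare the two image sizes side by side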

Pattern #5: Reverse Proxy for Routing & Security

server {
    listen 80;

    location / {
        proxy_pass http://app:3000;
    }

    location /api {
        proxy_pass http://backend:3001;
    }
}

Why?

  • 🟢 Single entry point (port 80)
  • 🟢 Hides internal container ports
  • 🟢 Easy to add SSL/TLS later
  • 🟢 Rate limiting & security policies (see the sketch below)
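
As a sketch of the kind of policies this enables (standard Nginx directives, not part of the original capstone config; the limit_req_zone line belongs in the http context, e.g. at the top of a conf.d include):

# Throttle API clients to 10 requests/second per IP, with a small burst allowance
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 80;

    location /api {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://backend:3001;
    }
}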



Reflection & Key Takeaways 🧠

The Journey from Week 0 to Week 10

Week 10 represents the culmination of building systems that don't just work, but work reliably at scale.

Questions I Asked Myself:

Q1: What's the difference between a container that works locally and one that works in production?

A: Isolation, persistence, healthchecks, and monitoring. Local containers are toys; production containers are infrastructure.

Q2: How do I design systems that recover from failures automatically?

A: Health checks, restart policies, named volumes, and proper networking. Docker enables this automation.

Q3: What's the biggest win from containerization?

A: Reproducibility. Same image runs identically on my laptop, staging environment, and production. "Works on my machine" becomes invalid.


Numbers That Matter

Metric | Impact
96.5% image size reduction | Faster deploys, less storage cost
28-second build time (multi-stage) | Quick feedback loops
100% healthcheck pass rate | Zero "connection refused" errors
Five 9's uptime (via healthchecks) | Production-grade reliability

What's Next?

Week 10 took me from Docker fundamentals to advanced orchestration. Next logical steps:

🔹 Kubernetes: Multi-host orchestration, auto-scaling, rolling updates

🔹 CI/CD Integration: Automated builds, tests, and deployments

🔹 Observability: Centralized logging (ELK), metrics (Prometheus), tracing

🔹 Security: Image scanning, secret management, network policies


Conclusion 🎬

Docker is not just a tool; it's a mindset shift toward infrastructure-as-code, reproducibility, and operational excellence.

Week 10 taught me:

  • ✅ Containerization solves the reproducibility problem
  • ✅ Multi-stage builds are non-negotiable for production
  • ✅ Networks & volumes enable reliable multi-container systems
  • ✅ Health checks & startup ordering prevent cascading failures
  • ✅ Reverse proxies provide security & flexibility

The real power of Docker isn't that it makes deployment easy; it's that it enables automation, consistency, and confidence at every stage of the software lifecycle.




Related Hashtags:

#Docker #Containerization #DevOps #Production #Kubernetes #Microservices #CloudNative #AWS #Azure #CI/CD #Infrastructure #LearningInPublic #TechBlog #SoftwareEngineering

