David Nwosu

Part 3: Dependency Hell - Why Docker Exists

Series: From "Just Put It on a Server" to Production DevOps

Reading time: 14 minutes

Level: Beginner to Intermediate


The "Works on My Machine" Problem

It's Monday morning. Your coworker tries to deploy a critical bug fix to production.

They SSH into the server, pull the latest code, and restart the app with PM2.

The app crashes.

Error: Cannot find module 'pg'

"That's weird," they say. "It works on my machine."

They run npm install. Still crashes.

Error: The module '/opt/sspp/node_modules/bcrypt/...' was compiled against a different Node.js version

Now they're rebuilding native modules. Still failing.

After 90 minutes of debugging, they discover:

  • Production runs Node 16.x (they have 18.x)
  • Production has a different OpenSSL version (native module incompatibility)
  • Production runs PostgreSQL 14, but the code uses version 15 features
  • Someone manually edited files on the server (never committed to git)

The bug fix still isn't deployed. Users are angry.

This is dependency hell, and it kills productivity.


What Are Containers?

Containers solve the "works on my machine" problem by packaging your entire runtime environment:

  • Your code
  • All dependencies (node_modules, system libraries)
  • The exact runtime (specific Node.js version)
  • System tools (curl, git, whatever you need)

Everything your app needs to run, bundled into a single, portable package called a container image.

Containers vs Virtual Machines

Virtual Machines:

┌─────────────────────────────────┐
│         Application             │
├─────────────────────────────────┤
│       Node.js + Dependencies    │
├─────────────────────────────────┤
│      Guest OS (Ubuntu)          │  ← Full OS copy
├─────────────────────────────────┤
│         Hypervisor              │  ← Virtualization layer
├─────────────────────────────────┤
│      Host OS (Linux)            │
├─────────────────────────────────┤
│         Hardware                │
└─────────────────────────────────┘

Containers:

┌─────────────────────────────────┐
│         Application             │
├─────────────────────────────────┤
│       Node.js + Dependencies    │
├─────────────────────────────────┤
│    Container Runtime (Docker)   │  ← Lightweight isolation
├─────────────────────────────────┤
│      Host OS (Linux)            │
├─────────────────────────────────┤
│         Hardware                │
└─────────────────────────────────┘

Key Differences:

Aspect         Virtual Machine             Container
Size           GBs (full OS)               MBs (just your app)
Startup        Minutes                     Seconds
Isolation      Strong (separate kernel)    Process-level
Overhead       High (full OS per VM)       Minimal
Portability    Moderate                    High

The magic: Containers share the host OS kernel but isolate everything else.
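
You can see the kernel sharing for yourself: a container reports the host's kernel version even though its userland comes from a completely different distro. A quick check (once Docker is installed, which we do in the next section):

# On the host
uname -r

# Inside a minimal Alpine container: same kernel, different distro
docker run --rm alpine uname -r
docker run --rm alpine cat /etc/os-release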


Installing Docker

On your Linode server:

# Update packages
apt update

# Install prerequisites
apt install -y apt-transport-https ca-certificates curl software-properties-common

# Add Docker's official GPG key (apt-key is deprecated on newer Ubuntu releases)
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc

# Add Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list

# Install Docker
apt update
apt install -y docker-ce docker-ce-cli containerd.io

# Start Docker
systemctl start docker
systemctl enable docker

# Verify
docker --version
docker run hello-world

Output:

Hello from Docker!
This message shows that your installation appears to be working correctly.

Building Our First Container: The API Service

Step 1: Create a Dockerfile

A Dockerfile is a recipe for building a container image.

cd /opt/sspp/services/api
nano Dockerfile
# Stage 1: Builder
FROM node:18-alpine AS builder

# Enable pnpm
RUN corepack enable && corepack prepare pnpm@latest --activate

WORKDIR /app

# Copy dependency files
COPY package.json pnpm-lock.yaml* ./

# Install dependencies
RUN pnpm install --frozen-lockfile

# Copy source code
COPY . .

# Build TypeScript to JavaScript
RUN pnpm run build

# Stage 2: Production
FROM node:18-alpine

# Enable pnpm
RUN corepack enable && corepack prepare pnpm@latest --activate

WORKDIR /app

# Copy dependency files
COPY package.json pnpm-lock.yaml* ./

# Install ONLY production dependencies
RUN pnpm install --prod --frozen-lockfile

# Copy built application from builder stage
COPY --from=builder /app/dist ./dist

# Expose port
EXPOSE 3000

# Run the app
CMD ["pnpm", "run", "start:prod"]

Let's break this down:

Multi-Stage Build

We use two stages to keep the final image small:

  1. Builder stage: Has dev dependencies, compiles TypeScript
  2. Production stage: Only runtime dependencies, no build tools

Why? The final image is 50-70% smaller.
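
Once we've built the image (Step 2 below), you can see exactly where the size goes; docker history prints every layer with its size, which makes the multi-stage savings easy to verify:

# Layer-by-layer size breakdown of the final image
docker history sspp-api:latest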

Base Image: node:18-alpine

  • node:18 = Node.js version 18
  • alpine = Minimal Linux distro (~5MB vs ~100MB for Ubuntu-based)

WORKDIR

Sets the working directory inside the container to /app.

COPY

Copies files from your local filesystem into the image.

COPY package.json pnpm-lock.yaml* ./

The * makes pnpm-lock.yaml optional (if it doesn't exist, no error).

RUN

Executes commands during image build:

RUN pnpm install --frozen-lockfile

--frozen-lockfile ensures exact dependency versions (reproducible builds).

EXPOSE

Documents that the container listens on port 3000 (doesn't actually publish it).

CMD

The command to run when the container starts:

CMD ["pnpm", "run", "start:prod"]
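
A side note on the square brackets: that's the exec form of CMD, which runs the command directly instead of wrapping it in a shell. The practical difference shows up during shutdown:

# Exec form: the command runs directly and receives stop signals (SIGTERM)
CMD ["pnpm", "run", "start:prod"]

# Shell form: wrapped in /bin/sh -c, so the shell receives the signals instead
CMD pnpm run start:prod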

Step 2: Build the Image

docker build -t sspp-api:latest .

What happens:

  1. Docker reads the Dockerfile
  2. Pulls the node:18-alpine base image (if not cached)
  3. Runs each instruction (RUN, COPY, etc.)
  4. Creates layers (each instruction = one layer)
  5. Tags the final image as sspp-api:latest

This takes 2-5 minutes the first time. Subsequent builds are faster (cached layers).

Output:

[+] Building 123.4s (17/17) FINISHED
 => [internal] load build definition from Dockerfile
 => [internal] load .dockerignore
 => [builder 1/6] FROM docker.io/library/node:18-alpine
 => [builder 2/6] RUN corepack enable && corepack prepare pnpm@latest --activate
 => [builder 3/6] COPY package.json pnpm-lock.yaml* ./
 => [builder 4/6] RUN pnpm install --frozen-lockfile
 => [builder 5/6] COPY . .
 => [builder 6/6] RUN pnpm run build
 => [stage-1 2/5] RUN corepack enable && corepack prepare pnpm@latest --activate
 => [stage-1 3/5] COPY package.json pnpm-lock.yaml* ./
 => [stage-1 4/5] RUN pnpm install --prod --frozen-lockfile
 => [stage-1 5/5] COPY --from=builder /app/dist ./dist
 => exporting to image
 => => naming to docker.io/library/sspp-api:latest

Verify the image:

docker images

REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
sspp-api     latest    a1b2c3d4e5f6   30 seconds ago   185MB

Step 3: Run the Container

docker run -d \
  --name sspp-api \
  -p 3000:3000 \
  -e NODE_ENV=production \
  -e DB_HOST=172.17.0.1 \
  -e DB_PORT=5432 \
  -e DB_NAME=sales_signals \
  -e DB_USER=sspp_user \
  -e DB_PASSWORD=sspp_password \
  -e REDIS_HOST=172.17.0.1 \
  -e REDIS_PORT=6379 \
  -e ELASTICSEARCH_URL=http://172.17.0.1:9200 \
  sspp-api:latest

Flags explained:

  • -d = Detached (run in background)
  • --name sspp-api = Container name (for easy reference)
  • -p 3000:3000 = Port mapping (host:container)
  • -e KEY=value = Environment variables
  • sspp-api:latest = Image to run

What's 172.17.0.1? That's the Docker bridge network gateway, which is how containers reach the host machine's services (PostgreSQL, Redis).
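
172.17.0.1 is the default on most installs, but you can confirm the gateway on your own server instead of assuming it:

# Ask Docker for the bridge network's gateway
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'

# Or check the docker0 interface on the host
ip addr show docker0

Note that PostgreSQL and Redis must be listening on that interface (not just 127.0.0.1) for containers to reach them.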

Check if it's running:

docker ps

CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS                    NAMES
f8a9b1c2d3e4   sspp-api:latest    "docker-entrypoint..."   10 seconds ago   Up 9 seconds    0.0.0.0:3000->3000/tcp   sspp-api

Test it:

curl http://localhost:3000/api/v1/health

Output:

{
  "status": "ok",
  "timestamp": "2025-12-22T12:00:00.000Z"
}

🎉 Your containerized API is running!


Step 4: View Logs

docker logs sspp-api

# Live tail
docker logs -f sspp-api

# Last 50 lines
docker logs --tail 50 sspp-api

Building the Worker Container

Same process for the worker service:

cd /opt/sspp/services/worker
nano Dockerfile
FROM node:18-alpine AS builder

RUN corepack enable && corepack prepare pnpm@latest --activate

WORKDIR /app

COPY package.json pnpm-lock.yaml* ./
RUN pnpm install --frozen-lockfile

COPY . .
RUN pnpm run build

FROM node:18-alpine

RUN corepack enable && corepack prepare pnpm@latest --activate

WORKDIR /app

COPY package.json pnpm-lock.yaml* ./
RUN pnpm install --prod --frozen-lockfile

COPY --from=builder /app/dist ./dist

CMD ["pnpm", "start"]

Build and run:

docker build -t sspp-worker:latest .

docker run -d \
  --name sspp-worker \
  -e NODE_ENV=production \
  -e DB_HOST=172.17.0.1 \
  -e DB_PORT=5432 \
  -e DB_NAME=sales_signals \
  -e DB_USER=sspp_user \
  -e DB_PASSWORD=sspp_password \
  -e REDIS_HOST=172.17.0.1 \
  -e REDIS_PORT=6379 \
  -e ELASTICSEARCH_URL=http://172.17.0.1:9200 \
  -e QUEUE_NAME=sales-events \
  sspp-worker:latest

Check status:

docker ps

CONTAINER ID   IMAGE                  COMMAND       CREATED          STATUS          PORTS                    NAMES
f8a9b1c2d3e4   sspp-api:latest        "..."         5 minutes ago    Up 5 minutes    0.0.0.0:3000->3000/tcp   sspp-api
a1b2c3d4e5f6   sspp-worker:latest     "..."         10 seconds ago   Up 9 seconds                             sspp-worker

What We Just Accomplished

1. Reproducible Builds

Anyone can build the exact same image:

git clone https://github.com/daviesbrown/sspp
cd sspp/services/api
docker build -t sspp-api:latest .

Same code + same Dockerfile = the same image, every time. (For strict reproducibility, pin the base image and pnpm versions instead of relying on latest; see the tips near the end.)

2. Isolated Dependencies

Each container has its own:

  • Node.js version
  • npm/pnpm version
  • System libraries
  • Environment variables

No more version conflicts.
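
You can verify this from the host. The container's Node.js version comes from the image, regardless of what (if anything) is installed on the server:

# Node version inside the running container
docker exec sspp-api node --version

# Node version on the host (may be different, or not installed at all)
node --version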

3. Portable

Build on your Mac, run on Linux. Build on dev, run on prod. It's the same image. (One caveat: match the CPU architecture; on an Apple Silicon Mac, build with --platform linux/amd64 for an x86 server.)

4. Lightweight

docker images

REPOSITORY      TAG       SIZE
sspp-api        latest    185MB
sspp-worker     latest    178MB

Compare to a full Ubuntu VM: 2-5GB.


Docker Layer Caching

Docker is smart about rebuilding. Each instruction creates a layer:

FROM node:18-alpine              # Layer 1 (cached if unchanged)
COPY package.json ./             # Layer 2 (cached if files unchanged)
RUN pnpm install                 # Layer 3 (cached if layer 2 unchanged)
COPY . .                         # Layer 4 (cached if files unchanged)
RUN pnpm build                   # Layer 5 (cached if layer 4 unchanged)

Order matters! Put frequently-changing files (source code) after rarely-changing files (dependencies).

Good:

COPY package.json ./     # Changes rarely
RUN pnpm install         # Cached most of the time
COPY . .                 # Changes often

Bad:

COPY . .                 # Changes often
RUN pnpm install         # Runs every time (slow!)
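
Two commands are worth knowing when you're reasoning about the cache: BuildKit marks reused steps as CACHED in the build output, and you can bypass the cache entirely if you suspect a stale layer:

# Rebuild and watch which steps are reported as CACHED
docker build -t sspp-api:latest .

# Force a full rebuild, ignoring all cached layers
docker build --no-cache -t sspp-api:latest .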

Common Docker Commands

Images

# List images
docker images

# Remove image
docker rmi sspp-api:latest

# Remove unused images
docker image prune

# Remove ALL images
docker rmi $(docker images -q)

Containers

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# Stop container
docker stop sspp-api

# Start stopped container
docker start sspp-api

# Restart container
docker restart sspp-api

# Remove container
docker rm sspp-api

# Force remove (even if running)
docker rm -f sspp-api

# Remove all stopped containers
docker container prune

Logs & Debugging

# View logs
docker logs sspp-api

# Execute command in running container
docker exec -it sspp-api sh

# Inspect container details
docker inspect sspp-api

# View resource usage
docker stats sspp-api

What We Solved

✅ "Works on my machine" - Same environment everywhere

✅ Dependency conflicts - Each container is isolated

✅ Version management - Exact Node.js, system libs

✅ Reproducible builds - Same Dockerfile = same image

✅ Portability - Run anywhere Docker runs

✅ Lightweight - Much smaller than VMs


What We Didn't Solve

❌ Multi-container coordination - Manual networking, port management

❌ Service discovery - How does the API find Redis? Hard-coded IPs

❌ Volume management - What about database data persistence?

❌ Environment variables - Still passing 10+ -e flags per container

❌ Startup order - What if PostgreSQL isn't ready yet?

❌ Scaling - Running multiple workers is manual

We're running containers, but managing them is still tedious.


Real-World Docker Tips

1. Use .dockerignore

Prevent copying unnecessary files:

cat > .dockerignore <<EOF
node_modules
.git
.env
*.log
dist
coverage
EOF

2. Don't Run as Root

Security best practice:

# Create non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Switch to that user
USER appuser
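
Wired into the production stage of our API Dockerfile (with the builder stage from earlier), it could look like this; a sketch, where --chown keeps the copied files readable by the non-root user:

FROM node:18-alpine

RUN corepack enable && corepack prepare pnpm@latest --activate

# Create a non-root user and group
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

WORKDIR /app

COPY --chown=appuser:appgroup package.json pnpm-lock.yaml* ./
RUN pnpm install --prod --frozen-lockfile

COPY --from=builder --chown=appuser:appgroup /app/dist ./dist

# Drop root privileges before the app starts
USER appuser

EXPOSE 3000
CMD ["pnpm", "run", "start:prod"]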

3. Use Specific Tags

# ❌ Don't use 'latest'
FROM node:latest

# ✅ Use specific version
FROM node:18.19.0-alpine3.19

4. Health Checks

Tell Docker how to check if your app is healthy:

HEALTHCHECK --interval=30s --timeout=3s --start-period=40s \
  CMD node -e "require('http').get('http://localhost:3000/api/v1/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1))"
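
Once a container built with that instruction is running, docker ps shows the result in the STATUS column (healthy/unhealthy), and you can query it directly:

# Current health status
docker inspect --format '{{ .State.Health.Status }}' sspp-api

# Full health-check history, including command output
docker inspect --format '{{ json .State.Health }}' sspp-api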

5. Multi-Stage Builds Always

Keep final images small by separating build and runtime stages.


What's Next?

We've containerized our services! But running them individually with docker run doesn't scale.

In Part 4, we'll use Docker Compose to:

  • Manage multiple containers together
  • Define networking automatically
  • Set environment variables in one place
  • Control startup order
  • Run the entire stack with one command

Spoiler: docker-compose up and your entire system (API, Worker, PostgreSQL, Redis, Elasticsearch) starts in perfect harmony.


Try It Yourself

Challenge: Containerize both API and Worker services, then:

  1. Build images for both
  2. Run them with proper environment variables
  3. Send an event to the API
  4. Watch the Worker process it (check logs)
  5. Verify data in PostgreSQL

Bonus: Modify the Dockerfile to add a HEALTHCHECK instruction (see the tips above).


Discussion

What's your Docker horror story? Or success story?

Share on GitHub Discussions.


Previous: Part 2: Process Managers - Keeping Your App Alive with PM2

Next: Part 4: Running Multiple Services Locally with Docker Compose

About the Author

Documenting real DevOps infrastructure for my Proton.ai application. Hiring? Let's connect.

Top comments (0)