Series: From "Just Put It on a Server" to Production DevOps
Reading time: 14 minutes
Level: Beginner to Intermediate
The "Works on My Machine" Problem
It's Monday morning. Your coworker tries to deploy a critical bug fix to production.
They SSH into the server, pull the latest code, and restart the app with PM2.
The app crashes.
Error: Cannot find module 'pg'
"That's weird," they say. "It works on my machine."
They run npm install. Still crashes.
Error: The module '/opt/sspp/node_modules/bcrypt/...' was compiled against a different Node.js version
Now they're rebuilding native modules. Still failing.
After 90 minutes of debugging, they discover:
- Production has Node 16.x (they have 18.x)
- Production has different OpenSSL version (native module incompatibility)
- Production PostgreSQL is 14, code uses 15 features
- Someone manually edited files on the server (never committed to git)
The bug fix still isn't deployed. Users are angry.
This is dependency hell, and it kills productivity.
What Are Containers?
Containers solve the "works on my machine" problem by packaging your entire runtime environment:
- Your code
- All dependencies (node_modules, system libraries)
- The exact runtime (specific Node.js version)
- System tools (curl, git, whatever you need)
Everything your app needs to run, bundled into a single, portable package called a container image.
Containers vs Virtual Machines
Virtual Machines:
┌───────────────────────────────────┐
│          Application              │
├───────────────────────────────────┤
│      Node.js + Dependencies       │
├───────────────────────────────────┤
│        Guest OS (Ubuntu)          │ ← Full OS copy
├───────────────────────────────────┤
│           Hypervisor              │ ← Virtualization layer
├───────────────────────────────────┤
│         Host OS (Linux)           │
├───────────────────────────────────┤
│            Hardware               │
└───────────────────────────────────┘
Containers:
┌───────────────────────────────────┐
│          Application              │
├───────────────────────────────────┤
│      Node.js + Dependencies       │
├───────────────────────────────────┤
│    Container Runtime (Docker)     │ ← Lightweight isolation
├───────────────────────────────────┤
│         Host OS (Linux)           │
├───────────────────────────────────┤
│            Hardware               │
└───────────────────────────────────┘
Key Differences:
| Aspect | Virtual Machine | Container |
|---|---|---|
| Size | GBs (full OS) | MBs (just your app) |
| Startup | Minutes | Seconds |
| Isolation | Strong (separate kernel) | Process-level |
| Overhead | High (full OS per VM) | Minimal |
| Portability | Moderate | High |
The magic: Containers share the host OS kernel but isolate everything else.
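You can see this kernel sharing for yourself (a quick check, assuming Docker is installed and the daemon is running):

```shell
# Kernel version on the host
uname -r

# Kernel version inside a minimal Alpine container:
# it reports the SAME kernel, because containers share the host's kernel
docker run --rm alpine uname -r

# The userland, however, is fully isolated: Alpine inside,
# whatever your host distro is outside
docker run --rm alpine head -n 1 /etc/os-release
```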
Installing Docker
On your Linode server:
# Update packages
apt update
# Install prerequisites
apt install -y apt-transport-https ca-certificates curl software-properties-common
# Add Docker's official GPG key (apt-key is deprecated; use a keyring file)
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Add Docker repository
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
# Install Docker
apt update
apt install -y docker-ce docker-ce-cli containerd.io
# Start Docker
systemctl start docker
systemctl enable docker
# Verify
docker --version
docker run hello-world
Output:
Hello from Docker!
This message shows that your installation appears to be working correctly.
Building Our First Container: The API Service
Step 1: Create a Dockerfile
A Dockerfile is a recipe for building a container image.
cd /opt/sspp/services/api
nano Dockerfile
# Stage 1: Builder
FROM node:18-alpine AS builder
# Enable pnpm
RUN corepack enable && corepack prepare pnpm@latest --activate
WORKDIR /app
# Copy dependency files
COPY package.json pnpm-lock.yaml* ./
# Install dependencies
RUN pnpm install --frozen-lockfile
# Copy source code
COPY . .
# Build TypeScript to JavaScript
RUN pnpm run build
# Stage 2: Production
FROM node:18-alpine
# Enable pnpm
RUN corepack enable && corepack prepare pnpm@latest --activate
WORKDIR /app
# Copy dependency files
COPY package.json pnpm-lock.yaml* ./
# Install ONLY production dependencies
RUN pnpm install --prod --frozen-lockfile
# Copy built application from builder stage
COPY --from=builder /app/dist ./dist
# Expose port
EXPOSE 3000
# Run the app
CMD ["pnpm", "run", "start:prod"]
Let's break this down:
Multi-Stage Build
We use two stages to keep the final image small:
- Builder stage: Has dev dependencies, compiles TypeScript
- Production stage: Only runtime dependencies, no build tools
Why? The final image is 50-70% smaller.
Base Image: node:18-alpine
- node:18 = Node.js version 18
- alpine = Minimal Linux distro (~5MB vs ~100MB for Ubuntu-based)
WORKDIR
Sets the working directory inside the container to /app.
COPY
Copies files from your local filesystem into the image.
COPY package.json pnpm-lock.yaml* ./
The * makes pnpm-lock.yaml optional (if it doesn't exist, no error).
RUN
Executes commands during image build:
RUN pnpm install --frozen-lockfile
--frozen-lockfile ensures exact dependency versions (reproducible builds).
EXPOSE
Documents that the container listens on port 3000 (doesn't actually publish it).
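Because EXPOSE is documentation only, the host-side port is chosen at run time with -p, and it doesn't have to match. A quick sketch (the 8080 mapping and the sspp-api-alt name are hypothetical, for illustration):

```shell
# Map host port 8080 to the container's port 3000
docker run -d --name sspp-api-alt -p 8080:3000 sspp-api:latest

# The app still listens on 3000 *inside* the container,
# but from the host you reach it on 8080
curl http://localhost:8080/api/v1/health
```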
CMD
The command to run when the container starts:
CMD ["pnpm", "run", "start:prod"]
Step 2: Build the Image
docker build -t sspp-api:latest .
What happens:
- Docker reads the Dockerfile
- Pulls the node:18-alpine base image (if not cached)
- Runs each instruction (RUN, COPY, etc.)
- Creates layers (each instruction = one layer)
- Tags the final image as sspp-api:latest
This takes 2-5 minutes the first time. Subsequent builds are faster (cached layers).
Output:
[+] Building 123.4s (17/17) FINISHED
=> [internal] load build definition from Dockerfile
=> [internal] load .dockerignore
=> [builder 1/6] FROM docker.io/library/node:18-alpine
=> [builder 2/6] RUN corepack enable && corepack prepare pnpm@latest --activate
=> [builder 3/6] COPY package.json pnpm-lock.yaml* ./
=> [builder 4/6] RUN pnpm install --frozen-lockfile
=> [builder 5/6] COPY . .
=> [builder 6/6] RUN pnpm run build
=> [stage-1 2/5] RUN corepack enable && corepack prepare pnpm@latest --activate
=> [stage-1 3/5] COPY package.json pnpm-lock.yaml* ./
=> [stage-1 4/5] RUN pnpm install --prod --frozen-lockfile
=> [stage-1 5/5] COPY --from=builder /app/dist ./dist
=> exporting to image
=> => naming to docker.io/library/sspp-api:latest
Verify the image:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
sspp-api latest a1b2c3d4e5f6 30 seconds ago 185MB
Step 3: Run the Container
docker run -d \
--name sspp-api \
-p 3000:3000 \
-e NODE_ENV=production \
-e DB_HOST=172.17.0.1 \
-e DB_PORT=5432 \
-e DB_NAME=sales_signals \
-e DB_USER=sspp_user \
-e DB_PASSWORD=sspp_password \
-e REDIS_HOST=172.17.0.1 \
-e REDIS_PORT=6379 \
-e ELASTICSEARCH_URL=http://172.17.0.1:9200 \
sspp-api:latest
Flags explained:
- -d = Detached (run in background)
- --name sspp-api = Container name (for easy reference)
- -p 3000:3000 = Port mapping (host:container)
- -e KEY=value = Environment variables
- sspp-api:latest = Image to run
What's 172.17.0.1? That's the Docker bridge network gateway: how containers reach the host machine's services (PostgreSQL, Redis).
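The gateway address can differ between hosts, so rather than hard-coding 172.17.0.1, you can look it up (a sketch, assuming the default bridge network):

```shell
# Print the gateway of the default bridge network
docker network inspect bridge \
  --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'
```

On Docker Desktop, containers can also reach the host via the special name host.docker.internal; on Linux you can opt in with `--add-host=host.docker.internal:host-gateway` on `docker run`.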
Check if it's running:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f8a9b1c2d3e4 sspp-api:latest "docker-entrypoint..." 10 seconds ago Up 9 seconds 0.0.0.0:3000->3000/tcp sspp-api
Test it:
curl http://localhost:3000/api/v1/health
Output:
{
"status": "ok",
"timestamp": "2025-12-22T12:00:00.000Z"
}
🎉 Your containerized API is running!
Step 4: View Logs
docker logs sspp-api
# Live tail
docker logs -f sspp-api
# Last 50 lines
docker logs --tail 50 sspp-api
Building the Worker Container
Same process for the worker service:
cd /opt/sspp/services/worker
nano Dockerfile
FROM node:18-alpine AS builder
RUN corepack enable && corepack prepare pnpm@latest --activate
WORKDIR /app
COPY package.json pnpm-lock.yaml* ./
RUN pnpm install --frozen-lockfile
COPY . .
RUN pnpm run build
FROM node:18-alpine
RUN corepack enable && corepack prepare pnpm@latest --activate
WORKDIR /app
COPY package.json pnpm-lock.yaml* ./
RUN pnpm install --prod --frozen-lockfile
COPY --from=builder /app/dist ./dist
CMD ["pnpm", "start"]
Build and run:
docker build -t sspp-worker:latest .
docker run -d \
--name sspp-worker \
-e NODE_ENV=production \
-e DB_HOST=172.17.0.1 \
-e DB_PORT=5432 \
-e DB_NAME=sales_signals \
-e DB_USER=sspp_user \
-e DB_PASSWORD=sspp_password \
-e REDIS_HOST=172.17.0.1 \
-e REDIS_PORT=6379 \
-e ELASTICSEARCH_URL=http://172.17.0.1:9200 \
-e QUEUE_NAME=sales-events \
sspp-worker:latest
Check status:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f8a9b1c2d3e4 sspp-api:latest "..." 5 minutes ago Up 5 minutes 0.0.0.0:3000->3000/tcp sspp-api
a1b2c3d4e5f6 sspp-worker:latest "..." 10 seconds ago Up 9 seconds sspp-worker
What We Just Accomplished
1. Reproducible Builds
Anyone can build the exact same image:
git clone https://github.com/daviesbrown/sspp
cd sspp/services/api
docker build -t sspp-api:latest .
Same code + same Dockerfile = same image. Always.
2. Isolated Dependencies
Each container has its own:
- Node.js version
- npm/pnpm version
- System libraries
- Environment variables
No more version conflicts.
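That isolation is easy to demonstrate: two containers on the same host can run completely different Node.js versions side by side (a quick sketch, assuming Docker can pull images):

```shell
# Each container ships its own runtime -- no conflict on the host
docker run --rm node:16-alpine node --version   # v16.x
docker run --rm node:18-alpine node --version   # v18.x
```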
3. Portable
Build on your Mac, run on Linux. Build on dev, run on prod. It's the same image.
4. Lightweight
docker images
REPOSITORY TAG SIZE
sspp-api latest 185MB
sspp-worker latest 178MB
Compare to a full Ubuntu VM: 2-5GB.
Docker Layer Caching
Docker is smart about rebuilding. Each instruction creates a layer:
FROM node:18-alpine # Layer 1 (cached if unchanged)
COPY package.json ./ # Layer 2 (cached if files unchanged)
RUN pnpm install # Layer 3 (cached if layer 2 unchanged)
COPY . . # Layer 4 (cached if files unchanged)
RUN pnpm build # Layer 5 (cached if layer 4 unchanged)
Order matters! Put frequently-changing files (source code) after rarely-changing files (dependencies).
Good:
COPY package.json ./ # Changes rarely
RUN pnpm install # Cached most of the time
COPY . . # Changes often
Bad:
COPY . . # Changes often
RUN pnpm install # Runs every time (slow!)
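You can watch the cache at work: touch only a source file and rebuild, and the dependency layers are reused (a sketch, assuming you're in the API directory and src/main.ts exists -- the filename is illustrative):

```shell
# First build runs every instruction
docker build -t sspp-api:latest .

# Change a source file only (not package.json)
touch src/main.ts

# Rebuild: FROM / COPY package.json / pnpm install layers show as
# CACHED in the output; only COPY . . and the build step re-run
docker build -t sspp-api:latest .
```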
Common Docker Commands
Images
# List images
docker images
# Remove image
docker rmi sspp-api:latest
# Remove unused images
docker image prune
# Remove ALL images
docker rmi $(docker images -q)
Containers
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Stop container
docker stop sspp-api
# Start stopped container
docker start sspp-api
# Restart container
docker restart sspp-api
# Remove container
docker rm sspp-api
# Force remove (even if running)
docker rm -f sspp-api
# Remove all stopped containers
docker container prune
Logs & Debugging
# View logs
docker logs sspp-api
# Execute command in running container
docker exec -it sspp-api sh
# Inspect container details
docker inspect sspp-api
# View resource usage
docker stats sspp-api
What We Solved
✅ "Works on my machine" - Same environment everywhere
✅ Dependency conflicts - Each container is isolated
✅ Version management - Exact Node.js, system libs
✅ Reproducible builds - Same Dockerfile = same image
✅ Portability - Run anywhere Docker runs
✅ Lightweight - Much smaller than VMs
What We Didn't Solve
❌ Multi-container coordination - Manual networking, port management
❌ Service discovery - How does API find Redis? Hard-coded IPs
❌ Volume management - What about database data persistence?
❌ Environment variables - Still passing 10+ -e flags per container
❌ Startup order - What if PostgreSQL isn't ready yet?
❌ Scaling - Running multiple workers is manual
We're running containers, but managing them is still tedious.
Real-World Docker Tips
1. Use .dockerignore
Prevent copying unnecessary files:
cat > .dockerignore <<EOF
node_modules
.git
.env
*.log
dist
coverage
EOF
2. Don't Run as Root
Security best practice:
# Create non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Switch to that user
USER appuser
3. Use Specific Tags
# ❌ Don't use 'latest'
FROM node:latest

# ✅ Use specific version
FROM node:18.19.0-alpine3.19
4. Health Checks
Tell Docker how to check if your app is healthy:
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s \
CMD node -e "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1))"
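Once a HEALTHCHECK is defined, Docker tracks the result, and you can query it (a sketch, assuming the sspp-api container was built with the health check above):

```shell
# Prints "starting", "healthy", or "unhealthy"
docker inspect --format '{{.State.Health.Status}}' sspp-api

# docker ps also surfaces it in the STATUS column,
# e.g. "Up 2 minutes (healthy)"
docker ps --filter name=sspp-api
```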
5. Multi-Stage Builds Always
Keep final images small by separating build and runtime stages.
What's Next?
We've containerized our services! But running them individually with docker run doesn't scale.
In Part 4, we'll use Docker Compose to:
- Manage multiple containers together
- Define networking automatically
- Set environment variables in one place
- Control startup order
- Run the entire stack with one command
Spoiler: docker-compose up and your entire system (API, Worker, PostgreSQL, Redis, Elasticsearch) starts in perfect harmony.
Try It Yourself
Challenge: Containerize both API and Worker services, then:
- Build images for both
- Run them with proper environment variables
- Send an event to the API
- Watch the Worker process it (check logs)
- Verify data in PostgreSQL
Bonus: Modify the Dockerfile to add a health check endpoint.
Discussion
What's your Docker horror story? Or success story?
Share on GitHub Discussions.
Previous: Part 2: Process Managers - Keeping Your App Alive with PM2
Next: Part 4: Running Multiple Services Locally with Docker Compose
About the Author
Documenting real DevOps infrastructure for my Proton.ai application. Hiring? Let's connect.
- GitHub: @daviesbrown
- LinkedIn: David Nwosu