Yesterday, I had a working PostgreSQL database with real data. But there was one problem...
My backend still ran with npm run dev. It worked on MY laptop, but would it work anywhere else?
Today, I solved that forever.
I packaged my Node.js backend into a Docker container — a self-contained unit that runs IDENTICALLY on any computer. And I learned why containers are revolutionizing software deployment.
First: What Even IS Docker? (Explained Simply)
The Problem Docker Solves
Before Docker (The "It works on my machine" nightmare):
| Computer | Issue |
|---|---|
| Your laptop | Node.js v18, PostgreSQL local |
| Your coworker's laptop | Node.js v14, different OS |
| Cloud server | Ubuntu, missing dependencies |
| Production | Completely different environment |
Result: "But it worked on MY machine!"
The Docker Solution
Think of Docker like a shipping container:
| Analogy | Docker |
|---|---|
| Shipping container | Docker container |
| Ship, train, truck | Any computer running Docker |
| Your stuff inside | Your app + Node.js + all dependencies |
The magic: Once it's in a container, it runs IDENTICALLY everywhere.
Step 1: Checking Docker Installation
First, I needed to verify Docker was installed on my Ubuntu system:
docker --version
Output:
Docker version 29.2.1, build a5c7197
Docker was installed! But I hit a permission issue:
docker ps
Error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock
Fixing Docker Permissions
The issue: My user wasn't in the docker group.
Solution:
# Add my user to docker group
sudo usermod -aG docker mkangeth
# Log out and back in (or restart)
# Then verify
docker ps
Alternative (what I used): Added sudo before each docker command:
sudo docker ps
Output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Empty list = Docker is working!
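A quick way to tell whether the sudo workaround is still needed is to check your current group membership (a small sketch; note that a freshly added group only shows up after logging out and back in):

```shell
# Check whether the current login session is in the docker group.
# If not, docker commands will need sudo until you log out and back in.
my_groups=$(id -nG)
case " $my_groups " in
  *" docker "*) echo "docker group: yes, plain docker commands will work" ;;
  *)            echo "docker group: no, keep using sudo (or re-login)" ;;
esac
```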
Step 2: Creating the Dockerfile
A Dockerfile is like a recipe that tells Docker how to build your container.
What is a Dockerfile?
Think of it like baking a cake:
| Baking Analogy | Dockerfile Instruction |
|---|---|
| Start with base ingredients | FROM node:18-alpine |
| Set up your workspace | WORKDIR /app |
| Add your ingredients | COPY package*.json ./ |
| Mix everything | RUN npm install |
| Present the final cake | CMD ["npm", "start"] |
Creating the File
First, navigate to my backend folder:
cd ~/Desktop/Production-Ready-Microservices-Platform/backend
Create the Dockerfile:
touch Dockerfile
nano Dockerfile
The Complete Dockerfile (Line by Line)
# Use Node.js 18 on Alpine Linux (tiny Linux, only 5MB!)
FROM node:18-alpine
# Create and set working directory inside the container
WORKDIR /app
# Copy package files first (for Docker caching)
COPY package*.json ./
# Install all dependencies
RUN npm install
# Copy all source code
COPY . .
# Tell Docker the container listens on port 5000
EXPOSE 5000
# Start the application
CMD ["npm", "start"]
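One file worth pairing with this Dockerfile is a `.dockerignore`. Without it, `COPY . .` also copies `node_modules` built on the host (and any local `.env` secrets) into the image. A minimal sketch; adjust the entries to your project:

```shell
# Create a .dockerignore next to the Dockerfile so COPY . . skips
# host-built modules, logs, and local secrets.
cat > .dockerignore <<'EOF'
node_modules
npm-debug.log
.env
.git
EOF
```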
Why Each Line Matters
| Instruction | What it does | Why it's important |
|---|---|---|
| `FROM node:18-alpine` | Base image with Node.js | Alpine is tiny (5MB vs 100MB+) |
| `WORKDIR /app` | Sets working directory | All commands run from here |
| `COPY package*.json ./` | Copies dependency list | Docker caches this layer |
| `RUN npm install` | Installs dependencies | Happens inside the container |
| `COPY . .` | Copies your code | Last layer (changes most often) |
| `EXPOSE 5000` | Documents the port | Doesn't publish, just informs |
| `CMD ["npm", "start"]` | Runs when container starts | Production command, not dev |
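Two refinements I've seen recommended for production Dockerfiles, sketched here rather than tested in this post: `npm ci` for reproducible installs (this assumes a `package-lock.json` is committed alongside `package.json`), and dropping root privileges via the `node` user that the official image ships with:

```dockerfile
# Sketch of a slightly hardened variant (assumes package-lock.json exists)
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
# npm ci installs exactly what the lockfile pins; --omit=dev skips dev tools
RUN npm ci --omit=dev
# --chown so the files are readable once we stop being root
COPY --chown=node:node . .
# Run as the non-root "node" user the official image provides
USER node
EXPOSE 5000
CMD ["npm", "start"]
```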
Step 3: Building the Docker Image
Now the magic happens — building my image:
sudo docker build -t backend-app .
Breaking down the command:
| Part | Meaning |
|---|---|
| `sudo docker build` | Build an image from a Dockerfile |
| `-t backend-app` | Tag (name) the image "backend-app" |
| `.` | Use the Dockerfile in the current directory |
What I saw:
[+] Building 45.2s (10/10) FINISHED
=> [1/5] FROM node:18-alpine
=> [2/5] WORKDIR /app
=> [3/5] COPY package*.json ./
=> [4/5] RUN npm install
=> [5/5] COPY . .
=> exporting to image
=> => naming to docker.io/library/backend-app:latest
Verifying the Image
sudo docker images
Output:
REPOSITORY TAG IMAGE ID CREATED SIZE
backend-app latest e0e0904815be 2 minutes ago 135MB
node 18-alpine f0286cc18189 2 weeks ago 120MB
My image is 135MB — much smaller than a full Ubuntu image would be (thanks, Alpine!)
Step 4: Running the Container
Time to run my backend inside a container:
sudo docker run -p 3001:5000 backend-app
Understanding port mapping:
| Part | Meaning |
|---|---|
| `-p 3001:5000` | Map port 3001 (my computer) → port 5000 (container) |
| `backend-app` | Which image to use |
Why port mapping?
- Inside container: app runs on port 5000
- On my computer: I can access it via port 3001
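The order of the two numbers tripped me up at first. This tiny shell snippet (purely illustrative) shows which side of the colon is which:

```shell
# -p HOST:CONTAINER — the left number is on your machine, the right is
# inside the container. Plain parameter expansion splits the pair:
mapping="3001:5000"
host_port="${mapping%%:*}"       # 3001: where curl on your laptop connects
container_port="${mapping##*:}"  # 5000: where the app listens inside
echo "host $host_port -> container $container_port"
```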
Output:
> backend@1.0.0 start
> node server.js
Server is running on http://localhost:5000
Try these endpoints:
- http://localhost:5000/
- http://localhost:5000/health
- http://localhost:5000/users
The terminal stays "stuck" here — that's GOOD! My server is running!
Step 5: Testing the Container
In a new terminal, I tested my running container:
curl http://localhost:3001/health
Response:
{"status":"healthy","service":"backend-api","timestamp":"2026-04-11T14:03:42.057Z"}
Then tested the users endpoint:
curl http://localhost:3001/users
Response:
[
{"id":1,"name":"Alice","email":"alice@example.com"},
{"id":2,"name":"Bob","email":"bob@example.com"},
{"id":3,"name":"Charlie","email":"charlie@example.com"}
]
SUCCESS! My Dockerized backend was working perfectly!
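Since the app already exposes `/health`, the Dockerfile could ask Docker to poll it automatically, so `docker ps` reports the container as healthy or unhealthy. A sketch, assuming busybox `wget` is available in the image (it ships with Alpine):

```dockerfile
# Poll the health endpoint every 30s; after 3 consecutive failures
# the container is marked unhealthy in docker ps.
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost:5000/health || exit 1
```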
Step 6: Stopping the Container
To stop the container, I went back to the first terminal and pressed:
Ctrl + C
The server shut down gracefully.
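Ctrl+C works here because the container ran in the foreground. The usual alternative is detached mode; a sketch (the `backend-app` image and docker group access are assumed, and the commands are guarded so the snippet is a no-op where Docker isn't installed):

```shell
# Run in the background (-d) with a name, then stop and remove by name.
stop_timeout=10  # seconds docker stop waits after SIGTERM before SIGKILL
if command -v docker >/dev/null 2>&1; then
  docker run -d --name backend -p 3001:5000 backend-app
  docker logs backend                        # view server output any time
  docker stop --time "$stop_timeout" backend
  docker rm backend                          # free the name for reuse
fi
```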
What I Learned
Docker Concepts I Mastered
| Concept | What I learned |
|---|---|
| Image | A blueprint/template (like a class in programming) |
| Container | A running instance of an image (like an object) |
| Dockerfile | The recipe that defines how to build an image |
| Port mapping | Connecting container ports to host ports |
| Layer caching | Docker reuses unchanged layers for faster builds |
The Docker Workflow
Dockerfile → docker build → Image → docker run → Container
(recipe) (build) (blueprint) (run) (running app)
Why Alpine Linux?
| Base Image | Size | Use case |
|---|---|---|
| `node:latest` | ~1GB | Full OS, many tools |
| `node:18-slim` | ~200MB | Minimal Debian |
| `node:18-alpine` | ~120MB | Tiny, security-focused |
I chose Alpine: Smaller = faster downloads, less storage, smaller attack surface.
Mistakes I Made (And Fixed)
Mistake 1: Docker Permission Denied
Error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock
Fix: Added sudo before docker commands OR added user to docker group:
sudo usermod -aG docker $USER
# Then log out and back in
Mistake 2: Wrong Port Mapping
What happened: My server ran on port 5000, but I mapped to port 8000
Fix:
# Wrong
sudo docker run -p 3001:8000 backend-app
# Correct
sudo docker run -p 3001:5000 backend-app
Lesson: Always check what port your app actually uses!
Docker Commands Cheat Sheet
| Command | What it does |
|---|---|
| `docker --version` | Check Docker version |
| `docker images` | List all images on your computer |
| `docker ps` | List RUNNING containers |
| `docker ps -a` | List ALL containers (including stopped) |
| `docker build -t name .` | Build an image from a Dockerfile |
| `docker run -p HOST:CONTAINER image` | Run a container from an image |
| `docker stop container_id` | Stop a running container |
| `docker rm container_id` | Remove a stopped container |
| `docker rmi image_id` | Remove an image |
| `docker logs container_id` | See container logs |
Key Takeaways
- Containers are NOT virtual machines — They share the host OS kernel, making them lightweight and fast
- Dockerfiles are recipes — They define exactly how to build your container
- Images are blueprints — You build once, run anywhere
- Containers are running instances — You can run many containers from one image
- Port mapping is essential — Containers have their own network; you must expose ports
- Alpine Linux is your friend — Tiny images = faster everything
Resources
- Docker Official Documentation
- Dockerfile Reference
- Node.js Docker Best Practices
- Alpine Linux (tiny Docker images)
Let's Connect!
Have you used Docker before? What was your first container? Any tips for someone just starting?
Drop a comment or connect on LinkedIn. Let's learn together!
This is Day 5 of my 30-Day Cloud & DevOps Challenge. Follow along as I build a complete microservices platform from scratch!