🐳 You're probably using Docker wrong
Not trying to be mean. But every week I see developers run docker run with 12 flags they copied from Stack Overflow without understanding a single one.
Docker isn't hard. It's just badly taught.
Most tutorials explain it like a college textbook. That's the problem. Let me explain it like a human.
🎯 1) docker system prune – Clean up ALL the junk
Your Docker is eating 40GB of disk space and you don't know why.
Here's why: every build, every container, every image you ever pulled โ they're all still there. Docker hoards everything.
# Nuclear option – removes EVERYTHING unused
docker system prune -a
# You'll see:
# Deleted Containers: 23
# Deleted Images: 47
# Deleted build cache: 12.4GB
# Total reclaimed space: 31.2GB
🧠 What's actually getting deleted?
- Stopped containers
- Dangling images (no tag, not used by any container)
- Unused networks
- Build cache
- With -a: ALL unused images (not just dangling ones)
Pro tip – be more surgical:
# Only remove stopped containers
docker container prune
# Only remove unused images
docker image prune -a
# Only remove build cache
docker builder prune
# Only remove unused volumes (⚠️ careful – this deletes data)
docker volume prune
Use it when: your disk is full, you've been building for weeks, you want a fresh start without reinstalling Docker.
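Before you delete anything, it's worth seeing where the space actually went. Docker has a built-in report for exactly this:

```shell
# Summarize disk usage: images, containers, local volumes, build cache
docker system df

# Add -v for a per-image and per-container breakdown
docker system df -v
```

The RECLAIMABLE column tells you how much a prune would actually get back.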
⚡ 2) docker exec -it – Jump inside a running container
When something breaks IN a container, you need to get inside it.
# Get a shell inside a running container
docker exec -it my-app /bin/bash
# If bash isn't available (Alpine images), use sh
docker exec -it my-app /bin/sh
🔍 Real-world example:
Your app throws "connection refused" to the database. But the database container IS running. What's happening?
# 1. Jump into the app container
docker exec -it my-app /bin/sh
# 2. Test the connection from INSIDE the container
ping db-host
# or
curl http://db-host:5432
# or
nc -zv db-host 5432
If ping works from inside the container but your app can't connect, it's a config problem, not a networking problem. You just narrowed it down in 30 seconds.
The -it flags:
- -i = interactive (keeps STDIN open)
- -t = pseudo-TTY (gives you a proper terminal)
Without them, an interactive shell is unusable: no prompt, no line editing. For an interactive session, always use both together.
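You don't always need a full shell, though. exec also runs one-off commands, and -i alone is the right tool when you're piping data in (my-db and dump.sql below are placeholder names):

```shell
# Run a single command, no shell session needed
docker exec my-app env

# Pipe data into a container: -i only, no TTY
cat dump.sql | docker exec -i my-db psql -U postgres
```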
📜 3) docker logs – See what's actually happening
Your container crashed. No error message on your screen. Where did it go?
# Show all logs
docker logs my-app
# Follow logs in real-time (like tail -f)
docker logs -f my-app
# Last 100 lines
docker logs --tail 100 my-app
# Logs from the last 5 minutes
docker logs --since 5m my-app
# Logs with timestamps
docker logs -t my-app
🧠 Combine for debugging:
# Watch real-time logs with timestamps, last 50 lines
docker logs -f -t --tail 50 my-app
# Search logs for errors
docker logs my-app 2>&1 | grep -i error
# Save logs to a file
docker logs my-app > app.log 2>&1
Use it when: container exits unexpectedly, debugging API responses, checking startup errors, monitoring request flow.
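One caveat: docker logs only captures what the process writes to stdout and stderr. If your app logs to a file inside the container, combine it with exec (the log path below is just an example):

```shell
# docker logs shows stdout/stderr only.
# For file-based logs, read the file from inside the container:
docker exec my-app tail -n 50 /var/log/app.log
```

This is also why well-behaved container images log to stdout instead of files.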
🏗️ 4) Multi-stage builds – Shrink your images by 90%
This is the single biggest optimization most Dockerfiles miss.
# ❌ Bad – final image includes ALL build tools
FROM node:20
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
CMD ["node", "dist/index.js"]
# Image size: ~1.2GB
# โ
Good โ multi-stage build
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
# Image size: ~180MB
🤔 What's happening here?
Stage 1 (builder): Full Node.js image. Has compilers, build tools, dev dependencies. Builds your app.
Stage 2 (runtime): Slim image. Only copies the BUILD OUTPUT from stage 1. No compilers. No source code. No dev dependencies.
The final image only contains what it needs to RUN, not what it needed to BUILD.
You can have as many stages as you want:
FROM node:20 AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci

FROM node:20 AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM node:20-slim AS runtime
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=deps /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
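A handy side effect of naming your stages: docker build --target lets you stop at any one of them. For example, you can build just the stage that still has your dev dependencies and run tests in it (assuming your package.json defines a test script):

```shell
# Build only up to the "builder" stage (source + dev dependencies included)
docker build --target builder -t my-app:build .

# Run the test suite inside that intermediate image
docker run --rm my-app:build npm test
```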
🌐 5) docker network – Make containers talk to each other
Containers are isolated by default. They can't see each other unless you connect them.
# Create a network
docker network create my-network
# Run containers on the same network
docker run -d --name api --network my-network my-api
docker run -d --name db --network my-network postgres
# Now "api" can reach "db" by name:
# postgres://db:5432/mydb
🧠 Why this matters
When you use docker-compose, it creates a network automatically. But when running docker run manually, containers are on the default bridge network where DNS resolution by name doesn't work.
# ❌ This won't work on the default bridge
docker run my-app curl http://db:5432
# ✅ This works
docker network create app-net
docker run -d --network app-net --name db postgres
docker run --network app-net my-app curl http://db:5432
Inspect a network:
docker network inspect my-network
# Shows all connected containers and their IPs
Use it when: running multiple containers that need to communicate, debugging network issues, setting up microservices locally.
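You also don't have to get the network right at docker run time. Containers can be attached to (and detached from) networks while they're running:

```shell
# Attach an already-running container to a network, no restart needed
docker network connect my-network existing-container

# Detach it again
docker network disconnect my-network existing-container
```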
🛡️ 6) .dockerignore – Stop sending your entire disk to Docker
When you run docker build ., Docker sends your ENTIRE directory as context to the daemon. Including node_modules. Including .git. Including that 4GB video file.
# .dockerignore
node_modules
.git
.gitignore
*.md
.env
.env.local
docker-compose*.yml
Dockerfile
.dockerignore
coverage
.nyc_output
.vscode
.idea
*.log
tmp
🧠 Why this matters
Without .dockerignore:
Sending build context to Docker daemon 2.34GB
With .dockerignore:
Sending build context to Docker daemon 12.4MB
That's not just faster builds. Some files like .env contain secrets. You don't want those baked into your image layers where anyone with image access can read them.
# Verify what Docker can see
docker build --no-cache -t test . 2>&1 | head -5
# Check the "Sending build context" line
# (BuildKit phrases it as "transferring context" instead)
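To see why secrets in layers are a real problem, poke at an image yourself (my-app:latest is a placeholder tag):

```shell
# List every layer and the Dockerfile instruction that created it
docker history my-app:latest

# Export the image and browse the raw layer tarballs
docker save my-app:latest -o image.tar
tar -tf image.tar | head
```

Anyone who can pull the image can do exactly this, which is why a COPY'd .env file is effectively public to them.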
📦 7) docker compose watch – Live reload in development
This one is new and it's amazing for development.
# docker-compose.yml
services:
  web:
    build: .
    ports:
      - "3000:3000"
    develop:
      watch:
        - action: sync
          path: ./src
          target: /app/src
        - action: rebuild
          path: package.json
# Start with watch mode
docker compose watch
🤔 What's happening?
sync: When files in ./src change, they're copied INTO the running container. No rebuild needed. Your app hot-reloads.
rebuild: When package.json changes, the entire container is rebuilt (because dependencies changed).
Before this, you had to choose between:
- Bind mounts (instant, but slow and flaky on macOS/Windows because of file system virtualization)
- Manual rebuilds (slow and annoying)
docker compose watch gives you the best of both.
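There's also a third action worth knowing: sync+restart (available in recent Compose v2 releases). It syncs the changed file and then restarts the container without rebuilding it, which is what you want for config files that are only read at startup. The nginx.conf path here is just an illustrative example:

```yaml
    develop:
      watch:
        - action: sync+restart
          path: ./nginx.conf
          target: /etc/nginx/nginx.conf
```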
🧬 8) docker inspect – The X-ray machine
When you need to know EVERYTHING about a container, image, network, or volume:
# Full JSON dump
docker inspect my-container
# Get a specific value
docker inspect --format='{{.State.Status}}' my-container
# running
docker inspect --format='{{.NetworkSettings.IPAddress}}' my-container
# 172.17.0.2
docker inspect --format='{{json .Mounts}}' my-container
# [{"Type":"bind","Source":"/home/user/data","Destination":"/app/data"}]
docker inspect --format='{{.Config.Env}}' my-container
# [PATH=/usr/local/sbin:... DATABASE_URL=postgres://...]
🔍 Useful inspection combos:
# What port is exposed?
docker inspect --format='{{json .NetworkSettings.Ports}}' my-app
# What image was this container built from?
docker inspect --format='{{.Config.Image}}' my-app
# When was it created?
docker inspect --format='{{.Created}}' my-app
# What's the restart policy?
docker inspect --format='{{.HostConfig.RestartPolicy.Name}}' my-app
Use it when: debugging networking, checking environment variables, verifying volume mounts, finding container configuration.
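If you have jq installed, it can be friendlier than Go templates for exploring the same JSON:

```shell
# Grab the whole State object (inspect returns a JSON array, hence .[0])
docker inspect my-app | jq '.[0].State'

# List the networks the container is attached to
docker inspect my-app | jq -r '.[0].NetworkSettings.Networks | keys[]'
```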
🎭 9) docker compose profiles – Conditional services
You don't always need Redis, Elasticsearch, and Mailhog. Start only what you need.
# docker-compose.yml
services:
  api:
    build: .
    ports:
      - "3000:3000"
  db:
    image: postgres:16
    profiles: ["database"]
  redis:
    image: redis:7
    profiles: ["cache"]
  elasticsearch:
    image: elasticsearch:8
    profiles: ["search"]
  mailhog:
    image: mailhog/mailhog
    profiles: ["email"]
# Start only api + db
docker compose --profile database up
# Start api + db + redis
docker compose --profile database --profile cache up
# Start EVERYTHING
docker compose --profile database --profile cache --profile search --profile email up
Without --profile, only non-profiled services start (just api in this case).
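If typing --profile repeatedly gets old, Compose also reads an environment variable, and newer Compose v2 releases accept a wildcard:

```shell
# Same as --profile database --profile cache
COMPOSE_PROFILES=database,cache docker compose up

# Enable every profile at once (newer Compose v2 versions)
docker compose --profile "*" up
```

COMPOSE_PROFILES also works well in a .env file next to your compose file, so each teammate can pin their own default stack.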
Use it when: different team members need different stacks, CI needs minimal services, you want fast local startup for daily development.
🧪 10) docker buildx – Build for any platform
Need to build an image for ARM (Raspberry Pi, M1/M2 Mac) but you're on x86?
# Create a multi-platform builder
docker buildx create --name multiarch --use
# Build for multiple platforms at once
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t my-app:latest --push .
🧠 What's happening?
buildx uses QEMU emulation to build for architectures your machine doesn't have. The --push flag sends the multi-platform manifest to your registry.
When someone pulls my-app:latest, Docker automatically picks the right architecture:
# On M1 Mac โ pulls arm64
docker pull my-app:latest
# On x86 server โ pulls amd64
docker pull my-app:latest
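To confirm what actually got pushed, inspect the manifest list in your registry; each platform shows up as a separate entry (my-app:latest is a placeholder tag):

```shell
# Show every architecture included in a multi-platform image
docker buildx imagetools inspect my-app:latest
```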
Build for Raspberry Pi from your laptop:
docker buildx build --platform linux/arm/v7 -t my-app:pi --load .
--load loads it into your local Docker (instead of pushing to a registry). Note: can only load one platform at a time locally.
🧠 TL;DR Cheat Sheet
| Command | What it does | Use when |
|---|---|---|
| `docker system prune -a` | Clean all unused data | Disk full |
| `docker exec -it <c> sh` | Shell into container | Debugging |
| `docker logs -f <c>` | Stream container logs | Monitoring |
| Multi-stage builds | Shrink image by 90% | Production |
| `docker network create` | Connect containers | Microservices |
| `.dockerignore` | Exclude files from build | Every build |
| `docker compose watch` | Live reload in dev | Development |
| `docker inspect` | Full container details | Debugging |
| Compose profiles | Conditional services | Flexible stacks |
| `docker buildx` | Cross-platform builds | ARM/Pi/M1 |
💡 Docker isn't magic. It's just Linux.
Every container is just a Linux process with namespace isolation and cgroup limits. That's it. There's no virtual machine. There's no magic.
Once you understand that, Docker stops being scary. It becomes a tool you control instead of a tool that controls you.
Stop memorizing commands. Start understanding what they do.
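You can see this for yourself on a Linux host, where container processes appear directly in the host's process table (on macOS/Windows they live inside Docker's Linux VM instead):

```shell
# Start a throwaway container
docker run -d --name demo alpine sleep 300

# Its process is a plain host process
# (the [s] trick keeps grep from matching itself)
ps aux | grep "[s]leep 300"

# docker top shows the same mapping from Docker's side
docker top demo

# Clean up
docker rm -f demo
```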
Tags: #docker #devops #webdev #programming