Docker is often introduced as a tool to package applications, but many developers struggle once they move beyond basic `docker run` commands. In this post, I’ll walk through how Docker actually works, how to write an efficient Dockerfile, and how to debug common issues that appear in real-world projects.
Why Docker Matters
- Before Docker, applications often failed with “it works on my machine” problems.
- Different OS versions, dependencies, and configurations made deployment painful.
- Docker solves this by packaging an application and its dependencies into a container, which runs consistently across environments.
Unlike virtual machines, containers:
- Share the host OS kernel
- Start faster
- Use fewer resources
Core Docker Concepts
- **Image**: a read-only template containing the app and its dependencies
- **Container**: a running instance of an image
- **Dockerfile**: the instructions used to build an image
- **Layer**: each Dockerfile instruction creates a cached layer
Dockerizing a Simple Node.js App
Writing a Minimal Dockerfile
```dockerfile
FROM node:18-alpine
WORKDIR /app
# Copy the dependency manifests first so the install layer stays cached
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
```
Common Docker Mistake: Huge Image Sizes
Many beginners copy everything before installing dependencies:

```dockerfile
COPY . .
RUN npm install
```

This breaks layer caching: any change to any source file invalidates the `COPY . .` layer, so `npm install` reruns on every build, and anything in the build context (such as a local `node_modules`) gets baked into the image.
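A related fix is a `.dockerignore` file, which keeps local clutter out of the build context entirely so `COPY . .` can’t reintroduce it. A typical starting point (adjust the entries for your project):

```text
node_modules
npm-debug.log
.git
Dockerfile
.dockerignore
```

This also speeds up builds, since the context sent to the Docker daemon is smaller.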
Multi-Stage Builds
```dockerfile
# Stage 1: install dependencies and prepare the app
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# Drop devDependencies so only runtime packages reach the final image
RUN npm prune --production

# Stage 2: start from a clean base and copy over only the pruned app
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app .
EXPOSE 3000
CMD ["node", "index.js"]
```

This keeps only what’s needed in the final image: build-time tooling and devDependencies never reach it.
**Debugging a Real Docker Issue**
Symptom: the container builds successfully but crashes immediately on startup.

The first step is to check the logs rather than guess:

```shell
docker logs <container-id>
```

In this case, the crash came down to a missing environment variable.

Fix

Used explicit defaults and validated required variables at startup:
```javascript
if (!process.env.PORT) {
  throw new Error("PORT not set");
}
```

This made failures obvious and easier to debug.
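The same idea generalizes to a small helper that validates every required variable in one place at startup. A sketch (`requireEnv` and the config keys are hypothetical names, not from the original app):

```javascript
// Hypothetical helper: read an env var, fall back to a default, or fail fast
function requireEnv(name, fallback) {
  const value = process.env[name] ?? fallback;
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Validate everything up front so a bad deploy fails at container startup,
// not deep inside a request handler hours later
const config = {
  port: Number(requireEnv("PORT", "3000")),
  logLevel: requireEnv("LOG_LEVEL", "info"),
};

module.exports = { requireEnv, config };
```

Pairing this with `ENV` defaults in the Dockerfile or `-e` flags on `docker run` makes misconfiguration show up immediately in `docker logs`.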
Verifying the Container
```shell
docker build -t demo-app .
docker run -p 3000:3000 demo-app
```
Visiting localhost:3000 confirms the container is working correctly.
Key Takeaways
- Docker images are built in layers — ordering matters
- Smaller images mean faster builds and deployments
- Debugging containers requires logs, not guesswork
- Good Dockerfiles are optimized, readable, and predictable
Conclusion
Docker is more than a deployment tool — it’s a way to make software reproducible and reliable. By understanding how images, layers, and containers work internally, developers can avoid common pitfalls and build systems that scale cleanly.