Docker has fundamentally changed how developers build, ship, and run software. If you've ever heard "it works on my machine" — Docker is the fix. This guide walks you through everything you need to get started, from core concepts to writing your first Dockerfile and managing multi-container apps.
What is Docker?
Docker is an open-source platform that packages applications inside lightweight, isolated environments called containers. A container bundles your code, runtime, system libraries, and configuration into a single portable unit, so an app that runs on your laptop runs identically on a colleague's machine or a production cloud server.
Released in 2013, Docker builds on long-standing Linux kernel features (namespaces and cgroups) but wraps them in a developer-friendly CLI, a public image registry (Docker Hub), and a declarative file format for building images.
Why Use Docker?
Before Docker, deploying apps meant carefully matching server environments, language runtimes, and OS patches. A single mismatch could break production. Docker solves this and brings several other benefits:
- Consistency — the same image runs identically on every machine
- Speed — containers start in seconds and use far fewer resources than VMs
- Isolation — each container has its own filesystem, processes, and network stack
- Reproducibility — a Dockerfile is a recipe; anyone can rebuild the exact same image
- Microservices-friendly — small, single-purpose containers compose into larger systems easily
- Massive ecosystem — Docker Hub hosts hundreds of thousands of ready-to-use images
Containers vs. Virtual Machines
Both containers and VMs provide isolation, but they work very differently. A VM includes a full guest OS on top of a hypervisor, while a container shares the host kernel and isolates only the application and its dependencies.
| Feature | Virtual Machine | Docker Container |
|---|---|---|
| Boot time | Minutes | Seconds |
| Size on disk | Gigabytes | Megabytes |
| OS overhead | Full guest OS | Shares host kernel |
| Isolation level | Hardware-level | Process-level |
| Performance | Lower (virtualized) | Near-native |
| Portability | Limited | Excellent |
Note: Containers don't replace VMs in every situation. VMs are still preferable when you need a completely different OS or strict hardware-level isolation.
Core Docker Concepts
Five terms come up constantly. Understand these and everything else makes sense.
- Image — a read-only template containing your app and its dependencies. Think of it like a class in OOP — a blueprint.
- Container — a running instance of an image. You can run many containers from one image, as the quick demo after this list shows.
- Dockerfile — a text file with step-by-step instructions for building an image.
- Registry — a server that stores and distributes images. Docker Hub is the default public registry.
- Volume — a mechanism for persisting data outside a container's lifecycle.
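To make the image/container distinction concrete: one pulled image can back any number of running containers.
# One image...
docker pull nginx
# ...many independent containers created from it
docker run -d --name web1 nginx
docker run -d --name web2 nginx
docker ps   # both containers show the same IMAGE column
# Clean up
docker rm -f web1 web2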
How Docker Works
Docker uses a client-server architecture. When you run docker run nginx:
- The Docker CLI sends your command to the Docker daemon (dockerd) over a REST API.
- The daemon checks if the image exists locally; if not, it pulls it from Docker Hub.
- The daemon creates a container, allocates a writable filesystem layer, attaches networking, and starts the process.
- Output streams back to your terminal.
On Linux, the daemon talks directly to the kernel. On macOS and Windows, Docker Desktop runs a small Linux VM under the hood — but you barely notice.
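You can even talk to that REST API yourself. On Linux the daemon listens on a Unix socket, so a plain curl call works. A quick sketch, assuming the default socket location and access to it (root or membership in the docker group):
# Ask the daemon for its running containers, straight over the API
curl --unix-socket /var/run/docker.sock http://localhost/containers/json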
Installing Docker
macOS and Windows
Install Docker Desktop — an all-in-one bundle that includes the daemon, CLI, Docker Compose, and a GUI for managing images and containers.
Linux (Ubuntu/Debian)
# Update package index
sudo apt-get update
# Install prerequisites
sudo apt-get install ca-certificates curl gnupg
# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Add the Docker repository (swap jammy for your Ubuntu release codename)
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu jammy stable" \
| sudo tee /etc/apt/sources.list.d/docker.list
# Install Docker Engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
Tip: After installation on Linux, add your user to the docker group so you don't need sudo for every command: sudo usermod -aG docker $USER, then log out and back in.
Verifying the Installation
docker --version
# Output: Docker version 25.0.0, build ...
docker info
# Run the official hello-world image
docker run hello-world
The hello-world image is tiny (a few kilobytes) and prints a friendly message confirming that the daemon, CLI, image pulling, and container execution all work.
Running Your First Container
Let's run something more interesting — an Nginx web server — in a single command:
# Run nginx in the background and map port 8080 -> 80
docker run -d -p 8080:80 --name my-nginx nginx
# Open http://localhost:8080 in your browser
# When done, clean up
docker stop my-nginx
docker rm my-nginx
Breaking down that command:
- docker run — create and start a new container from an image
- -d — detached mode; run in the background
- -p 8080:80 — publish container port 80 to host port 8080
- --name my-nginx — give the container a friendly name
- nginx — the image to run
Best Practice: Always give containers explicit names with --name. Without it, Docker assigns a random name like elated_einstein, making them harder to manage in scripts.
Docker Images
A Docker image is a read-only, layered filesystem snapshot containing everything needed to run your software: the OS base, language runtime, application code, dependencies, and metadata.
Images are identified by name:tag — for example, node:20-alpine refers to Node.js v20 built on Alpine Linux. If you omit the tag, Docker assumes :latest.
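Because of that default, these two commands pull exactly the same image:
docker pull nginx
docker pull nginx:latest    # identical to the line above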
Pulling Images from Docker Hub
Docker Hub is the world's largest public registry of container images. You can pull any public image with a single command:
docker pull nginx
docker pull nginx:1.25-alpine
docker pull node:20-alpine
docker pull postgres:16
Note: Tags like alpine mean the image is built on Alpine Linux, whose base image is often under 10 MB. Prefer Alpine variants when image size matters.
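You can see the difference yourself by pulling both variants and comparing sizes (exact numbers vary by version):
docker pull nginx:1.25
docker pull nginx:1.25-alpine
docker images nginx   # the alpine variant is typically a fraction of the size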
Listing and Removing Images
You can see all images stored locally with the following commands:
# List all local images
docker images
# Remove a single image
docker rmi nginx:latest
# Remove all unused images
docker image prune
# Remove ALL images (be careful!)
docker rmi $(docker images -q)
Warning: docker image prune and docker system prune can free a lot of disk space, but they also delete images and containers you might still need. Read the prompt carefully before confirming.
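Before pruning anything, check how much space Docker is actually using:
# Break down disk usage by images, containers, volumes, and build cache
docker system df
# Add -v for a per-image and per-container breakdown
docker system df -v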
Fixing the "Image Is Being Used" Error
A common error when removing images:
$ docker rmi nginx:latest
Error response from daemon: conflict: unable to remove repository reference
"nginx:latest" — container eebd38aa4c91 is using its referenced image
This is not a bug. Docker is protecting you: an image cannot be deleted while a container — even a stopped one — still depends on it. The fix is to deal with the container first. You have two options.
Option 1: Remove the container first (recommended)
# 1. Find the container using the image
# Use -a so STOPPED containers show up too
docker ps -a
# 2. Remove that container (use -f if it is still running)
docker rm eebd38aa4c91
# or by name:
docker rm my-nginx
# 3. Now the image deletes cleanly
docker rmi nginx:latest
The container ID in your error message (here, eebd38aa4c91) is exactly the one you need to remove in step 2 — Docker tells you which container is the blocker.
Option 2: Force-remove the image
The -f (or --force) flag tells Docker to remove the image anyway. It does this by untagging the image — the underlying layers stay on disk as long as the container references them, and are cleaned up once that container is gone.
# Force-remove the image reference
docker rmi -f nginx:latest
# The container eebd38aa4c91 still exists and still runs,
# but the image is now "dangling" (untagged, shown as <none>)
docker images
# REPOSITORY TAG IMAGE ID SIZE
# <none> <none> 6f8edba05e38 161MB
Warning: Forcing removal does not actually free disk space while a container is still using the image — it only removes the name/tag. Prefer Option 1: remove the container first, then the image. That genuinely reclaims the space.
Tip: To clean up everything related to an image in one go: stop and remove all containers created from it, then remove the image. The one-liner docker rm -f $(docker ps -aq --filter ancestor=nginx) removes every container based on the nginx image, after which docker rmi nginx succeeds.
Image Tags and Versions
Common tag patterns you'll see on Docker Hub:
- latest — the most recent stable build (avoid in production)
- 20.10.0 — a specific semantic version, fully reproducible
- 20-alpine — major version on Alpine Linux
- 20-slim — a slimmer Debian variant
- bullseye / bookworm — built on a specific Debian release
Warning: Avoid :latest in production. It's a moving target — pin to specific versions like node:20.10.0-alpine for reproducible builds.
Image Layers Explained
Every image is built from a stack of layers. Each instruction in a Dockerfile produces a layer. Layers are cached and shared between images, which is what makes Docker so efficient. If two images both start with the same base layer (FROM node:20-alpine, for example), they share that layer on disk.
You can inspect the layers of any image:
# Show the history (layers) of an image
docker history nginx:latest
Note: The order of instructions in a Dockerfile directly affects layer caching. Place rarely-changing instructions (installing OS packages) before frequently-changing ones (copying source code). We'll cover this in detail in Section 4.
Working with Containers
Starting Containers (docker run)
The docker run command is the workhorse of the Docker CLI. It creates a new container from an image and starts it. The general syntax is:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Here are the options you'll use over and over:
| Flag | Effect |
|---|---|
| -d | Run in the background (detached) |
| -it | Interactive + TTY (for shells) |
| -p HOST:CONTAINER | Publish a port |
| --name NAME | Assign a name |
| -e KEY=VALUE | Set an environment variable |
| -v HOST:CONTAINER | Mount a volume |
| --rm | Auto-remove when the container exits |
| --network NAME | Attach to a specific network |
A few realistic examples:
# Interactive Ubuntu shell, auto-removed on exit
docker run -it --rm ubuntu:22.04 bash
# Redis in the background
docker run -d -p 6379:6379 --name cache redis:7-alpine
# Postgres with a named volume for persistence
docker run -d \
--name db \
-e POSTGRES_PASSWORD=secret \
-v pgdata:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:16
Detached Mode and Naming
By default docker run attaches your terminal to the container, which is useful for short-lived commands but not for long-running services. Adding -d runs the container in the background and prints its ID. Combine that with --name so you can refer to the container by a friendly handle.
# Bad: random name, you'll need the ID later
docker run -d nginx
# Good: predictable name, easy to script
docker run -d --name web nginx
Stopping, Starting, Restarting
# Gracefully stop a container (sends SIGTERM, then SIGKILL after 10s)
docker stop web
# Start a stopped container
docker start web
# Restart (stop + start)
docker restart web
# Force-kill immediately (sends SIGKILL)
docker kill web
# Pause / unpause (freeze the process)
docker pause web
docker unpause web
Note: docker stop is almost always what you want. docker kill is for unresponsive containers — it skips the graceful shutdown signal.
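The 10-second grace period is configurable. If your app needs longer to shut down cleanly, pass a custom timeout:
# Wait up to 30 seconds before escalating to SIGKILL
docker stop -t 30 web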
Removing Containers
# Remove a stopped container
docker rm web
# Force-remove a running container (stop + remove)
docker rm -f web
# Remove all stopped containers
docker container prune
# Remove every container on the system
docker rm -f $(docker ps -aq)
Tip: If you don't need the container after it exits — for example, a one-off CLI tool — add --rm to docker run so cleanup happens automatically.
Executing Commands Inside a Container
The docker exec command runs an additional process inside an already-running container. The most common use is opening a shell to debug or inspect state:
# Open an interactive bash shell inside the "web" container
docker exec -it web bash
# Some minimal images don't have bash -- use sh instead
docker exec -it web sh
# Run a one-off command without a shell
docker exec web ls /etc/nginx
# Run a command as a different user
docker exec -u root -it web bash
Viewing Logs
# Print all logs
docker logs web
# Follow logs in real time
docker logs -f web
# Show the last 100 lines
docker logs --tail 100 web
# Show logs with timestamps
docker logs -t web
# Show logs from the last 5 minutes
docker logs --since 5m web
Best Practice: Configure your app to log to stdout/stderr instead of writing to files inside the container. This is the 12-factor approach and lets Docker and orchestrators like Kubernetes collect logs centrally.
Inspecting Containers
For deep introspection — IP address, mount points, environment, network settings, and more — use docker inspect:
# Print full JSON details of a container
docker inspect web
# Extract a specific field using a format template
docker inspect -f '{{ .NetworkSettings.IPAddress }}' web
# Get the exit code of a stopped container
docker inspect -f '{{ .State.ExitCode }}' web
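For the common case of checking published ports, docker port is a handy shortcut that skips the JSON entirely:
# List the container's published port mappings
docker port web
# Example output: 80/tcp -> 0.0.0.0:8080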
Writing Dockerfiles
A Dockerfile is a plain text file with instructions Docker follows to build a custom image. Each instruction produces a new layer. Dockerfiles are the foundation of reproducible builds.
Anatomy of a Dockerfile
Here is a minimal example. We'll dissect every line below:
# Use an official Node.js LTS image as the base
FROM node:20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy dependency manifests first (better caching)
COPY package*.json ./
# Install production dependencies
RUN npm ci --omit=dev
# Copy the rest of the application source code
COPY . .
# Document the port the application listens on
EXPOSE 3000
# Define environment variables
ENV NODE_ENV=production
# Default command to run when the container starts
CMD ["node", "server.js"]
Common Instructions
| Instruction | Purpose |
|---|---|
| FROM | Sets the base image — every Dockerfile starts here |
| WORKDIR | Sets the working directory for subsequent instructions |
| COPY | Copies files from the build context into the image |
| RUN | Executes a command during the build (e.g., installing packages) |
| ENV | Sets an environment variable (persists at runtime) |
| ARG | Defines a build-time variable (disappears after the build) |
| EXPOSE | Documents which port the container listens on |
| USER | Sets the user the container runs as |
| CMD | Default command when the container starts (overridable) |
| ENTRYPOINT | Fixed executable — combined with CMD for default flags |
Note: CMD vs ENTRYPOINT — use CMD for default arguments that are easy to override at runtime. Use ENTRYPOINT when your image is a specific executable. They work well together: ENTRYPOINT for the binary, CMD for default flags, as in the sketch below.
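Here's a minimal sketch of the two working together; the image name pinger is just an example:
FROM alpine:3.19
# ENTRYPOINT fixes the executable; CMD supplies overridable default arguments
ENTRYPOINT ["ping", "-c", "3"]
CMD ["localhost"]
After building this as pinger, docker run pinger pings localhost, while docker run pinger example.com overrides only the CMD and pings example.com instead.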
Your First Dockerfile: A Node.js App
Let's containerize a minimal Express app. The project structure looks like this:
my-app/
├── package.json
├── server.js
├── .dockerignore
└── Dockerfile
Step 1: Create the project
mkdir my-app
cd my-app
npm init -y
npm install express
Step 2: Create server.js
const express = require('express');
const app = express();
app.get('/', (req, res) => {
res.send('Hello from inside a Docker container!');
});
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
console.log(`Listening on port ${PORT}`);
});
Step 3: Create the Dockerfile
# Start from the official Node.js 20 image (Alpine = small)
FROM node:20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy ONLY the dependency manifests first (better caching)
COPY package*.json ./
# Install dependencies inside the container
RUN npm install --omit=dev
# Copy the rest of the application source code
COPY . .
# Document the port the app listens on
EXPOSE 3000
# The command that runs when the container starts
CMD ["node", "server.js"]
Step 4: Create .dockerignore
node_modules
npm-debug.log
.git
.env
Why exclude .env? Environment variables should be injected at runtime via -e flags or Docker secrets, not baked into the image. Committing secrets into an image is a serious security risk.
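With all four files in place, build and run the app. The tag my-app is just illustrative; pick any name you like:
# Build the image from the Dockerfile in the current directory
docker build -t my-app .
# Run it, publishing the app's port 3000 on the host
docker run -d -p 3000:3000 --name my-app my-app
# Verify
curl http://localhost:3000
# -> Hello from inside a Docker container!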
This article covers only a portion of the complete 40-page Beginner's Guide to Docker. The full guide goes much deeper — with more examples, real-world walkthroughs, and step-by-step projects across all the sections below:
Section 1: Introduction to Docker
What is Docker? · Why Use Docker? · Containers vs Virtual Machines · Core Docker Concepts · How Docker Works · Installing Docker · Running Your First Container
Section 2: Docker Images
What Is an Image? · Pulling Images from Docker Hub · Listing and Removing Images · Fixing the "Image Is Being Used" Error · Image Tags and Versions · Searching the Registry · Image Layers Explained
Section 3: Working with Containers
Starting Containers · Detached Mode and Naming · Listing Containers · Stopping, Starting, Restarting · Removing Containers · Executing Commands Inside a Container · Viewing Logs · Inspecting Containers
Section 4: Writing Dockerfiles
What Is a Dockerfile? · Anatomy of a Dockerfile · Common Instructions Explained · Your First Dockerfile (Node.js App) · Building the Image · Updating Your Code · The Build Cache and Layer Ordering · .dockerignore · Multi-Stage Builds
Section 5: Data Persistence with Volumes
Why Containers Are Ephemeral · Bind Mounts vs Named Volumes · Creating and Using Volumes · Real-World Example: Postgres Volume
Section 6: Docker Networking
Default Networks · Creating a User-Defined Bridge · Connecting Containers by Name · Publishing Ports · Inspecting Networks
Section 7: Docker Compose
Why Compose? · Installing Docker Compose · Your First docker-compose.yml · Common Compose Commands · Real Example: Node.js + Postgres + Redis · Environment Variables and .env Files
Section 8: Best Practices & Next Steps
Image Size Optimization · Security Tips · Tagging and Versioning Strategy · Where to Go From Here
Also worth reading: If you're interested in staying up to date with the React ecosystem, check out React 18: A Complete Guide to Every New Feature — a deep dive into everything that changed in React 18.
About Me
I'm a freelancer, mentor, and full-stack developer with 12+ years of experience, working primarily with React, Next.js, and Node.js.
Alongside building real-world web applications, I'm also an Industry/Corporate Trainer, training developers and teams in modern JavaScript, Next.js, and MERN stack technologies with a focus on practical, production-ready skills.
I've also created various courses with 3000+ students enrolled.
My Portfolio: https://yogeshchavan.dev/
Follow me on LinkedIn for content I share every day.