Docker Quick Guide — Build, Ship, and Run Applications Anywhere
Your fastest path from “it works on my machine” to reproducible, cloud‑ready containers.
Most teams don’t struggle because Docker is “hard.”
They struggle because they never built a clear mental model of:
- What exactly a container is (and isn’t)
- How images, layers, and volumes actually work
- When to use bind mounts vs named volumes
- How Docker Compose fits into local development
- Why Docker is the answer to build → distribute → run
This guide turns that confusion into a practical, interview‑ready Docker foundation you can use in real projects and confidently explain to others.
TL;DR — What You’ll Learn
✅ The 3 pillars Docker solves: Build, Distribute, Run
✅ The difference between VMs and containers (and why it matters)
✅ A solid mental model of Docker daemon, CLI, and REST API
✅ The essential Docker commands you’ll actually use daily
✅ How to keep containers alive, exec into them, and expose ports
✅ How to make data persistent with bind mounts and volumes
✅ How images and layers work (and why images are immutable)
✅ A gentle but powerful intro to Docker Compose for dev environments
✅ A final architecture checklist for Docker‑powered workflows
Copy‑paste friendly commands included. Let’s dive in. 🐳
1. The Three Pillars Docker Is Designed to Fix
Professional software development always circles the same three problems:
- Build: write and compile code on the developer's machine.
- Distribute: package the application so it can travel to servers or cloud environments.
- Run: execute the application reliably in production.
Without containers, every environment is slightly different:
- “It works on my machine.”
- “But not in staging.”
- “And production has a different Python/Node/OpenSSL version.”
🔥 Docker’s value proposition:
“Give me your app once, and I’ll make it build, ship, and run the same way everywhere.”
It does that by packaging your app and its runtime into images that can be turned into containers on any Docker‑capable host.
2. Virtual Machines vs Containers (Mental Model First)
Traditional Virtual Machines (VMs)
A virtual machine simulates an entire computer:
- A full guest OS (Linux/Windows)
- Its own virtualized hardware
- Its own disk image (VDI, VMDK, VHD…)
They work, but they’re heavy:
- ❌ Duplicate OS layers for each VM
- ❌ High admin cost (patching OS, updates, backups)
- ❌ Large images that are slow to copy and start
Containers: The Modern Alternative
Containers don’t ship a full OS for every app.
Instead, they share the host’s OS kernel and isolate:
- Processes
- Filesystem view
- Network interfaces
- Resource usage (CPU, RAM, I/O limits)
This gives us:
- 🚀 Fast creation & startup
- 🔁 Continuous delivery‑friendly immutable images
- 📦 Clean Dev vs Ops separation
- 🔍 App‑level observability (logs, metrics, health)
- 🔒 Strong resource isolation
- 🧩 Perfect for microservices
- 🧭 Consistent behavior from laptop → server → cloud
- ☁️ Cloud‑agnostic portability
- 🧱 Higher resource efficiency than VMs
If VMs are full houses, containers are apartments in the same building: isolated, but sharing infrastructure.
3. How Docker Works Under the Hood
After installing Docker, confirm the installation:
docker --version
docker info
Docker is built around three main pieces:
🔹 Docker Daemon (dockerd)
The background process that:
- Manages images, containers, networks, and volumes
- Listens for API requests (local or remote)
- Talks to the OS kernel features (namespaces, cgroups, etc.)
🔹 Docker CLI (docker)
The command‑line tool you interact with:
docker run hello-world
docker ps
docker images
The CLI does not run containers directly; it sends commands to the daemon.
🔹 Docker REST API
A programmatic HTTP API that:
- The CLI uses under the hood
- You can call from other tools (CI/CD, scripts, remote hosts)
Mental model:
You talk to the CLI → CLI calls Docker REST API → Daemon performs the work.
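You can even skip the CLI and query the API yourself. A minimal sketch, assuming Docker's default Unix socket at /var/run/docker.sock (the usual path on Linux and inside Docker Desktop):
# Same data docker ps shows, fetched straight from the REST API
curl --unix-socket /var/run/docker.sock http://localhost/containers/json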
4. Core Docker Building Blocks
🧊 Containers
“A container is a running instance of an image.”
- Process (or set of processes) with isolation
- Has its own filesystem view and network namespace
- Identified by a container ID (and optional name)
📦 Images
“An image is a read‑only template used to create containers.”
It includes:
- Base OS layer (e.g., ubuntu, alpine)
- Language runtime / tools (Node, Python, Java, etc.)
- Your application code and dependencies
- Metadata (entrypoint, environment defaults, exposed ports)
💾 Volumes
Persistent data storage managed by Docker.
Used for things like databases, uploaded files, and state you don’t want to lose when containers are removed.
🌐 Networks
Virtual networks that let containers talk to each other by name (e.g., db, redis, api).
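Name-based networking is easiest to see with a quick sketch (the network name appnet is arbitrary):
# Create a user-defined network
docker network create appnet
# Every container attached to appnet can now reach this one simply as "db"
docker run -d --name db --network appnet mongo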
5. Your First Container
Run the classic Docker test:
docker run hello-world
This does three things:
- Pulls the hello-world image if it's not present
- Creates a container from it
- Runs it and prints a message
🎉 Congrats! You’ve just run a container.
6. Essential Docker Commands (The Ones You’ll Actually Use)
# Run a simple container
docker run hello-world
# List running containers
docker ps
# List *all* containers (including stopped)
docker ps -a
# Inspect container details (by ID or name)
docker inspect <id-or-name>
# Give your container a friendly name when you create it
docker run --name my-container hello-world
# Rename an existing container
docker rename my-container my-renamed-container
# Remove a single container
docker rm <id-or-name>
# Remove all stopped containers (careful but useful)
docker container prune
💡 Tip: use names for containers in local dev (api, db, queue) so Compose and logs are easier to read.
7. Interactive Ubuntu Container (Your Linux Playground)
Run an Ubuntu container and open an interactive shell:
docker run -it ubuntu
Flags:
- -i → interactive
- -t → allocate a TTY (terminal)
Inside the container, check OS info:
cat /etc/lsb-release
Exit with exit or Ctrl+D — since the shell is the container's main process, the container stops when you leave.
8. Container Lifecycle & Keeping Things Alive
Every container has a main process.
If that process exits → the container stops.
Run a container that stays alive “doing nothing”:
docker run --name alwaysup -d ubuntu tail -f /dev/null
- -d → detached (runs in the background)
- tail -f /dev/null → a simple command that never exits
Enter the running container:
docker exec -it alwaysup bash
You can now debug, inspect, or experiment inside the container.
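From here, the usual lifecycle commands apply to the same container:
# Stop, restart, and follow the container's output
docker stop alwaysup
docker start alwaysup
docker logs -f alwaysup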
Key idea:
“A container isn’t a VM; it’s a process with isolation and a filesystem.”
9. Exposing Containers (Port Mapping)
By default, containers are isolated from the host network.
To make a container accessible from your machine, you map ports.
Example: run Nginx and expose it on port 8080 of your host:
docker run -d --name proxy -p 8080:80 nginx
- 80 is the container's port
- 8080 is your machine's port
Now open http://localhost:8080 in your browser.
If you see the Nginx welcome page, you’ve just exposed a containerized web server 🎉
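You can also check from the terminal:
# Fetch just the response headers from the containerized Nginx
curl -I http://localhost:8080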
10. Persisting Data: Bind Mounts vs Volumes
Containers are ephemeral by design. If you delete them, their internal filesystem goes away.
For persistent data (databases, uploads, configs), you have two main options.
1️⃣ Bind Mounts (Host ↔ Container)
“Mount this specific folder from my machine into the container.”
Example with MongoDB:
docker run -d --name db-mongo -v /path/on/my/machine:/data/db mongo
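The same mechanism powers everyday development workflows. A sketch, assuming a Node.js project in your current directory (the node:20 image and paths are illustrative):
# Throwaway container with your source mounted at /usr/src/app
docker run -it --rm -v "$(pwd)":/usr/src/app -w /usr/src/app node:20 bash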
Use cases:
- Mount your source code into a container during development
- Share config files from the host
- Quick hacks, local experimentation
2️⃣ Volumes (Docker‑Managed Storage)
“Let Docker manage storage for me, decoupled from the host’s directory layout.”
Create a volume:
docker volume create dbdata
Run MongoDB attached to that volume:
docker run -d --name db --mount src=dbdata,dst=/data/db mongo
Benefits:
- Docker manages where the data lives
- Volumes are easy to back up, migrate, and inspect
- Ideal for databases in multi‑container setups
Rule of thumb:
- Dev‑only & code sharing → bind mounts
- Long‑term app data → volumes
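If you're curious where the data actually lives, ask Docker:
# Shows the mountpoint Docker manages for this volume
docker volume inspect dbdata
# The -v shorthand is equivalent to the --mount form above:
# docker run -d --name db -v dbdata:/data/db mongo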
11. Copying Files To and From Containers
Even with mounts and volumes, sometimes you just need to drop in or extract a file.
Create a file on your host:
touch file.txt
Copy it into a container:
docker cp file.txt mycontainer:/path/file.txt
Copy a directory out of a container:
docker cp mycontainer:/path localfolder
Great for:
- Grabbing logs or generated files
- Injecting one‑off configs or scripts
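For example, patching the Nginx proxy from section 9 with a local config (nginx.conf here is a hypothetical file on your host):
# Copy the config in, then restart so Nginx reloads it
docker cp nginx.conf proxy:/etc/nginx/nginx.conf
docker restart proxy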
12. Building Custom Images (Dockerfile 101)
So far we’ve used images from Docker Hub.
Now let’s build our own.
Create a file called Dockerfile:
FROM ubuntu:latest
RUN mkdir -p /usr/src && touch /usr/src/hello-docker.txt
Build your image:
docker build -t ubuntu:hello .
- -t ubuntu:hello → names the image
- . → build context is the current directory
Run it:
docker run -it ubuntu:hello bash
ls /usr/src
You should see hello-docker.txt inside.
Publishing to Docker Hub
# Log in once, then tag and push under your Docker Hub username
docker login
docker tag ubuntu:hello myuser/ubuntu:hello
docker push myuser/ubuntu:hello
Now others (or your CI/CD system) can pull it:
docker pull myuser/ubuntu:hello
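Real-world Dockerfiles usually add code and a start command too. A sketch for a hypothetical Node.js app (the image tag and filenames are illustrative):
FROM node:20-alpine
WORKDIR /usr/src/app
# Copy dependency manifests first so this layer caches well
COPY package*.json ./
RUN npm install
# Then copy the rest of the source
COPY . .
CMD ["node", "index.js"]
Notice the order: dependencies before source code. That's deliberate, and section 13 explains why.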
13. Docker Image Layers (Why Images Are So Fast)
Every instruction in your Dockerfile creates a layer:
FROM ubuntu:latest # layer 1
RUN apt-get update # layer 2
RUN apt-get install -y curl # layer 3
COPY . /app # layer 4
Key properties:
- Layers are stacked to form the final image
- Layers are cached and reused between images
- Images are immutable → once built, they don’t change
- Containers add a writable layer on top
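You can see these layers for any image with docker history:
# Lists each layer of the image built earlier, with its size and creating instruction
docker history ubuntu:hello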
You can also “capture” changes in a running container:
docker commit <container-id> ubuntu-with-tools
This is useful for quick experiments, but in real projects you should codify changes in the Dockerfile, not via docker commit.
14. Docker Compose — Your Local Micro‑Platform
Docker Compose lets you orchestrate multiple containers with a single YAML file.
Example docker-compose.yml:
version: "3.8"
services:
  app:
    image: holamundoapp
    environment:
      MONGO_URL: "mongodb://db:27017/test"
    depends_on:
      - db
    ports:
      - "3000:3000"
  db:
    image: mongo
Run everything:
docker-compose up
Stop and remove containers, networks, etc.:
docker-compose down
Other useful commands:
# Show logs for all services
docker-compose logs
# Show logs for a single service
docker-compose logs app
# Exec into a running service container
docker-compose exec app bash
# Inspect networks
docker network ls
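Two everyday variants of up are worth knowing:
# Start the whole stack in the background
docker-compose up -d
# Rebuild images before starting
docker-compose up --build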
Why Compose matters:
- One command to start a full stack (API + DB + cache + worker)
- Everyone on the team runs the same environment
- Easy to plug into CI pipelines
15. Docker & Compose as Everyday Development Tools
Using Docker in dev unlocks:
- 🧪 Reproducible environments — no more “it works on my machine.”
- 🚀 Easy onboarding — new devs run docker-compose up and are ready.
- ♻️ Fast resets — tear down and rebuild clean environments quickly.
- 🧱 Consistent stacks — same stack in dev, staging, and production.
You can use this pattern for:
- Django / Flask apps
- Node.js + Postgres APIs
- Go services with Redis
- .NET APIs + SQL Server
- Multi‑service microservice demos
Once you’re comfortable locally, the same ideas translate to:
- Kubernetes
- ECS / Fargate
- Azure Container Apps
- Google Cloud Run
Docker is the on‑ramp to the entire container ecosystem.
16. Docker Architecture Checklist (For Real Projects)
Containers & Images
- [ ] One clear responsibility per image (API, DB, worker, etc.)
- [ ] Small, layered Dockerfiles with caching in mind
- [ ] No secrets baked into images (env vars or secret managers instead)
Volumes & Data
- [ ] Use named volumes for databases and persistent state
- [ ] Use bind mounts only where host ↔ container sharing is intentional
- [ ] Have a backup/restore story for critical volumes
Networking
- [ ] Services talk via Docker network names (e.g., db:27017)
- [ ] Only expose ports to the host that truly need to be public
- [ ] Use .env files or Compose profiles for configuration per environment
Compose & Dev Workflow
- [ ] One docker-compose.yml checked into version control
- [ ] Optional docker-compose.override.yml for local tweaks (see the sketch below)
- [ ] README.md with copy-paste-ready commands for new team members
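A minimal override file might look like this (values are illustrative; Compose merges it automatically when the file exists next to docker-compose.yml):
# docker-compose.override.yml
version: "3.8"
services:
  app:
    volumes:
      - .:/usr/src/app
    environment:
      DEBUG: "true"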
If you can check most of these boxes, you’re already ahead of many “Docker‑using” teams.
Final Thoughts
Docker is one of the most powerful tools in modern software engineering.
Mastering containers unlocks:
- Cloud‑native development
- Microservices architectures
- DevOps automation
- CI/CD pipelines
- Scalable, reproducible deployments
You don’t have to memorize every command.
Start with:
docker run
docker ps
docker logs
docker exec
docker build
docker-compose up
Then gradually add images, volumes, networks, and Compose patterns as your projects grow.
If you’d like a follow‑up article, I can dive into:
- Multi‑stage builds for tiny production images
- Docker + Node.js / Python / .NET real‑world examples
- Debugging containers and optimizing image size
- Docker best practices for CI/CD pipelines
✍️ Written for developers who want to think about containers like engineers, not magicians.
