Most people use Docker every day.
But very few understand what actually happens when a container starts.
A container is not magic. It is a carefully orchestrated combination of Linux kernel features, storage layering, and process isolation.
This article breaks Docker down into the internal components that make containers possible.
Docker Engine Architecture: What Happens When You Run a Command
When you execute:
docker run nginx
three major components are involved:
- CLI
- REST API
- Docker Daemon
Docker CLI
The CLI is what you interact with directly.
It converts your command into an API request.
REST API
The REST API is the bridge between the client and the daemon: every CLI command is translated into HTTP requests, usually sent over a local Unix socket.
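You can talk to this API without the CLI at all. A minimal sketch, assuming the daemon is listening on its default Unix socket:

```shell
# Query the Docker Engine API directly over the Unix socket.
# This is the same endpoint the CLI uses under the hood.
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers -- the API call behind `docker ps`:
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```

`docker run nginx` ultimately becomes a handful of such calls: create the container, then start it.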
Docker Daemon
The daemon performs the real work:
- pulls images
- creates containers
- manages networks
- handles volumes
Why Containers Feel Like Separate Machines
Containers look isolated because Linux Namespaces create separate views for each container.
Namespaces isolate:
- process IDs
- network interfaces
- mount points
- IPC communication
- hostname visibility
A container sees its own world even though it shares the host kernel.
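You can see these namespaces with plain Linux tooling, no Docker required; every process's memberships are exposed under /proc:

```shell
# List the namespaces the current process belongs to.
# Two processes in the same namespace share the same inode number.
ls -l /proc/self/ns
# Typical entries: cgroup, ipc, mnt, net, pid, user, uts --
# one per isolation type listed above.
```

Comparing these inodes for a containerized process and a host process shows exactly which views Docker has separated.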
The PID 1 Illusion
Inside a container:
ps aux
The application may appear as:
PID 1
But on the host, that same process may actually be PID 3482 or another host-level process ID.
The kernel's PID namespace remaps process IDs, so each container gets its own numbering starting at 1.
That illusion is one reason containers feel independent.
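You can observe both sides of the mapping yourself. A sketch, where the container name `web` is arbitrary:

```shell
# Start a container (the name is arbitrary):
docker run -d --name web nginx

# Inside the container's namespace, nginx appears as PID 1:
docker exec web ps aux

# On the host, the same process has an ordinary host-level PID:
docker inspect --format '{{.State.Pid}}' web
```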
Resource Limits: How Docker Prevents Host Exhaustion
Without limits, one container can consume excessive CPU or memory.
Docker uses Linux cgroups to control this.
Limit CPU:
docker run --cpus=0.5 ubuntu
Limit memory:
docker run --memory=100m ubuntu
This ensures workloads stay predictable.
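A quick way to confirm the limit took effect, assuming a cgroup v2 host (the default on modern distributions):

```shell
# The memory limit is visible from inside the container itself:
docker run --rm --memory=100m ubuntu cat /sys/fs/cgroup/memory.max
# With the limit set, this prints the byte value instead of "max"
```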
Where Docker Stores Everything
Docker stores runtime data under:
/var/lib/docker
Important directories include:
- containers — per-container config, metadata, and logs
- image — image metadata and layer references
- volumes — named volume data
- overlay2 — the actual layered filesystems
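As a quick check, you can ask the daemon itself where its root directory lives:

```shell
# Ask the daemon for its data root:
docker info --format '{{.DockerRootDir}}'
# Typically prints /var/lib/docker

# Listing it requires root, since it holds all image and container data:
sudo ls /var/lib/docker
```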
Docker Image Layers: Why Builds Are Fast
Dockerfile instructions that change the filesystem (RUN, COPY, ADD) each create a new read-only layer; the rest only add metadata.
Example:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y python3-pip
COPY . /app
Each instruction becomes an incremental layer.
This makes builds reusable.
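You can inspect these layers on any image you have pulled; `docker history` prints one row per instruction:

```shell
# One line per Dockerfile instruction, newest layer first:
docker history nginx

# The underlying layer digests:
docker image inspect --format '{{json .RootFS.Layers}}' nginx
```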
Layer Caching: Why Order Matters
Docker reuses a layer as long as its instruction and inputs are unchanged, but once one layer's cache is invalidated, every layer after it must be rebuilt.
If you copy your whole source tree before installing dependencies, every code change forces a full dependency reinstall.
Better: copy the dependency manifest first, so code changes only invalidate the final COPY:
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
This improves build speed significantly.
Copy-on-Write: What Happens During Runtime
Images are read-only.
When a container starts, Docker adds one writable layer.
When a container modifies a file that lives in a read-only image layer, the storage driver first copies the file up into the writable layer, and the copy is what gets changed.
This is called Copy-on-Write.
Important:
If the container is removed, that writable layer disappears.
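You can watch the writable layer fill up with `docker diff`; the container name `demo` is arbitrary:

```shell
# Start a container and modify a file:
docker run -d --name demo nginx
docker exec demo touch /etc/hello

# docker diff lists exactly what landed in the writable layer:
docker diff demo
# A = added, C = changed, D = deleted

# Removing the container discards that layer -- and the file with it:
docker rm -f demo
```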
Volumes vs Bind Mounts vs Writable Layer
Writable Layer
Temporary container changes.
Destroyed with container deletion.
Volumes
Managed by Docker.
Best for databases.
Bind Mounts
Direct host path mapping.
Best for development.
Example:
docker run --mount type=bind,source=/data,target=/app/data nginx
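For comparison, the named-volume version of the same flag; the volume name `pgdata` is arbitrary:

```shell
# Create a Docker-managed volume and mount it:
docker volume create pgdata
docker run --mount type=volume,source=pgdata,target=/var/lib/postgresql/data postgres

# The data survives even after the container is removed:
docker volume ls
```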
Why Overlay2 Matters
On most Linux systems, Docker's default storage driver is:
overlay2
It uses the kernel's OverlayFS to merge the read-only image layers and the writable container layer into a single unified view.
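You can see the overlay pieces for a live container; the container name `web` is hypothetical:

```shell
# Which storage driver is this host using?
docker info --format '{{.Driver}}'

# Where are a container's layers assembled?
docker inspect --format '{{json .GraphDriver.Data}}' web
# LowerDir = image layers, UpperDir = writable layer,
# MergedDir = the unified view the container actually sees
```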
Final Mental Model
A running container is built from:
- Docker daemon
- REST API
- namespaces
- cgroups
- overlay storage
- writable runtime layer
Docker is not just packaging.
It is Linux primitives assembled into an elegant runtime system.
Key Takeaway
Once you understand Docker internals, debugging becomes much easier: you stop memorizing commands and start understanding behavior.