Have you ever wondered what happens behind the scenes when you run docker run hello-world? Let's dive into the inner workings of Docker and understand how it manages to isolate applications so effectively.
The Three Pillars of Docker Engine
When you install Docker on a Linux system, you’re actually installing three distinct components that work together seamlessly:
1. Docker CLI (Command Line Interface)
This is what you interact with directly: the docker command you type in your terminal. Think of it as the remote control for your TV; it's just the interface, not the actual TV.
# These are all Docker CLI commands
docker run nginx
docker ps
docker stop container_name
docker build -t myapp .
2. Docker REST API
This is the communication bridge between the Docker client and the Docker daemon. Every command you type gets translated into HTTP requests that the daemon can understand. It’s like a translator that converts your English commands into a language the daemon speaks.
This API can be accessed directly using HTTP clients like curl or wget, or through HTTP libraries available in most programming languages.
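For example, on a default Linux install the daemon listens on a Unix socket, and you can query the API with curl. A minimal sketch, assuming the standard socket path /var/run/docker.sock (you may need root or membership in the docker group):
# Query the Docker daemon's REST API directly over its Unix socket
curl --unix-socket /var/run/docker.sock http://localhost/version          # daemon and API version info
curl --unix-socket /var/run/docker.sock http://localhost/containers/json  # same data as `docker ps`
The "localhost" in the URL is just a placeholder hostname that curl requires; the request actually travels over the socket, not the network.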
3. Docker Daemon (dockerd)
This is the heavy lifter, a background process that does all the actual work. It manages:
- Images : The blueprints for containers
- Containers : Running instances of images
- Volumes : Persistent data storage
- Networks : Communication between containers
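Each of these object types can be listed from the CLI, and every command below is ultimately just a REST call that the daemon answers:
# Ask the daemon what it is currently managing
docker images       # images stored locally
docker ps -a        # containers, running and stopped
docker volume ls    # volumes
docker network ls   # networks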
The Core Challenge: Application Isolation
Before we dive into how Docker works, let’s understand the fundamental problem it solves. When you run multiple applications on a single machine, they all share the same operating system resources. This creates several critical issues:
The Problems with Traditional Application Deployment
1. Resource Conflicts
# Two web servers trying to use port 80
nginx starts on port 80 ✓
apache tries to start on port 80 ✗ (Port already in use!)
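You don't need two web servers to see this; any two processes that bind the same port collide. A minimal sketch using Python's built-in HTTP server (port 8080 is used here simply to avoid needing root):
# Reproducing the conflict with two ordinary processes
python3 -m http.server 8080 &   # first server binds port 8080 ✓
python3 -m http.server 8080     # second one fails ✗ (OSError: Address already in use)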
2. Dependency Hell
# App A needs Python 3.8, App B needs Python 3.9
App A: "I need libssl version 1.1"
App B: "I need libssl version 1.0"
System: "I can only have one version installed!"
3. Process ID Collisions
# What if two applications create processes with the same ID?
App A creates process with PID 1000
App B tries to create process with PID 1000 → CONFLICT!
4. Security Concerns
# Applications can see and potentially interfere with each other
ps aux # Shows ALL processes from ALL applications
kill 1000 # Can accidentally kill other app's process
Why This Matters for Docker
Docker’s promise is: “Package an application with all its dependencies and run it anywhere, isolated from other applications.”
But how can you achieve this isolation when all applications are running on the same Linux kernel? You can’t install multiple operating systems for each application — that would be too resource-intensive.
The Solution: Linux Namespaces and Cgroups
How Applications Get Containerized
Now comes the fascinating part: how does Docker isolate applications so effectively? The secret lies in namespaces, a Linux kernel feature that Docker uses to handle this isolation.
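Namespaces aren't Docker-specific: they're a kernel feature, and every Linux process already belongs to one namespace of each kind (PID, network, mount, UTS, IPC, user, cgroup). You can see your own shell's namespaces under /proc. A quick sketch; the inode numbers in the sample output are illustrative:
# Every process has a set of namespaces, visible as symlinks in /proc
ls -l /proc/$$/ns
# Sample output (abbreviated):
# pid -> pid:[4026531836]
# net -> net:[4026531992]
# mnt -> mnt:[4026531841]
# ...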
The Problem: Process ID Conflicts
Imagine you’re running two applications on your system, and both try to create a process with ID 1000. Without isolation, this would cause a conflict:
Without Namespaces:
The Solution: Namespaces
Docker uses namespaces to create an isolated workspace for each container. Even if two containers each start a process with the same PID, each of those processes has two identities: a container-specific PID and a different PID on the host system. That means you can run as many containers as you like without PID conflicts.
With Namespaces:
Seeing Namespaces in Action
# Run a container and check its process ID from inside
docker run -it ubuntu bash
# Inside the container:
echo $$ # This will show PID 1 (the bash process)
ps aux # You'll see a minimal process list
# From another terminal on the host:
docker ps # Find your container ID
docker exec <container_id> ps aux # Same minimal process list
ps aux | grep bash # You'll see the actual PID on the host (much higher number)
Visual Explanation
Host System View:
PID 1 → systemd
PID 2 → kthreadd
...
PID 15847 → docker-containerd-shim
PID 15864 → bash (your container process)
Container View:
PID 1 → bash (appears as the main process)
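You can verify that these really are two different PID namespaces by comparing namespace identifiers from the host. A minimal sketch; the container ID and PIDs are illustrative, and lsns comes from util-linux:
# Find the container's main process as seen by the host
docker inspect --format '{{.State.Pid}}' <container_id>   # e.g. 15864
# Compare its PID namespace with the host's
sudo readlink /proc/15864/ns/pid   # pid:[4026532201]  (container's namespace)
sudo readlink /proc/1/ns/pid       # pid:[4026531836]  (host's namespace)
# Different identifiers mean different PID namespaces
sudo lsns -t pid                   # lists every PID namespace on the system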
The Complete Picture: Namespaces + Cgroups
Now, here’s an important point: Namespaces alone don’t provide complete isolation!
Namespaces solve the “visibility” problem — they create separate views of system resources. But what about resource usage? What stops one container from consuming all the CPU or memory and starving other containers?
# This is still a problem with namespaces alone:
Container A: "I need all 16GB of RAM!"
Container B: "Hey, leave some RAM for me!"
System: "First come, first served!"
Docker uses TWO technologies together:
- Namespaces → Provide isolation (separate views)
- Cgroups → Provide resource control (limits and quotas)
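Cgroup limits are exposed directly as docker run flags. A minimal sketch, assuming a cgroup v2 host (the file path differs under cgroup v1):
# Cap a container at 512 MB of RAM and one CPU
docker run -it --memory=512m --cpus=1 ubuntu bash
# Inside the container, the memory limit shows up in its cgroup:
cat /sys/fs/cgroup/memory.max   # 536870912 (512 MB in bytes)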
💡 Key Takeaway
The Docker CLI, REST API, and daemon orchestrate both namespaces and cgroups to create secure, isolated, and resource-controlled containers. Each container gets its own view of processes, network, filesystem, AND guaranteed resource limits.
Docker’s complete isolation magic comes from two Linux technologies working together:
- Namespaces create isolated views — each container thinks it’s running alone
- Cgroups control resource usage — preventing containers from monopolizing system resources