"It works on my machine." — if you’ve ever said or heard this, containers exist because of you 😄
Modern software development is no longer just about writing code. It’s also about running that code reliably everywhere — on your laptop, in testing, and in production. This is where containerization and Docker come in.
In this blog, we’ll break down what containers are, why Docker was created, and how Docker works internally — in a way that’s easy to understand, even if you’re completely new.
1. Introduction to Containers
The Problem Containers Were Created to Solve
Before containers, applications were usually deployed directly on servers or virtual machines. This caused several issues:
- Different environments (dev, test, prod) behaved differently
- Dependency conflicts between applications
- Difficult deployments and rollbacks
- Poor scalability
Developers needed a way to package an application along with everything it needs to run, and run it the same way everywhere.
What Are Containers?
A container is a lightweight, portable unit that packages:
- Application code
- Runtime (Node, Python, Java, etc.)
- Libraries and dependencies
- Configuration
Containers share the host operating system kernel but run in isolated user spaces, making them fast and efficient.

Think of a container as a shipping container:
- The contents inside don’t matter to the ship
- The container can move between ships, trucks, or ports
- Everything inside stays the same
Containers vs Virtual Machines (VMs)
| Feature | Virtual Machines | Containers |
|---|---|---|
| OS | Full guest OS per VM | Share host OS kernel |
| Startup Time | Minutes | Seconds or milliseconds |
| Resource Usage | Heavy | Lightweight |
| Isolation | Strong | Process-level isolation |
| Portability | Limited | Very high |
Analogy:
- VMs are like renting a full house for each guest
- Containers are like renting rooms in the same building
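If you have Docker installed, you can feel the startup-time difference yourself. Here's a tiny optional experiment (it assumes the official `alpine` image, a roughly 5 MB Linux distribution):

```bash
# Download a tiny Linux image (one-time cost)
docker pull alpine

# Start a container, run one command, and remove it.
# This typically completes in well under a second;
# a VM would still be booting its guest OS.
time docker run --rm alpine echo "hello from a container"
```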
2. Challenges of Non-Containerized Applications
Before containers became popular, applications were usually installed directly on servers or VMs. This worked — but it came with many headaches.
Common Problems (Explained Simply)
1. Dependency Conflicts
One application might need Node 16, another needs Node 18.
Installing both on the same machine often breaks something.
Result: upgrading one app accidentally breaks another.
2. Environment Inconsistency ("Works on my machine")
- App works on a developer laptop
- Fails on QA
- Breaks in production
Why? Different OS versions, libraries, configs, or runtime versions.
3. Difficult Scaling
To handle more traffic:
- You clone the server or VM
- Configure everything again
- Takes time and effort
Scaling is slow and expensive.
4. Painful Deployments & Rollbacks
- Manual deployment steps
- Long release windows
- Rolling back means reconfiguring servers
Mistakes often lead to downtime.
How Containerization Fixes These Problems
- Each app runs with its own dependencies (see the quick demo after this list)
- Same container runs everywhere
- Scale by running more containers
- Roll back by switching container versions
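Here's that demo: two apps needing different Node versions run side by side with zero conflict (this assumes the official `node:16` and `node:18` images from Docker Hub):

```bash
# Two apps, two Node versions, one machine, no conflict
docker run --rm node:16 node --version   # prints v16.x
docker run --rm node:18 node --version   # prints v18.x
```

Each container carries its own runtime, so upgrading one app can never break the other.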
3. Introduction to Docker
Now that we understand why containers are needed, let’s talk about the tool that made containers popular and easy to use — Docker.
What Is Docker? (In Simple Words)
Docker is a tool that helps you:
- Package your application
- Include everything it needs to run
- Run it the same way on any machine
Docker lets developers say:
“Build my app once, and run it anywhere.”
Instead of manually setting up environments, Docker automates this using containers.
Why Was Docker Created?
Before Docker:
- Containers existed, but were hard to use
- Each company had its own custom tooling
- Developers struggled with setup and consistency
Docker solved this by providing:
- A standard format (Dockerfile & images)
- Simple commands (`docker build`, `docker run`)
- Easy image sharing via registries
Docker made containers developer-friendly.
How Docker Solves Real Problems
Let’s connect Docker to the problems we discussed earlier:
- Dependency conflicts → Each app has its own container
- Environment mismatch → Same image runs everywhere
- Slow deployments → Start containers in seconds
- Difficult rollbacks → Switch image versions easily
Docker doesn’t remove complexity — it packages it neatly.
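To make the rollback point concrete, here's a sketch of what switching versions looks like (the `my-app` image name and the `v1`/`v2` tags are hypothetical):

```bash
# Deploy version 2
docker run -d --name my-app -p 3000:3000 my-app:v2

# Something broke? Roll back by running the previous image tag.
docker stop my-app && docker rm my-app
docker run -d --name my-app -p 3000:3000 my-app:v1
```

No servers to reconfigure; the old version is just another image you already have.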
Where Docker Fits in Modern Development
Docker is used in almost every modern software workflow:
- Local development – same setup for all developers
- CI/CD pipelines – predictable builds and tests
- Microservices – each service runs in its own container
- Cloud & Kubernetes – Docker images are the standard unit
Docker is often the first step toward:
- Kubernetes
- DevOps
- Cloud-native architectures
4. How Docker Works Internally
Let's keep this simple. Think of Docker as a small system that helps you package apps and run them the same way everywhere. Below are a few friendly building blocks you need to know.
The two sides: Client and Daemon
- Docker Client — what you type (`docker build`, `docker run`) or what a GUI calls. It sends requests.
- Docker Daemon (`dockerd`) — the background service that actually builds images and runs containers. The client talks to the daemon.
Quick analogy: the client is the remote control, the daemon is the TV doing the work.
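You can see both halves on your own machine; `docker version` reports the Client and the Server (the daemon) as separate components:

```bash
# The "Client" section is the CLI you typed into;
# the "Server" section is dockerd answering the request.
docker version
```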
The runtimes (containerd & runc)
- `containerd` and `runc` are the low-level helpers the daemon uses. They turn an image into a running process on your machine. You don’t need to memorize them — just know Docker delegates the actual process creation to small, focused tools.
Images vs Containers (simple)
- Image — a frozen snapshot, like a recipe or blueprint. It’s read-only.
- Container — a running instance of that image. It adds a thin writable layer so the app can run and change files while it’s alive.
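The distinction shows up directly in the CLI:

```bash
docker images    # lists the frozen snapshots (images) on your machine
docker ps        # lists running containers
docker ps -a     # includes stopped containers too
```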
Dockerfile — the recipe
A Dockerfile is a simple text file with instructions to build an image (what base image to use, what to copy, what commands to run). `docker build` reads it and creates the image.
Registry — where images live
A registry (Docker Hub, GitHub Container Registry, private registries) stores images so you can `docker push` and `docker pull` them from anywhere.
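The round trip looks like this (replace `username` with a real registry account; the image names here are placeholders):

```bash
# Log in to your registry first (docker login), then:

# Tag a local image with your registry namespace
docker tag my-app username/my-app:1.0

# Upload it
docker push username/my-app:1.0

# Download it on any other machine
docker pull username/my-app:1.0
```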
How Docker isolates and controls resources (very simple)
Docker uses two kernel features:
- Namespaces — give containers their own view (processes, network, filesystem). Think of a namespace as a private room: you can’t see other rooms.
- cgroups (control groups) — limit how much CPU, memory, or disk a container can use. Think of cgroups as the room’s power limiter.
Together they make containers feel like little isolated environments without the overhead of a full OS.
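You never manage namespaces or cgroups by hand; Docker exposes them as simple flags. For example, resource limits (cgroups under the hood) look like this:

```bash
# Cap this container at half a CPU core and 256 MB of RAM.
# If it exceeds the memory limit, the kernel stops it;
# other containers on the machine are unaffected.
docker run --rm --cpus=0.5 --memory=256m alpine sleep 5
```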
Image layers & cache (why builds are fast)
- Each Dockerfile step produces a layer. Layers are reused (cached) if they haven’t changed. This is why ordering your Dockerfile sensibly makes builds faster.
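You can watch this in action: `docker history` lists an image's layers, and building twice shows the cache at work. A small sketch (`my-app` is a placeholder image name, and the build assumes a Dockerfile in the current folder):

```bash
# Show each layer of an image and the instruction that created it
docker history my-app

# Build twice without changing any files: the second build
# reuses every cached layer and finishes almost instantly.
docker build -t my-app .
docker build -t my-app .
```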
Networking & storage (short)
- Networking: Docker gives each container a network interface and lets you map host ports (`-p host:container`) so services are reachable.
- Volumes: for data that must survive container restarts, use volumes. They keep data outside the container’s temporary writable layer.
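A minimal sketch of both ideas together (the `mydata` volume name is arbitrary, and the example uses the public `nginx` image):

```bash
# Create a named volume that outlives any single container
docker volume create mydata

# Map host port 8080 to container port 80, and mount the volume at /data
docker run -d -p 8080:80 -v mydata:/data nginx

# Even if this container is removed, the files in 'mydata' survive
```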
Typical simple workflow
- Write a Dockerfile (describe app environment).
- `docker build -t my-app .` — build an image locally.
- `docker run -p 3000:3000 my-app` — run it as a container.
- `docker push username/my-app` — upload it to a registry.
- `docker pull username/my-app` — on another machine, pull and run the same image.
5. Practical Example: Simple Web App (Hands-On)
In this section, you’ll actually run a containerized app by copy-pasting commands. No prior Docker experience required.
We’ll containerize a very simple Node.js web server.
Step 0: Prerequisites
Make sure you have:
- Docker installed (`docker --version` should work)
- Any OS (Windows / macOS / Linux)
Step 1: Create a Project Folder
```bash
mkdir simple-docker-app
cd simple-docker-app
```
Step 2: Create a Simple Web App
Create a file named `index.js`:

```javascript
const http = require('http');

const PORT = 3000;

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Docker Container!');
});

server.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});
```
Step 3: Create package.json
```json
{
  "name": "simple-docker-app",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  }
}
```
Step 4: Create a Dockerfile
Create a file named `Dockerfile` (no extension):

```dockerfile
# Use an official Node.js runtime
FROM node:18

# Set working directory inside container
WORKDIR /app

# Copy package files first (for caching)
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy application source code
COPY . .

# Expose application port
EXPOSE 3000

# Start the application
CMD ["npm", "start"]
```
Step 5: Build the Docker Image
Run this command in the same folder:
```bash
docker build -t simple-web-app .
```
This creates a Docker image called `simple-web-app`.
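You can confirm the image exists:

```bash
# The new image should appear in the list
docker images simple-web-app
```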
Step 6: Run the Container
```bash
docker run -p 3000:3000 simple-web-app
```
Now open your browser and visit:
http://localhost:3000
You should see:
Hello from Docker Container!
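The command above runs in the foreground (press Ctrl+C to stop it). If you'd rather run it in the background, a detached variant looks like this:

```bash
# Stop the foreground container first (Ctrl+C) so port 3000 is free,
# then run detached, with a name so it's easy to manage:
docker run -d --name web -p 3000:3000 simple-web-app

docker ps          # confirm it's running
docker stop web    # stop it
docker rm web      # remove the stopped container
```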
What Just Happened (In Simple Words)
- Docker packaged your app + Node.js + config into an image
- A container was created from that image
- Port `3000` inside the container was mapped to your machine
- The app runs the same way everywhere
Why This Matters
Now this app can run:
- On any developer laptop
- Inside CI/CD pipelines
- On cloud servers
No environment setup required — just Docker.
Final Thoughts
Docker is not just a tool — it’s a fundamental shift in how software is built and shipped.
By using containers:
- You eliminate environment issues
- You simplify deployments
- You scale with confidence
If you understand:
- What containers are
- Why Docker exists
- How Docker works internally

then you already have a strong foundation for Kubernetes, DevOps, and cloud-native development.