If you've ever heard "works on my machine" — containers are the permanent fix to that problem.
## TL;DR
- A container bundles your app + libraries + minimal OS into a portable, isolated unit
- Containers share the host kernel — making them ~100× lighter than VMs
- Docker is the toolchain that builds, runs, and distributes containers
- The lifecycle is three commands: `docker build` → `docker run` → `docker push`
- DockerHub is the default public registry for sharing images
## What Is a Container?
A container is a self-contained, portable unit of software. It bundles together:
- Your application code
- All required libraries and dependencies
- The minimum system dependencies needed to run
Everything is packaged so the application runs identically regardless of the underlying environment — your laptop, a CI server, or a cloud VM in us-east-1.
**Simple mental model:** Think of a container like a shipping container on a cargo ship. The contents are standardized and isolated. The ship (host OS) doesn't care what's inside — it just carries it.
## Containers vs. Virtual Machines
Both containers and VMs isolate applications, but the architecture is fundamentally different — and that difference has huge practical consequences.
| Dimension | Containers | Virtual Machines |
|---|---|---|
| OS Overhead | Shares host kernel | Full OS per VM |
| Startup Time | Milliseconds | 30s – 2 minutes |
| Image Size | ~22 MB (Ubuntu base) | ~2.3 GB (Ubuntu VM) |
| Portability | Any compatible host OS | Requires matching hypervisor |
| Isolation Level | Process-level (namespaces) | Full OS isolation |
| Management | Lightweight, fast-moving | Heavier tooling required |
The Ubuntu container base image is almost 100× smaller than its VM equivalent. That's not a rounding error — that's a fundamentally different architecture.
## Why Are Containers So Lightweight?
The secret is in what containers *don't* include. Rather than bundling a full operating system, a container shares the host OS kernel and relies on kernel features (namespaces and cgroups) for isolation.
What a container base image includes:

```text
/bin    # ls, cp, ps — essential executables
/sbin   # init, shutdown — system executables
/etc    # config files for system services
/lib    # libraries used by executables
/usr    # user apps, utilities, docs
/var    # logs, spool files, temp files
/root   # home directory of the root user
```
What it borrows from the host OS:
- Kernel & system calls — the container calls out to the host kernel for CPU, memory, I/O
- Networking stack — for connectivity, using host networking or a virtual network
- Linux namespaces — creates isolation for file system, PID, and network per container
- Control groups (cgroups) — limits how much CPU/memory/I/O each container can use
**Key insight:** Changes inside a container do NOT affect the host or other containers. Isolation is enforced at the kernel level via namespaces and cgroups — even though all containers share the same kernel.
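To make that concrete, both mechanisms are visible directly from the CLI — a sketch assuming a working Docker install; the limits and image are arbitrary:

```shell
# cgroups in action: cap this container at half a CPU and 256 MB of RAM
docker run --rm --cpus="0.5" --memory="256m" ubuntu sleep 5

# namespaces in action: inside its own PID namespace the container
# sees only its own processes, starting from PID 1
docker run --rm ubuntu ps aux
```

Both flags map straight onto cgroup controllers on the host; `docker stats` shows the limits being enforced while a container runs.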
## What Is Docker?
Containerization is the concept. Docker is the implementation.
Docker is the platform that makes it easy to:
- Build container images from a simple text file (Dockerfile)
- Run those images as containers on any machine
- Push and pull images from public/private registries like DockerHub
### Docker Architecture

```text
Docker CLI (docker)          ← you type commands here
        │ sends API requests to
        ▼
Docker Daemon (dockerd)      ← the core service
        ├── manages → Images
        ├── manages → Containers
        ├── manages → Networks
        └── manages → Volumes
        │ communicates with
        ▼
Docker Registry (DockerHub / Quay.io / private)
```
If the Docker Daemon stops running, Docker is effectively dead — it's the orchestrator for everything. The CLI is just a thin client that sends commands to the daemon via the Docker API.
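A quick way to see that client/server split for yourself (assuming Docker is installed; output abbreviated in the comments):

```shell
docker version
# Client:  version and API version of the CLI you typed into
# Server:  version of the dockerd engine that did the actual work
# If dockerd is down, the Client section still prints, but the
# Server section is replaced by a connection error.
```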
## The Docker Lifecycle

Everything in Docker revolves around three core commands:

```text
Dockerfile ──docker build──▶ Image ──docker run──▶ Container
                               │
                          docker push
                               │
                               ▼
                           Registry
```
| Command | What It Does |
|---|---|
| `docker build` | Reads a Dockerfile and produces an image |
| `docker run` | Starts a container from an image |
| `docker push` | Uploads an image to a registry |
## Key Docker Terminology

### Dockerfile
A plain-text file where you define the steps to build your image. Each instruction creates a new layer in the image. Only changed layers are rebuilt on subsequent builds — which is why Docker builds stay fast.
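A minimal sketch of what such a file can look like (the base image and commands here are illustrative, not taken from any particular repo):

```dockerfile
FROM ubuntu:22.04                   # layer 1: base image
RUN apt-get update && \
    apt-get install -y python3     # layer 2: dependencies (cached until this line changes)
COPY app.py /app/app.py             # layer 3: source — only this rebuilds when app.py changes
CMD ["python3", "/app/app.py"]      # default command (image metadata, not a filesystem layer)
```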
### Image

A read-only template built from a Dockerfile. Images are often composed on top of other images — for example, you might extend the official `ubuntu` image by installing Python on top of it.
### Container
A running instance of an image. The same image can spawn many containers simultaneously. Containers are ephemeral by default — stop one and the data inside is gone unless you use volumes.
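Volumes are what turn that ephemerality off — a sketch assuming a working Docker install; the names are arbitrary:

```shell
# Without a volume: the file dies with the container
docker run --rm ubuntu bash -c 'echo hello > /tmp/data.txt'

# With a named volume: the data outlives any single container
docker volume create mydata
docker run --rm -v mydata:/data ubuntu bash -c 'echo hello > /data/greeting.txt'
docker run --rm -v mydata:/data ubuntu cat /data/greeting.txt   # prints "hello"
```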
### Docker Daemon (dockerd)

The background service that manages everything — images, containers, networks, volumes. It listens for Docker API requests from the CLI or other daemons.
### Docker Client (docker)

The CLI tool most users interact with. When you run `docker run`, the client sends that command to dockerd via the Docker API.
### Docker Registry
A storage service for images. DockerHub is the default public registry. You can also run a private registry, or use services like Quay.io, GitHub Container Registry, or AWS ECR.
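The registry is simply part of the image name — when no host is given, DockerHub is assumed. A sketch (the non-DockerHub paths below are illustrative placeholders, not real images):

```shell
# These two are equivalent; DockerHub is the implicit default
docker pull ubuntu:22.04
docker pull docker.io/library/ubuntu:22.04

# Other registries are selected by prefixing their hostname
docker pull ghcr.io/some-org/some-image:latest   # GitHub Container Registry
docker pull quay.io/some-org/some-image:latest   # Quay.io
```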
## Installing Docker on Ubuntu

The official docs at docs.docker.com/get-docker cover all platforms. For a quick Ubuntu / EC2 setup:

```shell
sudo apt update
sudo apt install docker.io -y
```
### Start the Daemon & Grant User Access

This is where many beginners get stuck. After installing Docker, you need to both start the daemon and grant your user access to run Docker commands.

```shell
# Check daemon status
sudo systemctl status docker

# Start daemon if not running
sudo systemctl start docker

# Add your user to the docker group (replace 'ubuntu' with your username)
sudo usermod -aG docker ubuntu
```
**Important:** You must log out and log back in after running `usermod` for the group change to take effect (running `newgrp docker` in the current shell also works).
### Verify the Installation

```shell
docker run hello-world
```
If you see a permission denied error, it usually means either the daemon isn't running or your user isn't in the docker group yet.
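Two quick checks cover most of those failures (the error text in the comment is abbreviated; exact wording varies by version):

```shell
# Symptom looks like:
#   permission denied while trying to connect to the Docker daemon socket ...

sudo systemctl status docker   # is dockerd actually running?
groups                         # does the output include "docker"?
```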
If you see `Hello from Docker!` — you're good to go.
## Building & Running Your First Image
### Step 1: Clone the example repo

```shell
git clone https://github.com/iam-veeramalla/Docker-Zero-to-Hero
cd Docker-Zero-to-Hero/examples
```
### Step 2: Log in to DockerHub

Create a free account at hub.docker.com if you don't have one, then:

```shell
docker login
# Enter your DockerHub username and password when prompted
```
### Step 3: Build your image

```shell
docker build -t yourusername/my-first-docker-image:latest .
```
You'll see Docker executing each step in your Dockerfile, pulling base images if needed, and tagging the final result:
```text
Successfully built 960d37536dcd
Successfully tagged yourusername/my-first-docker-image:latest
```
### Step 4: Verify the image exists

```shell
docker images
```

```text
REPOSITORY                           TAG      IMAGE ID       CREATED          SIZE
yourusername/my-first-docker-image   latest   960d37536dcd   26 seconds ago   467MB
ubuntu                               latest   58db3edaf2be   13 days ago      77.8MB
hello-world                          latest   feb5d9fea6a5   16 months ago    13.3kB
```
### Step 5: Run the container

```shell
docker run -it yourusername/my-first-docker-image
```

```text
Hello World
```
### Step 6: Push to DockerHub

```shell
docker push yourusername/my-first-docker-image
```
Your image is now publicly available on DockerHub. Anyone in the world can pull and run it with a single docker run command.
**What just happened?** You containerized an application, ran it locally, and shipped it to a global registry — in six commands. That's the Docker workflow in its entirety.
## Key Takeaways
- Containers bundle app + dependencies + minimal OS into a portable unit
- They're lightweight because they share the host kernel via namespaces & cgroups
- Container base images are ~100× smaller than equivalent VM images
- Docker is the platform: build images → run containers → push to registries
- The Docker Daemon is the core service; the CLI is just a client talking to it
- DockerHub is the default public registry for sharing and pulling images
- The three lifecycle commands are `docker build`, `docker run`, and `docker push`
## What's Next?
This is just the beginning. From here, explore:
- Docker Compose — define and run multi-container applications with a single YAML file
- Docker Volumes — persist data beyond the container lifecycle
- Multi-stage builds — drastically shrink production image sizes
- Kubernetes — orchestrate containers at scale across clusters
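As a taste of the multi-stage item above, here is a sketch of the pattern (Go and the stage names are illustrative choices, not from this article):

```dockerfile
# Stage 1: build with the full toolchain (hundreds of MB of compilers and caches)
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: ship only the compiled binary on a tiny base image
FROM alpine:3.19
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```

Only the final stage ends up in the pushed image; the build toolchain is discarded.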
Happy shipping.