From Virtual Machines to Containers
In my last post, I talked about virtual machines and how they let you run multiple operating systems on one computer. But I also mentioned they're heavy and slow. That's where containers come in.
Containers are like virtual machines, but much lighter. Instead of running a full operating system for each application, containers share the host operating system while keeping applications isolated from each other.
What Exactly Is a Container?
A container packages an application with everything it needs to run: code, libraries, dependencies, and configuration files. It's like a shipping container for software.
How Containers Work: Namespaces and Cgroups
Containers rely on two powerful Linux kernel features: namespaces and cgroups. Understanding these helps demystify how containers actually achieve isolation.
Namespaces: Creating Separate Worlds
Namespaces are what make containers feel like separate machines even though they're sharing the same OS. Think of namespaces as creating different "views" of the system for each container.
There are several types of namespaces (I'll show a small code experiment right after this list):
PID Namespace (Process ID): Each container gets its own set of process IDs. Inside a container, you might see a process with ID 1, and in another container, there's also a process with ID 1. They don't conflict because they're in separate namespaces. From inside the container, you can only see your own processes, not processes from other containers or the host.
Network Namespace: Each container can have its own network interfaces, IP addresses, and routing tables. This is why different containers can run web servers on the same port (like port 80) without conflicts. Each thinks it owns that port in its own network namespace.
Mount Namespace: Gives each container its own file system view. One container might see /app with Node.js files, while another sees /app with Python files. They're completely separate even though the path is the same.
User Namespace: Maps users inside the container to different users on the host. You might be "root" inside the container, but you're actually an unprivileged user on the host system. This adds security.
UTS Namespace: Lets each container have its own hostname, making it feel like a separate machine.
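To make this concrete, here's a minimal sketch of asking the kernel for new namespaces. It's Go (the language Docker itself is written in), Linux-only, and it assumes you run it with enough privileges (root, or inside a user namespace). This isn't Docker code, just the raw kernel feature:

```go
// Run a shell inside new UTS, PID, and mount namespaces.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")

	// Ask the kernel to put the child process into fresh namespaces:
	//   CLONE_NEWUTS -> its own hostname
	//   CLONE_NEWPID -> its own process IDs (the shell becomes PID 1)
	//   CLONE_NEWNS  -> its own mount table
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}

	// Wire the shell up to our terminal so we can poke around inside it.
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside that shell, changing the hostname only affects the new UTS namespace, and `echo $$` prints 1 because the shell really is PID 1 in its own PID namespace.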
Cgroups: Limiting Resources
Control groups (cgroups) are about resource management. While namespaces control what a container can see, cgroups control what a container can use. Here are the main limits, with a small example after the list:
CPU limits: You can tell a container, "You can only use 50% of one CPU core." No matter how hard the application tries, it can't exceed that limit. This prevents one badly-behaved container from slowing down everything else.
Memory limits: Set a maximum amount of RAM a container can use. If it tries to use more, the kernel's out-of-memory killer terminates it, and the container runtime can restart it depending on its restart policy. This is crucial because without limits, one memory leak could starve everything else on the host.
Disk I/O limits: Control how much a container can read from or write to disk. Prevents one container from monopolizing disk access.
Network bandwidth: Limit how much network bandwidth a container can consume.
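Here's a rough sketch of what setting those limits looks like, again in Go. It assumes a cgroup v2 system (the usual setup on modern distros) mounted at /sys/fs/cgroup, root privileges, and that the cpu and memory controllers are enabled for the root group; the group name "demo" is made up for the example:

```go
// Create a cgroup v2 group and cap its memory and CPU, then join it.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	group := "/sys/fs/cgroup/demo" // hypothetical group name for this example

	if err := os.MkdirAll(group, 0o755); err != nil {
		panic(err)
	}

	// Memory limit: processes in this group may use at most 256 MiB of RAM.
	must(os.WriteFile(filepath.Join(group, "memory.max"), []byte("268435456"), 0o644))

	// CPU limit: 50000 out of every 100000 microseconds, i.e. half a core.
	must(os.WriteFile(filepath.Join(group, "cpu.max"), []byte("50000 100000"), 0o644))

	// Move the current process into the group; from now on the kernel
	// accounts everything it does against these limits.
	pid := strconv.Itoa(os.Getpid())
	must(os.WriteFile(filepath.Join(group, "cgroup.procs"), []byte(pid), 0o644))

	fmt.Println("process", pid, "is now limited to 256 MiB and half a CPU core")
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}
```

This is roughly what a container runtime arranges for you when you pass memory or CPU flags; in day-to-day Docker use you never write to these files yourself.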
How Containers Access Computer Resources
The operating system has two layers. Kernel space is where the kernel directly manages hardware and security. User space is where applications run. Applications can't touch hardware directly—they ask the kernel through system calls.
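Here's a tiny Go sketch of what "asking the kernel" looks like. The file path is just an example, and in real programs you'd use the standard library rather than raw syscalls:

```go
// Read a file using explicit system calls instead of the usual os helpers.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Each of these calls crosses from user space into the kernel;
	// the program itself never touches the disk directly.
	fd, err := syscall.Open("/etc/hostname", syscall.O_RDONLY, 0)
	if err != nil {
		panic(err)
	}
	defer syscall.Close(fd)

	buf := make([]byte, 64)
	n, err := syscall.Read(fd, buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("pid %d read %d bytes: %s", syscall.Getpid(), n, buf[:n])
}
```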
When a container needs to do something, here's the flow (with a small experiment right after these steps):
Application makes a request: Your app in the container wants to read a file or send a network packet.
System call to kernel: The container makes a system call to the shared kernel, for example, read() to read a file or socket() to create a network connection.
Kernel checks namespace: The kernel sees that this request came from a container and checks which namespaces apply. "This process is in network namespace X, so it can only see network interfaces in that namespace."
Kernel checks cgroups: Before granting the request, the kernel checks cgroup limits. "Has this container exceeded its memory limit? Is it trying to use too much CPU?"
Kernel executes or denies: If everything checks out, the kernel performs the action and returns the result to the container. The container never directly touches the hardware.
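You can actually peek at step 3 from user space. On Linux, /proc/self/ns holds one symlink per namespace the process belongs to, and the number in brackets identifies that namespace. A small Go sketch (assuming a Linux host):

```go
// Print which namespaces the current process belongs to.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	entries, err := os.ReadDir("/proc/self/ns")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		target, err := os.Readlink(filepath.Join("/proc/self/ns", e.Name()))
		if err != nil {
			continue
		}
		// Prints lines like: pid -> pid:[4026531836]
		fmt.Printf("%s -> %s\n", e.Name(), target)
	}
}
```

Run it on the host and then inside a container and the numbers differ for every namespace the container was given; that difference is exactly what the kernel consults when it scopes your request.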
Why Containers Are Better for Many Use Cases
After learning about both VMs and containers, here's why containers often win:
Speed: Containers start in seconds because they don't boot an entire OS. VMs take minutes.
Size: A container image might be 100-200 MB. A VM image is often several gigabytes.
Efficiency: I can run dozens of containers on my laptop. Running more than 2-3 VMs would bring it to its knees.
Portability: If it runs in a container on my machine, it'll run the same way on a server, in the cloud, or on my teammate's computer.
My Understanding So Far
Containers solve the "it works on my machine" problem. They package everything an application needs, making it consistent across different environments. They're lighter than VMs but still provide isolation.
This is what Docker is all about – making containers easy to create, share, and run. In my next post, I'll dive into Docker specifically and how it became the standard tool for working with containers.
Part 2 of my Docker learning journey. Slowly connecting the dots between VMs, containers, and Docker.