Anuttam Anand
Docker 1.0: The Basics

Hello folks, welcome to the first blog in the series dedicated to understanding Docker and Kubernetes. I've written numerous blogs and followed many tutorials, but most of them bombard users with practical commands, preparing us for real-world use. However, like me, you might have wondered how Docker works under the hood. Don't worry, in this blog, we will delve into Docker, connect the basic concepts we've learned, and explore how the commands we use function behind the scenes.

This blog is intended for people who have prior experience with virtual machines and a basic understanding of them. If you've used Docker before, that's fantastic! I encourage you to read the full series for a deeper understanding of the underlying processes.

Everything's a Process

You might have experience with Linux, Unix, or command line interfaces. Consider the following command and its output:

ls

Output

jkjarvis@jkjarvis:~/docker-blog$ ls
a.txt  b.txt

In the output above, we see the result of the 'ls' command, which lists the files and folders in the current directory.

But what does 'ls' actually do?

Before answering this, let's examine the following:

jkjarvis@jkjarvis:~/docker-blog$ which ls
/usr/bin/ls

This tells us where the 'ls' command we run is stored on our system. The path '/usr/bin/ls' indicates that 'ls' is a binary file located in the system's binaries directory. In simple terms, when we run 'ls', the shell looks up this location and executes the binary code stored in the 'ls' file. (You can try 'cat /usr/bin/ls', though the output will be unreadable raw binary.)

ls -> command -> process

In summary, every command we run on our systems is ultimately a process: binary code loaded from disk, executed by the hardware, and its result displayed back to us as output.
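To see this for yourself, you can inspect the binary with the 'file' command. A minimal sketch follows; the exact output will differ from system to system, but on a typical 64-bit Linux machine it looks something like this:

jkjarvis@jkjarvis:~/docker-blog$ file /usr/bin/ls
/usr/bin/ls: ELF 64-bit LSB pie executable, x86-64, dynamically linked

So 'ls' is just compiled machine code sitting on disk, and running the command turns it into a process.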

The World of VMs...

We are familiar with virtual machines (VMs). A VM lets us run one operating system on top of another, allowing, for example, the execution of Linux software on a Windows machine, or an Android app inside an Android VM on a Windows system.


Under the hood, when you run an app on the guest OS, the guest OS executes its binary code on a hypervisor. The hypervisor passes the work down to the host OS, which runs it on the hardware, obtains the output, and hands it back to the guest OS to display. A typical hypervisor virtualizes the machine it is running on rather than emulating a different CPU architecture, so the guest's binary code must be compatible with the underlying hardware. For example, you cannot run x86-64 (AMD64) software on an ARM machine, nor 64-bit (x64) software on a 32-bit (x86) processor. You can run a 64-bit Linux guest on a 64-bit Windows host, but you cannot run a 64-bit Linux image on a 32-bit Windows machine.

I hope this explanation simplifies VMs for you!
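If you are unsure which architecture your own machine uses before choosing a VM image, a quick check on Linux is the sketch below. The output varies by machine: a 64-bit Intel/AMD box reports x86_64, while an ARM machine reports something like aarch64.

jkjarvis@jkjarvis:~/docker-blog$ uname -m
x86_64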

But then what are containers ??

When we start our laptops, there's always a boot time during which numerous processes run and set everything up. VMs carry this same cost: the guest OS has to boot and configure itself every time, which slows down our work. Moreover, VMs require additional disk space and RAM, making them less efficient. What if we could eliminate this startup time and the need for extra resources? Enter "Containers"...


With containers, we share the host OS and kernel and isolate only our applications. This sharing of resources saves a significant amount of disk space and memory. Like VMs, containers let us run software in isolated environments, but on the same machine we can typically run far more containers (often 3x or more) than VMs. Containers also help package software so it can run on different machines: we build images that contain the software together with instructions on how to run it, and a container is a running instance of such an image. It's like having a piece of furniture shipped to you with an assembly manual, except it assembles itself automatically.

Note: Although containers are lightweight, they interact directly with the host OS kernel, which makes them less isolated and therefore less secure than VMs. When security is the top priority, VMs are the better choice.

So, what role does Docker play in all of this? Docker provides the tools to package software into images, run those images as containers, and manage them.
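To make the "shared kernel" idea concrete, here is a small sketch you can try once Docker is installed (installation is covered in the next blog). Both commands should report the same kernel version, because the container uses the host's kernel rather than booting its own; the exact version string will differ on your machine:

jkjarvis@jkjarvis:~/docker-blog$ uname -r
5.15.0-91-generic
jkjarvis@jkjarvis:~/docker-blog$ docker run --rm alpine uname -r
5.15.0-91-generic

Contrast this with a VM, which would boot its own guest OS and report its own kernel.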

In this blog, I've covered the basic concepts necessary to understand the container world. Hands-on practice will be explored in the next blog, where we will discuss:

  1. Docker
  2. Installation and setup
  3. Dockerd, containerd, ctr, and much more

Thank you for reading! Feel free to comment below for any clarifications or questions.

You can reach out to me at:

  1. X (formerly Twitter): https://twitter.com/AnuttamAna56189
  2. LinkedIn: linkedin.com/in/anuttam-anand
