When I first started working as a developer, Docker seemed to be everywhere. I kept hearing "Docker this, Docker that", and often had to run docker commands to get things working. But I didn't actually know what Docker was, so I spent some time learning about it, and here is what I learnt.
Docker is a platform that helps us package and isolate our code in containers so that we can run it on different machines. It's like putting everything in your house into a huge shipping container, together with everything needed to use your stuff. No matter where the container goes, you can use the things inside it the same way you would at home.
This is important in software development, because you develop applications in a specific environment, and what works on your machine might not work on another. Your code might run perfectly on your computer, but stop working on your friend's. If, for example, your code is written in Node.js, the reason might be as simple as your friend having a different, incompatible version of Node installed.
Docker helps us create and manage containers. Containers are standardised, independent units of software that we can run. The operating-system features that containers are built on (on Linux, namespaces and control groups) exist without Docker, so technically we don't need Docker to create containers, but it's a tool that makes it much easier for us to do so.
Containers are running instances of images. Images are like read-only templates for containers. They contain our code and all required tools, and we can create multiple containers based on the same image.
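As a sketch of this image-to-container relationship (assuming Docker is installed and using the official `nginx` image from Docker Hub as an example), we can start several independent containers from the same image:

```shell
# Download the nginx image (a read-only template) from Docker Hub
docker pull nginx

# Start two independent containers from that one image
# (the container names "web1" and "web2" are arbitrary examples)
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# List running containers: both appear, each a separate
# running instance of the same image
docker ps
```

The image itself is never changed by this; each container gets its own writable layer on top of it.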
Images are built from Dockerfiles. A Dockerfile is a text file that contains the instructions needed to assemble a specific image: which base image to start from, which files to copy in, and which commands to run. Images can be stored and shared in repositories on Docker Hub, a hosted registry that is like a GitHub for Docker images.
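As a minimal sketch (assuming a Node.js application whose entry point is a hypothetical `index.js`, with its dependencies listed in `package.json`), a Dockerfile might look like this:

```dockerfile
# Start from an official Node.js base image hosted on Docker Hub
FROM node:20

# Set the working directory inside the image
WORKDIR /app

# Copy the dependency manifests and install dependencies first,
# so this layer can be cached between builds
COPY package*.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

# The command a container will run when started from this image
CMD ["node", "index.js"]
```

Running `docker build -t my-app .` in the same directory asks the Docker daemon to build an image from these instructions, and `docker push` can then publish it to a repository on Docker Hub.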
Docker containers are sometimes compared to virtual machines, as they also provide an isolated environment for your code. But unlike containers, virtual machines contain their own operating system. A virtual machine is like another computer running on your computer: it includes a full operating system, takes up considerable disk space and memory, and is comparatively slow to start. While the environment configuration of a virtual machine can be shared and reproduced, doing so is often cumbersome.
Containers run on top of the Docker Engine. The Docker Engine is a technology that takes care of the containerisation of your applications. It is a client-server application, which consists of three components:
- A client, the Docker CLI, which allows us to interact with Docker using the command line.
- A REST API that the client uses to send our commands to the server, which can be a local or remote machine.
- A server, a local or remote machine that has the Docker daemon running. The Docker daemon, also called dockerd, listens for API requests and manages Docker objects such as images and containers.
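You can see all three components at work from the command line. As a sketch (assuming Docker is installed and the daemon is listening on its default Unix socket):

```shell
# The CLI (client) sends an API request to the daemon; the output
# shows a Client section and a Server (Engine) section
docker version

# Talk to the REST API directly, bypassing the CLI
# (assumes the daemon's default socket at /var/run/docker.sock)
curl --unix-socket /var/run/docker.sock http://localhost/version
```

The second command makes the client-server split visible: the CLI is just one convenient way to call the same API.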
Docker containers may include a thin operating-system layer (the user-space tools of a Linux distribution), but they share the host's kernel, so this layer is usually much more lightweight than the full operating system of a virtual machine. This makes containers faster to start and less resource-hungry than virtual machines.
However, Docker containers are stateless by default: unless you attach persistent storage, any data written inside the container is lost when the container is removed. Virtual machines can be either stateful or stateless.
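A minimal sketch of the difference (assuming Docker is installed; the volume name `mydata` and the file path are hypothetical examples):

```shell
# Write a file inside a container with no persistent storage;
# removing the container discards the file along with it
docker run --name temp alpine sh -c 'mkdir -p /data && echo hello > /data/note.txt'
docker rm temp

# Mount a named volume so the data outlives any single container
docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/note.txt'

# A brand-new container mounting the same volume still sees the file
docker run --rm -v mydata:/data alpine cat /data/note.txt
```

Volumes are Docker's built-in mechanism for this kind of persistence; bind mounts to a host directory work similarly.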
The diagram below illustrates the difference between a virtual machine and a Docker container. A hypervisor is software that creates and runs virtual machines. The Bins / Libs layer provides the software libraries and services needed for the app to run.
Docker is a platform that helps us package and isolate our code in containers. Containers are standardised, independent units of software and running instances of images. Images are built from Dockerfiles by the Docker daemon, which is part of the Docker Engine, and can be stored and shared on Docker Hub.
Unlike virtual machines, Docker containers do not bundle a full operating system of their own and are stateless by default.