Although I have been using Docker for a while now, the theory always seemed a bit complex to me, so I decided to do some in-depth research to understand it and put it into simpler words, for myself and for others. It still wouldn't be as simple as this one line from the Stack Overflow user 'L0j1k' that kind of sums it all up:
> Docker is just a fancy way to run a process.
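To make that one-liner concrete, here is a sketch, assuming a machine with Docker installed and access to the public `alpine` image:

```shell
# Run a single process (echo) inside an Alpine container.
# From the host's point of view this is just a regular process,
# wrapped in namespaces and cgroups by the Docker daemon.
docker run --rm alpine echo "hello from a containerised process"

# Compare: the same process, with no container wrapping at all.
echo "hello from a plain process"
```

The `--rm` flag removes the container once its process exits, which fits the "just a process" view: the container lives exactly as long as the process it runs.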
Virtualisation in a broad sense means creating a simulated, virtual computing environment.
There are four virtualisation categories:
- Desktop virtualisation: where a centralised machine manages individual desktops,
- Network virtualisation: where network bandwidth is split into independent channels that can be assigned to different devices,
- Storage virtualisation: where hardware storage is abstracted to create a logical view of storage to be consumed by devices,
- Software virtualisation: where several isolated virtual environments are created on top of a single physical host environment.
Docker doesn't fall under any of the above categories: it uses container-based virtualisation (also called OS-level virtualisation).
All the containers on the host machine share the same kernel and run on the same OS as the host. Each container is identified by its namespaces, which is how the kernel can ensure execution isolation.
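One way to see those namespaces, as a sketch assuming a Linux host with Docker installed:

```shell
# Each process's namespaces are visible under /proc/<pid>/ns.
# On the host:
ls -l /proc/self/ns

# Inside a container: same kernel, but different namespace IDs.
# Those distinct IDs are what isolate the container's view of
# PIDs, the network, mount points, and so on.
docker run --rm alpine ls -l /proc/self/ns
```

Comparing the two listings, the symlink targets (the namespace IDs) differ between host and container, even though both sets of processes are scheduled by the very same kernel.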
A container ideally runs a single process. It's also stateless, so it can be created and destroyed without affecting the application's behaviour.
Since an application usually uses multiple containers, Docker offers a YAML configuration file (Docker Compose) that allows us to build the different containers and put them on the same virtual network.
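A minimal sketch of such a Compose file for a hypothetical two-container application (the service names `web` and `db` and the image choices are illustrative assumptions, not a prescription):

```yaml
# docker-compose.yml — two containers on the same default virtual network
services:
  web:
    image: nginx:alpine        # hypothetical web front-end container
    ports:
      - "8080:80"              # map host port 8080 to container port 80
  db:
    image: postgres:16-alpine  # hypothetical database container
    environment:
      POSTGRES_PASSWORD: example
```

Running `docker compose up` in the directory containing this file would create both containers and attach them to a shared network, where `web` can reach `db` simply by its service name.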
Simply put, a Docker image is a read-only template of instructions used to create a Docker container.
It's the equivalent of an immutable VM snapshot: a snapshot of the container at a specific point in time that contains all the elements needed to run an application (code, configuration files, libraries, etc.)
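That "template of instructions" is typically written as a Dockerfile. Here is a minimal sketch for a hypothetical Python application (the file names `app.py` and `requirements.txt` are assumptions for the example):

```dockerfile
# Each instruction below adds a read-only layer to the image.

# Base image: a minimal Python install (code + libraries).
FROM python:3.12-slim

WORKDIR /app

# Bake the dependencies into the image.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code itself.
COPY app.py .

# The single process this container will run.
CMD ["python", "app.py"]
```

`docker build -t myapp .` turns this template into an image, and `docker run myapp` creates a container from it; the image itself never changes, which is what makes it comparable to an immutable snapshot.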
See you soon in the practical part about Docker (a link will be added here once that part is up).