
Docker: Part 2: A deep dive with my mighty shark (the Docker Engine)

Disclaimer, just for understanding: remember, the Docker Engine works much like a web engine in that it is platform independent, but there is a difference.

In my previous article, I gave an overview of Docker; if you haven't read it yet, pause what you're doing and go have a look here.

Docker Engine is the container's heart and what allows it to run. Docker has a modular architecture, which means you can swap or customize any of its parts. I believe it is still beneficial to understand how a Docker request flows from the client before the container is operational.
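As a rough sketch of that flow, here is the classic build-and-run round trip: the CLI client sends each request to the daemon, which does the actual work. The image name `myapp` is hypothetical; any directory with a Dockerfile will do.

```bash
# The client sends the build request to the Docker daemon,
# which assembles the image layer by layer from the Dockerfile
docker build -t myapp:latest .

# The client then asks the daemon to create and start a container
# from that image; --rm cleans it up when the process exits
docker run --rm myapp:latest
```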

Namespaces are mostly used to isolate our containers. They restrict what resources a container can see. A single Docker Engine, for example, will operate several Docker containers, all of which are totally independent from one another. Each container believes it is the only one running on the host: if you could ask a container about the other containers, it would be perplexed, because from its point of view it is the only container operating on this host, regardless of what else runs alongside it or on other hosts.
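To see this isolation for yourself, here is a minimal sketch using the public alpine image: inside the container, the process list starts over at PID 1 with the container's own command, no matter how many processes the host is running.

```bash
# Start a throwaway container that just sleeps
docker run -d --name c1 alpine sleep 300

# List processes from inside the container: only its own appear,
# with 'sleep 300' as PID 1, thanks to the PID namespace
docker exec c1 ps

# Clean up
docker rm -f c1
```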

Thus, never give two containers the same name on the same host machine. If you want to mark versions, tag the images you build instead; the tag will appear in the tags section of Docker Hub or your registry.
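A quick sketch of both points; `myapp` and `myuser` are hypothetical names:

```bash
docker run -d --name web nginx   # first container named 'web' starts fine
docker run -d --name web nginx   # this one fails: 'web' is already taken on this host

# To distinguish versions, tag the image instead of reusing container names;
# the tag shows up under the repository's tags in Docker Hub / your registry
docker tag myapp:latest myuser/myapp:v2
docker push myuser/myapp:v2
```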


Control Groups: As the name suggests, this is something used to control things, and you are right. Control groups (cgroups) are used to restrict the amount of resources (CPU, memory, networking, etc.) that a container may access. For example, we don't want a single container using all of the underlying OS resources and leaving nothing for the other containers or processes on the host.
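As a small illustration (the container name `limited` is arbitrary), Docker exposes these cgroup limits directly as flags on `docker run`:

```bash
# Cap the container at half a CPU core and 256 MB of memory
docker run -d --name limited --cpus=0.5 --memory=256m nginx

# Confirm the memory limit the daemon recorded, in bytes
docker inspect -f '{{.HostConfig.Memory}}' limited

# Watch live usage against those limits
docker stats --no-stream limited
```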

Daemons: Despite being named like wicked aliens, they're not what you're dreaming of. Daemons in computing are simply software processes that run in the background without interfering with the user, executing specific activities on a regular basis or in response to specific commands. They are essentially programs that operate in the background and perform activities on our behalf (you can still have them run your commands at the required time or on triggers); it all depends on the commands you have written.
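The Docker daemon (`dockerd`) is exactly such a background process. Assuming a systemd-based Linux host, you can peek at it like this:

```bash
# Is the daemon running, and since when?
systemctl status docker

# What has it been doing in the background?
journalctl -u docker --since today

# The docker CLI is just a client talking to that daemon
docker info
```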

If you want to learn how to write a Dockerfile, check out some of the lectures from freeCodeCamp.org.
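As a minimal, hypothetical taste of what such a Dockerfile can look like (using the public alpine base image):

```bash
# Write a tiny Dockerfile
cat > Dockerfile <<'EOF'
FROM alpine:3.19
CMD ["echo", "Hello from my first Dockerfile"]
EOF

# Build an image from it and run a container
docker build -t hello-docker .
docker run --rm hello-docker
```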

Dockershim was a short-term approach suggested by the Kubernetes group to incorporate Docker support so that Docker could act as the container runtime. Currently, the KubeSphere container platform uses Docker as the Kubernetes container runtime, while also remaining compatible with any CRI-compliant implementation.

Docker Engine is a client-server technology that facilitates the activities and workflows associated with developing, shipping, and running container-based applications. The engine launches a server-side daemon process that hosts images, containers, networks, and storage volumes.
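A small sketch of that client-server split; the remote host name is hypothetical, and SSH access to a machine running dockerd is assumed:

```bash
# By default the client talks to the local daemon over /var/run/docker.sock
docker ps

# The same client can drive a daemon on another machine
docker -H ssh://user@remote-host ps
```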

Docker images are stored in a Docker registry. The registry may be a user's local repository or a shared repository, such as Docker Hub, enabling several users to cooperate on the development of an application. Within the same enterprise, multiple teams can exchange or distribute containers by uploading them to Docker Hub, which is a cloud platform similar in spirit to GitHub.
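A typical share-an-image workflow might look like the following sketch; `myteam` is a hypothetical Docker Hub namespace:

```bash
docker login                                   # authenticate against Docker Hub
docker pull nginx:latest                       # fetch an image from the registry
docker tag nginx:latest myteam/nginx:patched   # re-tag it under the team's repository
docker push myteam/nginx:patched               # upload it for others to pull
```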

Docker Compose is a tool that allows you to run several containers as a single service. It accomplishes this by running each container in isolation while allowing them to communicate with one another.
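As a minimal sketch, assuming Compose v2 (`docker compose`; older installs ship a separate `docker-compose` binary), here is a two-service stack where the containers stay isolated but can reach each other by service name:

```bash
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"    # host port 8080 -> container port 80
  cache:
    image: redis:7   # reachable from 'web' at hostname 'cache'
EOF

docker compose up -d   # start both containers as one service
```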

Docker Swarm is a container service that enables IT administrators and developers to build and control clusters of swarm nodes within the Docker network. Each Docker Swarm node is a Docker daemon, and all Docker daemons communicate using the Docker API. A swarm is made up of two kinds of nodes: manager nodes and worker nodes. A manager node is in charge of cluster administration activities; worker nodes obtain and carry out tasks assigned by the manager node.
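A minimal single-machine sketch (a real cluster would join more nodes with `docker swarm join`):

```bash
docker swarm init    # turn this daemon into a manager node
docker node ls       # list managers and workers in the swarm

# The manager schedules three replicas of the service across the nodes
docker service create --name web --replicas 3 nginx
docker service ls    # check that the replicas converged
```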

How Docker conquered the world of containers

The idea that a given process can be run with some degree of isolation from the rest of its operating environment has been built into Unix operating systems such as BSD and Solaris for decades. The original Linux container technology, LXC, is an OS-level virtualization method for running multiple isolated Linux systems on a single host. LXC was made possible by two Linux features: namespaces, which wrap a set of system resources and present them to a process so that they appear dedicated to that process; and cgroups, which govern the isolation and use of system resources, such as CPU and memory, for a group of processes.

Containers decouple programmes from operating systems, allowing users to run a clean and lightweight Linux operating system while running everything else in one or more isolated containers. Since the operating system is abstracted away from the containers, you can move them to any Linux server that supports the container runtime environment. Docker made some major updates to LXC that rendered containers more compact and adaptable. Docker containers allow you to install, copy, transfer, and back up a workload more quickly and conveniently than virtual machines. Docker adds cloud-like functionality to any infrastructure that can operate containers.

Docker's container image tooling is also a step forward from LXC, allowing a developer to create image collections, compose applications from multiple images, and launch those containers and applications on local or remote infrastructure.

Other products, such as Docker Swarm and Kubernetes, offer more complex variations of these activities, such as container orchestration across whole server fleets, not just producing a new version of the same container every time. Docker, on the other hand, covers the fundamentals. Despite the fact that Swarm evolved from the Docker project, Kubernetes has emerged as the de facto orchestration tool of choice for Docker.

Docker vetoes

Containers solve certain challenges, but they are not panaceas. Some of their flaws are intentional, and others are unintended consequences of their architecture.

Docker containers are not virtual machines

The most prominent misunderstanding of containers is equating them with virtual machines. However, containers and virtual machines use different isolation methods and thus have distinct advantages and drawbacks.

Since virtual machines run their own instance of an operating system, they offer a high level of isolation for workloads. This operating system does not have to be the same as the host's: a virtual machine running Windows can run on a hypervisor hosted on Linux, and vice versa.

Containers, on the other hand, use controlled portions of the host operating system's resources; multiple programmes share the same OS kernel in a closely regulated manner. As a result, containerized applications aren't as fully isolated as virtual machines, but they're still adequate for the vast majority of workloads.
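You can observe the shared kernel directly. Assuming a Linux host with the public alpine image (on Docker Desktop the daemon runs inside a VM, so the versions would differ), both commands report the same kernel:

```bash
uname -r                          # the host's kernel version
docker run --rm alpine uname -r   # the container sees the very same kernel
```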

Conclusion

Docker is a fantastic piece of technology for deploying software in a reproducible and isolated manner. There is no one-size-fits-all solution, so before selecting Docker as your tool of choice, you should understand your requirements in terms of security, efficiency, deployability, observability, and so on.
Fortunately, there is already a fantastic ecosystem of software built around Docker. As required, additional solutions for service discovery, container orchestration, log routing, encryption, and other use cases can be brought in.
