
Originally published at kodekloud.com

Docker For Absolute Beginners

Every company depends on software to innovate, and one of the biggest innovations in the software development field has been containers. Containers have changed the way software is built and shipped. Docker took this technology and made containers accessible to everyone, freeing developers from a whole class of environment worries. Docker also enables the adoption of DevOps in enterprises by closing the gap between Dev and Ops teams, which often used to be in conflict.
Today, we will go through some fundamentals of Docker and more.

Docker Containers

We're now going to look at a high-level overview of why you need Docker and what it can do for you. Let me start by sharing how I got introduced to Docker. In one of my previous projects, I had a requirement to set up an end-to-end stack involving various technologies: a web server using Node.js, a database such as MongoDB, a messaging system like Redis, and an orchestration tool like Ansible. We had a lot of issues developing this application with all these different components. First, we had to take care of their compatibility with the underlying operating system: we had to ensure that all these different services were compatible with the version of the operating system we were planning to use.


There were times when certain versions of these services were not compatible with the OS, and we had to go back and look for another OS that was compatible with all of them. Secondly, we had to check the compatibility between the services and the libraries and dependencies on the OS. We had issues where one service required one version of a dependent library, whereas another service required another version.


The architecture of our application changed over time; we had to upgrade to newer versions of these components, change the database, and so on. And every time something changed, we had to go through the same process of checking compatibility between these various components and the underlying infrastructure. This compatibility problem is usually referred to as the matrix from hell.

Next, every time we brought a new developer on board, we found it really difficult to set up a new environment. The new developers had to follow a large set of instructions and run hundreds of commands to finally set up their environment. They had to make sure they were using the right operating system and the right versions of each of these components, and each developer had to set all of that up individually every time. We also had different development, test, and production environments.

One developer might be comfortable using one OS while the others used another, so we couldn't guarantee that the application we were building would run the same way in different environments. All of this made our lives really difficult.
So I needed something that could help us with the compatibility issue, something that would allow us to modify or change these components without affecting the other components, and even modify the underlying operating system as required. That search landed me on Docker.


With Docker, I was able to run each component in a separate container, with its own libraries and its own dependencies, all on the same VM and OS but within separate environments, or containers. We just had to build the Docker configuration once, and all our developers could then get started with a simple docker run command, irrespective of the underlying operating system they were on. All they needed to do was make sure they had Docker installed on their systems.
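
As a rough sketch of what that looks like (the image names below are the official Docker Hub images; the mounted path and server.js entrypoint are illustrative assumptions, not the project's exact setup):

```bash
# Each component runs in its own container with its own dependencies,
# all on the same host.
docker run -d --name db mongo       # database
docker run -d --name cache redis    # messaging/caching layer
# Web server: mount the code from the host and run it with Node.js.
docker run -d --name web -v "$PWD":/app -w /app node node server.js
```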

So what are containers?

Containers are completely isolated environments: they can have their own processes or services, their own network interfaces, and their own mounts, just like virtual machines, except that they all share the same operating system kernel. We will look at what that means in a bit. It's also important to note that containers did not start with Docker.
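
You can see this isolation for yourself with a couple of quick commands (a minimal sketch using the small busybox image):

```bash
# A container gets its own process tree: inside it, the command you
# run is PID 1, and the host's other processes are not visible.
docker run --rm busybox ps
# It also gets its own hostname and network interfaces.
docker run --rm busybox hostname
```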


Containers have existed for about 10 years now, and some of the different container technologies are LXC, LXD, LXCFS, etc. Docker originally utilized LXC containers. Setting up these container environments is hard because they are very low-level, and that is where Docker comes in: it offers a high-level tool with several powerful functions, making containers really easy for end users like us.

How does Docker work?

To understand how Docker works, let us first revisit some basic concepts of operating systems. If you look at operating systems like Ubuntu, Fedora, CentOS, etc., they all consist of two things: an OS kernel and a set of software. The operating system kernel is responsible for interacting with the underlying hardware. While the OS kernel remains the same (Linux, in this case), it's the software above it that makes these operating systems different.

The software may consist of a different user interface, drivers, compilers, file managers, developer tools, etc. So you have a common Linux kernel shared across all these operating systems, plus some custom software that differentiates them from each other.
We said earlier that Docker containers share the underlying kernel. What does sharing the kernel actually mean? Let's say we have a system running Ubuntu with Docker installed on it. Docker can run a container based on any flavor of OS on top of it, as long as it is based on the same kernel, in this case Linux. If the underlying operating system is Ubuntu, Docker can run a container based on another distribution like Debian, Fedora, SUSE, or CentOS. Each Docker container only has the additional software, which we just talked about, that makes these operating systems different.
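
You can verify this yourself: containers built from different distributions all report the same kernel version as the host, because only the userland software above the kernel differs. A quick sketch:

```bash
uname -r                           # kernel version on the host
docker run --rm ubuntu uname -r    # same kernel, Ubuntu userland
docker run --rm fedora uname -r    # same kernel, Fedora userland
docker run --rm debian uname -r    # same kernel, Debian userland
```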

Docker utilizes the underlying kernel of the Docker host, which works with all the operating systems above it. So what is an OS that does not share the same kernel as these? Windows. So you won't be able to run a Windows-based container on a Docker host with a Linux OS on it. For that, you would require Docker on a Windows server.

You might ask: isn't that a disadvantage then, not being able to run another kernel on the OS? The answer is no, because unlike hypervisors, Docker is not meant to virtualize and run different operating systems and kernels on the same hardware. The main purpose of Docker is to containerize applications, ship them, and run them.

Differences between virtual machines and containers

That brings us to the differences between virtual machines and containers, a comparison that we tend to make, especially those of us from a virtualization background. In the case of Docker, we have the underlying hardware infrastructure, then the operating system, and then Docker installed on the OS. Docker then manages the containers, which run with just their libraries and dependencies.


In the case of a virtual machine, we have the underlying hardware, then a hypervisor like ESX or virtualization of some kind, and then the virtual machines. Each virtual machine has its own operating system inside it, then the dependencies, and then the application. This overhead causes higher utilization of the underlying resources, as there are multiple virtual operating systems and kernels running. Virtual machines also consume more disk space: each VM is heavy, usually gigabytes in size, whereas Docker containers are lightweight, usually megabytes in size. This allows Docker containers to boot up faster, usually in a matter of seconds, whereas virtual machines, as we know, take minutes to boot up, since they need to boot an entire operating system.

It is also important to note that Docker offers less isolation, as more resources, like the kernel, are shared between containers, whereas VMs are completely isolated from each other. Since VMs don't rely on the underlying operating system or kernel, you can have different types of operating systems, such as Linux-based and Windows-based, on the same hypervisor, whereas that is not possible on a single Docker host. So these are some differences between the two.
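
A quick way to get a feel for the size and startup differences (a sketch; exact numbers vary by image and host):

```bash
# Image sizes: alpine is only a few megabytes, versus the gigabytes
# a typical VM disk image occupies.
docker pull alpine
docker images alpine               # check the SIZE column
# Startup: near-instant, since no guest OS has to boot.
time docker run --rm alpine echo "container is up"
```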

So how is it done? There are a lot of containerized versions of applications readily available today. Most organizations have their products containerized and available in a public Docker registry called Docker Hub (or the Docker Store). For example, you can find images of the most common operating systems, databases, and other services and tools. Once you identify the images you need and install Docker on your host, bringing up an application stack is as easy as running a docker run command with the name of the image.
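
For example, getting the official MongoDB image up and running might look like this (a sketch using Docker Hub's official image):

```bash
docker search mongo    # find candidate images on Docker Hub
docker pull mongo      # download the official MongoDB image
docker run -d mongo    # start a MongoDB container in the background
```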

In this case, running a docker run ansible command will run an instance of Ansible on the Docker host. Similarly, you can run an instance of MongoDB, Redis, and Node.js using the docker run command. When you run Node.js, just point to the location of the code repository on the host. If you need to run multiple instances of the web service, simply add as many instances as you need and configure a load balancer of some kind in front. In case one of the instances were to fail, simply destroy that instance and launch a new one. There are other solutions available for handling such cases, which we will look at later.
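
Put together, the workflow described above might look like the following sketch. The container names, mounted path, and server.js entrypoint are illustrative assumptions:

```bash
# Supporting services (official Docker Hub images).
docker run -d --name db mongo
docker run -d --name cache redis

# Two instances of the web service; a load balancer would sit in front.
docker run -d --name web1 -v "$PWD":/app -w /app node node server.js
docker run -d --name web2 -v "$PWD":/app -w /app node node server.js

# If an instance fails, destroy it and launch a replacement.
docker rm -f web1
docker run -d --name web1 -v "$PWD":/app -w /app node node server.js
```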


We've been talking about images and containers; let's understand the difference between the two. An image is a package or a template, just like a VM template that you might have worked with in the virtualization world. It is used to create one or more containers. Containers are running instances of images; they are isolated and have their own environments and sets of processes.
As we have seen, a lot of products have been Dockerized already. In case you cannot find what you're looking for, you can create an image yourself and push it to the Docker Hub repository, making it available to the public.
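
The distinction is easy to see from the command line: one image can back many containers (a sketch using the official nginx image):

```bash
docker pull nginx                  # the image: a template
docker run -d --name web-a nginx   # one running instance of that image
docker run -d --name web-b nginx   # another, fully isolated from the first
docker ps                          # both containers list the same image
```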


Traditionally, developers developed applications and then handed them over to the Ops team to deploy and manage in production environments. They did that by providing a set of instructions, such as information about how the host must be set up, what prerequisites are to be installed on it, and how the dependencies are to be configured.
The Ops team uses this guide to set up the application. Since the Ops team did not develop the application themselves, they struggle with setting it up. When they hit an issue, they work with the developers to resolve it. With Docker, a major portion of the work involved in setting up the infrastructure is now in the hands of the developers, in the form of a Dockerfile. The guide that the developers previously wrote to set up the infrastructure can now easily be put together into a Dockerfile to create an image for the application.
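
As a minimal sketch of what that might look like for a Node.js app (the base image, port, entrypoint, and the myorg/myapp repository name are all illustrative assumptions):

```bash
# Write a minimal Dockerfile, then build and publish an image from it.
cat > Dockerfile <<'EOF'
# Start from the official Node.js base image.
FROM node:14
# Work inside /app in the image.
WORKDIR /app
# Install dependencies first, then copy in the application code.
COPY package*.json ./
RUN npm install
COPY . .
# The app is assumed to listen on port 8080 and start via server.js.
EXPOSE 8080
CMD ["node", "server.js"]
EOF

docker build -t myorg/myapp:1.0 .   # build the image
docker push myorg/myapp:1.0         # publish it, e.g. to Docker Hub

# On any other host with Docker installed, deployment is then just:
docker pull myorg/myapp:1.0
docker run -d -p 80:8080 myorg/myapp:1.0
```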

This image can now run on any container platform and is guaranteed to run the same way everywhere. The Ops team can now simply use the image to deploy the application. Since the image was already working when the developer built it, and operations does not modify it, it continues to work the same way when deployed in production. To learn more about containers, check out my course 'Docker for the Absolute Beginners'.

Watch the video on 'Docker for Beginners: Full Course' on YouTube.
