Elegberun Olugbenga

DEMYSTIFYING DOCKER

I have had my fair share of "It worked on test but it does not work in production" arguments, and I can tell you for free that they are really stressful and time-consuming. Would it not be great if it just worked? Everywhere, anywhere, consistently. Imagine a life without dependency issues caused by the disparity between test and live environments. Imagine a life where you can scale applications in minutes, not days.


The concept of Docker might seem confusing at first but once you get the basic idea of it you will understand that it is not so different from what you are already doing.

What is Docker?

Docker is a software platform for building applications based on the concept of containers: small and lightweight execution environments that make shared use of the operating system kernel but otherwise run in isolation from one another.

If you were not confused after reading that, then this article is really not for you (lol), but if you were, like me, let's carry on. I will break down that definition. The first step is understanding what existed before Docker.

Virtualization

Let's say I wanted to install two operating systems that are not compatible with each other on my computer, like macOS and Windows. How can I do this? I can partition my computer by dividing its storage and resources into separate parts. I can then allocate a certain amount to the macOS partition and another to the Windows partition, and my computer then looks like this.

Windows Partitioning

The next step will be to install macOS on the first partition and Windows on the second. My system can now host two separate operating systems and behave like two separate entities.

This is basically the concept of virtualization. You can take a large system, usually a server, and split it into individual systems, each with its own processes, operating system, RAM, and resources, behaving like its own separate machine abstracted from the next.

Virtualization

Using a dev example: if I had a project that uses Python 3 and another project that depends on Python 2, and I wanted to host them on the same system, the two runtimes and their dependencies could easily conflict on the same machine. What I can do is virtualize the base system, then install Python 2 on one virtual machine and Python 3 on the second.

The problem with this is that we basically created a whole new system just to host our Python 3 app. This system has its own OS, its own storage, its own RAM, and its own processes, which adds a lot of resource overhead and complexity just to have another app running.

What is a Kernel?

Let's take your laptop. It is made up of software applications, like your music player app, your browser, and notepad, and hardware components, like RAM and storage. When you go to your music app and click play, and music starts coming out of your laptop speakers, something is responsible for carrying that request from the software to the hardware, allocating a certain amount of RAM and resources to your speaker component. This "mysterious" thing is called the KERNEL, and it is the bridge between your computer's operating system and its hardware.

ALONG CAME DOCKER

What if there were a way to provide the virtualization level of abstraction for individual applications on a single machine whilst also maximizing resources? Enter Docker.

Docker vs VMs


Docker vs Virtualization

Docker uses the concept of containers to provide abstraction between processes.

A container is exactly what you think it is: a place where stuff is stored. It is where your application's code, its dependencies, and everything you need to run your application are kept. You can have several of these containers running as individual processes on your system.

Each container uses the host computer's OS kernel to request the resources it needs to run its processes. So you can have several containers running separate processes while sharing the base system's RAM and other resources, like storage; all of this is handled under the hood by Docker. Wonderful, isn't it? So I can run Python 2 and Python 3 in different containers on the same system, each container with its own IP address (see the sketch just after the quote below).

In particular, containerized processes are often given a completely different root filesystem and a set of users, making it look almost as if it's running on a separate machine. (But it's not; it still shares the host's CPU, memory, I/O bandwidth and, most importantly, the kernel of the host.) - Stack Overflow comment
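
A quick sketch of that Python example, assuming the official python images from Docker Hub:

# Run a Python 2 container and a Python 3 container side by side;
# both share the host's kernel but are otherwise isolated from each other
docker run --rm python:2.7 python --version
docker run --rm python:3.8 python --version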

Why Containers?


  • Don't need to have a full-blown OS inside.
  • Because they don't have a full-blown OS or a separate system, they are very lightweight and easier to set up.
  • They allow multiple processes on the same OS and make better use of system resources.

HOW TO DOCKER

The reason Docker is so lightweight is that it uses another concept called images. An image is like a blueprint of a container. It contains a set of instructions that tell each container what to do when it starts up and which process to run. Each image is defined in something called a Dockerfile. Here is an example of a Dockerfile created to run my .NET Core web API.
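
A minimal sketch of that kind of Dockerfile (the project name MyWebApi is a placeholder; the structure follows the steps described below):

# Build stage: the .NET Core SDK image has the tools to restore and publish
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /app

# Copy the project file first and restore packages as a cached layer
COPY MyWebApi.csproj ./
RUN dotnet restore

# Copy the remaining source files and publish a Release build
COPY . ./
RUN dotnet publish -c Release -o out

# Runtime stage: the smaller ASP.NET Core runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=build /app/out .

# The API dll is the container's entry point
ENTRYPOINT ["dotnet", "MyWebApi.dll"]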

The image of the container is built in stages; each layer is built on top of the previous one.

  1. FROM tells Docker which image you want to use for the container. The first layer tells the container to go to mcr.microsoft.com and download the dotnet/core/aspnet:3.1 image, and to expose ports 80 and 443 in the container to allow HTTP and HTTPS traffic; this is the base image that the other layers build on.

  2. WORKDIR tells Docker which directory to use for performing subsequent commands.

  3. COPY tells Docker to copy a file from your local filesystem into the container image.

  4. RUN executes commands within the container image.

So, in plain English: this Dockerfile is based on the dotnet/core/sdk image hosted at mcr.microsoft.com. Docker copies the .csproj file from your local working directory to create your image, and dotnet restore restores all the referenced packages. Once that's done, Docker copies the remaining files from your working directory, then dotnet build creates a Release build in your container at /app. - Okta blog

The last step is defining the entry point for this container, so the API dll is your container's entry point.

If I were to do this the "normal way," I would have had to go to the Microsoft site, download the .NET Core runtime and Software Development Kit, copy the files to the different folders, and finally publish the app. If I then moved to a new server, or someone new joined my team and I needed to set them up, I would have to go through this whole process again, and if we were unlucky and ran into dependency issues, we could spend significant time resolving them.

With Docker, all I need to do is define the container image in a Dockerfile, and Docker will go through all the steps outlined and create a container. No matter the environment I take it to, as long as I have Docker installed, the result will be the same.

I could also duplicate several of these containers for different purposes: I could have a database container, a test container, and a production container, each running its own separate process on the same server. I could even create different images for different .NET Core versions and run each container on the same system.

Now, after you have created your Dockerfile, you can build the image by running this command:

docker image build -t dotnetcoreapp:3.1 .

dotnetcoreapp is the name of my image and 3.1 is the tag. The trailing . tells Docker to use the current directory, where the Dockerfile lives, as the build context.

Once you run this command, all the steps outlined in your Dockerfile will be performed.

Docker Build process

After building your image, you then run your container:

docker container run -p 5000:80 dotnetcoreapp:3.1 

This command tells Docker to create a container based on your dotnetcoreapp image and map port 5000 on your system to port 80 inside the container, where the app listens for HTTP traffic. If you navigate to localhost:5000 you will see the app you just ran.
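
And because containers are so cheap, nothing stops you from running several from the same image, as mentioned earlier. A quick sketch (the -d flag runs them in the background, and the --name values are placeholders):

docker container run -d -p 5001:80 --name api-test dotnetcoreapp:3.1
docker container run -d -p 5002:80 --name api-prod dotnetcoreapp:3.1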

You can run docker ps to see the list of containers running on your system.
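
A couple of related commands that come in handy here (api-test is the placeholder name from the sketch above):

# List all containers, including stopped ones
docker ps -a

# Stop a running container by name or ID
docker stop api-test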


For a better reference, take a look at this screenshot from the YouTube video Learn Docker in 12 Minutes by Jake Wright.


Docker has a centralized repository where it stores images; whatever software image you need is probably there.

Docker Hub

You can even push your own image to that repository. This means that another person can pull your image, and in minutes they can have a replica of your application running.
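
If you want to try it, the flow looks roughly like this (yourusername is a placeholder for a Docker Hub account):

# Log in to Docker Hub
docker login

# Tag the local image under your Docker Hub username
docker tag dotnetcoreapp:3.1 yourusername/dotnetcoreapp:3.1

# Push it to the registry
docker push yourusername/dotnetcoreapp:3.1

# Anyone can now pull it and have the same app running in minutes
docker pull yourusername/dotnetcoreapp:3.1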

The beauty of this is that you get the exact same environment in the dev and production systems. You can give your Dockerfile to any developer, and as long as they have Docker installed, they will have the exact same system running, without any dependency issues or painful setups. And because containers are lightweight, they will not take up a lot of space, memory, or processing power.

Now you may be wondering: if I have thousands of Docker containers, how do I manage all of them and their different processes? The solution is Kubernetes, and I will get to it in my next article.

SUMMARY

  • Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers.

  • Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels.

  • Images provide a blueprint for defining containers, and each image is written in a Dockerfile.

Now go back and read the definition of Docker. How does it sound now?

References

  1. Containerize a .NET Core app

  2. Learn Docker in 12 Minutes, by Jake Wright

  3. Build a Simple .NET Core App in Docker, Okta blog

  4. What Exactly Is Docker?
