Many companies run applications, and those applications have to run somewhere. The answer is servers. Some companies own their servers, while others rent them from big providers (such as cloud providers).
Initially, only one application could run on one server. As one might guess, this approach had some problems. First, the entire load fell on a single server, and that load grew with the number of people using the app. Second, since only one app could run per server, companies had to add a server for every additional app, significantly increasing costs.
Enter Virtual Machines
Virtual machines solved this problem. The concept of the virtual machine was invented by IBM as a method of time-sharing extremely expensive mainframe hardware. In simple terms, virtual machines let us run multiple apps on the same server. The problem with virtual machines is that each one requires its own operating system, and operating systems consume RAM, CPU, and storage. (For comparison, running a second operating system without virtualization means dual booting your computer, that is, installing another operating system, e.g., Ubuntu, on another part of your hard disk.) Other flaws of virtual machines include reduced speed, the need for dedicated storage, the need for dedicated CPU and RAM allocation, and separate dependency management for each virtual machine. Thus, virtual machines are better than one-app-one-server, but still not perfect.
Could containers be a better solution?
Wouldn’t it be great if we could run multiple instances of applications on the same operating system? That’s where containers come into the picture. Docker (by Docker, Inc.) made containers popular with Linux containers. Before Docker, big companies like Google were already using container-like technology, and there were many other container initiatives as well; they just weren’t popular. (In 2015, an organization named the Open Container Initiative (OCI) was put in place for the express purpose of creating open industry standards around container formats and runtimes.)
So, what are containers?
Containers are just like virtual machines but don’t require multiple operating systems.
The analogy for understanding containers is:
Say you have built a website. Now you want feedback, so you share the website files with your friend. You have probably encountered the situation where your friend is not able to run the website due to version incompatibility or similar reasons. To avoid this, you share all the files the website needs, including its dependencies, packaged together so your friend can run it on his system.
That package, running in isolation, is essentially a container. Now your friend can run your website on his system without any errors!
Virtual Machine vs Containers: Extended
In diagrams comparing virtual machines and containers, you will see a component called the hypervisor. The hypervisor creates multiple virtual machines on a host and manages them. Each virtual machine has its own operating system, and each of those operating systems is allocated its own share of hardware, resources, and CPU. With containers, on the other hand, there is only one operating system, plus a container engine (the container engine consists of the parts covered in the Docker architecture section). You thus run multiple apps on the same operating system through the container engine. A container lets an app run in an isolated environment (an app running in a container does not know what is happening outside that container), which in turn also improves security. Docker (by Docker, Inc.) is a container tool that helps us create, manage, and scale containers.
Next, how do you get Docker on your system?
Just follow the steps below:
- Go to https://docs.docker.com/
- Open the download and install section
- Select Docker Desktop for Mac/Windows/Linux according to your operating system. Easy-peasy!
Installation is done, but what setup is required to run docker on your system?
A container runs on your host operating system’s kernel. Thus, when a Windows app is containerized, it will not run on a Linux kernel, and vice versa: a Windows-based container requires a Windows kernel, and a Linux-based container a Linux kernel. Docker Desktop can run in two modes: 1) Windows containers and 2) Linux containers.
For a Windows system, you need to install Docker Desktop and the Windows Subsystem for Linux (WSL).
For a Mac system, just installing Docker Desktop will work.
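Once installed, a quick way to confirm everything works is the classic hello-world check (a sketch; the exact output varies by Docker version):

```shell
# Check that both the Docker client and the daemon are reachable
docker version

# Run a tiny test container; Docker pulls the image first if it is missing
docker run hello-world
```

If the second command prints a greeting message, your installation and setup are done.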
Wait, wait, wait! First, let’s understand what Docker is in technical terms: Docker architecture
Docker consists of three parts:
a) Docker runtime
b) Docker engine
c) Orchestration
Its architecture is as follows:
Let’s have a look at these terms individually:
a) Docker runtime:
It helps us start and stop containers. It comes in two levels:
i) Low-level runtime, known as runc:
It works directly with the operating system and starts and stops the containers.
ii) High-level runtime, known as containerd:
- it is a CNCF project
- it manages runc and the containers
- it connects to the internet and pulls images for containers
Pulling an image means fetching its data from a registry over the internet so that containers can be created from it. In other words, containerd handles the interaction between containers and the outside world.
b) Docker engine:
This is what you interact with. It includes the Docker daemon, which works with the Docker runtime and executes your commands.
An example of the flow:
On the Docker CLI (command line interface), you type:
$ docker run ubuntu
The CLI passes this command to the Docker daemon via the REST API. The daemon interacts with the Docker runtime and instructs it to run Ubuntu in a container.
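To see that the CLI really is just a client for that REST API, you can query the daemon directly. A sketch, assuming Docker's default Unix socket at /var/run/docker.sock:

```shell
# List all containers by calling the daemon's REST API directly,
# bypassing the docker CLI entirely
curl --unix-socket /var/run/docker.sock \
  "http://localhost/containers/json?all=1"
```

The JSON this returns is the same data `docker ps -a` formats into a table.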
c) Orchestration:
An example: say there are 100 containers for an application, all running version 1, and a new version is released. To update the containers, we can either update them manually, one by one, or have them all updated automatically. Automating such rollouts is one of the functions of orchestration engines. Examples include Docker Swarm and Kubernetes.
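As a sketch of what an orchestration engine automates, here is how Docker Swarm can roll a new image out across replicas in batches (the service name and image names are hypothetical, and a swarm must already be initialized with `docker swarm init`):

```shell
# Create a service running 100 replicas of version 1
docker service create --name myapp --replicas 100 myapp:1.0

# Roll out version 2, updating 10 containers at a time,
# with a 5-second pause between batches
docker service update --image myapp:2.0 \
  --update-parallelism 10 --update-delay 5s myapp
```

Swarm handles the per-container stop/start dance for you; Kubernetes offers the same idea via rolling deployments.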
How will you share your files with your friend using containers?
A Dockerfile is what you need.
A Dockerfile is a set of instructions. It specifies the operating system files and the dependencies required to run the application.
When we build a Dockerfile, we get a Docker image; when the image is run, we get a container. So when we have an app that we want to containerize, we first write a Dockerfile. That file is built into an image, which can then be shared with other systems.
Images are immutable: once built, the files making up an image do not change. Images can be stored locally or in remote locations like https://hub.docker.com/, and a single image can be used to create multiple containers.
Images are built in layers. Each layer is an immutable collection of files and directories; only the final layer of a running container is writable. Each layer has an ID, calculated via a SHA-256 hash of the layer contents; thus, if the layer contents change, the hash changes as well. Note: the IMAGE ID listed by Docker commands (e.g. ‘docker images’) is the first 12 characters of the image’s hash, while human-readable ‘tag’ names refer to the same images.
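You can inspect an image's layers for yourself with `docker history`, which lists each layer with its ID, the instruction that created it, and its size (a sketch; the output depends on which images you have pulled):

```shell
# Show the layers that make up the ubuntu:18.04 image
docker history ubuntu:18.04
```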
Some hands-on examples of Docker commands:
a) Listing hash values of Docker images:
$ docker images -q --no-trunc
sha256:3556258649b2ef23a41812be17377d32f568ed9f45150a26466d2ea26d926c32
sha256:9f38484d220fa527b1fb19747638497179500a1bed8bf0498eb788229229e6e1
sha256:fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e
Note the first 12 characters of the hash values given above.
$ docker images
REPOSITORY TAG IMAGE ID
ubuntu 18.04 3556258649b2
centos latest 9f38484d220f
hello-world latest fce289e99eb9
Now notice the IMAGE ID above. The first 12 characters of the hash values are equal to the IMAGE ID.
A point to note:
Two images may share the same layers. Thus, when you pull a new image, the CLI will report that some layers already exist on your system instead of downloading them again, which makes the process faster. Shared layers are identified by their hashes, and images by their image IDs.
b) Pulling an image from the Docker registry:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$ docker pull ubuntu:18.04
18.04: Pulling from library/ubuntu
7413c47ba209: Pull complete
0fe7e7cbb2e8: Pull complete
1d425c982345: Pull complete
344da5c95cec: Pull complete
Digest:sha256:c303f19cfe9ee92badbbbd7567bc1ca47789f79303ddcef56f77687d4744cd7a
Status: Downloaded newer image for ubuntu:18.04
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu 18.04 3556258649b2 9 days ago 64.2MB
c) Running an image to create a container:
$ docker run -it ubuntu:18.04
root@4183618bcf17:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@4183618bcf17:/# exit
exit
d) Creating your own Docker image:
Create a file named ‘Dockerfile’. By default, the build command searches for a file with exactly that name:
$ docker build -t myimage:1.0 .
During the building of the image, the commands in the RUN instructions of the Dockerfile get executed.
$ docker run myimage:1.0
The command in the CMD instruction gets executed when you create a container out of the image.
Dockerfile example:
FROM ubuntu
MAINTAINER Yash <yash@gmail.com>
RUN apt-get update
CMD ["echo", "Hello World"]
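Building and running this example looks like the following sketch; the CMD instruction makes the container print the greeting. (Side note: MAINTAINER is deprecated in newer Docker versions in favor of a LABEL instruction, but it still works.)

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myimage:1.0 .

# Create a container from the image; --rm removes it once it exits
docker run --rm myimage:1.0
# prints: Hello World
```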
e) Some basic image-related commands:
$ docker pull ubuntu:18.04 (18.04 is tag/version)
$ docker images (Lists Docker Images)
$ docker run image (creates a container out of an image)
$ docker rmi image (deletes a Docker Image if no container is using it)
$ docker rmi $(docker images -q) (deletes all Docker images)
f) Listing containers:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS
4183618bcf17 ubuntu:18.04 "/bin/bash" 4 minutes ago Exited
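A few container-related commands to go with the image-related ones above (the container ID here is the one from the listing; yours will differ):

```shell
docker stop 4183618bcf17    # stop a running container
docker rm 4183618bcf17      # remove a stopped container
docker rm $(docker ps -aq)  # remove all containers
```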
A simplified explanation of how Docker works:
Let’s say you type on your CLI (command line interface):
$ docker run hello-world
This means that you want to run the hello-world image in a container.
Internally, this command is passed to the Docker daemon via the REST API. The daemon checks whether your system already has the hello-world image. If it does not, the daemon downloads the image from the Docker registry, the place where Docker images are stored. Docker Hub is a public registry that anyone can use; by default, Docker looks for images there and saves what it pulls locally on the DOCKER_HOST. You can also keep images on your local machine or push them to the public registry.
If your system already has the image, the daemon runs it directly.
Fun fact: you can now run MongoDB, MySQL, etc. without even installing them on your computer.
Just run the container!
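For example, here is a sketch of running MongoDB straight from its official Docker Hub image, without installing it locally (the container name is arbitrary; 27017 is MongoDB's default port):

```shell
# Start MongoDB in the background, exposing its default port
docker run -d --name my-mongo -p 27017:27017 mongo

# When you are done, stop and remove the container
docker stop my-mongo
docker rm my-mongo
```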
Outro:
There’s no better way to test-drive Docker than diving in and playing with it. The purpose of this blog is to create awareness about containers; to learn further, go through Docker’s GitHub repositories and blogs.
Have fun!
Feel free to connect with me on LinkedIn