
Akhigbe Eromosele David


INTRODUCTION TO DOCKER

In the world today, cloud-native technologies are getting more popular due to the need for businesses and organizations to scale up and shift operations to hybrid multi-cloud environments. As a developer, the name "Docker" should sound familiar to you. But if not, then this article is for you: it will introduce the popular tool known as Docker.

Docker is an open-source platform that enables developers to build, deploy, run, update, and manage containers.

Simply put, Docker is a tool that automates the deployment of applications in lightweight containers so that applications can run efficiently in different environments. With Docker, the delay between writing code and running it in production is greatly reduced, and a single artifact can be run reliably anywhere.
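To make that workflow concrete, here is a minimal sketch, assuming a working Docker installation; the file name `app.py`, the image tag `my-app`, and the base image are illustrative placeholders, not part of any real project:

```shell
# Write a minimal Dockerfile (contents are illustrative)
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
EOF

# Build an image from it, then run that image as a container
docker build -t my-app .
docker run --rm my-app
```

The same image built here can be pushed to a registry and run unchanged on any other machine with Docker installed, which is the "build once, run anywhere" promise described above.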

Before we dive into understanding what Docker is, let’s take a step back and look at a brief history of containers.

What is a Container?

https://images.unsplash.com/photo-1601897690942-bcacbad33e55?ixlib=rb-4.0.3&q=85&fm=jpg&crop=entropy&cs=srgb

A container is a technology that allows us to isolate certain kernel processes and trick them into thinking they are the only ones running on a completely new computer. Containers operate much like virtual machines, but there are some differences:

  1. Containers occupy less memory space than virtual machines.
  2. Containers have short boot-up times when compared with virtual machines.
  3. Containers generally outperform virtual machines, since they run directly on the host kernel without a hypervisor layer.
  4. Containers are easy to scale and maintain.

Containers are efficient as you do not need to have a whole different OS installed inside your host OS. With containers, you can have several containers running within a single OS without having different guest OS installed.
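A quick sketch of that difference, assuming a working Docker installation: each command below starts an isolated container from a different image, yet both report the host's kernel, because containers share the host kernel rather than booting a guest OS:

```shell
# Two containers from two different images, each an isolated process...
docker run --rm alpine uname -r
docker run --rm ubuntu uname -r
# ...but both print the same kernel release as the host machine,
# since neither container boots its own operating system.
```

This is why containers start in seconds and occupy far less memory than virtual machines, which each carry a full guest OS.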

In 2008, LXC (Linux Containers) was launched. It was the first and most complete implementation of a container management system. It used control groups, namespaces, and much of what had been built before then, and it ran on a stock Linux kernel without requiring any patches.

Docker was started in 2010 by Solomon Hykes and Sebastien Pahl; it was originally an internal project within dotCloud.

From 2013 to 2015, big companies such as Red Hat, Microsoft, AWS, and IBM announced Docker collaborations.

By the end of 2016, Microsoft announced that Docker was natively supported on Windows through Hyper-V.

In 2019, Microsoft announced WSL 2, which made it possible for Docker to run on Windows without the need for a virtual machine on Hyper-V.

Finally, by 2020, Docker had become the de facto worldwide choice for containers.

FEATURES OF DOCKER

  1. Increased productivity: Docker raises productivity by simplifying technical setup and letting apps launch quickly. It not only offers a secure environment in which to run programs, it also uses fewer resources.
  2. Application isolation: Docker containers run applications in isolated environments. Because each container is independent, Docker can run any kind of application.
  3. Easy and faster configuration: One of Docker’s main advantages is that it makes system configuration quicker and simpler. This capability allows code to be released more quickly and with less effort. Since Docker can be used in a wide range of environments, the application’s environment is decoupled from the underlying infrastructure.
  4. Running more workloads on the same hardware: Docker is quick and light. It offers a workable, affordable substitute for hypervisor-based virtual machines so you may make better use of your server’s resources to meet your business objectives. Docker is ideal for small and medium deployments where you need to accomplish more with fewer resources as well as for high-density situations.
  5. Better Software Delivery: Containers make software delivery more effective. Containers have an isolated disk volume and are portable and self-contained. As the container matures and is deployed to other environments, this isolated volume travels with it.
  6. Rapid Scaling of Systems: Containers accomplish more work while using less computational hardware. They let data center managers share gear and compress more workload into less hardware, which lowers costs.
  7. Swarm: Swarm is a clustering and scheduling tool for Docker containers. It uses the Docker API as its front end, which allows users to use various tools to control it. IT managers and developers may set up and control a cluster of Docker nodes as a single virtual system using Swarm. They may also add or remove container iterations as computing needs vary with the help of a Docker Swarm cluster.
  8. Services: A service defines the desired state of containers inside a cluster as a series of tasks. Each task in a service specifies one instance of a container that should be running, and Swarm schedules these tasks across the nodes.
  9. Security Management: Docker stores secrets in the swarm and controls which secrets services are given access to, using engine commands such as docker secret create and docker secret inspect.
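The Swarm, services, and secrets features above can be sketched with a few commands, assuming Docker is installed and this machine can act as a swarm manager; the service names (`web`, `db`) and the secret name (`db_password`) are illustrative:

```shell
# Turn this machine into a single-node swarm manager
docker swarm init

# Run a service: Swarm keeps 3 replicas of the container running
docker service create --name web --replicas 3 -p 8080:80 nginx

# Scale the service up or down as demand changes
docker service scale web=5

# Store a secret in the swarm, grant a service access to it,
# and inspect the secret's metadata (not its value)
echo "example-password" | docker secret create db_password -
docker service create --name db --secret db_password postgres
docker secret inspect db_password
```

If a node or container fails, Swarm reschedules the missing replicas automatically to maintain the declared desired state.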

Now that we have a little background knowledge of Docker, let’s look at some Docker terminology:

Docker Architecture

Docker makes use of a client-server architecture and consists of:

Photo by GeeksforGeeks

  • Docker Engine: It serves as the foundation of the whole Docker system. The Docker Engine is a client-server application installed on the host machine. It is made up of three parts:

Photo by edureka.co

  1. Docker daemon: It is responsible for running containers to manage Docker services. Docker daemon communicates with other daemons and constantly listens for Docker API requests and processes them.
  2. REST API: Applications communicate with the Docker daemon using an API. It is used to instruct the Docker daemon what to do.
  3. Command Line Interface (CLI): It is an application providing a command-line interface for communicating with the Docker daemon. Docker commands are written here by the user. One of the main reasons developers adore Docker is how much managing container instances can be streamlined by using it.
  • Docker Client: Docker users can interact with Docker through a client, a command-line tool that enables them to build and manage Docker images, create and manage Docker containers, and work with Docker networks and volumes. Any Docker instructions that are executed are sent from the client to the dockerd daemon, which carries them out. Docker commands make use of the Docker API. The Docker client can communicate with more than one daemon.
  • Docker Registry: A Docker registry is a storage and distribution system for Docker images. Because they offer a single place for organizing, storing, and exchanging Docker images, Docker registries are crucial parts of the Docker platform. By using Docker registries, developers can guarantee that their applications run in a consistent and predictable environment, which lowers errors and enhances stability. Along with versioning, access controls, and security measures, Docker registries offer strong tools for maintaining and distributing Docker images. There are two types of registries:

  1. Public registry: Docker Hub is an example of a public registry.
  2. Private registry: They are usually used to share images within an enterprise.

  • Docker Objects: In Docker, images, containers, volumes, networks, and other objects are all created and used whenever we use Docker. The following are some examples of Docker objects:

  1. Docker Images: Docker images play a critical role in the Docker platform as they are templates that are used to create Docker containers. An image is essentially a snapshot of a specific environment, including all the necessary code, libraries, and dependencies needed to run an application. Docker images are built from a set of instructions, which are specified in a Dockerfile, and can be easily shared and distributed across different environments. A Docker image consists of read-only layers, and only the top layer of a container is writable. When a Dockerfile is edited and rebuilt, only the modified layers are rebuilt. A private container registry is used to share container images within an enterprise, while a public container registry is used to share container images with the whole world.
  2. Docker Containers: Docker containers are running instances of Docker images. They are isolated and portable, running as isolated processes on a host machine. Containers provide a lightweight runtime environment for applications, with all the necessary components packaged together in a self-contained unit. Docker containers are designed to be easy to manage and deploy and can be run on any host machine that has the Docker engine installed. The Docker API or CLI can be used to start, stop, or delete a Docker container.
  3. Docker Storage: A storage driver is necessary in order to store data in a container’s writable layer. This data is not persistent: it is lost once the container is removed, and it is also difficult to transfer. Docker gives users four choices for persistent storage:

Data Volumes: They enable the creation of persistent storage, as well as the renaming of volumes, listing of volumes, and listing of the associated container. Data Volumes, which are simply directories or files on the Docker Host filesystem, can be mounted straight into the filesystem of the container.
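A sketch of the data-volume workflow, assuming a working Docker installation; the volume name `app_data` and the mount path are illustrative:

```shell
# Create a named volume managed by Docker
docker volume create app_data

# List volumes and inspect one (driver, mount point on the host)
docker volume ls
docker volume inspect app_data

# Mount the volume into a container; data written under /var/lib/data
# persists in the volume even after this container is removed
docker run --rm -v app_data:/var/lib/data alpine \
    sh -c 'echo hello > /var/lib/data/greeting'
```

Because the volume lives outside any single container, a later container can mount `app_data` and read the data the first one wrote.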

Directory Mounts: An alternative approach is to mount a host’s local directory into a container. In this case, any directory on the Host machine can be used as a source for the volume.
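A directory (bind) mount can be sketched in one command, again assuming a working Docker installation; the container path `/src` is an arbitrary choice:

```shell
# Mount the current host directory into the container at /src;
# files edited on the host are immediately visible inside the container
docker run --rm -v "$(pwd)":/src alpine ls /src
```

This is a common setup during development, since code changes on the host take effect in the container without rebuilding the image.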

Volume container: Another option is to dedicate a container to host a volume and to mount that volume to the other containers. This method is used to maintain the state of the containers (data) produced by the running container. Since the volume container in this instance is separate from the application container, you are able to share it among multiple containers.

Storage Plugins: Connecting to external storage platforms is made possible via storage plugins. These plugins transfer data from the host’s storage to an appliance or other external source, such as a storage array, and help integrate the Docker containers with external volumes like Amazon EBS.

4. Docker Networking: It provides complete isolation for Docker containers and is the channel through which they communicate. There are five network drivers in Docker:

  1. Bridge: It serves as a container’s default network driver. When your application is running in standalone containers, or when multiple containers are interacting with the same Docker host, you use this network.
  2. Host: The network separation between Docker containers and Docker hosts is eliminated by this driver. When no network isolation is required between the host and container, you can utilize it.
  3. Overlay: Swarm services may interact with one another thanks to this network. When you want the containers to run on various Docker hosts or when you want multiple applications to form swarm services, you use it.
  4. None: This driver disables all the networking.
  5. Macvlan: This driver gives containers a MAC address to make them appear to be physical devices. It uses the containers’ MAC addresses to route communication between them. You use this network when you want the containers to appear as real devices, for instance when migrating a VM configuration.
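The bridge and none drivers above can be sketched as follows, assuming a working Docker installation; the network name `my_net` and the container names `db` and `web` are illustrative:

```shell
# Create a user-defined bridge network
docker network create my_net

# Attach two containers to it; on a user-defined bridge,
# containers can reach each other by container name
docker run -d --name db --network my_net postgres
docker run -d --name web --network my_net nginx

# From "web", the hostname "db" resolves via Docker's embedded DNS
docker exec web ping -c 1 db

# Run a container with networking disabled (the "none" driver)
docker run --rm --network none alpine ip addr
```

Containers on different user-defined networks are isolated from each other unless explicitly connected with `docker network connect`.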

CONCLUSION

In this article, we learned what Docker is, looked at a brief history of Docker, some features of Docker, and looked at the Docker architecture.
In part 2 we will be looking at some services offered by Docker, when to use the services and how to apply them to your organization.


If you enjoyed this article, do not forget to like, comment, and share. You can also follow the author to make sure you do not miss the latest updates.
