Containerization With Docker

Stephen Githinji

Introduction

In the world of software development, it is all too common to hear the phrase: "But it works on my machine!💻" It usually comes up when developers deploy an application for their users or a client and the software fails to work as expected in production, even though it works perfectly in development.

This problem is mostly a result of differences in the environment (the OS, runtime, and configuration) and dependency issues.
Over the years, there have been various efforts to resolve this problem, such as FreeBSD jails, OpenVZ, and Linux Containers (LXC). Although these advances embodied the idea of containerization, they were designed mostly for system administrators and were not convenient for developers. It was not until the advent of Docker that developers started employing containerization for their applications, bridging the disconnect between development and production.

What is Containerization?

This is a method for deploying software where the code, environment, and dependencies are packaged into a single, lightweight, isolated unit called a container.
A container is therefore a package that encapsulates everything required to run an application.

Importance of Containerization

  • Applications can run on any machine
  • It is easier to build, run, and deploy applications
  • Allows for isolation of services
  • It is easy to scale applications

Docker

Docker is a popular open-source containerization tool that allows developers to build, deploy, and manage their applications. Since its inception in 2013, Docker has grown to be among the most sought-after containerization tools.

Main Components of Docker

1. Docker daemon: Also referred to as dockerd, it manages Docker objects such as images, containers, networks, and volumes.
2. Docker image: A read-only template/snapshot with instructions for creating a Docker container.
3. Docker container: A runnable instance of an image, which can be created, started, stopped, moved, or deleted.
4. Docker Hub: A public registry where one can store, share, and manage Docker images.
5. Dockerfile: A text file containing the definition of how an image is to be built. One Dockerfile builds one image.
6. Docker Compose: A tool used to define and run multi-container applications.

Installing Docker

You can install Docker Engine on Ubuntu Linux by following the instructions on the official Docker website or the steps below:

# Add Docker's official GPG key:
sudo apt update
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/ubuntu
Suites: $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}")
Components: stable
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/docker.asc
EOF

sudo apt update

Install the Docker packages (including docker compose):

sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Creating a Docker Image using Dockerfile

Creating a Dockerfile consists of the following steps:

  • Determine your base image
  • Install application dependencies
  • Copy in any relevant source code.
  • Configure the final image.

Example of a Docker file

Once inside the folder where you want to create your Docker image, create a file named Dockerfile:

touch Dockerfile

Insert the following instructions in the Dockerfile:
A Dockerfile
  • FROM python:3.10 : Defines the base image/ parent image (in this case a python base image)
  • WORKDIR /app : specifies the working directory in the image where files will be copied, and commands will be executed.
  • COPY requirements.txt . : Copies files from the host into the container image.
  • RUN pip install -r requirements.txt : Executes a command at image build time (here, installing the dependencies). It does not run when the container starts.
  • COPY . . : Copies the source code into the image
  • CMD ["python", "main.py"] : Sets the default command a container using this image will run.
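Putting the instructions above together, the Dockerfile might look like this (a sketch based on the bullets above; your base image tag and file names may differ):

```dockerfile
# Base image with Python 3.10 preinstalled
FROM python:3.10

# All subsequent instructions run relative to /app inside the image
WORKDIR /app

# Copy only the dependency list first so this layer stays cached
# until requirements.txt changes
COPY requirements.txt .

# Install the Python dependencies at build time
RUN pip install -r requirements.txt

# Copy the rest of the source code into the image
COPY . .

# Default command executed when a container starts
CMD ["python", "main.py"]
```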

requirements.txt
This file lists the package dependencies your application needs to work as expected. They can be entered manually inside the file.

To also pin the versions of your packages, you can use the pip freeze > requirements.txt command.

The file will contain the packages and the corresponding versions required by the application.
Below is an example of a simple file:
A simple requirements.txt file
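As a sketch (the actual packages depend on your application; the names and versions here are just placeholders), such a file might contain:

```text
flask==3.0.0
requests==2.31.0
```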

main.py
This is the file containing the program you want to run in your container. For example, you can create a simple Hello program for your image:

main.py
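A minimal sketch of such a main.py (the exact program in the original screenshot is not shown; this is just an illustrative hello program):

```python
# main.py - a minimal program to run inside the container
def greet() -> str:
    """Return the greeting the container prints on startup."""
    return "Hello from inside a Docker container!"

if __name__ == "__main__":
    print(greet())
```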

Building and Running your image

To build your image, use the command:

docker build -t "tag for your image" "name/path of your folder containing the files" 

This command is guided by the contents inside your Dockerfile.

ℹ️ The first build will take longer. Subsequent builds are faster thanks to layer caching, where Docker caches the result of each instruction in the Dockerfile and reuses it when that instruction and its inputs have not changed.

Our first build

Our second build using the layer cache
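Layer caching is why a cache-friendly Dockerfile copies the dependency list and installs packages before copying the source code: editing main.py then invalidates only the final COPY layer, while the slow dependency-install layer is reused from cache. A sketch of that ordering:

```dockerfile
# Dependency layers: cached until requirements.txt itself changes
COPY requirements.txt .
RUN pip install -r requirements.txt

# Source layer: invalidated on every code edit, but cheap to rebuild
COPY . .
```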

To run your container, use the command:

docker run "the tag you provided when building your image"

This will run the command specified in the CMD line in the Dockerfile.

Running Multi-Container Applications Using Docker Compose

Docker Compose is used to define and run multi-container applications. With Compose, one can manage services, networks, and volumes in a single YAML configuration file. (YAML is a human-readable data serialization language often used for configuration files.) This file is commonly referred to as a compose file. To start all the dependencies in a compose file (databases, message queues, APIs, caches, etc.), one only needs to run a single command:

docker compose up

To stop the services, you can use the command:

docker compose down

Components of Docker Compose

1. Services
They are the individual containers running a particular instance of an image with certain configurations. A single service definition in a compose file can be scaled to run multiple containers (replicas).
2. Networking
For the different services in the compose file to communicate with each other, Compose automatically creates a shared network. Services reach each other using hostnames derived from their service names.

localhost, when used inside a container, refers to the container itself, not the host machine.
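As a sketch, an application connecting to a database service named postgres on a Compose network would use that service name, not localhost, as the hostname (the user, password, and database names here are illustrative placeholders):

```python
# Build a PostgreSQL connection URL for use inside a Compose network.
# "postgres" doubles as the hostname because it is the service name;
# credentials and database name below are placeholders.
def build_db_url(user: str, password: str, host: str, db: str, port: int = 5432) -> str:
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"

# Inside a container on the Compose network, use the service name as host:
url = build_db_url("postgres", "secret", "postgres", "mydb")
print(url)
```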

3. Volumes
Volumes in Compose store data outside the container so that it persists even after the container is deleted or recreated. Common uses include database files and logs.
4. Configs and Secrets
Configs are used to store non-sensitive configuration details such as application settings, while secrets store sensitive information such as passwords, API keys, etc.

Example of a compose file (.yaml file)

A compose file

The above compose file has 2 services: postgres and my_app.
postgres:
This service pulls the PostgreSQL image from Docker Hub to be used by the my_app service.
my_app:
This service builds the image for our main application (main.py) using the Dockerfile.
A simple folder structure when using Compose is as follows:

compose_folder/
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ compose(.yaml) file
β”œβ”€β”€ main.py
└── requirements.txt
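A sketch of what such a compose file might contain (the image tag, credentials, volume name, and the fixed container_name are illustrative assumptions, not taken from the original screenshot):

```yaml
services:
  postgres:
    image: postgres:16          # pulled from Docker Hub
    container_name: postgres    # fixed name so the container is easy to target
    environment:
      POSTGRES_PASSWORD: secret # placeholder credential
    volumes:
      - pg_data:/var/lib/postgresql/data  # persist database files

  my_app:
    build: .                    # build from the Dockerfile in this folder
    depends_on:
      - postgres                # start the database service first

volumes:
  pg_data:
```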

To access the PostgreSQL container pulled from Docker Hub in the compose file above from your shell, use the command:

docker exec -it postgres psql -U postgres

docker exec -it "container_name" runs a command (here, psql -U postgres) inside the specified container (postgres).

Conclusion

Docker provides an easier and much cleaner way of creating, running, and deploying applications, APIs, or pipelines as containers. This is mainly made possible by the Dockerfile, used to create an image for your application, and Docker Compose, used to run and manage the application's various containers. In addition, Docker Hub provides a catalog of images for different applications that one can pull and reuse, removing the hassle of creating those services/containers from scratch. Given its open-source nature, Docker Hub, and the fact that it was built with developers in mind, Docker is one of the most suitable tools for containerization.
