Suraj
You Need to Learn Docker Right Now! (Part 1)

Let’s Understand the Problem


Let's say a developer is working on a project and has set up all the required dependencies and configurations. Everything works fine. However, when a team collaborates on the same project across different operating systems, they may have different dependencies and setups, making it a hassle to configure the environment repeatedly.

For example, if a tool is built for Windows, it won’t run on macOS or Linux, making setup even more difficult. Managing multiple environments and ensuring consistency becomes a challenge.

Docker solves this problem by creating a standardized, portable environment that works the same across all systems, eliminating compatibility issues and simplifying setup.


Before Docker:

  • Each application required a dedicated server.
  • Running multiple applications on the same server was difficult due to conflicts between dependencies and configurations.
  • Companies had to invest in more servers and powerful CPUs, leading to high costs and inefficient resource utilization.
  • This approach was not scalable, cost-effective, or environmentally friendly.

Who Solved This Problem?

  • Virtual Machines (VMs) provided a solution by allowing multiple applications to run on the same physical server.
  • This was an improvement but still had limitations.

Problems with Virtual Machines:

  • Each VM required its own full operating system, consuming significant resources.
  • Managing dependencies across different environments was still complex.
  • “Works on my machine” issues persisted, making development and deployment inconsistent.

Then Containers Came into the Picture

  • Containers allow applications to run without needing multiple OS installations.
  • Docker simplified containerization, making it easier to build, share, and deploy applications.
  • Containers are lightweight, fast, and portable across environments.
  • Better resource utilization: more apps can run on the same hardware.
  • Quick to build, destroy, and deploy, making them ideal for cloud scaling.
  • Standardized workflow from development to production.

Difference between Virtual Machines & Containers

1. Virtual Machines (VMs)

  • Each VM runs on a Hypervisor, which allows multiple VMs to share the same physical infrastructure.
  • Every VM has its own Guest OS, making it heavyweight since each OS requires its own resources (RAM, CPU, Storage).
  • Applications inside VMs come with their own dependencies (Bins/Libs), which can lead to duplication of resources.
  • Overhead: Since each VM has a separate OS, it consumes more CPU and memory, making it less efficient for running multiple applications.

2. Containers

  • Containers share the same Host OS but are managed by a Container Engine (e.g., Docker, CRI-O, etc.), eliminating the need for separate Guest OS instances.
  • Each container has its own dependencies (Bins/Libs) but shares the OS kernel with other containers, making them lightweight and efficient.
  • Faster startup and lower overhead compared to VMs since containers don’t require a full OS boot.
  • Ideal for microservices and scalable applications since they consume fewer resources and can be deployed rapidly.


Key differences in short:

| Feature        | Virtual Machines (VMs)          | Containers              |
| -------------- | ------------------------------- | ----------------------- |
| OS Overhead    | Each VM has a full Guest OS     | Share the same Host OS  |
| Startup Time   | Slower (OS boot required)       | Faster (lightweight)    |
| Resource Usage | Heavy (CPU & RAM overhead)      | Efficient & lightweight |
| Scalability    | Harder to scale due to overhead | Easily scalable         |
| Isolation      | Stronger (separate OS per VM)   | Process-level isolation |
| Performance    | Slower due to OS duplication    | Faster & optimized      |

What is Docker?

Docker is a containerization platform that simplifies the process of building, shipping, and running applications in isolated environments called containers. Under the hood, Docker uses Linux namespaces and cgroups to create lightweight, portable containers. Initially, Docker used LXC (Linux Containers), but it later replaced LXC with its own runtime, libcontainer, which lives on today as runc underneath the containerd runtime.

According to Docker’s official site:

Docker is an open platform for developing, shipping, and running applications. It allows developers to package software into standardized units called containers, which include everything needed to run: libraries, system tools, code, and runtime.

Why Docker?

  • Manages Containers Easily – It provides tooling to handle containers efficiently.
  • Simplifies Deployment – Packages an app and its dependencies into a single unit (Docker container).
  • Build Once, Run Anywhere – Ensures consistency across development, testing, and production.
  • Fast & Lightweight – Containers are quick to create, destroy, and scale.

Installation of Docker

I have listed the official websites below for installing Docker Engine and Docker Desktop; check them for up-to-date installation steps.

Official documentation for installation (check according to your device):

Website: Install Docker Desktop

Website: Install Docker Engine
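Once installed, a quick sanity check (assuming Docker is installed and the daemon is running) is the classic hello-world image, which prints a short greeting and exits:

```shell
# Pull (if needed) and run the tiny hello-world image
docker run hello-world

# List all containers, including the one that just exited
docker ps -a
```

If the first command prints a "Hello from Docker!" message, your installation is working.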

Docker Daemon:

The Docker Daemon (dockerd) is the core engine that runs in the background. It listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. The daemon handles all the container operations like starting, stopping, and scaling containers, as well as pulling and creating images.
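A minimal way to see the daemon for yourself (assuming a standard Linux install with systemd; adjust for your platform):

```shell
# Report both Client and Server (daemon) versions;
# the Server section shows an error if dockerd is not running
docker version

# On systemd-based Linux, inspect the daemon service directly
sudo systemctl status docker
```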

Docker Desktop:

Docker Desktop is an application for macOS and Windows that provides a GUI for managing Docker environments. It includes Docker Engine, the Docker CLI client, Docker Compose, Kubernetes, and a graphical user interface that allows users to manage containers, images, and other Docker resources visually. Docker Desktop simplifies the process of setting up and using Docker on your local machine.

How Does Docker Work?

Docker follows a simple process to create, distribute, and run containers. Here’s how it works step by step:

Docker Architecture


1. Docker Client

The developer interacts with Docker using commands like:

  • docker build → Creates a Docker Image from a Dockerfile.
  • docker pull → Downloads an existing image from the Docker Registry (like Docker Hub).
  • docker run → Creates and starts a container from an image.

2. Docker Daemon

  • The Docker Daemon runs in the background and manages everything, including images, containers, networks, and storage.
  • It processes the client commands and does the actual work of building, pulling, and running containers.

3. Docker Host

  • The Docker Host is where containers run.
  • It holds two key components:
    • Images: Pre-built packages that contain applications and dependencies (e.g., Ubuntu, Nginx).
    • Containers: Running instances of Docker images that execute applications.

4. Docker Registry

  • This is a storage location for Docker images, much as GitHub is for your code.
  • Common registries include Docker Hub, Google Artifact Registry, Azure Container Registry, Amazon Elastic Container Registry (ECR), GitHub Container Registry (GHCR), and private registries.
  • Images can be pulled (docker pull) from here or pushed (docker push) to share with others.
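As a sketch of the push workflow (youruser is a placeholder Docker Hub namespace, not something from this post):

```shell
# Authenticate against Docker Hub (prompts for credentials)
docker login

# Tag a local image into your namespace; youruser is hypothetical
docker tag mynodeapp youruser/mynodeapp:1.0

# Push the tagged image to the registry
docker push youruser/mynodeapp:1.0

# Teammates can now pull the exact same image
docker pull youruser/mynodeapp:1.0
```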

Process Flow

Step 1: You build an image using docker build .

Step 2: If the image isn’t available locally, Docker pulls it from the registry using docker pull.

Step 3: You run a container using docker run, which creates an instance from the image.


What is a Dockerfile?

A Dockerfile is a script containing a set of instructions to build a Docker Image. It defines the base OS, dependencies, and commands required to set up an environment.

Docker Image

A Docker Image is a lightweight, standalone, and executable software package that includes everything needed to run an application: code, runtime, system tools, libraries, and dependencies. A docker image is built in layers, where each instruction in the Dockerfile creates a new layer. These layers stack on top of each other to form the final image.

  1. FROM → Base image layer (e.g., Ubuntu, Node.js, or Alpine).
  2. WORKDIR → Sets the working directory inside the container.
  3. COPY → Copies files from the host to the container.
  4. RUN → Executes commands inside the image (e.g., installing dependencies).
  5. EXPOSE → Declares which ports the container will use.
  6. CMD / ENTRYPOINT → Defines the default command when the container starts.

Each layer is immutable—once created, it doesn’t change. Instead, new layers are added when modifications are made.

  • Images are immutable (cannot be changed).
  • Stored locally or on Docker Hub for sharing.
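You can inspect these layers for any local image with docker history, which lists each layer along with the Dockerfile instruction that created it and its size:

```shell
# Show the layer stack of the ubuntu image, newest layer first
docker history ubuntu:latest

# --no-trunc prints the full creating command for each layer
docker history --no-trunc ubuntu:latest
```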

Writing your first Dockerfile:

FROM ubuntu:latest

WORKDIR /app

COPY package.json package-lock.json ./

RUN apt update && apt install -y nodejs npm && npm install

COPY . .

EXPOSE 3000

CMD ["npm", "start"]

Docker Layer Caching

Docker optimizes image builds by caching layers. If a layer hasn’t changed, Docker reuses the cached version instead of rebuilding it.

How caching works:

  • Docker runs through the Dockerfile line by line.
  • If a layer hasn’t changed, it is pulled from cache.
  • If a layer changes, all layers after it must be rebuilt.
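A simple way to watch this in practice with the Dockerfile from this post: build twice, then change only application code and build again. (The exact "CACHED" markers in the output depend on your Docker/BuildKit version.)

```shell
# First build: every instruction runs
docker build -t mynodeapp .

# Second build with no changes: unchanged layers come from cache
docker build -t mynodeapp .

# Change application code only: the "COPY package.json package-lock.json ./"
# layer stays cached, but "COPY . ." and every layer after it are rebuilt
touch server.js
docker build -t mynodeapp .
```

This is exactly why the Dockerfile copies the package files and installs dependencies before copying the rest of the source: code changes then no longer invalidate the expensive install layer.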

Containers

A Docker Container is a running instance of an image. It provides a lightweight and isolated environment to run applications.


  • Ephemeral (data is lost when deleted unless using volumes).
  • Multiple containers can be created from the same image.
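To illustrate the second point, the same image can back several isolated containers at once; the names and host ports below are illustrative:

```shell
# Two independent containers from one image, on different host ports
docker run -d -p 3001:3000 --name app1 mynodeapp
docker run -d -p 3002:3000 --name app2 mynodeapp

# Both show up as separate running instances
docker ps
```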

Recap with Demo: Dockerizing a Node.js Application

Remember the Dockerfile we wrote above? We will be using it here.

To practice on your own, you can use this GitHub repository, which contains the Node.js application code: Nodejs Application Code

Dockerfile

Step 1: Writing the Dockerfile

# Use the latest Ubuntu image as the base
FROM ubuntu:latest

# Set the working directory inside the container
WORKDIR /app

# Copy package files first (helps with caching layers efficiently)
COPY package.json package-lock.json ./

# Install dependencies
RUN apt update && apt install -y nodejs npm && npm install

# Copy all remaining project files
COPY . .

# Expose port 3000 to allow external access to the application
EXPOSE 3000

# Start the application using npm
CMD ["npm", "start"]

Step 2: Build the Docker Image

Now, let's build the image using the Dockerfile:

docker build -t mynodeapp .
  • -t mynodeapp → Gives the name mynodeapp to the image.
  • . → Refers to the current directory where the Dockerfile is located.

Step 3: Run the Container

Once the image is built, we can start a container from it:

docker run -d -p 3000:3000 --name mynodejscontainer mynodeapp
  • -d → Runs the container in detached mode (in the background).
  • -p 3000:3000 → Maps port 3000 of the container to port 3000 of the host.
  • --name mynodejscontainer → Names the running container mynodejscontainer.
  • mynodeapp → Uses the mynodeapp image to create the container.

Step 4: Verify the Running Container

To check if the container is running:

docker ps

CONTAINER ID   IMAGE       COMMAND        STATUS        PORTS                  NAMES
abcdef123456   mynodeapp   "npm start"    Up 5 minutes  0.0.0.0:3000->3000/tcp  mynodejscontainer

Step 5: View Logs of the Application

To check logs and ensure the app is running properly:

docker logs -f mynodejscontainer

> mynodeapp@1.0.0 start /
> node server.js
Server is running on port 3000...

Step 6: Access the Application

Now, open your browser and go to:

http://localhost:3000

Now, we have successfully dockerized our Node.js application running inside a Docker container!

That is the full flow: write a Dockerfile, build an image from it, and run a container from that image.


Basic Docker Commands

Run a container interactively

# Runs an Ubuntu container in interactive mode (-it)
docker run -it ubuntu

# Publish a container port to a host port with -p (the port you exposed in the Dockerfile)
docker run -it -p hostPort:containerPort <name-of-the-image>

List all running containers

# Shows currently running containers
docker container ls

# or
docker ps

List all containers (including stopped ones)

# Lists all containers, including stopped ones
docker container ls -a

Stop a running container

# Stops a running container using its container ID
docker container stop [container_id]

Remove stopped containers, unused networks, dangling images, and build cache

# Cleans up stopped containers, unused networks, dangling images, and build cache
docker system prune

# Add --volumes to also remove unused anonymous volumes
docker system prune --volumes

Exec into the running container

docker exec -it <name-of-your-container> bash

Inspect running processes in a container

# Lists all running processes inside the container after you exec into the container
ps -ef

Image & Container Management

List all Docker images

# Shows all locally available Docker images
docker images

Pull an image from Docker Hub

# Downloads the latest Nginx image
docker pull nginx

Remove an image

# Deletes the specified Docker image
docker rmi nginx

Start a stopped container

# Restarts a stopped container
docker start [container_id]

# Or you can use the name of the container as well
docker start <name-of-the-container>

Restart a container

# Stops and starts a container again
docker restart [container_id]

# Or you can use the name of the container as well
docker restart <name-of-the-container>

Delete a container

# Removes a stopped container permanently
docker rm [container_id]

# Or you can use the name of the container as well
docker rm <name-of-the-container>

Container Logs & Inspection

View container logs

# Shows logs of a specific container
docker logs [container_id]

# Or you can use the name of the container as well
docker logs <name-of-the-container>

Check container details

# Displays detailed information about a container
docker inspect [container_id]

# Or you can use the name of the container as well
docker inspect <name-of-the-container>

Monitor real-time container stats

# Shows real-time CPU, memory, and network stats of containers
docker stats

I’ll be covering these advanced and important topics in the next blog:

  1. Docker Compose
  2. Docker Networking
  3. Docker Volumes
  4. Docker Swarm
  5. Docker vs. Kubernetes

The End!

I hope you enjoyed this blog and learned something about Docker today. See you in the next one!
