Ever come across the meme:
“But it works on my computer”
If you know Docker, you will understand its meaning: a single line, but a very big issue hidden beneath it. So let's dive deep and understand what Docker is and why we need it.
Problem
Imagine you joined a team as a frontend developer and were tasked with running the code on your local machine. You followed the instructions: you installed Node on the system, then installed Git, cloned the project, installed all of the dependencies, and hit npm run dev
It took its time, and then you saw errors in the console.
Error → Incompatible Node version
You connected with your team, and they helped you install the correct versions of all the services and set up the environment. You ran the code again, and again you hit an error. This time the issue was the system: you are using macOS while the team works on Windows, so different architecture, different tools.
You solved this issue too, and then you needed to deploy the application to the cloud. There as well, you have to repeat all of this configuration to get it up and running. Pretty hectic.
And the team has to go through this every time a new member joins.
Here comes Docker for our rescue.
DOCKER
Docker is built around the concept of containers. A container is an isolated environment that can be configured as per the requirements, from OS-level dependencies to the tools needed to run the app.
Once a container is set up, it can be shared with other team members and they can run it easily. There will be no issues running the application, because containers carry their own environment. Whether team members are on Windows, Linux, or macOS, it will still run.
To do so, first we need to install:
- Docker CLI (command line interface)
- Docker Desktop (GUI)
You can download Docker from the official site and follow the instructions to install and run it.
Docker Components
Docker has two parts: the Docker Engine (daemon) and Docker Desktop.
1. Docker Engine (Daemon)
It is the core of Docker. It runs in the background and does all the heavy lifting: managing images, creating and running containers, and managing their lifecycle.
2. Docker Desktop
It is a GUI (graphical user interface) that makes it easy to use and manage Docker without going into too much technicality.
Verify Installation
docker -v
If installed correctly, you’ll see something like:
Docker version 29.x.x
CONTAINERS and IMAGES
The whole Docker system revolves around these two.
1. Containers
A container is an isolated environment with everything required to run an image. Each container is separated from the others, meaning container 1 doesn't have access to the files/data of container 2, and vice versa.
A container is basically a running instance of an image.
2. Images
You can think of an image as a blueprint that already has everything installed (Node, dependencies, etc.). It contains everything needed to run an app.
CUSTOM IMAGES
We can also create custom images and push them to Docker Hub, a registry of Docker images, much like GitHub is for code, where other users or team members can download these images and run them on their own systems.
Imagine you have a MacBook running macOS. The container acts like the MacBook and the image acts like macOS: the image runs inside the container.
Analogy
- Image = blueprint
- Container = running application
Basic Commands
That's good for the basics; now let's play around with Docker commands.
1. Run container
docker run -it image_name
- Creates + starts a container from an image
- Opens interactive terminal
- Container name will be random
- Pulls image from docker hub if not available locally
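For example, a first run with the official ubuntu image (used here just as a sample; any image on Docker Hub works the same way) might look like this:

```shell
# Pulls the ubuntu image from Docker Hub if it isn't available locally,
# then creates a container from it and attaches an interactive terminal
docker run -it ubuntu

# Inside the container you get a shell whose filesystem belongs to the
# container, not to your host. Type `exit` (or press Ctrl+D) to leave.
```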
2. List containers
docker container ls
OR
docker container ls -a
- Lists all running containers
- The -a flag lists all containers, even the stopped ones
3. Start/Stop
docker start container_name
AND
docker stop container_name
4. Execute command inside container
docker exec container_name ls
It runs the ls command inside a running Docker container and exits once the command finishes.
docker exec -it container_name bash
Opens an interactive shell inside the container; now you are working inside the container.
exec = execute
-i = interactive (keep STDIN open), -t = allocate a terminal (pseudo-TTY)
Exit
Ctrl + D
5. List images
docker images
OR
docker image ls
PORT MAPPING
You might be thinking: if containers are isolated and don't interact with other containers or the outside world (the host system), how can we run and test projects on our machine? Take an example: you pulled a Node application and it is running in a container
docker run -it imageName
Now you might open a tab at localhost:9000, but you will see nothing there. Why? Because the application is running inside the container, and our system doesn't know anything about it.
In this situation, we need port mapping; we need to expose the container's ports to the host system.
We do it using the -p flag, followed by host_port:container_port
docker run -it -p 9000:9000 image_name
Here we mapped port 9000 of the container to port 9000 of our system. Now you can check localhost:9000 and the application will be up and running.
We can use any host port, and we can map multiple ports as per the requirement:
docker run -it -p 4584:9000 -p 2300:3000 image_name
ENVIRONMENT VARIABLES
Now you may ask: we create environment variables in our projects to store secret keys and other sensitive data that only we have, so how can we set these in the container?
It's a genuine question, and the solution is very easy: use the -e flag followed by key=value
docker run -it -p 4584:9000 -e jwtSecret=secretisthis -e production=false image_name
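Inside the container, the application reads these values like any other environment variables. A minimal Node sketch (the names jwtSecret and production simply match the example flags above):

```javascript
// Reads the variables passed via -e; the names match the example flags above.
// Taking `env` as a parameter (defaulting to process.env) keeps it testable.
function readConfig(env = process.env) {
  return {
    jwtSecret: env.jwtSecret,
    // Environment variables are always strings, so compare explicitly
    isProduction: env.production === "true",
  };
}
```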
These are all the basics you need to understand what Docker is and how to work with it. Now let's move one step further and containerize a very simple Node server.
You can create your very own Node server or use the tutorial app for this tutorial.
Your simple server is ready; now we have to containerize it, i.e., make an image of it so others can use it.
To do so, we need a Dockerfile where we will be writing down all of our configurations.
Note – the file must be named Dockerfile, without any extension.
DOCKERFILE
FROM ubuntu:22.04
# Base image; it can also be node, redis, etc.

WORKDIR /app
# Set the working directory

# Install Node.js
RUN apt-get update && \
    apt-get install -y curl ca-certificates && \
    curl -fsSL https://deb.nodesource.com/setup_18.x | bash - && \
    apt-get install -y nodejs && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Copy dependency files first (for caching)
COPY package.json package-lock.json ./

# Install dependencies
RUN npm install

# Copy the rest of the app
COPY . .

# Run this file whenever someone runs the image
ENTRYPOINT ["node", "main.js"]
Build Image
Now the configuration is complete; all we need to do is build the image. The command for that is
docker build -t docker-tutorial .
Note – the path is . because I am in the same directory as the Dockerfile.
It will take some time. Once it's done, you will see your image in the Docker Desktop app, or you can list it in the terminal with docker images.
Run Container
Now you can run it and test whether it's working.
docker run -it -e PORT=4567 -p 8000:4567 docker-tutorial
Here I am passing the environment variable PORT and mapping that container port to the system's port 8000. Now I can see my application running on localhost:8000.
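A quick way to verify the mapping from the host side (assuming the server responds on its root path):

```shell
# The container listens on 4567 internally; the host reaches it via 8000
curl http://localhost:8000
```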
CACHING
Each instruction in a Dockerfile = a layer
If one layer changes → all layers after it are rebuilt
Dependencies change rarely while source code changes often. That's why the order
COPY package.json ...
RUN npm install
COPY . .
is important for performance: as long as the dependency files are unchanged, the slow npm install layer is served from cache, even when the code copied by COPY . . changes.
PUSH TO DOCKER HUB
Now you want to share your custom image with your teammates or with the world. To do so, first create an account on Docker Hub and sign in from the terminal with docker login.
Then create a repository there. Once it's created, you will see a command like this
docker push your_user_name/docker-tutorial:tagname
This is the command we will use to push our image to Docker Hub, but before that we have to build the image with a matching name.
Build the image with the name your_user_name/repository_name
In our case it will be
docker build -t your_user_name/docker-tutorial .
Once the image is built, you can check it in Docker Desktop or via the CLI.
Now we run our push command.
docker push username/docker-tutorial:tagname
All done. Check your repository on Docker Hub and you will see your image there. Now you can share this image with others and they can run it directly. No environment setup, no package matching needed. Just run it. That's it.
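From a teammate's side, using the image now takes just two commands (the repository and tag names below follow the placeholder example above):

```shell
# Download the image from Docker Hub
docker pull your_user_name/docker-tutorial:tagname

# Run it, setting environment variables and mapping ports
# exactly like with a locally built image
docker run -it -e PORT=4567 -p 8000:4567 your_user_name/docker-tutorial:tagname
```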
DOCKER COMPOSE
Let's move on to a more advanced part.
Imagine you are working on a very big project that needs Node.js, PostgreSQL, Redis, MailHog, etc. You can either run each image one by one, or use Docker Compose. It's a file where we configure all of the services (images), and with a single command all of them start/stop. No manual effort, just a single command to spin up the application.
To do so, we create a docker-compose.yml file, where we write all of our configuration:
version: "3.8"
# services is where we list all of the images we need to run
services:
  postgres:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_DB: review
      POSTGRES_PASSWORD: password
  redis:
    image: redis
    ports:
      - "6379:6379"
Run everything
docker compose up
Stop everything
docker compose down
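A few standard variants of these commands that come in handy:

```shell
# Start all services in the background (detached mode)
docker compose up -d

# See the status of the services defined in docker-compose.yml
docker compose ps

# Stop and remove the containers
docker compose down
```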
This pretty much sums up what Docker is, why we need it, and its importance in today’s development world. Hope you learned something useful from this.
Till next time,
Peace out