Containers vs Virtual Machines
Resource Utilization: Containers share the host OS kernel, making them lighter and faster than VMs. Each VM runs a full-fledged OS on top of a hypervisor, making it more resource-intensive.
Portability: Containers are designed to be portable and can run on any system with a compatible host OS. VMs are less portable, as they need a compatible hypervisor to run.
Security: VMs provide a higher level of security, as each VM has its own OS and can be isolated from the host and other VMs. Containers provide less isolation, as they share the host OS kernel.
Files and Folders in container base images
/bin: contains binary executable files, such as the ls, cp, and ps commands.
/sbin: contains system binary executable files, such as the init and shutdown commands.
/etc: contains configuration files for various system services.
/lib: contains library files that are used by the binary executables.
/usr: contains user-related files and utilities, such as applications, libraries, and documentation.
/var: contains variable data, such as log files, spool files, and temporary files.
/root: is the home directory of the root user.
Files and Folders that containers use from the host operating system
The host's file system: Docker containers can access the host file system using bind mounts, which allow the container to read and write files in the host file system.
Networking stack: The host's networking stack is used to provide network connectivity to the container. Docker containers can be connected to the host's network directly or through a virtual network.
System calls: The host's kernel handles system calls from the container, which is how the container accesses the host's resources, such as CPU, memory, and I/O.
Namespaces: Docker containers use Linux namespaces to create isolated environments for the container's processes. Namespaces provide isolation for resources such as the file system, process ID, and network.
Control groups (cgroups): Docker containers use cgroups to limit and control the amount of resources, such as CPU, memory, and I/O, that a container can access.
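For example, the cgroup limits described above can be declared per service in a Compose file (the service and image names here are illustrative):

```yaml
services:
  app:
    image: myapp:latest        # illustrative image name
    deploy:
      resources:
        limits:
          cpus: "0.50"         # at most half a CPU core
          memory: 256M         # hard memory cap, enforced via cgroups
```

The same limits can be passed directly on the command line with the `--cpus` and `--memory` flags of `docker run`.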
Why are containers lightweight?
Containers are lightweight because they use a technology called containerization, which allows them to share the host operating system's kernel and libraries, while still providing isolation for the application and its dependencies.
This results in a smaller footprint compared to traditional virtual machines, as the containers do not need to include a full operating system.
Docker LifeCycle
docker build -> builds docker images from Dockerfile
docker run -> runs container from docker images
docker push -> pushes the image to a public/private registry to share Docker images
Understanding Docker terminology
Docker daemon (dockerd)
The engine that powers Docker behind the scenes.
It runs in the background on your computer.
It manages everything: images, containers, networks, and storage.
Docker client
The tool you use to talk to Docker.
You → Docker Client → Docker Daemon → Your Container Runs
Docker Desktop
An easy-to-install app for your computer (Mac, Windows, or Linux).
Includes the daemon and client bundled together
Docker registries
An online storage place for Docker images (e.g., Docker Hub).
Dockerfile
A Dockerfile is a text file where you provide the steps to build your Docker image.
Images
An image is a read-only template with instructions for creating a Docker container.
Dockerfile
# ====================================================================
# STAGE 1: BUILDER
# ====================================================================
# This stage installs all dependencies
FROM python:3.9-slim AS builder
# Set working directory
WORKDIR /build
# Copy requirements file
COPY requirements.txt .
# Install dependencies in a virtual environment
RUN python -m venv /opt/venv && \
/opt/venv/bin/pip install --no-cache-dir -r requirements.txt
# ====================================================================
# STAGE 2: RUNTIME
# ====================================================================
# This stage creates the final lightweight image
FROM python:3.9-slim
# Set working directory
WORKDIR /app
# Copy virtual environment from builder stage
COPY --from=builder /opt/venv /opt/venv
# Extract tar file to /app directory
ADD myapp.tar.gz /app/
# Copy application code
COPY . /app
# Set environment variables
ENV PATH="/opt/venv/bin:$PATH" \
NAME=World
# Expose port 80
EXPOSE 80
# Set entrypoint (cannot be overridden easily)
ENTRYPOINT ["python"]
# Set default command (can be overridden)
CMD ["app.py"]
# ====================================================================
# Usage:
# Build: docker build -t myapp .
# Run: docker run -p 8080:80 myapp
# Override CMD: docker run -p 8080:80 myapp other_script.py
# ====================================================================
- Use ADD only when you need auto-extraction or a URL; otherwise prefer COPY.
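To illustrate the ADD-vs-COPY guidance (file names and URL are hypothetical):

```dockerfile
# COPY: plain copy of local files/directories (the preferred default)
COPY requirements.txt /app/

# ADD: like COPY, but auto-extracts local tar archives
ADD myapp.tar.gz /app/            # contents extracted into /app/

# ADD can also fetch remote URLs (downloaded, but NOT extracted)
ADD https://example.com/config.json /app/
```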
INSTALL DOCKER
Install Docker on EC2
Refer to the documentation: Docker installation on Debian
sudo apt update
sudo apt install docker.io -y
Check Docker daemon Status
sudo systemctl status docker
Start Docker daemon
sudo systemctl start docker
Grant Access to your user to run docker commands
user: ubuntu
sudo usermod -aG docker ubuntu
Logout & Re-Login after adding user to the group
Docker Commands
Pull Images
# Docker checks the local image cache first; if the image is not available locally, it is pulled from the remote registry
docker pull redis # if no tag is specified, the latest tag is pulled
Build your first Docker Image
docker build -t test/my-first-docker-image:latest .
Run your First Docker Container
Flags:
-it: interactive mode with a TTY (so the container doesn't exit immediately)
-d: detached mode (runs in the background)
-e: set an environment variable
# -p 8080:80 binds host port 8080 to container port 80
docker run -it -p 8080:80 test/my-first-docker-image
# Run & create container with name
docker run -d -p 6000:6379 --name my-redis redis:latest
# Run an interactive bash shell inside the container
docker exec -it <containerID OR containerName> /bin/bash
Container Start & Stop
docker start <containerID OR containerName>
docker stop <containerID OR containerName>
Verify Docker Image is created
docker images
Remove Docker Images
docker rmi <image ID>
Force Remove the Docker Image
docker rmi -f <image_id>
List running containers
docker ps
List all containers, including stopped ones
docker ps -a
Remove a Stopped Container
docker rm <container_id>
Remove All Stopped Containers
docker container prune
Tag & Push the Image to Docker Hub (or another registry)
You need to log in to the image registry before pushing. Remote registries such as ECR use the format registryDomain/imageName:tag.
# Before pushing, tag the local image with the registry domain
# tagging gives the same image an additional name
docker tag <localImage:tag> <registryDomain/imageName:tag>
# Push image to registry
docker push <registryDomain/imageName:tag>
Docker Logs
docker logs <containerID OR containerName>
Detailed info of Docker objects
Detailed info about the container named "my_container" in JSON format.
docker inspect my_container
Docker Network
# Run without a subcommand to print the available subcommands, such as ls
docker network
# List Docker networks and their drivers
docker network ls
# Create a docker network with name mongo-network
docker network create mongo-network
Docker Volumes
1. Named Volumes
Docker manages it
Used for: Production, databases, persistent data
docker run -v mydata:/data myimage
2. Bind Mounts
Mount host directory into container
Used for: Development, editing files on host, testing
docker run -v /home/user/code:/app myimage
3. Anonymous Volumes
Auto-generated name, temporary
Used for: Temporary container data; removed when the container is removed with --rm (otherwise left dangling)
docker run -v /data myimage
4. tmpfs Mounts
In-memory, very fast, temporary
Used for: Sensitive temporary data, caching, performance-critical temp files
docker run --tmpfs /tmp myimage
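The mount types above can also be written in Compose's long-form volume syntax; a sketch reusing the example paths (the image name is illustrative):

```yaml
services:
  app:
    image: myimage
    volumes:
      - type: volume          # named volume, managed by Docker
        source: mydata
        target: /data
      - type: bind            # host directory mounted into the container
        source: /home/user/code
        target: /app
      - type: tmpfs           # in-memory mount, never written to disk
        target: /tmp

volumes:
  mydata:
```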
Docker Volumes Commands
# Create a volume:
docker volume create myvolume
# List all volumes:
docker volume ls
# Inspect a volume (see details):
docker volume inspect myvolume
Distroless & Multi-Stage Docker Images
Multi-Stage Docker Images
Multi-stage builds help reduce the overall image size by letting you copy only the required artifacts from earlier stages into the final stage.
The final image size depends only on the last stage in the Dockerfile.
Distroless image
Using a Distroless image in the final stage minimizes the image size further as it includes only the application and its runtime dependencies.
Distroless images also improve security by reducing the surface area for vulnerabilities.
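A minimal sketch of a multi-stage build that ends on a distroless base (Go is used purely for illustration; `gcr.io/distroless/static-debian12` is one of Google's published distroless images):

```dockerfile
# Stage 1: build a statically linked binary
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app/server .

# Stage 2: distroless final image - no shell, no package manager
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/server /server
ENTRYPOINT ["/server"]
```

Note that because the final stage has no shell, `docker exec -it ... /bin/bash` will not work in such an image; the distroless project publishes `:debug` tag variants that include a busybox shell for troubleshooting.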
Bind Mounts & Volumes
Bind Mounts
Directly map a host path to a container path; good for development and testing where direct host access is needed.
They expose the host filesystem to the container.
AWS EFS and NFS shares can be mounted on your Docker host and then used as bind mounts for containers.
Volumes
Docker-managed storage, ideal for production use cases, offering better data management, portability and security.
Docker volumes are stored on the host filesystem (by default under /var/lib/docker/volumes), but they do not expose the host filesystem directly to the container, which provides a higher level of isolation.
Networking in VMs & Docker Containers
DHCP (Dynamic Host Configuration Protocol): It is a network management protocol used to automatically assign IP addresses and other network configuration parameters to devices on a network.
VMs Networking
Bridged Networking
[External Network] -(Laptop 2)
|
|
(Internet)- [Network Switch/Router] -(Act as DHCP server)
|
|
[Host Machine] -(Laptop 1)
|
| -(Bridged Connection)
|
[VM1] [VM2] [VM3]
VMs are directly connected to the external network and receive IP addresses from the router's DHCP server.
VMs can communicate with any device on the external network and the internet.
NAT (Network Address Translation)
[External Network] -(Laptop 2)
|
|
(Internet)- [Firewall/Router]
|
| -(Public IP)
|
(Act as DHCP server)- [Host Machine] -(Laptop 1)
|
| -(Private IP)
|
[VM1] [VM2] [VM3]
VMs are connected to the host machine's network, which acts as the DHCP server.
VMs cannot communicate with the external network directly.
VMs share the host's IP to get access to the internet.
Host-Only Networking
(Act as DHCP server)- [Host Machine] <--> [VM1]
|
|
[VM2]
VMs are connected to the host machine's network, which acts as the DHCP server.
VMs cannot communicate with the external network.
VMs have no access to the internet.
Docker Containers Networking
Bridge Network (Default)
[Container 1] --|
|-- [Bridge (DHCP)] -- [Docker Daemon (Host Machine)]
[Container 2] --|
Creates an isolated network for containers on the same host.
Containers get a private IP address and communicate through a virtual bridge on the same host.
User-Defined Bridge
[Container 1] --|
|-- [Bridge 1 (DHCP)] --|
[Container 2] --| |
|--- [Docker Daemon (Host Machine)]
|
|
[container 3] ----- [Bridge 2 (DHCP)] --|
Creates an isolated network for containers on the same host.
Container 3 cannot communicate with the containers on Bridge 1, achieving container isolation.
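The same isolation can be sketched in a Compose file: each user-defined network becomes its own bridge, and services can only reach peers on a shared network (service and network names are illustrative):

```yaml
services:
  app1:
    image: nginx:latest
    networks: [bridge1]
  app2:
    image: nginx:latest
    networks: [bridge1]
  app3:
    image: nginx:latest
    networks: [bridge2]   # isolated from app1 and app2

networks:
  bridge1:
  bridge2:
```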
Host Network
[Container 1] --|
| -- [Docker Daemon (Host Machine)] -- [Network]
[Container 2] --|
- Containers use the host's IP address and can bind directly to the host's network interfaces.
Overlay Network
[Container 1] -- [Overlay Network] -- [Container 2]
| |
[Host A] [Host B]
Connects containers across multiple Docker hosts using a virtual network.
Requires Docker Swarm or another orchestration tool such as Kubernetes.
Docker Compose
Used to manage multi-container applications.
Auto-creates a network for container communication.
Mainly used for local development & testing.
version: '3.8'
services:
  redis:
    image: redis:latest
    container_name: redis
    ports:
      - "6379:6379"
  webapp:
    image: your-webapp-image:latest
    deploy:
      replicas: 2
    environment:
      - REDIS_HOST=redis
    depends_on:
      - redis
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - webapp
Docker Compose commands
# Run Docker Compose; the -f flag specifies the compose file
docker-compose -f mongo.yaml up
# Stop and remove all containers defined in the file
docker-compose -f mongo.yaml down
# Remove containers + volumes (data deleted)
docker-compose down -v
docker-compose stop # Stop containers (data safe)
docker-compose start # Start stopped containers
docker-compose restart # Restart containers
Docker Init
A command-line utility that helps with the initialization of Docker resources within a project. It is a Docker CLI plugin included with Docker Desktop v4.18 and later.
It automatically generates Dockerfiles, Compose files, and .dockerignore files based on the nature of the project, significantly reducing the setup time and complexity of Docker configuration.
Real-Time Challenges with Docker
Docker relies on a single daemon process, which is a single point of failure.
The Docker daemon runs as the root user, which is a security threat.
Running too many containers on a single host causes resource constraints.
Steps to Secure Containers
Use Distroless images, or images without unnecessary packages, as the final image in a multi-stage build, so there are fewer security issues.
Ensure networking is configured properly; configure a custom bridge network to isolate containers.
Use utilities like Snyk to scan your container images for vulnerabilities.
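A related hardening step is to run the application as a non-root user inside the image; a minimal sketch (user and file names are illustrative):

```dockerfile
FROM python:3.9-slim
# create an unprivileged user for the application
RUN useradd --create-home appuser
WORKDIR /home/appuser/app
COPY --chown=appuser:appuser . .
USER appuser                     # processes in the container no longer run as root
CMD ["python", "app.py"]
```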
Feel free to share and spread the knowledge! 🌟😊 Enjoy Learning! 😊


