Mark Yu

Introduction to Docker Containers πŸ‹πŸ“¦ [With Commands]

Introduction

Containers have revolutionized software development and deployment by offering lightweight, consistent, and portable environments. They bundle applications and dependencies into a single package, streamlining the transition from development to production. In this blog post, we will explore the fundamentals of containers, their building blocks, advantages, and key use cases, and compare them to virtual machines (VMs).

What Are Containers?

Containers are lightweight, executable units that package an application and its dependencies (such as libraries, binaries, and configuration files) together. They isolate applications from their environments, ensuring consistent behaviour across various development and deployment stages.

Unlike virtual machines, containers do not emulate entire hardware systems. Instead, they create isolated environments that share the host's kernel while presenting each application with what looks like its own operating system. This allows multiple isolated applications to run efficiently on the same system.

Key Sharing Mechanisms

  1. Kernel Sharing: Containers share the host's kernel, eliminating the need for separate OS instances.
  2. Memory and Storage: Memory and storage are shared among containers, while each maintains its isolated view.
  3. System Libraries/Binaries: Containers can share common binaries and libraries, reducing redundancy.

Container Isolation Mechanisms

Containers provide secure and efficient isolated environments through:

  1. Namespaces: Isolated workspaces for processes, network interfaces, and file systems.
  2. Resource Management: Allocation and limitation of resources like CPU, memory, and I/O through tools such as Linux cgroups.
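
In practice, Docker exposes these cgroup-backed limits directly as docker run flags. A minimal sketch (the container name limited_nginx and the specific limits are illustrative, not taken from the post):

# Cap the container at 256 MB of memory and half a CPU core;
# Docker translates these flags into cgroup settings on the host.
docker run -d --name limited_nginx --memory 256m --cpus 0.5 nginx

# Confirm the limits Docker recorded (memory in bytes, CPU in nano-CPUs).
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' limited_nginx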

Building Blocks of Containers

  1. Container Images: Immutable templates used to create containers.
  2. Container Registries: Central hubs for storing and sharing container images (e.g., Docker Hub).
  3. Container Engines: Runtimes for building, running, and managing containers (e.g., Docker Engine).
  4. Container Instances: The running containers instantiated from images.
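
These pieces fit together in a short workflow. A hedged sketch (the container name web_instance is only illustrative):

# Pull an image (the template) from a registry such as Docker Hub.
docker pull nginx:latest
# List the images the engine now stores locally.
docker images nginx
# Create a running container instance from the image.
docker run -d --name web_instance nginx:latest
# Show the running instance.
docker ps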

Advantages of Containers

  1. Lightweight: Containers operate without the overhead of additional OS instances.
  2. Portability: Consistent operation across different platforms and clouds.
  3. Rapid Deployment and Scaling: Containers start, replicate, and stop quickly thanks to their small size.
  4. Resource Efficiency: Higher density and efficient resource usage.
  5. Isolation and Security: Enhanced security and reduced application conflicts through process isolation.
  6. Simplified Management: Facilitated deployment and scaling through orchestration tools like Kubernetes.

Use Cases of Containers

  1. Microservices: Ideal for isolated, scalable services in a microservices architecture.
  2. DevOps and Agile Development: Accelerate development and deployment, fitting into CI/CD pipelines.
  3. Application Isolation: Host multiple applications on the same server without interference.
  4. Environment Consistency: Ensure consistency across environments, aiding testing and reducing bugs.

Virtual Machines vs. Containers

Virtual machines (VMs) and containers differ in their architecture, resource utilization, and use cases. While VMs offer strong hardware-level isolation, containers provide process-level isolation, favoring scalability and efficiency.

Key Differences

  1. Architecture: VMs have a full OS; containers share the host OS kernel.
  2. Resource Utilization: VMs require more resources; containers are lightweight.
  3. Startup Time: VMs have longer startup times; containers start quickly.
  4. Isolation: VMs offer strong isolation; containers offer moderate isolation.
  5. Scalability: VMs are less scalable; containers are highly scalable.
  6. Portability: VMs are less portable; containers are lightweight and portable.
  7. Security: VMs generally provide higher security; container security depends on implementation.

Docker Fundamentals

Docker is a prominent containerization platform. It includes components like the Docker Engine, Docker Daemon, Docker Client, Docker Images, and Docker Registries. Docker images serve as blueprints for creating containers, while Dockerfiles specify the steps to build images.
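
You can see the client/daemon split from the command line. A quick sketch:

# The Docker Client sends this request to the Docker Daemon over its API;
# both report their versions in the output.
docker version
# Summarize the daemon's state: running containers, stored images, storage driver, and more.
docker info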

Building an Image with Dockerfile

A Dockerfile contains instructions to build a Docker image. To build an image, write a Dockerfile and use the docker build command:

docker build -t myapp:1.0 .

This command builds an image from the Dockerfile in the current directory, tagging it as myapp:1.0.
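
For context, a minimal Dockerfile might look like the sketch below (the base image, the copied file, and the port are assumptions for illustration, not part of the original post):

# Start from an official base image.
FROM nginx:alpine
# Copy static content into the image (index.html is assumed to sit next to the Dockerfile).
COPY index.html /usr/share/nginx/html/
# Document the port the application listens on.
EXPOSE 80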

Docker Image Best Practices

  1. Minimize Layers: Combine related commands into a single RUN instruction.
  2. Use Official Base Images: Start with official images for security and reliability.
  3. Clean Up: Remove unnecessary tools and files.
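
A small sketch of the first and third practices together (ubuntu:22.04 and the curl package are only illustrative choices):

# One RUN instruction produces one layer: chaining install and cleanup in a
# single step keeps the apt cache out of the final image.
FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*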

Common Docker Commands and Their Uses

Running Containers

  • docker run -d -p 80:80 nginx Runs an Nginx container in detached mode (-d), mapping port 80 on the host to port 80 in the container.
  • docker run -d -i -t -p 80:80 nginx /bin/bash Runs the nginx image in detached mode (-d), mapping port 80 on the host to port 80 in the container, allocating a pseudo-terminal (-t), and keeping standard input open (-i); the trailing /bin/bash replaces the image's default command.
  • docker run -d --name host_container --network host nginx Starts an Nginx container in detached mode (-d), naming it host_container and configuring it to use the host's network stack.
  • docker run -d --name isolated_container --network none busybox Runs a container in detached mode (-d), naming it isolated_container and using the busybox image with all network interfaces disabled (--network none).
  • docker run -d -p 8080:80 --network my_bridge_network my_web_app Runs a container from the my_web_app image, forwards port 8080 on the host to port 80 in the container, and connects the container to the bridge network my_bridge_network.
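
To check that a published container is actually reachable, something like the following works (assuming curl is available on the host):

# List running containers and their port mappings.
docker ps
# Request the default Nginx welcome page through the published port.
curl http://localhost:80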

Creating Containers

  • docker create ubuntu Creates a new container from the Ubuntu image without starting it.
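
A created container sits in the "Created" state until it is started. A hedged sketch (the name prepared_ubuntu and the sleep command are illustrative additions so the container has something to run):

# Create, but do not start, a container; Docker prints the new container ID.
docker create --name prepared_ubuntu ubuntu sleep 300
# The container appears with status "Created".
docker ps -a --filter name=prepared_ubuntu
# Start it when needed.
docker start prepared_ubuntu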

Managing Containers

  • docker start 12345abcde Starts a previously created container 12345abcde.
  • docker stop 12345abcde Stops a running container 12345abcde.
  • docker rm -f 12345abcde Removes a running container 12345abcde by force (-f).

Inspecting and Logging

  • docker inspect 12345abcde Returns detailed information on container 12345abcde.
  • docker logs 12345abcde Views the logs of container 12345abcde.
  • docker logs -f 12345abcde Follows the logs of container 12345abcde, streaming new output as it is produced (-f).
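
docker inspect also accepts a Go template via --format to pull out a single field, which is handy in scripts. A small sketch (reusing the placeholder container ID from above):

# Print only the container's IP address on the default bridge network.
docker inspect --format '{{.NetworkSettings.IPAddress}}' 12345abcde
# Show the last 20 log lines and keep streaming new ones.
docker logs --tail 20 -f 12345abcde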

Networking

  • docker network create --driver bridge --subnet=192.168.10.0/24 --gateway=192.168.10.1 my_bridge_network Creates a new bridge network named my_bridge_network with the subnet 192.168.10.0/24 and the gateway 192.168.10.1.
  • docker run -d --name container1 --network my_bridge_network nginx Runs a container in detached mode (-d), naming it container1 and connecting it to my_bridge_network.
  • docker network connect my_bridge_network existing_container Connects the already-running container existing_container to the network my_bridge_network.
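
One benefit of a user-defined bridge network is built-in DNS: containers on it can reach each other by name. A quick sketch (a throwaway busybox container is used here purely for illustration):

# Ping container1 by name from inside my_bridge_network; --rm removes the
# test container when the command finishes.
docker run --rm --network my_bridge_network busybox ping -c 2 container1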

Volume Management

  • docker volume create my_volume Creates a volume named my_volume.
  • docker run -d --name db_container -v my_volume:/var/lib/mysql mysql Runs a MySQL container with the named volume my_volume mounted at /var/lib/mysql, MySQL's data directory.
  • docker run -d --name app_container -v /path/on/host:/path/in/container nginx Bind-mounts the host directory /path/on/host to /path/in/container inside an Nginx container.
  • docker run -d --name tmp_container --tmpfs /path/in/container nginx Mounts a tmpfs (in-memory) filesystem at /path/in/container; its contents live in the host's memory and are never written to the host's filesystem.
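
Named volumes are managed by the daemon and outlive the containers that use them. A short sketch building on the commands above:

# List the volumes the daemon manages.
docker volume ls
# Show details for my_volume, including its mount point on the host.
docker volume inspect my_volume
# Even after the container is removed, the data in my_volume remains.
docker rm -f db_container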

These commands cover a wide range of Docker functionality, from creating and running containers to managing networking and volumes, making Docker a versatile tool for containerization.

Conclusion

Containers offer an efficient, scalable, and secure way to deploy applications. They excel in providing isolated environments, supporting microservices, and facilitating DevOps practices. With platforms like Docker, containers have become indispensable in modern software development and deployment, offering both simplicity and versatility.
