🗓️ Docker Learning Journal

Lotanna

🧠 Week 1 Overview

Last week, I built a solid foundation in Docker: I got to understand containers and images, and to explore networking and persistent storage.

🧩 Key Concepts Covered

🐳 1. Docker Basics

I installed Docker for Mac and learned about the difference between images and containers:

Images: Templates used to create containers.

Containers: Running instances of images. They are isolated environments that include an app and all its dependencies.

💡 I learnt the hard way that on Mac and Windows, Docker Desktop must be running for Docker commands and resource management to work. Docker Desktop provides the background service (Docker Engine) that powers the CLI.
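
As a first sanity check, running the standard hello-world image shows the image/container relationship in action (this is just a minimal illustration):

docker run hello-world   # pulls the hello-world image (the template), then creates and starts a container (the instance) from it
docker images            # the downloaded image now appears in the local image list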

🔧 2. Core Commands

The table below lists some of the essential Docker commands I learnt for managing containers and images:

Command                      Description
docker ps                    Lists running containers
docker ps -a                 Lists all containers (running and stopped)
docker images                Lists available images
docker start <container>     Starts a stopped container
docker attach <container>    Attaches to a running container
docker stop <container>      Stops one or more containers
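
To see how these fit together, here is a small sketch of a typical lifecycle (the nginx image and the name web are just placeholders; the -d flag is covered in the next section):

docker run -d --name web nginx   # create and start a container in the background
docker ps                        # web appears in the list of running containers
docker stop web                  # stop it; it now only shows up with docker ps -a
docker start web                 # start the same container again
docker attach web                # attach the terminal to its output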

🏗️ 3. Building and Running Images

I learned to create images using Dockerfiles, with key instructions like:

FROM → sets the base image

COPY → copies files from the host into the image

RUN → executes commands during the image build process (e.g., installing dependencies or packages)

CMD → defines the command to run when the container starts

I explored how docker build works and how to tag images.
I also used docker run to create containers from images, and the -d flag to run them in detached mode.
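
To make this concrete, here is a minimal Dockerfile sketch and the matching build/run commands (the Python base image, app.py, and the my-app tag are illustrative assumptions, not from a specific project):

FROM python:3.12-slim            # sets the base image
COPY app.py /app/app.py          # copies a file from the host into the image
RUN pip install flask            # executed at build time, e.g. installing dependencies
CMD ["python", "/app/app.py"]    # command run when the container starts

docker build -t my-app:1.0 .     # build the image from the Dockerfile and tag it
docker run -d my-app:1.0         # create a container from the image and run it detached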

🧱 4. Creating Images from Containers

I discovered that you can also create images from existing containers using:

docker commit <container-name> <new-image-name>

The --change flag lets you apply Dockerfile instructions (such as CMD or ENTRYPOINT) to the new image, for example to change the command the container runs when it starts.
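
As a sketch (the container and image names here are hypothetical), committing a container and overriding the command it starts with could look like this:

docker commit --change 'CMD ["python", "/app/app.py"]' my-container my-image:v2
docker run -d my-image:v2        # containers from the new image now start with the overridden CMD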

🌐 5. Networking

I learned that the docker network command is used to manage how containers communicate. By default, containers run on the bridge network.

Commands explored:

ip addr show → displays network interfaces
arp-scan --interface=eth0 --localnet → scans the local network and lists the other hosts on it (e.g., containers on the same bridge)
ping <IP> → tests connectivity
curl → interacts with web apps or APIs

The three main network types are:

Type     Description
bridge   Default; containers communicate via an internal bridge
host     Shares the host network; no isolation
none     No network access

Other concepts learned:

  • Detach from containers using Ctrl + P, then Ctrl + Q

  • Bind container ports to host ports using the -P flag (or -p to map specific ports)

  • Specify container networks using --network <network-name>
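
Putting a few of these together, here is a sketch of a user-defined bridge network with two containers (the names my-net, web, and client and the images are placeholders):

docker network create my-net                              # create a user-defined bridge network
docker run -d --name web --network my-net nginx           # first container on the network
docker run -it --name client --network my-net alpine sh   # second container, interactive shell

# Inside the client container:
ip addr show        # inspect the container's network interfaces
ping web            # user-defined networks resolve container names via Docker's built-in DNS
# Detach without stopping: Ctrl + P, then Ctrl + Q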

💾 6. Persistent Storage

Docker handles data persistence using three storage types:

Bind Mounts: link host directories to containers

  • They are good for development. Changes made on the host are instantly reflected inside the container without needing to rebuild the image.
  • You must know the exact path on the host.

Volumes: managed by Docker, ideal for production and sharing data between containers.

  • Docker decides where the data lives, so you don’t need to specify host paths.
  • Supports external storage and different drivers (more on that later).

tmpfs: in-memory temporary storage in RAM, useful for sensitive or short-lived data that shouldn’t persist on disk.

To attach storage to a container, I learned about the --mount flag, which provides a consistent way to define different storage types:

docker run --mount type=bind,source=/path/on/host,target=/path/in/container <image-name>
  • type= specifies the storage type (bind, volume, or tmpfs)
  • source= defines the host directory or volume name (not needed for tmpfs)
  • target= defines the directory path inside the container

This syntax made it clearer how Docker connects external data to running containers and how flexible storage setups can be depending on the use case.
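
For comparison, here is the same flag used with a named volume and with tmpfs (the volume name app-data and the target paths are illustrative):

docker volume create app-data
docker run -d --mount type=volume,source=app-data,target=/var/lib/data <image-name>   # Docker manages where the data lives
docker run -d --mount type=tmpfs,target=/cache <image-name>                           # in-memory only; no source, nothing persists to disk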

⚙️ Skills Progressed

By the end of Week 1, I’ve learned to:
✅ Understand Docker’s architecture and use cases.
✅ Build and run images using Dockerfiles.
✅ Manage container lifecycle (start, stop, inspect, delete).
✅ Use networking features to connect containers.
✅ Set up persistent storage strategies.

A key takeaway this week was understanding that Docker helps ensure applications run consistently across different environments.

I can now recognise why Dockerfiles are preferred: they make images version-controlled, shareable, and automatable.
