<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: P0Saurabh</title>
    <description>The latest articles on DEV Community by P0Saurabh (@p0saurabh).</description>
    <link>https://dev.to/p0saurabh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2163683%2F1e0a8237-edd2-488b-bced-cef40810760b.png</url>
      <title>DEV Community: P0Saurabh</title>
      <link>https://dev.to/p0saurabh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/p0saurabh"/>
    <language>en</language>
    <item>
      <title>The Ultimate Guide to Docker: From Containers and Architecture to Hands-On Projects</title>
      <dc:creator>P0Saurabh</dc:creator>
      <pubDate>Fri, 01 Aug 2025 09:52:52 +0000</pubDate>
      <link>https://dev.to/p0saurabh/the-ultimate-guide-to-docker-from-containers-and-architecture-to-hands-on-projects-3ocb</link>
      <guid>https://dev.to/p0saurabh/the-ultimate-guide-to-docker-from-containers-and-architecture-to-hands-on-projects-3ocb</guid>
      <description>&lt;h1&gt;
  
  
  The Ultimate Guide to Docker: From Containers and Architecture to Hands-On Projects
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Part I: The Container Revolution: Why Modern Development Runs on Containers
&lt;/h2&gt;

&lt;p&gt;In modern software development, one of the most persistent and frustrating challenges has been ensuring that an application runs reliably across different computing environments. The infamous phrase, "but it works on my machine," has been the source of countless delays and headaches for development and operations teams alike. This problem arises from subtle differences in operating systems, library versions, and configurations between a developer's laptop, a testing server, and the production environment. Containerization emerged as the definitive solution to this problem, and Docker is the platform that brought this powerful technology to the masses.[1, 2]&lt;/p&gt;

&lt;h3&gt;
  
  
  A. What is a Container? Beyond the Buzzword
&lt;/h3&gt;

&lt;p&gt;At its core, a software container is a standard, executable unit of software that packages an application's code along with all its necessary dependencies, such as libraries, system tools, and runtime environments.[1, 3, 4] This package is a self-sufficient, lightweight executable that is abstracted away from the host operating system, allowing it to run consistently on any infrastructure.[3, 4]&lt;/p&gt;

&lt;p&gt;The most effective analogy for understanding a software container is the physical shipping container. Before standardization, shipping goods was a chaotic process. Goods of different shapes and sizes were difficult to load, move, and unload. The invention of the standardized shipping container revolutionized global trade by providing a uniform box that could be handled by any crane, ship, train, or truck, regardless of its contents. A software container does the same for applications.[5] It provides a standard "box" that can be moved from a developer's laptop (running macOS) to a staging server (running a specific Linux distribution) and finally to a public cloud provider (running on different hardware) with the guarantee that the application inside will behave identically in every location.[1, 6]&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Software containers are analogous to physical shipping containers, providing a standard unit for any application.&lt;/em&gt; [7, 8]&lt;/p&gt;

&lt;p&gt;This remarkable consistency is achieved through a form of operating system (OS) virtualization. Unlike traditional virtual machines that virtualize the hardware, containers virtualize the OS itself. They leverage features of the host OS kernel—such as &lt;code&gt;cgroups&lt;/code&gt; and &lt;code&gt;namespaces&lt;/code&gt; in Linux—to create isolated environments for processes.[9, 10] This allows each container to have its own private view of the filesystem, network, and process tree, all while sharing the kernel of the single host operating system.[1]&lt;/p&gt;
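&lt;p&gt;These kernel features are visible from the Docker CLI itself. A sketch, assuming a Linux host with Docker installed (the container name &lt;code&gt;demo&lt;/code&gt; is arbitrary): the &lt;code&gt;--memory&lt;/code&gt; and &lt;code&gt;--cpus&lt;/code&gt; flags become cgroup limits, and comparing PID-namespace identifiers shows the container's isolated process view.&lt;/p&gt;

```shell
# cgroups: resource flags become kernel-enforced limits on this container.
docker run -d --name demo --memory 256m --cpus 0.5 nginx

# namespaces: the container's PID namespace id differs from the host's.
readlink /proc/self/ns/pid                 # namespace id on the host
docker exec demo readlink /proc/1/ns/pid   # a different id inside the container

# clean up
docker rm -f demo
```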

&lt;h3&gt;
  
  
  B. The Core Benefits: Speed, Portability, and Consistency
&lt;/h3&gt;

&lt;p&gt;The architectural design of containers directly translates into a set of transformative benefits for software development and deployment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Portability and Consistency&lt;/strong&gt;: This is the paramount advantage. Because a container bundles an application with all its dependencies, it creates a predictable and repeatable environment.[5] This eliminates entire classes of bugs caused by environment discrepancies and dramatically simplifies the deployment process. Developers can be confident that what they build and test locally is exactly what will run in production.[1, 11] This portability extends across operating systems (Linux, Windows, macOS) and infrastructures (on-premises data centers, public clouds).[3, 4]&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Speed and Efficiency&lt;/strong&gt;: Containers are exceptionally lightweight. Since they don't need to bundle a full guest operating system, their image sizes are typically measured in megabytes, compared to the gigabytes required for virtual machines.[1, 4] This smaller footprint leads to significantly faster startup times—containers can launch in seconds, whereas VMs can take minutes to boot their entire OS.[4, 12] This efficiency also allows for higher server density, meaning more applications can be run on a single host machine, leading to better resource utilization and reduced server and licensing costs.[3, 13]&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Agility and DevOps Enablement&lt;/strong&gt;: The speed and consistency of containers are foundational to modern DevOps practices. The ability to quickly build, test, and deploy applications in isolated units makes them a perfect fit for Continuous Integration and Continuous Delivery (CI/CD) pipelines.[3, 13] Containerization also promotes a clear separation of responsibilities: developers focus on the application logic and its dependencies inside the container, while operations teams focus on the platform that runs and manages these containers.[3] Furthermore, containers are the natural architectural unit for building microservices. Each microservice can be packaged, deployed, updated, and scaled independently in its own container, enabling teams to develop and release services at their own pace.[6, 13, 14]&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
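&lt;p&gt;The "dependencies travel with the app" guarantee behind all three benefits comes from declaring the entire environment in a &lt;code&gt;Dockerfile&lt;/code&gt;. A minimal sketch (the image tag and file names are illustrative):&lt;/p&gt;

```dockerfile
# Pin the runtime so the image never depends on what the host happens to have.
FROM python:3.12-slim
WORKDIR /app
# Copy and install dependencies first, so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Then copy the application code itself.
COPY . .
CMD ["python", "app.py"]
```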

&lt;p&gt;The underlying technology for containers, such as Linux control groups (&lt;code&gt;cgroups&lt;/code&gt;) and namespaces, existed for years before Docker's rise.[1, 15] However, these kernel features were complex and difficult for the average developer to use. Docker's revolutionary impact, starting with its open-source launch in 2013, was not in inventing containerization but in democratizing it.[1] It provided a simple, high-level API and an intuitive command-line interface (CLI) that abstracted away the low-level complexity.[10, 15] This brilliant simplification of a powerful concept, combined with the simultaneous rise of microservice architectures and cloud computing, created a perfect storm. Docker provided the right tool at the exact moment the industry needed it most, catalyzing a fundamental shift in how software is built and shipped.&lt;/p&gt;

&lt;h3&gt;
  
  
  C. Containers vs. Virtual Machines: A Definitive Comparison
&lt;/h3&gt;

&lt;p&gt;To fully appreciate the innovation of containers, it is essential to compare them with their predecessor, virtual machines (VMs). Both are virtualization technologies, but they operate at different levels of the system stack, leading to critical trade-offs in performance, portability, and security.[12, 16]&lt;/p&gt;

&lt;p&gt;The architectural divide is the most important distinction. Virtual machines virtualize the &lt;strong&gt;hardware&lt;/strong&gt;. A piece of software called a hypervisor runs on the host machine and creates fully independent guest machines. Each VM contains a complete copy of a guest operating system, along with the necessary binaries, libraries, and the application itself.[3, 16, 17] In contrast, containers virtualize the &lt;strong&gt;operating system&lt;/strong&gt;. They share the kernel of the host OS and run as isolated user-space processes.[1, 10] This fundamental difference is the source of all other distinctions.&lt;/p&gt;

&lt;p&gt;This architectural choice has profound implications for resource utilization. Because each VM includes a full OS, it is inherently resource-intensive, consuming a large amount of CPU, memory, and storage—often tens of gigabytes.[1, 18] Containers, by sharing the host kernel, have a dramatically smaller footprint, typically measured in megabytes, and consume far fewer resources.[3, 4]&lt;/p&gt;

&lt;p&gt;This trade-off also extends to security. VMs provide superior security isolation because each VM is a fully sandboxed environment with its own kernel. An exploit or crash within one VM is highly unlikely to affect other VMs on the same host.[12, 16] Containers, on the other hand, share the host OS kernel. While they are isolated from each other at the process level, a severe vulnerability in the shared kernel could theoretically be exploited to compromise the host or other containers.[16, 17]&lt;/p&gt;

&lt;p&gt;The choice between containers and VMs depends on the specific use case. VMs are better suited for scenarios requiring full OS isolation, running applications that need a different operating system from the host, or housing large, traditional monolithic workloads.[6, 18] Containers excel in the world of modern, cloud-native applications, packaging microservices, and powering fast-paced CI/CD pipelines where speed, efficiency, and portability are the highest priorities.[6] It is also a common and powerful pattern to run containers &lt;em&gt;inside&lt;/em&gt; of VMs, combining the hardware-level isolation of VMs with the lightweight agility of containers.[6, 16]&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Architectural difference between Virtual Machines (left) and Containers (right).&lt;/em&gt; [2, 3]&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Attribute&lt;/th&gt;
&lt;th&gt;Containers&lt;/th&gt;
&lt;th&gt;Virtual Machines (VMs)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Virtualization Level&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Operating System [1, 3]&lt;/td&gt;
&lt;td&gt;Hardware [3, 16]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Size&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Lightweight (Megabytes) [1, 4]&lt;/td&gt;
&lt;td&gt;Heavyweight (Gigabytes) [1, 12]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Startup Time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Seconds [4, 12]&lt;/td&gt;
&lt;td&gt;Minutes [1, 18]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Resource Overhead&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Low (shares host kernel) [3, 4]&lt;/td&gt;
&lt;td&gt;High (includes full guest OS) [12, 18]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Security Isolation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Process-level isolation [3, 10]&lt;/td&gt;
&lt;td&gt;Full hardware-level isolation [12, 16]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Portability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High (runs on any OS with a container engine) [3, 9]&lt;/td&gt;
&lt;td&gt;Lower (tied to specific hypervisor/OS) [12, 17]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ideal Use Case&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Microservices, CI/CD, cloud-native apps [6, 13]&lt;/td&gt;
&lt;td&gt;Monoliths, multi-OS environments, high-security isolation [6, 18]&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Part II: Under the Hood: Deconstructing Docker's Architecture
&lt;/h2&gt;

&lt;p&gt;While the concept of a container is straightforward, the Docker platform that builds, runs, and manages them is a sophisticated system with a clear and powerful architecture. Understanding its components is key to using Docker effectively.&lt;/p&gt;

&lt;h3&gt;
  
  
  A. Introducing Docker: The Engine of Containerization
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpsjlibg4v6hi5wklcs2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpsjlibg4v6hi5wklcs2.png" alt=" " width="800" height="662"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker was launched as an open-source project in 2013 and quickly became the de facto standard for containerization.[1, 10] Its popularity is so immense that the terms "Docker" and "container" are often used interchangeably, a testament to its market dominance.[15] Docker is a set of Platform as a Service (PaaS) products that provide a complete ecosystem for developing, shipping, and running applications inside containers.[10]&lt;/p&gt;

&lt;h3&gt;
  
  
  B. The Client-Server Model: How Docker Works
&lt;/h3&gt;

&lt;p&gt;At its heart, Docker operates on a client-server architecture.[2, 19, 20] This model consists of three main components that work in concert:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Docker Daemon (&lt;code&gt;dockerd&lt;/code&gt;)&lt;/strong&gt;: This is the server component of Docker. It is a persistent, long-running background process that listens for API requests from Docker clients and manages all the heavy lifting.[10, 21, 22] The daemon is responsible for building images, running and monitoring containers, configuring networks, and managing storage volumes. It is the core engine that brings containers to life.[19]&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Docker Client (&lt;code&gt;docker&lt;/code&gt;)&lt;/strong&gt;: This is the primary interface through which users interact with Docker. It is a command-line interface (CLI) tool that allows you to issue commands like &lt;code&gt;docker run&lt;/code&gt; or &lt;code&gt;docker build&lt;/code&gt;.[10, 22] The client takes these commands and translates them into API requests that are sent to the Docker daemon for execution.[21]&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The REST API&lt;/strong&gt;: This is the communication layer that connects the Docker client and the Docker daemon. The client uses this API to send instructions to the daemon.[19, 22] This interaction can happen over a local UNIX socket or a network interface, which means the Docker client can control a daemon running on the same machine or on a remote server.[2] This API-driven architecture is what makes Docker so extensible and easy to integrate with other tools and automation scripts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
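&lt;p&gt;Because the daemon speaks plain HTTP, you can bypass the &lt;code&gt;docker&lt;/code&gt; client entirely and talk to the API yourself. A sketch, assuming a local daemon listening on the default Linux socket path (you may need &lt;code&gt;sudo&lt;/code&gt; or membership in the &lt;code&gt;docker&lt;/code&gt; group):&lt;/p&gt;

```shell
# Ask the daemon for its version information...
curl --unix-socket /var/run/docker.sock http://localhost/version

# ...and for the list of running containers; this is the same API call
# the CLI makes when you run "docker ps".
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```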

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23p9701begu48ogl5opu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23p9701begu48ogl5opu.png" alt=" " width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  C. The Building Blocks: Docker's Core Objects
&lt;/h3&gt;

&lt;p&gt;When working with Docker, you will constantly interact with a few fundamental objects that form the building blocks of any containerized application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Images&lt;/strong&gt;: A Docker image is a read-only template that contains a set of instructions for creating a container.[10, 20, 21] It is a complete, executable package that includes everything an application needs to run: the application code, a runtime (like Python or Node.js), system tools, libraries, and settings.[1] Images are the "build" part of the Docker lifecycle; they are what you store and distribute.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Containers&lt;/strong&gt;: A container is a runnable, live instance of an image.[2, 10] When you execute the &lt;code&gt;docker run&lt;/code&gt; command, the Docker daemon uses an image as a blueprint to create and start a container. The container is the isolated environment where your application actually executes. It has its own filesystem, network stack, and process space, logically separated from the host and other containers.[3, 21]&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Volumes&lt;/strong&gt;: The filesystem inside a container is ephemeral by default. This means that any data created or modified inside a container is lost when that container is removed. To solve this, Docker provides &lt;strong&gt;volumes&lt;/strong&gt;, which are the preferred mechanism for persisting data.[23] Volumes are managed by Docker and exist on the host filesystem, outside the container's lifecycle. They can be attached to one or more containers to store application data, such as a database file or user uploads, ensuring that the data persists even if the container is stopped and removed.[23, 24]&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Networks&lt;/strong&gt;: By default, containers are isolated. Docker networks provide the communication layer that allows them to connect with each other, with the host machine, and with the outside world.[21, 23] Docker includes several built-in network drivers, such as &lt;code&gt;bridge&lt;/code&gt; (the default for isolating containers on a private network) and &lt;code&gt;host&lt;/code&gt; (for sharing the host's network stack). This networking capability is crucial for building multi-container applications where different services (like a web frontend and a database backend) need to communicate securely.[20, 21]&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
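&lt;p&gt;All four objects come together in a few commands. A sketch, assuming Docker is installed (the network, volume, and container names are illustrative):&lt;/p&gt;

```shell
# A user-defined bridge network and a named volume, managed by Docker.
docker network create app-net
docker volume create app-data

# A container: a live instance of the postgres image, joined to the
# network, with the volume mounted at the database's data directory
# so the data outlives the container.
docker run -d --name db --network app-net \
  -v app-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=example postgres:16
```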

&lt;p&gt;The remarkable efficiency of Docker is deeply rooted in its use of a layered filesystem for images. An image is not a single monolithic file but is composed of multiple read-only layers stacked on top of each other.[23] Each instruction in a &lt;code&gt;Dockerfile&lt;/code&gt; (like adding a dependency or copying code) creates a new layer. When you run a container from an image, Docker doesn't create a full copy. Instead, it adds a thin, writable layer on top of the immutable image layers. Any changes made inside the running container, such as writing a log file, are stored in this writable layer. This "copy-on-write" system is incredibly efficient. It means that if you run ten containers from the same image, they all share the same underlying image layers in memory. Only the differences for each container are stored separately. This is why you can launch numerous containers almost instantly without consuming a proportional amount of disk space, and why pulling updates to an image is often very fast—Docker only needs to download the layers that have changed.[23]&lt;/p&gt;
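&lt;p&gt;You can inspect this layering directly with &lt;code&gt;docker history&lt;/code&gt;. A sketch, assuming the public &lt;code&gt;nginx&lt;/code&gt; image:&lt;/p&gt;

```shell
# Pull the image; Docker downloads each layer separately and skips
# any layer already present locally.
docker pull nginx

# List the layers: one row per image-building instruction, with the
# size each layer contributes.
docker history nginx
```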

&lt;h3&gt;
  
  
  D. The Ecosystem: Docker Hub and Registries
&lt;/h3&gt;

&lt;p&gt;The final piece of the Docker architecture is the registry, which is a system for storing and distributing Docker images.[11, 21]&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Hub&lt;/strong&gt;: This is the default public registry provided by Docker, Inc., and it serves as a massive central library for container images.[20, 25] It hosts thousands of "official" images for popular software (like Python, Ubuntu, Redis, and Nginx) that are maintained and vetted by the software vendors. It also contains countless community-contributed images for a vast range of applications and tools.[26] For most developers, Docker Hub is the primary source for pulling base images to build upon.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Public vs. Private Registries&lt;/strong&gt;: While Docker Hub is invaluable, most organizations need a place to store their own proprietary application images. For this, they use private registries.[21] A private registry provides a secure, access-controlled location to store and share images within a team or company. Major cloud providers (like AWS ECR, Google Artifact Registry, and Azure Container Registry) offer managed private registry services, or organizations can host their own.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
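&lt;p&gt;Publishing to a private registry follows the same tag-and-push flow as Docker Hub. A sketch in which the registry host &lt;code&gt;registry.example.com&lt;/code&gt; and the image names are hypothetical:&lt;/p&gt;

```shell
# Re-tag the local image with the registry's hostname as a prefix;
# the prefix tells Docker which registry to push to.
docker tag my-app:1.0 registry.example.com/team/my-app:1.0

# Authenticate against the private registry, then upload the image.
docker login registry.example.com
docker push registry.example.com/team/my-app:1.0
```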

&lt;h2&gt;
  
  
  Part III: Mastering the CLI: Your Docker Command Toolkit
&lt;/h2&gt;

&lt;p&gt;The Docker command-line interface (CLI) is your primary tool for interacting with the Docker daemon. While it has a large number of commands and options, a relatively small subset will cover the vast majority of your daily development tasks. This section provides a practical guide to the most important commands.&lt;/p&gt;

&lt;h3&gt;
  
  
  A. Essential Commands for Daily Use (Image &amp;amp; Container Lifecycle)
&lt;/h3&gt;

&lt;p&gt;These are the commands you will use constantly to build, run, and manage your applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker build&lt;/code&gt;: Builds a new image from the instructions in a &lt;code&gt;Dockerfile&lt;/code&gt;. The &lt;code&gt;-t&lt;/code&gt; flag is used to tag the image with a name and optional version (e.g., &lt;code&gt;my-app:1.0&lt;/code&gt;).[26, 27]

&lt;ul&gt;
&lt;li&gt;Example: &lt;code&gt;docker build -t my-app:1.0 .&lt;/code&gt; (the trailing &lt;code&gt;.&lt;/code&gt; sets the build context to the current directory)
&lt;li&gt;Example: &lt;code&gt;docker build -t my-app:1.0 .&lt;/code&gt; (the trailing &lt;code&gt;.&lt;/code&gt; sets the build context to the current directory)
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;docker run&lt;/code&gt;: Creates and starts a new container from a specified image. This is a powerful command with many useful flags.[24, 27]

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-d&lt;/code&gt;: Runs the container in detached mode (in the background).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-p &amp;lt;host_port&amp;gt;:&amp;lt;container_port&amp;gt;&lt;/code&gt;: Maps a port on the host machine to a port inside the container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--name &amp;lt;container_name&amp;gt;&lt;/code&gt;: Assigns a custom name to the container for easy reference.&lt;/li&gt;
&lt;li&gt;Example: &lt;code&gt;docker run -d -p 8080:80 --name my-web-server nginx&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;docker ps&lt;/code&gt;: Lists all currently running containers. To see all containers, including those that have stopped, use the &lt;code&gt;-a&lt;/code&gt; flag.[24, 28]&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;docker stop&lt;/code&gt;: Gracefully stops one or more running containers by sending a &lt;code&gt;SIGTERM&lt;/code&gt; signal, giving the application a chance to shut down cleanly; if the container has not exited after a grace period (10 seconds by default), the daemon follows up with &lt;code&gt;SIGKILL&lt;/code&gt;.[27]

&lt;ul&gt;
&lt;li&gt;Example: &lt;code&gt;docker stop my-web-server&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;docker start&lt;/code&gt;: Starts one or more stopped containers.[27]&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;docker rm&lt;/code&gt;: Removes one or more stopped containers. You cannot remove a running container without first stopping it or using the &lt;code&gt;-f&lt;/code&gt; (force) flag.[24]&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;docker images&lt;/code&gt;: Lists all of the Docker images stored on your local machine.[27, 28]&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;docker rmi&lt;/code&gt;: Removes one or more images from your local machine.[24, 27]&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;docker pull&lt;/code&gt;: Downloads an image from a registry (Docker Hub by default) to your local machine.[24, 27]&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;docker push&lt;/code&gt;: Uploads an image from your local machine to a registry.[24, 27]&lt;/li&gt;

&lt;/ul&gt;
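&lt;p&gt;The commands above chain together into a typical end-to-end lifecycle. A sketch, assuming Docker is installed (the container name &lt;code&gt;web&lt;/code&gt; is arbitrary):&lt;/p&gt;

```shell
docker pull nginx                         # fetch the image from Docker Hub
docker run -d -p 8080:80 --name web nginx # create and start a container
docker ps                                 # "web" is listed as running
docker stop web                           # graceful shutdown
docker ps -a                              # "web" now shows as Exited
docker rm web                             # remove the stopped container
docker rmi nginx                          # remove the now-unused image
```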

&lt;p&gt;Many developers learn &lt;code&gt;docker run&lt;/code&gt; as their first command, but it's helpful to understand that it's a convenience command that combines two distinct actions: creating a container and starting it. Docker also provides separate commands for these steps: &lt;code&gt;docker create&lt;/code&gt; and &lt;code&gt;docker start&lt;/code&gt;.[24, 27] &lt;code&gt;docker create&lt;/code&gt; builds the container's filesystem from the image and prepares it to run, but does not start it. &lt;code&gt;docker start&lt;/code&gt; then executes the created container. This separation can be useful in automation scripts where you might want to configure a set of containers first and then start them all together. Understanding this &lt;code&gt;run = create + start&lt;/code&gt; relationship provides a clearer mental model of the container lifecycle.&lt;/p&gt;
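&lt;p&gt;The two halves of &lt;code&gt;docker run&lt;/code&gt; can be performed separately. A sketch (the container name is illustrative):&lt;/p&gt;

```shell
# Prepare the container's filesystem and configuration, but don't start it.
docker create -p 8080:80 --name web nginx

# Start the prepared container in the background.
docker start web

# The two steps together are equivalent to:
#   docker run -d -p 8080:80 --name web nginx
```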

&lt;h3&gt;
  
  
  B. Advanced Commands and Techniques (Interaction &amp;amp; Management)
&lt;/h3&gt;

&lt;p&gt;Once your containers are running, you'll need commands to interact with them, manage their resources, and clean up your system.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Interacting with Containers:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker logs&lt;/code&gt;: Fetches the standard output logs from a container. This is essential for debugging. The &lt;code&gt;-f&lt;/code&gt; flag allows you to "follow" the log stream in real-time.[27, 28]&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker exec&lt;/code&gt;: Executes a command inside a running container. Using the &lt;code&gt;-it&lt;/code&gt; flags provides an interactive TTY, which is perfect for opening a shell (&lt;code&gt;/bin/sh&lt;/code&gt; or &lt;code&gt;/bin/bash&lt;/code&gt;) inside a container to debug its state.[27, 28]&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker cp&lt;/code&gt;: Copies files and folders between a container's filesystem and your host machine's local filesystem.[26, 27]&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Inspection and Stats:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker inspect&lt;/code&gt;: Provides detailed, low-level information about any Docker object (container, image, volume, network) in JSON format. This is useful for finding a container's IP address or inspecting its configuration.[24, 27]&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker stats&lt;/code&gt;: Displays a live stream of resource usage statistics (CPU, memory, network I/O) for your running containers.[24]&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Data and Network Management:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker volume&lt;/code&gt;: A group of commands for managing volumes, including &lt;code&gt;create&lt;/code&gt;, &lt;code&gt;ls&lt;/code&gt; (list), &lt;code&gt;inspect&lt;/code&gt;, and &lt;code&gt;rm&lt;/code&gt; (remove).[24, 27]&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker network&lt;/code&gt;: A group of commands for managing networks, including &lt;code&gt;create&lt;/code&gt;, &lt;code&gt;ls&lt;/code&gt;, &lt;code&gt;inspect&lt;/code&gt;, &lt;code&gt;connect&lt;/code&gt; (attach a container to a network), and &lt;code&gt;disconnect&lt;/code&gt;.[26]&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;System Cleanup:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker system prune&lt;/code&gt;: A very useful command for reclaiming disk space. It removes all unused Docker objects: stopped containers, dangling images (untagged images, typically left behind by newer builds), unused networks, and build cache. Adding the &lt;code&gt;-a&lt;/code&gt; flag will also remove all unused images.[26, 28]&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
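&lt;p&gt;These commands combine naturally into a debugging session. A sketch, assuming a running container named &lt;code&gt;web&lt;/code&gt; on the default bridge network:&lt;/p&gt;

```shell
docker logs -f --tail 100 web    # stream the last 100 log lines live
docker exec -it web /bin/sh      # open an interactive shell inside it

# Pull a single field out of the full JSON with a Go template:
docker inspect --format '{{.NetworkSettings.IPAddress}}' web

docker stats --no-stream         # one-shot snapshot of resource usage
```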

&lt;h3&gt;
  
  
  C. Quick Reference: The Docker CLI Cheat Sheet
&lt;/h3&gt;

&lt;p&gt;This table serves as a quick reference for the most common Docker commands and their functions, organized by category.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;th&gt;Common Flags&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Image Management&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker build&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;-t &amp;lt;name:tag&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Build an image from a Dockerfile.[27]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker images&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;-a&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;List all local images.[27]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker rmi&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;-f&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Remove one or more local images.[24]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker pull&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Pull an image from a registry.[27]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker push&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Push an image to a registry.[27]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker tag&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Tag an image with a new name/version.[27]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Container Lifecycle&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker run&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;-d&lt;/code&gt;, &lt;code&gt;-p&lt;/code&gt;, &lt;code&gt;-v&lt;/code&gt;, &lt;code&gt;--name&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Create and start a new container from an image.[24]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker ps&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;-a&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;List running containers (&lt;code&gt;-a&lt;/code&gt; for all).[28]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker start&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Start one or more stopped containers.[27]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker stop&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Stop one or more running containers.[27]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker restart&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Restart a container.[27]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker rm&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;-v&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Remove one or more stopped containers (&lt;code&gt;-v&lt;/code&gt; to remove volumes).[24]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Inspection &amp;amp; Interaction&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker logs&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;-f&lt;/code&gt;, &lt;code&gt;--tail&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Fetch the logs of a container (&lt;code&gt;-f&lt;/code&gt; to follow).[27]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker exec&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;-it&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Execute a command in a running container (&lt;code&gt;-it&lt;/code&gt; for interactive shell).[27]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker inspect&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Display low-level information on Docker objects.[24]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker stats&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Display a live stream of container resource usage.[24]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker cp&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Copy files/folders between a container and the host.[27]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Network &amp;amp; Volume Mgmt&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker network ls&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;List all networks.[26]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker volume ls&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;List all volumes.[24]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker volume prune&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Remove all unused local volumes.[26]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;System Cleanup&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker system prune&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;-a&lt;/code&gt;, &lt;code&gt;--volumes&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Remove unused data: stopped containers, unused networks, and dangling images (&lt;code&gt;-a&lt;/code&gt; for all unused images, &lt;code&gt;--volumes&lt;/code&gt; for volumes).[28]&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Part IV: Hands-On Project: Building and Running a Containerized Flask Application
&lt;/h2&gt;

&lt;p&gt;Theory is essential, but the best way to learn Docker is by doing. This hands-on project will guide you through containerizing a simple Python web application using Flask. We will start by building and running a single container, then advance to a multi-service application using Docker Compose.&lt;/p&gt;

&lt;h3&gt;
  
  
  A. Project Setup: The Anatomy of Our Flask App
&lt;/h3&gt;

&lt;p&gt;First, let's create the files for our simple web application. Create a new directory for your project and inside it, create the following two files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. &lt;code&gt;app.py&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
This file contains the code for a minimal Flask web server. It will display a simple welcome message.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Welcome to Flask with Docker!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The line &lt;code&gt;host="0.0.0.0"&lt;/code&gt; is critical here. It tells the Flask development server to listen on all available network interfaces, which makes the application accessible from outside the Docker container.[29]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. &lt;code&gt;requirements.txt&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
This file lists the Python dependencies our project needs. For now, it's just Flask.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flask
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
### B. Crafting the Blueprint: Writing the Dockerfile

The `Dockerfile` is a text-based script that contains the instructions for building our application's Docker image.[30, 31] Create a file named `Dockerfile` (with no file extension) in your project directory with the following content.

```

dockerfile
# Step 1: Use an official lightweight Python image as a base
FROM python:3.9-slim

# Step 2: Set the working directory inside the container
WORKDIR /app

# Step 3: Copy the requirements file and install dependencies
COPY requirements.txt.
RUN pip install -r requirements.txt

# Step 4: Copy the rest of the application's source code
COPY..

# Step 5: Expose the port that the application runs on
EXPOSE 5000

# Step 6: Define the default command to run when the container starts
CMD ["python", "app.py"]


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's break down each instruction:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;FROM python:3.9-slim&lt;/code&gt;: This specifies the base image for our build. Using an official, &lt;code&gt;slim&lt;/code&gt; variant is a best practice as it results in a smaller and more secure final image.[30, 32]&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;WORKDIR /app&lt;/code&gt;: This sets the working directory for all subsequent commands inside the container. It's like &lt;code&gt;cd /app&lt;/code&gt;.[31, 32]&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COPY requirements.txt .&lt;/code&gt;: This copies the &lt;code&gt;requirements.txt&lt;/code&gt; file from our host machine into the &lt;code&gt;/app&lt;/code&gt; directory inside the image.[30] We copy this file first to take advantage of Docker's layer caching. If our dependencies don't change, this layer won't need to be rebuilt.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RUN pip install -r requirements.txt&lt;/code&gt;: This executes the &lt;code&gt;pip install&lt;/code&gt; command during the image build process to install our dependencies.[32]&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COPY . .&lt;/code&gt;: This copies the rest of our project files (just &lt;code&gt;app.py&lt;/code&gt; in this case) into the &lt;code&gt;/app&lt;/code&gt; directory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;EXPOSE 5000&lt;/code&gt;: This instruction serves as documentation, informing Docker that the container listens on port 5000 at runtime. It does not actually publish the port to the host.[32, 7]&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;CMD ["python", "app.py"]&lt;/code&gt;: This specifies the default command that will be executed when a container is started from this image.[30]&lt;/li&gt;
&lt;/ul&gt;
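&lt;p&gt;The layer-caching behavior behind the ordering of &lt;code&gt;COPY requirements.txt .&lt;/code&gt; can be sketched as a toy model in Python. This is not how Docker actually computes layer ids (real builds hash file contents and metadata); it only illustrates the idea that a layer's identity depends on everything before it, so a change invalidates all later layers:&lt;/p&gt;

```python
import hashlib

def build_image(instructions, cache):
    """Toy model of Docker's layer cache: a layer's id is derived from its
    instruction plus every layer before it, so any change invalidates all
    later layers. Returns (layer ids, number of cache hits)."""
    layers, parent, hits = [], "", 0
    for instruction in instructions:
        layer_id = hashlib.sha256((parent + instruction).encode()).hexdigest()[:12]
        if layer_id in cache:
            hits += 1
        cache.add(layer_id)
        layers.append(layer_id)
        parent = layer_id
    return layers, hits

cache = set()
dockerfile = ["FROM python:3.9-slim", "COPY requirements.txt .",
              "RUN pip install -r requirements.txt", "COPY . ."]
build_image(dockerfile, cache)            # first build: every layer is new
_, hits = build_image(dockerfile, cache)  # unchanged rebuild: all 4 layers cached
# In the toy model, changing only the final instruction leaves the first
# three layers (including the slow pip install) cached.
```

This is why editing &lt;code&gt;app.py&lt;/code&gt; alone does not re-run &lt;code&gt;pip install&lt;/code&gt;: only the final &lt;code&gt;COPY&lt;/code&gt; layer and anything after it are rebuilt.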

&lt;h3&gt;
  
  
  C. The First Build: Creating and Running a Single Container
&lt;/h3&gt;

&lt;p&gt;With our &lt;code&gt;Dockerfile&lt;/code&gt; ready, we can now build the image and run our application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Build the Image&lt;/strong&gt;&lt;br&gt;
Open your terminal in the project directory and run the &lt;code&gt;docker build&lt;/code&gt; command. We'll tag (&lt;code&gt;-t&lt;/code&gt;) our image as &lt;code&gt;flask-hello-world&lt;/code&gt;. The &lt;code&gt;.&lt;/code&gt; at the end tells Docker to use the current directory as the build context.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
bash
docker build -t flask-hello-world.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;2. Run the Container&lt;/strong&gt;&lt;br&gt;
Now, use the &lt;code&gt;docker run&lt;/code&gt; command to create and start a container from our new image.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
bash
docker run -d -p 5001:5000 --name my-flask-app flask-hello-world


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here, &lt;code&gt;-d&lt;/code&gt; runs the container in detached mode, and &lt;code&gt;-p 5001:5000&lt;/code&gt; is the crucial part. It maps port 5001 on your host machine to port 5000 inside the container, which is the port our Flask app is listening on.&lt;/p&gt;
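&lt;p&gt;The &lt;code&gt;HOST:CONTAINER&lt;/code&gt; order of the &lt;code&gt;-p&lt;/code&gt; flag trips many people up. A tiny Python sketch (a hypothetical helper, not part of Docker) makes the direction explicit:&lt;/p&gt;

```python
def parse_port_flag(mapping: str) -> dict:
    """Split a Docker '-p HOST:CONTAINER' value into its two sides."""
    host_port, container_port = mapping.split(":")
    return {"host": int(host_port), "container": int(container_port)}

# -p 5001:5000 → requests to localhost:5001 are forwarded
# to port 5000 inside the container.
print(parse_port_flag("5001:5000"))  # {'host': 5001, 'container': 5000}
```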

&lt;p&gt;&lt;strong&gt;3. Verify Success&lt;/strong&gt;&lt;br&gt;
Open a web browser and navigate to &lt;code&gt;http://localhost:5001&lt;/code&gt;. You should see the message "Welcome to Flask with Docker!" You can also run &lt;code&gt;docker ps&lt;/code&gt; in your terminal to see your &lt;code&gt;my-flask-app&lt;/code&gt; container running.&lt;/p&gt;

&lt;h3&gt;
  
  
  D. Scaling Up: Introducing Docker Compose for Multi-Service Applications
&lt;/h3&gt;


&lt;p&gt;Managing a single container is straightforward, but real-world applications often consist of multiple interconnected services, like a web server, a database, and a caching layer. Managing the lifecycle and networking of these services with individual &lt;code&gt;docker run&lt;/code&gt; commands becomes complex and error-prone.[33]&lt;/p&gt;

&lt;p&gt;This is where &lt;strong&gt;Docker Compose&lt;/strong&gt; comes in. It is a tool for defining and running multi-container Docker applications using a single, simple YAML configuration file.[10, 8]&lt;/p&gt;

&lt;p&gt;Let's enhance our application to use a Redis cache for a simple hit counter. This will demonstrate a realistic multi-service setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Update Project Files&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;requirements.txt&lt;/code&gt;&lt;/strong&gt;: Add &lt;code&gt;redis&lt;/code&gt; to the list of dependencies.&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

flask
redis


&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;app.py&lt;/code&gt;&lt;/strong&gt;: Modify the application to connect to Redis and increment a counter on each visit.&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import time
import redis
from flask import Flask

app = Flask(__name__)
# Connect to the redis service, using the service name 'redis' as the hostname
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            # The 'hits' key will be incremented in Redis
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return f'Hello World! I have been seen {count} times.\n'

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)


&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
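&lt;p&gt;The connection-retry loop in &lt;code&gt;get_hit_count&lt;/code&gt; is a general pattern worth extracting: when a dependent service (like Redis) may not be up yet, retry a bounded number of times before giving up. A standalone sketch of that pattern, generalized to any callable:&lt;/p&gt;

```python
import time

def retry(fn, retries=5, delay=0.5, exc_types=(Exception,)):
    """Call fn(); on one of exc_types, wait and try again, up to
    `retries` extra attempts. Mirrors the loop in get_hit_count above."""
    while True:
        try:
            return fn()
        except exc_types:
            if retries == 0:
                raise
            retries -= 1
            time.sleep(delay)

# Example: a flaky function that fails twice before succeeding.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("not ready yet")
    return "connected"

print(retry(flaky, delay=0.01))  # connected (after 3 attempts)
```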

&lt;p&gt;&lt;strong&gt;2. Create the &lt;code&gt;compose.yaml&lt;/code&gt; File&lt;/strong&gt;&lt;br&gt;
Now, create a file named &lt;code&gt;compose.yaml&lt;/code&gt; in your project directory. This file will define our two services: &lt;code&gt;web&lt;/code&gt; (our Flask app) and &lt;code&gt;redis&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
yaml
services:
  web:
    build:.
    ports:
      - "8000:5000"
    volumes:
      -.:/app
  redis:
    image: "redis:alpine"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;services&lt;/code&gt;: This is the top-level key where we define all the containers in our application stack.[33]&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;web&lt;/code&gt;: This is the name of our Flask application service.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;build: .&lt;/code&gt;: Tells Compose to build an image using the &lt;code&gt;Dockerfile&lt;/code&gt; in the current directory.[34]&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ports: - "8000:5000"&lt;/code&gt;: Maps port 8000 on the host to port 5000 in the &lt;code&gt;web&lt;/code&gt; container.[34]&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;volumes: - .:/app&lt;/code&gt;: This is a powerful feature for development. It mounts the current directory on the host into the &lt;code&gt;/app&lt;/code&gt; directory in the container. Any changes you make to your code on the host will be immediately reflected inside the container, without needing to rebuild the image.[33]&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;code&gt;redis&lt;/code&gt;: This is the name of our Redis cache service.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;image: "redis:alpine"&lt;/code&gt;: Tells Compose to pull the &lt;code&gt;redis:alpine&lt;/code&gt; image from Docker Hub, rather than building it locally.[34]&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;The most powerful, yet seemingly magical, part of Docker Compose is its handling of networking. When you run &lt;code&gt;docker compose up&lt;/code&gt;, Compose automatically creates a dedicated &lt;code&gt;bridge&lt;/code&gt; network for your project and attaches both the &lt;code&gt;web&lt;/code&gt; and &lt;code&gt;redis&lt;/code&gt; containers to it. Within this network, Docker provides an internal DNS resolver. This means that the &lt;code&gt;web&lt;/code&gt; container can find and communicate with the &lt;code&gt;redis&lt;/code&gt; container simply by using its service name, &lt;code&gt;redis&lt;/code&gt;, as a hostname. This is why the line &lt;code&gt;redis.Redis(host='redis', port=6379)&lt;/code&gt; in our Python code works seamlessly. This built-in service discovery eliminates the need for manual IP address management and is a core reason why Compose is so effective for local development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Running with Compose&lt;/strong&gt;&lt;br&gt;
With the &lt;code&gt;compose.yaml&lt;/code&gt; file in place, running the entire application stack is incredibly simple.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;To build the images and start the services in the foreground:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
bash
docker compose up --build


&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To run in detached (background) mode:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
bash
docker compose up -d


&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To stop and remove the containers and networks created by the project (add &lt;code&gt;-v&lt;/code&gt; to also remove volumes):&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
bash
docker compose down


&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Run &lt;code&gt;docker compose up --build&lt;/code&gt;. Now, navigate to &lt;code&gt;http://localhost:8000&lt;/code&gt; in your browser. You should see the message "Hello World! I have been seen 1 times." Each time you refresh the page, the counter will increment, proving that your Flask container is successfully communicating with the Redis container.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part V: Conclusion: Your Journey with Docker Has Just Begun
&lt;/h2&gt;

&lt;p&gt;This guide has taken you on a comprehensive journey through the world of Docker. You began by understanding the fundamental "why" of containerization—how it solves the age-old problem of environmental inconsistency by providing portable, lightweight, and efficient units of software. You deconstructed the differences between containers and virtual machines, explored the client-server architecture of the Docker engine, and learned about its core objects: images, containers, volumes, and networks.&lt;/p&gt;

&lt;p&gt;You then moved from theory to practice, mastering the essential Docker CLI commands for managing the entire container lifecycle. Finally, you put it all together in a hands-on project, containerizing a simple Flask application and then evolving it into a multi-service stack with Docker Compose, experiencing firsthand the power of automated builds, networking, and service discovery.&lt;/p&gt;

&lt;p&gt;Your journey with Docker is far from over; it has just begun. The skills you have acquired are a foundation for exploring more advanced and powerful concepts in the cloud-native ecosystem. Here are some paths to explore next:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Production Deployment&lt;/strong&gt;: While Docker Compose is excellent for development, production environments require more robustness. Investigate how to use multiple Compose files (e.g., a &lt;code&gt;compose.production.yaml&lt;/code&gt;) to override development settings for production deployments, and explore strategies for running Compose on a single server.[35]&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orchestration at Scale&lt;/strong&gt;: For managing applications across a cluster of multiple machines, providing high availability and fault tolerance, the next logical step is a container orchestrator. &lt;strong&gt;Kubernetes&lt;/strong&gt; is the industry standard for this, and understanding it is a crucial skill for any modern DevOps or backend engineer.[9, 36]&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Best Practices&lt;/strong&gt;: Dive deeper into securing your containers. Learn about scanning images for vulnerabilities, the principle of least privilege by running containers as a non-root user, and managing secrets securely.[26, 32]&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimizing Builds&lt;/strong&gt;: To create smaller, faster, and more secure production images, learn about advanced &lt;code&gt;Dockerfile&lt;/code&gt; techniques like &lt;strong&gt;multi-stage builds&lt;/strong&gt;. This practice allows you to separate the build-time dependencies from the runtime environment, resulting in a lean final image.[32, 37]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By mastering Docker, you have unlocked a new level of efficiency and reliability in your development workflow. You are now equipped with one of the most fundamental tools in modern software engineering. Keep building, keep learning, and continue to explore the vast possibilities that containerization offers.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>How I Built Our Own OpenStack Cloud on a Virtual Machine (Step-by-Step Guide)</title>
      <dc:creator>P0Saurabh</dc:creator>
      <pubDate>Thu, 22 May 2025 06:30:48 +0000</pubDate>
      <link>https://dev.to/p0saurabh/how-i-built-my-own-openstack-cloud-on-a-virtual-machine-step-by-step-guide-1lj3</link>
      <guid>https://dev.to/p0saurabh/how-i-built-my-own-openstack-cloud-on-a-virtual-machine-step-by-step-guide-1lj3</guid>
      <description>&lt;h2&gt;
  
  
  📝 Introduction:
&lt;/h2&gt;

&lt;p&gt;OpenStack is one of the most popular open-source platforms used to build private cloud infrastructure — imagine running your own version of AWS 🏗️ right on your personal machine 💻.&lt;/p&gt;

&lt;p&gt;I wanted to dive deeper into cloud computing, so I decided to set up my own OpenStack cloud using MicroStack — a simplified, single-node version of OpenStack.&lt;/p&gt;

&lt;p&gt;In this blog, I’ll walk you through everything I did — step-by-step — from setting up the system to launching a virtual machine through the OpenStack Dashboard 🌐.&lt;/p&gt;

&lt;p&gt;Whether you're using Windows 🪟, macOS 🍎, or Linux 🐧, this guide will help you get your own private cloud up and running locally.&lt;/p&gt;

&lt;p&gt;🔍 What we’ll cover:&lt;/p&gt;

&lt;p&gt;✅ System setup (based on your OS)&lt;br&gt;
✅ Installing OpenStack using MicroStack&lt;br&gt;
✅ Uploading a cloud image&lt;br&gt;
✅ Launching your first VM from the Horizon dashboard&lt;/p&gt;

&lt;p&gt;🧰 Prerequisites (Cross-Platform Setup)&lt;br&gt;
You can follow this guide on any operating system, but make sure your system meets these minimum requirements:&lt;/p&gt;

&lt;p&gt;⚙️ Minimum System Requirements:&lt;/p&gt;

&lt;p&gt;🧠 CPU: 4 or more cores&lt;br&gt;
🧵 RAM: Minimum 16 GB&lt;br&gt;
💾 Storage: At least 100 GB free&lt;br&gt;
🌐 Internet: Required&lt;/p&gt;

&lt;h2&gt;
  
  
  🔧 Step 2: Set Up the Virtual Machine and Install Ubuntu 20.04 LTS
&lt;/h2&gt;

&lt;p&gt;To run OpenStack using MicroStack, we need a Linux environment. If you're not already on Ubuntu, follow these instructions to install it inside a virtual machine on your system.&lt;/p&gt;

&lt;p&gt;💻 VM Setup Instructions (For Windows and macOS Users)&lt;br&gt;
You'll need to create a virtual machine using either:&lt;/p&gt;

&lt;p&gt;🧱 VirtualBox (Free and open source)&lt;br&gt;
💼 VMware Workstation Player (Free for personal use)&lt;br&gt;
🍏 UTM or Parallels Desktop (macOS)&lt;/p&gt;

&lt;p&gt;⚙️ Recommended VM Configuration:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Setting&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;4 cores&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RAM&lt;/td&gt;
&lt;td&gt;16 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage&lt;/td&gt;
&lt;td&gt;100 GB (dynamically allocated is OK)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Network Mode&lt;/td&gt;
&lt;td&gt;Bridged Adapter 🌐&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OS ISO&lt;/td&gt;
&lt;td&gt;Ubuntu 20.04.6 LTS (64-bit) 🐧&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;📥 Download Ubuntu 20.04 LTS ISO&lt;br&gt;
Go to the official site and download the ISO:&lt;br&gt;
🔗 &lt;a href="https://releases.ubuntu.com/20.04/" rel="noopener noreferrer"&gt;https://releases.ubuntu.com/20.04/&lt;/a&gt;&lt;br&gt;
Make sure to choose the 64-bit Desktop image.&lt;/p&gt;

&lt;p&gt;🛠 Create and Install the VM&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open &lt;strong&gt;VirtualBox&lt;/strong&gt; or your VM tool of choice (e.g., VMware, UTM, Parallels).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new VM and choose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🖥️ &lt;strong&gt;OS Type&lt;/strong&gt;: Linux&lt;/li&gt;
&lt;li&gt;🧩 &lt;strong&gt;Version&lt;/strong&gt;: Ubuntu (64-bit)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attach the &lt;strong&gt;Ubuntu ISO&lt;/strong&gt; you downloaded earlier.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Start the VM and follow the installation prompts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Select &lt;strong&gt;"Erase disk and install Ubuntu"&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;🧠 Choose your &lt;strong&gt;keyboard layout&lt;/strong&gt;, &lt;strong&gt;language&lt;/strong&gt;, and &lt;strong&gt;timezone&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;👤 Set up a &lt;strong&gt;username and password&lt;/strong&gt; (you'll use this to log in later)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After installation completes, &lt;strong&gt;restart the VM&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once logged into Ubuntu, open the terminal with &lt;code&gt;Ctrl + Alt + T&lt;/code&gt; and you're ready to continue!&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  ☁️ Step 3: Installing MicroStack (Single-Node OpenStack Setup)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://microstack.run/" rel="noopener noreferrer"&gt;MicroStack&lt;/a&gt; is a snap-based deployment of OpenStack developed by Canonical. It allows you to run a full OpenStack cloud on a single machine — ideal for local testing, learning, and labs.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔧 What’s Included with MicroStack?
&lt;/h3&gt;

&lt;p&gt;When installed, MicroStack sets up the following OpenStack services automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🔐 &lt;strong&gt;Keystone&lt;/strong&gt; – Identity service&lt;/li&gt;
&lt;li&gt;🧠 &lt;strong&gt;Nova&lt;/strong&gt; – Compute service (virtual machines)&lt;/li&gt;
&lt;li&gt;🖼️ &lt;strong&gt;Glance&lt;/strong&gt; – Image service&lt;/li&gt;
&lt;li&gt;🌐 &lt;strong&gt;Neutron&lt;/strong&gt; – Networking service&lt;/li&gt;
&lt;li&gt;📦 &lt;strong&gt;Cinder&lt;/strong&gt; – Block storage&lt;/li&gt;
&lt;li&gt;🌍 &lt;strong&gt;Horizon&lt;/strong&gt; – Web-based dashboard&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📦 Step-by-Step MicroStack Installation
&lt;/h3&gt;

&lt;p&gt;Make sure your Ubuntu VM is running and you’re logged into your user account.&lt;/p&gt;





&lt;h4&gt;
  
  
  ✅ 1. Update Your System
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="p"&gt;```&lt;/span&gt;&lt;span class="nl"&gt;
&lt;/span&gt;
bash
sudo apt update &amp;amp;&amp;amp; sudo apt upgrade -y

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h4&gt;
  
  
  ✅ 2. Install Snap
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
bash
sudo apt install snapd -y

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h4&gt;
  
  
  ✅ 3. Install MicroStack (Edge Channel + Dev Mode)
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
bash
sudo snap install microstack --edge --devmode

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h4&gt;
  
  
  ✅ 4. Initialize MicroStack as a Control Node
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
bash
sudo microstack init --auto --control

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This command will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🔑 Generate SSL keys&lt;/li&gt;
&lt;li&gt;⚙️ Set up essential services like MySQL, RabbitMQ, Nova, Keystone&lt;/li&gt;
&lt;li&gt;🌐 Configure networking&lt;/li&gt;
&lt;li&gt;📦 Download and register a test image (CirrOS)&lt;/li&gt;
&lt;li&gt;🟢 Create default flavors and networks&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;🕐 This step can take &lt;strong&gt;10–15 minutes&lt;/strong&gt; depending on your system.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  🔐 How to Get the OpenStack Admin Password and Access the Dashboard
&lt;/h2&gt;

&lt;p&gt;After installing MicroStack, your OpenStack admin credentials are saved automatically in a file.&lt;/p&gt;




&lt;h3&gt;
  
  
  ✅ Get the Admin Password
&lt;/h3&gt;

&lt;p&gt;Run the following command in your terminal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
bash
cat /var/snap/microstack/common/etc/microstack.rc


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You’ll see output like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=your_actual_password_here
export OS_AUTH_URL=https://&amp;lt;your-vm-ip&amp;gt;:5000/v3
...


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;🔑 Copy the value of:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
bash
export OS_PASSWORD=your_actual_password_here


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;That’s your OpenStack &lt;strong&gt;admin password&lt;/strong&gt;.&lt;/p&gt;
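&lt;p&gt;If you ever want to script this step, the &lt;code&gt;export VAR=VALUE&lt;/code&gt; lines are easy to parse. A small Python sketch (a hypothetical helper; the variable names follow the sample output above):&lt;/p&gt;

```python
def parse_rc(text: str) -> dict:
    """Collect VAR -> VALUE pairs from 'export VAR=VALUE' lines in an
    OpenStack-style rc file, ignoring everything else."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("export "):
            key, _, value = line[len("export "):].partition("=")
            env[key] = value
    return env

sample = """export OS_USERNAME=admin
export OS_PASSWORD=your_actual_password_here"""
print(parse_rc(sample)["OS_PASSWORD"])  # your_actual_password_here
```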
&lt;h3&gt;
  
  
  🌐 Access the OpenStack Dashboard (Horizon)
&lt;/h3&gt;
&lt;h4&gt;
  
  
  ✅ 1. Find Your VM's IP Address
&lt;/h4&gt;

&lt;p&gt;Run:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
bash
ip a


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Look for the IP address under your main network interface (usually &lt;code&gt;enp0s3&lt;/code&gt; or &lt;code&gt;eth0&lt;/code&gt;), for example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

192.168.1.101


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h4&gt;
  
  
  ✅ 2. Open Horizon Dashboard in Your Browser
&lt;/h4&gt;

&lt;p&gt;In your host machine’s browser, visit:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

http://&amp;lt;your-vm-ip&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
http://192.168.1.101

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📌 Make sure your VM uses &lt;strong&gt;Bridged Adapter mode&lt;/strong&gt; in VirtualBox or VMware so it's accessible on your local network.&lt;/p&gt;

&lt;h4&gt;
  
  
  ✅ 3. Log In
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Username&lt;/strong&gt;: &lt;code&gt;admin&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Password&lt;/strong&gt;: The value you copied from the &lt;code&gt;.rc&lt;/code&gt; file (&lt;code&gt;OS_PASSWORD&lt;/code&gt;)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7627qjlebhwfqu05eb4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7627qjlebhwfqu05eb4.png" alt="Image description" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🖼️ Step 5: Upload a Cloud Image Manually in Horizon Dashboard
&lt;/h2&gt;

&lt;p&gt;Now that you’re logged into the OpenStack Horizon dashboard, let’s upload a cloud image so you can launch your first virtual machine.&lt;/p&gt;




&lt;h3&gt;
  
  
  ✅ What Image Do You Need?
&lt;/h3&gt;

&lt;p&gt;OpenStack supports various formats, but the most commonly used is &lt;strong&gt;QCOW2&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We’ll use the &lt;strong&gt;Ubuntu 20.04 Cloud image&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  📥 Download it from:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://cloud-images.ubuntu.com/focal/current/" rel="noopener noreferrer"&gt;https://cloud-images.ubuntu.com/focal/current/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Download the file named &lt;code&gt;focal-server-cloudimg-amd64.img&lt;/code&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  🧭 Upload the Image via Horizon (Web UI)
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Go to the &lt;strong&gt;Project&lt;/strong&gt; tab on the left sidebar.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Compute → Images&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;"Create Image"&lt;/strong&gt; button (top-right).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bzkjvaois0wwwmkicez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bzkjvaois0wwwmkicez.png" alt="Image description" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fill the form as follows:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Image Name&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ubuntu-focal&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Image Source&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Image File&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Format&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;QCOW2 - QEMU Emulator&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;x86_64&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Minimum Disk (GB)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;10&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Minimum RAM (MB)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;2048&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;File&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Choose the &lt;code&gt;focal-server-cloudimg-amd64.img&lt;/code&gt; you downloaded&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9soyi2w8dq3kazk1l77.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9soyi2w8dq3kazk1l77.png" alt="Image description" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Create Image&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;✅ The image will now be uploaded and processed. Once completed, it will appear in the &lt;strong&gt;Images&lt;/strong&gt; list.&lt;/p&gt;

&lt;h2&gt;
  
  
  🎯 Final Thoughts: Your OpenStack Cloud Is Now Live!
&lt;/h2&gt;

&lt;p&gt;Congratulations! 🎉 You’ve successfully built your very own OpenStack cloud environment — from setting up a VM and installing MicroStack to uploading an image and launching your first instance. This experience gives you real insight into how modern cloud platforms work under the hood.&lt;/p&gt;

&lt;p&gt;But our journey doesn’t stop here — OpenStack is made up of many powerful components, each handling a key part of the cloud ecosystem.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧩 Key OpenStack Services You Just Used
&lt;/h2&gt;

&lt;p&gt;Here’s a quick overview of the main services you interacted with:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Service&lt;/th&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;🌐 &lt;strong&gt;Horizon&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;&lt;code&gt;horizon&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The web-based dashboard used to manage and monitor OpenStack resources.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🔐 &lt;strong&gt;Keystone&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;&lt;code&gt;keystone&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Handles identity, authentication, and role-based access across OpenStack.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🧠 &lt;strong&gt;Nova&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;&lt;code&gt;nova&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The compute engine that launches and manages virtual machines (instances).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🖼️ &lt;strong&gt;Glance&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;&lt;code&gt;glance&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manages VM images. You uploaded your Ubuntu image here.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🌐 &lt;strong&gt;Neutron&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;&lt;code&gt;neutron&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Provides networking-as-a-service, including IP management, subnets, and routers.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;📦 &lt;strong&gt;Cinder&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;&lt;code&gt;cinder&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Offers block storage for VMs (volumes that can persist after reboots).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🐧 &lt;strong&gt;Cloud-Init&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;(helper tool)&lt;/td&gt;
&lt;td&gt;Initializes cloud images on first boot (sets SSH keys, usernames, etc.).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
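&lt;p&gt;You can also see these services for yourself: Keystone keeps a catalog of every registered service and its endpoints. A quick sketch (again assuming the MicroStack snap alias):&lt;/p&gt;

```shell
# List every service (Nova, Glance, Neutron, Cinder, ...) registered in Keystone's catalog
microstack.openstack catalog list
```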




&lt;h2&gt;
  
  
  🚀 What You Can Try Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create and attach &lt;strong&gt;volumes&lt;/strong&gt; using Cinder.&lt;/li&gt;
&lt;li&gt;Set up &lt;strong&gt;floating IPs&lt;/strong&gt; for external SSH access.&lt;/li&gt;
&lt;li&gt;Create &lt;strong&gt;security groups&lt;/strong&gt; and custom rules.&lt;/li&gt;
&lt;li&gt;Explore &lt;strong&gt;OpenStack CLI&lt;/strong&gt; for scripting and automation.&lt;/li&gt;
&lt;li&gt;Test multi-instance deployment and scaling.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  💬 Wrap-Up
&lt;/h2&gt;

&lt;p&gt;Building your own cloud gives you hands-on experience with how real-world infrastructure runs. Whether you're a student, a DevOps engineer, or a curious learner, running OpenStack locally is a massive step forward in understanding cloud-native technologies.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;br&gt;&lt;br&gt;
Feel free to share, comment, or ask questions if you get stuck anywhere. Happy hacking! 😄☁️&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
