<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ednah</title>
    <description>The latest articles on DEV Community by Ednah (@ed_akoth).</description>
    <link>https://dev.to/ed_akoth</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1169127%2F5a094bc3-e3a1-4c11-ada3-fd95af8ef784.png</url>
      <title>DEV Community: Ednah</title>
      <link>https://dev.to/ed_akoth</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ed_akoth"/>
    <language>en</language>
    <item>
      <title>A Beginner's Guide to Docker Image Commands: Managing Docker Images</title>
      <dc:creator>Ednah</dc:creator>
      <pubDate>Wed, 13 Mar 2024 19:22:57 +0000</pubDate>
      <link>https://dev.to/ed_akoth/a-beginners-guide-to-docker-image-commands-managing-docker-images-lb</link>
      <guid>https://dev.to/ed_akoth/a-beginners-guide-to-docker-image-commands-managing-docker-images-lb</guid>
      <description>&lt;p&gt;In the world of containerization, Docker has emerged as a dominant force, simplifying the way developers build, ship, and run applications. At the core of Docker's functionality are Docker images, lightweight, standalone, executable packages that contain everything needed to run a piece of software, including the code, runtime, libraries, and dependencies.&lt;/p&gt;

&lt;p&gt;Managing Docker images is a crucial aspect of working with Docker, and understanding the various Docker image commands is key to efficiently handling your Docker images. In this blog post, we'll explore some of the most common Docker image commands, demystifying their usage and highlighting best practices along the way.&lt;/p&gt;

&lt;p&gt;Whether you're just getting started with Docker or looking to enhance your Docker skills, this guide will equip you with the knowledge you need to manage Docker images like a seasoned pro. Let's dive in!&lt;/p&gt;

&lt;h2&gt;
  
  
  docker pull
&lt;/h2&gt;

&lt;p&gt;This command downloads a Docker image from a public or private registry (for example, Docker Hub) to your local machine.&lt;/p&gt;

&lt;p&gt;Docker images are stored in repositories, which can be public (like Docker Hub) or private (self-hosted or on a different registry). Thus, when you use docker pull, Docker contacts the specified registry and downloads the image to your local machine. If the image already exists locally, Docker will check if there is a newer version available on the registry and download it if necessary.&lt;/p&gt;

&lt;h4&gt;
  
  
  Syntax
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull &amp;lt;image_name&amp;gt;:&amp;lt;tag&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull ubuntu:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This downloads the latest version of the official Ubuntu image.&lt;/p&gt;

&lt;h2&gt;
  
  
  docker images
&lt;/h2&gt;

&lt;p&gt;This command will list all Docker images that are currently available on your local system. &lt;/p&gt;

&lt;p&gt;Docker images are stored locally on your machine after being downloaded from a registry or built using a Dockerfile. The docker images command provides a list of all these images, along with their repository name, tag, image ID, and size. It allows you to see what images you have available to use for creating and running containers. &lt;/p&gt;

&lt;p&gt;You can use the output of docker images to manage your images, such as removing images that are no longer needed or checking the versions of images that you have downloaded. Images listed as &lt;code&gt;&amp;lt;none&amp;gt;&lt;/code&gt; in the repository column are usually intermediate images created during the build process and can be removed using &lt;code&gt;docker rmi&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Syntax
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqp7sp9ld6qexc75q7jzi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqp7sp9ld6qexc75q7jzi.png" alt="Image description" width="748" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, the output shows three images: nginx with the latest tag, mysql with the 5.7 tag, and an unnamed image (with repository and tag set to &lt;code&gt;&amp;lt;none&amp;gt;&lt;/code&gt;) that can be removed if not needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  docker run
&lt;/h2&gt;

&lt;p&gt;docker run creates a running container instance from a specified Docker image.&lt;/p&gt;

&lt;p&gt;When you run &lt;code&gt;docker run&lt;/code&gt;, Docker looks for the specified image locally. If the image is not found locally, Docker will attempt to download it from a registry before creating the container.&lt;/p&gt;

&lt;p&gt;You can also specify options and arguments with the docker run command to customize the container's behavior, such as setting environment variables, exposing ports, or mounting volumes.&lt;/p&gt;

&lt;h4&gt;
  
  
  Syntax
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run [options] &amp;lt;image_name&amp;gt;:&amp;lt;tag&amp;gt; [command] [arguments]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Options (e.g., &lt;code&gt;-d&lt;/code&gt; for detached mode, &lt;code&gt;-p&lt;/code&gt; for port mapping) can be used to customize container behavior.&lt;/li&gt;
&lt;li&gt;Replace &lt;code&gt;&amp;lt;image_name&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;tag&amp;gt;&lt;/code&gt; as explained earlier.&lt;/li&gt;
&lt;li&gt;command (optional): The command to run within the container.&lt;/li&gt;
&lt;li&gt;arguments (optional): Arguments to pass to the command.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;

&lt;p&gt;Let’s look at an example to understand this syntax better&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name my-container -p 8080:80 nginx:latest nginx -g 'daemon off;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example the following have been used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Options: &lt;code&gt;-d, --name, -p&lt;/code&gt; are options that modify the behavior of the docker run command.&lt;/li&gt;
&lt;li&gt;-d runs the container in detached mode, meaning it runs in the background.&lt;/li&gt;
&lt;li&gt;--name my-container sets the name of the container to my-container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-p 8080:80&lt;/code&gt; maps port 8080 on the host to port 80 in the container, allowing you to access the nginx web server running in the container from your host machine at &lt;a href="http://localhost:8080"&gt;http://localhost:8080&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Image name and tag:  &lt;code&gt;nginx:latest&lt;/code&gt; specifies the Docker image to use (nginx in this case) and its tag (latest).&lt;/li&gt;
&lt;li&gt;Command: &lt;code&gt;nginx -g 'daemon off;'&lt;/code&gt; is the command that overrides the default command specified in the Dockerfile and is executed when the container starts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s another simpler example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -it ubuntu:latest bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This starts a new container based on the ubuntu:latest image in interactive mode (&lt;code&gt;-it&lt;/code&gt;) and opens an interactive Bash shell inside it. This allows you to execute commands and interact with the Ubuntu environment within the container.&lt;/p&gt;

&lt;h2&gt;
  
  
  docker inspect
&lt;/h2&gt;

&lt;p&gt;Displays detailed information about a specific Docker image.&lt;/p&gt;

&lt;p&gt;In general, the docker inspect command provides detailed information about a Docker object, such as an image or container. It returns a JSON object containing various metadata and configuration details.&lt;/p&gt;

&lt;p&gt;When used with an image, docker inspect provides information like the image ID, size, tags, creation date, and details about the image's layers.&lt;/p&gt;

&lt;p&gt;When used with a container, docker inspect provides information about the container's configuration, network settings, volumes, and more.&lt;/p&gt;

&lt;p&gt;You can use docker inspect to troubleshoot issues, understand how images or containers are configured, and retrieve specific details for scripting or automation purposes.&lt;/p&gt;
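For scripting, a handy addition is the &lt;code&gt;--format&lt;/code&gt; flag, which takes a Go template and pulls a single field out of the JSON instead of printing all of it. A small sketch (it assumes ubuntu:latest is present locally and a container named my-container exists):

```shell
# Extract single fields from the inspect JSON with --format (a Go template)
docker inspect --format '{{.Id}}' ubuntu:latest
docker inspect --format '{{.State.Status}}' my-container
```

The first command prints only the image ID; the second prints only the container's status, which is convenient in automation scripts.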

&lt;h4&gt;
  
  
  Syntax
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker inspect &amp;lt;image_or_container_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Example on an Image
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker inspect ubuntu:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
    {
        "Id": "sha256:3339fe25b7e8c0da71a528d937d7acae0861dfb6421ee8bce4b5a605cb14059a",
        "RepoTags": [
            "ubuntu:latest"
        ],
        "RepoDigests": [],
        "Parent": "",
        "Comment": "",
        "Created": "2022-01-01T00:00:00.000000000Z",
        "Container": "",
        "ContainerConfig": {
            "Hostname": "abcd1234",
            "Domainname": "",
            "User": "",
            ...
        },
        ...
    }
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s an explanation of a few key elements from the above output:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;"Id": "sha256:3339fe25b7e8c0da71a528d937d7acae0861dfb6421ee8bce4b5a605cb14059a"&lt;/code&gt;:  Unique identifier (SHA256 hash) for the container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"RepoTags": ["ubuntu:latest"]&lt;/code&gt;: Tags associated with the image, in this case, the latest tag for the ubuntu repository.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"Created": "2022-01-01T00:00:00.000000000Z"&lt;/code&gt;: Date and time when the container was created.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"ContainerConfig"&lt;/code&gt;: { ... }: Configuration details for the container, such as hostname, domain name, user, etc. This section provides the configuration used when a container is created from this image.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Example on a container
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker inspect my-container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
    {
        "Id": "abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234",
        "Created": "2022-01-01T00:00:00.000000000Z",
        "Path": "bash",
        "Args": [],
        "State": {
            "Status": "running",
            "Running": true,
            ...
        },
        "Image": "sha256:3339fe25b7e8c0da71a528d937d7acae0861dfb6421ee8bce4b5a605cb14059a",
        ...
    }
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s an explanation of a few key elements from the above output:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;"Id":"abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234"&lt;/code&gt;: Unique identifier (SHA256 hash) for the container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"Created": "2022-01-01T00:00:00.000000000Z"&lt;/code&gt;: Date and time when the container was created.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"Path": "bash"&lt;/code&gt;: The path to the command that was run when the container was started. In this case, it is bash. This means that when the container was started, it executed the bash command as its main process. In Docker, the command specified after the image name in the docker run command (or the default command specified in the Dockerfile if no command is specified) is the initial command that the container runs. In this case, the bash command is commonly used to start an interactive shell within the container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"State": { "Status": "running", "Running": true, ... }&lt;/code&gt;: Information about the current state of the container, such as whether it is running, paused, or stopped.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"Image": "sha256:3339fe25b7e8c0da71a528d937d7acae0861dfb6421ee8bce4b5a605cb14059a"&lt;/code&gt;: The image used to create the container, identified by its image ID.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  docker build
&lt;/h2&gt;

&lt;p&gt;Build a Docker image from a Dockerfile and context.&lt;/p&gt;

&lt;p&gt;The Dockerfile contains instructions on how to build the image, such as which base image to use, what commands to run, and what files to include.&lt;/p&gt;

&lt;p&gt;docker build command reads the Dockerfile and executes the instructions to create a new Docker image.&lt;/p&gt;

&lt;p&gt;The context is the set of files and directories located at the specified PATH or URL; these are sent to the Docker daemon for building the image. In other words, when you run docker build, you hand the Docker daemon a "context", and Docker uses the files in it to follow the instructions in your Dockerfile. For example, if you run docker build ., the current directory (denoted by .) and all its files and subdirectories are sent to Docker as the context.&lt;/p&gt;

&lt;p&gt;Each instruction in the Dockerfile adds a new layer to the image, allowing for efficient reuse of layers between images.&lt;/p&gt;
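For illustration, here is a minimal, hypothetical Dockerfile for a Python application; every instruction in it (FROM, WORKDIR, COPY, RUN, CMD) adds a layer to the resulting image:

```dockerfile
# Hypothetical minimal Dockerfile for a Python app; each instruction adds a layer
FROM python:3.12-slim
WORKDIR /app
# Copy the dependency list first so the install layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the rest of the application code
COPY . .
# Default command when a container starts from this image
CMD ["python", "app.py"]
```

Copying requirements.txt before the application code is a common trick: the expensive dependency-install layer is rebuilt only when the requirements change, not on every code edit.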

&lt;h4&gt;
  
  
  Syntax
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build [OPTIONS] PATH | URL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some options that can be passed to this command include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-t, --tag&lt;/code&gt;: Name and optionally a tag in the 'name:tag' format to apply to the resulting image.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--build-arg&lt;/code&gt;: Set build-time variables that are accessed like environment variables in the Dockerfile.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-f, --file&lt;/code&gt;: Name of the Dockerfile (Default is 'PATH/Dockerfile').&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--force-rm&lt;/code&gt;: Always remove intermediate containers, even after unsuccessful builds.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--no-cache&lt;/code&gt;: Do not use cache when building the image.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--pull&lt;/code&gt;: Always attempt to pull a newer version of the base image before building.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Example:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t my-image:latest .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, docker build would build a Docker image named my-image with the tag latest using the Dockerfile located in the current directory (.).&lt;/p&gt;

&lt;h2&gt;
  
  
  docker save
&lt;/h2&gt;

&lt;p&gt;This command  saves a Docker image as a tar archive to the local filesystem.&lt;/p&gt;

&lt;p&gt;It is used to export a Docker image as a tarball file (a file archive format used to bundle files and directories together, common on Unix and Linux systems and roughly equivalent to a ZIP file on Windows), which allows you to save the image locally or transfer it to another machine.&lt;/p&gt;

&lt;p&gt;The saved tarball contains all the layers of the Docker image, along with its metadata.&lt;/p&gt;

&lt;p&gt;Once saved, you can use docker load  (next command below) to import the image back into Docker on the same or another machine. This command is useful for creating backups of Docker images or sharing them with others who do not have direct access to a Docker registry.&lt;/p&gt;

&lt;h4&gt;
  
  
  Syntax
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker save &amp;lt;image_name&amp;gt;:&amp;lt;tag&amp;gt; -o &amp;lt;output_file.tar&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker save ubuntu:latest -o ubuntu_latest.tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the docker save command saves the ubuntu:latest image as a tarball file named ubuntu_latest.tar.&lt;/p&gt;

&lt;h2&gt;
  
  
  docker load
&lt;/h2&gt;

&lt;p&gt;This command loads a Docker image from a tar archive.&lt;/p&gt;

&lt;p&gt;It is used to import a Docker image that was previously saved as a tarball file using the docker save command. It reads the tarball file and imports the image into the Docker image repository on your local system.&lt;/p&gt;

&lt;p&gt;After loading the image, you can use it to create and run containers as you would with any other Docker image.&lt;/p&gt;

&lt;h4&gt;
  
  
  Syntax
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker load &amp;lt;image_tar_file&amp;gt;  # Load from a tar archive
docker load  # Load from standard input (STDIN)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker load -i ubuntu_latest.tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the docker load command imports the ubuntu_latest.tar tarball file, which contains a Docker image, into the local Docker image repository.&lt;/p&gt;

&lt;p&gt;This command essentially enables you to import Docker images from external sources like tar archives, allowing you to use pre-built or custom images without directly pulling them from a registry.&lt;/p&gt;

&lt;h2&gt;
  
  
  docker tag
&lt;/h2&gt;

&lt;p&gt;This command creates a tag for an existing Docker image.&lt;/p&gt;

&lt;p&gt;Tags are used to identify different versions or variants of an image. By default, Docker images are tagged with latest if no tag is specified. However, it's good practice to use explicit tags to version your images.&lt;/p&gt;

&lt;p&gt;Tags are specified in the format &lt;code&gt;&amp;lt;repository&amp;gt;:&amp;lt;tag&amp;gt;&lt;/code&gt;, where &lt;code&gt;&amp;lt;repository&amp;gt;&lt;/code&gt; is the name of the image repository and &lt;code&gt;&amp;lt;tag&amp;gt;&lt;/code&gt; is the tag you want to assign to the image. After tagging an image, you can use the new tag to reference the image when running containers or pushing it to a registry.&lt;/p&gt;

&lt;h4&gt;
  
  
  Syntax
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag &amp;lt;image_name&amp;gt;:&amp;lt;existing_tag&amp;gt; &amp;lt;image_name&amp;gt;:&amp;lt;new_tag&amp;gt;
docker tag &amp;lt;source_image_name&amp;gt;:&amp;lt;source_tag&amp;gt; &amp;lt;target_image_name&amp;gt;:&amp;lt;target_tag&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Examples
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag ubuntu:latest ubuntu:mytag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the docker tag command creates a new tag mytag for the ubuntu:latest image, resulting in the image being tagged as ubuntu:mytag.&lt;/p&gt;

&lt;p&gt;Here's another example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull ubuntu:18.04  # Download the Ubuntu 18.04 image
docker tag ubuntu:18.04 my-ubuntu:v1  # Create a custom tag 'v1' for the image

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a new tag named v1 for the existing ubuntu:18.04 image.&lt;/p&gt;

&lt;h2&gt;
  
  
  docker rmi
&lt;/h2&gt;

&lt;p&gt;Removes a Docker image.&lt;/p&gt;

&lt;p&gt;This command is used to delete images from your local machine that are no longer needed to free up disk space. It can remove one or more Docker images from your local Docker image cache.&lt;/p&gt;

&lt;p&gt;You can specify the image to remove by its name or ID. If the image is in use by a container, you will need to stop and remove the container before you can remove the image; use &lt;code&gt;docker stop&lt;/code&gt; to halt the running container.&lt;/p&gt;

&lt;h4&gt;
  
  
  Syntax
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rmi &amp;lt;image_name_or_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rmi ubuntu:mytag
//docker rmi removes the ubuntu:mytag image

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  docker image prune
&lt;/h2&gt;

&lt;p&gt;Removes unused Docker images from the local system.&lt;/p&gt;

&lt;p&gt;This command removes images that are not referenced by any containers, cleaning up your local Docker image cache and freeing disk space. By default it removes only dangling images, those whose repository and tag show as &lt;code&gt;&amp;lt;none&amp;gt;&lt;/code&gt;, which are usually intermediate images created during the build process. You can use the &lt;code&gt;-a&lt;/code&gt; or &lt;code&gt;--all&lt;/code&gt; flag to also remove tagged images that are not in use by any container.&lt;/p&gt;

&lt;h4&gt;
  
  
  Syntax
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker image prune [OPTIONS]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-a, --all&lt;/code&gt;: Remove all unused images, not just dangling ones.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--filter filter&lt;/code&gt;: Provide filter values (e.g., until, label, before) to specify the images to prune.&lt;/li&gt;
&lt;/ul&gt;
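As a sketch of how these options combine (the 24-hour value is just illustrative):

```shell
docker image prune                           # remove dangling images, with a confirmation prompt
docker image prune -a -f                     # remove all unused images, skipping the prompt (-f / --force)
docker image prune -a --filter "until=24h"   # only prune unused images created more than 24 hours ago
```
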

&lt;h2&gt;
  
  
  docker push
&lt;/h2&gt;

&lt;p&gt;This command is used to push a Docker image to a remote registry, such as Docker Hub or a private registry. It allows you to share your Docker images with others or deploy them to production environments. &lt;/p&gt;

&lt;p&gt;Before you can push an image, you need to tag it with the name of the registry where you want to push it. This is typically done using the &lt;code&gt;-t&lt;/code&gt; flag when building the image or using the &lt;code&gt;docker tag&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;Once the image is tagged, you can use docker push to upload it to the specified registry. &lt;/p&gt;

&lt;p&gt;You'll need to authenticate with the registry using &lt;code&gt;docker login&lt;/code&gt; before pushing the image.&lt;/p&gt;

&lt;p&gt;When you push an image, Docker uploads all the image layers to the registry. This allows others to pull the image and use it on their own systems.&lt;/p&gt;
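Putting those steps together, a typical push workflow looks roughly like this (my-username and my-image are placeholders for your own account and image):

```shell
# Authenticate with the registry (Docker Hub by default)
docker login
# Tag the local image with your registry account name
docker tag my-image:latest my-username/my-image:latest
# Upload the image layers to the registry
docker push my-username/my-image:latest
```
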

&lt;h4&gt;
  
  
  Syntax
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push &amp;lt;image_name&amp;gt;:&amp;lt;tag&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push my-username/my-image:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, docker push would push the my-image image with the latest tag to the Docker Hub registry under the my-username account.&lt;/p&gt;

&lt;p&gt;Throughout this blog post, we've covered some of the most common Docker image commands, including docker build, docker tag, docker push, and more. We've explored how each command works and how it can be used to enhance your Docker experience.&lt;/p&gt;

&lt;p&gt;Start experimenting with Docker image commands today and see how they can transform the way you build, ship, and run your applications. &lt;/p&gt;

&lt;p&gt;Happy containerizing!&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>docker</category>
    </item>
    <item>
      <title>Introduction to Containerization and Docker</title>
      <dc:creator>Ednah</dc:creator>
      <pubDate>Mon, 11 Mar 2024 18:56:16 +0000</pubDate>
      <link>https://dev.to/ed_akoth/introduction-to-containerization-and-docker-30bn</link>
      <guid>https://dev.to/ed_akoth/introduction-to-containerization-and-docker-30bn</guid>
      <description>&lt;p&gt;In today's fast-paced world of software development, efficiency and agility are key. Developers are constantly seeking ways to streamline their workflows, reduce dependencies, and improve the portability of their applications. This is where Docker comes in. Docker is a powerful tool that allows developers to package their applications and dependencies into lightweight, portable containers that can run consistently across different environments. In this blog, we will explore the fundamentals of containerization, Docker and how it has revolutionized the way we build, ship, and run applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Container?
&lt;/h2&gt;

&lt;p&gt;A container is a way to package an application with all the necessary dependencies and configurations it needs. This package is highly portable and can easily be shared and moved between people, devices, and teams. This portability, together with the compactness of having everything packaged in one isolated environment, is what makes containers so effective at streamlining development and deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where do containers live?
&lt;/h3&gt;

&lt;p&gt;As we mentioned, containers are portable, which raises a question: where are they stored? There must be a place where they live, enabling them to be shared and moved around.&lt;/p&gt;

&lt;p&gt;Containers are stored in a container repository, a special storage area for containers. Many companies have their own private repositories where they host all their containers. There are also public repositories such as &lt;a href="https://hub.docker.com/"&gt;Docker Hub&lt;/a&gt;, where you can browse and find any application container you want.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9j74n9z09hk8pps2dn00.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9j74n9z09hk8pps2dn00.png" alt="Image description" width="800" height="786"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image Above: Dockerhub, a cloud-based registry service that allows you to store and share Docker container images. Docker Hub hosts millions of container images, including official images from Docker and user-contributed images. Users can search for images, pull them to their local environment, and push their own images to share with others.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do containers improve the development process?
&lt;/h3&gt;

&lt;p&gt;In the era before containers revolutionized development, teams building applications faced a cumbersome process. When a team of developers and engineers worked on an application, each of them had to install the required services directly on their operating system. Suppose the team is building a web application that needs PostgreSQL and Redis: every developer would have to install and configure both technologies in their local development environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8w79tmzj0co2i7n463qj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8w79tmzj0co2i7n463qj.png" alt="Image description" width="549" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This approach is tedious and error-prone when every developer does it individually. Given that each person on the team may have different underlying configurations, or even a different operating system, the transferability of those configurations is diminished, often leading to errors when the application is shared between developers.&lt;/p&gt;

&lt;p&gt;With containers, those services/technologies don’t have to be installed directly on a developer's operating system. The container provides a self-contained environment that encapsulates the application and its dependencies, including libraries, binaries, and configuration files. This means that developers can work in a consistent environment regardless of their underlying operating system. Each developer only needs to fetch this established container and run it on their local machines.&lt;/p&gt;
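To make this concrete, the team from the PostgreSQL-and-Redis example could start its services as containers instead of installing them, roughly like this (the version tags, names, and password are illustrative):

```shell
# Run PostgreSQL in a container, exposing its default port to the host
docker run -d --name dev-postgres -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:16
# Run Redis the same way
docker run -d --name dev-redis -p 6379:6379 redis:7
```

Every developer runs the same two commands and gets an identical environment, regardless of their host operating system.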

&lt;h3&gt;
  
  
  How do containers improve the deployment process?
&lt;/h3&gt;

&lt;p&gt;In the traditional deployment process, the development team produces artifacts, such as a JAR file for the application, along with instructions on how to install and configure these artifacts on the server. Additionally, other services, like databases, are included with their own set of installation and configuration instructions. These artifacts and instructions are then handed over to the operations team, who is responsible for setting up the environment for deployment.&lt;/p&gt;

&lt;p&gt;However, this approach has several drawbacks. Firstly, configuring and installing everything directly on the operating system can lead to conflicts with dependency versions and issues when multiple services are running on the same host. Secondly, relying on textual guides for instructions can result in misunderstandings between the development and operations teams. Developers may forget to mention important configuration details, or the operations team may misinterpret the instructions, leading to deployment failures.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5d45kz52vtu36ngzmp8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5d45kz52vtu36ngzmp8.png" alt="Image description" width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With containers, this process is simplified because developers and operations teams work together to package the entire application, including its configuration and dependencies, into a single container. This encapsulation eliminates the need to configure anything directly on the server. Instead, deploying the application is as simple as running a Docker command to pull the container image from a repository and then running it. While this is a simplified explanation, it addresses the challenges we discussed earlier by making environmental configuration on the server much easier. The only initial setup required is to install and set up the Docker runtime on the server, which is a one-time effort.&lt;/p&gt;

&lt;h2&gt;
  
  
  Containers vs Virtual Machines
&lt;/h2&gt;

&lt;p&gt;Containers and virtual machines (VMs) are both used to isolate applications and their dependencies, but they differ in several key ways.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl13fsrvji91mh2c681cq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl13fsrvji91mh2c681cq.png" alt="Image description" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following are the key differences:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Architecture:&lt;/strong&gt; VMs virtualize the hardware, creating a full-fledged virtual machine with its own operating system (OS), kernel, and libraries. Containers, on the other hand, virtualize the OS, sharing the host OS kernel but providing isolated user spaces for applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Usage:&lt;/strong&gt; VMs are typically heavier in terms of resource usage because they require a full OS to be installed and run for each VM. Containers are lightweight, as they share the host OS kernel and only include the necessary libraries and dependencies for the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Isolation:&lt;/strong&gt; VMs provide stronger isolation between applications since each VM has its own OS and kernel. Containers share the host OS kernel but provide process-level isolation using namespaces and control groups (cgroups).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Startup Time:&lt;/strong&gt; Containers have faster startup times compared to VMs, as they do not need to boot an entire OS. VMs can take longer to start as they need to boot a full OS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt; While both containers and VMs provide isolation, VMs are often considered more secure due to the stronger isolation provided by virtualizing the hardware. Containers share the host OS kernel, which can potentially introduce security vulnerabilities if not properly configured.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
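
&lt;p&gt;The kernel-sharing point is easy to verify on a Linux host: a container reports the same kernel release as the host, because there is no guest OS in between (&lt;code&gt;alpine&lt;/code&gt; is just a conveniently small image for the check):&lt;/p&gt;

```shell
# Skip gracefully on machines without Docker installed.
command -v docker >/dev/null || { echo "Docker not installed; skipping"; exit 0; }

# Kernel release as seen by the host...
uname -r

# ...and as seen from inside a container: the same, since the kernel is shared.
docker run --rm alpine uname -r
```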

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdhfojl34o98ljj6rl8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdhfojl34o98ljj6rl8y.png" alt="Image description" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3tf5ncokyghle0ndyzwa.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3tf5ncokyghle0ndyzwa.jpeg" alt="Image description" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have an idea of what containers are, let’s delve into what Docker is. &lt;/p&gt;

&lt;p&gt;Docker is an open-source platform that facilitates the development, deployment, and management of applications using containers. It applies the containerization concepts discussed above to package an application and its dependencies into a single unit, which can then be shared and deployed across different environments without any changes, ensuring consistency.&lt;/p&gt;

&lt;p&gt;Here are some key Docker concepts to know:&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Engine:
&lt;/h3&gt;

&lt;p&gt;Docker Engine is an open-source containerization technology for building and containerizing your applications. Simply put, it is the workhorse behind the scenes, managing the entire container lifecycle.&lt;/p&gt;

&lt;p&gt;It consists of three main components:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Daemon (dockerd):&lt;/strong&gt; This is a background service that runs continuously on your system. It's responsible for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Creating and managing containers:&lt;/strong&gt; The daemon listens for commands from the CLI and interprets instructions to build, run, stop, and manage container instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image management:&lt;/strong&gt; It handles tasks like pulling images from registries, building images from Dockerfiles, and managing storage for container images and layers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network configuration:&lt;/strong&gt; The daemon sets up networking for containers, allowing them to communicate with each other and the outside world, following configurations you define.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource allocation:&lt;/strong&gt; The daemon manages resource allocation for containers like CPU, memory, and storage, ensuring smooth operation within defined limits.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pivkk8snhq2oen55pdv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pivkk8snhq2oen55pdv.png" alt="Image description" width="636" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;REST API:&lt;/strong&gt; This built-in API allows programmatic interaction with Docker Engine. Developers can leverage tools and scripts to automate tasks like container deployment and management through the API.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker CLI (docker):&lt;/strong&gt; This is the user interface most people interact with. It's a command-line tool that communicates with the Docker daemon to execute various commands related to container operations.&lt;/p&gt;
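
&lt;p&gt;You can see all three components interact yourself. The same version query can go directly to the daemon over its REST API on the default Unix socket, or through the CLI, which issues the same API call under the hood (the socket path assumes a default Linux install):&lt;/p&gt;

```shell
# Skip gracefully on machines without Docker installed.
command -v docker >/dev/null || { echo "Docker not installed; skipping"; exit 0; }

# Ask the daemon for its version directly over the REST API...
curl --unix-socket /var/run/docker.sock http://localhost/version

# ...and through the CLI, which talks to the same socket for you.
docker version --format '{{.Server.Version}}'
```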

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rd5kyfr94evytcjyvsj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rd5kyfr94evytcjyvsj.png" alt="Image description" width="794" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Images
&lt;/h2&gt;

&lt;p&gt;An image is the artifact or application package (application code, runtime, libraries, dependencies, and so on) required to set up a fully operational container environment. This is the artifact that can be moved from device to device or shared between developers on a team. It is a read-only template containing a set of instructions for creating a container that can run on the Docker platform.&lt;/p&gt;

&lt;p&gt;You can create a Docker image by using one of two methods:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dockerfile:&lt;/strong&gt; This is the most common method used to create Docker images. A Dockerfile is a text file that contains a series of instructions for building a Docker image. You specify the base image, add any dependencies, copy the application code, and configure the container. Once you have written the Dockerfile, you can build the image using the &lt;code&gt;docker build&lt;/code&gt; command. Here’s an example of a Dockerfile for a Node.js application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use the official Node.js 14 image as the base image
FROM node:14

# Set the working directory in the container
WORKDIR /app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install npm dependencies
RUN npm install

# Copy the rest of the application code to the working directory
COPY . .

# Expose port 3000
EXPOSE 3000

# Command to run the application
CMD ["node", "app.js"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
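
&lt;p&gt;With that Dockerfile saved in the project root, the image is built and run with &lt;code&gt;docker build&lt;/code&gt; and &lt;code&gt;docker run&lt;/code&gt; (the tag &lt;code&gt;my-node-app&lt;/code&gt; is an arbitrary example name):&lt;/p&gt;

```shell
# Skip gracefully on machines without Docker installed.
command -v docker >/dev/null || { echo "Docker not installed; skipping"; exit 0; }

# Build an image from the Dockerfile in the current directory and tag it.
docker build -t my-node-app .

# Start a container from the image, mapping host port 3000 to container port 3000.
docker run -d -p 3000:3000 my-node-app
```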



&lt;p&gt;&lt;strong&gt;From Existing Container:&lt;/strong&gt; In this method, you start a container from an existing Docker image, make changes to the container interactively (e.g., installing software, updating configurations), and then save the modified container as a new Docker image using the &lt;code&gt;docker commit&lt;/code&gt; command. &lt;/p&gt;
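
&lt;p&gt;A minimal sketch of this workflow, with placeholder names for the container and the resulting image:&lt;/p&gt;

```shell
# Skip gracefully on machines without Docker installed.
command -v docker >/dev/null || { echo "Docker not installed; skipping"; exit 0; }

# Start a container from an existing image and make changes inside it interactively.
docker run -it --name temp-container ubuntu bash

# After exiting, save the modified container as a new image.
docker commit temp-container my-custom-image:v1
```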

&lt;p&gt;While the latter method is possible, it is generally not recommended for production use because it can lead to inconsistencies and difficulties in managing images. Using a Dockerfile to define the image's configuration and dependencies is a more standard and reproducible approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Containers
&lt;/h2&gt;

&lt;p&gt;Think of a container as a running image with its own isolated environment. A container is a lightweight, isolated, and portable environment in which an application runs. Containers are created from images using the &lt;code&gt;docker run&lt;/code&gt; command and can be started, stopped, deleted, and managed using Docker commands. Any changes made to a container (e.g., modifying files, installing software) are lost when the container is deleted unless those changes are committed to a new image.&lt;/p&gt;
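
&lt;p&gt;The basic lifecycle looks like this (&lt;code&gt;web&lt;/code&gt; is an arbitrary container name and &lt;code&gt;nginx&lt;/code&gt; a stand-in image):&lt;/p&gt;

```shell
# Skip gracefully on machines without Docker installed.
command -v docker >/dev/null || { echo "Docker not installed; skipping"; exit 0; }

docker run -d --name web nginx   # create and start a container from an image
docker stop web                  # stop it; its filesystem changes are kept
docker start web                 # start the same container again
docker rm -f web                 # delete it; uncommitted changes are gone
```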

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7k7zeixst8m0wk0xw0sn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7k7zeixst8m0wk0xw0sn.png" alt="Image description" width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fht5v0229e4i4lyo3qv98.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fht5v0229e4i4lyo3qv98.png" alt="Image description" width="653" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, Docker has revolutionized the way we develop, package, and deploy applications. Its lightweight containers provide a flexible and efficient solution for managing dependencies and isolating applications, leading to faster development cycles, improved scalability, and enhanced portability. By simplifying the deployment process and enabling greater consistency across different environments, Docker has become an essential tool for modern software development. As technology continues to evolve, Docker is sure to remain at the forefront, empowering developers and organizations to innovate and deliver high-quality software solutions.&lt;/p&gt;

&lt;p&gt;Stay tuned for more articles on Docker.&lt;/p&gt;

&lt;p&gt;Happy Dockering!&lt;/p&gt;

</description>
      <category>containers</category>
      <category>docker</category>
    </item>
    <item>
      <title>Deploying a Dockerized React Application to AWS Elastic Beanstalk</title>
      <dc:creator>Ednah</dc:creator>
      <pubDate>Sun, 10 Mar 2024 02:29:00 +0000</pubDate>
      <link>https://dev.to/ed_akoth/deploying-a-dockerized-react-application-to-aws-elastic-beanstalk-d93</link>
      <guid>https://dev.to/ed_akoth/deploying-a-dockerized-react-application-to-aws-elastic-beanstalk-d93</guid>
<description>&lt;p&gt;In this blog post, we will guide you through the process of running a simple React application inside a Docker container. You can find the React app on the GitHub repository linked below. We will start by preparing the React application for Docker and then move on to setting up Elastic Beanstalk to deploy the application. Finally, we will configure GitHub Actions for automated deployment.&lt;/p&gt;

&lt;p&gt;Here’s the blog outline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Preparing the React Application:&lt;/strong&gt; Using a Dockerfile to define the environment and dependencies and running the React application inside a Docker container.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Setting up Elastic Beanstalk:&lt;/strong&gt; Setting up necessary IAM roles, attaching policies and choosing the appropriate configurations for the Dockerized React application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuring GitHub Actions:&lt;/strong&gt; Setting up GitHub Actions for automated deployment, and configuring environment variables in the GitHub repository secrets.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Testing the Application&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Follow along to learn how to deploy your React applications with ease.&lt;/p&gt;

&lt;h2&gt;
  
  
  Preparing the React application
&lt;/h2&gt;

&lt;p&gt;We will be running a simple React application inside a Docker container. You can find the React app on my GitHub repo &lt;a href="https://github.com/Ed-Neema/simpleTodoApp"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Create a new folder at a location of your choosing for the React application. To start a Docker container for our application, we will run the following command in that folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -it -v ${PWD}:/app -p 3000:3000 node:18 sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command starts a Docker container based on the node:18 image (the Node.js version the application was built on), mounts the current directory to /app inside the container, maps port 3000, and opens an interactive shell session inside the container.&lt;/p&gt;

&lt;p&gt;Let's break down the command further:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker run&lt;/code&gt;: This is the command to run a Docker container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-it&lt;/code&gt;: These are two flags combined. -i stands for interactive, which allows you to interact with the container's shell. -t allocates a pseudo-TTY, which helps in keeping the session open and receiving input.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-v ${PWD}:/app&lt;/code&gt;: This is a volume mount flag. It mounts the current directory (${PWD}) on the host machine to the /app directory inside the container. This allows the container to access and modify files in the current directory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-p 3000:3000&lt;/code&gt;: This flag maps port 3000 on the host machine to port 3000 inside the container. This is typically used for accessing services running inside the container from outside.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;node:18&lt;/code&gt;: This specifies the Docker image to use for creating the container. In this case, it's the node image with the tag 18, which corresponds to Node.js version 18.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sh&lt;/code&gt;: This is the command to run inside the container. It starts a shell session (sh is a common Unix/Linux shell) so that you can interact with the container's file system and execute commands.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If Docker doesn’t find the image locally, it will pull it from the remote registry.&lt;/p&gt;

&lt;p&gt;Since we started a shell session, after running the command, you will be inside the active shell session. Now cd into the app directory, which is where our application will be.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffy7xvr54t7nuklcpthpz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffy7xvr54t7nuklcpthpz.png" alt="Image description" width="728" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have mapped this directory in our container to our current directory on the host. Thus, any file that you create inside this directory will also appear in your current directory on the host.&lt;/p&gt;

&lt;p&gt;We can now clone our react project into this directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/Ed-Neema/simpleTodoApp.git 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that when you interact with a remote Git repository using HTTPS URLs, Git will typically prompt you for your username and password to authenticate (if your GitHub credentials aren’t globally configured). However, due to changes in GitHub's authentication mechanisms, using a personal access token (PAT) is now required instead of your password for increased security. You may read more about this &lt;a href="https://docs.github.com/en/get-started/getting-started-with-git/about-remote-repositories#cloning-with-https-urls"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can find more detailed information about how to create your personal token &lt;a href="https://docs.github.com/en/enterprise-server@3.9/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens"&gt;here&lt;/a&gt;. In summary, here are the steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to your GitHub account settings.&lt;/li&gt;
&lt;li&gt;Navigate to &lt;code&gt;"Developer settings" &amp;gt; "Personal access tokens" &amp;gt; "Tokens (classic)"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click on &lt;code&gt;"Generate new token."&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Give your token a descriptive name, select the scopes or permissions you'd like to grant this token, and click "Generate token."&lt;/li&gt;
&lt;li&gt;Important: Copy your new personal access token. Once you leave or refresh the page, you won’t be able to see it again.&lt;/li&gt;
&lt;/ul&gt;
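
&lt;p&gt;Once you have a token, one option is to embed it in the HTTPS clone URL instead of waiting for the password prompt. &lt;code&gt;YOUR_USERNAME&lt;/code&gt; and &lt;code&gt;YOUR_TOKEN&lt;/code&gt; below are placeholders; never commit a real token anywhere:&lt;/p&gt;

```shell
# Build the clone URL with the PAT in place of a password (placeholders only).
REPO_URL="https://YOUR_USERNAME:YOUR_TOKEN@github.com/Ed-Neema/simpleTodoApp.git"
echo "git clone ${REPO_URL}"
```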

&lt;p&gt;Now that the application has been cloned, let’s create a Dockerfile and a docker compose file.&lt;/p&gt;

&lt;p&gt;A Dockerfile is a text file that contains instructions for building a Docker image. It defines the environment inside a Docker container, including the base image to use, any additional dependencies to install, environment variables to set, and commands to run when the container starts. In our case, this is the Dockerfile we need: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4c2gri5eb6dap3af5ol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4c2gri5eb6dap3af5ol.png" alt="Image description" width="479" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This Dockerfile uses a multi-stage build to first build a Node.js application and then sets up an Nginx server to serve the built static files. &lt;/p&gt;

&lt;p&gt;Here's a breakdown of each part:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;FROM node:18 as builder&lt;/code&gt;: This sets the base image to node:18 and assigns it an alias builder. This stage will be used for building the application.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;WORKDIR /app/simpleTodoApp&lt;/code&gt;: This sets the working directory inside the container to&lt;code&gt;/app/simpleTodoApp&lt;/code&gt;. All subsequent commands will be executed relative to this directory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COPY package.json .&lt;/code&gt;: This copies the package.json file from the host machine to the &lt;code&gt;/app/simpleTodoApp&lt;/code&gt; directory in the container. This is done before running npm install to take advantage of Docker's layer caching mechanism.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RUN npm install&lt;/code&gt;: This installs the dependencies listed in the package.json file.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COPY . .&lt;/code&gt;: This copies the rest of the application files from the host machine to the &lt;code&gt;/app/simpleTodoApp&lt;/code&gt; directory in the container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RUN npm run build&lt;/code&gt;: This runs the build script specified in the package.json file. This script is typically used to build the production version of the application.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;FROM nginx&lt;/code&gt;: This starts a new build stage using the nginx base image. This stage will be used for the final image that will run the application.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;EXPOSE 80&lt;/code&gt;: This exposes port 80 on the container. This is the default port used by Nginx for serving web content.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COPY --from=builder /app/simpleTodoApp/dist /usr/share/nginx/html&lt;/code&gt;: This copies the build output from the builder stage (the &lt;code&gt;/app/simpleTodoApp/dist&lt;/code&gt; directory) to the Nginx web root directory (&lt;code&gt;/usr/share/nginx/html&lt;/code&gt;). This effectively sets up Nginx to serve the static files generated by the Node.js build process. (Note: Since we are using Vite, the builder outputs the built files to the “dist” directory. A plain React (Create React App) project would have its build files in a “build” directory, while a Next.js application would have them in a “.next” directory.)&lt;/li&gt;
&lt;/ul&gt;
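
&lt;p&gt;Putting the breakdown above together, the Dockerfile from the screenshot reads as follows (reconstructed from the description):&lt;/p&gt;

```dockerfile
# Stage 1: build the Vite app with Node.js
FROM node:18 as builder
WORKDIR /app/simpleTodoApp
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve the built static files with Nginx
FROM nginx
EXPOSE 80
COPY --from=builder /app/simpleTodoApp/dist /usr/share/nginx/html
```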

&lt;p&gt;Next, we will create a docker-compose.yml file. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fie853gwgpdq838khhsl2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fie853gwgpdq838khhsl2.png" alt="Image description" width="261" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This Docker Compose file defines a service named web that builds a Docker image using the Dockerfile in the current directory and exposes port 80 on the host machine to the container. &lt;/p&gt;

&lt;p&gt;Here's a breakdown of each part:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;version: '3'&lt;/code&gt;: This specifies the version of the Docker Compose file format. In this case, it's version 3, which is a widely used version that supports most features.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;services&lt;/code&gt;: This is the key under which you define the services that make up your application. Each service is a containerized application.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;web&lt;/code&gt;: This is the name of the service. You can use any name you like to identify your services.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;build&lt;/code&gt;: This specifies how to build the Docker image for the web service.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;context: .&lt;/code&gt;: This specifies the build context, which is the path to the directory containing the Dockerfile and any other files needed for the build. In this case, it's set to &lt;code&gt;.&lt;/code&gt; (the current directory).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;dockerfile: Dockerfile&lt;/code&gt;: This specifies the name of the Dockerfile to use for building the image. In this case, it's the file named &lt;code&gt;Dockerfile&lt;/code&gt; in the build context.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ports&lt;/code&gt;: This specifies the ports to expose on the host machine and the container.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;'80:80'&lt;/code&gt;: This maps port 80 on the host machine to port 80 on the container. This means that you can access the service running in the container on port 80 of the host machine.&lt;/li&gt;
&lt;/ul&gt;
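
&lt;p&gt;Assembled from the breakdown above, the docker-compose.yml from the screenshot reads as follows (reconstructed from the description):&lt;/p&gt;

```yaml
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '80:80'
```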

&lt;p&gt;Let’s now test whether what we have done works before we deploy to Elastic Beanstalk.&lt;/p&gt;

&lt;p&gt;Run the following command in a new terminal (outside the shell of your container) in your app’s directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up --build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The docker-compose up --build command is used to build the images for the services defined in your docker-compose.yml file and start the containers. In summary, the command will build the Docker image for the web service using the Dockerfile in the current directory. It will then start the container for the web service and expose port 80 on the host machine, mapping it to port 80 on the container.&lt;/p&gt;

&lt;p&gt;This will take a while to build the application’s assets. After it’s done, you should be able to access your application at &lt;a href="http://localhost/"&gt;http://localhost/&lt;/a&gt; &lt;/p&gt;
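
&lt;p&gt;A quick sanity check from another terminal confirms the container is serving (the fallback message simply means the stack isn’t up yet):&lt;/p&gt;

```shell
# Skip gracefully if curl is unavailable; otherwise probe the served app on port 80.
command -v curl >/dev/null || { echo "curl not installed; skipping"; exit 0; }
curl -sI http://localhost/ || echo "server not running yet"
```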

&lt;h2&gt;
  
  
  Setting up Elastic Beanstalk
&lt;/h2&gt;

&lt;p&gt;We will first start with creating an instance profile.&lt;/p&gt;

&lt;p&gt;Creating an EC2 instance profile and attaching the specified policies is necessary when setting up an Elastic Beanstalk environment that uses EC2 instances to run your application. So, navigate to IAM and create an IAM role. The policies that can be attached are the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWSElasticBeanstalkWebTier&lt;/strong&gt;: This policy provides permissions necessary for the EC2 instances to serve web traffic. It includes permissions to create and manage Elastic Load Balancers (ELB), which are used to distribute incoming traffic to your application across multiple EC2 instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWSElasticBeanstalkWorkerTier&lt;/strong&gt;: This policy is used for worker environments in Elastic Beanstalk, which are used for background processing or tasks that don't require direct handling of web requests. It provides permissions needed for worker environments, such as reading from and writing to SQS queues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWSElasticBeanstalkMulticontainerDocker&lt;/strong&gt;: This policy is specifically for multicontainer Docker environments in Elastic Beanstalk. It provides permissions for managing Docker containers and interacting with the Docker daemon on the EC2 instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By attaching these policies to the IAM role associated with your EC2 instances, Elastic Beanstalk can manage the resources (such as ELBs and Docker containers) required to run your application. This allows Elastic Beanstalk to automatically scale your application, handle load balancing, and manage the underlying infrastructure, making it easier to deploy and manage your application in a scalable and fault-tolerant manner.&lt;/p&gt;

&lt;p&gt;AWS provides several predefined roles that you can use for common tasks and services. These predefined roles are known as AWS managed policies. In our case, AWS already has a role for Elastic Beanstalk called &lt;code&gt;aws-elasticbeanstalk-service-role&lt;/code&gt;. You may select this role and attach the three policies above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfp3jahqo9gl7audsuaq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfp3jahqo9gl7audsuaq.png" alt="Image description" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The screenshot above shows the options you can select when creating the role.&lt;/p&gt;

&lt;p&gt;You may now navigate to the Elastic Beanstalk console and click on create new application. This will take you through a series of steps in creating the application. &lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0ensdks1384918ao0hf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0ensdks1384918ao0hf.png" alt="Image description" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will use a Web Server environment, since the aim is to deploy a sample web application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15k34fi8ulfkot0mis15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15k34fi8ulfkot0mis15.png" alt="Image description" width="659" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, an environment name will be generated for you. You may also enter a domain name, otherwise, it will be automatically generated for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcybqpjydl46d3sfnxzo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcybqpjydl46d3sfnxzo.png" alt="Image description" width="651" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, we select the appropriate configurations for the application we aim to run. In our case, since we are deploying a Dockerized React application, we will choose Docker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forz75g16s5o4dscnn5uq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forz75g16s5o4dscnn5uq.png" alt="Image description" width="654" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will start by deploying the sample application given to us by AWS, then modify it to our application.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn07juhlfacn8cr8v2ysd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn07juhlfacn8cr8v2ysd.png" alt="Image description" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This step concerns the IAM roles that grant Elastic Beanstalk the permissions it needs to set up and run our application. Here, under existing service roles, we will select the IAM role we created previously for Elastic Beanstalk.&lt;/p&gt;

&lt;p&gt;Even though Elastic Beanstalk is setting up and managing the environment, it is possible to log into your EC2 instance once it’s running. You can create an EC2 key pair and specify it under the EC2 key pair setting.&lt;/p&gt;

&lt;p&gt;For the EC2 instance profile, you may create a role for the EC2 instances and select it here (Give it a try!). &lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3
&lt;/h4&gt;

&lt;p&gt;The next section is about setting up some network and database configurations. &lt;/p&gt;

&lt;p&gt;You may select the VPC and the subnets that you want your instances to run in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwk853ufszo16vxrl5b4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwk853ufszo16vxrl5b4.png" alt="Image description" width="643" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since we will not be using a database for this application, we can leave it as disabled.&lt;/p&gt;

&lt;p&gt;For Steps 4 and 5, you may configure the instance traffic settings, monitoring, and logging as desired.&lt;/p&gt;

&lt;p&gt;After review and creation, you can view the running instance of your web application at the given URL:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhuat7g9hjbnwcy2oaw25.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhuat7g9hjbnwcy2oaw25.png" alt="Image description" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring Github Actions
&lt;/h2&gt;

&lt;p&gt;To configure GitHub Actions as needed, we can follow the instructions from this &lt;a href="https://github.com/einaregilsson/beanstalk-deploy"&gt;repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Essentially, your deploy YAML file will look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjxeb8pirt1c3pds45qp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjxeb8pirt1c3pds45qp.png" alt="Image description" width="477" height="422"&gt;&lt;/a&gt;&lt;/p&gt;
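&lt;p&gt;In text form, a minimal workflow using the &lt;code&gt;beanstalk-deploy&lt;/code&gt; action could look like the sketch below. The application name, environment name, region, and pinned action version here are placeholders; adapt them to your own setup and the repo's current instructions.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy to Elastic Beanstalk

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Generate deployment package
        run: zip -r deploy.zip . -x '*.git*'

      - name: Deploy to Elastic Beanstalk
        uses: einaregilsson/beanstalk-deploy@v21
        with:
          aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          application_name: my-react-app        # placeholder
          environment_name: my-react-app-env   # placeholder
          version_label: ${{ github.sha }}
          region: us-east-1                    # placeholder
          deployment_package: deploy.zip
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;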

&lt;p&gt;To configure the environment variables, navigate to your repository’s &lt;code&gt;Settings tab &amp;gt; Actions&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Then create new repository secrets for the keys prefixed with the word "secret" in the image above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj584k3b488i94kohfdmy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj584k3b488i94kohfdmy.png" alt="Image description" width="800" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under Actions, add the repository secrets:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5umvs1mjj1qgl9bfqo3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5umvs1mjj1qgl9bfqo3k.png" alt="Image description" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After this, push the changes to your GitHub repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add .
git commit -m "commit message"
git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After pushing the changes, you will see a new action triggered.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vsggnxnriy7xe4jhy82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vsggnxnriy7xe4jhy82.png" alt="Image description" width="800" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the environment is fully updated and set up on Elastic Beanstalk, you should see the to-do application deployed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2mofmxrkppth1v9kq90z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2mofmxrkppth1v9kq90z.png" alt="Image description" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations on your first deployment! &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;In summary, deploying a ReactJS application on AWS Elastic Beanstalk using GitHub Actions simplifies deployment and enhances application lifecycle management efficiency. GitHub Actions' integration automates build and deployment steps, ensuring consistency and enabling rapid, reliable releases. AWS Elastic Beanstalk provides scalability and managed environments, allowing seamless application scaling based on demand. This combination offers a robust solution for deploying and maintaining ReactJS applications, empowering developers to focus on delivering exceptional user experiences while ensuring deployment reliability and scalability.&lt;/p&gt;

&lt;p&gt;Stay tuned for more deployment tutorials!&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>react</category>
      <category>docker</category>
    </item>
    <item>
      <title>Introduction To AWS Elastic Beanstalk</title>
      <dc:creator>Ednah</dc:creator>
      <pubDate>Sat, 09 Mar 2024 00:04:24 +0000</pubDate>
      <link>https://dev.to/ed_akoth/introduction-to-aws-elastic-beanstalk-20o9</link>
      <guid>https://dev.to/ed_akoth/introduction-to-aws-elastic-beanstalk-20o9</guid>
      <description>&lt;p&gt;AWS Elastic Beanstalk is a Platform as a Service (PaaS) offering from Amazon Web Services (AWS) that makes it easy to deploy, manage, and scale web applications and services. It abstracts away the underlying infrastructure details and automatically handles the deployment of a web application, from capacity provisioning, load balancing, and auto scaling to application health monitoring. This allows developers to focus on writing code rather than managing the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;At the same time, you retain full control over the underlying AWS resources powering your application and can access them at any time. This is important for several reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexibility:&lt;/strong&gt; By retaining control over the AWS resources, you have the flexibility to customize and configure them according to your specific requirements. You can choose the instance types, storage options, networking configurations, and other settings that best suit your application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance Optimization:&lt;/strong&gt; Full control over AWS resources allows you to optimize performance by fine-tuning configurations. For example, you can adjust instance sizes, add caching layers, or configure load balancers to improve performance and scalability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Management:&lt;/strong&gt; Having visibility and control over AWS resources helps you manage costs more effectively. You can monitor resource usage, identify cost drivers, and make adjustments to optimize cost-efficiency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compliance:&lt;/strong&gt; Some industries or organizations may have specific compliance requirements that necessitate full control over resources. With Elastic Beanstalk, you can ensure compliance by configuring resources according to these requirements.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's worth emphasizing that using Elastic Beanstalk does not incur any extra costs. Instead, you only pay for the AWS resources required to store and operate your applications. You can learn more about its pricing &lt;a href="https://aws.amazon.com/elasticbeanstalk/pricing/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reviewing some concepts
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Application:&lt;/strong&gt; In Elastic Beanstalk, an application is a logical container for your web application and its associated resources. Think of it as a folder. It helps you manage and organize your application components, including environments, versions, and configurations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Environment:&lt;/strong&gt; An environment is a collection of AWS resources that run your application. This includes EC2 instances, load balancers, databases, and other resources necessary to host your application. Each environment corresponds to a specific version of your application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Environment Tier:&lt;/strong&gt; Elastic Beanstalk offers two environment tiers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmkx3687wi3vxr0jvgoz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmkx3687wi3vxr0jvgoz.png" alt="Image description" width="800" height="419"&gt;&lt;/a&gt;&lt;em&gt;Image Credit: Digital Cloud&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Web Server Environment:&lt;/strong&gt; This tier is used for web applications that serve HTTP requests. It typically includes a load balancer to distribute traffic among multiple EC2 instances running your application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Worker Environment:&lt;/strong&gt; This tier is used for applications that perform background tasks, such as processing messages from a queue or performing batch jobs. It does not include a load balancer and is optimized for tasks that do not require direct user interaction.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Application Version:&lt;/strong&gt; An application version is a specific iteration of your application's code that you deploy to an environment. Each version is identified by a unique version label and can be deployed to multiple environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuration:&lt;/strong&gt; Elastic Beanstalk allows you to configure various aspects of your environment through configuration files or the management console. This includes settings such as instance type, auto-scaling parameters, environment variables, and more. Configurations can be saved and reused across environments.&lt;/p&gt;
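&lt;p&gt;The configuration-file route works through &lt;code&gt;.ebextensions&lt;/code&gt;: YAML files placed in an &lt;code&gt;.ebextensions/&lt;/code&gt; folder at the root of your source bundle. As a sketch (the option namespaces are standard Elastic Beanstalk ones, but the values shown are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .ebextensions/options.config -- illustrative values
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: t3.micro
  aws:elasticbeanstalk:application:environment:
    APP_STAGE: production
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;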

&lt;p&gt;&lt;strong&gt;Auto Scaling:&lt;/strong&gt; Elastic Beanstalk provides auto-scaling capabilities to automatically adjust the number of EC2 instances in your environment based on traffic levels. You can define scaling triggers based on metrics such as CPU utilization, network traffic, or custom metrics.&lt;/p&gt;
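&lt;p&gt;To make this concrete, scaling limits and triggers can be set through the same configuration-file mechanism. A sketch assuming CPU-based scaling; the thresholds and instance counts are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .ebextensions/scaling.config -- illustrative values
option_settings:
  aws:autoscaling:asg:
    MinSize: 1
    MaxSize: 4
  aws:autoscaling:trigger:
    MeasureName: CPUUtilization
    Unit: Percent
    LowerThreshold: 20
    UpperThreshold: 70
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;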

&lt;p&gt;&lt;strong&gt;Environment Health:&lt;/strong&gt; Elastic Beanstalk monitors the health of your environment and takes corrective actions to ensure high availability. This includes replacing unhealthy instances, adjusting auto-scaling settings, and triggering alarms based on predefined thresholds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps to deploy a sample Python application
&lt;/h2&gt;

&lt;p&gt;Now that we know some basics, let's try to deploy a sample application. Navigate to the Elastic Beanstalk console and create a new application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1j0mfpzwd2aeabrg9zxz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1j0mfpzwd2aeabrg9zxz.png" alt="Image description" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After this, on the left sidebar, you will see the application created. Click on the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqxkuo2v151qb6p1r6ne.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqxkuo2v151qb6p1r6ne.png" alt="Image description" width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then click on Create new environment at the top right. You will then run through a series of steps instructing Elastic Beanstalk on how to build your web application.&lt;/p&gt;

&lt;p&gt;You will need to configure your environment based on the web application that you want to deploy. Elastic Beanstalk supports a wide range of web applications and frameworks, including Java (using Apache Tomcat, Java SE, or Java with Docker containers), .NET (using IIS and Windows Server), Node.js, PHP, Python, Ruby, Go, and last but not least Docker (which supports applications packaged as Docker containers, allowing you to use any language, framework, or runtime that can run in a container).&lt;/p&gt;

&lt;p&gt;Since our aim is to deploy a sample Python web application, we will use a Web Server environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4coscdws98qzp82p410.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4coscdws98qzp82p410.png" alt="Image description" width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you can choose an environment name or use the autogenerated one. You may also enter a domain name or use one that will be autogenerated. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flstlhqbs359uwx2sqh68.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flstlhqbs359uwx2sqh68.png" alt="Image description" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we will choose the Python version that we want to use. You may choose the appropriate version for your application. For this demo, we will use the latest version available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxg9ly2fp7y2a9be5bxr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxg9ly2fp7y2a9be5bxr.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We then select the application we want to deploy. At the moment, since we don’t have code, we can deploy the sample application, which we can replace at a later stage with the Python application we want.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6kisritzq0lu5tsmynk5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6kisritzq0lu5tsmynk5.png" alt="Image description" width="800" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next section, Service access, concerns the IAM roles that grant Elastic Beanstalk the permissions it needs to set up and run our application.&lt;br&gt;
Even though Elastic Beanstalk is setting up and managing the environment, it is possible to log into your EC2 instance once it’s running. You can create an EC2 key pair and specify it here. Since I had previously created an application on Elastic Beanstalk, I already have some roles defined that can be used. You may choose “Create and use new service role” to create roles as needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk31eorzkbxeq98nytn1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk31eorzkbxeq98nytn1e.png" alt="Image description" width="800" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following section covers network and database configurations. You can select the Virtual Private Cloud (VPC) and the specific subnets where you want your instances to run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqiplovq4za129yy45g69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqiplovq4za129yy45g69.png" alt="Image description" width="622" height="612"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, if your application needs a database, you will configure the database settings here. Since ours does not at the moment, we can skip this part. If you do need one, toggle the option to enable it, choose the subnets it should be deployed to, and update settings such as the database engine, the instance class of the compute that will run your database, and the amount of storage you need; you can configure your database credentials here as well.&lt;/p&gt;

&lt;p&gt;At Step 4, you will configure more aspects of the EC2 instances that run your application. These include the size in gigabytes of the root volume attached to each instance, the input/output operations per second for a provisioned IOPS (SSD) volume, the desired throughput, and security groups (you can choose the default or create your own); you can configure CloudWatch monitoring here as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2qee3z6jxjr85pha30l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2qee3z6jxjr85pha30l.png" alt="Image description" width="648" height="891"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also configure the compute capacity of your environment, specifying aspects such as the environment type and the number of instances you need, among other settings based on your needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8i58hxt48dom312u83f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8i58hxt48dom312u83f.png" alt="Image description" width="654" height="872"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next tab, you can configure various monitoring and logging aspects of your application. You can then review your settings and create the environment.&lt;/p&gt;

&lt;p&gt;Once you click on create, you can monitor the events that are happening on the dashboard of your environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqk8hdi3bcmkraty8kttj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqk8hdi3bcmkraty8kttj.png" alt="Image description" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After a while, the environment will launch successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2usje12mkc9595c5drj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2usje12mkc9595c5drj.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may visit the URL for your deployed application:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzm3f7it6llgz0urj6mar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzm3f7it6llgz0urj6mar.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With our simple Python application now successfully deployed on Elastic Beanstalk, our next step involves exploring the deployment of a Dockerized container. This will further enhance our understanding and utilization of Elastic Beanstalk's capabilities, allowing us to leverage containerization for our application deployment in future articles.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
