<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jay Sheth</title>
    <description>The latest articles on DEV Community by Jay Sheth (@jay_sheth).</description>
    <link>https://dev.to/jay_sheth</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1070691%2Fee6eab12-543f-4c23-8730-4226c8e94bec.png</url>
      <title>DEV Community: Jay Sheth</title>
      <link>https://dev.to/jay_sheth</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jay_sheth"/>
    <language>en</language>
    <item>
      <title>🚀 Hello, Kubernetes! A Hands-On Guide to Deploying Your First App on GKE</title>
      <dc:creator>Jay Sheth</dc:creator>
      <pubDate>Sun, 02 Nov 2025 10:47:13 +0000</pubDate>
      <link>https://dev.to/jay_sheth/hello-kubernetes-a-hands-on-guide-to-deploying-your-first-app-on-gke-description-28gi</link>
      <guid>https://dev.to/jay_sheth/hello-kubernetes-a-hands-on-guide-to-deploying-your-first-app-on-gke-description-28gi</guid>
      <description>&lt;p&gt;&lt;strong&gt;Google Kubernetes Engine (GKE)&lt;/strong&gt; offers a powerful, managed environment for running containerized applications at scale.&lt;/p&gt;

&lt;p&gt;In this guide, we'll walk through the essential commands to provision a GKE cluster, deploy a simple web application, expose it to the internet, and clean up, all in under an hour!&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ Prerequisites and Setup
&lt;/h2&gt;

&lt;p&gt;We'll be using &lt;strong&gt;Cloud Shell&lt;/strong&gt; for all our commands, as it comes pre-installed with the necessary tools: &lt;code&gt;gcloud&lt;/code&gt; for managing Google Cloud resources and &lt;code&gt;kubectl&lt;/code&gt; for interacting with the Kubernetes cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Task 1: Set a Default Compute Zone
&lt;/h3&gt;

&lt;p&gt;To make subsequent commands shorter, we'll configure a default region and zone.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Set the default compute region
gcloud config set compute/region us-east4

# Set the default compute zone (this will be where the cluster nodes live)
gcloud config set compute/zone us-east4-a

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Task 2: Create a GKE Cluster
&lt;/h3&gt;

&lt;p&gt;A GKE cluster is made up of a &lt;strong&gt;Master&lt;/strong&gt; (control plane) and multiple &lt;strong&gt;Nodes&lt;/strong&gt; (worker machines). We'll create a cluster named &lt;code&gt;lab-cluster&lt;/code&gt; with &lt;code&gt;e2-medium&lt;/code&gt; nodes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters create lab-cluster \
    --machine-type=e2-medium \
    --zone=us-east4-a

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This command can take several minutes to complete as Google Cloud provisions the necessary Compute Engine instances and sets up the Kubernetes control plane.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once complete, you can view the cluster in the Google Cloud Console:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F634963ascouyzdrgf9ax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F634963ascouyzdrgf9ax.png" alt="Cluster" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Task 3: Get Authentication Credentials
&lt;/h3&gt;

&lt;p&gt;Before using &lt;code&gt;kubectl&lt;/code&gt;, we need to get the credentials to connect to our new cluster securely. This command updates your &lt;code&gt;kubeconfig&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters get-credentials lab-cluster

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Expected Output:&lt;/strong&gt; &lt;code&gt;kubeconfig entry generated for lab-cluster.&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🚀 Deploying and Exposing the Application
&lt;/h2&gt;

&lt;p&gt;With the cluster running and authenticated, we can now deploy our containerized application. We'll use the sample &lt;code&gt;hello-app&lt;/code&gt; image provided by Google.&lt;/p&gt;

&lt;h3&gt;
  
  
  Task 4.1: Deploy the Container (Deployment)
&lt;/h3&gt;

&lt;p&gt;A Kubernetes &lt;strong&gt;Deployment&lt;/strong&gt; manages stateless applications, ensuring the specified number of replicas are running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create deployment hello-server \
    --image=gcr.io/google-samples/hello-app:1.0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Expected Output:&lt;/strong&gt; &lt;code&gt;deployment.apps/hello-server created&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After a moment, you can confirm the Deployment status in the GUI:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbn2rrqxx1f7z6li71ham.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbn2rrqxx1f7z6li71ham.png" alt="Deployment" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Task 4.2: Create a Kubernetes Service (Load Balancer)
&lt;/h3&gt;

&lt;p&gt;To access the application from the public internet, we need to create a Kubernetes &lt;strong&gt;Service&lt;/strong&gt;. By setting the type to &lt;code&gt;LoadBalancer&lt;/code&gt;, GKE automatically provisions a Google Cloud Load Balancer for us.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl expose deployment hello-server --type=LoadBalancer --port 8080

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Expected Output:&lt;/strong&gt; &lt;code&gt;service/hello-server exposed&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vpy0kbig4o1yrgf8for.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vpy0kbig4o1yrgf8for.png" alt="Service or Load Balancer" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Task 4.3: Get the External IP and View the App
&lt;/h3&gt;

&lt;p&gt;The LoadBalancer takes a minute or two to provision an external IP. Run the following command until the &lt;code&gt;EXTERNAL-IP&lt;/code&gt; field is no longer &lt;code&gt;&amp;lt;pending&amp;gt;&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get service

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the Deployment details screen, you can also see the Service exposing the endpoint.&lt;/p&gt;

&lt;p&gt;Once the IP is visible (e.g., &lt;code&gt;35.245.47.76&lt;/code&gt;), open your web browser and navigate to: &lt;code&gt;http://[EXTERNAL-IP]:8080&lt;/code&gt;&lt;/p&gt;
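If you want to grab the URL in a script, the &lt;code&gt;EXTERNAL-IP&lt;/code&gt; is the fourth column of the service row. Here's a small sketch that parses sample output (the values shown are assumptions for illustration; your IP will differ):

```shell
# Sample `kubectl get service` output (values assumed for illustration;
# with a live cluster you would capture it via: output=$(kubectl get service hello-server))
output='NAME           TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)          AGE
hello-server   LoadBalancer   10.3.245.137   35.245.47.76   8080:30877/TCP   2m'

# EXTERNAL-IP is the fourth whitespace-separated column of the hello-server row
ip=$(printf '%s\n' "$output" | awk '$1 == "hello-server" { print $4 }')
echo "http://$ip:8080"
```

With a live cluster, `kubectl get service hello-server -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` returns just the IP directly.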

&lt;p&gt;You should see the successful output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxjzbunp0nkfw9mbcu7n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxjzbunp0nkfw9mbcu7n.png" alt="Result" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🧹 Task 5: Delete the Cluster
&lt;/h2&gt;

&lt;p&gt;A key part of cloud computing is cleanup to avoid unnecessary costs. Deleting the cluster removes all associated resources, including the nodes, Pods, Deployments, and the Load Balancer Service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters delete lab-cluster

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When prompted, type &lt;code&gt;Y&lt;/code&gt; to confirm.&lt;/p&gt;
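For scripted cleanup, &lt;code&gt;gcloud&lt;/code&gt; supports a &lt;code&gt;--quiet&lt;/code&gt; flag that skips the confirmation prompt. Shown here as a dry run that only prints the command:

```shell
# --quiet disables interactive prompts; handy in CI or cleanup scripts.
# Printed as a dry run here; remove the `echo` to actually delete the cluster.
cmd="gcloud container clusters delete lab-cluster --quiet"
echo "$cmd"
```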

&lt;p&gt;The console will show that the cluster is being deleted:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7nimosawf1o5be95avq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7nimosawf1o5be95avq.png" alt="Cleanup" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You've successfully created, deployed to, and cleaned up a Google Kubernetes Engine cluster.&lt;/p&gt;

</description>
      <category>gke</category>
      <category>googlecloud</category>
      <category>kubernetes</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Checkout my new post!</title>
      <dc:creator>Jay Sheth</dc:creator>
      <pubDate>Sat, 01 Nov 2025 18:06:17 +0000</pubDate>
      <link>https://dev.to/jay_sheth/checkout-my-new-post-1k10</link>
      <guid>https://dev.to/jay_sheth/checkout-my-new-post-1k10</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/jay_sheth" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1070691%2Fee6eab12-543f-4c23-8730-4226c8e94bec.png" alt="jay_sheth"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/jay_sheth/how-to-set-up-a-global-http-load-balancer-on-google-cloud-step-by-step-with-screenshots-3ghi" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;🌍 How to Set Up a Global HTTP Load Balancer on Google Cloud (Step-by-Step with Screenshots)&lt;/h2&gt;
      &lt;h3&gt;Jay Sheth ・ Nov 1&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
    </item>
    <item>
      <title>🌍 How to Set Up a Global HTTP Load Balancer on Google Cloud (Step-by-Step with Screenshots)</title>
      <dc:creator>Jay Sheth</dc:creator>
      <pubDate>Sat, 01 Nov 2025 18:04:49 +0000</pubDate>
      <link>https://dev.to/jay_sheth/how-to-set-up-a-global-http-load-balancer-on-google-cloud-step-by-step-with-screenshots-3ghi</link>
      <guid>https://dev.to/jay_sheth/how-to-set-up-a-global-http-load-balancer-on-google-cloud-step-by-step-with-screenshots-3ghi</guid>
      <description>&lt;p&gt;Load balancing is an essential part of modern cloud architectures --- it helps distribute traffic across multiple backend instances, ensuring &lt;strong&gt;reliability, scalability, and performance&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll set up a &lt;strong&gt;Global HTTP Load Balancer&lt;/strong&gt; on Google Cloud Platform (GCP) using both the Cloud Shell (&lt;code&gt;gcloud&lt;/code&gt;) and the Google Cloud Console (GUI).&lt;/p&gt;




&lt;h2&gt;
  
  
  🧱 Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before starting, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A GCP project (like Qwiklabs or your own)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Billing enabled&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cloud Shell or &lt;code&gt;gcloud&lt;/code&gt; CLI access&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🖥️ Step 1: Create Initial Compute Engine Instances
&lt;/h2&gt;

&lt;p&gt;We'll start by creating three individual virtual machines. It's helpful to see how standalone VMs are set up before moving on to managed instance groups.&lt;/p&gt;

&lt;p&gt;You can create them using the following &lt;code&gt;gcloud&lt;/code&gt; commands, each setting up a simple Apache web server that displays its own name.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute instances create www1\
  --zone=us-west1-c\
  --tags=network-lb-tag\
  --machine-type=e2-small\
  --image-family=debian-11\
  --image-project=debian-cloud\
  --metadata=startup-script='#!/bin/bash
    apt-get update
    apt-get install apache2 -y
    service apache2 restart
    echo "&amp;lt;h3&amp;gt;Web Server: www1&amp;lt;/h3&amp;gt;" | tee /var/www/html/index.html'

gcloud compute instances create www2\
  --zone=us-west1-c\
  --tags=network-lb-tag\
  --machine-type=e2-small\
  --image-family=debian-11\
  --image-project=debian-cloud\
  --metadata=startup-script='#!/bin/bash
    apt-get update
    apt-get install apache2 -y
    service apache2 restart
    echo "&amp;lt;h3&amp;gt;Web Server: www2&amp;lt;/h3&amp;gt;" | tee /var/www/html/index.html'

gcloud compute instances create www3\
  --zone=us-west1-c\
  --tags=network-lb-tag\
  --machine-type=e2-small\
  --image-family=debian-11\
  --image-project=debian-cloud\
  --metadata=startup-script='#!/bin/bash
    apt-get update
    apt-get install apache2 -y
    service apache2 restart
    echo "&amp;lt;h3&amp;gt;Web Server: www3&amp;lt;/h3&amp;gt;" | tee /var/www/html/index.html'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;The VM Instances appear in the Console like this:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kene4ruk5qcpfakkzwv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kene4ruk5qcpfakkzwv.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
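Since the three commands above differ only in the instance name, they can be generated with a loop. This is a dry-run sketch that only prints the commands (the &lt;code&gt;startup-script&lt;/code&gt; metadata is omitted here for brevity; see the full commands above):

```shell
# Print one create command per VM instead of repeating it three times.
# To actually create the instances, run each $cmd instead of echoing it
# (and add back the --metadata=startup-script=... flag from the full commands).
for name in www1 www2 www3; do
  cmd="gcloud compute instances create $name --zone=us-west1-c --tags=network-lb-tag --machine-type=e2-small --image-family=debian-11 --image-project=debian-cloud"
  echo "$cmd"
done
```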




&lt;h2&gt;
  
  
  🧩 Step 2: Create an Instance Template
&lt;/h2&gt;

&lt;p&gt;Instance templates define the configuration for VM instances that will be part of the managed instance group, ensuring they are identical and run the startup script.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute instance-templates create lb-backend-template\
  --region=us-west1\
  --network=default\
  --subnet=default\
  --tags=network-lb-tag\
  --image-family=debian-11\
  --image-project=debian-cloud\
  --metadata=startup-script='#! /bin/bash
    apt-get update
    apt-get install -y apache2
    echo "&amp;lt;h1&amp;gt;Hello from $(hostname)&amp;lt;/h1&amp;gt;" | tee /var/www/html/index.html
    systemctl restart apache2'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;The Instance Template appears in the Console like this:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat2ui9bmk797vxlfh0jt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat2ui9bmk797vxlfh0jt.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚙️ Step 3: Create a Managed Instance Group (MIG)
&lt;/h2&gt;

&lt;p&gt;This group will manage the identical backend instances using the template we just created. We'll start with a size of 2.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute instance-groups managed create lb-backend-group\
  --template=lb-backend-template\
  --size=2\
  --zone=us-west1-c

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, give the group's port 80 a name (&lt;code&gt;http&lt;/code&gt;) so the backend service can refer to it later:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute instance-groups managed set-named-ports lb-backend-group\
    --named-ports http:80\
    --zone=us-west1-c

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;The Instance Group appears in the Console like this:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ngfp5dbe2vdoj75hznf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ngfp5dbe2vdoj75hznf.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🌐 Step 4: Reserve a Global Static IP Address
&lt;/h2&gt;

&lt;p&gt;This IP address will be the public, permanent address for your load balancer.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute addresses create lb-ipv4-1\
  --ip-version=IPV4\
  --global

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;The Static IP appears in the Console like this:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdipjxibl0lzu3tf31n61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdipjxibl0lzu3tf31n61.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🔥 Step 5: Configure Firewall Rules
&lt;/h2&gt;

&lt;p&gt;We need a firewall rule to allow &lt;strong&gt;HTTP traffic (port 80)&lt;/strong&gt; and a separate rule to allow the &lt;strong&gt;Google Cloud Health Check probes&lt;/strong&gt; to reach your backend instances.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute firewall-rules create fw-allow-health-check\
  --network=default\
  --action=ALLOW\
  --direction=INGRESS\
  --source-ranges=130.211.0.0/22,35.191.0.0/16\
  --target-tags=network-lb-tag\
  --rules=tcp:80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;The Firewall Rule appears in the Console like this:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1u5sik16pirtkc5y1w2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1u5sik16pirtkc5y1w2.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  ❤️ Step 6: Create a Health Check
&lt;/h2&gt;

&lt;p&gt;This checks your instances to ensure they are healthy before sending user traffic to them.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute health-checks create http http-basic-check\
  --port 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;The Health Check appears in the Console like this:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y4mx3cum3gap2gfkoni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y4mx3cum3gap2gfkoni.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 Step 7: Create a Backend Service
&lt;/h2&gt;

&lt;p&gt;The Backend Service links your health check to your instance group and defines the protocol for traffic to the backends.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute backend-services create web-backend-service\
  --protocol=HTTP\
  --port-name=http\
  --health-checks=http-basic-check\
  --global

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;The Backend Service appears in the Console like this:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn19smm76rl6axg65bazn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn19smm76rl6axg65bazn.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🧩 Step 8: Add the Instance Group to the Backend Service
&lt;/h2&gt;

&lt;p&gt;Now, connect your Managed Instance Group to the Backend Service.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute backend-services add-backend web-backend-service\
  --instance-group=lb-backend-group\
  --instance-group-zone=us-west1-c\
  --global

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;The Backend Service with the Instance Group attached appears in the Console like this:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhon1oyjcni1vtshhyd6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhon1oyjcni1vtshhyd6j.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🌍 Step 9: Create a URL Map
&lt;/h2&gt;

&lt;p&gt;The URL map defines which backend service handles incoming requests. For this simple setup, all requests go to our single backend service.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute url-maps create web-map-http\
  --default-service web-backend-service

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;The URL Map appears in the Console like this:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4jk8sucqge5mly2fhbq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4jk8sucqge5mly2fhbq8.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🧭 Step 10: Create a Target HTTP Proxy
&lt;/h2&gt;

&lt;p&gt;The Target HTTP Proxy receives the request from the forwarding rule and consults the URL map to determine where to send the traffic.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute target-http-proxies create http-lb-proxy\
  --url-map web-map-http

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🚀 Step 11: Create a Global Forwarding Rule
&lt;/h2&gt;

&lt;p&gt;This is the final step, linking your static external IP to the proxy and listening on port 80 (HTTP).&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute forwarding-rules create http-content-rule\
  --address=lb-ipv4-1\
  --global\
  --target-http-proxy=http-lb-proxy\
  --ports=80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;The Proxy appears in the Console like this:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcerpiiufpw76z16armgg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcerpiiufpw76z16armgg.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  ✅ Step 12: Test Your Load Balancer
&lt;/h2&gt;

&lt;p&gt;It may take a few minutes for the Load Balancer to provision and for the health checks to pass.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to &lt;strong&gt;Load Balancing&lt;/strong&gt; &amp;gt; &lt;strong&gt;Frontends&lt;/strong&gt; and copy the IP address (in this example: &lt;strong&gt;34.54.232.204&lt;/strong&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now open your browser and visit: &lt;code&gt;http://34.54.232.204&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You should see output similar to this:&lt;/p&gt;

&lt;p&gt;Each time you refresh the page, the VM name (e.g., &lt;code&gt;lb-backend-group-c151&lt;/code&gt;) may change, confirming that the load balancer is successfully distributing traffic between the healthy instances in your Managed Instance Group.&lt;/p&gt;
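You can also watch the round-robin behavior from the command line by tallying repeated responses. The hostnames below are assumed samples; with a live setup, each line would come from running `curl http://<LB-IP>/` in a loop:

```shell
# Assumed sample responses from several requests to the load balancer
# (in practice, collect them with: for i in 1 2 3; do curl -s "http://$LB_IP/"; done)
responses='Hello from lb-backend-group-c151
Hello from lb-backend-group-x9zq
Hello from lb-backend-group-c151'

# Tally how many requests each backend served
tally=$(printf '%s\n' "$responses" | sort | uniq -c | sort -rn)
echo "$tally"
```

An uneven but nonzero count for each backend confirms traffic is being distributed across the healthy instances.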

&lt;p&gt;&lt;em&gt;The Final Result:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgv4wadlvfv2ipzbxisn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgv4wadlvfv2ipzbxisn.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Summary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Component&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Description&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;VM Instances&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Backend servers running Apache.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Instance Template &amp;amp; Group&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Automates VM creation and management (MIG).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Firewall Rules&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Allows HTTP and health check traffic.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Health Check&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Monitors backend VM health.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Backend Service&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Connects instances to the LB, uses the named port.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;URL Map&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Routes incoming traffic to the correct backend service.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Frontend Rule&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Assigns the static IP and listens on port 80.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You've successfully built a global HTTP load balancer in GCP, distributing traffic between healthy backends using &lt;code&gt;gcloud&lt;/code&gt; commands and GUI verification.&lt;/p&gt;

&lt;p&gt;If you're looking for more details on the setup and configuration of different load balancer types, this video is a helpful resource: &lt;a href="https://www.youtube.com/watch?v=Fj7n0Q2U6bM" rel="noopener noreferrer"&gt;How to set up a Global HTTP Load Balancer with Compute Engine&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>google</category>
      <category>networking</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Deploy Flask App on Google Cloud Run with Terraform</title>
      <dc:creator>Jay Sheth</dc:creator>
      <pubDate>Mon, 15 Jul 2024 18:39:41 +0000</pubDate>
      <link>https://dev.to/jay_sheth/deploying-a-flask-app-on-cloud-run-with-terraform-a-comprehensive-guide-2ne0</link>
      <guid>https://dev.to/jay_sheth/deploying-a-flask-app-on-cloud-run-with-terraform-a-comprehensive-guide-2ne0</guid>
      <description>&lt;h1&gt;
  
  
  Prerequisites:
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;A GCP account&lt;/li&gt;
&lt;li&gt;Google Cloud SDK installed&lt;/li&gt;
&lt;li&gt;Terraform installed on your developer desktop&lt;/li&gt;
&lt;li&gt;Python installed on your developer desktop&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  In GCP Console:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a new project named &lt;code&gt;my-first-project&lt;/code&gt; in the GCP console&lt;br&gt;
  &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmv0nchuo2168zmwoh4l4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmv0nchuo2168zmwoh4l4.png" alt="GCP Console"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Firgjos88u696fe1wr22a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Firgjos88u696fe1wr22a.png" alt="GCP Console"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, we will enable the following APIs for the project. Run the commands below in the &lt;strong&gt;Google Cloud CLI&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Run&lt;/li&gt;
&lt;li&gt;Artifact Registry&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cloud Build&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fng3l8usju1823ogt15yc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fng3l8usju1823ogt15yc.png" alt="GCP commands"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
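
The enable commands in the screenshot above are not selectable text; as a sketch, they typically look like this (these are the standard API identifiers for the three services listed):

```shell
# Enable the APIs this project needs (run in the Google Cloud CLI)
gcloud services enable run.googleapis.com \
  artifactregistry.googleapis.com \
  cloudbuild.googleapis.com
```
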

&lt;h3&gt;
  
  
  On Developer Desktop:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a simple Flask Application&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create an &lt;code&gt;app.py&lt;/code&gt; file &lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a &lt;code&gt;Dockerfile&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
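
The embedded gists for `app.py` and the `Dockerfile` do not render in this feed. As a rough sketch of what a minimal `app.py` could look like (the route and greeting text here are illustrative, not the article's exact code):

```python
# Minimal Flask app sketch; the message and route are illustrative.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Cloud Run routes incoming HTTP requests to this handler
    return "Hello from Cloud Run!"
```

The accompanying `Dockerfile` would install Flask (plus a WSGI server such as gunicorn) and start the app on the port Cloud Run supplies via the `PORT` environment variable.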

&lt;h3&gt;
  
  
  In GCP Console:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;em&gt;&lt;strong&gt;Artifact Registry&lt;/strong&gt;&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;&lt;em&gt;Create Repository&lt;/em&gt;&lt;/strong&gt;
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz87q5d38iywyze2ubmfz.png" alt="Artifact Registry"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Give the repository a name (&lt;code&gt;$REPO_NAME&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Select &lt;em&gt;Docker&lt;/em&gt; as the Format&lt;/li&gt;
&lt;li&gt;Select &lt;em&gt;Standard&lt;/em&gt; as the Mode&lt;/li&gt;
&lt;li&gt;Select &lt;em&gt;Regional&lt;/em&gt; as the Location Type&lt;/li&gt;
&lt;li&gt;Select &lt;em&gt;us-west1&lt;/em&gt; as the Region&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;Create&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  On Developer Desktop
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the folder where we created the &lt;code&gt;Dockerfile&lt;/code&gt;, run the following commands&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4zf11whpz2nhto9shr8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4zf11whpz2nhto9shr8.png" alt="Docker commands"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, we will create the Terraform project&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
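
The Docker commands in the screenshot aren't selectable text; they typically follow this shape (the project ID, repository, image name, and tag below are placeholders for your own values):

```shell
# Authenticate Docker with Artifact Registry in the chosen region
gcloud auth configure-docker us-west1-docker.pkg.dev

# Build the image and tag it with the full Artifact Registry path
docker build -t us-west1-docker.pkg.dev/my-first-project/$REPO_NAME/flask-app:v1 .

# Push the image to the repository created above
docker push us-west1-docker.pkg.dev/my-first-project/$REPO_NAME/flask-app:v1
```
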

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create &lt;code&gt;terraform.tf&lt;/code&gt; for configuring the providers&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
 &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create &lt;code&gt;main.tf&lt;/code&gt; containing the cloud run configuration&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
 &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create &lt;code&gt;variable.tf&lt;/code&gt; containing the variables&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
 &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create &lt;code&gt;data.tf&lt;/code&gt; containing the data&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
 &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create &lt;code&gt;output.tf&lt;/code&gt; containing the output&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
 &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create &lt;code&gt;terraform.tfvars&lt;/code&gt; containing the variable values&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
 &lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now run the following commands in your terminal&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cc14mstm93ybffdhypy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cc14mstm93ybffdhypy.png" alt="Terraform commands"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
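
Since the gists above don't render in this feed, here is a rough sketch of what the Cloud Run configuration in `main.tf` might contain (resource and variable names are illustrative, not the article's exact code):

```hcl
# Sketch of a Cloud Run service; variable names are illustrative.
resource "google_cloud_run_service" "app" {
  name     = "flask-app"
  location = var.region

  template {
    spec {
      containers {
        # The image pushed to Artifact Registry earlier
        image = var.image_url
      }
    }
  }
}

# Allow unauthenticated invocations so the app is publicly reachable
resource "google_cloud_run_service_iam_member" "invoker" {
  service  = google_cloud_run_service.app.name
  location = google_cloud_run_service.app.location
  role     = "roles/run.invoker"
  member   = "allUsers"
}
```
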

&lt;h3&gt;
  
  
  Output
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fki2jlh9o0grxd77sqdl0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fki2jlh9o0grxd77sqdl0.png" alt="Flask application"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Destroy Infrastructure
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;To destroy the infrastructure, run the following commands in your terminal&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsd91fqyov939hot6dpgp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsd91fqyov939hot6dpgp.png" alt="Terraform commands"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
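
The destroy commands in the screenshot are the standard Terraform teardown sequence:

```shell
# Preview what will be removed, then tear everything down
terraform plan -destroy
terraform destroy   # respond "yes" when prompted
```
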

&lt;p&gt;👋👋&lt;strong&gt;&lt;em&gt;BYE&lt;/em&gt;&lt;/strong&gt;👋👋&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>googlecloud</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Deploy React App using GitLab CICD and Docker</title>
      <dc:creator>Jay Sheth</dc:creator>
      <pubDate>Thu, 09 May 2024 18:06:16 +0000</pubDate>
      <link>https://dev.to/jay_sheth/deploy-react-app-using-gitlab-cicd-and-docker-2h56</link>
      <guid>https://dev.to/jay_sheth/deploy-react-app-using-gitlab-cicd-and-docker-2h56</guid>
      <description>&lt;h2&gt;
  
  
  Prerequisites:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Ubuntu Machine&lt;/li&gt;
&lt;li&gt;GitLab account&lt;/li&gt;
&lt;li&gt;Node, NPM, Docker installed on Ubuntu Machine &amp;amp; Local Machine&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Steps:
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Project Folder
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a React app on your local machine using &lt;code&gt;npx create-react-app react-app&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create an &lt;strong&gt;nginx.conf&lt;/strong&gt; file with the content below&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

http {

  include mime.types;

  set_real_ip_from        0.0.0.0/0;
  real_ip_recursive       on;
  real_ip_header          X-Forwarded-For;
  limit_req_zone          $binary_remote_addr zone=mylimit:10m rate=10r/s;

  server {
    listen 80;
    server_name localhost;
    root /proxy;
    limit_req zone=mylimit burst=70 nodelay;

    location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
            try_files $uri /index.html;   
    }
  }
}

events {}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Create a &lt;strong&gt;Dockerfile&lt;/strong&gt; with the content below&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

FROM nginx:latest

COPY ./build /usr/share/nginx/html

COPY nginx.conf /etc/nginx/nginx.conf  

EXPOSE 80/tcp 

CMD ["/usr/sbin/nginx", "-g", "daemon off;"]  


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Create a &lt;strong&gt;deploy.sh&lt;/strong&gt; file&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

#!/bin/bash

# Pull the latest Docker image
docker pull &amp;lt;image-name&amp;gt;

# Stop and remove any existing container with the same name
docker stop &amp;lt;container-name&amp;gt; || true
docker rm &amp;lt;container-name&amp;gt; || true

# Run the Docker container
docker run -d --name &amp;lt;container-name&amp;gt; -p &amp;lt;host-port&amp;gt;:&amp;lt;container-port&amp;gt; &amp;lt;image-name&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Add the .pem file, which will be required to SSH into our Ubuntu machine, to the same folder.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. GitLab
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a GitLab account if you don't have one&lt;/li&gt;
&lt;li&gt;Push code to GitLab&lt;/li&gt;
&lt;li&gt;Go to &lt;strong&gt;Settings -&amp;gt; CI/CD -&amp;gt; Variables&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Create &lt;strong&gt;variables&lt;/strong&gt; for the DNS of your Ubuntu machine, the Docker image name, your Docker Hub token (configured later), and your Docker Hub username
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kf91en82929jf7yqf9s.png" alt="Variable for CICD"&gt;
&lt;/li&gt;
&lt;li&gt;Create a CI/CD file in the GitLab pipeline editor with the content below&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

stages:
  - build
  - build image
  - deploy

Build:
  stage: build
  image: node:20-alpine

  script:
    - npm install
    - npm run build

  artifacts:
      paths:
        - build/

Push To DockerHub:
  stage: build image
  image: docker:stable

  services:
    - docker:dind
  script:
    - docker build -t $DH_IMAGE_NAME .
    - docker login -u $DH_USER_NAME -p $DH_TOKEN
    - docker push $DH_IMAGE_NAME

  dependencies:
    - Build

Deploy to EC2:
  stage: deploy
  image: alpine:3.19

  before_script:
    - apk update
    - apk add openssh-client
    - chmod 600 "Ubuntu1Key.pem" 

  script:
    - ssh -o StrictHostKeyChecking=no -i "Ubuntu1Key.pem" ubuntu@"$U1_EC2_DNS" 'bash -s' &amp;lt; deploy.sh  


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt; &lt;br&gt;
This CI/CD file in GitLab has three stages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Build:&lt;/strong&gt; Uses Node.js to install dependencies and build the project. The resulting artifacts are stored in the build/ directory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Push To DockerHub:&lt;/strong&gt; Builds a Docker image using Docker's stable version, then pushes it to DockerHub after logging in with credentials.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy to EC2:&lt;/strong&gt; Utilizes Alpine Linux to update packages and install SSH client. It sets permissions for SSH key usage, then deploys to an EC2 instance by executing a deployment script via SSH.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  3. Docker Hub
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a Docker Hub account if you don't have one&lt;/li&gt;
&lt;li&gt;Go to &lt;strong&gt;My Account -&amp;gt; Security -&amp;gt; Create A New Token&lt;/strong&gt; (with Read and Write access)&lt;/li&gt;
&lt;li&gt;Copy the token &amp;amp; paste it into the variable named &lt;strong&gt;DH_TOKEN&lt;/strong&gt; created earlier in GitLab &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Run the CICD Pipeline
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Commit once on the main branch so the CI/CD pipeline is triggered automatically.&lt;/li&gt;
&lt;li&gt;Go to &lt;strong&gt;&lt;em&gt;Build -&amp;gt; Pipelines&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;After the pipeline completes successfully, it will look like this&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6y9u0vke3vf46q04ape.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6y9u0vke3vf46q04ape.png" alt="CICD Result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now open the public IP of the EC2 instance on the host port you specified in &lt;code&gt;deploy.sh&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;BYE&lt;/em&gt; 👋👋&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>docker</category>
      <category>devops</category>
      <category>react</category>
    </item>
    <item>
      <title>Deploy Web Application using Nginx and Docker on Ubuntu</title>
      <dc:creator>Jay Sheth</dc:creator>
      <pubDate>Tue, 09 Apr 2024 16:18:15 +0000</pubDate>
      <link>https://dev.to/jay_sheth/deploy-web-application-using-nginx-and-docker-on-ubuntu-3pm2</link>
      <guid>https://dev.to/jay_sheth/deploy-web-application-using-nginx-and-docker-on-ubuntu-3pm2</guid>
      <description>&lt;h2&gt;
  
  
  Prerequisites:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Ubuntu Machine&lt;/li&gt;
&lt;li&gt;Basic Docker knowledge &lt;/li&gt;
&lt;li&gt;Node and NPM on Local Device and Ubuntu&lt;/li&gt;
&lt;li&gt;Docker on Ubuntu machine&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Flow:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;On your local machine, create a React app (if you don't already have one) with the following commands
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx create-react-app react-app
cd react-app
npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; You can name your project anything; it is react-app in this demo&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Create a &lt;code&gt;Dockerfile&lt;/code&gt; at the root level of the react-app directory with the content below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:lts-alpine as build  # Use Node.js LTS Alpine image as build environment

WORKDIR /app  # Set working directory inside the container to /app

COPY package*.json ./  # Copy package.json and package-lock.json files to /app directory

RUN npm ci  # Install dependencies using npm's CI mode for faster and consistent builds

COPY . .  # Copy the rest of the application files to /app directory

RUN npm run build  # Build the application using npm script

FROM nginx:latest as prod  # Use the latest Nginx image as production environment

COPY --from=build /app/build /usr/share/nginx/html  # Copy built files from the previous stage to Nginx HTML directory

COPY nginx.conf /etc/nginx/nginx.conf  # Copy custom Nginx configuration file to override default configuration

EXPOSE 80/tcp  # Expose port 80 for incoming HTTP traffic

CMD ["/usr/sbin/nginx", "-g", "daemon off;"]  # Start Nginx server with daemon off for foreground execution

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;We will also need an Nginx configuration file that listens for connections on port 80. Create a new file named &lt;code&gt;nginx.conf&lt;/code&gt; at the root level of the project directory.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {

  include mime.types;

  set_real_ip_from        0.0.0.0/0;
  real_ip_recursive       on;
  real_ip_header          X-Forwarded-For;
  limit_req_zone          $binary_remote_addr zone=mylimit:10m rate=10r/s;

  server {
    listen 80;
    server_name localhost;
    root /proxy;
    limit_req zone=mylimit burst=70 nodelay;

    location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
            try_files $uri /index.html;   
    }
  }
}

events {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Push this code to a GitHub repo&lt;/li&gt;
&lt;li&gt;Deploy an Ubuntu machine on any cloud provider if you don't have one set up locally&lt;/li&gt;
&lt;li&gt;The Ubuntu machine needs Node, npm, and Docker installed; if they are not configured yet, do so.&lt;/li&gt;
&lt;li&gt;After installing Node, npm, and Docker, pull the code from the GitHub repo &amp;amp; run these commands to run our application on the server.&lt;/li&gt;
&lt;li&gt;This command will create an image from the above-mentioned &lt;code&gt;Dockerfile&lt;/code&gt;.
&lt;code&gt;docker build -t react-app-image:1.0.0 .&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Now we will create a container from the image we built by running the following command.
&lt;code&gt;docker run -d -p 80:80 --name react-server react-app-image:1.0.0&lt;/code&gt;
Now our application should be running on &lt;code&gt;PORT 80&lt;/code&gt;. If you are using a cloud Ubuntu machine, open &lt;code&gt;public_ip_of_machine:80&lt;/code&gt; in the browser; if you are deploying on a local device, open &lt;code&gt;localhost:80&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
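
To confirm the container is up and serving, you can check from the Ubuntu machine itself, e.g.:

```shell
# List the running container and probe the site locally
docker ps --filter name=react-server
curl -I http://localhost:80
```
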

&lt;p&gt;&lt;em&gt;&lt;strong&gt;BYE 👋👋&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>react</category>
      <category>docker</category>
      <category>ubuntu</category>
      <category>nginx</category>
    </item>
    <item>
      <title>Deploy &amp; Host a React Application on NGINX with Ubuntu</title>
      <dc:creator>Jay Sheth</dc:creator>
      <pubDate>Sun, 07 Apr 2024 08:47:59 +0000</pubDate>
      <link>https://dev.to/jay_sheth/deploy-host-a-react-application-on-nginx-with-ubuntu-m4l</link>
      <guid>https://dev.to/jay_sheth/deploy-host-a-react-application-on-nginx-with-ubuntu-m4l</guid>
      <description>&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Ubuntu Machine &lt;/li&gt;
&lt;li&gt;Nginx on Ubuntu &lt;/li&gt;
&lt;li&gt;Node and NPM on local device and ubuntu &lt;/li&gt;
&lt;li&gt;Template React App &lt;/li&gt;
&lt;li&gt;GitHub&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Flow
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Deploy an Ubuntu machine on any cloud provider if you don't have one set up locally &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On Ubuntu, run the following commands to install Nginx &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Update package index:&lt;/strong&gt; Before installing any new software, it's a good practice to update the package index to ensure you're getting the latest versions. &lt;br&gt;
&lt;code&gt;sudo apt update&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Install Nginx:&lt;/strong&gt; Use apt to install Nginx. &lt;br&gt;
&lt;code&gt;sudo apt install nginx&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start Nginx:&lt;/strong&gt; Once installed, Nginx should start automatically. If not, you can start it manually. &lt;br&gt;
&lt;code&gt;sudo systemctl start nginx&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enable Nginx to start on boot:&lt;/strong&gt; If you want Nginx to start automatically whenever your instance is rebooted, you can enable it. &lt;br&gt;
&lt;code&gt;sudo systemctl enable nginx&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On your local device, run this command &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;npx create-react-app react-app&lt;/code&gt; &lt;br&gt;
&lt;strong&gt;NOTE:&lt;/strong&gt; I have used react-app as the name of my app; you can use whatever name you want &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Push this code to GitHub &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On Ubuntu, run this command &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;git clone https://github.com/exampleuser/example-repository.git&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Note:&lt;/strong&gt; The URL above is an example; provide your own repo URL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;npm run build&lt;/code&gt; (inside the cloned repo, after running &lt;code&gt;npm install&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;cd /etc/nginx/sites-available&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;sudo vim react-app&lt;/code&gt; &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Paste below code in vim editor&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server { 

        listen 80; 

        listen [::]:80;  

        root /var/www/react-app; 

        index index.html; 

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;cd /var/www/&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sudo mkdir react-app&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sudo chmod -R 755 /var/www/react-app&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sudo chown -R www-data:www-data /var/www/react-app&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cd $HOME&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sudo cp -r react-app/build/* /var/www/react-app/&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sudo unlink /etc/nginx/sites-enabled/default&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo ln -s /etc/nginx/sites-available/react-app /etc/nginx/sites-enabled/&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sudo nginx -t&lt;/code&gt; (check the output; the configuration test should report success) &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sudo systemctl restart nginx&lt;/code&gt; &lt;/li&gt;
&lt;/ul&gt;
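
The steps above can be collected into a single sketch script (paths assume the defaults used in this post, with the React app cloned into your home directory):

```shell
#!/bin/bash
# Sketch: publish the React build through Nginx; assumes ~/react-app/build exists
set -e

sudo mkdir -p /var/www/react-app
sudo cp -r "$HOME/react-app/build/"* /var/www/react-app/
sudo chmod -R 755 /var/www/react-app
sudo chown -R www-data:www-data /var/www/react-app

# Swap the default site for our react-app site definition
sudo unlink /etc/nginx/sites-enabled/default
sudo ln -s /etc/nginx/sites-available/react-app /etc/nginx/sites-enabled/

# Validate the configuration before restarting
sudo nginx -t
sudo systemctl restart nginx
```
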

&lt;p&gt;Now open the public IP of the Ubuntu machine if deployed in the cloud; if launched locally, open localhost:80 &lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Bye!!&lt;/strong&gt;&lt;/em&gt; 👋&lt;/p&gt;

</description>
      <category>react</category>
      <category>javascript</category>
      <category>ubuntu</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Account Management with Account Factory for Terraform [Demo]</title>
      <dc:creator>Jay Sheth</dc:creator>
      <pubDate>Fri, 10 Nov 2023 09:47:22 +0000</pubDate>
      <link>https://dev.to/jay_sheth/account-management-with-account-factory-for-terraform-demo-1ce4</link>
      <guid>https://dev.to/jay_sheth/account-management-with-account-factory-for-terraform-demo-1ce4</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;What is Account Factory For Terraform?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;AWS Control Tower Account Factory for Terraform (AFT) Terraform module&lt;/strong&gt; makes it simple to create and customize new accounts that adhere to your organization's security policies. AFT combines Terraform's workflow with Control Tower's governance capabilities by establishing a pipeline for the automatic, reliable creation of AWS Control Tower accounts. The module is maintained by AWS.&lt;br&gt;
This tutorial walks you through the one-time steps needed to deploy AFT and establish the account creation pipeline. You will then use AFT to create and customize your Control Tower accounts. In this lesson, you will deploy the AFT module, review the customization options for the accounts it supports, and explore the components of AFT and its workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Implementation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;PREREQUISITES:&lt;/strong&gt;&lt;br&gt;
Before we start our walkthrough, there are some prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You should have AWS Control Tower environment deployed and active.&lt;/li&gt;
&lt;li&gt;An AWS account with credentials for a non-root user with AdministratorAccess.&lt;/li&gt;
&lt;li&gt;A new root email address for a new vended AWS account that you’ll submit through AFT.&lt;/li&gt;
&lt;li&gt;A new or existing Organizational Units (OU) governed by AWS Control Tower, which is needed as part of new account request parameter in AFT.&lt;/li&gt;
&lt;li&gt;Integrated development environment (IDE) with Git, Terraform, and AWS Command Line Interface (AWS CLI) installed. Your IDE environment must be configured with AWS credentials to your AFT Management account.&lt;/li&gt;
&lt;li&gt;Make sure to specify the AWS Control Tower home region in the commands where applicable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;STEP 1: Create AWS AFT Organizational Unit and Account&lt;/strong&gt;&lt;br&gt;
From now on we will be referencing two accounts: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Control Tower management account:&lt;/strong&gt; The account in which we have launched AWS Control Tower.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Account Factory management account:&lt;/strong&gt; This account will be provisioned in this section.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In your Control Tower management account, navigate to AWS Control Tower and open Organization from the left pane. Select &lt;strong&gt;Create Resources&lt;/strong&gt; and then &lt;strong&gt;Create Organization Unit&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3fcmcjzz61dthoc7r0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3fcmcjzz61dthoc7r0d.png" alt="Fig 1"&gt;&lt;/a&gt;&lt;br&gt;
Name the OU (we have kept it as &lt;strong&gt;Learn AFT&lt;/strong&gt;), then select the Root OU as the parent OU.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsmwz8514zmshbwrsbfc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsmwz8514zmshbwrsbfc.png" alt="Fig 2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now navigate to Account Factory and select &lt;strong&gt;Create Account&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yt6b9qrd1nqacgnn13y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yt6b9qrd1nqacgnn13y.png" alt="Fig 3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;Account Email&lt;/strong&gt; field, enter an email that is not associated with any AWS account; this will be the account's root email address. In &lt;strong&gt;IAM Identity Center user email&lt;/strong&gt;, enter an email you have access to, and select Learn AFT as your &lt;strong&gt;Organization Unit&lt;/strong&gt;. Fill in all other fields as per your preference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrmz8e05p2ki6b1kzitz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrmz8e05p2ki6b1kzitz.png" alt="Fig 4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Account provisioning can take up to &lt;strong&gt;&lt;em&gt;30 minutes&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Clone and fork the example configurations&lt;/strong&gt;&lt;br&gt;
We will be working with five repositories: one with the AFT module deployment, and four in which the module requires you to define your account specifications. The first repository, &lt;a href="https://github.com/hashicorp/learn-terraform-aws-control-tower-aft" rel="noopener noreferrer"&gt;learn-terraform-aws-control-tower-aft&lt;/a&gt;, is a one-time setup that creates the required infrastructure across the Control Tower management account &amp;amp; the AFT management account. It creates 327 resources in these accounts, and these resources let us create accounts in our Control Tower and apply all the customizations.&lt;br&gt;
AFT supports multiple VCS providers, such as AWS CodeCommit, GitHub, Bitbucket, and GitHub Enterprise Server. By default it uses AWS CodeCommit for repositories; in our case, we will use GitHub as the VCS.&lt;br&gt;
First, clone &lt;a href="https://github.com/hashicorp/learn-terraform-aws-control-tower-aft" rel="noopener noreferrer"&gt;learn-terraform-aws-control-tower-aft&lt;/a&gt;, containing the AFT module configuration, in the AFT management account.&lt;br&gt;
Then fork the next four repositories into your GitHub account.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The &lt;a href="https://github.com/hashicorp/learn-terraform-aft-account-request" rel="noopener noreferrer"&gt;learn-terraform-aft-account-request repository&lt;/a&gt;: provides an example setup for starting AFT-based new account provisioning.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://github.com/hashicorp/learn-terraform-aft-global-customizations" rel="noopener noreferrer"&gt;learn-terraform-aft-global-customizations repository&lt;/a&gt;: provides boilerplate setup for adjustments that will be applied to each account that AFT creates.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://github.com/hashicorp/learn-terraform-aft-account-customizations" rel="noopener noreferrer"&gt;learn-terraform-aft-account-customizations repository&lt;/a&gt;: includes default settings for account-specific adjustments.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://github.com/hashicorp/learn-terraform-aft-account-provisioning-customizations" rel="noopener noreferrer"&gt;learn-terraform-aft-account-provisioning-customizations repository&lt;/a&gt;: provides default settings that can be applied to accounts at provisioning time.
Clone your copies of these repositories to your computer.&lt;/li&gt;
&lt;/ol&gt;
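
Cloning the five repositories from your terminal looks like this (for the four customization repositories, replace hashicorp with your own GitHub account, since you are cloning your forks):

```shell
# One-time AFT module configuration (cloned as-is)
git clone https://github.com/hashicorp/learn-terraform-aws-control-tower-aft

# Your forks of the four customization repositories
git clone https://github.com/hashicorp/learn-terraform-aft-account-request
git clone https://github.com/hashicorp/learn-terraform-aft-global-customizations
git clone https://github.com/hashicorp/learn-terraform-aft-account-customizations
git clone https://github.com/hashicorp/learn-terraform-aft-account-provisioning-customizations
```
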

&lt;p&gt;&lt;strong&gt;Step 3: Deploy AFT module&lt;/strong&gt;&lt;br&gt;
The AFT module, maintained by the AWS team, deploys multiple services that help you provision and customize accounts in Control Tower.&lt;br&gt;
In your terminal, navigate to the &lt;a href="https://github.com/hashicorp/learn-terraform-aws-control-tower-aft" rel="noopener noreferrer"&gt;learn-terraform-aws-control-tower-aft repository&lt;/a&gt; you cloned earlier.&lt;br&gt;
&lt;strong&gt;1. Update AFT module configuration&lt;/strong&gt;&lt;br&gt;
Open the &lt;em&gt;main.tf&lt;/em&gt; file in your IDE, then review and configure it according to your requirements. This module provisions resources across the Log, Audit, Control Tower management, and AFT management accounts in your Landing Zone.&lt;br&gt;
In &lt;em&gt;terraform.tfvars&lt;/em&gt;, provide your AWS account IDs for &lt;em&gt;ct_management_account_id, log_archive_account_id, audit_account_id, aft_management_account_id&lt;/em&gt;. For &lt;em&gt;ct_home_region&lt;/em&gt;, use the region Control Tower is enabled in, and provide your GitHub username in the &lt;em&gt;github_username&lt;/em&gt; variable.&lt;br&gt;
By setting feature flags, you may disable the default VPC in accounts or enable CloudTrail recording at the organizational level.&lt;br&gt;
&lt;strong&gt;2. Apply configuration&lt;/strong&gt;&lt;br&gt;
After AFT management account is provisioned, we will start deploying AFT module. Configure your terminal with the AWS credentials for a user with &lt;strong&gt;AdminstratorAccess&lt;/strong&gt; in your Control Tower management account.&lt;br&gt;
Initialize the configuration to install the AWS provider and download the AFT module by running &lt;em&gt;terraform init&lt;/em&gt; command. Now apply your configuration by running &lt;em&gt;terraform apply&lt;/em&gt; to provision all the services. Respond &lt;em&gt;yes&lt;/em&gt; to confirm the operation. This will take 15 to 20 mins for deployment.&lt;/p&gt;
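&lt;p&gt;As a sketch, the variables described above might be filled in like this (all account IDs and the username below are placeholders; confirm the exact variable names against the &lt;em&gt;terraform.tfvars&lt;/em&gt; in your cloned repository):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# terraform.tfvars -- placeholder values, replace with your own
ct_management_account_id  = "111111111111"
log_archive_account_id    = "222222222222"
audit_account_id          = "333333333333"
aft_management_account_id = "444444444444"
ct_home_region            = "us-east-1"   # region where Control Tower is enabled
github_username           = "your-github-username"
&lt;/code&gt;&lt;/pre&gt;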

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3l9wnkky5nvtxhz6bi4z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3l9wnkky5nvtxhz6bi4z.png" alt="Fig 5"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; AFT creates a large number of resources, and the purpose of some of them may not be obvious. The PrivateLink interface endpoints and NAT gateways account for most of the cost: the interface endpoints allow AWS services to be reached privately without traversing public endpoints, which improves data protection and security, and the NAT gateway is required for AWS CodeBuild to communicate with external services.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Review AFT components and workflow
One of the many advantages AFT has over manual account provisioning is the ability to queue multiple account requests with your configuration. AWS Control Tower currently allows you to create only one account at a time, but AFT uses DynamoDB and SQS to queue your account requests, making batched account creation more efficient.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpaps5cneiyzpzpf2jr52.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpaps5cneiyzpzpf2jr52.png" alt="Fig 6"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, you must create an account request file with the required attributes for the account to be provisioned. You must also specify the customizations you wish to apply to the account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy56f39fm6qfir3rvwygu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy56f39fm6qfir3rvwygu.png" alt="Fig 7"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you push your files to GitHub repositories, AFT triggers a workflow that will provision and customize your account.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;CodePipeline launches CodeBuild projects to populate a DynamoDB table item with your new account information. The new item initializes a Lambda that records your account requests in SQS, allowing you to create many new accounts at the same time.&lt;/li&gt;
&lt;li&gt;The new SQS messages trigger Lambda functions, which process your account request and begin the account vending process in Control Tower. AFT also creates an account-specific pipeline to manage the customization of your new account, as well as an execution role in your new account that it uses to apply those customizations.&lt;/li&gt;
&lt;li&gt;When AFT creates a new account, it triggers Lambda functions that activate your account-specific pipeline, which applies global and account-specific customizations. If you use Terraform configuration to create resources in your account, the state of those resources is stored in S3. AFT applies account changes within your new account using the execution role it created.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Enable the CodeStar connection&lt;/strong&gt;&lt;br&gt;
Because we are using GitHub as our VCS, we need a CodeStar connection. The AFT module sets one up to watch for changes to the repositories.&lt;br&gt;
Once the AFT module has set up the AFT management account, log in to it, navigate to CodeStar connections in the AWS Management Console, find &lt;em&gt;ct-aft-github-connection&lt;/em&gt;, click on it, and select &lt;em&gt;Update pending connection&lt;/em&gt;. Follow the workflow to install a new app and connect it to your personal GitHub account. After configuring it, click &lt;em&gt;Connect&lt;/em&gt; to enable the AWS Connector for GitHub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grant AFT access to the Service Catalog portfolio&lt;/strong&gt;&lt;br&gt;
Log into the Control Tower management account in the AWS console, navigate to Portfolios on the Service Catalog page, and click on the &lt;em&gt;AWS Control Tower Account Factory Portfolio&lt;/em&gt;. Select the &lt;em&gt;Groups, roles, and users&lt;/em&gt; tab, then click &lt;em&gt;Add groups, roles, users&lt;/em&gt;. Select the Roles tab, then search for &lt;em&gt;AWSAFTExecution&lt;/em&gt;. Check the box next to it and click Add access.&lt;/p&gt;

&lt;p&gt;Navigate to the CodePipeline page in your AFT management account. Select the &lt;em&gt;ct-aft-account-provisioning-customizations&lt;/em&gt; pipeline and click &lt;em&gt;Release change&lt;/em&gt; to restart the process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploy an account with AFT&lt;/strong&gt;&lt;br&gt;
Now we will use AFT to provision a new account in Control Tower. Navigate to your cloned &lt;em&gt;learn-terraform-aft-account-request&lt;/em&gt; repository. Open &lt;em&gt;terraform/main.tf&lt;/em&gt;, which contains an instance of &lt;em&gt;aft-account-request&lt;/em&gt;. Configure the &lt;em&gt;control_tower_parameters, account_tags, change_management_parameters, custom_fields, account_customizations_name&lt;/em&gt; attributes according to your requirements, then push the code to your GitHub repository.&lt;br&gt;
The CodePipeline that AFT created listens for changes to your account request and customization repositories. &lt;/p&gt;
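&lt;p&gt;For illustration, an account request in &lt;em&gt;terraform/main.tf&lt;/em&gt; might look roughly like this. All values below are placeholders; confirm the exact attribute names against the module instance in your cloned repository:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;module "sandbox_account" {
  source = "./modules/aft-account-request"

  control_tower_parameters = {
    AccountEmail              = "sandbox@example.com"   # placeholder
    AccountName               = "sandbox"
    ManagedOrganizationalUnit = "Sandbox"
    SSOUserEmail              = "admin@example.com"     # placeholder
    SSOUserFirstName          = "Jane"
    SSOUserLastName           = "Doe"
  }

  account_tags = {
    "Team" = "platform"
  }

  change_management_parameters = {
    change_requested_by = "Jane Doe"
    change_reason       = "New sandbox account"
  }

  custom_fields               = {}
  account_customizations_name = "sandbox"
}
&lt;/code&gt;&lt;/pre&gt;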

&lt;p&gt;&lt;strong&gt;Global customizations&lt;/strong&gt;&lt;br&gt;
Global customizations apply to all AFT accounts. This enables you to automatically enforce security standards or provision standardised resources and infrastructure in each new account, making compliance with your organization's standards easier.&lt;br&gt;
Navigate to the cloned &lt;em&gt;learn-terraform-aft-global-customizations&lt;/em&gt; repository. Move into the &lt;em&gt;terraform&lt;/em&gt; folder and create a file named &lt;em&gt;main.tf&lt;/em&gt; with a Terraform configuration for the customization you require. By default, this configuration does not define any global customizations for your account.&lt;/p&gt;
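&lt;p&gt;As a minimal example of a global customization -- assuming, purely for illustration, that you want every vended account to get a baseline IAM password policy -- the configuration could contain something like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# terraform/main.tf -- example global customization (illustrative only)
resource "aws_iam_account_password_policy" "baseline" {
  minimum_password_length = 14
  require_symbols         = true
  require_numbers         = true
}
&lt;/code&gt;&lt;/pre&gt;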

&lt;p&gt;&lt;strong&gt;Account customizations&lt;/strong&gt;&lt;br&gt;
Account customizations are applied to a specific account or set of accounts by AFT. It uses the customizations defined in the repository whose name you specify in the &lt;em&gt;account_customizations_name&lt;/em&gt; input variable.&lt;br&gt;
That input variable's value must correspond to a subfolder in your account customizations repository. Account customizations let you make specific changes to groups of accounts, such as imposing tougher access guardrails on accounts that manage production resources.&lt;br&gt;
Navigate to the cloned &lt;em&gt;learn-terraform-aft-account-customizations&lt;/em&gt; repository and open the subfolder matching the specified &lt;em&gt;account_customizations_name&lt;/em&gt;. From there, navigate to the &lt;em&gt;terraform&lt;/em&gt; directory and open the file named &lt;em&gt;s3.tf&lt;/em&gt;. Note that &lt;em&gt;s3.tf&lt;/em&gt; is just an example file; you can modify it according to your specific customization needs.&lt;/p&gt;
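&lt;p&gt;A sketch of what such an &lt;em&gt;s3.tf&lt;/em&gt; might contain -- the bucket name below is a placeholder, and S3 bucket names must be globally unique:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# terraform/s3.tf -- example account customization (illustrative only)
resource "aws_s3_bucket" "logs" {
  bucket = "my-account-logs-placeholder-name"  # placeholder, must be globally unique
}

resource "aws_s3_bucket_versioning" "logs" {
  bucket = aws_s3_bucket.logs.id
  versioning_configuration {
    status = "Enabled"
  }
}
&lt;/code&gt;&lt;/pre&gt;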

&lt;p&gt;&lt;strong&gt;Inspect new account&lt;/strong&gt;&lt;br&gt;
Verify that AFT created your new account by finding the &lt;em&gt;AFT management account&lt;/em&gt; in the list of accounts in your Control Tower Accounts dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszt4zyeqoq73nfyb23cu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszt4zyeqoq73nfyb23cu.png" alt="Fig 8"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In conclusion, AWS Control Tower Account Factory for Terraform (AFT) brings powerful automation to account creation and customization. By queuing multiple account requests, applying global and account-specific customizations, and seamlessly integrating Terraform with Control Tower's governance, AFT ensures security policy adherence, efficient workflows, and scalable provisioning. With support for multiple VCS providers such as GitHub, businesses can maintain standardized security practices and enhance operational efficiency across their AWS environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Useful Links&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/aws/new-aws-control-tower-account-factory-for-terraform/" rel="noopener noreferrer"&gt;AWS Blog 1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/mt/deploy-and-customize-aws-accounts-using-account-factory-for-terraform-in-aws-control-tower/" rel="noopener noreferrer"&gt;AWS Blog 2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/controltower/latest/userguide/taf-account-provisioning.html" rel="noopener noreferrer"&gt;AWS Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=8Ot5wn7kxI0" rel="noopener noreferrer"&gt;AWS re:Invent video&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Co-First Author&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/rajdeepsinh-jadeja-854b951aa/" rel="noopener noreferrer"&gt;Rajdeep Jadeja&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>cloud</category>
      <category>automation</category>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Introduction to Amazon Detective</title>
      <dc:creator>Jay Sheth</dc:creator>
      <pubDate>Wed, 14 Jun 2023 21:10:54 +0000</pubDate>
      <link>https://dev.to/jay_sheth/introduction-to-amazon-detective-289j</link>
      <guid>https://dev.to/jay_sheth/introduction-to-amazon-detective-289j</guid>
      <description>&lt;blockquote&gt;
&lt;h2&gt;
  
  
  What is Amazon Detective?
&lt;/h2&gt;

&lt;p&gt;Amazon Detective simplifies the analysis and investigative process across your AWS accounts, enabling your team to quickly and easily determine the root cause of a potential security issue. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In AWS, security is extremely important. You can find multiple AWS services that send you an alert when an issue arises, but Amazon Detective helps you dig deeper and get granular-level detail. &lt;/p&gt;

&lt;h2&gt;
  
  
  How does Amazon Detective work?
&lt;/h2&gt;

&lt;p&gt;When you enable Detective in your AWS account, the service automatically collects and analyzes millions of events from multiple data sources and provides easy-to-understand visual insights to interact with the analysis. So instead of manually inspecting raw logs, you can visualize the details relating to an issue in one place and answer your security questions. &lt;/p&gt;

&lt;p&gt;It collects logs from CloudTrail management events, VPC network traffic, and GuardDuty findings, and then uses machine learning, statistical analysis, and graph theory to generate a visualization. &lt;/p&gt;

&lt;p&gt;Use cases of Amazon Detective are: &lt;br&gt;
&lt;strong&gt;1. Finding/Alert Triage:&lt;/strong&gt; Suppose you have received a GuardDuty finding and are uncertain whether you should be concerned. Detective can answer your questions, helping you accelerate triage and avoid unnecessary escalation. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Incident Investigation:&lt;/strong&gt; If the finding is of concern, the triage process becomes an incident investigation. Detective lets you see analysis going back up to 1 year, helping you answer questions like how long the security issue has been going on and how many resources have been affected by it. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Threat Hunting:&lt;/strong&gt; Suppose you want to know what kind of interactions an IP address had in your environment; Detective lets you search for that IP address directly and explore its activity. &lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon Detective is a Multi-Account Service
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_QiClLwL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6sqmw7fk4mgn0kkk3bdr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_QiClLwL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6sqmw7fk4mgn0kkk3bdr.png" alt="Figure 1" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Customers with multiple accounts who want to centralize security investigation can use Amazon Detective. You must enable Detective in one of your accounts; let's call it the master account. Detective builds a security behavior graph from the logs collected from CloudTrail, VPC network traffic, and GuardDuty findings. &lt;/p&gt;

&lt;p&gt;The master account can send invitations to other accounts, and their CloudTrail logs, VPC network traffic, and GuardDuty findings will be shared with the master account. Therefore, it's essential to follow best practices for managing data access and security to ensure that only authorized users have access to sensitive information. &lt;/p&gt;
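&lt;p&gt;The article uses the console, but if you prefer to codify this multi-account setup, the Terraform AWS provider offers Detective resources. A sketch, where the member account ID and email are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# In the master (administrator) account: create the behavior graph
resource "aws_detective_graph" "main" {}

# Invite a member account into the behavior graph
# (account_id and email_address below are placeholders)
resource "aws_detective_member" "member" {
  account_id    = "123456789012"
  email_address = "member@example.com"
  graph_arn     = aws_detective_graph.main.id
  message       = "Please join our Detective behavior graph"
}
&lt;/code&gt;&lt;/pre&gt;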

&lt;h2&gt;
  
  
  Enabling AWS Security Findings in the Amazon Detective Console
&lt;/h2&gt;

&lt;p&gt;When you enable Detective for the first time, it identifies findings from both GuardDuty and Security Hub and begins ingesting them alongside other data sources.&lt;/p&gt;

&lt;p&gt;Detective begins analyzing all relevant data to identify links between disparate events and activities. You get a visualization of these connections, including resource behavior and activities, to start your investigation process. After two weeks, historical baselines are established, which can be used to provide comparisons with recent activity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;The Amazon Detective search interface serves as a common place for new users. Within this interface, you have the ability to search using various criteria, including &lt;em&gt;GuardDuty Finding, AWS account, AWS Role, EC2 Instance, IP address, Role Session, User, and User agent.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To search specifically for an &lt;em&gt;AWS Role&lt;/em&gt;, select it from the dropdown list and enter the desired role in the search bar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VEYqH8_K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ud3btkv62e9d9p0abqql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VEYqH8_K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ud3btkv62e9d9p0abqql.png" alt="Figure 2" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Upon performing the search, you will be redirected to the profile page for the respective AWS Role. It is worth noting that Detective provides a similar profile page for every resource. To begin, adjust the &lt;strong&gt;Scope Time&lt;/strong&gt; to the desired time frame.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lfpScS8u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mdmve4sud4ci4am7b65u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lfpScS8u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mdmve4sud4ci4am7b65u.png" alt="Figure 3" width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The profile page is divided into multiple tabs: &lt;strong&gt;Overview, New Behavior, and Resource Interaction.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Overview tab provides high-level information, including:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Information related to the Role.&lt;/li&gt;
&lt;li&gt;Findings associated with the resource.&lt;/li&gt;
&lt;li&gt;The "Overall API call volume" panel displays all successful and failed API calls made using this Role.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m9i5EjZO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/21vp02p4omx5z7wnwssc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m9i5EjZO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/21vp02p4omx5z7wnwssc.png" alt="Figure 4" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5v83Wu29--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/izn3q608ypkntr5s4jpf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5v83Wu29--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/izn3q608ypkntr5s4jpf.png" alt="Figure 5" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the Resource Interaction tab, you can observe:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Who has assumed this Role.&lt;/li&gt;
&lt;li&gt;The Roles this Role has assumed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dYWXuDso--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8nea25wny4lolyqmzecf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dYWXuDso--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8nea25wny4lolyqmzecf.png" alt="Figure 6" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The New Behavior tab highlights any behavior exhibited by the Role that had not been observed before the selected scope time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing Model
&lt;/h2&gt;

&lt;p&gt;Amazon Detective has a tiered pricing model that is based upon the volume of data that the service ingests, and the analytics and summaries of ingested data are kept for 1 year.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RL6-EhHK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lbzhcf9hncpvl4dkyy41.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RL6-EhHK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lbzhcf9hncpvl4dkyy41.png" alt="Figure 2 - Pricing Model" width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Useful Links&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/detective/latest/adminguide/what-is-detective.html"&gt;Amazon Detective&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=fmm4PXhg8BY"&gt;Amazon Detective Overview and Demonstration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=-j7p-uOT7Ds"&gt;Introduction to Amazon Detective&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
      <category>cloud</category>
      <category>aws</category>
      <category>detective</category>
    </item>
    <item>
      <title>Why Containers are more popular than Virtual Machine?</title>
      <dc:creator>Jay Sheth</dc:creator>
      <pubDate>Wed, 31 May 2023 18:53:17 +0000</pubDate>
      <link>https://dev.to/jay_sheth/why-containers-are-more-popular-than-virtual-machine-3g7b</link>
      <guid>https://dev.to/jay_sheth/why-containers-are-more-popular-than-virtual-machine-3g7b</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Virtualization&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Virtualization&lt;/em&gt; is quite an old concept. It began in the 1960s as a way to logically divide the system resources of mainframe computers among individual applications. Despite being an old technology, it remains a relevant part of cloud computing.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Virtualization&lt;/em&gt; uses software to create an abstraction layer over hardware, allowing hardware elements such as storage, compute, and memory to be distributed among multiple virtual machines (VMs). Each VM has its own operating system (OS), acting like an individual machine even though it shares the underlying hardware with multiple other VMs.&lt;/p&gt;

&lt;p&gt;The abstraction layer is a piece of software known as a &lt;em&gt;Hypervisor&lt;/em&gt;. It is a crucial component in the virtualization process which serves as an interface between the VM and the underlying physical hardware. It ensures that VMs do not interrupt each other. It stands on top of a host or a physical server. The main task of a &lt;em&gt;hypervisor&lt;/em&gt; is to pool resources from the physical server and allocate them to different virtual environments.&lt;/p&gt;

&lt;p&gt;There are two types of hypervisors:&lt;br&gt;
&lt;strong&gt;Type 1 or "bare metal" hypervisors&lt;/strong&gt;: A Type 1 hypervisor runs directly on the physical hardware of the underlying machine, interacting with its CPU, memory, and physical storage.&lt;br&gt;
&lt;strong&gt;Type 2 hypervisors&lt;/strong&gt;: A Type 2 hypervisor does not run directly on the underlying hardware. Instead, it runs as an application in an OS.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Virtual Machine&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fel7v0g42v5j433smcfho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fel7v0g42v5j433smcfho.png" alt="Virtual Machine"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A &lt;em&gt;virtual machine&lt;/em&gt; (VM) is a virtual environment that behaves like a virtual computer system, &lt;strong&gt;complete with its own CPU, memory, network interface, and storage.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS refers to them as &lt;em&gt;EC2 instances&lt;/em&gt;; &lt;em&gt;EC2 instances&lt;/em&gt; are &lt;em&gt;virtual machines&lt;/em&gt; that emulate physical hardware components. An &lt;em&gt;EC2 instance&lt;/em&gt; can do anything that a physical computer can do. You choose your compute options based on: &lt;strong&gt;CPU, memory, and storage.&lt;/strong&gt; You choose the OS and maintain all security and patching of the instance. You can scale up or down the resources as needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Containers&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fniw2sahnlj44uwiortgb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fniw2sahnlj44uwiortgb.png" alt="Containers"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Containers&lt;/em&gt; are software components that package application code, along with required executables such as &lt;strong&gt;libraries, dependencies, binary codes, and configuration files&lt;/strong&gt;, using operating system virtualization in a standardized manner. &lt;em&gt;Containers&lt;/em&gt; can run on various platforms, including desktops, traditional IT environments, and cloud infrastructures.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Containers&lt;/em&gt; are &lt;strong&gt;lightweight, portable, and swift&lt;/strong&gt; because, unlike virtual machines, they do not include a full operating system image. As a result, they have less overhead and can leverage the features and resources of the host operating system, making them highly portable and easy to deploy.&lt;/p&gt;

&lt;p&gt;AWS provides two services for container management:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Amazon Elastic Container Service (Amazon ECS)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Amazon Elastic Kubernetes Service (Amazon EKS)&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Amazon Elastic Container Service (Amazon ECS)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Elastic Container Service (ECS)&lt;/em&gt; is a fully managed container orchestration service provided by AWS. &lt;em&gt;ECS&lt;/em&gt; simplifies the &lt;strong&gt;deployment, management, and scaling of Docker containers on AWS, and enables customers to build highly scalable and resilient microservices-based applications.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ECS&lt;/em&gt; has two main components: the &lt;strong&gt;ECS service&lt;/strong&gt; and the &lt;strong&gt;ECS agent&lt;/strong&gt;. The &lt;strong&gt;ECS service&lt;/strong&gt; is responsible for managing the container instances, tasks, and services, and provides APIs and a console for customers to interact with. The &lt;strong&gt;ECS agent&lt;/strong&gt; is a lightweight daemon that runs on each EC2 instance or Fargate instance and communicates with the ECS service to register the instance and start, stop, and monitor containers.&lt;/p&gt;

&lt;p&gt;There are two ways to set up an Amazon ECS cluster:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;EC2&lt;/strong&gt;: With EC2 instances, customers can choose their own instances and scale the cluster as needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fargate&lt;/strong&gt;: With Fargate, AWS manages the instances for customers, and they only pay for the resources their containers use.&lt;/li&gt;
&lt;/ol&gt;
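&lt;p&gt;To illustrate the Fargate option, here is a minimal Terraform sketch (the cluster name is arbitrary; task definitions, services, and networking are omitted):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal ECS cluster backed by Fargate capacity (illustrative only)
resource "aws_ecs_cluster" "demo" {
  name = "demo-cluster"
}

resource "aws_ecs_cluster_capacity_providers" "demo" {
  cluster_name       = aws_ecs_cluster.demo.name
  capacity_providers = ["FARGATE"]
}
&lt;/code&gt;&lt;/pre&gt;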

&lt;p&gt;&lt;em&gt;ECS&lt;/em&gt; supports both &lt;strong&gt;Linux and Windows containers&lt;/strong&gt;, and provides features such as load balancing, auto scaling, service discovery, and integration with other AWS services.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ECS&lt;/em&gt; integrates with other AWS services such as &lt;strong&gt;Amazon CloudWatch for monitoring, AWS Identity and Access Management (IAM) for access control, and Amazon Elastic Container Registry (ECR) for storing and managing Docker images.&lt;/strong&gt; ECS also supports integration with third-party tools such as Jenkins.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Amazon Elastic Kubernetes Service (Amazon EKS)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Elastic Kubernetes Service (EKS)&lt;/em&gt; is a fully managed Kubernetes service provided by AWS. &lt;em&gt;EKS&lt;/em&gt; simplifies the &lt;strong&gt;deployment, management, and scaling of containerized applications using Kubernetes on AWS, and enables customers to build highly scalable and resilient microservices-based applications.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes is an open-source container orchestration system that automates the deployment, scaling, and management of containerized applications. EKS allows customers to run Kubernetes clusters on a managed cluster of EC2 instances or on Fargate, just as with ECS.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;EKS&lt;/em&gt; has two main components: the &lt;strong&gt;EKS control plane&lt;/strong&gt; and the &lt;strong&gt;worker nodes&lt;/strong&gt;. The &lt;strong&gt;EKS control plane&lt;/strong&gt; is responsible for managing the Kubernetes control plane, including the API server, etcd, and other components. The control plane is highly available, automatically scales, and is managed by AWS. The &lt;strong&gt;worker nodes&lt;/strong&gt; are the EC2 or Fargate instances running the containerized applications. &lt;em&gt;EKS&lt;/em&gt; automatically provisions, scales, and manages these nodes, and customers can use Amazon EC2 Auto Scaling groups to scale the nodes based on demand.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;EKS&lt;/em&gt; integrates with other AWS services such as &lt;strong&gt;Amazon CloudWatch for monitoring, AWS Identity and Access Management (IAM) for access control, and Amazon Elastic Container Registry (ECR) for storing and managing container images.&lt;/strong&gt; EKS also supports integration with third-party tools such as Jenkins.&lt;/p&gt;
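&lt;p&gt;For comparison, a minimal EKS control plane can be sketched in Terraform as follows. The role ARN and subnet IDs below are placeholders; in practice the cluster role needs the appropriate EKS policies attached, and node groups are defined separately:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal EKS control plane (illustrative only; placeholders throughout)
resource "aws_eks_cluster" "demo" {
  name     = "demo-cluster"
  role_arn = "arn:aws:iam::123456789012:role/eks-cluster-role"  # placeholder

  vpc_config {
    subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"]  # placeholders
  }
}
&lt;/code&gt;&lt;/pre&gt;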

&lt;h2&gt;
  
  
  &lt;strong&gt;Difference between containers and virtual machines (VMs)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlclep9vov0wwam8qgwu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlclep9vov0wwam8qgwu.png" alt="Containers and Virtual Machine"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Containers and virtual machines are two distinct approaches to virtualizing computing resources. Virtual machines virtualize all components down to the hardware level, generating multiple instances of operating systems on a single physical server. In contrast, containers virtualize solely the software layers above the operating system, forming lightweight packages that incorporate all the dependencies required for a software application. Containers can operate more workloads on a single operating system instance than virtual machines, making them faster, more flexible, and more portable.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Advantages&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Containerized applications are &lt;strong&gt;portable&lt;/strong&gt; and can be used in other cloud environments or returned to an on-premises datacenter, which helps businesses avoid vendor lock-in.&lt;/li&gt;
&lt;li&gt;Containers are more &lt;strong&gt;lightweight&lt;/strong&gt;. They start up faster, nearly instantly. This difference in startup time matters when designing applications that need to scale quickly during I/O bursts.&lt;/li&gt;
&lt;li&gt;Containers offer the &lt;strong&gt;flexibility and portability&lt;/strong&gt; that is ideal for the &lt;strong&gt;multi-cloud&lt;/strong&gt; world. When developers design new applications, they may not be aware of all of the locations where they will need to be deployed. Today, a corporation may run a program on its private cloud, but tomorrow it may need to deploy it on a public cloud. Containerizing applications gives teams the flexibility they need to deal with today's diverse software environments.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Use cases of Containers&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Increased developer productivity&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8v0h8mkh4t7ddgvcgu6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8v0h8mkh4t7ddgvcgu6v.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While testing an early version of an application, a developer can run it on their own machine without installing it directly on the host operating system or provisioning a separate testing environment. Containers also eliminate problems with environment configuration, handle scalability challenges, and simplify operations. Because containers solve so many of these challenges, developers can concentrate on development rather than on operations.&lt;/p&gt;

&lt;p&gt;Through a configuration file such as a Dockerfile, the application code, its dependencies, and the runtime engine are all packaged together into a container that can run independently in any environment.&lt;/p&gt;
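&lt;p&gt;As a minimal sketch of that packaging, here is what such a Dockerfile might look like for a hypothetical Node.js app (the &lt;code&gt;index.js&lt;/code&gt; entry point and file layout are illustrative, not from a specific project):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Start from a base image that already contains the runtime engine
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install --omit=dev

# Copy the application code itself
COPY . .

# The resulting image runs the same way on any container host
CMD ["node", "index.js"]
&lt;/code&gt;&lt;/pre&gt;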

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Great for CI/CD&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Containers also make it easier to build a CI/CD pipeline, ship more frequent updates, and create repeatable deployment processes. Because containers are lightweight and agile, each update involves far less code than updating an entire VM; and because containers run in the same environment at every stage of development, there is little risk that a containerized application will work perfectly in development and then fail in production.&lt;/p&gt;
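&lt;p&gt;A typical pipeline stage can be sketched as a few shell steps; the image name and registry below are placeholders, not a real project's setup:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Build the image once; every later stage reuses this exact artifact
docker build -t registry.example.com/myapp:$GIT_COMMIT .

# Run the test suite inside the same image that will ship to production
docker run --rm registry.example.com/myapp:$GIT_COMMIT npm test

# Push only after tests pass; deployment pulls this immutable tag
docker push registry.example.com/myapp:$GIT_COMMIT
&lt;/code&gt;&lt;/pre&gt;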

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Containers can run on IoT devices&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Containers are well suited to installing and updating applications on IoT devices. Because a container encompasses all the software an application needs to function, it is lightweight and easily transportable, which is particularly beneficial for devices with restricted resources.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Great for microservice architectures&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwophhii50qoby5ep6zy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwophhii50qoby5ep6zy.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Containers support microservice architectures, allowing individual application components to be deployed and scaled independently. This is preferable to scaling up an entire monolithic application simply because one component is under load.&lt;/p&gt;
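&lt;p&gt;On Kubernetes, for example, a single overloaded component can be scaled on its own while every other service keeps its current size (the deployment name below is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Scale only the checkout service to five replicas;
# no other service in the application is touched
kubectl scale deployment checkout-service --replicas=5
&lt;/code&gt;&lt;/pre&gt;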

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Hybrid and multi-cloud compatible&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pz3r02z3cl49nr0c0nz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pz3r02z3cl49nr0c0nz.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Containers provide flexibility in app deployment, allowing for a unified environment that can run on-premises and across multiple cloud platforms. This makes it possible to optimize costs and enhance operational efficiency by leveraging existing infrastructure and matching each workload to the cloud provider that suits it best.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Containers have surpassed virtual machines in popularity due to their lightweight design, shorter deployment times, and efficient resource utilization, particularly for modern, cloud-native apps. However, both technologies will continue to coexist and evolve, and organizations should select the best tool for the job based on their specific needs and goals.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>virtualmachine</category>
      <category>containers</category>
    </item>
  </channel>
</rss>
