<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: SANDESH D MANOCHARYA</title>
    <description>The latest articles on DEV Community by SANDESH D MANOCHARYA (@sandesh-d-manocharya).</description>
    <link>https://dev.to/sandesh-d-manocharya</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1566260%2F6c257bac-2617-45ae-8553-b8daba3a0542.png</url>
      <title>DEV Community: SANDESH D MANOCHARYA</title>
      <link>https://dev.to/sandesh-d-manocharya</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sandesh-d-manocharya"/>
    <language>en</language>
    <item>
      <title>Deploying a Sample HTML Application to AWS EC2 with CodePipeline, CodeDeploy, and GitHub</title>
      <dc:creator>SANDESH D MANOCHARYA</dc:creator>
      <pubDate>Wed, 08 Jan 2025 06:41:37 +0000</pubDate>
      <link>https://dev.to/sandesh-d-manocharya/deploying-a-sample-html-application-to-aws-ec2-with-codepipeline-codedeploy-and-github-5fic</link>
      <guid>https://dev.to/sandesh-d-manocharya/deploying-a-sample-html-application-to-aws-ec2-with-codepipeline-codedeploy-and-github-5fic</guid>
      <description>&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Create IAM Roles&lt;/p&gt;

&lt;p&gt;Create two IAM roles: one for the EC2 service and another for the CodeDeploy service.&lt;/p&gt;

&lt;p&gt;Go to the service "IAM", select "Roles" in the left panel and click on "Create role" on the right top.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6i9w6atjbq3i5t5f78pv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6i9w6atjbq3i5t5f78pv.png" alt="Image description" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First let us create a role "Role_EC2CodeDeploy" for the service EC2.&lt;/p&gt;

&lt;p&gt;To do so ensure that "AWS service" is selected.&lt;/p&gt;

&lt;p&gt;From the dropdown of "Use case", under "Commonly used services", select the service or use case "EC2".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqbm61b7x5w7p7myarvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqbm61b7x5w7p7myarvb.png" alt="Image description" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmx5mxmitbx2in66hoah.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmx5mxmitbx2in66hoah.png" alt="Image description" width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click "Next" and "Next"&lt;/p&gt;

&lt;p&gt;Give a name to the role as "Role_EC2CodeDeploy" and click on "Create role".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtw163ilaq59p1pgof0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtw163ilaq59p1pgof0e.png" alt="Image description" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Either during or after role creation, we need to add permissions by searching for "CodeDeploy" in the search bar and selecting the policy "AmazonEC2RoleforAWSCodeDeploy".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqpzotnwalat4rl0wov1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqpzotnwalat4rl0wov1.png" alt="Image description" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A new role is created. I am going to attach this role to an EC2 instance; it allows the EC2 service to access the CodeDeploy service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fudtdrr1ljdj088hi5q59.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fudtdrr1ljdj088hi5q59.png" alt="Image description" width="800" height="279"&gt;&lt;/a&gt;&lt;/p&gt;
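&lt;p&gt;For reference, the same role can also be sketched from the AWS CLI. This is a sketch, not part of the console walkthrough: it assumes the AWS CLI is installed and configured, and the file name &lt;em&gt;ec2-trust-policy.json&lt;/em&gt; is arbitrary.&lt;/p&gt;

```shell
# Sketch: create Role_EC2CodeDeploy from the CLI (assumes configured credentials).
# Trust policy letting the EC2 service assume the role; the file name is arbitrary.
printf '%s\n' \
  '{' \
  '  "Version": "2012-10-17",' \
  '  "Statement": [{' \
  '    "Effect": "Allow",' \
  '    "Principal": { "Service": "ec2.amazonaws.com" },' \
  '    "Action": "sts:AssumeRole"' \
  '  }]' \
  '}' > ec2-trust-policy.json

if command -v aws >/dev/null; then
  # Create the role and attach the same managed policy selected in the console
  aws iam create-role --role-name Role_EC2CodeDeploy \
    --assume-role-policy-document file://ec2-trust-policy.json
  aws iam attach-role-policy --role-name Role_EC2CodeDeploy \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforAWSCodeDeploy
else
  echo "aws CLI not available; commands shown for reference only"
fi
```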

&lt;p&gt;Similarly let us create another role "Role_CodeDeploy" for the service CodeDeploy.&lt;/p&gt;

&lt;p&gt;But this time instead of EC2, select the service or use case "CodeDeploy".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwvoojgig81qzr2nysku.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwvoojgig81qzr2nysku.png" alt="Image description" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnj5kqunhywib3euo2e8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnj5kqunhywib3euo2e8.png" alt="Image description" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It picks up the permission policy "AWSCodeDeployRole" automatically. I am going to use this role for CodeDeploy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzk0fiwmr3dwi3pjw45sl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzk0fiwmr3dwi3pjw45sl.png" alt="Image description" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give a name "Role_CodeDeploy" to the role and click on "Create role".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkaqvcomddd52jtwa887v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkaqvcomddd52jtwa887v.png" alt="Image description" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;
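&lt;p&gt;Likewise, the CodeDeploy role can be sketched from the CLI. Again a sketch under the same assumptions (configured AWS CLI; arbitrary trust-policy file name):&lt;/p&gt;

```shell
# Sketch: create Role_CodeDeploy from the CLI (assumes configured credentials).
# Trust policy letting the CodeDeploy service assume the role.
printf '%s\n' \
  '{' \
  '  "Version": "2012-10-17",' \
  '  "Statement": [{' \
  '    "Effect": "Allow",' \
  '    "Principal": { "Service": "codedeploy.amazonaws.com" },' \
  '    "Action": "sts:AssumeRole"' \
  '  }]' \
  '}' > codedeploy-trust-policy.json

if command -v aws >/dev/null; then
  aws iam create-role --role-name Role_CodeDeploy \
    --assume-role-policy-document file://codedeploy-trust-policy.json
  # AWSCodeDeployRole is the managed policy the console attaches automatically
  aws iam attach-role-policy --role-name Role_CodeDeploy \
    --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole
else
  echo "aws CLI not available; commands shown for reference only"
fi
```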

&lt;p&gt;Now both the roles are created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxsie3723r81t235najn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxsie3723r81t235najn.png" alt="Image description" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Create EC2 Instance and Attach the Role "Role_EC2CodeDeploy"&lt;/p&gt;

&lt;p&gt;We can launch any number of instances based on our requirements, but in this example I will launch only one instance, "Demo_AWSCodeDeploy".&lt;/p&gt;

&lt;p&gt;I launched an Amazon Linux 2023 AMI t2.micro EC2 instance.&lt;/p&gt;

&lt;p&gt;Make sure that you have added the ports 22 and 80 as inbound rules.&lt;/p&gt;
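&lt;p&gt;If you prefer the CLI, the two inbound rules can be sketched as follows. The security group ID below is a placeholder, not a value from this walkthrough:&lt;/p&gt;

```shell
# Sketch: open ports 22 (SSH) and 80 (HTTP) on the instance's security group.
# SG_ID is a placeholder; replace it with your security group's real ID.
SG_ID="sg-0123456789abcdef0"

if command -v aws >/dev/null; then
  aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
  aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
else
  echo "aws CLI not available; commands shown for reference only"
fi
```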

&lt;p&gt;Under "Advanced details", from the IAM instance profile dropdown, select the role "Role_EC2CodeDeploy" which was created recently for the service EC2.&lt;/p&gt;

&lt;p&gt;Under "Advanced details" itself scroll down and in the "User data" section paste the following script to automatically install all the packages or dependencies immediately after launching the EC2 instance.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#!/bin/bash
sudo yum -y update
sudo yum -y install ruby
sudo yum -y install wget
cd /home/ec2-user
wget https://aws-codedeploy-ap-south-1.s3.ap-south-1.amazonaws.com/latest/install
sudo chmod +x ./install
sudo ./install auto
sudo yum install -y python-pip
sudo pip install awscli
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Click on "Launch instance".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5gop4h3zxvu6z7n1qvt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5gop4h3zxvu6z7n1qvt.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The EC2 instance "Demo_AWSCodeDeploy" is launched and running successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7uhqwczcwqc7b6n8s98.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7uhqwczcwqc7b6n8s98.png" alt="Image description" width="800" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we click on the instance ID and see the details of the instance then we can see the IAM role "Role_EC2CodeDeploy" attached.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fulvo9svrnm4o5mo9h1wy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fulvo9svrnm4o5mo9h1wy.png" alt="Image description" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That works fine, but I want to try each command individually. So I will not use the user data script and will run the commands manually instead.&lt;/p&gt;

&lt;p&gt;Connect to the instance through Git Bash or any other CLI of your choice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4vxojwsp9ryn06907md.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4vxojwsp9ryn06907md.png" alt="Image description" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First let us update the package lists.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;sudo yum -y update&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvur7ht0ewz9fwsw59xu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvur7ht0ewz9fwsw59xu.png" alt="Image description" width="730" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let us install necessary packages.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;sudo yum -y install ruby&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fznp9vvcg7w91t5sm8khi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fznp9vvcg7w91t5sm8khi.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;sudo yum -y install wget&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlajt9khp98w0yswt2xq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlajt9khp98w0yswt2xq.png" alt="Image description" width="729" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us create a project directory and navigate to it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;mkdir -p /home/ec2-user/Projects/GitHub_CodeDeploy&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;cd /home/ec2-user/Projects/GitHub_CodeDeploy&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ls -la&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6t0sn3qeaj17q4enam2m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6t0sn3qeaj17q4enam2m.png" alt="Image description" width="800" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let us download the CodeDeploy agent installer.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;wget &lt;a href="https://aws-codedeploy-ap-south-1.s3.ap-south-1.amazonaws.com/latest/install" rel="noopener noreferrer"&gt;https://aws-codedeploy-ap-south-1.s3.ap-south-1.amazonaws.com/latest/install&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyjzpk20dwv6yqn3lfqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyjzpk20dwv6yqn3lfqb.png" alt="Image description" width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ls -la&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmljp26p7jx54s245njs4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmljp26p7jx54s245njs4.png" alt="Image description" width="717" height="96"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us make the installer executable.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;sudo chmod +x ./install&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47iby06edvott9jmwb0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47iby06edvott9jmwb0r.png" alt="Image description" width="708" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us install the CodeDeploy agent in auto mode.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;sudo ./install auto&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxu3lenndxm0vtm4ghje4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxu3lenndxm0vtm4ghje4.png" alt="Image description" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;
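&lt;p&gt;On systemd-based images such as Amazon Linux 2023, you can then verify that the agent service is up. A minimal sketch, to be run on the EC2 instance itself:&lt;/p&gt;

```shell
# Check whether the CodeDeploy agent service is active on this machine.
# If systemctl or the unit is missing, fall back to "unknown".
AGENT_STATUS=$(systemctl is-active codedeploy-agent 2>/dev/null || true)
[ -n "$AGENT_STATUS" ] || AGENT_STATUS="unknown"
echo "codedeploy-agent: $AGENT_STATUS"
```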

&lt;p&gt;Let us install pip.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;sudo yum install -y python-pip&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbttp0xabb6g4bz01in9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbttp0xabb6g4bz01in9.png" alt="Image description" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us install awscli.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;sudo pip install awscli&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82f4odw8jntomdjwbiwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82f4odw8jntomdjwbiwb.png" alt="Image description" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us confirm the awscli installation.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;aws --version&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfxu7tge19n7ecqx4zwe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfxu7tge19n7ecqx4zwe.png" alt="Image description" width="800" height="44"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Create an Application and Deployment Group under CodeDeploy&lt;/p&gt;

&lt;p&gt;Go to the service "CodeDeploy", select "Applications" in the left side bar and click on "Create application".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6udwhlmczue1vk6blr8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6udwhlmczue1vk6blr8.png" alt="Image description" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give a name "WebsiteForDevOpsStuffs" for the application.&lt;/p&gt;

&lt;p&gt;Select "EC2" from Compute platform dropdown.&lt;/p&gt;

&lt;p&gt;Click on "Create application".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5uzl9vfd5jyjyanmt2zp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5uzl9vfd5jyjyanmt2zp.png" alt="Image description" width="800" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An application "WebsiteForDevOpsStuffs" is created.&lt;/p&gt;

&lt;p&gt;Click on "Create deployment group".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8he1ubfd09qvvsyun92e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8he1ubfd09qvvsyun92e.png" alt="Image description" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give the name "DevOpsStuffsDeploymentGroup" to the deployment group.&lt;/p&gt;

&lt;p&gt;Attach the role "Role_CodeDeploy" to the CodeDeploy service by selecting "Role_CodeDeploy" from the "Service role" dropdown.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcyntuvxh8npghucgzo3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcyntuvxh8npghucgzo3.png" alt="Image description" width="679" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select "In place" as "Deployment type".&lt;/p&gt;

&lt;p&gt;Select "Amazon EC2 instances" as "Environment configuration".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffd6uf4ydvdyultv6a7k3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffd6uf4ydvdyultv6a7k3.png" alt="Image description" width="747" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the tag group, select "Name" as the key and the name of the newly created EC2 instance as the value from the dropdowns.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fds45dalely32jx327ol4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fds45dalely32jx327ol4.png" alt="Image description" width="713" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leave all other settings as their default values.&lt;/p&gt;

&lt;p&gt;Disable "Load balancer".&lt;/p&gt;

&lt;p&gt;Click on "Create deployment group".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyu7j2lwihgac9378hyu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyu7j2lwihgac9378hyu.png" alt="Image description" width="748" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Deployment group "DevOpsStuffsDeploymentGroup" was created successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46a25lx8fup9k9rppgex.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46a25lx8fup9k9rppgex.png" alt="Image description" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;
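&lt;p&gt;The application and deployment group can also be sketched from the CLI. The account ID below is a placeholder; the names and the tag filter match the console steps above:&lt;/p&gt;

```shell
# Sketch: the same application and deployment group via the CLI.
# ACCOUNT_ID is a placeholder; replace it with your 12-digit AWS account ID.
ACCOUNT_ID="123456789012"

if command -v aws >/dev/null; then
  # "Server" is the compute platform value corresponding to EC2/on-premises
  aws deploy create-application \
    --application-name WebsiteForDevOpsStuffs \
    --compute-platform Server
  aws deploy create-deployment-group \
    --application-name WebsiteForDevOpsStuffs \
    --deployment-group-name DevOpsStuffsDeploymentGroup \
    --service-role-arn "arn:aws:iam::${ACCOUNT_ID}:role/Role_CodeDeploy" \
    --ec2-tag-filters Key=Name,Value=Demo_AWSCodeDeploy,Type=KEY_AND_VALUE
else
  echo "aws CLI not available; commands shown for reference only"
fi
```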

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Create a Pipeline and Integrate GitHub with CodePipeline&lt;/p&gt;

&lt;p&gt;Go to the left panel, expand "CodePipeline" and click on "Pipelines".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjn2kyq8y0rz8zvzlcew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjn2kyq8y0rz8zvzlcew.png" alt="Image description" width="237" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on "Create pipeline", give a name "DevOpsStuffsPipeline" to it, and leave the rest of the things default. By default it stores the artifacts in S3.&lt;/p&gt;

&lt;p&gt;Then click "Next".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxqp8syw06myoikn7qd2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxqp8syw06myoikn7qd2.png" alt="Image description" width="800" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select "GitHub (Version 2)" from the source provider dropdown. Choose an existing connection if you already have one, or click on "Connect to GitHub" to create one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpue9h1jucdlfynyzzhg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpue9h1jucdlfynyzzhg.png" alt="Image description" width="724" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select repository name and branch name from dropdowns.&lt;/p&gt;

&lt;p&gt;Select "No filter" for specifying how you want to trigger the pipeline and leave rest as default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3hsrl82lstopo3r1qrc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3hsrl82lstopo3r1qrc7.png" alt="Image description" width="706" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on "Skip build stage" since we are not building the code in this project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdskiab9wab9o4c2qvuxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdskiab9wab9o4c2qvuxh.png" alt="Image description" width="746" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select "AWS CodeDeploy" from the dropdown for "Deploy Provider".&lt;/p&gt;

&lt;p&gt;Click on "Next".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsogd3223cb652reuwccs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsogd3223cb652reuwccs.png" alt="Image description" width="739" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Region is filled in automatically; select the application and the deployment group from the dropdowns.&lt;/p&gt;

&lt;p&gt;Click on "Next".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnwhy81zmp0mz9aia02lp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnwhy81zmp0mz9aia02lp.png" alt="Image description" width="742" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Review the settings and click on "Create pipeline".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5a6sdrahjbhwizuiw46x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5a6sdrahjbhwizuiw46x.png" alt="Image description" width="748" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the deployment starts through AWS CodeDeploy. Had we specified four EC2 instances at launch time in Step 2, the application would now be deployed to all four of them.&lt;/p&gt;

&lt;p&gt;First, the code is checked out from the GitHub repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8f8jd1oh62tfub5uad0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8f8jd1oh62tfub5uad0.png" alt="Image description" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;
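&lt;p&gt;For CodeDeploy to know what to do with the checked-out code, the GitHub repository must contain an &lt;em&gt;appspec.yml&lt;/em&gt; at its root. A hypothetical example for a static HTML site served from &lt;em&gt;/var/www/html&lt;/em&gt; (the &lt;em&gt;scripts/&lt;/em&gt; file names are assumptions, not part of this walkthrough):&lt;/p&gt;

```yaml
# Hypothetical appspec.yml for an EC2 (Server) deployment.
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh   # assumed script name
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh           # assumed script name
      timeout: 300
      runas: root
```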

&lt;p&gt;After a short while, the application is deployed to the EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhoj3o850hzm6wgynsfb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhoj3o850hzm6wgynsfb.png" alt="Image description" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can click on "View details" to see the summary. Since the source artifact is stored in an S3 bucket by default, you can go to the service "S3" and inspect the bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08kmrjaokdfmqm01ehi1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08kmrjaokdfmqm01ehi1.png" alt="Image description" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;
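
&lt;p&gt;For reference, CodeDeploy reads an &lt;em&gt;appspec.yml&lt;/em&gt; file at the root of the repo to decide what to copy and which lifecycle scripts to run. A minimal sketch for a static HTML site might look like the following (the hook script path and the Apache document root are assumptions, not taken from this project):&lt;/p&gt;

```yaml
version: 0.0
os: linux
files:
  # Copy everything in the repo to the web server's document root (assumed path)
  - source: /
    destination: /var/www/html
hooks:
  AfterInstall:
    # Hypothetical script that restarts the web server after the files land
    - location: scripts/restart_server.sh
      timeout: 300
      runas: root
```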

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Access the Application&lt;/p&gt;

&lt;p&gt;Let us access the application on port 80, since we have already added an inbound rule for port 80 to the EC2 instance.&lt;/p&gt;

&lt;p&gt;Copy the Public DNS of the EC2 instance and paste it in the browser.&lt;/p&gt;

&lt;p&gt;There we go!!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbzod911x27ufczxmozr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbzod911x27ufczxmozr.png" alt="Image description" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now go to the GitHub repo, make a minor change to the file &lt;em&gt;index.html&lt;/em&gt; by clicking the Edit (pencil) icon, and commit the change as follows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zvet5sfjiwgwt8ikg1k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zvet5sfjiwgwt8ikg1k.png" alt="Image description" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now go back to AWS CodePipeline and refresh the page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gaazovr29coml48tjqc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gaazovr29coml48tjqc.png" alt="Image description" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fspt4agz4frl7n436yyec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fspt4agz4frl7n436yyec.png" alt="Image description" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46do6vh9dmxurxpu0r4u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46do6vh9dmxurxpu0r4u.png" alt="Image description" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then go back to the browser and just refresh the page.&lt;/p&gt;

&lt;p&gt;There we go!!&lt;/p&gt;

&lt;p&gt;AWS CodePipeline has automatically detected the code change in the GitHub repo and triggered a new deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswh9nq4g64m1p8gr2rd3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswh9nq4g64m1p8gr2rd3.png" alt="Image description" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the GitHub repo of the project:&lt;br&gt;
&lt;a href="https://github.com/SandyDevOpsStuffs/AWS-Project1.git" rel="noopener noreferrer"&gt;https://github.com/SandyDevOpsStuffs/AWS-Project1.git&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Ansible Kickstart for Absolute Beginners</title>
      <dc:creator>SANDESH D MANOCHARYA</dc:creator>
      <pubDate>Wed, 08 Jan 2025 06:19:39 +0000</pubDate>
      <link>https://dev.to/sandesh-d-manocharya/ansible-kickstart-for-absolute-beginners-31ge</link>
      <guid>https://dev.to/sandesh-d-manocharya/ansible-kickstart-for-absolute-beginners-31ge</guid>
      <description>&lt;p&gt;In this blog let us understand how to install Ansible, how to create an inventory file and how to run a playbook.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Requirement:&lt;/strong&gt; An AWS account and basics of EC2.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Create three EC2 Instances.&lt;/p&gt;

&lt;p&gt;Create three EC2 instances with the same key-pair and security group.&lt;/p&gt;

&lt;p&gt;Rename them as &lt;em&gt;Ansible-ControlNode, Ansible-ManagedNode1 and Ansible-ManagedNode2&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftup1vkrnge483pl76ctn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftup1vkrnge483pl76ctn.png" alt="Image description" width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Download and install Git.&lt;/p&gt;

&lt;p&gt;On your local machine, download and install Git using the link below so that we can use Git Bash:&lt;br&gt;
 &lt;br&gt;
&lt;a href="https://git-scm.com/downloads" rel="noopener noreferrer"&gt;https://git-scm.com/downloads&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can use any CLI such as Command Prompt, WSL (Windows Subsystem for Linux), the VS Code terminal, etc. as an alternative to Git Bash.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Connect to each instance using Git Bash.&lt;/p&gt;

&lt;p&gt;Navigate to the folder where you stored the .pem file locally.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy70fcsl24fn64rh527mv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy70fcsl24fn64rh527mv.png" alt="Image description" width="792" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Right click and select "Open Git Bash here".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9kprewbuug9pka6svsh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9kprewbuug9pka6svsh.png" alt="Image description" width="577" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will get the below screen in a new Git Bash window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb04jg1sh2dq6neysuwtd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb04jg1sh2dq6neysuwtd.png" alt="Image description" width="800" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;em&gt;AWS Management Console&lt;/em&gt;, search for the service &lt;em&gt;EC2&lt;/em&gt; and click on &lt;em&gt;Instances&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Select the 1st instance Ansible-ControlNode and click on Connect to connect to it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynfy3t3q8fp7mqg6y9qz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynfy3t3q8fp7mqg6y9qz.png" alt="Image description" width="800" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make sure that the SSH client is selected and copy the ssh command displayed at the bottom.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2rylgu7d815lkvvb5a9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2rylgu7d815lkvvb5a9.png" alt="Image description" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to Git Bash, right click and paste the copied ssh command. Then hit Enter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsj3sz87898zrvppae4bm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsj3sz87898zrvppae4bm.png" alt="Image description" width="800" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If it prompts for confirmation of the connection, type &lt;em&gt;yes&lt;/em&gt; and hit Enter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1pcvx1tzdv89ixztt45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1pcvx1tzdv89ixztt45.png" alt="Image description" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There we go!!…..&lt;br&gt;
We are successfully connected to the 1st instance. Its private IP is 172.31.45.125.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2h8319f5srirrgbzfeli.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2h8319f5srirrgbzfeli.png" alt="Image description" width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open two more separate Git Bash windows and repeat these connection steps for another two instances (Managed Nodes) as well. Their private IPs are 172.31.33.145 and 172.31.41.176.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgjhsd3lv37k4a9o6rgj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgjhsd3lv37k4a9o6rgj.png" alt="Image description" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlud55hc0l5rslj98xe0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlud55hc0l5rslj98xe0.png" alt="Image description" width="800" height="595"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open the &lt;em&gt;.pem&lt;/em&gt; file locally in the notepad and copy the content.&lt;/p&gt;

&lt;p&gt;Create a &lt;em&gt;.pem&lt;/em&gt; file in all three instances with the same file name and paste the same content.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthc2cs4152pnm2stf1su.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthc2cs4152pnm2stf1su.png" alt="Image description" width="798" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ds31ru9tawjb9hf183f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ds31ru9tawjb9hf183f.png" alt="Image description" width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5eepwffncv62cbxk85f0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5eepwffncv62cbxk85f0.png" alt="Image description" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you will see the same content in the file &lt;em&gt;.ssh/authorized_keys&lt;/em&gt; in all three instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Install Ansible on 1st instance &lt;em&gt;Ansible-ControlNode&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;sudo apt update&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;sudo apt install ansible&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ansible --version&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3mcuvtsdup6zo23802t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3mcuvtsdup6zo23802t.png" alt="Image description" width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Create an inventory file.&lt;/p&gt;

&lt;p&gt;In the 1st instance &lt;em&gt;Ansible-ControlNode&lt;/em&gt;, create a directory &lt;em&gt;ansible_quickstart&lt;/em&gt; and navigate to it.&lt;/p&gt;

&lt;p&gt;Then, in the same directory on the 1st instance where you created the &lt;em&gt;.pem&lt;/em&gt; file, create a &lt;em&gt;.ini&lt;/em&gt; inventory file that adds the Public IPs of the 2nd and 3rd instances to a group &lt;em&gt;myhosts&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; Whenever we stop and restart the EC2 instances they get new Public IPs, so the Public IPs may vary from one screenshot to another below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjku3f0uq6c3w59j7qfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjku3f0uq6c3w59j7qfw.png" alt="Image description" width="768" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydss7jjz4u1xn1lfl3yo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydss7jjz4u1xn1lfl3yo.png" alt="Image description" width="650" height="94"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Verify your inventory.&lt;/p&gt;

&lt;p&gt;Make sure that the output of the following command lists the Public IPs that we added in the previous step.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ansible-inventory -i inventory.ini --list&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkacgmz3hupp9a016jubo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkacgmz3hupp9a016jubo.png" alt="Image description" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt; Ping the group myhosts in your inventory.&lt;/p&gt;

&lt;p&gt;Ensure that the &lt;em&gt;.pem&lt;/em&gt; file is readable only by its owner on all three instances. SSH refuses to use a private key that is readable by other users, so set the permissions to 600 with the &lt;em&gt;chmod&lt;/em&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For 1st instance:&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;chmod 600 /home/ubuntu/ansible_quickstart/sandy-devops-stuffs-mumbai.pem&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For 2nd and 3rd instances:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;chmod 600 /home/ubuntu/sandy-devops-stuffs-mumbai.pem&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgm6wp9fvz3asoarz1m1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgm6wp9fvz3asoarz1m1.png" alt="Image description" width="800" height="27"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwcdhq3s9ctk6vre9v4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwcdhq3s9ctk6vre9v4w.png" alt="Image description" width="800" height="38"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfk1rbdxwn83brptcbkt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfk1rbdxwn83brptcbkt.png" alt="Image description" width="800" height="37"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also I modified the inventory file as follows to ensure my &lt;em&gt;inventory.ini&lt;/em&gt; and Ansible configuration are correctly set up to use the SSH key.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;[myhosts]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;13.233.123.32 ansible_user=ubuntu ansible_ssh_private_key_file=/home/ubuntu/ansible_quickstart/sandy-devops-stuffs-mumbai.pem&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;43.205.239.162 ansible_user=ubuntu ansible_ssh_private_key_file=/home/ubuntu/ansible_quickstart/sandy-devops-stuffs-mumbai.pem&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnz2ea7dlcwkath4svb9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnz2ea7dlcwkath4svb9.png" alt="Image description" width="800" height="59"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now run the &lt;em&gt;ansible ping&lt;/em&gt; command in the 1st instance.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ansible myhosts -m ping -i inventory.ini&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;OR&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ansible -m ping myhosts -i inventory.ini&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;OR&lt;br&gt;
&lt;em&gt;ansible -m ping -i inventory.ini myhosts&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwd254r4r8e03661ofz9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwd254r4r8e03661ofz9r.png" alt="Image description" width="800" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OR&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjcr7hyd5wknfl3onnu5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjcr7hyd5wknfl3onnu5.png" alt="Image description" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There we go!!…..&lt;/p&gt;

&lt;p&gt;The control node is successfully connected to the managed nodes and now the control node can manage the managed nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; Pass the &lt;em&gt;-u&lt;/em&gt; option (the remote username) with the &lt;em&gt;ansible ping&lt;/em&gt; command if the username differs between the control node and the managed node(s).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; Writing an inventory file in &lt;em&gt;.ini&lt;/em&gt; format is easy and straightforward, but as the number of managed nodes grows it is best practice to write the inventory in &lt;em&gt;.yaml&lt;/em&gt; format instead, as shown below. The YAML file is equivalent to the &lt;em&gt;inventory.ini&lt;/em&gt; we have already created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxv3adr7ybzo28kugedkd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxv3adr7ybzo28kugedkd.png" alt="Image description" width="380" height="145"&gt;&lt;/a&gt;&lt;/p&gt;
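
&lt;p&gt;The screenshot above corresponds to a YAML inventory roughly like the following (the IPs are the ones used earlier in this walkthrough and will differ in your account):&lt;/p&gt;

```yaml
myhosts:
  hosts:
    13.233.123.32:
    43.205.239.162:
```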

&lt;p&gt;&lt;strong&gt;Step 8:&lt;/strong&gt; Create and run the playbook to ping the hosts.&lt;br&gt;
Now create a file &lt;em&gt;playbook.yaml&lt;/em&gt; with following content in the directory &lt;em&gt;ansible_quickstart&lt;/em&gt; on 1st instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd12qwtept427acqk5ndb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd12qwtept427acqk5ndb.png" alt="Image description" width="316" height="216"&gt;&lt;/a&gt;&lt;/p&gt;
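
&lt;p&gt;The playbook in the screenshot above follows the Ansible quickstart example; its content is along these lines (play and task names are assumptions and may differ slightly from the screenshot):&lt;/p&gt;

```yaml
- name: My first play
  hosts: myhosts
  tasks:
    # Connect to each host and return pong on success
    - name: Ping my hosts
      ansible.builtin.ping:

    # Print a message during execution
    - name: Print message
      ansible.builtin.debug:
        msg: Hello world
```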

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; &lt;em&gt;ansible.builtin.ping:&lt;/em&gt; and &lt;em&gt;ansible.builtin.debug:&lt;/em&gt; in the above playbook are fully qualified module names, i.e. an Ansible collection name followed by a module name.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ansible.builtin&lt;/em&gt; is one of the Ansible collections.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ping&lt;/em&gt; and &lt;em&gt;debug&lt;/em&gt; are modules in the collection &lt;em&gt;ansible.builtin&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;u&gt;ping module&lt;/u&gt;&lt;/em&gt; - Try to connect to the host, verify a usable python and return pong on success.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;u&gt;debug module&lt;/u&gt;&lt;/em&gt; - Print statements during execution.&lt;/p&gt;

&lt;p&gt;To know more about all other collections &amp;amp; modules and to understand what they do please refer to the following links.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.ansible.com/ansible/latest/collections/index.html#list-of-collections" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/index.html#plugins-in-ansible-builtin" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the following command on 1st instance:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ansible-playbook playbook.yaml -i inventory.ini&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftf1a29vwr2lauao3caov.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftf1a29vwr2lauao3caov.png" alt="Image description" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There we go!!…..&lt;/p&gt;

&lt;p&gt;We have successfully run a simple Ansible playbook to ping the hosts listed in the inventory file.&lt;/p&gt;

&lt;p&gt;In this way we can write other playbooks and run them on the control node to deploy and configure applications on all the managed nodes at once, instead of doing it on each node manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Credit:&lt;/strong&gt; Ansible official document &lt;u&gt;Introduction to Ansible&lt;/u&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Storing the Terraform State File in Remote Backend (S3 bucket)</title>
      <dc:creator>SANDESH D MANOCHARYA</dc:creator>
      <pubDate>Mon, 26 Aug 2024 09:40:05 +0000</pubDate>
      <link>https://dev.to/sandesh-d-manocharya/storing-the-terraform-state-file-in-remote-backend-s3-bucket-4h3</link>
      <guid>https://dev.to/sandesh-d-manocharya/storing-the-terraform-state-file-in-remote-backend-s3-bucket-4h3</guid>
      <description>&lt;p&gt;In this article let us build a simple terraform script to create an EC2 instance (you can create any resource of your choice) and then let us store the state file in S3 bucket.&lt;br&gt;
Storing Terraform state files in an S3 bucket is a recommended best practice because it provides a central location for storing and managing your infrastructure's state files. Here's a step-by-step guide on how to store a Terraform state file in an S3 bucket:&lt;br&gt;
Prerequisites:&lt;br&gt;
Install Terraform on your local machine.&lt;br&gt;
AWS account with the necessary IAM permissions to create S3 buckets and manage EC2 instances.&lt;/p&gt;
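
&lt;p&gt;A simple Terraform script to create an EC2 instance could be sketched as follows (the AMI ID is a placeholder and the instance type and tag are assumptions):&lt;/p&gt;

```hcl
provider "aws" {
  region = "ap-south-1" # any region of your choice
}

resource "aws_instance" "example" {
  ami           = "ami-xxxxxxxxxxxxxxxxx" # placeholder: a valid AMI ID for your region
  instance_type = "t2.micro"

  tags = {
    Name = "SDM-Terraform-EC2"
  }
}
```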

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt;&lt;br&gt;
Create an IAM user (for example 'SDM-TerraformStateInS3').&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn06kguzncvnfa4rto3ub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn06kguzncvnfa4rto3ub.png" alt="Image description" width="800" height="44"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
While creating the IAM user do not attach any policy and do not add the user to any group. Just create the IAM user with a password of your choice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt;&lt;br&gt;
Create an S3 bucket (for example 'sdm-terraform-state-bucket-1') manually in the region 'ap-south-1'. You can choose any region of your choice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq23prjunjgr0mtq9a0g3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq23prjunjgr0mtq9a0g3.png" alt="Image description" width="798" height="49"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
In your backend configuration file 'backend_config.tf' (which will be created in Step 8), it's a good practice to specify the same region where you created the S3 bucket. This ensures that Terraform communicates with the correct S3 bucket in the designated region when storing and retrieving the state file, and avoids potential issues caused by a region mismatch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt;&lt;br&gt;
Create a DynamoDB table (for example 'SDM-terraform-lock') for state locking with the Partition key 'LockID' and its type 'String'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf0kg7gazy8fg5xa058k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf0kg7gazy8fg5xa058k.png" alt="Image description" width="787" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
When creating a DynamoDB table for use as a Terraform state lock, it's important to ensure that the table is created in the same region that you specify in your Terraform backend configuration (backend "s3" block in the file 'backend_config.tf' which will be created in Step 8) to maintain consistency. Terraform will interact with the DynamoDB table in the specified region to manage state locks.&lt;/p&gt;
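&lt;p&gt;For reference, the same table can also be created from the AWS CLI. This is a sketch assuming the table name 'SDM-terraform-lock' and the region 'ap-south-1' used in this article; on-demand billing is an illustrative choice:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Create the state-lock table with the required partition key 'LockID' (type String)
aws dynamodb create-table \
  --table-name SDM-terraform-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region ap-south-1
&lt;/code&gt;&lt;/pre&gt;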

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt;&lt;br&gt;
Using JSON, create your own 'Customer managed' IAM policy (for example 'SDM-Terraform-S3') for S3. The JSON for the policy is as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1b3q899qmyhnar7or54g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1b3q899qmyhnar7or54g.png" alt="Image description" width="600" height="632"&gt;&lt;/a&gt;&lt;/p&gt;
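&lt;p&gt;As a hedged sketch of such a least-privilege policy (the bucket name is the one from this article; adjust it to your own, and note that this is illustrative rather than a copy of the screenshot above):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListStateBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::sdm-terraform-state-bucket-1"
    },
    {
      "Sid": "ReadWriteStateObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::sdm-terraform-state-bucket-1/*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you use DynamoDB state locking (Step 3), the user also needs 'dynamodb:GetItem', 'dynamodb:PutItem', and 'dynamodb:DeleteItem' on the lock table, either in this policy or a separate one.&lt;/p&gt;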

&lt;p&gt;Then the created policy looks like this in the list of policies in IAM service of AWS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9eowbx39m02zmjbvl1zo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9eowbx39m02zmjbvl1zo.png" alt="Image description" width="800" height="42"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
Instead of creating our own 'Customer managed' policy, we could attach the 'AWS managed' policy 'AmazonS3FullAccess' to the IAM user. That policy provides full access to Amazon S3, allowing a wide range of actions on buckets and objects, so it is sufficient for storing Terraform state files in S3 and managing infrastructure with Terraform.&lt;br&gt;
Here's why it would work:&lt;br&gt;
S3 bucket operations: 'AmazonS3FullAccess' grants permissions for bucket operations such as creating, listing, deleting, and updating buckets, which are needed to create and manage an S3 bucket for Terraform state storage.&lt;br&gt;
Object operations: The policy allows uploading, downloading, and deleting objects (files) within a bucket, which is needed to manage the Terraform state file itself.&lt;/p&gt;

&lt;p&gt;However, it's essential to consider the principle of least privilege when granting permissions. While 'AmazonS3FullAccess' provides broad access to S3, it may grant more permissions than strictly necessary for your use case. For security best practices:&lt;br&gt;
Use a More Specific Policy: If possible, create a custom IAM policy tailored to the specific actions and resources needed for your use case. This allows you to grant only the permissions required, reducing the potential attack surface.&lt;br&gt;
Consider State Locking: If you plan to use Terraform in a collaborative environment with multiple users, consider using Terraform's state locking feature, which uses DynamoDB to manage locks. Ensure that your IAM policies grant appropriate permissions for DynamoDB if you implement state locking.&lt;br&gt;
Regularly Review and Audit Policies: Periodically review and audit your IAM policies to ensure they align with your current infrastructure and security requirements. Remove unnecessary permissions and ensure that permissions are granted on a need-to-know basis.&lt;/p&gt;

&lt;p&gt;Hence, 'AmazonS3FullAccess' can be useful for managing Terraform state files in an S3 bucket, but it's essential to review and fine-tune your IAM policies to meet your specific security and infrastructure needs while adhering to security best practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt;&lt;br&gt;
Attach this policy 'SDM-Terraform-S3' to IAM user 'SDM-TerraformStateInS3'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xfwd9ywmx39robzozg1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xfwd9ywmx39robzozg1.png" alt="Image description" width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt;&lt;br&gt;
Manually create an AWS EC2 Ubuntu instance (for example 'SDM-Terraform') with instance type 't2.micro' in the region 'ap-south-1'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wxrfi0un7ric5gktqqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wxrfi0un7ric5gktqqi.png" alt="Image description" width="800" height="28"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then ssh into it.&lt;br&gt;
Create a directory 'S3'.&lt;br&gt;
&lt;em&gt;mkdir S3&lt;/em&gt;&lt;br&gt;
Navigate into the directory 'S3'.&lt;br&gt;
&lt;em&gt;cd S3&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt;&lt;br&gt;
Install Terraform on it using the following commands.&lt;br&gt;
Update the package index:&lt;br&gt;
&lt;em&gt;sudo apt-get update&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Install unzip:&lt;br&gt;
&lt;em&gt;sudo apt-get install unzip&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Confirm the latest version number on the Terraform website:&lt;br&gt;
&lt;a href="https://www.terraform.io/downloads.html" rel="noopener noreferrer"&gt;https://www.terraform.io/downloads.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Download the latest version of Terraform (substituting a newer version number if needed):&lt;br&gt;
&lt;em&gt;wget &lt;a href="https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip" rel="noopener noreferrer"&gt;https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Extract the downloaded archive:&lt;br&gt;
&lt;em&gt;unzip terraform_1.5.7_linux_amd64.zip&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Move the executable into a directory searched for executables:&lt;br&gt;
&lt;em&gt;sudo mv terraform /usr/local/bin/&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Confirm the installation:&lt;br&gt;
&lt;em&gt;terraform --version&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8:&lt;/strong&gt;&lt;br&gt;
Create a backend configuration file 'backend_config.tf' with the following content inside the directory 'S3'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4d26jp39ylcs1z0xnbu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4d26jp39ylcs1z0xnbu.png" alt="Image description" width="599" height="274"&gt;&lt;/a&gt;&lt;/p&gt;
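&lt;p&gt;As a textual sketch of such a backend configuration, using the names from this article (the 'key' value is an illustrative choice for the state file's path inside the bucket):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# backend_config.tf -- store state in S3 with DynamoDB state locking
terraform {
  backend "s3" {
    bucket         = "sdm-terraform-state-bucket-1"  # bucket created in Step 2
    key            = "terraform.tfstate"             # path of the state file in the bucket
    region         = "ap-south-1"                    # same region as the bucket
    dynamodb_table = "SDM-terraform-lock"            # lock table created in Step 3
    encrypt        = true
  }
}
&lt;/code&gt;&lt;/pre&gt;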

&lt;p&gt;&lt;strong&gt;Step 9:&lt;/strong&gt;&lt;br&gt;
Create an infra (EC2 instance) configuration file or resource definition file 'ec2.tf' with the following content inside the directory 'S3'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2c94hncewhsfj5ezo679.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2c94hncewhsfj5ezo679.png" alt="Image description" width="478" height="336"&gt;&lt;/a&gt;&lt;/p&gt;
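&lt;p&gt;A minimal sketch of such a resource definition follows; the AMI ID is a placeholder (look up a current Ubuntu AMI for your region), and the resource label 'test' is illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# ec2.tf -- the instance Terraform will create
provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "test" {
  ami           = "ami-xxxxxxxxxxxxxxxxx"  # placeholder: use a current Ubuntu AMI ID for ap-south-1
  instance_type = "t2.micro"

  tags = {
    Name = "SDM-TestTfStateinS3"
  }
}
&lt;/code&gt;&lt;/pre&gt;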

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
Keep both the files 'backend_config.tf' and 'ec2.tf' in the same directory 'S3'.&lt;br&gt;
In this setup, 'backend_config.tf' contains the Terraform backend configuration for state storage, while 'ec2.tf' contains your EC2 instance resource definitions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
We can combine the contents of both 'backend_config.tf' and 'ec2.tf' into a single Terraform configuration file. Here's how you can structure it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqsk4y6w7lbd3vhdvpnj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqsk4y6w7lbd3vhdvpnj.png" alt="Image description" width="616" height="581"&gt;&lt;/a&gt;&lt;/p&gt;
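&lt;p&gt;As a sketch of the combined file (same assumptions as before: the AMI ID is a placeholder and the 'key' and resource label are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# main.tf -- backend configuration and resource definition in one file
terraform {
  backend "s3" {
    bucket         = "sdm-terraform-state-bucket-1"
    key            = "terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "SDM-terraform-lock"
    encrypt        = true
  }
}

provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "test" {
  ami           = "ami-xxxxxxxxxxxxxxxxx"  # placeholder Ubuntu AMI for ap-south-1
  instance_type = "t2.micro"

  tags = {
    Name = "SDM-TestTfStateinS3"
  }
}
&lt;/code&gt;&lt;/pre&gt;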

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
In this combined file:&lt;br&gt;
The &lt;strong&gt;provider&lt;/strong&gt; block configures the AWS provider.&lt;br&gt;
The &lt;strong&gt;resource&lt;/strong&gt; block defines an AWS EC2 instance.&lt;br&gt;
The &lt;strong&gt;terraform&lt;/strong&gt; block includes the backend configuration for S3 and DynamoDB.&lt;br&gt;
This single file contains both the resource definition and the backend configuration, and you can use it to create the EC2 instance and manage the state in S3.&lt;br&gt;
The earlier provided guidance with two separate files is based on the typical recommended project structure and best practices for Terraform configuration management. Using separate files for backend configuration and resource definitions is often recommended for the following reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Modularity and Organization: Separating backend configuration from resource definitions allows you to keep your infrastructure code organized and modular. It's easier to manage different aspects of your configuration in distinct files, making it more maintainable, especially in larger projects.&lt;/li&gt;
&lt;li&gt;Collaboration: In collaborative environments, different team members may be responsible for different parts of the configuration. By having separate files, team members can work on the backend configuration independently of resource definitions, reducing conflicts when merging changes in version control.&lt;/li&gt;
&lt;li&gt;Flexibility: Separating backend configuration enables you to reuse resource definitions across different environments or projects while changing only the backend configuration as needed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;However, a single file can be a valid approach for smaller or less complex projects, as it simplifies the file structure. Whether to use separate files or a single file depends on the requirements and complexity of your project, as well as your preference for organization.&lt;br&gt;
The guidance above covers both approaches so you can choose the one that best suits your needs.&lt;br&gt;
If you go with two separate files to follow best practice, that is a good choice: separating backend configuration from resource definitions pays off as projects grow in complexity or involve collaboration among multiple team members.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 10:&lt;/strong&gt;&lt;br&gt;
Run the following commands:&lt;br&gt;
&lt;em&gt;terraform init&lt;br&gt;
terraform plan&lt;br&gt;
terraform apply&lt;/em&gt;&lt;br&gt;
Now go to the service 'EC2' in AWS and open 'Instances'. In the list of instances, you will see an instance named 'SDM-TestTfStateinS3' in the Running state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvvad978duvd8qwr0get.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvvad978duvd8qwr0get.png" alt="Image description" width="800" height="27"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, if you go to the service 'S3' in AWS, open 'Buckets', and open the bucket 'sdm-terraform-state-bucket-1', you will see the Terraform state file in that bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4dzj4c6wbeyrq167j4j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4dzj4c6wbeyrq167j4j.png" alt="Image description" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
We could store the state file in GitHub as well, but that has drawbacks.&lt;br&gt;
You can store your Terraform state file in a version control system (VCS) like GitHub, but it's generally not recommended for several reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Concurrency and Locking: VCS systems like GitHub do not provide built-in mechanisms for handling concurrent access and locking of the state file. In a collaborative environment, multiple team members could attempt to modify the state file simultaneously, leading to conflicts and potential data corruption.&lt;/li&gt;
&lt;li&gt;Performance: Terraform state files can become large and contain sensitive information. Storing them in a VCS can impact the performance of the repository and could expose sensitive data if not properly protected.&lt;/li&gt;
&lt;li&gt;Versioning: VCS systems are designed for source code versioning, not infrastructure state. Managing state in a VCS can become unwieldy as your infrastructure grows and changes.&lt;/li&gt;
&lt;li&gt;Security: Storing sensitive information, such as secrets or access keys, in a VCS is generally discouraged due to security concerns. State files may contain sensitive information, and their exposure should be minimized.&lt;/li&gt;
&lt;li&gt;Ease of Collaboration: Remote backends like Amazon S3 and others (e.g., Azure Blob Storage, Google Cloud Storage) are specifically designed for storing Terraform state files. They provide features like state locking and access control, making it easier for teams to collaborate safely.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Using a remote backend like S3 or an equivalent is recommended because it addresses these concerns and is purpose-built for managing Terraform state. It provides a secure, centralized, and scalable solution for storing and managing state files, especially in team environments.&lt;br&gt;
While you could store Terraform configurations in a VCS like GitHub, it's generally better to use a remote backend for managing the state files. This separation of concerns allows you to benefit from the strengths of each tool: VCS for code collaboration and versioning, and a remote backend for state management and collaboration on infrastructure changes.&lt;br&gt;
Hope this article helps you to understand the purpose of storing Terraform state files into remote backend such as S3 buckets. Please comment if you have any suggestions or improvements.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>terraform</category>
    </item>
  </channel>
</rss>
