<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Palak Bhawsar</title>
    <description>The latest articles on DEV Community by Palak Bhawsar (@palakbhawsar98).</description>
    <link>https://dev.to/palakbhawsar98</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F939456%2Fbbd4a036-f34c-4318-b46a-b86819d3bc96.jpeg</url>
      <title>DEV Community: Palak Bhawsar</title>
      <link>https://dev.to/palakbhawsar98</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/palakbhawsar98"/>
    <language>en</language>
    <item>
      <title>AI-Powered Sentiment Analysis for Product Reviews &amp; Visualization</title>
      <dc:creator>Palak Bhawsar</dc:creator>
      <pubDate>Sat, 18 Jan 2025 12:43:25 +0000</pubDate>
      <link>https://dev.to/palakbhawsar98/ai-powered-sentiment-analysis-for-product-reviews-visualization-2mjl</link>
      <guid>https://dev.to/palakbhawsar98/ai-powered-sentiment-analysis-for-product-reviews-visualization-2mjl</guid>
      <description>&lt;p&gt;In this project, we will create an automated pipeline for analyzing the sentiment of product reviews. Instead of building a full application, we will upload product reviews in &lt;strong&gt;JSON&lt;/strong&gt; format to an &lt;strong&gt;Amazon S3&lt;/strong&gt; bucket. As soon as a new file is uploaded to S3, an &lt;strong&gt;S3 event notification&lt;/strong&gt; will trigger an &lt;strong&gt;AWS Lambda&lt;/strong&gt; function. This Lambda function will use the &lt;strong&gt;Amazon Comprehend API&lt;/strong&gt; to perform sentiment analysis on each review in the uploaded file. Once the sentiment data is processed, the Lambda function will upload the analyzed results to a &lt;strong&gt;new S3 bucke&lt;/strong&gt; t in JSON format. &lt;strong&gt;Amazon Athena&lt;/strong&gt; will then be used to query the sentiment data stored in S3. Finally, the data will be seamlessly integrated with &lt;strong&gt;Amazon QuickSight&lt;/strong&gt; for interactive visualization, providing insightful analysis of the sentiment trends.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS Account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;QuickSight Account Setup&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Experience working with AWS Services&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Programming skills: Python, JSON&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Create S3 Buckets
&lt;/h2&gt;

&lt;p&gt;Let's get our hands dirty and start by creating two S3 buckets: one for storing product reviews and the other for storing the results of Amazon Comprehend's sentiment analysis.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Search for Amazon S3 in the AWS Console services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the bucket type as General purpose.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide a unique name for the bucket, for example &lt;code&gt;product-review-bucket-789&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For Object Ownership, select ACLs disabled.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Upload the product review JSON file to the S3 bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/palakbhawsar98/AI-Powered-Sentiment-Analysis-of-Product-Review/blob/main/product_reviews.json" rel="noopener noreferrer"&gt;JSON file link&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create one more bucket to store the sentiment analysis output, for example &lt;code&gt;sentiment-analysis-bucket-1234&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
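&lt;p&gt;For reference, the uploaded file is expected to be a JSON array of reviews. The exact schema lives in the linked repository; the field names below (&lt;code&gt;review_id&lt;/code&gt;, &lt;code&gt;product&lt;/code&gt;, &lt;code&gt;review_text&lt;/code&gt;) are illustrative assumptions, not the repository's guaranteed layout.&lt;/p&gt;

```python
import json

# Illustrative review records; the real product_reviews.json in the
# repository may use different field names.
reviews = [
    {"review_id": "r-001", "product": "Wireless Mouse",
     "review_text": "Battery life is excellent and setup was easy."},
    {"review_id": "r-002", "product": "Wireless Mouse",
     "review_text": "Stopped working after a week, very disappointed."},
]

# Serialize in the shape we would upload to product-review-bucket-789.
payload = json.dumps(reviews, indent=2)
print(payload)
```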

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4t9uqvgreuh20zl78mi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4t9uqvgreuh20zl78mi.png" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Create Lambda Function
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Search for Lambda in the services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on Create a function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Author from scratch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide a name for the Lambda function, for example, &lt;code&gt;LambdaForSentimentAnalysis&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the runtime as Python 3.13.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Create a custom role and go to the IAM console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the service as Lambda.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select AmazonS3FullAccess and ComprehendFullAccess. [Note: It is a good practice to provide granular access for a particular bucket and Comprehend job.]&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create role.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go back to the Lambda screen and select the role just created in Existing roles.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, click Create function.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5t4hn18zgh4f92szp5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5t4hn18zgh4f92szp5z.png" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Write Python code
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open editor of your choice and create a file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Write code to read the product review JSON file from the S3 bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use the Amazon Comprehend API to analyze sentiment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Store the analyzed sentiment data in the output S3 bucket in JSON.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/palakbhawsar98/AI-Powered-Sentiment-Analysis-of-Product-Review" rel="noopener noreferrer"&gt;GitHub Code Link&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
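&lt;p&gt;Putting the steps above together, a minimal handler might look like this. This is a sketch rather than the exact code in the linked repository: the field names and the output bucket constant are assumptions, and the results are written one JSON object per line so Athena's JSON SerDe can read them later.&lt;/p&gt;

```python
import json

OUTPUT_BUCKET = "sentiment-analysis-bucket-1234"  # assumed output bucket name

def to_record(review, sentiment):
    # Pure helper: merge one review with a Comprehend DetectSentiment response.
    return {
        "review_id": review.get("review_id"),
        "review_text": review.get("review_text"),
        "sentiment": sentiment["Sentiment"],
        "scores": sentiment["SentimentScore"],
    }

def lambda_handler(event, context):
    import boto3  # boto3 ships with the Lambda Python runtime
    s3 = boto3.client("s3")
    comprehend = boto3.client("comprehend")

    # The triggering bucket and key arrive in the S3 event notification.
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = event["Records"][0]["s3"]["object"]["key"]

    reviews = json.loads(s3.get_object(Bucket=bucket, Key=key)["Body"].read())

    results = []
    for review in reviews:
        resp = comprehend.detect_sentiment(
            Text=review["review_text"], LanguageCode="en")
        results.append(to_record(review, resp))

    # One JSON object per line (JSON Lines) keeps the output Athena-friendly.
    s3.put_object(
        Bucket=OUTPUT_BUCKET,
        Key="analyzed/" + key,
        Body="\n".join(json.dumps(r) for r in results),
        ContentType="application/json",
    )
    return {"statusCode": 200, "processed": len(results)}
```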

&lt;h2&gt;
  
  
  Step 4: Create S3 Event Notification to Trigger Lambda
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Select the product review S3 bucket &lt;code&gt;product-review-bucket-789&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under Properties, click Create event notification.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide a name for the event, for example, NewReviewUploadTrigger.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select All object create events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the Lambda function created in the step above.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Save changes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
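&lt;p&gt;The same event notification can be scripted. This is a hedged sketch of the console steps above: the Lambda ARN uses a placeholder account id, and it assumes the function's resource policy already permits invocation by S3 (the console adds that permission automatically).&lt;/p&gt;

```python
# Placeholder ARN: substitute your own region and account id.
LAMBDA_ARN = ("arn:aws:lambda:us-east-1:123456789012:"
              "function:LambdaForSentimentAnalysis")

notification_config = {
    "LambdaFunctionConfigurations": [
        {
            "Id": "NewReviewUploadTrigger",
            "LambdaFunctionArn": LAMBDA_ARN,
            # Fire on every object-created event, matching the console choice.
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}

def apply_notification(bucket="product-review-bucket-789"):
    import boto3  # requires credentials allowed to update the bucket config
    s3 = boto3.client("s3")
    s3.put_bucket_notification_configuration(
        Bucket=bucket, NotificationConfiguration=notification_config)
```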

&lt;h2&gt;
  
  
  Step 5: Set up Athena for Querying S3 Data
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Select Athena in the services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Query your data with Trino SQL.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Launch query editor.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Athena console, click the Settings icon.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under Query result location, specify an S3 bucket to store Athena query results.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use the query editor to create a new database and a table over the sentiment data.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
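&lt;p&gt;The database and table can also be created programmatically. The DDL below is an assumed schema (the column names must match the fields in your analyzed JSON), and the bucket paths are the example names from this article; note that Athena's JSON SerDe expects one JSON object per line in S3.&lt;/p&gt;

```python
CREATE_DATABASE = "CREATE DATABASE IF NOT EXISTS sentiment_db"

# Assumed columns; align these with the fields in your analyzed output.
CREATE_TABLE = """
CREATE EXTERNAL TABLE IF NOT EXISTS sentiment_db.review_sentiment (
  review_id string,
  review_text string,
  sentiment string
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION 's3://sentiment-analysis-bucket-1234/analyzed/'
"""

def run_query(sql, output="s3://athena-results-bucket-789/"):
    # output is a placeholder query-result location (set in Athena Settings).
    import boto3
    athena = boto3.client("athena")
    resp = athena.start_query_execution(
        QueryString=sql,
        ResultConfiguration={"OutputLocation": output},
    )
    return resp["QueryExecutionId"]
```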

&lt;h2&gt;
  
  
  Step 6: Set up Amazon QuickSight for Visualization
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Go to QuickSight in the AWS Console and set up an Amazon QuickSight account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attach an IAM policy to allow QuickSight to access Athena and S3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Datasets, then New dataset.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter a Data source name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the Athena workgroup as Primary.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the database and tables created in the previous step.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Directly query your data, click Visualize, then Create.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose visuals such as a pie chart or bar chart, and set the Group/X-axis field to sentiment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1ry4runvqwbt4u2u6be.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1ry4runvqwbt4u2u6be.png" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqge093ht3otpccts5iqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqge093ht3otpccts5iqx.png" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you for taking the time to read my article. If I've overlooked any steps or missed any details, please don't hesitate to get in touch.&lt;/p&gt;

&lt;p&gt;Feel free to reach out to me anytime &lt;a href="https://www.linkedin.com/in/palak-bhawsar/" rel="noopener noreferrer"&gt;&lt;strong&gt;Contact me&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;~ Palak Bhawsar&lt;/strong&gt;&lt;/p&gt;


</description>
      <category>aws</category>
      <category>devops</category>
      <category>ai</category>
      <category>cloud</category>
    </item>
    <item>
      <title>CI/CD pipeline for Terraform Project</title>
      <dc:creator>Palak Bhawsar</dc:creator>
      <pubDate>Wed, 27 Mar 2024 05:11:32 +0000</pubDate>
      <link>https://dev.to/palakbhawsar98/cicd-pipeline-for-terraform-project-20p6</link>
      <guid>https://dev.to/palakbhawsar98/cicd-pipeline-for-terraform-project-20p6</guid>
      <description>&lt;p&gt;In this article, we will be creating an automated CI/CD pipeline for a Terraform project, with a focus on adhering to security and coding best practices. The pipeline will be designed to trigger automatically upon code push to GitHub, and will encompass code analysis, security analysis, testing, and the typical Terraform workflow stages such as initialization, planning, and applying changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;GitHub and AWS account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic knowledge of Terraform and AWS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Knowledge of Jenkins and CI/CD&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Let's understand the purpose of all these tools within our CI/CD pipeline.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TFLint&lt;/strong&gt; is a popular open-source static analysis tool designed for Terraform. It performs automated checks on Terraform configurations to identify potential issues, errors, and violations of best practices. TFLint helps maintain code quality, consistency, and reliability in Terraform projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tfsec&lt;/strong&gt; is a static analysis tool used to scan Terraform code to identify security gaps in IaC. It analyzes Terraform codebases to identify potential security issues such as misconfigurations, insecure settings, and other issues that might expose infrastructure to risks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terratest&lt;/strong&gt; is an open source testing framework for infrastructure defined using Terraform. It performs unit tests, integration tests, and end-to-end tests for the cloud-based infrastructure and helps identify security vulnerabilities early on.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Setup Jenkins Server
&lt;/h2&gt;

&lt;p&gt;Launch an EC2 instance and install Jenkins on it to set up the job for running the pipeline. Follow the blogs below to set up a Jenkins server on an EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/palakbhawsar98/install-jenkins-in-ec2-instance-using-user-data-script-3neg"&gt;https://palak-bhawsar.hashnode.dev/install-jenkins-in-ec2-instance-using-user-data-script&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Install tools in Jenkins
&lt;/h2&gt;

&lt;p&gt;Connect to the Jenkins EC2 instance and install all these tools. The Terraform tool is needed to run Terraform commands. TFLint is required to perform linting on Terraform configuration files. TFsec is needed to scan Terraform configuration files to identify any security vulnerabilities. Go is needed to run Terratest to execute unit and integration test cases.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Install Terraform&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Install TFLint&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Install TFSec&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Install go&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  3. Attach IAM role to Jenkins server
&lt;/h2&gt;

&lt;p&gt;Create an IAM role with the necessary permissions to create resources in AWS, and then attach this role to the EC2 instance where Jenkins is installed. Attaching this role to Jenkins is crucial as it grants the necessary permissions for provisioning resources within the AWS infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Create Webhook
&lt;/h2&gt;

&lt;p&gt;A webhook in Jenkins triggers the pipeline automatically when changes such as commits and pushes are made to the GitHub repository. Go to the Jenkins dashboard and copy the Jenkins URL. Go to the GitHub repository settings and in the left pane select &lt;strong&gt;Webhooks&lt;/strong&gt;. Click &lt;strong&gt;Add webhook&lt;/strong&gt; and paste the Jenkins URL into the &lt;strong&gt;Payload URL&lt;/strong&gt; field, appending &lt;strong&gt;/github-webhook/&lt;/strong&gt; to the end of the URL. Select the events that should trigger the pipeline (I selected &lt;strong&gt;Just the push event&lt;/strong&gt;) and click &lt;strong&gt;Add webhook&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Project Structure and Code
&lt;/h2&gt;

&lt;p&gt;In this project, I am creating an AWS EC2 instance named "test_instance" with a specified AMI and instance type. It enables HTTP access to instance metadata and follows best practices by encrypting both the root block device and an additional EBS volume.&lt;/p&gt;

&lt;p&gt;Under the &lt;code&gt;test&lt;/code&gt; folder, I have created a &lt;code&gt;main_test.go&lt;/code&gt; file to verify the Terraform configuration: it checks that an EC2 instance of type t2.micro is created with the creator name Palak. If any of these conditions fail, the test reports a failure.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;tflint.hcl&lt;/code&gt; file is used to configure TFLint. We can specify which plugins TFLint should use and their versions. If your Terraform code interacts with AWS resources, you might enable the AWS plugin and specify its version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhsn4a2eqq4wfjxlze5p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhsn4a2eqq4wfjxlze5p.png" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/palakbhawsar98/Terraform-CI-CD-Pipeline" rel="noopener noreferrer"&gt;https://github.com/palakbhawsar98/Terraform-CI-CD-Pipeline&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Create Jenkins pipeline
&lt;/h2&gt;

&lt;p&gt;Go to the Jenkins dashboard and click &lt;strong&gt;New Item -&amp;gt; Give a name to the pipeline -&amp;gt; Select Pipeline -&amp;gt; Click OK.&lt;/strong&gt; Add a description for your pipeline, then under &lt;strong&gt;Build Triggers&lt;/strong&gt; select &lt;strong&gt;GitHub hook trigger for GITScm polling&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Scroll down to the Pipeline section and from the dropdown select &lt;strong&gt;Pipeline script from SCM.&lt;/strong&gt; Under SCM, select Git and enter your GitHub project repository URL. If your GitHub repository is private, add credentials. Also, enter the branch name in &lt;strong&gt;Branches to build&lt;/strong&gt; and the Jenkinsfile name in &lt;strong&gt;Script Path&lt;/strong&gt;, then click &lt;strong&gt;Save.&lt;/strong&gt; Finally, click &lt;strong&gt;Build Now&lt;/strong&gt; to run the pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Troubleshooting
&lt;/h2&gt;

&lt;p&gt;The pipeline failed at the TFlint stage due to my Terraform configuration explicitly utilizing undeclared variables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kq5is58dlg07gtc5j65.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kq5is58dlg07gtc5j65.png" width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zsvgtl535fmex9vx1ih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zsvgtl535fmex9vx1ih.png" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzn7myr01whiuox01pnj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzn7myr01whiuox01pnj.png" width="800" height="187"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Tfsec stage failed because encryption was not enabled for the EBS volume attached to the EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq9qjbhufia82nf0qyvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq9qjbhufia82nf0qyvq.png" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fguhouybvo8a0jhz5k1vt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fguhouybvo8a0jhz5k1vt.png" width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, all pipeline stages ran successfully once the configuration met best-practice and security requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq6xe1z3pbzy023tzque.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq6xe1z3pbzy023tzque.png" width="800" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Terraform workflow: init, plan, apply&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbx08b332n35givn0s6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbx08b332n35givn0s6q.png" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thank you for taking the time to read my article. If I've overlooked any steps or missed any details, please don't hesitate to get in touch.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Feel free to reach out to me anytime&lt;/em&gt; &lt;a href="https://linkfree.eddiehub.io/palakbhawsar98" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;Contact me&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;~ Palak Bhawsar&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Monitoring AWS Services Using CloudWatch</title>
      <dc:creator>Palak Bhawsar</dc:creator>
      <pubDate>Tue, 30 Jan 2024 15:11:20 +0000</pubDate>
      <link>https://dev.to/palakbhawsar98/monitoring-aws-services-using-cloudwatch-4ap2</link>
      <guid>https://dev.to/palakbhawsar98/monitoring-aws-services-using-cloudwatch-4ap2</guid>
      <description>&lt;p&gt;In this article we will explore monitoring using AWS CloudWatch and creation of CloudWatch alarms for various AWS services leveraging AWS SNS (Simple Notification Service) and SES (Simple Email Service) to send emails to the team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic understanding of different AWS services&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Monitoring is a method of keeping an eye on your infrastructure. If something fails or breaches occur, you should receive a notification. Amazon CloudWatch collects various metrics, and based on the thresholds set, it triggers an alarm. These alarms can then initiate automated actions if already set up, or else send notifications.&lt;/p&gt;

&lt;p&gt;For instance, suppose you have deployed an application on an EC2 instance, but suddenly, the traffic to your application surges beyond what your instance can handle. In such situations, creating alarms for EC2 allows you to monitor the instance's performance. When the traffic exceeds a certain limit, these alarms can alert you or automatically increase the size of the instance. This scaling up enhances your application's performance and availability to meet the increased demand.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lDHzbsy9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706442904131/9517ff48-7382-4af6-9e9a-4b530809fc68.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lDHzbsy9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706442904131/9517ff48-7382-4af6-9e9a-4b530809fc68.png" alt="" width="500" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OK State&lt;/strong&gt; : The OK state indicates that the monitored metric is within the acceptable range and has not breached the defined threshold.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ALARM State&lt;/strong&gt; : The ALARM state indicates that the monitored metric has breached the defined threshold. When the metric enters the ALARM state, CloudWatch triggers the associated alarm, and any configured alarm actions are executed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;INSUFFICIENT_DATA State&lt;/strong&gt; : The INSUFFICIENT_DATA state indicates that there is insufficient data to determine the state of the alarm. This state typically occurs when CloudWatch does not have enough data points to evaluate the metric against the defined threshold.&lt;/p&gt;

&lt;h3&gt;
  
  
  Important terminology in CloudWatch
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;📈Metrics:&lt;/strong&gt; Metrics in CloudWatch are variables that represent specific data points or measurements about the performance of AWS resources. These metrics are continuously collected and monitored by CloudWatch. For example, CPUUtilization, MemoryUtilization, and DatabaseConnections are some of the metrics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alarms:&lt;/strong&gt; Alarms allow you to set thresholds on metrics. When a metric breaches the threshold, CloudWatch triggers an alarm, enabling you to take proactive actions or receive notifications. For example, trigger an alarm when CPU utilization exceeds 80%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📉Thresholds:&lt;/strong&gt; Thresholds are predefined values that you set on metrics to define acceptable performance ranges for your AWS resources. For example, you might set a threshold for CPU utilization at 80% or 90%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Period:&lt;/strong&gt; The period refers to the length of time over which CloudWatch aggregates data points for a specific metric.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Evaluation Period&lt;/strong&gt; : The evaluation period determines the number of consecutive data points that must breach the alarm threshold before CloudWatch triggers the alarm. For example, if we set the evaluation period to 3 for a 5-minute period, CloudWatch requires three consecutive 5-minute data points to breach the threshold before triggering the alarm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📄Namespaces&lt;/strong&gt; : Namespaces are containers for metrics, categorizing them based on their origin or source. For example, AWS services have their own namespaces like "AWS/EC2" or "AWS/S3".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;💹Datapoint:&lt;/strong&gt; Data points represent specific values or measurements of a metric at a particular point in time. For example, if CloudWatch collects data every minute for an EC2 instance running a web server, a data point indicates that at a given minute the CPU utilization of the instance was X%.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Create Amazon SNS topic
&lt;/h2&gt;

&lt;p&gt;In the AWS Console, search for SNS. Go to &lt;strong&gt;Topics&lt;/strong&gt; in the sidebar and click &lt;strong&gt;Create topic,&lt;/strong&gt; select the type as &lt;strong&gt;Standard,&lt;/strong&gt; enter a topic name, and click Create topic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tTxtWlj6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706612867128/87bc29bb-175d-497b-a9c2-09bb7ec43bc2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tTxtWlj6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706612867128/87bc29bb-175d-497b-a9c2-09bb7ec43bc2.png" alt="" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Subscribe to SNS topic
&lt;/h2&gt;

&lt;p&gt;Go to &lt;strong&gt;Subscriptions&lt;/strong&gt; in the left panel, click &lt;strong&gt;Create subscription,&lt;/strong&gt; choose your SNS topic ARN, select the protocol as &lt;strong&gt;Email&lt;/strong&gt;, and enter the &lt;strong&gt;email addresses&lt;/strong&gt; of your team members. Team members will receive a confirmation email and must confirm their subscription.&lt;/p&gt;
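&lt;p&gt;Steps 1 and 2 can also be scripted with the AWS SDK. This is a minimal sketch, assuming valid AWS credentials; the topic name and address are placeholders, and each subscriber still has to click the confirmation link SNS sends by email.&lt;/p&gt;

```python
def subscription_params(topic_arn, email):
    # Pure helper so the request shape can be inspected before calling AWS.
    return {"TopicArn": topic_arn, "Protocol": "email", "Endpoint": email}

def create_topic_with_email(topic_name="cloudwatch-alerts",
                            email="team@example.com"):
    import boto3
    sns = boto3.client("sns")
    # create_topic is idempotent: it returns the existing ARN if the topic
    # already exists with the same name.
    topic_arn = sns.create_topic(Name=topic_name)["TopicArn"]
    sns.subscribe(**subscription_params(topic_arn, email))
    return topic_arn
```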

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oARHfSov--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706612941479/67d981a4-0716-42d5-8844-9ab9b560d373.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oARHfSov--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706612941479/67d981a4-0716-42d5-8844-9ab9b560d373.png" alt="" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Setup SES for sending emails
&lt;/h2&gt;

&lt;p&gt;Go to AWS services, search for &lt;strong&gt;SES&lt;/strong&gt;, and verify the email addresses or domains that you want to send emails from. Once done, check your inbox and confirm the email address verification.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y6JaoaAt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706613388459/f444cad3-7a34-41c7-9bc5-e936f09debaf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y6JaoaAt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706613388459/f444cad3-7a34-41c7-9bc5-e936f09debaf.png" alt="" width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Create CloudWatch alarm for EC2
&lt;/h2&gt;

&lt;p&gt;Go to AWS services and search for &lt;strong&gt;CloudWatch.&lt;/strong&gt; In the left navigation pane, click &lt;strong&gt;All alarms&lt;/strong&gt;, then click &lt;strong&gt;Create alarm&lt;/strong&gt; and select the metric you want to monitor (e.g., CPU utilization for an EC2 instance).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4EFRkXXL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706447344823/157d267b-f011-4001-ab75-e0d84d111bbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4EFRkXXL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706447344823/157d267b-f011-4001-ab75-e0d84d111bbf.png" alt="" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the statistic, period, and threshold. Select the existing SNS topic and click &lt;strong&gt;Next,&lt;/strong&gt; enter an alarm name, and click &lt;strong&gt;Create alarm.&lt;/strong&gt;&lt;/p&gt;
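&lt;p&gt;The same alarm can be defined with the AWS SDK. A hedged sketch: the instance id and topic ARN are placeholders, and the 80% threshold with two evaluation periods is an example setting to tune for your workload.&lt;/p&gt;

```python
ALARM_PARAMS = {
    "AlarmName": "HighCPUUtilization",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    # Placeholder instance id; point this at your own EC2 instance.
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,               # aggregate over 5-minute windows
    "EvaluationPeriods": 2,      # two consecutive breaches before alarming
    "Threshold": 80.0,
    "ComparisonOperator": "GreaterThanThreshold",
    # Placeholder SNS topic ARN that receives the notification.
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:cloudwatch-alerts"],
}

def create_alarm(params=ALARM_PARAMS):
    import boto3
    boto3.client("cloudwatch").put_metric_alarm(**params)
```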

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8WwtHfhH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706447377528/5d834ebe-d569-4dcd-988f-7ff41268d491.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8WwtHfhH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706447377528/5d834ebe-d569-4dcd-988f-7ff41268d491.png" alt="" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the CPUUtilization threshold is crossed, the alarm is triggered and you will receive an email notification.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZNQezIFC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706615519491/31a5080f-db33-4fa1-b0bd-c9cbc3c4d1c3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZNQezIFC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706615519491/31a5080f-db33-4fa1-b0bd-c9cbc3c4d1c3.png" alt="" width="800" height="152"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Nag0sYG2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706615539915/07c31855-1ecb-4cbf-8365-aae4bb6efa36.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Nag0sYG2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706615539915/07c31855-1ecb-4cbf-8365-aae4bb6efa36.png" alt="" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is the email notification received at the verified email address:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P3sEZXHS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706615741812/e82967e1-6891-4770-ad4c-b93cae649eec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P3sEZXHS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1706615741812/e82967e1-6891-4770-ad4c-b93cae649eec.png" alt="" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Important metrics for Monitoring AWS services
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Metrics to monitor for EC2:
&lt;/h3&gt;

&lt;p&gt;Amazon Elastic Compute Cloud (EC2) is a web service that enables users to rent virtual servers, known as instances, along with other computing resources.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CPU Utilization&lt;/strong&gt; : Indicates how much of the instance's CPU capacity is being used by your workload.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Memory Utilization&lt;/strong&gt; : Monitoring this metric ensures that your instances have enough memory available to handle the workload efficiently. Note that memory metrics are not published by default and require the CloudWatch agent on the instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Network Traffic&lt;/strong&gt; : Monitoring inbound and outbound network traffic helps identify trends and anomalies in data transfer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Disk I/O&lt;/strong&gt; : Disk I/O metrics provide insight into how much data is being read from and written to the instance's storage volumes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Disk Utilization&lt;/strong&gt; : Monitoring disk space usage helps prevent instances from running out of storage capacity, which can lead to application failures or downtime. Like memory, disk space metrics also require the CloudWatch agent.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Metrics to monitor for CloudFront:
&lt;/h3&gt;

&lt;p&gt;Amazon CloudFront is a content delivery network (CDN) service provided by Amazon Web Services (AWS) that helps distribute content to users globally with low latency and high transfer speeds.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Requests&lt;/strong&gt; : Monitor the total number of requests served by CloudFront.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Transfer&lt;/strong&gt; : Track the volume of data transferred by CloudFront.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HTTP Status Codes&lt;/strong&gt; : Monitor HTTP status codes returned by CloudFront. This includes 2xx, 3xx, 4xx, and 5xx status codes. An increase in 4xx or 5xx errors might indicate issues that need attention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Origin Response Time&lt;/strong&gt; : Track the time taken by the origin server to respond to CloudFront requests. High response times can impact overall latency and user experience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error Rate&lt;/strong&gt; : Monitor the rate of errors encountered by CloudFront; an increase in error rates may indicate underlying issues with content delivery or origin servers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Metrics to monitor for Application Load Balancer:
&lt;/h3&gt;

&lt;p&gt;Application Load Balancer (ALB) distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, within one or more Availability Zones.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Request Count&lt;/strong&gt; : Total number of requests handled by the ALB over time. This metric helps you understand the traffic volume your application is receiving.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Healthy Host Count&lt;/strong&gt; : Track the number of healthy instances registered with the ALB's target groups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unhealthy Host Count&lt;/strong&gt; : Track the number of unhealthy instances registered with the ALB's target groups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Latency&lt;/strong&gt; : Monitor the latency of requests processed by the ALB. Latency metrics provide insights into the responsiveness of your application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HTTP Error Rates&lt;/strong&gt; : Track the rate of HTTP 4xx and 5xx error responses returned by the ALB.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Active Connection Count&lt;/strong&gt; : Monitor the number of active connections to the ALB over time. This metric helps you understand the level of concurrent connections your application is handling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Target Response Time&lt;/strong&gt; : Track the response time of backend instances when responding to requests forwarded by the ALB.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Metrics to monitor for Amazon S3:
&lt;/h3&gt;

&lt;p&gt;Amazon Simple Storage Service (S3) is a storage service that provides object storage through a web service interface.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bucket Size&lt;/strong&gt; : Monitor the total size of objects stored in each S3 bucket. This helps you understand your storage utilization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Number of Objects&lt;/strong&gt; : Track the total number of objects stored in each S3 bucket. Monitoring the object count helps you manage your data and identify any unexpected increases or decreases in the number of objects.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Request Metrics&lt;/strong&gt; : Monitor various S3 request metrics, including the number of GET, PUT, POST, DELETE, and LIST requests made to each bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Transfer Metrics&lt;/strong&gt; : Track the amount of data transferred in and out of each S3 bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bucket Access Metrics&lt;/strong&gt; : Monitor access metrics such as the number of requests made to the bucket from different AWS services and regions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Metrics to monitor for Amazon RDS:
&lt;/h3&gt;

&lt;p&gt;Amazon Relational Database Service (RDS) is a web service that makes it easy to set up, operate, and scale a relational database in the cloud.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CPU Utilization&lt;/strong&gt; : Monitor the CPU utilization of your RDS instances to ensure that they have sufficient processing power to handle the workload.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Database Connections&lt;/strong&gt; : Track the number of active database connections to your RDS instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Read and Write IOPS&lt;/strong&gt; : Monitor the read and write I/O operations per second for your RDS instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Storage Usage&lt;/strong&gt; : Keep track of the amount of storage used by your RDS instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Free Storage Space&lt;/strong&gt; : Monitor the amount of free storage space available in your RDS instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Database Throughput&lt;/strong&gt; : Monitor the throughput of data flowing in and out of your RDS instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Database Latency&lt;/strong&gt; : Monitor the latency of database queries and transactions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
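
&lt;p&gt;Any of these metrics can drive a CloudWatch alarm. As an example, a low-free-storage alarm for an RDS instance might be sketched in Terraform as follows (the resource name, instance identifier, and threshold are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_metric_alarm" "rds_low_storage" {
  alarm_name          = "rds-low-free-storage"
  namespace           = "AWS/RDS"
  metric_name         = "FreeStorageSpace"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 1
  threshold           = 5000000000          # ~5 GB, in bytes
  comparison_operator = "LessThanThreshold"

  dimensions = {
    DBInstanceIdentifier = "my-db-instance"   # placeholder identifier
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;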

&lt;p&gt;&lt;em&gt;Thank you for taking time to read my article. If I've overlooked any steps or missed any details, please don't hesitate to get in touch.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Feel free to reach out to me anytime&lt;/em&gt; &lt;a href="https://linkfree.eddiehub.io/palakbhawsar98"&gt;&lt;strong&gt;&lt;em&gt;Contact me&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;~ Palak Bhawsar&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Architect two tier Secure and Scalable AWS Infrastructure with Terraform</title>
      <dc:creator>Palak Bhawsar</dc:creator>
      <pubDate>Wed, 16 Aug 2023 04:55:36 +0000</pubDate>
      <link>https://dev.to/palakbhawsar98/architect-two-tier-secure-and-scalable-aws-infrastructure-with-terraform-2iho</link>
      <guid>https://dev.to/palakbhawsar98/architect-two-tier-secure-and-scalable-aws-infrastructure-with-terraform-2iho</guid>
      <description>&lt;p&gt;In this article, we will design an architecture that meets security, availability and scalability requirements and also streamlines deployment and management processes. We will create AWS Route53 for DNS management, Application Load Balancer (ALB) for load distribution, Auto Scaling Groups to dynamically adjust capacity, VPC setup for network isolation, AWS EC2 instances and MySQL for deploying a web application, and many other AWS services to explore the power of Infrastructure as Code (IaC) tool Terraform. Additionally, we will take a step further by not just deploying Python-MySQL application, but also containerizing it using Docker. This modernization approach enhances portability and consistency while simplifying management. Moreover, we'll ensure that our application has an online presence with our custom domain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisite:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Installed and configured &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html"&gt;&lt;strong&gt;AWS CLI&lt;/strong&gt;&lt;/a&gt; and &lt;a href="https://learn.hashicorp.com/terraform/getting-started/install.html"&gt;&lt;strong&gt;Terraform&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitHub and AWS account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic knowledge about Terraform&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Understanding of different AWS services&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Notes:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I suggest building the infrastructure and resources manually first to fully understand how they work together. If you encounter any problems with the Terraform user data script, try manually deploying the code by SSHing into the instance. This hands-on approach will give you a solid grasp of the setup and troubleshooting skills you need for successful automation later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code link:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/palakbhawsar98/Python-MySQL-application"&gt;Python-MySQl-Application&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/palakbhawsar98/Terraform-secure-scalable-two-tier-infra-project"&gt;Terraform code&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: State management using S3 and DynamoDB
&lt;/h2&gt;

&lt;p&gt;The first step is to create an AWS S3 bucket and a DynamoDB table to store and lock state files. Terraform state is a critical component that keeps track of the resources we create and manage. Instead of storing this state locally, we'll store it remotely in AWS S3 and use DynamoDB for state locking. Add the code below to the Terraform project folder and run the &lt;code&gt;terraform init&lt;/code&gt;, &lt;code&gt;plan&lt;/code&gt;, and &lt;code&gt;apply&lt;/code&gt; commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;############################## S3.tf #################################
resource "aws_s3_bucket" "dev-remote-state-bucket" {
    bucket = "dev-remote-state-bucket"
      versioning {
        enabled = true
    }
    tags = {
        Name = "S3 Remote Terraform State Store"
    }
}


############################## dynamo_db.tf ##########################
resource "aws_dynamodb_table" "terraform-state-lock" {
    name = "terraform-state-lock"
    read_capacity = 5
    write_capacity = 5
    hash_key = "LockID"
    attribute {
        name = "LockID"
        type = "S"
    }
    tags = {
        "Name" = "DynamoDB Terraform State Lock Table"
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1xvQ_BHS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692015636927/90e6048b-fa3e-428c-b7ff-75bc2cafd643.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1xvQ_BHS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692015636927/90e6048b-fa3e-428c-b7ff-75bc2cafd643.png" alt="" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--urpgKr6F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692015685099/50974205-bcb4-48cc-bd4b-76eb7670324b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--urpgKr6F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692015685099/50974205-bcb4-48cc-bd4b-76eb7670324b.png" alt="" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above AWS resources are created after executing the Terraform commands. Next, add a provider block that configures the S3 backend with the bucket and DynamoDB table names, delete the local &lt;code&gt;terraform.tfstate&lt;/code&gt; file, and run the Terraform commands again so the state is stored remotely, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;############################## provider.tf #############################
terraform {
     backend "s3" {
        bucket = "dev-remote-state-bucket"
        key = "terraform.tfstate"
        region = "us-east-1"
        dynamodb_table = "terraform-state-lock"
    }
    required_providers {
      aws = {
        version = "~&amp;gt;5.0"
        source = "hashicorp/aws"
      }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Purchase a domain name
&lt;/h2&gt;

&lt;p&gt;The first step in establishing your application's online presence is to purchase a domain name. You can choose a domain provider of your preference. I have chosen GoDaddy for domain registration services. You can also use AWS Route 53 for domain management which allows seamless integration with your AWS resources.&lt;/p&gt;

&lt;p&gt;Since the domain was purchased from GoDaddy, we need to delegate its DNS management to AWS Route 53 to integrate it with other AWS services. This allows you to manage your DNS records within AWS, ensuring better integration with your resources and services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Hosted zone in AWS Route 53
&lt;/h2&gt;

&lt;p&gt;An AWS Route 53 hosted zone is a container that holds information about how you want to route traffic for a specific domain, such as &lt;code&gt;palakbhawsar.in.&lt;/code&gt; It will provide us with the necessary nameservers that we can use to delegate the domain management to Route 53. We will later associate the hosted zone with an Application load balancer to enable dynamic routing and load balancing of traffic across multiple resources.&lt;/p&gt;

&lt;p&gt;We can then take the name servers from the output and update the GoDaddy account to delegate domain management to Route 53.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9QjUlPOn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692017376213/901e309f-521e-49b7-899e-b187c684b1ae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9QjUlPOn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692017376213/901e309f-521e-49b7-899e-b187c684b1ae.png" alt="" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to your &lt;strong&gt;GoDaddy account-&amp;gt;Domain portfolio-&amp;gt; DNS-&amp;gt; Nameservers-&amp;gt;Change nameservers-&amp;gt;Add nameservers-&amp;gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cYxW3mdx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692017655915/d582f965-588f-4c46-b9b2-e43e8f1cc6c7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cYxW3mdx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692017655915/d582f965-588f-4c46-b9b2-e43e8f1cc6c7.png" alt="" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Add an SSL certificate for the domain
&lt;/h2&gt;

&lt;p&gt;An SSL certificate ensures encrypted communication between users' browsers and your web application. We will utilize AWS Certificate Manager (ACM) to easily provision a free SSL/TLS certificate for our custom domain. The below code sets up an ACM certificate, creates a Route53 DNS record for certificate validation, and then validates the certificate. The validation process involves confirming ownership of the domain through DNS records, and it's a critical step in securing your website with an SSL/TLS certificate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_cSeIJ6u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692105733629/d884ef23-bb6b-48c1-94b8-f75c2ded777c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_cSeIJ6u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692105733629/d884ef23-bb6b-48c1-94b8-f75c2ded777c.png" alt="" width="800" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: VPC setup for network isolation
&lt;/h2&gt;

&lt;p&gt;In this step, we'll establish a Virtual Private Cloud (VPC) in the us-east-1 region, linking it to an Internet Gateway for external connectivity. The setup includes the creation of both public and private subnets across the us-east-1a and us-east-1b availability zones. Route tables for these subnets will be configured and associated accordingly. Route tables act as guides for network traffic, determining its destination based on predefined rules. This orchestrated process encompasses the development of a comprehensive networking environment, laying the foundation for the subsequent building blocks of your infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2ZXfRcFi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692105805593/db2d4630-354a-45f0-a248-67937c557377.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2ZXfRcFi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692105805593/db2d4630-354a-45f0-a248-67937c557377.png" alt="" width="800" height="102"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--prNYKXZA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692105999420/7bbadf18-c945-49e7-98ec-e389718cd60f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--prNYKXZA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692105999420/7bbadf18-c945-49e7-98ec-e389718cd60f.png" alt="" width="800" height="225"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--06VXkH0g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106114341/3073a345-cdcb-4323-8250-d5ba9e0c05ce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--06VXkH0g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106114341/3073a345-cdcb-4323-8250-d5ba9e0c05ce.png" alt="" width="800" height="143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Create an IAM role for EC2 instances
&lt;/h2&gt;

&lt;p&gt;In this step, we will set up an IAM role for the EC2 instances. The application running inside these instances connects to a MySQL database, and the database credentials are stored in AWS Systems Manager Parameter Store. We therefore create a role and instance profile that grant the instances permission to read those credentials from Systems Manager.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sTSH5FJ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692111562777/754a06bd-cae5-4e6c-88f5-a37775a26b1b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sTSH5FJ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692111562777/754a06bd-cae5-4e6c-88f5-a37775a26b1b.png" alt="" width="800" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---4E02bim--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692111519598/3c445006-7fbf-4ded-97bc-4d8217869cef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---4E02bim--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692111519598/3c445006-7fbf-4ded-97bc-4d8217869cef.png" alt="" width="800" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 7: Create instances and security groups
&lt;/h2&gt;

&lt;p&gt;In this step, we will create EC2 instances within each availability zone, leveraging a user data script that will orchestrate the deployment of our Python-MySQL application. The application will operate on port 80. We will create a security group that allows inbound traffic on port 80 for HTTP traffic, port 443 for HTTPS communication, and port 22 for SSH access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kd3792O_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106230961/ff231450-f0e6-45f9-8b37-039b05dd9402.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kd3792O_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106230961/ff231450-f0e6-45f9-8b37-039b05dd9402.png" alt="" width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LXkUupx---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106662679/1eaaf64b-6001-4929-8ba3-6ed6431649bc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LXkUupx---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106662679/1eaaf64b-6001-4929-8ba3-6ed6431649bc.png" alt="" width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 8: Create RDS and security group
&lt;/h2&gt;

&lt;p&gt;In this step, we will create a MySQL DB instance in a private subnet and attach the security group to the MySQL DB instance. The security group will allow traffic from the EC2 instance security group on port 3306. This architecture is designed to seamlessly handle contingencies. If the MySQL database instance in the us-east-1a availability zone encounters a failure, a failover plan comes into play and the standby MySQL instance in the us-east-1b availability zone takes over the role of the primary database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gRas_6NB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106712583/ab5e23ee-b00e-4243-ba0f-63b220256401.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gRas_6NB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106712583/ab5e23ee-b00e-4243-ba0f-63b220256401.png" alt="" width="800" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 9: Create an Application Load Balancer
&lt;/h2&gt;

&lt;p&gt;In this step, we will create an AWS Application Load Balancer (ALB) with its associated components. We start with a security group that permits incoming HTTP (port 80) and HTTPS (port 443) traffic and allows all outbound traffic. Next, we create a target group and an ALB listener for HTTPS on port 443 with a default fixed-response action. Additionally, listener rules are defined for the paths "/signup", "/signin", and "/dashboard", each forwarding traffic to the target group containing the EC2 instances. The SSL certificate is associated with the listener via the ACM certificate ARN. Lastly, a Route 53 A record is created to map the root domain ("&lt;a href="http://palakbhawsar.in"&gt;palakbhawsar.in&lt;/a&gt;") to the ALB's DNS name, ensuring incoming requests reach the ALB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MgdqB6JY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106253403/7d1394be-5961-4b7e-b3d0-ad40eac92bbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MgdqB6JY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106253403/7d1394be-5961-4b7e-b3d0-ad40eac92bbf.png" alt="" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hRQO3vN9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106312634/b7717c76-16fe-4945-a4c6-2ca5c5b7e623.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hRQO3vN9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106312634/b7717c76-16fe-4945-a4c6-2ca5c5b7e623.png" alt="" width="800" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 10: Create an Auto Scaling group
&lt;/h2&gt;

&lt;p&gt;In this step, we will create an Auto Scaling group, which manages a collection of Amazon EC2 instances. First, create a launch configuration that defines how instances are launched: it specifies the Amazon Machine Image (AMI), the instance type, and the security group for the instances. Next, create the Auto Scaling group itself, which uses this launch configuration and ensures the right number of instances is running at all times, adapting to demand. The group's settings allow between 2 and 4 instances, with a default desired capacity of 2. Finally, associate the target group with the Application Load Balancer (ALB) so incoming traffic is distributed among the instances in the Auto Scaling group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--thtGrJwa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106452951/a2aeeca0-e3d6-42a1-8c5f-1f89cf27e0d8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--thtGrJwa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106452951/a2aeeca0-e3d6-42a1-8c5f-1f89cf27e0d8.png" alt="" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 11: Create a parameter in AWS Systems Manager
&lt;/h2&gt;

&lt;p&gt;In this step, we will securely store the database password as a parameter in AWS Systems Manager Parameter Store. The Terraform resource, called "mysql_password", creates a parameter named "mysql_psw" of type SecureString, so the value is stored encrypted. We then put the password, say "12345678", into this parameter. Whenever the application needs the password, it reads it from here, like a secret vault dedicated to your credentials.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VN3ojB3T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106529886/eeea3037-443d-4da2-9c8a-c755a0dfea30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VN3ojB3T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692106529886/eeea3037-443d-4da2-9c8a-c755a0dfea30.png" alt="" width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After successfully deploying the infrastructure, it's time to verify its functionality. Open your browser and access the web application using your domain: &lt;a href="http://palakbhawsar.in/signup"&gt;palakbhawsar.in/signup&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x7_q7i7G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692108821352/73fd2030-546a-432d-9aaa-347f0f9b2914.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x7_q7i7G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692108821352/73fd2030-546a-432d-9aaa-347f0f9b2914.png" alt="" width="800" height="152"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since we added an SSL/TLS certificate for our domain, the connection is secure and the site can be accessed over HTTPS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M9NvmOPZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692108768073/bf973dcc-aeaa-482f-bcca-f2094e9478b1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M9NvmOPZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692108768073/bf973dcc-aeaa-482f-bcca-f2094e9478b1.png" alt="" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go ahead and sign up, then sign in to validate the connection with the MySQL database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Dssr8ASB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692108922493/afacd617-cbdf-4001-8bf9-9c9be903e051.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Dssr8ASB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692108922493/afacd617-cbdf-4001-8bf9-9c9be903e051.png" alt="" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Voila, it's a success! Your application is up and running smoothly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Iu24JcaI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692110374670/89f4bade-42df-4288-bb66-639c0d8b284c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Iu24JcaI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1692110374670/89f4bade-42df-4288-bb66-639c0d8b284c.png" alt="" width="800" height="142"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I am going to destroy this infrastructure due to the costs associated with AWS services. As a result, accessing the application using my domain might not be possible anymore.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;🥳🥳 Congratulations on completing this project!&lt;/p&gt;

&lt;p&gt;Thank you for taking the time to read my article. If I've overlooked any steps or missed any details, please don't hesitate to get in touch.&lt;/p&gt;

&lt;p&gt;👉 Feel free to reach out to me anytime: &lt;a href="https://linkfree.eddiehub.io/palakbhawsar98"&gt;&lt;strong&gt;Contact me&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Palak Bhawsar&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>S3 Event-Driven Email Notifications using AWS Lambda and SQS</title>
      <dc:creator>Palak Bhawsar</dc:creator>
      <pubDate>Wed, 17 May 2023 09:34:49 +0000</pubDate>
      <link>https://dev.to/palakbhawsar98/s3-event-driven-email-notifications-using-aws-lambda-and-sqs-3b7l</link>
      <guid>https://dev.to/palakbhawsar98/s3-event-driven-email-notifications-using-aws-lambda-and-sqs-3b7l</guid>
      <description>&lt;p&gt;In this project, I will be creating an S3 bucket, Queue, and Lambda function. Whenever an object is uploaded in the S3 bucket, an S3 event will trigger the queue and the lambda function will run which will check if the object name contains the word "sensitive" If so it will send an email notification to the team saying some sensitive file is added in the bucket.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisite:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS account and familiarity with AWS services&lt;/p&gt;

&lt;p&gt;Let's first understand these AWS services:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simple Storage Service (S3)&lt;/strong&gt;: A highly scalable object storage service that allows you to store and retrieve large amounts of data. It provides durability, availability, and security for your data and can be used for various purposes, such as backup, static website hosting, and content distribution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simple Queue Service (SQS)&lt;/strong&gt;: A fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. It allows you to send, store, and receive messages between software components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Function as a Service (Lambda)&lt;/strong&gt;: A compute service that lets you run code without provisioning or managing servers. It allows you to execute your code in response to events, such as changes in data, HTTP requests, or scheduled intervals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simple Email Service (SES)&lt;/strong&gt;: A fully managed service provided by AWS that enables you to send and receive email messages. It handles the underlying infrastructure and scaling aspects, allowing you to focus on sending and managing emails without the need to provision or manage servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Create an S3 bucket
&lt;/h2&gt;

&lt;p&gt;Go to the AWS console and search for "&lt;strong&gt;S3&lt;/strong&gt;" in services. Click &lt;strong&gt;Create bucket&lt;/strong&gt;, give the bucket a unique name &lt;strong&gt;-&amp;gt;&lt;/strong&gt; choose the AWS region where you want the bucket to reside. Keep the other settings default and click &lt;strong&gt;Create bucket&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NqF_gmPu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684262500512/05a0ccbe-9159-47dc-bc5e-3f06147bb2a5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NqF_gmPu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684262500512/05a0ccbe-9159-47dc-bc5e-3f06147bb2a5.png" alt="" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;
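The same bucket can be created programmatically, and one subtlety is worth sketching: `create_bucket` must omit `CreateBucketConfiguration` in us-east-1, the default region. The helper name below is illustrative; the dict would be passed to `boto3.client('s3').create_bucket(**...)`:

```python
def create_bucket_request(bucket_name, region):
    """Build the create_bucket keyword arguments for a given region."""
    req = {"Bucket": bucket_name}
    # us-east-1 is the default location and must NOT include a
    # CreateBucketConfiguration block, or the call fails.
    if region != "us-east-1":
        req["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return req
```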

&lt;h2&gt;
  
  
  Step 2: Create SQS
&lt;/h2&gt;

&lt;p&gt;Go to the AWS console and search for "&lt;strong&gt;SQS&lt;/strong&gt;" in services. Click &lt;strong&gt;Create queue&lt;/strong&gt; -&amp;gt; choose the queue type as &lt;strong&gt;Standard&lt;/strong&gt; -&amp;gt; give the queue a name -&amp;gt; keep the other settings default and click &lt;strong&gt;Create queue&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--skRQJ_RL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684262809206/dd23e9ec-9e21-4941-830c-615be5cb8d27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--skRQJ_RL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684262809206/dd23e9ec-9e21-4941-830c-615be5cb8d27.png" alt="" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;
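Scripted, queue creation is a one-parameter call, and it helps to know the fixed layout of SQS ARNs, since the access policy in the next step is keyed on them. A sketch with illustrative helper names (the first dict would go to `boto3.client('sqs').create_queue(**...)`):

```python
def create_queue_request(name):
    # Standard is the default queue type, so only the name is required.
    return {"QueueName": name}

def queue_arn(region, account_id, name):
    # SQS ARNs follow a fixed layout: arn:aws:sqs:REGION:ACCOUNT:NAME.
    return f"arn:aws:sqs:{region}:{account_id}:{name}"
```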

&lt;p&gt;Now attach the access policy to the queue so that it can communicate with the S3 bucket when the S3 event is triggered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Attach policy to SQS
&lt;/h2&gt;

&lt;p&gt;Go to the queue we created in the previous step and select the &lt;strong&gt;Access policy&lt;/strong&gt; tab. Click &lt;strong&gt;Edit&lt;/strong&gt; to update the permissions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0o-00sjn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684263357578/bf07551a-cada-4a41-906b-4b899b13294f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0o-00sjn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684263357578/bf07551a-cada-4a41-906b-4b899b13294f.png" alt="" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take the &lt;strong&gt;Resource&lt;/strong&gt; ARN from the existing policy shown in your console. Then go to the bucket we created; in the &lt;strong&gt;Properties&lt;/strong&gt; section you will find the bucket ARN. Copy it, paste it into &lt;strong&gt;aws:SourceArn&lt;/strong&gt; in the policy below, and save.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Id": "Policy1684249599450",
  "Statement": [
    {
      "Sid": "Stmt1684249581034",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "sqs:*",
      "Resource": "arn:aws:sqs:us-east-1:148418490226:project-orion-queue",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:s3:::project-orion-files"
        }
      }
    }
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
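If you prefer to generate the policy rather than hand-edit it, a sketch follows. Note that it is deliberately tighter than the policy above: it grants only `sqs:SendMessage` to the S3 service principal rather than `sqs:*` to every principal, which is the least-privilege shape. The helper name is my own; the JSON string would be applied via `sqs.set_queue_attributes`:

```python
import json

def s3_to_sqs_policy(queue_arn, bucket_arn):
    """Allow only S3 (and only the named bucket, via the ArnEquals
    condition) to send messages to the queue."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": bucket_arn}},
        }],
    }, indent=2)

policy = s3_to_sqs_policy(
    "arn:aws:sqs:us-east-1:148418490226:project-orion-queue",
    "arn:aws:s3:::project-orion-files",
)
```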



&lt;h2&gt;
  
  
  Step 4: Create S3 Event
&lt;/h2&gt;

&lt;p&gt;Go to your bucket and select &lt;strong&gt;Properties,&lt;/strong&gt; scroll down to &lt;strong&gt;Event notifications&lt;/strong&gt; and click &lt;strong&gt;Create event notification.&lt;/strong&gt; Give a name to your event then checkmark &lt;strong&gt;All object create events.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ch15QnGv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684308922186/f63d95b2-59f4-4efc-b9bd-8eab3d16216c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ch15QnGv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684308922186/f63d95b2-59f4-4efc-b9bd-8eab3d16216c.png" alt="" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select destination as &lt;strong&gt;SQS queue -&amp;gt;&lt;/strong&gt; Select the queue that we have created in previous steps &lt;strong&gt;-&amp;gt;&lt;/strong&gt; click &lt;strong&gt;save changes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5HHWumfh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684309219719/9196ab49-2691-4fe7-90a6-20ec66974448.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5HHWumfh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684309219719/9196ab49-2691-4fe7-90a6-20ec66974448.png" alt="" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;
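Under the hood, "Create event notification" writes a notification configuration onto the bucket. A sketch of the equivalent payload, with an illustrative helper name; the dict would be passed as `NotificationConfiguration` to `s3.put_bucket_notification_configuration`:

```python
def bucket_notification_config(queue_arn):
    # "All object create events" in the console maps to the
    # wildcard event type "s3:ObjectCreated:*".
    return {
        "QueueConfigurations": [{
            "QueueArn": queue_arn,
            "Events": ["s3:ObjectCreated:*"],
        }]
    }
```

S3 validates that it may publish to the queue when this is applied, which is why the access policy in Step 3 must be in place first.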

&lt;h2&gt;
  
  
  Step 5: Create Lambda function
&lt;/h2&gt;

&lt;p&gt;Go to AWS services and search for Lambda. Select Lambda and click &lt;strong&gt;Create function&lt;/strong&gt;. Select &lt;strong&gt;Author from scratch&lt;/strong&gt; -&amp;gt; give the function a name -&amp;gt; select the runtime as &lt;strong&gt;Python 3&lt;/strong&gt; -&amp;gt; click &lt;strong&gt;Create function&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now go to Configuration &lt;strong&gt;-&amp;gt;&lt;/strong&gt; Permissions &lt;strong&gt;-&amp;gt;&lt;/strong&gt; click &lt;strong&gt;Role name&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pLagRnMl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684309856663/769e9d54-3a61-4f6f-a959-48b0623d2dc4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pLagRnMl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684309856663/769e9d54-3a61-4f6f-a959-48b0623d2dc4.png" alt="" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add permissions to this Lambda role so that it can communicate with the SQS and SES services. Click &lt;strong&gt;Add permissions&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Attach policies&lt;/strong&gt; -&amp;gt; search for &lt;strong&gt;AmazonSESFullAccess&lt;/strong&gt; and &lt;strong&gt;AmazonSQSFullAccess&lt;/strong&gt; and attach these policies to the role.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y3cC9F0M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684310167396/656d8011-61f6-48e5-b1fa-0ecb7fd22ada.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y3cC9F0M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684310167396/656d8011-61f6-48e5-b1fa-0ecb7fd22ada.png" alt="" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Add Lambda Trigger
&lt;/h2&gt;

&lt;p&gt;We want this function to run when SQS has something to process. Click &lt;strong&gt;Add trigger&lt;/strong&gt; in the Lambda function and select &lt;strong&gt;SQS&lt;/strong&gt; as the trigger source. Select the SQS name that we have created. Keep the &lt;strong&gt;Batch window&lt;/strong&gt; as 1 then click &lt;strong&gt;Add.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bItq0A0P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684310456302/ab2632fa-590d-4d85-8298-f090515107a0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bItq0A0P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684310456302/ab2632fa-590d-4d85-8298-f090515107a0.png" alt="" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;
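The console trigger corresponds to a Lambda event source mapping. A sketch of the parameters, with an illustrative helper name; they would go to `boto3.client('lambda').create_event_source_mapping(**...)`:

```python
def sqs_trigger_request(queue_arn, function_name):
    # BatchSize=1 makes Lambda hand one SQS message to each
    # invocation, mirroring the small-batch setting in the console.
    return {
        "EventSourceArn": queue_arn,
        "FunctionName": function_name,
        "BatchSize": 1,
        "Enabled": True,
    }
```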

&lt;h2&gt;
  
  
  Step 7: Create SES and verify the Email
&lt;/h2&gt;

&lt;p&gt;Go to AWS services and search for SES. Select SES and click &lt;strong&gt;Verified identities&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Create identity&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Email address&lt;/strong&gt; -&amp;gt; enter the email address -&amp;gt; click &lt;strong&gt;Create identity&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Check your mailbox and verify the email address. Create an identity for both the sender's email address and the receiver's email address. Once the identity is verified move to the next step.&lt;/p&gt;
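Once both identities are verified, the Lambda function can send mail through SES. A sketch of the request the function would build (the helper name and addresses are placeholders; the dict would be passed to `boto3.client('ses').send_email(**...)`). While SES is in sandbox mode, both the sender and the recipient must be verified identities:

```python
def send_alert_request(sender, recipient, object_key):
    """Build a send_email request announcing a sensitive upload."""
    return {
        "Source": sender,
        "Destination": {"ToAddresses": [recipient]},
        "Message": {
            "Subject": {"Data": "Sensitive file uploaded"},
            "Body": {"Text": {"Data": f"Object '{object_key}' was uploaded to the bucket."}},
        },
    }
```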

&lt;h2&gt;
  
  
  Step 8: Deploy the Lambda function
&lt;/h2&gt;

&lt;p&gt;Go to Lambda and add the Python code that filters the objects uploaded to the S3 bucket and sends an email to the receiver if the object name contains the word "sensitive".&lt;/p&gt;

&lt;p&gt;Clone this &lt;strong&gt;GitHub&lt;/strong&gt; repository: &lt;a href="https://github.com/palakbhawsar98/AWS-Lambda-SQS-Project"&gt;https://github.com/palakbhawsar98/AWS-Lambda-SQS-Project&lt;/a&gt;&lt;/p&gt;
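The heart of that code is unwrapping the SQS message, whose body is itself the JSON S3 event, and checking the object key. A minimal sketch of that logic under my own naming (the repository linked above contains the full version):

```python
import json

SENSITIVE_MARKER = "sensitive"

def extract_flagged_keys(sqs_event):
    """Return object keys from an SQS-wrapped S3 event whose names
    contain the marker word. S3 test events carry no 'Records' key,
    so .get() skips them safely."""
    flagged = []
    for record in sqs_event.get("Records", []):
        s3_event = json.loads(record["body"])  # body is the S3 event JSON
        for s3_record in s3_event.get("Records", []):
            key = s3_record["s3"]["object"]["key"]
            if SENSITIVE_MARKER in key.lower():
                flagged.append(key)
    return flagged

def lambda_handler(event, context):
    flagged = extract_flagged_keys(event)
    # The real function would call ses.send_email(...) here for each
    # flagged key, using the verified identities from Step 7.
    return {"flagged": flagged}
```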

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_WRA00eI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684312187340/cd2ebe91-167f-429a-b98f-daf73197d780.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_WRA00eI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684312187340/cd2ebe91-167f-429a-b98f-daf73197d780.png" alt="" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After adding the Python code to the Lambda function, click &lt;strong&gt;Deploy&lt;/strong&gt;. To test it, upload some objects to the S3 bucket. Go to &lt;strong&gt;Monitor&lt;/strong&gt; and click &lt;strong&gt;View CloudWatch logs&lt;/strong&gt; -&amp;gt; select the latest &lt;strong&gt;Log stream&lt;/strong&gt;. Check that the email was sent to the receiver and that the queue is empty.&lt;/p&gt;

&lt;p&gt;As soon as you upload a file to the S3 bucket whose name contains the word "sensitive", an email will be sent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--licdh6Ll--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684314511766/0c7cd2a5-baa8-465c-91cf-def2d9b080c7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--licdh6Ll--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684314511766/0c7cd2a5-baa8-465c-91cf-def2d9b080c7.png" alt="" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IS0cwft3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684315027131/833a2e8f-c330-4c92-af7a-6c1f17b11715.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IS0cwft3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684315027131/833a2e8f-c330-4c92-af7a-6c1f17b11715.png" alt="" width="800" height="126"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations🥳🥳, we have completed the serverless project successfully.&lt;/p&gt;

&lt;p&gt;If you face any issues, contact me on my socials 👉 &lt;a href="https://linkfree.eddiehub.io/palakbhawsar98"&gt;&lt;strong&gt;Contact me&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank You&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Palak Bhawsar&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Automated CI/CD pipeline for Java Project</title>
      <dc:creator>Palak Bhawsar</dc:creator>
      <pubDate>Wed, 22 Mar 2023 04:14:58 +0000</pubDate>
      <link>https://dev.to/palakbhawsar98/automated-cicd-pipeline-for-java-project-367h</link>
      <guid>https://dev.to/palakbhawsar98/automated-cicd-pipeline-for-java-project-367h</guid>
      <description>&lt;p&gt;In this article, we will be creating an automated CI/CD pipeline for your Java project using Jenkins, Docker, and AWS. With this pipeline, your project will be automatically built, tested, and deployed to your AWS EC2 instance every time you push code to your GitHub repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisite:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;GitHub and DockerHub account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS account and knowledge of EC2 Instances&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Knowledge of Jenkins, Docker, and Maven&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Step 1: Setup Jenkins Server
&lt;/h1&gt;

&lt;p&gt;First, set up the Jenkins server to create the pipeline. Launch an EC2 instance and install Java and Jenkins on it. &lt;a href="https://dev.to/palakbhawsar98/install-jenkins-in-ec2-instance-using-user-data-script-3neg"&gt;&lt;strong&gt;Follow this article to set up a Jenkins server&lt;/strong&gt;&lt;/a&gt;. You also need Maven to build the Java project and Docker to build the Docker image in Jenkins. SSH into your instance and install them using the commands below. For Jenkins, I am using an Ubuntu instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Update packages
sudo apt-get update
# Install Maven on the Jenkins host
sudo apt-get install maven -y
# Install Docker
sudo apt-get install docker.io -y
# Add the jenkins user to the docker group
sudo usermod -a -G docker jenkins

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set JAVA_HOME and MAVEN_HOME using the commands below. To set the variables permanently, add them to the .bashrc file in your home directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64" &amp;gt;&amp;gt; ~/.bashrc
echo "export MAVEN_HOME=/usr/share/maven" &amp;gt;&amp;gt; ~/.bashrc

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;1.1. Install Plugins&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Install plugins to integrate Jenkins with GitHub, Maven, and EC2. Go to &lt;strong&gt;Manage Jenkins&lt;/strong&gt; and select &lt;strong&gt;Manage plugins&lt;/strong&gt;. Under &lt;strong&gt;Available plugins&lt;/strong&gt;, search for the plugins below and click &lt;strong&gt;Install without restart&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Git&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pipeline maven integration&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pipeline stage view&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SSH Agent&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1.2. Configure Java and Maven in Jenkins&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Go to &lt;strong&gt;Manage Jenkins&lt;/strong&gt;, select &lt;strong&gt;Global tool configuration&lt;/strong&gt;, and scroll down to add the &lt;strong&gt;JDK&lt;/strong&gt; and &lt;strong&gt;Maven&lt;/strong&gt; paths that we exported in the steps above, as shown below. Uncheck the &lt;strong&gt;Install automatically&lt;/strong&gt; checkbox, give a &lt;strong&gt;Name&lt;/strong&gt; and &lt;strong&gt;Path&lt;/strong&gt;, and click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aDIi4R2N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1678541765953/86896052-e694-464b-a996-5270e7210e23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aDIi4R2N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1678541765953/86896052-e694-464b-a996-5270e7210e23.png" alt="" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xge8iEbX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1678541809448/8535c744-1873-4e96-bdc4-04bb5c19071a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xge8iEbX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1678541809448/8535c744-1873-4e96-bdc4-04bb5c19071a.png" alt="" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1.3. Create Webhook&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A webhook triggers the Jenkins pipeline automatically whenever changes such as commits and pushes happen in the GitHub repository. Go to the Jenkins dashboard and copy the URL from the browser. Now go to the GitHub repository settings and, in the left pane, select &lt;strong&gt;Webhooks&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qZAWgYME--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1675071024012/0222fbfa-6a5d-40ce-9b52-78188cd52946.png%3Fauto%3Dcompress%2Cformat%26format%3Dwebp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qZAWgYME--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1675071024012/0222fbfa-6a5d-40ce-9b52-78188cd52946.png%3Fauto%3Dcompress%2Cformat%26format%3Dwebp" alt="" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Add webhook&lt;/strong&gt; and paste the Jenkins URL into &lt;strong&gt;Payload URL&lt;/strong&gt;, appending &lt;strong&gt;/github-webhook/&lt;/strong&gt; as shown below. Select the events that should trigger the pipeline (I selected &lt;strong&gt;Just the push event&lt;/strong&gt;) and click &lt;strong&gt;Add webhook&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rsKxn462--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1675071481032/0263bc55-52ac-4f6a-8f50-437253f5fc0e.png%3Fauto%3Dcompress%2Cformat%26format%3Dwebp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rsKxn462--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1675071481032/0263bc55-52ac-4f6a-8f50-437253f5fc0e.png%3Fauto%3Dcompress%2Cformat%26format%3Dwebp" alt="" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 2: Add DockerHub credential in Jenkins
&lt;/h1&gt;

&lt;p&gt;To build and push the Docker image to DockerHub, we need to add docker credentials in Jenkins. Go to your DockerHub account and select the dropdown near username and click &lt;strong&gt;Account settings.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rKZH4LSk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1678606769096/69a9fbe9-64c9-4a03-90ce-ccda5c0735fe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rKZH4LSk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1678606769096/69a9fbe9-64c9-4a03-90ce-ccda5c0735fe.png" alt="" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Security&lt;/strong&gt; in the left panel and click &lt;strong&gt;New Access Token&lt;/strong&gt;. Give it a &lt;strong&gt;description&lt;/strong&gt; and click &lt;strong&gt;Generate&lt;/strong&gt;. Copy the token and close the dialog. Keep this token handy, as we will add it to the Jenkins credentials in the next steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J_dsmx4a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1678606872401/9daf0b59-c78c-482c-8438-4ca108e062d7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J_dsmx4a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1678606872401/9daf0b59-c78c-482c-8438-4ca108e062d7.png" alt="" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the &lt;strong&gt;Jenkins dashboard&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Manage Jenkins&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Manage credentials&lt;/strong&gt;. Click &lt;strong&gt;System&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Global credentials&lt;/strong&gt; and then click &lt;strong&gt;Add credentials&lt;/strong&gt;. Select the kind &lt;strong&gt;Username and Password&lt;/strong&gt;, enter your DockerHub username, and paste the token we generated in DockerHub into the &lt;strong&gt;Password&lt;/strong&gt; field. Give an ID and description of your choice, but remember the ID as we will use it in the Jenkinsfile, and click &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Fw6yOuyR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1678622505112/58730e33-bd3c-4e5f-9827-e0309c4e3501.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Fw6yOuyR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1678622505112/58730e33-bd3c-4e5f-9827-e0309c4e3501.png" alt="" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2.1. Create a Repository in DockerHub&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Go to the DockerHub registry, select &lt;strong&gt;Repositories&lt;/strong&gt; and click &lt;strong&gt;Create repository.&lt;/strong&gt; Give a name to the repository and select visibility as &lt;strong&gt;Public&lt;/strong&gt; and click &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RuAyvbkv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679334721927/884a895a-3438-43fb-94eb-209af5bd4236.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RuAyvbkv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679334721927/884a895a-3438-43fb-94eb-209af5bd4236.png" alt="" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 3: Setup Docker in EC2 Instance
&lt;/h1&gt;

&lt;p&gt;Launch an EC2 instance and install Docker on it. Open port 22 to SSH into the instance, and open port 8081, as I will be exposing the Java application on port 8081. After a successful deployment, you will be able to access the application in the browser at &lt;code&gt;http://PUBLIC_IP:8081/hello&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I have launched the Amazon Linux instance and setup Docker using the below commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Update packages
sudo yum update
# Install Docker
sudo yum install docker
# Add ec2-user to the docker group
sudo usermod -a -G docker ec2-user
# Enable docker service at AMI boot time
sudo systemctl enable docker.service
# Start the Docker service
sudo systemctl start docker.service

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;3.1. Add EC2 credentials in Jenkins&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Go to &lt;strong&gt;Jenkins Dashboard -&amp;gt; Manage Jenkins -&amp;gt; Manage Credentials&lt;/strong&gt;. Click on Global, click &lt;strong&gt;Add Credentials&lt;/strong&gt;, and select &lt;strong&gt;SSH Username with private key&lt;/strong&gt; under &lt;strong&gt;Kind&lt;/strong&gt;. Enter an ID and description of your choice; make a note of this ID as we will use it in the Jenkinsfile. Enter the username of the EC2 instance that you launched (for an Amazon Linux instance, the username is ec2-user). Select &lt;strong&gt;Enter directly&lt;/strong&gt;, click &lt;strong&gt;Add&lt;/strong&gt;, paste the private key from the key pair you created when launching the instance, and click &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wbbCip4b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679336942047/cc2ca8ff-07dc-4363-b0f2-a7f6bc1e07cd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wbbCip4b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679336942047/cc2ca8ff-07dc-4363-b0f2-a7f6bc1e07cd.png" alt="" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Write Jenkinsfile and Dockerfile
&lt;/h2&gt;

&lt;p&gt;Write a Jenkinsfile with all the stages below to fetch, build, test, and deploy the Java application. Remember to add this Jenkinsfile to the root directory of your project in GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/palakbhawsar98/JavaWebApp"&gt;&lt;strong&gt;GitHub Project&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
  agent any

  environment {
    DOCKERHUB_CREDENTIALS = credentials('docker-hub-cred')
    REMOTE_SERVER = 'your-remote-server-ip'
    REMOTE_USER = 'your-remote-server-user'            
  }

  // Fetch code from GitHub

  stages {
    stage('checkout') {
      steps {
        git branch: 'main', url: 'https://github.com/palakbhawsar98/JavaWebApp'

      }
    }

   // Build Java application

    stage('Maven Build') {
      steps {
        sh 'mvn clean install'
      }

     // Post building archive Java application

      post {
        success {
          archiveArtifacts artifacts: '**/target/*.jar'
        }
      }
    }

  // Test Java application

    stage('Maven Test') {
      steps {
        sh 'mvn test'
      }
    }

   // Build docker image in Jenkins

    stage('Build Docker Image') {

      steps {
        sh 'docker build -t javawebapp:latest .'
        sh 'docker tag javawebapp:latest palakbhawsar/javawebapp:latest'
      }
    }

   // Login to DockerHub before pushing docker Image

    stage('Login to DockerHub') {
      steps {
        sh 'echo $DOCKERHUB_CREDENTIALS_PSW | docker login -u $DOCKERHUB_CREDENTIALS_USR --password-stdin'
      }
    }

   // Push image to DockerHub registry

    stage('Push Image to DockerHub') {
      steps {
        sh 'docker push palakbhawsar/javawebapp:latest'
      }
      post {
        always {
          sh 'docker logout'
        }
      }

    }

   // Pull docker image from DockerHub and run in EC2 instance 

    stage('Deploy Docker image to AWS instance') {
      steps {
        script {
          sshagent(credentials: ['awscred']) {
          sh "ssh -o StrictHostKeyChecking=no ${REMOTE_USER}@${REMOTE_SERVER} 'docker stop javaApp || true &amp;amp;&amp;amp; docker rm javaApp || true'"
          sh "ssh -o StrictHostKeyChecking=no ${REMOTE_USER}@${REMOTE_SERVER} 'docker pull palakbhawsar/javawebapp:latest'"
          sh "ssh -o StrictHostKeyChecking=no ${REMOTE_USER}@${REMOTE_SERVER} 'docker run --name javaApp -d -p 8081:8081 palakbhawsar/javawebapp:latest'"
          }
        }
      }
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, write a Dockerfile with the instructions to build the Java project, and keep this file in the root directory of the project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Define your base image 
FROM eclipse-temurin:17-jdk-focal 

#Maintainer of this image
LABEL maintainer="Palak Bhawsar" 

#Copying Jar file from target folder                                                                                       
COPY target/web-services.jar web-services.jar  

#Expose app to outer world on this port                                                                                                                                                                                                                                                                          
EXPOSE 8081   

#Run executable with this command  
ENTRYPOINT ["java", "-jar", "web-services.jar"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
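
&lt;p&gt;The COPY instruction above assumes the Maven build produces &lt;strong&gt;target/web-services.jar&lt;/strong&gt;. If your pom.xml generates a jar with a version suffix (for example web-services-1.0.jar), you can pin the artifact name with Maven's finalName element, as in this illustrative pom.xml fragment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;build&amp;gt;
  &amp;lt;!-- Name the jar web-services.jar regardless of the project version --&amp;gt;
  &amp;lt;finalName&amp;gt;web-services&amp;lt;/finalName&amp;gt;
&amp;lt;/build&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;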



&lt;h2&gt;
  
  
  Step 5: Create a Jenkins pipeline
&lt;/h2&gt;

&lt;p&gt;Go to the Jenkins Dashboard and click &lt;strong&gt;New Item -&amp;gt; Give a name to the pipeline -&amp;gt; Select Pipeline -&amp;gt; Click Ok&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TdEA3zfO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679332885888/2546b36d-fbce-4ccc-ade1-fdbb733a4dbb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TdEA3zfO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679332885888/2546b36d-fbce-4ccc-ade1-fdbb733a4dbb.png" alt="" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add a description of your pipeline, and under &lt;strong&gt;Build Triggers&lt;/strong&gt; select &lt;strong&gt;GitHub hook trigger for GITScm polling&lt;/strong&gt;. Scroll down to the Pipeline section and from the dropdown select &lt;strong&gt;Pipeline script from SCM.&lt;/strong&gt; Under SCM, select Git and enter your GitHub project repository URL. If your GitHub repository is private, add credentials.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Akv5Y4cA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679334002155/27ac5310-1459-49f5-b49a-8cc9aef8f55c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Akv5Y4cA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679334002155/27ac5310-1459-49f5-b49a-8cc9aef8f55c.png" alt="" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, enter the branch name in &lt;strong&gt;Branches to build&lt;/strong&gt; and the Jenkinsfile name in &lt;strong&gt;Script Path&lt;/strong&gt;, and click &lt;strong&gt;Save.&lt;/strong&gt; Finally, click &lt;strong&gt;Build Now&lt;/strong&gt; to run the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M7ljpCAj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679334122783/f468c4ba-33ce-40fe-bec0-5e80f195ebeb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M7ljpCAj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679334122783/f468c4ba-33ce-40fe-bec0-5e80f195ebeb.png" alt="" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Jenkins pipeline ran successfully, and the Docker image was deployed to the EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SGn5Y2lr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679336306158/c4893bdf-9a0c-4996-b377-0dfabcea3746.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SGn5Y2lr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679336306158/c4893bdf-9a0c-4996-b377-0dfabcea3746.png" alt="" width="800" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Access your Java application in the browser using the public IP of the AWS instance at port 8081.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="http://publicIP:8081/hello"&gt;http://publicIP:8081/hello&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--drZ57wAN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679336421530/4fbef44e-0097-4e1f-a87e-46299da8e071.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--drZ57wAN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1679336421530/4fbef44e-0097-4e1f-a87e-46299da8e071.png" alt="" width="800" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations🥳🥳, we have successfully created an Automated CI/CD pipeline for Java application.&lt;/p&gt;

&lt;p&gt;If you face any issues, contact me on my socials 👉 &lt;a href="https://linkfree.eddiehub.io/palakbhawsar98"&gt;Contact me&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank You&lt;/p&gt;

&lt;p&gt;Palak Bhawsar&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Create free tier AWS account, create IAM user and set CloudWatch alarm for billing</title>
      <dc:creator>Palak Bhawsar</dc:creator>
      <pubDate>Mon, 27 Feb 2023 17:02:31 +0000</pubDate>
      <link>https://dev.to/palakbhawsar98/create-free-tier-aws-account-create-iam-user-and-set-cloudwatch-alarm-for-billing-41ge</link>
      <guid>https://dev.to/palakbhawsar98/create-free-tier-aws-account-create-iam-user-and-set-cloudwatch-alarm-for-billing-41ge</guid>
      <description>&lt;p&gt;In this article, we will see how to create a free tier AWS account step by step. We will create the IAM user and enable the Multi-factor Authentication (MFA) for the root user and IAM user to secure your account. Also to safeguard our AWS account from over-expenditure we will set billing alarms and create an SNS topic that will send you an email notification if you go beyond a certain limit.&lt;/p&gt;

&lt;p&gt;Let's first understand some of these terminologies:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Web Services (AWS)&lt;/strong&gt;: AWS is a cloud provider platform that offers on-demand computing, networking, and storage services that scale easily.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Root User:&lt;/strong&gt; The root user is the account owner and is created when the AWS account is created. This is the default user and should not be used or shared.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IAM User:&lt;/strong&gt; An IAM user is an identity you create to give specific people access to your AWS account, with specific permissions to access resources in the account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IAM Policy:&lt;/strong&gt; IAM policies define permissions for actions that users or groups can perform in an AWS account. Users and groups are assigned JSON documents called policies.&lt;/p&gt;
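
&lt;p&gt;For illustration, a policy is just a JSON document. The hypothetical policy below, for example, would allow a user to list buckets and read objects in S3, and nothing else:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": "*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;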

&lt;p&gt;&lt;strong&gt;AWS CloudWatch alarm:&lt;/strong&gt; A CloudWatch alarm lets you watch CloudWatch metrics and receive notifications when a metric falls outside the levels (high or low thresholds) that you configure. In our case, we are setting a CloudWatch alarm to monitor a billing threshold.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simple Notification Service (SNS):&lt;/strong&gt; SNS is a managed service that provides message delivery from publishers to subscribers. Publishers communicate asynchronously with subscribers by sending messages to a &lt;em&gt;topic&lt;/em&gt;. An SNS topic is a logical access point that acts as a communication channel.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Create an AWS account
&lt;/h2&gt;

&lt;p&gt;Go to the AWS site below to create a free tier account and click the &lt;strong&gt;Create a Free Account&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/free/?trk=09863622-0e2a-4080-9bba-12d378e294ba&amp;amp;sc_channel=ps&amp;amp;s_kwcid=AL!4422!3!453325184854!e!!g!!aws%20free%20tier&amp;amp;ef_id=Cj0KCQiAgOefBhDgARIsAMhqXA69rt2K3ALL5v3RAgYzxPcOjC0YuEW7QpyJP9OG6_yZD-aKZmKRn4IaAs8tEALw_wcB:G:s&amp;amp;s_kwcid=AL!4422!3!453325184854!e!!g!!aws%20free%20tier&amp;amp;all-free-tier.sort-by=item.additionalFields.SortRank&amp;amp;all-free-tier.sort-order=asc&amp;amp;awsf.Free%20Tier%20Types=*all&amp;amp;awsf.Free%20Tier%20Categories=*all" rel="noopener noreferrer"&gt;Free tier account&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6hkzednwawz6zem2wqk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6hkzednwawz6zem2wqk.png" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter your email address and give any name to your AWS account. You can also change this name later. Click &lt;strong&gt;Verify email address.&lt;/strong&gt; You will get the verification code in your email. Enter the code and click &lt;strong&gt;Verify.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fui789uwppfcm171mbhlc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fui789uwppfcm171mbhlc.png" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a root user password. Make sure that you create a strong password containing more than 8 characters, then click &lt;strong&gt;Continue.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frckz7uxfav30y682slus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frckz7uxfav30y682slus.png" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next step, select &lt;strong&gt;Personal - for your own projects.&lt;/strong&gt; Enter your full name, phone number, country or region, and address details. Click the checkbox after reading the &lt;strong&gt;AWS Customer Agreement&lt;/strong&gt; and click Continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8r6nn7520ytpwzz8uf8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8r6nn7520ytpwzz8uf8.png" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next step, enter your credit or debit card details. A minimal amount will be deducted just for verification purposes; click Continue. On the next page, it will ask for an OTP, and &lt;strong&gt;Rs. 2.00&lt;/strong&gt; will be deducted from your account. Finally, enter the OTP and click the &lt;strong&gt;Submit&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgt2alpw9xm9186824nk0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgt2alpw9xm9186824nk0.png" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Confirm your Identity on the next page by verifying your Phone number. This phone number will be used to send you verification codes in the future. Select text message and enter your phone number, do the security check and click &lt;strong&gt;Send SMS&lt;/strong&gt;. Enter the OTP sent to your phone number.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gyjtzwyyi9eo3xkqbz3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gyjtzwyyi9eo3xkqbz3.png" width="800" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the Basic support - Free plan, which comes under the free tier, and finally click &lt;strong&gt;Complete Sign up&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26xfizb6l15vdc6h3zg0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26xfizb6l15vdc6h3zg0.png" width="800" height="776"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the AWS management console and select &lt;strong&gt;sign in to the console&lt;/strong&gt; and sign in using the email and password that we set up previously for the root user.&lt;/p&gt;

&lt;p&gt;The root user has access to every AWS service and resource in an account. If the credentials for the root account are stolen, it may lead to unnecessary costs in your account; therefore, &lt;strong&gt;it's recommended not to use the root account and instead create an IAM user&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Enable Multi-factor authentication
&lt;/h2&gt;

&lt;p&gt;Search for IAM in the services section and select IAM. The IAM dashboard will appear; under the security recommendations, click &lt;strong&gt;Add MFA&lt;/strong&gt; and then &lt;strong&gt;Assign MFA&lt;/strong&gt; on the next page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fba1emiahythcwxkh2tm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fba1emiahythcwxkh2tm2.png" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter a name in the &lt;strong&gt;Device name&lt;/strong&gt; box, select &lt;strong&gt;Authenticator app&lt;/strong&gt;, and click next. Download the &lt;strong&gt;Google Authenticator app&lt;/strong&gt; on your phone. It will generate a 6-digit verification code that you will have to enter whenever you log in as the root user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsri0uggakdhnbv2dgodq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsri0uggakdhnbv2dgodq.png" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Show QR code&lt;/strong&gt; on the next page. Open the Google Authenticator app on your phone, tap the '+' button, and scan the QR code visible on your AWS account screen. Enter the 6-digit code from the app in the &lt;strong&gt;MFA code 1&lt;/strong&gt; box. Wait 30 seconds, enter the next code in the &lt;strong&gt;MFA code 2&lt;/strong&gt; box, and click &lt;strong&gt;Add MFA.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Multi-factor authentication is enabled for your root account.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Create IAM User
&lt;/h2&gt;

&lt;p&gt;Go to &lt;strong&gt;Users&lt;/strong&gt; in the left panel and click &lt;strong&gt;Add users&lt;/strong&gt;. Give a name to the user and click &lt;strong&gt;next.&lt;/strong&gt; This user will not have any permissions by default; attach a policy to let the user access AWS resources. Select &lt;strong&gt;Attach policies directly&lt;/strong&gt;, choose &lt;strong&gt;AdministratorAccess&lt;/strong&gt;, then click &lt;strong&gt;next&lt;/strong&gt; and &lt;strong&gt;Create user&lt;/strong&gt;. The user is created successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgv05pbjdbat32taamal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgv05pbjdbat32taamal.png" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Create credentials for IAM user
&lt;/h2&gt;

&lt;p&gt;Click on your username, go to &lt;strong&gt;Security Credentials&lt;/strong&gt;, and click &lt;strong&gt;Enable console access&lt;/strong&gt; under Console sign-in. Select Enable and set a custom password in the next window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frw27u3v41j75v3v3vu8q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frw27u3v41j75v3v3vu8q.png" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a strong password for this user, check &lt;strong&gt;User must create new password at next sign-in&lt;/strong&gt;, click &lt;strong&gt;Apply&lt;/strong&gt;, and download the &lt;strong&gt;CSV&lt;/strong&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xf75l4wtvg6xh6szkcm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xf75l4wtvg6xh6szkcm.png" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the dashboard and, on the right, click the &lt;strong&gt;Create&lt;/strong&gt; button. It will create an account alias for your IAM user, so that whenever you log in you don't have to enter the Account ID; you can use the alias that you created instead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enable MFA for IAM user&lt;/strong&gt;: Go to Users, click on your username, go to security credentials, and click &lt;strong&gt;Assign MFA device.&lt;/strong&gt; Set up MFA for this user in the same way that we did for the root user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fip6a0v5pkdfls3gjv6it.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fip6a0v5pkdfls3gjv6it.png" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Create a Billing Alert
&lt;/h2&gt;

&lt;p&gt;To safeguard your account against using services that are not under the free tier, and to get an email notification whenever you cross a set bill amount, we have to create a billing alarm. Go to the dropdown under your root user and click &lt;strong&gt;Billing dashboard.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpolrl0nwwk64omocs3wu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpolrl0nwwk64omocs3wu.png" width="800" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;Bills&lt;/strong&gt; in the right panel and then &lt;strong&gt;Billing preferences&lt;/strong&gt;. Select the checkboxes for &lt;strong&gt;Receive PDF Invoice By Email&lt;/strong&gt;, &lt;strong&gt;Receive Free Tier Usage Alerts&lt;/strong&gt;, and &lt;strong&gt;Receive Billing Alerts.&lt;/strong&gt; Give your email address to receive the notifications for billing alerts and click &lt;strong&gt;Save preferences&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kyk59255q60w131qub4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kyk59255q60w131qub4.png" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Create CloudWatch Alarm
&lt;/h2&gt;

&lt;p&gt;You can monitor your estimated AWS charges by using Amazon CloudWatch. Search for CloudWatch in AWS services and select it. Billing metric data is stored in the US East (N. Virginia) Region, so make sure you are in the N. Virginia Region before creating alarms.&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;All alarms&lt;/strong&gt; in the left panel and select &lt;strong&gt;Create alarm.&lt;/strong&gt; Click Select metric and scroll down; you will see Billing, since we enabled billing alerts in the previous step.&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;Billing&lt;/strong&gt; and &lt;strong&gt;Total estimated charge.&lt;/strong&gt; Click &lt;strong&gt;USD&lt;/strong&gt; and &lt;strong&gt;Select metric.&lt;/strong&gt; On the next page, define the threshold value. For example, to get an alert when your billing amount exceeds 2 USD, enter that value and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp83fa4ty2l7yi88l6053.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp83fa4ty2l7yi88l6053.png" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Create an SNS Topic
&lt;/h2&gt;

&lt;p&gt;Select &lt;strong&gt;Create new topic&lt;/strong&gt;, give the topic a name, enter your email address, then click &lt;strong&gt;Create topic&lt;/strong&gt; and &lt;strong&gt;Next&lt;/strong&gt;. On the next page, give the alarm a name, click &lt;strong&gt;Next&lt;/strong&gt;, and then &lt;strong&gt;Create alarm.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kgm95404kcguzikc9nq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kgm95404kcguzikc9nq.png" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to your email and confirm the subscription. The alarm will trigger when your account billing exceeds the threshold you have specified. The status is OK as my billing is less than 2 USD.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjvzdyabps88dgbu29ia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjvzdyabps88dgbu29ia.png" width="800" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, sign out from the root user. Sign in as the IAM user, using the alias (instead of the Account ID) that we created in the previous step. Enter the name of the IAM user and the password that we set up. Get the verification code from Google Authenticator for the IAM user and click sign in. The window will prompt you to set a new password; make sure it's strong and has more than 8 characters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22qyjre0qlikw6hhlig2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22qyjre0qlikw6hhlig2.png" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations🥳🥳, we have created an AWS free tier account and learned about IAM users, IAM policies, and CloudWatch alarms.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you face any issues, contact me on my socials&lt;/strong&gt; 👉 &lt;a href="https://linkfree.eddiehub.io/palakbhawsar98" rel="noopener noreferrer"&gt;&lt;strong&gt;Contact me&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>posters</category>
    </item>
    <item>
      <title>Create pipeline using Jenkins GitHub and AWS for Java Project</title>
      <dc:creator>Palak Bhawsar</dc:creator>
      <pubDate>Tue, 31 Jan 2023 06:49:02 +0000</pubDate>
      <link>https://dev.to/palakbhawsar98/create-pipeline-using-jenkins-github-and-aws-for-java-project-59fj</link>
      <guid>https://dev.to/palakbhawsar98/create-pipeline-using-jenkins-github-and-aws-for-java-project-59fj</guid>
      <description>&lt;p&gt;In this article, we are going to create a continuous integration pipeline for a Maven project. This pipeline will fetch the code from GitHub, build the code, run the test cases, and will store the generated artifacts in the AWS S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisite:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS account with user permission to upload files in S3&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS access key and secret access key for a user&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitHub account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Understanding of Jenkins&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Familiarity with Maven commands to build and run Java project&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Let's first understand what Jenkins, GitHub, Webhook, S3, and Maven are:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Jenkins&lt;/em&gt;&lt;/strong&gt; is an open-source automation tool written in Java, with plugins built for Continuous Integration and Continuous Delivery/Deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;GitHub&lt;/em&gt;&lt;/strong&gt; is a code hosting platform for collaboration and version control. It lets you and others work together on projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Webhook&lt;/em&gt;&lt;/strong&gt; is a mechanism to automatically trigger the build of a Jenkins project in response to a commit pushed to a Git repository.&lt;/p&gt;
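
&lt;p&gt;For reference, when you add a webhook in your GitHub repository settings, the configuration for the Jenkins GitHub plugin typically looks like the following (the Jenkins host is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Payload URL:  http://&amp;lt;your-jenkins-host&amp;gt;:8080/github-webhook/
Content type: application/json
Events:       Just the push event
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;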

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Maven&lt;/em&gt;&lt;/strong&gt; is a powerful project management tool that is based on POM (project object model). It is used for project build, dependency, and documentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;JUnit&lt;/em&gt;&lt;/strong&gt; is an open-source unit testing framework for Java. It lets Java developers write and run repeatable tests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Amazon S3&lt;/em&gt;&lt;/strong&gt; (Simple Storage Service) is an object storage service that offers industry-leading scalability, data availability, security, and performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Launch an EC2 instance and Install Jenkins
&lt;/h2&gt;

&lt;p&gt;First, set up the Jenkins server to create a pipeline. Launch an EC2 instance and install Java and Jenkins on it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/palakbhawsar98/install-jenkins-in-ec2-instance-using-user-data-script-3neg"&gt;Follow this article to set up Jenkins server&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You need to install Maven as well to build the project. SSH into the instance and install Maven using the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install maven -y

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set JAVA_HOME and MAVEN_HOME using the commands below. To set the variables permanently, add them to the .bashrc file in your home directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64" &amp;gt;&amp;gt; ~/.bashrc

echo "export MAVEN_HOME=/usr/share/maven" &amp;gt;&amp;gt; ~/.bashrc

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check if the variables are exported correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tail -3 ~/.bashrc

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Install Plugins
&lt;/h2&gt;

&lt;p&gt;Install plugins to integrate Jenkins with GitHub, Maven, and AWS S3. Go to &lt;strong&gt;Manage Jenkins&lt;/strong&gt;, select &lt;strong&gt;Manage plugins&lt;/strong&gt;, search for the plugins below under &lt;strong&gt;Available plugins&lt;/strong&gt;, and choose &lt;strong&gt;Install without restart&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://plugins.jenkins.io/github" rel="noopener noreferrer"&gt;&lt;strong&gt;GitHub&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://plugins.jenkins.io/maven-plugin" rel="noopener noreferrer"&gt;&lt;strong&gt;Maven Integration&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://plugins.jenkins.io/s3" rel="noopener noreferrer"&gt;&lt;strong&gt;S3 publisher&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fif83c4ocrsthqpfh0zm8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fif83c4ocrsthqpfh0zm8.png" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Configure Java and Maven in Jenkins
&lt;/h2&gt;

&lt;p&gt;Go to &lt;strong&gt;Manage Jenkins&lt;/strong&gt;, select &lt;strong&gt;Global tool configuration&lt;/strong&gt;, and scroll down to add the JDK and Maven paths that we exported in the steps above, as shown below. Uncheck the &lt;strong&gt;Install automatically&lt;/strong&gt; checkbox and click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosbk79yswvh94iz60b40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosbk79yswvh94iz60b40.png" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfwt2mqoqiugmbh4paco.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfwt2mqoqiugmbh4paco.png" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Create S3 bucket to store artifacts
&lt;/h2&gt;

&lt;p&gt;Search for S3 in AWS services and click &lt;strong&gt;Create bucket&lt;/strong&gt;. Give the bucket a unique name and choose the region in which you want it to reside. Leave all other settings at their defaults and click &lt;strong&gt;Create bucket&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Go to the Jenkins dashboard, select &lt;strong&gt;Manage Jenkins&lt;/strong&gt;, then &lt;strong&gt;Configure System&lt;/strong&gt;. Scroll to the bottom to add an &lt;strong&gt;Amazon S3 profile&lt;/strong&gt;. Enter the S3 bucket name as the profile name, along with your access key and secret access key as shown below, and click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3z33clvvgzr9xrwx20f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3z33clvvgzr9xrwx20f.png" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Create Webhook
&lt;/h2&gt;

&lt;p&gt;A webhook triggers the Jenkins pipeline automatically whenever changes, such as commits, are pushed to the GitHub repository. Go to the Jenkins dashboard and copy its URL from the browser. Now go to your GitHub repository's settings and select &lt;strong&gt;Webhooks&lt;/strong&gt; in the left pane.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bl0fzxogyjoul62pbqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bl0fzxogyjoul62pbqt.png" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Add webhook&lt;/strong&gt; and paste the Jenkins URL into the &lt;strong&gt;Payload URL&lt;/strong&gt; field, appending &lt;strong&gt;/github-webhook/&lt;/strong&gt; to it, as shown below. Select the events that should trigger the pipeline; I have selected &lt;strong&gt;Just the push event&lt;/strong&gt;.&lt;/p&gt;
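&lt;p&gt;As a quick sanity check, the payload URL is simply the Jenkins base URL with &lt;strong&gt;/github-webhook/&lt;/strong&gt; appended. A minimal sketch of that construction (the host below is a made-up placeholder for your EC2 instance's public address):&lt;/p&gt;

```python
# Hypothetical Jenkins base URL; substitute your EC2 instance's public DNS or IP.
jenkins_url = "http://ec2-12-34-56-78.compute-1.amazonaws.com:8080"

# GitHub must be able to reach this exact path, including the trailing slash.
payload_url = jenkins_url.rstrip("/") + "/github-webhook/"
print(payload_url)
```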

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fw9nk4x2iwpp7hsjass.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fw9nk4x2iwpp7hsjass.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Create Jenkins pipeline
&lt;/h2&gt;

&lt;p&gt;Go to the Jenkins dashboard and click &lt;strong&gt;New item&lt;/strong&gt;. Enter an item name, select &lt;strong&gt;Freestyle project&lt;/strong&gt;, and click &lt;strong&gt;Ok&lt;/strong&gt;. Give a short description, select &lt;strong&gt;GitHub project&lt;/strong&gt;, and enter the URL of your &lt;strong&gt;GitHub project&lt;/strong&gt;. Scroll down, select &lt;strong&gt;Git&lt;/strong&gt; as source code management, and enter your repository URL. If your repository is private, add credentials. Enter your branch name under the &lt;strong&gt;Branches to build&lt;/strong&gt; section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw75et096njmsrgc57qht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw75et096njmsrgc57qht.png" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;GitHub hook trigger for GITScm polling&lt;/strong&gt; under build triggers. In &lt;strong&gt;Build Steps&lt;/strong&gt;, select Invoke top-level Maven targets. In Maven Version, select the MAVEN_HOME installation we configured earlier, and enter &lt;strong&gt;clean install test&lt;/strong&gt; in Goals.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;Post-build Actions&lt;/strong&gt;, select &lt;strong&gt;Publish artifacts to S3 Bucket&lt;/strong&gt;. Select your AWS profile name and, in the source, enter the artifact path that you want to store in the S3 bucket. Enter your S3 bucket's name as the destination bucket and select the region in which you created it. Keep all other settings at their defaults, click &lt;strong&gt;Apply&lt;/strong&gt;, and then &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj54o4i9r5v26he097dqh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj54o4i9r5v26he097dqh.png" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, click &lt;strong&gt;Build now&lt;/strong&gt; in the left panel, and you will see the pipeline run successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6l1auovsniy0ey7vkft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6l1auovsniy0ey7vkft.png" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check the artifact stored in the S3 bucket after the successful build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kwvs1dn7r977cg750tl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kwvs1dn7r977cg750tl.png" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations 🥳, the pipeline ran successfully and the artifacts got stored in S3.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6l7e28pawdulx4lr95w.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6l7e28pawdulx4lr95w.gif" width="480" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you face any issues, contact me on my socials&lt;/strong&gt; 👉 &lt;a href="https://linkfree.eddiehub.io/palakbhawsar98" rel="noopener noreferrer"&gt;&lt;strong&gt;Contact me&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>crypto</category>
      <category>web3</category>
      <category>blockchain</category>
      <category>offers</category>
    </item>
    <item>
      <title>Easily manage your Terraform Remote State file with Terraform Cloud</title>
      <dc:creator>Palak Bhawsar</dc:creator>
      <pubDate>Tue, 03 Jan 2023 10:50:26 +0000</pubDate>
      <link>https://dev.to/palakbhawsar98/easily-manage-your-terraform-remote-state-file-with-terraform-cloud-253m</link>
      <guid>https://dev.to/palakbhawsar98/easily-manage-your-terraform-remote-state-file-with-terraform-cloud-253m</guid>
      <description>&lt;p&gt;In this article, we will see how to store terraform remote state file in the terraform cloud. Also how to create Terraform cloud account and workspaces to manage remote state file. We are storing terraform state file in Terraform cloud to keep it safe as it contains sensitive information about the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Terraform?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform is an open-source infrastructure as code tool created by HashiCorp that lets you provision, build, change, and version cloud and on-prem resources. It lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What is Infrastructure as Code?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Infrastructure as code (IaC) is the practice of managing infrastructure with configuration files rather than through a graphical user interface. IaC allows you to build, change, and manage your infrastructure in a safe, consistent, and repeatable way by defining resource configurations that you can version, reuse, and share.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What is Terraform state file?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When we provision infrastructure by executing the &lt;strong&gt;terraform apply&lt;/strong&gt; command, Terraform creates a state file called &lt;strong&gt;terraform.tfstate&lt;/strong&gt;. This file records all the resources created by your configuration and maps them to real-world resources. The state file is sensitive because it contains details about the infrastructure you have created.&lt;/p&gt;
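&lt;p&gt;Since &lt;strong&gt;terraform.tfstate&lt;/strong&gt; is plain JSON, you can see exactly what it tracks. A minimal sketch that lists the resources recorded in a state file (the state content below is a hand-made stub with the same top-level shape, not real Terraform output):&lt;/p&gt;

```python
import json

# Hand-made stub; a real terraform.tfstate has the same top-level keys.
state = json.loads("""
{
  "version": 4,
  "resources": [
    {"type": "aws_instance", "name": "web"},
    {"type": "aws_s3_bucket", "name": "artifacts"}
  ]
}
""")

# Each entry maps a configuration block to a real-world resource.
for resource in state["resources"]:
    print(resource["type"] + "." + resource["name"])
```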

&lt;p&gt;&lt;em&gt;You should never push the Terraform state file to a version control system like GitHub. Store the terraform.tfstate file in a remote backend to keep it safe.&lt;/em&gt;&lt;/p&gt;
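&lt;p&gt;A common safeguard is a &lt;strong&gt;.gitignore&lt;/strong&gt; entry so that local state and the provider cache can never be committed by accident (a typical minimal set):&lt;/p&gt;

```gitignore
# Local state and its backups
*.tfstate
*.tfstate.*

# Provider binaries and module cache
.terraform/
```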

&lt;p&gt;&lt;strong&gt;What is Terraform cloud?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Terraform Cloud is HashiCorp's managed service offering. It eliminates the need for unnecessary tooling and documentation for practitioners, teams, and organizations to use Terraform in production. Terraform Cloud enables infrastructure automation for provisioning, compliance, and management of any cloud, data center, and service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backends supported by Terraform include:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Amazon S3&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Azure Storage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Google Cloud Storage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HashiCorp's Terraform Cloud and Terraform Enterprise&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, we are using Terraform Cloud as the backend to store the Terraform state file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS account and AWS Access key and Secret key created&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Terraform installed on your IDE&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS CLI installed and configured on your IDE&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic understanding of AWS services and Terraform&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step-1: Create Terraform cloud account&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Go to &lt;a href="https://app.terraform.io/"&gt;https://app.terraform.io/&lt;/a&gt; and sign up for a free account. After logging in, create an organization and a workspace. Choose the &lt;strong&gt;Start from scratch&lt;/strong&gt; workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CyBjtDTr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672135932668/3eb275fc-8ae9-4f42-88e7-e9b2f13025b0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CyBjtDTr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672135932668/3eb275fc-8ae9-4f42-88e7-e9b2f13025b0.png" alt="" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step-2: Create an organization&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Create a new organization, enter your organization name and email address as shown below, and click &lt;strong&gt;Create organization&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CK01cHfk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672136074608/6ad2304b-a9a4-48c5-acdd-14b5906fe9f2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CK01cHfk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672136074608/6ad2304b-a9a4-48c5-acdd-14b5906fe9f2.png" alt="" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step-3: Create a workspace&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Choose the workflow of your choice, here I am selecting a &lt;strong&gt;CLI-driven workflow.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lGlqSeXD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672136299365/bdfea010-02db-4cf7-be83-27f13e5494c9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lGlqSeXD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672136299365/bdfea010-02db-4cf7-be83-27f13e5494c9.png" alt="" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give a name and description to your workflow and click &lt;strong&gt;Create workspace.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WEyNjFhA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672136562734/60696683-da7d-4010-9f56-532c23f21fe0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WEyNjFhA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672136562734/60696683-da7d-4010-9f56-532c23f21fe0.png" alt="" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step-4: Clone repository&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Clone this &lt;a href="https://github.com/palakbhawsar98/Terraform/tree/main/terraform-remote-state-hands-on"&gt;code&lt;/a&gt;, add the backend block to the configuration file, and enter the organization name and workspace name that you created in the previous steps. This block ensures that the terraform.tfstate file is not created locally in your workspace but in the remote backend, i.e., Terraform Cloud.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "remote" {
    organization = "organization-name"
    workspaces {
      name = "terraform-backend-handson"
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Step-5: Add environment variables&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Go to the variables section in the left bar and click &lt;strong&gt;Add variable.&lt;/strong&gt; Here we can add variables that will be used by configuration files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NUIymsQX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672141509065/bd838896-484b-4088-9ea6-3d9708752bc4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NUIymsQX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672141509065/bd838896-484b-4088-9ea6-3d9708752bc4.png" alt="" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;Environment variable&lt;/strong&gt;, add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, don't forget to check the &lt;strong&gt;Sensitive&lt;/strong&gt; checkbox, and click &lt;strong&gt;Save variable&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AolhV4f0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672141605421/a23c3b1b-9096-4201-8943-1f01679cd65e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AolhV4f0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672141605421/a23c3b1b-9096-4201-8943-1f01679cd65e.png" alt="" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After adding the environment variables, the console will look like this. You can also add Terraform variables in this step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vUpFjXDe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672142456772/bca48902-0197-4631-9e9a-0dda2d28f6c9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vUpFjXDe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672142456772/bca48902-0197-4631-9e9a-0dda2d28f6c9.png" alt="" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step-6: Terraform Login&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Go to your project directory in the terminal, execute the &lt;strong&gt;terraform login&lt;/strong&gt; command, and enter yes. Terraform will open the browser; click &lt;strong&gt;Create API token&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xZ7BGz1t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672140160368/b60b8647-88dd-45bc-accb-f1ca62588d95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xZ7BGz1t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672140160368/b60b8647-88dd-45bc-accb-f1ca62588d95.png" alt="" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the API token and paste it into the terminal where we executed the &lt;strong&gt;terraform login&lt;/strong&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2COrPboc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672140240551/c5917542-28b2-4548-a9e2-42408e5a5820.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2COrPboc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672140240551/c5917542-28b2-4548-a9e2-42408e5a5820.png" alt="" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now run the terraform init, terraform plan, and terraform apply commands to provision the infrastructure. You will see the run start in the console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NfsRP8V5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672143729040/821de627-4832-4d6e-8df5-14d45b117a1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NfsRP8V5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672143729040/821de627-4832-4d6e-8df5-14d45b117a1e.png" alt="" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the triggered run succeeds, you will be able to see the resources created in the AWS console, as well as the state file in Terraform Cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Zk5k8-3v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672143778887/35371d1f-0386-4d4f-b5b4-a89100218c04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Zk5k8-3v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1672143778887/35371d1f-0386-4d4f-b5b4-a89100218c04.png" alt="" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations, the triggered CLI workflow ran successfully and the state file was created in Terraform Cloud.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you face any issues, contact me on my socials&lt;/strong&gt; 👉 &lt;a href="https://linkfree.eddiehub.io/palakbhawsar98"&gt;&lt;strong&gt;Contact me&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Terraform Cloud: Manage Terraform Remote State file</title>
      <dc:creator>Palak Bhawsar</dc:creator>
      <pubDate>Tue, 03 Jan 2023 10:50:26 +0000</pubDate>
      <link>https://dev.to/palakbhawsar98/terraform-cloud-manage-terraform-remote-state-file-5e6g</link>
      <guid>https://dev.to/palakbhawsar98/terraform-cloud-manage-terraform-remote-state-file-5e6g</guid>
      <description>&lt;p&gt;&lt;strong&gt;What is Terraform?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform is an open-source infrastructure as code tool created by HashiCorp that lets you provision, build, change, and version cloud and on-prem resources. It lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What is Infrastructure as Code?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Infrastructure as code (IaC) is the practice of managing infrastructure with configuration files rather than through a graphical user interface. IaC allows you to build, change, and manage your infrastructure in a safe, consistent, and repeatable way by defining resource configurations that you can version, reuse, and share.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What is Terraform state file?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When we provision infrastructure by executing the &lt;strong&gt;terraform apply&lt;/strong&gt; command, Terraform creates a state file called &lt;strong&gt;terraform.tfstate&lt;/strong&gt;. This file records all the resources created by your configuration and maps them to real-world resources. The state file is sensitive because it contains details about the infrastructure you have created. You should never push this file to a version control system like GitHub; store the terraform.tfstate file in a remote backend to keep it safe.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backends supported by Terraform include:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Amazon S3&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Azure Storage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Google Cloud Storage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HashiCorp's Terraform Cloud and Terraform Enterprise&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, we are using Terraform Cloud to store the Terraform state file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS account and AWS Access key and Secret key created&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Terraform installed on your IDE&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS CLI installed and configured on your IDE&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic understanding of AWS services and Terraform&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step-1: Create Terraform cloud account&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Go to &lt;a href="https://app.terraform.io/" rel="noopener noreferrer"&gt;https://app.terraform.io/&lt;/a&gt; and sign up for a free account. After logging in, create an organization and a workspace. Choose the &lt;strong&gt;Start from scratch&lt;/strong&gt; workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs212y4oa8e6x7tznsi7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs212y4oa8e6x7tznsi7h.png" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step-2: Create an organization&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Create a new organization, enter your organization name and email address as shown below, and click &lt;strong&gt;Create organization&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufu04mhg2b5ynfoflvde.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufu04mhg2b5ynfoflvde.png" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step-3: Create a workspace&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Choose the workflow of your choice, here I am selecting a &lt;strong&gt;CLI-driven workflow.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsodr0hc1f4y6lgun8csl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsodr0hc1f4y6lgun8csl.png" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give a name and description to your workflow and click &lt;strong&gt;Create workspace.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fya219pznx6iy1x3w3dv7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fya219pznx6iy1x3w3dv7.png" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step-4: Clone repository&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Clone this repository &lt;a href="https://github.com/palakbhawsar98/Terraform/tree/main/terraform-remote-state-hands-on" rel="noopener noreferrer"&gt;https://github.com/palakbhawsar98/Terraform/tree/main/terraform-remote-state-hands-on&lt;/a&gt;, add the backend block to the configuration file, and enter the organization name and workspace name that you created in the previous steps. This block ensures that the terraform.tfstate file is not created locally but in the remote backend, i.e., Terraform Cloud.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "remote" {
    organization = "organization-name"
    workspaces {
      name = "terraform-backend-handson"
    }
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Step-5: Add environment variables&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Go to the &lt;strong&gt;Variables&lt;/strong&gt; section in the left bar and click &lt;strong&gt;Add variable&lt;/strong&gt;. Here we can add variables that are used by the configuration files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04ngxzm435v0uxhfaadq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04ngxzm435v0uxhfaadq.png" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;Environment variable&lt;/strong&gt;, add &lt;strong&gt;AWS_ACCESS_KEY_ID&lt;/strong&gt; and &lt;strong&gt;AWS_SECRET_ACCESS_KEY&lt;/strong&gt;, don't forget to check the &lt;strong&gt;Sensitive&lt;/strong&gt; checkbox, and click &lt;strong&gt;Save variable&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkx848ge1h33llpja3oxu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkx848ge1h33llpja3oxu.png" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After adding the environment variables the console will look like this. You can also add Terraform variables in this step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhyceyokpyls3xvi1s5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhyceyokpyls3xvi1s5d.png" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step-6: Terraform Login&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Go to your project directory in the terminal, execute the &lt;strong&gt;terraform login&lt;/strong&gt; command, and enter yes when prompted. Terraform will open the browser. Click &lt;strong&gt;Create API token&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tyojsd20r18y9i45m9p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tyojsd20r18y9i45m9p.png" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the API token and paste it into the terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5adx7mtmz3d722g80kvk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5adx7mtmz3d722g80kvk.png" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now run the terraform init, terraform plan, and terraform apply commands to provision the infrastructure. In the Terraform Cloud console you can watch the run start.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbggjojunba14zi0lu68i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbggjojunba14zi0lu68i.png" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the triggered run succeeds, you will be able to see the resources created in the AWS console, as well as the state file in Terraform Cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxlfmyd0nek0fez6oevc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxlfmyd0nek0fez6oevc.png" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>emptystring</category>
    </item>
    <item>
      <title>Terraform: Create VPC, Subnets, and EC2 instances in multiple availability zones</title>
      <dc:creator>Palak Bhawsar</dc:creator>
      <pubDate>Sat, 17 Dec 2022 10:37:03 +0000</pubDate>
      <link>https://dev.to/palakbhawsar98/terraform-create-vpc-subnets-and-ec2-instances-in-multiple-availability-zones-2ao5</link>
      <guid>https://dev.to/palakbhawsar98/terraform-create-vpc-subnets-and-ec2-instances-in-multiple-availability-zones-2ao5</guid>
      <description>&lt;p&gt;In this article, I will demonstrate how to create VPC, Subnets, EC2 instances, Internet gateway, NAT gateway, and Security groups using Terraform in two availability zones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XxqE0YMr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1670764593209/inbTIQI6Z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XxqE0YMr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1670764593209/inbTIQI6Z.png" alt="" width="621" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisite&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS account and AWS Access key and Secret key created&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Terraform installed on your IDE&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS CLI installed and configured on your IDE&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic understanding of AWS services and Terraform&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Objective&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Choose a region in which you want your VPC to reside and availability zones where you want to create public and private subnets for high availability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Decide the CIDR blocks range for your VPC and Subnets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create public and private subnets in each availability zone.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an internet gateway to allow communication between your VPC and the internet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an EC2 instance in each public subnet in both the availability zones and create AWS key pair to SSH into your instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a route table for the public and private subnets and associate the route table with subnets to control where network traffic is directed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a NAT gateway to enable private subnets to connect to services outside your VPC. A NAT gateway must be in a public subnet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, create a VPC security group and open port 80 to allow HTTP traffic from anywhere and open port 22 to SSH into the instances.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Code Repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use &lt;a href="https://github.com/palakbhawsar98/Terraform/tree/main/terraform-vpc-hands-on"&gt;GitHub&lt;/a&gt; to find providers.tf, variables.tf, and outputs.tf files.&lt;/p&gt;

&lt;p&gt;Let's get started with the configuration of the project&lt;/p&gt;

&lt;h1&gt;
  
  
  Create VPC
&lt;/h1&gt;

&lt;p&gt;We are creating a VPC in the us-east-1 region and attaching an internet gateway to it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "vpc" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "my-vpc"
  }
}

resource "aws_internet_gateway" "internet_gateway" {
  vpc_id = aws_vpc.vpc.id
  tags = {
    Name = "internetGW"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Create public and private subnets
&lt;/h1&gt;

&lt;p&gt;Creating one public and one private subnet in both us-east-1a and us-east-1b zones.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_subnet" "vpc_public_subnet" {
  vpc_id = aws_vpc.vpc.id
  count = length(var.subnets_count)
  availability_zone = element(var.availability_zone, count.index)
  cidr_block = "10.0.${count.index}.0/24"
  map_public_ip_on_launch = true

  tags = {
    Name = "pub-sub-${element(var.availability_zone, count.index)}"
  }
}

resource "aws_subnet" "vpc_private_subnet" {
  count = length(var.subnets_count)
  availability_zone = element(var.availability_zone, count.index)
  cidr_block = "10.0.${count.index + 2}.0/24"
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name = "pri-sub-${element(var.availability_zone, count.index)}"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
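
&lt;p&gt;The &lt;strong&gt;count&lt;/strong&gt; and &lt;strong&gt;availability_zone&lt;/strong&gt; expressions above rely on variables declared in the repository's variables.tf. A sketch of what those declarations could look like (the exact types and values in the linked repository may differ):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "subnets_count" {
  description = "One entry per public/private subnet pair to create"
  type        = list(string)
  default     = ["1", "2"]
}

variable "availability_zone" {
  description = "Availability zones for the subnets"
  type        = list(string)
  default     = ["us-east-1a", "us-east-1b"]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;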



&lt;h1&gt;
  
  
  Create a route table and associate it with the public subnet
&lt;/h1&gt;

&lt;p&gt;A route table contains a set of rules that are used to determine where network traffic is directed. Associate a public subnet with the default route (0.0.0.0/0) pointing to an internet gateway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table" "public_route_table" {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.internet_gateway.id
  }
  tags = {
    Name = "public-route-tbl"
  }
}

resource "aws_route_table_association" "public_route_table_association" {
  count = length(var.subnets_count)
  subnet_id = element(aws_subnet.vpc_public_subnet.*.id, count.index)
  route_table_id = aws_route_table.public_route_table.id
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Create a NAT gateway and associate it with Elastic IP
&lt;/h1&gt;

&lt;p&gt;Create a public NAT gateway in a public subnet and associate it with an elastic IP address to route traffic from the NAT gateway to the Internet gateway for the VPC.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_eip" "elasticIP" {
  count = length(var.subnets_count)
  vpc = true
}

resource "aws_nat_gateway" "nat_gateway" {
  count = length(var.subnets_count)
  allocation_id = element(aws_eip.elasticIP.*.id, count.index)
  subnet_id = element(aws_subnet.vpc_public_subnet.*.id, count.index)

  tags = {
    Name = "nat-GTW-${count.index}"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
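
&lt;p&gt;The AWS provider documentation recommends declaring that a NAT gateway depends on the internet gateway of its VPC, since the gateway cannot route traffic until the internet gateway is attached. An optional addition to the resource above, shown here as a fragment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_nat_gateway" "nat_gateway" {
  # ... arguments as above ...

  # Ensure the internet gateway is attached before the NAT gateway is created
  depends_on = [aws_internet_gateway.internet_gateway]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;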



&lt;h1&gt;
  
  
  Create a route table and associate it with the private subnet
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table" "private_route_table" {
  count = length(var.subnets_count)
  vpc_id = aws_vpc.vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat_gateway[count.index].id
  }

  tags = {
    Name = "private-route-tbl"
  }
}

resource "aws_route_table_association" "private_route_table_association" {
  count = length(var.subnets_count)
  subnet_id = element(aws_subnet.vpc_private_subnet.*.id, count.index)
  route_table_id = element(aws_route_table.private_route_table.*.id,
  count.index)
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Create security group
&lt;/h1&gt;

&lt;p&gt;For inbound connections, open port 80 to allow HTTP traffic from anywhere and port 22 to SSH into the instances; for outbound connections, open all ports.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "vpc_sg" {
  name = "vpc_sg"
  description = "Security group for vpc"
  vpc_id = aws_vpc.vpc.id

  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "vpc-sg"
  }

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
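
&lt;p&gt;Opening port 22 to 0.0.0.0/0 is convenient for a hands-on lab, but for anything longer-lived you may want to restrict SSH to your own address. A sketch of the tighter ingress rule (the address below is a documentation placeholder; replace it with your IP):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.10/32"] # your workstation's public IP
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;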



&lt;h1&gt;
  
  
  Create EC2 instances in public subnets
&lt;/h1&gt;

&lt;p&gt;Create EC2 instances with a user-data script that installs the Apache server and serves a static webpage. Also, create an AWS key pair to SSH into the instances.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "tls_private_key" "key" {
  algorithm = "RSA"
  rsa_bits = 4096
}

resource "local_file" "private_rsa_key" {
  content = tls_private_key.key.private_key_pem
  filename = "private_rsa_key"
}

resource "aws_key_pair" "public_rsa_key" {
  key_name = "public_rsa_key"
  public_key = tls_private_key.key.public_key_openssh
}

resource "aws_instance" "my_app_server" {
  ami = var.instance_ami
  instance_type = var.instance_size
  key_name = aws_key_pair.public_rsa_key.key_name
  count = length(var.subnets_count)
  subnet_id = element(aws_subnet.vpc_public_subnet.*.id, count.index)
  vpc_security_group_ids = [aws_security_group.vpc_sg.id]
  associate_public_ip_address = true

  user_data = &amp;lt;&amp;lt;-EOF
  #!/bin/bash
  sudo apt update -y
  sudo apt install apache2 -y
  sudo systemctl start apache2
  sudo systemctl enable apache2
  sudo apt install git -y
  git clone https://github.com/palakbhawsar98/FirstWebsite.git
  cd FirstWebsite
  sudo cp index.html /var/www/html/
  EOF

  tags = {
    Name = "my_app_server-${count.index}"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
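
&lt;p&gt;To avoid looking the addresses up in the console, you could optionally expose the instance IPs as outputs (a sketch; the outputs.tf in the linked repository may define these differently):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "public_ips" {
  description = "Public IPv4 addresses of the app servers"
  value       = aws_instance.my_app_server.*.public_ip
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;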



&lt;p&gt;We are ready to deploy all our changes to AWS. Perform the below commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;terraform init&lt;/strong&gt; to initialize the working directory and download all the plugins for providers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;terraform fmt&lt;/strong&gt; to rewrite Terraform configuration files to a canonical format and style.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;terraform validate&lt;/strong&gt; to check that our code is error-free.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;terraform plan&lt;/strong&gt; to create the execution plan for the resources we are going to create in AWS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;terraform apply&lt;/strong&gt; to execute the actions proposed in a terraform plan and to deploy your infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can see the resources created in the AWS Console. Copy the public IPv4 address and open it in the browser on port 80, which we opened for HTTP connections.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G7NRTrrH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1671212159651/fNhEAAsfb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G7NRTrrH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1671212159651/fNhEAAsfb.png" alt="" width="880" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R9Ac0Of1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1671212062421/RkGIJnY5-.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R9Ac0Of1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1671212062421/RkGIJnY5-.png" alt="" width="880" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can access the static HTML webpage using the public IPv4 address.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qcmKDtGR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1671212024285/iu-X9i1_g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qcmKDtGR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1671212024285/iu-X9i1_g.png" alt="" width="880" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want, you can destroy the infrastructure we just created using the &lt;strong&gt;terraform destroy&lt;/strong&gt; command.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Launch an EC2 Instance using Terraform</title>
      <dc:creator>Palak Bhawsar</dc:creator>
      <pubDate>Mon, 31 Oct 2022 10:23:50 +0000</pubDate>
      <link>https://dev.to/palakbhawsar98/launch-an-ec2-instance-using-terraform-5gje</link>
      <guid>https://dev.to/palakbhawsar98/launch-an-ec2-instance-using-terraform-5gje</guid>
      <description>&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt; is an open-source infrastructure as code software tool created by HashiCorp. It enables users to define and provision infrastructure using a high-level configuration language known as HCL (Hashicorp Configuration Language). Terraform is an IAC tool, used primarily by DevOps teams to automate various infrastructure tasks. The provisioning of cloud resources, for instance, is one of the main use cases of Terraform. It's a cloud-agnostic, open-source provisioning tool written in the Go language and created by HashiCorp.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IaC, or Infrastructure as Code,&lt;/strong&gt; enables us to manage infrastructure in the form of code. Using Terraform we can automate the process of provisioning and destroying infrastructure.&lt;/p&gt;

&lt;p&gt;Let's get started. . .&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Download Terraform on a Linux machine
&lt;/h2&gt;

&lt;p&gt;Open your Linux terminal and run the following commands to Install Terraform.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y gnupg software-properties-common curl
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update
sudo apt-get install terraform

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the installed version of Terraform using the below command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform version

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Create AWS User with permission
&lt;/h2&gt;

&lt;p&gt;Create a user and give it permission to interact with AWS from the local machine. Log in to the AWS account and in services search for &lt;strong&gt;IAM&lt;/strong&gt;. In the side pane select &lt;strong&gt;Users&lt;/strong&gt;, click &lt;strong&gt;Add users&lt;/strong&gt;, enter the name of the user, check the box &lt;strong&gt;Access key - Programmatic access&lt;/strong&gt;, and click &lt;strong&gt;Next: Permissions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QvntB-Na--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667148417913/eqUY85wcp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QvntB-Na--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667148417913/eqUY85wcp.png" alt="image.png" width="880" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;Attach existing policies directly&lt;/strong&gt; from the top menu and check &lt;strong&gt;AmazonEC2FullAccess&lt;/strong&gt; and click &lt;strong&gt;Next: Tags&lt;/strong&gt; , &lt;strong&gt;Review&lt;/strong&gt; , and &lt;strong&gt;Add user&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wT8oEnO9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667148586434/Jpn3w9Zk7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wT8oEnO9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667148586434/Jpn3w9Zk7.png" alt="image.png" width="880" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the Access key ID and Secret access key and keep them safe for later use, or download the CSV file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4P0iOmMg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667148808755/vDaEnpZML.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4P0iOmMg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667148808755/vDaEnpZML.png" alt="image.png" width="880" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Configure AWS keys
&lt;/h2&gt;

&lt;p&gt;Install the AWS CLI and start its configuration using the below commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install awscli


aws configure

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enter the region and access key details when prompted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWS Access Key ID :
AWS Secret Access key:
Default region: us-east-1
Default Output: json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also configure a profile to store the access key and secret access key. The configuration files are located under the ~/.aws/ directory.&lt;/p&gt;

&lt;p&gt;Edit the file, enter the access key and secret access key, and give the profile a name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[default]
aws_access_key_id = &amp;lt;ACCESS_KEY_ID&amp;gt;
aws_secret_access_key = &amp;lt;SECRET_ACCESS_KEY&amp;gt;
region = us-east-1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Create terraform configuration file
&lt;/h2&gt;

&lt;p&gt;A Terraform configuration is a complete document in the Terraform language that tells Terraform how to manage a given collection of infrastructure. A configuration can consist of multiple files and directories. Create a configuration file with a .tf extension in any editor or IDE and save the file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
   region = "us-east-1"
   }

resource "aws_instance" "my_ec2_instance" {
  ami = "ami-08c40ec9ead489470"
  instance_type = "t2.micro"

  tags = {
    Name = "FirstEC2Instnace"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
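
&lt;p&gt;The hard-coded AMI ID above is tied to one region. As an optional refinement, you could look the AMI up with a data source instead (a sketch; the filter assumes a Canonical Ubuntu 22.04 image and may need adjusting):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

# then reference it in the instance: ami = data.aws_ami.ubuntu.id

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;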



&lt;p&gt;To format the configuration file run the below command in the terminal in the same directory where the .tf file is present.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform fmt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initialize Terraform using the below command; it's analogous to the &lt;strong&gt;git init&lt;/strong&gt; command in Git. This command initializes the working directory containing Terraform configuration files and installs any required plugins.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check that the configuration file is error-free, run the following command after terraform init:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform validate

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the below command, which lets you preview the actions Terraform would take to modify your infrastructure, or save a speculative plan which you can apply later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, run the below command; it's analogous to &lt;strong&gt;git push&lt;/strong&gt;. This command executes the actions proposed in a terraform plan and is used to deploy your infrastructure. Typically, apply should be run after terraform init and terraform plan. Enter yes when prompted "Enter a value:".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see the instance running in your AWS Console. You will also see that a terraform.tfstate file is created after running the &lt;strong&gt;terraform apply&lt;/strong&gt; command. Terraform uses this state to map real-world resources to your configuration, keep track of metadata, and improve performance for large infrastructures. The state is stored by default in a local file named "terraform.tfstate", but it can also be stored remotely, which works better in a team environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Destroy infrastructure
&lt;/h2&gt;

&lt;p&gt;To destroy the resources you have just created, run the below command. The terraform destroy command terminates resources managed by your Terraform project. This command is the inverse of terraform apply in that it terminates all the resources specified in your Terraform state. It does not destroy resources running elsewhere that are not managed by the current Terraform project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
  </channel>
</rss>
