<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: VENKATA SRI HARI</title>
    <description>The latest articles on DEV Community by VENKATA SRI HARI (@venkatasrihari).</description>
    <link>https://dev.to/venkatasrihari</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1851711%2F3153880a-3748-4025-ab2f-16cfe1908224.png</url>
      <title>DEV Community: VENKATA SRI HARI</title>
      <link>https://dev.to/venkatasrihari</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/venkatasrihari"/>
    <language>en</language>
    <item>
      <title>Cut AWS EC2 Costs with Python: Automate Cost Optimization Using AWS Lambda</title>
      <dc:creator>VENKATA SRI HARI</dc:creator>
      <pubDate>Mon, 17 Feb 2025 06:08:20 +0000</pubDate>
      <link>https://dev.to/venkatasrihari/cut-aws-ec2-costs-with-python-automate-cost-optimization-using-aws-lambda-45oi</link>
      <guid>https://dev.to/venkatasrihari/cut-aws-ec2-costs-with-python-automate-cost-optimization-using-aws-lambda-45oi</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Managing AWS EC2 costs efficiently is crucial for DevOps and cloud engineers. In this guide, we’ll explore how to automate EC2 instance optimization using AWS Lambda and Python (Boto3). You’ll learn how to:&lt;br&gt;
✅ Automatically stop and start instances based on business hours&lt;br&gt;
✅ Monitor CPU utilization and shut down underutilized instances&lt;br&gt;
✅ Use tags and CloudWatch metrics to identify non-essential resources&lt;br&gt;
✅ Schedule the Lambda function using EventBridge (CloudWatch Events)&lt;br&gt;
✅ Improve cost efficiency while ensuring service availability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jnq7bpr65k1kaof9007.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jnq7bpr65k1kaof9007.png" alt="Image description" width="789" height="785"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With hands-on examples and best practices, this article will help you reduce EC2 costs without manual intervention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lambda Use Case&lt;br&gt;
Create an IAM Role for Lambda&lt;br&gt;
Write the Python Code for Lambda&lt;br&gt;
Schedule the Lambda Function&lt;br&gt;
Modify to Stop EC2 Instances Based on CPU Usage&lt;br&gt;
Final Steps&lt;br&gt;
Conclusion.&lt;/strong&gt;&lt;br&gt;
**&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Lambda Use Case:&lt;/strong&gt;&lt;br&gt;
Stop idle EC2 instances outside of business hours (e.g., from 8 PM to 8 AM).&lt;br&gt;
Start instances automatically during working hours.&lt;br&gt;
Monitor CPU usage and shut down underutilized instances.&lt;br&gt;
Use tag-based filtering to exclude critical instances.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;2. Create an IAM Role for Lambda&lt;/strong&gt;&lt;br&gt;
Your Lambda function needs EC2 permissions to start/stop instances.&lt;br&gt;
Create an IAM Role with the following permissions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Version: The policy language version.&lt;br&gt;
Statement: The array of permissions statements.&lt;br&gt;
Effect: This policy allows the listed actions.&lt;br&gt;
Action: The specific EC2 actions allowed by this policy (DescribeInstances, StartInstances, StopInstances).&lt;br&gt;
Resource: The resource scope for the actions (in this case, all resources).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Write the Python Code for Lambda&lt;br&gt;
Auto-Stop EC2 Instances Based on Time&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import datetime

# Initialize EC2 client
ec2 = boto3.client('ec2')

# Define business hours (e.g., 8 AM - 8 PM)
START_HOUR = 8
STOP_HOUR = 20

def lambda_handler(event, context):
    # EventBridge schedules run on UTC, so compare against UTC time
    now = datetime.datetime.now(datetime.timezone.utc)
    current_hour = now.hour

    # Fetch all running instances with the tag `AutoStop=Yes`
    instances = ec2.describe_instances(Filters=[
        {'Name': 'instance-state-name', 'Values': ['running']},
        {'Name': 'tag:AutoStop', 'Values': ['Yes']}
    ])

    instance_ids = [i['InstanceId'] for r in instances['Reservations'] for i in r['Instances']]

    if current_hour &amp;gt;= STOP_HOUR or current_hour &amp;lt; START_HOUR:
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)
            print(f"Stopped instances: {instance_ids}")
    else:
        print("Business hours - no action needed")

    return {
        'statusCode': 200,
        'body': 'Lambda execution completed'
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How It Works&lt;/strong&gt;&lt;br&gt;
The function runs on a schedule (an EventBridge/CloudWatch Events trigger).&lt;br&gt;
It checks whether the current time is outside business hours (8 AM to 8 PM).&lt;br&gt;
If running instances carry the tag AutoStop=Yes, they are stopped.&lt;/p&gt;
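&lt;p&gt;The hour check above can be isolated into a small pure function (same 8 AM to 8 PM defaults), which keeps the handler thin and makes the schedule logic easy to unit test:&lt;/p&gt;

```python
START_HOUR = 8   # business day begins (inclusive)
STOP_HOUR = 20   # business day ends (exclusive)

def outside_business_hours(hour, start=START_HOUR, stop=STOP_HOUR):
    """Return True when AutoStop=Yes instances should be stopped."""
    return hour not in range(start, stop)

print(outside_business_hours(21), outside_business_hours(9))  # → True False
```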

&lt;p&gt;&lt;strong&gt;4. Schedule the Lambda Function&lt;/strong&gt;&lt;br&gt;
Use Amazon EventBridge (CloudWatch Rule) to trigger this Lambda:&lt;/p&gt;

&lt;p&gt;Cron Expression: cron(0 20 * * ? *) → Runs at 8 PM UTC every day.&lt;br&gt;
Separate Function: Create a separate Lambda function to start instances at 8 AM, ensuring they are available during working hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Modify to Stop EC2 Instances Based on CPU Usage&lt;/strong&gt;&lt;br&gt;
If you want to stop EC2 instances with low CPU utilization, modify the function to check CloudWatch CPU metrics:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import datetime
ec2 = boto3.client('ec2')
cloudwatch = boto3.client('cloudwatch')

def get_cpu_utilization(instance_id):
    response = cloudwatch.get_metric_statistics(
        Namespace='AWS/EC2',
        MetricName='CPUUtilization',
        Dimensions=[{'Name': 'InstanceId', 'Value': instance_id}],
        StartTime=datetime.datetime.utcnow() - datetime.timedelta(minutes=30),
        EndTime=datetime.datetime.utcnow(),
        Period=300,
        Statistics=['Average']
    )
    # Datapoints are not guaranteed to be ordered; use the most recent one
    datapoints = sorted(response['Datapoints'], key=lambda d: d['Timestamp'])
    return datapoints[-1]['Average'] if datapoints else 0

def lambda_handler(event, context):
    instances = ec2.describe_instances(Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
    for res in instances['Reservations']:
        for inst in res['Instances']:
            instance_id = inst['InstanceId']
            cpu_usage = get_cpu_utilization(instance_id)

            if cpu_usage &amp;lt; 10:  # Stop instances with &amp;lt;10% CPU utilization
                ec2.stop_instances(InstanceIds=[instance_id])
                print(f"Stopped instance {instance_id} due to low CPU usage")

    return {'statusCode': 200, 'body': 'Lambda executed successfully'}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
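&lt;p&gt;The requirements above also called for excluding critical instances via tags. As a sketch (the Critical tag name is an assumption, not part of the original function), a pure helper over the describe_instances response shape can filter those out before calling stop_instances:&lt;/p&gt;

```python
def stoppable_instance_ids(reservations, exclude_tag="Critical", exclude_value="Yes"):
    """Collect instance IDs, skipping any instance carrying the exclusion tag."""
    ids = []
    for res in reservations:
        for inst in res.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if tags.get(exclude_tag) == exclude_value:
                continue
            ids.append(inst["InstanceId"])
    return ids

# Example with a fake describe_instances response:
fake = [{"Instances": [
    {"InstanceId": "i-aaa", "Tags": [{"Key": "Critical", "Value": "Yes"}]},
    {"InstanceId": "i-bbb", "Tags": []},
]}]
print(stoppable_instance_ids(fake))  # → ['i-bbb']
```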



&lt;p&gt;&lt;strong&gt;6. Final Steps&lt;/strong&gt;&lt;br&gt;
Deploy the Lambda function in AWS Lambda.&lt;br&gt;
Assign the IAM Role created earlier.&lt;br&gt;
Set an EventBridge schedule (e.g., every hour) to trigger it.&lt;br&gt;
Tag instances with AutoStop=Yes to include them in automation.&lt;br&gt;
This Lambda function helps reduce EC2 costs by shutting down idle or low-utilization instances while keeping critical workloads running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
Automating AWS EC2 cost optimization with Python and AWS Lambda is a smart way to reduce cloud expenses while maintaining operational efficiency. By implementing automated start/stop schedules, monitoring CPU utilization, and leveraging tags, you can ensure that only necessary instances are running — saving costs without compromising performance.&lt;/p&gt;

&lt;p&gt;By integrating CloudWatch, EventBridge, and Boto3, you create a scalable and hands-free solution that continuously optimizes your cloud infrastructure. This approach not only enhances cost efficiency but also aligns with DevOps best practices for cloud automation.&lt;/p&gt;

&lt;p&gt;Start implementing cost-saving Lambda functions today, and take control of your AWS billing! 🚀💡&lt;/p&gt;


&lt;p&gt;You’re welcome! Have a great time ahead! Enjoy your day!&lt;/p&gt;

&lt;p&gt;Please connect with me if you have any doubts.&lt;/p&gt;

&lt;p&gt;Mail: &lt;a href="mailto:sriharimalapati6@gmail.com"&gt;sriharimalapati6@gmail.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="http://www.linkedin.com/in/" rel="noopener noreferrer"&gt;www.linkedin.com/in/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/Consultantsrihari" rel="noopener noreferrer"&gt;https://github.com/Consultantsrihari&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Medium: Sriharimalapati — Medium&lt;/p&gt;

&lt;p&gt;Thanks for reading! ~ Sri Hari&lt;br&gt;
Python&lt;br&gt;
AWS Lambda&lt;br&gt;
AWS&lt;br&gt;
DevOps&lt;br&gt;
Automation&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Mastering AWS DevOps: Automation &amp; Scripting with Python</title>
      <dc:creator>VENKATA SRI HARI</dc:creator>
      <pubDate>Thu, 13 Feb 2025 07:09:10 +0000</pubDate>
      <link>https://dev.to/venkatasrihari/mastering-aws-devops-automation-scripting-with-python-1jgm</link>
      <guid>https://dev.to/venkatasrihari/mastering-aws-devops-automation-scripting-with-python-1jgm</guid>
      <description>&lt;p&gt;Unlock the power of AWS DevOps with Python scripting! In this we’ll explore how to automate cloud infrastructure, streamline CI/CD pipelines, and optimize deployments using Python. From AWS SDK (Boto3) to scripting EC2, S3, Lambda, and IAM automation, this article covers real-world examples and best practices for DevOps engineers. Whether you’re automating deployments, monitoring resources, or enhancing security, this guide has you covered!&lt;/p&gt;

&lt;p&gt;💡 Key Takeaways:&lt;br&gt;
✅ Automate AWS infrastructure with Python (Boto3)&lt;br&gt;
✅ Integrate Python into CI/CD pipelines&lt;br&gt;
✅ Optimize AWS services with automation scripts&lt;br&gt;
✅ Best practices for DevOps scripting&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6qlqlstmp3aghqy6rts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6qlqlstmp3aghqy6rts.png" alt="Image description" width="800" height="788"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s dive into AWS DevOps automation!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is AWS DevOps, and how does Python fit into it?&lt;/strong&gt;&lt;br&gt;
AWS DevOps is a set of practices that automate and streamline software development, deployment, and infrastructure management on AWS. Python plays a crucial role by enabling automation through scripts using Boto3 (the AWS SDK for Python), infrastructure-as-code (IaC) tools, and integration with CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q1: How can Python be used to automate EC2 instance management?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: Python can automate EC2 operations such as instance creation, termination, and status checks.&lt;/p&gt;

&lt;p&gt;Example: Launching an EC2 instance using Boto3&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

ec2 = boto3.resource('ec2')
instance = ec2.create_instances(
    ImageId='ami-0abcdef1234567890',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
    KeyName='my-key'
)
print("EC2 Instance Created:", instance[0].id)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Q2: How do you use Boto3 to list all EC2 instances?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: Boto3 is the AWS SDK for Python. You can list all EC2 instances using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

ec2 = boto3.client('ec2')

response = ec2.describe_instances()
for reservation in response['Reservations']:
    for instance in reservation['Instances']:
        print(f"Instance ID: {instance['InstanceId']}, State: {instance['State']['Name']}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
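&lt;p&gt;Note that describe_instances returns results in pages when an account has many instances; Boto3 provides paginators for this, and the NextToken loop they implement looks roughly like the sketch below (shown with a stubbed page fetcher so it runs without AWS credentials):&lt;/p&gt;

```python
def iter_all_pages(fetch_page):
    """Yield items from every page of a NextToken-style paginated API.

    fetch_page(token) must return a dict with an 'Items' list and an
    optional 'NextToken' key, mimicking the Boto3 response shape.
    """
    token = None
    while True:
        page = fetch_page(token)
        for item in page.get("Items", []):
            yield item
        token = page.get("NextToken")
        if not token:
            break

# Stubbed two-page API for illustration:
pages = {None: {"Items": [1, 2], "NextToken": "t1"}, "t1": {"Items": [3]}}
print(list(iter_all_pages(lambda t: pages[t])))  # → [1, 2, 3]
```

&lt;p&gt;In real code, prefer ec2.get_paginator('describe_instances'), which handles this loop for you.&lt;/p&gt;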



&lt;p&gt;&lt;strong&gt;Q3: How do you automate S3 bucket creation using Python?&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

s3 = boto3.client('s3')
bucket_name = "my-unique-bucket-12345"

s3.create_bucket(Bucket=bucket_name)
print(f"Bucket {bucket_name} created successfully!")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
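&lt;p&gt;One caveat: outside us-east-1, create_bucket also requires a CreateBucketConfiguration with a LocationConstraint. A small pure-Python helper that builds the right keyword arguments:&lt;/p&gt;

```python
def create_bucket_kwargs(bucket_name, region):
    """Build create_bucket kwargs; us-east-1 must omit the location constraint."""
    kwargs = {"Bucket": bucket_name}
    if region != "us-east-1":
        kwargs["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return kwargs

print(create_bucket_kwargs("my-unique-bucket-12345", "eu-west-1"))
```

&lt;p&gt;You would then call s3.create_bucket(**create_bucket_kwargs(bucket_name, region)).&lt;/p&gt;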



&lt;p&gt;&lt;strong&gt;Q4: How do you trigger a Lambda function using Python?&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

lambda_client = boto3.client('lambda')
response = lambda_client.invoke(
    FunctionName='myLambdaFunction',
    InvocationType='Event'
)

print("Lambda triggered:", response)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Q5: How do you use Python to stop EC2 instances based on a tag?&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import boto3

ec2 = boto3.client('ec2')

instances = ec2.describe_instances(Filters=[{'Name': 'tag:Environment', 'Values': ['dev']}])

for reservation in instances['Reservations']:
    for instance in reservation['Instances']:
        ec2.stop_instances(InstanceIds=[instance['InstanceId']])
        print(f"Stopping instance: {instance['InstanceId']}")

# This stops all EC2 instances with the tag Environment=dev.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Q6: How do you automate IAM user creation with Python?&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

iam = boto3.client('iam')

response = iam.create_user(UserName='newuser')
print(f"User created: {response['User']['UserName']}")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a new IAM user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q7: How do you automate Route 53 DNS record creation using Python?&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

route53 = boto3.client('route53')

response = route53.change_resource_record_sets(
    HostedZoneId='ZXXXXXXXXXXXXX',
    ChangeBatch={
        'Changes': [{
            'Action': 'CREATE',
            'ResourceRecordSet': {
                'Name': 'example.mydomain.com',
                'Type': 'A',
                'TTL': 300,
                'ResourceRecords': [{'Value': '192.168.1.1'}]
            }
        }]
    }
)

print("DNS record created:", response)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates an A record in Route 53.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q8: How do you use Python to fetch AWS CloudWatch logs?&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

logs = boto3.client('logs')

response = logs.describe_log_groups()
for log_group in response['logGroups']:
    print(f"Log Group: {log_group['logGroupName']}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This fetches all available CloudWatch log groups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q9: How do you automate AMI creation using Python?&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

ec2 = boto3.client('ec2')
instance_id = "i-0abcd1234efgh5678"

response = ec2.create_image(
    InstanceId=instance_id,
    Name="MyBackupAMI",
    NoReboot=True
)

print("AMI ID:", response['ImageId'])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates an Amazon Machine Image (AMI) from an EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q10: How do you upload a file to an S3 bucket using Python?&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

s3 = boto3.client('s3')

s3.upload_file("local_file.txt", "my-bucket", "uploaded_file.txt")
print("File uploaded successfully!")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This uploads local_file.txt to my-bucket.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q11: How do you automate AWS Lambda deployment using Python?&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

lambda_client = boto3.client('lambda')

with open('lambda_function.zip', 'rb') as f:
    zip_data = f.read()

response = lambda_client.update_function_code(
    FunctionName='myLambdaFunction',
    ZipFile=zip_data
)

print("Lambda function updated:", response)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This updates an AWS Lambda function with new code.&lt;/p&gt;
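&lt;p&gt;The update_function_code call above expects a zip archive. As a sketch using only the standard library (the file names here are placeholders), the deployment package can be built in memory with zipfile:&lt;/p&gt;

```python
import io
import zipfile

def build_lambda_package(sources):
    """Zip {archive_name: code_text} pairs into an in-memory deployment package."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in sources.items():
            zf.writestr(name, data)
    return buf.getvalue()

# Placeholder handler source for illustration:
zip_data = build_lambda_package({
    "lambda_function.py": "def lambda_handler(event, context):\n    return 'ok'\n"
})
print(len(zip_data))  # size in bytes of the package
```

&lt;p&gt;The resulting bytes can be passed directly as ZipFile=zip_data to update_function_code, avoiding a temporary file on disk.&lt;/p&gt;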

&lt;p&gt;&lt;strong&gt;Q12: How do you monitor AWS services using Python and CloudWatch?&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

cloudwatch = boto3.client('cloudwatch')
response = cloudwatch.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-1234567890abcdef0'}],
    StartTime='2024-02-01T00:00:00Z',
    EndTime='2024-02-02T00:00:00Z',
    Period=3600,
    Statistics=['Average']
)
print("CPU Utilization Data:", response)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Q13: How do you integrate Python with CI/CD pipelines for AWS?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Python can be used in Jenkins, GitHub Actions, and AWS CodePipeline for tasks like:&lt;br&gt;
🔹 Running automated tests with PyTest&lt;br&gt;
🔹 Deploying infrastructure using Terraform with Python scripts&lt;br&gt;
🔹 Triggering AWS Lambda functions for post-deployment automation&lt;/p&gt;

&lt;p&gt;Example: A Python script to trigger an AWS Lambda function during deployment&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

lambda_client = boto3.client('lambda')
response = lambda_client.invoke(
    FunctionName='myLambdaFunction',
    InvocationType='RequestResponse'
)
print("Lambda Function Triggered:", response)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt; Mastering AWS DevOps Automation with Python&lt;br&gt;
Automating AWS with Python scripting is a game-changer for DevOps engineers, enabling efficient infrastructure management, CI/CD optimization, and security enhancements. With powerful tools like Boto3, AWS Lambda, and CloudWatch, Python simplifies tasks such as EC2 management, S3 automation, IAM security, and cost optimization.&lt;/p&gt;

&lt;p&gt;By integrating Python into your DevOps workflows, you can:&lt;br&gt;
✅ Reduce manual intervention and human errors&lt;br&gt;
✅ Improve deployment speed and system reliability&lt;br&gt;
✅ Optimize AWS resources for cost efficiency&lt;br&gt;
✅ Enhance security through automated IAM policies&lt;/p&gt;


&lt;p&gt;You’re welcome! Have a great time ahead! Enjoy your day!&lt;/p&gt;

&lt;p&gt;Please connect with me if you have any doubts.&lt;/p&gt;

&lt;p&gt;Mail: &lt;a href="mailto:sriharimalapati6@gmail.com"&gt;sriharimalapati6@gmail.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="http://www.linkedin.com/in/" rel="noopener noreferrer"&gt;www.linkedin.com/in/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/Consultantsrihari" rel="noopener noreferrer"&gt;https://github.com/Consultantsrihari&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Medium: Sriharimalapati — Medium&lt;/p&gt;

&lt;p&gt;Thanks for reading! ~ Sri Hari&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Scalable 3-Tier Architecture on AWS Using Terraform: A Modular Approach</title>
      <dc:creator>VENKATA SRI HARI</dc:creator>
      <pubDate>Wed, 05 Feb 2025 18:41:49 +0000</pubDate>
      <link>https://dev.to/venkatasrihari/building-a-scalable-3-tier-architecture-on-aws-using-terraform-a-modular-approach-403</link>
      <guid>https://dev.to/venkatasrihari/building-a-scalable-3-tier-architecture-on-aws-using-terraform-a-modular-approach-403</guid>
      <description>&lt;p&gt;In this project, we will design and implement a scalable three-tier architecture on AWS using Terraform. The setup follows a modular approach, organizing infrastructure into separate layers: Core, Web, App, and Database. This ensures better manageability, reusability, and security while deploying cloud resources. By the end, you will have a fully automated, infrastructure-as-code (IaC) solution for hosting applications on AWS.&lt;/p&gt;

&lt;p&gt;GitHub Repository: &lt;a href="https://github.com/Consultantsrihari/3-Tier-Architecture-on-AWS-Using-Terraform" rel="noopener noreferrer"&gt;https://github.com/Consultantsrihari/3-Tier-Architecture-on-AWS-Using-Terraform&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08auvzjx6ioeoi9vt8l9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08auvzjx6ioeoi9vt8l9.png" alt="Image description" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
Before starting this project, ensure you have the following:&lt;/p&gt;

&lt;p&gt;✅ AWS Account — To provision cloud resources using Terraform.&lt;br&gt;
✅ Terraform Installed — Download and install Terraform on your local machine.&lt;br&gt;
✅ AWS CLI Installed &amp;amp; Configured — Install AWS CLI and run aws configure to set up credentials.&lt;br&gt;
✅ IAM User with Required Permissions — Ensure your IAM user has permissions to create and manage VPCs, EC2, RDS, ALB, and other AWS resources.&lt;br&gt;
✅ Basic Knowledge of Terraform — Familiarity with writing .tf files, providers, modules, and variables.&lt;br&gt;
✅ Code Editor (VS Code Recommended) — Use VS Code with the Terraform extension for syntax highlighting and better development experience.&lt;br&gt;
✅ Git Installed (Optional) — For version control and managing Terraform code in a GitHub repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture Overview&lt;/strong&gt;&lt;br&gt;
We will deploy a three-tier architecture consisting of the following:&lt;/p&gt;

&lt;p&gt;VPC (Core Layer): Contains public and private subnets.&lt;br&gt;
Public Subnets: Bastion Host &amp;amp; NAT Gateway.&lt;br&gt;
Private Subnets: Web/Application servers (EC2 instances) and Database (Amazon RDS).&lt;br&gt;
Load Balancer (ALB): Distributes traffic.&lt;br&gt;
Security Groups: Restricted access at different layers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Structure&lt;/strong&gt;&lt;br&gt;
Root Module (main.tf)&lt;br&gt;
The root module ties together the core network, web, app, and database modules.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "core" {
  source = "./modules/core"

  vpc_cidr             = "10.0.0.0/16"
  public_subnet_cidrs  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnet_cidrs = ["10.0.3.0/24", "10.0.4.0/24"]
  db_subnet_cidrs      = ["10.0.5.0/24", "10.0.6.0/24"]
  azs                  = ["us-east-1a", "us-east-1b"]
}

module "web" {
  source = "./modules/web"

  public_subnet_ids  = module.core.public_subnet_ids
  web_alb_sg_id      = module.core.web_alb_sg_id
  web_instance_sg_id = module.core.web_instance_sg_id
  web_ami            = "ami-0c55b159cbfafe1f0"
  web_instance_type  = "t2.micro"
}

module "app" {
  source = "./modules/app"

  private_subnet_ids = module.core.private_subnet_ids
  app_alb_sg_id      = module.core.app_alb_sg_id
  app_instance_sg_id = module.core.app_instance_sg_id
  app_ami            = "ami-0c55b159cbfafe1f0"
  app_instance_type  = "t2.micro"
}

module "database" {
  source = "./modules/database"

  db_subnet_ids = module.core.db_subnet_ids
  db_sg_id      = module.core.db_sg_id
  db_name       = "mydb"
  db_user       = "admin"
  db_password   = "securepassword123" # example only; use a variable or Secrets Manager in production
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Security Group Setup (sg.tf)&lt;/strong&gt;&lt;br&gt;
Security groups are used to restrict traffic at different layers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Web ALB Security Group
resource "aws_security_group" "web_alb" {
  name        = "web-alb-sg"
  description = "Allow HTTP inbound traffic"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Database Security Group
resource "aws_security_group" "database" {
  name        = "db-sg"
  description = "Allow MySQL access from app tier"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [module.core.app_instance_sg_id]
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Module Definitions&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Core Network Module (modules/core/main.tf)
This module creates the VPC, subnets, NAT Gateway, and route tables.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr
  tags = {
    Name = "3tier-vpc"
  }
}

# Public Subnets
resource "aws_subnet" "public" {
  count             = length(var.public_subnet_cidrs)
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.public_subnet_cidrs[count.index]
  availability_zone = var.azs[count.index]
  tags = {
    Name = "public-subnet-${count.index}"
  }
}

# Private Subnets
resource "aws_subnet" "private" {
  count             = length(var.private_subnet_cidrs)
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnet_cidrs[count.index]
  availability_zone = var.azs[count.index]
  tags = {
    Name = "private-subnet-${count.index}"
  }
}

# Internet Gateway
resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id
}

# NAT Gateway
resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id
}

resource "aws_eip" "nat" {
  vpc = true
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Web Tier Module (modules/web/main.tf)&lt;/strong&gt;&lt;br&gt;
This module defines the public-facing ALB, EC2 instances, and auto-scaling group for the web tier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lb" "web" {
  name               = "web-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [var.web_alb_sg_id]
  subnets            = var.public_subnet_ids
}

resource "aws_launch_template" "web" {
  name_prefix   = "web-"
  image_id      = var.web_ami
  instance_type = var.web_instance_type
  key_name      = var.key_name

  network_interfaces {
    security_groups = [var.web_instance_sg_id]
  }

  user_data = base64encode(&amp;lt;&amp;lt;-EOF
              #!/bin/bash
              yum install -y nginx
              systemctl start nginx
              EOF
              )
}

resource "aws_autoscaling_group" "web" {
  desired_capacity   = 2
  max_size           = 4
  min_size           = 2
  vpc_zone_identifier = var.public_subnet_ids

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Application Tier Module (modules/app/main.tf)&lt;/strong&gt;&lt;br&gt;
This module defines the internal ALB, EC2 instances, and auto-scaling group for the app tier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lb" "app" {
  name               = "app-alb"
  internal           = true
  load_balancer_type = "application"
  security_groups    = [var.app_alb_sg_id]
  subnets            = var.private_subnet_ids
}

resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = var.app_ami
  instance_type = var.app_instance_type
  key_name      = var.key_name

  network_interfaces {
    security_groups = [var.app_instance_sg_id]
  }
}

resource "aws_autoscaling_group" "app" {
  desired_capacity   = 2
  max_size           = 4
  min_size           = 2
  vpc_zone_identifier = var.private_subnet_ids

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Database Tier Module (modules/database/main.tf)&lt;/strong&gt;&lt;br&gt;
This module defines the MySQL RDS instance and associated subnet group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_db_subnet_group" "db" {
  name       = "db-subnet-group"
  subnet_ids = var.db_subnet_ids
}

resource "aws_db_instance" "main" {
  allocated_storage    = 20
  storage_type         = "gp2"
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t2.micro"
  db_name              = var.db_name
  username             = var.db_user
  password             = var.db_password
  parameter_group_name = "default.mysql5.7"
  skip_final_snapshot  = true
  db_subnet_group_name = aws_db_subnet_group.db.name
  vpc_security_group_ids = [var.db_sg_id]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Deployment Steps&lt;/strong&gt;&lt;br&gt;
1. Initialize Terraform:&lt;br&gt;
&lt;em&gt;terraform init&lt;/em&gt;&lt;br&gt;
2. Plan the deployment:&lt;br&gt;
&lt;em&gt;terraform plan&lt;/em&gt;&lt;br&gt;
3. Apply the configuration:&lt;br&gt;
&lt;em&gt;terraform apply -auto-approve&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;4. Verify the deployment:&lt;br&gt;
Check the AWS Console for the created resources.&lt;br&gt;
Validate the EC2 instances, ALB, and RDS.&lt;br&gt;
5. Destroy the infrastructure when finished:&lt;br&gt;
&lt;em&gt;terraform destroy -auto-approve&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
This Terraform-driven 3-tier AWS deployment establishes a scalable,&lt;br&gt;
secure, and highly available architecture, leveraging infrastructure-as-code(IaC) best practices. By modularizing components (web, app, database) and isolating network layers, it ensures fault tolerance, simplified maintenance,and controlled access between tiers. The use of auto-scaling groups, ALBs,and RDS guarantees resilience and performance, while security groups enforce least-privilege principles. This foundation supports seamless scaling, cost optimization, and compliance readiness, providing a robust blueprint for modern cloud-native applications. Future enhancements like HTTPS, WAF, or secrets management can further strengthen production readiness.&lt;/p&gt;

&lt;p&gt;You’re welcome! Have a great time ahead! Enjoy your day!&lt;br&gt;
Please connect with me if you have any doubts.&lt;br&gt;
Mail: &lt;a href="mailto:sriharimalapati6@gmail.com"&gt;sriharimalapati6@gmail.com&lt;/a&gt;&lt;br&gt;
LinkedIn: &lt;a href="http://www.linkedin.com/in/" rel="noopener noreferrer"&gt;www.linkedin.com/in/&lt;/a&gt;&lt;br&gt;
GitHub: &lt;a href="https://github.com/Consultantsrihari" rel="noopener noreferrer"&gt;https://github.com/Consultantsrihari&lt;/a&gt;&lt;br&gt;
Medium: Sriharimalapati — Medium&lt;br&gt;
Thanks for reading! Sri Hari&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>Top 50 Hands-On DevOps Projects: From Zero to Hero</title>
      <dc:creator>VENKATA SRI HARI</dc:creator>
      <pubDate>Fri, 17 Jan 2025 12:05:10 +0000</pubDate>
      <link>https://dev.to/venkatasrihari/top-50-hands-on-devops-projects-from-zero-to-hero-4amm</link>
      <guid>https://dev.to/venkatasrihari/top-50-hands-on-devops-projects-from-zero-to-hero-4amm</guid>
      <description>&lt;p&gt;&lt;u&gt;&lt;/u&gt;Beginner Projects &lt;/p&gt;

&lt;p&gt;Version Control with Git and GitHub &lt;/p&gt;

&lt;p&gt;Collaborate on a project using branches, pull requests, and merge conflicts. &lt;/p&gt;

&lt;p&gt;CI/CD with Jenkins &lt;/p&gt;

&lt;p&gt;Set up a simple pipeline for a Java/Maven or Node.js application. &lt;/p&gt;

&lt;p&gt;Dockerize an Application &lt;/p&gt;

&lt;p&gt;Containerize a Node.js or Python application with Docker and manage images. &lt;/p&gt;

&lt;p&gt;Basic AWS EC2 Setup &lt;/p&gt;

&lt;p&gt;Launch and connect to an EC2 instance, and deploy a simple web app. &lt;/p&gt;

&lt;p&gt;Infrastructure Automation with Terraform &lt;/p&gt;

&lt;p&gt;Create a basic VPC, subnets, and an EC2 instance. &lt;/p&gt;

&lt;p&gt;Automated Configuration with Ansible &lt;/p&gt;

&lt;p&gt;Write a playbook to install and configure Apache or Nginx on multiple servers. &lt;/p&gt;

&lt;p&gt;Linux Administration for DevOps &lt;/p&gt;

&lt;p&gt;Manage users, groups, and permissions. Automate tasks with shell scripts. &lt;/p&gt;

&lt;p&gt;Monitoring with Prometheus and Grafana &lt;/p&gt;

&lt;p&gt;Monitor basic system metrics and create visual dashboards. &lt;/p&gt;

&lt;p&gt;Static Code Analysis with SonarQube &lt;/p&gt;

&lt;p&gt;Integrate SonarQube into Jenkins pipelines for code quality checks. &lt;/p&gt;

&lt;p&gt;Container Orchestration with Docker Swarm &lt;/p&gt;

&lt;p&gt;Deploy a multi-container app using Docker Compose and Docker Swarm. &lt;/p&gt;

&lt;p&gt;Intermediate Projects &lt;/p&gt;

&lt;p&gt;Multi-Region AWS Deployment &lt;/p&gt;

&lt;p&gt;Deploy a web app with a load balancer and Auto Scaling across regions. &lt;/p&gt;

&lt;p&gt;CI/CD Pipeline with GitHub Actions &lt;/p&gt;

&lt;p&gt;Automate builds, tests, and deployments for a Node.js app. &lt;/p&gt;

&lt;p&gt;Kubernetes Cluster Setup &lt;/p&gt;

&lt;p&gt;Install Kubernetes (Minikube or k3s) and deploy a multi-tier application. &lt;/p&gt;

&lt;p&gt;Infrastructure as Code with Terraform Modules &lt;/p&gt;

&lt;p&gt;Create reusable Terraform modules for networking and compute resources. &lt;/p&gt;

&lt;p&gt;Automated Backups with AWS S3 &lt;/p&gt;

&lt;p&gt;Automate EC2 snapshot backups and upload them to S3 using a script. &lt;/p&gt;

&lt;p&gt;Blue-Green Deployment with Jenkins and Kubernetes &lt;/p&gt;

&lt;p&gt;Implement a zero-downtime deployment strategy. &lt;/p&gt;

&lt;p&gt;Log Aggregation with ELK Stack &lt;/p&gt;

&lt;p&gt;Set up Elasticsearch, Logstash, and Kibana to centralize logs from multiple sources. &lt;/p&gt;

&lt;p&gt;Ansible Roles for Multi-Tier Deployment &lt;/p&gt;

&lt;p&gt;Use roles to configure web, app, and database servers. &lt;/p&gt;

&lt;p&gt;Docker Networking Basics &lt;/p&gt;

&lt;p&gt;Set up bridge, overlay, and host networks in Docker. &lt;/p&gt;

&lt;p&gt;SSL/TLS Certificate Automation &lt;/p&gt;

&lt;p&gt;Automate SSL certificate provisioning using Let's Encrypt and Certbot. &lt;/p&gt;

&lt;p&gt;Advanced Projects &lt;/p&gt;

&lt;p&gt;Hybrid Cloud Setup with AWS and Azure &lt;/p&gt;

&lt;p&gt;Use Terraform to create resources across both AWS and Azure. &lt;/p&gt;

&lt;p&gt;GitOps with Argo CD &lt;/p&gt;

&lt;p&gt;Manage Kubernetes applications with GitOps principles. &lt;/p&gt;

&lt;p&gt;Automated Testing Pipeline with Selenium and Jenkins &lt;/p&gt;

&lt;p&gt;Integrate Selenium for automated UI testing in a CI/CD pipeline. &lt;/p&gt;

&lt;p&gt;Dynamic Scaling with Kubernetes HPA &lt;/p&gt;

&lt;p&gt;Deploy an app with Horizontal Pod Autoscaler based on CPU/memory metrics. &lt;/p&gt;

&lt;p&gt;Multi-Cluster Kubernetes Management with Rancher &lt;/p&gt;

&lt;p&gt;Manage multiple Kubernetes clusters using Rancher. &lt;/p&gt;

&lt;p&gt;Serverless CI/CD with AWS Lambda &lt;/p&gt;

&lt;p&gt;Automate deployments using AWS Lambda and CodePipeline. &lt;/p&gt;

&lt;p&gt;Kubernetes Network Policies &lt;/p&gt;

&lt;p&gt;Implement fine-grained network controls for microservices. &lt;/p&gt;

&lt;p&gt;Building a Private Docker Registry &lt;/p&gt;

&lt;p&gt;Set up and secure a private Docker registry for your team. &lt;/p&gt;

&lt;p&gt;Implementing Service Mesh with Istio &lt;/p&gt;

&lt;p&gt;Deploy Istio for traffic management and observability in a Kubernetes cluster. &lt;/p&gt;

&lt;p&gt;Securing DevOps Pipelines &lt;/p&gt;

&lt;p&gt;Integrate SAST and DAST tools for security scanning in CI/CD. &lt;/p&gt;

&lt;p&gt;Distributed Tracing with Jaeger &lt;/p&gt;

&lt;p&gt;Monitor microservices with distributed tracing in Kubernetes. &lt;/p&gt;

&lt;p&gt;Infrastructure Monitoring with Datadog &lt;/p&gt;

&lt;p&gt;Use Datadog to monitor and visualize cloud and server resources. &lt;/p&gt;

&lt;p&gt;Multi-Environment CI/CD Pipelines &lt;/p&gt;

&lt;p&gt;Implement pipelines for dev, staging, and production environments. &lt;/p&gt;

&lt;p&gt;Advanced Terraform State Management &lt;/p&gt;

&lt;p&gt;Use remote state backends with locking mechanisms (e.g., S3 + DynamoDB). &lt;/p&gt;
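&lt;p&gt;The S3 + DynamoDB combination stores the state file in a bucket while the DynamoDB table provides a lock so two &lt;em&gt;terraform apply&lt;/em&gt; runs cannot corrupt the state concurrently. A sketch of the backend configuration, with bucket and table names assumed:&lt;/p&gt;

```hcl
# Hypothetical remote state backend with locking (bucket/table names assumed).
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"          # assumed S3 bucket name
    key            = "envs/dev/terraform.tfstate"  # state object path
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"             # table with a LockID hash key
    encrypt        = true                          # server-side encrypt the state
  }
}
```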

&lt;p&gt;Custom Helm Chart Development &lt;/p&gt;

&lt;p&gt;Create Helm charts for application deployment on Kubernetes. &lt;/p&gt;

&lt;p&gt;Implementing Chaos Engineering &lt;/p&gt;

&lt;p&gt;Use tools like Gremlin or Chaos Monkey to test system resilience. &lt;/p&gt;

&lt;p&gt;CI/CD for Microservices with Jenkins and Docker &lt;/p&gt;

&lt;p&gt;Create pipelines for multiple services with interdependencies. &lt;/p&gt;

&lt;p&gt;Real-Time Monitoring with Prometheus Alertmanager &lt;/p&gt;

&lt;p&gt;Configure alerting rules and notifications for system events. &lt;/p&gt;

&lt;p&gt;Setting Up Centralized Authentication in Kubernetes &lt;/p&gt;

&lt;p&gt;Implement RBAC with external authentication (e.g., Keycloak). &lt;/p&gt;

&lt;p&gt;Disaster Recovery Strategy with AWS Backup &lt;/p&gt;

&lt;p&gt;Automate backup and recovery for critical workloads. &lt;/p&gt;

&lt;p&gt;Expert-Level Projects &lt;/p&gt;

&lt;p&gt;Deploying a Kubernetes Operator &lt;/p&gt;

&lt;p&gt;Develop and deploy a custom Kubernetes operator. &lt;/p&gt;

&lt;p&gt;Cross-Cloud CI/CD Pipelines &lt;/p&gt;

&lt;p&gt;Implement CI/CD for an app deployed across AWS and GCP. &lt;/p&gt;

&lt;p&gt;Serverless Microservices with AWS Lambda and API Gateway &lt;/p&gt;

&lt;p&gt;Build a scalable serverless architecture for an application. &lt;/p&gt;

&lt;p&gt;Dynamic Secrets Management with HashiCorp Vault &lt;/p&gt;

&lt;p&gt;Securely manage application secrets and credentials. &lt;/p&gt;

&lt;p&gt;Monitoring Kubernetes with Thanos &lt;/p&gt;

&lt;p&gt;Set up a highly available monitoring system for Kubernetes clusters. &lt;/p&gt;

&lt;p&gt;Implementing Canary Deployments &lt;/p&gt;

&lt;p&gt;Deploy updates incrementally and monitor performance with Kubernetes. &lt;/p&gt;

&lt;p&gt;Pipeline as Code with Jenkinsfile &lt;/p&gt;

&lt;p&gt;Create complex pipelines entirely using Jenkinsfile syntax. &lt;/p&gt;

&lt;p&gt;Container Security with Aqua or Trivy &lt;/p&gt;

&lt;p&gt;Scan container images for vulnerabilities and enforce compliance. &lt;/p&gt;

&lt;p&gt;Self-Healing Infrastructure with Auto Remediation &lt;/p&gt;

&lt;p&gt;Automate recovery from failures using AWS Lambda and CloudWatch. &lt;/p&gt;
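&lt;p&gt;A CloudWatch alarm publishes to SNS, which invokes a Lambda that acts on the affected instance. A minimal Python sketch, assuming the standard CloudWatch-alarm-over-SNS payload; the Lambda and alarm names are hypothetical, and the boto3 call only runs inside AWS:&lt;/p&gt;

```python
# Hypothetical auto-remediation Lambda sketch.
# Flow: CloudWatch alarm -> SNS topic -> this Lambda, which reboots the
# instance named in the alarm's dimensions.
import json

def instance_from_alarm(sns_message: str) -> str:
    """Pull the InstanceId dimension out of a CloudWatch alarm SNS payload."""
    alarm = json.loads(sns_message)
    dims = alarm["Trigger"]["Dimensions"]
    return next(d["value"] for d in dims if d["name"] == "InstanceId")

def lambda_handler(event, context):
    instance_id = instance_from_alarm(event["Records"][0]["Sns"]["Message"])
    import boto3  # imported lazily so the parsing logic is testable without AWS
    boto3.client("ec2").reboot_instances(InstanceIds=[instance_id])
    return {"rebooted": instance_id}
```

&lt;p&gt;Pointing the SNS topic at this function turns the alarm into a closed remediation loop; swapping the reboot call for a stop, terminate, or ASG instance-refresh action covers other failure modes.&lt;/p&gt;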

&lt;p&gt;Full CI/CD with End-to-End Security and Observability &lt;/p&gt;

&lt;p&gt;Set up pipelines with advanced features like automated rollbacks, Slack alerts, Prometheus metrics, and Jaeger tracing. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS DevOps Project: Advanced Automated CI/CD Pipeline</title>
      <dc:creator>VENKATA SRI HARI</dc:creator>
      <pubDate>Tue, 31 Dec 2024 17:13:21 +0000</pubDate>
      <link>https://dev.to/venkatasrihari/aws-devops-project-advanced-automated-cicd-pipeline-589b</link>
      <guid>https://dev.to/venkatasrihari/aws-devops-project-advanced-automated-cicd-pipeline-589b</guid>
      <description>&lt;h1&gt;
  
  
  AWS DevOps Project: Advanced Automated CI/CD Pipeline
&lt;/h1&gt;

&lt;p&gt;This project demonstrates how to set up an advanced automated CI/CD pipeline with Infrastructure as Code (IaC), microservices, a service mesh, and monitoring using AWS. Tools like Terraform, Jenkins, Kubernetes (EKS), Istio, Prometheus, Grafana, and ArgoCD are utilized.&lt;/p&gt;

&lt;p&gt;Github: &lt;a href="https://github.com/Consultantsrihari/AWS-DevOps-Project-Advanced-Automated-CI-CD-Pipeline.git" rel="noopener noreferrer"&gt;https://github.com/Consultantsrihari/AWS-DevOps-Project-Advanced-Automated-CI-CD-Pipeline.git&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmr208e2lwh8gmu24ax6.png" alt="Image description" width="800" height="457"&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Architecture Overview&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code&lt;/strong&gt;: AWS resources are provisioned using Terraform.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Containerized Microservices&lt;/strong&gt;: Deployed on EKS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Mesh&lt;/strong&gt;: Managed by Istio for advanced traffic control and observability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Pipeline&lt;/strong&gt;: Managed with Jenkins and ArgoCD.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring&lt;/strong&gt;: Implemented using Prometheus and Grafana.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Pre-requisites&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;AWS Account.&lt;/li&gt;
&lt;li&gt;Terraform installed locally.&lt;/li&gt;
&lt;li&gt;Kubectl and Helm installed.&lt;/li&gt;
&lt;li&gt;Jenkins installed and configured.&lt;/li&gt;
&lt;li&gt;Docker installed.&lt;/li&gt;
&lt;li&gt;Prometheus and Grafana configured for monitoring.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 1: Provision Infrastructure with Terraform&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Terraform Configuration (main.tf)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_vpc"&lt;/span&gt; &lt;span class="s2"&gt;"dev_vpc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;cidr_block&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.0/16"&lt;/span&gt;
  &lt;span class="nx"&gt;enable_dns_support&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;enable_dns_hostnames&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dev-vpc"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_subnet"&lt;/span&gt; &lt;span class="s2"&gt;"public_subnet"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;cidr_block&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.1.0/24"&lt;/span&gt;
  &lt;span class="nx"&gt;map_public_ip_on_launch&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"public-subnet"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_eks_cluster"&lt;/span&gt; &lt;span class="s2"&gt;"eks_cluster"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dev-cluster"&lt;/span&gt;
  &lt;span class="nx"&gt;role_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;

  &lt;span class="nx"&gt;vpc_config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;subnet_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
terraform apply &lt;span class="nt"&gt;-auto-approve&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  &lt;strong&gt;Step 2: Deploy Jenkins for CI/CD&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Helm Chart for Jenkins
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add jenkinsci https://charts.jenkins.io
helm repo update
helm &lt;span class="nb"&gt;install &lt;/span&gt;jenkins jenkinsci/jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access Jenkins UI and install required plugins (e.g., Docker, Kubernetes, Git, Pipeline).&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3: Deploy Kubernetes Cluster and Microservices&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Sample Deployment YAML (microservice-deployment.yaml)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-microservice&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myregistry/sample-app:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; microservice-deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
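&lt;p&gt;The Istio VirtualService in Step 4 routes traffic to the host &lt;code&gt;sample-microservice&lt;/code&gt;, so the Deployment above needs a Kubernetes Service of that name to resolve; a minimal sketch:&lt;/p&gt;

```yaml
# Service fronting the sample-microservice Deployment (a sketch).
apiVersion: v1
kind: Service
metadata:
  name: sample-microservice
spec:
  selector:
    app: sample          # matches the Deployment's pod labels
  ports:
  - port: 8080
    targetPort: 8080
```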






&lt;h2&gt;
  
  
  &lt;strong&gt;Step 4: Configure Istio Service Mesh&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Install Istio
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;istioctl &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;profile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;demo &lt;span class="nt"&gt;-y&lt;/span&gt;
kubectl label namespace default istio-injection&lt;span class="o"&gt;=&lt;/span&gt;enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Sample VirtualService Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.istio.io/v1alpha3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VirtualService&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;
  &lt;span class="na"&gt;gateways&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sample-gateway&lt;/span&gt;
  &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;route&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-microservice&lt;/span&gt;
        &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
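&lt;p&gt;The VirtualService above references a gateway named &lt;code&gt;sample-gateway&lt;/code&gt; that is not defined in this article; a minimal sketch of such a Gateway, bound to Istio's default ingress gateway:&lt;/p&gt;

```yaml
# Minimal Gateway sketch for the sample-gateway referenced above (assumed config).
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sample-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```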



&lt;p&gt;Apply the configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; virtualservice.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  &lt;strong&gt;Step 5: Integrate Monitoring with Prometheus and Grafana&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Install Prometheus and Grafana
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;prometheus prometheus-community/prometheus
helm &lt;span class="nb"&gt;install &lt;/span&gt;grafana grafana/grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access Grafana Dashboard and connect to Prometheus as a data source.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 6: Jenkins Pipeline Configuration&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Jenkinsfile
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight groovy"&gt;&lt;code&gt;&lt;span class="n"&gt;pipeline&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="n"&gt;any&lt;/span&gt;
  &lt;span class="n"&gt;stages&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Checkout Code'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;steps&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;checkout&lt;/span&gt; &lt;span class="n"&gt;scm&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Build Docker Image'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;steps&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'docker build -t myregistry/sample-app:latest .'&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Push to Registry'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;steps&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'docker push myregistry/sample-app:latest'&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Deploy to Kubernetes'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;steps&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'kubectl apply -f microservice-deployment.yaml'&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  &lt;strong&gt;Step 7: Continuous Delivery with ArgoCD&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Install ArgoCD
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace argocd
kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;kubectl apply&lt;/code&gt; command is used to install ArgoCD in a dedicated namespace (&lt;code&gt;argocd&lt;/code&gt;). ArgoCD manages the deployment and synchronization of applications in Kubernetes clusters using GitOps principles. Once installed, it acts as a continuous delivery tool that monitors a Git repository for changes and automatically applies them to the cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  ArgoCD Application YAML
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/your-repo/sample-app.git&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HEAD&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; argocd-application.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
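&lt;p&gt;With the defaults above, ArgoCD detects drift but waits for a manual sync. To have it apply Git changes automatically, a &lt;code&gt;syncPolicy&lt;/code&gt; block can be added under &lt;code&gt;spec&lt;/code&gt; in the Application manifest; a sketch:&lt;/p&gt;

```yaml
# Optional addition under the Application's spec (a sketch).
  syncPolicy:
    automated:
      prune: true      # delete cluster resources removed from Git
      selfHeal: true   # revert manual changes made in the cluster
```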






&lt;h2&gt;
  
  
  &lt;strong&gt;Summary&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure&lt;/strong&gt;: Provisioned with Terraform.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD&lt;/strong&gt;: Managed by Jenkins and ArgoCD.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Mesh&lt;/strong&gt;: Configured with Istio.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring&lt;/strong&gt;: Handled by Prometheus and Grafana.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup ensures a robust and automated DevOps workflow, promoting scalability, observability, and reliability.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Create a Terraform project which Implements a VPC on AWS and deploys to EKS cluster.</title>
      <dc:creator>VENKATA SRI HARI</dc:creator>
      <pubDate>Tue, 31 Dec 2024 16:37:27 +0000</pubDate>
      <link>https://dev.to/venkatasrihari/create-a-terraform-project-which-implements-a-vpc-on-aws-and-deploys-to-eks-cluster-2deb</link>
      <guid>https://dev.to/venkatasrihari/create-a-terraform-project-which-implements-a-vpc-on-aws-and-deploys-to-eks-cluster-2deb</guid>
      <description>&lt;p&gt;In this Terraform project we will learn how to provisions a Virtual Private Cloud (VPC) on AWS, deploys an Amazon Elastic Kubernetes Service (EKS) cluster within that VPC, and provides associated networking and IAM resources.&lt;/p&gt;

&lt;p&gt;In this I’ll create a well-organized Terraform configuration for an AWS VPC and EKS cluster, split into logical modules.&lt;/p&gt;

&lt;p&gt;The complete project GitHub URL: &lt;a href="https://github.com/Consultantsrihari/Create-a-Terraform-project-which-Implements-a-VPC-on-AWS-and-deploys-to-EKS-cluster..git" rel="noopener noreferrer"&gt;https://github.com/Consultantsrihari/Create-a-Terraform-project-which-Implements-a-VPC-on-AWS-and-deploys-to-EKS-cluster..git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v5bwg0hhz0xadmsc2jc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v5bwg0hhz0xadmsc2jc.png" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS VPC and EKS Terraform Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  I. The root configuration, which wires together the VPC and EKS modules
&lt;/h2&gt;

&lt;p&gt;main.tf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

module "vpc" {
  source = "./modules/vpc"

  vpc_cidr           = var.vpc_cidr
  availability_zones = var.availability_zones
  environment        = var.environment
}

module "eks" {
  source = "./modules/eks"

  environment         = var.environment
  subnet_ids         = module.vpc.private_subnet_ids
  kubernetes_version = "1.27"

  node_desired_size  = 2
  node_max_size      = 4
  node_min_size      = 1
  node_instance_types = ["t3.medium"]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;variables.tf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "environment" {
  description = "Environment name"
  type        = string
  default     = "dev"
}

variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "availability_zones" {
  description = "Availability zones"
  type        = list(string)
  default     = ["us-east-1a", "us-east-1b", "us-east-1c"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;outputs.tf&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "vpc_id" {
  value = module.vpc.vpc_id
}

output "cluster_endpoint" {
  value = module.eks.cluster_endpoint
}

output "cluster_name" {
  value = module.eks.cluster_name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  II. A VPC module (modules/vpc) that creates:
&lt;/h2&gt;

&lt;p&gt;VPC with DNS support&lt;br&gt;
Public and private subnets across multiple AZs&lt;br&gt;
Internet Gateway&lt;br&gt;
NAT Gateway&lt;br&gt;
Route tables&lt;/p&gt;

&lt;p&gt;main.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# VPC
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name        = "${var.environment}-vpc"
    Environment = var.environment
  }
}

# Public Subnets
resource "aws_subnet" "public" {
  count             = length(var.availability_zones)
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(var.vpc_cidr, 8, count.index)
  availability_zone = var.availability_zones[count.index]

  map_public_ip_on_launch = true

  tags = {
    Name                                           = "${var.environment}-public-${var.availability_zones[count.index]}"
    Environment                                    = var.environment
    "kubernetes.io/cluster/${var.environment}-eks" = "shared"
    "kubernetes.io/role/elb"                       = 1
  }
}

# Private Subnets
resource "aws_subnet" "private" {
  count             = length(var.availability_zones)
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(var.vpc_cidr, 8, count.index + length(var.availability_zones))
  availability_zone = var.availability_zones[count.index]

  tags = {
    Name                                           = "${var.environment}-private-${var.availability_zones[count.index]}"
    Environment                                    = var.environment
    "kubernetes.io/cluster/${var.environment}-eks" = "shared"
    "kubernetes.io/role/internal-elb"              = 1
  }
}

# Internet Gateway
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name        = "${var.environment}-igw"
    Environment = var.environment
  }
}

# Elastic IP for NAT Gateway
resource "aws_eip" "nat" {
  domain = "vpc"

  tags = {
    Name        = "${var.environment}-nat-eip"
    Environment = var.environment
  }
}

# NAT Gateway
resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id

  tags = {
    Name        = "${var.environment}-nat"
    Environment = var.environment
  }
}

# Public Route Table
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }

  tags = {
    Name        = "${var.environment}-public-rt"
    Environment = var.environment
  }
}

# Private Route Table
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main.id
  }

  tags = {
    Name        = "${var.environment}-private-rt"
    Environment = var.environment
  }
}

# Route Table Associations
resource "aws_route_table_association" "public" {
  count          = length(var.availability_zones)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "private" {
  count          = length(var.availability_zones)
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private.id
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
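The cidrsubnet(var.vpc_cidr, 8, index) calls above carve the /16 VPC range into /24 blocks: indexes 0-2 become the public subnets and indexes 3-5 the private ones. A quick illustration with Python's standard ipaddress module (not part of the Terraform code) shows the resulting ranges for the default 10.0.0.0/16 CIDR and three AZs:

```python
import ipaddress

# cidrsubnet("10.0.0.0/16", 8, i) in Terraform picks the i-th /24 inside the /16
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(prefixlen_diff=8))  # all 256 possible /24 blocks, in order

public = [str(subnets[i]) for i in range(3)]        # indexes 0..2
private = [str(subnets[i + 3]) for i in range(3)]   # indexes 3..5

print(public)   # ['10.0.0.0/24', '10.0.1.0/24', '10.0.2.0/24']
print(private)  # ['10.0.3.0/24', '10.0.4.0/24', '10.0.5.0/24']
```

Because the private subnets are offset by the number of AZs, the two sets never overlap.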



&lt;ol&gt;
&lt;li&gt;variables.tf
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
}

variable "availability_zones" {
  description = "Availability zones"
  type        = list(string)
}

variable "environment" {
  description = "Environment name"
  type        = string
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;outputs.tf&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
output "vpc_id" {
  value = aws_vpc.main.id
}

output "public_subnet_ids" {
  value = aws_subnet.public[*].id
}

output "private_subnet_ids" {
  value = aws_subnet.private[*].id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  III. An EKS module (modules/eks) that creates:
&lt;/h2&gt;

&lt;p&gt;EKS cluster&lt;br&gt;
Node group with autoscaling&lt;br&gt;
Required IAM roles and policies&lt;/p&gt;

&lt;p&gt;main.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_eks_cluster" "main" {
  name     = "${var.environment}-eks"
  role_arn = aws_iam_role.eks_cluster.arn
  version  = var.kubernetes_version

  vpc_config {
    subnet_ids              = var.subnet_ids
    endpoint_private_access = true
    endpoint_public_access  = true
  }

  depends_on = [
    aws_iam_role_policy_attachment.eks_cluster_policy,
    aws_iam_role_policy_attachment.eks_service_policy,
  ]
}

# EKS Cluster IAM Role
resource "aws_iam_role" "eks_cluster" {
  name = "${var.environment}-eks-cluster"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "eks.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster.name
}

resource "aws_iam_role_policy_attachment" "eks_service_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = aws_iam_role.eks_cluster.name
}

# Node Group
resource "aws_eks_node_group" "main" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "${var.environment}-node-group"
  node_role_arn   = aws_iam_role.eks_node_group.arn
  subnet_ids      = var.subnet_ids

  scaling_config {
    desired_size = var.node_desired_size
    max_size     = var.node_max_size
    min_size     = var.node_min_size
  }

  instance_types = var.node_instance_types

  depends_on = [
    aws_iam_role_policy_attachment.eks_node_policy,
    aws_iam_role_policy_attachment.eks_cni_policy,
    aws_iam_role_policy_attachment.eks_container_registry,
  ]
}

# Node Group IAM Role
resource "aws_iam_role" "eks_node_group" {
  name = "${var.environment}-eks-node-group"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "eks_node_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_node_group.name
}

resource "aws_iam_role_policy_attachment" "eks_cni_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_node_group.name
}

resource "aws_iam_role_policy_attachment" "eks_container_registry" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_node_group.name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;variables.tf
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
variable "environment" {
  description = "Environment name"
  type        = string
}

variable "subnet_ids" {
  description = "Subnet IDs for EKS cluster"
  type        = list(string)
}

variable "kubernetes_version" {
  description = "Kubernetes version"
  type        = string
  default     = "1.27"
}

variable "node_desired_size" {
  description = "Desired number of worker nodes"
  type        = number
  default     = 2
}

variable "node_max_size" {
  description = "Maximum number of worker nodes"
  type        = number
  default     = 4
}

variable "node_min_size" {
  description = "Minimum number of worker nodes"
  type        = number
  default     = 1
}

variable "node_instance_types" {
  description = "Instance types for worker nodes"
  type        = list(string)
  default     = ["t3.medium"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;outputs.tf&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
output "cluster_endpoint" {
  value = aws_eks_cluster.main.endpoint
}

output "cluster_name" {
  value = aws_eks_cluster.main.name
}

output "cluster_certificate_authority_data" {
  value = aws_eks_cluster.main.certificate_authority[0].data
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To use this configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initialize Terraform
terraform init

# Review the plan
terraform plan

# Apply the configuration
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The configuration uses variables with sensible defaults but can be customized through the terraform.tfvars file.&lt;/p&gt;
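For example, a terraform.tfvars overriding the defaults might look like this (the values are purely illustrative):

```hcl
aws_region         = "us-west-2"
environment        = "staging"
vpc_cidr           = "10.10.0.0/16"
availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
```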

&lt;p&gt;Key features:&lt;/p&gt;

&lt;p&gt;Multi-AZ layout across three availability zones (note: the single NAT Gateway keeps costs down but is not fully highly available)&lt;br&gt;
Private subnets for EKS worker nodes&lt;br&gt;
Public subnets for load balancers&lt;br&gt;
Required IAM roles and managed policies for the cluster and node group&lt;br&gt;
Autoscaling node group&lt;br&gt;
Subnet tagging required by EKS and the AWS Load Balancer Controller&lt;br&gt;
&lt;a href="https://github.com/Consultantsrihari/Create-a-Terraform-project-which-Implements-a-VPC-on-AWS-and-deploys-to-EKS-cluster..git" rel="noopener noreferrer"&gt;https://github.com/Consultantsrihari/Create-a-Terraform-project-which-Implements-a-VPC-on-AWS-and-deploys-to-EKS-cluster..git&lt;/a&gt;&lt;/p&gt;
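Once the apply finishes, you can point kubectl at the new cluster using the cluster_name output. With the defaults above the cluster name resolves to dev-eks in us-east-1; substitute your own values if you changed them:

```shell
aws eks update-kubeconfig --region us-east-1 --name dev-eks
kubectl get nodes
```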

&lt;p&gt;………………………………………………………………………………………………………………………………………………………………………………………………&lt;/p&gt;

&lt;p&gt;You’re welcome! Have a great time ahead! Enjoy your day!&lt;/p&gt;

&lt;p&gt;Please Connect with me any doubts.&lt;/p&gt;

&lt;p&gt;Mail: &lt;a href="mailto:sriharimalapati6@gmail.com"&gt;sriharimalapati6@gmail.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="http://www.linkedin.com/in/" rel="noopener noreferrer"&gt;www.linkedin.com/in/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/Consultantsrihari" rel="noopener noreferrer"&gt;https://github.com/Consultantsrihari&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Medium: Sriharimalapati — Medium&lt;/p&gt;

&lt;h4&gt;
  
  
  Thanks for reading! Sri Hari
&lt;/h4&gt;

</description>
    </item>
    <item>
      <title>Automated Node.js Deployment to AWS EC2 with Docker and GitHub Actions</title>
      <dc:creator>VENKATA SRI HARI</dc:creator>
      <pubDate>Sun, 28 Jul 2024 15:00:03 +0000</pubDate>
      <link>https://dev.to/venkatasrihari/creating-an-architecture-using-terraform-on-aws-2lb1</link>
      <guid>https://dev.to/venkatasrihari/creating-an-architecture-using-terraform-on-aws-2lb1</guid>
      <description>&lt;p&gt;In this project, we will learn how to deploy a Node.js application using GitHub Actions, Docker, and AWS EC2. This fully automated pipeline ensures that our application on the EC2 instance is always up-to-date with the latest changes.&lt;/p&gt;

&lt;p&gt;GitHub Actions is a CI/CD platform designed to automate your build, test, and deployment processes. By configuring a .yml file, you can specify tasks to run in response to events like pull requests, issues, or commits.&lt;/p&gt;

&lt;p&gt;In this guide, we will utilize GitHub Actions to build a Docker image and push it to Docker Hub. An AWS EC2 instance will serve as a self-hosted runner. The instance will then pull and run the Docker container, allowing us to access the application via its public IP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwd1i7z7rcg578dgzfw5t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwd1i7z7rcg578dgzfw5t.png" alt="Image description" width="644" height="263"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Github link: NodeJS-Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Let’s create a Dockerfile in the project’s root directory.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn745dbbymrdpbrh2z11y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn745dbbymrdpbrh2z11y.png" alt="Image description" width="800" height="173"&gt;&lt;/a&gt;&lt;br&gt;
You can reuse this same Dockerfile in your own project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
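The Dockerfile itself only appears as a screenshot above; a typical minimal Dockerfile for a Node.js REST API looks like the sketch below. The entry file name (index.js) and the port are assumptions, so adjust them to match your app (the workflow later maps port 80).

```dockerfile
# Sketch of a minimal Node.js image (index.js and port 80 are assumed names)
FROM node:18-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install --omit=dev

# Copy the application source
COPY . .

EXPOSE 80
CMD ["node", "index.js"]
```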



&lt;ol&gt;
&lt;li&gt;Next, create a .dockerignore file.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qpf41rqcj1rf0m99rp6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qpf41rqcj1rf0m99rp6.png" alt="Image description" width="800" height="286"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# dockerignore file 
node_modules/
package-lock.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Log in to your GitHub account and create a new repository (my repository name is NodeJS-Deployment; you can choose a different name).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46vkugsw9coz45k6ifmo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46vkugsw9coz45k6ifmo.png" alt="Image description" width="800" height="714"&gt;&lt;/a&gt;&lt;br&gt;
Click on Create repository.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Push your existing code to the GitHub repository.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zb4g0zkuxjtnd7ycmjq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zb4g0zkuxjtnd7ycmjq.png" alt="Image description" width="800" height="437"&gt;&lt;/a&gt;&lt;br&gt;
Run the commands below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git init
git add .
git commit -m "Initial commit"
git branch -M main
git remote add origin https://github.com/Consultantsrihari/NodeJS-Deployment
git push -u origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjro6xys6lqwsxchwk0w4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjro6xys6lqwsxchwk0w4.png" alt="Image description" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now go to the Actions tab, select Docker image, and click on Configure; then you can write the workflow .yml file.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjgnbb3uzmrhenquxt4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjgnbb3uzmrhenquxt4w.png" alt="Image description" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbk0vist3hyqsyj8ux0t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbk0vist3hyqsyj8ux0t.png" alt="Image description" width="800" height="548"&gt;&lt;/a&gt;&lt;br&gt;
Write your own workflow for your image, or use the code below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Name of the GitHub Actions workflow
name: CI/CD for Node.js REST-API

# Trigger this workflow on push events to the main branch and manual dispatch
on:
  push: 
    branches: [main]
  workflow_dispatch:

# Permissions needed for this workflow
permissions:
  contents: write

# Define the jobs to be executed
jobs:
  # Build job
  Build:
    # Use the latest Ubuntu runner
    runs-on: ubuntu-latest

    # Steps to be executed in the Build job
    steps:
      # Checkout the repository
      - name: Checkout repository
        uses: actions/checkout@v3 

      # Login to DockerHub using secrets for credentials
      - name: Login to DockerHub
        env:
          DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
          DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
        run: echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin

      # Build Docker image with a specific tag
      - name: Build Docker Image
        run: docker build -t bishal5438/rest-api .

      # Push the built image to DockerHub
      - name: Push to DockerHub
        run: docker push bishal5438/rest-api:latest 

  # Deploy job
  Deploy:
    # Use a self-hosted runner for deployment
    runs-on: self-hosted

    # Steps to be executed in the Deploy job
    steps:
      # Pull the latest Docker image from DockerHub
      - name: Pull the Docker Image
        run: docker pull bishal5438/rest-api:latest 

      # Delete the old container if it exists
      - name: Delete Old Container
        run: |
          if [ "$(docker ps -q -f name=rest-api-Container)" ]; then
            sudo docker rm -f rest-api-Container
          fi

      # Run a new container from the pulled image
      - name: Run the Container
        run: docker run -d -p 80:80 --name rest-api-Container bishal5438/rest-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now commit the changes, which creates the docker-image.yml workflow file inside .github/workflows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkedikn836tp9k10m6j85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkedikn836tp9k10m6j85.png" alt="Image description" width="800" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now, let’s configure GitHub secrets to store sensitive information such as Docker Hub’s username and password.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For this, go to the repository Settings.&lt;/p&gt;

&lt;p&gt;Click on Secrets and variables and select Actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz27jr2amoc8ge4i7hsna.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz27jr2amoc8ge4i7hsna.png" alt="Image description" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To add a new secret, click on New repository secret.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpvhz4q8efft0jrxw7ky.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpvhz4q8efft0jrxw7ky.png" alt="Image description" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add your Docker Hub username and password.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lz0geqjllma63z2x175.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lz0geqjllma63z2x175.png" alt="Image description" width="800" height="670"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Let’s configure AWS EC2.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Go to the AWS console and select EC2 service.&lt;/li&gt;
&lt;li&gt;Provide an instance name NodeJS-Deployment&lt;/li&gt;
&lt;li&gt;For the AMI option, select Ubuntu.&lt;/li&gt;
&lt;li&gt;Select an instance type based on your requirements. I am using a t2.micro instance.&lt;/li&gt;
&lt;li&gt;For the Key pair option, click on Create new key pair.&lt;/li&gt;
&lt;li&gt;Provide a key pair name and select RSA as key pair type.&lt;/li&gt;
&lt;li&gt;Select .pem as private key file format and click on Create key pair.&lt;/li&gt;
&lt;li&gt;Save the key securely as we need it to connect to our instance.&lt;/li&gt;
&lt;li&gt;Under Network Settings, click on Create security group.&lt;/li&gt;
&lt;li&gt;Then, allow http, https and ssh traffic.&lt;/li&gt;
&lt;li&gt;Select the required volume of storage.&lt;/li&gt;
&lt;li&gt;Finally click on Launch Instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F902j4mbrjpsu6n0ik6c5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F902j4mbrjpsu6n0ik6c5.png" alt="Image description" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Take the public IP of your instance and log in over SSH.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I am using Git Bash; use whichever terminal you prefer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We have now successfully logged in to our instance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fund4x78au8z02zra28v5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fund4x78au8z02zra28v5.png" alt="Image description" width="800" height="718"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To configure and run docker as a non-root user, use the following commands.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Farajtn29h0m7nfgqe4gv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Farajtn29h0m7nfgqe4gv.png" alt="Image description" width="800" height="623"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update 

sudo apt install docker.io -y 

#Add group named docker
sudo groupadd docker

#Add user to the docker group 
sudo usermod -aG docker $USER

#Reload group permissions  
newgrp docker

#Managing permission 
sudo chown -R $USER:docker /var/run/docker
sudo chown $USER:docker /var/run/docker.sock

#Restart docker service 
sudo systemctl restart docker 

#Auto start on boot 
 sudo systemctl enable docker

#To check docker status
 sudo systemctl status docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Now, let’s set up a self-hosted runner for our GitHub repository.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Go to repository Settings.&lt;/p&gt;

&lt;p&gt;Click on Actions and select Runners.&lt;/p&gt;

&lt;p&gt;Next, click on New self-hosted runner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrngpozp1mrq8e9ysni9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrngpozp1mrq8e9ysni9.png" alt="Image description" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select Linux as our EC2 instance is based on Ubuntu.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;GitHub will provide us the instructions to set up the runner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ollyicldii3ktz9rqfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ollyicldii3ktz9rqfj.png" alt="Image description" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy and paste all of the commands on the Ubuntu server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Download

# Create a folder
$ mkdir actions-runner &amp;amp;&amp;amp; cd actions-runnerCopied!
# Download the latest runner package
$ curl -o actions-runner-linux-x64-2.317.0.tar.gz -L https://github.com/actions/runner/releases/download/v2.317.0/actions-runner-linux-x64-2.317.0.tar.gzCopied!
# Optional: Validate the hash
$ echo "9e883d210df8c6028aff475475a457d380353f9d01877d51cc01a17b2a91161d  actions-runner-linux-x64-2.317.0.tar.gz" | shasum -a 256 -cCopied!
# Extract the installer
$ tar xzf ./actions-runner-linux-x64-2.317.0.tar.gzCopied!

Configure

# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/Consultantsrihari/NodeJS-Deployment --token BGHFMQ3HYQU6CX2VVFG73E3GUUQJKCopied!
# Last step, run it!
$ ./run.shCopied! 

Using your self-hosted runner
# Use this YAML in your workflow file for each job
runs-on: self-hosted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
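The shasum step above guards against a corrupted or tampered download by comparing the tarball's SHA-256 digest to the published value. The same check can be expressed in Python (a generic sketch for any file and expected digest, not part of the runner setup):

```python
import hashlib

def sha256_matches(path: str, expected_hex: str) -> bool:
    """Return True if the file's SHA-256 digest equals expected_hex."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large archives don't have to fit in memory
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex
```

If the digest does not match, discard the download instead of extracting it.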



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0xtklycuslpy842ln18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0xtklycuslpy842ln18.png" alt="Image description" width="800" height="866"&gt;&lt;/a&gt;&lt;br&gt;
If prompted for any inputs, you may press Enter for default values.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Once the setup is complete, we can see the active runner under the Runners tab in our GitHub repository.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupg2cnicuqrstmq1tde3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupg2cnicuqrstmq1tde3.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now, push a change to the remote repo to test the CI/CD pipeline. We can see successful and failed jobs under the same Actions tab.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funjg1p1evgabcn7lvsnf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funjg1p1evgabcn7lvsnf.png" alt="Image description" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Finally, we can access our application by visiting http://&amp;lt;EC2-public-IP&amp;gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj327rhqhvpc9s286mrzp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj327rhqhvpc9s286mrzp.png" alt="Image description" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
By integrating GitHub Actions with Docker and AWS, we can automate the CI/CD steps and ensure a smooth, reliable workflow for our Node.js application.&lt;/p&gt;

&lt;p&gt;You’re welcome! Have a great time ahead! Enjoy your day!&lt;/p&gt;

&lt;p&gt;Please Connect with me any doubts.&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="http://www.linkedin.com/in/" rel="noopener noreferrer"&gt;www.linkedin.com/in/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Mail: &lt;a href="mailto:sriharimalapati6@gmail.com"&gt;sriharimalapati6@gmail.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/Consultantsrihari" rel="noopener noreferrer"&gt;https://github.com/Consultantsrihari&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Thanks for reading! Sri Hari
&lt;/h4&gt;

&lt;p&gt;Github Actions&lt;br&gt;
Github&lt;br&gt;
Docker&lt;br&gt;
Devops Project&lt;br&gt;
DevOps&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
