<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Deepak Poudel</title>
    <description>The latest articles on DEV Community by Deepak Poudel (@poudeldipak).</description>
    <link>https://dev.to/poudeldipak</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F902011%2F418ee3e2-cf85-4775-b2ea-5ecb881ea242.jpg</url>
      <title>DEV Community: Deepak Poudel</title>
      <link>https://dev.to/poudeldipak</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/poudeldipak"/>
    <language>en</language>
    <item>
      <title>Automating ENI Failover with AWS Lambda + EventBridge</title>
      <dc:creator>Deepak Poudel</dc:creator>
      <pubDate>Tue, 16 Sep 2025 08:22:18 +0000</pubDate>
      <link>https://dev.to/poudeldipak/automating-eni-failover-with-aws-lambda-eventbridge-a24</link>
      <guid>https://dev.to/poudeldipak/automating-eni-failover-with-aws-lambda-eventbridge-a24</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnd7demrk8g5yqxb0t8e6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnd7demrk8g5yqxb0t8e6.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;*&lt;em&gt;Automating ENI Failover with AWS Lambda + EventBridge *&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Goal: Automatically detach one or more Elastic Network Interfaces (ENIs) from a primary EC2 instance and attach them to a secondary EC2 instance when the primary transitions to a stopped/terminated state. This guide walks you through the setup entirely from the AWS Management Console. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfnd6gg8a5vcymkjz9a5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfnd6gg8a5vcymkjz9a5.png" alt=" " width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Architecture overview&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;EventBridge receives an EC2 Instance State-change Notification for the primary instance (e.g. stopped, terminated).&lt;/li&gt;
&lt;li&gt;EventBridge triggers the Lambda function.&lt;/li&gt;
&lt;li&gt;Lambda calls EC2 APIs to detach the configured ENIs from the primary instance and attach them to the secondary instance.&lt;/li&gt;
&lt;/ol&gt;
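&lt;p&gt;The three numbered steps above boil down to a simple filter inside the Lambda handler. A minimal, standalone sketch of that decision logic (the instance IDs are placeholders, and &lt;code&gt;should_failover&lt;/code&gt; is an illustrative name, not from the article):&lt;/p&gt;

```python
# Primary-instance states that should trigger an ENI move.
RELEVANT_STATES = {"stopping", "stopped", "shutting-down", "terminated"}

def should_failover(event, primary_instance_id):
    """Return True when an EC2 state-change event warrants moving the ENIs."""
    detail = event.get("detail", {})
    return (detail.get("instance-id") == primary_instance_id
            and detail.get("state") in RELEVANT_STATES)

# EventBridge delivers the instance ID and new state under the "detail" key.
event = {"detail": {"instance-id": "i-0primary", "state": "stopped"}}
print(should_failover(event, "i-0primary"))    # True
print(should_failover(event, "i-0secondary"))  # False
```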

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6pzv6pzptmp0utigm6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6pzv6pzptmp0utigm6f.png" alt=" " width="312" height="651"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Prerequisites&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Primary and secondary EC2 instance IDs. &lt;/li&gt;
&lt;li&gt;ENI IDs you want to move (ENIs must be in the same Availability Zone as the target instance).&lt;/li&gt;
&lt;li&gt;IAM permissions to create roles, Lambda, and EventBridge rules.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Step 1 — Create the IAM role for Lambda&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the IAM Console → Roles → Create role. &lt;/li&gt;
&lt;li&gt;Select Trusted entity type: AWS service → Use case: Lambda → Next. &lt;/li&gt;
&lt;li&gt;Attach the managed policy AWSLambdaBasicExecutionRole (for logging). &lt;/li&gt;
&lt;li&gt;Click Next.&lt;/li&gt;
&lt;li&gt;Name the role lambda-eni-mover-role and create it.&lt;/li&gt;
&lt;li&gt;After creation, open the role and go to the Permissions tab → Add permissions → Create inline policy. &lt;/li&gt;
&lt;li&gt;Choose JSON editor and paste: &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0fk3kcnbf1c88ais6o9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0fk3kcnbf1c88ais6o9.png" alt=" " width="473" height="711"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DetachNetworkInterface",
        "ec2:AttachNetworkInterface"
      ],
      "Resource": "*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Save the policy with name LambdaEC2ENIPermissions. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Step 2 — Create the Lambda function&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the Lambda Console → Create function. &lt;/li&gt;
&lt;li&gt;Choose Author from scratch. &lt;/li&gt;
&lt;li&gt;Function name: eni-mover &lt;/li&gt;
&lt;li&gt;Runtime: Python 3.10 &lt;/li&gt;
&lt;li&gt;Execution role: Choose Use the existing role and pick lambda-eni-mover-role. &lt;/li&gt;
&lt;li&gt;Click Create function.&lt;/li&gt;
&lt;li&gt;Once created, scroll down to the Code Source editor. Replace the default code with the provided Python code (see below).&lt;/li&gt;
&lt;li&gt;In the Configuration tab → General configuration, set:
     Timeout: 3 minutes (180 seconds)
     Memory: 512 MB (adjustable)&lt;/li&gt;
&lt;li&gt;In the Configuration tab → Environment variables, add:
     PRIMARY_INSTANCE = your primary instance ID
     SECONDARY_INSTANCE = your secondary instance ID
     SECONDARY_ENIS = comma-separated ENI IDs (e.g., eni-abc123,eni-def456)&lt;/li&gt;
&lt;/ol&gt;
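&lt;p&gt;The sample function below hardcodes the instance and ENI IDs; if you would rather use the environment variables configured in step 9, a small helper like this can parse them (the helper name and IDs are illustrative, not from the article):&lt;/p&gt;

```python
import os

def load_failover_config(env=None):
    """Parse failover targets from Lambda environment variables.
    SECONDARY_ENIS is a comma-separated list, e.g. "eni-abc123,eni-def456"."""
    env = os.environ if env is None else env
    return {
        "primary": env.get("PRIMARY_INSTANCE", ""),
        "secondary": env.get("SECONDARY_INSTANCE", ""),
        "enis": [e.strip() for e in env.get("SECONDARY_ENIS", "").split(",") if e.strip()],
    }

# Example with placeholder values matching step 9:
cfg = load_failover_config({
    "PRIMARY_INSTANCE": "i-0primary",
    "SECONDARY_INSTANCE": "i-0secondary",
    "SECONDARY_ENIS": "eni-abc123, eni-def456",
})
print(cfg["enis"])  # ['eni-abc123', 'eni-def456']
```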

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2eo1v9kbnhvl56rto6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2eo1v9kbnhvl56rto6z.png" alt=" " width="693" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

ec2 = boto3.client('ec2')

PRIMARY_INSTANCE = "INSTANCE_ID_1"
SECONDARY_INSTANCE = "INSTANCE_ID_2"
SECONDARY_ENIS = [
    "ENI_ID_1",
    "ENI_ID_2"
]


def wait_for_available(eni_id, timeout=180):
    start = time.time()
    while time.time() - start &amp;lt; timeout:
        eni = ec2.describe_network_interfaces(
            NetworkInterfaceIds=[eni_id])['NetworkInterfaces'][0]
        if eni['Status'] == 'available':
            print(f"ENI {eni_id} is now available.")
            return True
        time.sleep(3)
    raise TimeoutError(f"ENI {eni_id} did not become available in {timeout} seconds")


def wait_for_in_use(eni_id, timeout=180):
    start = time.time()
    while time.time() - start &amp;lt; timeout:
        eni = ec2.describe_network_interfaces(
            NetworkInterfaceIds=[eni_id])['NetworkInterfaces'][0]
        if eni['Status'] == 'in-use':
            print(f"ENI {eni_id} is now attached.")
            return True
        time.sleep(3)
    raise TimeoutError(f"ENI {eni_id} did not attach in {timeout} seconds")


def move_eni_to_secondary(eni_id, device_index):
    eni = ec2.describe_network_interfaces(
        NetworkInterfaceIds=[eni_id])['NetworkInterfaces'][0]
    attachment = eni.get('Attachment')
    if attachment:
        print(f"Detaching ENI {eni_id} from {attachment['InstanceId']}...")
        ec2.detach_network_interface(AttachmentId=attachment['AttachmentId'], Force=True)
        wait_for_available(eni_id)
    else:
        print(f"ENI {eni_id} already detached.")
    print(f"Attaching ENI {eni_id} to secondary at DeviceIndex {device_index}...")
    ec2.attach_network_interface(NetworkInterfaceId=eni_id,
                                 InstanceId=SECONDARY_INSTANCE,
                                 DeviceIndex=device_index)
    wait_for_in_use(eni_id)
    return eni_id


def lambda_handler(event, context):
    print("Event received:", event)
    detail = event.get("detail", {})
    if detail.get("instance-id") != PRIMARY_INSTANCE:
        print("Event not for primary instance. Skipping.")
        return {"status": "skipped - not primary"}
    if detail.get("state") not in ["stopping", "stopped", "shutting-down", "terminated"]:
        print("Instance state not relevant. Skipping.")
        return {"status": "skipped - irrelevant state"}

    # Collect current device indices on secondary
    existing_indexes = []
    response = ec2.describe_instances(InstanceIds=[SECONDARY_INSTANCE])
    for iface in response['Reservations'][0]['Instances'][0]['NetworkInterfaces']:
        existing_indexes.append(iface['Attachment']['DeviceIndex'])

    # Prepare device indices for each ENI
    device_indices = []
    next_index = 1
    for _ in SECONDARY_ENIS:
        while next_index in existing_indexes:
            next_index += 1
        device_indices.append(next_index)
        existing_indexes.append(next_index)
        next_index += 1

    # Move ENIs in parallel
    results = []
    with ThreadPoolExecutor(max_workers=len(SECONDARY_ENIS)) as executor:
        future_to_eni = {executor.submit(move_eni_to_secondary, eni, idx): eni
                         for eni, idx in zip(SECONDARY_ENIS, device_indices)}
        for future in as_completed(future_to_eni):
            eni_id = future_to_eni[future]
            try:
                result = future.result()
                print(f"{result} moved successfully.")
                results.append(result)
            except Exception as e:
                print(f"Error moving {eni_id}: {e}")
    return {"status": "completed", "moved": results}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
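&lt;p&gt;Before wiring up EventBridge, you can exercise the function from the Lambda console's Test tab. A minimal test event mimicking the EC2 state-change notification, containing only the fields the handler reads (the instance ID is a placeholder):&lt;/p&gt;

```json
{
  "source": "aws.ec2",
  "detail-type": "EC2 Instance State-change Notification",
  "detail": {
    "instance-id": "INSTANCE_ID_1",
    "state": "stopped"
  }
}
```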

&lt;p&gt;&lt;strong&gt;10. Click Deploy.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Step 3 — Create the EventBridge Rule&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the Amazon EventBridge Console → Rules → Create rule.&lt;/li&gt;
&lt;li&gt;Name: eni-mover-primary-stop.&lt;/li&gt;
&lt;li&gt;Rule type: Rule with event pattern.&lt;/li&gt;
&lt;li&gt;Event pattern → Custom pattern (JSON editor) → paste the event pattern JSON.&lt;/li&gt;
&lt;li&gt;Next, add a Target → choose Lambda function → select eni-mover.&lt;/li&gt;
&lt;li&gt;Create the rule. EventBridge will now automatically invoke the Lambda.&lt;/li&gt;
&lt;/ol&gt;
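&lt;p&gt;The event pattern JSON itself is not reproduced in this feed; a pattern consistent with the states the Lambda checks would look like the following (scope it to your primary instance ID, shown here as a placeholder):&lt;/p&gt;

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["stopping", "stopped", "shutting-down", "terminated"],
    "instance-id": ["INSTANCE_ID_1"]
  }
}
```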

&lt;p&gt;#aws #awscommunitybuilder #awscommunitynepal #communitybuilder&lt;/p&gt;

&lt;p&gt;Public link: &lt;a href="https://awsclassstudy.s3.us-east-1.amazonaws.com/Automating+ENI+Failover.pdf" rel="noopener noreferrer"&gt;https://awsclassstudy.s3.us-east-1.amazonaws.com/Automating+ENI+Failover.pdf&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>Robust Observability for your AWS Resources with New Relic</title>
      <dc:creator>Deepak Poudel</dc:creator>
      <pubDate>Mon, 02 Dec 2024 11:30:45 +0000</pubDate>
      <link>https://dev.to/poudeldipak/robust-observability-for-your-aws-resources-with-new-relic-193b</link>
      <guid>https://dev.to/poudeldipak/robust-observability-for-your-aws-resources-with-new-relic-193b</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsef68ulhohey10eei7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsef68ulhohey10eei7r.png" alt="Image description" width="624" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Monitoring AWS Resources with New Relic for Enhanced Observability
&lt;/h1&gt;

&lt;p&gt;In this blog, we’ll discuss how your AWS resources can be monitored using a comprehensive monitoring and observability platform to improve application performance, optimize infrastructure usage, and enhance the overall user experience. By leveraging New Relic's suite of tools, you can gain deep insights into your system's behavior, quickly identify and resolve issues, and make data-driven decisions for future enhancements.&lt;/p&gt;

&lt;p&gt;The primary objective of selecting New Relic as a monitoring platform is to ensure optimal cloud infrastructure performance and resource utilization. It aims to enhance application speed and reliability by identifying and resolving bottlenecks. The platform facilitates streamlined troubleshooting by aggregating and analyzing logs from AWS ECS, Amazon CloudWatch, and other services. Additionally, it enables proactive alerting to notify teams of anomalies and critical incidents while offering insights into user behavior to improve real user experience.&lt;/p&gt;




&lt;h2&gt;
  
  
  Leveraging AWS and Its Limitations
&lt;/h2&gt;

&lt;p&gt;While AWS Monitoring tools provide infrastructure monitoring, they fall short in several key areas crucial for smooth application performance:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Visibility:&lt;/strong&gt; AWS CloudWatch primarily focuses on infrastructure metrics, lacking detailed application performance insights such as distributed tracing and code-level troubleshooting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Siloed Data:&lt;/strong&gt; Correlating application behavior with underlying infrastructure metrics from CloudWatch can be challenging due to siloed data storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Alert Fatigue:&lt;/strong&gt; CloudWatch's generic alerts can lead to alert fatigue, potentially causing critical issues to be missed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited User Experience (UX) Insights:&lt;/strong&gt; Understanding how real users experience applications on AWS is difficult with CloudWatch alone.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxcrecod00ickx90grdl.png" alt="Image description" width="624" height="283"&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features of New Relic
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Application Performance Monitoring (APM)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
New Relic's APM provides comprehensive insights into application performance:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Detailed Metrics:&lt;/strong&gt; Monitoring response times, throughput, error rates, and Apdex scores.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transaction Tracing:&lt;/strong&gt; Tracing individual transactions to identify bottlenecks.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Analysis:&lt;/strong&gt; Automatic capture and analysis of errors for quick resolution.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database Monitoring:&lt;/strong&gt; Monitoring database performance and query execution times.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Maps:&lt;/strong&gt; Visualizing service dependencies and interactions in real-time.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Infrastructure Monitoring&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
New Relic Infrastructure offers real-time monitoring for servers and cloud environments:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resource Utilization:&lt;/strong&gt; Tracking CPU, memory, disk I/O, and network usage.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Integration:&lt;/strong&gt; Monitoring AWS services like EC2, RDS, S3, and Lambda.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Health Metrics:&lt;/strong&gt; Providing health metrics and alerts for uptime and performance.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration Management:&lt;/strong&gt; Tracking configuration changes and their performance impact.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cluster Monitoring:&lt;/strong&gt; Monitoring containerized environments like Kubernetes and Docker.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Browser Monitoring / Real User Monitoring (RUM)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Tracks user interactions with your application in real-time:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance Metrics:&lt;/strong&gt; Measuring page load times and user interaction timings.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User Sessions:&lt;/strong&gt; Analyzing user behavior patterns and performance issues.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Geographical Insights:&lt;/strong&gt; Identifying performance variations across locations.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Browser Performance:&lt;/strong&gt; Tracking across browsers and devices.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session Traces:&lt;/strong&gt; Drilling into individual sessions to identify issues.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Synthetic Monitoring&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Simulates user transactions to proactively identify performance issues:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scripted Browsers:&lt;/strong&gt; Creating synthetic scripts for user interactions.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global Testing:&lt;/strong&gt; Testing from multiple geographic locations.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance Benchmarks:&lt;/strong&gt; Benchmarking against predefined SLAs.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alerting:&lt;/strong&gt; Alerting on performance deviations and downtime.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uptime Monitoring:&lt;/strong&gt; Ensuring application availability with regular checks.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Logs Management&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Aggregates and analyzes log data for troubleshooting and optimization:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Log Aggregation:&lt;/strong&gt; Collecting logs from applications and infrastructure.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search and Query:&lt;/strong&gt; Performing advanced searches on log data.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time Streaming:&lt;/strong&gt; Streaming log data for immediate insights.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Correlation:&lt;/strong&gt; Correlating logs with performance metrics for holistic analysis.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alerting:&lt;/strong&gt; Setting alerts for specific log patterns or anomalies.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Dashboards and Alerts&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Customizable dashboards and alerting systems to monitor metrics and notify teams:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Custom Dashboards:&lt;/strong&gt; Visualizing key metrics and KPIs.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre-built Dashboards:&lt;/strong&gt; Utilizing pre-built dashboards for common use cases.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alerting Policies:&lt;/strong&gt; Defining alert policies for critical metrics.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notification Channels:&lt;/strong&gt; Integrating with email, Slack, PagerDuty, etc.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Incident Management:&lt;/strong&gt; Tracking incidents and resolutions.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Distributed Tracing&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Visualizes request flows across services to pinpoint failures:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Trace Requests:&lt;/strong&gt; Tracing the journey of requests through services.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Latency Analysis:&lt;/strong&gt; Identifying latency at each hop.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Maps:&lt;/strong&gt; Visualizing service interactions and dependencies.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Tracking:&lt;/strong&gt; Tracking errors across distributed services.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trace Sampling:&lt;/strong&gt; Managing trace data volume through adaptive sampling.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Mobile Monitoring&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Provides insights into mobile application performance:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Crash Reporting:&lt;/strong&gt; Automatic capture of application crashes.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance Metrics:&lt;/strong&gt; Monitoring app launch times and network requests.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User Interaction Traces:&lt;/strong&gt; Tracing interactions and their impact on performance.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Device Insights:&lt;/strong&gt; Analyzing performance across devices and OS versions.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Serverless Monitoring&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Tracks serverless function performance and usage:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Lambda Integration:&lt;/strong&gt; Monitoring Lambda functions with real-time metrics.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Function Tracing:&lt;/strong&gt; Tracing invocations and dependencies.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Monitoring:&lt;/strong&gt; Tracking execution costs and usage.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cold Start Analysis:&lt;/strong&gt; Identifying cold start issues.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event Tracking:&lt;/strong&gt; Monitoring events triggering functions.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  How You Can Adopt New Relic
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Phase 1: Initial Assessment
&lt;/h3&gt;

&lt;p&gt;Identify critical applications and infrastructure components that require monitoring and define performance metrics. This phase involves conducting a thorough assessment to identify critical applications and AWS infrastructure components that require monitoring. Stakeholders will collaborate to define performance metrics and proactive alerting thresholds aligned with business objectives. Leveraging New Relic’s pre-built integrations for AWS services like ECS, Lambda, and RDS aims to streamline integration and enhance visibility into system performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 2: Configuration and Integration
&lt;/h3&gt;

&lt;p&gt;Set up New Relic accounts, deploy monitoring agents, and integrate with AWS CloudWatch logs. During this phase, the focus is on setting up New Relic accounts tailored to organizational structure and operational needs. Configuration of monitoring agents across applications and infrastructure will ensure comprehensive data collection. Deployment of browser and synthetic monitoring capabilities will provide real-time insights into user interactions and simulate user journeys for proactive issue detection. Integration with AWS CloudWatch logs will centralize log management for efficient troubleshooting and incident response.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 3: Dashboard and Alert Setup
&lt;/h3&gt;

&lt;p&gt;Develop customized dashboards and set up alerting policies based on predefined thresholds. Customized dashboards within New Relic will be developed in this phase to visualize key metrics and KPIs essential for monitoring application performance, infrastructure health, and user experience. These dashboards will facilitate informed decision-making and enhance operational transparency. Setting up alerting policies based on predefined thresholds will ensure timely notifications via email, Slack, or PagerDuty, enabling swift responses to performance anomalies and security incidents.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 4: Continuous Monitoring and Optimization
&lt;/h3&gt;

&lt;p&gt;Analyze performance data, refine dashboards, and enhance operational efficiency over time. The final phase focuses on establishing a cycle of continuous improvement by analyzing application performance data to identify optimization opportunities. Refinement of dashboards and alerting mechanisms based on usage patterns and feedback will maintain relevance and effectiveness. The goal is to leverage New Relic’s insights to enhance application performance, optimize resource utilization, and strengthen overall operational efficiency over time.&lt;/p&gt;




&lt;h2&gt;
  
  
  Benefits
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Unified View:&lt;/strong&gt; New Relic eliminates the need for context switching between CloudWatch and separate APM tools by providing a unified view of application performance and infrastructure metrics. This integration simplifies monitoring workflows, enhances efficiency, and facilitates quick correlation of data across different layers of your infrastructure. Improved collaboration among teams further enhances operational effectiveness.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Increased Application Performance:&lt;/strong&gt; New Relic's advanced APM capabilities, including distributed tracing and code-level profiling, swiftly identify performance bottlenecks that impact application responsiveness. By analyzing detailed metrics and transaction traces, teams can efficiently resolve issues and proactively prevent regressions with synthetic monitoring. This proactive approach ensures a seamless user experience during deployments and under varying workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced Downtime:&lt;/strong&gt; With New Relic's customizable alerting system and real-time monitoring capabilities, teams receive early warnings about critical metrics and infrastructure health. This proactive monitoring enables prompt mitigation of potential issues before they escalate into outages, ensuring uninterrupted business operations. Enhanced visibility into infrastructure health and centralized log management further accelerates incident resolution, minimizing downtime impact.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimized Resource Usage:&lt;/strong&gt; New Relic provides granular insights into resource consumption across AWS environments, facilitating informed decisions on resource optimization. By identifying and addressing resource bottlenecks, teams optimize infrastructure efficiency and implement cost-saving strategies like AWS Reserved Instances or Auto Scaling. This approach maximizes the value of AWS investments while maintaining optimal performance. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced User Satisfaction:&lt;/strong&gt; Improves user experience through RUM insights. Proactively resolving application performance issues with New Relic enhances user experience by ensuring faster response times and improved reliability. Real User Monitoring (RUM) insights into user interactions empower teams to prioritize enhancements that directly impact user satisfaction and retention. Data-driven decisions based on user behavior analytics further optimize user experience, fostering loyalty and engagement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proactive Issue Resolution:&lt;/strong&gt; Customizable alerts for critical performance issues. New Relic's early anomaly detection and comprehensive visibility into application performance enable proactive issue resolution. Customizable alerts notify teams of critical performance issues or potential security threats, enabling swift action to mitigate risks and minimize downtime. Streamlined troubleshooting with centralized log management and distributed tracing ensures quicker identification and resolution of root causes, improving overall incident management efficiency.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Integrating New Relic with AWS enhances monitoring by addressing CloudWatch’s limitations, providing deep insights into application performance, and improving user satisfaction with proactive issue resolution. Start for free today and unlock better observability.&lt;/p&gt;

&lt;p&gt;#aws #awscommunitybuilder #awscommunitynepal #communitybuilder&lt;/p&gt;

</description>
      <category>awscommunitybuilder</category>
      <category>aws</category>
    </item>
    <item>
      <title>Event Driven Shared Drive in AWS</title>
      <dc:creator>Deepak Poudel</dc:creator>
      <pubDate>Fri, 15 Nov 2024 10:25:14 +0000</pubDate>
      <link>https://dev.to/poudeldipak/event-driven-shared-drive-in-aws-407l</link>
      <guid>https://dev.to/poudeldipak/event-driven-shared-drive-in-aws-407l</guid>
      <description>&lt;p&gt;This blog outlines the architecture and setup for Network File Sharing, and Event-Driven Processing using AWS services. The primary components include an Amazon EC2 instance configured as an FTP server with Amazon EFS mounted for shared storage, network file sharing enabled via SAMBA, and event-driven processing handled through AWS Lambda functions triggered by AWS EventBridge. The objective is to create a scalable, secure, and automated environment for file storage, sharing, and processing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftky5udh83or36pypea68.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftky5udh83or36pypea68.png" alt="Image description" width="800" height="623"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig: AWS Architecture Diagram&lt;/p&gt;

&lt;p&gt;Create an EC2 instance and mount an NFS drive&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**Mount EFS on the EC2 Instance**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;SSH into the EC2 instance and install NFS utilities:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`sudo yum install -y amazon-efs-utils`
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a directory for mounting the EFS:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`sudo yum install -y amazon-efs-utils`
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Mount the EFS using the file system ID:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`sudo mount -t efs -o tls fs-XXXXXXXX:/ /mnt/efs`
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add an entry to /etc/fstab to ensure EFS is remounted on reboot:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`echo "fs-XXXXXXXX:/ /mnt/efs efs _netdev,tls 0 0" | sudo tee -a /etc/fstab`
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check if the EFS is successfully mounted:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`df -h`
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Then set up Samba so that Windows devices can directly map a network drive&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Install SAMBA on the EC2 Instance:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo yum install -y samba samba-client samba-common&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Backup the default SAMBA configuration file:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.bak
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit the SAMBA Configuration:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/samba/smb.conf&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The following settings are configured under the [global] section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[global]
workgroup = WORKGROUP
server string = Samba Server
netbios name = ftp-server
security = user
map to guest = bad user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following share configuration is added to allow Windows clients to access the EFS directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[EFS_Share]
path = /mnt/efs
browseable = yes
writable = yes
guest ok = yes
create mask = 0755
directory mask = 0755
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Start the SAMBA services to apply the configuration:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start smb
sudo systemctl start nmb
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enable SAMBA services to start on boot:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable smb
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
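&lt;p&gt;Once the services are up, a Windows client can map the guest share directly. A hedged sketch (SERVER is a placeholder for the instance's private IP or DNS name, and TCP port 445 must be open in the security group):&lt;/p&gt;

```shell
# From a Windows command prompt: map the guest share as drive Z:
net use Z: \\SERVER\EFS_Share /persistent:yes

# From the instance itself, verify the share is exported (anonymous listing):
smbclient -L //localhost -N
```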

&lt;p&gt;Now, when a file is uploaded to the server over FTP or via Samba, we need a watcher script that detects changes and sends new files for processing:&lt;br&gt;
&lt;code&gt;#!/bin/bash&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Set variables
SOURCE_DIR="/mnt/efs/fs1"
S3_BUCKET="s3://backup-efs-ftp-bucketffa/"
LOG_FILE="/home/ec2-user/upload_to_s3.log"
DEBOUNCE_DELAY=30  # Delay in seconds for file stability check

# Function to log messages
log_message() {
   echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
}

# Function to check if file size is stable and not locked
is_file_stable() {
   local file="$1"
   local prev_size=$(stat -c%s "$file")
   log_message "Initial size of '$file': $prev_size bytes"


   # Sleep for the debounce delay
   sleep "$DEBOUNCE_DELAY"


   local new_size=$(stat -c%s "$file")
   log_message "Size of '$file' after sleep: $new_size bytes"


   # Check if the file size is stable
   if [ "$prev_size" -eq "$new_size" ]; then
        log_message "Size of '$file' did not change during the stability check."
       # Now check if the file is locked
       if lsof "$file" &amp;amp;&amp;gt;/dev/null; then
           log_message "File '$file' is locked after stability check."
           return 1  # File is locked
       else
           log_message "File '$file' is stable and not locked."
           return 0  # File is stable and not locked
       fi
   else
       log_message "File '$file' size changed during stability check."
       return 1  # File is still changing
   fi
}

# Function to upload file to S3
upload_to_s3() {
   local file="$1"
   local full_path="$SOURCE_DIR/$file"


   # Check if the file exists
   if [ ! -f "$full_path" ]; then
       log_message "File '$full_path' does not exist. Skipping upload."
       return
   fi


   # Ensure the file size is stable and not locked
   if ! is_file_stable "$full_path"; then
       log_message "File '$full_path' is still changing or locked. Delaying processing."
       return
   fi


   # Create destination path for S3
   local s3_path="${S3_BUCKET}${file}"

   # Upload file to S3
   log_message "Attempting to upload '$full_path' to S3 path '$s3_path'..."
   if aws s3 cp "$full_path" "$s3_path" --acl bucket-owner-full-control; then
       log_message "Successfully uploaded '$file' to S3"
   else
       log_message "Failed to upload '$file' to S3. Error code: $?"
   fi
}

# Main loop to monitor directory recursively
log_message "Starting to monitor '$SOURCE_DIR' for new files..."
inotifywait -m -r --format '%w%f' -e close_write -e moved_to "$SOURCE_DIR" |
while read -r full_path; do
   # Clean up the filename to remove unwanted characters
   clean_filename=$(basename "$full_path")

   # Debugging information
   echo "Detected full path: '$full_path'"
   echo "Cleaned filename: '$clean_filename'"

   # Log detected file
   log_message "Detected new file: '$full_path'"

   # Ignore temporary or partial files
   if [[ "$clean_filename" != .* ]] &amp;amp;&amp;amp; [[ "$clean_filename" != *.part ]] &amp;amp;&amp;amp; [[ "$clean_filename" != *.tmp ]]; then
       # Wait for the debounce delay before uploading
       if is_file_stable "$full_path"; then
           upload_to_s3 "${full_path#$SOURCE_DIR/}"  # Remove SOURCE_DIR from the path
       else
           log_message "File '$full_path' is still locked or changing. Ignoring this upload attempt."
       fi
   else
       log_message "Ignoring temporary or partial file: '$full_path'"
   fi
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
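&lt;p&gt;One detail the script above assumes: &lt;code&gt;inotifywait&lt;/code&gt; is not installed by default. A hedged sketch of installing it and keeping the watcher running across reboots (the unit name and script path are illustrative assumptions):&lt;/p&gt;

```shell
# inotifywait comes from the inotify-tools package
sudo yum install -y inotify-tools

# A minimal systemd unit so the watcher survives reboots and crashes
printf '%s\n' \
  '[Unit]' \
  'Description=EFS to S3 upload watcher' \
  'After=network-online.target remote-fs.target' \
  '' \
  '[Service]' \
  'ExecStart=/home/ec2-user/upload_to_s3.sh' \
  'Restart=always' \
  'User=ec2-user' \
  '' \
  '[Install]' \
  'WantedBy=multi-user.target' \
  | sudo tee /etc/systemd/system/efs-watcher.service

sudo systemctl daemon-reload
sudo systemctl enable --now efs-watcher
```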



&lt;p&gt;Now let’s move forward to the event-driven architecture.&lt;/p&gt;

&lt;p&gt;In our case let’s unzip uploaded files if they are zipped.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpwgow63t16q0s7sy46e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpwgow63t16q0s7sy46e.png" alt="Image description" width="743" height="1061"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the AWS Lambda function that is triggered on each file upload; it reads the bucket and key from the S3 event and launches an ECS task to process the uploaded object:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
from handler_run_task import run_ecs_task

s3 = boto3.client('s3')

def lambda_handler(event, context):
    try:
        # Get bucket name and object key from the event
        source_bucket = event['Records'][0]['s3']['bucket']['name']
        object_key = event['Records'][0]['s3']['object']['key']

        # Define ECS cluster and task details
        cluster_name = 'unzip-test-cluster'  # Replace with your ECS cluster name
        task_family = 'ple-family'  # Replace with your ECS task family
        container_name = 'test-container'  # Replace with your container name

        # Define the overrides for the ECS task
        overrides = {
            'environment': [
                {
                    "name": "BUCKET_NAME",
                    "value": source_bucket
                },
                {
                    "name": "KEY",
                    "value": object_key
                }
            ]
        }

        # Run ECS Task
        ecs_response = run_ecs_task(
            cluster_name,
            task_family,
            container_name,
            overrides,
            source_bucket,
            object_key
        )

        return {
            'statusCode': 200,
            'body': json.dumps('ECS task triggered successfully!')
        }

    except Exception as e:
        print(f"Error triggering ECS task: {e}")
        return {
            'statusCode': 500,
            'body': json.dumps(f"Error triggering ECS task: {str(e)}")
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**Let’s Create a handler when the previous lambda gives, json with files to process. The json will contain keys and bucket from which to unzip and the image to unzip the files**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The ECS task created by the Lambda function handles the unzipping of files. If a zipped file is found, the task unzips it and uploads the extracted contents to a target S3 bucket. If the upload fails, the event is routed to a Dead Letter Queue (DLQ). Non-ZIP files are copied directly to the target bucket.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ECS Configuration&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cluster Name&lt;/strong&gt;: unzip-test-cluster&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task Family&lt;/strong&gt;: ple-family&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Container&lt;/strong&gt;: test-container&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Launch Type&lt;/strong&gt;: Fargate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Task Execution:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Retrieves the latest task definition for the given family.&lt;/li&gt;
&lt;li&gt;Executes the task with environment variables passed by the Lambda function.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Handler_run_task.py&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

def run_ecs_task(cluster_name, task_family, container_name, overrides, source_bucket, object_key):
    ecs_client = boto3.client('ecs')

    # Get the latest task definition for the given family
    response = ecs_client.list_task_definitions(
        familyPrefix=task_family,
        sort='DESC',
        maxResults=1
    )

    latest_task_definition = response['taskDefinitionArns'][0]
    print("Printing Latest task def")
    print(latest_task_definition)

    # Run the ECS task with the latest task definition
    response = ecs_client.run_task(
        cluster=cluster_name,
        taskDefinition=latest_task_definition,
        overrides={
            'containerOverrides': [
                {
                    'name': container_name,
                    # 'cpu': overrides.get('cpu', 512),  # Default CPU to 512
                    # 'memory': overrides.get('memory', 1024),  # Default memory to 1024 MiB
                    'environment': overrides.get('environment', [])
                }
            ]
        },
        networkConfiguration={
            'awsvpcConfiguration': {
                'subnets': ['subnet-089f9162bd2913570', 'subnet-05591da28974513ee', 'subnet-0732585a95fcd1b64'],  # Replace with your subnet ID
                'assignPublicIp': 'ENABLED'  # or 'DISABLED' depending on your network setup
            }
        },
        launchType='FARGATE',  # Or 'EC2', depending on your setup
        count=1
    )

    return response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**Now let’s create a service linked role.**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;An IAM role is created to grant the Lambda function and ECS tasks the necessary permissions to access other AWS resources, such as S3, ECS, and CloudWatch Logs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Key Policies&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log creation and event publishing to CloudWatch&lt;/li&gt;
&lt;li&gt;S3 bucket access (GetObject, ListBucket, PutObject)&lt;/li&gt;
&lt;li&gt;ECS task execution and description&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;IAM role passing&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
"Version": "2012-10-17",
"Statement": [
{
    "Effect": "Allow",
    "Action": "logs:CreateLogGroup",
    "Resource": "arn:aws:logs:&amp;lt;aws_region&amp;gt;:&amp;lt;account_id&amp;gt;:*"
},
{
    "Effect": "Allow",
    "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
    ],
    "Resource": "arn:aws:logs:&amp;lt;aws_region&amp;gt;:&amp;lt;account_id&amp;gt;:log-group:/aws/lambda/&amp;lt;lambda_function_name&amp;gt;:*"
},
{
    "Effect": "Allow",
    "Action": [
        "s3:GetObject",
        "s3:ListBucket"
    ],
    "Resource": [
        "arn:aws:s3:::&amp;lt;source_bucket_name&amp;gt;",
        "arn:aws:s3:::&amp;lt;source_bucket_name&amp;gt;/*"
    ]
},
{
    "Effect": "Allow",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::&amp;lt;target_bucket_name&amp;gt;/*"
},
{
    "Effect": "Allow",
    "Action": "iam:PassRole",
    "Resource": "arn:aws:iam::&amp;lt;account_id&amp;gt;:role/*"
},
{
    "Effect": "Allow",
    "Action": [
        "ecs:RegisterTaskDefinition",
        "ecs:DescribeTaskDefinition"
    ],
    "Resource": "*"
}
]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**Dead-letter Queue (DLQ) Mechanism**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The DLQ captures and logs events where the ECS task fails to process the uploaded file correctly. This mechanism ensures that errors are captured and stored for subsequent analysis or reprocessing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Failure Handling&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Condition&lt;/strong&gt;: A failure is identified if the ECS task returns an HTTP status code other than 200.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DLQ Logging&lt;/strong&gt;: The failed event, including file details and error messages, is sent to the DLQ. The DLQ serves as a reliable storage for these failed events, ensuring no data is lost.&lt;/li&gt;
&lt;/ul&gt;
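&lt;p&gt;The DLQ messages in this setup are plain strings; a structured JSON body makes later analysis and replay easier. A minimal sketch, assuming a hypothetical &lt;code&gt;build_dlq_payload&lt;/code&gt; helper (the field names are illustrative, not part of the original setup):&lt;/p&gt;

```python
import json
from datetime import datetime, timezone

def build_dlq_payload(bucket: str, key: str, error: str) -> str:
    """Serialize a failed-processing event so a consumer can replay it later."""
    return json.dumps({
        "bucket": bucket,
        "key": key,
        "error": error,
        "failed_at": datetime.now(timezone.utc).isoformat(),
    })

# A consumer reading the DLQ can recover the original object location:
payload = build_dlq_payload("backup-efs-ftp-bucketffa", "inbox/report.zip", "unzip failed")
event = json.loads(payload)
```

&lt;p&gt;The payload would then be passed as the &lt;code&gt;--message-body&lt;/code&gt; when sending to SQS.&lt;/p&gt;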

&lt;p&gt;Now let’s create the ECS container image. Here is the unzip script:&lt;br&gt;
&lt;code&gt;#!/bin/bash&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Environment variables
SRC_BUCKET_NAME="$BUCKET_NAME"
SRC_BUCKET_KEY="$KEY"
DEST_BUCKET_NAME="backup-efs-ftp-ple-unzipped"
DLQ_URL="https://sqs.us-east-1.amazonaws.com/654654543848/unzip-failed-processing-queue"
OUTPUT_DIR="./output"
LOCAL_FILE_PATH="./$(basename "$SRC_BUCKET_KEY")"

# Function to log messages
log_message() {
   echo "$(date '+%Y-%m-%d %H:%M:%S') - $1"
}

# Function to download a file from S3
download_file() {
   log_message "Downloading $SRC_BUCKET_KEY from $SRC_BUCKET_NAME"
   aws s3 cp "s3://$SRC_BUCKET_NAME/$SRC_BUCKET_KEY" "$LOCAL_FILE_PATH"
   if [ $? -ne 0 ]; then
       log_message "Error downloading file from S3"
       send_to_dlq "Error downloading file from S3"
       exit 1
   fi
}

# Function to upload a file to S3
upload_to_s3() {
   local file_path="$1"
   local s3_key="$2"
   log_message "Uploading $file_path to s3://$DEST_BUCKET_NAME/$s3_key"
   aws s3 cp "$file_path" "s3://$DEST_BUCKET_NAME/$s3_key" --acl bucket-owner-full-control
   if [ $? -ne 0 ]; then
       log_message "Failed to upload $file_path to S3"
       send_to_dlq "Failed to upload $file_path to S3"
       exit 1
   fi
   invoke_load_balancer "$s3_key"
}

# Function to send a message to the DLQ
send_to_dlq() {
   local message="$1"
   log_message "Sending message to DLQ: $message"
   aws sqs send-message --queue-url "$DLQ_URL" --message-body "$message"
   if [ $? -ne 0 ]; then
       log_message "Failed to send message to DLQ"
       exit 1
   fi
}

# Function to invoke load balancer
invoke_load_balancer() {
   local s3_key="$1"
   log_message "Invoking load balancer for $s3_key"
   local payload=$(jq -n \
       --arg bucket "$DEST_BUCKET_NAME" \
       --arg key "$s3_key" \
       '{bucket: $bucket, key: $key, filePath: $key}')
   local response=$(curl -s -X POST "https://asdlb.mydomain.com/companyx/gateway/listenFTPWebhook" \
       -H "Content-Type: application/json" \
       -d "$payload")

   local status_code=$(echo "$response" | jq -r '.status')
   if [ "$status_code" != "200" ]; then
       log_message "Load balancer invocation failed with status code $status_code"
       send_to_dlq "Load balancer invocation failed with status code $status_code"
   else
       log_message "Load balancer invocation successful"
   fi
}

# Function to extract ZIP files
extract_and_process_files() {
   log_message "Extracting ZIP file $LOCAL_FILE_PATH"
   mkdir -p "$OUTPUT_DIR"
   unzip -o "$LOCAL_FILE_PATH" -d "$OUTPUT_DIR"
   if [ $? -ne 0 ]; then
       log_message "Failed to extract ZIP file"
       send_to_dlq "Failed to extract ZIP file"
       exit 1
   fi

   for file in "$OUTPUT_DIR"/*; do
       local s3_key=$(dirname "$SRC_BUCKET_KEY")/$(basename "$file")
       log_message "Processing file $file"
       upload_to_s3 "$file" "$s3_key"
   done
}

# Main process
main() {
   log_message "Starting processing for $SRC_BUCKET_KEY from $SRC_BUCKET_NAME"
   download_file

   if [[ "$LOCAL_FILE_PATH" == *.zip ]]; then
       extract_and_process_files
   else
       local s3_key=$(dirname "$SRC_BUCKET_KEY")/$(basename "$LOCAL_FILE_PATH")
       upload_to_s3 "$LOCAL_FILE_PATH" "$s3_key"
   fi

   log_message "Processing complete"
}

main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here’s the Dockerfile for the same:&lt;br&gt;
&lt;code&gt;# Use the official lightweight Debian image as the base&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM debian:bookworm-slim

# Set the working directory
WORKDIR /usr/src/app

# Set non-interactive mode to avoid prompts during package installation
ENV DEBIAN_FRONTEND=noninteractive

# Install necessary tools with cache cleanup
RUN apt-get update &amp;amp;&amp;amp; \
   apt-get install -y --no-install-recommends \
   bash \
   curl \
   unzip \
   awscli &amp;amp;&amp;amp; \
   apt-get clean &amp;amp;&amp;amp; \
   rm -rf /var/lib/apt/lists/*

# Copy your shell script into the container
COPY unzip.sh /usr/src/app/

# Make the shell script executable
RUN chmod +x /usr/src/app/unzip.sh

# Build-time args (at runtime, ECS supplies BUCKET_NAME and KEY as environment variables)
ARG BUCKET_NAME
ARG KEY

# Command to run your shell script
CMD ["/usr/src/app/unzip.sh"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build and push the Docker image to Amazon ECR, then update the task definition to use the new image.&lt;/p&gt;
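&lt;p&gt;A hedged sketch of the build-and-push flow from the CLI (the account ID, region, and repository name are placeholders to replace with your own):&lt;/p&gt;

```shell
ACCOUNT_ID=123456789012   # placeholder
REGION=us-east-1          # placeholder
REPO=unzip-task           # placeholder ECR repository name
IMAGE_URI="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"

# Authenticate Docker to ECR, then build, tag, and push the image
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com"
docker build -t "$REPO" .
docker tag "$REPO:latest" "$IMAGE_URI"
docker push "$IMAGE_URI"
```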

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We now have a shared Samba server for Windows users, with event-driven, on-demand unzip functionality whenever a user uploads a file to the drive.&lt;/p&gt;

&lt;h1&gt;
  
  
  #aws #awscommunitybuilder #awscommunitynepal #communitybuilder
&lt;/h1&gt;

</description>
    </item>
    <item>
      <title>Complete guide to protect Amazon EBS Snapshots from accidental deletion using AWS Recycle Bin</title>
      <dc:creator>Deepak Poudel</dc:creator>
      <pubDate>Tue, 12 Mar 2024 06:58:37 +0000</pubDate>
      <link>https://dev.to/poudeldipak/complete-guide-to-protect-amazon-ebs-snapshots-from-accidental-deletion-using-aws-recycle-bin-243h</link>
      <guid>https://dev.to/poudeldipak/complete-guide-to-protect-amazon-ebs-snapshots-from-accidental-deletion-using-aws-recycle-bin-243h</guid>
      <description>&lt;p&gt;Have you ever been wondering about or facing the problem of accidental deletion of Amazon EBS Snapshots and AMI?&lt;br&gt;
Don’t worry; the AWS Recycle Bin is the solution to such a problem.&lt;br&gt;
Normally, we use EBS snapshots to back up the data on our Amazon EBS volumes. The snapshots are very useful, mainly in disaster recovery, data migration, and backup compliance. But in some cases, those snapshots can be accidentally deleted, which can cause a huge loss of data. For that reason, in this blog, I will be guiding you to use Amazon Recycle Bin.&lt;br&gt;
Let’s get started from the very beginning.&lt;br&gt;
Pre-requisite: Before starting the steps to configure the recycle bin, we must have at least one EBS volume. That EBS volume can be independent of EC2 or attached to an EC2 instance.&lt;br&gt;
Steps&lt;br&gt;
Step1:  Configuration of EBS snapshot&lt;br&gt;
First of all, let us create an EBS snapshot from the EBS volume. Navigate to the EC2 service. In the left navbar, click on "Volumes.”. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu43gumb3b1bteazqx5fs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu43gumb3b1bteazqx5fs.png" alt="Image description" width="364" height="699"&gt;&lt;/a&gt;&lt;br&gt;
In the volumes section, we can see a list of existing EBS volumes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbggsl5u39z5c4281p4qu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbggsl5u39z5c4281p4qu.png" alt="Image description" width="800" height="205"&gt;&lt;/a&gt;&lt;br&gt;
Select the EBS volume using the checkbox on the left side of the volume name.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekqa7nie4zck4uyvzk1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekqa7nie4zck4uyvzk1d.png" alt="Image description" width="768" height="423"&gt;&lt;/a&gt;&lt;br&gt;
Now, we need to create an EBS snapshot for the volume above. For that, click on “Actions” and then “Create snapshot.” &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddw8mnhpobgib69s38b1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddw8mnhpobgib69s38b1.png" alt="Image description" width="288" height="374"&gt;&lt;/a&gt;&lt;br&gt;
Provide the details for the EBS snapshot by filling out the Description and Tags section. Multiple tags can also be provided as per requirement. We will be using the tag while configuring the recycle bin. After that, create the EBS snapshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8bfk0fd3ye8wbtqjbfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8bfk0fd3ye8wbtqjbfz.png" alt="Image description" width="800" height="631"&gt;&lt;/a&gt;&lt;br&gt;
A message is shown after the successful creation of a snapshot with its unique ID.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8flagc5d38ocez617v5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8flagc5d38ocez617v5z.png" alt="Image description" width="800" height="92"&gt;&lt;/a&gt;&lt;br&gt;
Step 2: Configuration of the Recycle Bin&lt;br&gt;
Search for Recycle Bin in the search bar and select that service.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0l939fpid0vh3n6b8p4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0l939fpid0vh3n6b8p4.png" alt="Image description" width="800" height="360"&gt;&lt;/a&gt;&lt;br&gt;
In the Recycle Bin, we must create a retention rule to protect the snapshot. Click on “Create retention rule.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktwq0powyl7fthzi956y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktwq0powyl7fthzi956y.png" alt="Image description" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkxgc5l5jt5qe073expw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkxgc5l5jt5qe073expw.png" alt="Image description" width="800" height="316"&gt;&lt;/a&gt;&lt;br&gt;
In the rule details, provide the retention rule name so that we can identify the rule properly. Also, provide a relevant description of the rule.&lt;br&gt;
In the resource type, select EBS Snapshots. To apply the retention rule to all EBS snapshots, select “Apply to all resources.” In this blog, I will be applying the retention rule to only one snapshot. Select the relevant tags for the snapshots we want to retain, and click on “Add.” In the retention period, we can choose how long resources remain recoverable after deletion. The minimum is 1 day, and the maximum is 365 days.&lt;/p&gt;
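&lt;p&gt;The same retention rule can also be created from the AWS CLI; a hedged example (the tag key and value should mirror whatever tag you put on the snapshot):&lt;/p&gt;

```shell
aws rbin create-rule \
  --description "Retain tagged EBS snapshots for 7 days after deletion" \
  --retention-period RetentionPeriodValue=7,RetentionPeriodUnit=DAYS \
  --resource-type EBS_SNAPSHOT \
  --resource-tags ResourceTagKey=backup,ResourceTagValue=true
```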

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqiusjxflvl1a185003w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqiusjxflvl1a185003w.png" alt="Image description" width="800" height="452"&gt;&lt;/a&gt;&lt;br&gt;
After configuring all the details, click on “Create retention rule” at the bottom. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmwp56tgnflw0242los3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmwp56tgnflw0242los3.png" alt="Image description" width="670" height="130"&gt;&lt;/a&gt;&lt;br&gt;
The retention rule that we configured can be seen in the home section of the recycle bin.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d3pvxnky33klshe37en.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d3pvxnky33klshe37en.png" alt="Image description" width="800" height="215"&gt;&lt;/a&gt;&lt;br&gt;
The retention rule has a rule-lock feature. Select the rule and click on “Actions,” then “Edit retention rule lock.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedm4xq0ymwtzp655qn3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedm4xq0ymwtzp655qn3p.png" alt="Image description" width="639" height="283"&gt;&lt;/a&gt;&lt;br&gt;
In the rule lock setting, we can protect the retention rule itself from being accidentally or maliciously updated or deleted. There are two options: “Unlock” and “Lock.” If we select Unlock, the rule can be modified and deleted. But if we select Lock, the rule can’t be modified or deleted until it is unlocked and the specified delay period has expired.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgymw1mw2dak8trrbt5zl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgymw1mw2dak8trrbt5zl.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;br&gt;
Step 3: Checking the retention&lt;br&gt;
Now, we need to test the use of the recycle bin by deleting the snapshot. Navigate to the EBS snapshots section by searching “snapshots” in the search bar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mxtma5tf0ha28lici5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mxtma5tf0ha28lici5u.png" alt="Image description" width="800" height="428"&gt;&lt;/a&gt;&lt;br&gt;
Select the snapshot and click on “Actions,” then “Delete snapshot.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikkwyfhbeofhtl95iub6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikkwyfhbeofhtl95iub6.png" alt="Image description" width="561" height="536"&gt;&lt;/a&gt;&lt;br&gt;
After successfully deleting the snapshot, we can see that it no longer exists; in practice, such a deletion can happen accidentally. Click “Recycle Bin” in the top right, which redirects to the Recycle Bin service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bj8iyyp9lf8jmxw5zw0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bj8iyyp9lf8jmxw5zw0.png" alt="Image description" width="800" height="184"&gt;&lt;/a&gt;&lt;br&gt;
Select “Resources” just below the recycle bin on the left side.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8i8b904pqws9fsapv1ot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8i8b904pqws9fsapv1ot.png" alt="Image description" width="267" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfaz90x5jzyu4fgzydeq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfaz90x5jzyu4fgzydeq.png" alt="Image description" width="800" height="178"&gt;&lt;/a&gt;&lt;br&gt;
In the Resources section, we can see the EBS snapshot that we deleted earlier, along with its bin entry date and bin exit date. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwt749pkmxfpeloa4tovn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwt749pkmxfpeloa4tovn.png" alt="Image description" width="800" height="337"&gt;&lt;/a&gt;&lt;br&gt;
To recover the snapshot, select it and click “Recover” in the top right. &lt;br&gt;
Then, click “Recover resources” to restore the snapshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1jeb4gbjrdnyd6ydbfnx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1jeb4gbjrdnyd6ydbfnx.png" alt="Image description" width="800" height="337"&gt;&lt;/a&gt;&lt;br&gt;
Navigate to the EBS snapshots section, where we can see the recovered snapshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feaw6upfe2p2qg0wlfnbv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feaw6upfe2p2qg0wlfnbv.png" alt="Image description" width="800" height="175"&gt;&lt;/a&gt;&lt;br&gt;
In this way, we have successfully used Recycle Bin to recover an EBS snapshot from accidental deletion. In a similar manner, it can also be used to recover Amazon Machine Images (AMIs). &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Streamlining Containerized Web Application Deployment: An All-Inclusive Guide Using AWS ECS, ECR, and CodePipeline</title>
      <dc:creator>Deepak Poudel</dc:creator>
      <pubDate>Thu, 29 Feb 2024 17:29:45 +0000</pubDate>
      <link>https://dev.to/poudeldipak/streamlining-containerized-web-application-deployment-a-comprehensive-guide-with-aws-ecs-ecr-and-codepipeline-40fd</link>
      <guid>https://dev.to/poudeldipak/streamlining-containerized-web-application-deployment-a-comprehensive-guide-with-aws-ecs-ecr-and-codepipeline-40fd</guid>
      <description>&lt;p&gt;In today’s technology landscape, web applications have become a major part of the digital experience. They provide complete, dynamic solutions for needs ranging from communication and collaboration to e-commerce and entertainment. To ensure portability, efficiency, scalability, and consistent operation across diverse environments, containerization is required. In this blog, I will walk through streamlining containerized web application deployment using AWS services such as ECS and ECR, along with CodePipeline, to accommodate continuous change requirements.&lt;br&gt;
First, let us understand the services that will be used for this project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CodeCommit&lt;/strong&gt;- A version control service, which will be used to store code, documents, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CodeBuild&lt;/strong&gt;- Used to compile source code, run unit tests, and produce artifacts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CodePipeline&lt;/strong&gt;- Used to quickly model and configure the different stages of the software release process and automate software changes continuously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IAM roles&lt;/strong&gt;- Used to delegate access to AWS resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ECR (Elastic Container Registry)&lt;/strong&gt;- Used to securely store and manage Docker container images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ECS (Elastic Container Service)&lt;/strong&gt;- Used to easily deploy, manage, and scale Docker containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Fargate&lt;/strong&gt;- Used with ECS to run containers without having to manage servers.&lt;/p&gt;

&lt;p&gt;Let us jump directly into the steps. &lt;br&gt;
&lt;strong&gt;Steps:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Step 1:&lt;/strong&gt;&lt;br&gt;
First, we will create an IAM role with the permission policies required by the AWS services in this project. IAM is used to create roles. The permission policies required for this project are: &lt;/p&gt;

&lt;p&gt;AmazonEC2ContainerRegistryPowerUser&lt;br&gt;
AWSCodeCommitReadOnly&lt;br&gt;
CloudWatchLogsFullAccess&lt;br&gt;
Attach those policies to the role.&lt;/p&gt;
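&lt;p&gt;Since this role is used by CodeBuild in the later steps, its trust policy must allow CodeBuild to assume it. A minimal sketch of that trust relationship, in standard IAM policy syntax:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "codebuild.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

&lt;p&gt;The console creates this trust policy automatically when “CodeBuild” is chosen as the trusted service for the role.&lt;/p&gt;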

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h1pcq5zxy2zqcngobvz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h1pcq5zxy2zqcngobvz.png" alt="Image description" width="708" height="218"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 2:&lt;/strong&gt; We need to push our code to a repository, for which CodeCommit will be used. Search for CodeCommit in the AWS console and click “Create repository”. A repository name must be provided; the description, tags, and CodeGuru reviewer are optional.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7w80srd9r759d9v2b2hc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7w80srd9r759d9v2b2hc.png" alt="Image description" width="457" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the Git repository, CodeCommit provides different connection options such as HTTPS, SSH, and HTTPS (GRC). We will use the HTTPS option to clone the repository so that we can push code to CodeCommit.&lt;br&gt;
To clone the repository, use: git clone &lt;br&gt;
On the local system where the web app code is available, clone the repository and push the code.&lt;br&gt;
Create the main branch: git checkout -b main&lt;br&gt;
Add files: git add .&lt;br&gt;
Commit: git commit -m &lt;br&gt;
Push: git push origin main&lt;br&gt;
Content of the Dockerfile used in this project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM nginx
WORKDIR /app
COPY . /usr/share/nginx/html
EXPOSE 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A buildspec.yml file is also required; it will be configured later.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; &lt;br&gt;
Now, we will set up ECR. The visibility setting can be chosen between private and public; in this project we will use a private repository. A concise name that developers can identify should be provided as the repository name. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyclb96nnb0cer9tv1le.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyclb96nnb0cer9tv1le.png" alt="Image description" width="633" height="408"&gt;&lt;/a&gt;&lt;br&gt;
After creating the container repository, it appears listed under private repositories along with its URI. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bnl27aamxbl98dpfuxi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bnl27aamxbl98dpfuxi.png" alt="Image description" width="800" height="228"&gt;&lt;/a&gt;&lt;br&gt;
Click on Repository name to view the push commands.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj0wmkut1swf8zcdb34s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj0wmkut1swf8zcdb34s.png" alt="Image description" width="800" height="160"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 4:&lt;/strong&gt;&lt;br&gt;
Now, we will set up CodeBuild with CodeCommit as the source. CodeBuild will pull the code from the CodeCommit repository that we created earlier. The reference type can be set to Branch, with the relevant branch name, in our case “main”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42cpjth34iqju0yu99y0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42cpjth34iqju0yu99y0.png" alt="Image description" width="800" height="592"&gt;&lt;/a&gt;&lt;br&gt;
We will use the default environment configuration. CodeBuild also requires a compute environment, for which we will use EC2; no additional EC2 configuration is needed. CodeBuild needs permissions to perform its operations, which are provided by the IAM role created earlier. &lt;br&gt;
The buildspec.yml file, provided through CodeCommit, will supply the build commands. &lt;br&gt;
The configuration of the buildspec.yml file is given below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: latest
    commands:
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2 &amp;amp;
      - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
  pre_build:
    commands:
      - echo log in to Amazon ECR...
      - &amp;lt;provide way to Retrieve an authentication token and authenticate your Docker client to your registry using push commands&amp;gt;
      - REPOSITORY_URI=&amp;lt;replace with the uri of ECR&amp;gt;
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image.
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      - printf '[{"name":"blog-code-pipeline","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG &amp;gt; imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
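&lt;p&gt;The short-hash logic in the pre_build phase above can be exercised locally. A minimal sketch, using an illustrative commit hash in place of the value CodeBuild injects through CODEBUILD_RESOLVED_SOURCE_VERSION:&lt;/p&gt;

```shell
# Illustrative stand-in for the full commit SHA that CodeBuild provides.
CODEBUILD_RESOLVED_SOURCE_VERSION="4f2a9c1d8e7b6a5f4e3d2c1b0a9f8e7d6c5b4a39"

# Take the first 7 characters as the short commit hash.
COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)

# Fall back to "latest" if the hash turns out to be empty.
IMAGE_TAG=${COMMIT_HASH:=latest}

echo $IMAGE_TAG   # prints 4f2a9c1
```

&lt;p&gt;This is why each image ends up tagged both as latest and with its short commit hash, so any build can be traced back to the commit that produced it.&lt;/p&gt;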



&lt;p&gt;This buildspec file defines the steps for building and pushing the Docker image to ECR. Its major phases are:&lt;br&gt;
install: Install the latest Node.js runtime needed by our source code and start the Docker daemon.&lt;br&gt;
pre_build: Authenticate to Amazon ECR, set the variable for the Docker image repository, and determine the image tag.&lt;br&gt;
build: Build the Docker image and tag it with the ECR repository URI and image tag.&lt;br&gt;
post_build: Push the Docker image to ECR, create the image definitions file, and push that file to the artifact store.&lt;br&gt;
In short, this buildspec automates building a Docker image from the source code in CodeCommit, tagging it with a version, and pushing it to ECR. The resulting image information is stored in an artifact file (imagedefinitions.json) for later deployment to ECS. &lt;br&gt;
Note the container name “blog-code-pipeline” which we have used.&lt;br&gt;
After writing the buildspec.yml, push it to the CodeCommit repository using the same process as earlier.&lt;br&gt;
To check whether our setup works, we will run CodeBuild and see the phase details.&lt;/p&gt;
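&lt;p&gt;The imagedefinitions.json artifact produced by the printf command in post_build is just a one-element JSON array mapping the container name to the image to deploy. A minimal local sketch of the same command (the repository URI and tag are illustrative values; in CodeBuild they come from the environment variables):&lt;/p&gt;

```shell
# Illustrative values for the ECR repository URI and image tag.
REPOSITORY_URI="123456789012.dkr.ecr.us-east-1.amazonaws.com/blog-code-pipeline"
IMAGE_TAG="4f2a9c1"

# Same printf as in post_build: "name" must match the container name
# in the ECS task definition ("blog-code-pipeline" in this project).
printf '[{"name":"blog-code-pipeline","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG | tee imagedefinitions.json
```

&lt;p&gt;During the deploy stage, CodePipeline reads this file to learn which container in the ECS service should be updated with which image.&lt;/p&gt;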

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvokpwjne73l05ewhtp71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvokpwjne73l05ewhtp71.png" alt="Image description" width="800" height="358"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 5:&lt;/strong&gt; In this step we will configure ECS to run the Docker container from the registry.&lt;br&gt;
First, a cluster must be created. A cluster organizes container instances regionally so that task requests can be executed on them. A cluster name must be provided along with the infrastructure configuration; AWS Fargate (serverless) will be used in this project. Click the “Create” button at the bottom. The cluster is created through CloudFormation, which may take a few minutes; progress can be tracked in CloudFormation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwptujbkz0fuh99tf93ve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwptujbkz0fuh99tf93ve.png" alt="Image description" width="800" height="578"&gt;&lt;/a&gt;&lt;br&gt;
Configure the task definition- The task definition is the application's blueprint: it describes the parameters and the one or more containers that make up the application. After creating the cluster, click “Task definitions” in the left navbar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnef8eau0fj6j1t35ssoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnef8eau0fj6j1t35ssoc.png" alt="Image description" width="301" height="645"&gt;&lt;/a&gt;&lt;br&gt;
We can also create the task definition using JSON, but for simplicity's sake, click “Create new task definition”. Provide the task definition family name and the infrastructure to run the task. We will use the default environment setup with Fargate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzptlctp86a45mmbwe2cx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzptlctp86a45mmbwe2cx.png" alt="Image description" width="800" height="224"&gt;&lt;/a&gt;&lt;br&gt;
We will use the default OS architecture and keep the task size at 1 vCPU and 3 GB of memory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sdmc67lqfxl4e36u8el.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sdmc67lqfxl4e36u8el.png" alt="Image description" width="800" height="510"&gt;&lt;/a&gt;&lt;br&gt;
In the container section, provide the container name and the image URI of the repository. The container name must match the one in buildspec.yml, “blog-code-pipeline”. For port mapping we will use port 80, as per our Dockerfile. The other options can be customized, but for this sample project leave them as default and click the “Create” button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcj3deg2a684yegw9mjq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcj3deg2a684yegw9mjq.png" alt="Image description" width="800" height="340"&gt;&lt;/a&gt;&lt;br&gt;
Now, click on the cluster that we created earlier and go to the services section. To run and manage a given number of instances of a task definition concurrently in an Amazon ECS cluster, we must create a service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2yrfd5copq9u9dsuq74.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2yrfd5copq9u9dsuq74.png" alt="Image description" width="800" height="243"&gt;&lt;/a&gt;&lt;br&gt;
The cluster name will be selected by default. We will use Fargate as capacity provider keeping the platform version latest.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffmkobaodz3glx7nw46na.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffmkobaodz3glx7nw46na.png" alt="Image description" width="800" height="563"&gt;&lt;/a&gt;&lt;br&gt;
In the deployment configuration section, provide the task definition family name along with the service name. We will leave the other settings as default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femeee1fax5cgg171zdv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femeee1fax5cgg171zdv2.png" alt="Image description" width="800" height="575"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 6:&lt;/strong&gt;&lt;br&gt;
Set up CodePipeline.&lt;br&gt;
Finally, CodePipeline must be configured so that when code is pushed to CodeCommit, it triggers CodeBuild and then ECS, deploying changes without manual intervention.&lt;br&gt;
A name should be provided so the pipeline can be identified.&lt;br&gt;
We can create a new service role for the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fet6srmw4ovcwgo76o8ig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fet6srmw4ovcwgo76o8ig.png" alt="Image description" width="800" height="559"&gt;&lt;/a&gt;&lt;br&gt;
The source will be CodeCommit, which contains the code that we pushed from our local system. It must be configured according to the CodeCommit repository set up earlier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1ogmyki8t99l9plfmz8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1ogmyki8t99l9plfmz8.png" alt="Image description" width="800" height="653"&gt;&lt;/a&gt;&lt;br&gt;
In the build phase, the build provider must be selected and the relevant project name provided.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0u3y3glon7gdkiw59l8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0u3y3glon7gdkiw59l8o.png" alt="Image description" width="800" height="568"&gt;&lt;/a&gt;&lt;br&gt;
Finally, the deploy provider, which will be used for deploying the containerized web app, must be selected. ECS, which we configured earlier, will be used as the deploy provider.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusir0dwuo9awg346v0bq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusir0dwuo9awg346v0bq.png" alt="Image description" width="800" height="544"&gt;&lt;/a&gt;&lt;br&gt;
Provide the cluster name, service name, and image definitions file accordingly.&lt;br&gt;
After clicking “Release change” in CodePipeline, it will first get the code from CodeCommit, then run CodeBuild, and finally deploy using ECS. It may take some time to complete the whole process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8x2b5t85xd8ttxml5yb2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8x2b5t85xd8ttxml5yb2.png" alt="Image description" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbug6wat9t0b9jjscdwxv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbug6wat9t0b9jjscdwxv.png" alt="Image description" width="710" height="441"&gt;&lt;/a&gt;&lt;br&gt;
Now, to see the output, go to ECS. In the task section of the cluster that we created, we can see the public IP address of the hosted web app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7trt0c4kru0l9v0f13a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7trt0c4kru0l9v0f13a.png" alt="Image description" width="800" height="354"&gt;&lt;/a&gt;&lt;br&gt;
Copy and paste that IP address into the browser, where we can see our website hosted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmt1thgd712jb6efcvkh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmt1thgd712jb6efcvkh.png" alt="Image description" width="765" height="710"&gt;&lt;/a&gt;&lt;br&gt;
Now, if we make any change to the web app source code and push it to CodeCommit, CodePipeline will run automatically and the change will be deployed to ECS.&lt;br&gt;
In this way, we have successfully deployed our containerized web app using ECS, ECR, and CodePipeline.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>S3 File Analysis with AWS Lambda: Counting Words and SNS Notifications</title>
      <dc:creator>Deepak Poudel</dc:creator>
      <pubDate>Sat, 10 Feb 2024 08:11:15 +0000</pubDate>
      <link>https://dev.to/poudeldipak/s3-file-analysis-with-aws-lambda-counting-words-and-sns-notifications-i0c</link>
      <guid>https://dev.to/poudeldipak/s3-file-analysis-with-aws-lambda-counting-words-and-sns-notifications-i0c</guid>
      <description>&lt;p&gt;In today's fast-paced world, automation is the key to efficiency. AWS Lambda, one of the most popular serverless computing services, allows you to run code without provisioning or managing servers. In this tutorial, I'll walk you through how to leverage AWS Lambda to automatically count the number of words in a file uploaded to an S3 bucket and then send the word count in an email via Amazon SNS (Simple Notification Service). This can be valuable for applications such as content analysis, document processing, and more.&lt;/p&gt;
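&lt;p&gt;At its core, the Lambda function we will build reads the uploaded object's text and counts its whitespace-separated words. That counting step can be previewed locally on a sample file before any AWS setup (the sample sentence is illustrative):&lt;/p&gt;

```shell
# Write a sample file and count its whitespace-separated words,
# the same check our Lambda function will perform on the uploaded object.
printf 'AWS Lambda lets you run code without managing servers.' | tee sample.txt
cat sample.txt | wc -w   # word count: 9
```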

&lt;p&gt;Prerequisites&lt;br&gt;
Before directly diving into the implementation, make sure you have the following prerequisites in place:&lt;/p&gt;

&lt;p&gt;● An AWS account with appropriate permissions to create Lambda functions, S3 buckets, and SNS topics.&lt;/p&gt;

&lt;p&gt;Step 1: Log in to the AWS Management Console and navigate to the S3 service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvjytxn6qldwwkxkft34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvjytxn6qldwwkxkft34.png" alt="Image description" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Create a Bucket:&lt;br&gt;
Once you're in the S3 dashboard, click the "Create bucket" button. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwenyclhsb19xuy7b0uo0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwenyclhsb19xuy7b0uo0.png" alt="Image description" width="800" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Configure Bucket Settings:&lt;/p&gt;

&lt;p&gt;● Bucket Name: Choose a globally unique name for your S3 bucket. Bucket names must be unique across all of AWS, so it is essential to pick a name that no one else has used.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmitjzyj4orfzfhkwtyxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmitjzyj4orfzfhkwtyxl.png" alt="Image description" width="800" height="114"&gt;&lt;/a&gt;&lt;br&gt;
● Region: Select the AWS region where you want to create the bucket. Choose a region that is geographically close to your intended users or applications for better performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bjgdm80ar2voiken26x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bjgdm80ar2voiken26x.png" alt="Image description" width="800" height="114"&gt;&lt;/a&gt;&lt;br&gt;
● Configure options such as versioning, logging, and tags as needed: Depending on your specific use case, you can enable features like versioning to keep multiple versions of files or configure logging to track bucket activity.&lt;br&gt;
● Set Permissions: By default, the bucket is private, meaning only the AWS account that created it has access. If you want to grant public access or specific permissions to other AWS accounts or IAM users, you can configure bucket policies, access control lists (ACLs), or IAM policies.&lt;/p&gt;

&lt;p&gt;● Review and Create: After configuring the settings for your S3 bucket, review your choices to ensure they are correct. Double-check the bucket name for uniqueness.&lt;br&gt;
● Click the "Create bucket" button to create the S3 bucket.&lt;/p&gt;
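&lt;p&gt;For reference, the same bucket creation can be scripted with boto3. The bucket name below is a hypothetical placeholder; one subtlety worth encoding is that us-east-1 is the default location and must not be passed as a LocationConstraint, while every other Region requires one:&lt;/p&gt;

```python
def create_bucket_kwargs(bucket_name: str, region: str) -> dict:
    # Build keyword arguments for boto3's s3.create_bucket().
    # us-east-1 is the default location and must not be passed as a
    # LocationConstraint; every other region requires one.
    kwargs = {"Bucket": bucket_name}
    if region != "us-east-1":
        kwargs["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return kwargs

# Hypothetical bucket name -- replace with your own globally unique name.
print(create_bucket_kwargs("my-unique-wordcount-bucket", "ap-south-1"))
```

&lt;p&gt;With boto3 installed and credentials configured, calling create_bucket with these arguments, for example boto3.client("s3").create_bucket(**create_bucket_kwargs(...)), performs the same step as the console.&lt;/p&gt;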

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuoxo9jvt8lhr2qnur8d8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuoxo9jvt8lhr2qnur8d8.png" alt="Image description" width="733" height="148"&gt;&lt;/a&gt;&lt;br&gt;
Step 4: Create an IAM role to permit the Lambda function to access Amazon S3 and Amazon SNS:&lt;/p&gt;

&lt;p&gt;● Create a New Role:&lt;br&gt;
➔ In the IAM dashboard, click "Roles" in the left sidebar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9e4s7iwfs2a6j9ivfrb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9e4s7iwfs2a6j9ivfrb.png" alt="Image description" width="408" height="723"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdq5h65m4itwqps82cfrn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdq5h65m4itwqps82cfrn.png" alt="Image description" width="800" height="59"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ Now click the "Create role" button to create a new role with the required permission policies.&lt;/p&gt;

&lt;p&gt;● Select Type of Trusted Entity:&lt;/p&gt;

&lt;p&gt;➔ For the trusted entity type, select "AWS service" since you're creating this role for AWS Lambda.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnu7fmjk5x8gocv0ztmqh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnu7fmjk5x8gocv0ztmqh.png" alt="Image description" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ In the use case options, choose "Lambda".&lt;br&gt;
● Attach Permissions Policies:&lt;/p&gt;

&lt;p&gt;In the "Permissions" step, you can attach policies defining the role's actions. Search and select the following policies:&lt;/p&gt;

&lt;p&gt;➔ AWSLambdaBasicExecutionRole: This policy lets your Lambda function write logs to CloudWatch Logs.&lt;br&gt;
➔ AmazonSNSFullAccess: This policy provides full access to Amazon SNS, allowing your Lambda function to publish messages to SNS topics.&lt;br&gt;
➔ AmazonS3FullAccess: This policy provides full access to Amazon S3, allowing your Lambda function to read from and write to S3 buckets.&lt;br&gt;
You can search for these policies in the search box and attach them individually.&lt;/p&gt;
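&lt;p&gt;The console choices above boil down to two pieces of data: a trust policy that lets the Lambda service assume the role, and the ARNs of the three managed policies. A minimal sketch (the ARNs are the standard AWS-managed policy ARNs):&lt;/p&gt;

```python
import json

# Trust policy behind the "AWS service -> Lambda" choice in the console:
# it allows the Lambda service to assume this role.
LAMBDA_TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# The three managed policies attached in the walkthrough.
MANAGED_POLICY_ARNS = [
    "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
    "arn:aws:iam::aws:policy/AmazonSNSFullAccess",
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
]

print(json.dumps(LAMBDA_TRUST_POLICY, indent=2))
```

&lt;p&gt;With boto3 this maps to iam.create_role(RoleName="wordCounterRoleforlambda", AssumeRolePolicyDocument=json.dumps(LAMBDA_TRUST_POLICY)) followed by one iam.attach_role_policy call per ARN.&lt;/p&gt;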

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22ti9hkb996nb3pj3gjy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22ti9hkb996nb3pj3gjy.png" alt="Image description" width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodsaxsd0nnbvjeqund9j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodsaxsd0nnbvjeqund9j.png" alt="Image description" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbczpn72m19gl2gd3ghg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbczpn72m19gl2gd3ghg.png" alt="Image description" width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;● Name and Create Role:&lt;/p&gt;

&lt;p&gt;➔ Give your role a name. In this case, you can name it "wordCounterRoleforlambda" or choose any other desired name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flm516giqj7tp7tna6wa6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flm516giqj7tp7tna6wa6.png" alt="Image description" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ Add a description to help you remember the purpose of this role.&lt;br&gt;
➔ Click the "Create role" button to create the role.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxg4yqgvdvp5ii0rdvu9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxg4yqgvdvp5ii0rdvu9.png" alt="Image description" width="605" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 5: Create a Simple Notification Service (SNS) topic. &lt;br&gt;
● Navigate to SNS (Simple Notification Service):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F67i89qw0qqj6g4ttp1ip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F67i89qw0qqj6g4ttp1ip.png" alt="Image description" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwz0kwtxmyejhoq3e28fy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwz0kwtxmyejhoq3e28fy.png" alt="Image description" width="450" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ Create a New Topic: In the SNS dashboard, click the "Create topic" button to create a new SNS topic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdc446b041ohjdsfnd7yz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdc446b041ohjdsfnd7yz.png" alt="Image description" width="800" height="45"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ Configure Topic Details: Set the type to Standard and provide a name for your topic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhucaftakp4ae9l0iyg92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhucaftakp4ae9l0iyg92.png" alt="Image description" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ Click on Create Topic&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvep6aivrm3zrk6ish3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvep6aivrm3zrk6ish3f.png" alt="Image description" width="644" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ Create Subscription: With your topic selected, click the "Create subscription" button to set up a new subscription.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuveqn3h5fcgd1tyggbyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuveqn3h5fcgd1tyggbyd.png" alt="Image description" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ Choose a protocol and endpoint, then click "Create subscription"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4i2pw57b2t804tnn7e6c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4i2pw57b2t804tnn7e6c.png" alt="Image description" width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ You will shortly receive an email asking you to confirm your subscription&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqkx6vbds1k7tzekkn39.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqkx6vbds1k7tzekkn39.png" alt="Image description" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ Click on Confirm Subscription&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5bdx0zvfhofhqos4cem.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5bdx0zvfhofhqos4cem.png" alt="Image description" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foc4f9ouhvv5atqxx46ja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foc4f9ouhvv5atqxx46ja.png" alt="Image description" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 6: Create a Lambda Function&lt;br&gt;
● Go to the AWS Lambda console.&lt;br&gt;
● Click "Create function" and choose "Author from scratch" as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjzo9i51c149l0l6c8x07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjzo9i51c149l0l6c8x07.png" alt="Image description" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0v480fmg0qrjtdonqv8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0v480fmg0qrjtdonqv8.png" alt="Image description" width="800" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgd0xebpz4c7agrkxigf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgd0xebpz4c7agrkxigf.png" alt="Image description" width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;● Give your function a name and choose a runtime (e.g., Python 3.7).&lt;/p&gt;

&lt;p&gt;● Change the default execution role, select the existing role we created earlier, and click "Create function".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l6nfz3uvixh89kr6ywj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l6nfz3uvixh89kr6ywj.png" alt="Image description" width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;● Add a trigger to the Lambda function&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqv7qi0y5j4xwhiwcmbq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqv7qi0y5j4xwhiwcmbq.png" alt="Image description" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ In the trigger configuration, choose S3, select the bucket we created earlier, and click "Add"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sz4kfjf85nie4a8rg00.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sz4kfjf85nie4a8rg00.png" alt="Image description" width="800" height="849"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ In lambda_handler.py, write the following code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzz2483v4qagqpktz444.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzz2483v4qagqpktz444.png" alt="Image description" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import os
import json


def lambda_handler(event, context):
    # Find the topic ARN in the environment variables.
    TOPIC_ARN = os.environ['topicARN']
    print("Topic ARN =", TOPIC_ARN)

    # Create an S3 resource and read the bucket name and file name
    # (object key) from the event object.
    s3 = boto3.resource('s3')

    record = event['Records'][0]
    bucketName = record['s3']['bucket']['name']
    print("bucketName =", bucketName)
    objectKey = record['s3']['object']['key']
    print("objectKey =", objectKey)

    # Read the contents of the file.
    textFile = s3.Object(bucketName, objectKey)
    fileContent = textFile.get()['Body'].read().decode('utf-8')
    print("fileContent =", fileContent)

    # Count the number of words in the file.
    wordCount = len(fileContent.split())
    print('Number of words in text file:', wordCount)

    # Create an SNS client, then format and publish a message containing
    # the word count to the topic.
    snsClient = boto3.client('sns')
    message = 'The word count in the file ' + objectKey + ' is ' + str(wordCount) + '.'

    response = snsClient.publish(
        TopicArn=TOPIC_ARN,
        Subject='Word Count Result',
        Message=message
    )

    # Return a successful function execution message.
    return {
        'statusCode': 200,
        'body': json.dumps('File successfully processed by wordCounter Lambda function')
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This AWS Lambda function, when triggered by an S3 event, retrieves the file uploaded to an S3 bucket, counts the number of words in that file, and publishes the word count as a message to an Amazon SNS topic. The Lambda function uses the Boto3 library to interact with AWS services. It starts by extracting the S3 bucket name and object key from the event object, then reads the content of the file from S3. After counting the words, it formats a message and publishes it to the specified SNS topic. Finally, it returns a successful execution message. This code is designed to automate the word-counting process and notify subscribers via SNS when new files are uploaded to the S3 bucket.&lt;/p&gt;
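&lt;p&gt;The counting rule itself is just a whitespace split, which can be sanity-checked locally without any AWS resources. A minimal sketch:&lt;/p&gt;

```python
def count_words(text: str) -> int:
    # Same rule the Lambda function uses: split on any run of whitespace.
    return len(text.split())

print(count_words("Hello this file has six words"))  # 6
```

&lt;p&gt;Leading, trailing, and repeated whitespace are all collapsed by str.split(), so the count matches what you would get by eye.&lt;/p&gt;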

&lt;p&gt;● Click on Configuration and set the environment variable&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36knrjdylsq6mqirgx1s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36knrjdylsq6mqirgx1s.png" alt="Image description" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqdpmho121omxmwn83hp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqdpmho121omxmwn83hp.png" alt="Image description" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ Navigate to the SNS topic we created earlier and copy its ARN&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0co7kvm51j1d6ex4yrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0co7kvm51j1d6ex4yrg.png" alt="Image description" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ Set the Environment Variable&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fby5u5dmpywh7tuba8zv6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fby5u5dmpywh7tuba8zv6.png" alt="Image description" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;● On the Lambda dashboard, click on Test&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3byhbsygpbrfsc0ufmv1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3byhbsygpbrfsc0ufmv1.png" alt="Image description" width="800" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;● Configure a test event&lt;br&gt;
➔ Give the event a name&lt;br&gt;
➔ Choose the "s3-put" template&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc96ttbx2fux4445pd0cm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc96ttbx2fux4445pd0cm.png" alt="Image description" width="800" height="896"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;➔ Update the S3 bucket name, ARN, principal, and object key in the template&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbijqeveu62q64p4t9le.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbijqeveu62q64p4t9le.png" alt="Image description" width="800" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{&lt;br&gt;
  "Records": [&lt;br&gt;
    {&lt;br&gt;
      "eventVersion": "2.0",&lt;br&gt;
      "eventSource": "aws:s3",&lt;br&gt;
      "awsRegion": "us-east-1",&lt;br&gt;
      "eventTime": "1970-01-01T00:00:00.000Z",&lt;br&gt;
      "eventName": "ObjectCreated:Put",&lt;br&gt;
      "userIdentity": {&lt;br&gt;
        "principalId": "EXAMPLE"&lt;br&gt;
      },&lt;br&gt;
      "requestParameters": {&lt;br&gt;
        "sourceIPAddress": "127.0.0.1"&lt;br&gt;
      },&lt;br&gt;
      "responseElements": {&lt;br&gt;
        "x-amz-request-id": "EXAMPLE123456789",&lt;br&gt;
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"&lt;br&gt;
      },&lt;br&gt;
      "s3": {&lt;br&gt;
        "s3SchemaVersion": "1.0",&lt;br&gt;
        "configurationId": "testConfigRule",&lt;br&gt;
        "bucket": {&lt;br&gt;
          "name": "deepakwordcountbucket",&lt;br&gt;
          "ownerIdentity": {&lt;br&gt;
            "principalId": "*"&lt;br&gt;
          },&lt;br&gt;
          "arn": "arn:aws:s3:::deepakwordcountbucket"&lt;br&gt;
        },&lt;br&gt;
        "object": {&lt;br&gt;
          "key": "word.txt",&lt;br&gt;
          "size": 1024,&lt;br&gt;
          "eTag": "0123456789abcdef0123456789abcdef",&lt;br&gt;
          "sequencer": "0A1B2C3D4E5F678901"&lt;br&gt;
        }&lt;br&gt;
      }&lt;br&gt;
    }&lt;br&gt;
  ]&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;
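&lt;p&gt;The handler only reads a few fields of this event, so the extraction logic can be exercised locally with an abridged copy of the template:&lt;/p&gt;

```python
# Abridged version of the s3-put test event -- only the fields the
# handler actually reads.
event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "deepakwordcountbucket"},
                "object": {"key": "word.txt"},
            }
        }
    ]
}

# Same lookups the Lambda handler performs.
record = event["Records"][0]
bucketName = record["s3"]["bucket"]["name"]
objectKey = record["s3"]["object"]["key"]
print(bucketName, objectKey)
```

&lt;p&gt;If these lookups succeed on your edited template, the handler will find the bucket and key when the test event is run.&lt;/p&gt;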

&lt;p&gt;Step 7: Create and upload a file to the Amazon S3 bucket we created earlier.&lt;br&gt;
● Create a new file and write some content&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbibxhd8recx5nn3fz94y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbibxhd8recx5nn3fz94y.png" alt="Image description" width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that there are six words in this file.&lt;/p&gt;

&lt;p&gt;● Upload this file to the Amazon S3 bucket (deepakwordcountbucket)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hbyh5or20vcoetnrdd1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hbyh5or20vcoetnrdd1.png" alt="Image description" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;● Click on Deploy in the Lambda function&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi7cx9wog1du68t09kb3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi7cx9wog1du68t09kb3.png" alt="Image description" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;● You will receive an email telling you the number of words in the file you uploaded to the S3 bucket&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft50thir1pyafsjbnnjuh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft50thir1pyafsjbnnjuh.png" alt="Image description" width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
In this tutorial, I've shown you how to automate word counting on files uploaded to an S3 bucket using AWS Lambda and send the word count in an email notification via Amazon SNS. This automation can be a valuable addition to various data processing and content analysis workflows, saving you time and effort while keeping you informed about the contents of uploaded files. Explore further customization and integration options to suit your specific use cases.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How AWS saved me a lot of headache in my job</title>
      <dc:creator>Deepak Poudel</dc:creator>
      <pubDate>Mon, 24 Apr 2023 17:20:35 +0000</pubDate>
      <link>https://dev.to/poudeldipak/how-aws-saved-me-a-lot-of-headache-in-my-job-1han</link>
      <guid>https://dev.to/poudeldipak/how-aws-saved-me-a-lot-of-headache-in-my-job-1han</guid>
      <description>&lt;p&gt;I recently deployed my first solo project for my job at Digo and I’ve never been more grateful for AWS. &lt;/p&gt;

&lt;p&gt;Digo is a business management platform where automation is one of its core features. Internally, we call these automations “workflows”. Since Digo has been growing recently and more demanding customers are using our system, we were noticing huge CPU and memory spikes in our server due to workflows. Our server was a monolith and it was time to implement a new workflow engine and move it to a separate service. &lt;br&gt;
&lt;strong&gt;Implementation&lt;/strong&gt;&lt;br&gt;
This was my first independent feature and I was very excited and also a bit nervous! I made sure not to overlook anything. It took weeks to design, discuss and redesign the new workflow engine to finally come upon a performant solution that met all our business needs. I learned a lot from my mentor during this process. &lt;/p&gt;

&lt;p&gt;I had a solid view of the feature and how I was going to implement it. But a thought kept creeping up on me: how am I going to deploy it? &lt;br&gt;
&lt;strong&gt;Deployment Problems&lt;/strong&gt;&lt;br&gt;
I had very little experience with deploying software to production. All of our infrastructure is deployed in AWS. During my dev cycles, I would spin up a free-tier EC2 instance for early testing and progress demos. I am well familiar with Linux, as I had done a ground-up installation of Arch Linux on my previous laptop. But deploying software on Linux and managing it as a server was an entirely new topic for me. I somehow got the workflow engine server running on EC2 with the help of multiple articles on the internet. But I dreaded every new deployment. Looking back, my problems arose because:&lt;br&gt;
I didn’t have deployment automation. Every time I had to deploy, I’d SSH into EC2, pull from master, and run the server.&lt;br&gt;
My servers kept crashing. Initially, I didn’t use any process manager like pm2 or Docker, so every time my server crashed, I’d SSH into EC2 and restart the server. Later, I switched to pm2, which saved me from having to do this. &lt;br&gt;
I didn’t have a proper logging library, so I had to look through Linux log files when my process crashed. &lt;/p&gt;

&lt;p&gt;Deploying features for testing during development has many benefits. It is good for the feature itself because we get continuous feedback early in the development cycle. It is good for QA, managers, and the rest of the team because they can test the workflow engine and get exposed to its features. But it sucked for me because of my limited deployment experience. I wish I had spent some time learning the basics of DevOps and AWS early on. But I decided to focus more of my time on implementing the new workflow engine than on learning DevOps.&lt;br&gt;
&lt;strong&gt;Resolving Issues with AWS services&lt;/strong&gt;&lt;br&gt;
My development was over and the feature was approaching release. It was finally time to learn DevOps. I had knowledge-transfer sessions with my mentor, read more articles, and explored AWS. Because I had already had such a terrible experience once, I understood the importance of DevOps. I also took courses at AWS Academy. &lt;/p&gt;

&lt;p&gt;Some weeks before the production release, I addressed the problems I had faced. These are the AWS services I used:&lt;br&gt;
Docker and Amazon Elastic Container Service (Amazon ECS) to delegate all my server management responsibilities. I no longer had to worry about managing Linux servers, scaling my infrastructure, or server availability, because Amazon ECS handled everything for me. &lt;br&gt;
ECS task definitions to describe my infrastructure blueprint for different environments (staging, production, etc.)&lt;br&gt;
CloudWatch Logs to stream my containers’ logs.&lt;br&gt;
Amazon DynamoDB to delegate all my database management responsibilities. &lt;br&gt;
AWS Systems Manager Parameter Store for managing my environment secrets&lt;/p&gt;
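&lt;p&gt;For illustration, a trimmed-down ECS task definition looks roughly like this; the family, container name, ports, log group, and the account/region in the image URI are all placeholders, not the actual values used at Dig:&lt;/p&gt;

```json
{
  "family": "workflow-engine-staging",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "workflow-engine",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/workflow-engine:latest",
      "portMappings": [{ "containerPort": 3000, "protocol": "tcp" }],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/workflow-engine-staging",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}
```

&lt;p&gt;Keeping one such file per environment (staging, production) is what makes the blueprint reproducible.&lt;/p&gt;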

&lt;p&gt;With these in place, I no longer had to worry about managing my servers and database. As for deployment, I set up deployment pipelines using GitHub Actions and AWS CodeDeploy. &lt;/p&gt;
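&lt;p&gt;A sketch of such a pipeline as a GitHub Actions workflow; the branch, region, secret names, and the ECS cluster/service names are all assumptions for illustration, not my actual pipeline:&lt;/p&gt;

```yaml
# .github/workflows/deploy.yml - illustrative sketch, not the actual pipeline
name: deploy
on:
  push:
    branches: [master]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-east-1
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - uses: aws-actions/amazon-ecr-login@v2
      # Build and push the image, then roll the ECS service
      - run: |
          docker build -t "$ECR_REPO:$GITHUB_SHA" .
          docker push "$ECR_REPO:$GITHUB_SHA"
        env:
          ECR_REPO: ${{ secrets.ECR_REPO }}
      - run: aws ecs update-service --cluster workflow-engine --service workflow-engine --force-new-deployment
```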

&lt;p&gt;Learning all this was easier than I imagined. While use cases differ and you have to choose the best option for your situation, using these Amazon services has been a good decision for the workflow engine at Dig. We have had no server downtime since deployment. &lt;/p&gt;

&lt;p&gt;I made many mistakes before finally following DevOps best practices, but they gave me the kind of insight one only gets after failing. There are tons of high-quality resources online, including AWS Academy, which I benefited from the most. &lt;/p&gt;

&lt;p&gt;Thanks for reading my article. If you have any questions, you can reach out to me on &lt;a href="https://www.linkedin.com/in/poudeld/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; or in the comments below.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Automation of Images and Orchestration in AWS EKS Part 1</title>
      <dc:creator>Deepak Poudel</dc:creator>
      <pubDate>Tue, 28 Mar 2023 16:38:15 +0000</pubDate>
      <link>https://dev.to/poudeldipak/automation-of-images-and-orchestration-in-aws-eks-part-1-2k4i</link>
      <guid>https://dev.to/poudeldipak/automation-of-images-and-orchestration-in-aws-eks-part-1-2k4i</guid>
      <description>&lt;p&gt;In this blog, we will be discussing five AWS services: AWS EKS (Elastic Kubernetes Service), AWS ECR (Elastic Container Registry), AWS Cloud9 IDE, AWS CodeCommit, and finally AWS CodePipeline. We'll explore the features and benefits of each service, and how they can be used in combination to build and deploy containerized applications in the cloud. Specifically, we'll look at how EKS provides a managed Kubernetes environment, ECR provides a registry for storing and managing Docker images, Cloud9 provides a cloud-based IDE for development, and CodeCommit provides a managed Git repository for version control. By the end of this blog, you'll have a good understanding of how to use these services to build and deploy containerized applications on AWS.&lt;/p&gt;

&lt;p&gt;Thanks to Visual Paradigm for the diagramming tool.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkoxyyjmaqgynhah10k2e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkoxyyjmaqgynhah10k2e.png" alt="Architecture" width="800" height="686"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before getting started, you will need an AWS account, a domain name, and a basic understanding of AWS services, including ECR, EKS, CodeCommit, and CodePipeline.&lt;/p&gt;
&lt;h1&gt;
  
  
  Create a CodeCommit repository
&lt;/h1&gt;

&lt;p&gt;CodeCommit is a fully managed source control service that makes it easy to host private Git repositories. You can use CodeCommit to store your website's source code and manage version control.&lt;/p&gt;

&lt;p&gt;To create a CodeCommit repository, follow these steps:&lt;/p&gt;

&lt;p&gt;Navigate to the CodeCommit console in the AWS Management Console.&lt;br&gt;
Click "Create repository."&lt;br&gt;
Give your repository a name and click "Create."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32lxghvktlyignkalfor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32lxghvktlyignkalfor.png" alt="AWS CodeCommit" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu1xmb1sfvk6phdm7z6wp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu1xmb1sfvk6phdm7z6wp.png" alt="my-cc-repo" width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the repo, I pushed my project to the remote; it contains a Dockerfile and a buildspec.yml. The buildspec.yml contains steps to build the Docker image, tag it, push it to ECR, and then trigger an update of the EKS deployment.&lt;/p&gt;
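&lt;p&gt;For reference, the build, tag, and push steps in a buildspec.yml look roughly like this. The $ECR_REGISTRY and $IMAGE_NAME variables are placeholders I've assumed; CODEBUILD_RESOLVED_SOURCE_VERSION is an environment variable CodeBuild provides:&lt;/p&gt;

```yaml
# buildspec.yml - illustrative sketch of the build, tag, and push phases
version: 0.2
phases:
  pre_build:
    commands:
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
  build:
    commands:
      - docker build -t $ECR_REGISTRY/$IMAGE_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION .
  post_build:
    commands:
      # Push the image, then point the EKS deployment at the new tag
      - docker push $ECR_REGISTRY/$IMAGE_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION
      - kubectl set image deployment/my-eks my-eks=$ECR_REGISTRY/$IMAGE_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION
```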
&lt;h1&gt;
  
  
  Create an ECR Repo
&lt;/h1&gt;

&lt;p&gt;ECR is a fully managed Docker container registry that makes it easy to store, manage, and deploy Docker container images. You can use ECR to store your website's Docker container images and manage version control.&lt;/p&gt;

&lt;p&gt;To push an image to ECR, you need a project and a valid Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM httpd:2.4
COPY ./index.html /usr/local/apache2/htdocs/
EXPOSE 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now build the image (the trailing dot sets the build context to the current directory)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t ecr-address/imagename:tag .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then test it locally; the site should be reachable at localhost:9000&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name demo-sv -p 9000:80 ecr-address/imagename:tag`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To create an ECR repository, follow these steps:&lt;/p&gt;

&lt;p&gt;Navigate to the ECR console in the AWS Management Console.&lt;br&gt;
Click "Create repository."&lt;br&gt;
Give your repository a name and click "Create repository."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6b1679nsoy7tjrr1arl2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6b1679nsoy7tjrr1arl2.png" alt="ECR" width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0i86na565by1r1e66e92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0i86na565by1r1e66e92.png" alt="ECR REPO" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the ECR repo is created, open the repository and view the push commands. &lt;br&gt;
Generally, the steps are: log in, then build, tag, and push from a Dockerfile in the project&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag e9ae3c220b23 aws_account_id.dkr.ecr.us-west-2.amazonaws.com/my-repository:tag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push aws_account_id.dkr.ecr.us-west-2.amazonaws.com/my-repository:tag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Create a Cloud9 environment
&lt;/h1&gt;

&lt;p&gt;Cloud9 is an integrated development environment (IDE) that runs in the cloud. It provides a fully featured Linux environment with a web-based IDE and the ability to run commands in a terminal. You can use Cloud9 to write and test your code, as well as interact with your AWS resources. Feel free to skip Cloud9 and use your local shell; I personally prefer using SSH to set up Git.&lt;/p&gt;

&lt;p&gt;To create a Cloud9 environment, follow these steps:&lt;/p&gt;

&lt;p&gt;Navigate to the Cloud9 console in the AWS Management Console.&lt;br&gt;
Click "Create environment."&lt;br&gt;
Give your environment a name and select the settings you want.&lt;br&gt;
Choose an instance type that suits your needs. For this tutorial, we recommend using the t2.micro instance type.&lt;br&gt;
Click "Create environment."&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddi6hnj0bg1r9nodpma1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddi6hnj0bg1r9nodpma1.png" alt="Cloud9" width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xcxnpa4kfzlakmms1kb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xcxnpa4kfzlakmms1kb.png" alt="my-cc-env" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this demonstration I will be using my local Git Bash. Make sure kubectl is installed either in your Cloud9 environment or your local shell.&lt;/p&gt;
&lt;h1&gt;
  
  
  Create an EKS Cluster
&lt;/h1&gt;

&lt;p&gt;Cluster Policy&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:UpdateAutoScalingGroup",
                "ec2:AttachVolume",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:CreateRoute",
                "ec2:CreateSecurityGroup",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:DeleteRoute",
                "ec2:DeleteSecurityGroup",
                "ec2:DeleteVolume",
                "ec2:DescribeInstances",
                "ec2:DescribeRouteTables",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVolumes",
                "ec2:DescribeVolumesModifications",
                "ec2:DescribeVpcs",
                "ec2:DescribeDhcpOptions",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeAvailabilityZones",
                "ec2:DetachVolume",
                "ec2:ModifyInstanceAttribute",
                "ec2:ModifyVolume",
                "ec2:RevokeSecurityGroupIngress",
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAddresses",
                "ec2:DescribeInternetGateways",
                "elasticloadbalancing:AddTags",
                "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
                "elasticloadbalancing:AttachLoadBalancerToSubnets",
                "elasticloadbalancing:ConfigureHealthCheck",
                "elasticloadbalancing:CreateListener",
                "elasticloadbalancing:CreateLoadBalancer",
                "elasticloadbalancing:CreateLoadBalancerListeners",
                "elasticloadbalancing:CreateLoadBalancerPolicy",
                "elasticloadbalancing:CreateTargetGroup",
                "elasticloadbalancing:DeleteListener",
                "elasticloadbalancing:DeleteLoadBalancer",
                "elasticloadbalancing:DeleteLoadBalancerListeners",
                "elasticloadbalancing:DeleteTargetGroup",
                "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
                "elasticloadbalancing:DeregisterTargets",
                "elasticloadbalancing:DescribeListeners",
                "elasticloadbalancing:DescribeLoadBalancerAttributes",
                "elasticloadbalancing:DescribeLoadBalancerPolicies",
                "elasticloadbalancing:DescribeLoadBalancers",
                "elasticloadbalancing:DescribeTargetGroupAttributes",
                "elasticloadbalancing:DescribeTargetGroups",
                "elasticloadbalancing:DescribeTargetHealth",
                "elasticloadbalancing:DetachLoadBalancerFromSubnets",
                "elasticloadbalancing:ModifyListener",
                "elasticloadbalancing:ModifyLoadBalancerAttributes",
                "elasticloadbalancing:ModifyTargetGroup",
                "elasticloadbalancing:ModifyTargetGroupAttributes",
                "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
                "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NodeGroup Policy&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SharedSecurityGroupRelatedPermissions",
            "Effect": "Allow",
            "Action": [
                "ec2:RevokeSecurityGroupIngress",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:DescribeInstances",
                "ec2:RevokeSecurityGroupEgress",
                "ec2:DeleteSecurityGroup"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ec2:ResourceTag/eks": "*"
                }
            }
        },
        {
            "Sid": "EKSCreatedSecurityGroupRelatedPermissions",
            "Effect": "Allow",
            "Action": [
                "ec2:RevokeSecurityGroupIngress",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:DescribeInstances",
                "ec2:RevokeSecurityGroupEgress",
                "ec2:DeleteSecurityGroup"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ec2:ResourceTag/eks:nodegroup-name": "*"
                }
            }
        },
        {
            "Sid": "LaunchTemplateRelatedPermissions",
            "Effect": "Allow",
            "Action": [
                "ec2:DeleteLaunchTemplate",
                "ec2:CreateLaunchTemplateVersion"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ec2:ResourceTag/eks:nodegroup-name": "*"
                }
            }
        },
        {
            "Sid": "AutoscalingRelatedPermissions",
            "Effect": "Allow",
            "Action": [
                "autoscaling:UpdateAutoScalingGroup",
                "autoscaling:DeleteAutoScalingGroup",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "autoscaling:CompleteLifecycleAction",
                "autoscaling:PutLifecycleHook",
                "autoscaling:PutNotificationConfiguration",
                "autoscaling:EnableMetricsCollection"
            ],
            "Resource": "arn:aws:autoscaling:*:*:*:autoScalingGroupName/eks-*"
        },
        {
            "Sid": "AllowAutoscalingToCreateSLR",
            "Effect": "Allow",
            "Condition": {
                "StringEquals": {
                    "iam:AWSServiceName": "autoscaling.amazonaws.com"
                }
            },
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "*"
        },
        {
            "Sid": "AllowASGCreationByEKS",
            "Effect": "Allow",
            "Action": [
                "autoscaling:CreateOrUpdateTags",
                "autoscaling:CreateAutoScalingGroup"
            ],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:TagKeys": [
                        "eks",
                        "eks:cluster-name",
                        "eks:nodegroup-name"
                    ]
                }
            }
        },
        {
            "Sid": "AllowPassRoleToAutoscaling",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "autoscaling.amazonaws.com"
                }
            }
        },
        {
            "Sid": "AllowPassRoleToEC2",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*",
            "Condition": {
                "StringEqualsIfExists": {
                    "iam:PassedToService": [
                        "ec2.amazonaws.com",
                        "ec2.amazonaws.com.cn"
                    ]
                }
            }
        },
        {
            "Sid": "PermissionsToManageResourcesForNodegroups",
            "Effect": "Allow",
            "Action": [
                "iam:GetRole",
                "ec2:CreateLaunchTemplate",
                "ec2:DescribeInstances",
                "iam:GetInstanceProfile",
                "ec2:DescribeLaunchTemplates",
                "autoscaling:DescribeAutoScalingGroups",
                "ec2:CreateSecurityGroup",
                "ec2:DescribeLaunchTemplateVersions",
                "ec2:RunInstances",
                "ec2:DescribeSecurityGroups",
                "ec2:GetConsoleOutput",
                "ec2:DescribeRouteTables",
                "ec2:DescribeSubnets"
            ],
            "Resource": "*"
        },
        {
            "Sid": "PermissionsToCreateAndManageInstanceProfiles",
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:DeleteInstanceProfile",
                "iam:RemoveRoleFromInstanceProfile",
                "iam:AddRoleToInstanceProfile"
            ],
            "Resource": "arn:aws:iam::*:instance-profile/eks-*"
        },
        {
            "Sid": "PermissionsToManageEKSAndKubernetesTags",
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags",
                "ec2:DeleteTags"
            ],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringLike": {
                    "aws:TagKeys": [
                        "eks",
                        "eks:cluster-name",
                        "eks:nodegroup-name",
                        "kubernetes.io/cluster/*"
                    ]
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use these policies for the cluster role and the node group role. If you are on Cloud9, make sure it can access the resources.&lt;/p&gt;

&lt;p&gt;Now configure the AWS CLI&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;WARNING! Don't create the cluster with an assumed IAM role: you will be able to create it, but you won't be able to issue kubectl commands against it later. Use the same IAM user to both create and manage the cluster.&lt;/p&gt;
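&lt;p&gt;If you do need to grant additional users access later, EKS maps IAM identities to Kubernetes users through the aws-auth ConfigMap in the kube-system namespace. A sketch, where the account ID and user name are placeholders:&lt;/p&gt;

```yaml
# aws-auth ConfigMap sketch - the ARN below is a placeholder
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/another-admin
      username: another-admin
      groups:
        - system:masters
```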

&lt;p&gt;EKS is a fully managed Kubernetes service that makes it easy to deploy, manage, and scale containerized applications. You can use EKS to deploy your website's containerized application and manage its infrastructure.&lt;/p&gt;

&lt;p&gt;To create an EKS cluster, follow these steps:&lt;/p&gt;

&lt;p&gt;Navigate to the EKS console in the AWS Management Console.&lt;br&gt;
Click "Create cluster."&lt;br&gt;
Choose a Kubernetes version and give your cluster a name.&lt;br&gt;
Choose the VPC and subnet where you want to deploy your cluster.&lt;br&gt;
Choose the security group for your cluster and click "Create."&lt;/p&gt;
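&lt;p&gt;Alternatively, the same cluster can be described in a config file and created with eksctl. A sketch; the cluster name, region, instance type, and node count are illustrative:&lt;/p&gt;

```yaml
# cluster.yaml - eksctl ClusterConfig sketch
# Create with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-west-2
managedNodeGroups:
  - name: spot-ng
    instanceType: t3.medium
    desiredCapacity: 2
    spot: true        # use Spot capacity for the worker nodes
```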

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4rniovhd2x0ws6sd8nk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4rniovhd2x0ws6sd8nk.png" alt="EKS" width="800" height="203"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnajgf3gs7xixpupyyk3r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnajgf3gs7xixpupyyk3r.png" alt="nodegroup" width="800" height="152"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71wks0b7jitl47m3efrh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71wks0b7jitl47m3efrh.png" alt="nodegroup-details" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have Minikube set up, and this is the status of my local machine&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9bif11mr9bsho7g90xt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9bif11mr9bsho7g90xt.png" alt="local-nodes-and-pods" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpzkux42zvfn175bqplsp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpzkux42zvfn175bqplsp.png" alt="config" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have to create a new kubeconfig because the current one points at my local Minikube cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs5alg3an6ck93npy3y6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs5alg3an6ck93npy3y6.png" alt="sts-command" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pg6x4x9i2yids2fxuvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pg6x4x9i2yids2fxuvg.png" alt="cat-kube-config" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I just backed up my old config file and generated a new config inside .kube&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks update-kubeconfig --region region-code --name my-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1w4gow4c939gzcjqvo4d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1w4gow4c939gzcjqvo4d.png" alt="kube-config-backup" width="800" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's add a node group and assign the role to it. You can also configure auto scaling on the node group; I've used Spot Instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5tnldnizuc57nyq1u96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5tnldnizuc57nyq1u96.png" alt="kubectl-node-pod-remote" width="679" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can see the two instances as nodes, and no pods in the default namespace. &lt;/p&gt;

&lt;p&gt;Let's run the image from ECR.&lt;/p&gt;

&lt;p&gt;Either deploy from a YAML manifest or let kubectl generate one for you. Here's the YAML that was generated for me previously.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    creationTimestamp: "2023-02-02T05:55:05Z"
    generation: 4
    labels:
      app: my-eks
    name: my-eks
    namespace: default
    resourceVersion: "41352"
    uid: 90024fca-9212-40c5-9546-d9b85dfdd98f
  spec:
    progressDeadlineSeconds: 600
    replicas: 2
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: my-eks
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: my-eks
      spec:
        containers:
        - image: ecr-address/imagename:tag
          imagePullPolicy: Always
          name: my-eks
          ports:
          - containerPort: 80
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        priorityClassName: system-node-critical
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
  status:
    availableReplicas: 2
    conditions:
    - lastTransitionTime: "2023-02-02T05:55:05Z"
      lastUpdateTime: "2023-02-02T05:55:45Z"
      message: ReplicaSet "my-eks-c86b768" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    - lastTransitionTime: "2023-02-02T08:39:56Z"
      lastUpdateTime: "2023-02-02T08:39:56Z"
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    observedGeneration: 4
    readyReplicas: 2
    replicas: 2
    updatedReplicas: 2
kind: List
metadata:
  resourceVersion: ""
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ready to create a deployment?&lt;br&gt;
Let's do it from the YAML:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f my_deployment_file.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create deployment my-eks --image=ecr-address/image:tag&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;now expose the deployment&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl expose deployment my-eks --type=LoadBalancer --port=80 --target-port=80&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;With type=LoadBalancer we leverage the cloud provider's load balancer; on AWS, that is Elastic Load Balancing.&lt;/p&gt;
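&lt;p&gt;The expose command above is equivalent to applying a Service manifest along these lines:&lt;/p&gt;

```yaml
# Service manifest equivalent to the kubectl expose command
apiVersion: v1
kind: Service
metadata:
  name: my-eks
spec:
  type: LoadBalancer   # provisions an AWS load balancer
  selector:
    app: my-eks        # matches the pods of the my-eks deployment
  ports:
    - port: 80         # port exposed by the load balancer
      targetPort: 80   # container port
```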

&lt;p&gt;Right now there is just one replica; let's scale the application to run four.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl scale --replicas=4 deployment my-eks&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frx54x9r38c8565joe3di.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frx54x9r38c8565joe3di.png" alt="external url" width="800" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The service exposes an external URL. Your deployment is load balanced and reachable at that public-facing URL.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get services&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;h1&gt;
  
  
  Finally Create a Pipeline
&lt;/h1&gt;

&lt;p&gt;Create a CodePipeline pipeline&lt;/p&gt;

&lt;p&gt;CodePipeline is a fully managed continuous delivery service that makes it easy to automate your release process. You can use CodePipeline to create a pipeline that automatically builds, tests, and deploys your website.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0dh4w6ccfwis312e1om.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0dh4w6ccfwis312e1om.png" alt="AWS Codepipeline" width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since this post has grown long, I'm splitting the content into two parts. Automation will be covered in the next post, which will continue from this point. Stay tuned.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Serverless API Deployment with AWS Lambda and API Gateway</title>
      <dc:creator>Deepak Poudel</dc:creator>
      <pubDate>Sun, 26 Feb 2023 16:24:45 +0000</pubDate>
      <link>https://dev.to/poudeldipak/serverless-api-deployment-with-aws-lambda-and-api-gateway-3om5</link>
      <guid>https://dev.to/poudeldipak/serverless-api-deployment-with-aws-lambda-and-api-gateway-3om5</guid>
      <description>&lt;p&gt;Serverless API Deployment with AWS Lambda and API Gateway&lt;br&gt;
Introduction:&lt;br&gt;
Serverless API deployment with AWS Lambda and API Gateway is a cloud computing model in which an application runs on a serverless architecture without the need to manage servers, operating systems, or infrastructure. With AWS Lambda, developers can write code in response to various triggers, such as an HTTP request, and the code will execute without requiring a server to be running at all times. AWS API Gateway is a fully managed service that enables developers to create, publish, and manage APIs at scale.&lt;/p&gt;

&lt;p&gt;In a serverless API deployment, AWS Lambda functions act as the backend code for an API Gateway API. The API Gateway is responsible for handling requests from clients and forwarding them to the appropriate AWS Lambda function. The Lambda function performs the necessary operations and returns a response to the API Gateway, which sends the response back to the client.&lt;/p&gt;

&lt;p&gt;The benefits of serverless API deployment with AWS Lambda and API Gateway include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduced operational overhead: with a serverless deployment model, there is no need to manage servers, operating systems, or infrastructure.&lt;/li&gt;
&lt;li&gt;Scalability: serverless architectures scale automatically in response to changes in demand, allowing APIs to handle many requests without manual scaling.&lt;/li&gt;
&lt;li&gt;Cost-effectiveness: serverless APIs are charged based on the number of requests and the execution time rather than the amount of server time provisioned.&lt;/li&gt;
&lt;li&gt;Ease of development: AWS Lambda and API Gateway provide a simple, intuitive interface for developing, testing, and deploying serverless APIs, which speeds up the development process.&lt;/li&gt;
&lt;li&gt;Security: AWS Lambda and API Gateway provide robust security features, such as identity and access management, encryption, and monitoring, to help protect serverless APIs from security threats.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pricing:&lt;/p&gt;

&lt;p&gt;AWS Lambda:&lt;br&gt;
AWS Lambda pricing is based on the number of requests and the duration of each function execution. AWS Lambda offers a generous free tier of 1 million requests and 400,000 GB-seconds of compute time per month. After that, pricing is based on the number of requests, the duration of each execution, and the amount of memory the function uses.&lt;/p&gt;

&lt;p&gt;API Gateway:&lt;br&gt;
API Gateway pricing is based on the number of API calls and the amount of data transferred out. The free tier includes 1 million API calls per month for the first 12 months. After that, pricing is based on the number of API calls and the amount of data transferred out.&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign in to the AWS Management Console and navigate to the Lambda service.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8wxmi0q5f12gxbp7040.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8wxmi0q5f12gxbp7040.png" alt=" " width="800" height="210"&gt;&lt;/a&gt;&lt;br&gt;
Choose Create function&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadubmmcdvq0ad683wf5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadubmmcdvq0ad683wf5u.png" alt=" " width="800" height="159"&gt;&lt;/a&gt;&lt;br&gt;
● Choose Author from scratch&lt;br&gt;
● Choose a runtime: AWS Lambda supports various programming languages, including Java, Python, Node.js, Go, and others. Choose the language that best fits your needs and expertise.&lt;br&gt;
● Choose an architecture: x86_64&lt;br&gt;
● Expand Change default execution role and select Create a new role with basic Lambda permissions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2i7mao10o9cqmu4vffvx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2i7mao10o9cqmu4vffvx.png" alt=" " width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr79kak8qr3jh3ypyunuk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr79kak8qr3jh3ypyunuk.png" alt=" " width="800" height="215"&gt;&lt;/a&gt;&lt;br&gt;
● You will see a page similar to the one below&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fil3hpc3455xtum70ygzf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fil3hpc3455xtum70ygzf.png" alt=" " width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Modify the default lambda_function 
● We will modify the Lambda function to call an external API using the requests library.
● I am using a dummy API from &lt;a href="https://dummyjson.com/" rel="noopener noreferrer"&gt;https://dummyjson.com/&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkvowvttgariw58vilba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkvowvttgariw58vilba.png" alt=" " width="800" height="374"&gt;&lt;/a&gt;&lt;br&gt;
● The code looks like this&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffdpjyifugwxomtf14hj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffdpjyifugwxomtf14hj.png" alt=" " width="800" height="123"&gt;&lt;/a&gt;&lt;br&gt;
Code:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import json
import requests

def lambda_handler(event, context):
    base_url = 'https://dummyjson.com/'
    product_id = '1'
    resp = requests.get(url=base_url + 'products/' + product_id)
    if resp.status_code != 200:
        raise Exception('Failed to retrieve product: ' + resp.text)
    return {
        'statusCode': resp.status_code,
        'body': resp.json()
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Q. What does this code do?&lt;br&gt;
1. Imports the necessary modules (json and requests).&lt;br&gt;
2. Defines a base URL and a product ID used to construct the full API URL.&lt;br&gt;
3. Sends a GET request to that URL using the requests library.&lt;br&gt;
4. Checks the response status code to confirm the request succeeded (status code 200).&lt;br&gt;
5. If the status code is not 200, raises an exception with an error message that includes the response text.&lt;br&gt;
6. If the status code is 200, returns a dictionary containing the status code and the parsed JSON response body.&lt;/p&gt;

&lt;p&gt;● Test the function.&lt;/p&gt;
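&lt;p&gt;One caveat worth noting, beyond this guide's setup: if you later use API Gateway's Lambda proxy integration, the body field of the response must be a string, not a dictionary, so the payload has to be serialized with json.dumps. A minimal sketch (the helper name is my own, not part of any AWS SDK):&lt;/p&gt;

```python
import json

def make_proxy_response(status_code, payload):
    # Lambda *proxy* integration requires `body` to be a string, so the
    # payload dictionary is JSON-serialized here before returning.
    return {
        'statusCode': status_code,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps(payload),
    }

resp = make_proxy_response(200, {'id': 1, 'title': 'iPhone 9'})
assert isinstance(resp['body'], str)  # a dict here would cause a 502 from API Gateway
```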

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x2f7qiiie4r0wjjx8ro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x2f7qiiie4r0wjjx8ro.png" alt=" " width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a test event&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faii2o16h6o8cm29e6s4g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faii2o16h6o8cm29e6s4g.png" alt=" " width="800" height="772"&gt;&lt;/a&gt;&lt;br&gt;
● You will get an error as shown:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffz80tdupsjgj6ui0sh7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffz80tdupsjgj6ui0sh7s.png" alt=" " width="800" height="189"&gt;&lt;/a&gt;&lt;br&gt;
Q. Why do we get this error, and how do we solve it?&lt;br&gt;
➢ The runtime does not include a library named requests by default. This is where Lambda layers come in.&lt;/p&gt;

&lt;p&gt;Lambda layers:&lt;br&gt;
AWS Lambda Layers are a way to manage common code and libraries in AWS Lambda functions. A layer is essentially a ZIP archive that contains libraries, custom runtimes, or other dependencies that multiple functions in your AWS account can use. Instead of bundling all the code and dependencies within each Lambda function, you can create a layer that includes common libraries or code and then associates that layer with one or more functions. This can help reduce the size of your function code and simplify updates to shared code or libraries.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fix the error 
● On your local system, create a directory.
● Install the required libraries into it (in our case, the requests library).
● Zip the folder containing the libraries.&lt;/li&gt;
&lt;/ol&gt;
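&lt;p&gt;For a Python runtime, the libraries must sit under a python/ directory at the root of the layer zip so Lambda can put them on the import path. The sketch below shows that layout being packaged; since the real pip install requests -t python/ step needs network access, a placeholder module stands in for the installed library here:&lt;/p&gt;

```python
import os
import tempfile
import zipfile

# Build a layer zip whose contents live under python/ -- the layout
# the Python Lambda runtime expects for layer dependencies.
workdir = tempfile.mkdtemp()
pkg_dir = os.path.join(workdir, 'python')
os.makedirs(pkg_dir)

# In practice: pip install requests -t python/
# Here a placeholder module stands in for the installed library.
with open(os.path.join(pkg_dir, 'placeholder.py'), 'w') as f:
    f.write('# installed dependency goes here\n')

layer_zip = os.path.join(workdir, 'requests-layer.zip')
with zipfile.ZipFile(layer_zip, 'w') as zf:
    for root, _dirs, files in os.walk(pkg_dir):
        for name in files:
            path = os.path.join(root, name)
            # Archive names are relative to workdir, keeping the python/ prefix.
            zf.write(path, os.path.relpath(path, workdir))

with zipfile.ZipFile(layer_zip) as zf:
    print(zf.namelist())  # every entry is rooted at python/
```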

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bmtx35fndbl21iy9uq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bmtx35fndbl21iy9uq8.png" alt=" " width="800" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;● Go to the Lambda dashboard, select Layers, and click Create layer&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ujptof5awup0fou3s8n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ujptof5awup0fou3s8n.png" alt=" " width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx1dt6psq18eq0htdn0fu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx1dt6psq18eq0htdn0fu.png" alt=" " width="800" height="208"&gt;&lt;/a&gt;&lt;br&gt;
● Give a name and a description, and upload the file you zipped in the step above&lt;br&gt;
● Choose compatible architectures and runtimes, then create the layer&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add layer
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosv4ai592gogmqhoob15.png" alt=" " width="800" height="92"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frt9k2h94aj3tqy0gsu62.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frt9k2h94aj3tqy0gsu62.png" alt=" " width="800" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Test the Lambda function&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jtz0uely85jwy9eo1va.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jtz0uely85jwy9eo1va.png" alt=" " width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79ayfqs7mf0gj0mbl5x7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79ayfqs7mf0gj0mbl5x7.png" alt=" " width="800" height="269"&gt;&lt;/a&gt;&lt;br&gt;
Now the code executes successfully.&lt;br&gt;
Create an API Gateway&lt;br&gt;
● Go to the AWS Management Console and navigate to the API Gateway service&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fss7zoos7hmfuj18jhiwh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fss7zoos7hmfuj18jhiwh.png" alt=" " width="800" height="170"&gt;&lt;/a&gt;&lt;br&gt;
Choose REST API&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqknlkbz1six3fo1llx10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqknlkbz1six3fo1llx10.png" alt=" " width="800" height="386"&gt;&lt;/a&gt;&lt;br&gt;
● Create API&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbfyn9o5ale5ekcl7tuh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbfyn9o5ale5ekcl7tuh.png" alt=" " width="800" height="320"&gt;&lt;/a&gt;&lt;br&gt;
● Go to Actions and choose Create Method&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4xpldk4vohbo9o3fb7b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4xpldk4vohbo9o3fb7b.png" alt=" " width="800" height="244"&gt;&lt;/a&gt;&lt;br&gt;
● Select GET, set it up, and save:&lt;br&gt;
- We are integrating with a Lambda function, so set the integration type to Lambda Function.&lt;br&gt;
- Choose the Lambda region.&lt;br&gt;
- Choose the Lambda function created above.&lt;br&gt;
- Save.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0you87xsn1t7y3oaufd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0you87xsn1t7y3oaufd.png" alt=" " width="800" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw87xl0vw9wb1isxgrkm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw87xl0vw9wb1isxgrkm.png" alt=" " width="800" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;● Add Permission to Lambda Function&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4y8gci3w9qkcvg7ajcf6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4y8gci3w9qkcvg7ajcf6.png" alt=" " width="800" height="206"&gt;&lt;/a&gt;&lt;br&gt;
Test &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvtu36baemexmflvjwo3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvtu36baemexmflvjwo3.png" alt=" " width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12hy0v3bxkfs3bfbwuv7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12hy0v3bxkfs3bfbwuv7.png" alt=" " width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftegxfykyjk7xel6tenow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftegxfykyjk7xel6tenow.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that the API returns the response we expect&lt;/p&gt;

&lt;p&gt;●Enable CORS:&lt;br&gt;
Enabling CORS (Cross-Origin Resource Sharing) in API Gateway allows web applications from different domains to access resources served by your API.&lt;/p&gt;

&lt;p&gt;Without CORS, web applications can only make requests to endpoints on the same domain where the application is hosted. If you enable CORS, you allow requests to be made from other domains, which can be useful for building web applications that use APIs hosted on a different domain.&lt;/p&gt;
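&lt;p&gt;When the Lambda function builds the response itself (as in proxy-style setups), the CORS headers must be attached to that response. A hedged sketch of what that looks like; the helper name is my own, and the header values shown are a permissive example, not a recommended production policy:&lt;/p&gt;

```python
import json

def with_cors(payload, origin='*'):
    # Attach the CORS response headers that let browsers on other
    # domains call this API; '*' allows any origin (illustrative only).
    return {
        'statusCode': 200,
        'headers': {
            'Access-Control-Allow-Origin': origin,
            'Access-Control-Allow-Methods': 'GET,OPTIONS',
            'Access-Control-Allow-Headers': 'Content-Type',
        },
        'body': json.dumps(payload),
    }

cors_resp = with_cors({'ok': True})
print(cors_resp['headers']['Access-Control-Allow-Origin'])  # → *
```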

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6fd53qud6ac6s5e9zm5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6fd53qud6ac6s5e9zm5.png" alt=" " width="800" height="567"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fju6wp724kbojszyp2n8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fju6wp724kbojszyp2n8p.png" alt=" " width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fcwqhjy9godb0cjewdz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fcwqhjy9godb0cjewdz.png" alt=" " width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;● Deploy API&lt;br&gt;
- Give a stage name and an optional description&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqedzo4s2gxiakq68eoo8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqedzo4s2gxiakq68eoo8.png" alt=" " width="356" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwt1at78a3j8egbp7pyhy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwt1at78a3j8egbp7pyhy.png" alt=" " width="800" height="232"&gt;&lt;/a&gt;&lt;br&gt;
We have successfully deployed our API and obtained the invoke URL.&lt;br&gt;
● Execute the Lambda function when the API is triggered:&lt;br&gt;
- Go to the Lambda page and click Add trigger&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fud970tqs26yh3032i24s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fud970tqs26yh3032i24s.png" alt=" " width="800" height="227"&gt;&lt;/a&gt;&lt;br&gt;
-Set trigger configuration as shown below&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqqk7shk2eaed7ukklqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqqk7shk2eaed7ukklqy.png" alt=" " width="800" height="772"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;● Paste the invoke URL into a browser, and you will see the desired output in JSON format&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6xonvpxe88r3mvgiwrd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6xonvpxe88r3mvgiwrd.png" alt=" " width="800" height="73"&gt;&lt;/a&gt;&lt;br&gt;
Conclusion:&lt;br&gt;
In conclusion, AWS Lambda and API Gateway are powerful services that can be used to build and deploy scalable, cost-effective APIs quickly and easily. With Lambda, you can run your code without provisioning or managing servers, and only pay for the computing time you consume. API Gateway makes it easy to create and manage APIs that can be accessed by clients over the internet, with features such as authentication, rate limiting, and caching built in.&lt;/p&gt;

</description>
      <category>animation</category>
      <category>css</category>
      <category>productdesign</category>
      <category>freelancing</category>
    </item>
    <item>
      <title>Docker Multi Container GitHub Action Deployment Pipeline using AWS Elastic Beanstalk</title>
      <dc:creator>Deepak Poudel</dc:creator>
      <pubDate>Tue, 30 Aug 2022 17:54:05 +0000</pubDate>
      <link>https://dev.to/poudeldipak/docker-multi-container-github-action-deployment-pipeline-using-aws-elastic-beanstalk-3h5m</link>
      <guid>https://dev.to/poudeldipak/docker-multi-container-github-action-deployment-pipeline-using-aws-elastic-beanstalk-3h5m</guid>
      <description>&lt;p&gt;In this article, we will build multiple containers using GitHub Actions and deploy them to AWS Elastic Beanstalk. Containers are the de facto method used in the industry to avoid the most common complaint during the QA phase: “it works on my machine.” They need an engine such as Docker to run. We describe a container in a simple file that specifies the operating system to use, the runtimes, and the artifacts that need to execute inside it. &lt;/p&gt;

&lt;p&gt;Let’s get started. You can find the code at the GitHub link below.&lt;br&gt;
&lt;a href="https://github.com/poudeldipak/container_blog/tree/main/multi-docker-main" rel="noopener noreferrer"&gt;GitHub Link&lt;/a&gt;&lt;br&gt;
After you have downloaded it, you will see the files below&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpeagv354sfw2t1mz68u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpeagv354sfw2t1mz68u.png" alt="Clone" width="800" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clone the project or download it as a zip. To run the project locally, use the docker-compose-dev.yml file with the command below in the project’s directory.&lt;br&gt;
This project keeps a separate docker-compose-dev.yml file because docker-compose.yml is the production version, which pulls images from a registry (Docker Hub for now). The images could be hosted in Amazon Elastic Container Registry (ECR) instead; to do that, log in to generate the Docker config and AWS credentials, change the image tags from your Docker Hub username to the ECR URL and image name, and update the registry URL in the production docker-compose.yml.&lt;br&gt;
&lt;code&gt;docker-compose -f docker-compose-dev.yml up&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Docker Compose then downloads the dependencies and runs the instructions in each Dockerfile (COPY, ADD, npm install, and so on), building the images for the containers one after another.&lt;br&gt;
You can list the containers and images with the commands below, run from any directory, not necessarily the project’s.&lt;br&gt;
We now have a project with redis and postgres as external container dependencies, plus a React client, a Node.js server, and an nginx container that exposes HTTP port 80. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker ps --all&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdofhcy4k5jff6ukemgv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdofhcy4k5jff6ukemgv.png" alt="Docker Images" width="747" height="354"&gt;&lt;/a&gt;&lt;br&gt;
&lt;code&gt;docker images&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rj3rmbye6x2pu94kg2d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rj3rmbye6x2pu94kg2d.png" alt="Image description" width="800" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31ccdmhvawfr8pq6gfck.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31ccdmhvawfr8pq6gfck.png" alt="Image description" width="800" height="471"&gt;&lt;/a&gt;&lt;br&gt;
Networking between the containers&lt;br&gt;
Networking is a vast topic in its own right. For now we don&#8217;t have to worry about inter-container networking, because all of the containers join the default bridge network that Compose creates. Thanks to that bridge and the service hostnames, each container can pinpoint the others and communicate over a virtual network; we rely on those hostnames in our configuration.&lt;/p&gt;
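&lt;p&gt;To make the hostname idea concrete, here is a small sketch of how the server side might build its Redis connection options. The env var names (REDIS_HOST, REDIS_PORT) are assumptions for illustration; the point is that on the default bridge network the compose service name &#8220;redis&#8221; resolves directly to the redis container.&lt;/p&gt;

```javascript
// Sketch only: env var names are assumptions. On the compose default bridge
// network the service name "redis" resolves to the redis container, so it
// makes a sensible default hostname.
function redisOptionsFromEnv(env) {
  return {
    host: env.REDIS_HOST || 'redis',             // compose service name as hostname
    port: parseInt(env.REDIS_PORT || '6379', 10),
    retry_strategy: () => 1000,                  // retry every second if redis is down
  };
}

console.log(redisOptionsFromEnv(process.env));
```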

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flay1p1wz2vyqmlm30iki.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flay1p1wz2vyqmlm30iki.png" alt="Image description" width="208" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Solution&lt;br&gt;
The project calculates the Fibonacci value for the nth position. It uses React for the frontend, Redis as a cache so that previously calculated values can be retrieved without recomputation, and a Postgres database to store the calculated values. An nginx container serves the load-balanced HTTP traffic.&lt;/p&gt;

&lt;p&gt;Client&lt;br&gt;
The client is a React project that queries the API for previously calculated values and asks the worker container, via the API, to evaluate new Fibonacci values. The container is built in two stages. The first stage, called builder, uses node:16-alpine to install dependencies and build the React bundle. After the bundle is built, the HTML, CSS and JavaScript chunks are copied into a second container based on the nginx image, which exposes port 80 internally.&lt;/p&gt;
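&lt;p&gt;The two-stage build described above can be sketched as follows. The paths are assumptions; the node:16-alpine base, the builder stage name, and the internal port 80 come from the description.&lt;/p&gt;

```dockerfile
# Stage 1: "builder" installs dependencies and produces the React bundle
FROM node:16-alpine AS builder
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# Stage 2: copy the static bundle into an nginx image; port 80 is internal
FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html
```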

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmr68zyvzo75stibd2aii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmr68zyvzo75stibd2aii.png" alt="Image description" width="389" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yj3hfvnv31hf9bblhv9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yj3hfvnv31hf9bblhv9.png" alt="Image description" width="777" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjr2olup7tb8fsvtbmoi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjr2olup7tb8fsvtbmoi.png" alt="Image description" width="483" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Dockerfile.dev is also included so that the CI pipeline can run the tests and stop the push to Elastic Beanstalk if they fail.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4iaj54rc4wyynjlt8qi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4iaj54rc4wyynjlt8qi.png" alt="Image description" width="595" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The client&#8217;s nginx listens on port 3000. Here&#8217;s the catch: the top-level nginx container and the nginx inside the client image are different entities. The top-level nginx listens on port 80 and routes traffic, while the nginx inside the client image is just a static file server for the React build, reached through the top-level nginx on port 80. Confusing? I bet it is.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcpb05j2l1s7i2v6jd1c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcpb05j2l1s7i2v6jd1c.png" alt="Image description" width="800" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nginx&lt;br&gt;
Nginx is the container that fronts all HTTP traffic to and from the services, and it exposes port 80. Two upstreams, api on port 5000 and client on port 3000, are attached to the listener on port 80.&lt;/p&gt;
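&lt;p&gt;The routing just described can be sketched in an nginx config like the one below. The upstream names match the compose service names assumed earlier, and the /api prefix rewrite is a common convention in this kind of setup rather than something confirmed by the text.&lt;/p&gt;

```nginx
# Sketch of the top-level nginx config (service names are assumptions)
upstream client {
  server client:3000;   # React app served by the client container's nginx
}

upstream api {
  server api:5000;      # Node.js server
}

server {
  listen 80;

  location / {
    proxy_pass http://client;
  }

  location /api {
    rewrite /api/(.*) /$1 break;  # strip the /api prefix before forwarding
    proxy_pass http://api;
  }
}
```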

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1y9d0xr65hdqszxmputn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1y9d0xr65hdqszxmputn.png" alt="Image description" width="408" height="503"&gt;&lt;/a&gt;&lt;br&gt;
Server&lt;br&gt;
The server is a container that communicates with Postgres and Redis using the environment variables provided in the docker-compose.yml file. It exposes endpoints to fetch values that have already been calculated and to post new values to be calculated, and it contains the caching logic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomavzwh75hie3ziwzz6t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomavzwh75hie3ziwzz6t.png" alt="Image description" width="530" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhofp7waq90wqjdd0lsmt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhofp7waq90wqjdd0lsmt.png" alt="Image description" width="488" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It creates a subscription on the Redis client for the given index and then adds the index to Postgres. So instead of querying Postgres, which is definitely more resource-intensive than reading from Redis, the server can fetch values from Redis used as a cache.&lt;br&gt;
Worker&lt;br&gt;
The worker is a Node.js application that recursively calculates Fibonacci values when a message arrives on the subscribed topic and publishes the result to the Redis cache.&lt;/p&gt;
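&lt;p&gt;The calculation itself is a plain recursive function; a minimal sketch is below. The Redis pub/sub wiring is omitted, and the deliberately naive recursion is the point: it is expensive enough to justify caching each result once computed.&lt;/p&gt;

```javascript
// Recursive Fibonacci in the spirit of the worker: intentionally naive and
// exponential, because each result is computed once and then served from
// the Redis cache ever after.
function fib(index) {
  if (index === 0 || index === 1) return 1;
  return fib(index - 1) + fib(index - 2);
}

// On each message the worker would call fib(index) and publish the result
// back to Redis (the pub/sub wiring is omitted in this sketch).
console.log(fib(10)); // prints 89 with this convention (fib(0) = fib(1) = 1)
```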

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgziiji4o7ijxxjdglps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgziiji4o7ijxxjdglps.png" alt="Image description" width="488" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub Actions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf9dxn3tooqn6xc5afvp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf9dxn3tooqn6xc5afvp.png" alt="Image description" width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Felslq7umeuparcudd1xo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Felslq7umeuparcudd1xo.png" alt="Image description" width="800" height="677"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkeo0gelydm66jdrr3rk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkeo0gelydm66jdrr3rk.png" alt="Image description" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Log in to Docker Hub, generate an access token, and copy it to the clipboard so it can be used as the Docker password in the GitHub Actions secrets.&lt;br&gt;
The Docker username, the access token, and the AWS access and secret keys should all be added as Actions secrets in the GitHub repository.&lt;br&gt;
The GitHub workflow is triggered when a push is made to the main branch, as defined in the workflow file. It logs in to Docker Hub, builds the containers from the artifact and pushes them to the Docker repository.&lt;/p&gt;
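&lt;p&gt;A sketch of such a workflow is below. The secret names, image tags and branch name are assumptions based on the description, not the project&#8217;s exact file.&lt;/p&gt;

```yaml
# .github/workflows/deploy.yml - sketch; secret names and image tags are assumed
name: Deploy
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # Run the tests with the dev image before anything is pushed
      - run: docker build -t client-test -f ./client/Dockerfile.dev ./client
      - run: docker run -e CI=true client-test npm test

      # Build the production images and push them to Docker Hub
      - run: docker login -u ${{ secrets.DOCKER_USERNAME }} -p ${{ secrets.DOCKER_PASSWORD }}
      - run: docker build -t mydockeruser/multi-client ./client
      - run: docker push mydockeruser/multi-client
```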

&lt;p&gt;Create an Elastic Beanstalk application and add the application name, environment name and region to the GitHub workflow file. Create an IAM user with programmatic access that can reach the S3 bucket and Elastic Beanstalk, and add its keys as well. Check S3 for the bucket created by Elastic Beanstalk and add the bucket name to the workflow file.&lt;br&gt;
The Docker engine on Elastic Beanstalk expects a docker-compose.yml file to decide which containers to run. Since our compose file already lists the services (i.e. containers), the application should come up on its own. Try accessing the URL automatically generated by Elastic Beanstalk.&lt;/p&gt;

&lt;h1&gt;
  #awscommunity #awscommunitybuilders #aws #awssolutionarchitect #awscloud #awscertified #cloudcomputing #awscertification #awstraining #amazonwebservices #cloud #container #eks #ecs #ebs
&lt;/h1&gt;

</description>
    </item>
  </channel>
</rss>
