<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ahmed Salem</title>
    <description>The latest articles on DEV Community by Ahmed Salem (@ahmedsalem2020).</description>
    <link>https://dev.to/ahmedsalem2020</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1142095%2Fd418a340-f7a6-49f7-9ea3-fbbdfdfbad85.jpeg</url>
      <title>DEV Community: Ahmed Salem</title>
      <link>https://dev.to/ahmedsalem2020</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ahmedsalem2020"/>
    <language>en</language>
    <item>
      <title>Hey folks! I’m building my Medium profile where I share real-world AWS, GCP, and DevOps tutorials. I’m aiming to reach 100 followers — would love your support here ❤️: https://medium.com/@ahmedSalem2020</title>
      <dc:creator>Ahmed Salem</dc:creator>
      <pubDate>Wed, 09 Apr 2025 05:13:34 +0000</pubDate>
      <link>https://dev.to/ahmedsalem2020/hey-folks-im-building-my-medium-profile-where-i-share-real-world-aws-gcp-and-devops-tutorials-2ii4</link>
      <guid>https://dev.to/ahmedsalem2020/hey-folks-im-building-my-medium-profile-where-i-share-real-world-aws-gcp-and-devops-tutorials-2ii4</guid>
      <description></description>
      <category>aws</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>community</category>
    </item>
    <item>
      <title>Hey folks! I’m building my Medium profile where I share real-world AWS, GCP, and DevOps tutorials. I’m aiming to reach 100 followers. I would love your support here ❤️: https://medium.com/@ahmedSalem2020</title>
      <dc:creator>Ahmed Salem</dc:creator>
      <pubDate>Mon, 07 Apr 2025 07:01:19 +0000</pubDate>
      <link>https://dev.to/ahmedsalem2020/hey-folks-im-building-my-medium-profile-where-i-share-real-world-aws-gcp-and-devops-tutorials-1dd0</link>
      <guid>https://dev.to/ahmedsalem2020/hey-folks-im-building-my-medium-profile-where-i-share-real-world-aws-gcp-and-devops-tutorials-1dd0</guid>
      <description></description>
      <category>aws</category>
      <category>gcp</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Automating EC2 Instance Resizing with AWS Systems Manager (SSM) Document</title>
      <dc:creator>Ahmed Salem</dc:creator>
      <pubDate>Wed, 29 Jan 2025 06:13:37 +0000</pubDate>
      <link>https://dev.to/aws-builders/automating-ec2-instance-resizing-with-aws-systems-manager-ssm-document-1934</link>
      <guid>https://dev.to/aws-builders/automating-ec2-instance-resizing-with-aws-systems-manager-ssm-document-1934</guid>
      <description>&lt;p&gt;Managing EC2 instance sizes manually can be time-consuming, especially when scaling resources across multiple instances. In this guide, we’ll deploy an AWS Systems Manager (SSM) document using AWS CloudFormation (CFN) to automate the resizing of EC2 instances in bulk. This setup allows users to resize instances across different Availability Zones with a single click.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our goal is to create a CloudFormation stack that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploys multiple EC2 instances in different Availability Zones&lt;/li&gt;
&lt;li&gt;Creates an SSM document to automate the resizing of EC2 instances&lt;/li&gt;
&lt;li&gt;Allows resizing based on Availability Zone, Tags, and Instance Type&lt;/li&gt;
&lt;li&gt;Enables automation through AWS Systems Manager&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of manually stopping, resizing, and restarting instances, this solution streamlines the entire process into an automated workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The architecture consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 Instances - Launched with CloudFormation, spread across different Availability Zones&lt;/li&gt;
&lt;li&gt;AWS SSM Document - Defines the automation process for resizing instances&lt;/li&gt;
&lt;li&gt;IAM Role for SSM - Grants permissions to modify EC2 instance attributes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2zoqckfmnfeeqs1sr6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2zoqckfmnfeeqs1sr6j.png" alt="Image description" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Guide&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create a CloudFormation Template for EC2 Instances&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We first create an EC2 CloudFormation (CFN) template (ec2.yaml) to launch instances in different Availability Zones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ec2.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: 2010-09-09
#Description: EC2 with user data from CloudFormation.
Parameters:
  InstanceType:
    Description: WebServer EC2 instance type.
    Type: String
    Default: t2.micro
    AllowedValues:
      - t1.micro
      - t2.nano
      - t2.micro
      - t2.small
      - t2.medium
      - t2.large
      - m1.small
      - m1.medium
      - m1.large
      - m1.xlarge
      - m2.xlarge
      - m2.2xlarge
      - m2.4xlarge
      - m3.medium
      - m3.large
      - m3.xlarge
      - m3.2xlarge
      - m4.large
      - m4.xlarge
      - m4.2xlarge
      - m4.4xlarge
      - m4.10xlarge
      - c1.medium
      - c1.xlarge
      - c3.large
      - c3.xlarge
      - c3.2xlarge
      - c3.4xlarge
      - c3.8xlarge
      - c4.large
      - c4.xlarge
      - c4.2xlarge
      - c4.4xlarge
      - c4.8xlarge
      - g2.2xlarge
      - g2.8xlarge
      - r3.large
      - r3.xlarge
      - r3.2xlarge
      - r3.4xlarge
      - r3.8xlarge
      - i2.xlarge
      - i2.2xlarge
      - i2.4xlarge
      - i2.8xlarge
      - d2.xlarge
      - d2.2xlarge
      - d2.4xlarge
      - d2.8xlarge
      - hi1.4xlarge
      - hs1.8xlarge
      - cr1.8xlarge
      - cc2.8xlarge
      - cg1.4xlarge

  # Ec2Name:
  #   Description: Ec2 Resource name.
  #   Type: String

  # BucketName:
  #   Description: BucketName.
  #   Default: ahmedsalem-testbuckettt
  #   Type: String

  # ObjectPrefix:
  #   Description: index.html prefix, e.g. /
  #   Type: String
  #   Default: /

  # KeyName:
  #   Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
  #   Type: 'AWS::EC2::KeyPair::KeyName'
  #   ConstraintDescription: must be the name of an existing EC2 KeyPair.

  # SubnetId:
  #  Description: Subnet ID the web server instance will run in.
  #  Type: 'AWS::EC2::Subnet::Id'

  LatestAmiId:
    Type: 'AWS::SSM::Parameter::Value&amp;lt;AWS::EC2::Image::Id&amp;gt;'
    Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'

  SSHLocation:
    Description: The IP address range that can be used to SSH to the EC2 instance.
    Type: String
    MinLength: '9'
    MaxLength: '18'
    Default: 0.0.0.0/0
    AllowedPattern: '(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})'
    ConstraintDescription: Must be a valid IP CIDR range of the form x.x.x.x/x

Resources:
  WebServerInstance1:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: !Ref LatestAmiId
      NetworkInterfaces: 
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        SubnetId: subnet-ed2f8486
        GroupSet:
            - !GetAtt "WebServerSecurityGroup.GroupId"
      Tags:
       - Key: "Name"
         Value: "server_1"
       - Key: "ssm-tag"
         Value: "test-ssm"
      # SecurityGroupIds:
      #       - !GetAtt "WebServerSecurityGroup.GroupId"
      #IamInstanceProfile: !Ref IAMInstanceProfile

  WebServerInstance2:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: !Ref LatestAmiId
      NetworkInterfaces: 
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        SubnetId: subnet-0556b778
        GroupSet:
            - !GetAtt "WebServerSecurityGroup.GroupId"
      Tags:
       - Key: "Name"
         Value: "server_2"
       - Key: "ssm-tag"
         Value: "test-ssm"
      # SecurityGroupIds:
      #       - !GetAtt "WebServerSecurityGroup.GroupId"
      #IamInstanceProfile: !Ref IAMInstanceProfile

  WebServerInstance3:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: !Ref LatestAmiId
      NetworkInterfaces: 
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        SubnetId: subnet-34531b78
        GroupSet:
            - !GetAtt "WebServerSecurityGroup.GroupId"
      Tags:
       - Key: "Name"
         Value: "server_3"
      #  - Key: "ssm-tag"
      #    Value: "test-ssm"
      # SecurityGroupIds:
      #       - !GetAtt "WebServerSecurityGroup.GroupId"
      #IamInstanceProfile: !Ref IAMInstanceProfile

  WebServerInstance4:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: !Ref LatestAmiId
      NetworkInterfaces: 
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        SubnetId: subnet-ed2f8486
        GroupSet:
            - !GetAtt "WebServerSecurityGroup.GroupId"
      Tags:
       - Key: "Name"
         Value: "server_4"

  WebServerInstance5:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: !Ref LatestAmiId
      NetworkInterfaces: 
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        SubnetId: subnet-ed2f8486
        GroupSet:
            - !GetAtt "WebServerSecurityGroup.GroupId"
      Tags:
       - Key: "Name"
         Value: "server_5"

  WebServerInstance6:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: !Ref LatestAmiId
      NetworkInterfaces: 
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        SubnetId: subnet-ed2f8486
        GroupSet:
            - !GetAtt "WebServerSecurityGroup.GroupId"
      Tags:
       - Key: "Name"
         Value: "server_6"

  WebServerInstance7:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: !Ref LatestAmiId
      NetworkInterfaces: 
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        SubnetId: subnet-ed2f8486
        GroupSet:
            - !GetAtt "WebServerSecurityGroup.GroupId"
      Tags:
       - Key: "Name"
         Value: "server_7"

  WebServerSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: Enable HTTP access via port 80 and SSH access via port 22.
      # VpcId: vpc-0078f41e6b5568b6e
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: !Ref SSHLocation

  # IAMInstanceProfile:
  #  Type: AWS::IAM::InstanceProfile
  #  Properties:
  #    Roles:
  #      - !Ref IAMRole
  #    InstanceProfileName: ec2-access-s3

  # IAMRole:
  #  Type: AWS::IAM::Role
  #  Properties:
  #    RoleName: ec2-s3-access
  #    AssumeRolePolicyDocument:
  #      Version: '2012-10-17'
  #      Statement:
  #        - Effect: Allow
  #          Principal:
  #            Service: ec2.amazonaws.com
  #          Action: sts:AssumeRole
  #    Path: '/'
  #    Policies:
  #      - PolicyName: 'EC2Access'
  #        PolicyDocument:
  #          Version: '2012-10-17'
  #          Statement:
  #            - Effect: 'Allow'
  #              Action:
  #                - 's3:GetObject'
  #              Resource: !Sub 'arn:aws:s3:::${BucketName}/index.html'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What This Template Does:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Launches multiple EC2 instances with different tags&lt;/li&gt;
&lt;li&gt;Deploys the instances across multiple Availability Zones&lt;/li&gt;
&lt;li&gt;Assigns the instances a security group for controlled access&lt;/li&gt;
&lt;/ul&gt;
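&lt;p&gt;After the stack is created, you can spot-check that the tagged instances really span multiple Availability Zones. A minimal sketch using boto3 (the ssm-tag/test-ssm tag key and value come from the template; the grouping helper is just for illustration):&lt;/p&gt;

```python
from collections import defaultdict

def group_by_az(reservations):
    """Map each Availability Zone to the instance IDs placed in it."""
    by_az = defaultdict(list)
    for reservation in reservations:
        for instance in reservation["Instances"]:
            az = instance["Placement"]["AvailabilityZone"]
            by_az[az].append(instance["InstanceId"])
    return dict(by_az)

if __name__ == "__main__":
    import boto3  # only needed for the live check
    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:ssm-tag", "Values": ["test-ssm"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    for az, ids in sorted(group_by_az(resp["Reservations"]).items()):
        print(az, ids)
```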

&lt;p&gt;&lt;strong&gt;2. Create a CloudFormation Template for the SSM Document&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We now create another CloudFormation template (ssm.yaml) that defines the SSM document for resizing EC2 instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ssm.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: 2010-09-09

# Description: SSM Document for resizing list of EC2 instances

Resources:
  SSMDocumentRole:
   Type: AWS::IAM::Role
   Properties:
     AssumeRolePolicyDocument:
       Version: '2012-10-17'
       Statement:
         - Effect: Allow
           Principal:
             Service: ssm.amazonaws.com
           Action: sts:AssumeRole
     ManagedPolicyArns:
     - arn:aws:iam::aws:policy/AmazonEC2FullAccess
     Path: "/"

  document: 
    Type: AWS::SSM::Document
    Properties:
      DocumentType: Automation
      Content:
        schemaVersion: '0.3'
        description: 'Resize EC2 instances.'
        assumeRole: '{{ AutomationAssumeRole }}'
        parameters:
          AvailabilityZone:
            type: String
            description: "(Required) The impacted Availability Zone."
            default: 'us-east-2a'

          TagName:
            type: String
            description: "(Required) The Tag Name."
            default: 'ssm-tag'  

          TagValue:
            type: String
            description: "(Required) The Tag Value."
            default: 'test-ssm' 

          InstanceType:
            type: String
            description: "(Required) The Desired Instance Type."
            default: 't2.small'  

          # Override this to run the automation under a different IAM role
          AutomationAssumeRole:
            type: String
            description: "(Required) The ARN of the role that allows the automation to run."
            default: !GetAtt SSMDocumentRole.Arn

        mainSteps:
        - name: listInstances
          action: aws:executeAwsApi
          timeoutSeconds: '60'
          inputs:
            Service: ec2
            Api: DescribeInstances
            Filters:
              - Name: availability-zone
                Values: ["{{ AvailabilityZone }}"]

              - Name: instance-state-name
                Values: ["running"]

              - Name: tag:{{ TagName }}
                Values: ["{{ TagValue }}"]
          outputs:
          - Name: InstanceIds
            Selector: "$.Reservations..Instances..InstanceId"
            Type: StringList

        - name: stopInstances
          action: aws:changeInstanceState
          onFailure: Continue
          inputs:
            InstanceIds: "{{ listInstances.InstanceIds }}"
            DesiredState: stopped

        - name: ResizeInstances
          action: "aws:executeScript"
          inputs:
            Runtime: "python3.8"
            Handler: resizeInstance
            InputPayload:
              InstanceIds: "{{ listInstances.InstanceIds }}"
              InstanceType: "{{ InstanceType }}"
            Script: |-
              def resizeInstance(events, context):
                import boto3
                client = boto3.client('ec2')
                # Resize each stopped instance to the requested type
                for instance_id in events['InstanceIds']:
                  client.modify_instance_attribute(InstanceId=instance_id, Attribute='instanceType', Value=events['InstanceType'])
                return True
          onFailure: Continue

        - name: startInstances
          action: aws:changeInstanceState
          inputs: 
            InstanceIds: "{{ listInstances.InstanceIds }}"
            DesiredState: running
          isEnd: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What This Template Does:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates an SSM document that:
&lt;ul&gt;
&lt;li&gt;Identifies EC2 instances based on Availability Zone and Tags&lt;/li&gt;
&lt;li&gt;Stops the instances before resizing&lt;/li&gt;
&lt;li&gt;Modifies the instance type using boto3&lt;/li&gt;
&lt;li&gt;Restarts the instances after resizing&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Defines an IAM Role for SSM to interact with EC2&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Deploy the CloudFormation Templates&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Launch the EC2 Stack

&lt;ul&gt;
&lt;li&gt;Deploy the ec2.yaml template to provision instances.&lt;/li&gt;
&lt;li&gt;Ensure the Tag Name and Tag Value are correctly set.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Launch the SSM Stack

&lt;ul&gt;
&lt;li&gt;Deploy the ssm.yaml template to create the SSM document.&lt;/li&gt;
&lt;li&gt;Verify that the IAM role is correctly attached.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
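&lt;p&gt;The two stacks can also be launched from a script instead of the console. A minimal boto3 sketch, assuming the templates are saved locally as ec2.yaml and ssm.yaml (the stack names are arbitrary; ssm.yaml creates an IAM role, so that stack needs an IAM capability):&lt;/p&gt;

```python
def stack_request(name, template_body, needs_iam=False):
    """Build the kwargs for a CloudFormation create_stack call."""
    req = {"StackName": name, "TemplateBody": template_body}
    if needs_iam:
        # Stacks that create IAM resources must acknowledge an IAM capability
        req["Capabilities"] = ["CAPABILITY_IAM"]
    return req

if __name__ == "__main__":
    import boto3  # only needed for the live deployment
    cfn = boto3.client("cloudformation")
    for filename, needs_iam in (("ec2.yaml", False), ("ssm.yaml", True)):
        stack_name = filename.split(".")[0]
        with open(filename) as f:
            cfn.create_stack(**stack_request(stack_name, f.read(), needs_iam))
        # Block until the stack finishes creating before moving on
        cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
```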

&lt;p&gt;&lt;strong&gt;4. Execute the SSM Document to Resize Instances&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once both CloudFormation stacks are deployed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to AWS Systems Manager → Shared Documents&lt;/li&gt;
&lt;li&gt;Click on “Shared with me”&lt;/li&gt;
&lt;li&gt;Select your custom SSM document and click “Execute document”&lt;/li&gt;
&lt;li&gt;Enter the required parameters (Availability Zone, Tag Name, New Instance Type)&lt;/li&gt;
&lt;li&gt;Click Execute&lt;/li&gt;
&lt;/ol&gt;
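&lt;p&gt;The same execution can be started programmatically. A sketch with boto3, assuming the placeholder document name is replaced with the one created by the SSM stack (note that SSM automation parameters are passed as lists of strings):&lt;/p&gt;

```python
def automation_params(az, tag_name, tag_value, instance_type, role_arn):
    """Build the Parameters payload for StartAutomationExecution.

    Every automation parameter value must be wrapped in a list of strings."""
    return {
        "AvailabilityZone": [az],
        "TagName": [tag_name],
        "TagValue": [tag_value],
        "InstanceType": [instance_type],
        "AutomationAssumeRole": [role_arn],
    }

if __name__ == "__main__":
    import boto3  # only needed for the live execution
    ssm = boto3.client("ssm")
    resp = ssm.start_automation_execution(
        DocumentName="my-resize-document",  # hypothetical: use the name from the SSM stack
        Parameters=automation_params(
            "us-east-2a", "ssm-tag", "test-ssm",
            "t2.small", "arn:aws:iam::123456789012:role/ssm-resize"),
    )
    print(resp["AutomationExecutionId"])
```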

&lt;p&gt;&lt;strong&gt;The SSM automation will now:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identify instances matching the specified Availability Zone and Tag&lt;/li&gt;
&lt;li&gt;Stop the instances before resizing&lt;/li&gt;
&lt;li&gt;Modify the instance types dynamically&lt;/li&gt;
&lt;li&gt;Restart the instances&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should see all EC2 instances updated successfully!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Clean Up Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To remove the deployed resources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Delete the SSM CloudFormation Stack&lt;/li&gt;
&lt;li&gt;Delete the EC2 CloudFormation Stack
This ensures that no unintended costs are incurred.&lt;/li&gt;
&lt;/ol&gt;
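&lt;p&gt;The cleanup can likewise be scripted, deleting the SSM stack first and then the EC2 stack, mirroring the order above. A minimal boto3 sketch (the stack names are assumed to match the ones used at deployment):&lt;/p&gt;

```python
def deletion_order(stacks):
    """Order stacks so the SSM stack is deleted before the EC2 stack."""
    priority = {"ssm": 0, "ec2": 1}
    return sorted(stacks, key=lambda name: priority.get(name, 99))

if __name__ == "__main__":
    import boto3  # only needed for the live cleanup
    cfn = boto3.client("cloudformation")
    for name in deletion_order(["ec2", "ssm"]):
        cfn.delete_stack(StackName=name)
        # Wait for each deletion so no orphaned resources keep incurring cost
        cfn.get_waiter("stack_delete_complete").wait(StackName=name)
```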

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By leveraging AWS Systems Manager (SSM) Documents, we can automate EC2 instance resizing across multiple environments with minimal effort. This solution eliminates manual resizing, ensuring faster and more efficient instance modifications.&lt;/p&gt;

&lt;p&gt;Start implementing this in your AWS environment to &lt;strong&gt;save time and improve scalability!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ec2</category>
      <category>ssm</category>
      <category>systemmanager</category>
    </item>
    <item>
      <title>Assigning a FQDN (Fully Qualified Domain Name) to an EC2 Instance Using Route 53</title>
      <dc:creator>Ahmed Salem</dc:creator>
      <pubDate>Mon, 20 Jan 2025 11:16:22 +0000</pubDate>
      <link>https://dev.to/aws-builders/assigning-a-fqdn-fully-qualified-domain-name-to-an-ec2-instance-using-route-53-40jd</link>
      <guid>https://dev.to/aws-builders/assigning-a-fqdn-fully-qualified-domain-name-to-an-ec2-instance-using-route-53-40jd</guid>
      <description>&lt;p&gt;In this guide, we’ll explore how to assign a Fully Qualified Domain Name (FQDN) to an EC2 instance running a web server using AWS Route 53. This process includes deploying CloudFormation templates, configuring Route 53, and ensuring the FQDN resolves to the EC2 instance’s public IP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We aim to assign a domain name to a web server hosted on an EC2 instance. Instead of accessing the server using a public IP, we’ll use a friendly FQDN (e.g., &lt;a href="http://www.cmcloudlab1589.info" rel="noopener noreferrer"&gt;www.cmcloudlab1589.info&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;User-Friendly Access&lt;br&gt;
• The FQDN makes it easier for users to access the web server instead of remembering an IP address.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability&lt;br&gt;
• The setup can easily accommodate additional domains and services under the same hosted zone.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Automation&lt;br&gt;
• Using CloudFormation ensures repeatability and simplifies deployment.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Key Objectives:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deploy an EC2 instance running a web application.&lt;/li&gt;
&lt;li&gt;Create a Route 53 RecordSet in the existing public hosted zone.&lt;/li&gt;
&lt;li&gt;Ensure the FQDN resolves to the EC2 instance’s public IP.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;br&gt;
The architecture consists of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An EC2 instance running a web server.&lt;/li&gt;
&lt;li&gt;A Route 53 public-hosted zone with a Type A RecordSet pointing to the instance’s public IP.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Diagram:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklzufj5u83puwiawxgl3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklzufj5u83puwiawxgl3.png" alt="Image description" width="800" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Guide&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create a CloudFormation Template for the EC2 Instance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following ec2.yaml file sets up an EC2 instance, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A web server with httpd installed and configured&lt;/li&gt;
&lt;li&gt;A security group allowing HTTP (port 80) and SSH (port 22) traffic&lt;/li&gt;
&lt;li&gt;An IAM role with S3 access to retrieve the index.html file for the web server&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: 2010-09-09
#Description: [CET-004] EC2 with user data from CloudFormation.
Parameters:
  InstanceType:
    Description: WebServer EC2 instance type.
    Type: String
    Default: t2.micro
    AllowedValues:
      - t1.micro
      - t2.nano
      - t2.micro
      - t2.small
      - t2.medium
      - t2.large
      - m1.small
      - m1.medium
      - m1.large
      - m1.xlarge
      - m2.xlarge
      - m2.2xlarge
      - m2.4xlarge
      - m3.medium
      - m3.large
      - m3.xlarge
      - m3.2xlarge
      - m4.large
      - m4.xlarge
      - m4.2xlarge
      - m4.4xlarge
      - m4.10xlarge
      - c1.medium
      - c1.xlarge
      - c3.large
      - c3.xlarge
      - c3.2xlarge
      - c3.4xlarge
      - c3.8xlarge
      - c4.large
      - c4.xlarge
      - c4.2xlarge
      - c4.4xlarge
      - c4.8xlarge
      - g2.2xlarge
      - g2.8xlarge
      - r3.large
      - r3.xlarge
      - r3.2xlarge
      - r3.4xlarge
      - r3.8xlarge
      - i2.xlarge
      - i2.2xlarge
      - i2.4xlarge
      - i2.8xlarge
      - d2.xlarge
      - d2.2xlarge
      - d2.4xlarge
      - d2.8xlarge
      - hi1.4xlarge
      - hs1.8xlarge
      - cr1.8xlarge
      - cc2.8xlarge
      - cg1.4xlarge
  Ec2Name:
    Description: Ec2 Resource name.
    Type: String
  BucketName:
    Description: BucketName.
    Default: ahmedsalem-testbuckettt
    Type: String
  ObjectPrefix:
    Description: index.html prefix, e.g. /
    Type: String
    Default: /
  # KeyName:
  #   Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
  #   Type: 'AWS::EC2::KeyPair::KeyName'
  #   ConstraintDescription: must be the name of an existing EC2 KeyPair.
  # SubnetId:
  #  Description: Subnet ID the web server instance will run in.
  #  Type: 'AWS::EC2::Subnet::Id'
  LatestAmiId:
    Type: 'AWS::SSM::Parameter::Value&amp;lt;AWS::EC2::Image::Id&amp;gt;'
    Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'
  SSHLocation:
    Description: The IP address range that can be used to SSH to the EC2 instance.
    Type: String
    MinLength: '9'
    MaxLength: '18'
    Default: 0.0.0.0/0
    AllowedPattern: '(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})'
    ConstraintDescription: Must be a valid IP CIDR range of the form x.x.x.x/x

Resources:
  WebServerInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      Tags:
       - Key: "Name"
         Value: !Ref Ec2Name
       - Key: "value"
         Value: "to be deleted"
      # KeyName: !Ref KeyName
      # NetworkInterfaces: 
      #   - AssociatePublicIpAddress: "true"
      #     DeviceIndex: "0"
      # SubnetId: !Ref SubnetId
      ImageId: !Ref LatestAmiId
      SecurityGroupIds:
            - !GetAtt "WebServerSecurityGroup.GroupId"
      IamInstanceProfile: !Ref IAMInstanceProfile
      UserData: 
        Fn::Base64:
          !Sub |
            #!/bin/bash -xe
            yum update -y aws-cfn-bootstrap
            yum install -y httpd
            systemctl start httpd
            systemctl enable httpd
            aws s3 cp s3://${BucketName}${ObjectPrefix}index.html /var/www/html/

  WebServerSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: Enable HTTP access via port 80 and SSH access via port 22.
      VpcId: vpc-0078f41e6b5568b6e
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: !Ref SSHLocation

  IAMInstanceProfile:
   Type: AWS::IAM::InstanceProfile
   Properties:
     Roles:
       - !Ref IAMRole
     InstanceProfileName: ec2-access-s3

  IAMRole:
   Type: AWS::IAM::Role
   Properties:
     RoleName: ec2-s3-access
     AssumeRolePolicyDocument:
       Version: '2012-10-17'
       Statement:
         - Effect: Allow
           Principal:
             Service: ec2.amazonaws.com
           Action: sts:AssumeRole
     Path: '/'
     Policies:
       - PolicyName: 'EC2Access'
         PolicyDocument:
           Version: '2012-10-17'
           Statement:
             - Effect: 'Allow'
               Action:
                 - 's3:GetObject'
               Resource: !Sub 'arn:aws:s3:::${BucketName}/index.html'

Outputs:
  WebsiteURL:
    Description: URL for the newly created Apache web server.
    Value: !Join 
      - ''
      - - 'http://'
        - !GetAtt 
          - WebServerInstance
          - PublicDnsName

  PublicIp:
    Description: The public IP of the web server
    Value: !GetAtt WebServerInstance.PublicIp
    Export:
      Name: !Sub "${AWS::StackName}-PublicIp"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Create the Route 53 CloudFormation Template&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following r53.yaml file creates a Route 53 RecordSet to associate the FQDN with the EC2 instance’s public IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Parameters:
  WebserverStackParameter:
    Type: String
    Default: Webserver

  HostedZoneId:
    #Type: AWS::Route53::HostedZone::Id
    Type: String
    Description: HostedZone ID
    ConstraintDescription: must be a valid HostedZone ID
    Default: Z0775680V6B0O09IGAKU

  WebserverFQDN:
    Type: String
    # ConstraintDescription: must be a valid HostedZone ID

Resources:
  PublicDNSRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref HostedZoneId
      Name: !Sub "${WebserverFQDN}.cmcloudlab1832.info"
      Type: A
      TTL: 900
      ResourceRecords:
        -
          Fn::ImportValue: 
             !Sub "${WebserverStackParameter}-PublicIp"

Outputs:
  Hostname:
    Description: The Hostname attached to our Webserver
    Value: !Ref PublicDNSRecord
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Deploy the Infrastructure&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Launch the EC2 Stack&lt;br&gt;
• Deploy the ec2.yaml CloudFormation template to set up the EC2 instance.&lt;br&gt;
• Note the exported PublicIp value for use in the Route 53 stack.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Launch the Route 53 Stack&lt;br&gt;
• Deploy the r53.yaml CloudFormation template to create the RecordSet in the Route 53 hosted zone.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
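&lt;p&gt;Before launching the Route 53 stack, it is worth confirming that the PublicIp export from the EC2 stack actually exists, since Fn::ImportValue fails the deployment otherwise. A minimal boto3 sketch (the export name assumes the EC2 stack was named Webserver, matching the WebserverStackParameter default):&lt;/p&gt;

```python
def find_export(exports, name):
    """Return the value of a named CloudFormation export, or None."""
    for export in exports:
        if export["Name"] == name:
            return export["Value"]
    return None

if __name__ == "__main__":
    import boto3  # only needed for the live check
    cfn = boto3.client("cloudformation")
    exports = cfn.list_exports()["Exports"]
    # The export name is "StackName-PublicIp", per the Outputs section of ec2.yaml
    ip = find_export(exports, "Webserver-PublicIp")
    print("PublicIp export:", ip)
```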

&lt;p&gt;&lt;strong&gt;4. Test the Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After deploying both stacks, you can access your web server using the FQDN assigned in Route 53.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example FQDN:&lt;/strong&gt;&lt;br&gt;
&lt;a href="http://www.cmcloudlab1589.info" rel="noopener noreferrer"&gt;www.cmcloudlab1589.info&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Assigning a Fully Qualified Domain Name (FQDN) to an EC2 instance using Route 53 is a straightforward yet powerful solution for enhancing accessibility and scalability of your applications. By leveraging AWS CloudFormation, we automated the deployment process, making it efficient, repeatable, and easy to manage.&lt;/p&gt;

</description>
      <category>r53</category>
      <category>ec2</category>
      <category>aws</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>Leverage On-Premises Infrastructure in Amazon EKS Clusters with Amazon EKS Hybrid Nodes</title>
      <dc:creator>Ahmed Salem</dc:creator>
      <pubDate>Fri, 13 Dec 2024 12:13:29 +0000</pubDate>
      <link>https://dev.to/aws-builders/leverage-on-premises-infrastructure-in-amazon-eks-clusters-with-amazon-eks-hybrid-nodes-3fbl</link>
      <guid>https://dev.to/aws-builders/leverage-on-premises-infrastructure-in-amazon-eks-clusters-with-amazon-eks-hybrid-nodes-3fbl</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9u8gyily8och8mc3htd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9u8gyily8och8mc3htd.png" alt="Image description" width="640" height="320"&gt;&lt;/a&gt;&lt;br&gt;
Amazon Web Services (AWS) continues to bridge the gap between on-premises infrastructure and the cloud. The new Amazon EKS Hybrid Nodes feature enables organizations to extend their Amazon Elastic Kubernetes Service (EKS) clusters to include on-premises resources. This hybrid approach provides enhanced flexibility and scalability, empowering businesses to leverage their existing infrastructure while enjoying the benefits of Kubernetes orchestration in the cloud.&lt;/p&gt;

&lt;p&gt;In this article, we’ll explore what Amazon EKS Hybrid Nodes are, how they work, and how they can help you integrate on-premises resources seamlessly into your EKS clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview: What Are Amazon EKS Hybrid Nodes?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon EKS Hybrid Nodes allow organizations to use their on-premises servers, virtual machines (VMs), or edge devices as worker nodes in an EKS cluster. This feature makes it easier to manage workloads that require low latency, data residency compliance, or closer proximity to on-premises systems.&lt;/p&gt;

&lt;p&gt;With Amazon EKS Hybrid Nodes, you can deploy containerized applications on your existing infrastructure while managing everything from a single EKS control plane.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Extend Kubernetes Clusters On-Premises&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EKS Hybrid Nodes enable you to extend your Kubernetes clusters beyond AWS, incorporating on-premises resources into a unified infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Seamless Integration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EKS Hybrid Nodes operate independently and do not require services like AWS Outposts or Amazon ECS Anywhere. Instead, they seamlessly integrate on-premises infrastructure into EKS clusters, providing flexibility without additional dependencies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3uqptc61noxoq999b6dq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3uqptc61noxoq999b6dq.png" alt="Image description" width="800" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Centralized Management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Manage your on-premises and cloud-based workloads using the same EKS control plane, simplifying operations and providing a consistent Kubernetes experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Support for Edge Use Cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deploy workloads on edge devices or in locations with limited cloud connectivity while maintaining centralized control through EKS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Amazon EKS Hybrid Nodes Work&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following diagram illustrates the hybrid network connectivity, node setup, and integration between on-premises environments and AWS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkclecwqzsmkufmu38oo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkclecwqzsmkufmu38oo.png" alt="Image description" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon EKS Hybrid Nodes enable seamless integration between your on-premises infrastructure and the Amazon EKS control plane. They rely on the following components:&lt;/p&gt;

&lt;p&gt;• Hybrid network connectivity using one of:&lt;br&gt;
  • AWS Site-to-Site VPN&lt;br&gt;
  • AWS Direct Connect&lt;br&gt;
  • Another VPN solution&lt;/p&gt;

&lt;p&gt;• A VPC with routing configurations for remote nodes and pods, using a Virtual Private Gateway (VGW) or Transit Gateway (TGW).&lt;/p&gt;

&lt;p&gt;• Physical or virtual machines as infrastructure.&lt;/p&gt;

&lt;p&gt;• A compatible operating system, such as:&lt;br&gt;
  • Amazon Linux 2023&lt;br&gt;
  • Ubuntu (20.04, 22.04, or 24.04)&lt;br&gt;
  • Red Hat Enterprise Linux (RHEL 8 or 9)&lt;/p&gt;

&lt;p&gt;• IAM roles:&lt;br&gt;
  • EKS Cluster IAM Role&lt;br&gt;
  • EKS Hybrid Nodes IAM Role&lt;/p&gt;

&lt;p&gt;• AWS Systems Manager or IAM Roles Anywhere for node authentication.&lt;/p&gt;
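&lt;p&gt;If you use AWS Systems Manager for authentication, the activation code and ID that nodeadm consumes are produced by an SSM hybrid activation. As an illustrative sketch (the IAM role name and region are placeholders), the activation can be created with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws ssm create-activation \
    --iam-role AmazonEKSHybridNodesRole \
    --registration-limit 10 \
    --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The command returns an ActivationId and an ActivationCode, which are referenced later in the nodeConfig.yaml file.&lt;/p&gt;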

&lt;p&gt;&lt;strong&gt;Setting Up Amazon EKS Hybrid Nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure the prerequisites listed above are in place; without them, your on-premises infrastructure cannot join your EKS cluster as hybrid nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Create an EKS Cluster and Enable Hybrid Nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Navigate to the Amazon EKS Console&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open the Amazon EKS Console and begin creating a new cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Specify Networking for Hybrid Nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On the Step 2: Specify Networking screen, enable the option Configure remote networks to enable hybrid nodes and specify the Classless Inter-Domain Routing (CIDR) blocks for your on-premises environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foi3cg9qcwxvijbi9e70h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foi3cg9qcwxvijbi9e70h.png" alt="Image description" width="800" height="851"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. CIDR Configuration Guidelines&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• The CIDRs for remote nodes and remote pods must:&lt;br&gt;
  • Use RFC 1918 IPv4 addresses (private IP ranges).&lt;br&gt;
  • Not overlap with the VPC CIDR or the EKS cluster’s Kubernetes service CIDR.&lt;/p&gt;

&lt;p&gt;• The remote node CIDR and remote pod CIDR must not overlap with each other.&lt;/p&gt;

&lt;p&gt;• Specifying a pod CIDR block is mandatory if:&lt;br&gt;
  • You run webhooks on your hybrid nodes.&lt;br&gt;
  • Your CNI plugin doesn’t use NAT for pod addresses as traffic leaves your nodes.&lt;/p&gt;
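&lt;p&gt;As a hedged sketch of the same configuration from the AWS CLI (the cluster name, role ARN, subnet IDs, and CIDRs are placeholders), the remote networks can also be specified at cluster creation time:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws eks create-cluster \
    --name my-cluster \
    --role-arn arn:aws:iam::111122223333:role/EKSClusterRole \
    --resources-vpc-config subnetIds=subnet-aaaa1111,subnet-bbbb2222 \
    --remote-network-config '{"remoteNodeNetworks":[{"cidrs":["10.200.0.0/16"]}],"remotePodNetworks":[{"cidrs":["10.201.0.0/16"]}]}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;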

&lt;p&gt;&lt;strong&gt;3. Connect Your Hybrid Nodes to the EKS Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With the EKS cluster configured, the next step is to connect your on-premises infrastructure to the cluster as hybrid nodes. Use the Amazon EKS Hybrid Nodes CLI (nodeadm) to automate the installation, configuration, and registration of your hybrid nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Install the required components:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ nodeadm install 1.31 --credential-provider &amp;lt;ssm, iam-ra&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Create a nodeConfig.yaml file:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
metadata:
  name: hybrid-node
spec:
  cluster:
    name: my-cluster
    region: us-east-1
  hybrid:
    ssm:
      activationCode: &amp;lt;activation-code&amp;gt;
      activationId: &amp;lt;activation-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Register the hybrid node with the cluster:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ nodeadm init -c file://nodeConfig.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Verify node readiness:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use the EKS console or the following command to confirm that the node has joined:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: Hybrid nodes remain in the NotReady state until a compatible CNI plugin (such as Cilium or Calico) is installed. Once the CNI is in place, confirm that the nodes report Ready. For more information, refer to Install CNI for EKS Hybrid Nodes in the AWS documentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Deploy Workloads&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use Kubernetes-native tools such as kubectl and Helm charts to deploy workloads to both cloud-based and hybrid nodes.&lt;/p&gt;
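&lt;p&gt;EKS applies the label eks.amazonaws.com/compute-type: hybrid to hybrid nodes, so a workload can be pinned to on-premises capacity with a node selector. A minimal sketch (the names and image are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: on-prem-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: on-prem-app
  template:
    metadata:
      labels:
        app: on-prem-app
    spec:
      # Schedule only onto hybrid (on-premises) nodes
      nodeSelector:
        eks.amazonaws.com/compute-type: hybrid
      containers:
      - name: app
        image: public.ecr.aws/nginx/nginx:latest
        ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;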

&lt;p&gt;&lt;strong&gt;5. Monitor and Optimize&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Monitor your cluster’s performance using Amazon CloudWatch and other observability tools. Ensure that hybrid nodes are scaling effectively and meeting workload demands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon EKS Hybrid Nodes open new possibilities for organizations looking to unify their cloud and on-premises resources under a single Kubernetes control plane. By enabling seamless integration of on-premises infrastructure with EKS, this feature offers unparalleled flexibility, scalability, and consistency for modern application deployment.&lt;/p&gt;

&lt;p&gt;Whether you’re running edge workloads, ensuring data compliance, or modernizing legacy systems, EKS Hybrid Nodes provide a powerful solution to meet your hybrid cloud needs. Explore this feature today and unlock the full potential of Kubernetes on your terms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/use-your-on-premises-infrastructure-in-amazon-eks-clusters-with-amazon-eks-hybrid-nodes/" rel="noopener noreferrer"&gt;Use Your On-Premises Infrastructure in Amazon EKS Clusters with Amazon EKS Hybrid Nodes&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/aws/eks-hybrid/tree/main" rel="noopener noreferrer"&gt;Amazon EKS Hybrid Nodes GitHub Repository&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>aws</category>
      <category>eks</category>
    </item>
    <item>
      <title>AWS EKS Auto Mode: Automating Kubernetes Cluster Management</title>
      <dc:creator>Ahmed Salem</dc:creator>
      <pubDate>Thu, 12 Dec 2024 10:58:27 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-eks-auto-mode-automating-kubernetes-cluster-management-1cek</link>
      <guid>https://dev.to/aws-builders/aws-eks-auto-mode-automating-kubernetes-cluster-management-1cek</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3we1kdflf68bl44laaov.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3we1kdflf68bl44laaov.jpg" alt="Image description" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS has taken a significant step in simplifying Kubernetes cluster management with the introduction of EKS Auto Mode. This new feature is designed to reduce the operational complexity of managing Kubernetes clusters while ensuring cost efficiency, scalability, and security. This article will provide a detailed overview of EKS Auto Mode, explore its benefits, and guide you through its implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview: What Is EKS Auto Mode?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8f7swvxyepvn3iqsu0oy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8f7swvxyepvn3iqsu0oy.png" alt="Image description" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Elastic Kubernetes Service (EKS) Auto Mode automates the management of Kubernetes cluster infrastructure, including compute, storage, and networking. By handling tasks such as scaling, upgrades, and security compliance, EKS Auto Mode enables teams to focus on building and deploying applications rather than managing infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Simplified Cluster Management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EKS Auto Mode provides production-ready clusters with minimal operational overhead, enabling organizations to deploy workloads without deep Kubernetes expertise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Dynamic Scaling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The service dynamically scales the number of nodes in your cluster based on workload demand, ensuring efficient resource utilization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Cost Optimization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EKS Auto Mode minimizes costs by automatically terminating unused nodes and consolidating workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Enhanced Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The service uses immutable Amazon Machine Images (AMIs) with enforced security measures, such as SELinux controls and read-only root file systems. Nodes have a maximum lifetime of 21 days, after which they are replaced to maintain security compliance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Automated Upgrades&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EKS Auto Mode automates updates to Kubernetes clusters, nodes, and associated components, ensuring they are always running the latest versions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Integration with AWS Services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The feature integrates seamlessly with AWS services like Amazon EC2, Elastic Load Balancing, and Amazon EBS, leveraging AWS’s cloud-native capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting Started with EKS Auto Mode&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Creating an EKS Auto Mode Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To create a new EKS Auto Mode cluster:&lt;br&gt;
&lt;strong&gt;1. AWS Management Console:&lt;/strong&gt; Navigate to the EKS Console and select the Auto Mode option when creating a cluster.&lt;br&gt;
&lt;strong&gt;2. CLI or eksctl:&lt;/strong&gt; Use commands or YAML configurations to define the cluster with Auto Mode enabled.&lt;br&gt;
&lt;strong&gt;3. Key Settings:&lt;/strong&gt;&lt;br&gt;
  • Choose regions and Availability Zones.&lt;br&gt;
  • Define default NodePools, which are automatically created and managed.&lt;/p&gt;
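&lt;p&gt;With eksctl, for example, Auto Mode can be enabled through the cluster configuration file. A minimal sketch (the cluster name and region are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: auto-mode-cluster
  region: us-east-1
autoModeConfig:
  enabled: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then create the cluster with eksctl create cluster -f cluster.yaml.&lt;/p&gt;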

&lt;p&gt;&lt;strong&gt;2. Migrating Existing Clusters to Auto Mode&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS allows you to transition existing EKS clusters to Auto Mode.&lt;br&gt;
&lt;strong&gt;• Steps:&lt;/strong&gt;&lt;br&gt;
    • Evaluate workloads for compatibility with EKS Auto Mode.&lt;br&gt;
    • Modify the cluster’s configuration to enable Auto Mode.&lt;br&gt;
&lt;strong&gt;• Considerations:&lt;/strong&gt;&lt;br&gt;
    • Existing configurations may need adjustments, such as migrating manually managed node groups to NodePools managed by Auto Mode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Deploying Workloads on Auto Mode Clusters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EKS Auto Mode is fully compatible with Kubernetes-native tools and workflows. You can deploy workloads using:&lt;br&gt;
&lt;strong&gt;• kubectl:&lt;/strong&gt; Standard Kubernetes CLI for managing workloads.&lt;br&gt;
&lt;strong&gt;• Helm Charts:&lt;/strong&gt; Package manager for Kubernetes applications.&lt;br&gt;
&lt;strong&gt;• GitOps Tools:&lt;/strong&gt; Integrate with tools like ArgoCD for declarative application management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Configuring Auto Mode Clusters&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;• NodePools:&lt;/strong&gt;&lt;br&gt;
    • Default NodePools are created automatically, but you can define custom NodePools to meet specific requirements.&lt;br&gt;
    • NodePools support a variety of instance types, including GPU-enabled instances for AI/ML workloads.&lt;br&gt;
&lt;strong&gt;• Scaling Policies:&lt;/strong&gt;&lt;br&gt;
    • Configure scaling policies to handle workload spikes.&lt;br&gt;
    • Use Pod Disruption Budgets (PDBs) to ensure availability during scaling events.&lt;/p&gt;
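&lt;p&gt;As an illustrative sketch, a custom NodePool in Auto Mode references the built-in NodeClass and constrains which instances may be launched (the name, instance categories, and limit here are example values, not recommendations):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general-purpose-custom
spec:
  template:
    spec:
      # Reference the NodeClass that EKS Auto Mode manages
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      requirements:
        - key: eks.amazonaws.com/instance-category
          operator: In
          values: ["c", "m", "r"]
  # Cap total capacity this pool may provision
  limits:
    cpu: "100"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;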

&lt;p&gt;&lt;strong&gt;5. Understanding Instance Management in Auto Mode&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EKS Auto Mode manages instances as immutable appliances, ensuring:&lt;br&gt;
&lt;strong&gt;• Security:&lt;/strong&gt; No SSH or direct access to nodes is allowed.&lt;br&gt;
&lt;strong&gt;• Automated Replacement:&lt;/strong&gt; Instances are replaced regularly to maintain compliance with security standards.&lt;br&gt;
&lt;strong&gt;• Compatibility:&lt;/strong&gt; Supports both EC2 On-Demand Instances and Spot Instances for cost optimization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1. Reduced Operational Overhead:&lt;/strong&gt; EKS Auto Mode automates infrastructure management tasks, freeing teams to focus on application development.&lt;br&gt;
&lt;strong&gt;2. Cost Efficiency:&lt;/strong&gt; Dynamic scaling and Spot Instance support optimize costs without compromising performance.&lt;br&gt;
&lt;strong&gt;3. Scalability:&lt;/strong&gt; The service scales resources up or down based on workload demands, ensuring high availability.&lt;br&gt;
&lt;strong&gt;4. Enhanced Security:&lt;/strong&gt; Regular instance replacement and immutable configurations ensure compliance with the latest security standards.&lt;br&gt;
&lt;strong&gt;5. Seamless Integration:&lt;/strong&gt; Works with Kubernetes-native tools and AWS services for a consistent and familiar user experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Common Use Cases&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1. Web Applications:&lt;/strong&gt; Automatically scale resources to handle traffic spikes.&lt;br&gt;
&lt;strong&gt;2. AI/ML Workloads:&lt;/strong&gt; Use GPU-enabled NodePools to run training and inference jobs efficiently.&lt;br&gt;
&lt;strong&gt;3. Microservices:&lt;/strong&gt; Deploy containerized workloads with Kubernetes-native tools.&lt;br&gt;
&lt;strong&gt;4. CI/CD Pipelines:&lt;/strong&gt; Integrate with GitOps tools to streamline deployment workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Considerations&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;• Compatibility:&lt;/strong&gt; Not all workloads are immediately compatible with Auto Mode. Evaluate your existing configurations before migrating.&lt;br&gt;
&lt;strong&gt;• Custom Configurations:&lt;/strong&gt; While default settings are sufficient for many use cases, some applications may require custom NodePools or scaling policies.&lt;br&gt;
&lt;strong&gt;• Access Restrictions:&lt;/strong&gt; Auto Mode enforces strict security policies, limiting direct access to managed nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EKS Auto Mode is a significant advancement for organizations looking to simplify Kubernetes cluster management. By automating tasks like scaling, upgrades, and security compliance, it allows teams to focus on delivering value through applications rather than managing infrastructure. Whether you’re starting fresh or migrating existing clusters, EKS Auto Mode offers a robust, scalable, and cost-effective solution for modern cloud-native applications.&lt;/p&gt;

&lt;p&gt;Additionally, with the release of &lt;strong&gt;Terraform AWS Provider v5.79.0&lt;/strong&gt;, managing EKS Auto Mode clusters has become even more accessible. This version introduces support for EKS Auto Mode, enabling Infrastructure as Code (IaC) configurations for compute, storage, and networking components. The integration further streamlines cluster deployments and simplifies automation workflows, making it easier for teams to adopt EKS Auto Mode within their existing Terraform-based setups.&lt;/p&gt;

&lt;p&gt;Start exploring EKS Auto Mode today and unlock new levels of efficiency, scalability, and simplicity in Kubernetes management!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/automode.html" rel="noopener noreferrer"&gt;Automate Cluster Infrastructure with EKS Auto Mode&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/create-auto.html" rel="noopener noreferrer"&gt;Creating an Auto Mode Cluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/migrate-auto.html" rel="noopener noreferrer"&gt;Migrating to Auto Mode&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/auto-workloads.html" rel="noopener noreferrer"&gt;Deploying Workloads on Auto Mode Clusters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/settings-auto.html" rel="noopener noreferrer"&gt;Configuring Auto Mode Clusters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/automode-learn-instances.html" rel="noopener noreferrer"&gt;Understanding Instance Management&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>aws</category>
      <category>eks</category>
    </item>
    <item>
      <title>Subnet Settings in AWS: A Subtle Configuration That Can Cause Big Headaches</title>
      <dc:creator>Ahmed Salem</dc:creator>
      <pubDate>Fri, 06 Dec 2024 05:54:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/subnet-settings-in-aws-a-subtle-configuration-that-can-cause-big-headaches-28pa</link>
      <guid>https://dev.to/aws-builders/subnet-settings-in-aws-a-subtle-configuration-that-can-cause-big-headaches-28pa</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdn18ggtpwkg6hoag7ig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdn18ggtpwkg6hoag7ig.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve spent countless hours configuring VPCs, subnets, and routing tables, but every now and then, something as simple as a checkbox reminds me how much there is to learn in AWS networking. Let me share a recent discovery that saved me hours of troubleshooting—and might save you even more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem: Internet Access Issues in a New Subnet&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I was manually creating a new subnet in AWS. Everything seemed perfectly configured:&lt;br&gt;
    • &lt;strong&gt;Internet Gateway (IGW).&lt;/strong&gt;&lt;br&gt;
    • &lt;strong&gt;Route Table pointing to the IGW.&lt;/strong&gt;&lt;br&gt;
    • &lt;strong&gt;Security Groups allowing outbound traffic.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yet when I launched an EC2 instance in the subnet, it had &lt;strong&gt;no internet access&lt;/strong&gt;. At first, I assumed I had missed something minor, but after reviewing my setup repeatedly, I was stumped.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Missing Link: Auto-Assign Public IPv4 Address&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After hours of digging, I found the issue buried in the &lt;strong&gt;subnet settings&lt;/strong&gt;. By default, when you manually create a subnet, &lt;strong&gt;the Auto-assign public IPv4 address&lt;/strong&gt; option is &lt;strong&gt;disabled&lt;/strong&gt;. This means that even though the route table points to an IGW and the security group allows traffic, instances launched in the subnet won’t be assigned a public IP address—making internet connectivity impossible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why This Happens&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Public IP assignment is managed &lt;strong&gt;per subnet&lt;/strong&gt;, not at the instance level or in the Launch Template. Even if your Launch Template includes settings for a public IP, it won’t override the subnet’s configuration. Without enabling this option, your instances are essentially isolated from the internet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Fix It&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Enable Public IP Assignment in Subnet Settings&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the &lt;strong&gt;VPC Console&lt;/strong&gt; &amp;gt; &lt;strong&gt;Subnets&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select the subnet in question and click &lt;strong&gt;Edit Subnet Settings&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Auto-assign IP settings&lt;/strong&gt;, check the box for &lt;strong&gt;Enable auto-assign public IPv4 address&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
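&lt;p&gt;The same change can be made from the AWS CLI (the subnet ID is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws ec2 modify-subnet-attribute \
    --subnet-id subnet-0123456789abcdef0 \
    --map-public-ip-on-launch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can then verify the setting with aws ec2 describe-subnets --subnet-ids subnet-0123456789abcdef0 --query 'Subnets[0].MapPublicIpOnLaunch'.&lt;/p&gt;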

&lt;p&gt;&lt;strong&gt;Alternative: Assign Elastic IPs Manually&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you prefer not to enable auto-assignment, you can manually assign an &lt;strong&gt;Elastic IP&lt;/strong&gt; to your instance. This approach gives you more control but adds extra steps during setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lessons Learned&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This experience reinforced an important AWS networking concept:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Even with the correct routing table and security group settings, an instance cannot access the internet without a public IP.&lt;/strong&gt;&lt;br&gt;
When creating subnets manually, always remember:&lt;br&gt;
• If the instances in the subnet require internet access, enable the &lt;strong&gt;Auto-assign public IPv4 address&lt;/strong&gt; option.&lt;br&gt;
• Alternatively, assign Elastic IPs or use a NAT Gateway for outbound internet access.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>subnets</category>
      <category>vpc</category>
    </item>
    <item>
      <title>Deploying Multiple PHP Applications Using AWS Elastic Beanstalk with a Standalone ALB</title>
      <dc:creator>Ahmed Salem</dc:creator>
      <pubDate>Sat, 30 Nov 2024 19:08:49 +0000</pubDate>
      <link>https://dev.to/aws-builders/deploying-multiple-php-applications-using-aws-elastic-beanstalk-with-a-standalone-alb-54pn</link>
      <guid>https://dev.to/aws-builders/deploying-multiple-php-applications-using-aws-elastic-beanstalk-with-a-standalone-alb-54pn</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2l41tef38s9zx47fm1u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2l41tef38s9zx47fm1u.jpg" alt="Image description" width="750" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this guide, we’ll deploy multiple PHP applications using AWS Elastic Beanstalk (EB) environments, and configure a single standalone Application Load Balancer (ALB) for all environments. Based on the actual implementation, this article clarifies how to manage multiple Elastic Beanstalk environments with dedicated target groups under one centralized ALB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We’ll set up multiple PHP applications as separate EB environments. Instead of configuring a load balancer for each environment, we’ll use one ALB with dedicated target groups for each environment. This approach is cost-efficient, simplifies management, and ensures centralized control over routing and scaling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Components:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;• Elastic Beanstalk Environments&lt;/strong&gt;: Each PHP application runs in its own environment.&lt;br&gt;
&lt;strong&gt;• Standalone ALB&lt;/strong&gt;: A single ALB handles all incoming traffic and routes it to the appropriate target group.&lt;br&gt;
&lt;strong&gt;• Target Groups&lt;/strong&gt;: Each Elastic Beanstalk environment has its own target group for routing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture Workflow:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;ALB receives traffic for all applications.&lt;/li&gt;
&lt;li&gt;Listener rules on the ALB route traffic to the correct target group based on host headers or path patterns.&lt;/li&gt;
&lt;li&gt;Target groups forward traffic to the registered instances of the respective Elastic Beanstalk environments.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Guide&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Set Up Elastic Beanstalk Environments&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Create Separate Environments for PHP Applications:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the Elastic Beanstalk Console.&lt;/li&gt;
&lt;li&gt;Click Create Application and configure:
• Application Name: PHP-App-1.
• Platform: Select PHP.
• Environment: Choose Web Server Environment.&lt;/li&gt;
&lt;li&gt;Upload your .zip package containing the PHP application (e.g., index.php, composer.json).&lt;/li&gt;
&lt;li&gt;Deploy the application.&lt;/li&gt;
&lt;li&gt;Repeat these steps for additional applications (e.g., PHP-App-2, PHP-App-3).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a Standalone ALB&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create the ALB:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the EC2 Console &amp;gt; Load Balancers.&lt;/li&gt;
&lt;li&gt;Click Create Load Balancer and select Application Load Balancer.&lt;/li&gt;
&lt;li&gt;Configure:
• Name: standalone-alb.
• Scheme: Internet-facing.
• Listeners: Add an HTTPS listener (port 443).
• Availability Zones: Choose the same zones as your Elastic Beanstalk environments.&lt;/li&gt;
&lt;li&gt;Click Create.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Register ALB with Elastic Beanstalk:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to each Elastic Beanstalk environment.&lt;/li&gt;
&lt;li&gt;Under Configuration, link the environment to the newly created ALB.&lt;/li&gt;
&lt;li&gt;Ensure health checks are consistent with the ALB configuration.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Configure Target Groups for Each Environment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create Target Groups:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to EC2 Console &amp;gt; Target Groups.&lt;/li&gt;
&lt;li&gt;Click Create Target Group for each Elastic Beanstalk environment.
• Name: Example: php-app-1-tg.
• Target Type: Instance.
• Protocol: HTTP.
• Port: 80.
• Health Check Path: / (or a custom endpoint defined in your application).&lt;/li&gt;
&lt;li&gt;Add instances of the respective Elastic Beanstalk environment to the target group.&lt;/li&gt;
&lt;li&gt;Navigate to the Targets tab in each target group and confirm the registered instances are healthy.&lt;/li&gt;
&lt;/ol&gt;
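&lt;p&gt;As a hedged CLI equivalent of the steps above (the VPC ID, instance ID, and target group ARN are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws elbv2 create-target-group \
    --name php-app-1-tg \
    --protocol HTTP \
    --port 80 \
    --target-type instance \
    --vpc-id vpc-0123456789abcdef0 \
    --health-check-path /

$ aws elbv2 register-targets \
    --target-group-arn &amp;lt;target-group-arn&amp;gt; \
    --targets Id=i-0123456789abcdef0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;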

&lt;p&gt;&lt;strong&gt;Step 4: Add Listener Rules to the ALB&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the ALB Console &amp;gt; Listeners &amp;gt; HTTPS:443 (the listener created in Step 2) &amp;gt; Edit Rules.&lt;/li&gt;
&lt;li&gt;Add a rule for each target group:
• Condition: Use Host Header to match the Elastic Beanstalk environment domain (e.g., php-app-1.elasticbeanstalk.com).
• Action: Forward traffic to the corresponding target group (e.g., php-app-1-tg).&lt;/li&gt;
&lt;li&gt;Repeat this process for all environments.&lt;/li&gt;
&lt;/ol&gt;
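&lt;p&gt;The listener rules above can also be created from the AWS CLI (the listener ARN, host name, and target group ARN are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws elbv2 create-rule \
    --listener-arn &amp;lt;listener-arn&amp;gt; \
    --priority 10 \
    --conditions Field=host-header,Values=php-app-1.example.com \
    --actions Type=forward,TargetGroupArn=&amp;lt;target-group-arn&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;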

&lt;p&gt;&lt;strong&gt;Testing the Setup&lt;/strong&gt;&lt;br&gt;
• Simulate traffic to verify that the ALB forwards requests correctly to the appropriate target groups based on listener rules.&lt;br&gt;
• Check the health of each target group to ensure all instances are healthy and receiving traffic as expected.&lt;br&gt;
• Use tools like curl or Postman to send requests directly to the ALB DNS endpoint. Confirm that the traffic is routed to the correct Elastic Beanstalk environment and returns the expected responses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Cost Efficiency:&lt;/strong&gt; Reduces infrastructure costs by using one ALB for all environments.&lt;br&gt;
&lt;strong&gt;2. Simplified Management:&lt;/strong&gt; Centralizes traffic routing and listener rule configuration in one place.&lt;br&gt;
&lt;strong&gt;3. Scalability:&lt;/strong&gt; Supports independent scaling of target groups for each environment.&lt;br&gt;
&lt;strong&gt;4. Enhanced Traffic Control:&lt;/strong&gt; Provides granular routing with ALB listener rules.&lt;br&gt;
&lt;strong&gt;5. Centralized Health Monitoring:&lt;/strong&gt; Consolidates health checks for all environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By using a single ALB with target groups for multiple Elastic Beanstalk environments, you achieve a cost-effective, scalable, and centralized solution for hosting PHP applications.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>elasticbeanstalk</category>
      <category>loadbalancer</category>
      <category>php</category>
    </item>
    <item>
      <title>Building and Integrating Lambda Layers with Serverless API</title>
      <dc:creator>Ahmed Salem</dc:creator>
      <pubDate>Fri, 12 Jul 2024 12:05:44 +0000</pubDate>
      <link>https://dev.to/ahmedsalem2020/building-and-integrating-lambda-layers-with-serverless-api-11o7</link>
      <guid>https://dev.to/ahmedsalem2020/building-and-integrating-lambda-layers-with-serverless-api-11o7</guid>
      <description>&lt;p&gt;In this tutorial, we’ll explore how to use AWS Lambda Layers to organize and reuse code across multiple AWS Lambda functions. Specifically, we’ll move DynamoDB queries to a separate Lambda Layer and integrate it with a Serverless API. This approach enhances maintainability and promotes code reuse, making your serverless applications more scalable and efficient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our solution involves creating a Lambda Layer containing DynamoDB operations (list, add, and get countries) and integrating this layer with our existing Serverless API. We will define the required resources using the Serverless Framework and deploy them to AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat3jjthto1inb2p3nehi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat3jjthto1inb2p3nehi.png" alt="Image description" width="800" height="618"&gt;&lt;/a&gt;&lt;br&gt;
The architecture includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A DynamoDB table to store country information.&lt;/li&gt;
&lt;li&gt;A Lambda Layer to encapsulate DynamoDB operations.&lt;/li&gt;
&lt;li&gt;Multiple Lambda functions to handle API requests (add, list, get countries).&lt;/li&gt;
&lt;li&gt;API Gateway to expose these Lambda functions as RESTful endpoints.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Directory Structure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here is the directory structure for our project:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffk08njdcpcrpb2ui5zlh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffk08njdcpcrpb2ui5zlh.png" alt="Image description" width="570" height="672"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Guide&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1. Create a Separate Folder for Lambda Layers&lt;/strong&gt;&lt;br&gt;
Create a folder named Dynamo_layer and add the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A folder named modules-layer.&lt;/li&gt;
&lt;li&gt;A folder named dynamo-sdk-layer.&lt;/li&gt;
&lt;li&gt;A serverless.yml file.&lt;/li&gt;
&lt;/ul&gt;
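&lt;p&gt;For Node.js layers, the Serverless Framework zips the contents of each layer's path as-is and Lambda mounts them under /opt, which is why the service later sets NODE_PATH to "./:/opt/node_modules". A sketch of the expected layout (the dynamo-sdk package name is hypothetical; use your own module names):&lt;/p&gt;

```plaintext
Dynamo_layer/
├── serverless.yml
├── modules-layer/
│   └── node_modules/          # shared npm packages, mounted at /opt/node_modules
└── dynamo-sdk-layer/
    └── node_modules/
        └── dynamo-sdk/        # hypothetical package wrapping the DynamoDB queries
```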

&lt;p&gt;&lt;strong&gt;2. Add Layers Resources in serverless.yml&lt;/strong&gt;&lt;br&gt;
In the Dynamo_layer directory, add the following content to serverless.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Service name has to be unique for your account.
service: my-dependencies

# framework version range supported by this service.
frameworkVersion: "&amp;gt;=1.1.0 &amp;lt;8.0.0"

# Configuration of the cloud provider. As we are using AWS, we define the corresponding AWS configuration.
provider:
  name: aws
  lambdaHashingVersion: 20201221
  runtime: nodejs14.x
  #stage: dev
  region: us-east-2
  # Create an ENV variable to be able to use it in my JS code. *** Check line 4 in get-country-by-name JS file ***
  environment:
    tableName: ${self:custom.tableName}
  # Grant each Lambda function permission to access the DynamoDB table
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:*
        # - dynamodb:Query
        # - dynamodb:Scan
        # - dynamodb:GetItem
        # - dynamodb:PutItem
      Resource: "*"

layers:
  ModulesLayer:
    path: ./modules-layer
    description: "my dependencies"

  DynamoSdkLayer:
    path: ./dynamo-sdk-layer
    description: "my dynamo dependencies"

custom:
  tableName: countries
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Deploy Lambda Layers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deploy the Dynamo_layer to create a CloudFormation stack for Lambda Layers by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sls deploy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
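&lt;p&gt;After the deployment finishes, you can look up the published layer version ARNs (which the API's serverless.yml references) with the AWS CLI. This assumes Serverless kept the layer keys from serverless.yml as the layer names:&lt;/p&gt;

```shell
# The returned LayerVersionArn values are what the functions reference.
aws lambda list-layer-versions --layer-name ModulesLayer --region us-east-2
aws lambda list-layer-versions --layer-name DynamoSdkLayer --region us-east-2
```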



&lt;p&gt;&lt;strong&gt;4. Configure the Serverless API&lt;/strong&gt;&lt;br&gt;
Navigate to the my-service folder, which contains the serverless.yml file for your API Gateway endpoints and Lambda functions. Ensure the configuration includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A DynamoDB table.&lt;/li&gt;
&lt;li&gt;Lambda functions for countries_loadData, add-country, list-countries, and get-country-by-name.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;serverless.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Service name has to be unique for your account.
service: my-service

# framework version range supported by this service.
frameworkVersion: "&amp;gt;=1.1.0 &amp;lt;8.0.0"

# Configuration of the cloud provider. As we are using AWS, we define the corresponding AWS configuration.
provider:
  name: aws
  lambdaHashingVersion: 20201221
  runtime: nodejs14.x
  #stage: dev
  region: us-east-2
  # Create an ENV variable to be able to use it in my JS code. *** Check line 4 in get-country-by-name JS file ***
  environment:
    tableName: ${self:custom.tableName}
    NODE_PATH: "./:/opt/node_modules"
  # Grant each Lambda function permission to access the DynamoDB table
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:*
        # - dynamodb:Query
        # - dynamodb:Scan
        # - dynamodb:GetItem
        # - dynamodb:PutItem
      Resource: "*"

# package:
#   exclude:
#     - Dynamo-SDK/**

custom:
  tableName: countries

# As shown below, when the HTTP POST or GET request is made, the handler should be invoked.
functions:

#(1) Lambda function to initially fill DynamoDB
  FillDynamoDB:
    handler: lambdas/countries_loadData.fill
    description: fill DynamoDB table with set of countries.
    events:
      - http: 
          path: fill-dynamoDB
          method: POST
          cors: true
    layers:
      - arn:aws:lambda:us-east-2:&amp;lt;ACCOUNT-ID&amp;gt;:layer:ModulesLayer:3
      - arn:aws:lambda:us-east-2:&amp;lt;ACCOUNT-ID&amp;gt;:layer:DynamoSdkLayer:3

#(2) Lambda function to list all the countries 
  GetAllCountries:
    handler: lambdas/list-countries.list
    description: get all the countries information.
    events:
      - http: 
          path: list-countries
          method: GET
          cors: true
    layers:
      - arn:aws:lambda:us-east-2:&amp;lt;ACCOUNT-ID&amp;gt;:layer:ModulesLayer:3
      - arn:aws:lambda:us-east-2:&amp;lt;ACCOUNT-ID&amp;gt;:layer:DynamoSdkLayer:3

#(3) Lambda function to get a country by name
  GetCountryByName:
    handler: lambdas/get-country-by-name.get
    description: get country By Name.
    events:
      - http: 
          path: get-country/{NAME}
          method: GET
          cors: true
    layers:
      - arn:aws:lambda:us-east-2:&amp;lt;ACCOUNT-ID&amp;gt;:layer:ModulesLayer:3
      - arn:aws:lambda:us-east-2:&amp;lt;ACCOUNT-ID&amp;gt;:layer:DynamoSdkLayer:3

#(4) Lambda function to add a new country 
  AddNewCountry:
    handler: lambdas/add-country.add
    description: add a new country.
    events:
      - http: 
          path: add-country
          method: POST
          cors: true
    layers:
      - arn:aws:lambda:us-east-2:&amp;lt;ACCOUNT-ID&amp;gt;:layer:ModulesLayer:3
      - arn:aws:lambda:us-east-2:&amp;lt;ACCOUNT-ID&amp;gt;:layer:DynamoSdkLayer:3



#Resources are AWS infrastructure components which your Functions use. 
#The Serverless Framework deploys the AWS components your Functions depend upon.
resources:
  Resources:
    myDynamoDbTable:
      Type: 'AWS::DynamoDB::Table'
      #DeletionPolicy: Retain
      Properties:       
        TableName: ${self:custom.tableName}

        AttributeDefinitions:
          -
            AttributeName: "NAME"
            AttributeType: "S" 

        KeySchema:
          -
            AttributeName: "NAME"
            KeyType: "HASH"

        BillingMode: PAY_PER_REQUEST

    # Custom resource to invoke lambda function to fill the countries DynamoDB table
    TriggerFillDynamoDBFunction:
     Type: AWS::CloudFormation::CustomResource
     Properties:
       ServiceToken: !GetAtt 'FillDynamoDBLambdaFunction.Arn'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
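&lt;p&gt;The tableName environment variable defined above is what the handler code reads at runtime. The sketch below is illustrative only (buildGetParams is a hypothetical helper, not a file from this project); it shows how a handler might turn that variable into a DynamoDB GetItem request keyed on the table's NAME hash key:&lt;/p&gt;

```javascript
// Hypothetical sketch: build a DynamoDB GetItem request from the tableName
// env var injected by serverless.yml, using the table's NAME hash key.
process.env.tableName = process.env.tableName || "countries";

function buildGetParams(countryName) {
  return {
    TableName: process.env.tableName,
    Key: { NAME: { S: countryName } }
  };
}

console.log(buildGetParams("Egypt"));
```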



&lt;p&gt;&lt;strong&gt;5. Deploy Serverless API&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deploy the serverless configuration by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sls deploy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6. Access API Endpoints&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can access the endpoints using the following URLs:&lt;/p&gt;

&lt;p&gt;List countries: GET&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://nymaeazapc.execute-api.us-east-2.amazonaws.com/dev/list-countries
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get country by name: GET&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://nymaeazapc.execute-api.us-east-2.amazonaws.com/dev/get-country/{NAME}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add a country: POST&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://nymaeazapc.execute-api.us-east-2.amazonaws.com/dev/add-country

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
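&lt;p&gt;For example, the endpoints above can be exercised with curl. The JSON body for add-country is an assumed shape; match whatever your add-country handler actually expects:&lt;/p&gt;

```shell
# List all countries
curl https://nymaeazapc.execute-api.us-east-2.amazonaws.com/dev/list-countries

# Get one country by name
curl https://nymaeazapc.execute-api.us-east-2.amazonaws.com/dev/get-country/Egypt

# Add a country (example payload; the field name is an assumption)
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"NAME": "Egypt"}' \
  https://nymaeazapc.execute-api.us-east-2.amazonaws.com/dev/add-country
```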



&lt;p&gt;&lt;strong&gt;7. Remove All Functions, Events, and Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To clean up, remove all AWS resources created by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sls remove

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By following these steps, you have successfully utilized AWS Lambda Layers to modularize and reuse DynamoDB queries across multiple Lambda functions. This approach demonstrates the power of Lambda Layers in promoting code reuse and maintainability, making your serverless applications more efficient and scalable.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>serverless</category>
      <category>api</category>
    </item>
    <item>
      <title>Building Scalable Serverless Applications with AWS SQS and Lambda using SAM</title>
      <dc:creator>Ahmed Salem</dc:creator>
      <pubDate>Sat, 17 Feb 2024 21:59:41 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-scalable-serverless-applications-with-aws-sqs-and-lambda-using-sam-339c</link>
      <guid>https://dev.to/aws-builders/building-scalable-serverless-applications-with-aws-sqs-and-lambda-using-sam-339c</guid>
      <description>&lt;p&gt;In this tutorial, we will explore how to leverage the AWS Serverless Application Model (SAM) to build a scalable serverless application that utilizes Amazon Simple Queue Service (SQS) and AWS Lambda. Our goal is to create a system where a scheduled Lambda function sends random messages to an SQS queue, triggering another Lambda function to process and log these messages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;br&gt;
Our solution involves creating two Lambda functions - one to produce random messages and push them into an SQS queue, and another to consume these messages from the queue and log them to CloudWatch. We'll use AWS SAM to define and deploy our serverless application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fircecloh021p95s71hs5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fircecloh021p95s71hs5.png" alt="Image description" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Guide&lt;/strong&gt;&lt;br&gt;
Let's break down the implementation into the following steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Setting Up AWS SAM CLI&lt;/strong&gt;&lt;br&gt;
Ensure the AWS SAM CLI is installed on your local machine. Follow the installation instructions provided &lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-windows.html" rel="noopener noreferrer"&gt;here&lt;/a&gt; for Windows or refer to the official documentation for other operating systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Creating an AWS SAM Project Using PowerShell&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1) Use the sam init command to generate a sample module. This gives us a preformatted project that is ready to be deployed after we change a few things.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; # sam init

          sam-app/
            ├── README.md
            ├── events/
            │   └── event.json
            ├── hello_world/
            │   ├── __init__.py
            │   ├── app.py            #Contains your AWS Lambda handler logic.
            │   └── requirements.txt  #Contains any Python dependencies the application requires, used for sam build
            ├── template.yaml         #Contains the AWS SAM template defining your application's AWS resources.
            └── tests/
                └── unit/
                    ├── __init__.py
                    └── test_handler.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) The project will be created under C:\Windows\System32. Copy it into your target directory and navigate to it in PowerShell using the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cd "C:\Users\ahmedsalem"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Defining SQS Resource&lt;/strong&gt;&lt;br&gt;
Create SQS resource in template.yaml file&lt;/p&gt;

&lt;p&gt;template.yaml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: &amp;gt;
  ahmedsalem

  Sample SAM Template for ahmedsalem

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3

Resources:
  MyQueue: 
    Type: AWS::SQS::Queue
    Properties: 
      QueueName: "ahmedsalem-sqs"
      VisibilityTimeout: 120

  SendDataToSQS:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: Sender/
      Handler: sender.send_data
      Runtime: nodejs12.x
      Timeout: 60

  LambdaScheduledRule: 
      Type: AWS::Events::Rule
      Properties: 
        Description: "ScheduledRule"
        ScheduleExpression: cron(*/2 * * * ? *)
        State: "ENABLED"
        Targets: 
          - 
            Arn: !GetAtt 'SendDataToSQS.Arn'
            Id: "TargetFunctionV1"

  PermissionForEventsToInvokeLambda:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName: !Ref SendDataToSQS
        Action: "lambda:InvokeFunction"
        Principal: "events.amazonaws.com"
        SourceArn: !GetAtt LambdaScheduledRule.Arn

  LambdaRoleForSQS:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
              - lambda.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
        - "arn:aws:iam::aws:policy/AmazonSQSFullAccess"
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaSQSQueueExecutionRole"
      # Policies:
      #   - PolicyName: SQSpermissions
      #     PolicyDocument:
      #       Version: 2012-10-17
      #       Statement:
      #         - Effect: Allow
      #           Action:
      #           - "sqs:*"
      #           Resource: '*'

  SampleSQSPolicy: 
    Type: AWS::SQS::QueuePolicy
    Properties: 
      Queues: 
        - !Ref MyQueue
      PolicyDocument: 
        Statement: 
          - 
            Action: 
            - "SQS:*"
            # - "SQS:SendMessage" 
            # - "SQS:ReceiveMessage"
            Effect: "Allow"
            Resource: "*"
            Principal:  
              Service:
              - lambda.amazonaws.com     

  LambdaConsumer:
    Type: AWS::Lambda::Function
    Properties:
        #CodeUri: Receiver/
        Handler: receiver.receive_data
        Runtime: nodejs12.x
        Timeout: 60
        Role: !GetAtt LambdaRoleForSQS.Arn

  LambdaFunctionEventSourceMapping:
   Type: AWS::Lambda::EventSourceMapping
   Properties:
     BatchSize: 1
     Enabled: true
     EventSourceArn: !GetAtt MyQueue.Arn
     FunctionName: !GetAtt LambdaConsumer.Arn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Configuring CloudWatch Event Rule&lt;/strong&gt;&lt;br&gt;
Define a CloudWatch Event Rule in the template.yaml file to trigger the Lambda function every two minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Implementing Lambda Functions&lt;/strong&gt;&lt;br&gt;
Create a Lambda function resource that sends random data to SQS, with its Node.js backend code.&lt;/p&gt;

&lt;p&gt;sender.js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Load the AWS SDK for Node.js
var AWS = require('aws-sdk');

// Set the region 
//AWS.config.update({region: 'REGION'});

// Create an SQS service object
var sqs = new AWS.SQS({apiVersion: '2012-11-05'});

module.exports.send_data = (event, context) =&amp;gt; {
    var sqs_url = "https://sqs.us-east-2.amazonaws.com/944163165741/ahmedsalem-sqs"

    var params = {
        // Remove DelaySeconds parameter and value for FIFO queues
       DelaySeconds: 10,
       MessageAttributes: {
         "Title": {
           DataType: "String",
           StringValue: "The Whistler"
         },
         "Author": {
           DataType: "String",
           StringValue: "John Grisham"
         },
         "WeeksOn": {
           DataType: "Number",
           StringValue: "6"
         }
       },
       MessageBody: "Information about current NY Times fiction bestseller for week of 12/11/2016.",
       // MessageDeduplicationId: "TheWhistler",  // Required for FIFO queues
       // MessageGroupId: "Group1",  // Required for FIFO queues
       QueueUrl: sqs_url
    };

     // Return the promise so the Lambda runtime waits for the send to finish
     // instead of freezing the execution environment before the callback fires.
     return sqs.sendMessage(params).promise()
       .then(function(data) { console.log("Success", params.MessageBody); })
       .catch(function(err) { console.log("Error", err); });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6. Create a lambda permission resource for events to invoke sender.js Lambda Function.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Create an IAM Role to allow lambda access to SQS.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Create SQS QueuePolicy resource to grant lambda access to SQS.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Create a Lambda function resource that receives the random data from SQS and sends it to CloudWatch Logs, with its Node.js backend code.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;receiver.js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exports.receive_data = async function(event, context) {
    event.Records.forEach(record =&amp;gt; {
      const { body } = record;
      console.log(body);
    });
    return {};
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
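&lt;p&gt;Because receive_data only reads event.Records[].body, it can be exercised locally with a hand-written sample of the SQS event shape Lambda delivers. The sample event below is illustrative, not captured from AWS:&lt;/p&gt;

```javascript
// Local harness: a copy of the receiver logic fed a sample SQS event.
// Lambda delivers SQS messages as event.Records, each with a body string.
const logged = [];

async function receive_data(event, context) {
  event.Records.forEach(function (record) {
    logged.push(record.body); // receiver.js sends this to CloudWatch via console.log
  });
  return {};
}

const sampleEvent = {
  Records: [
    { messageId: "1", body: "Information about current NY Times fiction bestseller for week of 12/11/2016." }
  ]
};

receive_data(sampleEvent);
console.log(logged);
```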



&lt;p&gt;&lt;strong&gt;10. Build your application&lt;/strong&gt;&lt;br&gt;
Change into the project directory, where the template.yaml file for the sample application is located, then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# sam build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;11. Deploy your application to the AWS Cloud&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# sam deploy --guided
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
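&lt;p&gt;Once deployed, you can confirm the pipeline from the CLI. Note that aws logs tail requires AWS CLI v2, the log group follows Lambda's /aws/lambda/function-name convention, and the exact consumer function name depends on what SAM generated:&lt;/p&gt;

```shell
# Confirm the queue exists
aws sqs get-queue-url --queue-name ahmedsalem-sqs --region us-east-2

# Watch the consumer's CloudWatch logs for the forwarded message bodies
# (replace the function name with the one SAM actually created).
aws logs tail /aws/lambda/my-consumer-function --follow --region us-east-2
```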



&lt;p&gt;&lt;strong&gt;12. Clean up&lt;/strong&gt;&lt;br&gt;
If you no longer need the AWS resources that you created, you can remove them by deleting the AWS CloudFormation stack that you deployed.&lt;/p&gt;

&lt;p&gt;You can delete the AWS CloudFormation stack using one of the below options:&lt;/p&gt;

&lt;p&gt;1- From the AWS Management Console.&lt;/p&gt;

&lt;p&gt;2- By running the following AWS CLI command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# aws cloudformation delete-stack --stack-name ahmedsalem --region us-east-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
By following these steps, you can build a scalable serverless application using AWS SAM, SQS, and Lambda. This architecture enables efficient processing of messages and facilitates the development of event-driven applications.&lt;/p&gt;

</description>
      <category>sqs</category>
      <category>sam</category>
      <category>lambda</category>
      <category>aws</category>
    </item>
    <item>
      <title>Leveraging Kinesis Stream and ElasticSearch with Serverless Framework</title>
      <dc:creator>Ahmed Salem</dc:creator>
      <pubDate>Sun, 14 Jan 2024 08:17:09 +0000</pubDate>
      <link>https://dev.to/aws-builders/leveraging-kinesis-stream-and-elasticsearch-with-serverless-framework-dj3</link>
      <guid>https://dev.to/aws-builders/leveraging-kinesis-stream-and-elasticsearch-with-serverless-framework-dj3</guid>
      <description>&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;br&gt;
This guide outlines a dynamic architecture that seamlessly integrates Amazon Kinesis Stream, Elasticsearch, and Serverless Framework for efficient data collection, processing, and analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
In this guide, we'll explore the integration of Amazon Kinesis Stream and ElasticSearch using the Serverless Framework. This implementation involves creating a scheduled Node.js Lambda function that generates random data and pushes it to a Kinesis stream. Subsequently, another Lambda function consumes this stream and stores the data in ElasticSearch. The ultimate goal is to enable querying of inserted data via Kibana over ElasticSearch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Components:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- AWS Kinesis Stream:&lt;/strong&gt; A scalable stream for real-time data ingestion and buffering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Elasticsearch:&lt;/strong&gt; A powerful search and analytics engine for storing, indexing, and querying data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Serverless Framework:&lt;/strong&gt; A toolkit for building and deploying serverless applications, simplifying infrastructure management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Lambda Functions:&lt;/strong&gt; Serverless functions triggered by events to execute specific tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mxbnrfl9qcypxqor8qp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mxbnrfl9qcypxqor8qp.png" alt="Image description" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Flow:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Generation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A time-based event (CloudWatch Event Rule) triggers a Lambda function (sender.js) every 2 minutes.&lt;/li&gt;
&lt;li&gt;"sender.js" generates random data and pushes it into the Kinesis Stream.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Data Consumption and Storage:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A second Lambda function (receiver.js) continuously consumes data from the Kinesis Stream.&lt;/li&gt;
&lt;li&gt;"receiver.js" extracts and processes the data, preparing it for storage.&lt;/li&gt;
&lt;li&gt;"receiver.js" sends the processed data to Elasticsearch for indexing and storage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Data Analysis:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kibana, a visualization tool integrated with Elasticsearch, enables interactive exploration and analysis of the stored data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before we delve into the implementation, ensure you have the Serverless Framework installed on your machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install serverless -g
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-by-Step Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create a Serverless Project&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless create --template aws-nodejs --path my-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command will create "my-service" directory with the following structure:&lt;br&gt;
      .&lt;br&gt;
      ├── .npmignore&lt;br&gt;
      ├── handler.js&lt;br&gt;
      └── serverless.yml&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Create a Kinesis Stream&lt;/strong&gt;&lt;br&gt;
Follow the &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kinesis-stream.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; to create a Kinesis stream as a CloudFormation resource.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Schedule Data Generation&lt;/strong&gt;&lt;br&gt;
Create a CloudWatch Event Rule resource to trigger the "sender.js" Lambda function every 2 minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Lambda Permission for Event&lt;/strong&gt;&lt;br&gt;
Create a Lambda Permission resource for events to invoke the "sender.js" Lambda function.&lt;/p&gt;



&lt;p&gt;Check the serverless.yml file below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The service name has to be unique to your account.
service: my-service

# framework version range supported by this service.
frameworkVersion: '2'

# Configuration of the cloud provider. As we are using AWS, we define the corresponding AWS configuration.
provider:
  name: aws
  #runtime: nodejs14.x
  #stage: dev
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
  region: us-east-2

  # Create an ENV variable to be able to use it in my JS code. 
  environment:
    kinesisName: ${self:resources.Resources.MyKinesisStream.Properties.Name}

  # Grant each Lambda function access to Kinesis and Elasticsearch
  iamRoleStatements:
    - Effect: Allow
      Action:
        - kinesis:*
        - es:*
      Resource: "*"


custom:
  CronExpression: cron(*/2 * * * ? *)  


functions:
  #(1) Lambda function that sends random data to kinesis stream.
  SendDataTokinesis:
    handler: sender.handler

  #(2) Lambda function that receives random data from kinesis stream.
  ReceiveDataFromkinesis:
    handler: receiver.handler
    events:
      - stream:
          type: kinesis
          arn:
            Fn::GetAtt:
              - MyKinesisStream
              - Arn

#Resources are AWS infrastructure components which your Functions use. 
#The Serverless Framework deploys the AWS components your Functions depend upon.
resources:
  Resources:
    MyKinesisStream: 
      Type: AWS::Kinesis::Stream 
      Properties: 
          Name: ahmedsalem-KinesisStream
          RetentionPeriodHours: 168 
          ShardCount: 3

    LambdaScheduledRule: 
      Type: AWS::Events::Rule
      Properties: 
        Description: "ScheduledRule"
        ScheduleExpression: ${self:custom.CronExpression}
        State: "ENABLED"
        Targets: 
          - 
            Arn: !GetAtt 'SendDataTokinesisLambdaFunction.Arn'
            Id: "TargetFunctionV1"

    PermissionForEventsToInvokeLambda:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName: !Ref SendDataTokinesisLambdaFunction
        Action: "lambda:InvokeFunction"
        Principal: "events.amazonaws.com"
        SourceArn: !GetAtt LambdaScheduledRule.Arn

    ElasticSearchInstance:
      Type: AWS::Elasticsearch::Domain
      Properties:
        DomainName: 'test'
        EBSOptions:
          EBSEnabled: true
          VolumeType: gp2
          VolumeSize: 10
        AccessPolicies:
          Version: '2012-10-17'
          Statement:
            -
              Effect: 'Allow'
              Principal:
                Service: 'lambda.amazonaws.com'
                #AWS: 'arn:aws:iam::&amp;lt;ACCOUNT-ID&amp;gt;:user/&amp;lt;USER-ID&amp;gt;'
              Action: 'es:*'
              #Resource: 'arn:aws:es:us-east-2:944163165741:domain/test/*'
              Resource: '*'
        ElasticsearchClusterConfig:
          InstanceType: t2.small.elasticsearch
          InstanceCount: 1
          DedicatedMasterEnabled: false
          ZoneAwarenessEnabled: false
        ElasticsearchVersion: 5.3


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Implement sender.js&lt;/strong&gt;&lt;br&gt;
Create a Lambda function sender.js that sends random data to the Kinesis stream.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { KinesisClient, PutRecordCommand } = require("@aws-sdk/client-kinesis"); // CommonJS import
const {v4: uuidv4} = require("uuid")
const faker = require("faker")

const kinesisInstance = new KinesisClient();

const kinesisName = process.env.kinesisName

module.exports.handler = async function(event) {
  const response = await kinesisInstance.send(new PutRecordCommand({
    Data: Buffer.from(JSON.stringify({
      name: faker.name.firstName(),
      jobTitle: faker.name.jobTitle()  
    })),
    PartitionKey: uuidv4(),
    StreamName: kinesisName
  }));
  console.log(response)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, install the required packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm init
npm install @aws-sdk/client-kinesis
npm install uuid
npm install faker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6. Implement receiver.js&lt;/strong&gt;&lt;br&gt;
Create a Lambda function receiver.js that consumes data from the Kinesis stream.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { HttpRequest} = require("@aws-sdk/protocol-http");
const { defaultProvider } = require("@aws-sdk/credential-provider-node");
const { SignatureV4 } = require("@aws-sdk/signature-v4");
const { NodeHttpHandler } = require("@aws-sdk/node-http-handler");
const { Sha256 } = require("@aws-crypto/sha256-browser");
const {v4: uuidv4} = require("uuid")


var region = 'us-east-2';
var domain = 'search-test-mz4py3tpigmamsdhngu5gjdvaq.us-east-2.es.amazonaws.com'; // e.g. search-domain.region.es.amazonaws.com
var index = 'node-test';
var type = 'doc';
var id = '8';


module.exports.handler = async function(event) {
    // Process the records sequentially; Array.forEach does not await async callbacks
    for (const record of event.Records) {
        console.log(record.kinesis.data);
        // Kinesis data is base64 encoded so decode here
        var payload = Buffer.from(record.kinesis.data, 'base64').toString();
        console.log('Decoded payload:', payload);
        await indexDocument(payload);
    }
}



async function indexDocument(document) {

    // Create the HTTP request
    var request = new HttpRequest({
        body: document,
        headers: {
            'Content-Type': 'application/json',
            'host': domain
        },
        hostname: domain,
        method: 'PUT',
        path: index + '/' + type + '/' + id
    });

    // Sign the request
    var signer = new SignatureV4({
        credentials: defaultProvider(),
        region: region,
        service: 'es',
        sha256: Sha256
    });

    var signedRequest = await signer.sign(request);

    // Send the request
    var client = new NodeHttpHandler();
    var { response } =  await client.handle(signedRequest)
    //console.log(response.statusCode + ' ' + response.body.statusMessage);
    console.log(response);

    var responseBody = '';
    await new Promise((resolve, reject) =&amp;gt; {
      response.body.on('data', (chunk) =&amp;gt; {
        responseBody += chunk;
      });
      response.body.on('end', () =&amp;gt; {
        console.log('Response body: ' + responseBody);
        resolve(responseBody);
      });
      response.body.on('error', reject);
    });
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
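The decode step above can be sanity-checked locally with a hand-built record; the helper and the simulated record below are illustrative names, not part of the deployed code:

```javascript
// Decode a Kinesis record payload the same way receiver.js does.
function decodeKinesisRecord(record) {
  // Kinesis delivers the payload base64-encoded
  return JSON.parse(Buffer.from(record.kinesis.data, 'base64').toString());
}

// Simulate a record shaped like the one sender.js produces
const sample = { name: 'Ada', jobTitle: 'Engineer' };
const record = {
  kinesis: { data: Buffer.from(JSON.stringify(sample)).toString('base64') }
};

console.log(decodeKinesisRecord(record).name); // prints "Ada"
```

Running this with plain Node (no AWS access) confirms the payload round-trips through base64 intact.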



&lt;p&gt;&lt;strong&gt;7. Set Up ElasticSearch&lt;/strong&gt;&lt;br&gt;
Create an ElasticSearch resource with the necessary permissions for Lambda to access it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Update receiver.js&lt;/strong&gt;&lt;br&gt;
Update the receiver.js Lambda function to consume the Kinesis stream and send data to ElasticSearch. Install the required package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install http-aws-es
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
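One possible shape for the updated indexing code, sketched under the assumption that the legacy elasticsearch client is used with http-aws-es as its connection class. The function names and the node-test/doc layout mirror the receiver above; this is a sketch, not the exact code of a finished repo:

```javascript
// Build the request body for a single document, matching the
// index/type layout used by receiver.js above.
function buildIndexRequest(payload) {
  return { index: 'node-test', type: 'doc', body: JSON.parse(payload) };
}

// Assumes the `elasticsearch` and `http-aws-es` npm packages are installed
// and the Lambda role is allowed to call es:* on the domain (see serverless.yml).
async function indexDocument(payload, domainEndpoint) {
  // Lazy requires: only needed when actually indexing against AWS
  const elasticsearch = require('elasticsearch');
  const httpAwsEs = require('http-aws-es'); // signs requests with SigV4 using the Lambda role
  const client = new elasticsearch.Client({
    host: domainEndpoint, // e.g. the search-...es.amazonaws.com endpoint
    connectionClass: httpAwsEs
  });
  return client.index(buildIndexRequest(payload));
}

module.exports = { buildIndexRequest, indexDocument };
```

This delegates request signing to http-aws-es instead of hand-rolling SignatureV4 as in the earlier receiver.js listing.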



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Following these detailed steps, you've successfully integrated Amazon Kinesis Stream and ElasticSearch using the Serverless Framework. This architecture allows you to generate and process data in real-time, storing it in ElasticSearch for easy querying and analysis using tools like Kibana.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.serverless.com/framework/docs/providers/aws/events/streams/" rel="noopener noreferrer"&gt;Serverless Framework - DynamoDB / Kinesis Streams&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis-example.html" rel="noopener noreferrer"&gt;Tutorial: Using AWS Lambda with Amazon Kinesis&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html" rel="noopener noreferrer"&gt;What is Amazon OpenSearch Service?&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/kinesis/" rel="noopener noreferrer"&gt;KinesisClient&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticsearch-domain.html" rel="noopener noreferrer"&gt;AWS::Elasticsearch::Domain&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kinesis</category>
      <category>serverless</category>
      <category>elasticsearch</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Leveraging AWS Cognito with Serverless for User Authentication and Data Management</title>
      <dc:creator>Ahmed Salem</dc:creator>
      <pubDate>Fri, 13 Oct 2023 06:06:30 +0000</pubDate>
      <link>https://dev.to/aws-builders/leveraging-aws-cognito-with-serverless-for-user-authentication-and-data-management-15m1</link>
      <guid>https://dev.to/aws-builders/leveraging-aws-cognito-with-serverless-for-user-authentication-and-data-management-15m1</guid>
      <description>&lt;p&gt;In this comprehensive guide, we will demonstrate how to integrate AWS Cognito with a Serverless application to handle user registration, authentication, and data management. This powerful combination allows you to build secure and scalable serverless APIs with user authentication and data storage features. We'll break down the process step by step and provide code snippets for each task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview:&lt;/strong&gt;&lt;br&gt;
Our objective is to create a Serverless API using Node.js Lambda functions that enable user registration and list countries from DynamoDB for authenticated users. We'll leverage the Serverless Framework for this purpose.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Acceptance Criteria:&lt;/strong&gt;&lt;br&gt;
We aim to achieve the following:&lt;br&gt;
1- Register users through the Serverless API and verify their email addresses.&lt;br&gt;
2- Enable users to list countries using a REST client by providing a token obtained during registration.&lt;br&gt;
3- Store user data securely in DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4125a4910morohkx99vf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4125a4910morohkx99vf.png" alt="Image description" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Implementation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install Serverless Framework&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Serverless Framework simplifies the deployment and management of serverless applications. To get started, you can install it globally using npm (Node Package Manager).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install serverless -g
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command installs the Serverless Framework globally on your machine, allowing you to create and manage serverless projects with ease.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a Serverless Project&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Serverless Framework provides a convenient way to create a new serverless project. You can use predefined templates to scaffold your project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless create --template aws-nodejs --path my-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command creates a new serverless project in a directory called my-service. Inside this directory, you'll find essential project files such as serverless.yml and handler.js.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Create a DynamoDB Table for Countries&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In your serverless.yml file shown below, you need to define the DynamoDB table that will store country data. DynamoDB is a fully managed NoSQL database service provided by AWS.&lt;/p&gt;

&lt;p&gt;serverless.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Service name has to be unique for your account.
service: my-service

# Framework version range supported by this service.
frameworkVersion: '2'

# Cloud provider configuration. Since we are using AWS, we define the AWS-specific settings here.
provider:
  name: aws
  runtime: nodejs14.x
  #lambdaHashingVersion: 20201221
  stage: dev
  region: us-east-2

  # Create ENV variables to be able to use them in the JS code. *** Check line 4 in the get-country-by-name JS file ***
  environment:
    countriestableName: ${self:custom.countriestableName}
    userstableName: ${self:custom.userstableName}

  # Grant each Lambda function permission to access the DynamoDB tables
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:*
        # - dynamodb:Query
        # - dynamodb:Scan
        # - dynamodb:GetItem
        # - dynamodb:PutItem
      Resource: "*"


custom:
  countriestableName: countriees  
  userstableName: useres
  userPoolCallback: http://localhost

# As shown below, when an HTTP POST or GET request is made, the corresponding handler is invoked.
functions:
  #(1) Lambda function to initially fill DynamoDB
  FillDynamoDB:
    handler: lambdas/common/countries_loadData.fill
    description: fill DynamoDB table with set of countries.
    events:
      - http: 
          path: fill-dynamoDB
          method: POST
          cors: true

  #(2) Lambda function to list all the countries 
  GetAllCountries:
    handler: lambdas/lambda-endpoints/list-countries.list
    description: get all the countries information.
    events:
      - http: 
          path: list-countries
          method: GET
          cors: true 
          integration: lambda
          authorizer:
            #name: ${self:resources.Resources.ApiGatewayAuthorizer.Properties.Name}
            name: CognitoUserPoolAuthorizer
            # The type of authorizer. COGNITO_USER_POOLS: An authorizer that uses Amazon Cognito user pools.
            type: COGNITO_USER_POOLS
            scopes: email
            # The source of the identity in an incoming request.
            identitySource: method.request.header.Authorization
            # The ID of the RestApi resource that API Gateway creates the authorizer in.
            RestApiId: ApiGatewayRestApi
            arn:
              Fn::GetAtt:
                - UserPool
                - Arn

  #(3) Lambda function to add the user data in users dynamoDB table 
  RegisterNewUser:
    handler: lambdas/lambda-endpoints/addusertoDB.putNewUser
    description: add the newly registered user's data to the users DynamoDB table.
    events:
      - http: 
          path: add-new-user
          method: POST
          cors: true

#Resources are the AWS infrastructure components your Functions use.
#The Serverless Framework deploys the AWS components your Functions depend upon.
resources:
  ${file(./CF.yaml)}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CF.yaml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; Resources:
    countriesDynamoDbTable:
      Type: 'AWS::DynamoDB::Table'
      #DeletionPolicy: Retain
      Properties:       
        TableName: ${self:custom.countriestableName}

        AttributeDefinitions:
          -
            AttributeName: "NAME"
            AttributeType: "S" 


        KeySchema:
          -
            AttributeName: "NAME"
            KeyType: "HASH"

        BillingMode: PAY_PER_REQUEST

    usersDynamoDbTable:
      Type: 'AWS::DynamoDB::Table'
      #DeletionPolicy: Retain
      Properties:       
        TableName: ${self:custom.userstableName}

        AttributeDefinitions:
          -
            AttributeName: "NAME"
            AttributeType: "S" 


        KeySchema:
          -
            AttributeName: "NAME"
            KeyType: "HASH"

        BillingMode: PAY_PER_REQUEST

    # Custom resource to invoke lambda function to fill the countries DynamoDB table
    TriggerFillDynamoDBFunction:
     Type: AWS::CloudFormation::CustomResource
     #DependsOn: !Ref FillDynamoDB
     Properties:
       #ServiceToken: arn:aws:lambda:us-east-2:944163165741:function:my-service-dev-FillDynamoDB
       ServiceToken: !GetAtt 'FillDynamoDBLambdaFunction.Arn'


    # Cognito User Pool Resource
    UserPool:
      Type: AWS::Cognito::UserPool
      Properties:
        UserPoolName: ahmedsalem-UserPool
        Schema:
          - Name: email
            AttributeDataType: String
            Mutable: false
            Required: true
        AutoVerifiedAttributes:
          - email
        UsernameConfiguration:
          CaseSensitive: false
        AccountRecoverySetting:
          RecoveryMechanisms:
            - Priority: 1
              Name: "verified_email"
        LambdaConfig:
          PostConfirmation: !GetAtt RegisterNewUserLambdaFunction.Arn

    UserPoolToRegisterNewUserLambdaPermission:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName: !GetAtt RegisterNewUserLambdaFunction.Arn
        Principal: cognito-idp.amazonaws.com
        Action: lambda:InvokeFunction
        SourceArn: !GetAtt UserPool.Arn

    UserPoolClient:
      Type: AWS::Cognito::UserPoolClient
      Properties:
        ClientName: ahmedsalem-UserPoolClient
        UserPoolId: !Ref UserPool
        GenerateSecret: false
        AllowedOAuthFlows:
          - implicit
        AllowedOAuthFlowsUserPoolClient: true
        AllowedOAuthScopes:
          - phone
          - email
          - openid
          - profile
          - aws.cognito.signin.user.admin
        CallbackURLs:
          - ${self:custom.userPoolCallback}
        SupportedIdentityProviders:
          - COGNITO

    UserPoolDomain:
      Type: AWS::Cognito::UserPoolDomain
      Properties:
        Domain: ahmedsalem-app
        UserPoolId: !Ref UserPool
 Outputs:
  TokenURL:
    Description: Url for users signing in/up
    Value: !Sub "https://${UserPoolDomain}.auth.us-east-2.amazoncognito.com/oauth2/authorize?response_type=token&amp;amp;client_id=${UserPoolClient}&amp;amp;redirect_uri=${self:custom.userPoolCallback}"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Create a Lambda function to initially fill the countries DynamoDB table in the serverless.yml file, along with its backend Node.js code.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;API_Responses.js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const Responses = {
    _200(data = {}) {
        return {
            headers: {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Methods': '*',
                'Access-Control-Allow-Origin': '*',
            },
            statusCode: 200,
            body: JSON.stringify(data),
        };
    },

    _400(data = {}) {
        return {
            headers: {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Methods': '*',
                'Access-Control-Allow-Origin': '*',
            },
            statusCode: 400,
            body: JSON.stringify(data),
        };
    },
};

module.exports = Responses;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;countries.json&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[

    {
        "name": "Albania",
        "code": "AL"
    },

    {
        "name": "Australia",
        "code": "AU"

    },

    {
        "name": "Belgium",
        "code": "BE"

    },

    {
        "name": "Brazil",
        "code": "BR"

    },

    {
        "name": "China",
        "code": "CN"

    },

    {
        "name": "Greece",
        "code": "GR"

    }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Dynamo.js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//  To access AWS resources  
const AWS = require('aws-sdk');

// When working with DynamoDB, use the DocumentClient to access the items stored in its tables
const documentClient = new AWS.DynamoDB.DocumentClient();

// Create an object named Dynamo with a get method that takes a NAME and a table name.
// This is an async call into DynamoDB, so we need to await it.
const Dynamo = {
    async get(NAME, TableName) {
        const params = {
            TableName,
            Key: {
                NAME,
            },
        };

        const data = await documentClient.get(params).promise();

        if (!data || !data.Item) {
            throw Error(`There was an error fetching the data for NAME of ${NAME} from ${TableName}`);
        }
        console.log(data);

        return data.Item;
    },

    async write(data, TableName) {
        if (!data.NAME) {
            throw Error('no NAME on the data');
        }

        const params = {
            TableName,
            Item: data,
        };

        const res = await documentClient.put(params).promise();

        if (!res) {
            throw Error(`There was an error inserting NAME of ${data.NAME} in table ${TableName}`);
        }

        return data;
    },
    async list(TableName) {
        const params = {
            TableName
        };

        const res = await documentClient.scan(params).promise();

        if (!res || !res.Items) {
            throw Error(`There was an error listing all countries from table ${TableName}`);
        }

        return res.Items;
    },
};
module.exports = Dynamo;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;countries_loadData.js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var AWS = require("aws-sdk");
var fs = require('fs');
const Responses = require('../common/API_Responses');
const Dynamo = require('../common/Dynamo');
const countriestableName = process.env.countriestableName;
var response = require('cfn-response');

var documentClient = new AWS.DynamoDB.DocumentClient();

console.log("Importing countries into DynamoDB. Please wait...");

var allcountries = JSON.parse(fs.readFileSync('lambdas/common/countries.json', 'utf8'));

console.log(allcountries);
exports.fill = function(event, context) {

    if (event.RequestType != "Create") {
        return response.send(event, context, response.SUCCESS, {});
    }

    // Write every country, then signal CloudFormation exactly once.
    Promise.all(allcountries.map(function(country) {
        var params = {
            TableName: countriestableName,
            Item: {
                "NAME": country.name,
                "Code": country.code
            }
        };
        return documentClient.put(params).promise();
    })).then(function() {
        console.log("PutItem succeeded for all countries");
        response.send(event, context, response.SUCCESS, {});
    }).catch(function(err) {
        console.error("Unable to add countries. Error JSON:", JSON.stringify(err, null, 2));
        response.send(event, context, response.FAILED, {});
    });
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 5: Create a Custom Resource to fill the DynamoDB Table&lt;/strong&gt;&lt;br&gt;
To automate the population of the DynamoDB table during deployment, you can define a custom resource in the CF.yaml file above. This custom resource triggers the Lambda function responsible for filling the countries DynamoDB table.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Create Lambda Function for Listing Countries&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a Lambda function to list countries from the DynamoDB table. This function should be defined in Node.js and include code for querying DynamoDB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// const Responses = require('../common/API_Responses');
const AWS = require('aws-sdk');
const Dynamo = require('../common/Dynamo');
const documentClient = new AWS.DynamoDB.DocumentClient();
const countriestableName = process.env.countriestableName;

const params = {
 TableName : countriestableName
}

async function listItems(){
 try {
   const data = await documentClient.scan(params).promise()
   return data
 } catch (err) {
   return err
 }
}

exports.list = async (event, context) =&amp;gt; {
 try {
   const data = await listItems()
   return { body: JSON.stringify(data) }
 } catch (err) {
   return { error: err }
 }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 7: Access the API&lt;/strong&gt;&lt;br&gt;
To access your Serverless API, you can use the provided URLs. Utilize a REST client to test the API, specifically the /list-countries endpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8: Create a DynamoDB Table for Users&lt;/strong&gt;&lt;br&gt;
Similar to step 3, you need to define another DynamoDB table to store user data. This table will hold user information after registration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 9: Create a Cognito User Pool&lt;/strong&gt;&lt;br&gt;
Amazon Cognito is an identity management service that simplifies user authentication and authorization. Here, you create a Cognito user pool, which is essentially a user directory where your users can sign up and sign in.&lt;/p&gt;

&lt;p&gt;a) Create a User Pool&lt;/p&gt;

&lt;p&gt;b) Create a Lambda Permission to Register New Users&lt;br&gt;
Create a Lambda permission to allow Cognito to invoke your Lambda function for registering new users. This step establishes the connection between Cognito and your Lambda function.&lt;/p&gt;

&lt;p&gt;c) Create a User Pool Client&lt;br&gt;
A user pool client represents a web or mobile application that interacts with your Cognito User Pool. Define the client properties to configure how users interact with your app.&lt;/p&gt;

&lt;p&gt;d) Create a User Pool Domain&lt;br&gt;
A user pool domain provides a custom domain name for your Cognito user pool's hosted UI. It's used for user sign-up and sign-in.&lt;/p&gt;
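The hosted UI sign-in URL that the TokenURL output in CF.yaml assembles can be built as below; the domain prefix, client id, and callback values are placeholders:

```javascript
// Build the Cognito hosted UI sign-in URL (same shape as the TokenURL output).
// URLSearchParams handles the query-string encoding of the callback URL.
function hostedUiUrl(domainPrefix, region, clientId, callbackUrl) {
  const query = new URLSearchParams({
    response_type: 'token',
    client_id: clientId,
    redirect_uri: callbackUrl
  });
  return 'https://' + domainPrefix + '.auth.' + region +
    '.amazoncognito.com/oauth2/authorize?' + query.toString();
}

console.log(hostedUiUrl('ahmedsalem-app', 'us-east-2', 'CLIENT_ID', 'http://localhost'));
```

With `response_type=token` (the implicit flow enabled on the user pool client), Cognito appends the access_token to the callback URL fragment after sign-in.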

&lt;p&gt;&lt;strong&gt;Step 10: Implement Lambda Function for User Registration&lt;/strong&gt;&lt;br&gt;
Create a Lambda function that handles user registration. This function should be triggered when a user signs up through the Cognito user pool. Ensure that you have installed any required dependencies for your Lambda function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const {
    DynamoDBClient,
    PutItemCommand,
} = require("@aws-sdk/client-dynamodb");


const dynamoDbClient = new DynamoDBClient();
const USER_TABLE = process.env.userstableName;
module.exports.putNewUser = async (event, context, callback) =&amp;gt; {
    console.log(USER_TABLE);
    console.log(event.request.userAttributes.email)
    console.log(event.userName)

    await dynamoDbClient.send(new PutItemCommand({

        TableName: USER_TABLE,
        Item: {
            NAME: {S: event.userName},
            Email: {S: event.request.userAttributes.email}
        }
    }))
    callback(null, event)
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 11: Test User Registration&lt;/strong&gt;&lt;br&gt;
Confirm that the Lambda function fires when a new user signs up through AWS Cognito and that the user's data is saved in the users table.&lt;/p&gt;

&lt;p&gt;(a) Go to the created User Pool and select "App Client Settings"&lt;br&gt;
(b) Click on "Launch Hosted UI"&lt;br&gt;
(c) Click on sign up, add new user, and paste the confirmation code that will be sent to you through the mail&lt;br&gt;
(d) Check the users DynamoDB table, it should have the new registered user data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 12: Create an API Authorizer&lt;/strong&gt;&lt;br&gt;
API authorizers are used to control access to your API endpoints. In this step, you create an API authorizer that checks the access_token header in the request to ensure that only authorized users can access your API.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 13: Test API Authorization&lt;/strong&gt;&lt;br&gt;
To confirm that the API endpoint is restricted to authorized users, follow these steps:&lt;/p&gt;

&lt;p&gt;1- Go to your Cognito User Pool settings.&lt;br&gt;
2- Select "App Client Settings."&lt;br&gt;
3- Launch the hosted UI.&lt;br&gt;
4- Sign in with a user account.&lt;br&gt;
5- After signing in, you'll be redirected to the callback URL.&lt;br&gt;
6- Copy the URL to obtain the access_token.&lt;br&gt;
7- Open a REST client and paste the API endpoint URL.&lt;br&gt;
8- Set the action to "GET."&lt;br&gt;
9- If you attempt to send the request without the access_token, you should receive an "unauthorized request" response.&lt;br&gt;
10- Add an "Authorization" header with the access_token, and you should receive the expected response from the API endpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 14: Deploy Your Serverless Application&lt;/strong&gt;&lt;br&gt;
Deploying your Serverless application is straightforward using the Serverless Framework. The sls deploy command packages and deploys your entire application stack to AWS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sls deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command uploads your Lambda functions, API Gateway, and other resources to AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 15: Remove Resources (Optional)&lt;/strong&gt;&lt;br&gt;
If you ever need to remove all the functions, events, and resources created by your Serverless application from your AWS account, you can use the sls remove command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sls remove
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
In this article, we've covered the complete process of integrating AWS Cognito with a Serverless application for user registration, authentication, and data management. Leveraging the power of Serverless and AWS Cognito, you can build secure and scalable serverless APIs with user authentication.&lt;/p&gt;

</description>
      <category>awscognito</category>
      <category>lambda</category>
      <category>dynamodb</category>
      <category>cloudformation</category>
    </item>
  </channel>
</rss>
