<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chathra Serasinghe</title>
    <description>The latest articles on DEV Community by Chathra Serasinghe (@chathra222).</description>
    <link>https://dev.to/chathra222</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F650365%2Fdedefcde-1d11-4580-bba0-f856dbd5dc60.png</url>
      <title>DEV Community: Chathra Serasinghe</title>
      <link>https://dev.to/chathra222</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chathra222"/>
    <language>en</language>
    <item>
      <title>4K Resolution in Amazon AppStream 2.0</title>
      <dc:creator>Chathra Serasinghe</dc:creator>
      <pubDate>Mon, 18 Sep 2023 12:52:04 +0000</pubDate>
      <link>https://dev.to/chathra222/elevating-visuals-4k-resolution-in-amazon-appstream-20-3b1</link>
      <guid>https://dev.to/chathra222/elevating-visuals-4k-resolution-in-amazon-appstream-20-3b1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Amazon AppStream 2.0, a fully managed application streaming service, has recently introduced support for 4K resolution. This enhancement promises a significant upgrade for users who require high-definition visuals for their applications. But why is 4K resolution a pivotal addition to AppStream, and how can you deploy it using CloudFormation? Let's explore.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do we need 4K resolution?
&lt;/h2&gt;

&lt;p&gt;For the Professionals: Think of graphic designers working on intricate artwork, video editors piecing together 4K footage, architects visualizing detailed structures, or medical professionals analyzing high-res medical images. If they want to stream those applications using AppStream, 4K support is essential.&lt;/p&gt;

&lt;p&gt;Sharper, Clearer, More Detailed: At 4096 x 2160 pixels, 4K offers roughly four times the pixels of Full HD (1920 x 1080). This means applications on AppStream will now look noticeably crisper.&lt;/p&gt;

&lt;p&gt;Elevated User Experience: Even for everyday users, 4K makes text sharper, icons pop, and visuals a treat for the eyes. It's all about offering an unmatched user experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's stream your application in 4K
&lt;/h2&gt;

&lt;p&gt;Before you jump on the 4K bandwagon, here are a few things to keep in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the AppStream client: Make sure you're using the latest AppStream Windows client. Go to this &lt;a href="https://clients.amazonappstream.com/"&gt;link&lt;/a&gt; to download the AppStream client.&lt;/li&gt;
&lt;li&gt;Choose graphics instances: 4K is gorgeous but demands its fair share of resources. Opt for &lt;code&gt;on-demand&lt;/code&gt; or &lt;code&gt;always-on&lt;/code&gt; &lt;strong&gt;graphics instances&lt;/strong&gt; to ensure smooth streaming. You cannot use an AppStream &lt;code&gt;elastic&lt;/code&gt; fleet (no serverless, unfortunately).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Quick guide to deploy:
&lt;/h3&gt;

&lt;p&gt;1) First, create an Image Builder. This is a required step to create a custom image that includes your application and configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: '2010-09-09'
Description: AppStream 2.0 Custom Image Builder
Parameters:
  InstanceType:
    Type: String
    Default: stream.graphics-design.xlarge
  ImageBuilderName:
    Type: String
    Default: 4KImageBuilder
  VpcId:
    Type: AWS::EC2::VPC::Id
  Subnet1Id:
    Type: AWS::EC2::Subnet::Id
Resources:
  AppStreamImageBuilder:
    Type: 'AWS::AppStream::ImageBuilder'
    Properties:
      Name: !Ref ImageBuilderName
      InstanceType: !Ref InstanceType
      ImageArn: !Sub arn:aws:appstream:${AWS::Region}::image/AppStream-Graphics-Design-WinServer2019-06-12-2023
      IamRoleArn: !GetAtt AppStreamIAMRole.Arn
      EnableDefaultInternetAccess: false
      DisplayName: 4K image builder
      Description: 4K image builder
      VpcConfig:
        SecurityGroupIds: 
          - !Ref AppStreamSecurityGroup
        SubnetIds:
          - !Ref Subnet1Id
  AppStreamSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: AppStream Security Group
      SecurityGroupIngress:
        - IpProtocol: -1
          CidrIp: 10.0.0.0/8
      VpcId: !Ref VpcId
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-SecurityGroup"
  AppStreamIAMRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: appstream.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonAppStreamServiceAccess
      Policies:
        - PolicyName: AppStreamIAMPolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: s3:*
                Resource:
                  - "arn:aws:s3:::appstream*/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This template will create the image builder.&lt;/p&gt;
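&lt;p&gt;If you prefer to script the deployment, here is a minimal boto3 sketch; the template filename, stack name, and parameter values are placeholders for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

cfn = boto3.client("cloudformation")

# Read the Image Builder template saved locally (hypothetical filename)
with open("image-builder.yaml") as f:
    template_body = f.read()

# CAPABILITY_IAM is required because the template creates an IAM role
cfn.create_stack(
    StackName="appstream-4k-image-builder",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_IAM"],
    Parameters=[
        {"ParameterKey": "VpcId", "ParameterValue": "vpc-xxxxxxxx"},
        {"ParameterKey": "Subnet1Id", "ParameterValue": "subnet-xxxxxxxx"},
    ],
)

# Block until the stack (and the image builder) is ready
cfn.get_waiter("stack_create_complete").wait(StackName="appstream-4k-image-builder")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;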

&lt;p&gt;2) Connect to the AppStream Image Builder instance, then install and configure your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YNMCEm3m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ybap02fbi269wt74csuw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YNMCEm3m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ybap02fbi269wt74csuw.png" alt="Image description" width="800" height="225"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;More info: &lt;a href="https://docs.aws.amazon.com/appstream2/latest/developerguide/managing-image-builders-connect.html"&gt;https://docs.aws.amazon.com/appstream2/latest/developerguide/managing-image-builders-connect.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3) Using Image Assistant, create an AppStream image called &lt;code&gt;4k-appstream-app&lt;/code&gt;, which is used to define the AppStream fleet in the next step. Replace the variables in the following script to match your application, then run the script inside the Image Builder to create the image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ##*===============================================
    ##* APPSTREAM VARIABLE DECLARATION
    ##*===============================================
    [string]$AppName = '4k-appstream-app'
    [string]$AppPath = 'C:\4k-appstream-app\4k-appstream-app.exe'
    [string]$AppDisplayName = '4k-appstream-app'
    [string]$AppParameters = ''
    [string]$AppWorkingDir = ''
    [string]$AppIconPath =  ''
    [string]$ManifestPath = "C:\Users\ImageBuilderAdmin\Documents\optimization_manifest.txt"

    [string]$ImageAssistantPath = "C:\Program Files\Amazon\Photon\ConsoleImageBuilder\image-assistant.exe"

    ##*===============================================
    ##* ADD APPLICATION TO APPSTREAM CATALOG
    ##*===============================================
    #AppStream's Image Assistant Required Parameters
    $Params = " --name " + $AppName + " --absolute-app-path """ + $AppPath + """"     

    #AppStream's Image Assistant Optional Parameters
    if ($AppDisplayName) { $Params += " --display-name """ + $AppDisplayName + """" }
    if ($AppWorkingDir) { $Params += " --working-directory """ + $AppWorkingDir + """" }
    if ($AppIconPath) { $Params += " --absolute-icon-path """ + $AppIconPath + """" }      
    if ($AppParameters) { $Params += " --launch-parameters """ + $AppParameters + """" }     
    if ($ManifestPath) { $Params += " --absolute-manifest-path """ + $ManifestPath + """" }

    #Escape spaces in EXE path
    $ImageAssistantPath = $ImageAssistantPath -replace ' ','` '

    #Assemble Image Assistant API command to add applications
    $AddAppCMD = $ImageAssistantPath + ' add-application' + $Params

    Write-Host "Adding $AppDisplayName to AppStream Image Catalog using command $AddAppCMD"

    #Run Image Assistant command and parameters
    $AddApp = Invoke-Expression $AddAppCMD | ConvertFrom-Json
    if ($AddApp.status -eq 0) {
        Write-Host "SUCCESS adding $AppName to the AppStream catalog."
    } else {
        Write-Host "ERROR adding $AppName to the AppStream catalog." 
        Write-Host $AddApp.message

    }
    #AppStream's Image Assistant Required Parameters
    $Params = " --name " + $AppName + " --display-name  """ + $AppName + """"     

    #Assemble Image Assistant API command to add applications
    $CreateImgCMD = $ImageAssistantPath + ' create-image' + $Params

    Write-Host "Creating image $AppName using command $CreateImgCMD"

    #Run Image Assistant command and parameters
    $createImg = Invoke-Expression $CreateImgCMD | ConvertFrom-Json
    if ($createImg.status -eq 0) {
        Write-Host "SUCCESS creating image $AppName"
    } else {
        Write-Host "ERROR creating image $AppName" 
        Write-Host $createImg.message

  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the script runs successfully, you will see a new image named &lt;code&gt;4k-appstream-app&lt;/code&gt; under AppStream images.&lt;/p&gt;
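&lt;p&gt;You can also confirm this programmatically; a quick boto3 check (a sketch, assuming the image name above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

appstream = boto3.client("appstream")

# The image must be in the AVAILABLE state before a fleet can use it
image = appstream.describe_images(Names=["4k-appstream-app"])["Images"][0]
print(image["Name"], image["State"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;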

&lt;p&gt;4) Deploy the AppStream fleet and stack using the following CloudFormation template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: 2010-09-09
Description: "Provision AppStream 2.0 Fleet and Stack"
Parameters:
  FleetName:
    Type: String

  FleetInstanceType:
    Type: String
    Default: stream.graphics-design.large

  StackName:
    Type: String

  SecurityGroupId:
    Type: String
    Default: ""

  VpcId:
    Type: String

  Subnet1Id:
    Type: String

  Subnet2Id:
    Type: String

  FileUploadPermission:
    Type: String

  FileDownloadPermission:
    Type: String

  ClipboardCopyToLocalDevice:
    Type: String

  ClipboardCopyFromLocalDevice:
    Type: String

  PrintingToLocalDevice:
    Type: String


Conditions:
  SecurityGroupIdNotGiven: !Equals [!Ref SecurityGroupId, ""]

Resources:
  4KAppStreamFleet:
    Type: AWS::AppStream::Fleet
    Properties:
      Name: !Ref FleetName
      ComputeCapacity:
        DesiredInstances: 1
      InstanceType: !Ref FleetInstanceType
      # Replace with the ARN of the image created in the previous step
      ImageArn: !Sub arn:aws:appstream:${AWS::Region}:${AWS::AccountId}:image/4k-appstream-app
      FleetType: "ON_DEMAND"
      VpcConfig:
        SecurityGroupIds:
          - !If [
              SecurityGroupIdNotGiven,
              !Ref AppStreamSecurityGroup,
              !Ref SecurityGroupId,
            ]
        SubnetIds:
          - !Ref Subnet1Id
          - !Ref Subnet2Id
      EnableDefaultInternetAccess: False
      MaxUserDurationInSeconds: "57600"
      DisconnectTimeoutInSeconds: "900"
      IdleDisconnectTimeoutInSeconds: "900"
      StreamView: APP
    CreationPolicy:
      StartFleet: True

  AppStreamSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: AppStream Security Group
      SecurityGroupIngress:
        - IpProtocol: -1
          CidrIp: 10.0.0.0/8
      VpcId: !Ref VpcId
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-SecurityGroup"

  # Create AppStream Stack
  AppStreamStack:
    Type: AWS::AppStream::Stack
    Properties:
      Name: !Ref StackName
      Description: This demo stack was created using CloudFormation
      ApplicationSettings:
        Enabled: false
      UserSettings:
        - Action: CLIPBOARD_COPY_TO_LOCAL_DEVICE
          Permission: !Ref ClipboardCopyToLocalDevice
        - Action: FILE_DOWNLOAD
          Permission: !Ref FileDownloadPermission
        - Action: FILE_UPLOAD
          Permission: !Ref FileUploadPermission
        - Action: CLIPBOARD_COPY_FROM_LOCAL_DEVICE
          Permission: !Ref ClipboardCopyFromLocalDevice
        - Action: PRINTING_TO_LOCAL_DEVICE
          Permission: !Ref PrintingToLocalDevice

  # Create Association between fleet and stack
  4KAppStreamDemoStackFleetAssociation:
    Type: AWS::AppStream::StackFleetAssociation
    Properties:
      FleetName: !Ref 4KAppStreamFleet
      StackName: !Ref AppStreamStack

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5) Launch the AppStream URL using the AppStream Windows client&lt;/p&gt;
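&lt;p&gt;To generate the streaming URL itself, you can call the AppStream &lt;code&gt;CreateStreamingURL&lt;/code&gt; API; a minimal boto3 sketch, where the stack, fleet, and application names and the user ID are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

appstream = boto3.client("appstream")

# Returns a time-limited URL for an AppStream streaming session
response = appstream.create_streaming_url(
    StackName="4KAppStreamStack",
    FleetName="4KAppStreamFleet",
    UserId="demo-user",
    ApplicationId="4k-appstream-app",
    Validity=300,  # URL validity in seconds
)
appstream_url = response["StreamingURL"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;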

&lt;p&gt;You have to base64 encode the AppStream URL and prefix it with &lt;code&gt;amazonappstream:&lt;/code&gt; so the application can be launched from the browser using the AppStream client. You may use code similar to the following to convert your AppStream URL so that it launches the AppStream session through the AppStream client.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function stringToBase64WithPrefix(appstreamUrl: string): string {
    const base64Encoded=Buffer.from(appstreamUrl).toString('base64');
    return `amazonappstream:${base64Encoded}`
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
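&lt;p&gt;For example, passing a streaming URL of the form &lt;code&gt;https://appstream2.&amp;lt;region&amp;gt;.aws.amazon.com/authenticate?parameters=...&lt;/code&gt; returns a string like &lt;code&gt;amazonappstream:aHR0cHM6...&lt;/code&gt; (the base64 text here is illustrative), which the browser hands off to the installed AppStream client.&lt;/p&gt;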



&lt;p&gt;Once you launch your AppStream session using the AppStream URL, you should see the option in the AppStream menu to change the resolution to 4K.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--atGCR8Dc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k5jjm078m7u6ip9e6nyf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--atGCR8Dc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k5jjm078m7u6ip9e6nyf.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Seamlessly Integrate GitHub with AWS using OpenID Connect for GitHub Actions</title>
      <dc:creator>Chathra Serasinghe</dc:creator>
      <pubDate>Sat, 22 Apr 2023 13:47:16 +0000</pubDate>
      <link>https://dev.to/chathra222/how-to-seamlessly-integrate-github-with-aws-using-openid-connect-for-github-actions-ihm</link>
      <guid>https://dev.to/chathra222/how-to-seamlessly-integrate-github-with-aws-using-openid-connect-for-github-actions-ihm</guid>
      <description>&lt;p&gt;As more and more organizations adopt a DevOps approach to software development, seamless and secure integration between different tools and platforms is becoming increasingly important. In this blog post, I'll show you how to integrate GitHub with AWS using OpenID Connect for GitHub Actions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is OpenID Connect?
&lt;/h2&gt;

&lt;p&gt;OpenID Connect is an authentication protocol that allows users to authenticate themselves to an application by using a third-party identity provider, such as GitHub. OpenID Connect provides a standard way to exchange authentication and authorization data between different systems, making it a popular choice for integrating different platforms securely.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Integrate GitHub with AWS using OpenID Connect?
&lt;/h2&gt;

&lt;p&gt;Without OIDC, when GitHub Actions workflows need to access cloud providers to deploy or use their services, credentials must be stored as secrets in GitHub. This method means duplicating long-lived credentials in both the cloud provider and GitHub, which is not secure.&lt;/p&gt;

&lt;p&gt;With OIDC, we don't have to store any secrets.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Integrate GitHub with AWS using OpenID Connect for GitHub Actions?
&lt;/h2&gt;

&lt;p&gt;To integrate GitHub with AWS using OpenID Connect, you will need to follow these steps:&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1.Create an Identity Provider in AWS
&lt;/h4&gt;

&lt;p&gt;The first step is to create an Identity Provider in AWS. To do this, sign in to the AWS Management Console and navigate to the IAM (Identity and Access Management) service. From there, create a new identity provider, select OpenID Connect as the provider type, and supply the GitHub provider URL and audience shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6ZPcW-EH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8qq44tudr05dm9wa0hxk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6ZPcW-EH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8qq44tudr05dm9wa0hxk.png" alt="Image" width="800" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provider URL:&lt;/strong&gt; &lt;code&gt;https://token.actions.githubusercontent.com&lt;/code&gt;&lt;br&gt;
This is the GitHub OpenID Connect URL for authentication requests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audience:&lt;/strong&gt; &lt;code&gt;sts.amazonaws.com&lt;/code&gt;&lt;br&gt;
(The audience is also known as the client ID.)&lt;br&gt;
The audience is a value that identifies the application that is registered with an OpenID Connect provider.&lt;/p&gt;
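&lt;p&gt;If you want to script this step instead of using the console, here is a minimal boto3 sketch; the thumbprint below is a placeholder, so look up the current certificate thumbprint for the GitHub OIDC endpoint:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

iam = boto3.client("iam")

# Register GitHub's OIDC endpoint as an identity provider in this account
response = iam.create_open_id_connect_provider(
    Url="https://token.actions.githubusercontent.com",
    ClientIDList=["sts.amazonaws.com"],
    ThumbprintList=["0000000000000000000000000000000000000000"],  # placeholder
)
print(response["OpenIDConnectProviderArn"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;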
&lt;h4&gt;
  
  
  2.Assign a role
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TJNbYOE5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/au9s2dthsxua2hikkkgm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TJNbYOE5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/au9s2dthsxua2hikkkgm.png" alt="Image" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can create a new role or use an existing role, but make sure the trust relationship of that role is configured as follows.&lt;/p&gt;
&lt;h5&gt;
  
  
  Trust relationships of the role
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "&amp;lt;Arn of Identity provider&amp;gt;"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
                    "token.actions.githubusercontent.com:sub": "repo:&amp;lt;GitHub organization name&amp;gt;/&amp;lt;GitHub repo name&amp;gt;:ref:refs/heads/&amp;lt;branch name&amp;gt;"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Replace the placeholders with the correct values.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Arn of Identity Provider&lt;/code&gt; - ARN of the Identity Provider that you created in step 1.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GitHub organization name&lt;/code&gt; - your GitHub Organization name&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GitHub repo name&lt;/code&gt; - your GitHub Repository name&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;branch name&lt;/code&gt; - the branch that should trigger your GitHub Actions workflow&lt;/li&gt;
&lt;/ul&gt;
&lt;h5&gt;
  
  
  Permissions of the role
&lt;/h5&gt;

&lt;p&gt;Set the permissions according to your requirements.&lt;br&gt;
E.g., if your GitHub Actions workflow needs only S3 bucket access, make sure you grant only that permission (see the sketch below).&lt;/p&gt;
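&lt;p&gt;If you are scripting the role creation, here is a boto3 sketch that combines the trust policy above with a narrowly scoped S3 permission; the ARNs, repo, and names are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3

iam = boto3.client("iam")

# Trust policy from the previous section, with placeholder values filled in
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {"StringEquals": {
            "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
            "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:ref:refs/heads/dev",
        }},
    }],
}

iam.create_role(
    RoleName="github-actions-deploy",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Grant only the S3 access the workflow actually needs
iam.put_role_policy(
    RoleName="github-actions-deploy",
    PolicyName="s3-artifact-upload",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::test-bucket-dev/*",
        }],
    }),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;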
&lt;h4&gt;
  
  
  3.Configure GitHub Actions
&lt;/h4&gt;

&lt;p&gt;Next, you will need to configure GitHub Actions to use OpenID Connect for authentication. To do this, create a new GitHub Actions workflow file, grant it the &lt;code&gt;id-token: write&lt;/code&gt; permission, and specify the ARN of the IAM role that the workflow should assume.&lt;/p&gt;

&lt;p&gt;This is an example of a GitHub Actions workflow file &lt;code&gt;(.github/workflows/dev.yaml)&lt;/code&gt; which archives the repository and uploads it to an S3 bucket when code is pushed to the &lt;code&gt;dev&lt;/code&gt; branch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Dev Branch Build and Deploy

on:
  push:
    branches:
      - dev
env:
  BUCKET_NAME_PREFIX: "test-bucket"
  AWS_REGION: "ap-southeast-1"
  GITHUB_REF: "dev"

jobs:
  build:
    name: Build and Package
    runs-on: gh-runner
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v3
        with:
          path: "./${{ env.GITHUB_REF }}"
      - name: Extract branch name
        shell: bash
        run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
        id: extract_branch
      - name: Extract commit hash
        shell: bash
        run: echo "##[set-output name=commit_hash;]$(echo "$GITHUB_SHA")"
        id: extract_hash
      - name: configure aws credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: &amp;lt;role arn&amp;gt;
          role-session-name: &amp;lt;role session-name&amp;gt;
          aws-region: ${{ env.AWS_REGION }}
      # Copy build directory to S3
      - name: Copy build to S3
        run: |
          cd ${{ env.GITHUB_REF }} 
          pwd
          git status 
          git archive --format=zip --output=artifact.zip ${{ steps.extract_branch.outputs.branch }}          
          aws s3 cp ./artifact.zip s3://${{ env.BUCKET_NAME_PREFIX }}-${{ steps.extract_branch.outputs.branch }}/artifact.zip

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  4.Test the Integration
&lt;/h4&gt;

&lt;p&gt;Once you have configured GitHub Actions, you can test the integration by pushing a change to your GitHub repository. GitHub Actions should automatically trigger a build and deploy process in AWS, using the OpenID Connect token to authenticate the workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of using OIDC
&lt;/h2&gt;

&lt;p&gt;One of the primary benefits of using OIDC tokens is the elimination of the need for cloud secrets. Instead of duplicating your cloud credentials as long-lived GitHub secrets, the workflow requests a short-lived access token from the provider through OIDC. This eliminates the need to store secrets in your GitHub repository, thus reducing the risk of secrets being accidentally or intentionally exposed.&lt;/p&gt;

&lt;p&gt;Another major benefit is that OIDC tokens allow for the rotation of credentials. With OIDC, your cloud provider issues a short-lived access token that is only valid for a single job and then automatically expires. This ensures that credentials are rotated frequently, reducing the likelihood of misuse or abuse.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Achieving Sustainability Goals in AWS</title>
      <dc:creator>Chathra Serasinghe</dc:creator>
      <pubDate>Thu, 26 Jan 2023 00:50:41 +0000</pubDate>
      <link>https://dev.to/chathra222/achieving-sustainability-goals-in-aws-3bgh</link>
      <guid>https://dev.to/chathra222/achieving-sustainability-goals-in-aws-3bgh</guid>
      <description>&lt;p&gt;Climate change is the defining issue of our time. It's caused by the burning of fossil fuels, and its effects are being felt worldwide. It's a planetary problem—and it will be felt for many years to come.&lt;/p&gt;

&lt;p&gt;AWS has a large number of customers, including companies of all sizes in nearly every industry around the world,&lt;br&gt;
and it believes it can help accelerate the adoption of sustainable energy for our planet. In 2000, AWS became one of the first companies in the industry to adopt a comprehensive approach to environmental stewardship, setting goals and guidelines for energy efficiency and carbon neutrality at each step along its supply chain process. &lt;/p&gt;

&lt;p&gt;Today, AWS has taken many initiatives, including wind farm projects in Ireland and in US East (Ohio), which use wind power to drive cloud-based services from its data centers.&lt;br&gt;
AWS is on track to power its operations entirely with renewable energy by 2025, which will help businesses achieve their environmental sustainability goals. However, this alone is not sufficient.&lt;/p&gt;

&lt;p&gt;Shared responsibility model for environmental sustainability&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gst_BQZA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h5yvfo2ehr1w0qb4bxtz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gst_BQZA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h5yvfo2ehr1w0qb4bxtz.png" alt="Image" width="800" height="333"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Source: AWS documentation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;AWS has data centers around the world, and they run on electricity. That electrical energy can be produced from renewable sources like solar or wind power. Data centers are full of servers that must be cooled, which sometimes consumes water; operations also produce waste and require construction supplies. In essence, AWS is responsible for numerous areas within AWS and carefully examines how sustainable they are.&lt;/p&gt;

&lt;p&gt;On the other hand, customers are responsible for the sustainability of what they're doing in the cloud; AWS doesn't have control over it. Understanding the implications of the services used, measuring impacts over the whole workload lifecycle, and using design principles and best practices to minimize these impacts are all part of the discipline of building cloud workloads.&lt;/p&gt;

&lt;p&gt;AWS announced the Sustainability Pillar at re:Invent 2021 to help customers minimize the environmental impact of running cloud workloads. It is available in the AWS Well-Architected Tool, which contains design principles, operational guidance, and best practices for meeting sustainability goals.&lt;/p&gt;

&lt;p&gt;The Greenhouse Gas Protocol categorizes carbon emissions into three major scopes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scope 1: all direct emissions from the activities of an organization,
e.g., fuel combustion by data center backup generators.&lt;/li&gt;
&lt;li&gt;Scope 2: indirect emissions from electricity purchased and used to power data centers and other facilities,
e.g., emissions from commercial power generation.&lt;/li&gt;
&lt;li&gt;Scope 3: all other indirect emissions from activities of an organization from sources it doesn’t control (the customer's responsibility),
e.g., each workload deployed generates a fraction of the total AWS emissions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How can we optimize our workloads to reach our sustainability goals?&lt;/p&gt;

&lt;p&gt;To reduce the amount of physical hardware, electricity, and carbon emissions that our workload actually produces, we want to use the least amount of virtual hardware possible. We also need to make sure we use the most suitable technology.&lt;br&gt;
AWS is always improving its existing services and introducing new ones to meet the variety of goals customers want to achieve, including sustainability. Attending re:Invent, reading AWS documentation, and following blogs can help you stay up to date.&lt;/p&gt;

&lt;p&gt;There are three major areas we could examine when discussing workloads.&lt;br&gt;
Those are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; Computing &lt;/li&gt;
&lt;li&gt; Storage&lt;/li&gt;
&lt;li&gt; Networking.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First up, with compute services, we want to optimize the compute size, the instance size, the number of containers, and ultimately the number of instances actually serving your end users. The number of instances/containers should change over time based on usage: when you don't need a certain amount of compute power, containers, or instances, capacity should automatically scale down. In other words, the main objective is to lower the amount of compute required per transaction.&lt;/p&gt;

&lt;p&gt;Some of the Design Principles for Compute:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Right-sizing&lt;/li&gt;
&lt;li&gt;Eliminate idle resources&lt;/li&gt;
&lt;li&gt;Use ARM-based processors instead of x86 (e.g., AWS Graviton processors)&lt;/li&gt;
&lt;li&gt;Amazon EC2 Auto Scaling&lt;/li&gt;
&lt;li&gt;Containerize workloads to maximize server utilization&lt;/li&gt;
&lt;li&gt;Use serverless services (e.g., Fargate, Lambda)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are certain design principles you may use to store your data in a sustainable manner.&lt;/p&gt;

&lt;p&gt;Some of the Design Principles for Storage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use S3 lifecycle rules (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Use Amazon S3 Intelligent-Tiering&lt;/li&gt;
&lt;li&gt;Use columnar data formats and compression (columnar formats like Parquet and ORC, when applicable)&lt;/li&gt;
&lt;li&gt;Use Amazon Data Lifecycle Manager to delete old EBS snapshots and Amazon EBS-backed Amazon Machine Images (AMIs) automatically&lt;/li&gt;
&lt;li&gt;Turn on data deduplication for your Amazon FSx for Windows File Server&lt;/li&gt;
&lt;/ul&gt;
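&lt;p&gt;As a concrete illustration of the first principle, here is a minimal boto3 sketch of a lifecycle rule; the bucket name and day counts are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

s3 = boto3.client("s3")

# Move objects to Intelligent-Tiering after 30 days and expire
# noncurrent versions after a year (placeholder bucket name)
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "sustainability-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [
                    {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}
                ],
                "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
            }
        ]
    },
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;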

&lt;p&gt;When it comes to network optimization, it is primarily about optimizing the path data takes across a network and reducing the size of data transmitted.&lt;/p&gt;

&lt;p&gt;Some of the Design Principles for Networking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use a CDN to shorten the network path (e.g., CloudFront)&lt;/li&gt;
&lt;li&gt;Optimize the CloudFront cache hit ratio&lt;/li&gt;
&lt;li&gt;If your workload is deployed in only a single Region, choose a Region that is near the majority of your users&lt;/li&gt;
&lt;li&gt;If your users are spread over multiple Regions, set up copies of the data to reside in each Region
(e.g., RDS cross-Region read replicas and DynamoDB global tables)&lt;/li&gt;
&lt;li&gt;Serve compressed files (e.g., configure CloudFront to automatically compress objects)&lt;/li&gt;
&lt;li&gt;Use edge-optimized API endpoints for geographically distributed clients&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's crucial to keep in mind that optimizing is a journey, not a single task. Although it's a journey, businesses can progress much faster toward their sustainability goals by putting the design principles recommended by the AWS Well-Architected Framework into practice. Moreover, AWS has launched the AWS Customer Carbon Footprint Tool, which allows you to track carbon emissions from your workload and make proactive decisions to reduce them. It is also worthwhile to look at CloudWatch metrics and AWS Trusted Advisor to analyze and optimize your workloads.&lt;/p&gt;

&lt;p&gt;AWS commissioned 451 Research, a technology market research and advisory business, to undertake a study on the energy and carbon efficiency of enterprise data centers and server architecture, to assess the environmental benefits for enterprises shifting to its public cloud infrastructure.&lt;br&gt;
The Amazon Sustainability webpage states that "AWS can lower customers’ workload carbon footprints by nearly 80% compared to surveyed enterprise data centers, and up to 96% once AWS is powered with 100% renewable energy".&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VL4bVrLh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zu90zjbf2ffyqzl80hjw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VL4bVrLh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zu90zjbf2ffyqzl80hjw.png" alt="Image" width="800" height="245"&gt;&lt;/a&gt;&lt;br&gt;
You can always compare how much your carbon footprint has decreased after switching from on-premises data centers to AWS using the AWS Customer Carbon Footprint Tool. If you haven't considered sustainability yet, now is the time to do so for the sake of the planet and future generations. If you are ready, Versent can help you in this journey (Green Brick Road) by working with your ICT team to examine and assess your ICT estate, procedures, and capabilities and align them with sustainability best practices.&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://assets.aboutamazon.com/2b/73/4f5c2c884e1b8461b2f7fe4ea138/aws451researchapacjuly2021.pdf"&gt;https://assets.aboutamazon.com/2b/73/4f5c2c884e1b8461b2f7fe4ea138/aws451researchapacjuly2021.pdf&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://d1.awsstatic.com/institute/The%20carbon%20opportunity%20of%20moving%20to%20the%20cloud%20for%20APAC.pdf"&gt;https://d1.awsstatic.com/institute/The%20carbon%20opportunity%20of%20moving%20to%20the%20cloud%20for%20APAC.pdf&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://sustainability.aboutamazon.com/environment/the-cloud?energyType=true"&gt;https://sustainability.aboutamazon.com/environment/the-cloud?energyType=true&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://d39w7f4ix9f5s9.cloudfront.net/e3/79/42bf75c94c279c67d777f002051f/carbon-reduction-opportunity-of-moving-to-aws.pdf"&gt;https://d39w7f4ix9f5s9.cloudfront.net/e3/79/42bf75c94c279c67d777f002051f/carbon-reduction-opportunity-of-moving-to-aws.pdf&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.aboutamazon.eu/news/amazon-web-services/amazons-first-operational-wind-farm-in-ireland-delivers-clean-energy-to-the-grid"&gt;https://www.aboutamazon.eu/news/amazon-web-services/amazons-first-operational-wind-farm-in-ireland-delivers-clean-energy-to-the-grid&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Understanding AWS Lambda layers</title>
      <dc:creator>Chathra Serasinghe</dc:creator>
      <pubDate>Tue, 27 Sep 2022 00:55:25 +0000</pubDate>
      <link>https://dev.to/chathra222/understanding-aws-lambda-layers-54jn</link>
      <guid>https://dev.to/chathra222/understanding-aws-lambda-layers-54jn</guid>
      <description>&lt;h3&gt;
  
  
  What is a lambda layer?
&lt;/h3&gt;

&lt;p&gt;It is an &lt;strong&gt;archive&lt;/strong&gt;-type, &lt;strong&gt;durable&lt;/strong&gt; storage option for Lambda, typically used to store reusable code such as libraries, dependencies, and custom runtimes.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are the storage options supported by Lambda functions?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;lambda layers (extracts into /opt)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;/tmp&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;S3&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EFS&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What is the benefit of using Lambda layers?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;It is a best practice for sharing the same code libraries across multiple Lambda functions,
e.g., include the AWS SDK in a Lambda layer and reuse it&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dependencies can easily be updated&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It reduces code duplication&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Faster uploads of your Lambda code&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How to create a Lambda layer?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
Typically you include the reusable code/binaries in a Lambda layer so that they can be used later by multiple Lambda functions.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once you have decided which code/binaries to include in the layer, the first step is to create a .zip file of them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You need to adhere to the correct directory structure for the runtime language of the Lambda function.&lt;br&gt;
E.g., if the runtime language is Python, follow the directory structure &lt;code&gt;python/&amp;lt;your code or binaries&amp;gt;&lt;/code&gt; when creating the zip file.&lt;br&gt;
Refer to this table to understand the layer paths for each Lambda runtime: &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html#configuration-layers-path"&gt;configuration-layers-path&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Let's say you want boto3 and its dependencies as a Lambda layer. Open a terminal and run the following.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;pip3 install -t python boto3&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9d7aukp---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iq1kcd28waqe2oyq9lwr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9d7aukp---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iq1kcd28waqe2oyq9lwr.png" alt="Image" width="748" height="262"&gt;&lt;/a&gt;&lt;br&gt;
This command will download all the binaries to the target directory &lt;code&gt;python/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Then create the zip file: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;zip -r layer.zip .&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
After that, you can create the Lambda layer from the zip file as follows.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws lambda publish-layer-version --layer-name "AWSBoto3layer" --description "My boto3 layer" --zip-file fileb://layer.zip --compatible-runtimes python3.8&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5XjhSfWX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/poftfuyqy4m8ixxpsh22.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5XjhSfWX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/poftfuyqy4m8ixxpsh22.png" alt="Image" width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see the new Lambda layer created in the AWS console, and its version is 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4lihRVS5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7xtypgl3a37062cxjerp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4lihRVS5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7xtypgl3a37062cxjerp.png" alt="Image" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you update the same layer, it will create a new version; the version will then be 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K7m7LekY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n9z6klqnofvmo8dz781o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K7m7LekY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n9z6klqnofvmo8dz781o.png" alt="Image" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you can use the following CLI command to attach your Lambda layer to a Lambda function.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws lambda update-function-configuration --function-name &amp;lt;lambda function name&amp;gt; --layers &amp;lt;arn of layer&amp;gt;:&amp;lt;version of the layer&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Note: you have to make sure to link the correct version of the layer.&lt;/p&gt;
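&lt;p&gt;If you want to script this and always pick up the newest version, here is a boto3 sketch; the function name is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

lambda_client = boto3.client("lambda")

# Find the newest version of the layer
versions = lambda_client.list_layer_versions(LayerName="AWSBoto3layer")["LayerVersions"]
latest = max(versions, key=lambda v: v["Version"])

# Attach that version to the function (placeholder function name)
lambda_client.update_function_configuration(
    FunctionName="my-function",
    Layers=[latest["LayerVersionArn"]],
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;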

&lt;h3&gt;
  
  
  Where does an AWS Lambda function extract layers in its execution environment?
&lt;/h3&gt;

&lt;p&gt;When setting up the function's execution environment, Lambda extracts the layer contents into the &lt;code&gt;/opt&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2uwEP4sg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/je1pde11mpnhxi74esq7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2uwEP4sg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/je1pde11mpnhxi74esq7.png" alt="Image" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What are important facts you should know before using Lambda layers?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You can only add up to &lt;strong&gt;5 layers&lt;/strong&gt; to a Lambda function&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The layer should be in the &lt;strong&gt;same Region&lt;/strong&gt; as the Lambda function (but it can be in a different account)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lambda layers are &lt;strong&gt;immutable&lt;/strong&gt;. Therefore, when a new version of a layer is published, you need to deploy an update to your Lambda functions and explicitly reference the new version.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The maximum size of a Lambda layer is 50MB&lt;/strong&gt;. This makes sense because the purpose of this storage option is to store static code or binaries, and for most cases it is sufficient.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The unzipped package size of a Lambda function, including its layers, must not exceed 250MB&lt;/strong&gt;. This is a hard limit for a Lambda function, which means layers don't completely solve the sizing problem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;A layer is private to your AWS account by default&lt;/strong&gt;. But you can expose it to other accounts explicitly by adding a statement to the layer version's permissions policy (see the sketch after this list)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
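&lt;p&gt;For that last point, here is a boto3 sketch of granting another account access to a layer version; the account ID is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

lambda_client = boto3.client("lambda")

# Allow a specific account to use version 1 of the layer
lambda_client.add_layer_version_permission(
    LayerName="AWSBoto3layer",
    VersionNumber=1,
    StatementId="share-with-other-account",
    Action="lambda:GetLayerVersion",
    Principal="123456789012",  # placeholder account ID
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;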

&lt;h4&gt;
  
  
  References:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/blogs/compute/using-lambda-layers-to-simplify-your-development-process/#:%7E:text=Lambda%20layers%20provide%20a%20convenient,faster%20to%20deploy%20your%20code"&gt;https://aws.amazon.com/blogs/compute/using-lambda-layers-to-simplify-your-development-process/#:~:text=Lambda%20layers%20provide%20a%20convenient,faster%20to%20deploy%20your%20code&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html#configuration-layers-path"&gt;https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html#configuration-layers-path&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/compute/choosing-between-aws-lambda-data-storage-options-in-web-apps/"&gt;https://aws.amazon.com/blogs/compute/choosing-between-aws-lambda-data-storage-options-in-web-apps/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Targeted Sentiment Analysis in real-time using Amazon Comprehend</title>
      <dc:creator>Chathra Serasinghe</dc:creator>
      <pubDate>Fri, 23 Sep 2022 22:59:13 +0000</pubDate>
      <link>https://dev.to/chathra222/targeted-sentiment-analysis-in-real-time-using-amazon-comprehend-og6</link>
      <guid>https://dev.to/chathra222/targeted-sentiment-analysis-in-real-time-using-amazon-comprehend-og6</guid>
      <description>&lt;p&gt;On September 21 2022, AWS announced that Amazon Comprehend supports synchronous processing for targeted sentiments. In other words, Amazon Comprehend is now capable of extracting sentiments associated with entities in a document in real-time(synchronously).Earlier we were able to do this asynchronously only using a Comprehend analysis job.&lt;/p&gt;

&lt;p&gt;First, let's try to grasp relevant use cases.&lt;/p&gt;

&lt;h1&gt;
  
  
  Use cases
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Enabling accurate and scalable brand and competitor insights&lt;/li&gt;
&lt;li&gt;Live market research&lt;/li&gt;
&lt;li&gt;Improving brand experience&lt;/li&gt;
&lt;li&gt;Improving customer satisfaction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's take a real-world customer review to understand it further.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do you need targeted sentiment analysis?
&lt;/h2&gt;

&lt;h4&gt;
  
  
  A real world customer review for a hotel:
&lt;/h4&gt;

&lt;p&gt;"The hotel itself was beautiful and clean. I only give it 3 stars because paying $43 USD for parking is DISGUSTING!!!!!!! What a ripoff!!!!! As well, hotel cleaning staff don't clean the rooms anymore but yet the prices are still VERY high."&lt;/p&gt;

&lt;h4&gt;
  
  
  Sentiment analysis
&lt;/h4&gt;

&lt;p&gt;Sentiment analysis determines the overall sentiment of an input document, but doesn't provide further information about the sentiment of each entity (word) in the document.&lt;br&gt;
E.g., using sentiment analysis you can only find out whether customer feedback was positive, negative, neutral, or mixed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6CnGqB1s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/civpdbacvhwtlw05l3vf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6CnGqB1s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/civpdbacvhwtlw05l3vf.png" alt="Image" width="800" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this scenario, sentiment analysis suggests that the overall statement (document) sentiment is MIXED at a confidence level of 94%. However, if you want to improve your hotel by understanding the negative areas and taking immediate action on them, this information is not sufficient. &lt;/p&gt;

&lt;h4&gt;
  
  
  Targeted Sentiment Analysis
&lt;/h4&gt;

&lt;p&gt;In Targeted sentiment analysis, you can identify the emotions connected to particular entities in your input documents.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: an entity is a textual reference to the unique name of a real-world object such as people, places, and commercial items, and to precise references to measures such as dates and quantities.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you call targeted sentiment, it provides the following information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It identifies the entities in the documents.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Classification of the entity type for each entity mentioned.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can find the list of entity types here. &lt;a href="https://docs.aws.amazon.com/comprehend/latest/dg/how-targeted-sentiment.html#how-targeted-sentiment-entities"&gt;Entity types&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For each entity mention, the sentiment and a sentiment score are evaluated&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Groups of mentions that correspond to a single entity.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;See the outputs of the targeted sentiment analysis for the same example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aS9fGaNf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6tpxt424qxotskr925xh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aS9fGaNf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6tpxt424qxotskr925xh.png" alt="Image" width="800" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, you can definitely get insights into areas of improvement in the hotel's service. The staff and prices carry negative sentiment, so the hotel can take immediate action&lt;br&gt;
in those areas.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do you need it to be synchronous?
&lt;/h2&gt;

&lt;p&gt;You want to take action on a customer review immediately.&lt;br&gt;
When you call this new synchronous API, you get the results immediately, and they can be sent through any messaging channel (e.g., SMS, WhatsApp) or pushed to a real-time dashboard. You can therefore notify the relevant staff as soon as a review is received so they can rectify the issues, which makes this new API super helpful.&lt;/p&gt;

&lt;h3&gt;
  
  
  Programmatically accessing this new API
&lt;/h3&gt;

&lt;p&gt;I am going to use Python to demonstrate the usage of this new API, &lt;code&gt;detect_targeted_sentiment&lt;/code&gt;. Please ensure that you have the most recent boto3 version installed; otherwise it will not work, because this is a very new API.&lt;/p&gt;

&lt;h4&gt;
  
  
  Code
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import subprocess

session = boto3.Session()
comprehend_client = session.client(service_name='comprehend', region_name='us-east-2')
text="The hotel itself was beautiful and clean. I only give it 3 starts because paying $43 USD for parking is DISGUSTING!!!!!!! What a ripoff!!!!! As well, hotel cleaning staff don't clean the rooms anymore but yet the prices are still VERY high."
response = comprehend_client.detect_targeted_sentiment(
LanguageCode='en',
Text = text
)
print(response)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Output
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Rh8C8VoJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fwrpjd463jnter8vcbxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Rh8C8VoJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fwrpjd463jnter8vcbxh.png" alt="Image" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;
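&lt;p&gt;To pull just the entity-level sentiments out of this response, here is a short post-processing sketch; the field names follow the documented response shape of &lt;code&gt;detect_targeted_sentiment&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print each mention's text, entity type, and sentiment
for entity in response["Entities"]:
    for mention in entity["Mentions"]:
        print(
            mention["Text"],
            mention["Type"],
            mention["MentionSentiment"]["Sentiment"],
        )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;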

</description>
    </item>
    <item>
      <title>Well-architected EKS Cluster using EKS Blueprints</title>
      <dc:creator>Chathra Serasinghe</dc:creator>
      <pubDate>Thu, 22 Sep 2022 11:23:37 +0000</pubDate>
      <link>https://dev.to/chathra222/well-architected-eks-cluster-using-eks-blueprints-2og4</link>
      <guid>https://dev.to/chathra222/well-architected-eks-cluster-using-eks-blueprints-2og4</guid>
      <description>&lt;p&gt;There are various approaches you can follow to deploy Kubernetes clusters in the AWS environment. However, choosing the correct set of tools and configuring them correctly is always a tricky task. It is because the Kubernetes ecosystem is rapidly growing and the things you used a few months ago may have become obsolete now. As a result, correctly implementing and administering such complex clusters is becoming a nightmare.&lt;br&gt;
On the other hand, Customers always prefer to get things done quickly while yet adhering to best standards. Therefore AWS has introduced codified reference architectures called EKS blueprints, which helps you to create and manage well-architected EKS clusters with less effort and time. &lt;/p&gt;
&lt;h2&gt;
  
  
  Deploying EKS using EKS blueprints - Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuerq4ioq4mmpllirtpks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuerq4ioq4mmpllirtpks.png" alt="Image"&gt;&lt;/a&gt; image reference: &lt;a href="https://aws.amazon.com/blogs/containers/bootstrapping-clusters-with-eks-blueprints/" rel="noopener noreferrer"&gt;AWS blog&lt;br&gt;
&lt;/a&gt;&lt;br&gt;
You can think of &lt;strong&gt;addons&lt;/strong&gt; as something like modules if you are coming from a Terraform background. These addons can be added to the cluster to enhance its capabilities. You can easily grant access to the EKS cluster using &lt;strong&gt;Teams&lt;/strong&gt;.&lt;br&gt;
Basically, there are two types of &lt;strong&gt;Teams&lt;/strong&gt; you can define using EKS Blueprints by default.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ApplicationTeam&lt;/code&gt; ---&amp;gt; managing workloads&lt;br&gt;
 &lt;code&gt;PlatformTeam&lt;/code&gt; ----&amp;gt; administering the cluster&lt;/p&gt;
&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Let's get our hands dirty.&lt;br&gt;
I am using CDK for this demonstration. However, Terraform can also be used to build a Kubernetes cluster using EKS Blueprints.&lt;/p&gt;
&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;First, make sure that you have installed the following.&lt;br&gt;
1) Node.js&lt;br&gt;
&lt;code&gt;sudo apt install nodejs&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
2) CDK, latest or at least 2.37.1&lt;br&gt;
&lt;code&gt;npm install -g aws-cdk@2.37.1&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Create a typescript CDK project
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;cdk init app --language typescript&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjnunvj1m7md0rpq34zoj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjnunvj1m7md0rpq34zoj.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Install the eks-blueprints NPM package
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;npm i @aws-quickstart/eks-blueprints&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1kvk8gtbudkiy0q1nlm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1kvk8gtbudkiy0q1nlm.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then set your AWS credentials. Refer to the following AWS docs if you don't know how to do it.&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Replace your main cdk file with the following code.
&lt;/h3&gt;

&lt;p&gt;Typically, it is located in the project's bin folder (i.e., &lt;code&gt;bin/&amp;lt;root_project_directory&amp;gt;.ts&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from 'aws-cdk-lib';
import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();
const account = '&amp;lt;your account no&amp;gt;';
const region = '&amp;lt;region&amp;gt;';

const addOns: Array&amp;lt;blueprints.ClusterAddOn&amp;gt; = [
    new blueprints.addons.VpcCniAddOn(),
    new blueprints.addons.CoreDnsAddOn(),
    new blueprints.addons.KubeProxyAddOn(),
    new blueprints.addons.CertManagerAddOn(),
    //Adding CalicoOperatorAddOn support Network policies
    new blueprints.addons.CalicoOperatorAddOn(),
    //Adding MetricsServerAddOn to support metrics collection
    new blueprints.addons.MetricsServerAddOn(),
    //ClusterAutoScalerAddOn will add required resources to support ClusterAutoScalling
    new blueprints.addons.ClusterAutoScalerAddOn(),
    new blueprints.addons.AwsLoadBalancerControllerAddOn(),
];

blueprints.EksBlueprint.builder()
    .account(account)
    .region(region)
    .addOns(...addOns)
    .build(app, 'eks-blueprint');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  CDK bootstrap and deploy
&lt;/h3&gt;

&lt;p&gt;Run the &lt;code&gt;cdk bootstrap&lt;/code&gt; command to bootstrap the environment. If you are running CDK for the first time, this command is required to create the CDK bootstrap environment in your AWS account.&lt;/p&gt;
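&lt;p&gt;If your default AWS profile does not already pin down the target environment, you can bootstrap the account and region explicitly; this is standard CDK syntax, with the same placeholders as in the code above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cdk bootstrap aws://&amp;lt;your account no&amp;gt;/&amp;lt;region&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;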

&lt;p&gt;Run the &lt;code&gt;cdk deploy&lt;/code&gt; command to deploy the EKS cluster and its add-ons.&lt;br&gt;
This process will take roughly 20-30 minutes.&lt;/p&gt;
&lt;h3&gt;
  
  
  Accessing the Kubernetes Cluster
&lt;/h3&gt;

&lt;p&gt;Once the deployment succeeds, you will see output similar to the screenshot below, which contains an update-kubeconfig command. Running that command updates the &lt;code&gt;~/.kube/config&lt;/code&gt; file and enables us to access the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwz9nb4w2xiysev6rlee.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwz9nb4w2xiysev6rlee.PNG" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the update-kubeconfig command and run it in your terminal.&lt;/p&gt;
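&lt;p&gt;For reference, the command has roughly this shape; the exact cluster name, region, and role ARN come from your own output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks update-kubeconfig --name eks-blueprint --region &amp;lt;region&amp;gt; --role-arn &amp;lt;role arn from the output&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;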

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawqu5clflhdez0nvtkqr.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawqu5clflhdez0nvtkqr.PNG" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Verifying the Kubernetes resources
&lt;/h3&gt;

&lt;p&gt;You can run the following verification steps to see what has been installed in the Kubernetes cluster (the corresponding commands are listed after the screenshots). &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;List all namespaces&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fle6g8pbrsd91250o1a63.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fle6g8pbrsd91250o1a63.PNG" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;List all pods in &lt;code&gt;kube-system&lt;/code&gt; namespace
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21nmnd38l145c6lhohio.PNG" alt="Image"&gt;
&lt;/li&gt;
&lt;/ul&gt;
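&lt;p&gt;The screenshots above correspond to standard kubectl commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get namespaces
kubectl get pods -n kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;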

&lt;p&gt;You can also see the running EKS worker instance on the EC2 page of the AWS Management Console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftaup10cawcenn84czyu3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftaup10cawcenn84czyu3.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Verifying the provisioned AWS resources
&lt;/h3&gt;

&lt;p&gt;Note: Even though CDK handles the majority of the hard work for you, it is always worthwhile to review what resources CDK has provisioned. On the CloudFormation page, you can see that the CDK code has created one main stack and two nested stacks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpg9ev3dcokd8yj2rfh8u.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpg9ev3dcokd8yj2rfh8u.PNG" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may wish to execute the following command to see what AWS resources have been provisioned through each stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudformation describe-stack-resources --stack-name &amp;lt;stack name&amp;gt; --region &amp;lt;region&amp;gt; --query 'StackResources[*].ResourceType' --no-cli-pager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Customize your cluster
&lt;/h3&gt;

&lt;p&gt;By default, the EKS blueprint creates a managed node group for the EKS cluster. If you wish to customize the default configuration of the managed node group, you can do so via MngClusterProvider.&lt;br&gt;
For example, in this scenario I want to increase the desired node count to 2. You can do that by defining the properties for MngClusterProvider as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const props: blueprints.MngClusterProviderProps = {
    version: eks.KubernetesVersion.V1_21,
    minSize: 1,
    maxSize: 3,
    //increasing worker node count to 2
    desiredSize: 2,
}
const clusterProvider = new blueprints.MngClusterProvider(props);

blueprints.EksBlueprint.builder()
    .account(account)
    .region(region)
    //Adding the cluster provider to the EksBlueprint builder.
    .clusterProvider(clusterProvider)
    .addOns(...addOns)
    .build(app, 'eks-blueprint');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you deploy the changes, you will notice that the old instance has been terminated and a new managed node group with two instances has been created.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6n51zkku630ma4jlppfy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6n51zkku630ma4jlppfy.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is not the end: EKS Blueprints give you a great head start in building your EKS cluster faster, along with a plethora of add-ons and customizations to enrich it in various ways.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS CodePipeline - All possible integrations</title>
      <dc:creator>Chathra Serasinghe</dc:creator>
      <pubDate>Sun, 11 Sep 2022 00:17:29 +0000</pubDate>
      <link>https://dev.to/chathra222/aws-codepipeline-all-possible-integrations-p66</link>
      <guid>https://dev.to/chathra222/aws-codepipeline-all-possible-integrations-p66</guid>
      <description>&lt;p&gt;AWS CodePipeline is the prime AWS native CI/CD orchestration tool, which enables you to define your software release process as a workflow. But there are other tools available on the market, such as Jenkins, Gitlab, etc., which also help to create CI/CD pipelines.&lt;br&gt;
Then why do we use CodePipeline?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is an AWS-managed service where AWS does the heavy lifting for us: we do not need to manage or maintain any servers, which is a great relief.&lt;/li&gt;
&lt;li&gt;It is also highly available and reliable.&lt;/li&gt;
&lt;li&gt;If you plan to deploy your applications and infrastructure on AWS (e.g., S3, ECS, CloudFormation), it is much easier to do with CodePipeline because it has many seamless integrations with other AWS services, so you do not need to install any plugins (as you would with Jenkins).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's discuss the possible integrations available in the AWS &lt;strong&gt;Code Pipeline&lt;/strong&gt; Service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zTvm_NGT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5md8c7qcvf6lt0nx1qxz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zTvm_NGT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5md8c7qcvf6lt0nx1qxz.png" alt="Summary" width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Typically, any CI/CD pipeline should have a source code repository. If you are using &lt;strong&gt;CodeCommit, Bitbucket, Github, or S3&lt;/strong&gt; to store your source code, then you should be able to use CodePipeline to orchestrate your CI/CD pipeline.&lt;/p&gt;

&lt;p&gt;However, if you are dealing with container-based applications, you may need to integrate with container registries. You can easily integrate &lt;strong&gt;ECR&lt;/strong&gt; with &lt;strong&gt;CodePipeline&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You have two ways of building the code: the AWS-native way, using &lt;strong&gt;CodeBuild&lt;/strong&gt;, or integrating with &lt;strong&gt;Jenkins&lt;/strong&gt; if you are required to. My preference is &lt;strong&gt;CodeBuild&lt;/strong&gt; because&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It is a fully managed service (no servers to manage) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Continuously scaling&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pay only for what you use&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can easily integrate KMS for encrypting artifacts &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But &lt;strong&gt;Jenkins&lt;/strong&gt; is a very mature product with many plugins; besides, there could be situations (rarely) where you must use Jenkins to get your work done.&lt;/p&gt;

&lt;p&gt;Both &lt;strong&gt;CodeBuild&lt;/strong&gt; and &lt;strong&gt;Jenkins&lt;/strong&gt; can be used, not just for building but also for testing.&lt;/p&gt;

&lt;p&gt;Also, there are more interesting integrations available:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Device Farm&lt;/strong&gt;: a mobile app testing service that supports various application types and test frameworks (e.g., Appium).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;BlazeMeter&lt;/strong&gt; - application performance testing (extends JMeter capabilities)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Micro Focus StormRunner Load&lt;/strong&gt; - another performance testing tool&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ghost Inspector&lt;/strong&gt; - used for web UI testing &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Runscope API monitoring&lt;/strong&gt; - API testing and monitoring tool&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Snyk&lt;/strong&gt; - scans your application for open-source security flaws and enables you to deliver secure applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Moreover, when it comes to deployment, you have miscellaneous actions/integrations with other services available in AWS CodePipeline.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CodeDeploy&lt;/strong&gt; - one of the "go-to" services if you want to deploy your application on an on-premises instance, EC2, Lambda, or ECS, with a variety of essential deployment options (e.g., blue/green, canary).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CloudFormation&lt;/strong&gt; - This integration is extremely useful if your CI/CD pipeline requires interaction with the CloudFormation service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AppConfig&lt;/strong&gt; - a better way of deploying your application configuration, with different deployment strategies &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;OpsWorks&lt;/strong&gt; - If you want to deploy your code on OpsWorks Stack, this option is available through Code Pipeline.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service Catalog&lt;/strong&gt;- if you want to provision Service Catalog products through a CI/CD pipeline, yes! it is possible.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Elastic Beanstalk&lt;/strong&gt; - CodePipeline can deploy your web application onto an Elastic Beanstalk environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Likewise, it has direct integrations with ECS and S3 as well.&lt;/p&gt;

&lt;p&gt;If you need to integrate a product or service that has no direct integration with AWS CodePipeline, you still have your AWS &lt;strong&gt;Lambda&lt;/strong&gt; friend to make it happen. &lt;/p&gt;
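&lt;p&gt;As a rough sketch of how that looks, here is a Lambda invoke action inside a pipeline stage in CloudFormation; the stage name, function name, and user parameters are placeholders, not a prescription:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Name: InvokeCustomIntegration        # a stage in AWS::CodePipeline::Pipeline Stages
  Actions:
    - Name: CallLambda
      ActionTypeId:
        Category: Invoke
        Owner: AWS
        Provider: Lambda
        Version: '1'
      Configuration:
        FunctionName: my-integration-function                # placeholder function
        UserParameters: '{"target":"third-party-service"}'   # passed to the function
      RunOrder: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;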

&lt;p&gt;Finally, if your pipeline demands even more sophisticated workflows, you can easily delegate from CodePipeline to &lt;strong&gt;Step Functions&lt;/strong&gt;, which has great capabilities such as automatic error handling, a wider range of AWS service integrations, handling of long-running tasks, and much more to make your life easier. ;-)&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Establish secure cloud foundations in AWS using Control Tower</title>
      <dc:creator>Chathra Serasinghe</dc:creator>
      <pubDate>Thu, 02 Jun 2022 13:02:52 +0000</pubDate>
      <link>https://dev.to/chathra222/establish-secure-cloud-foundations-in-aws-using-control-tower-9fa</link>
      <guid>https://dev.to/chathra222/establish-secure-cloud-foundations-in-aws-using-control-tower-9fa</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6Ir_xQVZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c95t6tr72d1qotjmbrhv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Ir_xQVZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c95t6tr72d1qotjmbrhv.png" alt="Image" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we discuss Control Tower, there are a few other topics to cover. You may have heard about the multi-account strategy. This is one of the industry-accepted best practices, for various reasons. In any organization you typically have many departments, teams, products, environments, and so on. Segregating resources into multiple accounts logically (by department, team, product, or environment) can also be handy when it comes to billing. &lt;br&gt;
However, if you have a large number of accounts to manage, you may encounter different problems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;How can we ensure that all accounts meet security and compliance requirements?  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How can we make the account creation process more safe and precise?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can we reduce repetitive tasks when setting up accounts?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How can we monitor these accounts?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How do we get to know if something goes wrong or if something unusual occurs in a multi-account environment?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where landing zones and Control Tower come into the picture. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Landing Zone?
&lt;/h2&gt;

&lt;p&gt;It is a preconfigured, secure, scalable multi-account environment based on best practice blueprints. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Blueprints are well-architected design patterns that are used to set up the landing zone.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why do we need AWS Control Tower?
&lt;/h2&gt;

&lt;p&gt;AWS Control Tower helps automate the landing zone to set up a baseline environment. In other words, it helps build a secure environment in which teams can provision development and production accounts in alignment with AWS recommendations and best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the main AWS services used by AWS Control Tower?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS Organizations&lt;/li&gt;
&lt;li&gt;AWS Cloudformation&lt;/li&gt;
&lt;li&gt;AWS Service Catalog&lt;/li&gt;
&lt;li&gt;AWS SSO&lt;/li&gt;
&lt;li&gt;AWS IAM&lt;/li&gt;
&lt;li&gt;AWS Config&lt;/li&gt;
&lt;li&gt;AWS CloudTrail&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What are guardrails and why are they needed?
&lt;/h2&gt;

&lt;p&gt;Guardrails are high-level rules that provide ongoing governance for your overall AWS environment through preventive and detective controls. Typically, Control Tower creates and enables some guardrails (mandatory guardrails) during the initial setup. Under the hood, these are made from SCPs or AWS Config rules. There are three guardrail types: &lt;code&gt;mandatory&lt;/code&gt;, &lt;code&gt;strongly recommended&lt;/code&gt;, and &lt;code&gt;elective&lt;/code&gt;. They can be enabled per OU.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nc-za-6W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4hoh5va85jexi8t1u1xc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nc-za-6W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4hoh5va85jexi8t1u1xc.png" alt="Image" width="800" height="838"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;e.g., if you enable the detective guardrail &lt;code&gt;Detect Whether Public Read Access to Amazon S3 Buckets is Allowed&lt;/code&gt; on an OU, you can determine whether a user would be permitted to have read access to any S3 buckets in any account under that OU. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can find the details of available guardrails here: &lt;a href="https://docs.aws.amazon.com/controltower/latest/userguide/guardrails-reference.html"&gt;guardrails reference&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Account Factory in Control Tower?
&lt;/h2&gt;

&lt;p&gt;It is a set of pre-approved account configurations that helps standardize the provisioning of new accounts. It includes baseline network configurations (VPC configuration options). Account Factory is automatically published to AWS Service Catalog as a product.&lt;/p&gt;

&lt;h2&gt;
  
  
  What kind of AWS account is required to set up Control Tower?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;A fresh AWS master account&lt;/strong&gt; that is both the master payer (which pays the charges for all member accounts) and the organization master.&lt;/p&gt;

&lt;h2&gt;
  
  
  What accounts does Control Tower create by default?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Log archiving account&lt;/li&gt;
&lt;li&gt;Audit account&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Why&lt;/strong&gt;? According to the Well-Architected multi-account guidance, any organization should have separate accounts for logging and auditing. The idea of creating separate accounts is isolation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How long does the Control Tower initial setup process take?
&lt;/h2&gt;

&lt;p&gt;Around one hour.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the recommended actions after the initial setup of Control Tower?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Add OUs to organize accounts and projects&lt;/li&gt;
&lt;li&gt;Configure Account Factory&lt;/li&gt;
&lt;li&gt;Enable more guardrails - not all guardrails are enabled by default, so you may need to enable more as required. You can enable them from the AWS Control Tower page.&lt;/li&gt;
&lt;li&gt;Review the user identity store and SSO for your users across accounts&lt;/li&gt;
&lt;li&gt;Review the settings of the shared accounts that AWS Control Tower set up for you&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How can you ensure your accounts are compliant with respect to the enabled guardrails?
&lt;/h2&gt;

&lt;p&gt;When you click Accounts or Organizational units, you will see a list of accounts and OUs. Check the Compliance status field under each OU or account. As shown below, compliant resources are marked "Compliant" in green.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u0-lHFgp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/axbg4d9awepziqlaze7x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u0-lHFgp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/axbg4d9awepziqlaze7x.png" alt="Image" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Do you need to pay for AWS Control Tower Service?
&lt;/h2&gt;

&lt;p&gt;No. It is a free service. However, you will have to pay for the AWS resources that Control Tower generates.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Setting up a S3 File Gateway on a EC2 Windows Server</title>
      <dc:creator>Chathra Serasinghe</dc:creator>
      <pubDate>Wed, 06 Apr 2022 22:12:25 +0000</pubDate>
      <link>https://dev.to/chathra222/setting-up-a-s3-file-gateway-on-a-ec2-windows-server-432h</link>
      <guid>https://dev.to/chathra222/setting-up-a-s3-file-gateway-on-a-ec2-windows-server-432h</guid>
      <description>&lt;p&gt;S3 File Gateway can be used to Store and access objects in Amazon S3 from NFS or SMB file data with local caching.&lt;/p&gt;

&lt;p&gt;Typically, the architecture might look as shown below when you connect to your S3 File Gateway from on-premises over NFS or SMB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y3e3x8eqotnjq2o4mnz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y3e3x8eqotnjq2o4mnz.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But in this blog, I am using an EC2-hosted Storage Gateway appliance for demo purposes. So, in this case, the Storage Gateway appliance will be placed in the AWS Cloud, not on-premises as shown above. If you have your applications on AWS and want to access files stored in S3 through NFS or SMB, this is the ideal setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up an EC2-based S3 File Gateway
&lt;/h3&gt;

&lt;p&gt;First, go to Storage Gateway service in the AWS management console and click create gateway. Then you will see a page as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh46f4bd1naev7ikppfyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh46f4bd1naev7ikppfyd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select &lt;code&gt;Amazon S3 File Gateway&lt;/code&gt; and click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flux930adirvgq2uxcqkq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flux930adirvgq2uxcqkq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before selecting &lt;code&gt;Amazon EC2&lt;/code&gt; and clicking Next, you can use this CloudFormation template to create the Storage Gateway instance and security groups, which avoids a bit of ClickOps and saves time. :-) Otherwise, you can click Launch instance to create an EC2 instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: 2010-09-09

Parameters:
  ImageId:
    Type: 'AWS::SSM::Parameter::Value&amp;lt;AWS::EC2::Image::Id&amp;gt;'
    Default: /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-Base
  InstanceType:
    Type: String
    Default: m6i.8xlarge
    Description: "Select m6i.[xlarge, 2xlarge, 4xlarge, 8xlarge]. Default is m6i.8xlarge for production use."
    AllowedValues:
      - m6i.xlarge   # 4-vCPU, 16G-RAM, 10Gbps-NET
      - m6i.2xlarge  # 8-vCPU, 32G-RAM, 10Gbps-NET
      - m6i.4xlarge  # 16-vCPU,64G-RAM, 10Gbps-NET
      - m6i.8xlarge  # 32-vCPU,128-RAM, 10Gbps-NET
  VpcId:
    Description: VPC IDs
    Type: AWS::EC2::VPC::Id
  SubnetId:
    Description: Subnet ID
    Type: AWS::EC2::Subnet::Id
  KeyName:
    Description: The SSH keypair
    Type: AWS::EC2::KeyPair::KeyName
  VolumeType:
    Type: String
    Default: io2
  RootVolumeType:
    Type: String
    Default: gp3
  VolumeDeleteOnTermination:
    Default: True
    Type: String
  VolumeSize:
    Description: "SGW cache disk size minimum 150 GiB"
    Type: Number
    Default: 150
  RootVolumeIops:
    Description: "Recommended at least 3000 IOPS or more."
    Type: Number
    Default: 3000
  VolumeIops:
    Description: "Recommended at least 3000 IOPS or more for cache disks."
    Type: Number
    Default: 3000
Resources:
  SGWSG01:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      VpcId: !Ref VpcId
      GroupDescription: "EC2 File Storage Gateway Security Group"
      GroupName: !Sub 'ec2-sgw-sg-${AWS::StackName}'
      Tags:
      - Key: "Name"
        Value: SGWSG01
      SecurityGroupEgress:
      - IpProtocol: "-1"
        CidrIp: 0.0.0.0/0
      SecurityGroupIngress:
      - IpProtocol: "-1"
        CidrIp: 10.0.0.0/8

  SGW01:
    Type: 'AWS::EC2::Instance'
    DeletionPolicy: Delete
    Properties:
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-sgw-01'
      PropagateTagsToVolumeOnCreation: True
      KeyName: !Ref KeyName
      ImageId: !Ref ImageId
      InstanceType: !Ref InstanceType
      SecurityGroupIds:
        - !Ref SGWSG01
      SubnetId: !Ref SubnetId
      BlockDeviceMappings:
        - DeviceName: "/dev/xvda"
          Ebs:
            VolumeType: !Ref RootVolumeType
            DeleteOnTermination: !Ref VolumeDeleteOnTermination
            Iops: !Ref RootVolumeIops
        - DeviceName: "/dev/sda1"
          Ebs:
            VolumeType: !Ref VolumeType
            DeleteOnTermination: !Ref VolumeDeleteOnTermination
            VolumeSize: !Ref VolumeSize
            Iops: !Ref VolumeIops
    DependsOn: SGWSG01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So just click &lt;code&gt;Amazon EC2&lt;/code&gt; and click Next.&lt;/p&gt;

&lt;p&gt;Then you will be asked to select the Storage Gateway service endpoint. I am choosing &lt;code&gt;VPC&lt;/code&gt;, but you can choose &lt;code&gt;Public&lt;/code&gt; based on your requirements.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7lcsxjob8lpelf5fcl3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7lcsxjob8lpelf5fcl3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In order to select VPC endpoint, make sure you have created a VPC interface endpoint as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm24ege1bli7ix0qz1phx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm24ege1bli7ix0qz1phx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you will need to connect to the storage gateway to get the activation code. If you choose the IP address option, make sure your default browser can access the storage gateway console. This step will automatically redirect to the storage gateway console, and the generated link will look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http://&amp;lt;Storage GW EC2 instance IP&amp;gt;/?gatewayType=FILE_S3&amp;amp;activationRegion=&amp;lt;region&amp;gt;&amp;amp;no_redirect&amp;amp;vpcEndpoint=&amp;lt;vpc endpoint dns name&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;If that link is not accessible from your current browser/network, copy it and paste it somewhere your storage gateway is reachable. It will then show you the activation code, which you can copy into the given box.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhavwp8qvt75maff6cwn1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhavwp8qvt75maff6cwn1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you can enter the gateway name as you prefer and activate your gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4aqmnaxm4oc1fq8gyxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4aqmnaxm4oc1fq8gyxg.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then specify the cache disk size you want.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It is a best practice to allocate at least 150 GiB of cache storage.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhscc4gohvzeggffk776y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhscc4gohvzeggffk776y.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, there is an optional step. I prefer to have CloudWatch logs, but it is entirely up to you whether to enable or disable logging. Enabling it is useful for auditing purposes.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzot7i3t2xnuw9bs6m38z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzot7i3t2xnuw9bs6m38z.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the gateway, you can create NFS or SMB file shares, up to 10 per gateway.&lt;/p&gt;
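&lt;p&gt;For example, an NFS share can be created with the AWS CLI along the following lines; the ARNs, the IAM role, and the client CIDR are placeholders for your own values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws storagegateway create-nfs-file-share \
    --client-token my-idempotency-token \
    --gateway-arn arn:aws:storagegateway:&amp;lt;region&amp;gt;:&amp;lt;account&amp;gt;:gateway/&amp;lt;gateway-id&amp;gt; \
    --location-arn arn:aws:s3:::&amp;lt;your-bucket&amp;gt; \
    --role-arn arn:aws:iam::&amp;lt;account&amp;gt;:role/&amp;lt;sgw-s3-access-role&amp;gt; \
    --client-list 10.0.0.0/8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;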

</description>
    </item>
    <item>
      <title>S3 Batch Operations-Copy large amount objects from one bucket to another</title>
      <dc:creator>Chathra Serasinghe</dc:creator>
      <pubDate>Fri, 04 Mar 2022 08:28:48 +0000</pubDate>
      <link>https://dev.to/chathra222/s3-batch-operations-move-large-amount-between-s3-buckets-4nap</link>
      <guid>https://dev.to/chathra222/s3-batch-operations-move-large-amount-between-s3-buckets-4nap</guid>
      <description>&lt;h2&gt;
  
  
  What can you do with S3 Batch Operations?
&lt;/h2&gt;

&lt;p&gt;You can perform a single selected operation (copy, replace tags, etc.) on a large number of objects in an S3 bucket using a single request.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do S3 Batch Operations work?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1.) Choose the objects on which you want to perform the operation&lt;/strong&gt;&lt;br&gt;
    You can provide them using:&lt;br&gt;
    - Inventory reports&lt;br&gt;
    - A CSV file&lt;br&gt;
    - An S3 Replication configuration (which can create a manifest)&lt;br&gt;
       (Note: in this case the only operation will be &lt;em&gt;Replicate&lt;/em&gt;) &lt;br&gt;
&lt;strong&gt;2.) Select an operation&lt;/strong&gt;&lt;br&gt;
    - Copy&lt;br&gt;
    - Invoke Lambda function&lt;br&gt;
    - Replace all object tags&lt;br&gt;
    - Delete all object tags&lt;br&gt;
    - Replace access control list (ACL)&lt;br&gt;
    - Restore archived objects&lt;br&gt;
    - Enable Object Lock &lt;br&gt;
    - Enable legal hold (same protection as a retention period, but with no expiration date)&lt;br&gt;
    - Replicate (this option is only available when you choose objects using an S3 Replication configuration based manifest)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.) Run, View Progress and get reports&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once you run the job, you will be able to see its progress, and reports will be generated in the S3 bucket you provided for reporting. You can ask S3 Batch Operations to generate reports for all objects or only for failed objects (my preference in most cases).&lt;/p&gt;
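&lt;p&gt;If you prefer the CLI over the console, you can check a job's progress there as well; the account ID, job ID, and region are your own values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3control describe-job --account-id &amp;lt;account-id&amp;gt; --job-id &amp;lt;job-id&amp;gt; --region &amp;lt;region&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;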

&lt;h2&gt;
  
  
  What options can you use to copy objects from one bucket to another?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Amazon S3 batch operations&lt;/li&gt;
&lt;li&gt;AWS SDK&lt;/li&gt;
&lt;li&gt;cross-Region replication or same-Region replication&lt;/li&gt;
&lt;li&gt;S3DistCp with Amazon EMR&lt;/li&gt;
&lt;li&gt;AWS DataSync&lt;/li&gt;
&lt;li&gt;Run parallel uploads using the AWS Command Line Interface (AWS CLI)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Too many options, What to choose?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AWS CLI&lt;/strong&gt; ---&amp;gt; Not efficient when transferring a large number of objects. &lt;/p&gt;

&lt;p&gt;According to AWS, &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;a custom application using an AWS SDK might be more efficient &lt;br&gt;
   at performing a transfer at the scale of hundreds of millions &lt;br&gt;
   of objects than the AWS CLI&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3DistCp with Amazon EMR&lt;/strong&gt; ---&amp;gt; Too expensive&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS DataSync&lt;/strong&gt; ---&amp;gt; Still expensive, and if you have files with special characters you may encounter some issues (I have faced one: invalid UTF-8 characters in a file name).&lt;br&gt;
Refer to the link below:&lt;br&gt;
&lt;a href="https://forums.aws.amazon.com/thread.jspa?threadID=337210"&gt;https://forums.aws.amazon.com/thread.jspa?threadID=337210&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-Region replication or same-Region replication&lt;/strong&gt; ---&amp;gt;&lt;br&gt;
It replicates new objects and changes to existing objects. If you need to transfer existing objects, it's not ideal.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS SDK&lt;/strong&gt; ---&amp;gt; Extremely powerful when uploading larger objects, but you may need to design the application to scale.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Batch Operations&lt;/strong&gt; ---&amp;gt; Extremely powerful for performing a task on many objects with a single request. However, if you use the copy operation of S3 Batch Operations alone, there are some limitations, such as the maximum size of objects that can be copied (up to 5 GB).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How about combining &lt;em&gt;S3 Batch Operations + SDK (Invoke Lambda)&lt;/em&gt;?
&lt;/h2&gt;

&lt;p&gt;Yes. This is a great choice (at least to me). Lambda gives you the flexibility to handle things your own way (customizations) and the opportunity to use the powerful SDK, which can copy files larger than 5 GB. S3 Batch Operations then adds convenience on top.&lt;/p&gt;

&lt;p&gt;In simple terms, you are going to create an S3 Batch Operations job that uses the &lt;em&gt;Invoke Lambda function&lt;/em&gt; operation.&lt;/p&gt;

&lt;p&gt;What do you need to create?&lt;br&gt;
1) A role for Lambda&lt;br&gt;
2) A role for S3 Batch Operations&lt;br&gt;
3) A Lambda function (using Python Boto3; a sketch is shown below)&lt;/p&gt;
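&lt;p&gt;As a minimal sketch of such a function (not the exact code from my repository; the destination bucket name is a placeholder), the handler below copies each object named in the manifest and reports the result back to S3 Batch Operations. Boto3's managed &lt;code&gt;copy()&lt;/code&gt; switches to multipart copy automatically, which is what lifts the 5 GB limit:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
from urllib.parse import unquote_plus

s3 = boto3.client('s3')
DEST_BUCKET = 'my-destination-bucket'  # placeholder: your destination bucket

def lambda_handler(event, context):
    # S3 Batch Operations invokes the function once per manifest entry
    task = event['tasks'][0]
    src_bucket = task['s3BucketArn'].split(':::')[-1]
    src_key = unquote_plus(task['s3Key'])  # manifest keys are URL-encoded

    try:
        # Managed transfer: multipart copy kicks in automatically for large objects
        s3.copy({'Bucket': src_bucket, 'Key': src_key}, DEST_BUCKET, src_key)
        result_code, result_string = 'Succeeded', 'Copied'
    except Exception as err:
        result_code, result_string = 'PermanentFailure', str(err)

    # Response shape expected by S3 Batch Operations
    return {
        'invocationSchemaVersion': event['invocationSchemaVersion'],
        'treatMissingKeysAs': 'PermanentFailure',
        'invocationId': event['invocationId'],
        'results': [{
            'taskId': task['taskId'],
            'resultCode': result_code,
            'resultString': result_string,
        }],
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;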

&lt;p&gt;&lt;strong&gt;Pre-requisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Source bucket &lt;/li&gt;
&lt;li&gt;Destination bucket&lt;/li&gt;
&lt;li&gt;Manifest file and a bucket to place it in.
If you are using a CSV file, note that object keys must be URL-encoded.
&lt;/li&gt;
&lt;li&gt;Bucket(s) for manifest and S3 batch operation completion reports&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can use my solution to generate CSV manifest files and create S3 Batch Operations jobs.&lt;/p&gt;

&lt;p&gt;Please clone my Git repository and refer to the README.md file for detailed steps: &lt;a href="https://github.com/chathra222/s3batchoperationsrunner"&gt;code&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: This code was motivated by the blog below, as well as my own experience with S3 Batch Operations.&lt;/p&gt;

&lt;p&gt;References:&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/storage/copying-objects-greater-than-5-gb-with-amazon-s3-batch-operations/"&gt;https://aws.amazon.com/blogs/storage/copying-objects-greater-than-5-gb-with-amazon-s3-batch-operations/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Auto Scaling nodes and pods in EKS</title>
      <dc:creator>Chathra Serasinghe</dc:creator>
      <pubDate>Sun, 07 Nov 2021 09:55:58 +0000</pubDate>
      <link>https://dev.to/chathra222/auto-scaling-nodes-and-pods-in-eks-alk</link>
      <guid>https://dev.to/chathra222/auto-scaling-nodes-and-pods-in-eks-alk</guid>
      <description>&lt;h2&gt;
  
  
  What are the options you have to autoscale Kubernetes?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Cluster Autoscaler ---&amp;gt; Scales nodes&lt;/li&gt;
&lt;li&gt;HPA (Horizontal Pod Autoscaler) ---&amp;gt; Scales your Deployment/ReplicaSet up or down based on CPU utilization&lt;/li&gt;
&lt;li&gt;VPA (Vertical Pod Autoscaler) ---&amp;gt; Automatically adjusts the CPU and memory reservations for your pods&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Kubernetes Cluster Autoscaler?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;It adjusts the size of a Kubernetes cluster (scaling nodes up and down) to meet current needs.&lt;/li&gt;
&lt;li&gt;Supported by the major cloud platforms&lt;/li&gt;
&lt;li&gt;Cluster Autoscaler typically runs as a Deployment in your cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How does Kubernetes Cluster Autoscaler work?
&lt;/h2&gt;

&lt;p&gt;Cluster Autoscaler checks the status of nodes and pods on a regular basis and takes action based on node usage or pod scheduling status. &lt;br&gt;
When Cluster Autoscaler finds pending pods in a cluster, it adds nodes until the pending pods can be scheduled or the cluster exceeds its maximum node limit. &lt;br&gt;
If node utilization is low, Cluster Autoscaler removes excess nodes, and the pods are moved to other nodes.&lt;br&gt;
So this is not based on CPU or memory utilization.&lt;/p&gt;
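&lt;p&gt;For example, once the demo below is deployed, you can watch this behaviour by scheduling more replicas than the current nodes can hold and observing new nodes join (the deployment name here is arbitrary):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create deployment autoscaler-demo --image=nginx
kubectl scale deployment autoscaler-demo --replicas=20
kubectl get nodes -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;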
&lt;h2&gt;
  
  
  What should you have before deploying Cluster Autoscaler in a Kubernetes cluster?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;An IAM OIDC provider for your cluster.&lt;/p&gt;
&lt;h4&gt;
  
  
  Why?
&lt;/h4&gt;

&lt;p&gt;Cluster Autoscaler requires AWS permissions to scale nodes up or down. These permissions are granted through IAM roles for service accounts (IRSA). To support IRSA, your cluster needs an OIDC URL. (The IAM roles for service accounts feature is available on Amazon EKS versions 1.14 and later.)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Cluster Autoscaler requires the following tags on your &lt;br&gt;
Auto Scaling groups so that they can be &lt;strong&gt;auto-discovered&lt;/strong&gt;.&lt;br&gt;
&lt;code&gt;k8s.io/cluster-autoscaler/enabled=true&lt;/code&gt;&lt;br&gt;
&lt;code&gt;k8s.io/cluster-autoscaler/&amp;lt;cluster-name&amp;gt;=owned&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Demo
&lt;/h1&gt;

&lt;p&gt;Let's create an EKS cluster with cluster autoscaling, the Terraform way:&lt;/p&gt;

&lt;p&gt;1) First, create the EKS cluster using Terraform.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
locals {
  name            = "eks-scalable-cluster"
  cluster_version = "1.20"
  region          = "ap-southeast-1"
}

###############
# EKS Module
###############

module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = local.name
  cluster_version = local.cluster_version

  vpc_id  = module.vpc.vpc_id
  subnets = module.vpc.private_subnets

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true

  enable_irsa = true

  worker_groups = [
    {
      name                 = "worker-group-1"
      instance_type        = "t3.medium"
      asg_desired_capacity = 1
      asg_max_size         = 4
      #Cluster autoscaler Auto-Discovery Setup
      #https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#auto-discovery-setup
      tags = [
        {
          "key"                 = "k8s.io/cluster-autoscaler/enabled"
          "propagate_at_launch" = "false"
          "value"               = "true"
        },
        {
          "key"                 = "k8s.io/cluster-autoscaler/${local.name}"
          "propagate_at_launch" = "false"
          "value"               = "owned"
        }
      ]
    }
  ]
  tags = {
    clustername = local.name
  }
}


data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

data "aws_availability_zones" "available" {
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) Create an IAM role that can be assumed by trusted resources via OpenID Connect federated users (Cluster Autoscaler will use these permissions to access AWS services such as Auto Scaling and EC2).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_caller_identity" "current" {}

data "aws_region" "current" {}

locals {
  k8s_service_account_namespace = "kube-system"
  k8s_service_account_name      = "cluster-autoscaler-aws"
}


module "iam_assumable_role_admin" {
  #Creates a single IAM role which can be assumed by trusted resources using OpenID Connect Federated Users.
  source  = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
  version = "~&amp;gt; 4.0"

  create_role                   = true
  role_name                     = "cluster-autoscaler"
  provider_url                  = replace(module.eks.cluster_oidc_issuer_url, "https://", "")
  role_policy_arns              = [aws_iam_policy.cluster_autoscaler.arn]
  oidc_fully_qualified_subjects = ["system:serviceaccount:${local.k8s_service_account_namespace}:${local.k8s_service_account_name}"]
}

resource "aws_iam_policy" "cluster_autoscaler" {
  name_prefix = "cluster-autoscaler"
  description = "EKS cluster-autoscaler policy for cluster ${module.eks.cluster_id}"
  policy      = data.aws_iam_policy_document.cluster_autoscaler.json
}

data "aws_iam_policy_document" "cluster_autoscaler" {
  statement {
    sid    = "clusterAutoscalerAll"
    effect = "Allow"

    actions = [
      "autoscaling:DescribeAutoScalingGroups",
      "autoscaling:DescribeAutoScalingInstances",
      "autoscaling:DescribeLaunchConfigurations",
      "autoscaling:DescribeTags",
      "ec2:DescribeLaunchTemplateVersions",
    ]

    resources = ["*"]
  }

  statement {
    sid    = "clusterAutoscalerOwn"
    effect = "Allow"

    actions = [
      "autoscaling:SetDesiredCapacity",
      "autoscaling:TerminateInstanceInAutoScalingGroup",
      "autoscaling:UpdateAutoScalingGroup",
    ]

    resources = ["*"]

    condition {
      test     = "StringEquals"
      variable = "autoscaling:ResourceTag/k8s.io/cluster-autoscaler/${module.eks.cluster_id}"
      values   = ["owned"]
    }

    condition {
      test     = "StringEquals"
      variable = "autoscaling:ResourceTag/k8s.io/cluster-autoscaler/enabled"
      values   = ["true"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) Install cluster-autoscaler using its Helm chart.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "helm_release" "cluster-autoscaler" {
  depends_on = [
    module.eks
  ]

  name             = "cluster-autoscaler"
  namespace        = local.k8s_service_account_namespace
  repository       = "https://kubernetes.github.io/autoscaler"
  chart            = "cluster-autoscaler"
  version          = "9.10.7"
  create_namespace = false

  set {
    name  = "awsRegion"
    value = data.aws_region.current.name
  }
  set {
    name  = "rbac.serviceAccount.name"
    value = local.k8s_service_account_name
  }
  set {
    name  = "rbac.serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = module.iam_assumable_role_admin.iam_role_arn
    type  = "string"
  }
  set {
    name  = "autoDiscovery.clusterName"
    value = local.name
  }
  set {
    name  = "autoDiscovery.enabled"
    value = "true"
  }
  set {
    name  = "rbac.create"
    value = "true"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: &lt;br&gt;
Make sure your public and private subnets are properly tagged to enable automatic subnet discovery, so the Kubernetes Cloud Controller Manager (cloud-controller-manager) and the AWS Load Balancer Controller (aws-load-balancer-controller) can identify which subnets to use when provisioning an ELB for a LoadBalancer-type Service. If you are creating the VPC and subnets from scratch, you may use vpc.tf; otherwise, tag your subnets accordingly.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  public_subnet_tags = {
    "kubernetes.io/cluster/${local.name}" = "shared"
    "kubernetes.io/role/elb"              = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.name}" = "shared"
    "kubernetes.io/role/internal-elb"     = "1"
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find my code at &lt;a href="https://github.com/chathra222/tf-eks-autoscaling" rel="noopener noreferrer"&gt;https://github.com/chathra222/tf-eks-autoscaling&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Deploy all the resources using Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once applied, you can also check it in the AWS Management Console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazwf2stm1zuduq9gi0w1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazwf2stm1zuduq9gi0w1.png" alt="Image test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's discover what Kubernetes resources have been provisioned
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deploy -n kube-system
NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
cluster-autoscaler-aws-cluster-autoscaler   1/1     1            1           8h
coredns                                     2/2     2            2           8h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, there is a deployment called &lt;code&gt;cluster-autoscaler-aws-cluster-autoscaler&lt;/code&gt; in the &lt;code&gt;kube-system&lt;/code&gt; namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deploy -n kube-system cluster-autoscaler-aws-cluster-autoscaler -o yaml|grep -i serviceAccountName
      serviceAccountName: cluster-autoscaler-aws
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's investigate the service account &lt;code&gt;cluster-autoscaler-aws&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe sa cluster-autoscaler-aws
Name:                cluster-autoscaler-aws
Namespace:           kube-system
Labels:              app.kubernetes.io/instance=cluster-autoscaler
                     app.kubernetes.io/managed-by=Helm
                     app.kubernetes.io/name=aws-cluster-autoscaler
                     helm.sh/chart=cluster-autoscaler-9.10.7
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::272435851616:role/cluster-autoscaler
                     meta.helm.sh/release-name: cluster-autoscaler
                     meta.helm.sh/release-namespace: kube-system
Image pull secrets:  &amp;lt;none&amp;gt;
Mountable secrets:   cluster-autoscaler-aws-token-x7ds6
Tokens:              cluster-autoscaler-aws-token-x7ds6
Events:              &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You may notice the annotation &lt;code&gt;eks.amazonaws.com/role-arn: arn:aws:iam::272435851616:role/cluster-autoscaler&lt;/code&gt;, which says that this service account can assume the role &lt;code&gt;arn:aws:iam::272435851616:role/cluster-autoscaler&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let's examine the &lt;code&gt;cluster-autoscaler&lt;/code&gt; deployment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deploy -o yaml cluster-autoscaler-aws-cluster-autoscaler
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: cluster-autoscaler
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2021-11-07T01:10:29Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: cluster-autoscaler
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: aws-cluster-autoscaler
    helm.sh/chart: cluster-autoscaler-9.10.7
  name: cluster-autoscaler-aws-cluster-autoscaler
  namespace: kube-system
  resourceVersion: "1292"
  uid: 9f0f7f3f-adfd-422f-a007-7c1aa20deb4e
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: cluster-autoscaler
      app.kubernetes.io/name: aws-cluster-autoscaler
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: cluster-autoscaler
        app.kubernetes.io/name: aws-cluster-autoscaler
    spec:
      containers:
      - command:
        - ./cluster-autoscaler
        - --cloud-provider=aws
        - --namespace=kube-system
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/eks-scalable-cluster
        - --logtostderr=true
        - --stderrthreshold=info
        - --v=4
        env:
        - name: AWS_REGION
          value: ap-southeast-1
        image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health-check
            port: 8085
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: aws-cluster-autoscaler
        ports:
        - containerPort: 8085
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: cluster-autoscaler-aws
      serviceAccountName: cluster-autoscaler-aws
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-11-07T01:11:42Z"
    lastUpdateTime: "2021-11-07T01:11:42Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-11-07T01:10:29Z"
    lastUpdateTime: "2021-11-07T01:11:42Z"
    message: ReplicaSet "cluster-autoscaler-aws-cluster-autoscaler-74977bcc47" has
      successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are various parameters for &lt;code&gt;cluster-autoscaler&lt;/code&gt;. You may refer to &lt;a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca" rel="noopener noreferrer"&gt;https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca&lt;/a&gt; to customize it to your needs.&lt;/p&gt;

&lt;p&gt;This link is really good if you want to understand Cluster Autoscaler further:&lt;br&gt;
&lt;a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md" rel="noopener noreferrer"&gt;https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's save costs by using these autoscaling options wisely. :-)&lt;/p&gt;

</description>
    </item>
    <item>
      <title>EKS Fargate with Nginx Ingress Controller</title>
      <dc:creator>Chathra Serasinghe</dc:creator>
      <pubDate>Fri, 05 Nov 2021 21:29:23 +0000</pubDate>
      <link>https://dev.to/chathra222/eks-fargate-with-ingress-controllers-3h3e</link>
      <guid>https://dev.to/chathra222/eks-fargate-with-ingress-controllers-3h3e</guid>
      <description>&lt;h2&gt;
  
  
  Why Fargate?
&lt;/h2&gt;

&lt;p&gt;Fargate removes the requirement to set up and maintain EC2 instances for your Kubernetes applications. When your pods start, Fargate automatically allocates compute resources to run them on demand.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to use?
&lt;/h2&gt;

&lt;p&gt;If your workload/traffic patterns are irregular and unpredictable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why would I choose the NGINX ingress controller over the Application Load Balancer (ALB) ingress controller?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;With the NGINX Ingress controller:

&lt;ul&gt;
&lt;li&gt;you can have multiple ingress objects for multiple environments or namespaces behind the same Network Load Balancer &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;With the ALB ingress controller:

&lt;ul&gt;
&lt;li&gt;each ingress object requires a new load balancer.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why can't the NGINX ingress controller run on a Fargate-only cluster?
&lt;/h2&gt;

&lt;p&gt;The NGINX ingress controller needs privilege escalation, which is not allowed on Fargate. You will get the following error when you try to deploy it:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Pod not supported on Fargate: invalid SecurityContext fields: AllowPrivilegeEscalation&lt;/code&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Why?
&lt;/h1&gt;

&lt;h2&gt;
  
  
  There are currently a few limitations of EKS Fargate that you should be aware of:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;There is a maximum of 4 vCPU and 30 GB memory per pod.&lt;/li&gt;
&lt;li&gt;Currently there is no support for stateful workloads that require persistent volumes or file systems.&lt;/li&gt;
&lt;li&gt;You cannot run Daemonsets, Privileged pods, or pods that use HostNetwork or HostPort.&lt;/li&gt;
&lt;li&gt;The only load balancer you can use is an Application Load Balancer.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  So how can you still run your workload on Fargate while using the Nginx ingress controller?
&lt;/h2&gt;

&lt;p&gt;You can run the &lt;em&gt;Nginx ingress controller on EKS managed nodes&lt;/em&gt; while &lt;em&gt;your workloads run on Fargate nodes&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Fargate Profile?
&lt;/h2&gt;

&lt;p&gt;Before you can schedule pods on Fargate in your cluster, you must define at least one Fargate profile that specifies which pods use Fargate when launched.&lt;br&gt;
The Fargate profile allows an administrator to declare which pods run on Fargate.&lt;br&gt;
This declaration is done through the profile’s selectors. Each profile can have up to five selectors that contain a namespace and optional labels.&lt;br&gt;
If a pod matches multiple Fargate profiles, Amazon EKS picks one of the matches at random. In that case, you can specify which profile a pod should use by adding the following Kubernetes label to the pod specification: &lt;code&gt;eks.amazonaws.com/fargate-profile: &amp;lt;profile-name&amp;gt;&lt;/code&gt;&lt;/p&gt;
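
&lt;p&gt;A minimal sketch of pinning a pod to a specific profile (the pod name is hypothetical; &lt;code&gt;default&lt;/code&gt; is the profile name used later in the demo):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
  labels:
    # Pin this pod to the Fargate profile named "default"
    eks.amazonaws.com/fargate-profile: default
spec:
  containers:
  - image: frjaraur/non-root-nginx
    name: nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;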

&lt;h3&gt;
  
  
  What a Fargate profile should look like:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A profile can have a maximum of five selectors.&lt;/li&gt;
&lt;li&gt;Each selector must be associated with exactly one namespace.&lt;/li&gt;
&lt;li&gt;You can also specify labels for a namespace (optional).&lt;/li&gt;
&lt;li&gt;You need a pod execution role.&lt;/li&gt;
&lt;li&gt;You must specify subnet IDs (private subnets only). A sketch follows this list.&lt;/li&gt;
&lt;/ul&gt;
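
&lt;p&gt;As a sketch, this is roughly how the &lt;code&gt;default&lt;/code&gt; profile used in the demo below could be declared with an eksctl &lt;code&gt;ClusterConfig&lt;/code&gt;; the cluster name and region are placeholders, and eksctl creates the pod execution role and picks the private subnets for you:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster           # placeholder
  region: ap-southeast-1
fargateProfiles:
- name: default
  selectors:
  # Pods in the "default" namespace carrying this label run on Fargate
  - namespace: default
    labels:
      WorkerType: fargate

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;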

&lt;h1&gt;
  
  
  Demo:
&lt;/h1&gt;

&lt;p&gt;1) Deploy EKS Fargate Cluster with a managed node&lt;/p&gt;

&lt;p&gt;You can use the code in this repository to launch your EKS Fargate cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

https://github.com/chathra222/eks-fargate-example


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;2)  Use the AWS CLI update-kubeconfig command to create or update your kubeconfig for your cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

aws eks --region &amp;lt;region-code&amp;gt; update-kubeconfig --name &amp;lt;cluster_name&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Get the nodes. You will notice that there are two Fargate nodes and one managed node.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl get no
NAME                                                      STATUS   ROLES    AGE    VERSION
fargate-ip-172-16-1-180.ap-southeast-1.compute.internal   Ready    &amp;lt;none&amp;gt;   142m   v1.20.7-eks-135321
fargate-ip-172-16-1-81.ap-southeast-1.compute.internal    Ready    &amp;lt;none&amp;gt;   126m   v1.20.7-eks-135321
ip-172-16-1-192.ap-southeast-1.compute.internal           Ready    &amp;lt;none&amp;gt;   10m    v1.20.10-eks-3bcdcd


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3) Install the Nginx Ingress Controller in the EKS Fargate cluster&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/aws/deploy.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You will notice that the Nginx ingress controller is deployed on the managed node. This is because its &lt;code&gt;ingress-nginx&lt;/code&gt; namespace is not matched by any Fargate profile selector, so its pods fall back to the managed node.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

 kubectl get po -o wide -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE    IP             NODE                                              NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-7fp69        0/1     Completed   0          136m   172.16.1.28    ip-172-16-1-192.ap-southeast-1.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
ingress-nginx-admission-patch-br5qg         0/1     Completed   1          136m   172.16.1.179   ip-172-16-1-192.ap-southeast-1.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
ingress-nginx-controller-5699dc4f77-x9z4j   1/1     Running     0          15m    172.16.1.196   ip-172-16-1-192.ap-southeast-1.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In step 1, I created a Fargate cluster with a Fargate profile called &lt;code&gt;default&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj44tfvbhcxbtz7kkhc9d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj44tfvbhcxbtz7kkhc9d.png" alt="Image test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's create a pod called &lt;code&gt;test&lt;/code&gt; with the label &lt;code&gt;WorkerType=fargate&lt;/code&gt; in the &lt;code&gt;default&lt;/code&gt; namespace.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;test.yaml&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: v1
kind: Pod
metadata:
  labels:
    WorkerType: fargate
  name: test
spec:
  containers:
  - image: frjaraur/non-root-nginx
    name: test


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f test.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You will notice that it has been scheduled on a Fargate node.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl get po -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP             NODE                                                      NOMINATED NODE   READINESS GATES
test   1/1     Running   0          6m25s   172.16.3.209   fargate-ip-172-16-3-209.ap-southeast-1.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;But if you run a pod that doesn't match the Fargate profile's selectors, it will not run on Fargate nodes.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;nomatchpod.yaml&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nomatchpod
  name: nomatchpod
spec:
  containers:
  - image: nginx
    name: nginx


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f nomatchpod.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Notice that &lt;code&gt;nomatchpod&lt;/code&gt; didn't run on a Fargate node because it didn't match the criteria defined in the Fargate profile; it was scheduled on the managed node instead.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

 kubectl get po -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP             NODE                                                      NOMINATED NODE   READINESS GATES
nomatchpod   1/1     Running   0          23s   172.16.1.41    ip-172-16-1-192.ap-southeast-1.compute.internal           &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
test         1/1     Running   0          15m   172.16.3.209   fargate-ip-172-16-3-209.ap-southeast-1.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's expose the &lt;code&gt;test&lt;/code&gt; pod as a service called &lt;code&gt;testsvc&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl expose po test --name=testsvc --port=80
service/testsvc exposed


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then create an ingress resource, &lt;code&gt;ingress.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx # make sure to add this
spec:
  rules:
    - http:
        paths:
          - path: /test
            pathType: Prefix
            backend:
              service:
                name: testsvc
                port:
                  number: 80




&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
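
&lt;p&gt;A side note: on ingress-nginx v1.x the &lt;code&gt;kubernetes.io/ingress.class&lt;/code&gt; annotation is deprecated in favor of the &lt;code&gt;spec.ingressClassName&lt;/code&gt; field, so an equivalent sketch of the same ingress would be:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx   # replaces the deprecated annotation
  rules:
  - http:
      paths:
      - path: /test
        pathType: Prefix
        backend:
          service:
            name: testsvc
            port:
              number: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;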
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f ingress.yaml
ingress.networking.k8s.io/minimal-ingress created


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Check the ingress to get the load balancer hostname:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
kubectl get ingress
NAME              CLASS    HOSTS   ADDRESS                                                                              PORTS   AGE
minimal-ingress   &amp;lt;none&amp;gt;   *       a4c206981b7d14678bf6be5911d8223a-2122040efbe82462.elb.ap-southeast-1.amazonaws.com   80      39m


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can now access the service using &lt;a href="http://nlb_hostname/test" rel="noopener noreferrer"&gt;http://nlb_hostname/test&lt;/a&gt;, replacing &lt;code&gt;nlb_hostname&lt;/code&gt; with the ADDRESS value from the ingress.&lt;/p&gt;
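
&lt;p&gt;For example, from a terminal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Substitute the ADDRESS value from "kubectl get ingress"
curl http://a4c206981b7d14678bf6be5911d8223a-2122040efbe82462.elb.ap-southeast-1.amazonaws.com/test

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;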

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7ij61whjr5rfzlxc2u5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7ij61whjr5rfzlxc2u5.png" alt="Image done"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is how you can use the Nginx Ingress Controller in an EKS Fargate cluster.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
