<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Andrii Shykhov</title>
    <description>The latest articles on DEV Community by Andrii Shykhov (@andrii-shykhov).</description>
    <link>https://dev.to/andrii-shykhov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1149312%2F779934a3-1eee-4207-9343-155e445d36ee.jpg</url>
      <title>DEV Community: Andrii Shykhov</title>
      <link>https://dev.to/andrii-shykhov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/andrii-shykhov"/>
    <language>en</language>
    <item>
      <title>Static Website on S3 with CloudFront and Route 53</title>
      <dc:creator>Andrii Shykhov</dc:creator>
      <pubDate>Thu, 13 Mar 2025 21:02:08 +0000</pubDate>
      <link>https://dev.to/andrii-shykhov/static-website-on-s3-with-cloudfront-and-route-53-2n6l</link>
      <guid>https://dev.to/andrii-shykhov/static-website-on-s3-with-cloudfront-and-route-53-2n6l</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbenidd642m57z4ncfl5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbenidd642m57z4ncfl5.png" alt="Image from Freepik" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;This project demonstrates how AWS services like S3, CloudFront, and Route 53 can be used to host a static website. CloudFormation automates the infrastructure provisioning. The example site has a simple structure with &lt;code&gt;index.html&lt;/code&gt; and &lt;code&gt;error.html&lt;/code&gt; files.&lt;/p&gt;

&lt;h2&gt;Cost Estimation&lt;/h2&gt;

&lt;p&gt;The cost of this solution depends on usage, but approximate monthly expenses include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Storage&lt;/strong&gt;: $0.023/GB.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CloudFront Data Transfer&lt;/strong&gt;: $0.085/GB for North America and Europe.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Route 53 Domain Registration&lt;/strong&gt;: ~$12 per year (varies by domain TLD).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Route 53 Hosted Zone&lt;/strong&gt;: $0.50 per month for the first 25 hosted zones.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CloudFront Requests&lt;/strong&gt;: $0.0075 per 10,000 HTTP/HTTPS requests.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a small static site with low traffic, the cost can be as low as a few dollars per month.&lt;/p&gt;
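
&lt;p&gt;As a rough worked example using the rates above (a hypothetical 1 GB site serving 50,000 requests and 5 GB of transfer per month):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    S3 storage:           1 GB   x $0.023/GB       = $0.02
    CloudFront transfer:  5 GB   x $0.085/GB       = $0.43
    CloudFront requests:  50,000 x $0.0075/10,000  = $0.04
    Route 53 hosted zone:                            $0.50
    Total (plus ~$1/month amortised domain fee)    = ~$0.99/month
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;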

&lt;h2&gt;Advantages of CloudFront and Route 53 Over Direct S3 Access&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Better Performance&lt;/strong&gt;: CloudFront caches content in edge locations globally, reducing latency for users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;: S3 bucket access can be restricted to CloudFront, preventing direct public access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom Domain&lt;/strong&gt;: Route 53 allows seamless domain management instead of using an S3 bucket URL.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HTTPS Support&lt;/strong&gt;: CloudFront provides free SSL/TLS certificates via ACM, securing website traffic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduced Costs&lt;/strong&gt;: Data transfer from S3 to CloudFront is free, so serving cached content through CloudFront can cost less than direct S3 egress.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error Handling&lt;/strong&gt;: Custom error pages can be displayed instead of raw S3 errors.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Project Structure with Infrastructure Schema and Site Example&lt;/h2&gt;

&lt;p&gt;Here is the configuration in the &lt;code&gt;infrastructure/root.yaml&lt;/code&gt; CloudFormation template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    AWSTemplateFormatVersion: '2010-09-09'
    Description: CFN template for S3 Static Website with CloudFront and Route53

    Parameters:
      HostedZoneName:
        Type: String
        Default: ''
      HostedZoneId:
        Type: String
        Default: ''
      AcmCertificateArn:
        Type: String
        Default: ''

    Resources:
      StaticWebsiteBucket:
        Type: 'AWS::S3::Bucket'
        Properties:
          BucketName: !Sub 'static-website-${AWS::AccountId}'
          WebsiteConfiguration:
            IndexDocument: index.html
            ErrorDocument: error.html

      BucketPolicy:
        Type: 'AWS::S3::BucketPolicy'
        Properties:
          Bucket: !Ref StaticWebsiteBucket
          PolicyDocument:
            Statement:
              - Action: 's3:GetObject'
                Effect: Allow
                Resource: !Sub 'arn:${AWS::Partition}:s3:::${StaticWebsiteBucket}/*'
                Principal:
                  Service: 'cloudfront.amazonaws.com'
                # Restrict reads to this distribution only (OAC best practice)
                Condition:
                  StringEquals:
                    AWS:SourceArn: !Sub 'arn:${AWS::Partition}:cloudfront::${AWS::AccountId}:distribution/${CloudFrontDistribution}'

      OriginAccessControl:
        Type: 'AWS::CloudFront::OriginAccessControl'
        Properties:
          OriginAccessControlConfig:
            Name: 'OAC-StaticWebsite'
            OriginAccessControlOriginType: 's3'
            SigningBehavior: 'always'
            SigningProtocol: 'sigv4'

      CloudFrontDistribution:
        Type: 'AWS::CloudFront::Distribution'
        Properties:
          DistributionConfig:
            Enabled: true
            DefaultRootObject: index.html
            Aliases:
              - !Ref HostedZoneName
            Origins:
              - Id: S3Origin
                DomainName: !GetAtt StaticWebsiteBucket.RegionalDomainName
                OriginAccessControlId: !Ref OriginAccessControl
                S3OriginConfig:
                  OriginAccessIdentity: ''
            DefaultCacheBehavior:
              TargetOriginId: S3Origin
              ViewerProtocolPolicy: 'redirect-to-https'
              AllowedMethods: ['GET', 'HEAD']
              CachedMethods: ['GET', 'HEAD']
              ForwardedValues:
                QueryString: false
                Cookies:
                  Forward: none
            CustomErrorResponses:
              - ErrorCode: 403
                ResponsePagePath: '/error.html'
                ResponseCode: 200
              - ErrorCode: 404
                ResponsePagePath: '/error.html'
                ResponseCode: 200
            ViewerCertificate:
              AcmCertificateArn: !Ref AcmCertificateArn
              SslSupportMethod: 'sni-only'
            PriceClass: PriceClass_100

      Route53RecordSet:
        Type: 'AWS::Route53::RecordSet'
        Properties:
          HostedZoneId: !Ref HostedZoneId
          Name: !Ref HostedZoneName
          Type: 'A'
          AliasTarget:
            DNSName: !GetAtt CloudFrontDistribution.DomainName
            HostedZoneId: 'Z2FDTNDATAQYW2' # the hosted zone ID applicable only for CloudFront

    Outputs:
      CloudFrontURL:
        Description: 'CloudFront Distribution URL'
        Value: !Sub 'https://${CloudFrontDistribution.DomainName}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Infrastructure schema&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62wuyop2yrzyvx04izak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62wuyop2yrzyvx04izak.png" alt="Infrastructure schema" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Site example&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhvx13dcz3ttjnq9mqa4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhvx13dcz3ttjnq9mqa4.png" alt="Site example" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Site error handling&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2c3mawqesxpa7wqlxzq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2c3mawqesxpa7wqlxzq.png" alt="Site error handling" width="488" height="195"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;Ensure the following prerequisites are in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with sufficient permissions to create and manage resources.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; installed on the local machine.&lt;/li&gt;
&lt;li&gt;A registered domain in Route 53.&lt;/li&gt;
&lt;/ul&gt;
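
&lt;p&gt;A quick sanity check that the AWS CLI is configured against the expected account and that the domain's hosted zone exists (assumes the default CLI profile):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws sts get-caller-identity

    aws route53 list-hosted-zones --query 'HostedZones[].Name' --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
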
&lt;h2&gt;Deployment&lt;/h2&gt;

&lt;p&gt;1. Clone the repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    git clone git@gitlab.com:Andr1500/s3_static_website.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. Request an ACM certificate via DNS validation (CloudFront requires the certificate to be in us-east-1).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws acm request-certificate \
        --domain-name 'yourdomain.com' \
        --validation-method DNS \
        --idempotency-token 'cert_request' --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
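
&lt;p&gt;The certificate must be validated before CloudFront can use it. One way (a sketch; &amp;lt;certificate_arn&amp;gt; is the ARN returned by the previous command) is to fetch the DNS validation record, create it in the hosted zone, and wait:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws acm describe-certificate \
        --certificate-arn &amp;lt;certificate_arn&amp;gt; --region us-east-1 \
        --query 'Certificate.DomainValidationOptions[0].ResourceRecord'

    # after creating the returned CNAME record in Route 53:
    aws acm wait certificate-validated \
        --certificate-arn &amp;lt;certificate_arn&amp;gt; --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;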



&lt;p&gt;3. Fill in the necessary parameter defaults in the &lt;code&gt;infrastructure/root.yaml&lt;/code&gt; CloudFormation template (the first two commands below retrieve the certificate ARN and hosted zone ID), then create the CloudFormation stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws acm list-certificates --region us-east-1

    aws route53 list-hosted-zones-by-name --dns-name 'yourdomain.com' --query 'HostedZones[0].Id' --output text

    aws cloudformation create-stack \
        --stack-name static-website \
        --template-body file://infrastructure/root.yaml \
        --capabilities CAPABILITY_NAMED_IAM \
        --disable-rollback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
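
&lt;p&gt;Alternatively, rather than editing the defaults in the template, the parameters can be supplied on the command line (placeholder values shown):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation create-stack \
        --stack-name static-website \
        --template-body file://infrastructure/root.yaml \
        --parameters \
            ParameterKey=HostedZoneName,ParameterValue=yourdomain.com \
            ParameterKey=HostedZoneId,ParameterValue=&amp;lt;hosted_zone_id&amp;gt; \
            ParameterKey=AcmCertificateArn,ParameterValue=&amp;lt;certificate_arn&amp;gt; \
        --capabilities CAPABILITY_NAMED_IAM \
        --disable-rollback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;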



&lt;p&gt;4. Retrieve the outputs of the CloudFormation stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation describe-stacks \
        --stack-name static-website \
        --query "Stacks[0].Outputs" --output json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5. Upload the static website files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    export BUCKET_NAME=&amp;lt;Bucket name&amp;gt;

    aws s3 cp site_files s3://$BUCKET_NAME/ --recursive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;6. Sync updated files to the S3 bucket and invalidate the CloudFront cache.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws s3 sync site_files s3://$BUCKET_NAME

    aws cloudfront list-distributions --query "DistributionList.Items[*].[Id,DomainName]"

    aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths "/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
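
&lt;p&gt;Invalidations are asynchronous: &lt;code&gt;create-invalidation&lt;/code&gt; returns an invalidation ID that can be polled until the edge caches are refreshed (IDs below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudfront wait invalidation-completed \
        --distribution-id YOUR_DISTRIBUTION_ID \
        --id YOUR_INVALIDATION_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;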



&lt;p&gt;7. Clean up.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws s3 rm s3://$BUCKET_NAME --recursive

    aws cloudformation delete-stack --stack-name static-website

    aws acm delete-certificate --certificate-arn &amp;lt;certificate_arn&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;By leveraging AWS services like S3, CloudFront, and Route 53, a static website can be deployed and accessed via a custom domain. This solution provides a scalable, cost-effective, and secure static site hosting infrastructure without the complexity of traditional web servers.&lt;/p&gt;

&lt;p&gt;If you found this post helpful and interesting, please click the reaction button below to show your support. Feel free to use and share this post. 🙂&lt;/p&gt;

</description>
      <category>awsstaticwebsite</category>
      <category>cloudfront</category>
      <category>s3</category>
      <category>route53</category>
    </item>
    <item>
      <title>CloudFormation policy compliance monitoring: leveraging CloudTrail and Athena</title>
      <dc:creator>Andrii Shykhov</dc:creator>
      <pubDate>Thu, 13 Mar 2025 20:53:08 +0000</pubDate>
      <link>https://dev.to/andrii-shykhov/cloudformation-policy-compliance-monitoring-leveraging-cloudtrail-and-athena-5a5n</link>
      <guid>https://dev.to/andrii-shykhov/cloudformation-policy-compliance-monitoring-leveraging-cloudtrail-and-athena-5a5n</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This post details an implementation that monitors who or what changed AWS resources created by CloudFormation. By leveraging AWS Config, CloudTrail, Athena, and Lambda, changes can be tracked, logs can be analysed, and compliance reporting can be automated. The collected data is stored in Amazon S3, making it accessible for audits and compliance verification.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the Project:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This post builds upon my earlier article on &lt;a href="https://dev.to/andrii-shykhov/cloudformation-drift-detection-and-notification-with-aws-config-remediation-action-2nn3"&gt;monitoring CloudFormation stack drift using AWS Config Rules&lt;/a&gt;. In this enhanced version, the monitoring capabilities are extended by:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tracking user activities&lt;/strong&gt; affecting CloudFormation resources.&lt;br&gt;
&lt;strong&gt;Logging change details&lt;/strong&gt; to Amazon S3 via CloudTrail.&lt;br&gt;
&lt;strong&gt;Processing and querying logs&lt;/strong&gt; using AWS Athena.&lt;br&gt;
&lt;strong&gt;Automating remediation&lt;/strong&gt; through AWS Systems Manager and Lambda.&lt;/p&gt;

&lt;p&gt;Core Components:&lt;br&gt;
&lt;strong&gt;AWS Config Rule&lt;/strong&gt;: Monitors stacks for drift using CLOUDFORMATION_STACK_DRIFT_DETECTION_CHECK.&lt;br&gt;
&lt;strong&gt;Systems Manager Automation Runbook&lt;/strong&gt;: Invokes a Lambda function for compliance checks.&lt;br&gt;
&lt;strong&gt;Remediation Action&lt;/strong&gt;: Executes the InvokeLambdaFromConfig automation document.&lt;br&gt;
&lt;strong&gt;Amazon S3 Bucket&lt;/strong&gt;: Stores logs from CloudTrail and Athena query results.&lt;br&gt;
&lt;strong&gt;Athena Table&lt;/strong&gt;: Organises and queries raw log data.&lt;br&gt;
&lt;strong&gt;CloudTrail Trail&lt;/strong&gt;: Captures AWS API activity logs.&lt;br&gt;
&lt;strong&gt;Lambda Function&lt;/strong&gt;: Extracts CloudFormation resource names and queries Athena for recent changes.&lt;br&gt;
Infrastructure schema:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyubvj67bi9ynfd9xmto4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyubvj67bi9ynfd9xmto4.png" alt="schema" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The configuration in the &lt;code&gt;infrastructure/monitoring_stack_cloudtrail.yaml&lt;/code&gt; CloudFormation template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    AWSTemplateFormatVersion: '2010-09-09'
    Description: CloudTrail setup for monitoring CFN stack modifications

    Parameters:
      AthenaDatabaseName:
        Type: String
        Description: Athena database name for running queries
        Default: 'cloudtrail_logs'
      StackNameToMonitor:
        Type: String
        Description: CloudFormation stack name to monitor
        Default: 'base-infrastructure'
      MaximumExecutionFrequency:
        Type: String
        Description: The maximum frequency with which drift in CloudFormation stacks need to be evaluated
        Default: 'One_Hour'

    Resources:
    #################################
    # CloudTrail and Athena
    #################################
      CloudTrailLogsBucket:
        Type: AWS::S3::Bucket
        Properties:
          BucketName: !Sub "aws-cloudtrail-logs-${AWS::AccountId}"
          VersioningConfiguration:
            Status: Enabled
          LifecycleConfiguration:
            Rules:
              - Id: ExpireLogs
                Status: Enabled
                ExpirationInDays: 365

      CloudTrailLogsBucketPolicy:
        Type: AWS::S3::BucketPolicy
        Properties:
          Bucket: !Ref CloudTrailLogsBucket
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Sid: "AWSCloudTrailAclCheck"
                Effect: Allow
                Principal:
                  Service: cloudtrail.amazonaws.com
                Action: s3:GetBucketAcl
                Resource: !Sub "arn:${AWS::Partition}:s3:::aws-cloudtrail-logs-${AWS::AccountId}"
                Condition:
                  StringEquals:
                    AWS:SourceArn: !Sub "arn:${AWS::Partition}:cloudtrail:${AWS::Region}:${AWS::AccountId}:trail/monitoring-cfn-policy-compliance"
              - Sid: "AWSCloudTrailWrite"
                Effect: Allow
                Principal:
                  Service: cloudtrail.amazonaws.com
                Action: s3:PutObject
                Resource: !Sub "arn:${AWS::Partition}:s3:::aws-cloudtrail-logs-${AWS::AccountId}/AWSLogs/${AWS::AccountId}/*"
                Condition:
                  StringEquals:
                    AWS:SourceArn: !Sub "arn:${AWS::Partition}:cloudtrail:${AWS::Region}:${AWS::AccountId}:trail/monitoring-cfn-policy-compliance"
                    s3:x-amz-acl: "bucket-owner-full-control"
              - Sid: "AthenaQueryResultPutObject"
                Effect: Allow
                Principal:
                  Service: athena.amazonaws.com
                Action: s3:PutObject
                Resource: !Sub "arn:${AWS::Partition}:s3:::aws-cloudtrail-logs-${AWS::AccountId}/athena-results/*"
                Condition:
                  StringEquals:
                    aws:SourceArn: !Sub "arn:${AWS::Partition}:athena:${AWS::Region}:${AWS::AccountId}:workgroup/primary"

      CloudTrail:
        Type: AWS::CloudTrail::Trail
        Properties:
          TrailName: monitoring-cfn-policy-compliance
          S3BucketName: !Ref CloudTrailLogsBucket
          IncludeGlobalServiceEvents: true
          IsMultiRegionTrail: true
          EnableLogFileValidation: false
          IsOrganizationTrail: false
          IsLogging: true

      AthenaDatabase:
        Type: AWS::Glue::Database
        Properties:
          CatalogId: !Ref AWS::AccountId
          DatabaseInput:
            Name: !Ref AthenaDatabaseName

      AthenaTable:
        Type: AWS::Glue::Table
        Properties:
          CatalogId: !Ref AWS::AccountId
          DatabaseName: !Ref AthenaDatabase
          TableInput:
            Name: !Sub "aws_cloudtrail_logs_${AWS::AccountId}"
            TableType: EXTERNAL_TABLE
            Parameters:
              classification: cloudtrail
            StorageDescriptor:
              Columns:
                - Name: eventVersion
                  Type: string
                - Name: userIdentity
                  Type: struct&amp;lt;type:string,principalId:string,arn:string,accountId:string,invokedBy:string,accessKeyId:string,userName:string,sessionContext:struct&amp;lt;attributes:struct&amp;lt;mfaAuthenticated:string,creationDate:string&amp;gt;,sessionIssuer:struct&amp;lt;type:string,principalId:string,arn:string,accountId:string,username:string&amp;gt;,ec2RoleDelivery:string,webIdFederationData:struct&amp;lt;federatedProvider:string,attributes:map&amp;lt;string,string&amp;gt;&amp;gt;&amp;gt;&amp;gt;
                - Name: eventTime
                  Type: string
                - Name: eventSource
                  Type: string
                - Name: eventName
                  Type: string
                - Name: awsRegion
                  Type: string
                - Name: sourceIpAddress
                  Type: string
                - Name: userAgent
                  Type: string
                - Name: errorCode
                  Type: string
                - Name: errorMessage
                  Type: string
                - Name: requestParameters
                  Type: string
                - Name: responseElements
                  Type: string
                - Name: additionalEventData
                  Type: string
                - Name: requestId
                  Type: string
                - Name: eventId
                  Type: string
                - Name: resources
                  Type: array&amp;lt;struct&amp;lt;arn:string,accountId:string,type:string&amp;gt;&amp;gt;
                - Name: eventType
                  Type: string
                - Name: apiVersion
                  Type: string
                - Name: readOnly
                  Type: string
                - Name: recipientAccountId
                  Type: string
                - Name: serviceEventDetails
                  Type: string
                - Name: sharedEventID
                  Type: string
                - Name: vpcEndpointId
                  Type: string
                - Name: tlsDetails
                  Type: struct&amp;lt;tlsVersion:string,cipherSuite:string,clientProvidedHostHeader:string&amp;gt;
              Location: !Sub "s3://aws-cloudtrail-logs-${AWS::AccountId}/AWSLogs/${AWS::AccountId}/CloudTrail/"
              InputFormat: com.amazon.emr.cloudtrail.CloudTrailInputFormat
              OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
              SerdeInfo:
                SerializationLibrary: org.apache.hive.hcatalog.data.JsonSerDe

    #################################
    # Lambda function
    #################################
      LambdaExecutionRole:
        Type: AWS::IAM::Role
        Properties:
          RoleName: LambdaAthenaQueryExecutionRole
          AssumeRolePolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Principal:
                  Service: 
                    - lambda.amazonaws.com
                    - athena.amazonaws.com
                Action:
                  - sts:AssumeRole
          Policies:
            - PolicyName: CloudFormationDescribe
              PolicyDocument:
                Version: "2012-10-17"
                Statement:
                  - Effect: Allow
                    Action:
                      - cloudformation:DescribeStackResources
                    Resource: "arn:aws:cloudformation:*"
            - PolicyName: AthenaQueryPolicy
              PolicyDocument:
                Version: "2012-10-17"
                Statement:
                  - Effect: Allow
                    Action:
                    - athena:StartQueryExecution
                    - athena:GetQueryExecution
                    - athena:GetQueryResults
                    - athena:GetWorkGroup
                    - athena:GetDataCatalog
                    - athena:GetTableMetadata
                    - glue:GetDatabase
                    - glue:GetTable
                    - glue:GetPartitions
                    Resource: "*"
                  - Effect: Allow
                    Action:
                    - s3:PutObject
                    - s3:GetObject
                    - s3:ListBucket
                    - s3:GetBucketLocation
                    - s3:PutObjectAcl
                    Resource: 
                      - !Sub "arn:${AWS::Partition}:s3:::${CloudTrailLogsBucket}"
                      - !Sub "arn:${AWS::Partition}:s3:::${CloudTrailLogsBucket}/*"
                  - Effect: Allow
                    Action:
                      - lambda:AddPermission
                    Resource: "*"
                  - Effect: Allow
                    Action:
                      - logs:CreateLogGroup
                      - logs:CreateLogStream
                      - logs:PutLogEvents
                    Resource: "*"

      CheckCloudTrailLogsLambda:
        Type: AWS::Lambda::Function
        Properties:
          FunctionName: CheckCloudTrailLogsLambda
          Runtime: nodejs22.x
          Handler: index.handler
          Role: !GetAtt LambdaExecutionRole.Arn
          Timeout: 120
          MemorySize: 256
          Environment:
            Variables:
              STACKS_TO_MONITOR: !Ref StackNameToMonitor
              ATHENA_DATABASE: !Ref AthenaDatabase
              ATHENA_TABLE: !Ref AthenaTable
              S3_OUTPUT_BUCKET: !Ref CloudTrailLogsBucket
          Code:
            ZipFile: |
              const { AthenaClient, StartQueryExecutionCommand } = require("@aws-sdk/client-athena");
              const { CloudFormationClient, DescribeStackResourcesCommand } = require("@aws-sdk/client-cloudformation");

              const athena = new AthenaClient({});
              const cloudformation = new CloudFormationClient({});

              exports.handler = async (event) =&amp;gt; {
                  console.log("Event received:", JSON.stringify(event, null, 2));

                  const stacks = process.env.STACKS_TO_MONITOR.split(",");
                  const tableName = process.env.ATHENA_TABLE;
                  const databaseName = process.env.ATHENA_DATABASE;
                  const s3Bucket = process.env.S3_OUTPUT_BUCKET;

                  let resourceNames = [];

                  // Extract resource names from the CloudFormation stacks
                  for (const stack of stacks) {
                      const stackResources = await cloudformation.send(
                          new DescribeStackResourcesCommand({ StackName: stack })
                      );

                      stackResources.StackResources.forEach(resource =&amp;gt; {
                          if (resource.PhysicalResourceId) {
                              resourceNames.push(resource.PhysicalResourceId);
                          }
                      });
                  }

                  // Construct Athena query
                  let whereClause = resourceNames.map(name =&amp;gt; `resource.arn LIKE '%${name}%'`).join(" OR ");
                  let queryString = `
                      SELECT 
                          userIdentity.userName AS username,
                          eventName AS action,
                          eventTime AS timestamp,
                          resource.arn AS resource_arn,
                          sourceIPAddress AS request_source,
                          userAgent AS user_agent
                      FROM ${tableName}
                      CROSS JOIN UNNEST(resources) AS t(resource)
                      WHERE (${whereClause})
                      AND eventName IS NOT NULL
                      AND userIdentity.userName IS NOT NULL
                      AND from_iso8601_timestamp(eventTime) &amp;gt;= current_timestamp - INTERVAL '1' HOUR
                      ORDER BY from_iso8601_timestamp(eventTime) DESC;
                  `;

                  // Run the Athena query
                  const params = {
                      QueryString: queryString,
                      QueryExecutionContext: { Database: databaseName },
                      ResultConfiguration: { OutputLocation: `s3://${s3Bucket}/athena-results/` }
                  };

                  try {
                      const command = new StartQueryExecutionCommand(params);
                      const queryExecution = await athena.send(command);
                      console.log("Query started:", queryExecution.QueryExecutionId);
                      return { status: "Query started successfully", queryExecutionId: queryExecution.QueryExecutionId };
                  } catch (error) {
                      console.error("Error running query:", error);
                      throw error;
                  }
              };

      LambdaPermissionForConfig:
        Type: AWS::Lambda::Permission
        Properties:
          FunctionName: !Ref CheckCloudTrailLogsLambda
          Action: lambda:InvokeFunction
          Principal: config.amazonaws.com

    #################################
    # Config Rule
    #################################
      IamRoleForConfig2:
        Type: AWS::IAM::Role
        Properties:
          RoleName: CfnDriftDetectionForCloudTrail
          Description: IAM role for AWS Config to access CloudFormation drift detection
          AssumeRolePolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Principal:
                  Service: config.amazonaws.com
                Action:
                  - sts:AssumeRole
          ManagedPolicyArns:
            - arn:aws:iam::aws:policy/ReadOnlyAccess
          Policies:
            - PolicyName: CloudFormationDriftDetectionpolicy
              PolicyDocument:
                Version: "2012-10-17"
                Statement:
                  - Effect: Allow
                    Action:
                      - cloudformation:DetectStackResourceDrift
                      - cloudformation:DetectStackDrift
                      - cloudformation:DescribeStacks
                      - cloudformation:DescribeStackResources
                      - cloudformation:BatchDescribeTypeConfigurations
                      - cloudformation:DescribeStackResourceDrifts
                      - cloudformation:DescribeStackDriftDetectionStatus
                    Resource: !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:*"

      ConfigRuleCheckCloudTralLogs:
        DependsOn:
        - LambdaPermissionForConfig
        Type: AWS::Config::ConfigRule
        Properties:
          ConfigRuleName: ConfigRuleCheckCloudTrailLogs
          Description: AWS Config rule to detect drift in CFN stacks and check CloudTrail logs
          Scope:
            TagKey: stack-name
            TagValue: !Ref StackNameToMonitor
          Source:
            Owner: AWS
            SourceIdentifier: CLOUDFORMATION_STACK_DRIFT_DETECTION_CHECK
          MaximumExecutionFrequency: !Ref MaximumExecutionFrequency
          InputParameters:
            cloudformationRoleArn: !GetAtt IamRoleForConfig2.Arn

      IamRoleForRemediation:
        Type: AWS::IAM::Role
        Properties:
          RoleName: AwsConfigRemediationActionInvokeLambda
          Description: IAM role for AWS Config remediation action to invoke Lambda function
          AssumeRolePolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Principal:
                  Service:
                    - config.amazonaws.com
                    - ssm.amazonaws.com
                Action:
                  - sts:AssumeRole
          Policies:
            - PolicyName: InvokeLambdaPolicy
              PolicyDocument:
                Version: '2012-10-17'
                Statement:
                  - Effect: Allow
                    Action:
                      - lambda:InvokeFunction
                    Resource: !GetAtt CheckCloudTrailLogsLambda.Arn

      SsmDocumentInvokeLambda:
        Type: AWS::SSM::Document
        Properties:
          DocumentType: Automation
          Name: InvokeLambdaFromConfig
          Content:
            schemaVersion: "0.3"
            description: "SSM Automation document to invoke a Lambda function"
            parameters:
              AutomationAssumeRole:
                type: String
                description: (Optional) The ARN of the role that allows Automation to perform the actions
                default: !GetAtt IamRoleForRemediation.Arn
            mainSteps:
              - name: InvokeLambda
                action: aws:invokeLambdaFunction
                inputs:
                  FunctionName: !Ref CheckCloudTrailLogsLambda
                  Payload: '{}'
                  InvocationType: Event
                  LogType: None
                maxAttempts: 2
                timeoutSeconds: 30
                onFailure: Abort
                isCritical: true
            assumeRole: !GetAtt IamRoleForRemediation.Arn

      RemediationActionInvokeLambda:
        Type: AWS::Config::RemediationConfiguration
        Properties:
          ConfigRuleName: !Ref ConfigRuleCheckCloudTralLogs
          TargetType: SSM_DOCUMENT
          TargetId: !Ref SsmDocumentInvokeLambda
          Automatic: true
          MaximumAutomaticAttempts: 2
          RetryAttemptSeconds: 30
          Parameters:
            AutomationAssumeRole:
              StaticValue:
                Values:
                  - !GetAtt IamRoleForRemediation.Arn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure the following prerequisites are in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with sufficient permissions to create and manage resources.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; installed on the local machine.&lt;/li&gt;
&lt;li&gt;CloudFormation infrastructure deployed from my previous &lt;a href="https://dev.to/andrii-shykhov/cloudformation-drift-detection-and-notification-with-aws-config-remediation-action-2nn3"&gt;post&lt;/a&gt; (if applicable).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deploy the CloudFormation Stack.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation create-stack \
        --stack-name monitoring-policy-compliance \
        --template-body file://infrastructure/monitoring_stack_cloudtrail.yaml \
        --capabilities CAPABILITY_NAMED_IAM \
        --disable-rollback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2.Test the resources deployed with the stack. Change the value of a resource in the base-infrastructure stack, then re-evaluate the drift detection rule to verify functionality.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws ssm put-parameter --name "ConnectionToken" --value "secret_token_value_2" --type "String" --overwrite

    aws configservice start-config-rules-evaluation --config-rule-names ConfigRuleCheckCloudTrailLogs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
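&lt;p&gt;As a hedged sketch (not part of the article's stack), the rule's compliance state can also be checked programmatically. The pure helper below works on any &lt;code&gt;DescribeComplianceByConfigRule&lt;/code&gt; response:&lt;/p&gt;

```python
def summarize_compliance(response):
    """Map each Config rule name to its compliance type."""
    return {
        item["ConfigRuleName"]: item["Compliance"].get("ComplianceType", "UNKNOWN")
        for item in response.get("ComplianceByConfigRules", [])
    }

def fetch_compliance(rule_name):
    """Query AWS Config for one rule's compliance (requires AWS credentials)."""
    import boto3  # imported here so the helper above stays dependency-free
    client = boto3.client("config")
    return summarize_compliance(
        client.describe_compliance_by_config_rule(ConfigRuleNames=[rule_name])
    )
```

&lt;p&gt;For example, &lt;code&gt;fetch_compliance("ConfigRuleCheckCloudTrailLogs")&lt;/code&gt; should report NON_COMPLIANT shortly after the parameter change above.&lt;/p&gt;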



&lt;p&gt;3.After drift detection runs, review the Athena query results stored in S3 under &lt;code&gt;/athena-results&lt;/code&gt; as a .csv file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws s3 ls s3://&amp;lt;your-bucket-name&amp;gt;/athena-results/ --recursive

    aws s3 cp s3://&amp;lt;your-bucket-name&amp;gt;/athena-results/&amp;lt;report_name&amp;gt;.csv ./
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
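&lt;p&gt;A minimal sketch for inspecting the downloaded report locally; the column names used here are assumptions, since the real columns depend on the Athena query defined in the stack:&lt;/p&gt;

```python
import csv
import io

def load_report(csv_text):
    """Parse an Athena CSV result into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(csv_text)))

# Hypothetical column names for illustration only.
sample = "eventtime,username,eventname\n2025-03-13T20:00:00Z,admin,PutParameter\n"
for row in load_report(sample):
    print(row["username"], row["eventname"])  # prints: admin PutParameter
```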



&lt;p&gt;Here is an example of logs from this file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01g2z3zgal8bvz2hes49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01g2z3zgal8bvz2hes49.png" alt="example logs" width="800" height="132"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;4.Clean up resources. After testing, stop logging on the CloudTrail trail, delete all data from the S3 bucket, and delete the CloudFormation stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudtrail stop-logging --name monitor-cfn-policy-compliance

    aws s3 rm s3://&amp;lt;your-bucket-name&amp;gt; --recursive

    aws cloudformation delete-stack --stack-name monitoring-policy-compliance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Implementing this solution provides visibility into changes affecting CloudFormation-managed resources. This enhances security, compliance tracking, and audit readiness. The ability to log and query user actions simplifies responses to compliance requests from clients, regulators, or security teams.&lt;/p&gt;

&lt;p&gt;If you found this post helpful and interesting, please click the reaction button below to show your support. Feel free to use and share this post. 🙂&lt;/p&gt;

</description>
      <category>awscloudtrail</category>
      <category>awsconfig</category>
      <category>cloudformationdrift</category>
    </item>
    <item>
      <title>CloudFormation Drift Detection and Notification with AWS Config Remediation Action</title>
      <dc:creator>Andrii Shykhov</dc:creator>
      <pubDate>Thu, 13 Mar 2025 20:45:14 +0000</pubDate>
      <link>https://dev.to/andrii-shykhov/cloudformation-drift-detection-and-notification-with-aws-config-remediation-action-2nn3</link>
      <guid>https://dev.to/andrii-shykhov/cloudformation-drift-detection-and-notification-with-aws-config-remediation-action-2nn3</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CloudFormation stack drift occurs when resources created by a CloudFormation stack are manually changed or deleted. This post explains the process of automatically detecting stack drift using an AWS Config Rule, triggering a Remediation Action with the AWS-PublishSNSNotification Systems Manager automation runbook, and sending notifications via an SNS topic for efficient monitoring and response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the project:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This solution deploys an automated system for monitoring and alerting on CloudFormation stack drift. The core components of the project are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Config Rule&lt;/strong&gt;: Monitors stacks for drift using the CLOUDFORMATION_STACK_DRIFT_DETECTION_CHECK rule.&lt;br&gt;
&lt;strong&gt;Remediation Action&lt;/strong&gt;: Utilizes the AWS-PublishSNSNotification Systems Manager automation runbook to trigger notifications as a remediation step.&lt;br&gt;
&lt;strong&gt;SNS Notifications&lt;/strong&gt;: Alerts are sent to a preconfigured endpoint upon drift detection.&lt;br&gt;
&lt;strong&gt;CloudFormation Templates&lt;/strong&gt;: Streamline the deployment and configuration of resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauowih1ynzf6nqmao560.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauowih1ynzf6nqmao560.png" alt="Infrastructure Scema" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Config Rule and Remediation Action configurations from &lt;code&gt;infrastructure/monitoring_stack.yaml&lt;/code&gt; template:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Parameters:
  StackNameToMonitor:
    Type: String
    Description: Name of the CloudFormation stack to monitor for drift detection
    Default: 'base-infrastructure'
  ConfigRuleName:
    Type: String
    Description: Provide the name for the Config rule
    Default: 'CloudFormationDriftDetection'
  MaximumExecutionFrequency:
    Type: String
    Description: Frequency at which AWS Config runs the drift detection rule
    Default: 'TwentyFour_Hours'
    AllowedValues:
      - One_Hour
      - Three_Hours
      - Six_Hours
      - Twelve_Hours
      - TwentyFour_Hours

Resources:
  IamRoleForConfig:
    Type: AWS::IAM::Role
    Properties:
      RoleName: CloudFormationDriftDetectionRole
      Description: IAM role for AWS Config to access CloudFormation drift detection
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: config.amazonaws.com
            Action:
              - sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/ReadOnlyAccess
      Policies:
        - PolicyName: CloudFormationDriftDetectionPolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - cloudformation:DetectStackResourceDrift
                  - cloudformation:DetectStackDrift
                  - cloudformation:DescribeStacks
                  - cloudformation:DescribeStackResources
                  - cloudformation:BatchDescribeTypeConfigurations
                  - cloudformation:DescribeStackResourceDrifts
                  - cloudformation:DescribeStackDriftDetectionStatus
                Resource: !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:*"

  ConfigRuleToDetectCfnDrift:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: !Ref ConfigRuleName
      Description: AWS Config rule to detect drift in CloudFormation stacks
      Scope:
        TagKey: stack-name
        TagValue: !Ref StackNameToMonitor
      Source:
        Owner: AWS
        SourceIdentifier: CLOUDFORMATION_STACK_DRIFT_DETECTION_CHECK
      MaximumExecutionFrequency: !Ref MaximumExecutionFrequency
      InputParameters:
        cloudformationRoleArn: !GetAtt IamRoleForConfig.Arn

  IamRoleForRemediation:
    Type: AWS::IAM::Role
    Properties:
      RoleName: AwsConfigRemediationAction
      Description: IAM role for AWS Config remediation action to publish SNS notifications
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - config.amazonaws.com
                - ssm.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: SNSPublishPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - sns:Publish
                Resource: !Ref SnsTopicToNotifyCfnDriftAlarm

  RemediationActionForConfigRule:
    Type: AWS::Config::RemediationConfiguration
    Properties:
      ConfigRuleName: !Ref ConfigRuleName
      TargetType: SSM_DOCUMENT
      TargetId: AWS-PublishSNSNotification
      Automatic: true
      MaximumAutomaticAttempts: 2
      RetryAttemptSeconds: 30
      Parameters:
        AutomationAssumeRole:
          StaticValue:
            Values:
              - !GetAtt IamRoleForRemediation.Arn
        Message:
          StaticValue:
            Values:
              - !Sub |
                "*** CloudFormation Stack drift detected! ***"
                "Account: ${AWS::AccountId}"
                "CloudFormation Stack: ${StackNameToMonitor}"
        TopicArn:
          StaticValue:
            Values:
              - !Ref SnsTopicToNotifyCfnDriftAlarm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure the following prerequisites are in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with sufficient permissions to create and manage resources.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; installed on the local machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1.Clone the repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    git clone https://gitlab.com/Andr1500/cfn-drift-detection.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2.Deploy the base infrastructure. Use the &lt;code&gt;infrastructure/infrastructure_stack.yaml&lt;/code&gt; template to set up the foundational resources. The &lt;code&gt;stack-name&lt;/code&gt; tag matters: the Config rule scopes its evaluation to resources carrying this tag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation create-stack \
        --stack-name base-infrastructure \
        --template-body file://infrastructure/infrastructure_stack.yaml \
        --capabilities CAPABILITY_NAMED_IAM \
        --tags Key=stack-name,Value=base-infrastructure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3.Verify the AWS Config setup. Check whether AWS Config is enabled, and create the necessary resources if they are missing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws configservice describe-configuration-recorders

    aws configservice describe-delivery-channels

    aws iam get-role --role-name AWSServiceRoleForConfig

    aws iam create-service-linked-role --aws-service-name config.amazonaws.com

    aws configservice put-configuration-recorder \
        --configuration-recorder name=default,roleARN=arn:aws:iam::&amp;lt;account-id&amp;gt;:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig

    aws configservice put-delivery-channel \
        --delivery-channel-name default \
        --s3-bucket-name &amp;lt;YourBucketName&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
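&lt;p&gt;The decision in step 3 can be sketched in code as well (an illustrative helper, not part of the repository):&lt;/p&gt;

```python
def config_is_set_up(recorders_response, channels_response):
    """True when at least one configuration recorder and one delivery
    channel already exist, so nothing needs to be created."""
    has_recorder = bool(recorders_response.get("ConfigurationRecorders"))
    has_channel = bool(channels_response.get("DeliveryChannels"))
    return has_recorder and has_channel

def check_with_boto3():
    """Run the same check against the live account (requires credentials)."""
    import boto3  # imported here so the pure helper stays dependency-free
    client = boto3.client("config")
    return config_is_set_up(
        client.describe_configuration_recorders(),
        client.describe_delivery_channels(),
    )
```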



&lt;p&gt;4.Deploy the monitoring stack using the &lt;code&gt;infrastructure/monitoring_stack.yaml&lt;/code&gt; template. It is important to grant the necessary access to the Config rule; otherwise, the monitored stack’s resource scope produces the error: “AWS CloudFormation failed to detect drift. The default compliance state has been set to NON_COMPLIANT. Re-evaluate the rule and try again. If the problem persists contact AWS CloudFormation support.”&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation create-stack \
        --stack-name monitoring-infrastructure \
        --template-body file://infrastructure/monitoring_stack.yaml \
        --capabilities CAPABILITY_NAMED_IAM \
        --disable-rollback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5.Test the monitoring stack. Change the value of a resource in the base infrastructure stack, then re-evaluate the drift detection rule to verify functionality.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws ssm put-parameter --name "ConnectionToken" --value "secret_token_value_2" --type "String" --overwrite

    aws configservice start-config-rules-evaluation --config-rule-names CloudFormationDriftDetection
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;6.Clean up resources. After testing, delete the CloudFormation stacks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation delete-stack --stack-name infrastructure_stack.yaml

    aws cloudformation delete-stack --stack-name monitoring_stack.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By implementing remediation actions with the AWS-PublishSNSNotification Systems Manager automation runbook, you can automate notifications with minimal setup. While this approach is efficient, it’s essential to consider the 256-character limit for the Message field, which may restrict detailed notifications.&lt;/p&gt;

&lt;p&gt;For scenarios requiring advanced workflows or richer notifications, consider augmenting the solution with EventBridge rules or Lambda functions. With this system in place, you can ensure timely responses to drift events and maintain infrastructure compliance effectively.&lt;/p&gt;
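&lt;p&gt;As a hedged illustration of the EventBridge option (a sketch, not part of the deployed templates), a rule along these lines could forward NON_COMPLIANT evaluations of the drift rule to the existing &lt;code&gt;SnsTopicToNotifyCfnDriftAlarm&lt;/code&gt; topic with the full event as payload:&lt;/p&gt;

```yaml
  EventRuleForDriftCompliance:
    Type: AWS::Events::Rule
    Properties:
      Description: Forward Config compliance changes for the drift rule to SNS
      EventPattern:
        source:
          - aws.config
        detail-type:
          - Config Rules Compliance Change
        detail:
          configRuleName:
            - CloudFormationDriftDetection
          newEvaluationResult:
            complianceType:
              - NON_COMPLIANT
      Targets:
        - Arn: !Ref SnsTopicToNotifyCfnDriftAlarm
          Id: DriftComplianceSnsTarget
```

&lt;p&gt;Note that the topic's access policy must also allow &lt;code&gt;events.amazonaws.com&lt;/code&gt; to publish to it.&lt;/p&gt;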

&lt;p&gt;If you found this post helpful and interesting, please click the reaction button to show your support. Feel free to use and share this post.&lt;/p&gt;

</description>
      <category>awsconfig</category>
      <category>cfndriftdetection</category>
      <category>remediationaction</category>
    </item>
    <item>
      <title>OpenTelemetry Tracing for File Downloads from an S3 Bucket via API Gateway and Lambda Functions</title>
      <dc:creator>Andrii Shykhov</dc:creator>
      <pubDate>Thu, 13 Mar 2025 20:38:17 +0000</pubDate>
      <link>https://dev.to/andrii-shykhov/opentelemetry-tracing-for-file-downloads-from-an-s3-bucket-via-api-gateway-and-lambda-functions-mp3</link>
      <guid>https://dev.to/andrii-shykhov/opentelemetry-tracing-for-file-downloads-from-an-s3-bucket-via-api-gateway-and-lambda-functions-mp3</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This post demonstrates how to build a serverless solution using AWS services like API Gateway, Lambda functions, and S3 buckets to enable file downloads using presigned URLs. By integrating OpenTelemetry tracing, we can gain deep insights into each request’s journey. The focus is on creating a secure and efficient method for downloading files from an S3 bucket while tracing the interactions from API Gateway (REST API) through Lambda to S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the project:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this project, we build a serverless architecture that allows users to securely download files from an S3 bucket via an API Gateway endpoint. API Gateway is secured using a Lambda Authorizer to ensure only authorized requests are processed. The architecture also incorporates OpenTelemetry tracing, providing detailed visibility into each request’s lifecycle — from API Gateway to Lambda functions and interactions with the S3 bucket. All resources (except the parameter in the Parameter Store) were created with CloudFormation.&lt;/p&gt;

&lt;p&gt;For the Lambda functions to work properly, the project uses the AWS Distro for OpenTelemetry (ADOT) Lambda layer, which enables tracing. You can find more information about setting up this layer &lt;a href="https://aws-otel.github.io/docs/getting-started/lambda/lambda-python#add-the-arn-of-the-lambda-layer" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The core components of the project are:&lt;br&gt;
&lt;strong&gt;API Gateway:&lt;/strong&gt; Serves as the entry point for API requests, secured with a Lambda Authorizer.&lt;br&gt;
&lt;strong&gt;Lambda Functions:&lt;/strong&gt; Handle the business logic, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authenticating requests via the Lambda Authorizer.&lt;/li&gt;
&lt;li&gt;Generating presigned URLs for downloading files from the S3 bucket.&lt;/li&gt;
&lt;li&gt;Handling error cases, such as incorrect API Gateway resources or invalid file names.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;S3 Bucket:&lt;/strong&gt; Stores the files for download.&lt;br&gt;
&lt;strong&gt;OpenTelemetry Tracing:&lt;/strong&gt; Tracks and logs the flow of requests for debugging and performance optimization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8htp1ot1zgod4g7zcbq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8htp1ot1zgod4g7zcbq.png" alt="Infrastructure Schema" width="782" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcbk53rpfmiqkagcve7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcbk53rpfmiqkagcve7f.png" alt="X-Ray Trace Map in case of correct request" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbc18u6iesasrd42wcnj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbc18u6iesasrd42wcnj.png" alt="X-Ray Trace Map in case of incorrect token" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Main Lambda function and IAM role configuration in &lt;code&gt;infrastructure/root.yaml&lt;/code&gt; CloudFormation template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Parameters:
      S3BucketName:
        Type: String
        Default: 'store-files-20241010'
      StageName:
        Type: String
        Default: 'dev'
      OtelLambdaLayerArn:
        Type: String
        Default: 'arn:aws:lambda:eu-central-1:901920570463:layer:aws-otel-python-amd64-ver-1-25-0:1'
    Resources:
      MainLambdaFunction:
        DependsOn:
          - S3Bucket
        Type: AWS::Lambda::Function
        Properties:
          FunctionName: MainLambdaFunction
          Description: Makes requests to the S3 bucket
          Runtime: python3.12
          Handler: index.lambda_handler
          Role: !GetAtt MainLambdaExecutionRole.Arn
          Timeout: 30
          MemorySize: 512
          Environment:
            Variables:
              S3_BUCKET_NAME: !Ref S3BucketName
              OTEL_PROPAGATORS: xray
              AWS_LAMBDA_EXEC_WRAPPER: /opt/otel-instrument
          TracingConfig:
            Mode: Active
          Layers:
            - !Ref OtelLambdaLayerArn
          Code:
            ZipFile: |
              import json
              import boto3
              import os
              import logging
              from botocore.exceptions import ClientError
              from opentelemetry import trace

              # Initialize logging
              logger = logging.getLogger()
              logger.setLevel(logging.INFO)

              # OpenTelemetry Tracing
              tracer = trace.get_tracer(__name__)

              # Initialize the S3 client
              s3_client = boto3.client('s3')

              def lambda_handler(event, context):
                  # Start tracing for the request
                  with tracer.start_as_current_span("main-handler-span") as span:
                      # Log the entire event received
                      logger.info('Received event: %s', json.dumps(event))

                      # Extract and log routeKey, path, and method
                      route_key = event.get("routeKey")
                      path = event.get("path")
                      http_method = event.get("httpMethod")

                      logger.info('Route Key: %s', route_key)
                      logger.info('HTTP Method: %s', http_method)
                      logger.info('Path: %s', path)

                      # Add tracing attributes
                      span.set_attribute("routeKey", route_key)
                      span.set_attribute("http.method", http_method)
                      span.set_attribute("http.path", path)

                      # Handle routes based on HTTP method and path
                      if route_key == "GET /list" or (http_method == "GET" and path == "/list"):
                          logger.info("Handling /list request")
                          return list_objects(span)
                      elif route_key == "GET /download" or (http_method == "GET" and path == "/download"):
                          logger.info("Handling /download request")
                          return generate_presigned_url(event, span)
                      else:
                          # Return a specific error message for invalid routes
                          logger.error("Invalid route: %s", route_key)
                          return {
                              'statusCode': 404,
                              'body': json.dumps({"error": "Resource name is incorrect."})
                          }

              def list_objects(span):
                  bucket_name = os.environ.get('S3_BUCKET_NAME')
                  try:
                      logger.info("Listing objects in bucket: %s", bucket_name)
                      response = s3_client.list_objects_v2(Bucket=bucket_name)
                      if 'Contents' not in response:
                          logger.info("No objects found in bucket.")
                          return {
                              'statusCode': 200,
                              'body': json.dumps({"objects": []})
                          }
                      object_keys = [{'File': obj['Key']} for obj in response['Contents']]
                      logger.info("Found %d objects in bucket.", len(object_keys))
                      span.set_attribute("s3.object_count", len(object_keys))
                      return {
                          'statusCode': 200,
                          'body': json.dumps({"objects": object_keys})
                      }
                  except ClientError as e:
                      logger.error("Error listing objects in the bucket: %s", e)
                      return {
                          'statusCode': 500,
                          'body': json.dumps({"error": "Error listing objects in the bucket."})
                      }

              def generate_presigned_url(event, span):
                  bucket_name = os.environ.get('S3_BUCKET_NAME')
                  query_string_params = event.get("queryStringParameters")
                  if not query_string_params:
                      logger.error("Query string parameters are missing or request is incorrect.")
                      return {
                          'statusCode': 400,
                          'body': json.dumps({"error": "Request is incorrect. Please provide valid query parameters."})
                      }
                  object_key = query_string_params.get("objectKey")
                  logger.info("Bucket name: %s", bucket_name)
                  logger.info("Requested object key: %s", object_key)
                  if not object_key:
                      logger.error("objectKey is missing in queryStringParameters.")
                      return {
                          'statusCode': 400,
                          'body': json.dumps({"error": "objectKey is required."})
                      }
                  # Check if the object exists
                  try:
                      logger.info("Checking if object %s exists in bucket %s", object_key, bucket_name)
                      s3_client.head_object(Bucket=bucket_name, Key=object_key)
                  except ClientError as e:
                      if e.response['Error']['Code'] == '404':
                          logger.error("Object %s does not exist in bucket %s", object_key, bucket_name)
                          return {
                              'statusCode': 404,
                              'body': json.dumps({"error": "Object name is incorrect or object doesn't exist."})
                          }
                      else:
                          logger.error("Error checking object existence in bucket: %s", e)
                          return {
                              'statusCode': 500,
                              'body': json.dumps({"error": "Error checking object existence."})
                          }
                  # Generate presigned URL if the object exists
                  try:
                      logger.info("Generating presigned URL for object: %s", object_key)
                      presigned_url = s3_client.generate_presigned_url(
                          'get_object',
                          Params={'Bucket': bucket_name, 'Key': object_key},
                          ExpiresIn=900
                      )
                      logger.info("Generated presigned URL: %s", presigned_url)
                      span.set_attribute("s3.object_key", object_key)
                      return {
                          'statusCode': 200,
                          'body': json.dumps({"url": presigned_url})
                      }
                  except ClientError as e:
                      logger.error("Error generating presigned URL for object %s: %s", object_key, e)
                      return {
                          'statusCode': 500,
                          'body': json.dumps({"error": "Error generating presigned URL."})
                      }

      MainLambdaExecutionRole:
        Type: AWS::IAM::Role
        Properties:
          RoleName: MainLambdaExecutionRole
          AssumeRolePolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Principal:
                  Service:
                    - lambda.amazonaws.com
                Action:
                  - sts:AssumeRole
          ManagedPolicyArns:
            - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
          Policies:
            - PolicyName: S3BucketAccessPolicy
              PolicyDocument:
                Version: '2012-10-17'
                Statement:
                  - Effect: Allow
                    Action:
                      - s3:ListBucket
                      - s3:GetObject
                      - s3:HeadObject
                    Resource:
                      - !Sub arn:${AWS::Partition}:s3:::${S3BucketName}/*
                      - !Sub arn:${AWS::Partition}:s3:::${S3BucketName}
            - PolicyName: XRayAccessPolicy
              PolicyDocument:
                Version: '2012-10-17'
                Statement:
                  - Effect: Allow
                    Action:
                      - xray:PutTraceSegments
                      - xray:PutTelemetryRecords
                    Resource: "*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure the following prerequisites are in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with sufficient permissions to create and manage resources.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; installed on the local machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1.Clone the repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    git clone https://gitlab.com/Andr1500/api-gateway-dowload-from-s3.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2.Create an authorization token in the AWS Systems Manager Parameter Store.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws ssm put-parameter --name "AuthorizationLambdaToken" --value "token_value_secret" --type "SecureString"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
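&lt;p&gt;For illustration, the check that an authorizer Lambda might perform against this stored token can be sketched in a few lines of Python. This is a hypothetical sketch, not code from the repository; the function name and header format are assumptions:&lt;/p&gt;

```python
import hmac

def is_authorized(auth_header: str, expected_token: str) -> bool:
    """Validate an 'Authorization: Bearer <token>' header value."""
    prefix = "Bearer "
    if not auth_header or not auth_header.startswith(prefix):
        return False
    supplied = auth_header[len(prefix):]
    # compare_digest runs in constant time, avoiding timing side channels
    return hmac.compare_digest(supplied, expected_token)

print(is_authorized("Bearer token_value_secret", "token_value_secret"))  # True
print(is_authorized("Bearer wrong", "token_value_secret"))               # False
```

&lt;p&gt;In a real API Gateway Lambda authorizer, the handler would read the expected token from the Parameter Store and return an IAM policy document rather than a boolean.&lt;/p&gt;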



&lt;p&gt;3.Fill in all necessary parameters in the &lt;code&gt;infrastructure/root.yaml&lt;/code&gt; CloudFormation template and create the CloudFormation stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation create-stack \
        --stack-name api-gw-dowload-from-s3 \
        --template-body file://infrastructure/root.yaml \
        --capabilities CAPABILITY_NAMED_IAM --disable-rollback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4.Retrieve the Invoke URLs of the Stage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation describe-stacks --stack-name api-gw-dowload-from-s3 --query "Stacks[0].Outputs"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5.Copy some files to the S3 bucket for testing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws s3 cp images/apigw_lambda_s3.png  s3://s3_bucket_name/apigw_lambda_s3.png
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;6.Test the API and download the file with CURL.&lt;br&gt;
IMPORTANT! After the infrastructure is created, it can take some time for the bucket name to propagate across AWS regions. During this period, requests for an object via its presigned URL may return a “Temporary Redirect” response. In my case, even though the region was specified in the origin domain name, downloading via the presigned URL only started working correctly about a day later.&lt;br&gt;
To view the X-Ray trace map and segments timeline, navigate to the AWS Console: &lt;strong&gt;CloudWatch&lt;/strong&gt; -&amp;gt; &lt;strong&gt;X-Ray traces&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Traces&lt;/strong&gt;. The &lt;strong&gt;Traces&lt;/strong&gt; section lists trace IDs; clicking a trace ID reveals the trace map, segments timeline, and associated logs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    export APIGW_TOKEN='token_value_secret'

    curl -X GET -H "Authorization: Bearer $APIGW_TOKEN" "https://api_id.execute-api.eu-central-1.amazonaws.com/dev/list"

    curl -X GET -H "Authorization: Bearer $APIGW_TOKEN" "https://api_id.execute-api.eu-central-1.amazonaws.com/download?objectKey=apigw_lambda_s3.png"

    curl -O "https://s3_bucket_name.s3.amazonaws.com/apigw_lambda_s3.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;amp;X-Amz-Credential=ASIASAQWERTY123456%2Feu-central-1%2Fs3%2Faws4_request&amp;amp;X-Amz-Date=20241003T125740Z&amp;amp;X-Amz-Expires=600&amp;amp;X-Amz-SignedHeaders=host&amp;amp;X-Amz-Security-Token=token_value&amp;amp;X-Amz-Signature=signature_value"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
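&lt;p&gt;When a presigned URL request fails, it helps to rule out simple expiry before suspecting the regional propagation issue described above. A small helper using only the Python standard library (a sketch, not part of the project) reads the &lt;code&gt;X-Amz-Date&lt;/code&gt; and &lt;code&gt;X-Amz-Expires&lt;/code&gt; query parameters:&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse, parse_qs

def presigned_url_expiry(url: str) -> datetime:
    """Return the UTC datetime at which a SigV4 presigned URL expires."""
    qs = parse_qs(urlparse(url).query)
    signed_at = datetime.strptime(
        qs["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ"
    ).replace(tzinfo=timezone.utc)
    return signed_at + timedelta(seconds=int(qs["X-Amz-Expires"][0]))

url = ("https://s3_bucket_name.s3.amazonaws.com/apigw_lambda_s3.png"
       "?X-Amz-Date=20241003T125740Z&X-Amz-Expires=600")
print(presigned_url_expiry(url))  # 2024-10-03 13:07:40+00:00
```

&lt;p&gt;An expired presigned URL produces a 403 response; the “Temporary Redirect” discussed above is a different, propagation-related condition.&lt;/p&gt;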



&lt;p&gt;7.Clean up resources. After testing, delete the token from the Parameter Store and the CloudFormation stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws ssm delete-parameter --name "AuthorizationLambdaToken"

    aws cloudformation delete-stack --stack-name api-gw-dowload-from-s3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By leveraging AWS services like API Gateway, Lambda, and S3, along with the power of OpenTelemetry tracing, this project provides a robust solution for tracing file downloads. Using CloudFormation to manage the infrastructure as code ensures easy deployment and repeatability.&lt;/p&gt;

&lt;p&gt;If you found this post helpful and interesting, please click the button to show your support. Feel free to use and share this post. You can also support me with a &lt;a href="https://buymeacoffee.com/andrworld1500" rel="noopener noreferrer"&gt;virtual coffee&lt;/a&gt; 🙂&lt;/p&gt;

</description>
      <category>opentelemetry</category>
      <category>aws</category>
      <category>s3presignedurl</category>
    </item>
    <item>
      <title>Implementing Tracing with AWS X-Ray service and AWS X-Ray SDK for REST API Gateway and Lambda functions</title>
      <dc:creator>Andrii Shykhov</dc:creator>
      <pubDate>Thu, 13 Mar 2025 20:29:53 +0000</pubDate>
      <link>https://dev.to/andrii-shykhov/implementing-tracing-with-aws-x-ray-service-and-aws-x-ray-sdk-for-rest-api-gateway-and-lambda-lkl</link>
      <guid>https://dev.to/andrii-shykhov/implementing-tracing-with-aws-x-ray-service-and-aws-x-ray-sdk-for-rest-api-gateway-and-lambda-lkl</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This post details the implementation of tracing with AWS X-Ray and the AWS X-Ray SDK within a serverless AWS infrastructure. The focus is on integrating tracing across various AWS services, including API Gateway (REST API), Lambda functions, Systems Manager Parameter Store, and the Amazon Bedrock model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the Project:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Based on &lt;a href="https://medium.com/p/0ba51436fd02" rel="noopener noreferrer"&gt;my earlier article&lt;/a&gt; that covers interacting with Amazon Bedrock models using API Gateway and Lambda functions, this post extends the existing infrastructure by adding configurations for tracing with AWS X-Ray. Initially, the setup utilized an HTTP API, which lacks native X-Ray support; therefore, a REST API was adopted to enable tracing. Additional information on API Gateway tracing with X-Ray is available in the &lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/xray-services-apigateway.html" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;. The AWS X-Ray SDK documentation can be found &lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-python.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The Main Lambda function and IAM role configuration in &lt;code&gt;infrastructure/tracing-rest-api.yaml&lt;/code&gt; CloudFormation template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Parameters:
      BedrockModelId:
        Type: String
        Default: 'amazon.titan-text-express-v1'
      LambdaLayerVersionArn:
        Type: String
        Default: 'arn:aws:lambda:&amp;lt;region&amp;gt;:&amp;lt;account_id&amp;gt;:layer:aws-xray-sdk-layer:&amp;lt;version_number&amp;gt;'

    Resources:   
      MainLambdaFunction:
        Type: AWS::Lambda::Function
        Properties:
          FunctionName: MainLambdaFunction
          Description: Make requests to Bedrock models
          Runtime: python3.12
          Handler: index.lambda_handler
          Role: !GetAtt MainLambdaExecutionRole.Arn
          Timeout: 30
          MemorySize: 512
          TracingConfig:
            Mode: Active
          Layers:
            - !Ref LambdaLayerVersionArn
          Environment:
            Variables:
              BEDROCK_MODEL_ID: !Ref BedrockModelId
          Code:
            ZipFile: |
              import json
              import boto3
              import os
              import logging
              from botocore.exceptions import ClientError
              from aws_xray_sdk.core import patch_all, xray_recorder

              # Initialize logging
              logger = logging.getLogger()
              logger.setLevel(logging.INFO)

              # Initialize the X-Ray SDK
              patch_all()

              # Initialize the Bedrock Runtime client
              bedrock_runtime = boto3.client('bedrock-runtime')

              def lambda_handler(event, context):
                  try:
                      # Retrieve the model ID from environment variables
                      model_id = os.environ['BEDROCK_MODEL_ID']

                      # Validate the input
                      input_text = event.get("queryStringParameters", {}).get("inputText")
                      if not input_text:
                          logger.error('Input text is missing in the request')
                          raise ValueError("Input text is required in the request query parameters.")

                      # Prepare the payload for invoking the Bedrock model
                      payload = json.dumps({
                          "inputText": input_text,
                          "textGenerationConfig": {
                              "maxTokenCount": 8192,
                              "stopSequences": [],
                              "temperature": 0,
                              "topP": 1
                          }
                      })

                      logger.info('Payload for Bedrock model: %s', payload)

                      # Create a subsegment for the Bedrock model invocation
                      with xray_recorder.in_subsegment('Bedrock InvokeModel') as subsegment:
                          # Invoke the Bedrock model
                          response = bedrock_runtime.invoke_model(
                              modelId=model_id,
                              contentType="application/json",
                              accept="application/json",
                              body=payload
                          )

                          logger.info('Response from Bedrock model: %s', response)

                          # Check if the 'body' exists in the response and handle it correctly
                          if 'body' not in response or not response['body']:
                              logger.error('Response body is empty')
                              raise ValueError("Response body is empty.")

                          # Read and process the response
                          response_body = json.loads(response['body'].read().decode('utf-8'))
                          logger.info('Processed response body: %s', response_body)

                          return {
                              'statusCode': 200,
                              'body': json.dumps(response_body)
                          }

                  except ClientError as e:
                      logger.error('ClientError: %s', e)
                      return {
                          'statusCode': 500,
                          'body': json.dumps({"error": "Error interacting with the Bedrock API"})
                      }
                  except ValueError as e:
                      logger.error('ValueError: %s', e)
                      return {
                          'statusCode': 400,
                          'body': json.dumps({"error": str(e)})
                      }
                  except Exception as e:
                      logger.error('Exception: %s', e)
                      return {
                          'statusCode': 500,
                          'body': json.dumps({"error": "Internal Server Error"})
                      }

      MainLambdaExecutionRole:
        Type: AWS::IAM::Role
        Properties:
          RoleName: MainLambdaExecutionRole
          AssumeRolePolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Principal:
                  Service:
                    - lambda.amazonaws.com
                Action:
                  - sts:AssumeRole
          ManagedPolicyArns:
            - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
          Policies:
            - PolicyName: BedrockAccessPolicy
              PolicyDocument:
                Version: '2012-10-17'
                Statement:
                  - Effect: Allow
                    Action:
                      - bedrock:InvokeModel
                      - bedrock:ListFoundationModels
                    Resource: '*'
            - PolicyName: XRayAccessPolicy
              PolicyDocument:
                Version: '2012-10-17'
                Statement:
                  - Effect: Allow
                    Action:
                      - xray:PutTelemetryRecords
                      - xray:PutAnnotation
                      - xray:PutTraceSegments
                    Resource: '*'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure the following prerequisites are in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with sufficient permissions to create and manage resources.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; installed on the local machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before Starting&lt;/strong&gt;: If the infrastructure from &lt;a href="https://medium.com/p/0ba51436fd02" rel="noopener noreferrer"&gt;my earlier post&lt;/a&gt; has not been set up, follow steps 1 – 4 from this article before proceeding with the deployment steps below.&lt;/p&gt;

&lt;p&gt;1.Create a Lambda layer that includes the AWS X-Ray SDK. Use the layer’s version ARN in the &lt;code&gt;tracing-rest-api.yaml&lt;/code&gt; CloudFormation template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws lambda publish-layer-version --layer-name aws-xray-sdk-layer --zip-file fileb://infrastructure/aws-xray-sdk-layer-layer/aws-xray-sdk-layer-layer.zip --compatible-runtimes python3.12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
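&lt;p&gt;For a Python runtime, the layer zip must contain the package under a top-level &lt;code&gt;python/&lt;/code&gt; directory. The following standard-library snippet illustrates that expected layout (the file path inside the archive is illustrative):&lt;/p&gt;

```python
import io
import zipfile

# Build an in-memory zip with the directory layout Lambda expects
# for a Python layer: package contents live under "python/".
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("python/aws_xray_sdk/__init__.py", "")  # placeholder file

with zipfile.ZipFile(buf) as zf:
    print(zf.namelist())  # ['python/aws_xray_sdk/__init__.py']
```

&lt;p&gt;In practice the archive is usually produced with &lt;code&gt;pip install aws-xray-sdk -t python/&lt;/code&gt; followed by zipping the &lt;code&gt;python&lt;/code&gt; directory; the &lt;code&gt;publish-layer-version&lt;/code&gt; command above then uploads it.&lt;/p&gt;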



&lt;p&gt;2.Update the existing CloudFormation stack to enable X-Ray tracing and transition from an HTTP API to a REST API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation update-stack \
        --stack-name apigw-lambda-bedrock \
        --template-body file://infrastructure/tracing-rest-api.yaml \
        --capabilities CAPABILITY_NAMED_IAM \
        --disable-rollback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3.Retrieve the Invoke URL for the API Gateway Stage using the &lt;code&gt;retrieve_invoke_url_rest_api.sh&lt;/code&gt; script, then test the API using CURL.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ./scripts/retrieve_invoke_url_rest_api.sh

    export APIGW_TOKEN='token_value'

    curl -s -X GET -H "Authorization: Bearer $APIGW_TOKEN" "https://api_id.execute-api.eu-central-1.amazonaws.com/dev/invoke?inputText=your_question" | jq -r '.results[0].outputText'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4.To view the X-Ray trace map and segments timeline, navigate to the AWS Console: &lt;strong&gt;CloudWatch&lt;/strong&gt; -&amp;gt; &lt;strong&gt;X-Ray traces&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Traces&lt;/strong&gt;. In the &lt;strong&gt;Traces&lt;/strong&gt; section, a list of trace IDs will be displayed. Clicking on a trace ID will reveal the Trace map, Segments timeline, and associated Logs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
![X-Ray Trace Map](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/voy3jootl0gy0ufj080m.png)

![X-Ray Segments Timeline](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ev6b0d86fjdt7kv12ptg.png)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5.Clean up resources. After testing, delete the Lambda layer version, the token from the Parameter Store, and the CloudFormation stack.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws lambda delete-layer-version --layer-name aws-xray-sdk-layer --version-number &amp;lt;number&amp;gt;

aws ssm delete-parameter --name "AuthorizationLambdaToken"

aws cloudformation delete-stack --stack-name apigw-lambda-bedrock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The integration of AWS X-Ray into a serverless infrastructure provides deep insights into the performance of APIs and Lambda functions, as well as their interactions with other AWS services such as Systems Manager and Bedrock. This enhanced visibility facilitates effective debugging, performance optimization, and comprehensive system monitoring, contributing to the smooth and efficient operation of applications.&lt;/p&gt;

&lt;p&gt;If you found this post helpful and interesting, please click the button to show your support. Feel free to use and share this post. You can also support me with a &lt;a href="https://www.buymeacoffee.com/andrworld1500" rel="noopener noreferrer"&gt;virtual coffee&lt;/a&gt; 🙂&lt;/p&gt;

</description>
      <category>awsxray</category>
      <category>awsserverless</category>
    </item>
    <item>
      <title>AWS Lambda URL Invocations with IAM Authentication and Throttling Limits</title>
      <dc:creator>Andrii Shykhov</dc:creator>
      <pubDate>Thu, 13 Mar 2025 20:17:08 +0000</pubDate>
      <link>https://dev.to/andrii-shykhov/aws-lambda-url-invocations-with-iam-authentication-and-throttling-limits-1ll7</link>
      <guid>https://dev.to/andrii-shykhov/aws-lambda-url-invocations-with-iam-authentication-and-throttling-limits-1ll7</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenhuzk831r38njly9sus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenhuzk831r38njly9sus.png" alt="Image by svstudioart on Freepik" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This blog post details how to securely invoke AWS Lambda functions using Lambda URLs with AWS IAM authentication, with throttling limits adding a further layer of protection. It covers setting up the necessary infrastructure with AWS CloudFormation, generating an AWS Signature Version 4 for authenticated requests, and testing the setup from both the command line and a testing tool such as Postman.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the project:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use case scenario: invoking Lambda functions from different environments. This setup allows Lambda functions to be invoked from various environments, including non-AWS infrastructure such as local machines and on-premises servers.&lt;/p&gt;

&lt;p&gt;For unauthorized or throttled requests that don’t invoke the Lambda function, no charges will be incurred. Here are the &lt;strong&gt;key points&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Throttling and Concurrency&lt;/em&gt;: Setting &lt;code&gt;ReservedConcurrentExecutions&lt;/code&gt; will limit the number of concurrent executions. If the limit is reached, additional requests will be throttled, and no charges will be incurred for these throttled requests.&lt;br&gt;
&lt;em&gt;Authorization Failures&lt;/em&gt;: Requests that fail due to authorization errors (e.g., invalid IAM credentials) will not result in Lambda function invocation. These requests are handled only by the IAM service, and no charges will be incurred for these unauthorized requests.&lt;br&gt;
&lt;em&gt;Additional Security&lt;/em&gt;: AWS Shield Standard is automatically available for Lambda functions. More information can be found &lt;a href="https://aws.amazon.com/shield/getting-started/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;em&gt;Monitoring&lt;/em&gt;: Throttling of the Lambda function can be monitored by creating a CloudWatch alarm based on the “Throttles” metric.&lt;/p&gt;
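&lt;p&gt;As a sketch of the monitoring point above, a CloudWatch alarm on the &lt;code&gt;Throttles&lt;/code&gt; metric could be added to the CloudFormation template along these lines (the logical IDs, threshold, and period are assumptions to adapt, not values from the project):&lt;/p&gt;

```yaml
  LambdaThrottlesAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmName: lambda-url-throttles
      Namespace: AWS/Lambda
      MetricName: Throttles
      Dimensions:
        - Name: FunctionName
          Value: !Ref MainLambdaFunction   # hypothetical logical ID
      Statistic: Sum
      Period: 60
      EvaluationPeriods: 1
      Threshold: 1
      ComparisonOperator: GreaterThanOrEqualToThreshold
      TreatMissingData: notBreaching
```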

&lt;p&gt;References and useful links:&lt;br&gt;
Authenticating requests documentation: &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html" rel="noopener noreferrer"&gt;AWS S3 — Authenticating Requests&lt;/a&gt;.&lt;br&gt;
Examples of creating AWS SigV4 signatures in different programming languages: &lt;a href="https://github.com/aws-samples/sigv4-signing-examples/tree/main/no-sdk" rel="noopener noreferrer"&gt;AWS SigV4 Signing Examples&lt;/a&gt;.&lt;br&gt;
Note: Function URLs are not supported in some regions. &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/urls-configuration.html#urls-throttling" rel="noopener noreferrer"&gt;More information can be found here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1zogi5599bci1df9kad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1zogi5599bci1df9kad.png" alt="Infrastructure schema" width="622" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;generate_sigv4.sh&lt;/code&gt; script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    #!/bin/sh

    # Request parameters
    METHOD="POST"
    SERVICE="lambda"
    REQUEST_PAYLOAD='{"inputText": "Request to Lambda."}'  # simple payload

    # Environment variables
    AWS_ACCESS_KEY=$AWS_ACCESS_KEY_ID
    AWS_SECRET_KEY=$AWS_SECRET_ACCESS_KEY
    REGION=$AWS_REGION
    HOST=$LAMBDA_FUNCTION_HOST
    ENDPOINT="https://${HOST}/"

    # Check if environment variables are set
    if [ -z "$AWS_ACCESS_KEY" ] || [ -z "$AWS_SECRET_KEY" ] || [ -z "$HOST" ]; then
      echo "Error: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and LAMBDA_FUNCTION_HOST must be set as environment variables."
      exit 1
    fi

    # Get the current date and time
    AMZ_DATE=$(date -u +"%Y%m%dT%H%M%SZ")
    DATE_STAMP=$(date -u +"%Y%m%d")

    # Create a canonical request
    CANONICAL_URI="/"
    CANONICAL_QUERYSTRING=""
    CANONICAL_HEADERS="content-type:application/json\nhost:${HOST}\nx-amz-date:${AMZ_DATE}\n"
    SIGNED_HEADERS="content-type;host;x-amz-date"
    PAYLOAD_HASH=$(printf "$REQUEST_PAYLOAD" | openssl dgst -sha256 | sed 's/^.* //')
    CANONICAL_REQUEST="${METHOD}\n${CANONICAL_URI}\n${CANONICAL_QUERYSTRING}\n${CANONICAL_HEADERS}\n${SIGNED_HEADERS}\n${PAYLOAD_HASH}"

    # Create a string to sign
    ALGORITHM="AWS4-HMAC-SHA256"
    CREDENTIAL_SCOPE="${DATE_STAMP}/${REGION}/${SERVICE}/aws4_request"
    STRING_TO_SIGN="${ALGORITHM}\n${AMZ_DATE}\n${CREDENTIAL_SCOPE}\n$(printf "$CANONICAL_REQUEST" | openssl dgst -sha256 | sed 's/^.* //')"

    # Create the signing key
    kSecret=$(printf "AWS4${AWS_SECRET_KEY}" | xxd -p -c 256)
    kDate=$(printf "${DATE_STAMP}" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"$kSecret" | sed 's/^.* //')
    kRegion=$(printf "${REGION}" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"$kDate" | sed 's/^.* //')
    kService=$(printf "${SERVICE}" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"$kRegion" | sed 's/^.* //')
    kSigning=$(printf "aws4_request" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"$kService" | sed 's/^.* //')

    # Create the signature
    SIGNATURE=$(printf "$STRING_TO_SIGN" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"$kSigning" | sed 's/^.* //')

    # Create the authorization header
    AUTHORIZATION_HEADER="${ALGORITHM} Credential=${AWS_ACCESS_KEY}/${CREDENTIAL_SCOPE}, SignedHeaders=${SIGNED_HEADERS}, Signature=${SIGNATURE}"

    # Output the necessary information for test tool like Postman
    echo  "Endpoint: ${ENDPOINT}\n"
    echo  "x-amz-date: ${AMZ_DATE}\n"
    echo  "Authorization: ${AUTHORIZATION_HEADER}\n"
    echo  "Payload: ${REQUEST_PAYLOAD}\n"

    # Output the curl command
    echo  "curl -X ${METHOD} \"${ENDPOINT}\" -H \"Content-Type: application/json\" -H \"x-amz-date: ${AMZ_DATE}\" -H \"Authorization: ${AUTHORIZATION_HEADER}\" -d '${REQUEST_PAYLOAD}'\n"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
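&lt;p&gt;The signing-key derivation performed by the &lt;code&gt;kSecret&lt;/code&gt;…&lt;code&gt;kSigning&lt;/code&gt; chain in the script can be cross-checked with a few lines of standard-library Python. This is a sketch of the generic SigV4 algorithm (the example key is the public, non-secret one used in the AWS documentation), not code from the repository:&lt;/p&gt;

```python
import hashlib
import hmac

def _sign(key: bytes, msg: str) -> bytes:
    """One HMAC-SHA256 step of the SigV4 key derivation."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date_stamp: str,
                       region: str, service: str) -> bytes:
    """Chain HMACs over date, region, service, and the literal 'aws4_request'."""
    k_date = _sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _sign(k_date, region)
    k_service = _sign(k_region, service)
    return _sign(k_service, "aws4_request")

key = derive_signing_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                         "20120215", "us-east-1", "iam")
print(key.hex())
```

&lt;p&gt;The resulting 32-byte key is then used to HMAC the string-to-sign, producing the &lt;code&gt;Signature&lt;/code&gt; value placed in the &lt;code&gt;Authorization&lt;/code&gt; header.&lt;/p&gt;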



&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before starting, ensure the following requirements are met:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with permissions to create resources.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; installed on the local machine.&lt;/li&gt;
&lt;li&gt;OpenSSL installed on the test environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1.Clone the &lt;a href="https://gitlab.com/Andr1500/lambda-url-with-iam-auth.git" rel="noopener noreferrer"&gt;repository&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    git clone https://gitlab.com/Andr1500/lambda-url-with-iam-auth.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2.Check and increase the Lambda function Concurrent Executions quota (if necessary).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws service-quotas get-service-quota \
        --service-code lambda \
        --quota-code L-B99A9384

    aws service-quotas request-service-quota-increase \
        --service-code lambda \
        --quota-code L-B99A9384 \
        --desired-value 100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3.Fill in all necessary parameters in the &lt;code&gt;infrastructure/root.yaml&lt;/code&gt; CloudFormation template and create the CloudFormation stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation create-stack \
        --stack-name invoke-lambda-url-iam-auth \
        --template-body file://infrastructure/root.yaml \
        --capabilities CAPABILITY_NAMED_IAM \
        --disable-rollback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4.Retrieve outputs for setting up environment variables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation describe-stacks \
        --stack-name invoke-lambda-url-iam-auth \
        --query "Stacks[0].Outputs" --output json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5.Set the region, the Lambda function host, and the access and secret keys of the created IAM user as environment variables in the test environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    export AWS_ACCESS_KEY_ID=&amp;lt;Access Key&amp;gt;
    export AWS_SECRET_ACCESS_KEY=&amp;lt;Secret Access Key&amp;gt;
    export AWS_REGION=&amp;lt;Region&amp;gt;
    export LAMBDA_FUNCTION_HOST=&amp;lt;function-url.lambda-url-region.amazonaws.com&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;6.Generate AWS Signature v4 and POST request using the &lt;code&gt;generate_sigv4.sh&lt;/code&gt; script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ./generate_sigv4.sh

    curl -X POST "https://&amp;lt;url-id&amp;gt;.lambda-url.&amp;lt;region&amp;gt;.on.aws/" -H "Content-Type: application/json" -H "x-amz-date: 20240619T150022Z" -H "Authorization: AWS4-HMAC-SHA256 Credential=&amp;lt;ACCESS-KEY&amp;gt;/20240619/&amp;lt;region&amp;gt;/lambda/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=&amp;lt;signature-content&amp;gt;" -d '{"inputText": "Request to Lambda."}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tests from the command line and Postman:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8e6qv6s8w34dp81imp1k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8e6qv6s8w34dp81imp1k.png" alt="*Generate signature and request to the Lambda function URL from CLI*" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frztkhn8fx7s65kmytp2u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frztkhn8fx7s65kmytp2u.png" alt="*Request from Postman, correct response*" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g58shdqtv69mojbwr6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g58shdqtv69mojbwr6f.png" alt="Request from Postman, the signature is expired" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05recqmqcd8qax2lj92z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05recqmqcd8qax2lj92z.png" alt="Request from Postman, the header is incorrect" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7.Delete the CloudFormation stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation delete-stack --stack-name invoke-lambda-url-iam-auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This project demonstrates how to securely invoke AWS Lambda functions using Lambda URLs with IAM authentication. Additional security can be achieved by applying throttling limits and by monitoring the Lambda function with CloudWatch alarms.&lt;/p&gt;

&lt;p&gt;If you found this post helpful and interesting, please click the button to show your support. Feel free to use and share this post. 🙂&lt;/p&gt;

</description>
      <category>lambdafunctionurl</category>
      <category>sigv4</category>
      <category>aws</category>
    </item>
    <item>
      <title>Interacting with Amazon Bedrock Model through API Gateway and Lambda Functions</title>
      <dc:creator>Andrii Shykhov</dc:creator>
      <pubDate>Thu, 13 Mar 2025 20:00:27 +0000</pubDate>
      <link>https://dev.to/andrii-shykhov/interacting-with-amazon-bedrock-model-through-api-gateway-and-lambda-functions-1g36</link>
      <guid>https://dev.to/andrii-shykhov/interacting-with-amazon-bedrock-model-through-api-gateway-and-lambda-functions-1g36</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxw81fieome6ijqyvne4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxw81fieome6ijqyvne4.png" alt="Infrastructure Schema" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post, we will explore how to interact with an Amazon Bedrock model through a secured API Gateway and Lambda function. The API Gateway is secured using a Lambda Authorizer, ensuring that only authorized requests can access the Bedrock model. This setup provides a scalable and secure way to integrate machine learning models into applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the Project:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this project, we have set up an Amazon Bedrock model, an API Gateway, and two Lambda functions: the “Authorizer” Lambda function, which acts as an access gatekeeper, and the “Main” Lambda function, which sends requests to the Bedrock model. We also use the Systems Manager Parameter Store for storing the authorization token securely.&lt;/p&gt;
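&lt;p&gt;The gatekeeping logic of the Authorizer Lambda can be sketched as follows. This is a hypothetical simplification: the real function reads the token from the Systems Manager Parameter Store rather than an environment variable, and the event shape shown is that of an API Gateway token authorizer:&lt;/p&gt;

```python
import hmac
import os

# Hypothetical stand-in for the secret fetched from SSM Parameter Store;
# the real function would call ssm.get_parameter(..., WithDecryption=True).
EXPECTED_TOKEN = os.environ.get("AUTH_TOKEN", "token_value_secret")

def lambda_handler(event, context):
    supplied = event.get("authorizationToken", "")
    # Constant-time comparison avoids leaking token prefixes via timing.
    allowed = hmac.compare_digest(supplied, EXPECTED_TOKEN)
    # API Gateway expects an IAM policy document in the authorizer response.
    return {
        "principalId": "api-caller",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow" if allowed else "Deny",
                "Resource": event.get("methodArn", "*"),
            }],
        },
    }
```

&lt;p&gt;API Gateway caches the returned policy for the configured TTL, so the Authorizer is not invoked on every request.&lt;/p&gt;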

&lt;p&gt;Currently, AWS does not support AWS CLI access to the Amazon Bedrock service directly, and granting access to models must be done via the AWS Console. The token in the Parameter Store was created using AWS CLI because CloudFormation does not yet support the creation of parameters with the SecureString type. For more information, see the AWS CloudFormation SSM Parameter &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssm-parameter.html#cfn-ssm-parameter-type" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;All other resources — Lambda functions and API Gateway — were created using CloudFormation, ensuring that the infrastructure is managed as code and can be easily deployed and maintained.&lt;/p&gt;

&lt;p&gt;This project is based on my other project about &lt;a href="https://medium.com/p/b26a3c3c7302" rel="noopener noreferrer"&gt;serverless architecture using API Gateway, Lambda Authorizer, and Secrets Manager&lt;/a&gt;. In the current version, we store the token in Parameter Store instead of Secrets Manager to lower cost, and we added the necessary integration between the Main Lambda function and the Bedrock model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frx0fom8vju68jlx7wgue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frx0fom8vju68jlx7wgue.png" alt="Infrastructure Schema" width="782" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Main Lambda function and IAM role configuration in &lt;code&gt;infrastructure/root.yaml&lt;/code&gt; CloudFormation template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Parameters:
      BedrockModelId:
        Type: String
        Default: ''

    Resources:  
      MainLambdaFunction:
          Type: AWS::Lambda::Function
          Properties:
            FunctionName: MainLambdaFunction
            Description: Make requests to Bedrock models
            Runtime: python3.12
            Handler: index.lambda_handler
            Role: !GetAtt MainLambdaExecutionRole.Arn
            Timeout: 30
            MemorySize: 512
            Environment:
              Variables:
                BEDROCK_MODEL_ID: !Ref BedrockModelId
            Code:
              ZipFile: |
                import json
                import os
                import boto3
                from botocore.exceptions import ClientError

                bedrock_runtime = boto3.client('bedrock-runtime')

                def lambda_handler(event, context):
                    try:
                        model_id = os.environ['BEDROCK_MODEL_ID']

                        # Validate the input
                        input_text = event.get("queryStringParameters", {}).get("inputText")
                        if not input_text:
                            raise ValueError("Input text is required in the request query parameters.")

                        # Prepare the payload for invoking the Bedrock model
                        payload = json.dumps({
                            "inputText": input_text,
                            "textGenerationConfig": {
                                "maxTokenCount": 8192,
                                "stopSequences": [],
                                "temperature": 0,
                                "topP": 1
                            }
                        })

                        # Invoke the Bedrock model
                        response = bedrock_runtime.invoke_model(
                            modelId=model_id,
                            contentType="application/json",
                            accept="application/json",
                            body=payload
                        )

                        # Check if the 'body' exists in the response and handle it correctly
                        if 'body' not in response or not response['body']:
                            raise ValueError("Response body is empty.")

                        response_body = json.loads(response['body'].read().decode('utf-8'))

                        return {
                            'statusCode': 200,
                            'body': json.dumps(response_body)
                        }

                    except ClientError as e:
                        return {
                            'statusCode': 500,
                            'body': json.dumps({"error": "Error interacting with the Bedrock API"})
                        }
                    except ValueError as e:
                        return {
                            'statusCode': 400,
                            'body': json.dumps({"error": str(e)})
                        }
                    except Exception as e:
                        return {
                            'statusCode': 500,
                            'body': json.dumps({"error": "Internal Server Error"})
                        }

      MainLambdaExecutionRole:
          Type: AWS::IAM::Role
          Properties:
            RoleName: MainLambdaExecutionRole
            AssumeRolePolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: Allow
                  Principal:
                    Service:
                      - lambda.amazonaws.com
                  Action:
                    - sts:AssumeRole
            ManagedPolicyArns:
              - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
            Policies:
              - PolicyName: BedrockAccessPolicy
                PolicyDocument:
                  Version: '2012-10-17'
                  Statement:
                    - Effect: Allow
                      Action:
                        - bedrock:InvokeModel
                        - bedrock:ListFoundationModels
                      Resource: '*'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
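&lt;p&gt;The function above returns the model's JSON body unchanged. For Titan Text models the body typically contains a results list holding the generated text; a small helper to pull the text out could look like this (the response shape shown is an assumption based on the Titan Text format and differs for other model families):&lt;/p&gt;

```python
import json

def extract_output_text(body: str) -> str:
    """Return the first generated text from a Titan-style response body."""
    data = json.loads(body)
    results = data.get("results") or []
    if not results:
        raise ValueError("Response contained no results.")
    return results[0].get("outputText", "")

# Illustrative Titan-style response body (not real model output).
sample = json.dumps({
    "inputTextTokenCount": 3,
    "results": [
        {"tokenCount": 5, "outputText": "Hello from Titan", "completionReason": "FINISH"}
    ],
})
```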



&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before you start, make sure the following requirements are met:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with permissions to create resources.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; installed on your local machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1.Clone the &lt;a href="https://gitlab.com/Andr1500/api-gateway-lambda-bedrock.git" rel="noopener noreferrer"&gt;repository&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    git clone https://gitlab.com/Andr1500/api-gateway-lambda-bedrock.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2.Set up the Amazon Bedrock model.&lt;/p&gt;

&lt;p&gt;Go to the Amazon Bedrock service in the AWS Console.&lt;br&gt;
Navigate to Get Started -&amp;gt; Request model access -&amp;gt; Modify model access -&amp;gt; Choose the appropriate model available in your region (e.g., Titan Text G1 — Express) -&amp;gt; Next -&amp;gt; Submit. &lt;br&gt;
Wait a few minutes and refresh the page to see “Access granted”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1pw6k6w7inze9lbebox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1pw6k6w7inze9lbebox.png" alt="Bedrock request model access" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to Overview -&amp;gt; choose the provider of the available model -&amp;gt; choose the model -&amp;gt; see the API request for the model. The API request configuration can differ between models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp24ip3a1z15wt9rhg4q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp24ip3a1z15wt9rhg4q.png" alt="Bedrock model API request configuration" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;
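&lt;p&gt;As a sketch of how request bodies differ between models, compare the Titan Text payload used by the Main Lambda function with the Messages-style payload that Anthropic Claude models on Bedrock expect (both shapes are illustrative; consult the API request shown in the console for your model):&lt;/p&gt;

```python
import json

def titan_payload(text: str) -> str:
    # Amazon Titan Text request shape, matching the Main Lambda function.
    return json.dumps({
        "inputText": text,
        "textGenerationConfig": {
            "maxTokenCount": 8192,
            "stopSequences": [],
            "temperature": 0,
            "topP": 1,
        },
    })

def claude_payload(text: str) -> str:
    # Anthropic Claude models on Bedrock use the Messages API shape instead.
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": text}],
    })
```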

&lt;p&gt;3.Create authorization token in AWS Systems Manager Parameter Store.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws ssm put-parameter --name "AuthorizationLambdaToken" --value "token_value_secret" --type "SecureString"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4.Fill in all necessary parameters in the &lt;code&gt;infrastructure/root.yaml&lt;/code&gt; CloudFormation template and the &lt;code&gt;scripts/retrieve_invoke_url.sh&lt;/code&gt; script, then create the CloudFormation stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation create-stack \
        --stack-name apigw-lambda-bedrock \
        --template-body file://infrastructure/root.yaml \
        --capabilities CAPABILITY_NAMED_IAM \
        --disable-rollback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5.Retrieve the Invoke URL of the stage using the &lt;code&gt;scripts/retrieve_invoke_url.sh&lt;/code&gt; script.&lt;/p&gt;

&lt;p&gt;6.Test the Bedrock model, Main Lambda function, and API request from the CLI. The response content must be saved to a file because, according to the documentation for &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/bedrock-runtime/invoke-model.html" rel="noopener noreferrer"&gt;aws bedrock-runtime invoke-model&lt;/a&gt; and &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/invoke.html" rel="noopener noreferrer"&gt;aws lambda invoke&lt;/a&gt;, the outfile argument is required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpjxbwdfx7mbslt2d4kl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpjxbwdfx7mbslt2d4kl.png" alt="tests from CLI" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73irjiotxxdvp0w1lm2x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73irjiotxxdvp0w1lm2x.png" alt="test from Postman" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7.Delete the token from the SSM Parameter Store and delete the CloudFormation stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws ssm delete-parameter --name "AuthorizationLambdaToken"

    aws cloudformation delete-stack --stack-name apigw-lambda-bedrock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By leveraging API Gateway, Lambda functions, and Amazon Bedrock models, we can create a scalable and efficient solution for deploying machine learning models in a serverless environment. With the addition of a Lambda Authorizer, this solution is more secure, protecting against unauthorized access.&lt;/p&gt;

&lt;p&gt;If you found this post helpful and interesting, please click the button to show your support. Feel free to use and share this post.&lt;/p&gt;

</description>
      <category>amazonbedrock</category>
      <category>apigateway</category>
      <category>awsparameterstore</category>
    </item>
    <item>
      <title>Managing CloudFormation nested stacks with AWS CodePipeline: A GitOps approach</title>
      <dc:creator>Andrii Shykhov</dc:creator>
      <pubDate>Mon, 10 Feb 2025 09:22:00 +0000</pubDate>
      <link>https://dev.to/andrii-shykhov/managing-cloudformation-nested-stacks-with-aws-codepipeline-a-gitops-approach-137k</link>
      <guid>https://dev.to/andrii-shykhov/managing-cloudformation-nested-stacks-with-aws-codepipeline-a-gitops-approach-137k</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this blog post, we delve into creating CloudFormation nested stacks and automating their deployment using AWS CodePipeline. This discussion builds on the GitOps deployment strategy previously explored, where we detailed modeling GitOps environments and promoting releases across them using an “environment-per-folder” approach. While my earlier blog post (&lt;a href="https://medium.com/p/88e17b1dd613" rel="noopener noreferrer"&gt;link here&lt;/a&gt;) focused on updating standalone CloudFormation stacks with the CloudFormation Git Sync feature, here we expand our scope to include nested stacks, offering a more structured solution for managing complex infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the Project:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This project is structured into two main folders: codepipeline and infrastructure. The codepipeline folder contains the codepipeline_pipeline.yaml template, which establishes all required configurations for orchestrating the deployment of nested stacks. &lt;br&gt;
Key resources set up by this template include:&lt;br&gt;
&lt;strong&gt;CodeStar Connection&lt;/strong&gt;: Facilitates integration with a remote Git repository.&lt;br&gt;
&lt;strong&gt;IAM Roles and Policies&lt;/strong&gt;: Three roles are defined to securely manage resources during the pipeline operations — creating resources via CloudFormation, executing CodeBuild projects, and running CodePipeline workflows.&lt;br&gt;
&lt;strong&gt;S3 Bucket and S3 Bucket Policy&lt;/strong&gt;: Provides storage for nested stack templates and CodePipeline artifacts.&lt;br&gt;
&lt;strong&gt;CodeBuild Project&lt;/strong&gt;: Executes cfn-lint to ensure the nested stack templates adhere to best practices before deployment.&lt;br&gt;
The infrastructure folder has environment-specific folders containing simple CloudFormation templates for S3 bucket creation and associated policies. These serve as examples to illustrate the deployment process within a GitOps framework.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pipeline Overview:&lt;/strong&gt;&lt;br&gt;
The CodePipeline setup includes several stages crucial for streamlined operations:&lt;br&gt;
&lt;strong&gt;Source&lt;/strong&gt;: Monitors the Git repository for changes and triggers the pipeline.&lt;br&gt;
&lt;strong&gt;CFN-Lint&lt;/strong&gt;: Validates the CloudFormation templates against common errors and best practices.&lt;br&gt;
&lt;strong&gt;Copy-to-S3&lt;/strong&gt;: Processes and stores files from the Source stage to the S3 bucket.&lt;br&gt;
&lt;strong&gt;Deploy-CFN-stacks&lt;/strong&gt;: Handles the creation and updates of CloudFormation nested stacks for each environment.&lt;/p&gt;
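&lt;p&gt;The CFN-Lint stage delegates to a CodeBuild project. A minimal buildspec for such a project could look like the following (a sketch; the actual buildspec in the repository may differ):&lt;/p&gt;

```yaml
# Hypothetical buildspec for the cfn-lint CodeBuild project.
version: 0.2
phases:
  install:
    commands:
      - pip install cfn-lint
  build:
    commands:
      # Validate every template under infrastructure/ against best practices.
      - cfn-lint infrastructure/**/*.yaml
```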

&lt;p&gt;The project structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ├── codepipeline
    │   └── codepipeline_pipeline.yaml
    └── infrastructure
        ├── development
        │   ├── root.yaml
        │   ├── s3_bucket.yaml
        │   └── s3_bucket_policy.yaml
        ├── production
        │   ├── root.yaml
        │   ├── s3_bucket.yaml
        │   └── s3_bucket_policy.yaml
        └── staging
            ├── root.yaml
            ├── s3_bucket.yaml
            └── s3_bucket_policy.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CodePipeline pipeline configuration from codepipeline_pipeline.yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    CreateCfnStackFromRepo:
      Type: 'AWS::CodePipeline::Pipeline'
      Properties:
        Name: !Ref CodePipelineName
        RoleArn: !GetAtt CodePipelineRole.Arn
        ArtifactStore:
          Type: S3
          Location: !Ref S3BucketName
        Stages:
          - Name: Source
            Actions:
              - Name: Source
                ActionTypeId:
                  Category: Source
                  Owner: AWS
                  Provider: CodeStarSourceConnection
                  Version: '1'
                RunOrder: 1
                Configuration:
                  BranchName: !Ref BranchName
                  ConnectionArn: !GetAtt GitLabConnection.ConnectionArn
                  DetectChanges: 'true'
                  FullRepositoryId: !Ref FullRepositoryId
                  OutputArtifactFormat: CODE_ZIP
                OutputArtifacts:
                  - Name: SourceArtifact
                Namespace: SourceVariables
          - Name: CFN-Lint
            Actions:
              - Name: Run-CFN-Lint
                ActionTypeId:
                  Category: Build
                  Owner: AWS
                  Provider: CodeBuild
                  Version: '1'
                Configuration:
                  ProjectName: !Ref CfnlintCodeBuildProject
                InputArtifacts:
                  - Name: SourceArtifact
                OutputArtifacts:
                  - Name: CflintArtifact
                RunOrder: 1
          - Name: Copy-to-S3
            Actions:
              - Name: Copy-to-S3
                ActionTypeId:
                  Category: Deploy
                  Owner: AWS
                  Provider: S3
                  Version: '1'
                RunOrder: 1
                Configuration:
                  BucketName: !Ref S3BucketName
                  Extract: 'true'
                InputArtifacts:
                  - Name: SourceArtifact
          - Name: Deploy-CFN-stacks
            Actions:
              - Name: DeployDevelopmentStack
                ActionTypeId:
                  Category: Deploy
                  Owner: AWS
                  Provider: CloudFormation
                  Version: '1'
                Configuration:
                  ActionMode: CREATE_UPDATE
                  Capabilities: 'CAPABILITY_NAMED_IAM,CAPABILITY_AUTO_EXPAND'
                  StackName: !Sub '${CodePipelineName}-development'
                  TemplatePath: SourceArtifact::infrastructure/development/root.yaml
                  RoleArn: !GetAtt CloudFormationExecutionRole.Arn
                  ParameterOverrides: |
                    {
                      "Environment": "development"
                    }
                InputArtifacts:
                  - Name: SourceArtifact
                RunOrder: 1
              - Name: DeployStagingStack
                ActionTypeId:
                  Category: Deploy
                  Owner: AWS
                  Provider: CloudFormation
                  Version: '1'
                Configuration:
                  ActionMode: CREATE_UPDATE
                  Capabilities: 'CAPABILITY_NAMED_IAM,CAPABILITY_AUTO_EXPAND'
                  StackName: !Sub '${CodePipelineName}-staging'
                  TemplatePath: SourceArtifact::infrastructure/staging/root.yaml
                  RoleArn: !GetAtt CloudFormationExecutionRole.Arn
                  ParameterOverrides: |
                    {
                      "Environment": "staging"
                    }
                InputArtifacts:
                  - Name: SourceArtifact
                RunOrder: 1
              - Name: DeployProductionStack
                ActionTypeId:
                  Category: Deploy
                  Owner: AWS
                  Provider: CloudFormation
                  Version: '1'
                Configuration:
                  ActionMode: CREATE_UPDATE
                  Capabilities: 'CAPABILITY_NAMED_IAM,CAPABILITY_AUTO_EXPAND'
                  StackName: !Sub '${CodePipelineName}-production'
                  TemplatePath: SourceArtifact::infrastructure/production/root.yaml
                  RoleArn: !GetAtt CloudFormationExecutionRole.Arn
                  ParameterOverrides: |
                    {
                      "Environment": "production"
                    }
                InputArtifacts:
                  - Name: SourceArtifact
                RunOrder: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CodePipeline pipeline stages schema:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj61wcihub8ylxjcxp8f8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj61wcihub8ylxjcxp8f8.png" alt="part 1" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foobdml5n840koheejg6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foobdml5n840koheejg6q.png" alt="CodePipeline pipeline stages" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before you start, make sure the following requirements are met:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with permissions to create resources.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; installed on your local machine.&lt;/li&gt;
&lt;li&gt;Account in any Git provider for remote Git repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the &lt;a href="https://gitlab.com/Andr1500/cloudformation_stack_from_codepipeline" rel="noopener noreferrer"&gt;repository&lt;/a&gt; to get started.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    git clone https://gitlab.com/Andr1500/cloudformation_stack_from_codepipeline.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Set up the required AWS and GitLab integration:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;a. Deploy a CloudFormation stack with necessary services and roles:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation create-stack \
        --stack-name codepipeline-pipeline-cfn \
        --template-body file://codepipeline/codepipeline_pipeline.yaml \
        --capabilities CAPABILITY_NAMED_IAM --disable-rollback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;b. Authorise the CodeStar connection via the AWS Console under CodePipeline settings to link to your Git repository.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;p&gt;Create CloudFormation nested stacks. Update the templates in the infrastructure folders as needed, then commit and push these changes. The pipeline will handle creating the stacks automatically.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update nested stacks. Always use Git to make changes to your templates and push them to apply. Avoid using the AWS Management Console for updates. To move updates from one environment (like development) to another (like staging), copy the files, commit, and push:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    cp -rf infrastructure/development/* infrastructure/staging/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;Delete nested stacks. If you no longer need the nested stacks, they can be deleted through the AWS Management Console, with the AWS CLI, or by configuring the pipeline to perform the deletion. To enable deletion via the pipeline, modify the Deploy-CFN-stacks stage in your pipeline configuration by changing the ActionMode from &lt;strong&gt;CREATE_UPDATE&lt;/strong&gt; to &lt;strong&gt;DELETE_ONLY&lt;/strong&gt;. This instructs the pipeline to remove the stacks rather than create or update them.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS CodePipeline is a powerful tool that can be customized for complex deployment tasks like managing CloudFormation nested stacks. For enhanced oversight, consider setting up a Notification rule linked to an SNS topic to monitor pipeline activities. You can find the setup for this in my &lt;a href="https://gitlab.com/Andr1500/ssm_runbook_bluegreen/-/tree/main?ref_type=heads" rel="noopener noreferrer"&gt;other project here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you found this post helpful and interesting, please click the reaction button below to show your support for the author. Feel free to use and share this post! 🙂&lt;/p&gt;

</description>
      <category>awscodepipeline</category>
      <category>gitops</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>GitOps deployment strategy with CloudFormation’s Git Sync feature</title>
      <dc:creator>Andrii Shykhov</dc:creator>
      <pubDate>Mon, 10 Feb 2025 09:14:00 +0000</pubDate>
      <link>https://dev.to/andrii-shykhov/gitops-deployment-strategy-with-cloudformations-git-sync-feature-2d0h</link>
      <guid>https://dev.to/andrii-shykhov/gitops-deployment-strategy-with-cloudformations-git-sync-feature-2d0h</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this blog post, we walk through the process of modeling GitOps environments and promoting releases between them using AWS CloudFormation’s Git Sync feature. We use the “environment-per-folder” approach; more information on how to model GitOps environments and promote releases between them is in &lt;a href="https://codefresh.io/blog/how-to-model-your-gitops-environments-and-promote-releases-between-them/" rel="noopener noreferrer"&gt;this&lt;/a&gt; article. More information about the CloudFormation Git Sync concept, prerequisites, and walkthrough &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/git-sync.html" rel="noopener noreferrer"&gt;is here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the project:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this project we have two folders. The folder git_connection contains the CloudFormation template git_connection.yaml, which creates all the configuration necessary for creating CloudFormation stacks with the Git Sync option. This stack creates the following resources: a &lt;strong&gt;CodeStar&lt;/strong&gt; connection to our GitLab account; two &lt;strong&gt;IAM roles&lt;/strong&gt; (one to update the stack from the Git repository and one for all operations performed on the stack); and an &lt;strong&gt;SSM parameter&lt;/strong&gt; for the S3 bucket prefix name. The folder infrastructure contains environment folders, each with a deployment file and a template file. The template file holds a simple configuration for creating an S3 bucket with an S3 bucket policy. We assume the same template file is used for all environments, with environment-specific parameters specified in the deployment files.&lt;/p&gt;

&lt;p&gt;The project structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ├── git_connection
    │   └── git_connection.yaml
    └── infrastructure
        ├── development
        │   ├── deployment_parameters.yaml
        │   └── s3bucket.yaml
        ├── production
        │   ├── deployment_parameters.yaml
        │   └── s3bucket.yaml
        └── staging
            ├── deployment_parameters.yaml
            └── s3bucket.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
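&lt;p&gt;Each deployment file pairs a template path with stack parameters for its environment. A minimal deployment_parameters.yaml for development could look like this (a sketch; the parameter names and tags are illustrative):&lt;/p&gt;

```yaml
# Hypothetical Git Sync deployment file for the development environment.
template-file-path: infrastructure/development/s3bucket.yaml
parameters:
  Environment: development
tags:
  ManagedBy: cloudformation-git-sync
```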



&lt;p&gt;The CloudFormation git_connection.yaml template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    AWSTemplateFormatVersion: '2010-09-09'
    Description: 'Connect GitLab repository with AWS'

    Parameters:
      ConnectionName:
        Type: String
        Default: 'Gitlab-to-CloudFormation'
      S3BucketPrefixName:
        Type: String
        Default: 'cf-app-files'

    Resources:
      GitLabConnection:
        Type: 'AWS::CodeStarConnections::Connection'
        Properties:
          ConnectionName: !Ref ConnectionName
          ProviderType: 'GitLab'

      CloudFormationS3AccessRole:
        Type: AWS::IAM::Role
        Properties:
          RoleName: CloudFormationS3AccessRole
          AssumeRolePolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Principal:
                  Service: 
                    - cloudformation.amazonaws.com
                Action: 
                  - sts:AssumeRole
          Policies:
            - PolicyName: CloudFormationS3ManagementPolicy
              PolicyDocument:
                Version: '2012-10-17'
                Statement:
                  - Effect: Allow
                    Action:
                      - s3:CreateBucket
                      - s3:DeleteBucket
                      - s3:PutBucketPolicy
                      - s3:GetBucketPolicy
                      - s3:ListBucket
                      - s3:PutBucketTagging
                      - s3:DeleteBucketPolicy
                    Resource: '*'
                  - Effect: Allow
                    Action:
                      - ssm:GetParameters
                    Resource: '*'
                  - Effect: Allow
                    Action:
                      - cloudformation:*
                    Resource: '*'

      GitSyncRole:
        Type: AWS::IAM::Role
        Properties:
          RoleName: CFNGitSyncRole
          AssumeRolePolicyDocument:
            Version: 2012-10-17
            Statement:
              - Sid: CfnGitSyncTrustPolicy
                Effect: Allow
                Principal:
                  Service: cloudformation.sync.codeconnections.amazonaws.com
                Action: sts:AssumeRole
          Policies:
            - PolicyName: GitSyncPermissions
              PolicyDocument:
                Version: 2012-10-17
                Statement:
                  - Sid: SyncToCloudFormation
                    Effect: Allow
                    Action:
                      - cloudformation:CreateChangeSet
                      - cloudformation:DeleteChangeSet
                      - cloudformation:DescribeChangeSet
                      - cloudformation:DescribeStackEvents
                      - cloudformation:DescribeStacks
                      - cloudformation:ExecuteChangeSet
                      - cloudformation:GetTemplate
                      - cloudformation:ListChangeSets
                      - cloudformation:ListStacks
                      - cloudformation:ValidateTemplate
                    Resource: '*'
                  - Sid: PolicyForManagedRules
                    Effect: Allow
                    Action:
                      - events:PutRule
                      - events:PutTargets
                    Resource: '*'
                    Condition:
                      StringEquals:
                        events:ManagedBy: cloudformation.sync.codeconnections.amazonaws.com
                  - Sid: PolicyForDescribingRule
                    Effect: Allow
                    Action: events:DescribeRule
                    Resource: '*'

      SsmS3BucketPrefixName:
        Type: AWS::SSM::Parameter
        Properties:
          Name: S3BucketPrefixName
          Type: String
          Value: !Ref S3BucketPrefixName
          Description: "Prefix of the S3 bucket name for git sync CF stacks"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The CloudFormation s3bucket.yaml template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    AWSTemplateFormatVersion: '2010-09-09'
    Description: 'S3 bucket with a dynamically created name and apply a policy for access from EC2 instances'

    Parameters:
      S3BucketPrefixName:
        Type: AWS::SSM::Parameter::Value&amp;lt;String&amp;gt;
        Default: ''
      Environment:
        Description: List of available environments
        Type: String
        Default: dev
        AllowedValues:
          - dev
          - stag
          - prod
        ConstraintDescription: Use valid environment [dev, stag, prod]

    Mappings:
      EnvironmentLabel:
        dev:
          label: development
        stag:
          label: staging
        prod:
          label: production

    Resources:
      S3BucketFiles:
        Type: AWS::S3::Bucket
        Properties:
          BucketName:
            !Sub
              - '${S3BucketPrefixName}-${UsedEnvironmentLabel}'
              - UsedEnvironmentLabel: !FindInMap [ EnvironmentLabel, !Ref Environment, label ]
          Tags:
            - Key: Name
              Value:
                !Sub
                  - '${S3BucketPrefixName}-${UsedEnvironmentLabel}'
                  - UsedEnvironmentLabel: !FindInMap [ EnvironmentLabel, !Ref Environment, label ]
            - Key: Environment
              Value: !FindInMap [ EnvironmentLabel, !Ref Environment, label ]

      S3BucketPolicy:
        Type: AWS::S3::BucketPolicy
        Properties:
          Bucket: !Ref S3BucketFiles
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Principal:
                  Service: 
                    - ec2.amazonaws.com
                Action:
                  - s3:GetObject
                Resource: 
                  !Sub
                    - 'arn:${AWS::Partition}:s3:::${S3BucketPrefixName}-${UsedEnvironmentLabel}/*'
                    - UsedEnvironmentLabel: !FindInMap [ EnvironmentLabel, !Ref Environment, label ]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The CloudFormation development/deployment_parameters.yaml template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    template-file-path: './infrastructure/development/s3bucket.yaml'

    parameters:
      S3BucketPrefixName: 'S3BucketPrefixName'
      Environment: 'dev'

    tags:
      InfraDeploymentProcess: 'cf stack with git sync option for dev env'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
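The bucket name in s3bucket.yaml is built by combining the SSM-provided prefix with the environment label from the Mappings section. A minimal Python sketch of that resolution logic (the function name is illustrative; the mapping values come straight from the template):

```python
# Mirrors the EnvironmentLabel mapping and the !Sub/!FindInMap
# expression in s3bucket.yaml.
ENVIRONMENT_LABELS = {
    "dev": "development",
    "stag": "staging",
    "prod": "production",
}

def resolve_bucket_name(prefix: str, environment: str) -> str:
    """Build the bucket name the way !Sub + !FindInMap do in the template."""
    if environment not in ENVIRONMENT_LABELS:
        # Corresponds to the template's ConstraintDescription.
        raise ValueError("Use valid environment [dev, stag, prod]")
    return f"{prefix}-{ENVIRONMENT_LABELS[environment]}"

print(resolve_bucket_name("cf-app-files", "dev"))  # cf-app-files-development
```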



&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before you start, make sure the following requirements are met:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with permissions to create resources.&lt;/li&gt;
&lt;li&gt;AWS CLI installed on your local machine.&lt;/li&gt;
&lt;li&gt;An account with a Git provider that hosts your remote repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repository
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    git clone https://gitlab.com/Andr1500/cloudformation_sync_from_git.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Create the configuration needed to connect GitLab with your AWS account.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;a. Create CloudFormation stack with CodeStar connection and necessary IAM roles:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation create-stack \
        --stack-name cf-git-sync-config \
        --template-body file://git_connection/git_connection.yaml \
        --capabilities CAPABILITY_NAMED_IAM
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;b. Make the necessary changes, commit them, and push them to your remote repository.&lt;/p&gt;

&lt;p&gt;c. Update the pending CodeStar connection. In the AWS Console, go to CodePipeline -&amp;gt; Settings -&amp;gt; Connections -&amp;gt; choose the created connection (in Pending status) -&amp;gt; Update pending connection -&amp;gt; then authorize with your Git provider and grant the necessary access to the repository.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;PreBuild step (optional) before creating the CloudFormation Git Sync stack.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you previously linked the repository to the AWS account through a CodeStar connection that was later deleted and recreated, stack creation can fail: the repository will still be linked to the “old” connection. In that case, unlink the repository with the AWS CLI by deleting the “old” repository link, then link it again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws codestar-connections list-repository-links

    aws codestar-connections delete-repository-link --repository-link-id 1234567-76a3-4f20-858f-qwerty12345
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4hngyd1zn3a77ph687v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4hngyd1zn3a77ph687v.png" alt="CF issue with the CodeStar connection" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Create CloudFormation stacks for the different environments.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Open the AWS Console, next go: CloudFormation -&amp;gt; Create stack (button) -&amp;gt; With new resources -&amp;gt; Choose options “Template is ready” and “Sync from Git” -&amp;gt; Next -&amp;gt; Provide stack name (for example, “cf-git-deployment”), choose option “I am providing my own file in my repository”, choose option “Link a Git repository” (if you create it for the first time), choose the repository provider, connection, repository, branch, provide the deployment file path, choose the IAM role for CloudFormation to update the stack from the Git repository -&amp;gt; Next -&amp;gt; choose the IAM role for CloudFormation to use for all operations performed on the stack, choose the stack failure options -&amp;gt; Next -&amp;gt; Review and create -&amp;gt; Submit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpoerd38ykq16n22axwm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpoerd38ykq16n22axwm2.png" alt="CF creation 1 step" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17usju0u8hzo2162q2b2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17usju0u8hzo2162q2b2.png" alt="CF creation 2 step" width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fssctk8k91k2ts68zj0o5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fssctk8k91k2ts68zj0o5.png" alt="CF creation 3 step" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Repeat a similar procedure to create CloudFormation stacks for the other environments; in this case, specify a different stack name and deployment file path for each.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Update the CloudFormation stacks and promote changes between environments.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;IMPORTANT!&lt;/strong&gt; All changes to the created CloudFormation stacks (creating new resources, updating existing ones) should be made through the Git repository, not with the CloudFormation “Update” button.&lt;/p&gt;

&lt;p&gt;a. Make the necessary changes in the CF template file for the dev environment, commit, and push. Verify that the changes were applied to the CloudFormation stack for the development environment.&lt;/p&gt;

&lt;p&gt;b. Copy the template file from one environment to another (for example, from dev to staging), commit, and push. Verify that the changes were applied to the CloudFormation stack for the staging environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    cp infrastructure/development/s3bucket.yaml infrastructure/staging/s3bucket.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In my opinion, the CloudFormation Git Sync feature currently has both advantages and disadvantages. On the plus side, we have a single source of truth for the code and can easily manage changes to our infrastructure. On the minus side, we can’t create nested stacks with this feature, because the TemplateURL parameter of the AWS::CloudFormation::Stack resource supports only S3 bucket URLs. We also can’t integrate the created webhook into a GitLab CI/CD pipeline, because the webhook’s secret token is hidden and there is no option to reuse an existing webhook in the pipeline. Finally, we can’t create a CloudFormation Git Sync stack with the AWS CLI create-stack command because, as with nested stacks, --template-url supports only S3 bucket URLs.&lt;/p&gt;

&lt;p&gt;If you found this post helpful and interesting, please click the reaction button below to show your support. Feel free to use and share it.&lt;/p&gt;

</description>
      <category>cloudformation</category>
      <category>gitops</category>
      <category>codestarconnection</category>
    </item>
    <item>
      <title>Monitoring EC2 instances deployed with Blue/Green deployment</title>
      <dc:creator>Andrii Shykhov</dc:creator>
      <pubDate>Mon, 03 Feb 2025 22:42:38 +0000</pubDate>
      <link>https://dev.to/andrii-shykhov/monitoring-ec2-instances-deployed-with-bluegreen-deployment-17dp</link>
      <guid>https://dev.to/andrii-shykhov/monitoring-ec2-instances-deployed-with-bluegreen-deployment-17dp</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post, we build a configuration for monitoring EC2 instances deployed with the Blue/Green deployment strategy. The configuration consists of the following resources: &lt;br&gt;
&lt;strong&gt;Lambda function&lt;/strong&gt; with the necessary access, which does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;gets the instance IDs based on the instance names,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;gets the metric values for each instance from the AWS/EC2 CloudWatch namespace,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;enriches the metric data with additional information,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;puts the transformed metric data into the custom ws-deployment namespace;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;EventBridge schedule rule&lt;/strong&gt; which runs the lambda function every 5 minutes;&lt;br&gt;
&lt;strong&gt;CloudWatch alarms&lt;/strong&gt; which monitor the EC2 instances based on the metrics from the custom ws-deployment namespace.&lt;/p&gt;

&lt;p&gt;The reason for this configuration is that the AWS/EC2 namespace exposes only InstanceId as a metric dimension, not InstanceName; more information about CloudWatch metrics is &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Dimension" rel="noopener noreferrer"&gt;here&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudwatch list-metrics --namespace AWS/EC2 --metric-name CPUUtilization   
    {
        "Metrics": [
            {
                "Namespace": "AWS/EC2",
                "MetricName": "CPUUtilization",
                "Dimensions": [
                    {
                        "Name": "InstanceId",
                        "Value": "i-1234567890abcde"
                    }
                ]
            },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This post is the third part of a series of posts about Blue/Green deployment on AWS EC2 instances with the Systems Manager Automation runbook; the first part is &lt;a href="https://medium.com/p/ce1643ef642c" rel="noopener noreferrer"&gt;here&lt;/a&gt;, and the second part is &lt;a href="https://medium.com/p/90b3c65885db" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the project:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All infrastructure is created with the CloudFormation template &lt;code&gt;infrastructure/ec2_monitoring.yaml&lt;/code&gt; and is deployed independently of the &lt;code&gt;ec2-bluegreen-deployment&lt;/code&gt; stack. The Systems Manager Automation runbook configuration creates only one EC2 instance, but the Lambda function can work with many instances; to do so, specify the instance names as a comma-separated list in the “InstanceNames” parameter.&lt;/p&gt;
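The comma-separated parameter values are parsed inside the Lambda function with a split-and-strip pass; a standalone sketch of that parsing (the two-name value below is a hypothetical example, the template default is a single 'ws-instance'):

```python
def parse_csv_param(raw: str) -> list[str]:
    """Split a comma-separated CloudFormation parameter into clean values."""
    return [item.strip() for item in raw.split(",") if item.strip()]

# Hypothetical two-instance value for illustration.
print(parse_csv_param("ws-instance-blue, ws-instance-green"))
```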

&lt;p&gt;&lt;code&gt;ec2_monitoring.yaml&lt;/code&gt; template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    AWSTemplateFormatVersion: '2010-09-09'
    Description: 'CloudWatch metrics and alarms for monitoring the deployed EC2 instances'

    Parameters:
      TransformMetricsLfName:
        Type: String
        Default: 'TransformEc2Metrics'
      CustomNamespace:
        Type: String
        Default: 'ws-deployment'
      InstanceNames:
        Type: String
        Description: 'Comma-separated list of the instance names'
        Default: 'ws-instance'
      MetricNames:
        Type: String
        Description: 'Comma-separated list of the metric names'
        Default: 'CPUUtilization,StatusCheckFailed_Instance,StatusCheckFailed_System'
      MetricUnits:
        Type: String
        Description: 'Comma-separated list of the metric units'
        Default: 'Percent,Count,Count'
      SnsTopicName:
        Type: String
        Default: 'blue-green-deployment-notifications'

    Resources:
    #####################################
    #  Lambda Function configuration
    #####################################
      TransformEc2MetricsRole:
        Type: AWS::IAM::Role
        Properties:
          RoleName: TransformEc2MetricsRole
          AssumeRolePolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Principal:
                  Service: lambda.amazonaws.com
                Action: sts:AssumeRole
          ManagedPolicyArns:
            - 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
          Policies:
            - PolicyName: TransformEc2Metricspolicy
              PolicyDocument:
                Version: '2012-10-17'
                Statement:
                  - Effect: Allow
                    Action:
                      - ec2:DescribeInstances
                      - cloudwatch:GetMetricStatistics
                      - cloudwatch:PutMetricData
                      - cloudwatch:GetMetricData
                    Resource: '*'

      LambdaLogsGroup:
        Type: AWS::Logs::LogGroup
        DeletionPolicy: Delete
        UpdateReplacePolicy: Retain
        Properties:
          LogGroupName: !Sub '/aws/lambda/${TransformMetricsLfName}'
          RetentionInDays: 7

      TransformCustomMetrics:
        Type: AWS::Lambda::Function
        Properties:
          FunctionName: !Ref TransformMetricsLfName
          Description: 'transforming and processing metrics from AWS/EC2 namespace'
          Runtime: python3.12
          Handler: index.lambda_handler
          Timeout: 30
          Role: !GetAtt TransformEc2MetricsRole.Arn
          LoggingConfig:
            LogGroup: !Sub '/aws/lambda/${TransformMetricsLfName}'
          Environment:
            Variables:
              instance_names: !Ref InstanceNames
              metric_names: !Ref MetricNames
              metric_units: !Ref MetricUnits
              custom_namespace: !Ref CustomNamespace
          Code:
            ZipFile: |
              import boto3
              import os
              from datetime import datetime, timedelta

              def lambda_handler(event, context):
                  try:
                      instance_names = [name.strip() for name in os.environ['instance_names'].split(',')]
                      metric_names =  [metric.strip() for metric in os.environ['metric_names'].split(',')]
                      metric_units =  [unit.strip() for unit in os.environ['metric_units'].split(',')]
                      custom_namespace = os.environ['custom_namespace']

                      # Initialize EC2 and CloudWatch clients
                      ec2_client = boto3.client('ec2')
                      cloudwatch_client = boto3.client('cloudwatch')

                      # Get instance IDs based on the instance names
                      instance_ids, instance_names_result = get_instance_ids(instance_names, ec2_client)
                      if not instance_ids:
                          print("[INFO] No instances found for transforming metrics.")
                          return

                      # Get metrics for each instance
                      for instance_id, instance_name in zip(instance_ids, instance_names_result):
                          metrics = get_instance_metrics(instance_id, metric_names, metric_units, instance_name, cloudwatch_client)

                          # Put formatted metrics to custom CloudWatch namespace
                          put_custom_metrics(metrics, custom_namespace, cloudwatch_client)

                  except Exception as e:
                      print(f"Error with proceeding metrics transformation: {str(e)}")

              def get_instance_ids(instance_names, ec2_client):
                  instance_ids = []
                  instance_names_result = []

                  for instance_name in instance_names:
                      response = ec2_client.describe_instances(
                          Filters=[
                              {'Name': 'tag:Name', 'Values': [instance_name]},
                              {'Name': 'instance-state-name', 'Values': ['running']}
                          ]
                      )

                      # Extract instance IDs from the response
                      ids = [instance['InstanceId'] for reservation in response['Reservations'] for instance in reservation['Instances']]

                      # append instance IDs
                      if ids:
                          instance_ids.extend(ids)
                          instance_names_result.append(instance_name)

                  return instance_ids, instance_names_result

              def get_instance_metrics(instance_id, metric_names, metric_units, instance_name, cloudwatch_client):
                  end_time = datetime.utcnow()
                  start_time = end_time - timedelta(minutes=10)

                  metrics_dict = {}

                  for metric_name, unit in zip(metric_names, metric_units):
                      # take necessary metric values
                      id_for_query = metric_name.lower()

                      query = {
                          "Id": id_for_query,
                          "MetricStat": {
                              "Metric": {
                                  "Namespace": "AWS/EC2",
                                  "MetricName": metric_name,
                                  "Dimensions": [
                                      {"Name": "InstanceId", "Value": instance_id}
                                  ]
                              },
                              "Period": 300,
                              "Stat": "Average",
                              "Unit": unit
                          },
                          "ReturnData": True
                      }

                      response = cloudwatch_client.get_metric_data(
                          MetricDataQueries=[query],
                          StartTime=start_time,
                          EndTime=end_time
                      )

                      # Extract data from the response
                      metric_data_results = response.get('MetricDataResults', [])

                      if not metric_data_results:
                          print(f"No data available for metric: {metric_name} related to {instance_name}")
                          continue

                      values = metric_data_results[0].get('Values', [])

                      if not values:
                          print(f"No values available for metric: {metric_name} related to {instance_name}")
                          continue

                      # Get the latest value
                      latest_value = values[-1]

                      # Combine metric name, value, unit, instance name into a dictionary
                      metrics_dict[id_for_query] = {
                          'MetricName': metric_name,
                          'Value': latest_value,
                          'Unit': unit,
                          'InstanceId': instance_id,
                          'InstanceName': instance_name
                      }
                  return metrics_dict

              def put_custom_metrics(metrics_dict, custom_namespace, cloudwatch_client):
                  for metric_id, metric_info in metrics_dict.items():
                      metric_name = metric_info['MetricName']
                      value = metric_info['Value']
                      dimensions = [
                          {'Name': 'InstanceName', 'Value': metric_info['InstanceName']}
                      ]
                      unit = metric_info['Unit']
                      instance_name = metric_info['InstanceName']

                      response = cloudwatch_client.put_metric_data(
                          Namespace=custom_namespace,
                          MetricData=[
                              {
                                  'MetricName': metric_name,
                                  'Dimensions': dimensions,
                                  'Value': value,
                                  'Unit': unit
                              }
                          ]
                      )

                      # Print information about the success or failure process
                      if response['ResponseMetadata']['HTTPStatusCode'] == 200:
                          print(f"Successfully put metric data for {metric_name} in {custom_namespace} related to {instance_name}")
                      else:
                          print(f"Failed to put metric data for {metric_name} in {custom_namespace}. Response: {response}")

      LambdaInvokePermission:
        Type: AWS::Lambda::Permission
        Properties:
          Action: lambda:InvokeFunction
          FunctionName: !Ref TransformCustomMetrics
          Principal: events.amazonaws.com
          SourceArn: !GetAtt ScheduleRule.Arn

      ScheduleRule:
        Type: AWS::Events::Rule
        Properties:
          Name: TransformCustomMetricsScheduleRule
          ScheduleExpression: 'rate(5 minutes)'
          Targets:
            - Arn: !GetAtt TransformCustomMetrics.Arn
              Id: TransformCustomMetricsTarget

    #####################################
    #  CloudWatch Alarms
    #####################################
      CPUAlarm: 
        Type: AWS::CloudWatch::Alarm
        Properties:
          AlarmName: 
            !Sub 
              - '${InstanceName} - High CPU Usage'
              - InstanceName: !Select [0, !Split [",", !Ref InstanceNames]]
          AlarmDescription: 'High CPU Usage'
          AlarmActions:
          - !Sub 'arn:${AWS::Partition}:sns:${AWS::Region}:${AWS::AccountId}:${SnsTopicName}'
          OKActions:
          - !Sub 'arn:${AWS::Partition}:sns:${AWS::Region}:${AWS::AccountId}:${SnsTopicName}'
          MetricName: !Select [0, !Split [",", !Ref MetricNames]]
          Unit: !Select [0, !Split [",", !Ref MetricUnits]]
          Namespace: !Ref CustomNamespace
          Statistic: Average
          Period: 300
          EvaluationPeriods: 3
          Threshold: 90
          ComparisonOperator: GreaterThanOrEqualToThreshold
          Dimensions:
          - Name: InstanceName
            Value: !Select [0, !Split [",", !Ref InstanceNames]]

      SystemStatusAlarm:
        Type: AWS::CloudWatch::Alarm
        Properties:
          AlarmName: 
            !Sub 
              - '${InstanceName} - System Status Check Failed'
              - InstanceName: !Select [0, !Split [",", !Ref InstanceNames]]
          AlarmDescription: 'System Status Check Failed'
          Namespace: !Ref CustomNamespace
          MetricName: !Select [2, !Split [",", !Ref MetricNames]]
          Unit: !Select [2, !Split [",", !Ref MetricUnits]]
          Statistic: Minimum
          Period: 300
          EvaluationPeriods: 1
          ComparisonOperator: GreaterThanThreshold
          Threshold: 0
          AlarmActions:
          - !Sub 'arn:${AWS::Partition}:sns:${AWS::Region}:${AWS::AccountId}:${SnsTopicName}'
          OKActions:
          - !Sub 'arn:${AWS::Partition}:sns:${AWS::Region}:${AWS::AccountId}:${SnsTopicName}'
          Dimensions:
          - Name: InstanceName
            Value: !Select [0, !Split [",", !Ref InstanceNames]]

      InstanceStatusAlarm:
        Type: AWS::CloudWatch::Alarm
        Properties:
          AlarmName: 
            !Sub 
              - '${InstanceName} - Instance Status Check Failed'
              - InstanceName: !Select [0, !Split [",", !Ref InstanceNames]]
          AlarmDescription: 'Instance Status Check Failed'
          Namespace: !Ref CustomNamespace
          MetricName: !Select [1, !Split [",", !Ref MetricNames]]
          Unit: !Select [1, !Split [",", !Ref MetricUnits]]
          Statistic: Minimum
          Period: 300
          EvaluationPeriods: 1
          ComparisonOperator: GreaterThanThreshold
          Threshold: 0
          AlarmActions:
          - !Sub 'arn:${AWS::Partition}:sns:${AWS::Region}:${AWS::AccountId}:${SnsTopicName}'
          OKActions:
          - !Sub 'arn:${AWS::Partition}:sns:${AWS::Region}:${AWS::AccountId}:${SnsTopicName}'
          Dimensions:
          - Name: InstanceName
            Value: !Select [0, !Split [",", !Ref InstanceNames]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
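The alarms pick the Nth entries from MetricNames and MetricUnits with !Select/!Split, so the two parameter lists must stay index-aligned. A small Python check of that pairing, using the template defaults:

```python
# Template defaults for MetricNames and MetricUnits.
metric_names = "CPUUtilization,StatusCheckFailed_Instance,StatusCheckFailed_System".split(",")
metric_units = "Percent,Count,Count".split(",")

# Equivalent of !Select [n, !Split [",", !Ref ...]] for each alarm:
# index n yields the (metric, unit) pair that alarm n monitors.
assert len(metric_names) == len(metric_units), "parameter lists must stay aligned"
pairs = list(zip(metric_names, metric_units))
print(pairs[0])  # ('CPUUtilization', 'Percent')
```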



&lt;p&gt;Infrastructure schema and list of metrics from the ws-deployment namespace:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8ce2c27bpyke67tft6t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8ce2c27bpyke67tft6t.png" alt="Full infrastructure schema" width="800" height="631"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctib0mxb5jopwnbiax8s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctib0mxb5jopwnbiax8s.png" alt="CloudWatch EC2 metrics in the custom namespace" width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the &lt;a href="https://gitlab.com/Andr1500/ssm_runbook_bluegreen.git" rel="noopener noreferrer"&gt;repository&lt;/a&gt; (if you haven’t already cloned it).
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    git clone https://gitlab.com/Andr1500/ssm_runbook_bluegreen.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Put “dummy” metric data into the custom namespace with the AWS CLI for each metric. This is necessary because, without this data, the CloudWatch alarms are not created correctly.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudwatch put-metric-data \
        --namespace "ws-deployment" \
        --metric-name "CPUUtilization" \
        --dimensions "InstanceName=ws-instance" \
        --value 70 --unit Percent 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
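The CLI call above seeds only CPUUtilization; each metric in the list needs the same seeding. A hedged Python sketch that builds one put-metric-data payload per metric (the boto3 call is commented out so the payload construction stands on its own):

```python
def build_seed_payloads(instance_name, metric_names, metric_units, value=0):
    """One MetricData entry per (metric, unit) pair, dimensioned by InstanceName."""
    return [
        {
            "MetricName": name,
            "Dimensions": [{"Name": "InstanceName", "Value": instance_name}],
            "Value": value,
            "Unit": unit,
        }
        for name, unit in zip(metric_names, metric_units)
    ]

payloads = build_seed_payloads(
    "ws-instance",
    ["CPUUtilization", "StatusCheckFailed_Instance", "StatusCheckFailed_System"],
    ["Percent", "Count", "Count"],
)
# import boto3
# boto3.client("cloudwatch").put_metric_data(
#     Namespace="ws-deployment", MetricData=payloads)
print(len(payloads))  # 3
```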



&lt;ol start="3"&gt;
&lt;li&gt;Create the CloudFormation stack.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation create-stack \
        --stack-name ws-ec2-monitoring \
        --template-body file://ec2_monitoring.yaml \
        --capabilities CAPABILITY_NAMED_IAM --disable-rollback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post, we showed how to monitor EC2 instances deployed with the Blue/Green deployment strategy. To make sure the Lambda function works correctly, we can configure the “InsufficientDataActions” parameter on the CloudWatch alarms to send notifications when an alarm changes state to “Insufficient data”. If you need more specific CloudWatch metrics from the EC2 instances, &lt;a href="https://medium.com/p/b4347a6ee6e2" rel="noopener noreferrer"&gt;here&lt;/a&gt; is my post about monitoring disk space (as an example) with the CloudWatch agent.&lt;/p&gt;
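&lt;p&gt;As a sketch of the “InsufficientDataActions” idea, assuming a hypothetical alarm on the &lt;code&gt;CPUUtilization&lt;/code&gt; metric from this post (the alarm name, threshold, and topic ARN are illustrative, not the stack’s actual values):&lt;/p&gt;

```python
# Build put_metric_alarm parameters that also notify on "Insufficient data".
# Alarm name, threshold, and topic ARN are illustrative assumptions.
def alarm_params(topic_arn):
    return {
        "AlarmName": "ws-cpu-alarm",
        "Namespace": "ws-deployment",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceName", "Value": "ws-instance"}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 1,
        "Threshold": 80,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],
        # Notify when the alarm changes state to "Insufficient data":
        "InsufficientDataActions": [topic_arn],
    }

# To apply (requires AWS credentials and an existing SNS topic):
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm_params("arn:aws:sns:..."))
```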

&lt;p&gt;If you found this post helpful and interesting, please click the reaction button below to show your support for the author. Feel free to use and share this post!&lt;/p&gt;

</description>
      <category>eventbridge</category>
      <category>lambda</category>
      <category>cloudwatch</category>
    </item>
    <item>
      <title>CodePipeline pipeline for automation Blue/Green deployment with SM Automation runbook</title>
      <dc:creator>Andrii Shykhov</dc:creator>
      <pubDate>Mon, 03 Feb 2025 22:35:38 +0000</pubDate>
      <link>https://dev.to/andrii-shykhov/codepipeline-pipeline-for-automation-bluegreen-deployment-with-sm-automation-runbook-3lm7</link>
      <guid>https://dev.to/andrii-shykhov/codepipeline-pipeline-for-automation-bluegreen-deployment-with-sm-automation-runbook-3lm7</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post, we configure Systems Manager automation execution on a GitLab webhook event with a CodePipeline pipeline.&lt;br&gt;
The pipeline consists of three stages: &lt;strong&gt;Source&lt;/strong&gt;, &lt;strong&gt;Copy-to-s3&lt;/strong&gt;, and &lt;strong&gt;Invoke-lambda&lt;/strong&gt;.&lt;br&gt;
&lt;strong&gt;Source&lt;/strong&gt;: retrieves code changes when a webhook event is sent from GitLab;&lt;br&gt;
&lt;strong&gt;Copy-to-s3&lt;/strong&gt;: unzips the files from the Source stage and saves them to the S3 bucket;&lt;br&gt;
&lt;strong&gt;Invoke-lambda&lt;/strong&gt;: invokes a Lambda function that executes the SM Automation runbook.&lt;br&gt;
The AWS CodeStar GitLab Connections service is not available in all regions; more information is available &lt;a href="https://docs.aws.amazon.com/codepipeline/latest/userguide/connections-gitlab.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
The pipeline is monitored with a notification rule that sends error notifications to the SNS topic.&lt;/p&gt;
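&lt;p&gt;Besides the notification rule, the pipeline state can be inspected programmatically. A minimal sketch that summarizes stage statuses from a &lt;code&gt;get_pipeline_state&lt;/code&gt; response; the sample response below is trimmed but follows the CodePipeline API shape:&lt;/p&gt;

```python
# Summarize stage statuses from a CodePipeline get_pipeline_state response.
def stage_summary(state):
    """Return a {stage_name: latest_execution_status} dict."""
    summary = {}
    for stage in state.get("stageStates", []):
        execution = stage.get("latestExecution") or {}
        summary[stage["stageName"]] = execution.get("status", "NotStarted")
    return summary

# To fetch a live state (requires AWS credentials):
# import boto3
# state = boto3.client("codepipeline").get_pipeline_state(
#     name="execute-blue-green-runbook")

# Trimmed sample shaped like the API output:
sample = {
    "pipelineName": "execute-blue-green-runbook",
    "stageStates": [
        {"stageName": "Source", "latestExecution": {"status": "Succeeded"}},
        {"stageName": "Copy-to-s3", "latestExecution": {"status": "Succeeded"}},
        {"stageName": "Invoke-lambda", "latestExecution": {"status": "Failed"}},
    ],
}
print(stage_summary(sample))
```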

&lt;p&gt;This post is the second part of my series of posts about Blue/Green deployment on AWS EC2 instances with the Systems Manager Automation runbook, the first part is &lt;a href="https://medium.com/p/ce1643ef642c" rel="noopener noreferrer"&gt;here&lt;/a&gt;, and the third part is &lt;a href="https://medium.com/p/4ae36c6cecdf" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the project:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All infrastructure is created with the CloudFormation template &lt;code&gt;infrastructure/codepipeline_pipeline.yaml&lt;/code&gt; and is deployed independently of the ec2-bluegreen-deployment stack.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;codepipeline_pipeline.yaml&lt;/code&gt; template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    AWSTemplateFormatVersion: '2010-09-09'
    Description: 'Codepipeline: take source code, store to S3, execute SSM runbook'

    Parameters:
      SsmDocumentName:
        Type: String
        Default: 'Ec2BlueGreenDeployment'
      ConnectionName:
        Type: String
        Default: 'gitlab-to-s3'
      S3BucketName:
        Type: String
        Default: ''
      BranchName:
        Type: String
        Default: ''
      FullRepositoryId:
        Type: String
        Default: ''
      CodePipelineName:
        Type: String
        Default: 'execute-blue-green-runbook'
      TopicName:
        Type: String
        Default: 'blue-green-deployment-status'

    Resources:
      GitLabConnection:
        Type: 'AWS::CodeStarConnections::Connection'
        Properties:
          ConnectionName: !Ref ConnectionName
          ProviderType: 'GitLab'

      ExecuteSsmRunbookRole:
        Type: 'AWS::IAM::Role'
        Properties:
          RoleName: ExecuteSsmRunbookRole
          AssumeRolePolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Principal:
                  Service: lambda.amazonaws.com
                Action: sts:AssumeRole
          ManagedPolicyArns:
            - 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
          Policies:
            - PolicyName: ExecuteSsmRunbook
              PolicyDocument:
                Version: '2012-10-17'
                Statement:
                  - Effect: Allow
                    Action:
                      - ssm:StartAutomationExecution
                    Resource:
                      - !Sub 'arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:*'
            - PolicyName: ReturnMessageToCodepipeline
              PolicyDocument:
                Version: '2012-10-17'
                Statement:
                  - Effect: Allow
                    Action:
                      - codepipeline:PutJobFailureResult
                      - codepipeline:PutJobSuccessResult
                    Resource: '*'


      ExecuteSsmRunbook:
        Type: 'AWS::Lambda::Function'
        Properties:
          FunctionName: ExecuteSsmRunbook
          Handler: index.lambda_handler
          Role: !GetAtt ExecuteSsmRunbookRole.Arn
          Environment:
            Variables:
              SsmDocumentName: !Ref SsmDocumentName
          Runtime: python3.12
          Timeout: 10
          Code:
            ZipFile: |
              import boto3
              import os
              from botocore.exceptions import ClientError

              def lambda_handler(event, context):
                  try:
                      aws_region = os.environ.get('AWS_REGION')
                      SsmDocumentName = os.environ['SsmDocumentName']
                      codepipeline = boto3.client('codepipeline', region_name=aws_region)
                      ssm = boto3.client('ssm')

                      # Retrieve the Job ID from the Lambda action
                      job_id = event.get('CodePipeline.job', {}).get('id')

                      # Notify CodePipeline of a successful job
                      def put_job_success(message):
                          params = {
                              'jobId': job_id,
                              'executionDetails': {
                                  'summary': str(message),
                                  'percentComplete': 100,
                                  'externalExecutionId': context.aws_request_id
                              }
                          }
                          codepipeline.put_job_success_result(**params)
                          print(message)

                      # Notify CodePipeline of a failed job
                      def put_job_failure(message):
                          params = {
                              'jobId': job_id,
                              'failureDetails': {
                                  'message': str(message),
                                  'type': 'JobFailed',
                                  'externalExecutionId': context.aws_request_id
                              }
                          }
                          codepipeline.put_job_failure_result(**params)
                          print(message)

                      # Start the SSM Automation runbook execution
                      response = ssm.start_automation_execution(DocumentName=SsmDocumentName)
                      execution_id = response['AutomationExecutionId']

                      put_job_success(f'Started SSM Automation execution with ID: {execution_id}')

                  except Exception as e:
                      # Log any unexpected errors and fail the job
                      error_message = f'Error: {str(e)}'
                      print(error_message)
                      put_job_failure(error_message)
                      raise e

      SSMAutomationTriggerPermissions:
        Type: AWS::Lambda::Permission
        Properties:
          Action: 'lambda:InvokeFunction'
          FunctionName: !GetAtt ExecuteSsmRunbook.Arn
          Principal: 'codepipeline.amazonaws.com'

      CodePipelineRole:
        Type: 'AWS::IAM::Role'
        Properties:
          RoleName: 'CodePipelineRole'
          AssumeRolePolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: 'Allow'
                Principal:
                  Service: 'codepipeline.amazonaws.com'
                Action: 'sts:AssumeRole'
          Policies:
            - PolicyName: 'CodePipelineFullAccess'
              PolicyDocument:
                Version: '2012-10-17'
                Statement:
                  - Effect: 'Allow'
                    Action:
                      - 'codepipeline:*'
                    Resource: !Sub 'arn:${AWS::Partition}:codepipeline:${AWS::Region}:${AWS::AccountId}:*'
            - PolicyName: 'S3FullAccess'
              PolicyDocument:
                Version: '2012-10-17'
                Statement:
                  - Effect: 'Allow'
                    Action:
                      - s3:PutObject
                      - s3:GetObject
                      - s3:ListBucket
                    Resource: !Sub 'arn:${AWS::Partition}:s3:::*'
            - PolicyName: 'LambdaInvokeAccess'
              PolicyDocument:
                Version: '2012-10-17'
                Statement:
                  - Effect: 'Allow'
                    Action:
                      - lambda:InvokeFunction
                    Resource: !Sub 'arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:*'
            - PolicyName: 'CodeStarSourceConnectionAccess'
              PolicyDocument:
                Version: '2012-10-17'
                Statement:
                  - Effect: 'Allow'
                    Action:
                      - codestar-connections:UseConnection
                    Resource: !Sub 'arn:${AWS::Partition}:codestar-connections:${AWS::Region}:${AWS::AccountId}:connection/*'
            - PolicyName: 'SendNotificationsBySns'
              PolicyDocument:
                Version: '2012-10-17'
                Statement:
                  - Effect: 'Allow'
                    Action:
                      - sns:Publish
                    Resource:  '*'

      CodePipelineTriggerRunbook:
        Type: AWS::CodePipeline::Pipeline
        Properties:
          Name: !Ref CodePipelineName
          RoleArn: !GetAtt CodePipelineRole.Arn
          ArtifactStore:
            Type: S3
            Location: !Ref S3BucketName
          Stages:
            - Name: Source
              Actions:
                - Name: Source
                  ActionTypeId:
                    Category: Source
                    Owner: AWS
                    Provider: CodeStarSourceConnection
                    Version: '1'
                  RunOrder: 1
                  Configuration:
                    BranchName: !Ref BranchName
                    ConnectionArn: !Ref 'GitLabConnection'
                    DetectChanges: 'true'
                    FullRepositoryId: !Ref FullRepositoryId
                    OutputArtifactFormat: CODE_ZIP
                  OutputArtifacts:
                    - Name: SourceArtifact
                  Namespace: SourceVariables
            - Name: Copy-to-s3
              Actions:
                - Name: copy-to-s3
                  ActionTypeId:
                    Category: Deploy
                    Owner: AWS
                    Provider: S3
                    Version: '1'
                  RunOrder: 1
                  Configuration:
                    BucketName: !Ref S3BucketName
                    Extract: 'true'
                  InputArtifacts:
                    - Name: SourceArtifact
            - Name: Invoke-lambda
              Actions:
                - Name: invoke-lambda
                  ActionTypeId:
                    Category: Invoke
                    Owner: AWS
                    Provider: Lambda
                    Version: '1'
                  RunOrder: 1
                  Configuration:
                    FunctionName: !Ref ExecuteSsmRunbook
                  OutputArtifacts:
                    - Name: Message
                  InputArtifacts:
                    - Name: SourceArtifact
          PipelineType: 'V2'

      CodePipelineNotificationRule:
        Type: 'AWS::CodeStarNotifications::NotificationRule'
        Properties:
          DetailType: 'BASIC'
          EventTypeIds:
            - 'codepipeline-pipeline-pipeline-execution-failed'
            - 'codepipeline-pipeline-stage-execution-failed'
            - 'codepipeline-pipeline-action-execution-failed'
          Name: SsmPipelineNotificationRule
          Resource: !Sub 'arn:${AWS::Partition}:codepipeline:${AWS::Region}:${AWS::AccountId}:${CodePipelineName}'
          Targets:
            - TargetAddress: !Sub 'arn:${AWS::Partition}:sns:${AWS::Region}:${AWS::AccountId}:${TopicName}'
              TargetType: 'SNS'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deployment and infrastructure schemas:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx85sl1646iubgirtbyfs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx85sl1646iubgirtbyfs.png" alt="CodePipeline pipeline stages" width="431" height="912"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwe680bfclkzbd2s1eguo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwe680bfclkzbd2s1eguo.png" alt="Infrastructure schema" width="800" height="658"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before you start, make sure the following requirements are met:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with permissions to create resources.&lt;/li&gt;
&lt;li&gt;AWS CLI installed on your local machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repository (if you haven’t already cloned it), navigate to the cloned repository, and push it to your remote repository.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    git clone https://gitlab.com/Andr1500/ssm_runbook_bluegreen.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Fill in all necessary parameters in the infrastructure/codepipeline_pipeline.yaml file, go to the infrastructure directory, and create the CloudFormation stack.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation create-stack \
        --stack-name codepipeline-pipeline \
        --template-body file://codepipeline_pipeline.yaml \
        --capabilities CAPABILITY_NAMED_IAM --disable-rollback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Update the pending CodeStar connection. Open the AWS Console, then go to: CodePipeline -&amp;gt; Settings -&amp;gt; Connections -&amp;gt; choose the created connection in pending status -&amp;gt; Update pending connection -&amp;gt; depending on your provider, complete the authorisation and grant the necessary access to the repository. More information about CodeStar connections is available &lt;a href="https://docs.aws.amazon.com/dtconsole/latest/userguide/connections-update.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To deploy a new version of the application, make changes in application/index.html, commit the changes, and push them to your remote repository. The deployment process starts automatically, and if any issues occur in either the CodePipeline pipeline or the runbook, you will receive an email with the issue details.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
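&lt;p&gt;The pending connection from step 3 can also be found programmatically. A minimal sketch that filters pending connections from a &lt;code&gt;list_connections&lt;/code&gt;-style response; the sample data is illustrative:&lt;/p&gt;

```python
# Filter pending connections from a codestar-connections list_connections
# response dict.
def pending_connections(response):
    return [
        c["ConnectionName"]
        for c in response.get("Connections", [])
        if c.get("ConnectionStatus") == "PENDING"
    ]

# Live call (requires AWS credentials):
# import boto3
# response = boto3.client("codestar-connections").list_connections(
#     ProviderTypeFilter="GitLab")

# Trimmed sample shaped like the API output:
sample = {
    "Connections": [
        {"ConnectionName": "gitlab-to-s3", "ConnectionStatus": "PENDING"},
    ]
}
print(pending_connections(sample))
```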

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post, we showed how a CodePipeline pipeline can automate Blue/Green deployment using the Application Load Balancer’s weighted target group feature, implemented with the Systems Manager Automation runbook.&lt;/p&gt;

&lt;p&gt;If you found this post helpful and interesting, please click the reaction button below to show your support for the author. Feel free to use and share this post!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Blue/Green deployment on AWS EC2 instances with Systems Manager Automation runbook</title>
      <dc:creator>Andrii Shykhov</dc:creator>
      <pubDate>Mon, 03 Feb 2025 22:29:56 +0000</pubDate>
      <link>https://dev.to/andrii-shykhov/bluegreen-deployment-on-aws-ec2-instances-with-systems-manager-automation-runbook-24h3</link>
      <guid>https://dev.to/andrii-shykhov/bluegreen-deployment-on-aws-ec2-instances-with-systems-manager-automation-runbook-24h3</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Blue/Green deployment strategy in this post is based on &lt;a href="https://aws.amazon.com/blogs/devops/blue-green-deployments-with-application-load-balancer/" rel="noopener noreferrer"&gt;this&lt;/a&gt; AWS post about fine-tuning Blue/Green deployments on Application Load Balancer. But why this solution instead of CodeDeploy?&lt;/p&gt;

&lt;p&gt;In our case we need an immediate traffic swap from the “Green” environment to “Blue”. CodeDeploy, for now, doesn’t offer this option for the EC2/On-premises compute platform: even with the “Reroute traffic immediately” option, responses come from both the “Green” and “Blue” EC2 instances for some time during the deployment process.&lt;/p&gt;

&lt;p&gt;Another issue is that CodeDeploy deployment on the EC2/On-premises compute platform is based on instance tags; after the deployment process we need a solution for managing tags on the “Blue” instance so it is ready for the next deployment.&lt;/p&gt;

&lt;p&gt;Also, there is an &lt;a href="https://repost.aws/questions/QUngah7ttAR3CmHEO6FfcMIg#ANegqYpZvtTQSuBgAUUdNENQ" rel="noopener noreferrer"&gt;issue&lt;/a&gt; with creating the necessary configuration of the CodeDeploy deployment group with CloudFormation (CloudFormation is used to create the infrastructure in AWS).&lt;/p&gt;

&lt;p&gt;A Systems Manager Automation runbook offers more flexibility to configure the deployment according to our requirements.&lt;/p&gt;

&lt;p&gt;This post is the first part of my series of posts about Blue/Green deployment on AWS EC2 instances with the Systems Manager Automation runbook, the second part is &lt;a href="https://medium.com/p/90b3c65885db" rel="noopener noreferrer"&gt;here&lt;/a&gt; and the third part is &lt;a href="https://medium.com/p/4ae36c6cecdf" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the project:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All infrastructure in AWS (except the S3 bucket, Route 53 domain, and SSL/TLS certificate) for this project is created with CloudFormation and can be found in &lt;a href="https://gitlab.com/Andr1500/ssm_runbook_bluegreen.git" rel="noopener noreferrer"&gt;this&lt;/a&gt; repository. The S3 bucket is needed to store nested stack templates and application files. The Route 53 domain and SSL/TLS certificate are needed for secure web access to the EC2 instances through the Application Load Balancer. The Amazon SNS topic is needed to send notifications about the status of the deployment.&lt;/p&gt;

&lt;p&gt;The main steps of the Systems Manager Automation runbook:&lt;/p&gt;

&lt;p&gt;a) checking whether any execution of the runbook is already in progress; if yes, the deployment is skipped;&lt;/p&gt;

&lt;p&gt;b) creating a “Green” EC2 instance with all the necessary configuration and waiting a few minutes to make sure everything is configured correctly;&lt;/p&gt;

&lt;p&gt;c) rebooting the instance and checking its status after the reboot;&lt;/p&gt;

&lt;p&gt;d) deploying the “Green” EC2 instance: registering the instance to the necessary Target Group, swapping the weights of the Target Groups in the listener rule configuration, and terminating the “Blue” EC2 instance;&lt;/p&gt;

&lt;p&gt;e) sending a notification about the status of the deployment.&lt;/p&gt;

&lt;p&gt;Blue/Green deployment step configuration from the Systems Manager runbook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - name: BlueGreenDeployment
      action: aws:executeScript
      maxAttempts: 2
      timeoutSeconds: 300
      isCritical: true
      onFailure: step:SendNotification
      inputs:
        Runtime: python3.8
        Handler: BlueGreenDeployment
        InputPayload:
          Region: "{{ global:REGION }}"
          ListenerArn: "{{ ListenerArn }}"
          InstanceIds: "{{ LaunchInstance.CreatedInstanceId }}"
        Script: |
          import boto3
          import time
          from botocore.exceptions import ClientError

          def BlueGreenDeployment(event, context):
              try:
                  elbv2 = boto3.client('elbv2', region_name=event['Region'])
                  ec2_client = boto3.client('ec2', region_name=event['Region'])

                  # Register instance with the target group
                  chosen_target_group, target_group_arn_1, target_group_arn_2, instances_tg1, instances_tg2, weight_tg1, weight_tg2 = get_listener_details(elbv2, event['ListenerArn'])

                  if chosen_target_group is None:
                      raise Exception("[ERROR] We don't have necessary target group.")

                  elbv2.register_targets(
                      TargetGroupArn=chosen_target_group,
                      Targets=[
                          {'Id': instance_id}
                          for instance_id in event['InstanceIds']
                      ],
                  )
                  print(f"[INFO] Instance {event['InstanceIds']} registered with the target group {chosen_target_group}")

                  # Wait for instances to be healthy
                  wait_for_instances_to_be_healthy(elbv2, chosen_target_group, event['InstanceIds'], max_wait_time=300)

                  if instances_tg1 or instances_tg2:
                      swap_weights(elbv2, event['ListenerArn'], target_group_arn_1, target_group_arn_2, weight_tg1, weight_tg2)

                      time.sleep(15)

                      # Terminate "Blue" instance
                      if instances_tg1:
                          for instance_id in instances_tg1:
                              if instance_id not in instances_tg2 and weight_tg1 == 100:
                                  ec2_client.terminate_instances(InstanceIds=[instance_id])
                                  print(f"Instance {instance_id}  terminated from Target Group 1")
                      if instances_tg2:
                          for instance_id in instances_tg2:
                              if instance_id not in instances_tg1 and weight_tg2 == 100:
                                  ec2_client.terminate_instances(InstanceIds=[instance_id])
                                  print(f"Instance {instance_id}  terminated from Target Group 2")
                      print("[INFO] Traffic swap and EC2 instance termination completed successfully.")
                  else:
                      print("[INFO] Traffic swap and EC2 instance termination were skipped")
              except ClientError as e:
                  raise Exception("[ERROR]", e)

          # get ARNs of target groups and weights of this groups in Listener rule
          def get_listener_details(elb_client, listener_arn):
              # Get the current state of the listener and its rules
              listener_description = elb_client.describe_rules(ListenerArn=listener_arn)

              # Get the Target Group ARNs and weights from the listener rule
              target_group_arn_1 = listener_description['Rules'][0]['Actions'][0]['ForwardConfig']['TargetGroups'][0]['TargetGroupArn']
              target_group_arn_2 = listener_description['Rules'][0]['Actions'][0]['ForwardConfig']['TargetGroups'][1]['TargetGroupArn']
              weight_tg1 = listener_description['Rules'][0]['Actions'][0]['ForwardConfig']['TargetGroups'][0]['Weight']
              weight_tg2 = listener_description['Rules'][0]['Actions'][0]['ForwardConfig']['TargetGroups'][1]['Weight']

              # Get instance IDs from target groups
              instances_tg1 = get_instance_ids(elb_client, target_group_arn_1)
              instances_tg2 = get_instance_ids(elb_client, target_group_arn_2)

              # Check if instances are registered to the target groups
              if not instances_tg1 and not instances_tg2:
                  chosen_target_group = target_group_arn_2 if weight_tg2 == 100 else target_group_arn_1
              elif not instances_tg1:
                  chosen_target_group = target_group_arn_1
              elif not instances_tg2:
                  chosen_target_group = target_group_arn_2
              else:
                  chosen_target_group = None
              return chosen_target_group, target_group_arn_1, target_group_arn_2, instances_tg1, instances_tg2, weight_tg1, weight_tg2

          # Take Instances IDs registered to the Target Groups
          def get_instance_ids(elb_client, target_group_arn):
            response = elb_client.describe_target_health(TargetGroupArn=target_group_arn)
            instance_ids = [target['Target']['Id'] for target in response['TargetHealthDescriptions'] if target['TargetHealth']['State'] == 'healthy']
            return instance_ids

          def wait_for_instances_to_be_healthy(elbv2_client, target_group_arn, instance_ids, max_wait_time=300, polling_interval=10):
              start_time = time.time()

              while time.time() - start_time &amp;lt; max_wait_time:
                  # Describe the health of the instances in the target group
                  health_response = elbv2_client.describe_target_health(TargetGroupArn=target_group_arn)

                  # Check if all specified instances are healthy
                  healthy_instance_ids = {health['Target']['Id'] for health in health_response['TargetHealthDescriptions'] if health['TargetHealth']['State'] == 'healthy'}
                  if set(instance_ids) == healthy_instance_ids:
                      print(f"[INFO] All instances {instance_ids} are healthy in {target_group_arn}")
                      return

                  # Wait before the next check
                  time.sleep(polling_interval)

              # If the loop exits, raise an exception indicating that instances are not healthy
              raise Exception(f"[ERROR] Instances {instance_ids} in target group {target_group_arn} did not become healthy within the specified time.")

          def swap_weights(elb_client, listener_arn, target_group_arn_1, target_group_arn_2, weight_tg1, weight_tg2):
              # Swap the weights for the listener rule
              elb_client.modify_listener(
                  ListenerArn=listener_arn,
                  DefaultActions=[
                      {
                          "Type": "forward",
                          "ForwardConfig": {
                              "TargetGroups": [
                                  {
                                      "TargetGroupArn": target_group_arn_1,
                                      "Weight": weight_tg2
                                  },
                                  {
                                      "TargetGroupArn": target_group_arn_2,
                                      "Weight": weight_tg1
                                  }
                              ]
                          }
                      }
                  ]
              )
              print("Weight swap completed successfully")
      nextStep: SendNotification
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deployment and infrastructure schemas:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnczvdpb0evwwqecpg3j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnczvdpb0evwwqecpg3j.png" alt="Deployment schema of the Systems Manager runbook" width="800" height="724"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hhfkasxpcku38czcci1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hhfkasxpcku38czcci1.png" alt="Infrastructure schema" width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before you start, make sure the following requirements are met:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with permissions to create resources.&lt;/li&gt;
&lt;li&gt;AWS CLI installed on your local machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repository.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    git clone https://gitlab.com/Andr1500/ssm_runbook_bluegreen.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create an S3 bucket with a unique name for nested stack templates and application files. &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html" rel="noopener noreferrer"&gt;Here&lt;/a&gt; are the bucket naming rules.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    # on Linux:
    date=$(date +%Y%m%d%H%M%S)

    # on Windows PowerShell:
    $date = Get-Date -Format "yyyyMMddHHmmss"

    aws s3api create-bucket --bucket cloudformation-app-files-${date} --region YOUR_REGION \
     --create-bucket-configuration LocationConstraint=YOUR_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
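&lt;p&gt;Bucket creation fails outright on an invalid name, so a quick local check against the naming rules can save a round trip. This is a minimal POSIX shell sketch, not part of the repository; the character set and 3-63 length limit come from the naming rules linked above:&lt;/p&gt;

```shell
# Generate a candidate bucket name the same way as in the step above
date=$(date +%Y%m%d%H%M%S)
bucket="cloudformation-app-files-${date}"

# S3 bucket names must be 3-63 characters of lowercase letters, digits,
# dots, and hyphens, starting and ending with a letter or digit.
len=${#bucket}
case "$bucket" in
  *[!a-z0-9.-]*) echo "invalid characters in: $bucket" ;;
  [a-z0-9]*[a-z0-9])
    if [ "$len" -ge 3 ] && [ "$len" -le 63 ]; then
      echo "bucket name ok: $bucket"
    else
      echo "bad length ($len): $bucket"
    fi ;;
  *) echo "must start and end with a letter or digit: $bucket" ;;
esac
```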



&lt;ol start="3"&gt;
&lt;li&gt;Add a policy to the S3 bucket that allows access from the EC2 instance.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws s3api put-bucket-policy --bucket cloudformation-app-files-${date} \
    --policy '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"s3:GetObject","Resource":"arn:aws:s3:::'"cloudformation-app-files-${date}"'/*"}]}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
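&lt;p&gt;Inlining a JSON policy on the command line is fragile: copy-pasting from a blog can smuggle in typographic dashes (the policy version must be the plain ASCII date 2012-10-17), and shell quoting is easy to get wrong. A hedged alternative sketch is to keep the policy in a file and validate it locally first; python3 is assumed to be available for the validation step:&lt;/p&gt;

```shell
# Assumes ${date} from the bucket-creation step; "example" is a fallback
# so this sketch runs standalone.
bucket="cloudformation-app-files-${date:-example}"

# Keeping the policy in a file avoids shell-quoting mistakes.
cat > policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::${bucket}/*"
  }]
}
EOF

# Validate the JSON locally before applying it:
python3 -m json.tool policy.json > /dev/null && echo "policy.json is valid JSON"

# Then apply it (requires AWS credentials):
# aws s3api put-bucket-policy --bucket "${bucket}" --policy file://policy.json
```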



&lt;ol start="4"&gt;
&lt;li&gt;Fill in all required parameters in the infrastructure/root.yaml file and upload all nested stack files to the S3 bucket.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws s3 cp infrastructure s3://cloudformation-app-files-${date}/infrastructure  --recursive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;Go to the infrastructure directory and create the CloudFormation stack.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws cloudformation create-stack \
        --stack-name ec2-bluegreen-deployment \
        --template-body file://root.yaml \
        --capabilities CAPABILITY_NAMED_IAM \
        --parameters ParameterKey=UserData,ParameterValue="$(base64 -i user_data.txt)" \
        --disable-rollback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
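&lt;p&gt;One portability note on the UserData parameter: &lt;code&gt;base64 -i file&lt;/code&gt; is BSD/macOS syntax, and GNU coreutils base64 wraps long output across lines, which breaks the parameter value. Reading from stdin and stripping newlines works with both implementations; the sketch below uses a placeholder user_data.txt so it runs standalone:&lt;/p&gt;

```shell
# Placeholder user data just for this demo; the repository ships its own.
printf '#!/bin/bash\necho hello' > user_data.txt

# Portable encoding: read from stdin, strip any line wrapping.
USERDATA=$(base64 < user_data.txt | tr -d '\n')
echo "$USERDATA"

# Pass the encoded value to CloudFormation, e.g.:
# aws cloudformation create-stack ... \
#   --parameters ParameterKey=UserData,ParameterValue="${USERDATA}"
```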



&lt;ol start="6"&gt;
&lt;li&gt;&lt;p&gt;Open your mailbox and confirm the subscription to the SNS topic. Access to the deployed EC2 instance is available through Systems Manager: in the AWS console, go to AWS Systems Manager -&amp;gt; Fleet Manager -&amp;gt; choose the created EC2 instance -&amp;gt; Node actions -&amp;gt; Connect -&amp;gt; Start terminal session. There you can verify that everything was created and configured correctly during deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For manual deployment, upload all application files to the S3 bucket.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws s3 cp application s3://cloudformation-app-files-${date}/application  --recursive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start the Systems Manager Automation runbook execution. If everything goes well, you receive an email with “Deployment Status: Success” and get web access to the deployed EC2 instance through a browser. In case of any failure, you receive an email with “Deployment Status: Failed” and details about the failed step. To deploy a new version of the application, make changes in infrastructure/index.html, upload the changes to the S3 bucket, and start the Systems Manager Automation runbook again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws ssm start-automation-execution --document-name "Ec2BlueGreenDeployment"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
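&lt;p&gt;If you prefer to follow the execution from the CLI instead of waiting for the SNS email, the execution status can be polled. The loop below is a sketch: the get_status stub stands in for the real aws ssm get-automation-execution call (shown in the comment, with a hypothetical $EXEC_ID from start-automation-execution) so the example runs without AWS access:&lt;/p&gt;

```shell
# Stub for:
#   aws ssm get-automation-execution --automation-execution-id "$EXEC_ID" \
#     --query 'AutomationExecution.AutomationExecutionStatus' --output text
get_status() { echo "Success"; }   # replace with the real call above

status="InProgress"
while [ "$status" = "InProgress" ] || [ "$status" = "Pending" ]; do
  status=$(get_status)
  sleep 0   # use a longer interval (e.g. 15 seconds) against the real API
done
echo "final status: $status"
```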



&lt;ol start="8"&gt;
&lt;li&gt;&lt;p&gt;Deployment test. The tests/test_deployment.sh script runs a simple availability and response check against the “Green” and “Blue” EC2 instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Delete all files from the S3 bucket, then delete the bucket itself.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    aws s3 rm s3://cloudformation-app-files-${date} --recursive
    aws s3 rb s3://cloudformation-app-files-${date} --force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
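&lt;p&gt;The availability check from step 8 can be sketched as a small shell helper. This is a hypothetical reconstruction in the spirit of tests/test_deployment.sh, not its actual contents; the ALB DNS name placeholder is an assumption:&lt;/p&gt;

```shell
# Report the HTTP status code returned by a URL (200 means the
# instance behind that endpoint is up and serving).
check_endpoint() {
  curl -s -o /dev/null -w "%{http_code}" "$1"
}

# Example usage against the load balancer (placeholder DNS name):
# check_endpoint "http://YOUR_ALB_DNS_NAME/"
# A 200 through both weighted target groups means Blue and Green are serving.
```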



&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post, we showed how to perform a Blue/Green deployment using the Application Load Balancer’s weighted target group feature, implemented with a Systems Manager Automation runbook.&lt;/p&gt;

&lt;p&gt;If you found this post helpful and interesting, please click the reaction button below to show your support for the author. Feel free to use and share this post!&lt;/p&gt;

</description>
      <category>ssm</category>
      <category>bluegreendeployment</category>
      <category>cloudformation</category>
      <category>sns</category>
    </item>
  </channel>
</rss>
