<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ziad Osman</title>
    <description>The latest articles on DEV Community by Ziad Osman (@ziadnosman).</description>
    <link>https://dev.to/ziadnosman</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F917842%2F0fecf765-0389-411d-b058-11a930663f3a.jpeg</url>
      <title>DEV Community: Ziad Osman</title>
      <link>https://dev.to/ziadnosman</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ziadnosman"/>
    <language>en</language>
    <item>
      <title>Log CloudTrail events to DynamoDB using AWS State Machine</title>
      <dc:creator>Ziad Osman</dc:creator>
      <pubDate>Sat, 10 Aug 2024 16:56:21 +0000</pubDate>
      <link>https://dev.to/ziadnosman/log-cloudtrail-events-to-dynamodb-using-aws-state-machine-pin</link>
      <guid>https://dev.to/ziadnosman/log-cloudtrail-events-to-dynamodb-using-aws-state-machine-pin</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;If you’ve ever browsed through the different AWS services and offerings, you might have come across AWS State Machines and been scared off by the name. In reality, the state machine service lets you chain events easily and seamlessly. Beyond chaining other AWS services together, State Machines have some cool integrations up their sleeves, which can save you from writing your own logic. Fewer moving parts means fewer things to break!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This will be a two part blog, where each part will focus on solving a real world issue using State Machines&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;For this blog, I will be using a trick that allows a state machine to take in an input and write it directly to DynamoDB, without you having to write your own database insert logic. This fits a real-world scenario where you need to log certain events from CloudTrail to a DynamoDB table.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Resources needed
&lt;/h2&gt;

&lt;p&gt;In order to complete this demo, I’m going to need to create a few AWS resources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A DynamoDB table: this will be used to save the event.&lt;/li&gt;
&lt;li&gt;A state machine: for obvious reasons :).&lt;/li&gt;
&lt;li&gt;An EventBridge rule: this will be used to transfer the events from CloudTrail to our state machine.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1 - DynamoDB
&lt;/h2&gt;

&lt;p&gt;First off, we need to create our DynamoDB table. For my example, I'm logging the event as it is, and I’m using the EventID, which is readily available as part of the event, as my Partition Key.&lt;/p&gt;

&lt;p&gt;Here is the Terraform snippet for my DynamoDB table. Please be aware that on-demand mode is not necessarily the most economical billing mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_dynamodb_table" "state_machine_demo_table" {
  name         = "state-machine-demo-table"
  tags         = var.tags
  billing_mode = "PAY_PER_REQUEST" # On-demand
  hash_key     = "EventID"

  attribute {
    name = "EventID"
    type = "S"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2 - State machine
&lt;/h2&gt;

&lt;p&gt;Our state machine definition will be quite simple. Let’s look at it and break it down.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Comment": "transfer CloudTrail events to DynamoDB.",
  "StartAt": "WriteToDynamoDB",
  "States": {
    "WriteToDynamoDB": {
      "Type": "Task",
      "Resource": "arn:aws:states:::dynamodb:putItem",
      "Parameters": {
        "TableName": "state-machine-demo-table",
        "Item": {
          "EventID": {
            "S.$": "$.detail.eventID"
          },
          "EventData": {
            "S.$": "States.JsonToString($.detail)"
          }
        }
      },
      "End": true
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The state machine only has one task, which is to write to our DB. We use the &lt;code&gt;states:::dynamodb:putItem&lt;/code&gt; resource, which provides the native integration between the state machine and DynamoDB.&lt;/p&gt;

&lt;p&gt;In the Parameters section, we specify the table name and the Item. Each Item element will become an attribute in our DB row. Since &lt;code&gt;EventID&lt;/code&gt; is our Partition Key, this one is mandatory. Note that you can create as many item elements as you want, depending on your needs. &lt;/p&gt;

&lt;p&gt;If you look closely at the annotation, you’ll see this: &lt;code&gt;"S.$": "$.detail.eventID"&lt;/code&gt;. This means we’re going to insert an item of type string (hence the S at the beginning). All CloudTrail events nest their &lt;code&gt;eventID&lt;/code&gt; under the “detail” section, which is why we’re retrieving it there. In other words, &lt;strong&gt;you can perform logic on your JSON object in the state machine so that you only retrieve the data you want&lt;/strong&gt;. This is a really powerful feature, and a necessary one if you plan on inserting complex data.&lt;/p&gt;

&lt;p&gt;The other item is our event data, turned into a string via the built-in function &lt;code&gt;States.JsonToString()&lt;/code&gt;.&lt;/p&gt;
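&lt;p&gt;To make these paths concrete, here is a trimmed, illustrative sketch of the kind of event the state machine receives from EventBridge (the field values below are made up, and real CloudTrail events carry many more fields under “detail”):&lt;/p&gt;

```json
{
  "detail-type": "AWS API Call via CloudTrail",
  "detail": {
    "eventID": "a1b2c3d4-example",
    "eventName": "CreateSubnet",
    "eventSource": "ec2.amazonaws.com"
  }
}
```

&lt;p&gt;With an input shaped like this, &lt;code&gt;$.detail.eventID&lt;/code&gt; resolves to the string that becomes our Partition Key.&lt;/p&gt;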

&lt;h2&gt;
  
  
  3 - EventBridge
&lt;/h2&gt;

&lt;p&gt;Quite simply, this is an EventBridge rule that has the state machine as a target. The event pattern can match whatever events you would like to catch. For the sake of this example, I added 2 events (&lt;code&gt;CreateSubnet&lt;/code&gt; and &lt;code&gt;DeleteSubnet&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;This is what my event pattern looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventName": [
      "CreateSubnet",
      "DeleteSubnet"
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;When creating your EventBridge rule, remember to associate it with the default event bus. This ensures that the EventBridge rule picks up the CloudTrail events. Also, remember to set your state machine as the target of the rule&lt;/em&gt;.&lt;/p&gt;
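&lt;p&gt;For reference, the rule and target could be sketched in Terraform roughly like this (the resource names, the IAM role, and the state machine reference are placeholders for illustration):&lt;/p&gt;

```hcl
resource "aws_cloudwatch_event_rule" "subnet_events" {
  name = "subnet-events-to-state-machine"
  # The default event bus is used when event_bus_name is omitted.
  event_pattern = jsonencode({
    "detail-type" = ["AWS API Call via CloudTrail"]
    "detail"      = { eventName = ["CreateSubnet", "DeleteSubnet"] }
  })
}

resource "aws_cloudwatch_event_target" "to_state_machine" {
  rule     = aws_cloudwatch_event_rule.subnet_events.name
  arn      = aws_sfn_state_machine.demo.arn   # hypothetical state machine resource
  role_arn = aws_iam_role.events_to_sfn.arn   # role allowing states:StartExecution
}
```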

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With this, we have just created an AWS State Machine that takes in CloudTrail events as an input, and directly writes them to a DynamoDB table. This can be changed to a more complex architecture where your data is coming in from another source, and where you retrieve only certain data in your state machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 2
&lt;/h2&gt;

&lt;p&gt;In part 2 of this blog, I'm going to show you how to chain multiple lambda functions, so that the output of one is fed as an input to the other. More importantly, I’m going to show you how you can trigger parallel executions of your lambda function, such that for a list of objects, a lambda is triggered for each one. This trick will help you in processing data in parallel. &lt;a href="https://dev.to/ziadnosman/parallel-lambda-execution-with-aws-state-machine-4llj"&gt;Check it out here!&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>cloud</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Parallel Lambda execution with AWS State Machine</title>
      <dc:creator>Ziad Osman</dc:creator>
      <pubDate>Sat, 10 Aug 2024 16:56:10 +0000</pubDate>
      <link>https://dev.to/ziadnosman/parallel-lambda-execution-with-aws-state-machine-4llj</link>
      <guid>https://dev.to/ziadnosman/parallel-lambda-execution-with-aws-state-machine-4llj</guid>
      <description>&lt;p&gt;This is part 2 of a 2 part blog about real life scenarios that can be solved with AWS State Machine. In part 1, we discussed how to log CloudTrail events to DynamoDB by using a State Machine to write directly to DynamoDB. If you’re interested in that part, please refer to &lt;a href="https://dev.to/ziadnosman/log-cloudtrail-events-to-dynamodb-using-aws-state-machine-pin"&gt;this link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this blog, we’re going to be using a State Machine to feed the output of one lambda function to the other. Furthermore, this output is going to be a list of objects. And, for each object, we’re going to trigger a lambda function in parallel. This is very useful when you have a scenario where you need to process a lot of objects at the same time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;In addition to the state machine, I’m going to be creating two lambda functions. The first will be called OutputLambda, and the second will be called InputLambda.&lt;/p&gt;

&lt;p&gt;We’re going to come back to the code of the lambda functions in a second, but first, let’s look at our State Machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  1 - State machine
&lt;/h2&gt;

&lt;p&gt;The state machine is going to have two steps. The first step runs the OutputLambda, which triggers step two upon completion. The second step is of type Map, which means it runs once for each element in a list, in parallel.&lt;/p&gt;

&lt;p&gt;Here is what our state machine definition looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Comment": "parallel execution of lambda functions demo",
  "StartAt": "FirstState",
  "States": {
    "FirstState": {
      "Type": "Task",
      "Resource": "&amp;lt;OutputLambdaArn&amp;gt;",
      "Next": "IterateOverList"
    },
    "IterateOverList": {
      "Type": "Map",
      "ItemsPath": "$.list",
      "Iterator": {
        "StartAt": "SecondState",
        "States": {
          "SecondState": {
            "Type": "Task",
            "Resource": "&amp;lt;InputLambdaArn&amp;gt;",
            "End": true
          }
        }
      },
      "End": true
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break down what we got: &lt;/p&gt;

&lt;p&gt;The first step is very simple, as it specifies a lambda function as a resource and sets &lt;code&gt;IterateOverList&lt;/code&gt; as its next step. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;IterateOverList&lt;/code&gt; is defined as type Map, which is what we need to achieve the parallelism as specified before. It specifies the &lt;code&gt;ItemsPath&lt;/code&gt; as being named “list”. This means that &lt;code&gt;OutputLambda&lt;/code&gt; needs to return an object named “list” for it to be caught by this state (we'll get back to this in a second). &lt;/p&gt;

&lt;p&gt;The Iterator section is where the magic happens. It is simply a foreach loop that iterates over the elements of our object named “list” and triggers the task we defined for each element. Finally, inside the Iterator section, we have defined our task as being the &lt;code&gt;InputLambda&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Now, let’s move on to look at what our lambda functions look like.&lt;/p&gt;

&lt;h2&gt;
  
  
  2 - OutputLambda
&lt;/h2&gt;

&lt;p&gt;This function only needs to return a list of objects. As defined in the &lt;code&gt;ItemsPath&lt;/code&gt; section of our state machine definition, the name of this list of objects should be “list”. Here is what the lambda function could look like, using Python 3:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def lambda_handler(event, context):
    fruits = ["apple","orange","pear"]
    return {'list': fruits}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next up, let's look at our &lt;code&gt;InputLambda&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  3 - InputLambda
&lt;/h2&gt;

&lt;p&gt;This lambda function will be triggered three times in parallel by our state machine (since, in our example, the list has three items). Each invocation will receive a different element of the list. Again, using Python 3, here is how to retrieve that element in the lambda_handler.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def lambda_handler(event, context):
    print(f"fruit of the day: {event}.")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yep, as simple as that! The list element is available as the event.&lt;/p&gt;
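&lt;p&gt;To sanity-check the payload shapes without deploying anything, you can simulate the fan-out locally. This is just a plain-Python sketch of what the Map state does, not real Step Functions behavior:&lt;/p&gt;

```python
def output_handler(event, context):
    # Mirrors OutputLambda: returns the object whose "list" key the
    # Map state's ItemsPath ("$.list") selects.
    return {"list": ["apple", "orange", "pear"]}

def input_handler(event, context):
    # Mirrors InputLambda: receives one list element as its whole event.
    return f"fruit of the day: {event}."

# The Map state fans out: one input_handler invocation per list element.
first_output = output_handler({}, None)
results = [input_handler(item, None) for item in first_output["list"]]
print(results)
```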

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this blog, I showed you how you can use a state machine to chain multiple lambda functions together. We saw how to feed the output of one lambda function into another, and how to trigger a lambda function multiple times in parallel, which is useful when you need to process a lot of data at once.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>cloud</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>CloudFront 101: API Gateway and S3 SPA origins</title>
      <dc:creator>Ziad Osman</dc:creator>
      <pubDate>Thu, 06 Jul 2023 12:10:58 +0000</pubDate>
      <link>https://dev.to/aws-builders/cloudfront-101-api-gateway-and-s3-spa-origins-5a2h</link>
      <guid>https://dev.to/aws-builders/cloudfront-101-api-gateway-and-s3-spa-origins-5a2h</guid>
      <description>&lt;p&gt;I was recently thrown into the deep end with CloudFront when I had to configure an AWS account for a client. The account had a CloudFront distribution with multiple origins, one of which is a single page application S3 static website, and the others apis in api gateway. Because of the scarce and/or dispersed information I found, I decided to aggregate all the points that I found challenging into a single blog. &lt;/p&gt;

&lt;p&gt;Unlike my other blogs, this one is more of a “what-to-do” and less of a “step by step how-to”. If you find it too long and too condensed to be read whole, I would recommend referring to the specific points you’re stuck on.&lt;/p&gt;

&lt;p&gt;The points that will be covered are:&lt;/p&gt;

&lt;p&gt;1 - CloudFront VS edge optimized api gateway &lt;br&gt;
2 - api default endpoint vs custom domain name&lt;br&gt;
3 - CloudFront multi origin default behavior and ordered behavior&lt;br&gt;
4 - Cloudfront with an SPA (paths, 403 redirects and lambda@edge)&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Cloudfront is AWS’s offering of a CDN, or content delivery network. In short, its job is to cache your content through a worldwide network of edge locations to improve latency and user experience.&lt;/p&gt;

&lt;p&gt;On the other hand, Api Gateway, as the name suggests, is a service that provides an entry point to your apis. &lt;/p&gt;

&lt;p&gt;And finally S3, which is simply a storage solution that AWS offers. What’s special about S3 is that you can use it to host static websites (think read-only sites, or sites that can use apis to interact with the database rather than having a full-on backend framework). &lt;/p&gt;

&lt;h2&gt;
  
  
  CloudFront Vs edge optimized api gateway
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Edge optimized endpoints
&lt;/h4&gt;

&lt;p&gt;Suppose you need to leverage the power of edge locations to decrease the latency of your apis. In that case, you need to use CloudFront. Luckily, you don’t need to figure out how CloudFront works to use it with your apis, since API Gateway offers a nifty integration with CF. &lt;/p&gt;

&lt;p&gt;All you need to do is go into your api settings and switch the endpoint type to edge optimized.&lt;br&gt;
This will create a CloudFront distribution for you and link it to your api.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjcsxuolbyth0ixutdql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjcsxuolbyth0ixutdql.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So why would you ever use a manually created CF distribution if this option already exists?&lt;/p&gt;

&lt;h4&gt;
  
  
  Linking your existing CloudFront distribution to your api
&lt;/h4&gt;

&lt;p&gt;While edge optimized endpoints are good enough for the majority of use cases, the CF distribution that is created for you cannot be modified and is in fact managed by AWS. If your use case involves some tinkering with CF, you might have to opt for the second option: creating your own CF distribution.&lt;/p&gt;

&lt;p&gt;To add your api to your CF distribution, you simply add it as an origin, specifying your api’s default endpoint as the origin domain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you happen to have a websocket api&lt;/strong&gt;, make sure to add the following headers in an origin request policy. (for more information, consult &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-working-with.websockets.html" rel="noopener noreferrer"&gt;the official documentation&lt;/a&gt;)&lt;br&gt;
Sec-WebSocket-Key&lt;br&gt;
Sec-WebSocket-Version&lt;br&gt;
Sec-WebSocket-Protocol&lt;br&gt;
Sec-WebSocket-Accept&lt;br&gt;
Sec-WebSocket-Extensions&lt;/p&gt;
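&lt;p&gt;A Terraform sketch of such an origin request policy might look like this (the resource and policy names are illustrative; the resource also requires cookie and query string settings):&lt;/p&gt;

```hcl
resource "aws_cloudfront_origin_request_policy" "websocket" {
  name = "websocket-headers" # illustrative name

  headers_config {
    header_behavior = "whitelist"
    headers {
      items = [
        "Sec-WebSocket-Key",
        "Sec-WebSocket-Version",
        "Sec-WebSocket-Protocol",
        "Sec-WebSocket-Accept",
        "Sec-WebSocket-Extensions",
      ]
    }
  }

  cookies_config {
    cookie_behavior = "none"
  }

  query_strings_config {
    query_string_behavior = "none"
  }
}
```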

&lt;p&gt;&lt;strong&gt;If you happen to have a lambda authorizer in place for your api&lt;/strong&gt;, be sure to add the Authorization header in the origin behavior in CF.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpizxar9gz8wjr856tvjv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpizxar9gz8wjr856tvjv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But that's not all&lt;/strong&gt;. Remember, your api has a stage defined (often, stages are named after environments, so /dev or /prod). If your api has a /dev stage, then you need to add /dev to the end of your default endpoint url in CF. This can be done by specifying an origin path of “/dev” on your origin, or by adding a behavior to your CloudFront distribution. &lt;/p&gt;

&lt;p&gt;If you decide to go for the latter approach, you specify the path pattern as “/dev”, and associate your api origin to it. Now, when someone accesses the url: your-cloudfront-domain-name/dev, they will be redirected to your-default-endpoint/dev, which will correctly point to your api stage.&lt;/p&gt;
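&lt;p&gt;If you manage the distribution in Terraform, the origin-path approach might be sketched like this (the endpoint, origin id, and TLS settings are placeholders):&lt;/p&gt;

```hcl
# Fragment of an aws_cloudfront_distribution resource, not a
# complete distribution definition:
origin {
  domain_name = "api-id.execute-api.eu-west-1.amazonaws.com" # placeholder
  origin_id   = "rest-api"
  origin_path = "/dev" # maps requests onto the /dev stage

  custom_origin_config {
    http_port              = 80
    https_port             = 443
    origin_protocol_policy = "https-only"
    origin_ssl_protocols   = ["TLSv1.2"]
  }
}
```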

&lt;p&gt;Great, now you’re all set! But…what’s a default endpoint anyway? &lt;/p&gt;

&lt;h2&gt;
  
  
  api default endpoint vs custom domain name
&lt;/h2&gt;

&lt;p&gt;A default endpoint is the url that is created along with your api, and it is how you in fact access said api. The default endpoint is an amazon url, and it looks like this : api-id.execute-api.region.amazonaws.com. And while this endpoint is perfectly usable, it might not work for all use cases. If for some reason you need your api endpoint url to not be an amazon url, you need to use custom domains.&lt;/p&gt;

&lt;p&gt;Custom domains are quite simply a custom domain name that points to your api.&lt;br&gt;
To create a custom domain you need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to ACM, create a certificate for the domain in question,&lt;br&gt;
and validate the certificate in Route 53&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the custom domain in the api console. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under api mapping, link the custom domain to an api, and to a stage in the api&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to Route 53 and create an alias to API Gateway. (The api endpoint to specify can be found on the custom domain page in the api console, under “API Gateway domain name”.)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
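&lt;p&gt;For those managing this in Terraform, steps 2 through 4 might be sketched roughly like this (the domain name, certificate, api, and hosted zone references are all placeholders):&lt;/p&gt;

```hcl
resource "aws_api_gateway_domain_name" "custom" {
  domain_name              = "api.example.com"           # placeholder domain
  regional_certificate_arn = aws_acm_certificate.api.arn # hypothetical cert

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}

resource "aws_api_gateway_base_path_mapping" "dev" {
  api_id      = aws_api_gateway_rest_api.demo.id # hypothetical api
  stage_name  = "dev"
  domain_name = aws_api_gateway_domain_name.custom.domain_name
}

resource "aws_route53_record" "api_alias" {
  zone_id = aws_route53_zone.main.zone_id # hypothetical hosted zone
  name    = aws_api_gateway_domain_name.custom.domain_name
  type    = "A"

  alias {
    name                   = aws_api_gateway_domain_name.custom.regional_domain_name
    zone_id                = aws_api_gateway_domain_name.custom.regional_zone_id
    evaluate_target_health = false
  }
}
```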

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lazbcmlp3zz0nux8fz9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lazbcmlp3zz0nux8fz9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fie8tylgp0shb42lnyrma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fie8tylgp0shb42lnyrma.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’re all set, now you can access your api using the custom domain. Don’t forget to disable the default endpoint in the api settings if you’re not using it anymore.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important note when using CloudFront&lt;/strong&gt; : &lt;br&gt;
Since the custom domain name already points to a stage, you don't need to add any paths in CloudFront. This means that, unlike with default endpoints, you do not specify an origin path, or a behavior. &lt;/p&gt;

&lt;p&gt;Alternatively, if you have to specify a behavior because you have multiple origins, you can add a path to the custom domain name. By adding a “/dev” path to the api custom domain name, you can now keep the CF behavior of “/dev” as well.&lt;/p&gt;

&lt;p&gt;But how do you use multiple origins in CloudFront?&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudFront multi origin default behavior and ordered behavior
&lt;/h2&gt;

&lt;p&gt;CloudFront can handle multiple origins, and they don’t even have to have the same type! As an example, you can have 2 rest api origins, 1 websocket origin and an S3 bucket origin.&lt;/p&gt;

&lt;p&gt;The way CloudFront knows which request to forward to which origin is by using behaviors. In a behavior, you specify a path pattern; when it matches, the request is passed to the associated origin. &lt;/p&gt;

&lt;p&gt;So for example, you can specify a behavior path of “/api” to forward to your api origin, and a default behavior to forward to your S3 website.&lt;/p&gt;

&lt;p&gt;In this example, if you go to the url cloudfront-domain-name/api, you’re forwarded to the api. And if you go to any other path (including the cloudfront-domain-name/ base path), you’re forwarded to the S3 website.&lt;/p&gt;

&lt;p&gt;If you happen to be using Terraform, you need to note the following: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To specify an ordered behavior, you have to first specify a default behavior&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The ordered behaviors are created in the same order as in the code. This has an effect on the precedence (the order in which the behaviors get evaluated by CloudFront)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
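&lt;p&gt;As a rough sketch of how this looks in Terraform (origin ids and cache policy references are placeholders, and both blocks live inside an aws_cloudfront_distribution resource):&lt;/p&gt;

```hcl
# The order of ordered_cache_behavior blocks in the code determines
# CloudFront's evaluation precedence.
default_cache_behavior {
  target_origin_id       = "s3-website" # hypothetical origin id
  viewer_protocol_policy = "redirect-to-https"
  allowed_methods        = ["GET", "HEAD"]
  cached_methods         = ["GET", "HEAD"]
  cache_policy_id        = data.aws_cloudfront_cache_policy.optimized.id # hypothetical
}

ordered_cache_behavior {
  path_pattern           = "/api/*"
  target_origin_id       = "rest-api" # hypothetical origin id
  viewer_protocol_policy = "https-only"
  allowed_methods        = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
  cached_methods         = ["GET", "HEAD"]
  cache_policy_id        = data.aws_cloudfront_cache_policy.disabled.id # hypothetical
}
```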

&lt;p&gt;Now that we know how to add origins, let's look at the specifics of adding an SPA website hosted on S3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloudfront with an SPA
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Reaching your S3 static website
&lt;/h4&gt;

&lt;p&gt;Before we go into the specifics of an SPA, let's first discuss how to reach your website hosted on S3.&lt;/p&gt;

&lt;p&gt;First, you need to create an origin and put the S3 bucket url as the origin domain. &lt;/p&gt;

&lt;p&gt;Second, create an origin access control so CloudFront has permission to reach the S3 bucket. In Terraform, this might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_cloudfront_origin_access_control" "CloudFrontOriginAccessControl" { 
  name                              = "CF-access-control"
  description                       = "reach s3 static website"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Third, make sure the bucket permissions allow CF to access the bucket. The statement for the bucket policy might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Sid": "AllowCloudFrontServicePrincipal",
    "Effect": "Allow",
    "Principal": {
        "Service": "cloudfront.amazonaws.com"
    },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::your-s3-bucket-name/*",
    "Condition": {
        "StringEquals": {
            "AWS:SourceArn": "arn:aws:cloudfront::account-id:distribution/distribution-id"
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Important note about paths&lt;/strong&gt;: CloudFront always forwards the path as is. So, for example, if you have set the behavior path “/static-website” to route to your s3 static website origin, then you have to make sure your website code is in a folder named “static-website“ in S3. Generally, it's best to keep the behavior at “/”, and to keep the website code in the root of your S3 bucket.&lt;/p&gt;

&lt;h4&gt;
  
  
  Redirecting 403 errors in an SPA
&lt;/h4&gt;

&lt;p&gt;CloudFront can behave oddly with single page application websites: when a requested path doesn’t correspond to an actual object in the bucket (as is the case for client-side routes in an SPA), the S3 origin returns a 403 error. A solution for this is to redirect any 403 errors to the index page.&lt;/p&gt;

&lt;p&gt;If your CF distribution only has one origin, then this is fairly straightforward. You just need to add an error page to your distribution that redirects 403 errors to a 200 (which is the http code for an ‘OK’ response), and to display the index page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3d8e6ullyjvyir9apey.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3d8e6ullyjvyir9apey.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, when your CF distribution has multiple origins, it gets a little more complicated. This error page that we just created is applied to all origins, which means that in the case where our api origin was to throw a 403, then it would get redirected to the index page of the static website.&lt;/p&gt;

&lt;p&gt;If we want the redirects to only apply to one of our origins, then we need to add logic to our CloudFront. To do that, we need to use Lambda@Edge. The steps are: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The lambda needs to be created in us-east-1, and here is what the code could look like (Node.js runtime)&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

'use strict';
exports.handler = (event, context, callback) =&amp;gt; {
       const response = event.Records[0].cf.response;
    if (response.status === '403') {
        response.status = 302;
        response.statusDescription = 'page found';
        /* Drop the body*/
        response.body = '';
        response.headers['location'] = [{ key: 'Location', value: '/index.html' }];
    }
    callback(null, response);
};


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;In the lambda console, under Actions, choose “Deploy to Lambda@Edge”. Specify your distribution and deploy it as an “origin response”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In CF, in your behavior for the S3 origin, scroll down. Under origin response, add the lambda arn&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiox6v6zf4wrml5hi9lt0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiox6v6zf4wrml5hi9lt0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’re all set!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This blog was a non-exhaustive list of the problems you might face when working with CloudFront and api gateway, as well as when working with CloudFront and S3 static websites. My attempts to be brief still resulted in an 8 page blog, and I still feel I can add more detail, so feel free to ask any questions you have in the comments and I will be sure to get to them. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>api</category>
      <category>serverless</category>
    </item>
    <item>
      <title>CodePipeline live feedback on slack - with git tags</title>
      <dc:creator>Ziad Osman</dc:creator>
      <pubDate>Tue, 30 Aug 2022 10:03:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/codepipeline-live-feedback-on-slack-with-git-tags-1jno</link>
      <guid>https://dev.to/aws-builders/codepipeline-live-feedback-on-slack-with-git-tags-1jno</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This guide is for you if you have ever wanted to get live feedback on how your Pipelines on AWS are going. Additionally, if you keep track of your application version on git tags, I’m going to also show you how to retrieve them in CodeBuild and, as a bonus, how to also send them as slack messages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Credits
&lt;/h2&gt;

&lt;p&gt;While mathematicians and physicists stand on the shoulders of giants, software engineers stand on the shoulders of other software engineers. This guide would not be possible without &lt;a href="https://github.com/WesleyCharlesBlake" rel="noopener noreferrer"&gt;Wesley Charles Blake’s&lt;/a&gt; contribution, as he made the script for the slack bot. As for how to retrieve git tags from CodeBuild, that would not have been possible without the contribution of &lt;a href="https://itnext.io/how-to-access-git-metadata-in-codebuild-when-using-codepipeline-codecommit-ceacf2c5c1dc" rel="noopener noreferrer"&gt;Timothy Jones&lt;/a&gt;, who wrote a fantastic script to do so. &lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A configured AWS CLI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html" rel="noopener noreferrer"&gt;AWS SAM CLI&lt;/a&gt; (we will be deploying our slack bot script with the help of AWS SAM)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A working AWS CodePipeline (including CodeCommit, CodeBuild, and CodeDeploy)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sufficient credentials for the AWS User &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Cloning the slack bot script
&lt;/h2&gt;

&lt;p&gt;The first thing we need to do is clone &lt;a href="https://github.com/ZiadNOsman/aws-codepipeline-slack" rel="noopener noreferrer"&gt;the slack bot repo&lt;/a&gt; to a local machine.&lt;/p&gt;

&lt;p&gt;After cloning the repo, we should have these files inside of our directory&lt;br&gt;
&lt;code&gt;│   .gitignore&lt;br&gt;
│   build.gif&lt;br&gt;
│   LICENSE&lt;br&gt;
│   Pipfile&lt;br&gt;
│   README.md&lt;br&gt;
│   template.yml&lt;br&gt;
│&lt;br&gt;
└───src&lt;br&gt;
        .pylintrc&lt;br&gt;
        build_info.py&lt;br&gt;
        message_builder.py&lt;br&gt;
        notifier.py&lt;br&gt;
        requirements.txt&lt;br&gt;
        slack_helper.py&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring slack
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Creating a slack app
&lt;/h4&gt;

&lt;p&gt;First up, we need to create a slack app. For that, head over to &lt;a href="https://api.slack.com/apps" rel="noopener noreferrer"&gt;https://api.slack.com/apps&lt;/a&gt;.&lt;br&gt;
Click on “Create New App”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F424hdlwfhs1sm8rw2qcz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F424hdlwfhs1sm8rw2qcz.png" alt="Image description" width="800" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click “From Scratch”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwab33m3a9tangdznn8oo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwab33m3a9tangdznn8oo.png" alt="Image description" width="794" height="567"&gt;&lt;/a&gt;&lt;br&gt;
The App Name doesn’t matter; I picked “Pipeline progress”. For the workspace, choose yours from the drop-down list. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfgmwcnutx3pqvm9t6lx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfgmwcnutx3pqvm9t6lx.png" alt="Image description" width="761" height="731"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After your app is created, in the Sidebar, go to OAuth &amp;amp; Permissions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4qtcye52vdwb2xpjwho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4qtcye52vdwb2xpjwho.png" alt="Image description" width="341" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now scroll down to Scopes and add the following permissions one by one: channels:history, channels:manage, channels:read, chat:write, chat:write.customize, chat:write.public, groups:history, groups:read, im:read, links:write, mpim:read&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh01yop742sv93zae3n78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh01yop742sv93zae3n78.png" alt="Image description" width="710" height="1170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, scroll to the top of the page and generate a Bot User OAuth Token. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: make note of the OAuth Token, as we will need it soon.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5l9x404zbojgdic0izcj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5l9x404zbojgdic0izcj.png" alt="Image description" width="800" height="621"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, press the button to install the app to your workspace. This is what the button will look like after a successful installation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuv3jrlnc385tqlmzek0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuv3jrlnc385tqlmzek0p.png" alt="Image description" width="263" height="78"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Add app to slack
&lt;/h4&gt;

&lt;p&gt;Go to slack and create a new channel called builds. This is the default channel name that the script expects. If you name your channel anything else, I will show you where to specify it in a later step.&lt;/p&gt;

&lt;p&gt;Inside your builds channel, in the message box, press /.&lt;br&gt;
This opens a search box. Search for apps and click on “Add apps to this channel”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9tdgg7oi0d43xxwe3th.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9tdgg7oi0d43xxwe3th.png" alt="Image description" width="563" height="533"&gt;&lt;/a&gt;&lt;br&gt;
On the next page, search for your app by its name and add it to the channel. In my case, the name is Pipeline progress.&lt;br&gt;
This is what it should look like after adding it to the channel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7kp1f956mzl33683m4s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7kp1f956mzl33683m4s.png" alt="Image description" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying the script to AWS
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Note: For this step, make sure you have the AWS CLI installed and configured, along with the SAM CLI.&lt;/em&gt; &lt;br&gt;
To deploy, open a terminal in the script directory and first run:&lt;br&gt;
&lt;code&gt;sam build&lt;/code&gt;&lt;br&gt;
followed by:&lt;br&gt;
&lt;code&gt;sam deploy --guided&lt;/code&gt;&lt;br&gt;
This will open an interactive prompt. &lt;/p&gt;

&lt;p&gt;For the stack name, I chose “aws-codepipeline-slack”. &lt;br&gt;
&lt;code&gt;Stack Name [sam-app]: aws-codepipeline-slack&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For the region, if the default is fine press enter; otherwise, specify the region.&lt;br&gt;
&lt;code&gt;AWS Region [eu-west-1]:&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For SlackBotUserOAuthAccessToken, paste the OAuth token we created in a previous step. Note that this is a hidden field, meaning that what you paste won’t show on the screen.&lt;br&gt;
&lt;code&gt;Parameter SlackBotUserOAuthAccessToken:&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For SlackChannel, if you kept the channel name as builds, just press enter. Otherwise, specify the channel name.&lt;br&gt;
&lt;code&gt;Parameter SlackChannel [builds]:&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For SlackBotName, this is the name of the bot that will send pipeline updates. I left it at default and pressed enter.&lt;br&gt;
&lt;code&gt;Parameter SlackBotName [PipelineBuildBot]:&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For SlackBotIcon, this is the icon of the bot that will send pipeline updates. I left it at default and pressed enter.&lt;br&gt;
&lt;code&gt;Parameter SlackBotIcon [:robot_face:]:&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For confirming changes before deploy, if you select y, SAM will show you the resource changes and prompt you to accept them before deploying, each time you run sam deploy. &lt;br&gt;
&lt;code&gt;#Shows you resources changes to be deployed and require a 'Y' to initiate deploy&lt;br&gt;
        Confirm changes before deploy [y/N]:&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For SAM permissions, keep it at the default Y&lt;br&gt;
&lt;code&gt;#SAM needs permission to be able to create roles to connect to the resources in your template&lt;br&gt;
        Allow SAM CLI IAM role creation [Y/n]:&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For disable rollback, choose according to your use case; it won’t matter much if you’re only planning to deploy once. &lt;br&gt;
&lt;code&gt;#Preserves the state of previously provisioned resources when an operation fails&lt;br&gt;
        Disable rollback [y/N]:&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;For authorization, select Y &lt;br&gt;
&lt;code&gt;Notifier Function Url may not have authorization defined, Is this okay? [y/N]: Y&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For saving arguments to a config file, select Y. This will make future deployments faster, since your answers are saved as defaults. &lt;br&gt;
&lt;code&gt;Save arguments to configuration file [Y/n]: Y&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Press enter to keep the default config file as samconfig.toml&lt;br&gt;
&lt;code&gt;SAM configuration file [samconfig.toml]:&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Press enter to keep the configuration environment as default&lt;br&gt;
&lt;code&gt;SAM configuration environment [default]:&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This should be enough to successfully deploy the script. If you selected Y for “Confirm changes before deploy” then you’ll get an additional prompt confirming if you want to deploy. Select Y on it and you’re good to go.&lt;/p&gt;
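&lt;p&gt;For reference, the answers you save end up in samconfig.toml in the script directory. Here is a hypothetical sketch of what it might look like; your values will reflect your own answers, and sensitive parameters like the OAuth token are best kept out of version control:&lt;/p&gt;

```toml
# samconfig.toml -- generated by `sam deploy --guided` (illustrative values)
version = 0.1

[default.deploy.parameters]
stack_name = "aws-codepipeline-slack"
region = "eu-west-1"
confirm_changeset = false
capabilities = "CAPABILITY_IAM"
parameter_overrides = "SlackChannel=\"builds\" SlackBotName=\"PipelineBuildBot\" SlackBotIcon=\":robot_face:\""
```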

&lt;p&gt;With this, we’re done! You are now able to get updates on the states of your pipelines.&lt;br&gt;
You can check out your new lambda function by going to Lambda in your AWS console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjemkswwmcc8j4odfn4ke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjemkswwmcc8j4odfn4ke.png" alt="Image description" width="800" height="71"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding support for git tags
&lt;/h2&gt;

&lt;p&gt;If you keep track of your application version using git tags, and you’d like to also be notified on slack of the version being deployed, read along and follow these steps.&lt;br&gt;
We need to enable a function url on our newly deployed lambda function so that we can call the function from CodeBuild and pass it the git tag. We will also need to add some permissions to the CodeBuild service role. Finally, we will need to add a bash script to our CodeCommit repo that enables CodeBuild to clone the repo and retrieve the git tag from it. If you want to learn more about why we need this workaround instead of just retrieving the git tags from our CodeCommit repo, I highly suggest reading &lt;a href="https://itnext.io/how-to-access-git-metadata-in-codebuild-when-using-codepipeline-codecommit-ceacf2c5c1dc" rel="noopener noreferrer"&gt;Timothy Jones’s article&lt;/a&gt;; he is the person behind the bash script we will be using.&lt;/p&gt;

&lt;h4&gt;
  
  
  Enabling function url
&lt;/h4&gt;

&lt;p&gt;To enable the function url on our slack script, open the function code locally in your favorite IDE, navigate to template.yml, and uncomment these two lines (lines 28 and 29).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9329nh41kpot0m8vr9am.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9329nh41kpot0m8vr9am.png" alt="Image description" width="713" height="108"&gt;&lt;/a&gt;&lt;br&gt;
And that’s it! Now follow the same steps as above to deploy:&lt;br&gt;
&lt;code&gt;sam build&lt;/code&gt;&lt;br&gt;
followed by:&lt;br&gt;
&lt;code&gt;sam deploy --guided&lt;/code&gt;&lt;br&gt;
Now if you go back to your AWS console and open your lambda function, you should find your function url under Configuration -&amp;gt; Function URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmeva2uztiju3dus99ve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmeva2uztiju3dus99ve.png" alt="Image description" width="800" height="197"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Note: make note of your function URL as we will need it in the next step&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Adding the bash script to CodeCommit
&lt;/h4&gt;

&lt;p&gt;As explained, we will be using a bash script in CodeBuild to help us retrieve the git tag.&lt;br&gt;
Add &lt;a href="https://raw.githubusercontent.com/TimothyJones/codepipeline-git-metadata-example/master/scripts/codebuild-git-wrapper.sh" rel="noopener noreferrer"&gt;this file&lt;/a&gt; to your CodeCommit repo’s base directory. Make sure the file is named codebuild-git-wrapper.sh.&lt;/p&gt;

&lt;h4&gt;
  
  
  Adding the necessary permissions to CodeBuild
&lt;/h4&gt;

&lt;p&gt;You will need to add two policies to your CodeBuild service role.&lt;br&gt;
The first one enables CodeBuild to perform a git pull on the CodeCommit repo. This is the policy template. &lt;strong&gt;Make sure to add your repo ARN under Resource&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{&lt;br&gt;
    "Version": "2012-10-17",&lt;br&gt;
    "Statement": [&lt;br&gt;
        {&lt;br&gt;
            "Sid": "VisualEditor0",&lt;br&gt;
            "Effect": "Allow",&lt;br&gt;
            "Action": "codecommit:GitPull",&lt;br&gt;
            "Resource": "YOUR_REPO_ARN"&lt;br&gt;
        }&lt;br&gt;
    ]&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The second policy will give CodeBuild permission to invoke the lambda function url. This is the policy template. &lt;strong&gt;Make sure to replace the resource with your lambda function ARN&lt;/strong&gt;.&lt;br&gt;
&lt;code&gt;{&lt;br&gt;
    "Version": "2012-10-17",&lt;br&gt;
    "Statement": [&lt;br&gt;
        {&lt;br&gt;
            "Sid": "VisualEditor0",&lt;br&gt;
            "Effect": "Allow",&lt;br&gt;
            "Action": "lambda:InvokeFunctionUrl",&lt;br&gt;
            "Resource": "YOUR_FUNCTION_ARN"&lt;br&gt;
        }&lt;br&gt;
    ]&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;
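&lt;p&gt;If you prefer the CLI over the console, both policies can also be attached as inline policies on the service role. A minimal sketch for the first policy; the role and policy names are placeholders, and the aws command assumes a configured AWS CLI:&lt;/p&gt;

```shell
# Write the CodeCommit pull policy to a file (replace YOUR_REPO_ARN with your repo's ARN)
cat > codecommit-gitpull-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "codecommit:GitPull",
            "Resource": "YOUR_REPO_ARN"
        }
    ]
}
EOF
# Attach it to the CodeBuild service role (names are placeholders):
# aws iam put-role-policy --role-name YOUR_CODEBUILD_SERVICE_ROLE \
#     --policy-name AllowGitPull \
#     --policy-document file://codecommit-gitpull-policy.json
echo "policy document written"
```

&lt;p&gt;The second policy can be attached the same way, swapping in the lambda:InvokeFunctionUrl document and your function ARN.&lt;/p&gt;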

&lt;h4&gt;
  
  
  Modify your buildspec.yml
&lt;/h4&gt;

&lt;p&gt;Finally, we will be adding commands to buildspec.yml to clone the repo, retrieve the git tag, and pass the git tag to our function url.&lt;br&gt;
Go to your buildspec.yml file, and in the first step of the build stage, add the following commands.&lt;br&gt;
&lt;code&gt;build:&lt;br&gt;
    commands:&lt;br&gt;
      - echo get version&lt;br&gt;
      - /bin/bash codebuild-git-wrapper.sh YOUR_REPO_URL YOUR_BRANCH_NAME&lt;br&gt;
      #get release version from git tag&lt;br&gt;
      - RELEASE_VERSION=$(git tag --points-at HEAD)&lt;br&gt;
      #send git version to slack&lt;br&gt;
      - curl YOUR_FUNCTION_URL/?git-tag=$RELEASE_VERSION -o /dev/null&lt;/code&gt;&lt;br&gt;
Make sure to change the following fields:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;YOUR_REPO_URL: your CodeCommit repository URL. You can retrieve this by going to CodeCommit and clicking HTTPS under Clone URL.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7kesauw0yeemt2ov2um.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7kesauw0yeemt2ov2um.png" alt="Image description" width="298" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;YOUR_BRANCH_NAME: the name of the branch that triggers the CodePipeline. Normally this is main, but check your pipeline configuration to be sure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;YOUR_FUNCTION_URL: the lambda function url we created and took note of in a previous step.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
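&lt;p&gt;To see in isolation what the RELEASE_VERSION line does, you can reproduce it in a throwaway local repo. This only demonstrates the git command itself, which CodeBuild runs after the wrapper script has restored the repo’s git metadata:&lt;/p&gt;

```shell
# Create a throwaway repo containing a single tagged commit
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "release commit"
git tag v1.2.3
# Same command as in the buildspec: list tags pointing at the current commit
RELEASE_VERSION=$(git tag --points-at HEAD)
echo "$RELEASE_VERSION"   # prints v1.2.3
```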

&lt;h2&gt;
  
  
  Security considerations
&lt;/h2&gt;

&lt;p&gt;As it stands, your function url can be invoked by anyone who has it, letting them send messages to your slack channel. You can secure your function either by using IAM authentication or by putting your lambda function behind API Gateway and using API keys.&lt;/p&gt;
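&lt;p&gt;If you go the IAM route, the quickest change is in template.yml: SAM lets you set the auth type on the function url itself. A sketch, assuming a logical ID like the one below (with AWS_IAM, the curl call in the buildspec must then sign its requests with credentials allowed to invoke the url):&lt;/p&gt;

```yaml
# template.yml (fragment) -- the logical ID here is illustrative
NotifierFunction:
  Type: AWS::Serverless::Function
  Properties:
    # ... existing properties unchanged ...
    FunctionUrlConfig:
      AuthType: AWS_IAM   # callers must sign requests with SigV4
```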

&lt;h2&gt;
  
  
  Done!
&lt;/h2&gt;

&lt;p&gt;Congratulations! You now have a slack bot that will give you updates on the state of your CodePipeline pipelines. And if you followed the additional steps, it will also message you the release version from your git tags.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhpci64soh3dwnyngv9a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhpci64soh3dwnyngv9a.png" alt="Image description" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lxdp6od6bws4bd2mghd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lxdp6od6bws4bd2mghd.png" alt="Image description" width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>git</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
