<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ali Haydar</title>
    <description>The latest articles on DEV Community by Ali Haydar (@ahaydar).</description>
    <link>https://dev.to/ahaydar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F351064%2F58900f43-d4a0-4d44-91ac-7a31a055a47e.jpeg</url>
      <title>DEV Community: Ali Haydar</title>
      <link>https://dev.to/ahaydar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ahaydar"/>
    <language>en</language>
    <item>
      <title>Exploring the Inner Workings of SST Live Lambda</title>
      <dc:creator>Ali Haydar</dc:creator>
      <pubDate>Sun, 25 Feb 2024 06:10:31 +0000</pubDate>
      <link>https://dev.to/aws-builders/exploring-the-inner-workings-of-sst-live-lambda-8hb</link>
      <guid>https://dev.to/aws-builders/exploring-the-inner-workings-of-sst-live-lambda-8hb</guid>
      <description>&lt;p&gt;In the evolving world of serverless architecture, the developer experience is a commonly debated topic. In particular, the challenges associated with testing locally and obtaining a fast feedback loop.&lt;/p&gt;

&lt;p&gt;Nowadays, we have a few technologies that make it convenient to develop serverless applications locally, enabling a swift integration between your local environment and the cloud (e.g. &lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-sync.html" rel="noopener noreferrer"&gt;SAM Sync&lt;/a&gt;, &lt;a href="https://aws.amazon.com/blogs/developer/increasing-development-speed-with-cdk-watch/" rel="noopener noreferrer"&gt;CDK Watch&lt;/a&gt; and &lt;a href="https://docs.sst.dev/live-lambda-development" rel="noopener noreferrer"&gt;SST Live Lambda&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;In a previous &lt;a href="https://ali-haydar.medium.com/exploring-local-development-with-sst-for-serverless-applications-0160f32de10f" rel="noopener noreferrer"&gt;article&lt;/a&gt;, I demonstrated how to use the SST Live Lambda with a small project that uses a deployed endpoint in AWS API Gateway to invoke a locally developed Lambda.&lt;/p&gt;

&lt;p&gt;I wanted to know how this works. In this post, I delve into the underlying architecture of the SST Live Lambda functionality. The official docs are available &lt;a href="https://docs.sst.dev/live-lambda-development#how-it-works" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requests are proxied to the local machine
&lt;/h2&gt;

&lt;p&gt;SST docs state that the requests are proxied to the local machine, which allows SST to run the local version of the function with the event, context and credentials of the remote Lambda function. The communication between local and remote happens using &lt;a href="https://docs.aws.amazon.com/iot/latest/developerguide/protocols.html" rel="noopener noreferrer"&gt;AWS IoT over WebSocket&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let's check the details!&lt;/p&gt;

&lt;p&gt;Create a new SST project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx create-sst@latest
? Project name demo-sst
✔ Copied template files

Next steps:
- cd demo-sst
- npm install (or pnpm install, or yarn)
- npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
* Run `npm install` to install the dependencies
* Update the `stacks` directory with the resources you'd like to provision
* Run `npm run dev` to start the project

Two things happen When you run `npm run dev`:

### SST creates a bootstrap stack in CloudFormation

The stack creates resources that are needed in the deployment process of your app (more details [here](https://docs.sst.dev/advanced/bootstrapping)).

Note that each app needs to be bootstrapped once, so the next time we run `npm run dev` this step won't be executed.

As this article is focused on the Live Lambda, we will only focus on the bootstrap script a little. In summary, the following resources were created:

- An S3 bucket that stores critical information about the apps (e.g. prod/dev mode, the config needed to store secrets and variables, etc.).
- A Lambda function that automatically deletes the S3 bucket objects when they are no longer needed
- A lambda function that handles the collection and uploading of metadata to the S3 bucket
- An EventBridge rule that triggers the MetadataHandler Lambda function

### SST Deploys Your Application Resources

SST deploys the application resources as defined in the `stacks` directory, enabling the usage of the Live Lambda feature.

Let's look at some details beyond what's mentioned in the SST documentation.

I deployed a simple stack that includes API Gateway and a Lambda function:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { StackContext, Api } from 'sst/constructs';

export function API({ stack }: StackContext) {
  const api = new Api(stack, 'api', {
    routes: {
      'GET /': 'packages/functions/src/lambda.handler',
    },
  });

  stack.addOutputs({
    ApiEndpoint: api.url,
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
You can see the resources in CloudFormation and verify that the API Gateway and the Lambda function were created successfully.

First, the deployment will include all the resources defined in the `stacks` directory but will stub the lambda function by one defined by SST (not the real lambda function having your code). The diagram would look as follows: 
![SST Live Lambda Deployment](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dsbdi5yk381ioqqesfow.png)

- The "Deployed Stack" is what you defined but with a stub lambda function
- The AWS account is configured with a default IoT endpoint in each region. SST will use that endpoint based on the region configured `sst.config.ts`
- SST starts a local WebSocket client and connects to that IoT endpoint.

Once the stack is deployed, we're outputting the API Gateway Endpoint that we can request. Send a GET request to this endpoint (e.g. `curl https://ser55ggwrf.execute-api.us-east-1.amazonaws.com`). Below is how the flow works:
![SST Live Lambda Invocation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9xrv2b9ure21toqwf1w.png)

There are still more details to be discovered, especially on how the comms happen between the IoT endpoint and the stub Lambda, and the response relayed back to that Lambda.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
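&lt;p&gt;As a rough mental model of that relay (a toy sketch: the bus, topic names and payload shapes are all hypothetical and much simpler than SST's actual protocol over AWS IoT), think of two clients on a shared pub/sub bus. The deployed stub publishes the invocation on a topic; a local client runs your handler and publishes the result back:&lt;/p&gt;

```typescript
// Toy in-memory bus standing in for AWS IoT MQTT-over-WebSocket.
class Bus {
  private subs = new Map();
  subscribe(topic: string, fn: Function) { this.subs.set(topic, fn); }
  publish(topic: string, msg: unknown) {
    const fn = this.subs.get(topic);
    if (fn) fn(msg);
  }
}

const bus = new Bus();
const results: string[] = [];

// Local machine: runs your actual handler code when an invocation arrives.
bus.subscribe("request", (event: any) => {
  bus.publish("response", { statusCode: 200, body: "Hello from local code: " + event.path });
});

// Deployed stub Lambda: relays whatever the local handler produced.
bus.subscribe("response", (res: any) => { results.push(res.body); });

bus.publish("request", { path: "/" });
console.log(results[0]); // prints "Hello from local code: /"
```

&lt;p&gt;The real flow additionally carries the event, context and credentials of the remote function, as the docs note above, but the request/response shape of the relay is the same.&lt;/p&gt;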

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>softwaredevelopment</category>
      <category>testing</category>
    </item>
    <item>
      <title>Lambda to S3: Better Reliability in High-Volume Scenarios</title>
      <dc:creator>Ali Haydar</dc:creator>
      <pubDate>Tue, 06 Feb 2024 04:51:29 +0000</pubDate>
      <link>https://dev.to/aws-builders/lambda-to-s3-better-reliability-in-high-volume-scenarios-5043</link>
      <guid>https://dev.to/aws-builders/lambda-to-s3-better-reliability-in-high-volume-scenarios-5043</guid>
      <description>&lt;p&gt;I have an API Gateway endpoint that routes requests to a lambda function that writes data into an S3 bucket.&lt;/p&gt;

&lt;p&gt;In high-volume scenarios, when handling a significant influx of requests, there is an increased risk of failures when writing data to S3. This could be due to various factors such as network issues, S3 service disruptions, concurrent write conflicts, or exceeding capacity limits (Amazon S3 supports up to 3,500 PUT requests per second to a single prefix, and rapid concurrent PUT requests to the same key can result in a 503 response). Addressing and mitigating these potential failure points is therefore essential to keep data writes to S3 reliable.&lt;/p&gt;
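&lt;p&gt;Because that limit applies per prefix, one mitigation is to spread keys across several prefixes. As a minimal sketch (the &lt;code&gt;shard-N/&lt;/code&gt; naming and fan-out are hypothetical, not part of any real project):&lt;/p&gt;

```typescript
import { createHash } from "node:crypto";

// Derive a short hash shard for each key so that writes spread across
// several S3 key prefixes instead of hammering a single one.
function shardedKey(key: string, fanout: number = 8): string {
  const hex = createHash("md5").update(key).digest("hex");
  // Map the first hex byte onto one of `fanout` buckets.
  const shard = parseInt(hex.slice(0, 2), 16) % fanout;
  return "shard-" + shard + "/" + key;
}

console.log(shardedKey("invoice-42")); // same key always maps to the same shard
```

&lt;p&gt;Because the shard is derived from a hash of the key, the same key always lands in the same prefix, so reads remain deterministic.&lt;/p&gt;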

&lt;p&gt;An example where this might occur is a webhook that hits your endpoint whenever a change happens in an external system. I've experienced this with tools such as Slack, GitHub, Jira, etc. Considering the large volume of events these tools generate, hitting your endpoint frequently at scale could pose challenges.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvo0ltxinepqujmcev1o7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvo0ltxinepqujmcev1o7.png" alt="Pipelines" width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Settle in with your favourite beverage as we're about to explore some details together; this post is set to be a bit lengthier than usual.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build the project
&lt;/h2&gt;

&lt;p&gt;We will start by building a project with &lt;a href="https://sst.dev/"&gt;SST&lt;/a&gt; that provisions an API Gateway, a Lambda, and an S3 bucket. Once implemented, we'll look into testing for concurrent write conflicts or exceeding capacity limits.&lt;/p&gt;

&lt;p&gt;The architecture looks as follows: &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62zevfat4fllxih6na3l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62zevfat4fllxih6na3l.png" alt="Architecture diagram apigw-lambda-s3" width="800" height="93"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the following to create the project locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx create-sst@latest
Ok to proceed? (y) y
? Project name s3-writes-reliability
✔ Copied template files

Next steps:
- cd s3-writes-reliability
- npm install (or pnpm install, or yarn)
- npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The infrastructure lives under &lt;code&gt;stacks&lt;/code&gt; and the actual Lambda code lives under &lt;code&gt;packages&lt;/code&gt;; delete anything you don't need from both.&lt;/p&gt;

&lt;p&gt;Create the following stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { StackContext, Api, Bucket, Config } from "sst/constructs";
import * as iam from "aws-cdk-lib/aws-iam";

export function API({ stack }: StackContext) {

  const bucket = new Bucket(stack, "Uploads");

  const api = new Api(stack, "api", {
    routes: {
      "POST /todo": "packages/functions/src/lambda.handler",
    },
  });

  const BUCKET_NAME = new Config.Parameter(stack, "BUCKET_NAME", {
    value: bucket.bucketName,
  });

  api.attachPermissions([
    new iam.PolicyStatement({
      actions: [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:ListBucket",
        "s3:GetObject",
      ],
      effect: iam.Effect.ALLOW,
      resources: [
        bucket.bucketArn + "/*",
      ],
    }),
  ]);

  api.bindToRoute("POST /todo", [BUCKET_NAME, bucket]);

  stack.addOutputs({
    ApiEndpoint: api.url,
  });

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This stack will provision the API Gateway endpoint, the lambda function and the S3 bucket to which we will write.&lt;/p&gt;

&lt;p&gt;Now let's update the lambda function under &lt;code&gt;packages/functions/src/lambda.ts&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { ApiHandler } from "sst/node/api";
import { Config } from "sst/node/config";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";

const client = new S3Client({});

export const handler = ApiHandler(async (_evt) =&amp;gt; {

  const params = {
    Body: _evt.body,
    Bucket: Config.BUCKET_NAME,
    Key: Math.random().toString(),
   };

  const response  = await client.send(new PutObjectCommand(params));

  return {
    statusCode: 200,
    body: JSON.stringify(response)
  };
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We built an API endpoint that invokes a Lambda, which in turn puts an object in S3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deliberate Error Creation
&lt;/h2&gt;

&lt;p&gt;We will try to hit the S3 capacity limit (3,500 PUT requests per second per prefix). The purpose of this exercise is to mirror real-life scenarios where a Lambda function is invoked at scale, allowing us to explore the behaviour and performance of the system under heavy load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First attempt - lambda making lots of requests - failed&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To achieve this, I first tried to modify the Lambda function to simulate the generation of this significant volume of requests. I changed the lambda function to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { ApiHandler } from "sst/node/api";
import { Config } from "sst/node/config";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";

const client = new S3Client({});

async function uploadObject(key:any, data: any) {
  const params = {
    Bucket: Config.BUCKET_NAME,
    Key: key,
    Body: data
  };

  try {
    await client.send(new PutObjectCommand(params));
    console.log(`Successfully uploaded: ${key}`);
  } catch (error) {
    console.error(`Error uploading ${key}: ${error}`);
  }
}



export const handler = ApiHandler(async (_evt) =&amp;gt; {
  try {

    const promises = [];

    for (let i = 1; i &amp;lt;= 5000; i++) {
      promises.push(uploadObject(String(i), _evt.body));
    }
    await Promise.all(promises);
  } catch (error) {
    console.error("Unhandled error in handler:", error);

    return {
      statusCode: 500,
      body: JSON.stringify({ error: "Internal Server Error", details: error.message }),
    };
  }
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterwards, I used &lt;code&gt;curl&lt;/code&gt; to invoke the function (Change the API URL to point to the endpoint that SST outputted upon starting the project):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X POST -H "Content-Type: application/json" -d '{"key1": "value1", "key2": "value2"}' https://sdfsdfsdf.execute-api.ap-southeast-2.amazonaws.com/todo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I didn't get any &lt;code&gt;503 Slow Down&lt;/code&gt; from S3, even though the number of requests is high - that's because the 5,000 uploads above don't all complete within a single second, so the per-second request rate stays under the limit.&lt;/p&gt;
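&lt;p&gt;The arithmetic makes this concrete (the elapsed time is an assumed, illustrative number):&lt;/p&gt;

```typescript
// Hypothetical numbers: even "parallel" Promise.all uploads complete over
// several seconds of wall-clock time, so the effective request rate can
// stay well under the 3,500 PUT/s per-prefix limit.
const uploads = 5000;
const elapsedSeconds = 4;                 // assumed duration of the batch
const effectiveRate = uploads / elapsedSeconds;
console.log(effectiveRate);               // 1250, well under 3,500
```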

&lt;p&gt;&lt;strong&gt;Second attempt - loading the API endpoint - successful&lt;/strong&gt;&lt;br&gt;
The second attempt invokes the Lambda function with lots of simultaneous requests. This would spin up multiple Lambda instances to cater for the load, and each instance would hit S3 at the same time. Let's see if that'll work.&lt;/p&gt;

&lt;p&gt;I'll use &lt;a href="https://jmeter.apache.org/"&gt;Apache JMeter&lt;/a&gt; to do this experiment:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw45mjfrywv9dchk5yhrj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw45mjfrywv9dchk5yhrj.png" alt="Jmeter test" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This test resulted in many errors related to the Lambda function being throttled due to exceeding the rate limit &lt;code&gt;"errorType":"ThrottlingException","errorMessage":"Rate exceeded"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I've changed the JMeter threads to 1000 to match the default limit of 1,000 concurrent executions allowed by Lambda in a single region. At the same time, I kept using &lt;code&gt;Promise.all&lt;/code&gt; in the Lambda code to make parallel requests to S3. With this combination, I managed to get a few &lt;code&gt;SlowDown: Please reduce your request rate.&lt;/code&gt; errors.&lt;/p&gt;

&lt;p&gt;Using JMeter is a better way to simulate a real-world scenario, such as a Slack or GitHub webhook hitting the endpoint.&lt;/p&gt;
&lt;h2&gt;
  
  
  Build a more reliable system
&lt;/h2&gt;

&lt;p&gt;We managed to get the S3 service to return an error that we're exceeding the rate limit. How can we build a more reliable architecture that would help us avoid this kind of error?&lt;/p&gt;

&lt;p&gt;We have a few options here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Distribute objects across multiple S3 bucket prefixes as the rate limit is set by prefix (this means changing the behaviour of the application, which might not be something you're planning to do)&lt;/li&gt;
&lt;li&gt;Use a retry mechanism - this could be handled on the Lambda side, where we retry submitting requests that return a &lt;code&gt;503&lt;/code&gt; error. This is a great solution, but it depends on your traffic and whether postponing the request would work. It's worth testing this solution, especially since it doesn't require a big effort.&lt;/li&gt;
&lt;li&gt;Alleviate the load on S3 by introducing a queue between your APIGW and Lambda function. We'll implement this solution.&lt;/li&gt;
&lt;/ul&gt;
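&lt;p&gt;For completeness, the retry option could look roughly like this. It's a minimal, self-contained sketch: &lt;code&gt;flaky&lt;/code&gt; stands in for the real &lt;code&gt;client.send(new PutObjectCommand(...))&lt;/code&gt; call, and the error name matches the &lt;code&gt;SlowDown&lt;/code&gt; errors we saw earlier:&lt;/p&gt;

```typescript
// Retry a promise-returning call with exponential backoff, but only for
// S3 throttling errors (name "SlowDown"); everything else is rethrown.
async function withRetries(send: Function, maxAttempts = 5, baseDelayMs = 10) {
  let attempt = 0;
  for (;;) {
    try {
      return await send();
    } catch (err: any) {
      attempt = attempt + 1;
      if (err.name !== "SlowDown" || attempt >= maxAttempts) throw err;
      // Exponential backoff: 10ms, 20ms, 40ms, ...
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Demo: fail twice with SlowDown, then succeed on the third attempt.
let calls = 0;
const flaky = async () => {
  calls = calls + 1;
  if (calls > 2) return "ok";
  const err = new Error("Please reduce your request rate.");
  err.name = "SlowDown";
  throw err;
};

withRetries(flaky).then((res) => console.log(res, "after", calls, "attempts"));
```

&lt;p&gt;Note that the AWS SDK for JavaScript v3 already retries throttling errors itself; tuning the client's &lt;code&gt;maxAttempts&lt;/code&gt; and &lt;code&gt;retryMode&lt;/code&gt; options is often enough before hand-rolling a loop like this.&lt;/p&gt;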

&lt;p&gt;The architecture looks as follows: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnwi1vmulrkcu0ljbayxm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnwi1vmulrkcu0ljbayxm.png" alt="Architecture Diagram - apigw-sqs-lambda-s3" width="800" height="79"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this diagram, SQS serves as a buffer between APIGW and Lambda. Instead of directly processing incoming requests, we enqueue them in an SQS queue. This decouples the incoming request rate from the rate at which Lambda and S3 can process the requests, which smoothes out spikes in the incoming request rate, reducing the immediate load on S3 and providing a more controlled flow of data. If there are temporary issues with S3 or Lambda, messages can be retained in the SQS queue and retried, providing a more resilient system.&lt;/p&gt;

&lt;p&gt;Let's create this additional infrastructure. Modify the previously created stack as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  const queue = new Queue(stack, "Queue", {
    consumer: {
      function: {
        handler: "packages/functions/src/lambda.handler",
        timeout:10,
        environment:{ BUCKET_NAME: bucket.bucketName },
      }
    }
  });

  queue.attachPermissions([
    new iam.PolicyStatement({
      actions: [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:ListBucket",
        "s3:GetObject",
      ],
      effect: iam.Effect.ALLOW,
      resources: [
        bucket.bucketArn + "/*",
      ],
    }),
  ]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a queue and repurposes the Lambda we created to poll that queue. In addition, we attached the permission to put objects in S3 to the consumer Lambda.&lt;/p&gt;

&lt;p&gt;Now, let's update the API route we have to write into the queue rather than triggering the Lambda function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; const api = new Api(stack, "api", {
    routes: {
      "POST /": {
        type: "aws",
        cdk: {
          integration: {
            subtype: HttpIntegrationSubtype.SQS_SEND_MESSAGE,
            parameterMapping: ParameterMapping.fromObject({
             QueueUrl: MappingValue.custom(queue.queueUrl),
             MessageBody: MappingValue.custom("$request.body"),
            }),
          }
        }
      }
    },
  });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once done, let's update the Lambda function. We need to change the event type to &lt;code&gt;SQSEvent&lt;/code&gt;, then pass each SQS record's body to the &lt;code&gt;uploadObject&lt;/code&gt; function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      const promises = _evt.Records.map(record =&amp;gt; uploadObject(record.messageId,record.body));
      await Promise.all(promises);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Is that enough to alleviate the load on S3 so that we stop getting request-rate errors? Let's try again with JMeter.&lt;/p&gt;

&lt;p&gt;Unfortunately, I still encountered a few &lt;code&gt;|  + Error uploading 9: SlowDown: Please reduce your request rate.&lt;/code&gt; errors. Setting the event source &lt;code&gt;batchSize&lt;/code&gt; to 5 fixed the S3 rate-limit errors in my tests, though it might not be enough when requests arrive more aggressively. In that case, we can also set a &lt;code&gt;deliveryDelay&lt;/code&gt; parameter on the queue, which postpones the delivery of new messages for some time.&lt;/p&gt;
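&lt;p&gt;For reference, here's roughly where those two knobs live in the stack. This is a sketch against SST v2's &lt;code&gt;Queue&lt;/code&gt; construct, assuming the surrounding stack function from earlier; double-check the property names against your SST and CDK versions:&lt;/p&gt;

```typescript
import { Duration } from "aws-cdk-lib";
import { Queue } from "sst/constructs";

// Smaller batches per consumer invocation, plus a delivery delay to
// postpone new messages and smooth out bursts before they reach S3.
const queue = new Queue(stack, "Queue", {
  cdk: {
    queue: { deliveryDelay: Duration.seconds(10) },
  },
  consumer: {
    function: "packages/functions/src/lambda.handler",
    cdk: { eventSource: { batchSize: 5 } },
  },
});
```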

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In addressing challenges related to high-volume data writes on S3, we’ve proposed a more reliable architecture involving queues. However, this solution isn’t one-size-fits-all. The queue solution has its own problems to consider — how to handle failed uploads, and should a dead letter queue be considered? Is the message size limit in SQS a problem?&lt;/p&gt;

&lt;p&gt;Building a more reliable system is an iterative process, and this article provides a foundation for enhancing the robustness of serverless architectures in handling substantial data loads. Ultimately, developers are encouraged to adapt and refine these solutions based on their specific use cases.&lt;/p&gt;

&lt;p&gt;Finally, I wouldn’t recommend rushing into building this kind of solution from the first day, unless you’re dealing with some serious data traffic. As with all software development, it’s important to be pragmatic and iteratively build your system blocks as you need.&lt;/p&gt;

&lt;p&gt;How do you enhance the resilience of your system? Share your thoughts in the comments.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>typescript</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Exploring Local Development with SST for Serverless Applications</title>
      <dc:creator>Ali Haydar</dc:creator>
      <pubDate>Fri, 29 Dec 2023 06:41:22 +0000</pubDate>
      <link>https://dev.to/aws-builders/exploring-local-development-with-sst-for-serverless-applications-2dgl</link>
      <guid>https://dev.to/aws-builders/exploring-local-development-with-sst-for-serverless-applications-2dgl</guid>
      <description>&lt;p&gt;A couple of years ago, I wrote an &lt;a href="https://www.freecodecamp.org/news/how-to-test-serverless-applications-in-aws/"&gt;article on freeCodeCamp&lt;/a&gt; about testing serverless applications in AWS. In that article, I explained the technical aspects of testing a serverless application formed of an API Gateway, a Lambda function and a DynamoDB table. I talk about how to test the function using unit tests, the DB with a bash script, and the overall app (i.e. the API endpoint) with e2e tests. However, there's an important piece that I missed in that article. Where does all of the testing happen? I assumed that testing occurs on the cloud, except for unit tests.&lt;/p&gt;

&lt;p&gt;A compelling question frequently arises in my enthusiastic discussions about serverless technology: How can we conduct local testing?&lt;/p&gt;

&lt;p&gt;I typically encourage the use of mocks, but I think they're more suitable for testing in isolation (e.g. testing a single component of your app while mocking the external services it integrates with). I am not a fan of emulators; my experience with tools that mimic a cloud service has been brittle, mainly due to setup complexity and the lack of parity with the real service.&lt;/p&gt;

&lt;p&gt;A few years ago, I failed a technical interview because I did not offer a way to start and test a serverless app in a local environment. There's a need for a mindset change when working with serverless resources; some might disagree. That is going to be a discussion for a different day. For this post, I wanted to explore how to bridge the gap between cloud-based testing and the convenience of local development.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BIHFYxWK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2khx9e57r6h4y6tiwiwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BIHFYxWK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2khx9e57r6h4y6tiwiwq.png" alt="Modern through arch bridge under cloudy sky" width="800" height="532"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;
&lt;center&gt;
&lt;br&gt;
&lt;a href="https://www.pexels.com/photo/modern-through-arch-bridge-under-cloudy-sky-5707614/"&gt;Photo by Ben Mack&lt;/a&gt; &lt;/center&gt;




&lt;p&gt;I read about SST (Serverless Stack) and the &lt;a href="https://docs.sst.dev/live-lambda-development"&gt;live Lambda&lt;/a&gt; feature it offers. The SST Docs mention:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Live Lambda Development or Live Lambda is a feature of SST that allows you to debug and test your Lambda functions locally, while being invoked remotely by resources in AWS. It works by proxying requests from your AWS account to your local machine.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's explore the Live Lambda feature of SST. This will be an opportunity to set up an app with SST and evaluate the developer experience.&lt;/p&gt;

&lt;h1&gt;
  
  
  Building an app with SST
&lt;/h1&gt;

&lt;p&gt;SST relies on the &lt;a href="https://aws.amazon.com/cdk/"&gt;AWS CDK&lt;/a&gt; to write the infrastructure code in one of several programming languages (we'll use TypeScript in our example). Behind the scenes, the CDK converts that code into CloudFormation templates.&lt;/p&gt;

&lt;p&gt;SST builds on top of the AWS CDK, and its CLI commands map to CDK ones:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;sst build&lt;/code&gt; runs &lt;code&gt;cdk synth&lt;/code&gt; internally. This converts the code to CloudFormation templates&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;pnpm sst dev&lt;/code&gt; or &lt;code&gt;pnpm sst deploy&lt;/code&gt; runs &lt;code&gt;cdk deploy&lt;/code&gt;. This submits the templates to CloudFormation, which creates the stacks and their defined resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's create an SST project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx create-sst@latest
? Project name demo-sst
✔ Copied template files

Next steps:
- cd demo-sst
- npm install (or pnpm install, or yarn)
- npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The infrastructure code lives under &lt;code&gt;/stacks&lt;/code&gt;, and the application code lives under &lt;code&gt;/packages&lt;/code&gt;. Notice that &lt;code&gt;MyStack.ts&lt;/code&gt; defines an event bus and a few API routes. Feel free to remove these, keeping only the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { StackContext, Api } from 'sst/constructs';

export function API({ stack }: StackContext) {
  const api = new Api(stack, 'api', {
    routes: {
      'GET /': 'packages/functions/src/lambda.handler',
    },
  });

  stack.addOutputs({
    ApiEndpoint: api.url,
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code defines a GET API endpoint that will invoke the lambda function.&lt;/p&gt;

&lt;p&gt;Feel free to delete the folders and files that aren't needed under &lt;code&gt;packages&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once the app is started and deployed, you will get the API URL, which we specified as an output in the &lt;code&gt;/stacks/MyStack.ts&lt;/code&gt; file. Requesting this API will return the value defined in the &lt;code&gt;/packages/functions/src/lambda.ts&lt;/code&gt; file. Change the returned value there and submit a new request to the API endpoint. Notice that you get the new value.&lt;/p&gt;

&lt;p&gt;This is the SST Live Lambda. Without the long loop of deploying your changes to AWS, you get to execute your local Lambda function from the API Gateway already deployed in AWS.&lt;/p&gt;

&lt;p&gt;SST essentially creates a WebSocket connection and proxies requests from the deployed Lambda functions to your local machine. You can find more details in the &lt;a href="https://docs.sst.dev/live-lambda-development#how-it-works"&gt;SST docs&lt;/a&gt;.&lt;br&gt;
Here's how it looks:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/hnTSTm5n11g"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  How is this different from running your Lambda using a NodeJS worker?
&lt;/h3&gt;

&lt;p&gt;Even though the functional behaviour of your Lambda code might be similar, running a Live Lambda offers a fully integrated environment: the Lambda configuration is as close as possible to production, because the rest of the environment is already running in the cloud. That provides better confidence than a purely local setup, where execution relies only on your AWS credentials. It also lets you test integrations with other AWS services such as API Gateway, EventBridge, S3, etc. - a great win over setting up emulators locally.&lt;/p&gt;

&lt;p&gt;The next step would be to build a larger-scale project that integrates with services from AWS, such as S3, SQS, Event-Bridge, etc. How do you build a serverless app? and what's your dev experience like?&lt;/p&gt;

</description>
<category>serverless</category>
      <category>aws</category>
      <category>lambda</category>
      <category>development</category>
    </item>
    <item>
      <title>Avoiding Data Overwrites in DynamoDB</title>
      <dc:creator>Ali Haydar</dc:creator>
      <pubDate>Mon, 18 Dec 2023 21:30:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/avoiding-data-overwrites-in-dynamodb-2gbi</link>
      <guid>https://dev.to/aws-builders/avoiding-data-overwrites-in-dynamodb-2gbi</guid>
      <description>&lt;p&gt;In traditional database management systems, such as MySQL, there is a separation between the "INSERT" and the "UPDATE" operations. The "INSERT" statement is used to add records to a table, and the "UPDATE" statement is used to update existing records. If, for example, you try to insert a new record with an existing primary key, you would get an error.&lt;/p&gt;

&lt;p&gt;On the other hand, DynamoDB uses a single operation, &lt;code&gt;PutItem&lt;/code&gt;, to both insert and update records. When the &lt;code&gt;PutItem&lt;/code&gt; operation is used with a primary key that already exists on the DynamoDB table, it overwrites the data on that record.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k_eLmKo---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2jb0yrpp32p7ys8ktch0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k_eLmKo---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2jb0yrpp32p7ys8ktch0.png" alt="Set of chess pieces on board" width="800" height="1074"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;
&lt;center&gt;Photo by &lt;a href="https://www.pexels.com/@alex-green/"&gt;Alex Green&lt;/a&gt; on &lt;a href="https://www.pexels.com/photo/set-of-chess-pieces-on-board-5691866/"&gt;Pexels&lt;/a&gt;&lt;br&gt;
&lt;/center&gt;




&lt;p&gt;If you're not familiar with it, &lt;a href="https://aws.amazon.com/pm/dynamodb/"&gt;DynamoDB&lt;/a&gt; is a NoSQL database service provided by AWS, known for its seamless scalability, high performance, and simplified data model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is &lt;code&gt;PutItem&lt;/code&gt; a risky operation?
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;PutItem&lt;/code&gt; behaviour of overwriting existing data can be risky if developers aren't careful about how existing records are updated. The overwrite might be a simple bug or an oversight of a certain scenario, but it could also be the result of an incorrect DynamoDB table design in the first place.&lt;/p&gt;

&lt;p&gt;Of course, DynamoDB offers conditional expressions to help mitigate the data-overwrite risk. Without a conditional expression, we'd need to add logic to our code to check whether an item already exists, and that's costly. It also comes with the risk of race conditions if other requests try to update the same item at the same time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example where data is overwritten
&lt;/h3&gt;

&lt;p&gt;Let's see this as an example. Assume we have an "Employees" table designed as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0LuxG0tR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bgn8gwj4b4xj2wf59do7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0LuxG0tR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bgn8gwj4b4xj2wf59do7.png" alt="Table design" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When creating a new employee, we'd want to ensure we don't overwrite an existing user. Below we write the code using Terraform and JavaScript.&lt;/p&gt;

&lt;p&gt;For simplicity, I'll post the Terraform code in a single file &lt;code&gt;main.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.0"
    }
  }
}
variable "aws_region" {
  type    = string
  default = "ap-southeast-2"
}

provider "aws" {
  region = var.aws_region
}

resource "aws_dynamodb_table" "employees_table" {
  name         = "employees"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "username"
  attribute {
    name = "username"
    type = "S"
  }
}

# IAM Role
resource "aws_iam_role" "add_employee_lambda_role" {
  name               = "add_employee_lambda_role"
  assume_role_policy = &amp;lt;&amp;lt;EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

# IAM Policy that allows interaction with DynamoDB
resource "aws_iam_policy" "add_employee_lambda_policy" {
  name        = "add_employee_lambda_policy"
  description = "Allow lambda to access dynamodb"
  policy      = &amp;lt;&amp;lt;EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:Scan",
        "dynamodb:Query",
        "dynamodb:UpdateItem"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}

# Attach IAM Policy to IAM Role
resource "aws_iam_role_policy_attachment" "add_employee_lambda_role_policy_attachment" {
  role       = aws_iam_role.add_employee_lambda_role.name
  policy_arn = aws_iam_policy.add_employee_lambda_policy.arn
}

# data archive for lambda function
data "archive_file" "add_employee_lambda_zip" {
  type        = "zip"
  source_dir  = "${path.module}/../src"
  output_path = "${path.module}/add_employee.zip"
}

# Add employee lambda function
resource "aws_lambda_function" "add_employee_lambda" {
  filename         = data.archive_file.add_employee_lambda_zip.output_path
  function_name    = "add_employee_lambda"
  role             = aws_iam_role.add_employee_lambda_role.arn
  handler          = "add_employee.handler"
  source_code_hash = data.archive_file.add_employee_lambda_zip.output_base64sha256
  runtime          = "nodejs18.x"
  publish          = true
  environment {
    variables = {
      DYNAMODB_TABLE = aws_dynamodb_table.employees_table.name
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we'll write a simple JS Lambda function to put items into DynamoDB:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {
  DynamoDBClient,
  PutItemCommand,
} from '@aws-sdk/client-dynamodb';
import { marshall } from '@aws-sdk/util-dynamodb';

const tableName = process.env.DYNAMODB_TABLE;
const client = new DynamoDBClient({});

export const handler = async (event) =&amp;gt; {
  const { username, name, department, jobTitle } = event;

  const params = {
    TableName: tableName,
    Item: marshall({
      username,
      name,
      department,
      jobTitle,
    }),
  };
  const command = new PutItemCommand(params);
  const response = await client.send(command);
  console.log(response);
  return response;
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy this infrastructure (&lt;code&gt;terraform apply&lt;/code&gt;), then navigate to the Lambda function in AWS Console and test the lambda with the following event object:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "username": "ali",
  "name": "Ali Haydar",
  "department": "Engineering",
  "jobTitle": "Platform Lead"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the item got added to DynamoDB.&lt;/p&gt;

&lt;p&gt;Assume a new employee with the same first name joins the company, and the admin goes ahead and adds them with the following data (let's use the Lambda test functionality to add the new employee; in a real-life scenario there would be a UI for that):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "username": "ali",
  "name": "Ali Wong",
  "department": "Acting",
  "jobTitle": "Stand-up comedian and actress"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few days later, the first employee "ali" got promoted to Chief Financial Officer, so it's time to update this info in the system. When searching for "ali", only "Ali Wong" was found. Where's our intended user?&lt;/p&gt;

&lt;p&gt;That was an example where an oversight, combined with a poor table design, caused a bug leading to data loss.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example with a conditional expression
&lt;/h3&gt;

&lt;p&gt;To protect against this kind of mistake, we modify our code slightly, adding a conditional expression to the params:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ConditionExpression: 'attribute_not_exists(username)',
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy the change.&lt;/p&gt;

&lt;p&gt;Try to update the existing "Ali Wong" record with the following data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "username": "ali",
  "name": "Ali Haydar",
  "department": "Engineering",
  "jobTitle": "Platform Lead"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"errorType": "ConditionalCheckFailedException",
"errorMessage": "The conditional request failed",
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The conditional expression checks for the non-existence of the username before allowing the write.&lt;/p&gt;
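In a real handler, you'd usually catch this failure and translate it into a friendly response instead of letting the invocation error out. Here's a minimal sketch with the DynamoDB call injected so the logic can be exercised without AWS; the helper name and response shape are assumptions for illustration, not part of the original handler:

```javascript
// Sketch: wrap the PutItem call and translate a conditional-write failure
// into a friendly response instead of surfacing the raw error.
// `send` is injected (in production it would call DynamoDBClient.send with
// a PutItemCommand and a marshalled item, as in the handler earlier).
const createAddEmployeeHandler = (send) => async (event) => {
  const { username, name, department, jobTitle } = event;
  const params = {
    TableName: process.env.DYNAMODB_TABLE,
    Item: { username, name, department, jobTitle },
    // Only write when no item with this username exists yet
    ConditionExpression: 'attribute_not_exists(username)',
  };
  try {
    await send(params);
    return { statusCode: 201, body: `Created employee "${username}"` };
  } catch (err) {
    if (err.name === 'ConditionalCheckFailedException') {
      // The username is already taken; don't overwrite the record
      return { statusCode: 409, body: `Username "${username}" already exists` };
    }
    throw err; // unexpected errors still fail the invocation
  }
};
```

The caller (or an API Gateway integration) can then react to the 409 instead of parsing an SDK exception.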

&lt;p&gt;Conditional expressions are useful in many other cases. Have a look at the &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ConditionExpressions.html"&gt;docs&lt;/a&gt;.&lt;/p&gt;
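One of those use cases is optimistic locking: keep a version number on each item and only allow an update when the caller still holds the latest version, so concurrent writers can't silently clobber each other. A sketch of building such a request (the `version` attribute and helper name are assumptions for illustration; with the low-level client these values would also need marshalling):

```javascript
// Sketch of an optimistic-locking update: the write only succeeds when the
// item still has the version the caller read; otherwise DynamoDB rejects it
// with a ConditionalCheckFailedException and the caller can re-read and retry.
const buildVersionedUpdate = (tableName, username, updates, expectedVersion) => ({
  TableName: tableName,
  Key: { username },
  UpdateExpression: 'SET jobTitle = :jobTitle, version = :next',
  // Reject the write if another request bumped the version first
  ConditionExpression: 'version = :expected',
  ExpressionAttributeValues: {
    ':jobTitle': updates.jobTitle,
    ':expected': expectedVersion,
    ':next': expectedVersion + 1,
  },
});
```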

&lt;p&gt;I've seen patterns in relational databases where an operation behaves similarly to &lt;code&gt;PutItem&lt;/code&gt; in DynamoDB. Mostly, these were hand-written functions handling the "insert or update" logic, which we sometimes referred to as "upsert" operations. The concept is often implemented using statements like &lt;code&gt;INSERT ... ON DUPLICATE KEY UPDATE&lt;/code&gt; in MySQL.&lt;/p&gt;

&lt;p&gt;In the example above, the table design isn't the direct cause of the bug, but using a username as the primary key makes the table more prone to data overwrites, especially if that username is entered manually. It also limits the access patterns of the table, but that's a discussion for another day. One way to improve this might be to use a composite key of username and department, for example. What do you think?&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>softwaredevelopment</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to enhance your Lambda function performance with memory configuration?</title>
      <dc:creator>Ali Haydar</dc:creator>
      <pubDate>Mon, 21 Aug 2023 22:30:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-enhance-your-lambda-function-performance-with-memory-configuration-3kd1</link>
      <guid>https://dev.to/aws-builders/how-to-enhance-your-lambda-function-performance-with-memory-configuration-3kd1</guid>
      <description>&lt;p&gt;I have a lambda function that's taking a few seconds to execute, and that's not related to a cold start. It's the code in the handler function that's taking time, and that's because it's CPU and memory intensive.&lt;/p&gt;

&lt;p&gt;Performance problems related to cold start can be enhanced through provisioned concurrency and managing the "Init" code. You can read about this in my previous articles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.freecodecamp.org/news/how-to-speed-up-lambda-functions/" rel="noopener noreferrer"&gt;How to speed up your Lambda function&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ali-haydar.medium.com/effortlessly-juggling-multiple-concurrent-requests-in-aws-lambda-10f1ffdb20dd" rel="noopener noreferrer"&gt;Effortlessly Juggling Multiple Concurrent Requests in AWS Lambda&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We know the performance is related to CPU and memory, so the solution might be as simple as increasing the memory, right?&lt;/p&gt;

&lt;p&gt;Let's do a small demonstration comparing the execution of the Lambda with different memory configurations. It's worth noting that we cannot explicitly control the Lambda CPU; it's automatically increased as we increase the memory. From the AWS docs:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The amount of memory also determines the amount of virtual CPU available to a function. Adding more memory proportionally increases the amount of CPU, increasing the overall computational power available. If a function is CPU-, network- or memory-bound, then changing the memory setting can dramatically improve its performance.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Testing the performance of a Lambda function
&lt;/h2&gt;

&lt;p&gt;For this demo let's use the following Lambda function that I generated with the help of ChatGPT:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const handler = async (event) =&amp;gt; {
    const matrixSize = 100; // Size of the square matrix
    const matrix = generateMatrix(matrixSize);
    const result = performMatrixMultiplication(matrix);

    return result;
};

function generateMatrix(size) {
    const matrix = [];
    for (let i = 0; i &amp;lt; size; i++) {
        const row = [];
        for (let j = 0; j &amp;lt; size; j++) {
            row.push(Math.random());
        }
        matrix.push(row);
    }
    return matrix;
}

function performMatrixMultiplication(matrix) {
    const size = matrix.length;
    const result = new Array(size);

    for (let i = 0; i &amp;lt; size; i++) {
        result[i] = new Array(size);
        for (let j = 0; j &amp;lt; size; j++) {
            let sum = 0;
            for (let k = 0; k &amp;lt; size; k++) {
                sum += matrix[i][k] * matrix[k][j];
            }
            result[i][j] = sum;
        }
    }

    return result;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Lambda function generates a large square matrix, performs matrix multiplication, and returns the resulting matrix. That's an example of a CPU and memory-intensive function.&lt;/p&gt;
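Before deploying, you can get a rough feel for how CPU-bound this workload is by timing it locally (a sketch; absolute numbers on your machine won't match Lambda, but the roughly cubic growth with matrix size will be visible):

```javascript
// Rough local benchmark of the matrix-multiplication workload from the
// handler above. Matrix sizes here are arbitrary choices for illustration.
function generateMatrix(size) {
  return Array.from({ length: size }, () =>
    Array.from({ length: size }, () => Math.random())
  );
}

function performMatrixMultiplication(matrix) {
  const size = matrix.length;
  const result = Array.from({ length: size }, () => new Array(size));
  for (let i = 0; i < size; i++) {
    for (let j = 0; j < size; j++) {
      let sum = 0;
      for (let k = 0; k < size; k++) sum += matrix[i][k] * matrix[k][j];
      result[i][j] = sum;
    }
  }
  return result;
}

for (const size of [50, 100, 200]) {
  const matrix = generateMatrix(size);
  const start = process.hrtime.bigint();
  performMatrixMultiplication(matrix);
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`size=${size}: ${ms.toFixed(2)} ms`);
}
```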

&lt;h3&gt;
  
  
  Lambda with 128 MB memory
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create this function manually in the AWS console, selecting Node.js as the runtime, and keeping the rest of the configuration at the defaults. This creates the Lambda with &lt;code&gt;128 MB&lt;/code&gt; of memory.&lt;/li&gt;
&lt;li&gt;Execute the function by navigating to the "Test" tab and clicking the "Test" button.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Notice the results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Duration: 298.55 ms Billed Duration: 299 ms Memory Size: 128 MB Max Memory Used: 72 MB  Init Duration: 160.36 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We got a duration of 298.55 ms.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lambda with 256 MB memory
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;In the configuration tab, update the Lambda memory to &lt;code&gt;256 MB&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Test the lambda function&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Notice the results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Duration: 133.94 ms Billed Duration: 134 ms Memory Size: 255 MB Max Memory Used: 73 MB  Init Duration: 162.88 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We got a duration of 133.94 ms by doubling the memory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lambda with 512 MB memory
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;In the configuration tab, update the Lambda memory to &lt;code&gt;512 MB&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Test the lambda function&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Notice the results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Duration: 51.12 ms  Billed Duration: 52 ms  Memory Size: 512 MB Max Memory Used: 73 MB  Init Duration: 144.95 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We got a duration of 51.12 ms by doubling the memory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lambda with 1024 MB memory
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;In the configuration tab, update the Lambda memory to &lt;code&gt;1024 MB&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Test the lambda function&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Notice the results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Duration: 30.42 ms  Billed Duration: 31 ms  Memory Size: 1024 MB  Max Memory Used: 73 MB  Init Duration: 163.51 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We got a duration of 30.42 ms by doubling the memory again.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shall we continue to increase the memory?
&lt;/h3&gt;

&lt;p&gt;We have demonstrated that more memory means faster execution.&lt;/p&gt;

&lt;p&gt;At some point, the increased memory, which comes with increased CPU, won't yield better performance. But for now, let's accept that a 30.42 ms execution time is reasonable. It's worth noting that the memory used in every execution was 73 MB, so it's the CPU that was improving with the memory upgrades.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to choose a memory config?
&lt;/h2&gt;

&lt;p&gt;As with any software configuration, there are trade-offs. In this case, it's the cost of execution.&lt;/p&gt;

&lt;p&gt;Lambda is priced based on the number of requests and their duration. In addition, the price of request duration depends on the memory allocated to the Lambda function. From the &lt;a href="https://aws.amazon.com/lambda/pricing/" rel="noopener noreferrer"&gt;AWS Docs&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Memory (MB)   Price per 1ms
128           $0.0000000021
512           $0.0000000083
1024          $0.0000000167
1536          $0.0000000250
2048          $0.0000000333
3072          $0.0000000500
4096          $0.0000000667
5120          $0.0000000833
6144          $0.0000001000
7168          $0.0000001167
8192          $0.0000001333
9216          $0.0000001500
10240         $0.0000001667
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So how can we find a good balance between memory and cost?&lt;/p&gt;
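As a back-of-the-envelope check, we can combine the durations measured above with the per-millisecond prices (a sketch: billed duration rounds up to the next millisecond; the 256 MB price isn't in the table, so it's interpolated linearly, which is how Lambda prices memory; the per-request charge is ignored):

```javascript
// Estimate the compute cost per invocation: billed duration (rounded up to
// the next ms) times the per-ms price for the memory setting. Durations are
// the measurements above; the 256 MB price is a linear interpolation.
const pricePerMs = {
  128: 0.0000000021,
  256: 0.0000000042,
  512: 0.0000000083,
  1024: 0.0000000167,
};

const measurements = [
  { memoryMb: 128, durationMs: 298.55 },
  { memoryMb: 256, durationMs: 133.94 },
  { memoryMb: 512, durationMs: 51.12 },
  { memoryMb: 1024, durationMs: 30.42 },
];

const costPerInvocation = ({ memoryMb, durationMs }) =>
  Math.ceil(durationMs) * pricePerMs[memoryMb];

for (const m of measurements) {
  console.log(`${m.memoryMb} MB: $${costPerInvocation(m).toExponential(3)} per invocation`);
}
```

For this workload, 512 MB comes out cheapest of the four options as well as fast, which lines up with the power tuning result that follows.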

&lt;p&gt;The &lt;a href="https://github.com/alexcasalboni/aws-lambda-power-tuning" rel="noopener noreferrer"&gt;aws lambda power tuning&lt;/a&gt; tool helps optimise the Lambda performance and cost in a data-driven manner. Let's try it out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There are multiple ways to deploy the application into the AWS account. We'll deploy it from the &lt;a href="https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:451282441545:applications~aws-lambda-power-tuning" rel="noopener noreferrer"&gt;AWS Serverless Application Repository
&lt;/a&gt; as that's the simplest option.&lt;/li&gt;
&lt;li&gt;The deployment of this serverless application will create a step function starting with the name &lt;code&gt;powerTuningStateMachine&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Start the execution of the step function and pass the following JSON:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "lambdaARN": "your-lambda-function-arn",
    "powerValues": [128, 256, 512, 1024, 1536, 2048, 3008],
    "num": 50,
    "payload": {},
    "parallelInvocation": true,
    "strategy": "cost"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once executed, this will generate the following output in the "Execution input and output" tab of the step function:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "power": 512,
  "cost": 1.428e-7,
  "duration": 16.566666666666666,
  "stateMachine": {
    "executionCost": 0.00033,
    "lambdaCost": 0.00011801685000000001,
    "visualization": "https://lambda-power-tuning.show/XXXX"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The visualisation looks as follows: &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76j0vdd1ak59bw7n4jix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76j0vdd1ak59bw7n4jix.png" alt="Power Tuning Graph" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This graph shows that a memory of 512 MB offers a good balance of cost and performance.&lt;/p&gt;




&lt;p&gt;Thanks for reading this far. Did you like this article, do you have feedback or would like to further discuss the topic? Please reach out on &lt;a href="https://twitter.com/Alee_Haydar" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/ahaydar/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>serverless</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Effortlessly Juggling Multiple Concurrent Requests in AWS Lambda</title>
      <dc:creator>Ali Haydar</dc:creator>
      <pubDate>Mon, 26 Jun 2023 16:58:30 +0000</pubDate>
      <link>https://dev.to/aws-builders/effortlessly-juggling-multiple-concurrent-requests-in-aws-lambda-2agj</link>
      <guid>https://dev.to/aws-builders/effortlessly-juggling-multiple-concurrent-requests-in-aws-lambda-2agj</guid>
      <description>&lt;p&gt;Have you ever tried to handle a ton of requests at once with AWS Lambda? What if you had a critical operation that required speed, scaling the performance of a request down to single-digit milliseconds?&lt;/p&gt;

&lt;p&gt;In a previous article, I discussed how to &lt;a href="https://www.freecodecamp.org/news/how-to-speed-up-lambda-functions"&gt;speed up your Lambda function&lt;/a&gt;, touching on the concept of a 'cold start' and how to minimize it by moving some non-critical code outside the Lambda handler, such as establishing a DB connection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1JV7f7SS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i4a160te3mpr2cgq8otp.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1JV7f7SS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i4a160te3mpr2cgq8otp.jpeg" alt="Highway multilane" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we send a request to execute a Lambda function, the service initialises an execution environment. That's a virtual machine (MicroVM) dedicated to each concurrent invocation of a Lambda function. Within this VM, there's a kernel, a runtime (e.g. Node.js), function code, and extensions code. The cold start period includes creating this environment, initialising the runtime and extensions, and downloading the code. It also includes the execution of the 'Init' code (the code you write before the handler function).&lt;/p&gt;

&lt;p&gt;With multiple consecutive executions, the Lambda service reuses a previously created execution environment and only executes the code within the handler function (that's what we call a warm start). That's fascinating and sufficient in lots of cases. However, in some instances, we need to execute a single Lambda function concurrently. Each concurrent invocation then requires its own new execution environment, which means we cannot leverage the warm Lambda and would experience the cold start for each invocation.&lt;/p&gt;
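The split between 'Init' code and handler code can be illustrated with a tiny sketch: module-scope code runs once per execution environment, while the handler body runs on every invocation (a local illustration with hypothetical counters, not a deployable function):

```javascript
// Module scope ("Init" code): runs once when the execution environment is
// created, i.e. during the cold start. Expensive setup (DB connections,
// SDK clients) belongs here so warm invocations can reuse it.
let initCount = 0;
let invocationCount = 0;

const init = () => {
  initCount += 1;
  return { connectedAt: Date.now() }; // stand-in for e.g. a DB connection
};
const connection = init();

// Handler: runs on every invocation, warm or cold.
const handler = async (event) => {
  invocationCount += 1;
  return { invocationCount, initCount, reusedConnection: connection !== null };
};
```

Two consecutive invocations against a warm environment share the same `connection`: the handler runs twice, but `init` only ran once.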

&lt;p&gt;The solution in this case is to set up provisioned concurrency. Let's test it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set up the project
&lt;/h2&gt;

&lt;p&gt;We will build a simple project consisting of two Lambda functions: an orchestrator and a worker that will be invoked concurrently by the orchestrator Lambda.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6lVPAou8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3mv7uzueg8g0lmdo9ru1.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6lVPAou8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3mv7uzueg8g0lmdo9ru1.gif" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's create the infrastructure in Terraform:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create the IAM policies and roles that will be used by the Lambda functions
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  locals {
    policy_document_cloudwatch = {
      Version = "2012-10-17"
      Statement = [
        {
          Effect = "Allow"
          Action = [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
          ]
          Resource = "arn:aws:logs:*:*:*"
        }
      ]
    }
    policy_document_lambda_invoke = {
      Version = "2012-10-17"
      Statement = [
        {
          Effect   = "Allow"
          Action   = ["lambda:InvokeFunction"]
          Resource = ["arn:aws:lambda:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:function:*"]
        }
      ]
    }
  }

  data "aws_iam_policy_document" "lambda_assume_role" {
    statement {
      actions = ["sts:AssumeRole"]
      principals {
        type        = "Service"
        identifiers = ["lambda.amazonaws.com"]
      }
    }
  }

  resource "aws_iam_role" "worker_lambda_execution_role" {
    name               = "worker_lambda-exec-role"
    assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json
  }


  resource "aws_iam_role_policy_attachment" "lambda_policies_attachment_cloudwatch" {
    role       = aws_iam_role.worker_lambda_execution_role.name
    policy_arn = aws_iam_policy.lambda_policies_cloudwatch.arn
  }

  resource "aws_iam_policy" "lambda_policies_cloudwatch" {
    name        = "lambda_logging_cloudwatch_access"
    description = "lambda logs in CloudWatch"
    policy = jsonencode({
      Version   = "2012-10-17"
      Statement = local.policy_document_cloudwatch.Statement

    })
  }


  resource "aws_iam_role" "orchestrator_lambda_execution_role" {
    name               = "orchestrator-lambda-exec-role"
    assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json
  }


  resource "aws_iam_role_policy_attachment" "lambda_policies_attachment_cloudwatch_and_lambda_invocation" {
    role       = aws_iam_role.orchestrator_lambda_execution_role.name
    policy_arn = aws_iam_policy.lambda_policies_cloudwatch_and_lambda_invocation.arn
  }


  resource "aws_iam_policy" "lambda_policies_cloudwatch_and_lambda_invocation" {
    name        = "lambda_logging_cloudwatch_access_and_lambda_invocation"
    description = "lambda logs in CloudWatch and worker lambda invocation"
    policy = jsonencode({
      Version = "2012-10-17"
      Statement = concat(
        local.policy_document_cloudwatch.Statement,
        local.policy_document_lambda_invoke.Statement
      )
    })
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a &lt;code&gt;lambda.tf&lt;/code&gt; file configuring the lambda functions
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  resource "aws_lambda_function" "orchestrator_lambda" {
    function_name = "orchestrator_lambda"

    handler = "orchestrator_lambda.handler"
    runtime = "nodejs18.x"

    filename = "orchestrator_lambda_function.zip"

    source_code_hash = filebase64sha256("orchestrator_lambda_function.zip")

    role = aws_iam_role.orchestrator_lambda_execution_role.arn

    depends_on = [
      aws_iam_role_policy_attachment.lambda_policies_attachment_cloudwatch_and_lambda_invocation
    ]

    environment {
      variables = {
        FUNCTION_NAME = aws_lambda_function.worker_lambda.function_name
      }
    }
  }


  resource "aws_lambda_function" "worker_lambda" {
    function_name = "worker_lambda"

    handler = "worker_lambda.handler"
    runtime = "nodejs18.x"

    filename = "worker_lambda_function.zip"

    source_code_hash = filebase64sha256("worker_lambda_function.zip")

    role = aws_iam_role.worker_lambda_execution_role.arn

    depends_on = [
      aws_iam_role_policy_attachment.lambda_policies_attachment_cloudwatch
    ]
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have yet to define any concurrency. We will get to this later.&lt;/p&gt;

&lt;p&gt;Now let's implement our Lambda functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new &lt;code&gt;orchestrator_lambda.js&lt;/code&gt; file
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;InvokeCommand&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;LambdaClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;LogType&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-sdk/client-lambda&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;invokeLambda&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;LambdaClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;InvokeCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;FunctionName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;FUNCTION_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;LogType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;LogType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Tail&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;Payload&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Lambda function invoked!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;promises&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;invocation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
      &lt;span class="nx"&gt;promises&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;invokeLambda&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;promises&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a new &lt;code&gt;worker_lambda.js&lt;/code&gt; file
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;here is the event received: &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Test the concurrent Lambda execution before and after setting up the provisioned concurrency
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Before Provisioned Concurrency
&lt;/h3&gt;

&lt;p&gt;In the AWS Console, navigate to the "Orchestrator Lambda" and click the "Test" button. This will invoke the "Worker Lambda" 10 times.&lt;/p&gt;

&lt;p&gt;Have a look at the CloudWatch logs of the "Worker Lambda":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kb2fIKPB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/09jbvvay0eckj1vseurr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kb2fIKPB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/09jbvvay0eckj1vseurr.png" alt="Image description" width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice the "Worker Lambda" got invoked 10 times concurrently. Each time starting a new execution environment and initialising it. That "cold start" duration shows as the "Init Duration" in the previous screenshot; there are ~ 160ms used for "Init" in every invocation. What if we can save this time?&lt;/p&gt;

&lt;h3&gt;
  
  
  After Provisioned Concurrency
&lt;/h3&gt;

&lt;p&gt;Let's set up provisioned concurrency for the "Worker Lambda". This allocates a specified number of execution environments for the function, ensuring rapid start-up by skipping the "Init" phase. Provisioned concurrency works only with a Lambda alias (a pointer to a specific Lambda version) or a Lambda version, so we'll set that up as well.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In your &lt;code&gt;lambda.tf&lt;/code&gt; file, under the &lt;code&gt;worker_lambda&lt;/code&gt; resource, add the &lt;code&gt;publish&lt;/code&gt; property, which will create a Lambda version:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  publish       = true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Add the Lambda alias, which will point to the created version of the Lambda
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  resource "aws_lambda_alias" "worker_lambda_alias" {
    name             = "worker_lambda_alias"
    function_name    = aws_lambda_function.worker_lambda.function_name
    function_version = aws_lambda_function.worker_lambda.version // This is the latest published version
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Add the provisioned concurrency configuration
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  resource "aws_lambda_provisioned_concurrency_config" "worker_lambda_provisioned_concurrency" {
    function_name                     = aws_lambda_alias.worker_lambda_alias.function_name
    provisioned_concurrent_executions = 10
    qualifier                         = aws_lambda_alias.worker_lambda_alias.name
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;On the "Orchestrator Lambda" config, pass the alias as an environment variable. So the environment tag would look as follows:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  environment {
    variables = {
      WORKER_FUNCTION_NAME = aws_lambda_function.worker_lambda.function_name
      WORKER_ALIAS         = aws_lambda_alias.worker_lambda_alias.name
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Update the "Orchestrator Lambda" code to hit the alias, in &lt;code&gt;orchestrator_lambda.js&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;InvokeCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;FunctionName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;WORKER_FUNCTION_NAME&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;WORKER_ALIAS&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;LogType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;LogType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Tail&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;Payload&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After you deploy these changes, navigate to the AWS Console and click the "Test" button in the "Orchestrator" Lambda. This will invoke the "Worker Lambda" 10 times.&lt;/p&gt;

&lt;p&gt;Have a look at the CloudWatch logs of the "Worker Lambda": &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9SxMMUIO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1tedlrlwh9p0swmx1mkb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9SxMMUIO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1tedlrlwh9p0swmx1mkb.png" alt="Image description" width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that we no longer have an "Init" duration, as we have 10 provisioned "Worker Lambda" execution environments. If we submitted 11 concurrent requests, one of them would still incur an "Init" duration.&lt;/p&gt;

&lt;p&gt;It's worth noting that, even with provisioned concurrency, you might still see some invocations with an "Init" duration if you invoke the function immediately after deployment, since the provisioned environments take a short while to become ready.&lt;/p&gt;

&lt;p&gt;Provisioned concurrency comes at a cost, as we have Lambda instances running all the time. One thing to think about is how you would balance the cost against the performance benefits of provisioned concurrency. I'd love to hear your thoughts.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>softwareengineering</category>
      <category>programming</category>
    </item>
    <item>
      <title>Prevent API overload with rate limiting in AWS</title>
      <dc:creator>Ali Haydar</dc:creator>
      <pubDate>Tue, 11 Apr 2023 21:01:26 +0000</pubDate>
      <link>https://dev.to/aws-builders/prevent-api-overload-with-rate-limiting-in-aws-1dgb</link>
      <guid>https://dev.to/aws-builders/prevent-api-overload-with-rate-limiting-in-aws-1dgb</guid>
      <description>&lt;p&gt;In the early days of the covid pandemic, and for a significant period, when we went to the supermarket, we had to wait in a queue for our turn to enter the supermarket. Only a certain number of customers could shop together at a particular time. The intention was to keep a physical distance between individuals to limit the spread of the virus, hence keeping people safe and limiting the load on the health system.&lt;/p&gt;

&lt;p&gt;That's similar to what we do in Software when we rate limit our APIs. We do it for multiple reasons, including preventing abuse (e.g. malicious user overwhelming the system with too many requests), managing downstream services and resources (e.g. database) or simply using rate limit as a way to monetize the APIs (e.g. enabling free users to use the APIs less frequently than paid users).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3928xj2yu9j7cwyxcpaq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3928xj2yu9j7cwyxcpaq.jpg" alt="Hourglass - Photo by Enrique Zafra on Pexels"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This post will build an API using &lt;a href="https://aws.amazon.com/api-gateway/" rel="noopener noreferrer"&gt;AWS API Gateway&lt;/a&gt; and explore how to rate-limit calls to our endpoints. API Gateway is a managed service from AWS that helps you publish, manage and monitor your APIs. We will use Terraform to set up our infrastructure.&lt;/p&gt;

&lt;p&gt;We deploy APIs to stages in API Gateway, where each stage points to a single deployment. You could have one stage for development, one for testing and one for production. Each stage has its own URL and settings.&lt;/p&gt;

&lt;p&gt;We will build an endpoint that returns the number of people in the supermarket. It will be of this shape: &lt;code&gt;GET /.../prod/customers&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Build the API
&lt;/h2&gt;

&lt;p&gt;In Terraform:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create the REST API
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  resource "aws_api_gateway_rest_api" "api" {
    name        = "SupermarketCustomers"
    description = "Tracks the number of customers that enter the supermarket"
    endpoint_configuration {
      types = ["REGIONAL"]
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create the "Customers" resource. That will be the first endpoint:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  resource "aws_api_gateway_resource" "customers_resource" {
    rest_api_id = aws_api_gateway_rest_api.api.id
    parent_id   = aws_api_gateway_rest_api.api.root_resource_id
    path_part   = "customers"
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create the method for this resource - GET in this case:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  resource "aws_api_gateway_method" "get_method" {
    rest_api_id   = aws_api_gateway_rest_api.api.id
    resource_id   = aws_api_gateway_resource.customers_resource.id
    http_method   = "GET"
    authorization = "NONE"
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the &lt;code&gt;http_method&lt;/code&gt; in this case, and the authorization set to &lt;code&gt;NONE&lt;/code&gt;, as we want this API to be accessible to everyone (some people would like to check the number of customers in the supermarket from home).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Until now, we have configured the API and method. Next we move to the integration part, which covers interacting with the backend. The integration type can be an AWS service, an HTTP backend, or MOCK. In this example, we will use MOCK to mock the backend and hardcode the response that API Gateway returns.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  resource "aws_api_gateway_integration" "mock_backend" {
    http_method = aws_api_gateway_method.get_method.http_method
    resource_id = aws_api_gateway_resource.customers_resource.id
    rest_api_id = aws_api_gateway_rest_api.api.id
    type        = "MOCK"
    request_templates = {
      "application/json" = jsonencode(
        {
          statusCode = 200
        }
      )
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Define the integration response and method response:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  resource "aws_api_gateway_method_response" "response_200" {
    rest_api_id = aws_api_gateway_rest_api.api.id
    resource_id = aws_api_gateway_resource.customers_resource.id
    http_method = aws_api_gateway_method.get_method.http_method
    status_code = 200
  }

  resource "aws_api_gateway_integration_response" "mock_backend_response" {
    rest_api_id = aws_api_gateway_rest_api.api.id
    resource_id = aws_api_gateway_resource.customers_resource.id
    http_method = aws_api_gateway_method.get_method.http_method
    status_code = aws_api_gateway_method_response.response_200.status_code

    # Hardcodes the JSON body the mocked backend returns
    response_templates = {
      "application/json" = &amp;lt;&amp;lt;EOF
    {
      "message": "hello from the mocked backend"
    }
  EOF
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;As mentioned above, changes in API Gateway won't apply without a deployment, and a deployment is made to a stage:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  resource "aws_api_gateway_deployment" "api_deployment" {
    rest_api_id = aws_api_gateway_rest_api.api.id
    lifecycle {
      create_before_destroy = true
    }
  }

  resource "aws_api_gateway_stage" "prod" {
    deployment_id = aws_api_gateway_deployment.api_deployment.id
    rest_api_id   = aws_api_gateway_rest_api.api.id
    stage_name    = "prod"
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy this configuration to your AWS account. You should be able to see your API, resource and method as per the following screenshot: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6mj5k7byi6z2iw8acfi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6mj5k7byi6z2iw8acfi.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the "Test" button to get a response like the following: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi51dcli462wid4q17sy0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi51dcli462wid4q17sy0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Rate &amp;amp; Burst Limits
&lt;/h2&gt;

&lt;p&gt;AWS API Gateway has a default limit of 10,000 requests per second per region within an AWS account, with a burst of 5,000 requests. That limit is shared across all APIs of different types (e.g. REST, WebSocket).&lt;/p&gt;

&lt;p&gt;AWS uses the &lt;a href="https://en.wikipedia.org/wiki/Token_bucket#:~:text=The%20token%20bucket%20is%20an,variations%20in%20the%20traffic%20flow" rel="noopener noreferrer"&gt;Token Bucket Algorithm&lt;/a&gt; to throttle requests. Each request consumes a token from the bucket. The burst limit is the bucket's capacity: the maximum number of tokens that can be consumed simultaneously. The rate limit is the speed at which the bucket is refilled with new tokens; you can also look at it as the maximum number of tokens that can be used sustainably within one second (or another period).&lt;/p&gt;

&lt;p&gt;Assume we set a rate limit of 10 requests per second and a burst limit of 15 requests. That means ten tokens are added to the bucket every second, up to a maximum of 15. If we consume 15 requests per second, the usage rate is bigger than the refill rate, so we drain the bucket faster than it refills. Within 2 seconds, we will start throttling requests, returning a &lt;code&gt;429: Too Many Requests&lt;/code&gt; error. Here are some details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After 1 second: we started with 15 tokens in the bucket, consumed 15 and refilled 10. That leaves ten tokens in the bucket.&lt;/li&gt;
&lt;li&gt;After 2 seconds: we had ten tokens in the bucket and tried to consume 15. Five of the requests return an error, as there were not enough tokens left in the bucket.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, if we consume requests at the same speed as the refill rate (10 requests per second), we can always serve them without a problem, as the burst limit (the maximum number of simultaneous requests) is never exceeded.&lt;/p&gt;
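&lt;p&gt;To make the arithmetic above concrete, here is a minimal token-bucket sketch in JavaScript. It is illustrative only, not API Gateway's actual implementation; the numbers mirror the example above.&lt;/p&gt;

```javascript
// Token bucket sketch: `rate` tokens are added each second, capped at
// `capacity` (the burst limit). Each request consumes one token.
class TokenBucket {
  constructor(rate, capacity) {
    this.rate = rate;
    this.capacity = capacity;
    this.tokens = capacity; // the bucket starts full
  }

  // Simulate one elapsed second: serve as many of `requests` as tokens
  // allow, throttle the rest, then refill up to capacity.
  tick(requests) {
    const served = Math.min(requests, this.tokens);
    const throttled = requests - served; // these would get a 429
    this.tokens = Math.min(this.capacity, this.tokens - served + this.rate);
    return { served, throttled };
  }
}

// Rate limit 10/s, burst limit 15, consuming 15 requests every second:
const bucket = new TokenBucket(10, 15);
console.log(bucket.tick(15)); // second 1: all 15 served, 10 tokens remain
console.log(bucket.tick(15)); // second 2: 10 served, 5 throttled
```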

&lt;p&gt;Setting the rate and burst limits properly based on the expected traffic levels and available resources is essential to ensure reliable service.&lt;/p&gt;

&lt;p&gt;This is how we set the rates in Terraform at the stage and route level:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_api_gateway_method_settings" "get_method_settings" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  stage_name  = aws_api_gateway_stage.prod.stage_name
  method_path = "*/*"

  settings {
    throttling_burst_limit = 1
    throttling_rate_limit  = 2
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After you deploy this change, invoke the endpoint URL (you can find the URL under &lt;code&gt;stages -&amp;gt; prod -&amp;gt; / -&amp;gt; /customers -&amp;gt; GET&lt;/code&gt;). Notice the "Too many requests" error when it is invoked more than once per second.&lt;/p&gt;

&lt;p&gt;With this approach, a single customer might overuse the API, which would cause throttling for other customers. It is possible to control this with &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html#api-gateway-api-usage-plans-overview" rel="noopener noreferrer"&gt;usage plans&lt;/a&gt;. This requires that each customer has an API key, which we haven't set up in our example.&lt;/p&gt;
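&lt;p&gt;As a sketch of that approach (the resource names and limits here are illustrative, and the method would also need &lt;code&gt;api_key_required = true&lt;/code&gt;), a usage plan tied to an API key could look like this in Terraform:&lt;/p&gt;

```hcl
resource "aws_api_gateway_usage_plan" "per_customer" {
  name = "per-customer-plan"

  api_stages {
    api_id = aws_api_gateway_rest_api.api.id
    stage  = aws_api_gateway_stage.prod.stage_name
  }

  # Per-key limits, separate from the stage-wide method settings
  throttle_settings {
    rate_limit  = 2
    burst_limit = 1
  }
}

resource "aws_api_gateway_api_key" "customer_key" {
  name = "customer-key"
}

# Attaches the key to the plan, so this customer's calls are metered
resource "aws_api_gateway_usage_plan_key" "customer_plan_key" {
  key_id        = aws_api_gateway_api_key.customer_key.id
  key_type      = "API_KEY"
  usage_plan_id = aws_api_gateway_usage_plan.per_customer.id
}
```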

&lt;p&gt;Have you implemented rate limiting in the past? What would you do differently?&lt;/p&gt;




&lt;p&gt;Thanks for reading this far. Did you like this article, and do you think others might find it helpful? Feel free to share it on &lt;a href="http://twitter.com/share?text=Check%20this%20article%20by%20@Alee_Haydar%20on%20how%20to%20build%20rate%20limiting%20on%20your%20APIs:%20&amp;amp;url=https://ali-haydar.medium.com/prevent-api-overload-with-rate-limiting-in-aws-94bf34a43b79&amp;amp;hashtags=aws,apigateway,awscommunity,awscommunitybuilders,softwaredevelopment" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/sharing/share-offsite/?url=https://ali-haydar.medium.com/prevent-api-overload-with-rate-limiting-in-aws-94bf34a43b79" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>api</category>
      <category>serverless</category>
      <category>aws</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to securely expose your local app to the internet using EC2?</title>
      <dc:creator>Ali Haydar</dc:creator>
      <pubDate>Tue, 31 Jan 2023 21:35:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-securely-expose-your-local-app-to-the-internet-using-ec2-4a4m</link>
      <guid>https://dev.to/aws-builders/how-to-securely-expose-your-local-app-to-the-internet-using-ec2-4a4m</guid>
      <description>&lt;p&gt;Are you developing an app and want others to access it before it's available in the cloud? Or are you integrating with a third-party tool and wish to enable access to your local app for a better development experience? This article is for you.&lt;/p&gt;

&lt;p&gt;There are many use cases where you might need to expose your local app (the app you are developing on your local machine) to the internet. That could be a site, an API or a chatbot. I've worked on a few of these cases, most recently a side project developing a Slack application.&lt;/p&gt;

&lt;p&gt;When integrating with Slack, you need to provide Slack with a redirection URL so that the user goes back to your site after granting the necessary permissions (this is common when using &lt;a href="https://en.wikipedia.org/wiki/OAuth" rel="noopener noreferrer"&gt;OAuth&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjnfj2sxrb7zsakzrm2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjnfj2sxrb7zsakzrm2y.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you are working on a local machine, the URL should point to your localhost, which has a private IP. Your home/office router will have a public IP assigned by the internet provider, but any device behind that router will be given a private IP. So how can we expose our app to the internet?&lt;/p&gt;

&lt;h2&gt;
  
  
  How to expose your app to the internet?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Note that this setup might incur some costs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;First, I used &lt;a href="https://ngrok.com/" rel="noopener noreferrer"&gt;ngrok&lt;/a&gt;, which creates a tunnel between my local machine and servers that are already exposed to the internet, so requests to the ngrok server get forwarded to your local app. That's convenient and useful. However, the server's URL changes every time you connect, which means I had to update my app config on every connection.&lt;/p&gt;

&lt;p&gt;I wanted something more permanent and maintainable, where I start my app, add my changes and test. One option was to subscribe to the ngrok paid services, where you can get a permanent URL. I need this for a side project and want it to be cost-effective, so I decided to implement a basic solution myself.&lt;/p&gt;

&lt;p&gt;To achieve this, we'll create an &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Instances.html" rel="noopener noreferrer"&gt;EC2 instance&lt;/a&gt; that allows ingress access from the internet, and install an nginx server on it that acts as a reverse proxy, forwarding incoming requests to the app running on my local machine through an SSH tunnel.&lt;/p&gt;

&lt;p&gt;Let's build it with Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an EC2 instance
&lt;/h2&gt;

&lt;p&gt;We will use the t2.micro instance as it's covered in the &lt;a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;amp;all-free-tier.sort-order=asc&amp;amp;awsf.Free%20Tier%20Types=*all&amp;amp;awsf.Free%20Tier%20Categories=categories%23compute&amp;amp;all-free-tier.q=ec2&amp;amp;all-free-tier.q_operator=AND" rel="noopener noreferrer"&gt;Free Tier for 12 months&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, we will create a Security Group, which will be attached to the EC2 instance, to allow ingress HTTP and SSH access to the machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "allow_http_ssh_and_http_on_80" {
  name        = "allow_http"
  description = "Allow http inbound traffic"


  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]

  }
  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]

  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "allow_http_ssh"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need a key pair to SSH into the EC2 instance, so I referenced one I had already created using a Terraform data source:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_key_pair" "ec2_instance_key_pair" {
  key_name = "ec2-instances"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You could create a new key pair using the &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/key_pair" rel="noopener noreferrer"&gt;aws_key_pair resource&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Next, we will create an EC2 instance - I'll install and set up nginx on this instance as user data. That's the script that runs after the instance starts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
resource "aws_instance" "port_forwarding_server" {
  ami                    = "ami-06bb074d1e196d0d4"
  instance_type          = "t2.micro"
  key_name               = data.aws_key_pair.ec2_instance_key_pair.key_name
  vpc_security_group_ids = [aws_security_group.allow_http_ssh_and_http_on_80.id]

  user_data = &amp;lt;&amp;lt;EOF
#!/bin/bash
echo "installing nginx"
sudo amazon-linux-extras install nginx1 -y

echo "updating nginx config for reverse proxy"
echo "
user nginx;
worker_processes auto;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    error_log /dev/null;
    access_log /dev/null;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    upstream express_server {
        server 127.0.0.1:8080;
        keepalive 64;
    }
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        location / {
            proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP \$remote_addr;
            proxy_set_header Host \$http_host;
            proxy_set_header Upgrade \$http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_http_version 1.1;
            proxy_pass http://express_server/;
            proxy_redirect off;
            proxy_read_timeout 240s;
        }
    }
}" | sudo tee /etc/nginx/nginx.conf
## Starting Nginx Services
sudo chkconfig nginx on
sudo service nginx start
sudo service nginx restart
EOF
  tags = {
    Name = "local-dev-tunneling-server"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This server receives requests on port 80 and forwards them to the localhost server on port 8080.&lt;br&gt;
The only thing left to do is to start your local app and open a remote SSH port-forwarding session by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i ~/.ssh/ec2-instances.pem -R 8080:localhost:8080 ec2-user@&amp;lt;public-ipv4-dns&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can copy the instance Public IPv4 DNS from the AWS console.&lt;/p&gt;

&lt;p&gt;Now we need to secure the site with SSL/TLS. We can either add a load balancer and associate it with a certificate from &lt;a href="https://aws.amazon.com/certificate-manager/" rel="noopener noreferrer"&gt;AWS ACM&lt;/a&gt;, or create a certificate directly on the instance. Let's do the latter using OpenSSL.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, allow HTTPS ingress on port 443 in the security group
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ingress {
      description = "https"
      from_port = 443
      to_port   = 443
      protocol  = "tcp"
      cidr_blocks = [
        "0.0.0.0/0"
      ]
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Update the startup script to create the certificate and use it in the nginx config. Update the &lt;code&gt;user_data&lt;/code&gt; argument in Terraform to:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    user_data = &amp;lt;&amp;lt;EOF
  #!/bin/bash
  echo "installing nginx"
  sudo amazon-linux-extras install nginx1 -y

  echo "create a cert"
  sudo mkdir /etc/ssl/private
  sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -subj "/CN=localhost" -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt

  echo "updating nginx config for reverse proxy"
  echo "
  user nginx;
  worker_processes auto;
  include /usr/share/nginx/modules/*.conf;
  events {
      worker_connections 1024;
  }
  http {
      sendfile on;
      tcp_nopush on;
      tcp_nodelay on;
      keepalive_timeout 65;
      types_hash_max_size 2048;
      error_log /dev/null;
      access_log /dev/null;
      include /etc/nginx/mime.types;
      default_type application/octet-stream;
      upstream express_server {
          server 127.0.0.1:8080;
          keepalive 64;
      }
      server {
          listen 80 default_server;
          listen [::]:80 default_server;
          listen 443 ssl;
          ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
          ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
          server_name _;
          location / {
              proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
              proxy_set_header X-Real-IP \$remote_addr;
              proxy_set_header Host \$http_host;
              proxy_set_header Upgrade \$http_upgrade;
              proxy_set_header Connection "upgrade";
              proxy_http_version 1.1;
              proxy_pass http://express_server/;
              proxy_redirect off;
              proxy_read_timeout 240s;
          }
      }
  }" | sudo tee /etc/nginx/nginx.conf
  ## Starting Nginx Services
  sudo chkconfig nginx on
  sudo service nginx start
  sudo service nginx restart
  EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now navigate to the instance's Public IPv4 DNS over HTTPS in your browser; nginx will proxy the request to your local app. Since the certificate is self-signed, the browser will warn you before letting you proceed.&lt;br&gt;
  How could you further improve the experience? I'd like to hear your thoughts.&lt;/p&gt;
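&lt;p&gt;You can also smoke-test the endpoint from the terminal. Because the certificate is self-signed, &lt;code&gt;curl&lt;/code&gt; needs the &lt;code&gt;-k&lt;/code&gt; flag to skip verification:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -k https://&amp;lt;public-ipv4-dns&amp;gt;/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;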




&lt;p&gt;Thanks for reading this far. Did you like this article, and do you think others might find it useful? Feel free to share it on &lt;a href="//twitter.com/intent/tweet?url=https%3A%2F%2Fmedium.com%2Fp%2Feda750569e&amp;amp;text=Check%20this%20article%20by%20%40Alee_Haydar%20on%20how%20to%20securely%20expose%20your%20local%20app%20to%20the%20internet%20with%20AWS%20EC2%3A%20%F0%9F%91%89%20"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/sharing/share-offsite/?url=https://medium.com/p/eda750569e" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>softwareengineering</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Automate infrastructure for manually created resources in AWS</title>
      <dc:creator>Ali Haydar</dc:creator>
      <pubDate>Sun, 18 Dec 2022 20:09:33 +0000</pubDate>
      <link>https://dev.to/aws-builders/create-iac-for-manually-created-resources-in-aws-48pe</link>
      <guid>https://dev.to/aws-builders/create-iac-for-manually-created-resources-in-aws-48pe</guid>
      <description>&lt;p&gt;Unless you've been working on greenfield projects all of the time in the past few years, you have likely encountered scenarios where AWS resources are provisioned manually. So, you might have a few EC2 instances, Lambda functions, and databases created manually from the Web console.&lt;/p&gt;

&lt;p&gt;Now suppose you want to modify or extend that infrastructure. You could update it manually from the console, but touching resources by hand brings a feeling of unease, especially in a production account.&lt;br&gt;
You might still opt to do it manually due to time (or other) constraints, pinging a colleague to pair on the update because it's always good to have another pair of eyes on production resources. That's fine. However, if you'd rather turn your infrastructure into code with Terraform, here's how to do it.&lt;/p&gt;

&lt;p&gt;In this article, we will build a Lambda manually from the AWS console and import it to Terraform.&lt;/p&gt;
&lt;h2&gt;
  
  
  Build your Lambda Manually
&lt;/h2&gt;

&lt;p&gt;We will create a Lambda function from the AWS Console in this step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv54t8r0pjs63brwaewij.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv54t8r0pjs63brwaewij.gif" alt="Create Lambda Function From AWS Console"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Write your Terraform Config
&lt;/h2&gt;

&lt;p&gt;We want to change the Lambda function but don't want to do it from the console. So, we will create our Terraform configuration.&lt;/p&gt;

&lt;p&gt;I am using &lt;a href="https://app.terraform.io/" rel="noopener noreferrer"&gt;Terraform Cloud&lt;/a&gt;, so I will create a workspace and call it &lt;code&gt;terraform-import&lt;/code&gt;. However, you can run the same config from your local machine.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a folder to contain your infrastructure code&lt;/li&gt;
&lt;li&gt;Create a new file called &lt;code&gt;main.tf&lt;/code&gt; to specify the AWS provider
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.0"
    }
  }

  required_version = "&amp;gt;= 1.1.0"

  cloud {
    organization = "MyAwesomeOrganisation"

    workspaces {
      name = "terraform-import"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;terraform login&lt;/code&gt; if you're using Terraform cloud. Otherwise, skip this step&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;terraform init&lt;/code&gt; to initialise the workspace&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;terraform plan&lt;/code&gt; - you will get the following output: &lt;code&gt;No changes. Your infrastructure matches the configuration.&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now we need to create the configuration using Terraform import.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a &lt;code&gt;lambda.tf&lt;/code&gt; file
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_function" "my_awesome_lambda" {
  function_name = "MyAwesomeFunction"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;terraform import aws_lambda_function.my_awesome_lambda MyAwesomeFunction&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This command maps the existing AWS resource into the Terraform state. If you're using Terraform Cloud, navigate to the &lt;code&gt;States&lt;/code&gt; tab in the left panel under your workspace. If you're working locally, notice the newly generated file, &lt;code&gt;terraform.tfstate&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Have a look at the state and see what's different.&lt;/p&gt;
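&lt;p&gt;A convenient way to inspect the imported attributes without opening the raw state file is &lt;code&gt;terraform state show&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform state show aws_lambda_function.my_awesome_lambda
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;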

&lt;p&gt;Based on the state file, we need to update the Lambda config so that we get &lt;code&gt;No changes&lt;/code&gt; the next time we run &lt;code&gt;terraform plan&lt;/code&gt;. We should not cause a resource deletion or replacement. That's the goal.&lt;/p&gt;

&lt;p&gt;When I ran &lt;code&gt;terraform plan&lt;/code&gt; first, I got an error saying that the &lt;code&gt;role&lt;/code&gt; argument was required. So I updated my &lt;code&gt;lambda.tf&lt;/code&gt; file to become as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_function" "my_awesome_lambda" {
  function_name = "MyAwesomeFunction"
  role    = "arn:aws:iam::24886324234235555:role/service-role/MyAwesomeFunction-role-r09h0n7l"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I got the role from the state file, the Lambda execution role.&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;terraform plan&lt;/code&gt; again. The next error I got was: &lt;code&gt;Error: handler and runtime must be set when PackageType is Zip&lt;/code&gt;. So I updated my Lambda as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_function" "my_awesome_lambda" {
  function_name = "MyAwesomeFunction"

  handler = "index.handler"
  runtime = "nodejs16.x"
  role    = "arn:aws:iam::248869629908:role/service-role/MyAwesomeFunction-role-r09h0n7l"

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;terraform plan&lt;/code&gt; again. That's the result I got:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  # aws_lambda_function.my_awesome_lambda will be updated in-place
  ~ resource "aws_lambda_function" "my_awesome_lambda" {
        id                             = "MyAwesomeFunction"
      + publish                        = false
      ~ runtime                        = "nodejs18.x" -&amp;gt; "nodejs16.x"
        tags                           = {}
        # (18 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That means it would change my Lambda runtime from Node.js 18 to Node.js 16. I don't want that, so I will update my &lt;code&gt;lambda.tf&lt;/code&gt; to use the Node.js 18 runtime (&lt;code&gt;nodejs18.x&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;terraform plan&lt;/code&gt; again. You should get the following message: &lt;code&gt;No changes. Your infrastructure matches the configuration.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now you have your infrastructure documented, so you can update it, get it reviewed, and add it to the version control with peace of mind.&lt;/p&gt;

&lt;p&gt;We can improve this. So, for example, instead of typing the arn of the Lambda execution role, we can also automate this with Terraform. Try this out.&lt;/p&gt;
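&lt;p&gt;As a starting point, a &lt;code&gt;data&lt;/code&gt; source can look the role up by name, so the account ID never needs to appear in the code. A sketch (the role name is the one AWS generated for my function; yours will differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_iam_role" "lambda_role" {
  name = "MyAwesomeFunction-role-r09h0n7l"
}

resource "aws_lambda_function" "my_awesome_lambda" {
  function_name = "MyAwesomeFunction"

  handler = "index.handler"
  runtime = "nodejs18.x"
  role    = data.aws_iam_role.lambda_role.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;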

&lt;h2&gt;
  
  
  Update the lambda function
&lt;/h2&gt;

&lt;p&gt;Create a new folder, &lt;code&gt;src&lt;/code&gt; and an &lt;code&gt;index.mjs&lt;/code&gt; file. Add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const handler = async (event) =&amp;gt; {
  console.log("This lambda was updated using Terraform");

  const response = {
    statusCode: 200,
    body: JSON.stringify("Hello from Lambda!"),
  };
  return response;
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Package your Lambda by running the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ../src

zip -r awesome_lambda.zip .

cp awesome_lambda.zip ../infrastructure/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This packages the Lambda into a zip file and places it under the &lt;code&gt;infrastructure&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;Add the following line to the lambda resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  filename         = "${path.module}/awesome_lambda.zip"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
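&lt;p&gt;One caveat: with &lt;code&gt;filename&lt;/code&gt; alone, Terraform won't notice when only the zip's contents change. A common companion argument is a hash of the package:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  source_code_hash = filebase64sha256("${path.module}/awesome_lambda.zip")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With that in place, rebuilding the zip is enough to trigger an update on the next apply.&lt;/p&gt;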



&lt;p&gt;Run &lt;code&gt;terraform plan&lt;/code&gt;. You will get the following changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Terraform will perform the following actions:

  # aws_lambda_function.my_awesome_lambda will be updated in-place
  ~ resource "aws_lambda_function" "my_awesome_lambda" {
      + filename                       = "./awesome_lambda.zip"
        id                             = "MyAwesomeFunction"
      ~ last_modified                  = "2022-12-09T18:30:44.000+0000" -&amp;gt; (known after apply)
      + publish                        = false
        tags                           = {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;terraform apply&lt;/code&gt;. Open the Lambda in your AWS console, and notice how the code has changed.&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;terraform destroy&lt;/code&gt; to destroy your infrastructure.&lt;/p&gt;

&lt;p&gt;Do you have a better way to automate your legacy infrastructure? I'd like to hear your thoughts.&lt;/p&gt;




&lt;p&gt;Thanks for reading this far. Did you like this article, and do you think others might find it useful? Feel free to share it on &lt;a href="//twitter.com/intent/tweet?url=https%3A%2F%2Fdev.to%2Faws-builders%2Fcreate-iac-for-manually-created-resources-in-aws-48pe%0A%0A%23aws%20%23awscommunity%20%23awscommunitybuilder%20%23softwaredeveloment&amp;amp;text=Check%20this%20article%20by%20%40Alee_Haydar%20on%20how%20to%20automate%20your%20manually%20created%20infrastructure%3A%20%F0%9F%91%89"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/sharing/share-offsite/?url=https://dev.to/aws-builders/create-iac-for-manually-created-resources-in-aws-48pe"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Optimise your AWS Lambda performance with NodeJS top-level await</title>
      <dc:creator>Ali Haydar</dc:creator>
      <pubDate>Thu, 01 Dec 2022 18:30:10 +0000</pubDate>
      <link>https://dev.to/aws-builders/optimise-your-aws-lambda-performance-with-nodejs-top-level-await-1jgc</link>
      <guid>https://dev.to/aws-builders/optimise-your-aws-lambda-performance-with-nodejs-top-level-await-1jgc</guid>
      <description>&lt;p&gt;A few months ago, I wrote an article about &lt;a href="https://www.freecodecamp.org/news/how-to-speed-up-lambda-functions" rel="noopener noreferrer"&gt;how to speed up your Lambda function&lt;/a&gt;, which explores cold starts a bit and talks about the init code phase and writing code outside the Lambda handler function.&lt;/p&gt;

&lt;p&gt;I got a few comments asking how to call an async function outside the Lambda handler. There are a few use cases where this could be useful. For example, when retrieving a password from AWS Secrets Manager, loading a file from S3, or opening a connection to an RDS database, it might not be necessary to repeat the operation upon every function invocation. So it would be great to run it during the function init.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const {
  SecretsManagerClient,
  GetSecretValueCommand,
} = require("@aws-sdk/client-secrets-manager");

const client = new SecretsManagerClient();
const input = {
  SecretId: "&amp;lt;SECRET_MANAGER_RESOURCE_ID&amp;gt;",
};

const command = new GetSecretValueCommand(input);
const response = await client.send(command); // This will throw an error

exports.handler = async (event) =&amp;gt; {
    // TODO use the returned response
    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };
    return response;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will throw the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  "errorType": "Runtime.UserCodeSyntaxError",
  "errorMessage": "SyntaxError: await is only valid in async function",
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Until top-level await, the usual workaround was to move &lt;code&gt;const response = await client.send(command);&lt;/code&gt; into the Lambda handler. That works, but the call then runs on every invocation, and it doesn't help the readability of the code.&lt;/p&gt;
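&lt;p&gt;A common refinement of that workaround is to cache the init promise at module scope, so the async work still runs only once per cold start. Here's a minimal, self-contained sketch (the &lt;code&gt;fetchSecret&lt;/code&gt; function is a stand-in for the Secrets Manager call above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sketch: one-time async init without top-level await.
// fetchSecret stands in for the SecretsManager call above.
const fetchSecret = async () =&amp;gt; ({ Parameter: { Value: "dummy-secret" } });

let initPromise; // module-scoped: shared across warm invocations

const handler = async (event) =&amp;gt; {
  initPromise = initPromise ?? fetchSecret(); // started on the first invocation only
  const secret = await initPromise;

  return {
    statusCode: 200,
    body: JSON.stringify("Hello from Lambda!"),
  };
};

exports.handler = handler;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;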

&lt;p&gt;However, NodeJS introduced Top-Level await as an experimental feature in &lt;a href="https://nodejs.org/docs/latest-v14.x/api/all.html#esm_top_level_await" rel="noopener noreferrer"&gt;version 14&lt;/a&gt;, then it became generally available in &lt;a href="https://nodejs.org/docs/latest-v16.x/api/all.html#esm_top_level_await" rel="noopener noreferrer"&gt;version 16&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That's great. How can we use it? We need to change our code to use &lt;a href="https://hacks.mozilla.org/2018/03/es-modules-a-cartoon-deep-dive/" rel="noopener noreferrer"&gt;ES modules&lt;/a&gt;. You can do this by either renaming your JS file to use the &lt;code&gt;.mjs&lt;/code&gt; extension or simply adding &lt;code&gt;"type": "module"&lt;/code&gt; to your package.json file.&lt;/p&gt;
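&lt;p&gt;With the second option, the relevant part of &lt;code&gt;package.json&lt;/code&gt; is just:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "type": "module"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;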

&lt;p&gt;The code would look as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

const client = new SecretsManagerClient();
const input = {
  SecretId: "&amp;lt;SECRET_MANAGER_RESOURCE_ID&amp;gt;",
};

const command = new GetSecretValueCommand(input);
const response = await client.send(command); // This will work well

export const handler = async (event) =&amp;gt; {
    // TODO use the returned response
    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };
    return response;
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Do you know of any other valuable tips with Lambda and NodeJS? I'd love to hear from you.&lt;/p&gt;




&lt;p&gt;Thanks for reading this far. Did you like this article, and do you think others might find it useful? Feel free to share it on &lt;a href="//twitter.com/intent/tweet?url=https%3A%2F%2Fdev.to%2Faws-builders%2Foptimise-your-aws-lambda-performance-with-nodejs-top-level-await-1jgc%0A%0A%23aws%20%23awscommunity%20%23awscommunitybuilder%20%23softwaredeveloment%20%20&amp;amp;text=Check%20this%20article%20by%20%40Alee_Haydar%20on%20how%20to%20optimize%20your%20NodeJs%20Lambda%20with%20top-level%20awaits%3A%20%F0%9F%91%89"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/sharing/share-offsite/?url=https://dev.to/ahaydar/optimise-your-aws-lambda-performance-with-nodejs-top-level-await-1jgc"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>career</category>
      <category>softwaredevelopment</category>
      <category>android</category>
      <category>data</category>
    </item>
    <item>
      <title>How to store (or Not) your DB password in AWS</title>
      <dc:creator>Ali Haydar</dc:creator>
      <pubDate>Tue, 22 Nov 2022 16:24:04 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-store-or-not-your-db-password-in-aws-492c</link>
      <guid>https://dev.to/aws-builders/how-to-store-or-not-your-db-password-in-aws-492c</guid>
      <description>&lt;p&gt;If you've worked with databases, you likely have given some thought to where to store your DB credentials. Should they live on the server along with your application? Should they be in environment variables? Should they be in a vault, and you query them with every request?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrk2m9ixg8b4q97vyzyw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrk2m9ixg8b4q97vyzyw.jpeg" alt="Brass-colored Metal Padlock With Chain&amp;lt;br&amp;gt;
by https://www.pexels.com/@life-of-pix/"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, we will create a simple application that uses RDS MySQL as its storage and use AWS Lambda to run our code. The app will retrieve a list of users from the database. We will use a single table and a single Lambda function.&lt;/p&gt;

&lt;p&gt;We will go through multiple iterations covering credentials management in different ways, starting from incorrect practices towards correctly addressing secrets in a production system.&lt;/p&gt;

&lt;p&gt;As part of this process, let's build our infrastructure using Terraform because it's fun.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The resources provisioned throughout this post might incur some cost&lt;/p&gt;

&lt;p&gt;Please do not use this code without understanding what it does. The database is left public in the snippets below, whereas in a real-life scenario it is essential to secure it in a VPC.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Build the app with credentials as env variables or in plain text
&lt;/h2&gt;

&lt;p&gt;In this first iteration, we will generate credentials to access the DB and use them to connect from Lambda. Of course, it's not a good idea to store credentials in plain text, but we'll start there and make it secure over the course of the article.&lt;/p&gt;

&lt;p&gt;Create an "infrastructure" folder in your project where we will place the Infrastructure code.&lt;/p&gt;
&lt;h3&gt;
  
  
  Create the database
&lt;/h3&gt;

&lt;p&gt;First, let us create a MySQL database; in a &lt;code&gt;rds.tf&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "db" {
  source = "terraform-aws-modules/rds/aws"

  identifier = "demodb"

  engine            = "mysql"
  engine_version    = "8.0.30"
  instance_class    = "db.t3.micro"
  allocated_storage = 5

  db_name  = "demodb"
  username = "user"
  port     = "3306"

  family = "mysql8.0"

  # DB option group
  major_engine_version = "8.0"

  publicly_accessible = true

  skip_final_snapshot = true
}

output "db_instance_password" {
  sensitive = true
  value     = module.db.db_instance_password
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I am using pre-built modules of Terraform for the database resources rather than building them from scratch (&lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/rds/aws/latest" rel="noopener noreferrer"&gt;https://registry.terraform.io/modules/terraform-aws-modules/rds/aws/latest&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Create your infrastructure by running the following command: &lt;code&gt;terraform apply -var-file=variables.tfvars&lt;/code&gt;. When we create the database using this module, it generates a random password (the password of the master database user). The output block stores the generated password in the Terraform state - by running &lt;code&gt;terraform output -json&lt;/code&gt;, you get a JSON object with that password:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "db_instance_password": {
    "sensitive": true,
    "type": "string",
    "value": "pgO5Wle0vXrJX2CL"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's NOT a good way to display and store a password, as it's visible to anyone who might have access to your Terraform state (in our case, a local file, &lt;code&gt;terraform.tfstate&lt;/code&gt;, generated when we applied the infrastructure).&lt;/p&gt;

&lt;p&gt;You might have noticed that I have &lt;code&gt;publicly_accessible=true&lt;/code&gt; in my configuration. Don't do this in production. That's my lazy workaround to access the DB from my local machine to add data, and to avoid setting up fully fleshed-out network &amp;amp; security patterns. It's also better to create your table through code. For this example, we'll connect to the instance manually and run the following scripts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE users (
  id INTEGER,
  first_name VARCHAR(40),
  last_name VARCHAR(40),
  user_type ENUM ('admin', 'finance', 'dev')
);

INSERT INTO users (id, first_name, last_name, user_type)
VALUES
(1, 'Pam', 'Beesly', 'finance'),
(2, 'Dwight', 'Schrute', 'admin'),
(3, 'Michael', 'Scott', 'finance');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create a Lambda function
&lt;/h3&gt;

&lt;p&gt;Create a new file under "infrastructure" - call it &lt;code&gt;lambda.tf&lt;/code&gt; and update it with the following config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_iam_policy_document" "lambda_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "lambda_execution_role" {
  name               = "lambda-get-users-exec-role"
  assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json
}

resource "aws_lambda_function" "get_users_lambda" {
  function_name = "get-users-lambda"

  handler = "index.handler"
  runtime = "nodejs16.x"

  filename = "lambda_function.zip"

  source_code_hash = filebase64sha256("lambda_function.zip")

  role = aws_iam_role.lambda_execution_role.arn

  environment {
    variables = {
      "RDS_HOSTNAME" = "demodb.cvwlm7q0gjdn.ap-southeast-2.rds.amazonaws.com"
      "RDS_USERNAME" = "user"
      "RDS_PASSWORD" = "pgO5Wle0vXrJX2CL"
      "RDS_PORT"     = "3306"
      "RDS_DATABASE" = "demodb"
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Did you notice that we included the credentials as environment variables? That is NOT a good practice, as we now have the password in the Terraform state, the Terraform code, and the Lambda configuration (anyone with access to these systems can see the credentials). We will change this in future iterations.&lt;/p&gt;

&lt;p&gt;Create a new &lt;code&gt;src&lt;/code&gt; folder at the same level of infrastructure.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run &lt;code&gt;npm init&lt;/code&gt; inside the folder&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;npm i mysql&lt;/code&gt; to install the package that enables us to connect and query the database&lt;/li&gt;
&lt;li&gt;Add an &lt;code&gt;index.js&lt;/code&gt; file with the following content:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const MySQL = require("mysql");
const con = MySQL.createConnection({
  host: process.env.RDS_HOSTNAME,
  user: process.env.RDS_USERNAME,
  password: process.env.RDS_PASSWORD,
  port: process.env.RDS_PORT,
  database: process.env.RDS_DATABASE,
});

exports.handler = async (event) =&amp;gt; {
  const SQL = "SELECT * FROM users";
  return new Promise((resolve, reject) =&amp;gt; {
    con.query(SQL, function (err, result) {
      if (err) throw err;
      console.log("========Executing Query=======");
      const response = {
        statusCode: 200,
        body: JSON.stringify(result),
      };
      resolve(response);
    });
  });
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running &lt;code&gt;terraform apply&lt;/code&gt; again, you should see your new Lambda function provisioned from your AWS Console (check the configuration).&lt;/p&gt;

&lt;p&gt;Open the created lambda function from the AWS console and click the "Test" button. Notice the duration taken to execute the function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Duration: 237.95 ms Billed Duration: 238 ms Memory Size: 128 MB Max Memory Used: 70 MB  Init Duration: 253.66 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will use these values to benchmark against other solutions to better understand the tradeoffs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build the app with credentials stored in an SSM Parameter Store
&lt;/h2&gt;

&lt;p&gt;It doesn't require much thought to identify the risk involved in keeping secrets as plain text, even if your code is private and your AWS account is not accessible to everyone. It's easy to make a mistake or keep a door open that would be an open invitation to hackers.&lt;/p&gt;

&lt;p&gt;Alright, we generated the password during the infrastructure provisioning, and we retrieved this password by running &lt;code&gt;terraform output -json&lt;/code&gt;. Why don't we store it in SSM Parameter Store and retrieve it as needed - this way, we don't need to put it in code, and no one will see it as an environment variable.&lt;/p&gt;

&lt;p&gt;We have a few changes to make:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new &lt;code&gt;ssm.tf&lt;/code&gt; file under the "infrastructure" folder:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  resource "aws_ssm_parameter" "rds_password" {
    name  = "RDS_PASSWORD"
    type  = "SecureString"
    value = "placeholder"
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;terraform apply&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This will create a secure parameter in the SSM Parameter Store. In real life, it might be a good idea to store all of the environment variables in the Parameter Store - you could place them under a path and query all of them with a single request from within your Lambda. For now, I will only store the password.&lt;/p&gt;
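&lt;p&gt;For illustration, that single request could use the SDK's &lt;code&gt;GetParametersByPathCommand&lt;/code&gt; - a sketch, assuming the parameters were created under a made-up &lt;code&gt;/demo/rds&lt;/code&gt; path:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { SSMClient, GetParametersByPathCommand } from "@aws-sdk/client-ssm";

const ssmClient = new SSMClient();

// Fetch every parameter stored under the path in one call
const { Parameters } = await ssmClient.send(
  new GetParametersByPathCommand({
    Path: "/demo/rds",
    WithDecryption: true,
  })
);

// Build a { NAME: value } map from the returned list
const config = Object.fromEntries(
  Parameters.map((p) =&amp;gt; [p.Name.split("/").pop(), p.Value])
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;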

&lt;p&gt;In AWS Console, navigate to Systems Manager -&amp;gt; Parameter Store, and fill in your password instead of the "placeholder" value we left in &lt;code&gt;ssm.tf&lt;/code&gt;.&lt;/p&gt;
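&lt;p&gt;Alternatively, this step can be scripted. Since the RDS module exposed the password as a Terraform output earlier, something like the following avoids pasting it by hand (run from the "infrastructure" folder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ssm put-parameter \
  --name "RDS_PASSWORD" \
  --type "SecureString" \
  --value "$(terraform output -raw db_instance_password)" \
  --overwrite
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;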

&lt;p&gt;Let's remove this value from the Lambda environment variables in &lt;code&gt;lambda.tf&lt;/code&gt;. Then let's update our Lambda code to retrieve the value from SSM. Note that the Lambda execution role also needs permission to read the parameter (&lt;code&gt;ssm:GetParameter&lt;/code&gt;).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;npm install @aws-sdk/client-ssm&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Update the &lt;code&gt;index.js&lt;/code&gt; file as follows:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  import { SSMClient, GetParameterCommand } from "@aws-sdk/client-ssm";
  import mysql from "mysql";

  const ssmClient = new SSMClient();

  const input = {
    Name: "RDS_PASSWORD",
    WithDecryption: true,
  };
  const command = new GetParameterCommand(input);
  const rdsPassword = await ssmClient.send(command);

  const con = mysql.createConnection({
    host: process.env.RDS_HOSTNAME,
    user: process.env.RDS_USERNAME,
    password: rdsPassword.Parameter.Value,
    port: process.env.RDS_PORT,
    database: process.env.RDS_DATABASE,
  });

  export const handler = async (event) =&amp;gt; {
    const sql = "SELECT * FROM users";
    return new Promise((resolve, reject) =&amp;gt; {
      con.query(sql, function (err, result) {
        if (err) throw err;
        console.log("========Executing Query=======");
        const response = {
          statusCode: 200,
          body: JSON.stringify(result),
        };
        resolve(response);
      });
    });
  };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The duration taken to execute the function is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Duration: 104.29 ms Billed Duration: 105 ms Memory Size: 128 MB Max Memory Used: 93 MB Init Duration: 779.47 ms

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice how the "init" duration is now &lt;code&gt;779.47 ms&lt;/code&gt; compared to when we had the password in the environment variable (&lt;code&gt;253.66 ms&lt;/code&gt;). That's because we have more code running in the function init to connect to SSM and get the password.&lt;/p&gt;

&lt;p&gt;That's a bit slower, but it's far more secure than the first approach.&lt;/p&gt;
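&lt;p&gt;One way to win back some of that init time - a sketch, with &lt;code&gt;fetchSecret&lt;/code&gt; standing in for the real &lt;code&gt;GetParameterCommand&lt;/code&gt; call - is to cache the fetched value in module scope so only cold starts pay for the SSM round-trip (shown synchronously for brevity; the real call is async):&lt;/p&gt;

```javascript
// Cache the secret in module scope so warm invocations reuse it.
// fetchSecret is a placeholder for the real SSM/Secrets Manager call.
let cachedPassword = null;

function getPassword(fetchSecret) {
  if (cachedPassword === null) {
    cachedPassword = fetchSecret(); // only runs on a cold start
  }
  return cachedPassword;
}

// Demo with a fake fetcher that counts how often it is actually called:
let calls = 0;
function fakeFetch() {
  calls += 1;
  return "s3cret";
}

getPassword(fakeFetch);
getPassword(fakeFetch);
console.log(calls); // prints 1 - the second invocation hit the cache
```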

&lt;h2&gt;
  
  
  Build the app with credentials stored in AWS Secrets Manager
&lt;/h2&gt;

&lt;p&gt;AWS Secrets Manager shares a few functionalities with AWS Parameter Store. The difference is that Secrets Manager supports secrets rotation and integrates natively with RDS - that's a significant security improvement over what we had earlier.&lt;/p&gt;

&lt;p&gt;First, let us create the Secrets Manager resource. Create a new file &lt;code&gt;secrets_manager.tf&lt;/code&gt; under the "infrastructure" folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
resource "aws_secretsmanager_secret" "rds_secret" {
  name        = "secret-manager-rds"
  description = "secret for RDS"
}

resource "aws_secretsmanager_secret_version" "rds_credentials" {
  secret_id     = aws_secretsmanager_secret.rds_secret.id
  secret_string = "placeholder"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In real life, I avoid manually opening the AWS console and pasting in secrets. It's better to have the secret automatically generated and passed through Terraform without any manual intervention. Here's an example: &lt;a href="https://stackoverflow.com/a/68200795/1263668" rel="noopener noreferrer"&gt;https://stackoverflow.com/a/68200795/1263668&lt;/a&gt; - for now, we'll use the console for convenience.&lt;/p&gt;
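&lt;p&gt;For completeness, here is a hedged Terraform sketch of that approach using the &lt;code&gt;hashicorp/random&lt;/code&gt; provider - it would replace the "placeholder" secret version we defined above:&lt;/p&gt;

```hcl
# Generate the password in Terraform so nobody has to type it into the console.
# Requires the hashicorp/random provider; the length is an assumption.
resource "random_password" "rds" {
  length  = 24
  special = false
}

resource "aws_secretsmanager_secret_version" "rds_credentials" {
  secret_id     = aws_secretsmanager_secret.rds_secret.id
  secret_string = random_password.rds.result
}
```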

&lt;p&gt;After running &lt;code&gt;terraform apply&lt;/code&gt;, you can see the newly created secret in your AWS console -&amp;gt; Secrets Manager.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click "Retrieve secret value"&lt;/li&gt;
&lt;li&gt;Click the "Edit" button and enter the value of your password instead of the placeholder&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, let us delete the SSM Parameter Store parameters we created (Delete &lt;code&gt;ssm.tf&lt;/code&gt;) and update the Lambda to use Secrets Manager instead.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the lambda execution role to permit it to interact with Secrets Manager. In the &lt;code&gt;iam.tf&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  resource "aws_iam_policy" "lambda_policies" {
    name        = "lambda_secrets_manager_access"
    description = "lambda access to secrets manager"
    policy      = data.aws_iam_policy_document.lambda_policies_document.json
  }

  data "aws_iam_policy_document" "lambda_policies_document" {
    statement {
      actions = [
        "secretsmanager:GetSecretValue"
      ]
      resources = [aws_secretsmanager_secret.rds_secret.arn]
    }
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the &lt;code&gt;lambda.tf&lt;/code&gt; file, add the following environment variable:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  "SECRET_MANAGER_RESOURCE_ID" : aws_secretsmanager_secret.rds_secret.id

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Update the lambda code in &lt;code&gt;index.js&lt;/code&gt;- Update the code before the handler function to the following:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  import {
    SecretsManagerClient,
    GetSecretValueCommand,
  } from "@aws-sdk/client-secrets-manager";
  import mysql from "mysql";

  const client = new SecretsManagerClient();

  const input = {
    SecretId: process.env.SECRET_MANAGER_RESOURCE_ID,
  };

  const command = new GetSecretValueCommand(input);
  const rdsPassword = await client.send(command);

  const con = mysql.createConnection({
    host: process.env.RDS_HOSTNAME,
    user: process.env.RDS_USERNAME,
    password: rdsPassword.SecretString,
    port: process.env.RDS_PORT,
    database: process.env.RDS_DATABASE,
  });

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test the Lambda function.&lt;/p&gt;

&lt;p&gt;The duration taken to execute the function is (the init duration is slightly faster than with SSM parameter store but still slower than having the secret passed as an env variable):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Duration: 109.62 ms Billed Duration: 110 ms Memory Size: 128 MB Max Memory Used: 86 MB Init Duration: 597.35 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Rotate the secrets in AWS Secrets Manager
&lt;/h2&gt;

&lt;p&gt;So far, there isn't much difference between "SSM Parameter Store" and "Secrets Manager". However, AWS Secrets Manager offers a feature to rotate secrets. How awesome is that! - AWS Secrets Manager uses a Lambda function to rotate the secret and sync it to your database in RDS.&lt;/p&gt;

&lt;p&gt;Unfortunately, it's more complicated than selecting a checkbox to enable the secret rotation. There is some configuration to change, including creating the rotator lambda.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new Lambda function that will handle the rotation - here is a template &lt;a href="https://github.com/aws-samples/aws-secrets-manager-rotation-lambdas/blob/master/SecretsManagerRDSMySQLRotationMultiUser/lambda_function.py" rel="noopener noreferrer"&gt;https://github.com/aws-samples/aws-secrets-manager-rotation-lambdas/blob/master/SecretsManagerRDSMySQLRotationMultiUser/lambda_function.py&lt;/a&gt; - I created it in a separate folder within the same repository. If, like me, you are new to Python, you could use this helpful guide: &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/python-package.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/lambda/latest/dg/python-package.html&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Add the following to the &lt;code&gt;lambda.tf&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  resource "aws_lambda_function" "secret_rotator" {
    function_name = "secret-rotator-lambda"

    # the rotation template above is written in Python, so use a Python runtime
    handler = "rotate.lambda_handler"
    runtime = "python3.9"

    filename         = "lambda_rotator_function.zip"
    source_code_hash = filebase64sha256("lambda_rotator_function.zip")

    role = aws_iam_role.lambda_rotator_execution_role.arn

    depends_on = [
      aws_iam_role_policy_attachment.lambda_rotator_policies_attachment
    ]
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the &lt;code&gt;iam.tf&lt;/code&gt; file, add the following - we need to give the rotator lambda access to rotate the secret, and permit Secrets Manager to invoke the Lambda:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  data "aws_iam_policy_document" "secrets_access_policy_document" {
    statement {
      actions = [
        "secretsmanager:DescribeSecret",
        "secretsmanager:GetSecretValue",
        "secretsmanager:PutSecretValue",
        "secretsmanager:UpdateSecretVersionStage",
      ]
      resources = [aws_secretsmanager_secret.rds_secret.arn]
    }

    statement {
      actions = [
        "secretsmanager:GetRandomPassword"
      ]
      resources = ["*"]
    }

    statement {
      actions = [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ]
      resources = [
        "arn:aws:logs:*:*:*",
      ]
    }
  }

  resource "aws_iam_policy" "secrets_access_policy" {
    name   = "secrets-access-policy"
    policy = data.aws_iam_policy_document.secrets_access_policy_document.json
  }

  resource "aws_iam_role_policy_attachment" "lambda_rotator_policies_attachment" {
    role       = aws_iam_role.lambda_rotator_execution_role.name
    policy_arn = aws_iam_policy.secrets_access_policy.arn
  }

  resource "aws_lambda_permission" "allow_secrets_manager" {
    statement_id  = "AllowExecutionFromSecretsManager"
    action        = "lambda:InvokeFunction"
    function_name = aws_lambda_function.secret_rotator.function_name
    principal     = "secretsmanager.amazonaws.com"
    source_arn    = aws_secretsmanager_secret.rds_secret.arn
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
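&lt;p&gt;To actually schedule the rotation, Secrets Manager needs to be pointed at the rotator Lambda. A minimal Terraform sketch (the 30-day window is an assumption - pick what suits your security posture):&lt;/p&gt;

```hcl
# Attach a rotation schedule to the secret, using the rotator Lambda above.
resource "aws_secretsmanager_secret_rotation" "rds" {
  secret_id           = aws_secretsmanager_secret.rds_secret.id
  rotation_lambda_arn = aws_lambda_function.secret_rotator.arn

  rotation_rules {
    automatically_after_days = 30 # assumption - adjust to your needs
  }
}
```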



&lt;ul&gt;
&lt;li&gt;Update the "Get Users" lambda to read all the info from the secrets manager rather than the env variables. In the &lt;code&gt;index.js&lt;/code&gt; file, change the code just before the handler function to:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  const response = await client.send(command);

  const secret = JSON.parse(response.SecretString);

  const con = mysql.createConnection({
    host: secret.host,
    user: secret.username,
    password: secret.password,
    port: secret.port,
    database: secret.dbname,
  });

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Remove the above variables from the &lt;code&gt;lambda.tf&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Apply your changes (repackage your Lambda, then run &lt;code&gt;terraform apply&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to the AWS Console, and notice how the &lt;code&gt;Rotation Configuration&lt;/code&gt; section is updated:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fus8ixpsyhzwt5we0z250.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fus8ixpsyhzwt5we0z250.png" alt="Secrets Manager Rotation Config"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Upon the first deployment, the secret should rotate, and you can see this in the CloudWatch logs - You can also see the new secret in Secrets Manager&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's way more secure, as we can rotate the secret often; still, there is a master password in use, and we need to keep it away from casual access.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect to the database without Secrets
&lt;/h2&gt;

&lt;p&gt;Another way to connect to the RDS database is to use IAM user or role credentials and an authentication token instead of a username/password.&lt;/p&gt;

&lt;p&gt;Before getting into the implementation, note that IAM imposes a limit of at most 200 new authentication connections per second to the database. Look at the limitations and recommendations in the &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html#UsingWithRDS.IAMDBAuth.Limitations" rel="noopener noreferrer"&gt;AWS Docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now let's get to the implementation:&lt;/p&gt;

&lt;p&gt;First, we need to activate IAM Database Authentication. We will do this in Terraform: in the &lt;code&gt;rds.tf&lt;/code&gt; file, add the following attribute: &lt;code&gt;iam_database_authentication_enabled = true&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Second, we will create a database user that authenticates with an AWS auth token: &lt;code&gt;CREATE USER lambda_user IDENTIFIED WITH AWSAuthenticationPlugin as 'RDS';&lt;/code&gt;&lt;/p&gt;
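&lt;p&gt;The new user starts with no privileges, so grant only what the Lambda needs (the database name below is an assumption - replace it with yours):&lt;/p&gt;

```sql
-- Grant the minimum the "Get Users" Lambda needs.
-- "yourdatabase" is an assumption - use your actual schema name.
GRANT SELECT ON yourdatabase.* TO 'lambda_user'@'%';
FLUSH PRIVILEGES;
```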

&lt;p&gt;Next, we need to create an IAM policy that allows connecting to the DB as the created user and attach it to the execution role of the Lambda:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
data "aws_iam_policy_document" "lambda_rds_connect_policy_document" {
  statement {
    actions = [
      "rds-db:connect"
    ]
    resources = ["arn:aws:rds-db:${var.region}:${data.aws_caller_identity.current.account_id}:dbuser:${module.db.db_instance_resource_id}/lambda_user"]
  }
}

resource "aws_iam_policy" "lambda_rds_connect_policy" {
  name        = "lambda_connect_to_rds"
  description = "lambda access to connect to rds"
  policy      = data.aws_iam_policy_document.lambda_rds_connect_policy_document.json
}

resource "aws_iam_role_policy_attachment" "lambda_policies_attachment_db_connect_policy" {
  role       = aws_iam_role.lambda_execution_role.name
  policy_arn = aws_iam_policy.lambda_rds_connect_policy.arn
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we are using &lt;code&gt;db_instance_resource_id&lt;/code&gt; here - I mistakenly used &lt;code&gt;db_instance_id&lt;/code&gt; at first, which cost me a couple of hours of investigation.&lt;/p&gt;

&lt;p&gt;The next step would be to modify the Lambda code. Update the &lt;code&gt;index.js&lt;/code&gt; file before the handler function, as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";
import mysql from "mysql2";
import { Signer } from "@aws-sdk/rds-signer";

const client = new SecretsManagerClient();

const input = {
  SecretId: process.env.SECRET_MANAGER_RESOURCE_ID,
};

const command = new GetSecretValueCommand(input);
const response = await client.send(command);

const secret = JSON.parse(response.SecretString);

const signer = new Signer({
  hostname: secret.host,
  port: secret.port,
  username: "lambda_user",
});

const token = await signer.getAuthToken();

const con = mysql.createConnection({
  host: secret.host,
  user: "lambda_user",
  password: token,
  port: secret.port,
  database: secret.dbname,
  ssl: "Amazon RDS",
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we changed the MySQL client we are using in our Lambda from &lt;code&gt;mysql&lt;/code&gt; to &lt;code&gt;mysql2&lt;/code&gt;, as &lt;code&gt;mysql&lt;/code&gt; does not support this authentication method. So make sure to run &lt;code&gt;npm i mysql2&lt;/code&gt;.&lt;br&gt;
Using the &lt;code&gt;mysql&lt;/code&gt; package would throw this error: &lt;code&gt;"ER_NOT_SUPPORTED_AUTH_MODE: Client does not support authentication protocol requested by server; consider upgrading MySQL client"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I hope this article was helpful, and I would love to hear your thoughts. In summary:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Don't use env variables or config to store your credentials&lt;/li&gt;
&lt;li&gt;Using Parameter Store is a good first option if you would like to secure your credentials&lt;/li&gt;
&lt;li&gt;The best option is using Secrets Manager with password rotation, in my opinion, especially if you're using RDS&lt;/li&gt;
&lt;li&gt;IAM Authentication is the most secure, in my opinion, but has a few &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html#UsingWithRDS.IAMDBAuth.Limitations" rel="noopener noreferrer"&gt;limitations&lt;/a&gt;. So good to consider them before using it in a production environment.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Thanks for reading this far. Did you like this article, and do you think others might find it useful? Feel free to share it on &lt;a href="//twitter.com/intent/tweet?url=&amp;amp;text=Check%20this%20article%20by%20%40Alee_Haydar%20on%20how%20to%20store%20your%20DB%20credentials%20in%20AWS%3A%20%F0%9F%91%89%20https%3A%2F%2Fdev.to%2Fahaydar%2Fhow-to-store-or-not-your-db-password-in-aws-492c%0A%0A%23aws%20%23awscommunity%20%23awscommunitybuilder%20%23softwaredeveloment"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/sharing/share-offsite/?url=https://dev.to/ahaydar/how-to-store-or-not-your-db-password-in-aws-492c"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to query a large file in S3</title>
      <dc:creator>Ali Haydar</dc:creator>
      <pubDate>Mon, 22 Aug 2022 16:48:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-query-a-large-file-in-s3-4doa</link>
      <guid>https://dev.to/aws-builders/how-to-query-a-large-file-in-s3-4doa</guid>
      <description>&lt;p&gt;S3 (Simple Storage Service) is a top-rated service for data storage in AWS. It offers high durability, availability and scalability. I've used S3 for various use cases, including storing many files, backups of databases, analytics, data archiving and static site hosting.&lt;/p&gt;

&lt;p&gt;A few months ago, I encountered a case where we needed to query data from a large JSON file in S3. We went with the usual approach of getting the object in our Lambda, filtering the JSON object for the data we need in code. So we loaded the file content in memory and filtered it. As this file grew significantly, the Lambda started timing out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7sp3vyrni564mn6bue9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7sp3vyrni564mn6bue9.png" alt="Full Bucket"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We often need to retrieve the whole file from S3, but if it is large (e.g. 2TB), it will slow down your app. And if you are transferring this file out of Amazon S3 directly to the internet (e.g. if you host your app outside of AWS, or you're downloading the file), that's a significant cost increase.&lt;/p&gt;

&lt;h2&gt;
  
  
  What can we do?
&lt;/h2&gt;

&lt;p&gt;AWS provides a feature in S3 called "S3 Select", where the user can query parts of an object using an SQL-like statement. This is good because the filtering happens in S3 before it reaches your application.&lt;br&gt;
S3 Select works on multiple file types, including CSV, JSON and Parquet.&lt;/p&gt;

&lt;p&gt;Please show us the code...&lt;/p&gt;

&lt;p&gt;Before getting into the code, the below diagrams could help us visualize how this works.&lt;br&gt;
&lt;strong&gt;Get Object&lt;/strong&gt; &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa26vjth7mhngo8x05q3q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa26vjth7mhngo8x05q3q.png" alt="S3 Get Object"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3 Select&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4w6sgeqmi9hrxwgx86zc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4w6sgeqmi9hrxwgx86zc.png" alt="S3 Select Object Content"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the rest of this post, we will use Terraform to set up an S3 bucket and a lambda. We will first retrieve an entire object in our Lambda, then change it to query parts of the object using "S3 Select".&lt;/p&gt;

&lt;p&gt;If you're impatient, feel free to skip to the "S3 Select" section.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform
&lt;/h3&gt;

&lt;p&gt;I will split the Terraform configuration into multiple files - Feel free to adjust this as you deem appropriate - in an "infrastructure" folder:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a &lt;code&gt;provider.tf&lt;/code&gt; to include AWS as a provider&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

  provider "aws" {
    profile = "default"
    region  = "ap-southeast-2"
  }



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create a test.json file and add it to an S3 bucket&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assume we have a JSON file called 'test.json' that has the following content (copied from &lt;a href="https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Objects/JSON" rel="noopener noreferrer"&gt;MDN&lt;/a&gt;) - we will keep the size of the file small for this example:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "squadName": "Super hero squad",
  "homeTown": "Metro City",
  "formed": 2016,
  "secretBase": "Super tower",
  "active": true,
  "members": [
    {
      "name": "Molecule Man",
      "age": 29,
      "secretIdentity": "Dan Jukes",
      "powers": [
        "Radiation resistance",
        "Turning tiny",
        "Radiation blast"
      ]
    },
    {
      "name": "Madame Uppercut",
      "age": 39,
      "secretIdentity": "Jane Wilson",
      "powers": [
        "Million tonne punch",
        "Damage resistance",
        "Superhuman reflexes"
      ]
    },
    {
      "name": "Eternal Flame",
      "age": 1000000,
      "secretIdentity": "Unknown",
      "powers": [
        "Immortality",
        "Heat Immunity",
        "Inferno",
        "Teleportation",
        "Interdimensional travel"
      ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;


&lt;/li&gt;

&lt;/ul&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  - Add the following content to the `s3.tf` file

    ```


      resource "aws_s3_bucket" "s3-select" {
      bucket = var.bucketName
      }

      resource "aws_s3_bucket_policy" "outoffocus-policy" {
      bucket = aws_s3_bucket.s3-select.id
      policy = templatefile("s3-policy.json", { bucket = var.bucketName })
      }

      resource "aws_s3_object" "outoffocus-index" {
      bucket = aws_s3_bucket.s3-select.id
      key    = "test.json"
      source = "./test.json"
      acl    = "public-read"
      }



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Create the IAM roles and permissions needed to get the Lambda to interact with the S3 bucket. In a &lt;code&gt;iam.tf&lt;/code&gt; file:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

  resource "aws_iam_role" "s3_select_lambda_execution_role" {
  name               = "s3_select_lambda_execution_role"
  assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json
  }

  data "aws_iam_policy_document" "lambda_assume_role" {
    statement {
      actions = ["sts:AssumeRole"]
      principals {
        type        = "Service"
        identifiers = ["lambda.amazonaws.com"]
      }
    }
  }

  resource "aws_iam_role_policy_attachment" "lambda_policies_attachment" {
    role       = aws_iam_role.s3_select_lambda_execution_role.name
    policy_arn = aws_iam_policy.lambda_policies.arn
  }

  resource "aws_iam_policy" "lambda_policies" {
    name        = "s3-select-lambda_logging_cloudwatch_access"
    description = "lambda logs in CloudWatch"
    policy      = data.aws_iam_policy_document.lambda_policies_document.json
  }

  data "aws_iam_policy_document" "lambda_policies_document" {
    statement {
      actions = [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ]
      resources = [
        "arn:aws:logs:*:*:*",
      ]
    }
    statement {
      actions = [
        "s3:GetObject"
      ]
      resources = [
        "arn:aws:s3:::${var.bucketName}/*"
      ]
    }
  }



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Create a &lt;code&gt;lambda.tf&lt;/code&gt; file that would include the following configuration:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

  resource "aws_lambda_function" "s3-select" {
  function_name = "s3-select-lambda"

  handler = "index.handler"
  runtime = "nodejs16.x"

  filename = "lambda_function.zip"

  source_code_hash = filebase64sha256("lambda_function.zip")

  role = aws_iam_role.s3_select_lambda_execution_role.arn

  depends_on = [
    aws_iam_role_policy_attachment.lambda_policies_attachment
  ]
  }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Finally, create a &lt;code&gt;variables.tf&lt;/code&gt; file to include the bucketName variable used earlier&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "bucketName" {
  default = "your-unique-s3-bucket-name"
  type    = string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
### Lambda Code

Create a new npm project by running `npm init` and following the steps in your terminal. Create a "src" folder with an index.js file in it:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const consumers = require("stream/consumers");

const params = {
  Bucket: "your-unique-s3-bucket-name",
  Key: "test.json",
};

exports.handler = async (event) =&amp;gt; {
  const client = new S3Client({});
  const command = new GetObjectCommand(params);
  const response = await client.send(command);

  const objectText = await consumers.text(response.Body);

  console.log("here is the returned object", objectText);
  return {
    statusCode: 200,
    body: JSON.stringify("Hello from Lambda!"),
  };
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Of course, this is not optimal (we could use TS and pass the params as env variables, etc.), but this should be sufficient for this blog post.

To package your Lambda into a zip file and copy it to the infrastructure folder, run the following commands:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zip -r lambda_function.zip ./*.js ./*.json ./node_modules/*

cp lambda_function.zip ../infrastructure/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Once done, deploy your code to AWS by running `terraform init` then `terraform apply` in your "infrastructure" folder.

### S3 Select

There's nothing you would need to do to enable S3 Select. It's an API call where we pass the SQL-Like statement to S3, and everything happens auto-magically from there.

Let's start with writing the SQL statement. Consider we want to return the "Molecule Man" age in the "members" array. The query looks as follows:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT s.age FROM s3object[*].members[*] s WHERE s.name = 'Molecule Man'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
The query isn't precisely SQL, but the syntax is very similar. For more details, check this [page](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-glacier-select-sql-reference-select.html). I also find it helpful to try the query in the AWS console. To do that:

- Select the object in the S3 bucket
- Click Action -&amp;gt; S3 Select
- Write the query

![AWS Console - S3 Select](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cs9mwnzhgwz8bfyyfax3.png)

The new Lambda function code looks as follows:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { S3Client, SelectObjectContentCommand } = require("@aws-sdk/client-s3");

const params = {
  Bucket: "your-unique-s3-bucket-name",
  Key: "test.json",
  ExpressionType: "SQL",
  Expression:
    "SELECT s.age FROM s3object[*].members[*] s WHERE s.name = 'Molecule Man'",
  InputSerialization: {
    JSON: {
      Type: "DOCUMENT",
    },
  },
  OutputSerialization: {
    JSON: {},
  },
};

exports.handler = async (event) =&amp;gt; {
  const client = new S3Client({});
  const command = new SelectObjectContentCommand(params);
  const response = await client.send(command);

  // SelectObjectContent returns an event stream in response.Payload;
  // collect the Records events to build the result text
  let objectText = "";
  for await (const ev of response.Payload) {
    if (ev.Records) {
      objectText += Buffer.from(ev.Records.Payload).toString("utf8");
    }
  }

  console.log("here is the returned object", objectText);
  return {
    statusCode: 200,
    body: JSON.stringify("Hello from Lambda!"),
  };
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
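&lt;p&gt;Note that S3 Select returns one JSON record per line by default (the "\n" record delimiter), so the collected text has to be split before parsing - a small helper sketch, assuming the default delimiter:&lt;/p&gt;

```javascript
// S3 Select emits one JSON record per line by default ("\n" RecordDelimiter),
// so the collected output text must be split and parsed line by line.
function parseSelectOutput(text) {
  return text
    .split("\n")
    .filter(function (line) { return line.trim() !== ""; })
    .map(function (line) { return JSON.parse(line); });
}

const records = parseSelectOutput('{"age":29}\n');
console.log(records[0].age); // prints 29
```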



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
The "InputSerialization" property defines the data format in our query object. The JSON Type could be a "DOCUMENT|LINE" to specify whether the whole file is a JSON object or each line is a JSON object.

The "OutputSerialization" property specifies how the output object is JSON format. We could define a "RecordDelimiter" property inside it, which would be used to separate the output records. Keeping it empty would default to using the newline character ("\n").

For more info, have a look at https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#selectObjectContent-property.

I hope this was helpful. Do you have a different method to query big objects in S3? I'd love to hear from you.

---
Thanks for reading this far. Did you like this article and you think others might find it useful? Feel free to share it on [Twitter](twitter.com/intent/tweet?url=https%3A%2F%2Fdev.to%2Faws-builders%2Fhow-to-query-a-large-file-in-s3-4doa%0A%0A%23aws%20%23awscommunity%20%23awscommunitybuilder%20%23softwaredeveloment&amp;amp;text=Check%20this%20article%20by%20%40Alee_Haydar%20on%20how%20to%20query%20large%20files%20in%20S3%3A%20%F0%9F%91%89) or [LinkedIn](https://www.linkedin.com/sharing/share-offsite/?url=https://dev.to/aws-builders/how-to-query-a-large-file-in-s3-4doa).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
    </item>
  </channel>
</rss>
