<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lorenzo Hidalgo Gadea</title>
    <description>The latest articles on DEV Community by Lorenzo Hidalgo Gadea (@lhidalgo_dev).</description>
    <link>https://dev.to/lhidalgo_dev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F706520%2F31e3cbf3-9d3d-4f1a-ae9e-efd85e5bf348.png</url>
      <title>DEV Community: Lorenzo Hidalgo Gadea</title>
      <link>https://dev.to/lhidalgo_dev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lhidalgo_dev"/>
    <language>en</language>
    <item>
      <title>Mastering DynamoDB: Batch Operations Explained</title>
      <dc:creator>Lorenzo Hidalgo Gadea</dc:creator>
      <pubDate>Wed, 16 Oct 2024 12:46:48 +0000</pubDate>
      <link>https://dev.to/aws-builders/mastering-dynamodb-batch-operations-explained-501i</link>
      <guid>https://dev.to/aws-builders/mastering-dynamodb-batch-operations-explained-501i</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR;&lt;/strong&gt; This article covers the usage of the DynamoDB BatchWrite and BatchGet operations, and how implementing them can improve the efficiency of your workload by reducing the number of requests it needs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Have you ever developed any type of workload that interacts with DynamoDB?&lt;/p&gt;

&lt;p&gt;If so, you probably have encountered the requirement of retrieving or inserting multiple specific records, be it from a single or various DynamoDB tables.&lt;/p&gt;

&lt;p&gt;This article aims to help with exactly that by providing all the resources and knowledge required to implement DynamoDB batch operations and, as a bonus, increase the efficiency of your current workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Batch Operations?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;When talking about batch operations or batch processing we refer to the action of aggregating a set of instructions in a single request for them to be executed all at once. In terms of interacting with DynamoDB, we could see it as sending a single request that would allow us to retrieve or insert multiple records at once.&lt;/p&gt;

&lt;h3&gt;
  
  
  Common Bad practices
&lt;/h3&gt;

&lt;p&gt;Continuing with the sample situation mentioned in the introduction, you may face the requirement of having to retrieve or store multiple records at once.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1729070560255%2F056bf226-3346-4508-a8ba-4c888bf6a2fd.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1729070560255%2F056bf226-3346-4508-a8ba-4c888bf6a2fd.webp" alt="Code snippet with a " width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In that scenario, a junior developer might loop over a set of keys and send the &lt;code&gt;GetItem&lt;/code&gt; requests in sequence, while a mid-level developer might propose parallelizing those requests with, for example, a &lt;code&gt;Promise.all&lt;/code&gt;. Both approaches are flawed and won't scale well.&lt;/p&gt;

&lt;p&gt;On one side, the &lt;code&gt;for-loop&lt;/code&gt; approach is even flagged by some linters (with rules like &lt;code&gt;no-await-in-loop&lt;/code&gt;), as awaiting each request in sequence makes the execution time grow linearly with the number of items.&lt;/p&gt;

&lt;p&gt;On the other side, the &lt;code&gt;Promise.all&lt;/code&gt; approach is a tad more efficient thanks to parallelizing the requests, but under heavy workloads developers will end up facing issues like maximum-connection-limit errors.&lt;/p&gt;
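&lt;p&gt;A minimal sketch of the two flawed approaches, using a stubbed &lt;code&gt;getItem&lt;/code&gt; (a hypothetical stand-in for a real DynamoDB &lt;code&gt;GetItem&lt;/code&gt; call) so the request count is visible; the point is that both variants still issue one request per key:&lt;/p&gt;

```javascript
// Stubbed single-item fetch: each call stands in for one DynamoDB
// GetItem network request, so `requestCount` tracks the request volume.
let requestCount = 0;
const getItem = async (key) => {
  requestCount += 1;
  return { id: key };
};

// Flawed approach 1: sequential for-loop (flagged by `no-await-in-loop`);
// execution time grows linearly with the number of keys.
async function fetchSequential(keys) {
  const items = [];
  for (const key of keys) {
    items.push(await getItem(key));
  }
  return items;
}

// Flawed approach 2: Promise.all fires the requests in parallel, but it
// still issues one request per key and can exhaust the connection pool.
const fetchParallel = (keys) => Promise.all(keys.map(getItem));
```

&lt;p&gt;Either way, N keys cost N requests; a batch operation replaces those with far fewer requests.&lt;/p&gt;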

&lt;h3&gt;
  
  
  Recommended Implementation
&lt;/h3&gt;

&lt;p&gt;Now that we have gone over some bad practices (and you have probably thought of a few projects that could be improved), we'll dive into how to take the most advantage of batch operations.&lt;/p&gt;

&lt;p&gt;DynamoDB offers two batch operations, &lt;code&gt;BatchGetItem&lt;/code&gt; and &lt;code&gt;BatchWriteItem&lt;/code&gt;, which we will look into as part of this article.&lt;/p&gt;

&lt;p&gt;There is also &lt;code&gt;BatchExecuteStatement&lt;/code&gt; for those using PartiQL, but we will leave that one for a future article to cover PartiQL in detail.&lt;/p&gt;

&lt;h4&gt;
  
  
  BatchGetItem
&lt;/h4&gt;

&lt;p&gt;This operation type allows us to aggregate the equivalent of up to 100 &lt;code&gt;GetItem&lt;/code&gt; requests into a single request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3o94c5l78xh05dnm0o2g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3o94c5l78xh05dnm0o2g.png" alt="Code snippet showing a  raw `BatchGetCommand` endraw  function for fetching items from two tables, " width="800" height="675"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This means that with a single request we can retrieve up to 100 records or 16 MB of data, from one or multiple tables at once.&lt;/p&gt;
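&lt;p&gt;As an illustration, here is a sketch of building that request payload; the &lt;code&gt;RequestItems&lt;/code&gt; shape follows the &lt;code&gt;BatchGetItem&lt;/code&gt; API, while the function name and example table names are hypothetical. With the AWS SDK for JavaScript v3, the resulting payload would be passed to a &lt;code&gt;BatchGetCommand&lt;/code&gt; from &lt;code&gt;@aws-sdk/lib-dynamodb&lt;/code&gt;:&lt;/p&gt;

```javascript
// Builds the RequestItems payload for a BatchGetItem request.
// `keysByTable` maps table names to arrays of key objects; DynamoDB
// accepts at most 100 keys per request across all tables combined.
function buildBatchGetPayload(keysByTable) {
  const totalKeys = Object.values(keysByTable)
    .reduce((sum, keys) => sum + keys.length, 0);
  if (totalKeys > 100) {
    throw new Error(`BatchGetItem allows at most 100 keys, got ${totalKeys}`);
  }
  const RequestItems = {};
  for (const [tableName, keys] of Object.entries(keysByTable)) {
    RequestItems[tableName] = { Keys: keys };
  }
  return { RequestItems };
}
```

&lt;p&gt;A call like &lt;code&gt;buildBatchGetPayload({ products: [{ productId: 'p-1' }] })&lt;/code&gt; yields a payload ready to send; the response's &lt;code&gt;Responses&lt;/code&gt; attribute then holds the retrieved items grouped by table name.&lt;/p&gt;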

&lt;h4&gt;
  
  
  BatchWriteItem
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;💡PutRequests will overwrite any existing records with the provided keys.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This operation, even though it only contains &lt;code&gt;write&lt;/code&gt; in its name, allows us to aggregate up to 25 &lt;code&gt;PutItem&lt;/code&gt; and &lt;code&gt;DeleteItem&lt;/code&gt; operations (combined) in a single request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30kpopuvx17yczd4vgay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30kpopuvx17yczd4vgay.png" alt="Screenshot of a JavaScript code snippet for a  raw `BatchWriteCommand` endraw . It includes request items for two tables, " width="800" height="679"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similar to the previous operation, we'll still be limited to 16 MB per request, but we can replace up to 25 sequential or parallel requests with a single one.&lt;/p&gt;
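&lt;p&gt;A sketch of building that payload, mixing puts and deletes for one table; the &lt;code&gt;PutRequest&lt;/code&gt;/&lt;code&gt;DeleteRequest&lt;/code&gt; shape follows the &lt;code&gt;BatchWriteItem&lt;/code&gt; API, while the function and table names are hypothetical. With the AWS SDK v3 the payload would be passed to a &lt;code&gt;BatchWriteCommand&lt;/code&gt; from &lt;code&gt;@aws-sdk/lib-dynamodb&lt;/code&gt;:&lt;/p&gt;

```javascript
// Builds the RequestItems payload for a BatchWriteItem request, mixing
// PutRequest and DeleteRequest entries for a single table. DynamoDB
// allows at most 25 request entries per call (Put and Delete combined).
function buildBatchWritePayload(tableName, itemsToPut, keysToDelete) {
  const requests = [
    ...itemsToPut.map((item) => ({ PutRequest: { Item: item } })),
    ...keysToDelete.map((key) => ({ DeleteRequest: { Key: key } })),
  ];
  if (requests.length > 25) {
    throw new Error(`BatchWriteItem allows at most 25 requests, got ${requests.length}`);
  }
  return { RequestItems: { [tableName]: requests } };
}
```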

&lt;h4&gt;
  
  
  Pagination for Batch operations
&lt;/h4&gt;

&lt;p&gt;Pagination only applies to the 16 MB limit; if a request exceeds the 100-record read or 25-record write limit, DynamoDB will throw a &lt;code&gt;ValidationException&lt;/code&gt; instead.&lt;/p&gt;

&lt;p&gt;Similar to the &lt;code&gt;Scan&lt;/code&gt; and &lt;code&gt;Query&lt;/code&gt; operations, any of the above &lt;code&gt;Batch*Item&lt;/code&gt; operations can hit the 16 MB maximum, at which point some form of pagination is required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2up2osqy6hfs2dxrwuv1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2up2osqy6hfs2dxrwuv1.png" alt="Screenshot of a JavaScript code snippet defining an asynchronous function named  raw `executeRequest` endraw . It uses a try-catch block to handle a  raw `payload` endraw , checking for  raw `UnprocessedItems` endraw . If any, it recursively calls itself with a  raw `BatchWriteItemCommand` endraw . Errors are logged to the console." width="800" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For Batch* operations this comes in the form of the &lt;code&gt;UnprocessedKeys&lt;/code&gt; (&lt;code&gt;BatchGetItem&lt;/code&gt;) or &lt;code&gt;UnprocessedItems&lt;/code&gt; (&lt;code&gt;BatchWriteItem&lt;/code&gt;) attribute that can be part of the response.&lt;/p&gt;

&lt;p&gt;Developers are expected to check for this attribute in the response and, if desired, wrap the call in a recursive function that retries the unprocessed entries automatically.&lt;/p&gt;
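&lt;p&gt;That recursive pattern can be sketched as follows; &lt;code&gt;send&lt;/code&gt; is injected so any client can be used (with the AWS SDK v3 it could be &lt;code&gt;(payload) =&gt; docClient.send(new BatchWriteCommand(payload))&lt;/code&gt;), and a production version should add exponential backoff between retries:&lt;/p&gt;

```javascript
// Recursively re-submits any UnprocessedItems returned by a
// BatchWriteItem call until none remain. `send` takes a payload of the
// form { RequestItems: ... } and resolves to the raw response.
async function batchWriteAll(send, payload, attempt = 0) {
  if (attempt > 5) {
    throw new Error('Giving up after 5 retries of unprocessed items');
  }
  const response = await send(payload);
  const unprocessed = response.UnprocessedItems ?? {};
  if (Object.keys(unprocessed).length > 0) {
    // A production version should wait with exponential backoff here.
    return batchWriteAll(send, { RequestItems: unprocessed }, attempt + 1);
  }
  return response;
}
```

&lt;p&gt;The same structure works for &lt;code&gt;BatchGetItem&lt;/code&gt; by checking &lt;code&gt;UnprocessedKeys&lt;/code&gt; instead.&lt;/p&gt;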

&lt;blockquote&gt;
&lt;p&gt;Full examples for Retrieving, Inserting, and Deleting records using BatchOperations with a recursive implementation to automatically handle the &lt;code&gt;UnprocessedKeys&lt;/code&gt; can be found &lt;a href="https://github.com/Lorenzohidalgo/dynamodb-batch-samples/tree/main/samples" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Real-world Use Cases
&lt;/h2&gt;

&lt;p&gt;Now that we are aware of the options and limitations for processing records in batch in DynamoDB, let's look at some scenarios that showcase real-life improvements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario 1: Retrieving Data from Multi-table Design Architecture
&lt;/h3&gt;

&lt;p&gt;For this first scenario, let's imagine we want to improve the performance of a REST API that, given an array of &lt;code&gt;productId&lt;/code&gt; values, returns the list of product details with their respective stock and exact warehouse location. The data is stored in multiple tables, one per data model (products, stock tracking, and warehouse product location).&lt;/p&gt;

&lt;h4&gt;
  
  
  Before
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk94zeqcplthzddlsai0z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk94zeqcplthzddlsai0z.png" alt="JavaScript code snippet that retrieves product, stock, and location data for a list of product IDs and returns them in an array." width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The initial implementation used a for-loop to iterate over all the provided &lt;code&gt;productIds&lt;/code&gt; and sequentially retrieve the required data from the different tables.&lt;/p&gt;

&lt;h4&gt;
  
  
  After
&lt;/h4&gt;

&lt;p&gt;From that initial implementation, you should be able to detect two distinct flaws:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;no-await-in-loop&lt;/code&gt; - There is a loop with asynchronous operations inside, which is usually a bad practice, as each iteration must complete before the next one can start.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sequential &lt;code&gt;await getItem&lt;/code&gt; requests - This is also a bad practice, as the three operations are independent of each other and we'd ideally not want them to block each other.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A better approach would look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5irmd8gpguh53kdgi75o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5irmd8gpguh53kdgi75o.png" alt="A code snippet with four steps: 1) Checks if  raw `idList` endraw  has more than 33 items and throws an error if true. 2) Builds a payload with  raw `buildPayload(idList)` endraw . 3) Awaits a recursive batch get with  raw `recursiveBatchGet(payload)` endraw . 4) Maps the responses to products with  raw `mapResponse(batchGetResponses)` endraw  and returns them." width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Input Validation&lt;/strong&gt; - Set a maximum number of items per request to avoid requiring parallel &lt;code&gt;BatchGetItem&lt;/code&gt; requests.&lt;br&gt;&lt;br&gt;
For example: with a maximum of 100 items per &lt;code&gt;BatchGetItem&lt;/code&gt; request and every product requiring 3 &lt;code&gt;GetItem&lt;/code&gt; requests, a single &lt;code&gt;BatchGetItem&lt;/code&gt; request can retrieve the details of up to 33 products.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Build Payloads&lt;/strong&gt; - a helper function will be needed to programmatically build the required payload for the &lt;code&gt;BatchGetItem&lt;/code&gt; operations taking into consideration the different tables that need to be accessed for each product ID.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Recursive&lt;/strong&gt; &lt;code&gt;BatchGetItem&lt;/code&gt; - a helper function that recursively calls itself to ensure that all &lt;code&gt;UnprocessedKeys&lt;/code&gt; are retried.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Response parsing&lt;/strong&gt; - a helper function that transforms the &lt;code&gt;BatchGetItem&lt;/code&gt; response into the schema that the consumers of this API expect.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
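&lt;p&gt;Steps 1 and 2 above can be sketched as follows; the table names and the three-tables-per-product assumption are hypothetical, mirroring the scenario:&lt;/p&gt;

```javascript
// Steps 1 and 2: validate the input size, then build one BatchGetItem
// payload fetching each product's record from the three (hypothetical)
// tables. 100 keys max / 3 tables per product = 33 products per request.
const TABLES = ['products', 'stock', 'locations'];
const MAX_PRODUCTS = Math.floor(100 / TABLES.length); // 33

function buildPayload(productIds) {
  if (productIds.length > MAX_PRODUCTS) {
    throw new Error(`At most ${MAX_PRODUCTS} products per request`);
  }
  const RequestItems = {};
  for (const table of TABLES) {
    RequestItems[table] = {
      Keys: productIds.map((productId) => ({ productId })),
    };
  }
  return { RequestItems };
}
```

&lt;p&gt;The recursive &lt;code&gt;BatchGetItem&lt;/code&gt; helper and the response-mapping step then consume this payload and the resulting responses.&lt;/p&gt;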

&lt;p&gt;Applying all these changes should significantly increase the efficiency and performance of the API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario 2: Inserting Data in a Single-table Design Architecture
&lt;/h3&gt;

&lt;p&gt;The second scenario involves a DynamoDB single-table design, where a single table stores all the information needed by a dashboard that analyzes racehorses' historical data. Records such as basic horse information, performance statistics, and race results are stored in the same table.&lt;/p&gt;

&lt;h4&gt;
  
  
  Before
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg3c24ceya7x06yhtlzt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg3c24ceya7x06yhtlzt.png" alt="Code snippet for storing horse details, statistics, and race information using the  raw `putItem` endraw  function in an asynchronous manner." width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similar to the first scenario, we can see that the initial implementation is based on a set of sequential &lt;code&gt;PutItem&lt;/code&gt; requests.&lt;/p&gt;

&lt;h4&gt;
  
  
  After
&lt;/h4&gt;

&lt;p&gt;From that initial implementation, you should be able to detect two distinct flaws:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;no-await-in-loop&lt;/code&gt; - There is a loop with asynchronous operations inside, which is usually a bad practice, as each iteration must complete before the next one can start.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sequential &lt;code&gt;await putItem&lt;/code&gt; requests - This is also a bad practice, as the three operations are independent of each other and we'd ideally not want them to block each other.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A better approach would look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4160yqx4j5y6q24a1eys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4160yqx4j5y6q24a1eys.png" alt="Code snippet showing two steps: 1. Building a payload with the function  raw `buildPayload` endraw  using parameters  raw `horse` endraw ,  raw `stats` endraw , and  raw `races` endraw .2. Performing a recursive batch write with the function  raw `recursiveBatchWrite` endraw , using the payload." width="800" height="184"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Build Payloads&lt;/strong&gt; - a helper function will be needed to programmatically build the required payload for the &lt;code&gt;BatchWriteItem&lt;/code&gt; operation, mapping the horse details, statistics, and race results to write requests for the single table.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Recursive&lt;/strong&gt; &lt;code&gt;BatchWriteItem&lt;/code&gt; - a helper function that recursively calls itself to ensure that all &lt;code&gt;UnprocessedItems&lt;/code&gt; are retried.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
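&lt;p&gt;The payload-building step above can be sketched as follows; the table and attribute names are hypothetical:&lt;/p&gt;

```javascript
// Maps the horse details, statistics, and race results to PutRequest
// entries for the single (hypothetical) `racehorses` table.
function buildPayload(horse, stats, races) {
  const records = [horse, stats, ...races];
  const requests = records.map((item) => ({ PutRequest: { Item: item } }));
  if (requests.length > 25) {
    throw new Error('Too many records: split into multiple BatchWriteItem payloads');
  }
  return { RequestItems: { racehorses: requests } };
}
```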

&lt;p&gt;Applying all these changes should significantly reduce the required time to upload all information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Utilizing batch operations in DynamoDB is a powerful strategy to optimize your database interactions. By aggregating multiple requests into a single operation, you can improve performance, reduce latency, and manage resources more effectively. Whether you're dealing with multi-table architectures or single-table designs, batch operations offer a scalable solution to handle large volumes of data efficiently. As you continue to work with DynamoDB, consider integrating batch operations into your workflows to maximize the potential of your applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Recap of key points
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;BatchGetItem can retrieve up to 100 records or 16 MB of data in a single request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;BatchWriteItem can be used to insert or delete up to 25 records or 16 MB of data in a single request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using Batch* operations can reduce execution time considerably by aggregating requests that were previously sent in sequence or in parallel.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Additional resources and references
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/Lorenzohidalgo/dynamodb-batch-samples" rel="noopener noreferrer"&gt;https://github.com/Lorenzohidalgo/dynamodb-batch-samples&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchWriteItem.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchWriteItem.html&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchExecuteStatement.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchExecuteStatement.html&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>dynamodb</category>
      <category>aws</category>
      <category>serverless</category>
      <category>nosql</category>
    </item>
    <item>
      <title>Enhance Your AppSync API Development: Easy Steps to Auto-Generate Postman Collections</title>
      <dc:creator>Lorenzo Hidalgo Gadea</dc:creator>
      <pubDate>Fri, 15 Mar 2024 11:32:01 +0000</pubDate>
      <link>https://dev.to/aws-builders/enhance-your-appsync-api-development-easy-steps-to-auto-generate-postman-collections-5f3i</link>
      <guid>https://dev.to/aws-builders/enhance-your-appsync-api-development-easy-steps-to-auto-generate-postman-collections-5f3i</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;TL;DR;&lt;br&gt;&lt;br&gt;
This article covers how developers can take advantage of the &lt;code&gt;serverless-gql-generator&lt;/code&gt; plugin for Serverless Framework to automatically generate new Postman Collections or Raw Requests for an AppSync API.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;A few weeks ago, I announced the release of a new Serverless Framework Plugin that I had been working on called &lt;a href="https://www.npmjs.com/package/serverless-gql-generator"&gt;serverless-gql-generator&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The idea for this plugin came from the difficulties faced during one of my projects, where it was too time-consuming for developers to keep the sample requests and Postman Collection up to date for every new schema update. This pain point was further intensified when we started to develop multiple interdependent AppSync APIs simultaneously, requiring multiple Postman Collections to be updated and shared across teams.&lt;/p&gt;

&lt;p&gt;In this article, we will cover how to integrate the Serverless Framework plugin into your development flow and CICD pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Main Plugin Features
&lt;/h2&gt;

&lt;p&gt;As of writing this article, the current version &lt;code&gt;1.2.1&lt;/code&gt; of the plugin has the following features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic GraphQL Request Generation&lt;/strong&gt; - The plugin will generate new requests on demand or during deployment, ensuring they are always up to date with the latest schema version.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic URL &amp;amp; API Key Retrieval&lt;/strong&gt; - The URL and API Key will be automatically populated with the latest AppSync configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Choose between Inline or using variable file input&lt;/strong&gt; - Developers can configure the plugin to generate requests with Inline input or with a separate file for variable input.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Exports requests to independent Files and Postman Collections&lt;/strong&gt; - Depending on the use case, developers can configure the plugin to export the generated requests to independent &lt;code&gt;.graphql&lt;/code&gt; files, a Postman Collection, or both.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Upload the generated files to S3&lt;/strong&gt; - As the icing on the cake, this plugin also automates the upload of the generated files to the configured S3 bucket.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To integrate this plugin into your workflow you will need at least:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A Node.js installation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An AWS account to deploy the API&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A project that uses the Serverless Framework to deploy an AppSync API - examples will be shown using &lt;a href="https://github.com/Lorenzohidalgo/appsync-decoupling-sample"&gt;this repository&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.postman.com/downloads/"&gt;Postman&lt;/a&gt;, &lt;a href="https://graphbolt.dev/"&gt;GraphBolt&lt;/a&gt; or any other tool to send requests and test the generated requests&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installation Process
&lt;/h2&gt;

&lt;p&gt;Installing and adding the plugin to your project is fairly straightforward to accomplish by following two simple steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install the plugin using NPM
&lt;/h3&gt;

&lt;p&gt;Use the following command to install the plugin and save it as a &lt;code&gt;devDependency&lt;/code&gt; in your project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;npm&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;D&lt;/span&gt; &lt;span class="nx"&gt;serverless&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;gql&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;generator&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Add the Plugin to your project
&lt;/h3&gt;

&lt;p&gt;Add the plugin under the &lt;code&gt;plugins&lt;/code&gt; list in your &lt;code&gt;serverless.yml&lt;/code&gt; file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;

&lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;serverless-gql-generator&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Using the plugin
&lt;/h2&gt;

&lt;p&gt;After adding the plugin to the &lt;code&gt;plugins&lt;/code&gt; list, it will start generating the Postman Collections every time the service is deployed.&lt;/p&gt;

&lt;p&gt;The experience can be improved further by configuring the plugin to match your needs better and by using the CLI commands to enable the request generation without the need to redeploy your service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Overriding the default behaviour
&lt;/h3&gt;

&lt;p&gt;The plugin's default behaviour is to generate (and save locally under the &lt;code&gt;./output/&lt;/code&gt; folder) a Postman Collection.&lt;/p&gt;

&lt;p&gt;This behaviour can be configured by overriding the defaults and adding any of the configuration attributes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;

&lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;serverless-gql-generator&lt;/span&gt;

&lt;span class="na"&gt;gql-generator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./schema.graphql&lt;/span&gt; &lt;span class="c1"&gt;# Overrides default schema path &lt;/span&gt;
    &lt;span class="na"&gt;encoding&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;utf-8&lt;/span&gt;
    &lt;span class="na"&gt;assumeValidSDL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./output&lt;/span&gt; &lt;span class="c1"&gt;# Output directory&lt;/span&gt;
    &lt;span class="na"&gt;s3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# Enables Upload to AWS S3&lt;/span&gt;
      &lt;span class="na"&gt;bucketName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gql-output-bucket&lt;/span&gt; &lt;span class="c1"&gt;# Mandatory Bucket name&lt;/span&gt;
      &lt;span class="na"&gt;folderPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;s3folder/path&lt;/span&gt; &lt;span class="c1"&gt;# Override Folder Path inside s3, defaults to `${service}/${stage}`&lt;/span&gt;
      &lt;span class="na"&gt;skipLocalSaving&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="c1"&gt;# if the files should also be saved locally or not&lt;/span&gt;
    &lt;span class="na"&gt;useVariables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="c1"&gt;# use variables or have the input inline&lt;/span&gt;
    &lt;span class="na"&gt;maxDepth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt; &lt;span class="c1"&gt;# max depth for schema recursion&lt;/span&gt;
    &lt;span class="na"&gt;rawRequests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="c1"&gt;# set to true to generate raw requests&lt;/span&gt;
    &lt;span class="na"&gt;postman&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;collectionName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-name&lt;/span&gt; &lt;span class="c1"&gt;# Overrides colection name, defaults to `${service}-${stage}`&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://test.com/graphql&lt;/span&gt; &lt;span class="c1"&gt;# Overrides url for postman collection&lt;/span&gt;
      &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;abc-123&lt;/span&gt; &lt;span class="c1"&gt;# Overrides default API Key if any&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Default Schema options
&lt;/h4&gt;

&lt;p&gt;The plugin expects the schema to be located at the root folder of the project as &lt;code&gt;./schema.graphql&lt;/code&gt;. Developers can update the configuration under &lt;code&gt;gql-generator.schema&lt;/code&gt; to change the &lt;code&gt;path&lt;/code&gt; or &lt;code&gt;encoding&lt;/code&gt; of their schema.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;gql-generator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./schema.graphql&lt;/span&gt; 
    &lt;span class="na"&gt;encoding&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;utf-8&lt;/span&gt;
    &lt;span class="na"&gt;assumeValidSDL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Default Request Generation Options
&lt;/h4&gt;

&lt;p&gt;Overriding the configuration on how the requests are being generated can be done by updating the following attributes under &lt;code&gt;gql-generator.output&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;gql-generator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./output&lt;/span&gt;
    &lt;span class="na"&gt;useVariables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;maxDepth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
    &lt;span class="na"&gt;rawRequests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;output.directory&lt;/code&gt; - path to the directory where the generated files should be stored&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;output.useVariables&lt;/code&gt; - flag to decide if the generated requests should have inline input or dedicated variable files&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;output.maxDepth&lt;/code&gt; - maximum allowed depth for the generated requests, used to avoid infinite recursions in the requests&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;output.rawRequests&lt;/code&gt; - flag to indicate if the raw requests (&lt;code&gt;.graphql&lt;/code&gt; files) should also be stored. This flag has to be set to &lt;code&gt;true&lt;/code&gt; if &lt;code&gt;output.postman&lt;/code&gt; is set to &lt;code&gt;false&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Default Postman Collection Configuration
&lt;/h4&gt;

&lt;p&gt;By default, the plugin generates a Postman Collection named after &lt;code&gt;${service}-${stage}&lt;/code&gt; and fetches the URL and API key from the deployed API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;gql-generator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;postman&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;collectionName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-name&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://test.com/graphql&lt;/span&gt;
      &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;abc-123&lt;/span&gt;

&lt;span class="c1"&gt;# OR&lt;/span&gt;

&lt;span class="na"&gt;gql-generator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;postman&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Developers can override this configuration by updating the following attributes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;output.postman&lt;/code&gt; - can be set to &lt;code&gt;false&lt;/code&gt; to skip generating the Postman Collection; in that scenario &lt;code&gt;output.rawRequests&lt;/code&gt; is required to be &lt;code&gt;true&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;output.postman.collectionName&lt;/code&gt; - overrides the name used for the generated collection&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;output.postman.url&lt;/code&gt; - overrides the API endpoint; especially useful if the API has a custom domain configured or you want to point to a CDN or proxy&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;output.postman.apiKey&lt;/code&gt; - overrides the API key used to authenticate the requests&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Uploading files to S3
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠&lt;br&gt;
Developers or CI/CD pipelines will require credentials with write access to the desired bucket.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;By default, the output is only saved locally on the machine that generates the files, but it can also be uploaded to S3 at the same time, making it even easier to share the results with other developers or teams.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;gql-generator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;s3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;bucketName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gql-output-bucket&lt;/span&gt;
      &lt;span class="na"&gt;folderPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;s3folder/path&lt;/span&gt;
      &lt;span class="na"&gt;skipLocalSaving&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To enable the S3 export feature, developers need to set the following configuration under &lt;code&gt;gql-generator.output.s3&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;output.s3.bucketName&lt;/code&gt; - &lt;strong&gt;mandatory&lt;/strong&gt; field, has to contain the name of an &lt;strong&gt;existing bucket&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;output.s3.folderPath&lt;/code&gt; - overrides the folder path in S3 where the files will be stored; by default the files are stored under &lt;code&gt;${service}/${stage}&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;output.s3.skipLocalSaving&lt;/code&gt; - flag to indicate if the files should be stored locally or only uploaded to S3&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  CLI commands
&lt;/h3&gt;

&lt;p&gt;Developers might want to trigger the request generation without redeploying the API; the plugin exposes CLI commands to allow for that scenario.&lt;/p&gt;

&lt;h4&gt;
  
  
  Schema Validation
&lt;/h4&gt;

&lt;p&gt;When creating or updating a schema, developers might want to validate that it's formatted appropriately.&lt;/p&gt;

&lt;p&gt;The plugin exposes the following command to allow developers to validate the provided schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;serverless gql-generator validate-schema
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the above line has been executed in the terminal, it will report any issues or confirm that the schema is valid and ready to be used.&lt;/p&gt;

&lt;h4&gt;
  
  
  Requests generation
&lt;/h4&gt;

&lt;p&gt;By default, the plugin generates all requests on every deployment, but there could be a scenario where one would like to generate them without deploying the API again.&lt;/p&gt;

&lt;p&gt;Developers might use the following command to trigger the Request generation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;serverless gql-generator generate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once executed, the plugin will generate all requests based on the configuration.&lt;/p&gt;

&lt;p&gt;This command can be especially useful to confirm that the plugin is configured as expected or to generate the requests again in case the output folder has been added to &lt;code&gt;.gitignore&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  CI/CD integration
&lt;/h3&gt;

&lt;p&gt;This plugin is easily integrated into one's CI/CD pipeline as it will be triggered automatically during the deployment process.&lt;/p&gt;

&lt;p&gt;To access the generated files, developers might choose between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Workflow Artifacts&lt;/strong&gt; - Configuring the pipeline to &lt;a href="https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts"&gt;export the artifacts&lt;/a&gt; generated during the process would be equivalent to generating the files locally, but the developers would need to download them from the workflow execution.&lt;br&gt;&lt;br&gt;
This approach is preferred, for example, for feature branches, where the output might change multiple times during development.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Export&lt;/strong&gt; - Output files will be uploaded automatically to S3 for easier access and sharing.&lt;br&gt;&lt;br&gt;
This is the preferred approach for stable or shared branches, such as &lt;code&gt;dev&lt;/code&gt; or &lt;code&gt;main&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
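&lt;p&gt;As a sketch of the Workflow Artifacts option, a GitHub Actions step to export the plugin's default &lt;code&gt;./output&lt;/code&gt; directory could look like this (the step and artifact names are illustrative assumptions):&lt;/p&gt;

```yaml
# Runs after the `serverless deploy` step; uploads the generated
# requests so developers can download them from the workflow run.
- name: Upload generated GraphQL requests
  uses: actions/upload-artifact@v4
  with:
    name: gql-requests
    path: ./output
```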

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Adding the serverless-gql-generator plugin to your stack can help you and your team save time by automating the process of generating and sharing GraphQL requests.&lt;/p&gt;

&lt;p&gt;I would love to hear about your experience using the plugin. Please use &lt;a href="https://github.com/Lorenzohidalgo/serverless-gql-generator/issues"&gt;GitHub Issues&lt;/a&gt; to report any issues or request new features.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>graphql</category>
      <category>appsync</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Unlocking the Power of DynamoDB Condition Expressions</title>
      <dc:creator>Lorenzo Hidalgo Gadea</dc:creator>
      <pubDate>Tue, 30 Jan 2024 09:41:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/unlocking-the-power-of-dynamodb-condition-expressions-4jnk</link>
      <guid>https://dev.to/aws-builders/unlocking-the-power-of-dynamodb-condition-expressions-4jnk</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR;&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
This article explores DynamoDB condition expressions, highlighting their potential to enhance database operations. They help prevent accidental data overwriting and facilitate conditional updates. The article also details how to handle errors efficiently when condition checks fail, thus reducing the need for additional requests. By leveraging these expressions, developers can create more efficient and robust applications.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Are you tired of your &lt;code&gt;PutItem&lt;/code&gt; operations overwriting existing items?&lt;br&gt;&lt;br&gt;
Or your &lt;code&gt;UpdateItem&lt;/code&gt; inserting a new object if the provided keys don't exist?&lt;br&gt;&lt;br&gt;
Are you still reading an object before updating it to ensure its current state?&lt;/p&gt;

&lt;p&gt;If you answered yes to any of the above questions, the title triggered your inner refactoring freak, or you're just keen to learn new approaches, then this article is for you. In this article we will discover how to take full advantage of (what I think) is a somewhat unknown feature: DynamoDB Condition Expressions.&lt;/p&gt;
&lt;h2&gt;
  
  
  DynamoDB Expressions
&lt;/h2&gt;

&lt;p&gt;When working with DynamoDB, developers use &lt;code&gt;expressions&lt;/code&gt; to state what actions or conditions they want DynamoDB to process.&lt;/p&gt;

&lt;p&gt;There are different types of expressions that developers can use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Projection Expressions&lt;/code&gt; - Similar to the &lt;code&gt;SELECT&lt;/code&gt; statement in SQL, this type of expression is used to select what attributes you want DynamoDB to return when reading objects. By default, DynamoDB returns all attributes (same as using &lt;code&gt;SELECT *&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Update Expressions&lt;/code&gt; - Similar to the &lt;code&gt;UPDATE&lt;/code&gt; statement in SQL, this expression is used in the &lt;code&gt;UpdateItem&lt;/code&gt; requests to specify what changes you want to apply to the desired object. Developers can &lt;code&gt;ADD&lt;/code&gt;, &lt;code&gt;SET&lt;/code&gt;, &lt;code&gt;REMOVE&lt;/code&gt; or &lt;code&gt;DELETE&lt;/code&gt; attributes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Condition Expressions&lt;/code&gt; - This article's &lt;em&gt;star of the show&lt;/em&gt;; it could be compared to the &lt;code&gt;WHERE&lt;/code&gt; clause of a SQL statement. This expression type allows developers, as the name implies, to ensure a condition is met before the data is manipulated.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
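&lt;p&gt;To make the three types concrete, here is a minimal sketch of how each expression appears in a request input (the table, keys, and attribute names are illustrative assumptions; no request is sent):&lt;/p&gt;

```javascript
// Projection Expression: pick which attributes a read returns (SELECT name, age).
const getInput = {
  TableName: "Users",
  Key: { PK: "USER#123" },
  ProjectionExpression: "#n, age",
  // "name" is a DynamoDB reserved word, so it needs a placeholder.
  ExpressionAttributeNames: { "#n": "name" },
};

// Update Expression: describe the changes to apply (UPDATE ... SET ...).
const updateInput = {
  TableName: "Users",
  Key: { PK: "USER#123" },
  UpdateExpression: "SET age = :age REMOVE nickname",
  ExpressionAttributeValues: { ":age": 30 },
};

// Condition Expression: only perform the write if the check passes (WHERE).
const putInput = {
  TableName: "Users",
  Item: { PK: "USER#123", age: 30 },
  ConditionExpression: "attribute_not_exists(PK)",
};
```

&lt;p&gt;Each of these objects would be passed to the matching document-client command (&lt;code&gt;GetCommand&lt;/code&gt;, &lt;code&gt;UpdateCommand&lt;/code&gt;, &lt;code&gt;PutCommand&lt;/code&gt;).&lt;/p&gt;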
&lt;h2&gt;
  
  
  Using Condition Expressions
&lt;/h2&gt;

&lt;p&gt;Developers can use &lt;code&gt;condition expressions&lt;/code&gt; in any DynamoDB operation that manipulates data, such as &lt;code&gt;PutItem&lt;/code&gt; or &lt;code&gt;UpdateItem&lt;/code&gt;. Using such expressions opens the door to a whole new set of business logic and fail-safes that can be implemented without adding any additional requests to DynamoDB.&lt;/p&gt;
&lt;h3&gt;
  
  
Avoid overwriting existing objects
&lt;/h3&gt;

&lt;p&gt;DynamoDB will, by default, overwrite any existing data if a &lt;code&gt;PutItem&lt;/code&gt; operation is triggered with existing keys.&lt;/p&gt;

&lt;p&gt;There are scenarios where one would like to store a new object only if it doesn't exist, for example, if we want to do some kind of idempotency check or avoid overwriting existing data.&lt;/p&gt;

&lt;p&gt;To do that, one could approach the issue by doing a &lt;code&gt;Get&lt;/code&gt;/&lt;code&gt;Query&lt;/code&gt; and only triggering the &lt;code&gt;Put&lt;/code&gt; if the item doesn't exist, but using condition expressions allows us to do everything in a single request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PutCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="p"&gt;...,&lt;/span&gt;
    &lt;span class="na"&gt;ConditionExpression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;attribute_not_exists(#id)&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;ExpressionAttributeNames&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;PK&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example snippet we're doing a &lt;code&gt;PutItem&lt;/code&gt; request and adding the condition expression &lt;code&gt;attribute_not_exists(#id)&lt;/code&gt;, which tells DynamoDB to store the provided object only if the primary key (&lt;code&gt;PK&lt;/code&gt;) is not already in use.&lt;/p&gt;

&lt;p&gt;This way, DynamoDB won't overwrite any existing data and will throw an error if the condition is not met (an object already exists).&lt;/p&gt;

&lt;p&gt;Full example &lt;a href="https://github.com/Lorenzohidalgo/dynamodb-conditional-samples/blob/main/samples/putItemWithCondition.js"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Update only existing items
&lt;/h3&gt;

&lt;p&gt;Similar to the insert, DynamoDB's behaviour on &lt;code&gt;UpdateItem&lt;/code&gt; can also be a tad counter-intuitive, as it executes an &lt;code&gt;UPSERT&lt;/code&gt; operation, not an &lt;code&gt;UPDATE&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For some scenarios, always executing &lt;code&gt;UPSERT&lt;/code&gt; operations by default won't be an issue, but in general one will want the &lt;code&gt;UPDATE&lt;/code&gt; operation to fail if the record doesn't exist, to avoid creating new records with only partial data that will probably trigger errors downstream.&lt;/p&gt;

&lt;p&gt;Implementing that change is very easy, as one just needs to add a simple condition expression as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;UpdateCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="p"&gt;...,&lt;/span&gt;
      &lt;span class="na"&gt;ConditionExpression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;attribute_exists(#id)&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;ExpressionAttributeNames&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;PK&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;docClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example snippet we're doing an &lt;code&gt;UpdateItem&lt;/code&gt; request and adding the condition expression &lt;code&gt;attribute_exists(#id)&lt;/code&gt;, which tells DynamoDB to update the desired object only if the primary key (&lt;code&gt;PK&lt;/code&gt;) exists.&lt;/p&gt;

&lt;p&gt;With this condition, if DynamoDB can't find an item with the provided keys, the primary key (&lt;code&gt;PK&lt;/code&gt;) won't exist and the condition will fail, meaning that the &lt;code&gt;UpdateItem&lt;/code&gt; request will throw an error.&lt;/p&gt;

&lt;p&gt;Full example &lt;a href="https://github.com/Lorenzohidalgo/dynamodb-conditional-samples/blob/main/samples/putItemWithCondition.js"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conditional Updates
&lt;/h3&gt;

&lt;p&gt;But the magic of Condition Expressions doesn't stop at avoiding unexpected behaviours from DynamoDB; they open up a whole new range of possibilities.&lt;/p&gt;

&lt;p&gt;For example, let's imagine a scenario where, during the login flow, you want to check whether the account is frozen (a frozen user should not be able to log in) and, if the account is active, update a field storing the timestamp and device information of the current login.&lt;/p&gt;

&lt;p&gt;For this scenario, one could first query the data to check the account status and, if the check is successful, send another request to update the object with the desired login information. This approach is technically correct and would probably work for most requirements, but it raises the following concerns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Number of Requests&lt;/strong&gt; - The most common scenario will be that the account is not frozen and the login should succeed, meaning that for almost all logins two requests will be sent to DynamoDB.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Consistency&lt;/strong&gt; - This approach would be unable to handle or control any change on the data between both requests, meaning that, if the account is being deactivated at the same time, the login could succeed when it should have failed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the use of Condition Expressions we can move that business logic to be executed on DynamoDB's side.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🤯&lt;br&gt;
Did you know that you can edit nested attributes directly? You just need to use the &lt;code&gt;.&lt;/code&gt;(dot) or &lt;code&gt;[x]&lt;/code&gt; notation when referencing it! &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.Attributes.html#Expressions.Attributes.NestedAttributes"&gt;Check the docs&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;UpdateCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="p"&gt;...,&lt;/span&gt;
      &lt;span class="na"&gt;ConditionExpression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;attribute_exists(#id) AND #obj.#key = :expected&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;ExpressionAttributeNames&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;PK&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#obj&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;accountInformation&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#key&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;isFrozen&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;ExpressionAttributeValues&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;:expected&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;docClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example snippet we're adding the condition expression &lt;code&gt;attribute_exists(#id) AND #obj.#key = :expected&lt;/code&gt;, which tells DynamoDB to update the desired object only if the primary key (&lt;code&gt;PK&lt;/code&gt;) exists and the nested attribute &lt;code&gt;accountInformation.isFrozen&lt;/code&gt; is set to &lt;code&gt;false&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With this condition, we move the business logic that checks the account status to be executed by DynamoDB directly, meaning that the object will only be updated if the condition is met. This way, the login only fails if the &lt;code&gt;UpdateItem&lt;/code&gt; request has failed.&lt;/p&gt;

&lt;p&gt;Full example &lt;a href="https://github.com/Lorenzohidalgo/dynamodb-conditional-samples/blob/main/samples/putItemWithCondition.js"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Handling Conditional Check Failures
&lt;/h3&gt;

&lt;p&gt;Now we know how to implement condition expressions, but we still need to cover what to do when the condition fails.&lt;/p&gt;

&lt;p&gt;What if the expression contains multiple conditions and we want a custom error message depending on which condition failed? Should we trigger a new request to query the object?&lt;/p&gt;

&lt;p&gt;The answer is no, and this is probably one of the hidden features that will help you the most. At this point, you are probably aware of the &lt;code&gt;ReturnValues&lt;/code&gt; option to configure what information you would like DynamoDB to return, but did you know there is also &lt;code&gt;ReturnValuesOnConditionCheckFailure&lt;/code&gt;?&lt;/p&gt;

&lt;p&gt;It works similarly to the &lt;code&gt;ReturnValues&lt;/code&gt; option, but with only two available values:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;NONE&lt;/code&gt; - the default; the thrown error won't include any information regarding the current object's attributes and values.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ALL_OLD&lt;/code&gt; - setting this option will request DynamoDB to throw an error that contains the old (current) object information in the response.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;UpdateCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="p"&gt;...,&lt;/span&gt;
      &lt;span class="na"&gt;ReturnValuesOnConditionCheckFailure&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ALL_OLD&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;docClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="k"&gt;instanceof&lt;/span&gt; &lt;span class="nx"&gt;ConditionalCheckFailedException&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;oldRecord&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;unmarshall&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Item&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="c1"&gt;// Apply any logic to check why the update failed&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above snippet showcases how one could approach the error handling for conditional expressions with three simple steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Set &lt;code&gt;ReturnValuesOnConditionCheckFailure: "ALL_OLD"&lt;/code&gt; to ensure you receive the object information if the condition fails.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensure you only catch the expected exception types by re-throwing any error that is not an instance of &lt;code&gt;ConditionalCheckFailedException&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Retrieve and unmarshall the record data from the returned error with &lt;code&gt;unmarshall(error.Item)&lt;/code&gt;. The &lt;code&gt;Item&lt;/code&gt; value will be the marshalled object that we tried to update, so it's important to unmarshall it or access its properties in their marshalled form.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After those steps have been implemented, one can add any additional desired business logic to check why the operation failed and handle the error appropriately based on which of the conditions failed.&lt;/p&gt;

&lt;p&gt;Full example &lt;a href="https://github.com/Lorenzohidalgo/dynamodb-conditional-samples/blob/main/samples/putItemWithCondition.js"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;In conclusion, DynamoDB condition expressions provide a powerful tool that can greatly enhance the efficiency and reliability of your database operations. They allow developers to implement complex business logic directly within their database operations, reducing the number of requests and mitigating risks of data inconsistency.&lt;/p&gt;

&lt;p&gt;This feature can help prevent unintended data overwriting, ensure updates only apply to existing items, and enable conditional updates based on specific criteria. With the ability to return current object data when condition checks fail, developers can handle errors more effectively without making additional requests.&lt;/p&gt;

&lt;p&gt;By fully utilizing this feature, you can unlock the true potential of DynamoDB and create more robust and efficient applications.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>javascript</category>
      <category>dynamodb</category>
      <category>serverless</category>
    </item>
    <item>
      <title>AWS AppSync Subscriptions: Detaching Prolonged Operations</title>
      <dc:creator>Lorenzo Hidalgo Gadea</dc:creator>
      <pubDate>Tue, 05 Dec 2023 08:47:16 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-appsync-subscriptions-detaching-prolonged-operations-4hin</link>
      <guid>https://dev.to/aws-builders/aws-appsync-subscriptions-detaching-prolonged-operations-4hin</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;TL;DR: This article discusses how to use AppSync Subscriptions to decouple long-running tasks from front-end requests in a serverless chat application. It provides a step-by-step guide to implement this solution, including an architecture overview, decoupling and processing steps, prerequisites, GraphQL schema, AppSync API configuration, DataSources and Resolvers, and Lambda function setup. The sample code and complete implementation can be found in the provided GitHub repository.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;When building APIs, developers often face the issue of long-running tasks timing out requests from the Front End, or the difficulty of decoupling those longer tasks from the actual FE requests while keeping the FE informed of the execution status.&lt;/p&gt;

&lt;p&gt;In this article, and as part of the &lt;a href="https://hackathon.serverless.guru/"&gt;Serverless Holiday Hackathon&lt;/a&gt;, we will review how developers can take advantage of AppSync Subscriptions to decouple long-running tasks from the actual FE request.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;The Hackathon challenges participants to build holiday-themed chat applications that use Generative AI.&lt;/p&gt;

&lt;p&gt;While building such an application, developers will likely face the following difficulties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Requests to Bedrock or any other LLM API can become long-running when generating longer responses; these can take more than 30 seconds and time out the request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Not knowing how to take advantage of streamed responses, which would provide the final user with a more interactive experience.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this example, we will cover how to resolve both scenarios in two simple steps, while still leaving room for improvement and personalization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Decoupling
&lt;/h3&gt;

&lt;p&gt;As the first step, we need to decouple the front-end request from the actual processing. To do so, we can configure a JS Resolver to send the message to be processed to an SQS queue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rAe5AYsk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1701719096831/de956cb7-d165-4655-91ef-10d5c94728f5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rAe5AYsk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1701719096831/de956cb7-d165-4655-91ef-10d5c94728f5.png" alt="sendPrompt Mutation overview" width="800" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The flow would look like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The user sends a request to AppSync.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The JS Resolver creates a unique identifier for the received prompt and adds the message to the SQS Queue.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AppSync returns the unique identifiers to the User.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The User will need the response for the following step.&lt;/p&gt;
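&lt;p&gt;As a hedged illustration of this first step from the client's perspective, the &lt;code&gt;sendPrompt&lt;/code&gt; mutation could be sent with a plain HTTP POST. The endpoint, API key, and the fields selected on &lt;code&gt;promptResponse&lt;/code&gt; below are placeholders, not values taken from the sample repository:&lt;/p&gt;

```javascript
// Hypothetical client-side sketch of the sendPrompt request (step 1).
// The selection set on promptResponse is an assumption; check the schema
// in the sample repository for the real field names.
const SEND_PROMPT = `
  mutation SendPrompt($userPrompt: userPrompt!) {
    sendPrompt(userPrompt: $userPrompt) {
      sessionId
      messageId
    }
  }
`;

// Builds the fetch() arguments; returning them makes the sketch easy to test.
function buildSendPromptRequest(endpoint, apiKey, prompt) {
  return {
    url: endpoint,
    options: {
      method: "POST",
      headers: {
        "content-type": "application/json",
        "x-api-key": apiKey, // the API_KEY auth mode enabled for sendPrompt
      },
      body: JSON.stringify({
        query: SEND_PROMPT,
        variables: { userPrompt: prompt },
      }),
    },
  };
}
```

&lt;p&gt;The response to this request carries the identifiers the user needs for the subscription in the following step.&lt;/p&gt;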

&lt;h3&gt;
  
  
  Step 2: Process and Notify
&lt;/h3&gt;

&lt;p&gt;The second step handles processing the prompt and notifying the user by sending a dummy mutation request that triggers a subscription.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lFjgmiqP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1701719513475/c2259d07-aa7c-48d7-8a6f-1b4e727615a5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lFjgmiqP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1701719513475/c2259d07-aa7c-48d7-8a6f-1b4e727615a5.png" alt="Subscription Trigger overview" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The flow for this step would be composed of the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The user subscribes via AppSync to updates on the &lt;code&gt;streamedResponse&lt;/code&gt; mutation using the identifiers provided in the previous response.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The SQS Queue triggers the Lambda function for each message added to it by the &lt;code&gt;sendPrompt&lt;/code&gt; mutation described above.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Lambda Function sends a &lt;code&gt;streamedResponse&lt;/code&gt; mutation request for every update we want to notify the user about.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each &lt;code&gt;streamedResponse&lt;/code&gt; mutation request notifies every subscribed user whose filtering requirements match the response.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
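&lt;p&gt;The flow above can be sketched as an SQS-triggered Lambda handler. This is a hedged outline, not the repository's implementation: the LLM call and the SigV4-signed &lt;code&gt;streamedResponse&lt;/code&gt; request are left as comments, and the parsed message shape (a &lt;code&gt;messageId&lt;/code&gt; field) mirrors the resolver shown later in the article:&lt;/p&gt;

```javascript
// Hedged sketch of the step 2 Lambda handler. The full implementation,
// including SigV4 signing of the streamedResponse mutation, lives in the
// sample repository; here we only parse the queued prompts and collect the
// identifiers we would notify subscribers about.
async function handler(event) {
  const notified = [];
  for (const record of event.Records) {
    const prompt = JSON.parse(record.body); // body written by the sendPrompt resolver
    // Real code: call the LLM here, then POST a signed streamedResponse
    // mutation to process.env.GRAPHQL_ENDPOINT for every chunk produced.
    notified.push(prompt.messageId);
  }
  return { notified };
}
```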

&lt;h2&gt;
  
  
  Implementing the solution
&lt;/h2&gt;

&lt;p&gt;In this section, we will go over how to implement the solution described above.&lt;/p&gt;

&lt;p&gt;All details and code can be found in the following &lt;a href="https://github.com/Lorenzohidalgo/appsync-decoupling-sample"&gt;sample application repository&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To follow along and deploy the sample application, developers will need to meet the following prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A Node.js installation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Account to deploy the API&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Postman or GraphBolt to send requests and test the flow&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  GraphQL Schema
&lt;/h3&gt;

&lt;p&gt;A mock schema has been defined for this application; part of it is shown here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;schema {
  query: Query
  mutation: Mutation
  subscription: Subscription
}

type Query @aws_api_key @aws_iam {
  getSessionId: ID!
}

type Mutation {
  sendPrompt(userPrompt: userPrompt!): promptResponse @aws_api_key @aws_iam
  streamedResponse(streamedResponseInput: StreamedResponseInput!): StreamedResponse @aws_iam
}

type Subscription @aws_api_key @aws_iam {
  onStreamedResponse(sessionId: ID!): StreamedResponse @aws_subscribe(mutations: ["streamedResponse"])
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The most important part is the auth directives on the mutations, where &lt;code&gt;streamedResponse&lt;/code&gt; is only enabled for &lt;code&gt;@aws_iam&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This is an important configuration aspect, as only our back-end services should be able to trigger this mutation.&lt;/p&gt;

&lt;h3&gt;
  
  
  AppSync API
&lt;/h3&gt;

&lt;p&gt;To configure the AppSync API using Serverless Framework we will be taking advantage of the &lt;a href="https://github.com/sid88in/serverless-appsync-plugin"&gt;Serverless AppSync Plugin&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;appSync&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${self:custom.base}-appsync&lt;/span&gt;
  &lt;span class="na"&gt;logging&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ALL&lt;/span&gt;
    &lt;span class="na"&gt;retentionInDays&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;xrayEnabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;authentication&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS_IAM&lt;/span&gt;
  &lt;span class="na"&gt;additionalAuthentications&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;API_KEY&lt;/span&gt;
  &lt;span class="na"&gt;apiKeys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;${self:custom.base}-key&lt;/span&gt;
  &lt;span class="na"&gt;substitutions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;accountId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;Ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::AccountId&lt;/span&gt;
    &lt;span class="na"&gt;queueName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;decoupling-sqs&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some key insights from the above configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Logging: CloudWatch can end up being expensive, so to avoid racking up a high bill we configured the logs to be retained for only one day. We kept the log level at &lt;code&gt;ALL&lt;/code&gt; to see everything while debugging, but make sure to lower it for production projects: AppSync logs are very verbose.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multiple authentication methods: we use two different auth methods to keep the API private while limiting who can trigger the &lt;code&gt;streamedResponse&lt;/code&gt; mutation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Substitutions: AppSync resolvers don't support environment variables; substitutions work around this by replacing placeholder text in the resolver code with the required values at deployment time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  DataSources and Resolvers
&lt;/h3&gt;

&lt;p&gt;Apart from the API configuration above, we also need to configure the code that resolves each operation and the data sources it uses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;...&lt;/span&gt;
  &lt;span class="na"&gt;dataSources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;localResolverDS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;NONE"&lt;/span&gt;
    &lt;span class="na"&gt;sqsDS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;HTTP"&lt;/span&gt;
      &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s"&gt;https://sqs.${AWS::Region}.amazonaws.com/&lt;/span&gt;
        &lt;span class="na"&gt;iamRoleStatements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Allow"&lt;/span&gt;
            &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sqs:*"&lt;/span&gt;
            &lt;span class="na"&gt;Resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;Fn::GetAtt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;MyQueue&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Arn&lt;/span&gt;
        &lt;span class="na"&gt;authorizationConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;authorizationType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS_IAM&lt;/span&gt;
          &lt;span class="na"&gt;awsIamConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;signingRegion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;Ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Region&lt;/span&gt;
            &lt;span class="na"&gt;signingServiceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sqs&lt;/span&gt;
  &lt;span class="na"&gt;resolvers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Mutation.sendPrompt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;UNIT&lt;/span&gt;
      &lt;span class="na"&gt;dataSource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sqsDS&lt;/span&gt;
      &lt;span class="na"&gt;code&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./src/appsync/sendPrompt.js"&lt;/span&gt;
    &lt;span class="na"&gt;Mutation.streamedResponse&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;UNIT&lt;/span&gt;
      &lt;span class="na"&gt;dataSource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localResolverDS&lt;/span&gt;
      &lt;span class="na"&gt;code&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./src/appsync/streamedResponse.js"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key takeaways from this config are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Data Sources: what resolvers use to fetch data and resolve operations. In this case, we configure two different types.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resolvers: here we define which kind, data source, and code will be used to resolve a specific operation or data type.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
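&lt;p&gt;To make the &lt;code&gt;NONE&lt;/code&gt; data source concrete: it performs no external call, so the &lt;code&gt;streamedResponse&lt;/code&gt; resolver can simply pass the mutation input through, which is enough to fan it out to subscribers. A minimal hedged sketch follows (the real resolver lives in the repository; in the actual resolver file these would be &lt;code&gt;export function&lt;/code&gt; declarations):&lt;/p&gt;

```javascript
// Hedged sketch of a pass-through resolver for the NONE data source.
// Plain functions are used here so the sketch is self-contained.
function request(ctx) {
  // A NONE data source just echoes the payload back to the response handler.
  return { payload: ctx.args.streamedResponseInput };
}

function response(ctx) {
  // Whatever is returned here is what subscribers to onStreamedResponse receive.
  return ctx.result;
}
```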

&lt;h3&gt;
  
  
  Decoupling Lambda
&lt;/h3&gt;

&lt;p&gt;Once we have the API up and running, we can focus on how to configure a Lambda function to process all messages from the SQS Queue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;functions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;sqsHandler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;src/decoupled.handler&lt;/span&gt;
    &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LambdaRole&lt;/span&gt;
    &lt;span class="na"&gt;logRetentionInDays&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;GRAPHQL_ENDPOINT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;Fn&lt;/span&gt;&lt;span class="pi"&gt;::&lt;/span&gt;&lt;span class="nv"&gt;GetAtt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;GraphQlApi&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;GraphQLUrl&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;
      &lt;span class="na"&gt;REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;Ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Region&lt;/span&gt;
    &lt;span class="na"&gt;events&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;sqs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;arn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;Fn::GetAtt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;MyQueue&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Arn&lt;/span&gt;
          &lt;span class="na"&gt;batchSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration is no different from any other Lambda Function triggered by an SQS Queue, but there are still some takeaway points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;IAM Role: developers will need to configure a custom IAM role so this Lambda function can sign requests to AppSync.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Log retention: similar to the AppSync configuration, we want to limit how long the logs are stored; in this case, they are deleted after one day.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AppSync API Endpoint: developers can struggle to get the URL of the AppSync API generated in the same &lt;code&gt;serverless.yml&lt;/code&gt;. Using &lt;code&gt;{ Fn::GetAtt: [GraphQlApi, GraphQLUrl] }&lt;/code&gt; resolves it during deployment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
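&lt;p&gt;On the first point: the custom role needs permission to call the IAM-protected mutation. The following is a hedged sketch of the extra policy statement; the &lt;code&gt;GraphQlApi&lt;/code&gt; logical ID and the per-field ARN format are assumptions to adjust to your own stack:&lt;/p&gt;

```yaml
# Hedged sketch: policy statement letting the Lambda role invoke the
# IAM-only streamedResponse mutation. The logical ID GraphQlApi is assumed.
- Effect: "Allow"
  Action:
    - "appsync:GraphQL"
  Resource:
    - !Sub "${GraphQlApi.Arn}/types/Mutation/fields/streamedResponse"
```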

&lt;h3&gt;
  
  
  Implementing the code
&lt;/h3&gt;

&lt;p&gt;The code to complete the above example configuration can be found in the provided &lt;a href="https://github.com/Lorenzohidalgo/appsync-decoupling-sample"&gt;GitHub repository&lt;/a&gt;, but the following is an example of one of the trickiest parts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;util&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-appsync/utils&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;accountId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#accountId#&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;queueName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;#queueName#&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;userPrompt&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;msgBody&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;userPrompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;messageId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;util&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;autoId&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stash&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;msgBody&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;msgBody&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;2018-05-29&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;resourcePath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;accountId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;queueName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Action=SendMessage&amp;amp;Version=2012-11-05&amp;amp;MessageBody=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nx"&gt;msgBody&lt;/span&gt;
      &lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;content-type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;application/x-www-form-urlencoded&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code sample is part of the JS resolver configured for the &lt;code&gt;sendPrompt&lt;/code&gt; mutation. As part of this sample, we can learn:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;JS Resolver substitutions: when using substitutions with JS resolvers, define a variable such as &lt;code&gt;const accountId = "#accountId#";&lt;/code&gt;; the placeholder between the &lt;code&gt;#&lt;/code&gt; characters will be replaced with the configured value of the same name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Building an &lt;code&gt;HTTP&lt;/code&gt; request: the object returned by the &lt;code&gt;request&lt;/code&gt; function is an example of how to build an HTTP request for calling the SQS API.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
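&lt;p&gt;The &lt;code&gt;request&lt;/code&gt; half shown above has a &lt;code&gt;response&lt;/code&gt; counterpart. A hedged sketch of what it might look like (the repository holds the real one) is returning the stashed body, so the caller receives the generated &lt;code&gt;messageId&lt;/code&gt; needed for the subscription:&lt;/p&gt;

```javascript
// Hedged sketch of the response handler paired with the request above.
// In the resolver file this would be `export function response(ctx)` and
// failures would be surfaced with util.error from @aws-appsync/utils.
function response(ctx) {
  if (ctx.error || ctx.result.statusCode !== 200) {
    return null; // real code: util.error(ctx.error ? ctx.error.message : "SQS call failed")
  }
  // The body stashed by request() carries the generated messageId back to the user.
  return ctx.stash.msgBody;
}
```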

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;In conclusion, AWS AppSync Subscriptions can effectively decouple long-running tasks from front-end requests in serverless chat applications.&lt;/p&gt;

&lt;p&gt;By implementing the two-step process of decoupling and processing with notifications, developers can enhance user experience and avoid request timeouts.&lt;/p&gt;

&lt;p&gt;The provided sample code and repository offer a practical guide to implementing this solution, showcasing the use of GraphQL schema, AppSync API configuration, data sources, resolvers, and Lambda function setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/Lorenzohidalgo/appsync-decoupling-sample"&gt;Sample Github Repository&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/sid88in/serverless-appsync-plugin"&gt;Serverless AppSync Plugin Repository&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>appsync</category>
      <category>serverless</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Using AWS X-RAY to improve Serverless performance</title>
      <dc:creator>Lorenzo Hidalgo Gadea</dc:creator>
      <pubDate>Wed, 19 Jul 2023 10:00:00 +0000</pubDate>
      <link>https://dev.to/lhidalgo_dev/using-aws-x-ray-to-improve-serverless-performance-1c5h</link>
      <guid>https://dev.to/lhidalgo_dev/using-aws-x-ray-to-improve-serverless-performance-1c5h</guid>
      <description>&lt;p&gt;Building &lt;strong&gt;high performance serverless applications&lt;/strong&gt; can be tough, but a service like &lt;strong&gt;AWS X-Ray&lt;/strong&gt; will help you understand your &lt;strong&gt;AWS Lambda&lt;/strong&gt; application code better, &lt;strong&gt;slow DynamoDB queries&lt;/strong&gt; or HTTPS requests, and then track how your changes improve over time.&lt;/p&gt;

&lt;p&gt;AWS X-Ray is a service that collects data from all the requests handled by your services, allowing you to visualize and analyze it. It generates service maps, response time or duration distribution, and segment timelines to help developers debug performance issues and improve the overall performance of their code. Setting up X-Ray is straightforward and only requires a few simple steps, including enabling tracing and configuring what X-Ray should capture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Observability and code profiling are, at least in most cases, an afterthought that comes into play once it’s too late. Most developers start thinking about them when they face issues, timeouts, or bottlenecks that they can’t easily resolve or justify with only the execution logs.&lt;/p&gt;

&lt;p&gt;There are a lot of third-party observability and profiling tools out there that might be useful, for example, &lt;a href="https://docs.sentry.io/platforms/node/guides/aws-lambda/profiling/?utm_m=&amp;amp;utm_source=blog"&gt;Sentry.io&lt;/a&gt;, &lt;a href="https://www.dynatrace.com/solutions/full-stack-monitoring/"&gt;Dynatrace&lt;/a&gt;, or &lt;a href="https://www.datadoghq.com/product/apm/"&gt;DataDog&lt;/a&gt;. But if you want to stay inside the AWS ecosystem and have an almost effortless integration, &lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html"&gt;AWS X-Ray&lt;/a&gt; is the tool to choose.&lt;/p&gt;

&lt;p&gt;In this article, we will go over what AWS X-Ray is, how we can enable it, and how it will help us to better understand and debug the performance of our code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS X-Ray?
&lt;/h2&gt;

&lt;p&gt;In AWS's own words, “AWS X-Ray is a service that collects data from all the requests handled by your services and allows you to visualize and analyze it.” In other words, AWS X-Ray is to requests and service interactions what CloudWatch is to execution logs.&lt;/p&gt;

&lt;p&gt;When enabled and correctly configured, AWS X-Ray will collect and measure all service interactions for a specific request. This information will be stored and made available for analysis in the following ways:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Service Maps
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS X-Ray&lt;/strong&gt; generates service maps to allow for a visual understanding of how the different services interact with each other. Using a somewhat simple CRUD orders API as an example, we’d be able to see the following two service maps:&lt;/p&gt;

&lt;h4&gt;
  
  
  Global Service Map:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lqhQCk9K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/654501de56a77e22260138b2_aws-x-ray-diagram-trace-map.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lqhQCk9K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/654501de56a77e22260138b2_aws-x-ray-diagram-trace-map.webp" alt="An AWS X-Ray trace map" width="690" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This type of service map will allow the user to understand and visualize how all the services under the same domain interact. In this screenshot, we can see that the end client interacts with a single API Gateway which, depending on the request type, depends on a set of Lambda functions to process the request and interact with a single DynamoDB table.&lt;/p&gt;

&lt;h4&gt;
  
  
  Trace Service Map:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NWQ3FT3e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/65450219c56d4f85f3b56a8c_aws-x-ray-trace-map.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NWQ3FT3e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/65450219c56d4f85f3b56a8c_aws-x-ray-trace-map.webp" alt="An AWS X-Ray trace between client, API Gateway, and AWS Lambda" width="606" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second kind of service map can be found on the console when reviewing a single trace and will contain only the interactions with services for that single trace or user request. This is the one that will help us the most when trying to debug performance issues for different features since we’ll be able to see what services are being used and how the different executions took place.&lt;/p&gt;

&lt;p&gt;For example, in the provided screenshot we’re able to see that the execution failed since the nodes displayed for both API Gateway and AWS Lambda are in an error state.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Response Time or Duration Distribution
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PnLdLlS7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545029f26a92d8fe031d0f1_aws-x-ray-metrics-chart-duration.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PnLdLlS7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545029f26a92d8fe031d0f1_aws-x-ray-metrics-chart-duration.webp" alt="An AWS X-Ray chart showing response time duration" width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS X-Ray&lt;/strong&gt; also provides a distribution diagram for groups of traces. This is specifically useful when trying to understand the overall performance of your service, how it affects the end users, and how much effort should be put into improving the service for each scenario.&lt;/p&gt;

&lt;p&gt;For example, performance issues should be prioritized differently depending on whether they affect only 0.5% of all requests or 50% of them.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Segment Timelines
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QVr-Sn0P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/654502d6fa5466ac813aeea6_aws-x-ray-segment-timeline.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QVr-Sn0P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/654502d6fa5466ac813aeea6_aws-x-ray-segment-timeline.webp" alt="An AWS X-Ray segment timeline of serverless rest api" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Segment timelines&lt;/strong&gt; are the most detailed diagrams that X-Ray will provide and allow developers to understand exactly how and when each service is being used.&lt;/p&gt;

&lt;p&gt;These timelines are specifically useful when debugging performance issues. For example, in this provided example we can see that the culprit for the long execution time is the Lambda Initialization (a.k.a. Cold Start) which took 859ms to complete, and that most of the execution time for the &lt;strong&gt;AWS Lambda&lt;/strong&gt; invocation itself was spent requesting the deletion of the item in &lt;strong&gt;DynamoDB&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enable AWS X-Ray in AWS Lambda
&lt;/h2&gt;

&lt;p&gt;Enabling AWS X-Ray on your AWS Lambda functions is very straightforward and can usually be accomplished within a few minutes by following two simple steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Enable AWS X-Ray tracing on Your Services
&lt;/h3&gt;

&lt;p&gt;The easiest way to do so is to enable it from your IaC template. For example, when using &lt;strong&gt;Serverless Framework&lt;/strong&gt;, you can simply add the following attributes under &lt;strong&gt;'provider.tracing'&lt;/strong&gt; to enable AWS X-Ray on all your defined AWS Lambda functions and the generated API Gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b15U7a-8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545033cc26611a9495107d5_code-snippet-serverless-framework-tracing-enabled.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b15U7a-8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545033cc26611a9495107d5_code-snippet-serverless-framework-tracing-enabled.webp" alt="A code snippet of serverless framework enabling tracing on Lambda and API Gateway" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another option, in case you’re not deploying your services with an IaC template, would be to enable it manually through the AWS Console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OTqy-aea--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545039b00c83f9050382e51_aws-console-enabling-x-ray.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OTqy-aea--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545039b00c83f9050382e51_aws-console-enabling-x-ray.webp" alt="Setting up AWS X-Ray tracing in the AWS Console" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You just need to head over to your Lambda's configuration, edit the &lt;strong&gt;'Monitoring tools'&lt;/strong&gt; section, and enable the tracing switch.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Configure What X-Ray Should Capture
&lt;/h3&gt;

&lt;p&gt;After enabling &lt;strong&gt;AWS X-Ray&lt;/strong&gt; on your Lambda function, you will also need to add the &lt;strong&gt;'aws-xray-sdk-core'&lt;/strong&gt; library to your project's dependencies and configure it to add the required traces.&lt;/p&gt;

&lt;h4&gt;
  
  
  Capturing AWS SDK Usage:
&lt;/h4&gt;

&lt;p&gt;Wrapping the &lt;strong&gt;aws-sdk&lt;/strong&gt; client with the showcased functions allows AWS X-Ray to capture and trace how the client is used. For a Node.js project, there are two different configurations depending on the aws-sdk version your project is currently using.&lt;/p&gt;

&lt;p&gt;If you are using aws-sdk v2, you'll only need to wrap the library once and it will have a “global” effect.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_zjTSlwi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/654503ecc179ea09c9aeb91d_nodejs-enabling-x-ray.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_zjTSlwi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/654503ecc179ea09c9aeb91d_nodejs-enabling-x-ray.webp" alt="A code snippet showing setting up AWS Lambda with AWS X-Ray tracing" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, with the provided code snippet, AWS X-Ray will be able to capture all requests done with the v2 SDK, independently of the client (SSM, DynamoDB, S3, etc).&lt;/p&gt;
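&lt;p&gt;A minimal sketch of the v2 setup, assuming &lt;strong&gt;'aws-xray-sdk-core'&lt;/strong&gt; and &lt;strong&gt;'aws-sdk'&lt;/strong&gt; are already listed in your project's dependencies:&lt;/p&gt;

```javascript
// index.js (sketch): wrap the whole v2 SDK once, typically at the top of
// the handler's entry file. Every client created from the wrapped module
// is traced automatically, regardless of the service (SSM, DynamoDB, S3...).
const AWSXRay = require('aws-xray-sdk-core');
const AWS = AWSXRay.captureAWS(require('aws-sdk'));

// This client is already instrumented; no further wrapping needed.
const documentClient = new AWS.DynamoDB.DocumentClient();
```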

&lt;p&gt;When using the v3 SDK, the developer will need to add the AWS X-Ray wrapper to all the instantiated clients.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IVetlS8c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545049541aebf71c961716c_nodejs-lambda-x-ray-dyanmodb-tracing.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IVetlS8c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545049541aebf71c961716c_nodejs-lambda-x-ray-dyanmodb-tracing.webp" alt="A code snippet showing how to trace Amazon DynamoDB" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This snippet, for example, will allow AWS X-Ray to capture only the requests made with that specific DynamoDB client. If you instantiate another DynamoDB client (or a client for any other service) in another file without adding the wrapper, AWS X-Ray won’t be able to capture its usage.&lt;/p&gt;
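&lt;p&gt;A minimal sketch of the v3 setup, assuming &lt;strong&gt;'aws-xray-sdk-core'&lt;/strong&gt; and &lt;strong&gt;'@aws-sdk/client-dynamodb'&lt;/strong&gt; are installed; with the v3 SDK, each instantiated client must be wrapped individually:&lt;/p&gt;

```javascript
// dynamodb.js (sketch): wrap one specific v3 client instance.
const AWSXRay = require('aws-xray-sdk-core');
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');

// Only requests sent through this wrapped instance are traced; a client
// created elsewhere without the wrapper will not appear in X-Ray.
const client = AWSXRay.captureAWSv3Client(new DynamoDBClient({}));

module.exports = { client };
```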

&lt;h4&gt;
  
  
  Capturing HTTPS Requests:
&lt;/h4&gt;

&lt;p&gt;In some cases, your project may also make HTTPS requests, for example to access third-party APIs, which you’d also like to capture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Frv7FUq2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545050ab52131bd4bc7daf2_nodejs-lambda-tracing-https.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Frv7FUq2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545050ab52131bd4bc7daf2_nodejs-lambda-tracing-https.webp" alt="A code snippet showing how to trace HTTPS requests with AWS X-Ray" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To do so, similarly to wrapping the v2 SDK, you can add the above snippet to your index file to allow AWS X-Ray to capture all the HTTPS requests made by your code.&lt;/p&gt;
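&lt;p&gt;A minimal sketch of this setup, assuming &lt;strong&gt;'aws-xray-sdk-core'&lt;/strong&gt; is installed:&lt;/p&gt;

```javascript
// index.js (sketch): instrument the Node.js 'https' module globally so that
// all outgoing HTTPS requests made by this process are captured as
// subsegments, including those issued by libraries built on top of 'https'.
const AWSXRay = require('aws-xray-sdk-core');
AWSXRay.captureHTTPsGlobal(require('https'));

// Requests made from here on with the (now instrumented) module are traced.
const https = require('https');
```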

&lt;h4&gt;
  
  
  Adding Custom Subsegments:
&lt;/h4&gt;

&lt;p&gt;Custom subsegments will allow AWS X-Ray to capture and measure the execution time of the desired part of your code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BgnPATGy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545048941aebf71c9616c40_nodejs-lambda-segmenting-example.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BgnPATGy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545048941aebf71c9616c40_nodejs-lambda-segmenting-example.webp" alt="A code snippet showing an example of custom subsegments" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One may add a custom subsegment by following the provided code snippet, where AWS X-Ray will measure the execution time of the code written between the &lt;strong&gt;'segment.addNewSubsegment(…)'&lt;/strong&gt; and &lt;strong&gt;'subsegment.close()'&lt;/strong&gt; calls. Custom subsegments are displayed in the AWS X-Ray console within the Segment Timelines diagram.&lt;/p&gt;
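&lt;p&gt;A minimal sketch of a custom subsegment inside a Lambda handler, assuming &lt;strong&gt;'aws-xray-sdk-core'&lt;/strong&gt; is installed and tracing is enabled; &lt;strong&gt;'doExpensiveWork'&lt;/strong&gt; is a hypothetical helper standing in for the code you want to measure:&lt;/p&gt;

```javascript
// handler.js (sketch): measure one specific piece of work with a subsegment.
const AWSXRay = require('aws-xray-sdk-core');

exports.handler = async (event) => {
  const segment = AWSXRay.getSegment(); // the current Lambda segment
  const subsegment = segment.addNewSubsegment('time-consuming operation');
  try {
    await doExpensiveWork(event); // hypothetical helper being profiled
  } finally {
    subsegment.close(); // X-Ray records the elapsed time on close
  }
};
```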

&lt;h2&gt;
  
  
  How Can AWS X-Ray Help You Find Performance Bottlenecks?
&lt;/h2&gt;

&lt;p&gt;Now that we know what AWS X-Ray has to offer and how to set it up, most of you will already have guessed how useful a tool it can be. Here are my favorite ways to take advantage of it:&lt;/p&gt;

&lt;h3&gt;
  
  
  Discovering Critical Services Under the Same Domain
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS X-Ray&lt;/strong&gt; builds, based on the selected traces, a visual map of all the services linked to a specified domain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lqhQCk9K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/654501de56a77e22260138b2_aws-x-ray-diagram-trace-map.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lqhQCk9K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/654501de56a77e22260138b2_aws-x-ray-diagram-trace-map.webp" alt="A trace map of a serverless REST API using AWS X-Ray" width="690" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the example above, we can see that this domain is composed of one API Gateway, four AWS Lambda functions, and a single DynamoDB table.&lt;/p&gt;

&lt;p&gt;Apart from showing all the services linked to a domain, AWS X-Ray also visually displays the error rate of each node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MJU3BHlA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545061d5da6f8685fe95a1f_x-ray-trace-map-client-to-appsync.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MJU3BHlA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545061d5da6f8685fe95a1f_x-ray-trace-map-client-to-appsync.webp" alt="A trace map of AWS AppSync" width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, we can easily see that the &lt;strong&gt;AppSync GraphQL API&lt;/strong&gt; has an average error rate of around 10%.&lt;/p&gt;

&lt;h3&gt;
  
  
  Identifying the Performance Bottlenecks
&lt;/h3&gt;

&lt;p&gt;Once the developer has found the nodes to analyze, the next step is to look at the different traces for each node to better understand where it spends most of its execution time.&lt;/p&gt;

&lt;p&gt;For this task, one could take advantage of the Segment Timelines. These timelines will visually display how long each action took to be executed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I7B_w9Oz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545066ca2f331272701826b_x-ray-lambda-segment-timeline-2.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I7B_w9Oz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/6545066ca2f331272701826b_x-ray-lambda-segment-timeline-2.webp" alt="A segment timeline showing AWS Lambda" width="800" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From this example, we can see that the bottlenecks for this Lambda execution were:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The cold start, which added 733ms to the execution time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The API request, which took 5.53s of the actual invocation time of the Lambda.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After seeing these results, a developer would know to focus on avoiding cold starts and on improving (if they own it) the API called during the execution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Custom Subsegments to Profile &amp;amp; Improve Your Code
&lt;/h3&gt;

&lt;p&gt;Finally, one of the best-kept secrets of AWS X-Ray: using custom subsegments to profile your code and gain a better understanding of where time is spent during an execution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mp5tVrip--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/654506b4f09d338503279844_x-ray-lambda-segment-timeline-3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mp5tVrip--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/654506b4f09d338503279844_x-ray-lambda-segment-timeline-3.webp" alt="A segment timeline with a custom subsegment to show a time consuming operation" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Given this segment timeline, we would know that the execution is taking longer than it should, since we expected the DynamoDB request to be the only time-consuming operation. With only these traces, however, we can’t tell what is taking so long.&lt;/p&gt;

&lt;p&gt;At this point, a developer would have two options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Spend hours reviewing the code and blindly updating it, trying to find the time-consuming task.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add custom subsegments to the operations suspected to be the culprits of the high execution time.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After adding a custom subsegment to the suspected function, we would be able to see a trace like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BhS572qI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/65450709bb1876da6c448d24_x-ray-lambda-segment-timeline.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BhS572qI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/5f16ec6886fb3fb049008f9a/65450709bb1876da6c448d24_x-ray-lambda-segment-timeline.webp" alt="A segment timeline showing after adding the custom subsegment" width="800" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we can clearly see that the &lt;strong&gt;'time-consuming operation'&lt;/strong&gt; function is responsible for the extra 2.5 seconds of execution time. Thanks to this insight, the developer would know that they only need to focus on reviewing and improving that specific operation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We highly recommend correctly setting up AWS X-Ray when developing services on AWS, especially if they are serverless, in order to allow for better observability and profiling. As showcased in this article, AWS X-Ray will allow developers to better understand how a service is performing and speed up the process of debugging performance bottlenecks or timeouts once the service is deployed to production.&lt;/p&gt;

&lt;p&gt;Did you find this article interesting or useful? Do you have any questions or would like to chat more about it? I’d love to connect with you on my social media, you can find me on &lt;a href="https://www.linkedin.com/in/lorenzo-hidalgo-gadea/"&gt;LinkedIn&lt;/a&gt; or &lt;a href="https://twitter.com/lhidalgo_dev"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html"&gt;https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-nodejs-subsegments.html"&gt;https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-nodejs-subsegments.html&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>lambda</category>
      <category>aws</category>
      <category>observability</category>
      <category>xray</category>
    </item>
  </channel>
</rss>
