<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matt</title>
    <description>The latest articles on DEV Community by Matt (@mattzcarey).</description>
    <link>https://dev.to/mattzcarey</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1037921%2Fa8141552-a6ac-4f8f-92fb-df3bbed4dedb.jpeg</url>
      <title>DEV Community: Matt</title>
      <link>https://dev.to/mattzcarey</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mattzcarey"/>
    <language>en</language>
    <item>
      <title>Empowering Enterprises with Serverless Generative AI: Amazon Bedrock</title>
      <dc:creator>Matt</dc:creator>
      <pubDate>Fri, 13 Oct 2023 15:38:03 +0000</pubDate>
      <link>https://dev.to/aws-builders/empowering-enterprises-with-serverless-generative-ai-amazon-bedrock-4m7d</link>
      <guid>https://dev.to/aws-builders/empowering-enterprises-with-serverless-generative-ai-amazon-bedrock-4m7d</guid>
      <description>&lt;p&gt;As the number of Large Language Models (LLMs) continues to grow and enterprises seek to leverage their advantages, the practical difficulties of running multiple LLMs in production is becoming evident. Established hyper-scale cloud providers, such as AWS, are in a favourable position to facilitate the adoption of Generative AI, due to their existing computing infrastructure, established security measures, and modern cloud-native patterns like Serverless.&lt;/p&gt;

&lt;p&gt;AWS’s introduction of Bedrock stands out as a pointed reaction to these trends, and Bedrock is well positioned through its platform-centric, model-agnostic and serverless operating model to be the facilitator of this next era of GenAI. Last week, Bedrock became Generally Available (GA), giving AWS customers their first look at the tools which allow them to integrate GenAI into all aspects of their operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless Advantage
&lt;/h2&gt;

&lt;p&gt;Infrastructure management of LLMs at high scale, especially for those not entrenched in the ML/AI domain, can be daunting. Managing compute, load balancers and exposed APIs requires dedicated platform teams, and few businesses are willing to make that up-front investment.&lt;/p&gt;

&lt;p&gt;Amazon Bedrock alleviates these concerns. As a serverless service, businesses only pay for the tokens consumed and generated by the LLM. These scaling challenges become things of the past.&lt;/p&gt;

&lt;p&gt;For instance, suppose you are building a customer support chatbot that sees 100x the users on Black Friday compared to a Monday in March. You do not have to provision extra servers; Bedrock handles scaling out and back in to meet demand.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T9u1Wl5---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s6y820yr3t2qlrld1box.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T9u1Wl5---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s6y820yr3t2qlrld1box.png" alt="Image description" width="800" height="257"&gt;&lt;/a&gt;&lt;br&gt;
Example API Request from the AWS Console&lt;/p&gt;
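&lt;p&gt;A minimal sketch of what such a request looks like in code, assuming the Anthropic Claude v2 prompt format and the AWS SDK v3 client names current at GA (treat both as illustrative):&lt;/p&gt;

```typescript
// Claude models on Bedrock expect a Human/Assistant-wrapped prompt in the
// request body. This helper builds that JSON body; the parameters here
// (max_tokens_to_sample, temperature) are illustrative defaults.
function buildClaudeRequest(userPrompt: string, maxTokens: number = 300): string {
  return JSON.stringify({
    prompt: `\n\nHuman: ${userPrompt}\n\nAssistant:`,
    max_tokens_to_sample: maxTokens,
    temperature: 0.5,
  });
}

// The body would then be sent with InvokeModelCommand from
// @aws-sdk/client-bedrock-runtime, roughly:
//   const client = new BedrockRuntimeClient({ region: "us-east-1" });
//   const res = await client.send(new InvokeModelCommand({
//     modelId: "anthropic.claude-v2",
//     contentType: "application/json",
//     body: buildClaudeRequest("Summarise this support ticket ..."),
//   }));
```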

&lt;h2&gt;
  
  
  Data Security
&lt;/h2&gt;

&lt;p&gt;With an increasing emphasis on ensuring good data governance and audit trails, Bedrock provides peace of mind for enterprises seeking to adopt GenAI. All data provided to the Bedrock LLMs is encrypted both at rest and in transit, and customers are free to use their own keys.&lt;/p&gt;

&lt;p&gt;Amazon Bedrock has achieved HIPAA eligibility and GDPR compliance and provided data is never used to improve the base models or shared with third-party model providers.&lt;/p&gt;

&lt;p&gt;Enterprises can even use AWS PrivateLink with Amazon Bedrock to establish private connectivity between LLMs and their VPCs, avoiding exposing traffic to the public internet. This gives businesses the confidence to create tools using LLMs that can use their own sensitive data archives as context.&lt;/p&gt;

&lt;p&gt;Imagine having a tool which provides enhanced search capabilities for diverse data types, from research papers and medical journals to company meeting notes and tech standards. Like your own personal Google on steroids. We have seen the beginnings of this with Bing Chat for web browsing and tools like Quivr for personal data, but now imagine searching through your company’s internal files with the same ease.&lt;/p&gt;

&lt;p&gt;It would be pretty incredible, right?&lt;/p&gt;

&lt;blockquote&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Find the provisional projections that Jeff and I came up with last week for Q2.
What fonts do we use for external product pitches?
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is made possible by using Retrieval Augmented Generation (RAG) techniques. RAG allows you to provide LLMs with external knowledge that they were not trained on.&lt;/p&gt;

&lt;p&gt;When using Bedrock, whether you are ingesting data directly from S3 or storing it in vector databases like OpenSearch, Aurora, or RDS, the data remains within AWS data centres. This allows for greater security and easier compliance with relevant data governance requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eOvhy9nf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/thuu6cfdavxlwbot4m1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eOvhy9nf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/thuu6cfdavxlwbot4m1d.png" alt="Image description" width="800" height="435"&gt;&lt;/a&gt;&lt;br&gt;
Basic RAG demo using Amazon Bedrock&lt;/p&gt;
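&lt;p&gt;The retrieval step can be sketched in a few lines. This toy example (made-up three-dimensional vectors; a real system would use a Bedrock embedding model such as Titan Embeddings and a vector store like OpenSearch) retrieves the closest document by cosine similarity and prepends it to the prompt:&lt;/p&gt;

```typescript
// Toy RAG retrieval: find the stored document closest to the query
// embedding, then build an augmented prompt around it.
type Doc = { text: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Pick the single best-matching document (top-1 retrieval).
function retrieve(queryEmbedding: number[], docs: Doc[]): Doc {
  return docs.reduce((best, doc) =>
    cosineSimilarity(queryEmbedding, doc.embedding) >
    cosineSimilarity(queryEmbedding, best.embedding)
      ? doc
      : best
  );
}

// Prepend the retrieved text as context for the LLM call.
function buildRagPrompt(question: string, context: string): string {
  return `Use the following context to answer.\nContext: ${context}\nQuestion: ${question}`;
}
```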

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;p&gt;Amazon Bedrock’s offering is designed to be a full suite of tools to empower builders:&lt;/p&gt;

&lt;h4&gt;
  
  
  Easy Model Selection
&lt;/h4&gt;

&lt;p&gt;Bedrock supports a selection of both proprietary and open-source models, including Amazon’s new Titan models. Depending on the task at hand, whether it's long-form text generation, quick summarisation, or back-and-forth conversation, you will be able to find a model which meets your use-case.&lt;/p&gt;

&lt;p&gt;Bedrock also offers a playground, allowing teams to try out various models and work out the best prompts for their chosen model.&lt;/p&gt;

&lt;p&gt;The confirmed addition of Meta’s Llama 2 model through a serverless API is definitely a unique selling point of Bedrock. AWS recently partnered with Hugging Face and made a significant investment in their $235 million Series D funding round, so it’s a safe bet to expect more open-source models to be included with Bedrock in the coming months.&lt;/p&gt;

&lt;p&gt;While Amazon is the first cloud provider to react to the need for model federation, we are also seeing advances in third-party libraries. Libraries like LiteLLM standardise calls to other model providers by exposing a common interface compatible with OpenAI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vwa0y5zc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/badvsn8mi4pd593ie9on.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vwa0y5zc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/badvsn8mi4pd593ie9on.png" alt="Image description" width="800" height="499"&gt;&lt;/a&gt;&lt;br&gt;
Bedrock Chat Playground&lt;/p&gt;

&lt;h4&gt;
  
  
  Agents
&lt;/h4&gt;

&lt;p&gt;Autonomous agents for Bedrock are now in preview. Agents are capable of automating tasks normally done by humans, such as pitch deck creation, replying to emails or coding tasks.&lt;/p&gt;

&lt;p&gt;Companies like Canva and Adobe have already integrated GenAI to resize images and remove backgrounds and objects, and it won't be long before we can also incorporate external style guides as context for these creations. With just a selection of notes, these tools will be able to create slides, fliers and other materials.&lt;/p&gt;

&lt;p&gt;Code generation is also becoming easier, with single-shot accuracy possible for increasingly complex use-cases. Progress in this area has been rocketing recently, with AI code assistants, code generation through test-driven development and autonomous code reviews becoming more common.&lt;/p&gt;

&lt;p&gt;Although the output of agents may not be perfect, even at around 70% accuracy it is a significant time saver for the human operator. The days of paying analysts substantial sums for mundane tasks like colour adjustments on slide decks may soon be a memory.&lt;/p&gt;

&lt;h4&gt;
  
  
  Fine-Tuning
&lt;/h4&gt;

&lt;p&gt;LLMs work well when they have been trained on the general context of the task and are able to rephrase it. If a model has no knowledge of the underlying concept, for instance a particular part or process which only exists in your company, you may get better results from fine-tuning your own custom model. Users can fine-tune any of the Bedrock LLMs directly in the console, using data stored in S3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nvNCO3pz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5wlt8bxziko1w7r7z3gn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nvNCO3pz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5wlt8bxziko1w7r7z3gn.png" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
Fine-tuning Custom Models&lt;/p&gt;
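&lt;p&gt;Custom model training data is supplied as prompt/completion pairs stored in S3. As a rough sketch (the exact JSON Lines schema below is an assumption, and the example content is hypothetical; check the Bedrock fine-tuning docs for your chosen model), each line might look like:&lt;/p&gt;

```
{"prompt": "What torque spec does the ACME X-42 coupling use?", "completion": "The X-42 coupling is tightened to 35 Nm."}
{"prompt": "Summarise our returns process.", "completion": "Customers have 30 days to ..."}
```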

&lt;h2&gt;
  
  
  Bedrock over OpenAI?
&lt;/h2&gt;

&lt;p&gt;Many of the arguments for using Bedrock are also applicable to OpenAI’s platform. Both have a choice of models, a Serverless cost per token pricing model and support fine-tuning in the console.&lt;/p&gt;

&lt;p&gt;However, Bedrock supports models from a large variety of providers and has much clearer data governance policies. Bedrock also benefits from the huge AWS ecosystem, allowing for closer integrations with other services such as Lambda for compute, OpenSearch for vectors and S3 for object storage.&lt;/p&gt;

&lt;p&gt;Pricing also favours Bedrock: 1k input tokens using Claude 2 through Bedrock cost $0.01102 and 1k output tokens cost $0.03268, whereas the closest offering from OpenAI (GPT-4 32k context) costs $0.06 for input and $0.12 for output.&lt;/p&gt;
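&lt;p&gt;A quick back-of-the-envelope comparison using the per-1k-token prices quoted above (illustrative only; prices change and vary by region):&lt;/p&gt;

```typescript
// Cost = (input tokens / 1000) * input price + (output tokens / 1000) * output price.
interface TokenPrice { input: number; output: number }

const CLAUDE2_BEDROCK: TokenPrice = { input: 0.01102, output: 0.03268 };
const GPT4_32K: TokenPrice = { input: 0.06, output: 0.12 };

function monthlyCost(price: TokenPrice, inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1000) * price.input + (outputTokens / 1000) * price.output;
}

// For a workload of 10M input and 2M output tokens per month,
// Claude 2 on Bedrock comes out at roughly $176 vs roughly $840 for GPT-4 32k.
const claudeCost = monthlyCost(CLAUDE2_BEDROCK, 10_000_000, 2_000_000);
const gptCost = monthlyCost(GPT4_32K, 10_000_000, 2_000_000);
```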

&lt;p&gt;There are situations where an individual may opt for OpenAI’s GPT models. If they are particularly invested in their prompts, or are making use of function calling where the LLM returns strictly JSON, then sticking with OpenAI could be a good option. Otherwise, switching to Bedrock is straightforward, especially if your application uses a library like LangChain, which has a drop-in replacement for Bedrock.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Amazon Bedrock is not just another AWS service; it's the platform that gives leaders confidence in how they are leveraging their data, whilst giving developers the tools to make their best applications without managing the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;As we look towards the future, it's becoming increasingly clear: GenAI is not merely an add-on. It's a necessity, an integral component that will underpin the next generation of products and services. With Amazon Bedrock, the future of GenAI integration is not just possible; it's here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.genaidays.org/post/empowering-enterprises-with-serverless-generative-ai-amazon-bedrock-matt-carey"&gt;This post was originally published on the GenAI Days blog.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>generativeai</category>
      <category>ai</category>
    </item>
    <item>
      <title>Building a Robust Serverless Messaging Service with Amazon EventBridge Pipes and CDK</title>
      <dc:creator>Matt</dc:creator>
      <pubDate>Fri, 19 May 2023 15:56:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-a-robust-serverless-messaging-service-with-amazon-eventbridge-pipes-and-cdk-2i9a</link>
      <guid>https://dev.to/aws-builders/building-a-robust-serverless-messaging-service-with-amazon-eventbridge-pipes-and-cdk-2i9a</guid>
      <description>&lt;p&gt;Originally posted at &lt;a href="https://medium.com/serverless-transformation/building-a-robust-serverless-messaging-service-with-amazon-eventbridge-pipes-and-cdk-bf8250d10825"&gt;Medium&lt;/a&gt; as part of the &lt;a href="https://medium.com/serverless-transformation?source=post_page-----bf8250d10825--------------------------------"&gt;Serverless Transformation Publication&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;EventBridge Pipes is a powerful new tool from Amazon Web Services (AWS) that makes it easy to move data between sources and targets while filtering, enriching and even transforming the data en route. EventBridge used to be a singular product, the EventBridge Bus. However, in the last few months, AWS has expanded it into a complete product suite for building event-driven applications, including Event Buses, Pipes and Schedulers.&lt;/p&gt;

&lt;p&gt;High-scale messaging systems are traditionally complex to build. They must be able to handle a high volume of messages concurrently whilst ensuring that messages are not lost due to failures. In addition, they often require complex features such as routing, filtering, and transformation of messages, which can be challenging to implement and maintain. Pipes solves these problems by providing industry-leading horizontal scaling, redundancy with dead letter queues and inbuilt transformations, filtering and enrichment capabilities.&lt;/p&gt;

&lt;p&gt;Using the Cloud Development Kit (CDK), we can build and deploy our messaging service without leaving our chosen development environment (in TypeScript, too! Bye-bye AWS console and bye-bye writing YAML).&lt;/p&gt;

&lt;h2&gt;
  
  
  Example Application
&lt;/h2&gt;

&lt;p&gt;We are building a web app with a fully serverless backend. We need a microservice that sends an email to the user whenever the user changes something security-related on their account, such as an address. This account data is stored in a DynamoDB table. The messaging service should construct and send an email to the user with identifying information about them and the change that has occurred.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ToAddresses: [userEmail]

Subject: `Notice: ${modifiedAttribute} Change Successful`

Body: `Dear ${firstName} ${lastName},

This is confirmation that your ${modifiedAttribute} for your account associated with this email address has been changed. 
If this is a change you made, we're good to go!

If you did not personally request this change, please reset your password or contact support.`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The catch is that the DynamoDB table the user has modified does not store names or email addresses, only a Cognito sub (user id). The Cognito User Pool is used as a single source of truth for security reasons and GDPR compliance. Therefore, we must query the rest of the information, such as name and email address, from Cognito directly.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;AWS Cognito is a user authentication and identity management service. It has its own data storage called User Pools which stores sign-up information. This is separate from the database, in this case, an AWS DynamoDB table.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The DynamoDB table may store a projection of the user data maintained with event-driven updates. However, the DynamoDB table data will only be &lt;a href="https://hackernoon.com/eventual-vs-strong-consistency-in-distributed-databases-282fdad37cf7"&gt;eventually consistent, not strongly consistent&lt;/a&gt;, and so cannot be relied upon for high-priority features.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What are Pipes?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AWS released Pipes on 1st December 2022. It makes it easier for developers to “create point-to-point integrations between event producers and consumers” while reducing the need to write integration code. Essentially, a Pipe automatically routes events from source to target, reducing the amount of extra code to write when building event-driven applications.&lt;/p&gt;

&lt;p&gt;Less code means less maintenance and faster building times, making Pipes one of AWS’s most exciting features this year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How are EventBridge Pipes different from EventBridge Buses?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Event buses are like post offices for AWS services. Different resources can send and receive messages or “events” to each other through this central service. Think of it as a middleman using forwarding rules to direct messages to the target services. For example, a user might change something on their account, which creates an event on the event bus. If the event matches a rule, then the rule target will be invoked and passed the event. This is a modern-day implementation of the &lt;a href="https://www.ibm.com/topics/esb"&gt;Enterprise Service Bus&lt;/a&gt; architectural pattern.&lt;/p&gt;

&lt;p&gt;EventBridge Pipes, on the other hand, passes events from one AWS service directly to another. They allow you to connect an event source to a target service with no extra glue code needed. In addition, you can connect a Lambda function or other AWS service to manipulate or “enrich” the events before they reach the target. Pipes is essentially &lt;a href="https://www.ibm.com/uk-en/cloud/learn/etl"&gt;ETL&lt;/a&gt; as a managed AWS product. Optional filtering and event transformation are available directly in the Pipe, giving almost limitless flexibility to your serverless event-driven architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why use Pipes?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The flexibility of Pipes makes it perfect for our messaging service. We have an event source needing to send an event to another AWS service. The event needs extra information fetching and adding somewhere along the flow. With Pipes, we can plug together each piece of infrastructure with minimal extra code. What’s more, these individual pieces of infrastructure do not have to be Pipe specific. As we will see later, a significant benefit of Pipes is that it allows us to reuse existing resources to create new event-driven patterns.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s0zvpZTl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4628/1%2AAPIKRa0X0W7QAo7ZvGN9yQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s0zvpZTl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4628/1%2AAPIKRa0X0W7QAo7ZvGN9yQ.png" alt="EventBridge Pipes as described in AWS console" width="800" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Building the Architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To build the Pipe, we must first choose a source producing the event. In our case, this is the DynamoDB stream of the table.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;DynamoDB Streams is a feature of Amazon DynamoDB that allows you to capture changes to the data in a DynamoDB table in near real-time.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Next, we can specify a filter so the Pipe only processes events that match the rule. AWS does not charge for events that are filtered out before the enrichment step, so it’s better to do as much of the filtering as possible here rather than later on. That being said, Pipes is super cheap: 5 million requests/month post-filtering will cost you a grand total of $2.00. That is 60% cheaper than EventBridge Bus events, which cost a whopping $5.00 for the same number of events. Prices are correct for us-east-1 at the time of writing; see &lt;a href="https://aws.amazon.com/eventbridge/pricing/"&gt;here&lt;/a&gt; for the current pricing per region.&lt;/p&gt;

&lt;p&gt;To send an email, we need to get the user details, such as first name, last name, and email, which are stored in Cognito. With Pipes, we can enrich the events using AWS Lambda, amongst several other services. In this case, we use a Lambda function to look up the user id in the Cognito User Pool. The beauty of EventBridge Pipes is that, since this is a reasonably common access pattern, we probably already have this exact Lambda function in our architecture. All that is needed is to specify the function ARN to the Pipe.&lt;/p&gt;
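&lt;p&gt;Cognito returns user attributes as a list of Name/Value pairs, so the enrichment Lambda needs a small mapper. A sketch (the attribute names are the standard Cognito ones; the output field names simply match the email template used in this article):&lt;/p&gt;

```typescript
// Shape of entries in the AdminGetUser response's UserAttributes list.
type CognitoAttribute = { Name: string; Value?: string };

interface UserDetails {
  firstName: string;
  lastName: string;
  userEmail: string;
}

// Map Cognito's Name/Value pairs onto the fields the email Lambda expects.
function toUserDetails(attributes: CognitoAttribute[]): UserDetails {
  const get = (name: string): string =>
    attributes.find((attr) => attr.Name === name)?.Value ?? '';
  return {
    firstName: get('given_name'),
    lastName: get('family_name'),
    userEmail: get('email'),
  };
}
```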

&lt;p&gt;Finally, we need to pick a target destination. Pipes supports all the major AWS services, including Amazon Step Functions, Kinesis Data Streams, AWS Lambda, and third-party APIs using EventBridge API destinations as targets. In our case, we will send the event to an SQS queue. Using a queue, we can restrict message throughput and protect the email service from overloading. We can also store messages in a dead letter queue in the event of downtime of the email service.&lt;/p&gt;

&lt;p&gt;An email Lambda, which constructs and sends the email with SES, can be triggered with the queue as its event source.&lt;/p&gt;
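&lt;p&gt;A sketch of how that email Lambda might assemble the SES SendEmail input from the enriched message body, following the template from the start of this article (the MessageBody shape is what the enrichment Lambda returns; the sender address is a hypothetical placeholder):&lt;/p&gt;

```typescript
// The JSON body produced by the enrichment Lambda and delivered via SQS.
interface MessageBody {
  firstName: string;
  lastName: string;
  userEmail: string;
  modifiedAttributes: string[];
}

// Build the input for SES's SendEmailCommand (@aws-sdk/client-ses).
function buildEmail(body: MessageBody) {
  const modified = body.modifiedAttributes.join(', ');
  return {
    Source: 'no-reply@example.com', // hypothetical verified sender address
    Destination: { ToAddresses: [body.userEmail] },
    Message: {
      Subject: { Data: `Notice: ${modified} Change Successful` },
      Body: {
        Text: {
          Data:
            `Dear ${body.firstName} ${body.lastName},\n\n` +
            `This is confirmation that your ${modified} for your account ` +
            `associated with this email address has been changed.\n` +
            `If this is a change you made, we're good to go!\n\n` +
            `If you did not personally request this change, please reset ` +
            `your password or contact support.`,
        },
      },
    },
  };
}
```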

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3CRc5U94--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AIQBjEbnLjUyAdVhKWsomag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3CRc5U94--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AIQBjEbnLjUyAdVhKWsomag.png" alt="Architecture diagram" width="691" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;For this project, we are using CDK for IaC. CDK allows us to write IaC in imperative programming languages such as TypeScript or Python rather than declarative ones such as YAML or JSON. This means developers can take advantage of features such as code editor support for syntax highlighting, formatters and linters. Type safety provides added security and helps catch errors easily. IaC written in these familiar languages will be more comprehensible and readable to most developers than in YAML. CDK also has the added benefit of being open source, so bugs, issues and new features are all in the public domain.&lt;/p&gt;

&lt;p&gt;CDK released the L1 construct for Pipes in v2.55 back in December 2022. Unfortunately, the L2 construct is still in progress, so we will have to manually specify most of the CloudFormation template, but bear with me, it won’t be too painful. The community is working on an L2 construct; you can find the issue on GitHub &lt;a href="https://github.com/aws/aws-cdk-rfcs/issues/473"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Please check out the example repo linked at the end of this article for the project structure. The aim of the code snippets below is to give you the information to implement Pipes in your own project, not a quick-start guide to CDK. I can recommend &lt;a href="https://medium.com/serverless-transformation/easily-provision-your-cloud-infrastructure-as-code-with-aws-cdk-6e75213550e3"&gt;this article&lt;/a&gt; for an intro to CDK.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DynamoDB&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The table is created as shown below. Note the use of generic Partition and Sort keys “PK” and “SK” as recommended by &lt;a href="https://www.alexdebrie.com/posts/dynamodb-one-to-many/"&gt;Alex DeBrie&lt;/a&gt; and the inclusion of the “stream” attribute to create the DynamoDB stream. Here I have used “RemovalPolicy.DESTROY” as it is a development table. In production you would want to “RETAIN” the table.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const sourceTable = new Table(this, 'example-table-id', {
  tableName: 'example-table',
  partitionKey: { name: 'PK', type: AttributeType.STRING },
  sortKey: { name: 'SK', type: AttributeType.STRING },
  stream: StreamViewType.NEW_AND_OLD_IMAGES,
  billingMode: BillingMode.PAY_PER_REQUEST,
  removalPolicy: RemovalPolicy.DESTROY,
  pointInTimeRecovery: true,
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Lambdas&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The infrastructure for the Lambdas is generic, so we won’t include it here, but please see the included repo if you are interested. The code for the enrichment Lambda handler is shown below. Here you can see the DynamoDB “NewImage” being compared to the “OldImage” to determine the modified table attributes. The “getUser” function queries the “userId” in Cognito. A string is returned by the enrichment Lambda and is piped straight into the target SQS message body.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const handler = async (event: DynamoDBStreamEvent): Promise&amp;lt;string&amp;gt; =&amp;gt; {
  // The Pipe's stream batch size is 1, so there is only one record to process.
  const record: DynamoDBRecord = event.Records[0];
  const { NewImage, OldImage } = record.dynamodb ?? {};

  if (NewImage == null &amp;amp;&amp;amp; OldImage == null) {
    throw new Error('No NewImage or OldImage found');
  }

  // Collect the attributes whose (string) values changed in this modification.
  const modifiedAttributes: string[] = [];
  for (const key in NewImage) {
    if (NewImage[key].S !== OldImage?.[key]?.S) {
      modifiedAttributes.push(key);
    }
  }

  if (modifiedAttributes.length === 0) {
    throw new Error('No changed parameters found');
  }

  const userId = NewImage?.userId?.S;

  if (userId == null) {
    throw new Error('No userId found');
  }

  // Fetch name and email from the Cognito User Pool.
  const user = await getUser(userId);

  return JSON.stringify({
    ...user,
    modifiedAttributes,
  } as MessageBody);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;SQS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The IaC for creating the queue is trivial.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const queue = new Queue(this, 'ExampleQueue', {
  queueName: buildResourceName('example-queue'),
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Remember to create an event source and add the email Lambda function to it. The “batchSize” is the number of queue messages consumed by the target per invocation. Setting it to 1 means we only have to deal with one message per invocation of the email Lambda. If you are concerned about the throughput of the queue, you could increase “batchSize”, but you would then have to loop over the records in the email Lambda to process each message individually.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const eventSource = new SqsEventSource(props.targetQueue, {
  batchSize: 1,
});

emailLambda.addEventSource(eventSource);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;EventBridge Pipes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s create a role for the Pipe to assume. We need to give it permissions to read the DynamoDB stream, invoke a Lambda function and send a message to SQS. We will pass the sourceTable, enrichmentLambda, and targetQueue as props from other stacks.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const pipesRole = new Role(this, 'PipesRole', {
  roleName: 'PipesRole',
  assumedBy: new ServicePrincipal('pipes.amazonaws.com'),
  inlinePolicies: {
    PipesDynamoDBStream: new PolicyDocument({
      statements: [
        new PolicyStatement({
          actions: [
            'dynamodb:DescribeStream',
            'dynamodb:GetRecords',
            'dynamodb:GetShardIterator',
            'dynamodb:ListStreams',
          ],
          resources: [props.sourceTable.tableStreamArn],
          effect: Effect.ALLOW,
        }),
      ],
    }),
    PipesLambdaExecution: new PolicyDocument({
      statements: [
        new PolicyStatement({
          actions: ['lambda:InvokeFunction'],
          resources: [props.enrichmentLambda.functionArn],
          effect: Effect.ALLOW,
        }),
      ],
    }),
    PipesSQSSendMessage: new PolicyDocument({
      statements: [
        new PolicyStatement({
          actions: ['sqs:SendMessage', 'sqs:GetQueueAttributes'],
          resources: [props.targetQueue.queueArn],
          effect: Effect.ALLOW,
        }),
      ],
    }),
  },
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, in the same stack, let us specify the Pipe itself. Note the “sourceParameters” attribute. Some targets also require extra “targetParameters”. Here we set the batch size of the stream to 1 for convenience in testing; as long as it does not exceed the batch size of the target queue consumer, you’ll never have an issue.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new CfnPipe(this, 'MessagingPipe', {
  name: 'MessagingPipe',
  roleArn: pipesRole.roleArn,
  source: props.sourceTable.tableStreamArn,
  sourceParameters: {
    dynamoDbStreamParameters: {
      startingPosition: 'LATEST',
      batchSize: 1,
    },
  },
  enrichment: props.enrichmentLambda.functionArn,
  target: props.targetQueue.queueArn,
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;How could the Architecture be Improved?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Filtering Options&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Filtering of the source event in CDK is available under the “sourceParameters” props for CfnPipe. Below is an example pattern that keeps only events where one or more attributes were modified and “someAttribute” is present in the “NewImage” of the table.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "eventName": [{ "prefix": "MODIFY" }],
  "dynamodb": {
    "NewImage": {
      "someAttribute": {
        "S": [{ "exists": true }]
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will remove the vast majority of events but will still leave some filtering to be done in the enrichment Lambda function. AWS actually open-sourced event-ruler, the library which powers these filters. You can find the repo &lt;a href="https://github.com/aws/event-ruler"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the future, I hope the EventBridge team and the open-source community will improve the filtering capabilities to allow you to compare exact table values. Being able to do something like this below would be really handy to avoid overpaying for events you don’t want.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "eventName": [{ "prefix": "MODIFY" }],
  "dynamodb": {
    "NewImage": {
      "someAttribute": {
        "S": [{ "not-equals": &amp;lt;$.OldImage.someAttribute.S&amp;gt; }]
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Best Practice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Speaking with the AWS Pipes team, they told me they would prefer users to return JSON instead of strings from the enrichment Lambda and use the transformations available in the console to parse the event. However, I’m not a fan of the console, especially for large applications with multiple environments, so I’m waiting for the transformation attribute to be available in CloudFormation and CDK before making the swap.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Future Improvements to Pipes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The next feature we are waiting for is CloudWatch Logs support for Pipes. We’ve been informed it’s a priority and is actively being worked on; it should dramatically improve the developer experience of working with EventBridge Pipes. The same goes for cross-account capabilities. We build a lot of multi-account architectures, so using a Pipe instead of a direct Lambda invocation would simplify them in many cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In conclusion, Amazon EventBridge Pipes offers a practical solution for building a robust serverless messaging service. Pipes allow developers to easily connect event sources and targets while filtering and enriching events, using existing infrastructure and reducing the need for additional code and maintenance. For example, it is possible to connect a DynamoDB stream as the event source, filter and enrich the events with information from Cognito using a Lambda function, and then send the event to an SQS queue for email delivery. &lt;strong&gt;This flexible and powerful feature makes EventBridge Pipes ideal for building serverless, event-driven architectures.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources and Code
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/mattzcarey/pipes-example-cdk"&gt;The example repo&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://serverlessland.com/patterns?services=eventbridge-pipes"&gt;Open-source patterns with EventBridge Pipes&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/eventbridge/pipes/"&gt;Amazon EventBridge Pipes Documentation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>javascript</category>
      <category>aws</category>
      <category>awscommunity</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
