<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matheus das Mercês</title>
    <description>The latest articles on DEV Community by Matheus das Mercês (@matheusdasmerces).</description>
    <link>https://dev.to/matheusdasmerces</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2557870%2F098f7d7f-40e9-4d2b-9ff6-7e9685c63d9b.png</url>
      <title>DEV Community: Matheus das Mercês</title>
      <link>https://dev.to/matheusdasmerces</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/matheusdasmerces"/>
    <language>en</language>
    <item>
      <title>AWS Lambda Durable Functions on Hexagonal Architecture: The Pattern You’ve Been Looking For</title>
      <dc:creator>Matheus das Mercês</dc:creator>
      <pubDate>Wed, 25 Feb 2026 06:45:45 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-lambda-durable-functions-on-hexagonal-architecture-the-pattern-youve-been-looking-for-5hne</link>
      <guid>https://dev.to/aws-builders/aws-lambda-durable-functions-on-hexagonal-architecture-the-pattern-youve-been-looking-for-5hne</guid>
      <description>&lt;p&gt;Yes, you read it right. When building serverless applications on AWS, one little thing seems to be forgotten in &lt;em&gt;2026&lt;/em&gt;: &lt;strong&gt;design patterns&lt;/strong&gt;. And that's especially true when using &lt;strong&gt;Lambda Durable Functions&lt;/strong&gt; and its new open-source Durable execution SDK.&lt;/p&gt;

&lt;p&gt;And no, this is not another "Step Functions vs Lambda Durable Functions" comparison. In this article, &lt;strong&gt;we will not look back&lt;/strong&gt;. We will explore how you can build a strong foundation for Durable Functions with &lt;strong&gt;Hexagonal Architecture&lt;/strong&gt;, from a developer's perspective, and why this pattern might be the &lt;strong&gt;missing piece&lt;/strong&gt; for building durable applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;At &lt;em&gt;AWS re:Invent 2025&lt;/em&gt;, &lt;a href="https://aws.amazon.com/about-aws/whats-new/2025/12/lambda-durable-multi-step-applications-ai-workflows/" rel="noopener noreferrer"&gt;AWS introduced Lambda Durable Functions&lt;/a&gt; with an interesting premise: build like a &lt;strong&gt;monolith&lt;/strong&gt;, deploy to &lt;strong&gt;microservices&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;As a long-time fan of the microservices approach, I have to admit: I got super excited that we can now build a Lambdalith without a guilty conscience.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;A little over a year ago, I wrote an &lt;a href="https://dev.to/aws-builders/refactoring-a-lambda-monolith-to-microservices-using-hexagonal-architecture-1em0"&gt;article&lt;/a&gt; explaining how to refactor a Lambdalith to microservices using Hexagonal. What was once considered an anti-pattern can now, fascinatingly, be reversed: Durable Functions lets us go in the opposite direction, this time with the benefits of microservices.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Plus, at first glance, it looked like an immediate replacement for AWS Step Functions, &lt;em&gt;meant for developers&lt;/em&gt;. And that's what we've been seeing the community doing so far: comparing both services and exploring ways of migrating existing state machines to Durable Functions.&lt;/p&gt;

&lt;p&gt;The new Durable execution SDK is powerful, and it can do pretty much everything you already have available in Step Functions. But when building a Lambda function that handles orchestration, it is also easy to fall into the trap of the well-known Lambda Bogeyman: &lt;strong&gt;spaghetti code&lt;/strong&gt;, which makes the application hard to explain and evolve.&lt;/p&gt;

&lt;p&gt;The problem isn't Durable Functions.&lt;br&gt;
The problem isn't its SDK.&lt;br&gt;
The problem is the lack of boundaries.&lt;/p&gt;

&lt;p&gt;And if there is one thing I learned from my Durable endeavors, it is this: now, more than ever, we need better ways of organizing application code.&lt;/p&gt;
&lt;h2&gt;
  
  
  The old-fashioned way of building software
&lt;/h2&gt;

&lt;p&gt;Lately, one thought keeps hammering at me: more than ever, we need principles.&lt;/p&gt;

&lt;p&gt;Back when coding tools were nothing but a daydream, we used to think differently. Coding was a developer's most important skill, along with the ability to structure code so that it is, among other things, readable and testable. In object-oriented programming, the &lt;strong&gt;SOLID principles&lt;/strong&gt;, for instance, remain a great starting point for designing clean software.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ag0r1ppocrvxkvwsohx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ag0r1ppocrvxkvwsohx.png" alt="SOLID" width="617" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But SOLID alone was never the end goal.&lt;/p&gt;

&lt;p&gt;You can absolutely apply SOLID principles when building with Lambda, but when orchestration becomes central, &lt;strong&gt;clear separation of concerns&lt;/strong&gt; matters even more. That’s where Hexagonal Architecture comes in.&lt;/p&gt;
&lt;h2&gt;
  
  
  Hexagonal Architecture
&lt;/h2&gt;

&lt;p&gt;Hexagonal Architecture, also known as "Ports and Adapters," offers a way to modularize your application so it can be more flexible and maintainable. By isolating the core business logic from external systems, this architecture promotes separation of concerns, where the application's core logic isn't tightly coupled to any specific technology or service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckl0873l488ha66u9yr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckl0873l488ha66u9yr5.png" alt="Hexagonal Architecture" width="480" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Core Logic (Domain)&lt;/strong&gt;: Contains the application's business rules, completely isolated from the outer layers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ports&lt;/strong&gt;: Interfaces that define the actions available to the core.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adapters&lt;/strong&gt;: Connect external systems to the application's core through ports, making it easy to switch out databases, API integrations, or other dependencies without impacting the core logic.&lt;/li&gt;
&lt;/ol&gt;
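&lt;p&gt;&lt;em&gt;To make those three pieces concrete, here is a minimal, hypothetical sketch in TypeScript. None of these names (&lt;code&gt;OrderRepository&lt;/code&gt;, &lt;code&gt;InMemoryOrderRepository&lt;/code&gt;, &lt;code&gt;PlaceOrder&lt;/code&gt;) come from the example we will build later; they only illustrate how a port, an adapter, and the core relate.&lt;/em&gt;&lt;/p&gt;

```typescript
// Hypothetical names throughout -- this is a sketch, not the article's example.

// Port: an interface owned by the core
interface OrderRepository {
  save(orderId: string): string;
}

// Adapter: a concrete implementation living at the edge
class InMemoryOrderRepository implements OrderRepository {
  private readonly orders: string[] = [];
  save(orderId: string): string {
    this.orders.push(orderId);
    return `saved:${orderId}`;
  }
}

// Core logic: depends only on the port, never on a concrete adapter
class PlaceOrder {
  constructor(private readonly repository: OrderRepository) {}
  execute(orderId: string): string {
    return this.repository.save(orderId);
  }
}

// Wiring happens at the edge, outside the core
const placeOrder = new PlaceOrder(new InMemoryOrderRepository());
const receipt = placeOrder.execute("order-42");
```

&lt;p&gt;&lt;em&gt;Note that &lt;code&gt;PlaceOrder&lt;/code&gt; never imports the adapter: swapping the in-memory repository for, say, a DynamoDB-backed one would not touch the core class at all.&lt;/em&gt;&lt;/p&gt;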

&lt;p&gt;The real strength of Hexagonal Architecture is the boundaries it creates. It keeps business logic isolated, dependencies replaceable, and &lt;strong&gt;infrastructure concerns at the edges&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That becomes especially important when we introduce the Durable execution SDK. It brings powerful workflow capabilities, but it also introduces execution-specific mechanics that should be kept separate from the rest of your code.&lt;/p&gt;

&lt;p&gt;Hexagonal Architecture &lt;strong&gt;doesn’t remove that complexity&lt;/strong&gt;. It gives it a &lt;strong&gt;place to live&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Durable 🤝 Hexagonal
&lt;/h2&gt;

&lt;p&gt;Durable Functions changed something important: the &lt;em&gt;what&lt;/em&gt; and the &lt;em&gt;how&lt;/em&gt; now live in the same place. &lt;/p&gt;

&lt;p&gt;With regular Lambda functions, we mostly wrote the &lt;em&gt;what&lt;/em&gt;: validate an order, process a payment, update a record. How it was executed wasn't something we had to think much about, as that used to live in our infrastructure code (a.k.a. Step Functions). Plus, what was previously split into microservices can now be part of a single monolith, since Durable Functions gives us "distributed system reliability".&lt;/p&gt;

&lt;p&gt;More importantly, with the Durable execution SDK, the &lt;em&gt;how&lt;/em&gt; is now part of the code. Parallel steps, maps, and child contexts all sit next to the business logic. That’s where it can get confusing.&lt;/p&gt;

&lt;p&gt;Hexagonal Architecture is not a silver bullet, but it allows us to separate those concerns a bit. We can make the domain stay focused on &lt;em&gt;what&lt;/em&gt; the system does (with a little bit of &lt;em&gt;how&lt;/em&gt;). The workflow base layer handles &lt;em&gt;how&lt;/em&gt; it runs. The adapters handle external calls.&lt;/p&gt;

&lt;p&gt;Durable Functions gives us reliability in a monolith. Hexagonal keeps the structure clean. And when that happens, the SOLID principles can make sense again.&lt;/p&gt;
&lt;h2&gt;
  
  
  Inversion of control (IoC)
&lt;/h2&gt;

&lt;p&gt;This is where things start to get really interesting.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"&lt;em&gt;In software design, inversion of control (IoC) is a design principle in which custom-written portions of a computer program receive the flow of control from an external source (e.g., a framework). In procedural programming, a program's custom code calls reusable libraries to take care of generic tasks, but with inversion of control, it is the external code or framework that is in control and calls the custom code.&lt;/em&gt;"&lt;br&gt;
Source: &lt;a href="https://en.wikipedia.org/wiki/Inversion_of_control" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Inversion_of_control&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In short, &lt;strong&gt;IoC&lt;/strong&gt; is the practical tool that makes Hexagonal Architecture work. It's how you implement the &lt;strong&gt;D&lt;/strong&gt; in SOLID, where abstractions should not depend on details. Details should depend on abstractions.&lt;/p&gt;
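&lt;p&gt;&lt;em&gt;As a tiny, hypothetical illustration of that inversion (the &lt;code&gt;Notifier&lt;/code&gt; and &lt;code&gt;AlertService&lt;/code&gt; names are mine, not part of the upcoming example): the high-level policy depends only on an abstraction, and the concrete detail is chosen at wiring time.&lt;/em&gt;&lt;/p&gt;

```typescript
// Hypothetical sketch of the Dependency Inversion Principle.

// Abstraction: the high-level policy knows only this interface
interface Notifier {
  notify(message: string): string;
}

// Details: interchangeable low-level implementations
class EmailNotifier implements Notifier {
  notify(message: string): string {
    return `email:${message}`;
  }
}

class SmsNotifier implements Notifier {
  notify(message: string): string {
    return `sms:${message}`;
  }
}

// High-level policy: never references a concrete notifier;
// the concrete class is picked by whoever wires the application.
class AlertService {
  constructor(private readonly notifier: Notifier) {}
  raise(message: string): string {
    return this.notifier.notify(message);
  }
}

const byEmail = new AlertService(new EmailNotifier()).raise("disk full");
const bySms = new AlertService(new SmsNotifier()).raise("disk full");
```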

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fayn1bkw0s45772xxsr86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fayn1bkw0s45772xxsr86.png" alt="D in Solid" width="510" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And how to leverage &lt;strong&gt;IoC&lt;/strong&gt; when building with Lambda Durable Functions?&lt;/p&gt;
&lt;h2&gt;
  
  
  A concrete example
&lt;/h2&gt;

&lt;p&gt;You are a developer. You need to build a data pipeline with the following requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Ingest&lt;/strong&gt; some data, &lt;strong&gt;transform&lt;/strong&gt; it, and &lt;strong&gt;store&lt;/strong&gt; it into a database. &lt;/li&gt;
&lt;li&gt;Support more than one data source, so the ingestion and transformation &lt;strong&gt;code will be different&lt;/strong&gt; depending on the data type.&lt;/li&gt;
&lt;li&gt;Build the application in a &lt;strong&gt;modular&lt;/strong&gt; way so that it's easier to evolve in the future with potential new data types.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After some conversations with your team, you decided to build it like a &lt;strong&gt;monolith&lt;/strong&gt;, so that you don't have the cognitive overhead of splitting the application into microservices. Single code base, single deployment, super &lt;strong&gt;straightforward&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Of course, when someone asks how exactly you are going to design it, the universal engineering answer applies:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"&lt;em&gt;It depends.&lt;/em&gt;"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Although it does, in fact, depend, &lt;strong&gt;you are a great developer&lt;/strong&gt;. You want to build a future-proof application, and out of all the possibilities, you decided to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;strong&gt;Lambda Durable Functions&lt;/strong&gt;. Build a monolith and use &lt;strong&gt;Parallel execution&lt;/strong&gt; with automatic retries for each data type for reliability.&lt;/li&gt;
&lt;li&gt;Leverage &lt;strong&gt;Hexagonal Architecture&lt;/strong&gt; to keep the code structure clean.&lt;/li&gt;
&lt;li&gt;Apply &lt;strong&gt;IoC (Inversion of Control)&lt;/strong&gt; so that different ingestion and transformation code can be injected without modifying the orchestration logic, avoiding condition-heavy, tightly coupled code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let’s look at what that means in practice.&lt;/p&gt;
&lt;h2&gt;
  
  
  It's all about code
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;To demonstrate, I have used &lt;a href="https://inversify.io/" rel="noopener noreferrer"&gt;InversifyJS&lt;/a&gt;, a library for creating an inversion of control (IoC) container in TypeScript. An IoC container uses a class constructor to identify and inject its dependencies. While Hexagonal Architecture is particularly well-suited for typed languages, it is language-agnostic and can be implemented in any language or framework of your choice.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  1️⃣ The IoC container: wiring behavior, not hard-coding it
&lt;/h3&gt;

&lt;p&gt;The container defines multiple implementations for the same abstractions.&lt;br&gt;
Each data source and mapper is bound by name, allowing us to switch behavior based on context instead of conditionals.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Container&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Container&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// Bind multiple data source implementations with names&lt;/span&gt;
&lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;IDataSource&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DataSource&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;CustomerDataSource&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;whenNamed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;customers&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;IDataSource&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DataSource&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ProductDataSource&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;whenNamed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;products&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;IDataSource&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DataSource&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;OrderDataSource&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;whenNamed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;orders&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Bind multiple data mappers implementations with names&lt;/span&gt;
&lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;IDataMapper&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DataMapper&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;CustomerDataMapper&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;whenNamed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;customers&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;IDataMapper&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DataMapper&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ProductDataMapper&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;whenNamed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;products&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;IDataMapper&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DataMapper&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;OrderDataMapper&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;whenNamed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;orders&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;container&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Factory&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;dataSource&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IDataSource&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;dataMapper&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IDataMapper&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DataSourceFactory&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toFactory&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ResolutionContext&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;named&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="na"&gt;dataSource&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IDataSource&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;IDataSource&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DataSource&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;named&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;});&lt;/span&gt;

            &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dataMapper&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;IDataMapper&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DataMapper&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;named&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;});&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nx"&gt;dataSource&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="nx"&gt;dataMapper&lt;/span&gt;
            &lt;span class="p"&gt;};&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, IoC allows us to inject different ingestion and transformation strategies &lt;strong&gt;without changing the workflow structure&lt;/strong&gt;. No &lt;code&gt;if (type === 'customers')&lt;/code&gt; spread all over the codebase.&lt;/p&gt;
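&lt;p&gt;&lt;em&gt;If you strip away InversifyJS, the idea behind the named factory fits in a few lines. This dependency-free sketch (all names here are illustrative, not the real implementations) shows why the orchestration code stays identical no matter the data type:&lt;/em&gt;&lt;/p&gt;

```typescript
// Dependency-free sketch of a named factory; all names are illustrative.

interface IDataSource {
  ingest(): string[];
}
interface IDataMapper {
  transform(rows: string[]): string[];
}

class CustomerDataSource implements IDataSource {
  ingest(): string[] {
    return ["alice", "bob"];
  }
}
class CustomerDataMapper implements IDataMapper {
  transform(rows: string[]): string[] {
    return rows.map((row) => row.toUpperCase());
  }
}

// The registry plays the role of the container's named bindings
const registry: Record<string, { dataSource: IDataSource; dataMapper: IDataMapper }> = {
  customers: { dataSource: new CustomerDataSource(), dataMapper: new CustomerDataMapper() },
  // 'products' and 'orders' would register their own pairs here
};

const pipelineFactory = (named: string) => registry[named];

// The orchestration code is the same for every data type:
// resolve the pair by name, ingest, transform. No conditionals.
const { dataSource, dataMapper } = pipelineFactory("customers");
const stored = dataMapper.transform(dataSource.ingest());
```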

&lt;h3&gt;
  
  
  2️⃣ The Lambda entrypoint: keeping Durable at the edge
&lt;/h3&gt;

&lt;p&gt;This file acts as the Lambda entrypoint. It wraps the handler with Durable execution and delegates the actual logic to a resolved use case. With that, we have access to the &lt;code&gt;DurableContext&lt;/code&gt;, and we use it later down the line.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;withDurableExecution&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;DurableContext&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@aws/durable-execution-sdk-js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;TYPES&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../container/inversify.config&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;durableFunction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;withDurableExecution&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DurableContext&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; 
&lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;useCase&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DurableFunction&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;useCase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3️⃣ The workflow base layer: centralizing the initial orchestration mechanic
&lt;/h3&gt;

&lt;p&gt;This abstract class centralizes the Durable parallel execution pattern. It takes an &lt;code&gt;eventContexts&lt;/code&gt; array and runs the same workflow in parallel for each data type, with a parent context called &lt;code&gt;execute-contexts-in-parallel&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"eventContexts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"customers"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"products"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"orders"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;DurableContext&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@aws/durable-execution-sdk-js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;abstract&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DurableParallelAbstractHandler&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DurableFunctionEvent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DurableContext&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;DurableFunctionResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;contextsToBeExecuted&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eventContexts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;eventContext&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;eventContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;func&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DurableContext&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;eventContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}));&lt;/span&gt;

      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parallel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;execute-contexts-in-parallel&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;contextsToBeExecuted&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Durable Function completed successfully&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;response&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toISOString&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Durable Function failed:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toISOString&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="kd"&gt;abstract&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;childContext&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DurableContext&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;DurableParallelAbstractHandler&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each value in &lt;code&gt;eventContexts&lt;/code&gt; corresponds to a named binding in the IoC container (for example, &lt;code&gt;customers&lt;/code&gt; resolves to &lt;code&gt;CustomerDataSource&lt;/code&gt; + &lt;code&gt;CustomerDataMapper&lt;/code&gt;). The workflow structure remains the same; only the injected behavior changes.&lt;/p&gt;
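
&lt;p&gt;The factory behind &lt;code&gt;TYPES.DataSourceFactory&lt;/code&gt; boils down to a lookup from context name to an adapter pair. Here is a minimal sketch of that resolution logic in isolation; the &lt;code&gt;CustomerDataSource&lt;/code&gt; and &lt;code&gt;CustomerDataMapper&lt;/code&gt; shapes are assumptions for illustration, not the actual adapters:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Ports, mirroring the IDataSource / IDataMapper interfaces (assumed shapes)
interface IDataSource { fetch(): Promise&amp;lt;any[]&amp;gt;; }
interface IDataMapper { mapToDomain(item: any): { entity: { type: string; id: string } }; }

// Hypothetical adapter pair for the 'customers' context
class CustomerDataSource implements IDataSource {
  async fetch() { return [{ id: 'c-1' }, { id: 'c-2' }]; }
}
class CustomerDataMapper implements IDataMapper {
  mapToDomain(item: any) { return { entity: { type: 'customer', id: item.id } }; }
}

// What the container hands to the handler: context name in, adapter pair out
const dataSourceFactory = (named: string): { dataSource: IDataSource; dataMapper: IDataMapper } =&amp;gt; {
  switch (named) {
    case 'customers':
      return { dataSource: new CustomerDataSource(), dataMapper: new CustomerDataMapper() };
    default:
      throw new Error(`No binding registered for context "${named}"`);
  }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;An InversifyJS container can expose exactly this function under a named binding, so the handler never imports a concrete adapter.&lt;/p&gt;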

&lt;p&gt;Because this class is abstract, it will be extended by other Lambdas that require the same &lt;strong&gt;parallel orchestration pattern&lt;/strong&gt;. The parallel mechanics &lt;strong&gt;live in one place&lt;/strong&gt;. Concrete implementations only need to provide the &lt;code&gt;execute&lt;/code&gt; method.&lt;/p&gt;

&lt;h3&gt;
  
  
  4️⃣ The concrete Durable use case
&lt;/h3&gt;

&lt;p&gt;This class implements the actual use case while inheriting the orchestration mechanics from the base class.&lt;/p&gt;

&lt;p&gt;Note that this is not pure domain logic; it is the workflow use-case layer. Its responsibility is to coordinate execution, not to define how ingestion or transformation works.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;injectable&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DurableFunction&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;DurableParallelAbstractHandler&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;dataFactoryInstance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;dataSource&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IDataSource&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;dataMapper&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IDataMapper&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;inject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DataSourceFactory&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;dataSourceFactory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;named&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;dataSource&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IDataSource&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;dataMapper&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IDataMapper&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;inject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Storage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IStorage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;dataSource&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DurableContext&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dataSourceType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;dataSource&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;customers&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;step&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;create-data-source-factory&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dataFactoryInstance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dataSourceFactory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;dataSourceType&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rawResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;step&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`fetch-data`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dataFactoryInstance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dataSource&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rawResponse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;_index&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;runInChildContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`process-item-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;childCtx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="na"&gt;transformedData&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DomainResponse&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

                &lt;span class="nx"&gt;childCtx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;step&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;transform-data&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="nx"&gt;transformedData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dataFactoryInstance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dataMapper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapToDomain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;transformedData&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                &lt;span class="p"&gt;});&lt;/span&gt;

                &lt;span class="nx"&gt;childCtx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;step&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;store-transformed-item&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;storageKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`processed-data/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;transformedData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;entity&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;transformedData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;entity&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;.json`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;storageKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;transformedData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Data processed and stored successfully&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;DurableFunction&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What matters here is &lt;strong&gt;isolation&lt;/strong&gt;. If tomorrow the ingestion logic changes for customers, or a new data type is introduced, the Durable workflow does not need to change; only the injected implementation does. The orchestration remains &lt;strong&gt;untouched&lt;/strong&gt;.&lt;/p&gt;
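
&lt;p&gt;That isolation claim is easy to see in miniature: the orchestration side depends only on a registry lookup, so adding a context is a registration, not an edit. A simplified sketch (the &lt;code&gt;invoices&lt;/code&gt; adapter and the &lt;code&gt;runContexts&lt;/code&gt; helper are hypothetical stand-ins, not code from this project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Minimal port: each context registers a way to build its adapter
type Adapter = { fetch(): Promise&amp;lt;string[]&amp;gt; };
const registry = new Map&amp;lt;string, () =&amp;gt; Adapter&amp;gt;();

// Existing binding
registry.set('customers', () =&amp;gt; ({ fetch: async () =&amp;gt; ['c-1', 'c-2'] }));

// Tomorrow's new data type: one registration, zero workflow edits
registry.set('invoices', () =&amp;gt; ({ fetch: async () =&amp;gt; ['inv-9'] }));

// Orchestration-side code, written once and never edited per context
async function runContexts(names: string[]): Promise&amp;lt;Record&amp;lt;string, string[]&amp;gt;&amp;gt; {
  const out: Record&amp;lt;string, string[]&amp;gt; = {};
  for (const name of names) {
    const make = registry.get(name);
    if (!make) throw new Error(`No adapter registered for "${name}"`);
    out[name] = await make().fetch();
  }
  return out;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Durable handler plays the role of &lt;code&gt;runContexts&lt;/code&gt; here; its body never mentions a concrete adapter.&lt;/p&gt;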

&lt;h2&gt;
  
  
  For those who come after
&lt;/h2&gt;

&lt;p&gt;What we built here is a monolith with distributed-system reliability: a single codebase, with Durable Functions handling the hard parts such as parallel execution, retries, and recovery.&lt;/p&gt;

&lt;p&gt;And the payoff shows up immediately in the Durable operations graph: you can see the top-level &lt;code&gt;execute-contexts-in-parallel&lt;/code&gt; from the abstract class, each branch running its own context (customers/products/…), and the steps and map iterations underneath.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4y1f4pg1g6krdi7aqph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4y1f4pg1g6krdi7aqph.png" alt="Durable operations" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A nice side effect of this pattern is testability. You can test the container (IoC bindings) independently from workflow execution, and you don’t need to touch the orchestration layer when you change a mapper or data source.&lt;/p&gt;

&lt;p&gt;IoC container test (short version):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;reflect-metadata&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../../src/container/inversify.config&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;TYPES&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../../src/container/types&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;IStorage&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../../src/interfaces/storageIF&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;IoC container wiring&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;resolves the data factory for a named context&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;factory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;dataSource&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;dataMapper&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DataSourceFactory&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;dataSource&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;dataMapper&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;factory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;products&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;dataSource&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBeDefined&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;dataSource&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;function&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;dataMapper&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBeDefined&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;dataMapper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mapToDomain&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;function&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;resolves infrastructure providers&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;storage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;IStorage&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Storage&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBeDefined&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And your Durable function test file, also in a short version, using the &lt;a href="https://github.com/aws/aws-durable-execution-sdk-js/blob/main/packages/aws-durable-execution-sdk-js-testing/README.md" rel="noopener noreferrer"&gt;Durable Execution SDK JS Testing library&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;reflect-metadata&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;LocalDurableTestRunner&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@aws/durable-execution-sdk-js-testing&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;TYPES&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../../../src/container/inversify.config&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;durableFunction&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../../../src/example-app/handlers&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;DurableFunctionEvent&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../../../src/example-app/durableAbstractHandler&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;IStorage&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../../../src/interfaces/storageIF&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;LocalStorage&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../adapters/storage/local/localStorage&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Durable workflow&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="na"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;LocalDurableTestRunner&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="nf"&gt;beforeAll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ENVIRONMENT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;test&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rebind&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;IStorage&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;TYPES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Storage&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;LocalStorage&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;whenDefault&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;LocalDurableTestRunner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setupTestEnvironment&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;skipTime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nf"&gt;beforeEach&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;runner&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;LocalDurableTestRunner&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;handlerFunction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;durableFunction&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nf"&gt;afterAll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;LocalDurableTestRunner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;teardownTestEnvironment&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;delete&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ENVIRONMENT&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;runs multiple data sources in parallel&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="na"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DurableFunctionEvent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;eventContexts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;customers&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;products&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;orders&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;execution&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;execution&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getStatus&lt;/span&gt;&lt;span class="p"&gt;()).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SUCCEEDED&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;execution&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getResult&lt;/span&gt;&lt;span class="p"&gt;()?.&lt;/span&gt;&lt;span class="nx"&gt;success&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The full example, including the complete test files, adapters, and the full hexagonal implementation, can be found &lt;a href="https://github.com/matheusdasmerces/inversify-hexagonal-example" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Lambda Durable Functions are more than a replacement for Step Functions. They are meant for a &lt;strong&gt;different kind of problem and a different kind of developer experience&lt;/strong&gt;. When you choose Durable, you choose to write your workflows in code. And once orchestration lives in your codebase, boundaries become essential.&lt;/p&gt;

&lt;p&gt;Durable Functions give you distributed-system reliability inside a monolith: parallel execution, maps, retries, and more. But reliability alone is not enough. Without &lt;strong&gt;clear organization&lt;/strong&gt;, it’s easy to mix workflow mechanics with business logic and slowly create something hard to understand.&lt;/p&gt;

&lt;p&gt;Also, Durable Functions &lt;strong&gt;work just as well for simple workflows as for complex ones&lt;/strong&gt;. The difference is not in the workflow size or the number of steps; it’s in how you structure the code around it. Hexagonal Architecture helps keep things in place, while IoC helps keep dependencies clean.&lt;/p&gt;

&lt;p&gt;Together, they allow you to build a &lt;strong&gt;Lambdalith that is reliable&lt;/strong&gt; at runtime and maintainable in the long run.&lt;/p&gt;

&lt;p&gt;The monolith vs microservices debate is still relevant. Durable Functions change &lt;strong&gt;how we approach it&lt;/strong&gt;. You can build a great monolith, but only if you design it properly.&lt;/p&gt;

&lt;p&gt;Durable Functions are powerful, but they require discipline. Design patterns were never optional; we just stopped talking about them. Durable execution doesn’t remove the need for architecture. &lt;strong&gt;It makes it impossible to ignore.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>typescript</category>
      <category>microservices</category>
    </item>
    <item>
      <title>You Don’t Need a Construct for That: Best Practices for Serverless Infrastructure with AWS CDK Blueprints</title>
      <dc:creator>Matheus das Mercês</dc:creator>
      <pubDate>Thu, 17 Jul 2025 07:29:16 +0000</pubDate>
      <link>https://dev.to/aws-builders/you-dont-need-a-construct-for-that-best-practices-for-serverless-infrastructure-with-aws-cdk-3dfd</link>
      <guid>https://dev.to/aws-builders/you-dont-need-a-construct-for-that-best-practices-for-serverless-infrastructure-with-aws-cdk-3dfd</guid>
<description>&lt;p&gt;If you use AWS CDK (Cloud Development Kit) to create infrastructure as code, you're probably familiar with &lt;strong&gt;Constructs&lt;/strong&gt; and &lt;strong&gt;Aspects&lt;/strong&gt;. But have you heard about CDK Blueprints? With them, you can inject properties into L2 Constructs and apply best practices to all your resources at scale.&lt;/p&gt;

&lt;p&gt;In this article, we are going to define best practices for a few resources by understanding the real meaning of CDK building blocks, see how easy it is to define &lt;strong&gt;Blueprints&lt;/strong&gt; in CDK, explore the benefits of property injection, and understand why &lt;strong&gt;you don't need a Construct&lt;/strong&gt; for applying best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;"Everything in CDK is a Construct", but not everything needs to be a Construct.&lt;/p&gt;

&lt;p&gt;No, this article is not &lt;em&gt;only&lt;/em&gt; about CDK Constructs. What you're going to read briefly in the next section is a rough summary of the Constructs documentation, &lt;strong&gt;the way AWS wanted us to understand them&lt;/strong&gt;, and understanding the &lt;strong&gt;semantics&lt;/strong&gt; of it is going to make a big difference in your &lt;strong&gt;IaC&lt;/strong&gt; (Infrastructure as Code) project.&lt;/p&gt;

&lt;h3&gt;
  
  
  What would you need a Construct for?
&lt;/h3&gt;

&lt;p&gt;According to the CDK documentation, Constructs are the basic building blocks of a CDK app, and they are &lt;strong&gt;a collection of one or more resources&lt;/strong&gt; to be deployed via CloudFormation.&lt;/p&gt;

&lt;p&gt;Moreover, Constructs are defined at three levels (&lt;strong&gt;L1&lt;/strong&gt; for the raw CloudFormation definition, &lt;strong&gt;L2&lt;/strong&gt; for a small layer of abstraction, and &lt;strong&gt;L3&lt;/strong&gt; for higher abstraction, often a collection of AWS resources). The documentation also explains that you can create your own Constructs for a &lt;strong&gt;specific pattern&lt;/strong&gt; (for example, an S3 bucket notifying an SNS topic).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdew7n1bqn67qvm35lphg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdew7n1bqn67qvm35lphg.png" alt="CDK construct levels" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Their purpose is to help you reach the desired state of your infrastructure as code by grouping resources and logic into ~&lt;strong&gt;basically&lt;/strong&gt;~ classes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Things you can do with Constructs, but don't need to (or shouldn't?)
&lt;/h3&gt;

&lt;p&gt;What you just read is all that CDK Constructs mean, &lt;strong&gt;according to AWS&lt;/strong&gt;. But it's 2025, and we are creative people. Let's have a look at this example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws-cdk-lib&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Construct&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;constructs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;logs&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws-cdk-lib/aws-logs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;ValidatedLogGroupProps&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nx"&gt;logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;LogGroupProps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ValidatedLogGroup&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;Construct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;logGroup&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;LogGroup&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ValidatedLogGroupProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;retention&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;RetentionDays&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TWO_WEEKS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Retention must be set to TWO_WEEKS&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;removalPolicy&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;RemovalPolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DESTROY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Removal policy must be set to DESTROY&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;logGroup&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;LogGroup&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;LogGroup&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since you write CDK in TypeScript (in this example), you can validate the Construct props and throw exceptions if they do not comply with the standards you define. In practice, this means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You created a new layer of abstraction on top of the LogGroup L2 Construct, which ships default props from the CDK library that might change between versions. This can lead to operational overhead when bumping versions.&lt;/li&gt;
&lt;li&gt;You will not allow LogGroups to be created without a two-week retention (in this hypothetical situation, this configuration is mandatory).&lt;/li&gt;
&lt;li&gt;You write it, you maintain it. The extra logic is going to be maintained by you, even though it is a small class.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although this is possible, you need to ask yourself: "Is it worth it? Are there other options? Am I &lt;strong&gt;misusing&lt;/strong&gt; CDK Constructs?"&lt;/p&gt;

&lt;h2&gt;
  
  
  Thankfully, we have alternatives
&lt;/h2&gt;

&lt;p&gt;The beauty of &lt;strong&gt;imperative&lt;/strong&gt; IaC is that you can do pretty much everything you want, preferably following the &lt;strong&gt;semantics&lt;/strong&gt; of the tool.&lt;/p&gt;

&lt;h3&gt;
  
  
  Aspects
&lt;/h3&gt;

&lt;p&gt;Hold on, we are almost there. Let's first see if Aspects help us validate props values.&lt;/p&gt;

&lt;p&gt;The documentation starts by explaining that &lt;strong&gt;Aspects&lt;/strong&gt; are "a way to apply an &lt;strong&gt;operation&lt;/strong&gt; to all constructs in a given scope, or it could verify something about the &lt;strong&gt;state&lt;/strong&gt; of the constructs, such as making sure that all buckets are encrypted". Ok, things are getting more interesting.&lt;/p&gt;

&lt;p&gt;The purpose of an Aspect is to modify resources at the CloudFormation level based on what was defined by a &lt;strong&gt;higher-level construct (L2)&lt;/strong&gt; or throw exceptions if they deviate from a standard, pretty much what we tried to achieve with Constructs above. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BucketVersioningChecker&lt;/span&gt; &lt;span class="k"&gt;implements&lt;/span&gt; &lt;span class="nx"&gt;IAspect&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;visit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IConstruct&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// See that we're dealing with a CfnBucket&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;node&lt;/span&gt; &lt;span class="k"&gt;instanceof&lt;/span&gt; &lt;span class="nx"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CfnBucket&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

      &lt;span class="c1"&gt;// Check for versioning property, exclude the case where the property&lt;/span&gt;
      &lt;span class="c1"&gt;// can be a token (IResolvable).&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;versioningConfiguration&lt;/span&gt;
        &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;Tokenization&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isResolvable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;versioningConfiguration&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;versioningConfiguration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Enabled&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;Annotations&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;addError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Bucket versioning is not enabled&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Later, apply to the stack&lt;/span&gt;
&lt;span class="nx"&gt;Aspects&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;stack&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;BucketVersioningChecker&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Seems like we are getting there, but here are a few important points from the Aspects &lt;strong&gt;semantics&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want to modify L1 Constructs (CloudFormation level)&lt;/li&gt;
&lt;li&gt;You want to inspect (not modify) L2 Constructs props&lt;/li&gt;
&lt;li&gt;You have to navigate the whole tree of resources and check one by one to inspect&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since we want to leverage the existing L2 LogGroup construct from CDK and make its props compliant with our standards, we could use Aspects and modify the CloudFormation template after the fact. But again, just because it is possible does not mean it is the best way.&lt;/p&gt;

&lt;h3&gt;
  
  
  Blueprints
&lt;/h3&gt;

&lt;p&gt;You know the drill: Let's see what the documentation says about CDK Blueprints.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"&lt;em&gt;Use AWS CDK Blueprints to standardize and distribute L2 construct configurations across your organization. With Blueprints, you can ensure that AWS resources are configured consistently according to your organizational standards and best practices. For example, you can automatically enable encryption for all Amazon S3 buckets, apply specific logging configurations to all AWS Lambda functions, or enforce standard security rules for all security groups.&lt;/em&gt;"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is exactly what we are trying to achieve, and this is not just me saying it: it is how AWS wants us to apply resource best practices.&lt;/p&gt;

&lt;p&gt;Blueprints are powered by &lt;em&gt;property injection&lt;/em&gt;, and each Blueprint is responsible for applying default properties to a specific L2 Construct when it is instantiated. Think of &lt;strong&gt;security standards&lt;/strong&gt;, &lt;strong&gt;cost optimization&lt;/strong&gt;, &lt;strong&gt;compliance requirements&lt;/strong&gt;, etc.&lt;/p&gt;

&lt;p&gt;Let's get down to business and create some examples of Blueprints for a few resources:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CloudWatch Log Group&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;LogGroupPropsInjector&lt;/span&gt; &lt;span class="k"&gt;implements&lt;/span&gt; &lt;span class="nx"&gt;IPropertyInjector&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;constructUniqueId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;constructUniqueId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;LogGroup&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PROPERTY_INJECTION_ID&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;inject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;originalProps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;LogGroupProps&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;_context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InjectionContext&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;LogGroupProps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;originalProps&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;removalPolicy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;RemovalPolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DESTROY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;retention&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;RetentionDays&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TWO_WEEKS&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Lambda Function&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MyNodejsFunctionPropsInjector&lt;/span&gt; &lt;span class="k"&gt;implements&lt;/span&gt; &lt;span class="nx"&gt;IPropertyInjector&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;constructUniqueId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;constructUniqueId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;NodejsFunction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PROPERTY_INJECTION_ID&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;inject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;originalProps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;FunctionProps&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;_context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InjectionContext&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;FunctionProps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;originalProps&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;runtime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Runtime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NODEJS_22_X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;memorySize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;seconds&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;SQS Queue&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MyQueuePropsInjector&lt;/span&gt; &lt;span class="k"&gt;implements&lt;/span&gt; &lt;span class="nx"&gt;IPropertyInjector&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;constructUniqueId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;constructUniqueId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Queue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PROPERTY_INJECTION_ID&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;inject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;originalProps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;QueueProps&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;_context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InjectionContext&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;QueueProps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;originalProps&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;visibilityTimeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;seconds&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="na"&gt;retentionPeriod&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;days&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some extra remarks about CDK Blueprints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Place default properties before &lt;code&gt;...originalProps&lt;/code&gt; to allow overrides.&lt;/li&gt;
&lt;li&gt;Place forced properties after &lt;code&gt;...originalProps&lt;/code&gt; to prevent overrides.&lt;/li&gt;
&lt;li&gt;Use CDK context to enable/disable injectors for testing.&lt;/li&gt;
&lt;/ul&gt;
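
&lt;p&gt;The first two bullets follow directly from JavaScript's object-spread semantics: when keys collide, the later entry wins. A minimal sketch of an injector's merge logic, with no CDK dependency and hypothetical prop values:&lt;/p&gt;

```typescript
// Object-spread ordering decides whether an injected value is a
// default (callers may override it) or forced (callers may not).
interface LogGroupLikeProps {
  retention?: number;
  removalPolicy?: string;
}

function injectProps(originalProps: LogGroupLikeProps): LogGroupLikeProps {
  return {
    retention: 14,            // BEFORE the spread: a default the caller may override
    ...originalProps,
    removalPolicy: 'DESTROY', // AFTER the spread: forced, the caller's value is discarded
  };
}

const merged = injectProps({ retention: 30, removalPolicy: 'RETAIN' });
// merged.retention === 30, merged.removalPolicy === 'DESTROY'
```

&lt;p&gt;The same ordering rule applies inside a real &lt;code&gt;IPropertyInjector.inject()&lt;/code&gt; body, like the ones shown above.&lt;/p&gt;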

&lt;h3&gt;
  
  
  The combination of the two
&lt;/h3&gt;

&lt;p&gt;Since we are doing things "&lt;strong&gt;by the book&lt;/strong&gt;" here, we also need to consider this sentence from the Blueprints docs:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"&lt;em&gt;Blueprints are not a compliance enforcement mechanism. Developers can still override the defaults if needed. For strict compliance enforcement, consider using AWS CloudFormation Guard, Service Control Policies, or CDK Aspects in addition to Blueprints&lt;/em&gt;".&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Even though it is possible to prevent overrides with Blueprints alone (but remember the &lt;strong&gt;semantics&lt;/strong&gt;!), you could combine them with Aspects in the following way:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;With your Blueprints in place, your L2 constructs are going to have some properties injected during initialization. &lt;/li&gt;
&lt;li&gt;If, for some reason, your Blueprints are not applied to the scope of a specific resource, you can have an Aspect as an extra layer of validation (Aspects inspection happens after construct initialization).&lt;/li&gt;
&lt;li&gt;Your Aspect can then check whether a specific property was injected for a given resource, and either throw an exception or add a warning suggesting a pre-existing Blueprint from your project.&lt;/li&gt;
&lt;/ol&gt;
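Step 3 could look roughly like the sketch below. A real implementation would implement `IAspect` from `aws-cdk-lib` and inspect the synthesized construct tree; here the node shape and the `retentionPeriod` check are hypothetical, just to model the idea:

```typescript
// Hypothetical node shape standing in for a CDK construct in this sketch.
interface NodeLike {
  id: string;
  retentionPeriod?: number;
}

// Aspect-style validator: visit() runs for every construct in scope after
// initialization, so it sees whatever the Blueprints injected (or did not).
class RetentionPeriodAspect {
  readonly warnings: string[] = [];

  visit(node: NodeLike): void {
    if (node.retentionPeriod === undefined) {
      // Could also throw here for strict enforcement instead of warning.
      this.warnings.push(
        node.id + ': retentionPeriod missing; consider applying the project Blueprint',
      );
    }
  }
}

const aspect = new RetentionPeriodAspect();
aspect.visit({ id: 'LogGroupA', retentionPeriod: 4 }); // Blueprint applied
aspect.visit({ id: 'LogGroupB' });                     // Blueprint not applied
```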

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In the realm of AWS CDK, while &lt;strong&gt;Constructs&lt;/strong&gt; are fundamental building blocks for defining infrastructure as code, they are not always necessary for &lt;strong&gt;enforcing best practices.&lt;/strong&gt; The introduction of CDK &lt;strong&gt;Blueprints&lt;/strong&gt; offers a more streamlined and efficient approach to standardizing configurations across your organization. By leveraging property injection, Blueprints allow you to apply default properties to L2 Constructs, ensuring consistency in resource configurations without the need for additional abstraction layers.&lt;/p&gt;

&lt;p&gt;Ultimately, the combination of &lt;strong&gt;Blueprints and Aspects&lt;/strong&gt; allows for a balance between flexibility and compliance, enabling you to maintain best practices at scale while still allowing for necessary customizations. This approach aligns with AWS's vision of how infrastructure as code should be managed, providing a powerful toolkit for modern cloud infrastructure management.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/constructs.html" rel="noopener noreferrer"&gt;CDK Constructs documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/aspects.html" rel="noopener noreferrer"&gt;CDK Aspects documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/blueprints.html" rel="noopener noreferrer"&gt;CDK Blueprints documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/aws/aws-cdk-rfcs/blob/main/text/0693-property-injection.md" rel="noopener noreferrer"&gt;Property Injection Implementation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>infrastructureascode</category>
      <category>aws</category>
      <category>serverless</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Hands-On with Amazon Q Developer in GitHub (Preview): First Impressions</title>
      <dc:creator>Matheus das Mercês</dc:creator>
      <pubDate>Mon, 05 May 2025 22:00:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/hands-on-with-amazon-q-developer-in-github-preview-first-impressions-3baa</link>
      <guid>https://dev.to/aws-builders/hands-on-with-amazon-q-developer-in-github-preview-first-impressions-3baa</guid>
      <description>&lt;p&gt;This week, Amazon &lt;a href="https://aws.amazon.com/about-aws/whats-new/2025/05/amazon-q-developer-integration-github-preview-available/" rel="noopener noreferrer"&gt;released&lt;/a&gt; the preview of &lt;strong&gt;Amazon Q Developer in GitHub&lt;/strong&gt;, allowing support across the software development lifecycle from coding, testing, and deploying to troubleshooting and modernizing applications.&lt;/p&gt;

&lt;p&gt;I tested it, and I have to say—&lt;strong&gt;it's awesome&lt;/strong&gt;! In this article, I will share my impressions and explain everything you can and cannot do with the current version.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/q/developer/" rel="noopener noreferrer"&gt;Amazon Q Developer&lt;/a&gt; is a &lt;strong&gt;generative AI–powered assistant&lt;/strong&gt; that helps developers and IT professionals build, operate, and transform software. It assists in writing, debugging, and optimizing code, as well as managing AWS resources and performing &lt;strong&gt;code transformations&lt;/strong&gt;. ​&lt;/p&gt;

&lt;h2&gt;
  
  
  What's new?
&lt;/h2&gt;

&lt;p&gt;The preview release of Amazon Q Developer in GitHub brings several new capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Feature development&lt;/strong&gt;: Amazon Q Developer lets you generate code from natural language prompts, helping you build features, fix bugs, add tests, and refine logic directly within GitHub issues—saving time and reducing errors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code transformation&lt;/strong&gt;: It modernizes outdated Java codebases by updating tech stacks and improving performance, all while maintaining original functionality and minimizing technical debt.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code review&lt;/strong&gt;: Amazon Q Developer automates code reviews in GitHub by analyzing pull requests, offering feedback, and suggesting fixes you can easily apply.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  My experiment
&lt;/h2&gt;

&lt;p&gt;Using Amazon Q Developer in GitHub feels just like &lt;strong&gt;assigning tasks to a real human being&lt;/strong&gt;. It understands the context of repository issues, provides relevant code changes, and you can even interact with the assistant in the pull request. If you have a clear goal and can give proper instructions, Amazon Q Developer will most likely achieve it, with some &lt;em&gt;caveats&lt;/em&gt;, of course.&lt;/p&gt;

&lt;p&gt;To experiment with the agent, I created two issues in a repository that uses &lt;strong&gt;Hexagonal Architecture&lt;/strong&gt; (not that it matters much, but I wanted to try something as close as possible to a real-life repository). If you want to understand more about this design pattern, check out &lt;a href="https://dev.to/aws-builders/refactoring-a-lambda-monolith-to-microservices-using-hexagonal-architecture-1em0"&gt;my blog post&lt;/a&gt; about it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a new issue
&lt;/h3&gt;

&lt;p&gt;The first thing I wanted to try was to add a &lt;strong&gt;new adapter for SSM Parameter Store&lt;/strong&gt; to my &lt;a href="https://github.com/matheusdasmerces/inversify-hexagonal-example" rel="noopener noreferrer"&gt;repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frusasvudqkijsg90ni90.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frusasvudqkijsg90ni90.png" alt="Defining issue" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this issue, I tried to be as clear as possible without writing much text. Remember, &lt;strong&gt;it is all about the prompt&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;I selected "&lt;strong&gt;Amazon Q Developer Agent&lt;/strong&gt;" under "&lt;strong&gt;Labels&lt;/strong&gt;". For me, it felt like assigning this issue to a peer, but this time, not to a human but to an &lt;strong&gt;agent&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2kve1cyi57s2v3qc1nd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2kve1cyi57s2v3qc1nd.png" alt="Label" width="790" height="616"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As soon as you add the label, you can see the update from the Amazon Q Developer agent:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypdoxea1v23rb0xytlzi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypdoxea1v23rb0xytlzi.png" alt="Update" width="800" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;less than 2 minutes&lt;/strong&gt;, I could see a new comment from the agent on the issue, and a link to a pull request!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cxvf0b1i1dns75xd3lw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cxvf0b1i1dns75xd3lw.png" alt="New update" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pull-request interaction
&lt;/h3&gt;

&lt;p&gt;Amazon Q Developer created the pull request the way I usually do (and like): a clear description and a link to the issue this PR resolves:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3f3pe5wtskfljjdzwq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3f3pe5wtskfljjdzwq4.png" alt="Pull request" width="800" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It also explains how to interact with the agent and how to request changes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdvgskz8jldd1w6hgbly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdvgskz8jldd1w6hgbly.png" alt="Request changes explanation" width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Reviewing the changes, only four files were changed, so it was a very straightforward PR, since this repository is not very complex.&lt;/p&gt;

&lt;p&gt;The only thing I noticed was a violation of &lt;strong&gt;Hexagonal Architecture&lt;/strong&gt; - the port (interface) created by Amazon Q Developer has "SSM" in its name, whereas this layer should not contain any service specifics. I was not expecting the agent to know this design pattern, so no big issue here. I then asked the agent to update it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwo7je0p5qj7o6shcdgc3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwo7je0p5qj7o6shcdgc3.png" alt="Adding a comment" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A few seconds after clicking on "&lt;strong&gt;Request changes&lt;/strong&gt;", Amazon Q Developer added a comment to the PR:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6pmj4r36wqewrlnzhwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6pmj4r36wqewrlnzhwq.png" alt="Agent response" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And again, a couple of minutes later, &lt;strong&gt;voila&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbjz2qi4vbjk0rlln7j0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbjz2qi4vbjk0rlln7j0.png" alt="Changes made" width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Code review
&lt;/h3&gt;

&lt;p&gt;Assigning the issue to Amazon Q Developer also gives it the freedom to review its code changes against &lt;strong&gt;security vulnerabilities&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7yqsyvevfofhyb7graw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7yqsyvevfofhyb7graw6.png" alt="Comment from the agent" width="800" height="681"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if something is found, Amazon Q Developer will suggest a change, and you can decide whether to accept it or not:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x8o1sy36dpi4vehetzx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x8o1sy36dpi4vehetzx.png" alt="Suggestions by the agent" width="800" height="674"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Not perfect (yet)
&lt;/h2&gt;

&lt;p&gt;I have to say, for a preview version, Amazon Q Developer in GitHub is awesome. There are a couple of things I personally would like to see improved in a later version, in no specific order:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code review&lt;/strong&gt; for security issues and vulnerabilities happens on every interaction with the agent in a pull request, so you can expect new suggestions every time you ask for a change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Renaming files&lt;/strong&gt; during your code review can be an issue for the agent. I asked a couple of times to rename or remove a file, but it could not really understand the purpose of the request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Branch names&lt;/strong&gt; can be an issue if you want your repository to follow branch-naming standards.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source of issues&lt;/strong&gt; is currently GitHub only. However, I wish I could register issues in a third-party tool and have Amazon Q Developer pull them into GitHub somehow.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;You can check both issues I've created for this experiment and their pull requests &lt;a href="https://github.com/matheusdasmerces/inversify-hexagonal-example/issues?q=is%3Aissue%20state%3Aclosed" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Start using it now!
&lt;/h2&gt;

&lt;p&gt;The first (and only) thing you need to do is install the &lt;a href="https://github.com/apps/amazon-q-developer" rel="noopener noreferrer"&gt;Amazon Q Developer application&lt;/a&gt; in GitHub. There is no need to connect an AWS account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4e1o2cx8rk4qy8ilfz8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4e1o2cx8rk4qy8ilfz8.png" alt="Installing app in GitHub" width="800" height="1031"&gt;&lt;/a&gt;&lt;br&gt;
You can choose to install it in all of your repositories or select the ones you want, and that is it! You can now create issues and assign them to Amazon Q Developer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Amazon Q Developer in GitHub&lt;/strong&gt; marks a significant step towards integrating AI into the &lt;strong&gt;software development lifecycle&lt;/strong&gt;, not only in your IDE, but also in resolving issues defined in GitHub.&lt;/p&gt;

&lt;p&gt;The interaction with the tool is great. If you are already used to GitHub issues and pull requests, using Amazon Q Developer will feel &lt;strong&gt;100% natural&lt;/strong&gt; to you.&lt;/p&gt;

&lt;p&gt;While there are areas for improvement, it is good to remember that this is a &lt;strong&gt;preview&lt;/strong&gt; release, and the current capabilities are already worth trying.&lt;/p&gt;

&lt;p&gt;Have you tried it already? How was your experience? Leave a comment below, I would love to hear!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>github</category>
      <category>programming</category>
    </item>
    <item>
      <title>AWS Community Day Italy 2025: My Experience as an Attendee and Session Takeaways</title>
      <dc:creator>Matheus das Mercês</dc:creator>
      <pubDate>Fri, 04 Apr 2025 08:49:42 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-community-day-italy-2025-my-experience-as-an-attendee-and-session-takeaways-57c4</link>
      <guid>https://dev.to/aws-builders/aws-community-day-italy-2025-my-experience-as-an-attendee-and-session-takeaways-57c4</guid>
      <description>&lt;p&gt;I went to Italy to enjoy some holidays and my birthday, and it happened to be during the same week as the AWS Community Day Milan 2025, so of course, I did not miss the opportunity to attend - and the event was an absolute blast! Such a great atmosphere and amazing speakers, so it was a great opportunity to learn a lot and connect with like-minded AWS enthusiasts. &lt;/p&gt;

&lt;p&gt;As someone who loves propagating knowledge, I've decided to write up the key takeaways from my three favorite sessions and my thoughts about the event in general.&lt;/p&gt;

&lt;h2&gt;
  
  
  Welcome Booth
&lt;/h2&gt;

&lt;p&gt;Right when entering the venue to claim my badge (I had to register a couple of days in advance), I got a bag with some cool AWS swag and information about the sponsors. More importantly, I received some vouchers for coffee. I mean, it is Italy, so the coffee is always good everywhere.&lt;/p&gt;

&lt;p&gt;I got my double espresso and had a quick look at the space, and seeing so many familiar faces from the international community, I immediately knew the day ahead was going to be awesome.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keynotes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Kick-off
&lt;/h3&gt;

&lt;p&gt;After breakfast and networking, Monica Colangelo (AWS Hero and event organizer) started the event by explaining the venue's logistics, the agenda, and so on. The information was very clear to me as an attendee, and it gave me the impression that the event was thoroughly planned and well-organized. I knew exactly what to expect, and the keynote speaker was about to get on stage!&lt;/p&gt;

&lt;h3&gt;
  
  
  Generative AI: tech du-jour or the next big thing?
&lt;/h3&gt;

&lt;p&gt;The keynote speech was delivered by Massimo Re Ferrè, Director of Product Management at AWS.&lt;/p&gt;

&lt;p&gt;It was such an interesting perspective: Massimo made me reflect on GenAI and LLMs throughout the whole session. He started by explaining his "aha" moment for GenAI - a simple Lambda@Edge function he asked for in the very early days of ChatGPT. The code worked almost 100%, which made him think "there is something here".&lt;/p&gt;

&lt;p&gt;I could relate to Massimo, as my "aha" moment was quite similar - a piece of code that before seemed impossible to write without a person, an assistant would now write within seconds - especially when Massimo explained the depth of today's LLMs, where "if you have a clear goal and ask the assistant, it will most probably achieve your goal" (think of a feature request).&lt;/p&gt;

&lt;p&gt;"Where are these assistants going?" made me reflect on what I use LLM for nowadays. According to Massimo, LLM can make you go home early, and I agree. It is all about you letting "AI do the laundry and dishes for you to focus on art", not the opposite.&lt;/p&gt;

&lt;p&gt;Approaching the end of the session, Massimo explored the risk management of GenAI. It can do a lot nowadays, so we have to be able to determine what benefits we want to extract from it and what we are willing to give up to leverage the good parts of it.&lt;/p&gt;

&lt;p&gt;If we have not changed our mindset yet, we should. "The developer role will not disappear, it will change" and - the verb I like the most for this subject - evolve.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sessions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Building Secure and Efficient SaaS Platforms on AWS Serverless
&lt;/h3&gt;

&lt;p&gt;I had attended this session before at an AWS event in the Netherlands, delivered by Luciano Mammino and Guilherme Dalla Rosa, and I really liked it.&lt;/p&gt;

&lt;p&gt;The session starts by explaining why multi-tenant architecture is important: it is mainly about optimizing resources and being more efficient by sharing them across multiple customers.&lt;br&gt;
The audience also gets a good overview of the trade-offs between multi-tenant and single-tenant applications.&lt;/p&gt;

&lt;p&gt;Then, they start a walkthrough of the solution built on AWS using API Gateway, a custom Lambda authorizer, Cognito, and a DynamoDB table that is accessible through the gateway for the different tenants.&lt;/p&gt;

&lt;p&gt;I like the idea of the "danger zone" brought up by Luciano and Guilherme, where a tenant could possibly access data from a different tenant due to a bug, security issue, injection, or anything related. That happens because the differentiation between tenants happens directly in the query expression to DynamoDB, and there is no distinction at the IAM level: the Lambda service role can freely query data on this table. "The Lambda has too much responsibility".&lt;/p&gt;

&lt;p&gt;In my opinion, their solution is brilliant: they created a policy with well-defined permission boundaries, and, in the Lambda authorizer code, they narrowed down the permissions of this IAM role based on the claims (tenant ID) in the token. This scoped-down role is then assumed by the authorizer.&lt;/p&gt;

&lt;p&gt;Now, if there is a bug where tenant A is trying to access tenant B, it won't work anymore because the policy won't allow it. If there is a "noisy tenant", it is also possible to use an API Gateway usage plan to limit the number of requests for that specific tenant.&lt;/p&gt;
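To illustrate the idea with my own sketch (not the speakers' actual code): an authorizer can build a session policy that scopes DynamoDB access to a single tenant's partition keys and pass it when assuming the role. The `dynamodb:LeadingKeys` condition key is real; the table ARN, account ID, and tenant IDs below are made up:

```typescript
// Build an IAM session policy that only allows DynamoDB operations on items
// whose partition key matches the caller's tenant ID.
function buildTenantSessionPolicy(tenantId: string, tableArn: string) {
  return {
    Version: '2012-10-17',
    Statement: [
      {
        Effect: 'Allow',
        Action: ['dynamodb:Query', 'dynamodb:GetItem'],
        Resource: tableArn,
        Condition: {
          'ForAllValues:StringEquals': {
            // Restricts every partition key touched by the request
            // to this tenant's key space.
            'dynamodb:LeadingKeys': [tenantId],
          },
        },
      },
    ],
  };
}

// The authorizer would JSON-stringify this document and pass it as the
// Policy parameter of sts:AssumeRole, so the effective permissions are the
// intersection of the role's policy and this tenant-scoped session policy.
const policy = buildTenantSessionPolicy(
  'tenant-a',
  'arn:aws:dynamodb:eu-west-1:123456789012:table/Tenants',
);
```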

&lt;p&gt;My takeaway from this session is that the solution shown is a great example of how to apply least-privilege permissions in a multi-tenant application without reinventing the wheel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Event-Driven and serverless in world of IoT
&lt;/h3&gt;

&lt;p&gt;The session, delivered by Jimmy Dahlqvist (AWS Hero), explored how event-driven and serverless architectures can address key challenges in IoT systems, including scalability, monitoring, latency, security, data volume, and cost per device.&lt;/p&gt;

&lt;p&gt;Jimmy showed the audience a use case (a connected entrance solution) where the system must handle unpredictable traffic, long-running processing tasks, and strict budget constraints.&lt;/p&gt;

&lt;p&gt;The initial architecture relied, amongst other components, on IoT rules for routing and small objects in S3, and suffered from debugging issues in DynamoDB due to query limitations. These bottlenecks led to a need for architectural changes, including improved components for routing, data storage, and debugging. Jimmy then dove deep into the architecture components, replacing some pieces of the solution to meet the requirements.&lt;/p&gt;

&lt;p&gt;Jimmy concluded the session by summarizing the benefits of serverless and event-driven architectures in IoT environments, such as scalability, cost efficiency (pay-per-use model), and responsiveness of the solution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Road to compliance: will your internal users hate your Platform Team?
&lt;/h3&gt;

&lt;p&gt;Davide de Paolis, Engineering Manager of a Platform Team, explained how the mindset shift between Software Engineers and Cloud Engineers can be hard, and how to minimize the effects for the platform users.&lt;/p&gt;

&lt;p&gt;When migrating to a more robust AWS Organizations setup, there is a change of habit as well: now, engineers need to log in to multiple accounts and create multiple policies with SCPs, for example. He also explained the benefits of AWS Organizations in a platform, which makes it easier to apply security controls, quotas, data isolation, cost allocation, etc.&lt;/p&gt;

&lt;p&gt;When Davide walked the audience through the mechanics of creating policies and enforcing tags on resource creation, for example, he created a good bridge to what was coming next: enforce vs. inform.&lt;/p&gt;

&lt;p&gt;More important than enforcing policies, communicating is crucial to make sure there is enough time for development teams to comply with the guardrails. Even more important: identifying whom to inform. Davide showed a great example of an architecture for a solution responsible for identifying and notifying the owners of resources that are not compliant with the guardrails defined by the platform team.&lt;/p&gt;

&lt;p&gt;The insightful idea of MVG (Minimum Viable Governance) brought up by Davide helps the team focus on what really matters, iterate based on feedback and changing priorities, and share goals and purpose. For him, a platform team can fail when they lose focus on the goals and create a disconnect between themselves and the development teams.&lt;/p&gt;

&lt;p&gt;My key takeaway from this session is: a Platform Team, among other responsibilities, also needs to keep sharing knowledge and learning from each other, which takes time and patience, but it will create the desired connection with the development teams - without that, the "internal users will hate your platform team".&lt;/p&gt;

&lt;h2&gt;
  
  
  Honorable mentions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  BuildersCard
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogci2bgxu2wgwi3mc7vt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogci2bgxu2wgwi3mc7vt.jpg" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
Game time! Throughout the event, there was a game room, as shown above, where attendees could relax a bit and have fun between the sessions. &lt;a href="https://aws.amazon.com/gametech/buildercards/" rel="noopener noreferrer"&gt;BuilderCards&lt;/a&gt; is an educational game that helps people understand how AWS services can work together to design well-architected applications. You don't necessarily need AWS knowledge or a technical background, although it is more fun when you do, because you can discuss the usage of the services during the game. It was also a great opportunity to meet new people in the community.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Spotlight on Women Speakers
&lt;/h3&gt;

&lt;p&gt;A special mention goes to the strong representation of women among the speakers, an encouraging step toward more inclusive conversations in our field. Hats off to the organizers for making it happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I am biased when giving my opinion about an AWS event in Italy, as I love Italian culture and I am an AWS &lt;del&gt;nerd&lt;/del&gt; enthusiast, but I was happy to spend one day of my holidays at this event. I had so much fun, and the sessions were great, so I really enjoyed AWS Community Day Milan 2025. Connecting, sharing, and learning from the community is something that I truly enjoy doing.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>community</category>
      <category>cloud</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Are Your AWS CloudTrail Costs Out of Control? Here’s Why</title>
      <dc:creator>Matheus das Mercês</dc:creator>
      <pubDate>Mon, 17 Mar 2025 00:30:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/are-your-aws-cloudtrail-costs-out-of-control-heres-why-pn9</link>
      <guid>https://dev.to/aws-builders/are-your-aws-cloudtrail-costs-out-of-control-heres-why-pn9</guid>
      <description>&lt;p&gt;Are you happy with your CloudTrail bill? I asked that same question in my &lt;a href="https://dev.to/matheusdasmerces/optimizing-amazon-cloudwatch-costs-for-high-traffic-lambda-functions-with-advanced-logging-controls-5406"&gt;previous article&lt;/a&gt; about CloudWatch, and now, it is time to reflect on AWS CloudTrail. &lt;/p&gt;

&lt;p&gt;In this article, I will explore possible reasons why you are overspending on CloudTrail, and discuss ways to keep the costs of CloudTrail well controlled.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;AWS CloudTrail is one of those services that delivers huge value to the AWS ecosystem without costing tons of money. It enables auditing, security monitoring, and operational troubleshooting by tracking your user activity and API calls.&lt;/p&gt;

&lt;p&gt;However, it is also one of those services that can get very expensive quite easily - especially because it tracks all activities in an AWS account/organization and generates &lt;strong&gt;audit logs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Understanding how AWS charges you is a crucial step to avoid misconfiguration and keep compliance at the &lt;strong&gt;lowest price point&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Serverless Paradigm
&lt;/h2&gt;

&lt;p&gt;Yes, AWS CloudTrail is a &lt;strong&gt;serverless&lt;/strong&gt; service. And that is because it is a fully managed service that automatically records AWS API activity and stores it in Amazon S3 or CloudWatch Logs. You don’t need to provision or manage servers - AWS handles everything behind the scenes.&lt;/p&gt;

&lt;p&gt;Serverless is not just about the &lt;strong&gt;literal&lt;/strong&gt; meaning of "server-less" but also the principles behind it, for example, no infrastructure management and &lt;strong&gt;pay-per-use pricing&lt;/strong&gt;. However, the tricky part lies in the last one: the first copy of trails is free, but the second copy onwards incurs charges, and this is where you might be spending quite some money.&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudTrail pricing
&lt;/h2&gt;

&lt;p&gt;Let's understand how AWS customers pay for CloudTrail. I want to focus on &lt;strong&gt;Trails&lt;/strong&gt;, which is the subject of this article. On the CloudTrail &lt;a href="https://aws.amazon.com/cloudtrail/pricing/" rel="noopener noreferrer"&gt;pricing page&lt;/a&gt;, there are also details about &lt;strong&gt;Lake&lt;/strong&gt; and &lt;strong&gt;Insights&lt;/strong&gt;, which I will not cover here for now. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Do you want me to investigate and share insights about Lake or Insights for a future article? Let me know in the comments!&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;management events&lt;/strong&gt; delivered to S3, you pay $2.00 per 100,000 events delivered (after the first free copy)&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;data events&lt;/strong&gt; delivered to S3, you pay $0.10 per 100,000 events delivered&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;network activity events&lt;/strong&gt; delivered to S3, you pay $0.10 per 100,000 events delivered&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;NOTE: Amazon S3 charges apply and are not included in this analysis.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you take a close look at the pricing and compare it to the trails in your AWS account, you can immediately see where the eventual high costs come from, so now, it is important to understand why.&lt;/p&gt;

&lt;h2&gt;
  
  
  Duplicated trails
&lt;/h2&gt;

&lt;p&gt;The sentence I want you to reflect on from the pricing is "&lt;strong&gt;after the first free copy&lt;/strong&gt;". What does that mean?&lt;/p&gt;

&lt;p&gt;Let's imagine you set up &lt;strong&gt;organization trails&lt;/strong&gt; in all member accounts. These trails go to an S3 bucket in a centralized logging account, for example. To deliver the same events to other destinations so that different groups (for instance, developers, security, auditors, etc.) get their own copy of these audit logs, you also created trails in the &lt;strong&gt;individual accounts&lt;/strong&gt;. Although this is a valid use case, it can be costly - the first copy of these events is free of charge and you pay for the others - and that is why I call these duplicated trails. They generate a metric called &lt;em&gt;&lt;strong&gt;PaidEventsRecorded&lt;/strong&gt;&lt;/em&gt;, and this is how AWS charges you.&lt;/p&gt;

&lt;p&gt;Still on the hypothetical AWS account, let's say this environment generates &lt;strong&gt;5 million management events&lt;/strong&gt; delivered to S3 per month:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first trail, delivered to an S3 bucket that developers can access, is free of charge.&lt;/li&gt;
&lt;li&gt;The second trail, delivered to an S3 bucket that the security team can access, will cost $100 (&lt;em&gt;&lt;strong&gt;5,000,000 / 100,000 * $2.00 = $100&lt;/strong&gt;&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;The third trail, delivered to an S3 bucket that the auditors team can access, will cost $100 (&lt;em&gt;&lt;strong&gt;5,000,000 / 100,000 * $2.00 = $100&lt;/strong&gt;&lt;/em&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In total, these duplicated trails cost you $200. But is there something we can do in this scenario while still maintaining compliance and least privilege?&lt;/p&gt;
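&lt;p&gt;The arithmetic above can be sketched as a tiny TypeScript helper (a hypothetical function for illustration, not an AWS API; pricing as quoted from the CloudTrail pricing page):&lt;/p&gt;

```typescript
// Hypothetical cost estimator for duplicated CloudTrail management-event trails.
// Assumed pricing, as quoted above: first copy free, then $2.00 per 100,000
// management events delivered to S3 for each additional trail.
function duplicatedTrailCost(eventsPerMonth: number, trailCount: number): number {
  const paidTrails = Math.max(trailCount - 1, 0); // the first copy is free
  return paidTrails * (eventsPerMonth / 100_000) * 2.0;
}

// 5 million management events, three trails: the two paid copies cost $200/month
console.log(duplicatedTrailCost(5_000_000, 3)); // 200
```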

&lt;p&gt;To avoid unnecessary costs from the &lt;em&gt;&lt;strong&gt;PaidEventsRecorded&lt;/strong&gt;&lt;/em&gt; metric, you can opt to remove the trails created in the specific accounts and keep the organization trail. By doing that, all the management events trail logs are still delivered to a centralized S3 bucket, but now you can control access per account with &lt;strong&gt;IAM roles&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;For example, to give access to developers on their specific accounts within the organization, you can create an IAM role to be assumed by the developers from a specific account but only allow them to access the &lt;strong&gt;bucket prefix&lt;/strong&gt; of the trails from their account. Your policy would look more or less like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Effect": "Allow"
    "Resource": "arn:aws:s3:::organization-trail-bucket/AWSLogs/OU_ID/ACCOUNT_ID/*",
    "Action": [
        "s3:Get*",
        "s3:HeadObject",
        "s3:List*",
        "s3:RestoreObject"
    ],
},
{
    "Effect": "Allow"
    "Resource": "arn:aws:s3:::organization-trail-bucket",
    "Action": "s3:ListBucket",
    "Condition": {
        "StringLike": {
            "s3:prefix": [
                "AWSLogs/OU_ID/ACCOUNT_ID/*"
            ]
        }
    },
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By keeping the organization trail delivering logs to a centralized S3 bucket and controlling the access via IAM roles, you can eradicate the costs of having duplicated trails.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data events
&lt;/h2&gt;

&lt;p&gt;Data events are slightly more complicated than management events. You also pay for data events delivered to S3, but there are no free copies (they always incur charges from the first copy), and data events can generate far more audit logs, which makes it harder to reduce costs. They generate a metric called &lt;em&gt;&lt;strong&gt;DataEventsRecorded&lt;/strong&gt;&lt;/em&gt;, and this is how AWS charges you.&lt;/p&gt;

&lt;p&gt;Nevertheless, data events are important for audit and compliance purposes. Of course, you can always negotiate with the audit team or the team responsible for those logs if they are necessary according to the policies defined by the organization.&lt;/p&gt;

&lt;p&gt;AWS's advice is to "filter out AWS KMS or Amazon RDS Data API events by choosing Exclude AWS KMS events or Exclude Amazon RDS Data API events on the Create trail or Update trail pages". This can help you reduce the number of logs generated by data events.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring CloudTrail costs
&lt;/h2&gt;

&lt;p&gt;Using the &lt;strong&gt;Cost Explorer&lt;/strong&gt; console, you can get an overview of the &lt;em&gt;&lt;strong&gt;PaidEventsRecorded&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;DataEventsRecorded&lt;/strong&gt;&lt;/em&gt; metrics and how their costs increase/decrease over time. &lt;/p&gt;

&lt;p&gt;You can select the metrics under the "Usage Type" filter:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97zxicehg4ub51q10n80.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97zxicehg4ub51q10n80.png" alt="CloudTrail Metrics" width="313" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, managing AWS CloudTrail costs effectively requires understanding the pricing structure and identifying areas where expenses can be reduced. &lt;/p&gt;

&lt;p&gt;By avoiding duplicated trails and utilizing IAM roles to control access to a centralized S3 bucket, you can eliminate unnecessary charges from multiple copies of management events. &lt;/p&gt;

&lt;p&gt;For data events, consider filtering out less critical logs to reduce costs. &lt;/p&gt;

&lt;p&gt;Regularly monitoring these expenses using tools like Cost Explorer can help you track and manage your CloudTrail spending more efficiently, ensuring compliance without overspending.&lt;/p&gt;

&lt;p&gt;I would love to hear your thoughts! Let me know in the comments how your experience was with CloudTrail trails costs.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>compliance</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How To Mount an Amazon Elastic File System on Amazon CodeBuild From Another VPC</title>
      <dc:creator>Matheus das Mercês</dc:creator>
      <pubDate>Mon, 03 Mar 2025 00:30:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-mount-an-amazon-elastic-file-system-on-amazon-codebuild-from-another-vpc-b1j</link>
      <guid>https://dev.to/aws-builders/how-to-mount-an-amazon-elastic-file-system-on-amazon-codebuild-from-another-vpc-b1j</guid>
      <description>&lt;p&gt;With &lt;strong&gt;CodeBuild&lt;/strong&gt;, depending on the compute and environment type you configure, the runner offers a pretty good amount of disk space. However, after the build is completed, you lose the data in its disk because the storage is ephemeral. Mounting an &lt;strong&gt;Amazon EFS&lt;/strong&gt; (Elastic File System) in the CodeBuild runner on the fly might be useful if you need to persist data across builds, run integration tests if the file system is used across multiple services, or manage large files efficiently.&lt;/p&gt;

&lt;p&gt;In this quick how-to guide, I will show you how to mount an EFS on a CodeBuild ephemeral host that lives in a different VPC.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before mounting an EFS in the CodeBuild ephemeral host, make sure you take the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make sure you have a way to access your CodeBuild runner (for example, via Systems Manager). This guide uses a self-hosted GitHub Actions CodeBuild runner. In a previous &lt;a href="https://dev.to/matheusdasmerces/building-scalable-cicd-pipelines-with-self-hosted-github-actions-on-amazon-codebuild-14di"&gt;article&lt;/a&gt;, I shared how to set up a CodeBuild runner to self-host GitHub Actions in a large organization environment. Here is an &lt;a href="https://github.com/matheusdasmerces/github-codebuild-selfhosted" rel="noopener noreferrer"&gt;example&lt;/a&gt; using CDK to show how that can be configured.&lt;/li&gt;
&lt;li&gt;The CodeBuild project must be configured inside a VPC and have a security group assigned to it.&lt;/li&gt;
&lt;li&gt;Both VPCs must be connected either via peering or Transit Gateway.&lt;/li&gt;
&lt;li&gt;It is important that both VPCs are located in the same AWS region.&lt;/li&gt;
&lt;li&gt;The CodeBuild project must be configured to run in privileged mode.&lt;/li&gt;
&lt;li&gt;Make sure your local environment has the AWS CLI properly configured.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step one: Configuring the EFS
&lt;/h2&gt;

&lt;p&gt;If you haven't created the EFS yet, let's use the CLI to create a simple file system with the configuration needed for mounting on CodeBuild.&lt;/p&gt;

&lt;p&gt;Create a new security group. Later, you will assign this security group to the newly created EFS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-security-group \
--group-name efs-example-sg \
--description "SG for the EFS mount target" \
--vpc-id vpc-id-example \
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Write down the security group ID. &lt;/p&gt;

&lt;p&gt;Create a new inbound rule for the new security group, allowing the CodeBuild runner security group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 authorize-security-group-ingress \
--group-id ID of the security group created for Amazon EFS mount target \
--protocol tcp \
--port 2049 \
--source-group ID of the security group of the CodeBuild runner \
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a new file system with the name "ExampleFileSystem":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws efs create-file-system \
    --performance-mode generalPurpose \
    --throughput-mode bursting \
    --encrypted \
    --tags Key=Name,Value=ExampleFileSystem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Write down the file system ID.&lt;/p&gt;

&lt;p&gt;Create a mount target. The mount target must live in the same AZ as the CodeBuild runner and inside a private subnet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws efs create-mount-target \
--file-system-id file-system-id \
--subnet-id private-subnet-id \
--security-group ID-of-the security-group-created-for-mount-target \
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;NOTE: &lt;em&gt;If you manage the CodeBuild runner configuration, you can create/modify your CodeBuild project to have the EFS mounted by default on every build. This reduces the complexity of having to manually mount it on every build. If that is your case, you can follow &lt;a href="https://docs.aws.amazon.com/codebuild/latest/userguide/sample-efs.html" rel="noopener noreferrer"&gt;this guide&lt;/a&gt; from the AWS documentation and skip the following steps.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;If the CodeBuild runner is managed by a DevOps/Platform team and you don't have control of its configuration or need to manually mount the EFS on the fly for any reason, you can continue with this guide.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step two: Mounting the EFS
&lt;/h2&gt;

&lt;p&gt;Make sure you have the DNS name of your file system as you follow the steps in this section. You can construct this DNS name using the following generic form:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;file-system-id.efs.aws-region.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
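&lt;p&gt;As a worked example, the generic form above can be expressed as a tiny TypeScript helper (&lt;em&gt;efsDnsName&lt;/em&gt; is a hypothetical function for illustration; the file system ID shown is a made-up value):&lt;/p&gt;

```typescript
// Hypothetical helper that builds the EFS DNS name from its parts,
// following the generic form file-system-id.efs.aws-region.amazonaws.com.
function efsDnsName(fileSystemId: string, region: string): string {
  return `${fileSystemId}.efs.${region}.amazonaws.com`;
}

console.log(efsDnsName("fs-0123456789abcdef0", "eu-west-1"));
// fs-0123456789abcdef0.efs.eu-west-1.amazonaws.com
```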



&lt;p&gt;Now, inside the CodeBuild runner (assuming you have a way to access it, as mentioned before), install the EFS utils as in the example below. Make sure you add the desired region in the &lt;em&gt;/etc/amazon/efs/efs-utils.conf&lt;/em&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install -y amazon-efs-utils
sudo sed -i "s/#region = us-east-1/region = $REGION/" /etc/amazon/efs/efs-utils.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the EFS mount helper installed, you can now mount the file system, pointing it to its DNS name. Make sure you add the IP address of the EFS in &lt;code&gt;/etc/hosts&lt;/code&gt;, mapping the mount target IP address to your EFS file system's hostname:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir efs
EFS_IP_ADDR=$(aws efs describe-mount-targets --file-system-id $FILE_SYSTEM_ID --region $REGION | jq -r '.MountTargets[0].IpAddress')
echo "${EFS_IP_ADDR} $FILE_SYSTEM_DNS_NAME" | sudo tee -a /etc/hosts
sudo mount -t efs -o tls,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport $FILE_SYSTEM_DNS_NAME efs/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because both VPCs are connected via peering or a Transit Gateway, the CodeBuild runner can resolve the DNS name of the EFS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step three: Testing the EFS
&lt;/h2&gt;

&lt;p&gt;To test the file system, you can run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd efs
sudo mkdir testing-efs
cd testing-efs
sudo touch test.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Voila! Your EFS is now mounted in the CodeBuild runner. This setup is particularly useful for sharing data across services or maintaining state between builds.&lt;/p&gt;

&lt;p&gt;Is this a scenario you would use on your builds? Let me know in the comments!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>serverless</category>
      <category>aws</category>
      <category>containers</category>
    </item>
    <item>
      <title>Optimizing Amazon CloudWatch Costs for High-Traffic Lambda Functions with Advanced Logging Controls</title>
      <dc:creator>Matheus das Mercês</dc:creator>
      <pubDate>Mon, 17 Feb 2025 00:30:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/optimizing-amazon-cloudwatch-costs-for-high-traffic-lambda-functions-with-advanced-logging-controls-5406</link>
      <guid>https://dev.to/aws-builders/optimizing-amazon-cloudwatch-costs-for-high-traffic-lambda-functions-with-advanced-logging-controls-5406</guid>
      <description>&lt;p&gt;Are you happy with your CloudWatch bill? If you have one or more high-traffic Lambda functions, you pay for their duration and number of requests per month. However, there is a chance these functions are also increasing your CloudWatch costs - and this is why you need Advanced Logging Controls.&lt;/p&gt;

&lt;p&gt;In this article, I explain how to optimize CloudWatch costs while respecting compliance, and leveraging a simple AWS Systems Manager Automation Runbook to achieve full control of your logs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;CloudWatch log groups have two storage classes and their costs vary from region to region, but according to the &lt;a href="https://aws.amazon.com/cloudwatch/pricing/" rel="noopener noreferrer"&gt;CloudWatch pricing page&lt;/a&gt;, in &lt;strong&gt;&lt;em&gt;Ireland&lt;/em&gt;&lt;/strong&gt; (eu-west-1):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the &lt;strong&gt;Standard&lt;/strong&gt; class, which supports all log group features (a subscription filter, for example), you pay &lt;em&gt;$0.57&lt;/em&gt; per GB ingested.&lt;/li&gt;
&lt;li&gt;In the &lt;strong&gt;Infrequent-Access&lt;/strong&gt; class, where features are limited (you cannot attach a subscription filter, and the data can only be queried using Logs Insights), you pay &lt;em&gt;$0.285&lt;/em&gt; per GB ingested.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Knowing that you pay per GB for the data ingested into a log group, how can you determine the size of a message?&lt;/p&gt;

&lt;p&gt;The short answer: measuring every message before sending it to CloudWatch is not very practical, and there are situations where you cannot or do not want to reduce its size. Instead, you can control what kind of data is ingested into log groups, when, and for how long - and this is where the benefits are.&lt;/p&gt;

&lt;h2&gt;
  
  
  It is all about bytes
&lt;/h2&gt;

&lt;p&gt;Whenever your Lambda function sends logs to a log group, it uses the CloudWatch sub-feature &lt;strong&gt;Collect (data ingestion)&lt;/strong&gt;. Under the hood, your Lambda calls the PutLogEvents API from CloudWatch, generating a metric called &lt;em&gt;DataProcessing-Bytes&lt;/em&gt; for the Standard class and &lt;em&gt;DataProcessingIA-Bytes&lt;/em&gt; for Infrequent-Access. Then, based on these two metrics, AWS creates your bill (more about monitoring these metrics later in this article).&lt;/p&gt;

&lt;p&gt;To put this into perspective, let's imagine that you have a high-traffic Lambda function that executes &lt;strong&gt;5 million times&lt;/strong&gt; a day. I've separated some examples, including the number of bytes that message can generate and how much they can cost you at scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  Application logs
&lt;/h3&gt;

&lt;p&gt;Application logs are custom messages generated by your Lambda function. For instance, you want to log the event received, or debug messages you add to your code along the way, and so on.&lt;/p&gt;

&lt;p&gt;Imagine that your high-traffic Lambda function handles API Gateway requests. For some reason, you want to log the event received. Let's have a look at an example of that payload:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "resource": "/my/path",
  "path": "/my/path",
  "httpMethod": "GET",
  "headers": {
    "header1": "value1",
    "header2": "value1,value2"
  },
  "multiValueHeaders": {
    "header1": [
      "value1"
    ],
    "header2": [
      "value1",
      "value2"
    ]
  },
  "queryStringParameters": {
    "parameter1": "value1,value2",
    "parameter2": "value"
  },
  "multiValueQueryStringParameters": {
    "parameter1": [
      "value1",
      "value2"
    ],
    "parameter2": [
      "value"
    ]
  },
  "requestContext": {
    "accountId": "123456789012",
    "apiId": "id",
    "authorizer": {
      "claims": null,
      "scopes": null
    },
    "domainName": "id.execute-api.us-east-1.amazonaws.com",
    "domainPrefix": "id",
    "extendedRequestId": "request-id",
    "httpMethod": "GET",
    "identity": {
      "accessKey": null,
      "accountId": null,
      "caller": null,
      "cognitoAuthenticationProvider": null,
      "cognitoAuthenticationType": null,
      "cognitoIdentityId": null,
      "cognitoIdentityPoolId": null,
      "principalOrgId": null,
      "sourceIp": "IP",
      "user": null,
      "userAgent": "user-agent",
      "userArn": null,
      "clientCert": {
        "clientCertPem": "CERT_CONTENT",
        "subjectDN": "www.example.com",
        "issuerDN": "Example issuer",
        "serialNumber": "a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1",
        "validity": {
          "notBefore": "May 28 12:30:02 2019 GMT",
          "notAfter": "Aug  5 09:36:04 2021 GMT"
        }
      }
    },
    "path": "/my/path",
    "protocol": "HTTP/1.1",
    "requestId": "id=",
    "requestTime": "04/Mar/2020:19:15:17 +0000",
    "requestTimeEpoch": 1583349317135,
    "resourceId": null,
    "resourcePath": "/my/path",
    "stage": "$default"
  },
  "pathParameters": null,
  "stageVariables": null,
  "body": "Hello from Lambda!",
  "isBase64Encoded": false
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This message generates &lt;em&gt;1,903&lt;/em&gt; bytes. If this Lambda executes an average of 5 million times a day, by the end of the month the size in GB will be &lt;em&gt;274.66GB&lt;/em&gt; (&lt;em&gt;1,903 x 5,000,000 = 8.86GB a day, times 31 = 274.66GB&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;Looking at the pricing for the Standard class, by the end of the month this single log message from a single Lambda costs you &lt;em&gt;$156.56&lt;/em&gt; (&lt;em&gt;274.66GB x $0.57&lt;/em&gt;).&lt;/p&gt;
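&lt;p&gt;The same estimate can be reproduced with a small TypeScript sketch (&lt;em&gt;monthlyIngestionCostUSD&lt;/em&gt; is a hypothetical helper; the result differs by a few cents because the calculation above rounds the daily GiB figure before multiplying by 31):&lt;/p&gt;

```typescript
// Hypothetical helper reproducing the monthly ingestion-cost estimate above.
// Assumptions: Standard-class eu-west-1 price of $0.57/GB (as quoted) and
// GiB-based sizes (1 GiB = 1024^3 bytes), matching the figures in the text.
const GIB = 1024 ** 3;

function monthlyIngestionCostUSD(
  bytesPerMessage: number,
  invocationsPerDay: number,
  pricePerGb = 0.57,
  days = 31,
): number {
  const gibPerMonth = (bytesPerMessage * invocationsPerDay * days) / GIB;
  return gibPerMonth * pricePerGb;
}

// A 1,903-byte event payload logged 5 million times a day
const cost = monthlyIngestionCostUSD(1903, 5_000_000);
console.log(cost.toFixed(2)); // close to the ~$156 figure above
```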

&lt;p&gt;If you think this is an undesired cost - and somehow you want to keep logging these messages for debugging purposes, you need Advanced Logging Controls.&lt;/p&gt;

&lt;h3&gt;
  
  
  System logs
&lt;/h3&gt;

&lt;p&gt;System logs are log messages generated by the Lambda service by default. For instance, Lambda reports data with the &lt;strong&gt;duration&lt;/strong&gt;, &lt;strong&gt;billed duration&lt;/strong&gt;, &lt;strong&gt;memory&lt;/strong&gt;, and &lt;strong&gt;start&lt;/strong&gt; and &lt;strong&gt;end&lt;/strong&gt; time. Even if your Lambda code does not generate any application logs, by default, the system logs will always appear in your log group.&lt;/p&gt;

&lt;p&gt;Let's have a look at a report message generated by a &lt;strong&gt;warm&lt;/strong&gt; Lambda function in plain text (a cold Lambda has a different report message):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;START RequestId: 1716e630-0997-4bd6-aae3-0f681ef1e69c Version: $LATEST
END RequestId: 1716e630-0997-4bd6-aae3-0f681ef1e69c
REPORT RequestId: 1716e630-0997-4bd6-aae3-0f681ef1e69c Duration: 1.80 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 68 MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These logs created by one warm Lambda execution generate &lt;em&gt;262&lt;/em&gt; bytes. If this Lambda executes an average of 5 million times a day (not considering cold starts), by the end of the month the size in GB will be &lt;em&gt;37.82GB&lt;/em&gt; (&lt;em&gt;262 x 5,000,000 = 1.22GB a day, times 31 = 37.82GB&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;Looking at the pricing for the Standard class, by the end of the month these report messages from a single Lambda cost you &lt;em&gt;$21.56&lt;/em&gt; (&lt;em&gt;37.82GB x $0.57&lt;/em&gt;). &lt;/p&gt;

&lt;p&gt;Not a fortune, right? The important question to ask yourself is: do you need these log messages? Commonly, you would analyze the duration and memory consumed by a Lambda to reduce its execution costs, for example. But if you don't actively look at these messages, it is wise to reduce their verbosity to optimize your costs at scale. &lt;/p&gt;

&lt;h3&gt;
  
  
  Measuring the size of a log message
&lt;/h3&gt;

&lt;p&gt;How can you calculate the size, in bytes, of a message sent to CloudWatch, as I did for the previous examples?&lt;/p&gt;

&lt;p&gt;You can use the same query I've used in CloudWatch Log Insights:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FIELDS @message, ingested_bytes
#| filter @message like 'REPORT'
| STATS sum(strlen(@message)) AS ingested_bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;ATTENTION&lt;/strong&gt;: Querying data using Insights can also be expensive. For each GB of data scanned, you will pay $0.0057 (eu-west-1). Make sure you run this query in a log group that does not retain much data or only for a short time window.&lt;/p&gt;
&lt;/blockquote&gt;
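&lt;p&gt;Alternatively, you can make a rough estimate locally, before the data ever reaches CloudWatch. The sketch below is a hypothetical Node.js/TypeScript helper (not part of any AWS SDK) that measures the UTF-8 size of a sample message; note that CloudWatch also adds a small fixed per-event overhead that this ignores:&lt;/p&gt;

```typescript
// Sketch: measuring a sample log message locally (Node.js). Buffer.byteLength
// returns the UTF-8 size of a string, which approximates what CloudWatch
// ingests per message.
function messageBytes(message: unknown): number {
  const text = typeof message === "string" ? message : JSON.stringify(message);
  return Buffer.byteLength(text, "utf8");
}

console.log(messageBytes({ httpMethod: "GET", path: "/my/path" }));
```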

&lt;h2&gt;
  
  
  Finding the perfect balance
&lt;/h2&gt;

&lt;p&gt;There is always a balance between &lt;strong&gt;compliance&lt;/strong&gt; and &lt;strong&gt;cost optimization&lt;/strong&gt;. This is especially true in large organizations, where there might be constraints related to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Retention period, as in how long the data must be stored to be compliant with the company's policies&lt;/li&gt;
&lt;li&gt;The type of data ingested, which is more often related to the company's technical guidelines for specific AWS services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The important aspect of this trade-off is &lt;strong&gt;being able to negotiate&lt;/strong&gt;. By clearly identifying these constraints alongside the analysis of cost reduction, you have better arguments to start a negotiation that benefits all parties involved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up a retention policy
&lt;/h2&gt;

&lt;p&gt;Knowing how long you have to keep the data in a log group, you can start by setting up a retention policy for your log groups.&lt;/p&gt;

&lt;p&gt;Less expensive than the &lt;em&gt;DataProcessing-Bytes&lt;/em&gt; metric, there is another metric called &lt;em&gt;TimedStorage-ByteHrs&lt;/em&gt;, which measures how much data is stored and for how long - AWS also charges you for this. The cost for the Ireland region is &lt;em&gt;$0.03&lt;/em&gt; per GB of compressed data (a 0.15 compression ratio is assumed for each uncompressed byte).&lt;/p&gt;

&lt;p&gt;Having a retention policy can help your application optimize costs and also be more sustainable. This is especially relevant when using &lt;strong&gt;CDK&lt;/strong&gt; (AWS Cloud Development Kit) as your IaC (Infrastructure as Code) tool, because the default configuration for a log group is to never expire the stored data. If you don't need the data after some period, it's best to set up a retention policy.&lt;/p&gt;

&lt;p&gt;Below you can find an example, in TypeScript, of how to change the CDK default when creating a Log Group and setting the retention policy you define:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new LogGroup(this, 'MyLogGroup', {
  logGroupName: '/aws/lambda/advanced-logging-control',
  retention: RetentionDays.ONE_WEEK,
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Instrumenting your code
&lt;/h2&gt;

&lt;p&gt;One common mistake when developing a Lambda function using &lt;strong&gt;JavaScript&lt;/strong&gt; (whether or not you use TypeScript features) is to log everything using the &lt;code&gt;console.log&lt;/code&gt; method. In order to fully leverage &lt;strong&gt;Advanced Logging Controls&lt;/strong&gt; and be able to manipulate the log level, it is important to use the correct methods when logging with JavaScript.&lt;/p&gt;

&lt;p&gt;Something not very well-known is that, natively, the Lambda function &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs-advanced.html#monitoring-cloudwatchlogs-log-level" rel="noopener noreferrer"&gt;logging configuration has 6 application log levels&lt;/a&gt; that you can define: &lt;strong&gt;INFO&lt;/strong&gt;, &lt;strong&gt;WARN&lt;/strong&gt;, &lt;strong&gt;ERROR&lt;/strong&gt;, &lt;strong&gt;TRACE&lt;/strong&gt;, &lt;strong&gt;DEBUG&lt;/strong&gt;, and &lt;strong&gt;FATAL&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I have tested all JavaScript &lt;strong&gt;console&lt;/strong&gt; methods when sending to a CloudWatch log group, and I separated a few examples from the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/console" rel="noopener noreferrer"&gt;console object documentation&lt;/a&gt; and how they match the Lambda application log-level configuration. Depending on the log level you set, different console methods will affect your CloudWatch log group. When using them properly, you make your Lambda function ready to switch the log level on the fly:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;INFO&lt;/strong&gt; level shows messages from all JavaScript console methods, except for &lt;strong&gt;console.debug&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WARN&lt;/strong&gt; level shows messages from &lt;strong&gt;console.warn&lt;/strong&gt;, &lt;strong&gt;console.assert&lt;/strong&gt;, and &lt;strong&gt;console.error&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ERROR&lt;/strong&gt; level shows messages from &lt;strong&gt;console.error&lt;/strong&gt; only.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TRACE&lt;/strong&gt; level shows messages from all JavaScript console methods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DEBUG&lt;/strong&gt; level shows messages from all JavaScript console methods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FATAL&lt;/strong&gt; level does not show any messages from any JavaScript console method.&lt;/li&gt;
&lt;/ol&gt;
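&lt;p&gt;The mapping above can be encoded as a small TypeScript lookup (a hypothetical sketch based on my tests, not an official AWS API):&lt;/p&gt;

```typescript
// Sketch encoding the observed mapping between Lambda application log levels
// and which JavaScript console methods still reach the log group.
type Level = "INFO" | "WARN" | "ERROR" | "TRACE" | "DEBUG" | "FATAL";
type Method = "log" | "info" | "debug" | "warn" | "error" | "assert";

const visibleMethods: Record<Level, Method[]> = {
  TRACE: ["log", "info", "debug", "warn", "error", "assert"], // everything
  DEBUG: ["log", "info", "debug", "warn", "error", "assert"], // everything
  INFO: ["log", "info", "warn", "error", "assert"],           // all but console.debug
  WARN: ["warn", "error", "assert"],
  ERROR: ["error"],
  FATAL: [],                                                  // nothing
};

function isLogged(level: Level, method: Method): boolean {
  return visibleMethods[level].includes(method);
}

console.log(isLogged("ERROR", "warn")); // false
```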

&lt;p&gt;Using the proper JavaScript console methods for specific situations not only makes your Lambda function ready to use Advanced Logging Controls but also helps your JavaScript comply with &lt;strong&gt;JavaScript's semantics&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Switching log level on the fly
&lt;/h2&gt;

&lt;p&gt;Now, let's get down to business and see how to natively switch the log level of your Lambda function when needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Initial setup
&lt;/h3&gt;

&lt;p&gt;I've created a very simple Lambda function with a few examples of using the JavaScript console methods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const handler = async (
    event: any,
): Promise&amp;lt;string&amp;gt; =&amp;gt; {
    console.log('Received event:', JSON.stringify(event, null, 2));
    console.info('Info: Processing event');
    console.debug('Debug: Event details', event);
    console.warn('Warning: This is a sample warning message');
    console.error('Error: This is a sample error message');
    console.assert(false, 'Assert: This is a sample assert message');

    return 'Hello World!';
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using CDK, I set the System Log Level to WARN and the Application Log Level to ERROR:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new NodejsFunction(this, 'AdvancedLoggingControlFunction', {
  functionName: 'advanced-logging-control',
  entry: 'src/hello-world/handler.ts',
  handler: 'handler',
  //set application log level to ERROR and system log level to WARN
  applicationLogLevelV2: ApplicationLogLevel.ERROR,
  systemLogLevelV2: SystemLogLevel.WARN,
  //logging format must be set to JSON
  loggingFormat: LoggingFormat.JSON,
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;IMPORTANT&lt;/strong&gt;: When changing the System Log Level and Application Log Level, the logging format must be set to JSON (defaults to Text).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This means that I am &lt;strong&gt;overriding CDK's default configuration&lt;/strong&gt; for logging and reducing the messages sent to the CloudWatch log group. According to the explanation in the section above, only messages from console.error will now show up in my CloudWatch log group.&lt;/p&gt;

&lt;p&gt;If I execute this Lambda function, this is what I have in my CloudWatch Log Group:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dfqgi2vsgj3fr7nokk8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dfqgi2vsgj3fr7nokk8.png" alt=" " width="800" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that there are no system log messages (for instance, the Lambda report) because I set the System Log Level to WARN, and only the console.error method reaches the Log Group because the Application Log Level is &lt;strong&gt;ERROR&lt;/strong&gt; only.&lt;/p&gt;
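To make the filtering concrete, here is a minimal TypeScript sketch of the idea. This is a conceptual model, not the Lambda runtime's actual implementation; the method-to-level mapping shown is the one documented for the Node.js runtime with JSON log format.

```typescript
// Conceptual sketch only (not the Lambda runtime's real source): how console
// methods map to log levels and get filtered by the Application Log Level.
const severity = new Map([
  ['DEBUG', 1],
  ['INFO', 2],
  ['WARN', 3],
  ['ERROR', 4],
]);

// Node.js console methods and the level Lambda assigns to them in JSON format.
const methodLevel = new Map([
  ['debug', 'DEBUG'],
  ['log', 'INFO'],
  ['info', 'INFO'],
  ['warn', 'WARN'],
  ['error', 'ERROR'],
]);

// A message is kept only when its level is at or above the configured threshold.
function isEmitted(method: string, appLogLevel: string): boolean {
  const level = methodLevel.get(method);
  if (level === undefined) {
    return false;
  }
  return (severity.get(level) ?? 0) >= (severity.get(appLogLevel) ?? 0);
}

console.log(isEmitted('error', 'ERROR')); // true: console.error passes ERROR
console.log(isEmitted('warn', 'ERROR')); // false: console.warn is filtered out
```

With the Application Log Level set to ERROR, only the last call in the handler above survives the filter, which matches the screenshot below.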

&lt;p&gt;By only logging what you need, you can significantly reduce the costs of the &lt;em&gt;DataProcessing-Bytes&lt;/em&gt; metric for a high-traffic Lambda Function and stop &lt;strong&gt;polluting&lt;/strong&gt; your log group.&lt;/p&gt;

&lt;p&gt;However, there will be situations where you want your Lambda function to be more &lt;strong&gt;verbose&lt;/strong&gt;, to debug something or to have extra information in your log group. Instead of always logging everything from the start, you might want to log with the &lt;strong&gt;least detail&lt;/strong&gt; possible and change it afterward, updating your Logging Configuration to DEBUG (&lt;strong&gt;most detail&lt;/strong&gt;) when needed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws lambda update-function-configuration \
  --function-name advanced-logging-control \
  --logging-config LogFormat=JSON,ApplicationLogLevel=DEBUG
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: By not specifying the System Log Level in the parameters, AWS will automatically set it to INFO.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After the change, executing the Lambda function now gives us more messages in the log group:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fal756yx4mnxehscxhs7t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fal756yx4mnxehscxhs7t.png" alt=" " width="800" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As mentioned before, the DEBUG level will show messages from all console JavaScript methods.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automating the process
&lt;/h3&gt;

&lt;p&gt;Executing a CLI command every time you need to debug something is not very handy. More than that, you might want to execute this change in a more controlled and auditable way, especially when you do not have (or must not have) elevated privileges to do so.&lt;/p&gt;

&lt;p&gt;To improve operational excellence, I have created a simple Systems Manager Automation Runbook, so this change can be executed in the AWS environment without manually updating the Lambda configuration using your own IAM permissions. Instead, I have created an &lt;strong&gt;IAM role to be assumed&lt;/strong&gt; by the SSM Automation Runbook. Plus, by doing that, the changes can be recorded by &lt;strong&gt;CloudTrail&lt;/strong&gt; for compliance reasons.&lt;/p&gt;

&lt;p&gt;More importantly, you want to analyze the Lambda function &lt;strong&gt;for a short period&lt;/strong&gt; and then reset it to the previous configuration to avoid extra costs on the &lt;em&gt;DataProcessing-Bytes&lt;/em&gt; metric.&lt;/p&gt;

&lt;p&gt;First of all, define the IAM role to be assumed by the Automation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const automationIamRole = new Role(this, 'AutomationIamRole', {
  assumedBy: new ServicePrincipal('ssm.amazonaws.com'),
});

automationIamRole.addToPolicy(
  new PolicyStatement({
    actions: [
      'lambda:GetFunctionConfiguration',
      'lambda:UpdateFunctionConfiguration',
    ],
    resources: [`arn:aws:lambda:${Aws.REGION}:${Aws.ACCOUNT_ID}:function:*`],
  })
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: Achieving &lt;em&gt;least privilege&lt;/em&gt; here can be a challenge because the Automation does not know upfront which Lambda function will be manipulated. Do you have an idea of how this can be solved? Let me know in the comments!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Automation Runbook definition in CDK looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new CfnDocument(this, 'ModifyLambdaLogLevelDocument', {
  documentType: "Automation",
  name: 'ModifyLambdaLogLevelDocument',
  documentFormat: "YAML",
  updateMethod: "NewVersion",
  content: {
    schemaVersion: "0.3",
    description: "Modify the log level of a Lambda function temporarily. After 10 minutes, the log level will be reset to the original value.",
    assumeRole: automationIamRole.roleArn,
    parameters: {
      FunctionName: {
        type: "String",
        description: "The name of the Lambda Function",
      },
      LogLevel: {
        type: "String",
        description: "The log level to set",
        allowedValues: [
          "DEBUG",
          "INFO",
          "WARN",
        ],
      },
      Reason: {
        type: "String",
        description: "The reason for the change",
      },
    },
    mainSteps: [
      {
        name: "GetCurrentLoggingConfig",
        action: "aws:executeAwsApi",
        inputs: {
          Service: "Lambda",
          Api: "getFunctionConfiguration",
          FunctionName: "{{FunctionName}}",
        },
        outputs: [
          {
            Name: "CurrentLoggingConfig",
            Selector: "$.LoggingConfig",
            Type: "StringMap",
          },
        ],
      },
      {
        name: "ModifyLogLevel",
        action: "aws:executeAwsApi",
        inputs: {
          Service: "Lambda",
          Api: "updateFunctionConfiguration",
          FunctionName: "{{FunctionName}}",
          Description: "Update log level to {{LogLevel}}",
          LoggingConfig: {
            ApplicationLogLevel: "{{LogLevel}}",
            LogFormat: "JSON",
            SystemLogLevel: "{{LogLevel}}",
          },
        },
      },
      {
        name: "Wait10Minutes",
        action: "aws:sleep",
        inputs: {
          Duration: "PT10M",
        },
      },
      {
        name: "ResetLogLevel",
        action: "aws:executeAwsApi",
        inputs: {
          Service: "Lambda",
          Api: "updateFunctionConfiguration",
          FunctionName: "{{FunctionName}}",
          Description: "Reset log level to original value",
          LoggingConfig: "{{GetCurrentLoggingConfig.CurrentLoggingConfig}}",
        },
      }
    ]
  },
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This document executes the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Gets the &lt;strong&gt;current logging configuration&lt;/strong&gt; of the provided Lambda function and saves it in a variable. This value is used in the last step to reset the logging configuration to its original value.&lt;/li&gt;
&lt;li&gt;Executes the API call to update the Lambda logging configuration with the &lt;strong&gt;provided log level&lt;/strong&gt;. It accepts DEBUG, INFO, and WARN, the most relevant levels to change on the fly.&lt;/li&gt;
&lt;li&gt;Sleeps for &lt;strong&gt;10 minutes before changing it back&lt;/strong&gt;. If 10 minutes is not enough time to debug, consider increasing the duration or receiving it as a document parameter.&lt;/li&gt;
&lt;li&gt;Resets the log level to its &lt;strong&gt;original value&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
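With the runbook deployed, starting a temporary log-level change could look roughly like this from the CLI. The document name matches the CDK definition above; the parameter values are illustrative:

```shell
# Start the automation (hypothetical parameter values); the runbook bumps the
# log level, waits 10 minutes, and then restores the original configuration.
aws ssm start-automation-execution \
  --document-name ModifyLambdaLogLevelDocument \
  --parameters "FunctionName=advanced-logging-control,LogLevel=DEBUG,Reason=Investigating timeouts"
```

The execution status and the output of each step can then be followed in the Systems Manager console.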

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: It is also possible to add an &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-action-approve.html" rel="noopener noreferrer"&gt;approval step&lt;/a&gt; to be approved by one of your colleagues, although I have not included that in this example. &lt;/p&gt;

&lt;p&gt;The full example, including the Lambda function and the SSM Automation Runbook, can be found &lt;a href="https://github.com/matheusdasmerces/lambda-advanced-logging-control" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Monitoring CloudWatch costs
&lt;/h2&gt;

&lt;p&gt;Using the &lt;strong&gt;Cost Explorer&lt;/strong&gt; console can give you an overview of the &lt;em&gt;DataProcessing-Bytes&lt;/em&gt; and &lt;em&gt;TimedStorage-ByteHrs&lt;/em&gt; metrics and how their costs increase or decrease over time.&lt;/p&gt;

&lt;p&gt;You can select both metrics under the "Usage Type" filter:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauzt1mx5co0vxh0g6qyi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauzt1mx5co0vxh0g6qyi.png" alt=" " width="322" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Managing CloudWatch costs efficiently is crucial, especially for high-traffic Lambda functions. By implementing Advanced Logging Controls, you can significantly reduce unnecessary log ingestion and storage costs while maintaining compliance.&lt;/p&gt;

&lt;p&gt;Key takeaways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitor metrics like DataProcessing-Bytes and TimedStorage-ByteHrs to track expenses.&lt;/li&gt;
&lt;li&gt;Ensure logs are only stored for the necessary period to avoid excessive storage fees.&lt;/li&gt;
&lt;li&gt;Leverage Lambda’s built-in log levels to filter out unnecessary logs and avoid polluting CloudWatch.&lt;/li&gt;
&lt;li&gt;Utilize AWS Systems Manager Automation Runbooks to temporarily adjust log levels when debugging, without requiring constant manual intervention.&lt;/li&gt;
&lt;li&gt;Use Cost Explorer to track trends and make informed decisions on further optimizations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By adopting these practices, you can strike the right balance between compliance and cost efficiency, ensuring that your CloudWatch bills remain manageable while still providing the insights your applications need.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>serverless</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Building Scalable CI/CD Pipelines with Self-Hosted GitHub Actions on Amazon CodeBuild</title>
      <dc:creator>Matheus das Mercês</dc:creator>
      <pubDate>Wed, 05 Feb 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/matheusdasmerces/building-scalable-cicd-pipelines-with-self-hosted-github-actions-on-amazon-codebuild-14di</link>
      <guid>https://dev.to/matheusdasmerces/building-scalable-cicd-pipelines-with-self-hosted-github-actions-on-amazon-codebuild-14di</guid>
      <description>&lt;p&gt;Since Grady Booch introduced &lt;strong&gt;continuous integration (CI)&lt;/strong&gt; in 1991, code changes can be tested and automatically integrated into a shared repository. Although CI/CD solutions have evolved quickly, the concept remains the same: you push, test, and deploy, ensuring the fast delivery of software updates.&lt;/p&gt;

&lt;p&gt;In this post, you will discover how PostNL has designed fully serverless and centralized CI/CD pipelines in the AWS environment. These pipelines provide scalability, security, and convenience through services such as IAM and Amazon VPC.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Nowadays, building your software on CI/CD pipelines is an easy task. Especially with tools like GitHub and Atlassian BitBucket, which offer hosted virtual machines to execute pipelines in a plug-and-play approach: you create a build file, define your build steps, and voila - your pipeline is running.&lt;/p&gt;

&lt;p&gt;At PostNL, we use GitHub for collaboration and version control, and we leverage the GitHub-hosted runners as our CI/CD tool to run workflows for many use cases across the organization, such as deploying infrastructure and software, executing all kinds of tests, and so on. &lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;AWS Center of Excellence (AWS CoE)&lt;/strong&gt; team is dedicated to provisioning and establishing foundational infrastructure for PostNL development teams, to efficiently develop and deploy business-critical applications according to the best practices. &lt;/p&gt;

&lt;p&gt;With that in mind, we have decided to design a solution that allows private communication between GitHub and AWS and supports architectures such as ARM64 in the pipelines, leveraging the &lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/04/aws-codebuild-managed-github-action-runners/" rel="noopener noreferrer"&gt;Amazon CodeBuild feature that allows self-hosting GitHub action runners&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  About GitHub-Hosted Runners
&lt;/h2&gt;

&lt;p&gt;GitHub offers hosted virtual machines to run workflows. The GitHub-hosted runners are used by default across the PostNL organization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Networking
&lt;/h3&gt;

&lt;p&gt;By default, GitHub-hosted runners have access to the public internet. This means that when communicating with resources in AWS, the traffic goes over the public internet. The runners are also shared across all GitHub customers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbu4php6pnz44qj8e9qhv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbu4php6pnz44qj8e9qhv.png" alt="GitHub-Hosted runners" width="713" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, things can get slightly complicated when a team needs, for example, to run integration tests against private resources inside an Amazon VPC, without having to expose these resources through a public Application Load Balancer or Amazon API Gateway.&lt;/p&gt;

&lt;h3&gt;
  
  
  Compute images
&lt;/h3&gt;

&lt;p&gt;GitHub offers different types of runners for public and private repositories. Since all the repositories inside the PostNL organization are private, teams are limited to the runners supported for private repositories. That list does not include, for example, the Linux virtual machine on the &lt;strong&gt;ARM64&lt;/strong&gt; architecture. This limits PostNL teams to executing builds on the &lt;strong&gt;Intel&lt;/strong&gt; architecture only.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note: According to &lt;a href="https://docs.github.com/en/actions/using-github-hosted-runners/using-github-hosted-runners/about-github-hosted-runners#standard-github-hosted-runners-for-public-repositories" rel="noopener noreferrer"&gt;GitHub documentation&lt;/a&gt;, the ARM64 Linux runner is in public preview and subject to change. It is only available for public repositories.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why Self-Host GitHub Action Runners?
&lt;/h2&gt;

&lt;p&gt;A self-hosted runner is a system that we deploy and manage to execute jobs from GitHub Actions. This approach offers more control over hardware, operating system, and software tools than GitHub-hosted runners provide, meeting PostNL needs such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create &lt;strong&gt;custom hardware configurations&lt;/strong&gt; with processing power or memory to run larger jobs;&lt;/li&gt;
&lt;li&gt;Communicate with resources available on the &lt;strong&gt;internal network&lt;/strong&gt;;&lt;/li&gt;
&lt;li&gt;Choose a &lt;strong&gt;CPU architecture&lt;/strong&gt; not offered by GitHub-hosted runners;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Self-hosted runners can be physical, virtual, in a container, on-premises, or in a cloud. This also means that by self-hosting GitHub runners we are responsible for the cost of maintaining the runner system.&lt;/p&gt;

&lt;h2&gt;
  
  
  PostNL CCoE Shared Services
&lt;/h2&gt;

&lt;p&gt;As the PostNL CCoE (Cloud Center of Excellence), our mission is to drive innovation and business agility for the PostNL engineering community and enable them to build cloud-native solutions. We do this by delivering a Landing Zone and facilitating knowledge sharing for teams to build, innovate, and own cloud solutions while delivering maximum customer value.&lt;/p&gt;

&lt;p&gt;One of the practice areas under the CCoE structure is called &lt;strong&gt;Shared Services&lt;/strong&gt;, managed by the AWS CoE team. In practice, the Shared Services is an AWS account inside the PostNL organization in AWS, responsible for offering business and engineering functions to PostNL development teams.&lt;/p&gt;

&lt;h3&gt;
  
  
  Network setup
&lt;/h3&gt;

&lt;p&gt;At PostNL we are using a &lt;strong&gt;Transit Gateway&lt;/strong&gt; to enable private network communications. All AWS accounts of the development teams are connected to &lt;strong&gt;two different routing tables&lt;/strong&gt; on the Transit Gateway: one for production accounts and one for non-production accounts. &lt;/p&gt;

&lt;p&gt;By providing the accounts with CIDR ranges coming from one big supernet, we could prevent network traffic between production and non-production accounts by creating just one &lt;strong&gt;blackhole&lt;/strong&gt; route containing the supernet in the routing table. Since we wanted all teams to be able to access our &lt;strong&gt;Shared Services Account&lt;/strong&gt;, this account is attached to its own routing table, providing access to all production and non-production accounts. &lt;/p&gt;

&lt;p&gt;For example, one of the services we provide in the Shared Services Account is DNS. For DNS we are using both public and private hosted zones. Although the public hosted zones are &lt;strong&gt;authoritative&lt;/strong&gt;, the private hosted zones need to be reachable from the PostNL internal DNS server, so we created DNS resolver endpoints in the Shared Services Account. &lt;/p&gt;

&lt;p&gt;For every development team account we create a private hosted zone and associate it with all the VPCs of its &lt;strong&gt;workload&lt;/strong&gt; and with the VPC of the Shared Services Account, so the entries in all the &lt;strong&gt;private hosted zones&lt;/strong&gt; can be retrieved from the Shared Services Account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5l6mintftn4pstg9b0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5l6mintftn4pstg9b0v.png" alt="Network setup shared services" width="800" height="853"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because Shared Services operates at the organization level and can seamlessly communicate with the &lt;strong&gt;PostNL network in the AWS ecosystem&lt;/strong&gt;, we have decided to host the Self-Hosted Runner in the Shared Services account, making it possible to process jobs for multiple repositories in the organization while keeping security and compliance in the centralized CCoE environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  The PostNL Self-Hosted Runner
&lt;/h2&gt;

&lt;p&gt;The PostNL Self-Hosted Runner leverages the Amazon CodeBuild feature that allows self-hosting GitHub action runners. This solution provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A fully managed service to handle builds on &lt;strong&gt;ephemeral hosts&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;No need for managing &lt;strong&gt;the underlying infrastructure&lt;/strong&gt;, fully serverless.&lt;/li&gt;
&lt;li&gt;Seamless integration with the &lt;strong&gt;PostNL internal network&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Full control over the runner's &lt;strong&gt;image and compute type&lt;/strong&gt;, within what CodeBuild supports.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Connection between GitHub and AWS
&lt;/h3&gt;

&lt;p&gt;As the solution is offered and managed by the PostNL CCoE in the Shared Services account, the centralized approach enables development teams to execute builds on AWS without having to manage the CodeBuild project or the connection between AWS and GitHub.&lt;/p&gt;

&lt;p&gt;This is a &lt;a href="https://docs.aws.amazon.com/codebuild/latest/userguide/connections-github-app.html" rel="noopener noreferrer"&gt;one-time configuration&lt;/a&gt; made in the Shared Services AWS account, creating a GitHub App connection between GitHub and Amazon CodeBuild. This is necessary to allow CodeBuild to execute jobs in the PostNL GitHub organization.&lt;/p&gt;

&lt;h3&gt;
  
  
  One CodeBuild project per team
&lt;/h3&gt;

&lt;p&gt;By creating one CodeBuild project for each team, we can achieve least privilege by using ABAC (Attribute-Based Access Control) based on the repository prefix: only the GitHub repositories belonging to that specific team can run builds on their specific CodeBuild project. This is achieved by using the Filter Group of the CodeBuild source:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhm4x7ul71a1gkpjkmgpg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhm4x7ul71a1gkpjkmgpg.png" alt="CodeBuild project config" width="793" height="302"&gt;&lt;/a&gt;&lt;/p&gt;
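The prefix-based matching above can be illustrated with a small TypeScript sketch. This is only a conceptual model of what the webhook filter group evaluates; the team prefix and repository names below are hypothetical:

```typescript
// Illustrative sketch of the prefix-based ABAC idea: only repositories whose
// names start with the team's prefix may trigger builds on the team's project.
// The prefix and repository names are hypothetical.
function buildAllowed(repositoryName: string, teamPrefix: string): boolean {
  // Conceptually equivalent to a webhook filter pattern like '^team-a-'.
  return repositoryName.startsWith(teamPrefix + '-');
}

console.log(buildAllowed('team-a-payments-service', 'team-a')); // true
console.log(buildAllowed('team-b-tracking-api', 'team-a')); // false
```

In the actual setup, this check is enforced by CodeBuild's webhook filter group rather than by application code, so repositories outside the prefix never reach the project.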

&lt;p&gt;Since CodeBuild allows assigning at least one security group to the project, having one security group per team also makes the setup more fine-grained.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq33es39tzhm2t0tgfi19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq33es39tzhm2t0tgfi19.png" alt="Project" width="341" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CodeBuild supports up to &lt;strong&gt;5,000 build projects&lt;/strong&gt; per region.&lt;/p&gt;

&lt;h3&gt;
  
  
  Leveraging existing IAM role permissions
&lt;/h3&gt;

&lt;p&gt;CodeBuild assumes the existing OIDC (OpenID Connect) role configured for that specific GitHub repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttuc2ys4e8w3ku7epi4q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttuc2ys4e8w3ku7epi4q.png" alt="IAM" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Internal network communication
&lt;/h3&gt;

&lt;p&gt;The solution also enables development teams to execute end-to-end and integration tests against resources in the internal network, leveraging the Shared Services Account network setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faw724sezpi8dvxlh4xhz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faw724sezpi8dvxlh4xhz.png" alt="Internal network" width="725" height="927"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This can be achieved by using Security Group Referencing towards the security group of the CodeBuild project, adding an extra layer of security.&lt;/p&gt;

&lt;h3&gt;
  
  
  Customizing image and compute type
&lt;/h3&gt;

&lt;p&gt;By default, the Self-Hosted CodeBuild projects are created with the Linux ARM64 image type (not supported by standard GitHub-hosted runners) and the Medium compute type (8 GB memory, 4 vCPUs).&lt;br&gt;
When running builds, development teams are free to customize the image and compute type, using any type supported by CodeBuild:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;runs-on:
  - codebuild-&amp;lt;project-name&amp;gt;-${{ github.run_id }}-${{ github.run_attempt }}
  - image:&amp;lt;environment-type&amp;gt;-&amp;lt;image-identifier&amp;gt;
  - instance-size:&amp;lt;instance-size&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
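As an illustration of the label format above, the three runs-on entries could be assembled programmatically. This TypeScript sketch is purely illustrative; the project name, image identifier, and instance size are example values:

```typescript
// Illustrative sketch: building the runs-on labels for a CodeBuild-hosted
// runner. Project name, image identifier, and instance size are examples.
function runsOnLabels(projectName: string, image: string, instanceSize: string): string[] {
  return [
    // github.run_id and github.run_attempt are resolved by GitHub at runtime
    'codebuild-' + projectName + '-${{ github.run_id }}-${{ github.run_attempt }}',
    'image:' + image,
    'instance-size:' + instanceSize,
  ];
}

console.log(runsOnLabels('team-a-runner', 'arm-3.0', 'small'));
```

Including the run id and run attempt in the first label ensures each workflow run maps to its own ephemeral CodeBuild build.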



&lt;blockquote&gt;
&lt;p&gt;An example of how the CodeBuild project can be set up can be found &lt;a href="https://github.com/matheusdasmerces/github-codebuild-selfhosted" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Next steps
&lt;/h2&gt;

&lt;p&gt;The PostNL Self-Hosted Runner is currently in its first version. Although it is already a reliable choice for running builds that require more power and connectivity across the organization, we want to evolve it into a more robust solution, with a couple of ideas in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Limit the size of the runners to a list pre-defined by the CCoE per team use case, since running very large builds can lead to cost increases. We can achieve that by creating, for example, a reusable GitHub Workflow shared across the PostNL organization.&lt;/li&gt;
&lt;li&gt;Create our own buildspec.yml file: since CodeBuild manages the build commands, we would have to manipulate the build steps (INSTALL, PRE_BUILD, and POST_BUILD) to install new packages/software needed for builds at PostNL.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As CCoE, we want to be close to the development teams using the PostNL Self-Hosted Runner to gather feedback on how the solution can be improved.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;The CCoE PostNL's implementation of Self-Hosted GitHub Action Runners using Amazon CodeBuild represents a significant advancement in our CI/CD pipeline capabilities. &lt;/p&gt;

&lt;p&gt;By leveraging AWS's serverless infrastructure and integrating it with our internal network, we have achieved a scalable, secure, and efficient solution that meets the diverse needs of our development teams. This centralized approach not only enhances our ability to run complex builds and tests but also ensures compliance and security within the PostNL ecosystem.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>cloud</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>Implementing Regression Tests for AWS Lambda with CDK Fine-Grained Assertions</title>
      <dc:creator>Matheus das Mercês</dc:creator>
      <pubDate>Mon, 20 Jan 2025 00:30:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/implementing-regression-tests-for-aws-lambda-with-cdk-fine-grained-assertions-jfa</link>
      <guid>https://dev.to/aws-builders/implementing-regression-tests-for-aws-lambda-with-cdk-fine-grained-assertions-jfa</guid>
      <description>&lt;p&gt;When creating infrastructure on AWS using IaC (Infrastructure as Code) tools, knowing if the CloudFormation Template generated by the code has the expected definition is often challenging. There is also a need to ensure that the eventual changes introduce no unintended breaks to your infrastructure code.&lt;/p&gt;

&lt;p&gt;In this article, I will explore why CDK (Cloud Development Kit) is a game-changer for testing IaC and how to leverage the CDK Fine-Grained Assertions approach in TypeScript to check the integrity of your Lambda functions definition before deploying your infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;One of the biggest challenges of using IaC tools such as Serverless Framework, AWS SAM (Serverless Application Model), or Terraform is to make sure the output is correct and according to the standards defined by you or your organization, because these tools are YAML or JSON based. &lt;/p&gt;

&lt;p&gt;This is especially true for Lambda functions, where you want to make sure that your infrastructure code is generating a Lambda function that has, for instance, the desired timeout time, memory size, and the required IAM permissions to serve its purpose.&lt;/p&gt;

&lt;p&gt;Thankfully, when using &lt;strong&gt;CDK&lt;/strong&gt; as your IaC tool, it is possible to create assertions and let the build fail in CI if some configuration deviates due to a code change, for example. This approach is also known as &lt;strong&gt;Regression Tests&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  CDK: A game changer for testing IaC​
&lt;/h2&gt;

&lt;p&gt;When AWS &lt;a href="https://aws.amazon.com/about-aws/whats-new/2019/07/the-aws-cloud-development-kit-aws-cdk-is-now-generally-available1/" rel="noopener noreferrer"&gt;introduced&lt;/a&gt; CDK in 2019, the possibility of writing infrastructure using familiar programming languages such as TypeScript and Java sounded nice. However, I have to be honest - it did not catch my attention at first. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhu8mmoxek1gjct2pi6r4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhu8mmoxek1gjct2pi6r4.png" alt="From https://docs.aws.amazon.com/cdk/v2/guide/home.html" width="774" height="576"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;From &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/home.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cdk/v2/guide/home.html&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I've had quite some experience with Serverless Framework and AWS SAM, and I was more in favor of using these tools because their use of YAML files provides a more explicit and declarative way to define infrastructure - less abstraction - compared to CDK.&lt;/p&gt;

&lt;p&gt;It was only in &lt;strong&gt;November 2021&lt;/strong&gt; that I changed my mind: The &lt;strong&gt;CDK assertions library&lt;/strong&gt; was &lt;a href="https://aws.amazon.com/blogs/developer/testing-cdk-applications-in-any-language/" rel="noopener noreferrer"&gt;announced&lt;/a&gt; and that was, for me, a true game changer: being able to write infrastructure in TypeScript was already great, but writing fine-grained tests towards infrastructure code really caught my eye.&lt;/p&gt;
&lt;h2&gt;
  
  
  Does that mean that CDK is better?
&lt;/h2&gt;

&lt;p&gt;The short and cliche answer is: &lt;strong&gt;It depends&lt;/strong&gt;. I personally still like YAML-based IaC tools for specific use cases, for instance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple and/or small projects, when you need a more descriptive infrastructure definition with lower abstraction.&lt;/li&gt;
&lt;li&gt;When you work in a team that is already familiar with YAML and has less knowledge of programming languages, reducing the learning curve.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nowadays, for new projects that I know are going to grow and demand strict testing, I'd rather start with CDK, especially because I can test these resources before I even deploy them - and have full control over what is being tested.&lt;/p&gt;

&lt;p&gt;Let's get down to business and explore how to implement regression tests with CDK fine-grained assertions.&lt;/p&gt;
&lt;h2&gt;
  
  
  CDK Fine-Grained Assertions
&lt;/h2&gt;

&lt;p&gt;Assuming you have a code base with a CDK app, it is very easy to start writing assertions against the defined resources.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To demonstrate implementing regression tests in the CDK code, I've written a simple CDK app that creates a Lambda function, with a CloudWatch Log Group and an IAM Role with a defined IAM Policy, using TypeScript, as it's the language I am most familiar with.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I've created a test folder at the root of the project and a test file to start implementing the tests.&lt;br&gt;
&lt;code&gt;test/cdk-fine-grained-tests.test.ts&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Template } from 'aws-cdk-lib/assertions';
import { App } from 'aws-cdk-lib';
import { CdkFineGrainedTestsStack } from '../lib/cdk-fine-grained-tests-stack';

describe('MyFunction Fine-Grained Tests', () =&amp;gt; {
  const app = new App();
  const stack = new CdkFineGrainedTestsStack(app, 'CdkFineGrainedTestsStack', {});

  const template = Template.fromStack(stack);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the &lt;code&gt;Template&lt;/code&gt; class from the CDK assertions library, I've created an in-memory representation of the CloudFormation template synthesized from the CDK stack.&lt;/p&gt;

&lt;p&gt;After defining the main structure of the test file, we can start writing assertions against the CloudFormation template to check the integrity of the configuration we want to cover in the regression test. For the Lambda function, it is important to check that the memory size does not grow beyond what its purpose requires, which could incur cost increases. The same goes for the timeout, architecture, and runtime.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;it('should have created a lambda function with default configuration', () =&amp;gt; {
    template.hasResourceProperties('AWS::Lambda::Function', {
      FunctionName: 'my-function',
      Handler: 'index.handler',
      Runtime: 'nodejs22.x',
      Architectures: ['arm64'],
      Timeout: 30,
      MemorySize: 128,
    });
  });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also check if the Lambda function's CloudWatch Log Group has the desired retention policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;it('should have created a log group with the correct retention policy', () =&amp;gt; {
  template.hasResourceProperties('AWS::Logs::LogGroup', {
    LogGroupName: '/aws/lambda/my-log-group',
    RetentionInDays: 7,
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And finally, ensuring the Lambda has the required IAM permissions to execute the actions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;it('should have created an IAM service role with the Lambda basic execution role', () =&amp;gt; {
    template.hasResourceProperties('AWS::IAM::Role', {
      ManagedPolicyArns: [
        {
          'Fn::Join': [
            '',
            [
              'arn:',
              { Ref: 'AWS::Partition' },
              ':iam::aws:policy/service-role/AWSLambdaBasicExecutionRole',
            ],
          ],
        },
      ],
    });
});

it('should have created an IAM policy with the correct permissions', () =&amp;gt; {
    template.hasResourceProperties('AWS::IAM::Policy', {
      PolicyDocument: {
        Statement: [
          {
            Action: 's3:GetObject',
            Effect: 'Allow',
            Resource: 'arn:aws:s3:::my-bucket/*',
          },
        ],
      },
    });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Eventually, you will want to ensure that these properties remain untouched across new code changes unless the change is deliberate. In that case, the assertion also needs to be updated; otherwise, the regression test will fail - and that is exactly what we want.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The full example can be found &lt;a href="https://github.com/matheusdasmerces/cdk-fine-grained-tests" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Executing the tests on the CI
&lt;/h2&gt;

&lt;p&gt;After implementing the assertions in your code base, it is wise to execute the tests when you push changes to your branch. &lt;/p&gt;

&lt;p&gt;For the example above, I've implemented a workflow to execute the tests in GitHub Actions on every push to the &lt;code&gt;main&lt;/code&gt; branch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build and run tests
on:
    push:
        branches:
            - "main"

jobs:
    build-and-test:
      runs-on: ubuntu-latest
      permissions:
        id-token: write
        contents: read
      steps:
        - name: Checkout
          uses: actions/checkout@v4

        - name: Install dependencies
          id: install-dependencies
          run: |
            npm ci

        - name: Run CDK fine-grained tests
          id: cdk-fine-grained-tests
          run: |
            npm run test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way you ensure that the tests are executed before deployment, giving you early feedback on how your changes impact the infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beyond Lambda Functions
&lt;/h2&gt;

&lt;p&gt;Although this article focuses on Lambda functions, CDK fine-grained assertions can be used for any piece of infrastructure you want to cover with regression tests.&lt;/p&gt;

&lt;p&gt;Let's say that your CDK code creates an API Gateway. You could implement regression tests against, for example, the definition of a Gateway Response for the &lt;code&gt;BAD_REQUEST_BODY&lt;/code&gt; response type as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;it('should have created a gateway response for bad request', () =&amp;gt; {
   template.hasResourceProperties(
      'AWS::ApiGateway::GatewayResponse',
        {
          ResponseType: 'BAD_REQUEST_BODY',
          StatusCode: '400',
          ResponseTemplates: {
            'application/json':
              '{\n  "message": "$context.error.validationErrorString"\n}',
          },
        },
   );
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As long as you know what resources your infrastructure has and what you would like to test, you have a lot of possibilities. That is because the assertions are made against the CloudFormation template generated by your CDK code.&lt;/p&gt;

&lt;p&gt;For the full set of matchers and capabilities, refer to the &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.assertions-readme.html" rel="noopener noreferrer"&gt;CDK assertions library documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS CDK with Fine-Grained Assertions enables precise testing of infrastructure by validating the CloudFormation templates it generates. This approach ensures that resources, such as Lambda functions, adhere to specific configurations like memory size, timeout, and IAM permissions. &lt;/p&gt;

&lt;p&gt;By integrating these tests into a CI/CD pipeline, you can catch unintended changes early and maintain confidence in your infrastructure code. &lt;/p&gt;

&lt;p&gt;While YAML-based tools may suit simpler projects, CDK’s combination of programming flexibility and fine-grained testing makes it a strong choice for scalable, test-driven infrastructure development.&lt;/p&gt;

&lt;p&gt;What are your thoughts? Let me know in the comments below, I would love to hear what you have to say!&lt;/p&gt;

</description>
      <category>testing</category>
      <category>aws</category>
      <category>infrastructureascode</category>
      <category>typescript</category>
    </item>
    <item>
      <title>The Difference Between Building on AWS and Making Ice Cream</title>
      <dc:creator>Matheus das Mercês</dc:creator>
      <pubDate>Mon, 06 Jan 2025 07:32:31 +0000</pubDate>
      <link>https://dev.to/matheusdasmerces/the-difference-between-building-on-aws-and-making-ice-cream-ibj</link>
      <guid>https://dev.to/matheusdasmerces/the-difference-between-building-on-aws-and-making-ice-cream-ibj</guid>
      <description>&lt;p&gt;What does building solutions on AWS have in common with making ice cream? On the surface, not much. Yet, if you dig deeper, they share some surprising parallels - and understanding these similarities can teach you a lot about building better software on AWS.&lt;/p&gt;

&lt;p&gt;In this article, I will step into AWS's metaphorical ice cream shop and explore what separates quality from average.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Imagine that you own an ice cream shop. You are not an ice cream wizard yet - you know one thing here and there, but you start to feel confident about what you do. The possibilities seem endless: you have easy access to every flavor, topping, and combination imaginable.&lt;/p&gt;

&lt;p&gt;Meet Scoopy, an ice cream loving customer who just entered your shop. You want to show your worth, and Scoopy has high expectations.&lt;/p&gt;

&lt;p&gt;After some time (there were so many options), Scoopy decided. You deliver the pistachio flavor (my favorite one) in a bowl to Scoopy in just a couple of minutes, with just a few pistachio nuts on top, nothing too fancy. Scoopy takes a bite, smiles, and sits down to enjoy the ice cream. &lt;strong&gt;He is happy.&lt;/strong&gt; Mission accomplished.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1sj6g1elor6jhatvyq53.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1sj6g1elor6jhatvyq53.png" alt="Pistachio Ice Cream" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Making great ice cream requires skill, creativity, and, most importantly, respect for what your customer wants if you want to give a good impression. It's similar to building scalable, efficient solutions on AWS - with &lt;strong&gt;one major difference&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  To whom do you make ice cream?
&lt;/h2&gt;

&lt;p&gt;You might think, "OK, I do not own my own company and am far from owning an ice cream shop. Then, to whom should I give a good impression with the solutions I build?"&lt;/p&gt;

&lt;p&gt;Your customer isn’t just someone who walks into your ice cream shop. It’s anyone who interacts with the solutions you build. Whether you're working as part of a development team, answering to stakeholders, or creating internal tools, &lt;strong&gt;the customer is anyone who benefits from your work&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In an AWS context - or even in a generic development universe - your customer could be the end user of your website hosted on an S3 Bucket, the business team relying on your Lambda function to improve operations or even your colleagues who need your Step Function to automate their work. The key is understanding their needs, solving their problems, and delivering value.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3l735if9j5shru6b9s8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3l735if9j5shru6b9s8.png" alt="Coding" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every line of code, every decision you make, is aimed at impressing and satisfying the customer—whoever they may be.&lt;/p&gt;

&lt;h2&gt;
  
  
  What makes your customer happy
&lt;/h2&gt;

&lt;p&gt;One of my favorite &lt;a href="https://www.amazon.jobs/content/en/our-workplace/leadership-principles" rel="noopener noreferrer"&gt;Amazon Leadership Principles&lt;/a&gt; is &lt;strong&gt;Customer Obsession&lt;/strong&gt;, the ability to "work vigorously to earn and keep customer trust". Customer Obsession is more than just a principle - it’s a mindset that ensures everything you do is aligned with delivering value to your customers.&lt;/p&gt;

&lt;p&gt;Now, let’s reflect on a few characteristics of your ice cream that impressed your customer.&lt;/p&gt;

&lt;p&gt;It took you around 2 minutes to prepare it. You put some high-quality pistachio nuts on top - a personalized touch based on your customer's needs - and that made the ice cream well presented. Scoopy did not want anything too fancy, and you respected that. He just wanted a nice pistachio ice cream. Although you could have put some white chocolate on top (let's be honest, that is a great match), that was not what Scoopy wanted.&lt;/p&gt;

&lt;p&gt;You kept it consistent. Scoopy will not easily find a similar ice cream elsewhere: you earned trust. Scoopy will come back for more.&lt;/p&gt;

&lt;p&gt;These elements - personalization, attention to detail, quality, and consistency - are what leave a good impression on your customer, whether you're serving ice cream or building software on AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  The big difference
&lt;/h2&gt;

&lt;p&gt;Building software on AWS has a lot in common with making ice cream - but there is one thing that differs completely: the unstoppable, unpredictable &lt;strong&gt;time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiumkks5v4y0zy5lq1cl9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiumkks5v4y0zy5lq1cl9.png" alt="Coding takes time" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you were in Scoopy's shoes, would you rather wait 2 minutes for an amazing, creamy pistachio ice cream or get an average, watery, premade ice cream in 15 seconds?&lt;/p&gt;

&lt;p&gt;The answer to that is clear. Scoopy was not looking for speed.&lt;/p&gt;

&lt;p&gt;As much as you want to impress your customer, it’s important to set realistic expectations. In an ice cream shop, you can promise Scoopy his ice cream in a couple of minutes because the process is straightforward and predictable. But when building solutions on AWS (or in any software development context), things are rarely that simple.&lt;/p&gt;

&lt;p&gt;Here’s why it’s critical to stay realistic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Building scalable, reliable, and secure solutions isn’t a quick task. It requires design, testing, and iteration. Rushing the process can lead to costly mistakes or unexpected results. Complexity takes time.&lt;/li&gt;
&lt;li&gt;If a project will take weeks to deliver, be honest about it. Overpromising on speed might impress initially but will hurt your credibility in the long run. Transparency builds trust.&lt;/li&gt;
&lt;li&gt;Customers appreciate timely delivery, but they value solutions that work even more. Prioritize getting it right over getting it done fast. Focus on quality, not speed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Remember, delivering value to your customers is about focusing on quality and being honest about what’s achievable, and your customers will trust you more for it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building solutions on AWS and making ice cream might seem worlds apart, but they share a surprising amount in common. Both require creativity, attention to detail, and a focus on delivering value to the customer. Whether you're making the perfect ice cream bowl for Scoopy or designing an AWS-based application, success depends on your ability to combine the right tools and ingredients to build something &lt;strong&gt;that will make your customer happy&lt;/strong&gt; - no matter what it takes.&lt;/p&gt;

&lt;p&gt;But here’s the twist: the difference between building on AWS and making ice cream is that the ice cream is ready in 2 minutes.&lt;/p&gt;

&lt;p&gt;What are your thoughts? Do you aim to impress your customers daily? Let me know in the comments below!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Three Steps to Build a More Cost-Effective Solution on AWS</title>
      <dc:creator>Matheus das Mercês</dc:creator>
      <pubDate>Sat, 28 Dec 2024 16:51:59 +0000</pubDate>
      <link>https://dev.to/matheusdasmerces/three-steps-to-build-a-more-cost-effective-solution-on-aws-mlj</link>
      <guid>https://dev.to/matheusdasmerces/three-steps-to-build-a-more-cost-effective-solution-on-aws-mlj</guid>
      <description>&lt;p&gt;When managing costs in AWS, the need for savings is often clear, but sometimes there's a lack of precise insight into how much needs to be saved within a specific timeframe. Whether you work for a company that wants you to respond to budget pressures or are looking for ways to optimize your environment, some daily reflections can help you speed up your cost-saving efforts and plant the seed for the future of your application.&lt;/p&gt;

&lt;p&gt;In this post, I'll share three simple steps to help you build a more cost-effective solution on AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Do you really need it?
&lt;/h2&gt;

&lt;p&gt;During the AWS re:Invent 2024 keynote, Werner Vogels emphasized the importance of "&lt;strong&gt;simplexity&lt;/strong&gt;" - the art of designing architectures that are powerful yet simple. "Simplexity" isn't just a fancy term; it's a mindset that drives innovation while avoiding unnecessary complexity.&lt;/p&gt;

&lt;p&gt;Inspired by these principles, I've found one question to be a true game-changer: &lt;strong&gt;Do you really need it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This question forces us to challenge assumptions and well-known architectures and focus on what truly matters. It aligns with the goals of keeping your architecture simple and driving &lt;strong&gt;sustainability&lt;/strong&gt; and &lt;strong&gt;cost efficiency&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step one: challenge decisions made in the past
&lt;/h3&gt;

&lt;p&gt;How often have you encountered the response, "because we've always done it this way," when questioning why certain setups exist?&lt;/p&gt;

&lt;p&gt;If you find yourself still using this rationale, it's time to reconsider - after all, it's 2024 (almost 2025!). Technology, particularly in the AWS ecosystem, evolves quickly. Continuously reevaluating these decisions is essential, both for the performance of your applications and for optimizing costs.&lt;/p&gt;

&lt;p&gt;Imagine the following architecture diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqvg7h9c0dti6izkh0bp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqvg7h9c0dti6izkh0bp.png" alt="CloudTrail log ingestion to CloudWatch and S3 Bucket" width="380" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This AWS CloudTrail log aggregation solution centralizes logs from all member accounts in an AWS Organization into a single account. Each account sends its CloudTrail logs to both a CloudWatch Log Group and an S3 bucket in the central account. This setup provides centralized governance and storage for auditing and compliance purposes.&lt;/p&gt;

&lt;p&gt;Looking at this diagram, I asked myself: &lt;strong&gt;do we really need&lt;/strong&gt; to store the logs in both locations? A CloudWatch Log Group offers real-time monitoring capabilities, such as using Log Insights, while an S3 bucket provides long-term, cost-efficient storage that can be easily queried with Athena.&lt;/p&gt;

&lt;p&gt;In this case, real-time monitoring through CloudWatch was not required. Challenging this past decision allowed us to eliminate CloudWatch log ingestion and retain only the S3 bucket - reducing unnecessary complexity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faaqmel6rve1cbagcnuh0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faaqmel6rve1cbagcnuh0.png" alt="CloudTrail log ingestion to S3 Bucket" width="550" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The result of this simple change was a &lt;strong&gt;reduction of $4,038.81 per month&lt;/strong&gt; in CloudWatch costs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflw13d24aj9uk0fvuq6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflw13d24aj9uk0fvuq6z.png" alt="CloudWatch costs comparison: reduction of $4,038.81" width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Simplifying the setup resulted in significant cost savings. The key takeaway is that past decisions may no longer be relevant today - it's essential to continuously challenge them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step two: challenge assumptions
&lt;/h3&gt;

&lt;p&gt;Another common sentence people use is "&lt;strong&gt;because I think it is more secure&lt;/strong&gt;". However, it's important to question whether this assumption still holds. What may have been considered correct in the past might no longer be the best approach today. Continuously reassessing decisions - based on facts - ensures you're adopting the most effective and efficient solutions for your current needs.&lt;/p&gt;

&lt;p&gt;Let's have a look at another scenario in the same large AWS Organizations setup, where each member account had its own S3 Bucket used for CloudFormation deployment assets:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5r8a7hxva5n87o1ofayl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5r8a7hxva5n87o1ofayl.png" alt="S3 Bucket + SSE-KMS encryption type" width="331" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do we really need&lt;/strong&gt; an SSE-KMS encryption type with a Customer Managed Key in KMS? Reflecting on the use cases for that - key rotation, centralized key management, and detailed access control over who can use the key - none of them applied to this scenario. We decided to remove the key and use the SSE-S3 encryption type, where AWS owns and manages the encryption key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwawuqpkhrcg5gkc79oxo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwawuqpkhrcg5gkc79oxo.png" alt="S3 Bucket + SSE-S3 encryption type" width="161" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The result of this simple change was a &lt;strong&gt;reduction of $2,040.43 per month&lt;/strong&gt; in KMS costs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdc9ug4xim5dczturupyr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdc9ug4xim5dczturupyr.png" alt="KMS costs comparison: reduction of $2,040.43" width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This change allowed a simpler solution - the data is still encrypted at rest, while maintenance effort is reduced since the KMS keys were removed. As a consequence, further cost reduction.&lt;/p&gt;

&lt;p&gt;Regularly challenging assumptions is crucial for improving efficiency, reducing complexity, and optimizing costs. Continuous evaluation helps ensure that solutions evolve alongside technological advancements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step three: adopt "simplexity"
&lt;/h3&gt;

&lt;p&gt;Cost efficiency in AWS is as much about avoiding unnecessary expenses as it is about building efficient systems - adopting "simplexity".&lt;/p&gt;

&lt;p&gt;Reflecting on these steps daily can help you achieve quick wins that not only save some money today but also set your application up for a sustainable future.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Refactoring a Lambda Monolith to Microservices Using Hexagonal Architecture</title>
      <dc:creator>Matheus das Mercês</dc:creator>
      <pubDate>Tue, 17 Dec 2024 10:35:07 +0000</pubDate>
      <link>https://dev.to/aws-builders/refactoring-a-lambda-monolith-to-microservices-using-hexagonal-architecture-1em0</link>
      <guid>https://dev.to/aws-builders/refactoring-a-lambda-monolith-to-microservices-using-hexagonal-architecture-1em0</guid>
      <description>&lt;p&gt;In many applications, a single Lambda function contains all the application logic that handles all external events. The Lambda function sometimes acts as an orchestrator, handling different business workflows within complex logic. This approach has several drawbacks, like big package size, difficulty enforcing least privilege principles, and hard to test.&lt;/p&gt;

&lt;p&gt;In this blog post, I will explain how to leverage the Hexagonal Architecture, also known as the "Ports and Adapters" approach, to refactor a Lambda monolith into microservices.&lt;/p&gt;

&lt;h2&gt;
  
  
  The big, complex, and tightly-coupled
&lt;/h2&gt;

&lt;p&gt;AWS Lambda makes it incredibly easy to build and deploy functions, but it's also easy to fall into the trap of creating a "Lambda monolith". This approach bundles all your logic, processes, and dependencies into a single function that attempts to manage various events. For example, a Lambda monolith function would handle all API Gateway routes and integrate with all necessary downstream resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfrnfib1i2439rhpz67n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfrnfib1i2439rhpz67n.png" alt="From " width="653" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It becomes even more complex when the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/orchestrator.html" rel="noopener noreferrer"&gt;Lambda function acts as an orchestrator&lt;/a&gt;, handling different business workflows and resulting in "spaghetti code" as if-else statements pile up.&lt;/p&gt;
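&lt;p&gt;To make the anti-pattern concrete, here is a sketch of how such an orchestrator handler tends to grow - the event shape and routes are invented for illustration:&lt;/p&gt;

```typescript
// Hypothetical monolith handler: one function routing every API path.
interface ApiEvent {
  httpMethod: string;
  path: string;
}

export const handler = async (event: ApiEvent) => {
  const route = `${event.httpMethod} ${event.path}`;
  if (route === 'POST /orders') {
    // Create order: needs DynamoDB write permissions.
    return { statusCode: 201, body: 'order created' };
  } else if (route === 'GET /orders') {
    // List orders: needs DynamoDB read permissions.
    return { statusCode: 200, body: '[]' };
  } else if (route === 'POST /payments') {
    // Charge payment: needs access to a payment provider secret.
    return { statusCode: 202, body: 'payment accepted' };
  }
  // ...dozens more branches accumulate here over time.
  return { statusCode: 404, body: 'not found' };
};
```

&lt;p&gt;Every new route adds another branch, more dependencies, and broader IAM permissions to the same function.&lt;/p&gt;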

&lt;p&gt;These approaches are considered anti-patterns in Lambda-based applications and have several drawbacks, for instance:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Big Package Size&lt;/strong&gt;: As logic grows, so does the size of your deployment package, which can slow down deployment times and the Lambda cold start time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hard to enforce the least privilege&lt;/strong&gt;: It becomes challenging to assign least-privilege permissions, as one Lambda may require broad access to different resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Testing Complexities&lt;/strong&gt;: Testing a large Lambda function is problematic because every modification impacts a broad range of code. Unit testing is difficult, and integration testing is even more challenging to build.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The preferred approach is to decompose the monolithic Lambda function into individual microservices, with each Lambda function dedicated to a single, well-defined task.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decomposing the monolith
&lt;/h2&gt;

&lt;p&gt;Before diving into Hexagonal Architecture, you need to think about how you would decompose your monolith. Moving from a monolithic Lambda to microservices is more than just splitting code into smaller parts; it's about strategically decomposing the monolith to create services that are independent, scalable, and maintainable. Without careful decomposition, you risk creating a set of microservices that are still tightly coupled or that mirror the same complexities and limitations of the original monolith.&lt;/p&gt;

&lt;p&gt;My favorite approach is "decompose by business capability": each microservice is structured around a specific business capability or function, ensuring that each service corresponds to a particular area of the application's logic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw3xzhbvs0fjrp6oo9rw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw3xzhbvs0fjrp6oo9rw.png" alt="From " width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This method of decomposition is especially valuable in larger applications, where different business functions - such as product catalog management, order management, or delivery - have distinct lifecycles, data requirements, scaling needs, and sometimes distinct development teams. More importantly, you build a stable architecture since the business capabilities are relatively stable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strangling the monolith
&lt;/h2&gt;

&lt;p&gt;With your business capabilities mapped to services, you need to think about where to start coding.&lt;/p&gt;

&lt;p&gt;Instead of attempting a full rewrite, which can be risky and disruptive, the &lt;a href="https://microservices.io/refactoring/index.html" rel="noopener noreferrer"&gt;Strangler Pattern&lt;/a&gt; allows us to migrate functionality piece by piece. This technique involves incrementally replacing parts of the monolith with microservices, gradually "strangling" the monolith until it's fully decomposed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww7tbtcnvryst29n2tsl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww7tbtcnvryst29n2tsl.png" alt="Strangler Pattern: incrementally replacing the monolith with microservices" width="800" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For each business function that we pull out of the monolith, we create a new microservice that can handle it independently. Over time, as more services are extracted, the monolith becomes smaller until it's eventually replaced by a cohesive set of microservices.&lt;/p&gt;

&lt;p&gt;This approach is ideal for modernizing applications in stages, reducing downtime and risks while moving toward a microservices architecture.&lt;/p&gt;
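&lt;p&gt;A strangler facade can be as small as a routing layer that sends already-migrated paths to new services and lets everything else fall through to the legacy monolith. A minimal sketch of the idea (the route names and handlers are hypothetical):&lt;/p&gt;

```typescript
// A minimal strangler facade: migrated paths go to new microservices,
// everything else falls through to the legacy monolith.
type Handler = (path: string) => string;

const migratedRoutes: Record<string, Handler> = {
    // One entry is added here each time a business capability is extracted.
    "/orders": () => "handled by the new orders microservice",
};

const legacyMonolith: Handler = () => "handled by the legacy monolith";

function stranglerFacade(path: string): string {
    // Prefer the new service when the capability has been extracted,
    // otherwise fall back to the monolith.
    const handler = migratedRoutes[path] ?? legacyMonolith;
    return handler(path);
}
```

&lt;p&gt;As more capabilities are extracted, more routes move into the map; once it covers every path, the monolith can be retired.&lt;/p&gt;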

&lt;p&gt;At this point, you probably already know where to start. It's time to dive a little bit deeper into Hexagonal Architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter Hexagonal Architecture
&lt;/h2&gt;

&lt;p&gt;Hexagonal Architecture, also known as "Ports and Adapters," offers a way to modularize your application so it can be more flexible and maintainable. By isolating the core business logic from external systems, this architecture promotes separation of concerns, where the application's core logic isn't tightly coupled to any specific technology or service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckl0873l488ha66u9yr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckl0873l488ha66u9yr5.png" alt="Hexagonal Architecture" width="480" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Core Logic (Domain)&lt;/strong&gt;: The domain holds the application's core business rules, completely isolated from the outer layers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ports&lt;/strong&gt;: Defined interfaces that describe actions available to the core.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adapters&lt;/strong&gt;: Connect external systems to the application's core through ports, making it easy to switch out databases, API integrations, or other dependencies without impacting the core logic.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This approach allows us to design a Lambda setup where each function or service remains lean and single-purpose, eliminating the challenges of a monolithic structure. Let's look at how each layer of the Hexagonal Architecture can be implemented in a Lambda-based microservice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing the hexagonal layers
&lt;/h2&gt;

&lt;p&gt;Each layer requires specific attention to refactor your Lambda application using Hexagonal Architecture. Here's a breakdown of the layers and how they work in practice.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To demonstrate the Hexagonal Architecture layers in code, I'll write a simple web application backed by a Lambda function, which serves API Gateway requests and communicates with both DynamoDB and S3, using TypeScript, as it's the language I am most familiar with. While Hexagonal Architecture is particularly well-suited to typed languages, it is language-agnostic and can be implemented in any language or framework of your choice.&lt;br&gt;
I use InversifyJS, a library that provides an inversion of control (IoC) container for TypeScript. An IoC container inspects a class's constructor to identify and inject its dependencies.&lt;/p&gt;
&lt;/blockquote&gt;
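&lt;p&gt;If you'd rather not pull in a DI library, the same wiring can be done by hand in a small composition root. The sketch below (all names are hypothetical, with in-memory stand-ins instead of AWS clients) shows what InversifyJS automates: constructing the adapters and injecting them into the core through its constructor.&lt;/p&gt;

```typescript
// Ports: the only thing the core is allowed to depend on.
interface Repository {
    update(id: string, data: unknown): Promise<void>;
}
interface Storage {
    get(id: string): Promise<string>;
}

// Core logic: no AWS SDK imports, only ports.
class HelloWorld {
    constructor(private repository: Repository, private storage: Storage) {}

    async handler(): Promise<string> {
        const storageData = await this.storage.get("123");
        await this.repository.update("dummy-id", { storageData });
        return storageData;
    }
}

// In-memory adapters standing in for the DynamoDB and S3 ones.
class InMemoryRepository implements Repository {
    private rows = new Map<string, unknown>();
    async update(id: string, data: unknown): Promise<void> {
        this.rows.set(id, data);
    }
}
class InMemoryStorage implements Storage {
    async get(_id: string): Promise<string> {
        return "stored-value";
    }
}

// Composition root: the one place that knows about concrete classes.
const helloWorld = new HelloWorld(new InMemoryRepository(), new InMemoryStorage());
```

&lt;p&gt;An IoC container performs exactly this resolution for you, driven by &lt;code&gt;@injectable&lt;/code&gt; and &lt;code&gt;@inject&lt;/code&gt; annotations.&lt;/p&gt;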

&lt;h3&gt;
  
  
  Core Logic (Domain)
&lt;/h3&gt;

&lt;p&gt;The core business rules reside here. The domain layer is completely isolated from AWS-specific code or other external dependencies, making it easy to test and modify.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { injectable, inject } from "inversify";
import TYPES from "../../container/types";
import IRepository from "../../interfaces/repositoryIF";
import IStorage from "../../interfaces/storageIF";

@injectable()
class HelloWorld {
    constructor(
        @inject(TYPES.Repository) private repository: IRepository,
        @inject(TYPES.Storage) private storage: IStorage,
    ) {}

    async handler(_event: any): Promise&amp;lt;any&amp;gt; {
        //get data from storage
        const storageData = await this.storage.get("123");

        //update repository with new data
        await this.repository.update("dummy-id", {
            name: "dummy-name",
            age: 20,
            storageData,
        });

        //return an API Gateway-compatible response
        return {
            statusCode: 200,
            body: JSON.stringify({ storageData }),
        };
    }
}

export default HelloWorld;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Ports
&lt;/h3&gt;

&lt;p&gt;The ports layer defines how the domain talks to external services, but it contains no direct calls to them and stays agnostic about which concrete service sits behind each port. In object-oriented programming, a port maps directly to an interface.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export default interface IRepository {
    get(id: string): void;
    update(id: string, data: any): void;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export default interface IStorage {
    get(id: string): void;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Adapters
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Inbound Adapters&lt;/strong&gt;: These handle incoming requests, such as API Gateway events or other triggers, and pass them to the core logic. This is also the entry file for the Lambda Function.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { APIGatewayProxyEvent, APIGatewayProxyHandler, APIGatewayProxyResult } from "aws-lambda";
import { container, TYPES } from "../container/inversify.config";
import HelloWorld from "./hello-world/helloWorld";

export const helloWorld: APIGatewayProxyHandler = async (
    event: APIGatewayProxyEvent,
  ): Promise&amp;lt;APIGatewayProxyResult&amp;gt; =&amp;gt; {
    const lambda = container.get&amp;lt;HelloWorld&amp;gt;(
      TYPES.HelloWorld,
    );

    return lambda.handler(event);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Outbound Adapters&lt;/strong&gt;: These manage outbound calls, like calls to databases (in this case, DynamoDB and S3) or third-party APIs, abstracting them through well-defined interfaces (ports).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { DynamoDBClient, GetItemCommand, UpdateItemCommand } from '@aws-sdk/client-dynamodb';
import { injectable } from "inversify";
import IRepository from "../../../interfaces/repositoryIF";

@injectable()
class DynamoDbRepository implements IRepository {
    private client = new DynamoDBClient({});

    async update(id: string, data: any): Promise&amp;lt;void&amp;gt; {
        const updateItemCommand = new UpdateItemCommand({
            TableName: "example-table",
            Key: {
                id: { S: id },
            },
            UpdateExpression: "SET #name = :name",
            ExpressionAttributeNames: {
                "#name": "name",
            },
            ExpressionAttributeValues: {
                ":name": { S: data.name },
            },
        });

        await this.client.send(updateItemCommand);
    }

    async get(id: string): Promise&amp;lt;any&amp;gt; {
        const getItemCommand = new GetItemCommand({
            TableName: "example-table",
            Key: {
                id: { S: id },
            },
        });

        const result = await this.client.send(getItemCommand);
        return result.Item;
    }
}

export default DynamoDbRepository;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { GetObjectCommand, PutObjectCommand, S3Client } from '@aws-sdk/client-s3';
import { injectable } from "inversify";
import IStorage from "../../../interfaces/storageIF";

@injectable()
class S3Repository implements IStorage {
    private client = new S3Client({});

    async get(id: string): Promise&amp;lt;any&amp;gt; {
        const getObjectCommand = new GetObjectCommand({
            Bucket: "example-bucket",
            Key: id,
        });

        const result = await this.client.send(getObjectCommand);
        return result.Body?.transformToString();
    }
}

export default S3Repository;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The full example can be found &lt;a href="https://github.com/matheusdasmerces/inversify-hexagonal-example" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;By isolating each concern, the Lambda functions become much smaller, enabling us to enforce least-privilege permissions and streamline testing. Each function now interacts with its dependencies through well-defined interfaces, making it easier to manage and test independently. This is what the Hexagonal Architecture looks like in the example above:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fka3eo31a8xu475mc9k2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fka3eo31a8xu475mc9k2z.png" alt="Hexagonal Architecture in AWS" width="480" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Modular, scalable, and flexible
&lt;/h2&gt;

&lt;p&gt;Using Hexagonal Architecture, you can gradually decompose the monolith into smaller, independent microservices, for example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6xuawbbifpfv68r1fe2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6xuawbbifpfv68r1fe2.png" alt="Monolith decomposed into independent microservices" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This approach improves a Lambda-based application and brings many potential benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplified Maintenance&lt;/strong&gt;: Clear boundaries allow focused modifications without fear of inadvertently breaking other parts of the application, while smaller packages reduce deployment and cold start times.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Clarity and Separation of Concerns&lt;/strong&gt;: Each Lambda function has a single responsibility, making the codebase easier to read and navigate, and making it straightforward to enforce the principle of least privilege.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Testing Efficiency&lt;/strong&gt;: Testing becomes simpler, as services can be isolated and mocked cleanly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reusability&lt;/strong&gt;: Since adapters are cleanly abstracted, they can be reused across multiple functions, reducing development redundancy.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
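&lt;p&gt;The testing benefit is concrete: because the core only sees port interfaces, a unit test can hand it a trivial hand-written fake instead of mocking AWS SDK clients. A minimal sketch (the names are hypothetical, not from the example repository):&lt;/p&gt;

```typescript
// Port and a simplified use case under test.
interface Repository {
    update(id: string, data: unknown): Promise<void>;
}

class RegisterUser {
    constructor(private repository: Repository) {}

    async run(id: string, name: string): Promise<void> {
        await this.repository.update(id, { name });
    }
}

// Hand-written fake: records calls so the test can assert on them.
class FakeRepository implements Repository {
    public calls: Array<{ id: string; data: unknown }> = [];
    async update(id: string, data: unknown): Promise<void> {
        this.calls.push({ id, data });
    }
}

const fakeRepository = new FakeRepository();
const registerUser = new RegisterUser(fakeRepository);
```

&lt;p&gt;No DynamoDB, no network, no mocking framework: the fake satisfies the port, and the assertions target the business logic alone.&lt;/p&gt;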

&lt;h2&gt;
  
  
  It isn't all smooth sailing…
&lt;/h2&gt;

&lt;p&gt;While this transition has numerous advantages, some challenges accompany the move to microservices using Hexagonal Architecture:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How to decouple&lt;/strong&gt;: Breaking apart a monolithic Lambda into separate microservices requires careful thought about how to decouple services and deploy them effectively. Ensuring that each service is truly independent can be difficult.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Testing Overload&lt;/strong&gt;: With multiple services and adapters, testing can grow exponentially. The question arises: where to draw the line with testing? The focus shifts to defining clear boundaries for the unit, integration, and end-to-end tests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Repository Management&lt;/strong&gt;: As the codebase grows, it's crucial to enforce strict practices against code duplication and maintain clear documentation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Traceability&lt;/strong&gt;: With the application composed of several microservices, tracing a request becomes trickier: instead of logging within a single service, you must ensure every request is traced across all of your microservices.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
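&lt;p&gt;For the traceability challenge, a common mitigation is to propagate a correlation ID with every request, so that log lines from all microservices can be stitched back together (on AWS, X-Ray can do this for you). A hand-rolled sketch of the idea, with a hypothetical header name:&lt;/p&gt;

```typescript
import { randomUUID } from "node:crypto";

const CORRELATION_HEADER = "x-correlation-id";

// Reuse the caller's correlation ID when present; otherwise start a new trace.
function correlationId(headers: Record<string, string | undefined>): string {
    return headers[CORRELATION_HEADER] ?? randomUUID();
}

// Every outbound call (and every log line) carries the same ID,
// so a single request can be followed across services.
function outboundHeaders(incoming: Record<string, string | undefined>): Record<string, string> {
    return { [CORRELATION_HEADER]: correlationId(incoming) };
}
```

&lt;p&gt;Each service forwards the header unchanged, and each log statement includes it, turning a scattered set of logs into one traceable request path.&lt;/p&gt;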

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;Refactoring from a monolithic Lambda to a microservice-based Hexagonal Architecture setup involves a significant initial effort, but the rewards make it worthwhile:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reusable Adapters&lt;/strong&gt;: Once built, adapters can be used across multiple Lambda functions, making development faster and more consistent.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplified Testing&lt;/strong&gt;: With a clear separation of concerns, testing becomes easier and more reliable. By abstracting dependencies, we eliminate the need for complex Jest mocks, focusing instead on testing business logic directly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Flexibility&lt;/strong&gt;: Adapters provide a modular approach, so if there's a need to swap AWS services or add new integrations, the core logic remains unaffected. This adaptability allows for seamless changes without impacting the entire application.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Refactoring your Lambda setup with Hexagonal Architecture can transform a complex, tightly coupled monolith into a streamlined, testable, and flexible microservice ecosystem, providing a solid foundation for future growth.&lt;/p&gt;

&lt;p&gt;What are your experiences with such a migration? Leave a comment below!&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>aws</category>
      <category>typescript</category>
      <category>serverless</category>
    </item>
  </channel>
</rss>
