<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Harsha Mathan</title>
    <description>The latest articles on DEV Community by Harsha Mathan (@hvmathan).</description>
    <link>https://dev.to/hvmathan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1242526%2F54d27b1b-7ca2-464a-844b-aff0fce4eaa3.jpeg</url>
      <title>DEV Community: Harsha Mathan</title>
      <link>https://dev.to/hvmathan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hvmathan"/>
    <language>en</language>
    <item>
      <title>I Built a Serverless X (Twitter) Quote Bot with AWS Lambda + S3 + DynamoDB</title>
      <dc:creator>Harsha Mathan</dc:creator>
      <pubDate>Thu, 19 Feb 2026 05:07:49 +0000</pubDate>
      <link>https://dev.to/aws-builders/i-built-a-serverless-x-twitter-quote-bot-with-aws-lambda-s3-dynamodb-3mnd</link>
      <guid>https://dev.to/aws-builders/i-built-a-serverless-x-twitter-quote-bot-with-aws-lambda-s3-dynamodb-3mnd</guid>
      <description>&lt;p&gt;I wanted to build something &lt;em&gt;simple but real&lt;/em&gt;: a bot that posts a quote to my X account every day — reliably — without me logging in and copy-pasting.&lt;/p&gt;

&lt;p&gt;This post walks through the exact architecture and setup I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3&lt;/strong&gt; for storing quotes (&lt;code&gt;quotes.csv&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB&lt;/strong&gt; for tracking which quote is next&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets Manager&lt;/strong&gt; for storing X API credentials&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Lambda&lt;/strong&gt; for the posting logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EventBridge&lt;/strong&gt; (optional) to schedule it daily&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Result: a serverless bot that posts one quote per run.&lt;/p&gt;




&lt;h2&gt;Architecture&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Flow:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Lambda is triggered (manual test or daily schedule)&lt;/li&gt;
&lt;li&gt;Lambda reads &lt;code&gt;quotes.csv&lt;/code&gt; from S3&lt;/li&gt;
&lt;li&gt;Lambda checks DynamoDB to get the next quote index&lt;/li&gt;
&lt;li&gt;Lambda posts the quote to X via API (OAuth 1.0a)&lt;/li&gt;
&lt;li&gt;Lambda updates DynamoDB (&lt;code&gt;next_index = next_index + 1&lt;/code&gt;)&lt;/li&gt;
&lt;/ol&gt;
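&lt;p&gt;The read-then-increment cycle in steps 3–5 can be sketched in plain Python. No AWS calls here; &lt;code&gt;state&lt;/code&gt; is just a stand-in dict for the DynamoDB item, and the wrap-around via modulo matches what the Lambda code in Step 7 does:&lt;/p&gt;

```python
# Stand-in for the DynamoDB state item: {"pk": ..., "next_index": N}.
# Each run reads the index, picks a quote (wrapping around at the end
# of the list), then stores index + 1 for the next run.

def pick_quote(quotes, state):
    idx = state["next_index"] % max(len(quotes), 1)  # step 3, with wrap-around
    state["next_index"] = idx + 1                    # step 5: increment
    return quotes[idx]

quotes = ["q0", "q1", "q2"]
state = {"pk": "har_vmat", "next_index": 0}

# Four runs: the fourth wraps back to the first quote.
posted = [pick_quote(quotes, state) for _ in range(4)]
print(posted)  # ['q0', 'q1', 'q2', 'q0']
```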

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account (I used &lt;strong&gt;ap-south-1 (Mumbai)&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;An X Developer App with &lt;strong&gt;Read and Write&lt;/strong&gt; permissions enabled&lt;/li&gt;
&lt;li&gt;Python 3.11 Lambda runtime&lt;/li&gt;
&lt;li&gt;A basic CSV file with quotes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4jr2nwr105z4vp7o3up.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4jr2nwr105z4vp7o3up.png" alt=" " width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Step 1: Create your quotes file (CSV)&lt;/h2&gt;

&lt;p&gt;Create a file named &lt;code&gt;quotes.csv&lt;/code&gt; with a header named &lt;code&gt;quote&lt;/code&gt; (one quote per line):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;quote
Start where you are. Use what you have. Do what you can.
Small steps every day add up to big changes.
Your future needs you—your past doesn’t.
Breathe. This moment is enough.
Progress, not perfection.
You don’t have to see the whole staircase—just take the next step.
Be kind to yourself; you’re learning.
Consistency beats intensity when it’s done daily.
One good decision today can change your tomorrow.
You are capable of more than you think.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqf4fgrw33qoohsjkha2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqf4fgrw33qoohsjkha2.png" alt=" " width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Step 2: Upload the CSV to S3&lt;/h2&gt;

&lt;p&gt;Create an S3 bucket (private) and upload &lt;code&gt;quotes.csv&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Region:&lt;/strong&gt; ap-south-1 (Mumbai)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bucket:&lt;/strong&gt; &lt;code&gt;har-vmat-xbot-ap-south-1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key:&lt;/strong&gt; &lt;code&gt;quotes.csv&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;Amazon S3 → Buckets → Create bucket&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Choose your region (Mumbai / ap-south-1)&lt;/li&gt;
&lt;li&gt;Keep &lt;strong&gt;Block all public access&lt;/strong&gt; enabled&lt;/li&gt;
&lt;li&gt;Create the bucket&lt;/li&gt;
&lt;li&gt;Open the bucket → &lt;strong&gt;Upload&lt;/strong&gt; → select &lt;code&gt;quotes.csv&lt;/code&gt; → Upload&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;Step 3: Create DynamoDB table for state&lt;/h2&gt;

&lt;p&gt;We’ll use DynamoDB to track which quote should be posted next, so we don’t need to modify the CSV file.&lt;/p&gt;

&lt;p&gt;Create a DynamoDB table:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Table name:&lt;/strong&gt; &lt;code&gt;x_quote_bot_state&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Partition key:&lt;/strong&gt; &lt;code&gt;pk&lt;/code&gt; (String)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;DynamoDB → Tables → Create table&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Enter &lt;code&gt;x_quote_bot_state&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Partition key: &lt;code&gt;pk&lt;/code&gt; (String)&lt;/li&gt;
&lt;li&gt;Create table&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now insert the initial state item:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;pk&lt;/code&gt; = &lt;code&gt;har_vmat&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;next_index&lt;/code&gt; = &lt;code&gt;0&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the table → &lt;strong&gt;Explore table items&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create item&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Add attributes:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;pk&lt;/code&gt; (String): &lt;code&gt;har_vmat&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;next_index&lt;/code&gt; (Number): &lt;code&gt;0&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Save&lt;/li&gt;
&lt;/ol&gt;
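&lt;p&gt;If you prefer code to console clicks, the same initial item can be written with &lt;code&gt;boto3&lt;/code&gt;. The helper below just builds the item dict; the commented-out &lt;code&gt;put_item&lt;/code&gt; call is what you would run against the real table (assuming the table and key names above):&lt;/p&gt;

```python
def initial_state_item(pk="har_vmat"):
    # Matches the two attributes created in the console above.
    return {"pk": pk, "next_index": 0}

# To actually write it (requires AWS credentials in the same region):
#   import boto3
#   table = boto3.resource("dynamodb").Table("x_quote_bot_state")
#   table.put_item(Item=initial_state_item())

print(initial_state_item())  # {'pk': 'har_vmat', 'next_index': 0}
```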

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyn2sjkzcbkmkvmcg4gr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyn2sjkzcbkmkvmcg4gr.png" alt=" " width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Step 4: Store X credentials in AWS Secrets Manager&lt;/h2&gt;

&lt;p&gt;Store your X OAuth 1.0a credentials in AWS Secrets Manager.&lt;/p&gt;

&lt;p&gt;Create a secret:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Secret name:&lt;/strong&gt; &lt;code&gt;x-bot-credentials&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secret type:&lt;/strong&gt; Other type of secret&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Add these &lt;strong&gt;4 key/value&lt;/strong&gt; pairs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;consumer_key&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;consumer_secret&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;access_token&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;access_token_secret&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;AWS Secrets Manager → Store a new secret&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;Other type of secret&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Add 4 rows for the keys above and paste the values from your X Developer app&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Next&lt;/strong&gt; → name it &lt;code&gt;x-bot-credentials&lt;/code&gt; → &lt;strong&gt;Store&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
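&lt;p&gt;The Lambda code later calls &lt;code&gt;json.loads&lt;/code&gt; on the &lt;code&gt;SecretString&lt;/code&gt;, so the stored secret must be a flat JSON object with exactly these four keys (placeholder values shown):&lt;/p&gt;

```json
{
  "consumer_key": "YOUR_CONSUMER_KEY",
  "consumer_secret": "YOUR_CONSUMER_SECRET",
  "access_token": "YOUR_ACCESS_TOKEN",
  "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET"
}
```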

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmb9rjj972m8sq4psx2pc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmb9rjj972m8sq4psx2pc.png" alt=" " width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ Do not commit these keys to GitHub or hardcode them in Lambda.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;Step 5: Create a Lambda Layer for dependencies&lt;/h2&gt;

&lt;p&gt;The Lambda Python runtime doesn’t include &lt;code&gt;requests&lt;/code&gt; or &lt;code&gt;requests-oauthlib&lt;/code&gt; by default (only &lt;code&gt;boto3&lt;/code&gt; is bundled), so we create a layer containing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;requests&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;requests-oauthlib&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In &lt;strong&gt;AWS CloudShell&lt;/strong&gt;, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; x-layer/python
pip3 &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; x-layer/python requests requests-oauthlib
&lt;span class="nb"&gt;cd &lt;/span&gt;x-layer
zip &lt;span class="nt"&gt;-r&lt;/span&gt; x-requests-oauthlib-layer.zip python
&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-lh&lt;/span&gt; x-requests-oauthlib-layer.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
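&lt;p&gt;Lambda only picks the packages up if they sit under a top-level &lt;code&gt;python/&lt;/code&gt; folder inside the zip, which is why the commands above install into &lt;code&gt;x-layer/python&lt;/code&gt;. The archive should end up looking roughly like this (exact package contents vary by version):&lt;/p&gt;

```
x-requests-oauthlib-layer.zip
└── python/
    ├── requests/
    ├── requests_oauthlib/
    ├── oauthlib/
    └── ...
```

Upload the zip under **Lambda → Layers → Create layer**, selecting python3.11 as a compatible runtime.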



&lt;h2&gt;Step 6: Create the Lambda function&lt;/h2&gt;

&lt;p&gt;Create a Lambda function:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Name:&lt;/strong&gt; &lt;code&gt;x-quote-bot&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runtime:&lt;/strong&gt; Python 3.11&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture:&lt;/strong&gt; x86_64&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;AWS Lambda → Functions → Create function&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;Author from scratch&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Function name: &lt;code&gt;x-quote-bot&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Runtime: &lt;strong&gt;Python 3.11&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Architecture: &lt;strong&gt;x86_64&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Permissions&lt;/strong&gt;, choose &lt;strong&gt;Use an existing role&lt;/strong&gt; and select your role (example: &lt;code&gt;x-quote-bot-lambda-role&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create function&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj6rwj2ygxc2xds335eq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj6rwj2ygxc2xds335eq.png" alt=" " width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;Step 6.1: IAM permissions for the Lambda role&lt;/h3&gt;

&lt;p&gt;Your Lambda execution role must be able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read the CSV from S3&lt;/li&gt;
&lt;li&gt;Read the secret from Secrets Manager&lt;/li&gt;
&lt;li&gt;Read/write the DynamoDB state item&lt;/li&gt;
&lt;li&gt;Write logs to CloudWatch&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At minimum, allow these actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;s3:GetObject&lt;/code&gt; on &lt;code&gt;arn:aws:s3:::har-vmat-xbot-ap-south-1/quotes.csv&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;secretsmanager:GetSecretValue&lt;/code&gt; on the secret &lt;code&gt;x-bot-credentials&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;dynamodb:GetItem&lt;/code&gt; and &lt;code&gt;dynamodb:PutItem&lt;/code&gt; on table &lt;code&gt;x_quote_bot_state&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;CloudWatch logs permissions (usually via &lt;code&gt;AWSLambdaBasicExecutionRole&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
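&lt;p&gt;Put together, a least-privilege inline policy for the role looks roughly like this (replace &lt;code&gt;ACCOUNT_ID&lt;/code&gt; with yours; Secrets Manager appends a random 6-character suffix to secret ARNs, hence the trailing &lt;code&gt;-*&lt;/code&gt;, or copy the exact ARN from the console). Attach &lt;code&gt;AWSLambdaBasicExecutionRole&lt;/code&gt; separately for the CloudWatch logs part:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::har-vmat-xbot-ap-south-1/quotes.csv"
    },
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:ap-south-1:ACCOUNT_ID:secret:x-bot-credentials-*"
    },
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
      "Resource": "arn:aws:dynamodb:ap-south-1:ACCOUNT_ID:table/x_quote_bot_state"
    }
  ]
}
```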

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficss5iold6zc5mhqyyp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficss5iold6zc5mhqyyp3.png" alt=" " width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;Step 6.2: Attach the Lambda Layer&lt;/h3&gt;

&lt;p&gt;Attach your dependency layer (created earlier) to the function:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open your function → scroll to &lt;strong&gt;Layers&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add a layer&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;Custom layers&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select your layer (example: &lt;code&gt;x-requests-oauthlib&lt;/code&gt;) and pick the latest version&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After attaching, you should see it listed with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Compatible runtime: &lt;strong&gt;python3.11&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Compatible architecture: &lt;strong&gt;x86_64&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jkyyad9ik9eyoljz1za.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jkyyad9ik9eyoljz1za.png" alt=" " width="800" height="305"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;Step 6.3: Add environment variables&lt;/h3&gt;

&lt;p&gt;Go to &lt;strong&gt;Configuration → Environment variables → Edit&lt;/strong&gt; and add:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;BUCKET&lt;/code&gt; = &lt;code&gt;har-vmat-xbot-ap-south-1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;KEY&lt;/code&gt; = &lt;code&gt;quotes.csv&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SECRET_ID&lt;/code&gt; = &lt;code&gt;x-bot-credentials&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TABLE&lt;/code&gt; = &lt;code&gt;x_quote_bot_state&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PK_VALUE&lt;/code&gt; = &lt;code&gt;har_vmat&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ucdq6fsnr1zed2odkeo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ucdq6fsnr1zed2odkeo.png" alt=" " width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Step 7: Add the Lambda code (CSV from S3 → post to X → update DynamoDB)&lt;/h2&gt;

&lt;p&gt;Open &lt;strong&gt;Code → lambda_function.py&lt;/strong&gt;, replace the contents with the code below, then click &lt;strong&gt;Deploy&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;csv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;requests_oauthlib&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OAuth1&lt;/span&gt;

&lt;span class="n"&gt;s3&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;s3&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;ddb&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dynamodb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;secrets&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;secretsmanager&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;BUCKET&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BUCKET&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;SECRET_ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SECRET_ID&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;TABLE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TABLE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;PK_VALUE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PK_VALUE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;har_vmat&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;X_CREATE_POST_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://api.x.com/2/tweets&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_secret&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;secrets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_secret_value&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;SecretId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;SECRET_ID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SecretString&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;load_quotes_from_s3&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;obj&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_object&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Bucket&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;BUCKET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;raw&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;utf-8-sig&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;UnicodeDecodeError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;raw&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cp1252&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;f&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;StringIO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;reader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;csv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DictReader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;quotes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;reader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;q&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;quote&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;quotes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;quotes&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_next_index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;n_quotes&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_item&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pk&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;PK_VALUE&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="n"&gt;idx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Item&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{}).&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;next_index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;idx&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n_quotes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;set_next_index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;next_idx&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;put_item&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Item&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pk&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;PK_VALUE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;next_index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;next_idx&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;updated_at&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;utcnow&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;isoformat&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;post_to_x&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;creds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;auth&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OAuth1&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;creds&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;consumer_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;creds&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;consumer_secret&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;creds&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;access_token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;creds&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;access_token_secret&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_CREATE_POST_URL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;RuntimeError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X post failed: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;creds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;get_secret&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;quotes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;load_quotes_from_s3&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;quotes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;no_quotes&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ddb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TABLE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;idx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;get_next_index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;quotes&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;quotes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;idx&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;280&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;[:&lt;/span&gt;&lt;span class="mi"&gt;277&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;…&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;post_to_x&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;creds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;set_next_index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;idx&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;posted&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;idx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tweet&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
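&lt;p&gt;One detail the handler depends on: because &lt;code&gt;set_next_index&lt;/code&gt; stores &lt;code&gt;idx + 1&lt;/code&gt; unconditionally, the counter has to wrap back to &lt;code&gt;0&lt;/code&gt; when it’s read, so the bot cycles through the list forever. A minimal sketch of that wrap-around (the helper name here is illustrative, not the exact code above):&lt;/p&gt;

```python
def wrap_index(stored_counter: int, total_quotes: int) -> int:
    # Reduce the stored counter modulo the number of quotes so the bot
    # cycles back to quote 0 after posting the last one.
    return stored_counter % total_quotes
```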



&lt;h2&gt;
  
  
  Step 8: Test it
&lt;/h2&gt;

&lt;p&gt;Now that your Lambda code, environment variables, IAM role, and layer are all set, it’s time to test an end-to-end run.&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open &lt;strong&gt;AWS Lambda → Functions → &lt;code&gt;x-quote-bot&lt;/code&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Go to the &lt;strong&gt;Test&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new test event:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Event name: &lt;code&gt;test&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Event JSON: &lt;code&gt;{}&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Test&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Expected results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lambda returns something like:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"posted"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"index"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"tweet"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Start where you are. Use what you have. Do what you can."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"YOUR_TWEET_ID"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;A new post appears on your X profile&lt;/li&gt;
&lt;li&gt;DynamoDB item &lt;code&gt;pk = har_vmat&lt;/code&gt; updates:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;next_index&lt;/code&gt; increments (e.g., &lt;code&gt;0 → 1&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fum5pz7m7hs3t3sj0o4mf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fum5pz7m7hs3t3sj0o4mf.png" alt=" " width="800" height="633"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxfemi8ou8cg56rebxo7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxfemi8ou8cg56rebxo7.png" alt=" " width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;That’s it — you now have a working, serverless X quote bot powered by AWS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3&lt;/strong&gt; stores your content (&lt;code&gt;quotes.csv&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB&lt;/strong&gt; tracks which quote is next&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets Manager&lt;/strong&gt; securely stores your X credentials&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda&lt;/strong&gt; posts to X using OAuth 1.0a&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EventBridge&lt;/strong&gt; runs it on a daily schedule (optional but recommended)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The best part: once it’s set up, it’s boring (in a good way). It just runs.&lt;/p&gt;

&lt;p&gt;If you’re building your own version, I’d recommend starting simple (like this), and only then adding improvements like randomization, richer content types, or alerting.&lt;/p&gt;
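&lt;p&gt;For example, if you add randomization, a tiny guard against repeating the previous quote keeps the feed feeling fresh. A sketch (this helper isn’t part of the bot above):&lt;/p&gt;

```python
import random

def pick_random_index(total, last_index=None):
    """Pick a random quote index, avoiding an immediate repeat of the last one."""
    if total <= 1:
        return 0
    choices = [i for i in range(total) if i != last_index]
    return random.choice(choices)
```

You would store &lt;code&gt;last_index&lt;/code&gt; in DynamoDB instead of &lt;code&gt;next_index&lt;/code&gt;, with everything else unchanged.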




</description>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
      <category>webdev</category>
    </item>
    <item>
      <title>AWS Trends to Watch in 2026: Agentic AI, FinOps, Serverless, and Sustainable Infra</title>
      <dc:creator>Harsha Mathan</dc:creator>
      <pubDate>Sun, 25 Jan 2026 16:20:41 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-trends-to-watch-in-2026-agentic-ai-finops-serverless-and-sustainable-infra-4e5j</link>
      <guid>https://dev.to/aws-builders/aws-trends-to-watch-in-2026-agentic-ai-finops-serverless-and-sustainable-infra-4e5j</guid>
      <description>&lt;p&gt;2026 is shaping up to be the year AWS “building blocks” start behaving more like &lt;em&gt;systems&lt;/em&gt;: AI that can plan and act, cloud spend that’s governed like product quality, serverless that streams and scales for real-time experiences, and sustainability metrics that finally show up in the same conversations as latency and cost.&lt;/p&gt;

&lt;p&gt;This post summarizes notable themes AWS has emphasized recently across official announcements, “What’s New” updates, and AWS blog deep dives (AWS News, Machine Learning, Cloud Financial Management, Compute/Serverless, and Infrastructure &amp;amp; Sustainability).&lt;/p&gt;

&lt;p&gt;Let’s break down four trends—and what to &lt;em&gt;do&lt;/em&gt; about each.&lt;/p&gt;




&lt;h2&gt;
  
  
  1) Agentic AI goes from demos to production systems
&lt;/h2&gt;

&lt;p&gt;The hype cycle is real, but the platform pieces are getting more concrete: &lt;strong&gt;agents&lt;/strong&gt;, &lt;strong&gt;tool use&lt;/strong&gt;, &lt;strong&gt;memory&lt;/strong&gt;, &lt;strong&gt;identity&lt;/strong&gt;, &lt;strong&gt;governance&lt;/strong&gt;, and &lt;strong&gt;async orchestration&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s changing in 2026
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Agents become an application tier&lt;/strong&gt;, not a feature. Instead of “chat with your data,” teams are shipping agents that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;take goals and constraints,&lt;/li&gt;
&lt;li&gt;select tools,&lt;/li&gt;
&lt;li&gt;execute multi-step workflows,&lt;/li&gt;
&lt;li&gt;and report outcomes (with guardrails).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A key shift: treating agent runs like reliable jobs (traceable, retryable, auditable), not like a single chat completion.&lt;/p&gt;

&lt;h3&gt;
  
  
  What you should build toward
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Pattern: “Agent + event loop” (async by default)
&lt;/h4&gt;

&lt;p&gt;For production, treat agent runs like jobs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kickoff event&lt;/li&gt;
&lt;li&gt;plan&lt;/li&gt;
&lt;li&gt;tool calls happen in steps&lt;/li&gt;
&lt;li&gt;results stream back&lt;/li&gt;
&lt;li&gt;failures retry safely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This reduces user-facing latency and makes “long thinking” feasible without timeouts.&lt;/p&gt;

&lt;h4&gt;
  
  
  Guardrails move left
&lt;/h4&gt;

&lt;p&gt;In 2026, the winning teams will bake in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;least-privilege identity&lt;/strong&gt; for tool calls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;audit trails&lt;/strong&gt; for decisions and actions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;bounded autonomy&lt;/strong&gt; (what an agent &lt;em&gt;may&lt;/em&gt; do)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;cost and rate limits&lt;/strong&gt; per agent/task&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Practical checklist
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Make agent runs &lt;strong&gt;idempotent&lt;/strong&gt; (retry-safe)&lt;/li&gt;
&lt;li&gt;[ ] Log every tool call with inputs/outputs (redact secrets)&lt;/li&gt;
&lt;li&gt;[ ] Add “stop conditions” (budget, time, max steps)&lt;/li&gt;
&lt;li&gt;[ ] Default to &lt;strong&gt;async orchestration&lt;/strong&gt; (queues + workflows)&lt;/li&gt;
&lt;li&gt;[ ] Separate “planner” from “executor” in your design&lt;/li&gt;
&lt;/ul&gt;
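&lt;p&gt;To make the checklist concrete, here is a toy agent loop that separates the planner from the executor and enforces stop conditions (all names are illustrative, not a specific AWS API):&lt;/p&gt;

```python
import time

def run_agent(planner, executor, max_steps=10, time_budget_s=30.0):
    """Toy agent loop with explicit stop conditions (max steps + time budget)."""
    start = time.monotonic()
    results = []
    for _ in range(max_steps):
        if time.monotonic() - start > time_budget_s:
            return {"status": "stopped", "reason": "time_budget", "results": results}
        action = planner(results)          # planner decides the next tool call
        if action is None:                 # planner signals completion
            return {"status": "done", "results": results}
        results.append(executor(action))   # executor performs exactly one step
    return {"status": "stopped", "reason": "max_steps", "results": results}
```

In production, each &lt;code&gt;executor&lt;/code&gt; step would be an idempotent, logged tool call behind a queue or workflow, but the stop conditions look the same.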




&lt;h2&gt;
  
  
  2) FinOps becomes continuous spend engineering (and AI joins the loop)
&lt;/h2&gt;

&lt;p&gt;In 2026, cost management stops being a monthly report and becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a daily operational practice,&lt;/li&gt;
&lt;li&gt;a product metric,&lt;/li&gt;
&lt;li&gt;and something teams can query conversationally—&lt;em&gt;with governance&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What’s changing in 2026
&lt;/h3&gt;

&lt;p&gt;The trend is clear: more mature allocation and governance workflows, and cost insights becoming easier to operationalize. Practically, that means teams are moving from “reduce the bill” to &lt;strong&gt;engineering unit economics&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What you should build toward
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Treat cost like reliability: SLOs, alerts, and ownership
&lt;/h4&gt;

&lt;p&gt;A simple model that works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each workload has a &lt;strong&gt;cost owner&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Define &lt;strong&gt;cost SLOs&lt;/strong&gt; (e.g., “&amp;lt;$X per 1k requests” or “&amp;lt;$Y per tenant per day”)&lt;/li&gt;
&lt;li&gt;Alert on drift, not just spikes&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Normalize “unit cost”
&lt;/h4&gt;

&lt;p&gt;Instead of “our bill went up,” push toward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cost per order&lt;/li&gt;
&lt;li&gt;cost per inference&lt;/li&gt;
&lt;li&gt;cost per active user&lt;/li&gt;
&lt;li&gt;cost per GB processed&lt;/li&gt;
&lt;/ul&gt;
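&lt;p&gt;In code, a unit-cost check can be as small as this sketch (thresholds and names are illustrative):&lt;/p&gt;

```python
def unit_cost(total_cost, units):
    """Cost per unit of work (per order, per inference, per active user, ...)."""
    return total_cost / max(units, 1)

def cost_drifted(current, baseline, tolerance=0.20):
    """Flag gradual drift against a baseline, not just absolute spikes."""
    if baseline <= 0:
        return current > 0
    return (current - baseline) / baseline > tolerance
```

The point isn’t the arithmetic; it’s that the baseline and tolerance are owned, reviewed numbers rather than a surprise on the monthly bill.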

&lt;h4&gt;
  
  
  Make cost explainable
&lt;/h4&gt;

&lt;p&gt;A surprising amount of waste is “unknown unknowns.” Your goal is not just savings—it’s &lt;strong&gt;predictability&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Practical checklist
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Define 2–3 &lt;strong&gt;unit cost metrics&lt;/strong&gt; per product&lt;/li&gt;
&lt;li&gt;[ ] Add &lt;strong&gt;budget + anomaly&lt;/strong&gt; alerts per team/workload&lt;/li&gt;
&lt;li&gt;[ ] Standardize tagging + allocation rules&lt;/li&gt;
&lt;li&gt;[ ] Create a weekly “top changes” cost review (15 minutes)&lt;/li&gt;
&lt;li&gt;[ ] Treat optimizations as backlog items with owners and deadlines&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  3) Serverless grows up: streaming, event enrichment, and AI-native architectures
&lt;/h2&gt;

&lt;p&gt;Serverless isn’t just “Lambda + API Gateway” anymore. In 2026, it’s the default fabric for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;event-driven systems,&lt;/li&gt;
&lt;li&gt;real-time experiences,&lt;/li&gt;
&lt;li&gt;and AI apps that need to stream results fast.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What’s changing in 2026
&lt;/h3&gt;

&lt;p&gt;The design center is moving toward &lt;strong&gt;stream-first UX&lt;/strong&gt; and &lt;strong&gt;event-first backend design&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;stream partial results quickly&lt;/li&gt;
&lt;li&gt;run long tasks asynchronously&lt;/li&gt;
&lt;li&gt;keep services loosely coupled via events&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What you should build toward
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Pattern: Stream-first user experiences
&lt;/h4&gt;

&lt;p&gt;Users tolerate longer responses if they see progress quickly.&lt;/p&gt;

&lt;p&gt;Architect for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;fast &lt;strong&gt;time-to-first-byte&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;progressive rendering&lt;/li&gt;
&lt;li&gt;partial results + follow-up refinement&lt;/li&gt;
&lt;/ul&gt;
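&lt;p&gt;The shape of a stream-first response is simple: emit something immediately, then refine. A toy sketch with a Python generator:&lt;/p&gt;

```python
def stream_progressively(work_steps):
    """Emit a fast first chunk, then partial results as each step completes."""
    yield "Working on it…"          # fast time-to-first-byte
    for partial in work_steps:
        yield partial               # progressive rendering of partial results
```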

&lt;h4&gt;
  
  
  Pattern: Event enrichment
&lt;/h4&gt;

&lt;p&gt;Instead of pushing raw triggers downstream, enrich events early so consumers stay simple and reusable.&lt;/p&gt;
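&lt;p&gt;A minimal sketch of what enrichment might look like (the lookup source and field names are illustrative):&lt;/p&gt;

```python
def enrich_event(event, customer_directory):
    """Attach context to a raw trigger so every consumer doesn't re-fetch it."""
    enriched = dict(event)  # never mutate the original event
    enriched["customer"] = customer_directory.get(event.get("customer_id"), {})
    return enriched
```

Downstream consumers now read &lt;code&gt;event["customer"]&lt;/code&gt; directly instead of each making their own lookup call.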

&lt;h3&gt;
  
  
  Practical checklist
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Prefer streaming for any response &amp;gt; 1–2 seconds&lt;/li&gt;
&lt;li&gt;[ ] Use queues/workflows for long tasks; stream progress&lt;/li&gt;
&lt;li&gt;[ ] Enrich events early so consumers stay simple&lt;/li&gt;
&lt;li&gt;[ ] Design for retries, dedupe, and dead-letter handling&lt;/li&gt;
&lt;li&gt;[ ] Measure: TTFB, p95 latency, error rate, cost per request&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  4) Sustainable infrastructure becomes measurable (and measurable becomes manageable)
&lt;/h2&gt;

&lt;p&gt;Sustainability is shifting from goals to instrumentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s changing in 2026
&lt;/h3&gt;

&lt;p&gt;The big trend: sustainability metrics are becoming easier to attribute and discuss alongside cost and performance—so teams can make practical tradeoffs, not vague promises.&lt;/p&gt;

&lt;h3&gt;
  
  
  What you should build toward
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Sustainability joins the non-functional requirements list
&lt;/h4&gt;

&lt;p&gt;In 2026, mature teams treat sustainability like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;security&lt;/li&gt;
&lt;li&gt;reliability&lt;/li&gt;
&lt;li&gt;cost&lt;/li&gt;
&lt;li&gt;performance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Meaning: tracked, reviewed, improved continuously.&lt;/p&gt;

&lt;h4&gt;
  
  
  Pattern: Reduce waste first (it helps cost too)
&lt;/h4&gt;

&lt;p&gt;If you do nothing else:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;right-size&lt;/li&gt;
&lt;li&gt;remove idle resources&lt;/li&gt;
&lt;li&gt;prefer managed services when appropriate&lt;/li&gt;
&lt;li&gt;match capacity to demand&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Practical checklist
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Review carbon metrics quarterly and align them to workload ownership&lt;/li&gt;
&lt;li&gt;[ ] Tag workloads so carbon and cost can be attributed together&lt;/li&gt;
&lt;li&gt;[ ] Optimize idle and overprovisioned resources first&lt;/li&gt;
&lt;li&gt;[ ] Bake sustainability checks into architecture reviews&lt;/li&gt;
&lt;li&gt;[ ] Track improvements like you track cost savings (because they’re related)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The “2026 architecture”: Agents + Serverless + FinOps + Sustainability
&lt;/h2&gt;

&lt;p&gt;Here’s the meta-trend: these four areas are converging.&lt;/p&gt;

&lt;p&gt;A practical “north star” architecture looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Agentic layer&lt;/strong&gt; decides what to do
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serverless event fabric&lt;/strong&gt; executes steps reliably and scales instantly
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FinOps telemetry&lt;/strong&gt; measures unit cost and prevents budget surprises
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sustainability metrics&lt;/strong&gt; provide carbon visibility and reduction targets
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The companies that win in 2026 won’t be the ones adopting the most services—they’ll be the ones building the tightest feedback loops.&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick-start plan
&lt;/h2&gt;

&lt;p&gt;If you want a simple, high-leverage sequence:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pick one workflow and build an &lt;strong&gt;async agent&lt;/strong&gt; prototype (bounded scope, strict permissions).&lt;/li&gt;
&lt;li&gt;Put it behind &lt;strong&gt;serverless orchestration&lt;/strong&gt; and stream progress/results to users.&lt;/li&gt;
&lt;li&gt;Define &lt;strong&gt;unit cost&lt;/strong&gt; for the workflow and add budget/anomaly alerts.&lt;/li&gt;
&lt;li&gt;Treat waste removal (idle, overprovisioned, unnecessary data movement) as roadmap items.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Closing thought
&lt;/h2&gt;

&lt;p&gt;2026 isn’t about picking &lt;em&gt;one&lt;/em&gt; trend. It’s about building systems where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI can act safely,&lt;/li&gt;
&lt;li&gt;serverless can deliver instantly,&lt;/li&gt;
&lt;li&gt;cost is governed continuously,&lt;/li&gt;
&lt;li&gt;and sustainability is measurable and improvable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What trend are you most excited (or worried) about in 2026—agents, FinOps, serverless streaming, or sustainability?&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>ai</category>
      <category>serverless</category>
    </item>
    <item>
      <title>My Experience as a Speaker at Amazon, Hyderabad</title>
      <dc:creator>Harsha Mathan</dc:creator>
      <pubDate>Thu, 20 Nov 2025 12:00:09 +0000</pubDate>
      <link>https://dev.to/aws-builders/my-experience-as-a-speaker-at-aamazon-hyderabad-5110</link>
      <guid>https://dev.to/aws-builders/my-experience-as-a-speaker-at-aamazon-hyderabad-5110</guid>
      <description>&lt;p&gt;A few days ago, I had the privilege of speaking at &lt;strong&gt;AWS Data Day&lt;/strong&gt;, hosted at the Amazon HQ in Hyderabad.&lt;br&gt;&lt;br&gt;
This wasn’t just another event for me — it turned into one of the most defining moments of my career.&lt;/p&gt;

&lt;p&gt;Being invited as a &lt;strong&gt;speaker&lt;/strong&gt;, and in fact the &lt;strong&gt;only non-AWS speaker&lt;/strong&gt; for the entire day, was both humbling and energizing.&lt;br&gt;&lt;br&gt;
I presented on a topic deeply connected to my work at Verisk:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“Our SQL Server to Snowflake Migration Journey using AWS DMS — Best Practices, Pitfalls, and Lessons Learned.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🎤 The Preparation
&lt;/h2&gt;

&lt;p&gt;I wanted this session to be more than a technical walkthrough.&lt;br&gt;&lt;br&gt;
So I focused on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storytelling
&lt;/li&gt;
&lt;li&gt;Real-world failures and mistakes
&lt;/li&gt;
&lt;li&gt;Architecture decisions
&lt;/li&gt;
&lt;li&gt;Lessons learned the hard way
&lt;/li&gt;
&lt;li&gt;Practical patterns teams can apply immediately
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I spent days refining the narrative, improving diagrams, and making sure the session was valuable and relatable for engineers and architects.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktsgfzqyvpl1sy2w5n0l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktsgfzqyvpl1sy2w5n0l.png" alt=" " width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6f0lajvobc6x1xuyv637.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6f0lajvobc6x1xuyv637.png" alt=" " width="759" height="1011"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvxyk4r8f0qbp13bejxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvxyk4r8f0qbp13bejxf.png" alt=" " width="759" height="1011"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzt0s8spph5qvgvezc9s0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzt0s8spph5qvgvezc9s0.png" alt=" " width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🏟️ The Event Day: A Surge of Energy
&lt;/h2&gt;

&lt;p&gt;Walking into the AWS office and seeing a room full of cloud engineers, architects, and data practitioners was surreal.&lt;/p&gt;

&lt;p&gt;Once I started speaking, something clicked.&lt;br&gt;&lt;br&gt;
It didn’t feel like presenting — it felt like &lt;strong&gt;teaching&lt;/strong&gt;, something I naturally enjoy and did for years as a freelance trainer.&lt;/p&gt;

&lt;p&gt;The audience engagement was incredible:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;People took notes
&lt;/li&gt;
&lt;li&gt;Asked deep, technical questions
&lt;/li&gt;
&lt;li&gt;Took photos of the slides
&lt;/li&gt;
&lt;li&gt;Reacted to the stories
&lt;/li&gt;
&lt;li&gt;Stayed fully connected throughout the talk
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Several attendees later told me:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Yours was one of the standout sessions today.”&lt;br&gt;&lt;br&gt;
“This answered exactly the challenges our teams are facing.”&lt;br&gt;&lt;br&gt;
“Wish this was a longer session.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Those words meant a lot.&lt;/p&gt;




&lt;p&gt;
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3n0q0izrx5bnkymjwe4.png" alt=" " width="739" height="1600"&gt;
&lt;/p&gt;




&lt;h2&gt;
  
  
  ⭐ The Feedback: 4.9/5 Rating
&lt;/h2&gt;

&lt;p&gt;At the end of the day, my session received a &lt;strong&gt;4.9/5 rating&lt;/strong&gt; from attendees.&lt;/p&gt;

&lt;p&gt;This wasn’t just feedback — it was a reminder of what I truly enjoy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplifying complexity
&lt;/li&gt;
&lt;li&gt;Explaining concepts clearly
&lt;/li&gt;
&lt;li&gt;Public speaking
&lt;/li&gt;
&lt;li&gt;Storytelling
&lt;/li&gt;
&lt;li&gt;Teaching
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This event brought that clarity back to the surface.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔍 Key Takeaways I Shared
&lt;/h2&gt;

&lt;p&gt;Here are some of the core points from my talk:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✔️ Common pitfalls in AWS DMS&lt;/li&gt;
&lt;li&gt;✔️ Architecture choices that matter&lt;/li&gt;
&lt;li&gt;✔️ Why “lift and shift” doesn’t work for real migrations&lt;/li&gt;
&lt;li&gt;✔️ How to redesign for Snowflake instead of porting blindly&lt;/li&gt;
&lt;li&gt;✔️ Handling performance gaps early&lt;/li&gt;
&lt;li&gt;✔️ Lessons learned from production-scale workloads&lt;/li&gt;
&lt;li&gt;✔️ Frameworks to make migrations predictable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These resonated strongly with the audience, especially the practical examples based on real-world challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  🙏 Thank You, AWS &amp;amp; the Community
&lt;/h2&gt;

&lt;p&gt;I’m deeply grateful to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The AWS team for the invitation
&lt;/li&gt;
&lt;li&gt;The audience for the engagement
&lt;/li&gt;
&lt;li&gt;Everyone who messaged afterward with kind words
&lt;/li&gt;
&lt;li&gt;My colleagues who supported the migration journey
&lt;/li&gt;
&lt;li&gt;The larger Data &amp;amp; AI community that continues to inspire learning
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🚀 What’s Next for Me?
&lt;/h2&gt;

&lt;p&gt;This experience encouraged me to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Speak at more meetups
&lt;/li&gt;
&lt;li&gt;Build and share more architecture content
&lt;/li&gt;
&lt;li&gt;Conduct hands-on workshops
&lt;/li&gt;
&lt;li&gt;Mentor engineers
&lt;/li&gt;
&lt;li&gt;Strengthen my voice in the Data &amp;amp; AI community
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’d like me to speak at your event or meetup, feel free to reach out!&lt;/p&gt;




&lt;h2&gt;
  
  
  🌟 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;AWS Data Day wasn’t just a speaking opportunity — it felt like a &lt;strong&gt;career turning point&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
It reminded me of what I love doing, what I’m good at, and what I want to do more of.&lt;/p&gt;

&lt;p&gt;If you attended the session, thank you for making it special.&lt;br&gt;&lt;br&gt;
If not, I hope to meet you at another event soon!&lt;/p&gt;

</description>
      <category>genai</category>
      <category>aws</category>
      <category>dms</category>
      <category>rag</category>
    </item>
    <item>
      <title>Building a Proactive AI Travel Agent on AWS Bedrock AgentCore (Final Part)</title>
      <dc:creator>Harsha Mathan</dc:creator>
      <pubDate>Tue, 23 Sep 2025 08:56:06 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-bedrock-agentcore-final-part--p5n</link>
      <guid>https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-bedrock-agentcore-final-part--p5n</guid>
      <description>&lt;p&gt;This is the final version of my 4-part blog series on using the AWS service &lt;strong&gt;Bedrock AgentCore&lt;/strong&gt;. In this post, I’ll share a high-level overview of what AgentCore is, what we’re trying to accomplish, and how we can unlock its full potential.  &lt;/p&gt;

&lt;p&gt;If you’re seeing this for the first time, I’ve detailed the development journey in the earlier three posts—you can quickly refer to them if you’d like more depth or clarifications:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-my-journey-with-bedrock-agentcore-part-1-36c7"&gt;Part 1&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-my-journey-with-bedrock-agentcore-part-2-199i"&gt;Part 2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/aws-builders/migrating-a-local-multi-agent-travel-system-to-aws-bedrock-agentcorepart-3-45eh"&gt;Part 3&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Reference to Github : &lt;a href="https://github.com/hvmathan/bedrock-travel-agent" rel="noopener noreferrer"&gt;https://github.com/hvmathan/bedrock-travel-agent&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That said, you can still follow along with just this post—it’s self-contained and focuses on &lt;strong&gt;production deployment with real APIs&lt;/strong&gt;.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Why Bedrock AgentCore Matters
&lt;/h2&gt;

&lt;p&gt;In this series, I’ve been building an AI travel agent. Now, creating a travel agent chatbot isn’t new; there are plenty of ways to do it. The &lt;em&gt;real difference&lt;/em&gt; here is &lt;strong&gt;how we deploy it&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;We start by developing and testing locally in Python, but once the logic is solid, instead of following the &lt;strong&gt;traditional cloud setup&lt;/strong&gt; (ECS/EKS cluster provisioning, load balancer configuration, auto-scaling, IAM role/policy management, CloudWatch setup, and ALB rules), we bypass all of that.  &lt;/p&gt;

&lt;p&gt;With AgentCore, all of this happens &lt;strong&gt;behind the scenes&lt;/strong&gt;, automatically. The deployment itself took me &lt;strong&gt;less than a minute&lt;/strong&gt;, which is mind-blowing compared to the days of setup we usually spend on infrastructure.  &lt;/p&gt;

&lt;h2&gt;
  
  
  The Framework Question
&lt;/h2&gt;

&lt;p&gt;Apart from Bedrock AgentCore, the next key choice was the &lt;strong&gt;framework&lt;/strong&gt;. Since I’m working with a &lt;strong&gt;multi-agent system&lt;/strong&gt;, I chose &lt;strong&gt;Strands Agent&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;But wait, what’s a framework, and why does it matter?  &lt;/p&gt;

&lt;p&gt;Think of building an AI system like constructing a house:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;AWS cloud (AgentCore)&lt;/strong&gt; is the land and utilities (electricity, water, internet).
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;framework&lt;/strong&gt; is the architecture + scaffolding: it gives you ready-made walls, wiring, and plumbing so you don’t have to reinvent everything.
&lt;/li&gt;
&lt;li&gt;You can always build with raw bricks and cement (custom implementation), but that means drawing up blueprints, laying pipes, and wiring everything from scratch. It’s possible, but slow.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Strands Agent&lt;/strong&gt; was like a pre-built house plan &lt;strong&gt;already designed for AgentCore land&lt;/strong&gt;; it fit perfectly. No hacks, no duct tape, no “custom adapters.” Just code the agent logic and deploy.  &lt;/p&gt;

&lt;p&gt;That’s why Strands became my framework of choice: it aligned directly with AWS Bedrock, unlike general-purpose frameworks such as LangChain or CrewAI, which are more powerful in some ways but require a lot of integration work for AgentCore.  &lt;/p&gt;

&lt;h3&gt;
  
  
  File Structure Breakdown
&lt;/h3&gt;

&lt;p&gt;Let me walk through the final project structure and what each file accomplishes. These files are available in the GitHub repository linked above.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AgentCore Production Files:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;app.py&lt;/code&gt; - Main AgentCore wrapper with CORS support&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;orchestrator.py&lt;/code&gt; - Core coordination logic (unchanged from local)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;planning_agent.py&lt;/code&gt; - Enhanced with regex parsing for flight requests&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;travel_tool.py&lt;/code&gt; - Updated with RealFlightAPI integration&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;flight_api.py&lt;/code&gt; - Aviationstack API integration with fallback system&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;agentcore.yaml&lt;/code&gt; - AgentCore deployment configuration&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;requirements.txt&lt;/code&gt; - Dependencies for cloud deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Local Development/Testing Files:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;mock_travel_api.py&lt;/code&gt; - Original mock data for development&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;booking_tools.py&lt;/code&gt; - Early booking agent prototype&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;planning_tools.py&lt;/code&gt; - Standalone planning utilities&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;run_agent.py&lt;/code&gt; - Local testing script&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;my_agent.py&lt;/code&gt; - Initial agent prototype&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;orchestrator copy.py&lt;/code&gt; - Backup of original orchestrator&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;UI and Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;travel_ui.html&lt;/code&gt; - React-based professional interface&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Dockerfile&lt;/code&gt; - Auto-generated by AgentCore&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.env&lt;/code&gt; - API keys (not in repo)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;venv/&lt;/code&gt; - Python virtual environment&lt;/li&gt;
&lt;/ul&gt;
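
&lt;p&gt;To illustrate the regex parsing mentioned for &lt;code&gt;planning_agent.py&lt;/code&gt;, here is a minimal sketch of the kind of pattern that can pull IATA airport codes out of a request like “Find flights from LHR to CDG.” The function name and pattern are illustrative assumptions, not the actual project code:&lt;br&gt;
&lt;/p&gt;

```python
import re

# Illustrative pattern: capture two 3-letter IATA codes in "from XXX to YYY".
FLIGHT_RE = re.compile(r"from\s+([A-Za-z]{3})\s+to\s+([A-Za-z]{3})")

def parse_flight_request(text):
    """Return (origin, dest) uppercase IATA codes, or None if no match."""
    match = FLIGHT_RE.search(text)
    if not match:
        return None
    return match.group(1).upper(), match.group(2).upper()
```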

&lt;h2&gt;
  
  
  Deployment: Success and Challenges
&lt;/h2&gt;

&lt;p&gt;I first developed and tested the application locally, and it worked exactly as I envisioned. For this, I integrated a third-party API that provides 100 free calls per month—more than enough for our demo purposes (covered in detail in Part 2 of this series).&lt;/p&gt;

&lt;p&gt;Now that we’ve thoroughly tested the application in our local environment, it’s finally time to &lt;strong&gt;promote it to AWS&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;This is where the &lt;strong&gt;real magic and power of AgentCore&lt;/strong&gt; shines through. Remember the contrast I mentioned earlier—traditional deployments require hours (sometimes days) of configuring ECS/EKS clusters, IAM roles, load balancers, scaling policies, and monitoring.  &lt;/p&gt;

&lt;p&gt;With AgentCore, all of that complexity disappears. What normally takes days can now be done in &lt;strong&gt;just a few minutes&lt;/strong&gt; with a single command.  &lt;/p&gt;

&lt;p&gt;In the screenshots below, you’ll see how effortlessly the application was configured and launched to AWS using AgentCore—no manual setup, no YAML headaches, just a seamless push from local code to a production-ready environment.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6tjqulmx3203tz7csdv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6tjqulmx3203tz7csdv.png" alt=" " width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiv7qdqprvy75lna47u5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiv7qdqprvy75lna47u5w.png" alt=" " width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jpvoxqyv2zbzjf8h02q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jpvoxqyv2zbzjf8h02q.png" alt=" " width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Deploying to AWS Bedrock AgentCore was remarkably smooth:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;agentcore configure &lt;span class="nt"&gt;-e&lt;/span&gt; app.py &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2
agentcore launch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within minutes, I had:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ARM64 container built in AWS CodeBuild&lt;/li&gt;
&lt;li&gt;ECR repository created and populated&lt;/li&gt;
&lt;li&gt;IAM roles automatically configured&lt;/li&gt;
&lt;li&gt;Production endpoint deployed and ready&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can now see our application up and running on AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdco3il9uaq77flsii8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdco3il9uaq77flsii8k.png" alt=" " width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I can also test the deployment from my local machine using the command below. The agent works perfectly via the CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;agentcore invoke &lt;span class="s1"&gt;'{"message": "Find flights from LHR to CDG"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  UI Development: Building a Professional Interface
&lt;/h2&gt;

&lt;p&gt;So far, we’ve explored one of the most powerful components of Bedrock AgentCore—the &lt;strong&gt;Runtime&lt;/strong&gt;. To take things a step further, I wanted to give the project a more realistic touch with a proper user interface. After all, what’s an AI booking agent without an engaging front end?  &lt;/p&gt;

&lt;p&gt;I decided to build a &lt;strong&gt;React-based interface&lt;/strong&gt; that showcases both local and cloud capabilities. It’s packaged as a simple HTML file, but it includes all the core functionalities I envisioned for an AI-powered booking experience, as you can see in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foli9sdpyav9qqejjizw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foli9sdpyav9qqejjizw8.png" alt=" " width="800" height="850"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Endpoint switching between local development and cloud deployment&lt;/li&gt;
&lt;li&gt;Real-time chat interface with typing indicators&lt;/li&gt;
&lt;li&gt;Quick action buttons for common requests&lt;/li&gt;
&lt;li&gt;Professional gradient design with AWS branding&lt;/li&gt;
&lt;li&gt;Mobile-responsive layout&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The UI reveals an interesting architectural decision: while the local mode works seamlessly with HTTP calls, the cloud mode requires AWS SDK integration for proper authentication, making direct browser-to-AgentCore communication complex.&lt;/p&gt;
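
&lt;p&gt;For the local mode, the plumbing is just an HTTP POST with the same JSON shape the CLI sends. Here is a hedged sketch of what that call might look like; the endpoint URL is an assumption for illustration, while the &lt;code&gt;{"message": ...}&lt;/code&gt; payload mirrors the &lt;code&gt;agentcore invoke&lt;/code&gt; example shown earlier:&lt;br&gt;
&lt;/p&gt;

```python
import json
import urllib.request

def build_payload(message):
    """Serialize the chat message into the JSON body the agent expects."""
    return json.dumps({"message": message}).encode("utf-8")

def invoke_local(message, url="http://localhost:8080/invocations"):
    """POST the message to a locally running agent and return its JSON reply.

    The URL is a hypothetical local endpoint for illustration.
    """
    req = urllib.request.Request(
        url,
        data=build_payload(message),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

The cloud mode cannot use this path directly from a browser, because requests to AgentCore must be signed with AWS credentials; a small backend proxy that performs the signing is one way around that.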

&lt;p&gt;I performed a flight search, and it returned exactly the result I expected, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxavlokotvyxxwongcuy7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxavlokotvyxxwongcuy7.png" alt=" " width="800" height="850"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3edgu6boztys0gcp07xp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3edgu6boztys0gcp07xp.png" alt=" " width="800" height="850"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  How AgentCore Simplified My AWS Deployment Experience
&lt;/h1&gt;

&lt;p&gt;One of the finest use cases of &lt;strong&gt;AgentCore&lt;/strong&gt; I personally found impressive was how effortlessly we deployed a tested version into AWS production.  &lt;/p&gt;

&lt;p&gt;Instead of juggling YAML files, IAM roles, and cluster configs, AgentCore’s &lt;strong&gt;Runtime&lt;/strong&gt; automated everything in one command.  &lt;/p&gt;

&lt;h2&gt;
  
  
  What AgentCore Runtime Handled Automatically
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Container Building&lt;/strong&gt; → ARM64 container built in AWS CodeBuild (no Docker setup needed)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ECR Repository&lt;/strong&gt; → Created and configured without lifting a finger
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM Roles&lt;/strong&gt; → Both execution and CodeBuild roles provisioned with proper policies
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-scaling&lt;/strong&gt; → Production-ready scaling configuration applied out of the box
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Networking&lt;/strong&gt; → VPC, security groups, and load balancing automatically wired
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring&lt;/strong&gt; → CloudWatch logging enabled and integrated
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment Pipeline&lt;/strong&gt; → Complete CI/CD from code → container → running service
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Traditional AWS Deployment Would Require
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Manual ECS/EKS cluster setup
&lt;/li&gt;
&lt;li&gt;Load balancer configuration
&lt;/li&gt;
&lt;li&gt;Auto-scaling group creation
&lt;/li&gt;
&lt;li&gt;IAM policy writing &amp;amp; role management
&lt;/li&gt;
&lt;li&gt;ECR repository and permission setup
&lt;/li&gt;
&lt;li&gt;CloudWatch configuration
&lt;/li&gt;
&lt;li&gt;VPC and security group management
&lt;/li&gt;
&lt;li&gt;ALB rules &amp;amp; health check setup
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  ⏱️ Time Comparison
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;With AgentCore&lt;/strong&gt; → &lt;code&gt;agentcore launch&lt;/code&gt; → &lt;strong&gt;5 minutes total&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Traditional AWS&lt;/strong&gt; → ~&lt;strong&gt;2–3 days&lt;/strong&gt; of infrastructure setup
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Efficiency Achievement
&lt;/h2&gt;

&lt;p&gt;You literally go from &lt;strong&gt;local Python code → production-ready, auto-scaling AWS infrastructure&lt;/strong&gt; in one command.  &lt;/p&gt;

&lt;p&gt;That’s the &lt;strong&gt;core value proposition of AgentCore Runtime&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
✅ Abstract away infrastructure complexity&lt;br&gt;&lt;br&gt;
✅ Maintain enterprise-grade reliability and security&lt;br&gt;&lt;br&gt;
✅ Run seamlessly on the same AWS backbone powering major applications  &lt;/p&gt;

&lt;p&gt;And the best part? Your agent responds with a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;agentcore invoke
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  AgentCore Components Assessment
&lt;/h2&gt;

&lt;p&gt;Let me evaluate which AgentCore components I successfully leveraged:&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Successfully Implemented
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Runtime&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-agent system deployed to AgentCore Runtime&lt;/li&gt;
&lt;li&gt;Agent status: READY and fully functional&lt;/li&gt;
&lt;li&gt;Handles complex orchestration between planning and execution agents&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tools&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real API integration with external services (Aviationstack)&lt;/li&gt;
&lt;li&gt;Multi-agent coordination and delegation&lt;/li&gt;
&lt;li&gt;Natural language processing for travel requests&lt;/li&gt;
&lt;li&gt;Robust fallback mechanisms for reliability&lt;/li&gt;
&lt;/ul&gt;
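
&lt;p&gt;The fallback mechanism deserves a quick sketch: the idea is to try the live API first and silently fall back to mock data so the user always gets a response. The function and data names below are illustrative stand-ins for the real Aviationstack call and the project’s mock dataset:&lt;br&gt;
&lt;/p&gt;

```python
# Hypothetical stand-ins: fetch_live_flights represents the real
# Aviationstack call, MOCK_FLIGHTS represents the project's mock data.
MOCK_FLIGHTS = [
    {"flight": "BA304", "origin": "LHR", "dest": "CDG", "status": "scheduled"},
]

def fetch_live_flights(origin, dest):
    """Placeholder for the live API call; raises on any failure."""
    raise ConnectionError("live API unavailable in this sketch")

def search_flights(origin, dest):
    """Try the live API first; fall back to mock data on any error."""
    try:
        flights = fetch_live_flights(origin, dest)
        source = "live"
    except Exception:
        flights = [f for f in MOCK_FLIGHTS
                   if f["origin"] == origin and f["dest"] == dest]
        source = "fallback"
    return {"flights": flights, "source": source}
```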

&lt;p&gt;If you’re interested in exploring additional functionalities of Bedrock AgentCore, this project can be further extended with:&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next: Enhancement Roadmap
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Phase 5 (Future): Complete AgentCore Integration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Memory Service&lt;/strong&gt;: Implement persistent user preferences and travel history&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Identity Service&lt;/strong&gt;: Add user authentication and personalized experiences&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gateway Service&lt;/strong&gt;: Route all external API calls through AgentCore Gateway&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Observability&lt;/strong&gt;: Custom metrics, dashboards, and alerting&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Advanced Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Proactive Monitoring&lt;/strong&gt;: Price alerts for saved routes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-modal Input&lt;/strong&gt;: Voice commands and image processing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration Expansion&lt;/strong&gt;: Hotels, car rentals, weather, and events&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mobile App&lt;/strong&gt;: Native iOS/Android applications&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What I'd Do Differently
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start with AgentCore from Day 1&lt;/strong&gt;: I spent time building local infrastructure that I later had to adapt. Starting with AgentCore patterns would have been more efficient.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Plan Authentication Early&lt;/strong&gt;: The browser-to-cloud authentication challenge was more complex than anticipated. A backend proxy service would have simplified this.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Implement Memory from the Start&lt;/strong&gt;: User context and preferences would significantly improve the experience.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  What Exceeded Expectations
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment Simplicity&lt;/strong&gt;: The &lt;code&gt;agentcore launch&lt;/code&gt; command handling infrastructure setup was remarkably seamless.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real API Integration&lt;/strong&gt;: Once properly configured, the Aviationstack integration worked flawlessly; 500 requests/month is generous for development.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;System Resilience&lt;/strong&gt;: The multi-layered fallback system ensures users always get a response, even when external APIs fail.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://aviationstack.com/login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbogpjxwcdlvky8wayac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbogpjxwcdlvky8wayac.png" alt=" " width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Reflection
&lt;/h2&gt;

&lt;p&gt;This project taught me that AWS Bedrock AgentCore excels at what it promises: rapidly transforming local agent prototypes into production-ready cloud services. The Runtime component is outstanding, handling deployment, scaling, and infrastructure management seamlessly.&lt;/p&gt;

&lt;p&gt;The real learning came from understanding the ecosystem complexity beyond basic deployment. Authentication, persistent memory, and advanced observability require thoughtful architecture decisions that go beyond the initial deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  My Recommendation
&lt;/h3&gt;

&lt;p&gt;For developers considering AgentCore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Start here&lt;/strong&gt; if you want to focus on agent logic rather than infrastructure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plan authentication patterns early&lt;/strong&gt; if you need browser-based access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Leverage the ecosystem&lt;/strong&gt; - AgentCore's value multiplies when using Memory, Identity, and Gateway services together&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Begin with CLI access&lt;/strong&gt; and add web interfaces as a separate layer&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Bottom Line
&lt;/h3&gt;

&lt;p&gt;The journey from local Python scripts to production AWS deployment took just a few weeks, validating AgentCore's promise of accelerated agent development and deployment.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This concludes my 4-part series on building an AI travel agent with AWS Bedrock AgentCore. The complete code and deployment configurations are available in my repository, and I'm excited to see what the community builds with these patterns.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Series Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-my-journey-with-bedrock-agentcore-part-1-36c7"&gt;Part 1: Foundation and Architecture&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-my-journey-with-bedrock-agentcore-part-2-199i"&gt;Part 2: Multi-Agent System Design&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/migrating-a-local-multi-agent-travel-system-to-aws-bedrock-agentcorepart-3-45eh"&gt;Part 3: AgentCore Migration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 4: Production Deployment and Lessons Learned (This Post)&lt;/li&gt;
&lt;li&gt;Aviationstack account creation - &lt;a href="https://aviationstack.com/" rel="noopener noreferrer"&gt;https://aviationstack.com/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you,&lt;br&gt;
Harsha&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Migrating a Local Multi-Agent Travel System to AWS Bedrock AgentCore (Part 3)</title>
      <dc:creator>Harsha Mathan</dc:creator>
      <pubDate>Sun, 07 Sep 2025 17:07:33 +0000</pubDate>
      <link>https://dev.to/aws-builders/migrating-a-local-multi-agent-travel-system-to-aws-bedrock-agentcorepart-3-45eh</link>
      <guid>https://dev.to/aws-builders/migrating-a-local-multi-agent-travel-system-to-aws-bedrock-agentcorepart-3-45eh</guid>
      <description>&lt;p&gt;This is the continuation of Blog 1 - &lt;a href="https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-my-journey-with-bedrock-agentcore-part-1-36c7"&gt;https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-my-journey-with-bedrock-agentcore-part-1-36c7&lt;/a&gt; and Blog 2 - &lt;a href="https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-my-journey-with-bedrock-agentcore-part-2-199i"&gt;https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-my-journey-with-bedrock-agentcore-part-2-199i&lt;/a&gt;. After successfully building a multi-agent travel planning system locally, I faced the next challenge: integrating it with AWS Bedrock AgentCore for cloud deployment. This post covers the technical journey of wrapping existing Python agents with AgentCore and the lessons learned along the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenge: From Local to Cloud-Ready
&lt;/h2&gt;

&lt;p&gt;I had a working travel agent system with multiple components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Orchestrator&lt;/strong&gt;: Main coordination logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Planning Agent&lt;/strong&gt;: AI-powered trip planning&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Booking Tools&lt;/strong&gt;: Flight and hotel search capabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Travel Tools&lt;/strong&gt;: External API integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal was to integrate this with AWS Bedrock AgentCore without completely rewriting the existing architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;My local system used a clean multi-agent pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Orchestrator&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;planning_agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;PlanningAgent&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;travel_tool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;TravelTool&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;handle_user_request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Coordinate between agents and tools
&lt;/span&gt;        &lt;span class="n"&gt;plan&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;planning_agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_plan&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="c1"&gt;# Execute based on plan...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The AgentCore Integration Journey
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Installing the Right Packages
&lt;/h3&gt;

&lt;p&gt;The key was identifying the correct AgentCore packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;bedrock-agentcore
pip &lt;span class="nb"&gt;install &lt;/span&gt;strands-agents  
pip &lt;span class="nb"&gt;install &lt;/span&gt;bedrock-agentcore-starter-toolkit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Creating the AgentCore Wrapper
&lt;/h3&gt;

&lt;p&gt;Rather than rewriting my agents, I created a thin wrapper that bridges my existing code with AgentCore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;bedrock_agentcore&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BedrockAgentCoreApp&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;orchestrator&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Orchestrator&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BedrockAgentCoreApp&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;orchestrator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Orchestrator&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nd"&gt;@app.entrypoint&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;user_message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;orchestrator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;handle_user_request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;result&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;success&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;powered_by&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AWS Bedrock AgentCore&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Debugging Import Issues
&lt;/h3&gt;

&lt;p&gt;Initially encountered import errors with the Strands framework:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ImportError: cannot import name 'Tool' from 'strands.tools'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The solution was realizing that Strands provides a lowercase &lt;code&gt;tool&lt;/code&gt; decorator, not a &lt;code&gt;Tool&lt;/code&gt; class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt;  &lt;span class="c1"&gt;# Not from strands.tools import Tool
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Handling Class Name Mismatches
&lt;/h3&gt;

&lt;p&gt;My wrapper initially looked for &lt;code&gt;TravelOrchestrator&lt;/code&gt; but my class was named &lt;code&gt;Orchestrator&lt;/code&gt;. The fix was straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Instead of:
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;orchestrator&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;TravelOrchestrator&lt;/span&gt;
&lt;span class="n"&gt;orchestrator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;TravelOrchestrator&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Use:
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;orchestrator&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Orchestrator&lt;/span&gt;
&lt;span class="n"&gt;orchestrator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Orchestrator&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing the Integration
&lt;/h2&gt;

&lt;p&gt;Once the wrapper was working, local testing confirmed the integration was successful:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python app.py
&lt;span class="c"&gt;# Server starts on http://localhost:8080&lt;/span&gt;

&lt;span class="c"&gt;# Test flight search&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:8080/invocations &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"message": "Find flights from NYC to Tokyo"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"result"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Flight Search Results:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;SkyWings Flight F101: NYC → Tokyo&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;Price: $349.99"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"session_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; 
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"success"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"agent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"orchestrator"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"powered_by"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS Bedrock AgentCore"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  AWS Configuration and Infrastructure
&lt;/h2&gt;

&lt;p&gt;AgentCore's configuration process is streamlined:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;agentcore configure &lt;span class="nt"&gt;-e&lt;/span&gt; app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02yxf1k2t9pk3kwcczdn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02yxf1k2t9pk3kwcczdn.png" alt=" " width="800" height="675"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detected my &lt;code&gt;requirements.txt&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Set up IAM execution roles&lt;/li&gt;
&lt;li&gt;Configured ECR repository&lt;/li&gt;
&lt;li&gt;Generated Docker files for deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The configuration summary showed everything was ready:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Region: us-west-2&lt;/li&gt;
&lt;li&gt;Auto-created execution role and ECR repository&lt;/li&gt;
&lt;li&gt;IAM authorization configured&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Deployment Challenges
&lt;/h2&gt;

&lt;p&gt;I attempted cloud deployment with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;agentcore launch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The deployment successfully:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Created ECR repository in us-west-2&lt;/li&gt;
&lt;li&gt;Set up IAM execution roles&lt;/li&gt;
&lt;li&gt;Configured CodeBuild roles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fndzp10j0sk0o92ibrp2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fndzp10j0sk0o92ibrp2o.png" alt=" " width="800" height="675"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, I encountered SSL certificate validation issues due to corporate network restrictions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SSL validation failed for https://bedrock-agentcore-codebuild-sources-*.s3.amazonaws.com/
certificate verify failed: unable to get local issuer certificate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Technical Insights
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Wrapper Pattern Works Well
&lt;/h3&gt;

&lt;p&gt;Rather than rewriting existing agents, the thin AgentCore wrapper approach preserved the original architecture while adding cloud capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Import Debugging Strategy
&lt;/h3&gt;

&lt;p&gt;When facing import errors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check exact class/function names in your modules&lt;/li&gt;
&lt;li&gt;Verify framework-specific import patterns&lt;/li&gt;
&lt;li&gt;Use debugging to see what's actually available in modules&lt;/li&gt;
&lt;/ul&gt;
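&lt;p&gt;A quick way to see what a module actually exports is to print its public names. The sketch below uses the stdlib &lt;code&gt;json&lt;/code&gt; module so it runs anywhere; substitute &lt;code&gt;strands&lt;/code&gt; or &lt;code&gt;strands.tools&lt;/code&gt; when debugging an import error like the one above:&lt;/p&gt;

```python
import importlib

# List a module's public names to see what is actually importable.
# Shown with the stdlib "json" module; swap in the module you are
# debugging (e.g. "strands" or "strands.tools").
mod = importlib.import_module("json")
public = sorted(n for n in dir(mod) if not n.startswith("_"))

print("loads" in public)  # → True
print("Loads" in public)  # → False (names are case-sensitive)
```

This is exactly the kind of check that surfaces a casing mismatch like `Tool` vs `tool` in seconds.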

&lt;h3&gt;
  
  
  3. AgentCore's Infrastructure Automation
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;agentcore configure&lt;/code&gt; and &lt;code&gt;agentcore launch&lt;/code&gt; commands handle significant infrastructure setup automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM role creation with proper policies&lt;/li&gt;
&lt;li&gt;ECR repository setup&lt;/li&gt;
&lt;li&gt;CodeBuild project configuration&lt;/li&gt;
&lt;li&gt;ARM64 container building in the cloud&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture Benefits Achieved
&lt;/h2&gt;

&lt;p&gt;The integration maintained the original multi-agent benefits while adding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Ready for AWS cloud deployment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standardization&lt;/strong&gt;: Uses AgentCore patterns for consistency
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring&lt;/strong&gt;: Built-in health checks and logging&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production-Ready&lt;/strong&gt;: Automatic infrastructure provisioning&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;With the local AgentCore integration working, the next phase involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adding AgentCore Memory for user preference persistence&lt;/li&gt;
&lt;li&gt;Implementing proactive monitoring capabilities&lt;/li&gt;
&lt;li&gt;Completing cloud deployment from a non-restricted network&lt;/li&gt;
&lt;li&gt;Integrating real-time APIs to replace mock data&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Code Repository
&lt;/h2&gt;

&lt;p&gt;The complete implementation is available with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Original multi-agent system&lt;/li&gt;
&lt;li&gt;AgentCore wrapper integration&lt;/li&gt;
&lt;li&gt;Configuration files for deployment&lt;/li&gt;
&lt;li&gt;Testing commands and examples&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Migrating existing Python agents to AWS Bedrock AgentCore doesn't require a complete rewrite. The wrapper pattern allows preserving existing architecture while gaining cloud deployment capabilities. The main challenges were understanding framework-specific imports and navigating network restrictions rather than fundamental architectural issues.&lt;/p&gt;

&lt;p&gt;The result is a production-ready travel agent that maintains its original multi-agent design while being deployable on AWS infrastructure with auto-scaling, monitoring, and enterprise-grade security features.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post is part of my journey building AI agents for the AWS Bedrock AgentCore challenge. Previous posts covered the initial multi-agent architecture and planning system design.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Proactive AI Travel Agent on AWS: My Journey with Bedrock AgentCore (Part 2)</title>
      <dc:creator>Harsha Mathan</dc:creator>
      <pubDate>Thu, 04 Sep 2025 07:58:42 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-my-journey-with-bedrock-agentcore-part-2-199i</link>
      <guid>https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-my-journey-with-bedrock-agentcore-part-2-199i</guid>
      <description>&lt;h1&gt;
  
  
  Building a Proactive AI Travel Agent on AWS: My Journey with Bedrock AgentCore (Part 2)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  🧠 From One Agent to a Multi-Agent System
&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-my-journey-with-bedrock-agentcore-part-1-36c7"&gt;Part 1&lt;/a&gt;, I walked through the foundation: getting a single AI agent running on Amazon Bedrock AgentCore. That first step gave me a working runtime, a clean HTTP interface, and confidence that I could build a travel concierge on top of it.&lt;/p&gt;

&lt;p&gt;But a real travel concierge needs more than just chat. It needs to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand &lt;strong&gt;complex travel requests&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plan&lt;/strong&gt; multi-step itineraries
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search&lt;/strong&gt; for flights and hotels
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manage bookings&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Be &lt;strong&gt;modular&lt;/strong&gt; enough to extend and scale
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s exactly what Phase 2 is all about.&lt;/p&gt;




&lt;h2&gt;
  
  
  🏗️ Phase 2: Local Multi-Agent Travel System
&lt;/h2&gt;

&lt;p&gt;In this phase, I focused on building a fully functional &lt;strong&gt;travel orchestration system&lt;/strong&gt; — still local, but structured to integrate cleanly with Bedrock AgentCore.&lt;/p&gt;

&lt;p&gt;💻 &lt;strong&gt;GitHub repo&lt;/strong&gt;: &lt;a href="https://github.com/hvmathan/bedrock-travel-agent/tree/main" rel="noopener noreferrer"&gt;bedrock-travel-agent&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  🎯 Goals
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Modular architecture separating &lt;strong&gt;reasoning&lt;/strong&gt;, &lt;strong&gt;planning&lt;/strong&gt;, and &lt;strong&gt;execution&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;A realistic set of travel tools (flights, hotels, bookings) for testing&lt;/li&gt;
&lt;li&gt;Clean interfaces that map easily to Bedrock AgentCore’s &lt;strong&gt;tool calling&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Keep it &lt;strong&gt;Bedrock-ready&lt;/strong&gt; — no rewriting later&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🛠️ Core Components
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;orchestrator.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The “brain” — routes user requests to the right agents and tools&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;planning_agent.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Interprets natural language, extracts intent, decides actions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;travel_tool.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Implements flight and hotel search (ready for real APIs)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;booking_tools.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manages bookings: create, confirm, cancel&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;planning_tools.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Handles budgets, date parsing, itinerary generation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mock_travel_api.py&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Provides realistic test data locally&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;my_agent.py&lt;/code&gt; / &lt;code&gt;run_agent.py&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Future Bedrock AgentCore integration entry points&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
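&lt;p&gt;As a rough illustration, &lt;code&gt;mock_travel_api.py&lt;/code&gt; can be as simple as a list of canned records plus a filter function. The function and field names below are hypothetical, not the repo’s actual API:&lt;/p&gt;

```python
# Illustrative sketch of a mock travel API: canned records plus a
# filter function. Names and fields are hypothetical.
MOCK_FLIGHTS = [
    {"flight": "F101", "airline": "SkyWings", "origin": "NYC",
     "destination": "Tokyo", "price": 349.99},
    {"flight": "F202", "airline": "SkyWings", "origin": "NYC",
     "destination": "London", "price": 289.50},
]

def search_flights(origin: str, destination: str) -> list[dict]:
    """Return mock flights matching the requested route."""
    return [f for f in MOCK_FLIGHTS
            if f["origin"] == origin and f["destination"] == destination]

results = search_flights("NYC", "Tokyo")
print(results[0]["price"])  # → 349.99
```

Keeping the mock behind the same function signature as a real API client means the rest of the system never needs to know which one it is talking to.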




&lt;h2&gt;
  
  
  🔄 How It Works
&lt;/h2&gt;

&lt;p&gt;User Input → Orchestrator → Planning Agent → Travel Tools → Response&lt;/p&gt;

&lt;p&gt;User: "Plan a 5-day trip to Paris"&lt;br&gt;
↓&lt;br&gt;
Planning Agent: Understands intent, breaks it into sub-tasks&lt;br&gt;
↓&lt;br&gt;
Travel Tools: Fetch flights, hotels, estimate budget&lt;br&gt;
↓&lt;br&gt;
Response: Detailed itinerary and costs&lt;/p&gt;
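&lt;p&gt;The flow above can be sketched in a few lines of Python. The class and method names here are illustrative; the keyword-based routing is a deliberately simplified stand-in for the real planning agent:&lt;/p&gt;

```python
# Toy sketch of the Orchestrator -> Planning Agent -> Tools flow.
# Real intent classification is done by an LLM; keyword matching
# here is only a stand-in to show the routing shape.
class Orchestrator:
    def handle_user_request(self, message: str) -> str:
        intent = self._classify(message)        # planning-agent step
        return self._dispatch(intent, message)  # tool-execution step

    def _classify(self, message: str) -> str:
        text = message.lower()
        if "flight" in text:
            return "search_flights"
        if "hotel" in text:
            return "search_hotels"
        return "plan_trip"

    def _dispatch(self, intent: str, message: str) -> str:
        tools = {
            "search_flights": lambda m: "Flight Search Results: ...",
            "search_hotels": lambda m: "Hotel Search Results: ...",
            "plan_trip": lambda m: "Itinerary: ...",
        }
        return tools[intent](message)

orch = Orchestrator()
print(orch.handle_user_request("Find flights from NYC to Tokyo"))
```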




&lt;h2&gt;
  
  
  🚦 Sample Interactions
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; “Find flights from NYC to London”&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Agent:&lt;/strong&gt; Returns flights with prices and timings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; “I need hotels in Tokyo for next week”&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Agent:&lt;/strong&gt; Returns hotels with ratings, prices, and amenities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User:&lt;/strong&gt; “Plan a 5-day trip to Paris”&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Agent:&lt;/strong&gt; Suggests day-by-day itinerary, estimated costs, and booking options.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5f0sruptfh8s44j5oy9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5f0sruptfh8s44j5oy9w.png" alt=" " width="800" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5nww7dhu0ubc8nghatz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5nww7dhu0ubc8nghatz.png" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🧩 Architecture Overview
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🏁 Current Status: Ready for Bedrock
&lt;/h3&gt;

&lt;p&gt;At this point, I’ve got:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ A multi-agent travel orchestration system&lt;/li&gt;
&lt;li&gt;✅ Clean separation of logic&lt;/li&gt;
&lt;li&gt;✅ Tools and agents that match Bedrock’s tool-calling paradigm&lt;/li&gt;
&lt;li&gt;✅ Mock APIs for safe, realistic testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But everything is still local. AWS doesn’t know about it — yet.&lt;/p&gt;

&lt;h3&gt;
  
  
  🚀 What’s Next (Phase 3)
&lt;/h3&gt;

&lt;p&gt;In the upcoming phase, I’ll:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define tool schemas for Bedrock AgentCore&lt;/li&gt;
&lt;li&gt;Register these tools and agents with AgentCore&lt;/li&gt;
&lt;li&gt;Deploy the entire system into AWS for scaling, routing, and observability&lt;/li&gt;
&lt;li&gt;Test real Bedrock-driven multi-agent orchestration in production-like conditions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Please drop your comments below if you have any questions while working through the steps above.&lt;/p&gt;

&lt;p&gt;Thank you,&lt;br&gt;
Harsha&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Proactive AI Travel Agent on AWS: My Journey with Bedrock AgentCore (Part 1)</title>
      <dc:creator>Harsha Mathan</dc:creator>
      <pubDate>Mon, 01 Sep 2025 17:06:43 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-my-journey-with-bedrock-agentcore-part-1-36c7</link>
      <guid>https://dev.to/aws-builders/building-a-proactive-ai-travel-agent-on-aws-my-journey-with-bedrock-agentcore-part-1-36c7</guid>
      <description>&lt;h2&gt;
  
  
  Introduction: The Vision of an Intelligent Travel Concierge
&lt;/h2&gt;

&lt;p&gt;AWS recently introduced Bedrock AgentCore, a powerful new capability packed with exciting features. Before diving into development, I highly recommend watching the official AWS YouTube walkthrough on Bedrock AgentCore to gain a solid understanding of its core concepts and potential.&lt;/p&gt;

&lt;p&gt;This is &lt;strong&gt;Part 1&lt;/strong&gt; of a series documenting my journey: from a foundational prototype to a sophisticated, multi-agent system on AWS.&lt;/p&gt;

&lt;p&gt;Amazon Bedrock AgentCore covers the following components: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runtime&lt;/li&gt;
&lt;li&gt;Gateway &lt;/li&gt;
&lt;li&gt;Memory&lt;/li&gt;
&lt;li&gt;Identity &lt;/li&gt;
&lt;li&gt;Tools &lt;/li&gt;
&lt;li&gt;Observability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This blog focuses exclusively on the &lt;strong&gt;&lt;em&gt;Runtime&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this installment, I focused on the first goal: &lt;strong&gt;get a foundational agent running&lt;/strong&gt; using &lt;strong&gt;Amazon Bedrock AgentCore&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 1: Laying the Foundation
&lt;/h2&gt;

&lt;p&gt;Before booking flights or planning itineraries, I needed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;secure, scalable runtime&lt;/strong&gt; to host the AI agent.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;clean abstraction&lt;/strong&gt; to communicate with a Large Language Model (LLM).&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;simple HTTP interface&lt;/strong&gt;, so future systems can invoke it cleanly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Enter &lt;strong&gt;Amazon Bedrock AgentCore&lt;/strong&gt;—a serverless agent runtime that handles the scaffolding: scaling, security, endpoints, and more. This frees me to concentrate on the actual logic of the agent, not infrastructure plumbing.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Tools of the Trade
&lt;/h2&gt;

&lt;p&gt;For this phase, I’m using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Bedrock AgentCore&lt;/strong&gt; – the serverless Agent runtime on AWS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strands Agent framework&lt;/strong&gt; – makes wrapping LLM calls in “agents” easy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anthropic Claude&lt;/strong&gt; – the conversational foundation model accessible via Bedrock.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Python 3.10+&lt;/strong&gt; – the language for my agent code.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Getting Started: Step-by-Step
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1. Environment Setup
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Ensure your AWS account is provisioned for &lt;strong&gt;Bedrock&lt;/strong&gt; and &lt;strong&gt;AgentCore&lt;/strong&gt; in a supported region (e.g., &lt;code&gt;us-east-1&lt;/code&gt;, &lt;code&gt;us-west-2&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;aws configure&lt;/code&gt; to set up your credentials.&lt;/li&gt;
&lt;li&gt;In a terminal, create a Python virtual environment (optional but recommended)&lt;/li&gt;
&lt;li&gt;Install the required packages:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;pip install bedrock-agentcore strands-agents bedrock-agentcore-starter-toolkit&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2. Write the Agent (my_agent.py)
&lt;/h3&gt;

&lt;p&gt;Create a file named &lt;code&gt;my_agent.py&lt;/code&gt; and paste in this code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from bedrock_agentcore import BedrockAgentCoreApp
from strands import Agent

# Initialize the AgentCore application
app = BedrockAgentCoreApp()

# Initialize the agent to use a specific model
agent = Agent(
    model="anthropic.claude-3-sonnet-20240229-v1:0"  # Use a model ID you have access to
)

@app.entrypoint
def invoke(payload):
    """
    Entrypoint for AgentCore: receives JSON payload, invokes LLM, returns response.
    Expecting payload like: {"prompt": "Your prompt here"}
    """
    user_message = payload.get("prompt", "Hello! How can I help you today?")
    result = agent(user_message)
    return {"result": result.message}

if __name__ == "__main__":
    app.run()  # Starts local server (default: http://127.0.0.1:8080)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I have hardcoded the model ID here; otherwise, by default, &lt;code&gt;strands.Agent()&lt;/code&gt; in &lt;code&gt;my_agent.py&lt;/code&gt; uses a default LLM (Anthropic Claude 4.0) through Amazon Bedrock.&lt;/p&gt;

&lt;p&gt;Also, please double-check the settings below before executing the code.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Model access must be granted&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need to go to the AWS Management Console → Amazon Bedrock → Model access, and explicitly enable Claude 4.0 (or whichever model you want).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I was stuck at this point for a while because Claude 4 was not provisioned to my account, so I ended up hardcoding another model version. &lt;/p&gt;

&lt;p&gt;The screenshots below show how to obtain access to the Claude 4.0 model.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Your AWS user/role must have the right permissions&lt;br&gt;
You need to attach the following managed policies (as per quickstart):&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AmazonBedrockFullAccess&lt;/li&gt;
&lt;li&gt;BedrockAgentCoreFullAccess&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4urkfi7xzjllok02rp2h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4urkfi7xzjllok02rp2h.png" alt=" " width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5vjhgq1swhfbwadwbtr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5vjhgq1swhfbwadwbtr.png" alt=" " width="800" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3. Run the Agent Locally
&lt;/h3&gt;

&lt;p&gt;Start the agent:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;python my_agent.py&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;You should see:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;INFO:     Uvicorn running on http://127.0.0.1:8080 (Press CTRL+C to quit)&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4. Test the Agent with a Prompt
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Invoke-WebRequest -Uri "http://localhost:8080/invocations" `
    -Method POST `
    -ContentType "application/json" `
    -Body '{"prompt": "Plan a 3-day trip to Goa!"}'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should get a JSON response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
{
  "result": {
    "role": "assistant",
    "content": [
      {
        "text": "Here's a suggested 3-day itinerary for a trip to Goa..."
      }
    ]
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it! Your AgentCore-backed agent is up and responding to real prompts.&lt;/p&gt;

&lt;p&gt;The AgentCore approach wraps your logic inside a server that stays alive, waits for prompts, and returns model outputs—making it production-ready and extensible for multi-agent orchestration.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next?
&lt;/h2&gt;

&lt;p&gt;So far, we have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built a living agent service backed by an LLM.&lt;/li&gt;
&lt;li&gt;Wrapped it in a clean, HTTP-accessible interface via AgentCore.&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;✅ Runtime&lt;/li&gt;
&lt;li&gt;Gateway&lt;/li&gt;
&lt;li&gt;Memory&lt;/li&gt;
&lt;li&gt;Identity&lt;/li&gt;
&lt;li&gt;Tools&lt;/li&gt;
&lt;li&gt;Observability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next: Phase 2! I’ll introduce real-world capabilities—connecting the agent to travel APIs via Amazon Bedrock AgentCore Gateway so it can start retrieving flights, hotels, and more. Stay tuned for that in the next post of this series.&lt;/p&gt;

&lt;p&gt;Thank you,&lt;br&gt;
Harsha&lt;/p&gt;

</description>
      <category>aws</category>
      <category>genai</category>
    </item>
    <item>
      <title>Tokenization &amp; Data Masking using Lambda - The $700 Million Question</title>
      <dc:creator>Harsha Mathan</dc:creator>
      <pubDate>Sun, 29 Jun 2025 09:05:25 +0000</pubDate>
      <link>https://dev.to/aws-builders/tokenization-data-masking-using-lambda-the-700-million-question-1h0g</link>
      <guid>https://dev.to/aws-builders/tokenization-data-masking-using-lambda-the-700-million-question-1h0g</guid>
      <description>&lt;h1&gt;
  
  
  Building a Serverless PII Protection Pipeline: From Equifax's $700M Mistake to a Secure Solution
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;How a simple tokenization pipeline could have prevented one of history's worst data breaches&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=vMQwMV62gtU" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=vMQwMV62gtU&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The $700 Million Question
&lt;/h2&gt;

&lt;p&gt;September 7, 2017. Equifax announces a data breach that would forever change how we think about data security. &lt;strong&gt;147 million people's&lt;/strong&gt; personal information—names, Social Security numbers, birth dates, addresses—exposed to cybercriminals. The aftermath? A staggering $700 million penalty, irreparable reputational damage, and shattered customer trust. Source : &lt;a href="https://archive.epic.org/privacy/data-breach/equifax/" rel="noopener noreferrer"&gt;https://archive.epic.org/privacy/data-breach/equifax/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The root cause? Yes, it was a missed Apache Struts patch. But dig deeper, and you'll find the real culprit: &lt;strong&gt;lack of layered, proactive data protection&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The "What If" That Started It All
&lt;/h3&gt;

&lt;p&gt;What if Equifax had implemented basic tokenization at their data ingestion points? What if sensitive data was automatically scrambled the moment it entered their systems? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The stolen data would have been useless to attackers.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That "what if" haunted me for years and eventually inspired this project: a lightweight, automated PII protection pipeline that could prevent the next Equifax.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: A Serverless PII Protection Pipeline
&lt;/h2&gt;

&lt;p&gt;I built an event-driven, serverless pipeline that automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ &lt;strong&gt;Detects&lt;/strong&gt; PII fields in uploaded CSV files&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Tokenizes&lt;/strong&gt; sensitive data using reversible encoding&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Masks&lt;/strong&gt; data for safe sharing and analysis&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Detokenizes&lt;/strong&gt; when authorized access is needed&lt;/li&gt;
&lt;/ul&gt;
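&lt;p&gt;For the masking step, a minimal sketch looks like the following. The masking rules here are illustrative, not the pipeline’s exact ones:&lt;/p&gt;

```python
# Sketch of the "mask" step: keep just enough of a value to stay
# useful for analysis while hiding the sensitive part. The exact
# rules below are illustrative.
def mask_ssn(ssn: str) -> str:
    """Mask an SSN like 123-45-6789, keeping only the last four digits."""
    digits = ssn.replace("-", "")
    return "***-**-" + digits[-4:]

def mask_email(email: str) -> str:
    """Keep the first character of the local part and the full domain."""
    local, _, domain = email.partition("@")
    return local[:1] + "***@" + domain

print(mask_ssn("123-45-6789"))             # → ***-**-6789
print(mask_email("jane.doe@example.com"))  # → j***@example.com
```

Unlike tokenization, masking is deliberately one-way: a masked value can be shared for analytics without any path back to the original.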

&lt;h3&gt;
  
  
  Why Serverless?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;💰 Cost-effective&lt;/strong&gt;: Pay only when processing files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🚀 Scalable&lt;/strong&gt;: Handles 1 file or 1000 files automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🔒 Secure&lt;/strong&gt;: Built-in AWS security and compliance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;⚡ Fast&lt;/strong&gt;: Near-instant processing for typical datasets&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Architecture: Simple Yet Powerful
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuwhln6p3od0gg5d20qdo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuwhln6p3od0gg5d20qdo.png" alt="Image description" width="800" height="262"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
┌─────────────────┐    ┌───────────────────┐    ┌─────────────────┐
│  Raw CSV Upload │───▶│  PII Detector     │───▶│   Metadata      │
│   (S3 /raw)     │    │     Lambda        │    │  (S3 /metadata) │
└─────────────────┘    └───────────────────┘    └─────────────────┘
                                                           │
                                                           ▼
┌─────────────────┐    ┌───────────────────┐    ┌─────────────────┐
│   Detokenized   │◀───│   Detokenizer     │◀───│   Tokenizer     │
│  (S3 /detok)    │    │     Lambda        │    │     Lambda      │
└─────────────────┘    └───────────────────┘    └─────────────────┘
                                                           │
                                                           ▼
                                                ┌─────────────────┐
                                                │   Tokenized     │
                                                │ (S3 /tokenized) │
                                                └─────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The magic happens in 4 steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Upload&lt;/strong&gt;: Drop a CSV into S3's &lt;code&gt;/raw&lt;/code&gt; folder&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Detect&lt;/strong&gt;: Lambda automatically identifies PII fields&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tokenize&lt;/strong&gt;: Sensitive data becomes reversible tokens&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access Control&lt;/strong&gt;: Only authorized users can detokenize&lt;/li&gt;
&lt;/ol&gt;
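&lt;p&gt;One practical detail behind step 1: the S3 event that triggers the Lambda delivers object keys URL-encoded, so &lt;code&gt;raw/customer data.csv&lt;/code&gt; arrives as &lt;code&gt;raw/customer+data.csv&lt;/code&gt;. A minimal, runnable sketch of the event parsing (the payload shape mirrors a real S3 put notification; the bucket and key names are illustrative):&lt;/p&gt;

```python
from urllib.parse import unquote_plus

def parse_s3_event(event):
    """Extract (bucket, key) pairs from an S3 put-event payload.

    S3 URL-encodes object keys in the notification, so
    'raw/customer data.csv' arrives as 'raw/customer+data.csv'.
    """
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])
        results.append((bucket, key))
    return results

# Illustrative payload, shaped like a real S3 notification
event = {
    "Records": [{
        "s3": {
            "bucket": {"name": "your-pii-project-bucket"},
            "object": {"key": "raw/customer+data.csv"},
        }
    }]
}

print(parse_s3_event(event))  # [('your-pii-project-bucket', 'raw/customer data.csv')]
```

&lt;p&gt;Skipping the decode step is a classic cause of &lt;code&gt;NoSuchKey&lt;/code&gt; errors on files with spaces in their names.&lt;/p&gt;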

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flm8cynughqhyp34udxm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flm8cynughqhyp34udxm2.png" alt="Image description" width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgogajjwosbwzeoo40pjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgogajjwosbwzeoo40pjc.png" alt="Image description" width="800" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Tokenization Logic
&lt;/h3&gt;

&lt;p&gt;I chose Base64 encoding for this MVP—simple, reversible, and perfect for proof-of-concept:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;

&lt;span class="c1"&gt;# Transform "John Doe" into a token
&lt;/span&gt;&lt;span class="n"&gt;original&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;John Doe&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;b64encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;original&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;()).&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Output: "Sm9obiBEb2U="
&lt;/span&gt;
&lt;span class="c1"&gt;# Reverse it when needed
&lt;/span&gt;&lt;span class="n"&gt;decoded&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;b64decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;decoded&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Output: "John Doe"
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Smart Masking for Safe Sharing
&lt;/h3&gt;

&lt;p&gt;The detokenizer includes field-specific masking:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;mask_value&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;parts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;email&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;masked&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;masked&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt;

    &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;phone&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;6&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;:]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
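&lt;p&gt;To see the masking rules in action, here is a compact restatement of &lt;code&gt;mask_value&lt;/code&gt; (same logic as above, condensed so the snippet runs standalone) applied to the sample records used later in this post:&lt;/p&gt;

```python
def mask_value(field, value):
    # Name: keep the first letter of each word, star the rest
    if field.lower() == "name":
        return " ".join(p[0] + "*" * (len(p) - 1) for p in value.split(" "))
    # Email: keep the first and last character of the username
    elif field.lower() == "email":
        username, domain = value.split("@")
        return username[0] + "*" * (len(username) - 2) + username[-1] + "@" + domain
    # Phone: expose only the last 4 digits
    elif field.lower() == "phone":
        return "*" * 6 + value[-4:]
    return value  # unmatched fields pass through unchanged

print(mask_value("Name", "John Doe"))           # J*** D**
print(mask_value("Email", "john@example.com"))  # j**n@example.com
print(mask_value("Phone", "9876543210"))        # ******3210
```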



&lt;h2&gt;
  
  
  Real Data Transformation Examples
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Input: Customer Data
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Name,Email,Phone,DOB,TransactionID
John Doe,john@example.com,9876543210,1990-01-01,TXN1001
Jane Smith,jane@gmail.com,9123456789,1991-03-22,TXN1002
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 1: PII Detection → Metadata
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Email"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Phone"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Tokenization → Safe Storage
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Name,Email,Phone,DOB,TransactionID
TOKEN_1,TOKEN_2,TOKEN_3,1990-01-01,TXN1001
TOKEN_4,TOKEN_5,TOKEN_6,1991-03-22,TXN1002
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Masked Output → Safe Sharing
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Name,Email,Phone,DOB,TransactionID
J*** D**,j**n@example.com,******3210,1990-01-01,TXN1001
J*** S****,j**e@gmail.com,******6789,1991-03-22,TXN1002
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Notice how non-PII fields (DOB, TransactionID) remain untouched—preserving data utility while protecting privacy.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Lambda Functions: Event-Driven Excellence
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🔍 PII Detector Lambda
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Triggered by S3 upload to /raw folder
&lt;/span&gt;    &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Records&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s3&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;bucket&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Records&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s3&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;object&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;key&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="c1"&gt;# Analyze CSV headers and content for PII patterns
&lt;/span&gt;    &lt;span class="n"&gt;pii_fields&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;detect_pii_fields&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;csv_content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Save metadata for tokenizer
&lt;/span&gt;    &lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;put_object&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;Bucket&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;metadata/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;_pii_fields.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Body&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pii_fields&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
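&lt;p&gt;&lt;code&gt;detect_pii_fields&lt;/code&gt; is referenced above but not shown. One plausible minimal heuristic (my sketch, not necessarily the repo's implementation) flags a column when its header name looks sensitive or all of its values match a PII-shaped pattern:&lt;/p&gt;

```python
import csv
import io
import re

# Hypothetical heuristic: flag a column as PII if its header name is in a
# known-sensitive set, or every value matches a PII-shaped regex.
PII_HEADERS = {"name", "email", "phone", "ssn", "address"}
PII_PATTERNS = [
    re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),  # email-shaped
    re.compile(r"^\d{10}$"),                    # 10-digit phone
]

def detect_pii_fields(csv_content):
    reader = csv.DictReader(io.StringIO(csv_content))
    rows = list(reader)
    pii = []
    for field in reader.fieldnames:
        values = [row[field] for row in rows]
        by_name = field.lower() in PII_HEADERS
        by_value = all(any(p.match(v) for p in PII_PATTERNS) for v in values)
        if by_name or by_value:
            pii.append(field)
    return pii

sample = (
    "Name,Email,Phone,DOB,TransactionID\n"
    "John Doe,john@example.com,9876543210,1990-01-01,TXN1001\n"
    "Jane Smith,jane@gmail.com,9123456789,1991-03-22,TXN1002\n"
)
print(detect_pii_fields(sample))  # ['Name', 'Email', 'Phone']
```

&lt;p&gt;In production you could replace this with Amazon Comprehend's PII detection rather than hand-rolled regexes.&lt;/p&gt;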



&lt;h3&gt;
  
  
  🔐 Tokenizer Lambda
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Triggered by metadata upload
&lt;/span&gt;    &lt;span class="c1"&gt;# Read original CSV + PII metadata
&lt;/span&gt;    &lt;span class="c1"&gt;# Replace sensitive values with tokens
&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;csv_reader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;field&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;pii_fields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;field&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
                &lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TOKEN_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;token_counter&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                &lt;span class="n"&gt;token_counter&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
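&lt;p&gt;Note that the post shows two token schemes: sequential &lt;code&gt;TOKEN_n&lt;/code&gt; placeholders in the sample output, and Base64 in the detokenizer. For the sequential scheme to be reversible, the tokenizer must persist a token-to-value mapping (in S3 or DynamoDB, say). A self-contained sketch of that idea:&lt;/p&gt;

```python
import csv
import io

def tokenize_csv(csv_content, pii_fields):
    """Replace PII values with sequential tokens, keeping a reverse map."""
    reader = csv.DictReader(io.StringIO(csv_content))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    token_map = {}  # token -> original value; persist this to detokenize later
    counter = 1
    for row in reader:
        for field in pii_fields:
            if row.get(field, "").strip():
                token = f"TOKEN_{counter}"
                token_map[token] = row[field]
                row[field] = token
                counter += 1
        writer.writerow(row)
    return out.getvalue(), token_map

raw = (
    "Name,Email,Phone,DOB,TransactionID\n"
    "John Doe,john@example.com,9876543210,1990-01-01,TXN1001\n"
    "Jane Smith,jane@gmail.com,9123456789,1991-03-22,TXN1002\n"
)
tokenized, token_map = tokenize_csv(raw, ["Name", "Email", "Phone"])
print(tokenized)                 # TOKEN_1..TOKEN_6 in place of Name/Email/Phone
print(token_map["TOKEN_1"])      # John Doe
```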



&lt;h3&gt;
  
  
  🎭 Detokenizer Lambda (with Masking)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Triggered by tokenized file upload
&lt;/span&gt;    &lt;span class="c1"&gt;# Decode tokens and apply field-specific masking
&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;csv_reader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;field&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;pii_fields&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;field&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;decoded&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;b64decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                &lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;mask_value&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;decoded&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Security &amp;amp; Compliance by Design
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🛡️ Multi-Layered Security
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Encryption in Transit&lt;/strong&gt;: All S3 operations use HTTPS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM Policies&lt;/strong&gt;: Granular access control per folder&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit Trail&lt;/strong&gt;: CloudTrail logs every operation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Segregation&lt;/strong&gt;: Clear separation between raw/tokenized/detokenized data&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📋 Compliance Ready
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GDPR Friendly&lt;/strong&gt;: Deleting a token mapping supports the right to be forgotten&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SOX Friendly&lt;/strong&gt;: Audit trails and access controls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HIPAA Considerations&lt;/strong&gt;: De-identification through tokenization&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Performance &amp;amp; Cost: The Serverless Advantage
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ⚡ Performance Metrics
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Small files (&amp;lt;1MB)&lt;/strong&gt;: 2-3 seconds end-to-end&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medium files (1-10MB)&lt;/strong&gt;: 5-15 seconds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Concurrent processing&lt;/strong&gt;: Up to 1000 files simultaneously&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  💰 Cost Breakdown (Monthly)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lambda executions&lt;/strong&gt;: $0.20 per 1M requests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 storage&lt;/strong&gt;: $0.023 per GB&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data transfer&lt;/strong&gt;: Minimal (internal processing)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Total&lt;/strong&gt;: &lt;strong&gt;&amp;lt;$10/month&lt;/strong&gt; for typical workloads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Compare that to traditional data protection solutions costing thousands per month!&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned: Building in the Real World
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ✅ What Worked Well
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Event-driven architecture&lt;/strong&gt; eliminated complex orchestration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Folder-based organization&lt;/strong&gt; provided natural data governance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Base64 tokenization&lt;/strong&gt; was perfect for MVP validation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serverless approach&lt;/strong&gt; kept costs minimal during development&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🚨 Production Considerations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Base64 isn't cryptographically secure&lt;/strong&gt;—upgrade to AES-256 for production&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Large files&lt;/strong&gt; need chunking strategies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cold starts&lt;/strong&gt; add 1-2 seconds latency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error handling&lt;/strong&gt; needs retry logic and dead letter queues&lt;/li&gt;
&lt;/ul&gt;
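&lt;p&gt;On the large-file point, one simple mitigation is to process rows in fixed-size batches instead of loading the whole CSV into Lambda memory. A minimal illustration of the batching logic (a plain iterator stands in for a streamed S3 body):&lt;/p&gt;

```python
from itertools import islice

def batched(rows, batch_size):
    """Yield rows in fixed-size batches so memory stays bounded."""
    it = iter(rows)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Stand-in for a streamed CSV body; each item is one data row
rows = (f"row-{i}" for i in range(10))
sizes = [len(b) for b in batched(rows, 4)]
print(sizes)  # [4, 4, 2]
```

&lt;p&gt;Each batch can then be tokenized and written out incrementally, so even files near Lambda's memory limit stay processable.&lt;/p&gt;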

&lt;h3&gt;
  
  
  🔮 What's Next?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;[ ] &lt;strong&gt;AWS KMS integration&lt;/strong&gt; for enterprise-grade encryption&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Real-time streaming&lt;/strong&gt; with Kinesis for live data&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Multi-format support&lt;/strong&gt; (JSON, XML, Parquet)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try It Yourself: Getting Started
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# AWS CLI setup&lt;/span&gt;
aws configure

&lt;span class="c"&gt;# Create your bucket&lt;/span&gt;
aws s3 mb s3://your-pii-project-bucket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Quick Deploy
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Clone the repo&lt;/span&gt;
git clone &lt;span class="o"&gt;[&lt;/span&gt;your-repo-url]

&lt;span class="c"&gt;# 2. Deploy Lambda functions&lt;/span&gt;
./deploy.sh

&lt;span class="c"&gt;# 3. Test with sample data&lt;/span&gt;
aws s3 &lt;span class="nb"&gt;cp &lt;/span&gt;sample-data/customer_data.csv s3://your-bucket/raw/

&lt;span class="c"&gt;# 4. Watch the magic happen!&lt;/span&gt;
aws logs &lt;span class="nb"&gt;tail&lt;/span&gt; /aws/lambda/pii-detector &lt;span class="nt"&gt;--follow&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Bigger Picture: Why This Matters
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Beyond Technical Implementation
&lt;/h3&gt;

&lt;p&gt;This isn't just about code—it's about &lt;strong&gt;changing how we think about data protection&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Proactive vs. Reactive&lt;/strong&gt;: Instead of adding security as an afterthought, we bake it into the data pipeline&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privacy by Design&lt;/strong&gt;: Sensitive data is protected from the moment it enters our systems&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Democratic Data Protection&lt;/strong&gt;: Serverless makes enterprise-grade security accessible to everyone&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The Equifax Test
&lt;/h3&gt;

&lt;p&gt;Ask yourself: &lt;em&gt;"If attackers breached my system today, would the stolen data be useless?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With this pipeline, the answer can be &lt;strong&gt;yes&lt;/strong&gt;. Once the Base64 MVP is swapped for real encryption, tokenized data without the keys is just gibberish.&lt;/p&gt;

&lt;h2&gt;
  
  
  Join the Movement
&lt;/h2&gt;

&lt;p&gt;Data breaches aren't slowing down—they're accelerating. But so are our tools to fight them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This project proves that with modern cloud services, protecting PII doesn't require:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;❌ Million-dollar budgets&lt;/li&gt;
&lt;li&gt;❌ Dedicated security teams
&lt;/li&gt;
&lt;li&gt;❌ Complex infrastructure&lt;/li&gt;
&lt;li&gt;❌ Months of development&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;It just requires:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Smart architecture&lt;/li&gt;
&lt;li&gt;✅ Automation-first thinking&lt;/li&gt;
&lt;li&gt;✅ Security by design&lt;/li&gt;
&lt;li&gt;✅ A few lines of Python&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What's Your Take?
&lt;/h2&gt;

&lt;p&gt;Have you implemented similar data protection patterns? What challenges did you face? What would you build differently?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Drop your thoughts in the comments—let's make data protection the norm, not the exception.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  🔗 Resources
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;GitHub Repository&lt;/strong&gt;: &lt;a href="https://github.com/hvmathan/Tokenization-and-Data-Masking" rel="noopener noreferrer"&gt;https://github.com/hvmathan/Tokenization-and-Data-Masking&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you!&lt;br&gt;
Harsha&lt;/p&gt;

</description>
      <category>serverless</category>
    </item>
    <item>
      <title>Getting Started with Amazon Q and RAG: Build an AI Assistant Using Just S3 Files!</title>
      <dc:creator>Harsha Mathan</dc:creator>
      <pubDate>Wed, 04 Jun 2025 18:29:30 +0000</pubDate>
      <link>https://dev.to/aws-builders/getting-started-with-amazon-q-and-rag-build-an-ai-assistant-using-just-s3-files-4jp2</link>
      <guid>https://dev.to/aws-builders/getting-started-with-amazon-q-and-rag-build-an-ai-assistant-using-just-s3-files-4jp2</guid>
      <description>&lt;p&gt;Are you someone who has always wondered what the buzzword &lt;strong&gt;RAG&lt;/strong&gt; is all about?&lt;/p&gt;

&lt;p&gt;If you’ve been following the world of &lt;strong&gt;Generative AI&lt;/strong&gt;, chances are you’ve seen it pop up frequently — especially alongside tools like &lt;strong&gt;Amazon Q&lt;/strong&gt;. But if it still feels like a mystery, you're in the right place.&lt;/p&gt;

&lt;p&gt;In this blog, I’ll break down three key ideas:&lt;/p&gt;

&lt;p&gt;✅ What is an &lt;strong&gt;LLM&lt;/strong&gt;?&lt;br&gt;&lt;br&gt;
✅ What is &lt;strong&gt;Amazon Q&lt;/strong&gt; and how is it different from ChatGPT?&lt;br&gt;&lt;br&gt;
✅ What exactly is &lt;strong&gt;RAG (Retrieval-Augmented Generation)&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;And then — we’ll make things fun by building a &lt;strong&gt;Cricket Q&amp;amp;A Bot&lt;/strong&gt; powered by Amazon Q, using just a few PDFs about &lt;strong&gt;Virat Kohli&lt;/strong&gt; and &lt;strong&gt;RCB&lt;/strong&gt;!&lt;/p&gt;




&lt;h2&gt;
  
  
  Let’s Start With LLMs
&lt;/h2&gt;

&lt;p&gt;You’ve probably used tools like &lt;strong&gt;ChatGPT&lt;/strong&gt;, &lt;strong&gt;Grok&lt;/strong&gt;, or &lt;strong&gt;Gemini&lt;/strong&gt; — marveled at how they can answer almost &lt;em&gt;anything&lt;/em&gt; you throw at them.&lt;/p&gt;

&lt;p&gt;That’s thanks to something called an &lt;strong&gt;LLM&lt;/strong&gt; — a &lt;strong&gt;Large Language Model&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
As the name implies, these models are &lt;em&gt;large&lt;/em&gt;: they are trained on massive amounts of public data, including books, websites, forums, code, and more.&lt;/p&gt;

&lt;p&gt;They use that training to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand your questions&lt;/li&gt;
&lt;li&gt;Predict contextually accurate responses&lt;/li&gt;
&lt;li&gt;Hold conversations like a human would&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So far so good — but here’s the catch...&lt;/p&gt;




&lt;h2&gt;
  
  
  What If You Want AI to Only Use &lt;em&gt;Your&lt;/em&gt; Data?
&lt;/h2&gt;

&lt;p&gt;Let’s say you’re a massive &lt;strong&gt;Virat Kohli&lt;/strong&gt; fan. You’ve collected 5 PDFs — articles, stats, bios — all about him and RCB.&lt;/p&gt;

&lt;p&gt;Now imagine you want to create a chatbot that answers questions &lt;strong&gt;only&lt;/strong&gt; from those PDFs. Not the entire internet.&lt;br&gt;&lt;br&gt;
No external opinions. Just facts from those documents.&lt;/p&gt;

&lt;p&gt;Can ChatGPT or Grok do that?&lt;br&gt;&lt;br&gt;
❌ Not directly.&lt;/p&gt;

&lt;p&gt;But &lt;strong&gt;Amazon Q&lt;/strong&gt; can.&lt;br&gt;&lt;br&gt;
✅ It lets you build an AI assistant that works exclusively on your data — PDFs, CSVs, PPTs, etc. — stored privately and securely in S3.&lt;/p&gt;




&lt;h2&gt;
  
  
  Enter RAG: Retrieval-Augmented Generation
&lt;/h2&gt;

&lt;p&gt;Here’s where the magic happens.&lt;/p&gt;

&lt;p&gt;RAG stands for &lt;strong&gt;Retrieval-Augmented Generation&lt;/strong&gt; — and it’s how tools like Amazon Q answer questions accurately from your documents.&lt;/p&gt;

&lt;p&gt;Think of it as a &lt;strong&gt;two-step AI process&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Retrieve&lt;/strong&gt; relevant content from your uploaded documents (like searching inside your PDFs)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate&lt;/strong&gt; a natural, human-like response using that content&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Simple Analogy:  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;RAG is like an open-book exam.&lt;br&gt;&lt;br&gt;
The AI doesn’t memorize everything — it looks up answers in your docs, and then explains them in plain English.&lt;/p&gt;
&lt;/blockquote&gt;
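&lt;p&gt;The retrieve step can be pictured as ranking document chunks by how well they match the question. Amazon Q uses vector embeddings under the hood; this toy word-overlap scorer only illustrates the shape of the logic:&lt;/p&gt;

```python
def retrieve(question, chunks, top_k=1):
    """Rank document chunks by word overlap with the question (toy scorer)."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words.intersection(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

chunks = [
    "Virat Kohli scored 973 runs in IPL 2016, a single-season record.",
    "RCB reached the IPL final in 2009, 2011 and 2016.",
    "Bangalore is known for its pleasant weather.",
]
question = "How many runs did Kohli score in IPL 2016?"
best = retrieve(question, chunks)[0]
print(best)
```

&lt;p&gt;Step 2 (generate) then hands the best chunk to the LLM as grounding context, which is why the answer stays faithful to your documents.&lt;/p&gt;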

&lt;p&gt;This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No outdated info from the web&lt;/li&gt;
&lt;li&gt;You stay in full control of your assistant's brain&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Cricket Bot Example — Powered by RAG &amp;amp; Amazon Q
&lt;/h2&gt;

&lt;p&gt;I built a quick Cricket Q&amp;amp;A Bot using &lt;strong&gt;Amazon Q Developer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here’s what I did:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uploaded 5 short PDFs about &lt;strong&gt;Virat Kohli&lt;/strong&gt; and &lt;strong&gt;RCB&lt;/strong&gt; to S3&lt;/li&gt;
&lt;li&gt;Created an Amazon Q application (takes &amp;lt;5 mins)&lt;/li&gt;
&lt;li&gt;Asked it questions like:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"How many runs did Kohli score in IPL 2016?"
"Has RCB ever won the IPL?"
"Who coached RCB in 2023?"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why This Is a Game-Changer
&lt;/h2&gt;

&lt;p&gt;With Amazon Q and RAG, you can build assistants that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rely only on private data (great for enterprise use)&lt;/li&gt;
&lt;li&gt;Scale as you add new content&lt;/li&gt;
&lt;li&gt;Work across industries: HR, finance, insurance, healthcare, sports — you name it!&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What You Need to Try This Yourself
&lt;/h3&gt;

&lt;p&gt;✅ An AWS account (Developer Tier is enough)&lt;br&gt;
✅ A few PDFs or files you want to build an assistant for&lt;br&gt;
✅ Amazon Q Developer Console&lt;br&gt;
✅ 20 minutes of curiosity!&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Prepare Your Data
&lt;/h3&gt;

&lt;p&gt;Get your source material ready. For this example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I collected &lt;strong&gt;5 short PDFs&lt;/strong&gt; about &lt;strong&gt;Virat Kohli&lt;/strong&gt; and &lt;strong&gt;RCB&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Make sure each document is cleanly structured (stats, articles, bios, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2: Upload Your Documents to S3
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Go to your &lt;strong&gt;AWS Console&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Open the &lt;strong&gt;S3&lt;/strong&gt; service&lt;/li&gt;
&lt;li&gt;Create a new bucket (e.g., &lt;code&gt;cricket-kohli-bot-data&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Upload your PDFs&lt;/li&gt;
&lt;/ol&gt;
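&lt;p&gt;If you prefer scripting over console clicks, the same upload can be done with boto3. A minimal sketch, assuming boto3 is installed and configured, with file names of my own invention:&lt;/p&gt;

```python
import os

def s3_key_for(path):
    # Each local PDF lands in the bucket under its basename
    return os.path.basename(path)

# With AWS credentials configured, the console steps above reduce to:
# import boto3
# s3 = boto3.client("s3")
# for path in ["docs/kohli_stats.pdf", "docs/rcb_history.pdf"]:
#     s3.upload_file(path, "cricket-kohli-bot-data", s3_key_for(path))
```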

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F545eaujgfu5hu2wdm4h7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F545eaujgfu5hu2wdm4h7.png" alt="Image description" width="800" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpf18t2bwohbclx3ay5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpf18t2bwohbclx3ay5a.png" alt="Image description" width="604" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Create an Amazon Q Application
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;a href="https://console.aws.amazon.com/q/" rel="noopener noreferrer"&gt;Amazon Q Developer Console&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create Application&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Name it something like &lt;code&gt;RCB-Kohli-Assistant&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Private Data&lt;/strong&gt; as your knowledge source&lt;/li&gt;
&lt;li&gt;Add &lt;strong&gt;S3&lt;/strong&gt; as your data source, and point it to your bucket&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvt9toqxta1crtue4a020.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvt9toqxta1crtue4a020.png" alt="Image description" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftax9xci51twmsdr9au9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftax9xci51twmsdr9au9u.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now it’s time to point to your data source. But first, make sure to click “Create Index” — this is required before you can select any data source (like S3 in our case).&lt;/p&gt;

&lt;p&gt;Once the index is created, go ahead and choose S3 as your data source.&lt;/p&gt;

&lt;p&gt;You can leave all the settings as default and simply click “Create”.&lt;/p&gt;

&lt;p&gt;After setup, your data source will appear in the list — don’t forget to click “Sync” to pull in all your source files (your PDFs from S3).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1s14x58yxfzzvj8ivs1p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1s14x58yxfzzvj8ivs1p.png" alt="Image description" width="800" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This can take a little while depending on the volume of files you sync (under a minute for me).&lt;/p&gt;

&lt;p&gt;Now that everything is set, let’s spin up our bot application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79nzwzjc3b0j3e73018o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79nzwzjc3b0j3e73018o.png" alt="Image description" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you enable the “Preview web experience,” it opens a webpage where you can start asking questions — just like you would with ChatGPT.&lt;/p&gt;

&lt;p&gt;My questions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a41qrqn44ospcvawsvz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a41qrqn44ospcvawsvz.png" alt="Image description" width="800" height="639"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the information isn’t available in those PDFs, it simply responds with “No answer found.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvzvdnctd5nvtkv69nud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvzvdnctd5nvtkv69nud.png" alt="Image description" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that’s it — you’ve just built your very own customized ChatGPT-style chatbot, tailored exclusively to your data!&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;AI isn’t just about answering everything — it’s about answering the right things based on the right data.&lt;/p&gt;

&lt;p&gt;Amazon Q gives you that power.&lt;br&gt;
RAG makes it smart and conversational.&lt;br&gt;
And cricket? Well, that just made it fun. 😄&lt;/p&gt;

&lt;p&gt;P.S.: I’m well aware that as of June 4th 2025, RCB finally clinched the title after 18 years!  But for the sake of this demo, I’m using sample data that isn’t updated beyond early 2025 — just to illustrate how the concept works in theory.&lt;/p&gt;

&lt;p&gt;Thanks for reading, please share your thoughts!&lt;/p&gt;

&lt;p&gt;Harsha&lt;/p&gt;

</description>
      <category>aws</category>
      <category>rag</category>
      <category>amazonq</category>
    </item>
    <item>
      <title>AI-Powered Resume Shortlister using Amazon Bedrock</title>
      <dc:creator>Harsha Mathan</dc:creator>
      <pubDate>Wed, 14 May 2025 17:10:56 +0000</pubDate>
      <link>https://dev.to/aws-builders/ai-powered-resume-shortlister-using-amazon-bedrock-5gg4</link>
      <guid>https://dev.to/aws-builders/ai-powered-resume-shortlister-using-amazon-bedrock-5gg4</guid>
<description>&lt;p&gt;Recently, I needed to fill a position on my team. The moment we opened the role, I received close to 300 resumes for evaluation. While our HR team did their best to manually sift through them based on job description, experience, skill set, and other factors, I realized:&lt;br&gt;
Why not automate this using Generative AI?&lt;/p&gt;

&lt;p&gt;And so, this product was born :)&lt;/p&gt;
&lt;h2&gt;
  
  
  What This Is About
&lt;/h2&gt;

&lt;p&gt;This project explores how Generative AI (via Amazon Bedrock) can help HR teams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically screen resumes&lt;/li&gt;
&lt;li&gt;Evaluate candidate fitment to job descriptions&lt;/li&gt;
&lt;li&gt;Provide actionable insights&lt;/li&gt;
&lt;li&gt;All in minutes, not days.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Recruiters often face:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hundreds of resumes per job posting&lt;/li&gt;
&lt;li&gt;Time-consuming manual screening&lt;/li&gt;
&lt;li&gt;Inconsistent and subjective evaluation&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;With the power of Amazon Bedrock + Python + Streamlit, I built a Resume Shortlister that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Takes resumes (.docx) as input&lt;/li&gt;
&lt;li&gt;Evaluates them against a selected job description&lt;/li&gt;
&lt;li&gt;Returns:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Match Score&lt;/li&gt;
&lt;li&gt;Reasoning&lt;/li&gt;
&lt;li&gt;Missing Skills&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Tech Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Bedrock (Titan Text - Nova)&lt;/li&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;Streamlit for UI&lt;/li&gt;
&lt;li&gt;Matplotlib, WordCloud for analytics&lt;/li&gt;
&lt;li&gt;Pandas for data handling&lt;/li&gt;
&lt;li&gt;Boto3 for AWS integration&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Folder Structure
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zqb8qi67r87ee8cul5p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zqb8qi67r87ee8cul5p.png" alt="Image description" width="459" height="202"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  How It Works (Code Snippet on a high level)
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def build_prompt(resume_text, jd_text):
    return f"""
Human: Evaluate the following resume against this job description. Score the match out of 100 and explain why.

Job Description:
{jd_text}

Resume:
{resume_text}

Return a JSON object with:
- "score": integer
- "reasoning": list of 2-3 sentences
- "missing": list of skills or experiences missing
Respond only with valid JSON.
Assistant:
"""
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;resume_shortlister.py&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json
from docx import Document

# ---------- Step 1: Load Resume ----------
def extract_resume_text(docx_path):
    doc = Document(docx_path)
    return "\n".join([para.text for para in doc.paragraphs])

# ---------- Step 2: Sample Job Descriptions ----------
job_descriptions = {
    "Project Manager": """
We are looking for an experienced Project Manager to lead cross-functional teams...
(Key skills: Agile, Scrum, Budgeting, JIRA, Communication)
""",
    "Software Developer": """
Join our dev team to build scalable backend services...
(Key skills: Python, Java, APIs, AWS, CI/CD)
"""
}

# ---------- Step 3: Build Prompt ----------
def build_prompt(jd_text, resume_text):
    return f"""
You are an AI assistant helping shortlist job candidates.

Here is the job description:
{jd_text}

Here is the resume:
{resume_text}

Task:
1. Rate this resume from 0 to 100 based on its relevance to the job.
2. List the top 3 reasons why this resume is a good or poor fit.
3. Mention any important missing qualifications.

Respond in JSON format:
{{
  "score": &amp;lt;score&amp;gt;,
  "reasoning": ["..."],
  "missing": ["..."]
}}
"""

# ---------- Step 4: Call Bedrock ----------
def call_bedrock(prompt):
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Titan Text models use the inputText/textGenerationConfig request format
    # (the Anthropic-style "messages" body only works with Claude model IDs)
    body = {
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": 500,
            "temperature": 0.5
        }
    }

    response = bedrock.invoke_model(
        modelId="amazon.titan-text-lite-v1",
        body=json.dumps(body),
        contentType="application/json",
        accept="application/json",
    )

    output = json.loads(response['body'].read().decode())
    return output['results'][0]['outputText']

# ---------- Run ----------
if __name__ == "__main__":
    resume_path = "resumes/Project_Manager_1.docx"
    role = "Project Manager"  # change role

    resume_text = extract_resume_text(resume_path)
    jd = job_descriptions[role]
    prompt = build_prompt(jd, resume_text)

    print("\n📤 Sending prompt to Bedrock...\n")
    result = call_bedrock(prompt)

    print("✅ Response from Bedrock:\n")
    print(result)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
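&lt;p&gt;One practical note: even when the prompt says “Respond in JSON format,” models sometimes wrap the JSON in extra prose. Before feeding the result of &lt;code&gt;call_bedrock&lt;/code&gt; into &lt;code&gt;json.loads&lt;/code&gt;, a small defensive extraction step helps (the helper name here is mine):&lt;/p&gt;

```python
import json
import re

def parse_model_json(text):
    # Pull the first {...} span out of the model output, tolerating surrounding prose
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

# Example: model output with chatter around the JSON payload
raw = 'Here is my evaluation:\n{"score": 72, "reasoning": ["solid PM background"], "missing": []}'
result = parse_model_json(raw)
print(result["score"])
```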



&lt;p&gt;&lt;code&gt;streamlit_app.py&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import streamlit as st
import os
import boto3
import json
from docx import Document
import pandas as pd
import matplotlib.pyplot as plt
from wordcloud import WordCloud
from collections import Counter
from botocore.exceptions import BotoCoreError, ClientError

# Constants
BEDROCK_MODEL_ID = "amazon.titan-text-lite-v1"
REGION = "us-east-1"

job_descriptions = {
    "Project Manager": "We are looking for an experienced Project Manager to lead cross-functional teams...",
    "Software Developer": "We are looking for a Software Developer with experience in front-end and back-end technologies...",
    "Intern": "We are looking for a motivated Intern with a strong willingness to learn and assist in various tasks...",
    "Team Lead": "Looking for a Team Lead to guide and support engineering teams in agile delivery...",
    "HR Manager": "We need a skilled HR Manager to handle talent acquisition, employee engagement, and compliance...",
    "Sales Expert": "Seeking a sales expert with experience in B2B and client relationship management..."
    # Add more roles as needed
}

# --- Helper Functions ---
def extract_resume_text(docx_file):
    doc = Document(docx_file)
    return "\n".join([p.text for p in doc.paragraphs if p.text.strip() != ""])

def build_prompt(resume_text, jd_text):
    return f"""
Human: Evaluate the following resume against this job description. Score the match out of 100 and explain why.

Job Description:
{jd_text}

Resume:
{resume_text}

Return a JSON object with:
- "score": integer
- "reasoning": list of 2-3 sentences
- "missing": list of skills or experiences missing
Respond only with valid JSON.
Assistant:
"""

def get_bedrock_response(prompt):
    client = boto3.client("bedrock-runtime", region_name=REGION)
    # Titan Text models use the inputText/textGenerationConfig request format
    body = {
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": 1024,
            "temperature": 0.3
        }
    }
    try:
        response = client.invoke_model(
            modelId=BEDROCK_MODEL_ID,
            contentType="application/json",
            accept="application/json",
            body=json.dumps(body)
        )
        response_body = json.loads(response['body'].read())
        return response_body['results'][0]['outputText']
    except (BotoCoreError, ClientError) as e:
        st.error(f"Error from Bedrock: {e}")
        return None

# --- Streamlit UI ---
st.title("📄 AI Resume Shortlister &amp;amp; Analytics Dashboard")

role = st.selectbox("Select Job Role", list(job_descriptions.keys()))
uploaded_files = st.file_uploader("Upload Resumes", type=["docx"], accept_multiple_files=True)

results = []

if uploaded_files:
    for uploaded_file in uploaded_files:
        st.write(f"📑 Evaluating: `{uploaded_file.name}`")
        resume_text = extract_resume_text(uploaded_file)
        job_description = job_descriptions.get(role, "")
        prompt = build_prompt(resume_text, job_description)

        with st.spinner("Sending to Amazon Bedrock..."):
            raw_output = get_bedrock_response(prompt)

        if raw_output:
            try:
                parsed = json.loads(raw_output)
                score = parsed.get("score", 0)
                reasoning = " | ".join(parsed.get("reasoning", []))
                missing = parsed.get("missing", [])
                results.append({
                    "name": uploaded_file.name,
                    "score": score,
                    "reasoning": reasoning,
                    "missing": missing
                })

                st.success(f"✅ Score: {score}")
                st.write(f"💡 Reasoning: {reasoning}")
                if missing:
                    st.warning(f"❌ Missing Skills: {' | '.join(missing)}")
            except json.JSONDecodeError:
                st.error("❌ Failed to parse Bedrock's response.")
        else:
            st.error("⚠️ No response from Bedrock.")

# --- Analytics Dashboard ---
if results:
    st.subheader("📊 Resume Analytics")

    df = pd.DataFrame(results)

    # Score Distribution
    st.markdown("### 📈 Score Distribution")
    fig, ax = plt.subplots()
    ax.hist(df["score"], bins=10, color="skyblue", edgecolor="black")
    ax.set_xlabel("Match Score")
    ax.set_ylabel("Number of Candidates")
    st.pyplot(fig)

    # Top Candidates
    st.markdown("### 🏆 Top 5 Candidates")
    top_candidates = df.sort_values(by="score", ascending=False).head(5)
    st.dataframe(top_candidates[["name", "score", "reasoning"]])

    # WordCloud of Missing Skills
    st.markdown("### ☁️ Missing Skills WordCloud")
    all_missing = [skill for sublist in df["missing"] for skill in sublist]
    if all_missing:
        wordcloud = WordCloud(width=800, height=400, background_color='white').generate(" ".join(all_missing))
        fig_wc, ax_wc = plt.subplots()
        ax_wc.imshow(wordcloud, interpolation='bilinear')
        ax_wc.axis("off")
        st.pyplot(fig_wc)
    else:
        st.info("✅ No missing skills across uploaded resumes!")


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Streamlit Demo (UI Preview)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5yu4a3nh0ks9ui7g5ra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5yu4a3nh0ks9ui7g5ra.png" alt="Image description" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the app is launched (with &lt;code&gt;streamlit run streamlit_app.py&lt;/code&gt;), the UI opens with the following features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upload multiple resumes&lt;/li&gt;
&lt;li&gt;Choose Job Role (Project Manager, Developer, Intern, etc.)&lt;/li&gt;
&lt;li&gt;See Match Score, Reasoning, and Missing Skills&lt;/li&gt;
&lt;li&gt;View charts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmwgybqggd8m5sppmmpg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmwgybqggd8m5sppmmpg.png" alt="Image description" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Suppose I'm hiring a Software Developer for my team — I simply select the relevant role from the dropdown menu.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3i6poawkvqwmmznr6n6m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3i6poawkvqwmmznr6n6m.png" alt="Image description" width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, I upload the resumes I’ve received. For demonstration purposes, I’ve included a mix of profiles to test whether Bedrock can accurately identify the most relevant candidates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjpiorimqi35q78vzhegm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjpiorimqi35q78vzhegm.png" alt="Image description" width="793" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Bedrock now takes over and begins evaluating each profile.&lt;br&gt;
If you're curious about what exactly we're sending to Bedrock, you can revisit the Python code — the prompt is designed to be clear and structured. Bedrock then uses Amazon Titan under the hood to evaluate the resume and return results.&lt;/p&gt;

&lt;p&gt;Once the analysis is complete, we get a structured summary for each resume, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Match Score (out of 100)&lt;/li&gt;
&lt;li&gt;Reasoning behind the score&lt;/li&gt;
&lt;li&gt;Missing skills or experience compared to the job description&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This gives instant clarity to recruiters on which candidates are worth shortlisting — all without manually opening a single resume.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4bktd0kb6knl7l5ns36g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4bktd0kb6knl7l5ns36g.png" alt="Image description" width="750" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "score": 85,
  "reasoning": [
    "The resume demonstrates relevant experience as a project manager...",
    "The candidate has 1 year of experience...",
    "Highlights skills in team coordination..."
  ],
  "missing": [
    "No mention of JIRA",
    "Education section lacks detail"
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, the system presents the Top 5 candidates — each with a match score and clear reasoning — making it incredibly easy for the HR team to not just identify qualified profiles, but to confidently select the best of the best.&lt;/p&gt;
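&lt;p&gt;Under the hood, that ranking is just a sort on the parsed scores — a pure-Python equivalent of the pandas &lt;code&gt;sort_values&lt;/code&gt; call in &lt;code&gt;streamlit_app.py&lt;/code&gt;, shown here with made-up sample data:&lt;/p&gt;

```python
# Sample parsed results (names and scores are illustrative)
results = [
    {"name": "dev_resume_1.docx", "score": 85},
    {"name": "intern_resume.docx", "score": 42},
    {"name": "dev_resume_2.docx", "score": 91},
]

# Highest score first, capped at five entries
top5 = sorted(results, key=lambda r: r["score"], reverse=True)[:5]
for rank, r in enumerate(top5, start=1):
    print(rank, r["name"], r["score"])
```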

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs52bzwvt0utmg5jay5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs52bzwvt0utmg5jay5w.png" alt="Image description" width="754" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Impact on HR Teams
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Saves time and effort&lt;/li&gt;
&lt;li&gt;Transparent and explainable results&lt;/li&gt;
&lt;li&gt;Consistent, AI-assisted decision making&lt;/li&gt;
&lt;li&gt;Shortlist ready-to-interview candidates faster&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This project showcases how Generative AI combined with Amazon Nova can revolutionize Talent Acquisition and HR Analytics — automating resume screening, improving decision-making, and saving hours of manual effort.&lt;/p&gt;

&lt;p&gt;Whether you're an HR professional, a hiring manager, or an AI enthusiast, this is a glimpse into the future of intelligent hiring.&lt;/p&gt;

&lt;p&gt;Feel free to try it out and share your feedback!&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br&gt;
Harsha&lt;/p&gt;

</description>
      <category>aws</category>
      <category>bedrock</category>
      <category>nova</category>
    </item>
    <item>
      <title>Amazon Nova Powered AI Claims Assistant</title>
      <dc:creator>Harsha Mathan</dc:creator>
      <pubDate>Sun, 20 Apr 2025 06:19:56 +0000</pubDate>
      <link>https://dev.to/aws-builders/amazon-nova-powered-ai-claims-assistant-42p1</link>
      <guid>https://dev.to/aws-builders/amazon-nova-powered-ai-claims-assistant-42p1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;If you're looking to experiment with Amazon Bedrock or want a simple hands-on project to get started, this toy use case is a great way to explore how large language models can be integrated into real-world workflows.&lt;/p&gt;

&lt;p&gt;Traditionally, when you submit an insurance claim—whether it’s for an auto accident or a medical procedure—it’s first reviewed by a human adjuster. The adjuster evaluates the details, validates supporting documents, and then determines whether the claim is genuine before approving or rejecting it.&lt;/p&gt;

&lt;p&gt;In this blog post, we explore how Amazon Bedrock, specifically the Amazon Nova model, can be used to play the role of an adjuster. By leveraging Bedrock through a simple AWS Lambda function, we demonstrate how AI can assist in summarizing claims and assessing potential fraud risks.&lt;/p&gt;

&lt;p&gt;For this initial prototype, we’re using AWS Lambda and Amazon Bedrock, but in the future, this project could be extended to include services like API Gateway, Step Functions, S3, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Get Model Access from Amazon Bedrock
&lt;/h2&gt;

&lt;p&gt;Before you begin, make sure you have access to the Amazon Nova Lite and Nova Pro models:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to Amazon Bedrock in the AWS Console.&lt;/li&gt;
&lt;li&gt;Under Model Access, request access to the Nova models if not already enabled.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84uw4vp87schvgp2iegv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84uw4vp87schvgp2iegv.png" alt="Image description" width="800" height="43"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Create the Lambda Function
&lt;/h2&gt;

&lt;p&gt;In VS Code, create a new folder for your Lambda project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzit9wm5z8mpjcrdiuny7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzit9wm5z8mpjcrdiuny7.png" alt="Image description" width="654" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5d5hjmmxfpe4egpfzvvt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5d5hjmmxfpe4egpfzvvt.png" alt="Image description" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Write your Lambda function code (shared below).&lt;/p&gt;

&lt;p&gt;In the AWS Console, create a new Lambda function.&lt;/p&gt;

&lt;p&gt;After creation, a default IAM role will be associated with the Lambda. Add a policy to this role to give it Amazon Bedrock full access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbspr00rk1odj0le7d4y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbspr00rk1odj0le7d4y.png" alt="Image description" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgp78vm22vcbl1wlo5mr2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgp78vm22vcbl1wlo5mr2.png" alt="Image description" width="800" height="300"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Effect": "Allow",
  "Action": [
    "bedrock:InvokeModel"
  ],
  "Resource": "*"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
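&lt;p&gt;Note that the snippet above is a single statement; the inline policy you attach to the role needs the surrounding &lt;code&gt;Version&lt;/code&gt;/&lt;code&gt;Statement&lt;/code&gt; wrapper. Expressed in Python for clarity:&lt;/p&gt;

```python
import json

# Full inline-policy document wrapping the statement shown above
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": "*",
        }
    ],
}
policy_json = json.dumps(policy, indent=2)
print(policy_json)
```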



&lt;h2&gt;
  
  
  Step 3: Lambda Function Code (lambda_function.py)
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json
import os

bedrock = boto3.client("bedrock-runtime")
MODEL_ID = os.getenv("MODEL_ID", "amazon.nova-pro-v1:0")

def lambda_handler(event, context):
    body = json.loads(event['body'])
    note = body.get('note', '')

    prompt = build_prompt(note)

    response = bedrock.invoke_model(
        modelId=MODEL_ID,
        contentType="application/json",
        accept="application/json",
        body=json.dumps({
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {"text": prompt}
                    ]
                }
            ]
        })
    )

    model_output = json.loads(response['body'].read())
    ai_reply = model_output.get('output', {}).get('message', {}).get('content', [{}])[0].get('text', '')

    return {
        "statusCode": 200,
        "body": json.dumps({
            "summary_and_score": ai_reply
        })
    }

def build_prompt(note):
    return f"""
You are an AI assistant for insurance claims adjusters. Read the adjuster's note and:

1. Summarize the claim in 3–5 bullet points.
2. Highlight any fraud risk indicators (if any).
3. Provide a fraud risk score between 0 and 10.

Adjuster's Note:
\"\"\"{note}\"\"\"

Output Format:
Summary:
- ...
Fraud Risk:
- ...
Score: ...
"""

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don’t forget to add an environment variable in the Lambda settings:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;MODEL_ID = amazon.nova-pro-v1:0&lt;/code&gt;&lt;/p&gt;
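If you prefer to script this step, the same variable can be set with boto3. This is a sketch only: the function name is a placeholder, and note that `update_function_configuration` replaces the function's entire environment, not just one variable.

```python
def build_environment(model_id: str) -> dict:
    """Build the Environment payload that Lambda expects for env vars."""
    return {"Variables": {"MODEL_ID": model_id}}


def set_model_id(function_name: str, model_id: str) -> None:
    """Apply MODEL_ID to a deployed function (requires AWS credentials).

    Caution: this call replaces ALL existing environment variables on
    the function with the ones supplied here.
    """
    import boto3  # lazy import: the pure helper above works without the SDK

    client = boto3.client("lambda")
    client.update_function_configuration(
        FunctionName=function_name,
        Environment=build_environment(model_id),
    )
```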

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fip1fm63ogbadxtey6tn3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fip1fm63ogbadxtey6tn3.png" alt="Image description" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Testing the Lambda&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use the Lambda test feature in the console to simulate a request. Here's a sample payload you can use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "body": "{\"note\": \"Claimant reports a scratch on their car in a parking lot. No CCTV footage is available. No other witnesses. Wants to proceed with the claim.\"}"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
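The console test event above can also be sent programmatically. The helper below builds the same API Gateway-style event (a JSON string under `"body"`); the invoke call is a hedged sketch and assumes your function name, which is not given in the article.

```python
import json


def build_test_event(note: str) -> dict:
    """Wrap an adjuster note the way API Gateway would: as a JSON string body."""
    return {"body": json.dumps({"note": note})}


def invoke_claims_lambda(function_name: str, note: str) -> dict:
    """Invoke the deployed function with a test event (requires AWS credentials)."""
    import boto3  # lazy import: build_test_event works without the SDK

    client = boto3.client("lambda")
    response = client.invoke(
        FunctionName=function_name,
        Payload=json.dumps(build_test_event(note)),
    )
    # Payload is a streaming body containing the handler's return value
    return json.loads(response["Payload"].read())
```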



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2ky4iba60de4przuic2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2ky4iba60de4przuic2.png" alt="Image description" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lxlwaejqcplwlz32xn8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lxlwaejqcplwlz32xn8.png" alt="Image description" width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sample Output&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8jbg9wnh1j4fpcko676.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8jbg9wnh1j4fpcko676.png" alt="Image description" width="800" height="369"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "statusCode": 200,
  "body": "{\"summary_and_score\": \"Summary:\\n- Claimant reports a scratch on their car in a parking lot.\\n- No CCTV footage is available to support the claim.\\n- Claimant is unable to produce any evidence of the incident.\\n\\nFraud Risk:\\n- Lack of evidence or witnesses.\\n- No CCTV footage to corroborate the claimant's statement.\\n\\nScore: 7\\n\\nExplanation: The high fraud risk score is due to the absence of supporting evidence and the inability to verify the claim through surveillance or other means.\"}"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How is the Fraud Risk Score Interpreted?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The fraud risk score is a value from 0 to 10, where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;0–3:&lt;/strong&gt; Low fraud risk — the claim appears legitimate&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;4–6:&lt;/strong&gt; Moderate risk — the claim may need further review&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;7–10:&lt;/strong&gt; High fraud risk — the claim has red flags or lacks supporting evidence&lt;/li&gt;
&lt;/ul&gt;
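If you want to act on the score downstream, for example routing high-risk claims to a human reviewer, the banding above is straightforward to encode. This mapping is my own sketch of the table, not code from the article:

```python
def interpret_score(score: int) -> str:
    """Map a 0-10 fraud risk score onto the interpretation bands above."""
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score <= 3:
        return "Low fraud risk"
    if score <= 6:
        return "Moderate risk"
    return "High fraud risk"
```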

&lt;p&gt;As shown above, the Nova model provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A clear summary of the claim&lt;/li&gt;
&lt;li&gt;Detected fraud indicators&lt;/li&gt;
&lt;li&gt;A risk score with reasoning&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This toy project demonstrates how Amazon Bedrock and the Nova model can act as an intelligent assistant for insurance claims adjusters. By summarizing claim notes and providing fraud risk analysis, it can help organizations quickly triage and prioritize claims.&lt;/p&gt;

&lt;p&gt;This is just a starting point. Future enhancements may include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time REST APIs via API Gateway&lt;/li&gt;
&lt;li&gt;Workflow orchestration using Step Functions&lt;/li&gt;
&lt;li&gt;Secure data storage on S3&lt;/li&gt;
&lt;li&gt;Automated claim status notifications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By leveraging the Amazon Nova model through Amazon Bedrock, we're able to mimic the judgment of a seasoned claims adjuster—automating the summarization of adjuster notes, detecting possible red flags, and assigning a fraud risk score based on context.&lt;/p&gt;

&lt;p&gt;This AI-powered workflow brings multiple benefits:&lt;/p&gt;

&lt;p&gt;🧠 Smarter decisions: Nova evaluates claims contextually, helping to reduce oversight and human error.&lt;/p&gt;

&lt;p&gt;⚖️ Consistent outcomes: The model applies the same logic across all claims, ensuring fairness and reducing bias.&lt;/p&gt;

&lt;p&gt;🕒 Time savings: Automating the initial review process frees up human adjusters to focus on complex or high-value claims.&lt;/p&gt;

&lt;p&gt;📈 Scalable operations: This approach can be applied to thousands of claims daily without additional headcount.&lt;/p&gt;

&lt;p&gt;If you have any suggestions or see room for improvement, feel free to comment, or reach me on LinkedIn - &lt;a href="https://www.linkedin.com/in/hvmathan/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/hvmathan/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading! &lt;/p&gt;

</description>
      <category>amazonnova</category>
      <category>amazonbedrock</category>
      <category>aws</category>
    </item>
    <item>
      <title>AWS Step Function Limitations and How to Overcome Them</title>
      <dc:creator>Harsha Mathan</dc:creator>
      <pubDate>Tue, 08 Apr 2025 04:15:59 +0000</pubDate>
      <link>https://dev.to/hvmathan/aws-step-function-limitations-and-how-to-overcome-them-16m8</link>
      <guid>https://dev.to/hvmathan/aws-step-function-limitations-and-how-to-overcome-them-16m8</guid>
      <description>&lt;p&gt;AWS Step Functions is one of the best serverless cloud services for orchestration. It is user-friendly, easy to adopt, and integrates seamlessly with numerous AWS services. However, like any tool, it has its limitations. This article highlights some common challenges and practical workarounds...&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Payload Size Limitation&lt;/strong&gt;&lt;br&gt;
Step Functions caps the data passed between states at 256 KB. When working with services like AWS DMS, retrieving the tasks associated with a replication instance via DescribeReplicationTasks can exceed this limit, leading to errors or truncated data.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Workaround:&lt;/em&gt;&lt;br&gt;
Use AWS Lambda to preprocess or transform large datasets before passing them to the Step Function. Alternatively, store large payloads in Amazon S3 or DynamoDB, and pass a reference (e.g., an S3 URL) instead.&lt;/p&gt;
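A minimal sketch of the S3 reference pattern described above: store the large result once, then pass only a small pointer between states. The bucket and key names are illustrative, not from the article.

```python
import json


def to_s3_reference(bucket: str, key: str) -> dict:
    """Build the small message passed between states instead of the raw payload."""
    return {"payload_ref": f"s3://{bucket}/{key}"}


def offload_payload(bucket: str, key: str, payload: dict) -> dict:
    """Store a large payload in S3 and return a well-under-256 KB reference."""
    import boto3  # lazy import so the pure helper is usable without the SDK

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=json.dumps(payload).encode("utf-8"),
    )
    return to_s3_reference(bucket, key)
```

A downstream state (or Lambda) then reads the object back with `get_object` using the same bucket and key.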

&lt;p&gt;&lt;strong&gt;2. Limited Resilience in Express Workflows&lt;/strong&gt;&lt;br&gt;
Although Express Workflows support Retry and Catch in the state language, failed executions cannot be redriven, and asynchronous invocations provide only at-least-once semantics, making them less resilient to transient failures than Standard Workflows.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Workaround:&lt;/em&gt;&lt;br&gt;
Combine CloudWatch metrics with an external monitoring or alerting system to detect and handle failures. Additionally, design your application logic to retry failed tasks as needed.&lt;/p&gt;
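One way to implement the application-level retries mentioned above is a small client-side wrapper with exponential backoff. This is a generic sketch; the attempt count and delays are arbitrary, and in practice the callable would be something like a boto3 `start_execution` call.

```python
import time


def call_with_retries(fn, attempts: int = 3, base_delay: float = 0.5):
    """Call fn(), retrying on exception with exponential backoff.

    Returns fn()'s result, or re-raises the last exception once all
    attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            # back off: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * (2 ** attempt))
```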

&lt;p&gt;&lt;strong&gt;3. Limited Monitoring&lt;/strong&gt;&lt;br&gt;
While Step Functions offer basic CloudWatch metrics, they lack fine-grained debugging capabilities, such as detailed logs for every state transition.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Workaround:&lt;/em&gt;&lt;br&gt;
Enable detailed logging by sending execution history to CloudWatch Logs. This allows you to monitor workflow behavior, debug issues, and analyze execution history in depth.&lt;/p&gt;
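Enabling execution-history logging can also be scripted via the UpdateStateMachine API. The ARNs here are placeholders; the log group must already exist and the state machine's role needs permission to write to it.

```python
def build_logging_configuration(log_group_arn: str, level: str = "ALL") -> dict:
    """Build the loggingConfiguration payload for UpdateStateMachine."""
    return {
        "level": level,  # one of ALL, ERROR, FATAL, OFF
        "includeExecutionData": True,  # log state input/output as well
        "destinations": [
            {"cloudWatchLogsLogGroup": {"logGroupArn": log_group_arn}}
        ],
    }


def enable_logging(state_machine_arn: str, log_group_arn: str) -> None:
    """Apply the logging configuration to an existing state machine."""
    import boto3  # lazy import: the builder above needs no AWS access

    sfn = boto3.client("stepfunctions")
    sfn.update_state_machine(
        stateMachineArn=state_machine_arn,
        loggingConfiguration=build_logging_configuration(log_group_arn),
    )
```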

&lt;p&gt;&lt;strong&gt;4. Choice Workflow State&lt;/strong&gt;&lt;br&gt;
When handling pagination in workflows, the marker value may be null. In that case, evaluate the IsNull rule first, and only afterwards apply rules that assume a non-null value: most comparison operators fail when applied to null, so the order of the rules matters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Variable": "$.possiblyNullValue",
  "IsNull": true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This page from the AWS documentation lists all the Choice state comparison rules we can leverage: &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/state-choice.html#amazon-states-language-choice-state-rules" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/step-functions/latest/dg/state-choice.html#amazon-states-language-choice-state-rules&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS Step Functions remains a powerful orchestration tool with extensive integrations and scalability. By understanding its constraints and implementing these workarounds, you can design robust and efficient workflows that meet your application needs. Thanks for your time!&lt;/p&gt;

&lt;p&gt;— Harsha&lt;/p&gt;

</description>
      <category>stepfunctions</category>
      <category>aws</category>
      <category>serverless</category>
    </item>
  </channel>
</rss>
