<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matteo Depascale</title>
    <description>The latest articles on DEV Community by Matteo Depascale (@depaa).</description>
    <link>https://dev.to/depaa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1053317%2Ff271e869-77c2-4e06-9076-9480bb1087dd.jpeg</url>
      <title>DEV Community: Matteo Depascale</title>
      <link>https://dev.to/depaa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/depaa"/>
    <language>en</language>
    <item>
      <title>10 Essential AWS Security Steps for Your AWS Account</title>
      <dc:creator>Matteo Depascale</dc:creator>
      <pubDate>Wed, 13 Nov 2024 13:00:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/10-essential-aws-security-steps-for-your-aws-account-2kac</link>
      <guid>https://dev.to/aws-builders/10-essential-aws-security-steps-for-your-aws-account-2kac</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;⚠️ Hey! This blog post was initially published on my own blog. Check it out from the source: &lt;a href="https://cloudnature.net/blog/10-essential-aws-security-steps-for-your-aws-account?utm_medium=social&amp;amp;utm_source=dev.to"&gt;https://cloudnature.net/blog/10-essential-aws-security-steps-for-your-aws-account&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;After spending years helping teams set up their AWS infrastructure, I've noticed something interesting: many of us face the same security challenges when starting out. You know what I mean if you've ever wondered "Wait, is my S3 bucket actually secure?" or "Should I really be using the root account for this?" (Spoiler: probably not!)&lt;/p&gt;

&lt;p&gt;The good news? I've put together this guide to help you build a rock-solid AWS security foundation from day one. We'll cover 10 essential security measures that I've seen make a real difference in protecting AWS environments. While absolute security is a journey rather than a destination, implementing these steps will put you way ahead of the game in defending against common attack vectors.&lt;/p&gt;

&lt;p&gt;And I've also created a Terraform project that you can use as a baseline for securing your AWS account!&lt;/p&gt;

&lt;p&gt;The best part? It's all under the AWS free tier! 😉&lt;/p&gt;

&lt;p&gt;Essentially, I got tired of reading the same posts about people (or organizations) getting their accounts hacked; here's my solution for that!&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Set up AWS Organizations
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS Organizations: it's one of those services I wish someone had told me about when I first started with AWS!&lt;/p&gt;

&lt;h3&gt;
  
  
  What is AWS Organizations?
&lt;/h3&gt;

&lt;p&gt;AWS Organizations is like a control center for all your AWS accounts. Think of it as a tree for your AWS infrastructure, where you can manage multiple accounts under one umbrella. Even if you're starting with just one account, setting up Organizations from day one is a smart move (I'll tell you why in a moment!).&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Organizations is Essential for Security
&lt;/h3&gt;

&lt;p&gt;Here's what makes Organizations so powerful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Centralized Security Management&lt;/strong&gt;: Apply security policies across all accounts from one place&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consolidated Billing&lt;/strong&gt;: One bill for all accounts (and often volume discounts!)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Control Policies (SCPs)&lt;/strong&gt;: Think of these as guardrails that prevent security troubles&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API-driven account creation&lt;/strong&gt;: Automate new account creation (super handy as you grow)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Benefits for Single Account Holders
&lt;/h3&gt;

&lt;p&gt;"But wait," you might say, "I only have one account!" Trust me, I hear this a lot. Here's why Organizations is still valuable for you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Future-Proofing&lt;/strong&gt;: When (not if) you need another account for development/staging, you're ready&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Best Practices&lt;/strong&gt;: SCPs are available only with AWS Organizations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Management&lt;/strong&gt;: Track and categorize spending more effectively&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Free to Use&lt;/strong&gt;: There's no additional cost for using Organizations; it sits comfortably within the AWS Free Tier&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Quick Setup Guide
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Log into your AWS root account&lt;/li&gt;
&lt;li&gt;Navigate to AWS Organizations&lt;/li&gt;
&lt;li&gt;Click "Create Organization"&lt;/li&gt;
&lt;li&gt;Choose "All features" (don't limit yourself!)&lt;/li&gt;
&lt;/ol&gt;
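
&lt;p&gt;If you prefer infrastructure as code, the same setup can be sketched in Terraform like this (the policy types and service principals shown are just a starting point):&lt;/p&gt;

```hcl
# Creates the AWS Organization with all features enabled,
# so SCPs and other advanced policies become available.
resource "aws_organizations_organization" "this" {
  feature_set = "ALL"

  # Allow SCPs to be attached to accounts and OUs.
  enabled_policy_types = ["SERVICE_CONTROL_POLICY"]

  # Let organization-wide services (e.g. IAM Identity Center) work across accounts.
  aws_service_access_principals = ["sso.amazonaws.com"]
}
```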

&lt;p&gt;Want to make this even easier? My Terraform module sets all of this up automatically, with best practices baked in!&lt;/p&gt;

&lt;p&gt;Your AWS account structure is like the foundation of a house: it's much easier to get it right from the start than to renovate later!&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Implement AWS Single Sign-On (SSO)
&lt;/h2&gt;

&lt;p&gt;If you're still creating IAM users and managing access keys, let me introduce you to a game-changer: AWS SSO (now called IAM Identity Center).&lt;/p&gt;

&lt;h3&gt;
  
  
  What is AWS SSO?
&lt;/h3&gt;

&lt;p&gt;AWS Single Sign-On (SSO), now called AWS IAM Identity Center, is a cloud service that lets you centrally manage access to your AWS accounts and business applications. Instead of having individual IAM users with long-term access keys in each AWS account, users log in once through a web portal and get temporary security credentials to access AWS resources.&lt;/p&gt;

&lt;p&gt;Think of it like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Old way&lt;/strong&gt;: Create IAM users in each account, manage access keys, update permissions in multiple places&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSO way&lt;/strong&gt;: Users log in once at &lt;code&gt;https://your-domain.awsapps.com/start&lt;/code&gt;, click on the account/role they need, and get instant secure access&lt;/li&gt;
&lt;/ul&gt;
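
&lt;p&gt;The SSO flow works from the CLI too. As a sketch, an SSO profile in &lt;code&gt;~/.aws/config&lt;/code&gt; might look like this (the start URL, region, account ID, and role name below are placeholders):&lt;/p&gt;

```ini
[profile my-admin]
sso_start_url  = https://your-domain.awsapps.com/start
sso_region     = eu-west-1
sso_account_id = 123456789012
sso_role_name  = AdministratorAccess
region         = eu-west-1
```

&lt;p&gt;Then &lt;code&gt;aws sso login --profile my-admin&lt;/code&gt; opens a browser for authentication and fetches temporary credentials, with no access keys involved.&lt;/p&gt;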

&lt;h3&gt;
  
  
  Why AWS SSO is Awesome
&lt;/h3&gt;

&lt;p&gt;With AWS SSO you can say goodbye to a very long list of "bad practices":&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No More Access Key Management&lt;/strong&gt;: Users log in through a web portal instead&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temporary Credentials&lt;/strong&gt;: Security tokens that expire automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Centralized Access Control&lt;/strong&gt;: Manage permissions for all accounts in one place&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Perfect with Organizations&lt;/strong&gt;: Works seamlessly with the structure we just set up&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's what makes SSO particularly secure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User Login → SSO Portal → Temporary Credentials → AWS Access
    ↳ MFA Required     ↳ Auto-expires     ↳ Limited Scope
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every access is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authenticated (you prove who you are)&lt;/li&gt;
&lt;li&gt;Authorized (you get only the permissions you need)&lt;/li&gt;
&lt;li&gt;Audited (every login is logged)&lt;/li&gt;
&lt;li&gt;Temporary (credentials expire automatically)&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;There are two kinds of policies to keep in mind: AWS managed policies and customer managed policies. Customer managed policies are custom policies, and they need to be replicated in each AWS account.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Setting It Up
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F50001870-1cf8-4f8f-81f0-fbfdeb57366b" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F50001870-1cf8-4f8f-81f0-fbfdeb57366b" alt="AWS Organizations Set up - Enable" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to IAM Identity Center in your Organization's management account&lt;/li&gt;
&lt;li&gt;Click "Enable"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This step can only be done manually; unfortunately, AWS doesn't support enabling IAM Identity Center programmatically. Once that's done, you can continue with the Terraform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Best Practices I've Learned
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Always Enable MFA&lt;/strong&gt;: Make it mandatory for all users. The best part? It's enabled by default 🤩&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Groups&lt;/strong&gt;: Assign permissions to groups, not individual users&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regular Access Reviews&lt;/strong&gt;: Review permissions!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It's free&lt;/strong&gt;: AWS SSO is free under the AWS Free Tier&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Good security should make the right way the easy way. AWS SSO does exactly that! &lt;/p&gt;

&lt;p&gt;Anyway, if you have just opened your AWS account, I highly suggest creating an "Admin" group and putting your user in it! This way you can ditch the root user you're probably using right now.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Enable Multi-Factor Authentication (MFA)
&lt;/h2&gt;

&lt;p&gt;If you're not using MFA yet, this is probably the most important section of this guide.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is MFA and Why It's Non-Negotiable
&lt;/h3&gt;

&lt;p&gt;MFA adds an extra verification step when someone tries to access your AWS accounts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Something you know (password)&lt;/li&gt;
&lt;li&gt;Something you have (MFA device/app)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without MFA, a compromised password means game over. With MFA, attackers need both pieces of the puzzle! &lt;/p&gt;

&lt;h3&gt;
  
  
  Essential MFA Setup Checklist
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Root Account MFA&lt;/strong&gt; (Absolutely Critical! 🚨)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   AWS Console → Root user → Security credentials → Assign MFA device → Choose Virtual MFA device
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;IAM Users MFA&lt;/strong&gt; (For any remaining IAM users)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   IAM → Users → User name → Security credentials → Assign MFA device
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;SSO Users MFA&lt;/strong&gt; (Remember our previous section?)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   IAM Identity Center → Settings → Multi-factor authentication → Enable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Protecting Sensitive Operations
&lt;/h3&gt;

&lt;p&gt;Here's a cool trick: you can require MFA for specific operations. For example, protecting CloudTrail logs from deletion:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireMFAForDelete",
            "Effect": "Deny",
            "Action": "s3:DeleteObject",
            "Resource": "arn:aws:s3:::my-cloudtrail-bucket/*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}}
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Best Practices I Always Follow
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Backup MFA&lt;/strong&gt;: Store MFA recovery codes in a secure password manager&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test Recovery&lt;/strong&gt;: Practice account recovery procedures.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Quick Setup Guide
&lt;/h3&gt;

&lt;p&gt;Setting up Virtual MFA (most common option):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download an authenticator app (like Google Authenticator)&lt;/li&gt;
&lt;li&gt;Scan the QR code from AWS&lt;/li&gt;
&lt;li&gt;Enter two consecutive codes to verify&lt;/li&gt;
&lt;li&gt;Store recovery codes safely&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Common Questions
&lt;/h3&gt;

&lt;p&gt;These are some questions I get asked a lot:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: "What if I lose my MFA device?"&lt;/strong&gt;&lt;br&gt;
A: That's why we keep recovery codes. If you've lost those too, no worries: AWS Support can help, though you'll need to verify your identity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: "Isn't this overkill for a small account?"&lt;/strong&gt;&lt;br&gt;
A: Not at all. This is the number one best practice to follow. Some other guardrails may be overkill outside of organizations, but this one is a lifesaver!&lt;/p&gt;


&lt;h2&gt;
  
  
  4. Configure Service Control Policies (SCPs)
&lt;/h2&gt;

&lt;p&gt;Time to talk about Service Control Policies (SCPs): your organization's security guardrails! These are like organization-wide rules that no one (not even account administrators) can override.&lt;/p&gt;
&lt;h3&gt;
  
  
  What Are SCPs, Really?
&lt;/h3&gt;

&lt;p&gt;Think of SCPs as a firewall for AWS actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They define maximum permissions (what CAN'T be done)&lt;/li&gt;
&lt;li&gt;Apply to all users and roles (even root users!)&lt;/li&gt;
&lt;li&gt;Work across your entire AWS Organization
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User Permissions = IAM Policies ∩ SCPs
(In other words: you only get permissions allowed by BOTH)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Essential SCPs for Every AWS Organization
&lt;/h3&gt;

&lt;p&gt;Here are my top "must-have" SCPs that I implement in every environment:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Prevent Root User Actions&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Require MFA for Sensitive Actions&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Restrict Regions&lt;/strong&gt; (reduce attack surface)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Deny disabling CloudTrail&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Deny leaving the AWS Organizations&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
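
&lt;p&gt;As an example, here's roughly what the region-restriction SCP (number 3 above) could look like; the region list is a placeholder, and in real setups global services like IAM need exemptions along these lines:&lt;/p&gt;

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideAllowedRegions",
            "Effect": "Deny",
            "NotAction": [
                "iam:*",
                "organizations:*",
                "sts:*",
                "support:*"
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["eu-west-1", "us-east-1"]
                }
            }
        }
    ]
}
```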

&lt;p&gt;There are many more examples online if you want to add a few more SCPs to restrict your AWS environment even further.&lt;/p&gt;
&lt;h3&gt;
  
  
  Common Gotchas to Avoid
&lt;/h3&gt;

&lt;p&gt;⚠️ Watch out for these common mistakes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SCPs don't affect service-linked roles&lt;/li&gt;
&lt;li&gt;SCPs don't grant permissions (they only restrict)&lt;/li&gt;
&lt;li&gt;There's a maximum of 5 SCPs per account/OU, and a maximum policy size of 5,120 bytes (choose conditions wisely!)&lt;/li&gt;
&lt;li&gt;As of now, there is no built-in way to debug or test them before applying&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  5. Set up AWS Budgets Alarms
&lt;/h2&gt;

&lt;p&gt;Let's talk money! 💰 AWS Budgets Alarms aren't just about controlling costs - they're also an excellent security tool. Unexpected spikes in AWS bills often indicate security issues (like crypto mining on compromised resources).&lt;/p&gt;
&lt;h3&gt;
  
  
  Why Budget Alarms are a Security Feature
&lt;/h3&gt;

&lt;p&gt;Here's a secret many don't know: Budget alarms can be your first warning of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Compromised access keys mining crypto&lt;/li&gt;
&lt;li&gt;DDoS attacks consuming resources&lt;/li&gt;
&lt;li&gt;Misconfigured autoscaling groups&lt;/li&gt;
&lt;li&gt;Forgotten running resources&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Essential Budget Alerts to Set Up
&lt;/h3&gt;

&lt;p&gt;I recommend starting with these three basic budgets:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Overall Monthly Spend&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Free Tier Usage&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Anomaly Detection&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Best Practices for Budget Alarms
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Set Multiple Thresholds&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;80% warning&lt;/li&gt;
&lt;li&gt;90% warning&lt;/li&gt;
&lt;li&gt;100% alert&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use Multiple Contact Methods&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Email notifications&lt;/li&gt;
&lt;li&gt;SNS topics for automation&lt;/li&gt;
&lt;li&gt;Consider integration with Slack/Teams&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
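
&lt;p&gt;With Terraform, an overall monthly spend budget with the thresholds above could be sketched like this (the 50 USD limit and the email address are placeholders):&lt;/p&gt;

```hcl
resource "aws_budgets_budget" "monthly_cost" {
  name         = "overall-monthly-spend"
  budget_type  = "COST"
  limit_amount = "50"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  # One notification block per threshold: 80%, 90%, 100%.
  dynamic "notification" {
    for_each = [80, 90, 100]
    content {
      comparison_operator        = "GREATER_THAN"
      threshold                  = notification.value
      threshold_type             = "PERCENTAGE"
      notification_type          = "ACTUAL"
      subscriber_email_addresses = ["you@example.com"]
    }
  }
}
```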
&lt;h3&gt;
  
  
  Free Tier Considerations
&lt;/h3&gt;

&lt;p&gt;Good news! AWS Budgets is partially free:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First two budgets are free&lt;/li&gt;
&lt;li&gt;Additional budgets cost $0.10/day &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Anyway, you can do just fine with 2 budgets within the AWS Free Tier ✌️&lt;/p&gt;

&lt;p&gt;Cost monitoring isn't just about saving money, it's an essential part of your security monitoring strategy!&lt;/p&gt;


&lt;h2&gt;
  
  
  6. Use IAM Roles with Least Privilege
&lt;/h2&gt;

&lt;p&gt;One of the most fundamental AWS security principles is the concept of least privilege: giving users and services only the permissions they absolutely need to perform their tasks. IAM Roles are the preferred way to implement this principle in AWS, as they provide temporary credentials and eliminate the need for storing long-term access keys.&lt;/p&gt;

&lt;p&gt;If you're wondering why this matters so much, think of it this way: you wouldn't give your house keys to everyone who needs to drop off a package, right? The same goes for AWS permissions!&lt;/p&gt;
&lt;h3&gt;
  
  
  Understanding IAM Roles
&lt;/h3&gt;

&lt;p&gt;IAM Roles are like virtual users that can be assumed by AWS services, applications, or human users. Unlike traditional IAM users with permanent credentials, roles provide temporary security credentials that automatically rotate. The best part? No more storing long-term access keys (which, trust me, can be a security nightmare).&lt;/p&gt;
&lt;h3&gt;
  
  
  Implementing Least Privilege
&lt;/h3&gt;

&lt;p&gt;Here's how I approach implementing least privilege with IAM Roles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start with Zero Trust&lt;/strong&gt;: I know it's tempting to just click "full access," but trust me, start with zero permissions and add only what's necessary&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use AWS-Managed Policies Carefully&lt;/strong&gt;: While convenient, these often provide more permissions than needed, like using a sledgehammer when you need a scalpel&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Custom Policies&lt;/strong&gt;: Write specific policies that match your exact requirements&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Policy Conditions&lt;/strong&gt;: Add restrictions based on IP ranges, time of day, or required MFA&lt;/li&gt;
&lt;/ol&gt;
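
&lt;p&gt;Putting points 3 and 4 together, a custom policy might scope read access to a single resource and require MFA. The table name, region, and account ID below are placeholders:&lt;/p&gt;

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyOrdersTableWithMFA",
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:Query"
            ],
            "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/orders",
            "Condition": {
                "Bool": {"aws:MultiFactorAuthPresent": "true"}
            }
        }
    ]
}
```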
&lt;h3&gt;
  
  
  Practical Tips for Implementation
&lt;/h3&gt;

&lt;p&gt;Look, I get it, implementing least privilege can be a pain. Here's what works for me in the real world:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use AWS IAM Access Analyzer&lt;/strong&gt; to identify unused permissions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable CloudTrail&lt;/strong&gt; to monitor actual usage (more on this later 👇)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implement session tags&lt;/strong&gt; to add additional context to role assumptions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regular review cycles&lt;/strong&gt; to remove unnecessary permissions&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  When Things Get Tricky
&lt;/h3&gt;

&lt;p&gt;Let's be honest, sometimes least privilege is harder than it sounds. The elephant in the room is: &lt;strong&gt;Development Speed vs. Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This topic is tricky, teams often need quick access during development and can't wait 2 weeks for new permissions. Here are some solutions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create separate roles for development and production environments&lt;/li&gt;
&lt;li&gt;Give access to a sandbox account; this account is usually detached from your main network 😉&lt;/li&gt;
&lt;li&gt;Create practical documentation on how to test locally, if possible&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Practical Role Management
&lt;/h3&gt;

&lt;p&gt;Here's what works for me:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Name roles clearly&lt;/strong&gt;: &lt;code&gt;prod-api-read-only&lt;/code&gt; is better than &lt;code&gt;role1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document everything&lt;/strong&gt;: Future you will appreciate knowing why each permission exists&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use tags wisely&lt;/strong&gt;: They're great for keeping things organized&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit regularly&lt;/strong&gt;: Set a calendar reminder for quarterly reviews&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  When Perfect Isn't Possible
&lt;/h3&gt;

&lt;p&gt;Sometimes you can't get least privilege perfect right away. Here's what to do instead:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Lock down your network with VPC endpoints&lt;/li&gt;
&lt;li&gt;Use time-based restrictions when possible&lt;/li&gt;
&lt;li&gt;Implement resource tagging, and limit access with ABAC&lt;/li&gt;
&lt;li&gt;Add IP-based conditions&lt;/li&gt;
&lt;/ol&gt;
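
&lt;p&gt;For point 3, ABAC can be sketched with a condition that matches a resource tag against the caller's principal tag (the &lt;code&gt;Project&lt;/code&gt; tag key and the action are illustrative):&lt;/p&gt;

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOnlyMatchingProject",
            "Effect": "Allow",
            "Action": "ec2:StartInstances",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/Project": "${aws:PrincipalTag/Project}"
                }
            }
        }
    ]
}
```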

&lt;p&gt;Those are some best practices I've learned along the way. If you're just starting with AWS and don't want to go down this complex path right now, that's okay! Just make sure to use Administrator access only for MFA-backed users! It's not perfect, but it's way better than having non-MFA admin accounts running around. You can always come back to implement more granular permissions later.&lt;/p&gt;


&lt;h2&gt;
  
  
  7. Rotate Access Keys and Passwords Regularly
&lt;/h2&gt;

&lt;p&gt;Let's talk about one of those security practices that everyone knows they should do but often forgets: rotating access keys and passwords.&lt;/p&gt;
&lt;h3&gt;
  
  
  Why Rotate Access Keys?
&lt;/h3&gt;

&lt;p&gt;You might be wondering why we need to rotate keys if they haven't been compromised. Here's the thing: you might not always know if they've been compromised! Regular rotation helps limit the potential damage from undetected breaches and ensures that any leaked credentials have a limited lifetime.&lt;/p&gt;
&lt;h3&gt;
  
  
  Key Rotation
&lt;/h3&gt;

&lt;p&gt;AWS makes this pretty straightforward. Here's how to get started:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;For IAM Users&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use AWS IAM to create a second active access key&lt;/li&gt;
&lt;li&gt;Replace the old one with the new one&lt;/li&gt;
&lt;li&gt;Test everything&lt;/li&gt;
&lt;li&gt;Deactivate the old key&lt;/li&gt;
&lt;li&gt;Test everything again&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;For Applications&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use AWS Secrets Manager (my preferred choice) or AWS Systems Manager Parameter Store&lt;/li&gt;
&lt;li&gt;Set up automatic rotation schedules&lt;/li&gt;
&lt;li&gt;In your application, retrieve the secret with an API call instead of hardcoding it&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;For CI/CD&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Some CI/CD providers don't even need an IAM user (they can assume a role directly via OIDC federation)&lt;/li&gt;
&lt;li&gt;For others, I suggest giving the IAM user only the permission to assume a separate role used for deployments&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Best Practices I've Learned
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Rotation Schedule&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;90 days for regular access keys&lt;/li&gt;
&lt;li&gt;30 days for high-privilege accounts&lt;/li&gt;
&lt;li&gt;Immediate rotation if you suspect compromise&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Documentation&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep a rotation schedule&lt;/li&gt;
&lt;li&gt;Document emergency procedures&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
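
&lt;p&gt;The schedule above is easy to encode. Here's a small Python sketch of that logic (the function and constant names are mine, and the day counts mirror the list):&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

# Maximum key age before rotation is required (per the schedule above).
REGULAR_MAX_AGE = timedelta(days=90)
HIGH_PRIVILEGE_MAX_AGE = timedelta(days=30)

def rotation_due(created_at, high_privilege=False, now=None):
    """Return True if an access key created at `created_at` is due for rotation."""
    now = now or datetime.now(timezone.utc)
    max_age = HIGH_PRIVILEGE_MAX_AGE if high_privilege else REGULAR_MAX_AGE
    return now - created_at > max_age
```

&lt;p&gt;Wire something like this to a scheduled Lambda that lists key ages via the IAM API, and you'll never forget a rotation again.&lt;/p&gt;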


&lt;h2&gt;
  
  
  8. Enable CloudTrail
&lt;/h2&gt;

&lt;p&gt;If you're not using it yet, you're flying blind, and trust me, that's not where you want to be! 🛫&lt;/p&gt;
&lt;h3&gt;
  
  
  Why CloudTrail Is a Must-Have
&lt;/h3&gt;

&lt;p&gt;Think of CloudTrail as your AWS security camera system. It records:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Who's doing what in your account&lt;/li&gt;
&lt;li&gt;When they did it&lt;/li&gt;
&lt;li&gt;Where they did it from&lt;/li&gt;
&lt;li&gt;Whether they succeeded or failed&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Setting Up CloudTrail (The Right Way)
&lt;/h3&gt;

&lt;p&gt;Here's my recommended setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Enable multi-region logging&lt;/li&gt;
&lt;li&gt;✅ Enable log file validation&lt;/li&gt;
&lt;li&gt;✅ Enable CloudWatch Logs integration&lt;/li&gt;
&lt;li&gt;✅ Turn on S3 Object Lock for compliance&lt;/li&gt;
&lt;li&gt;✅ Enable MFA Delete on S3&lt;/li&gt;
&lt;/ul&gt;
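
&lt;p&gt;The first two checkboxes can be sketched in Terraform like this (the trail and bucket names are placeholders, and the bucket also needs a policy that lets CloudTrail write to it):&lt;/p&gt;

```hcl
resource "aws_cloudtrail" "main" {
  name           = "org-trail"
  s3_bucket_name = "my-cloudtrail-bucket"

  # Record activity in every region, not just the one you work in.
  is_multi_region_trail = true

  # Detect tampering: AWS signs digest files you can verify later.
  enable_log_file_validation = true
}
```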

&lt;p&gt;Anyway, if you are using the Terraform template, it comes out of the box 😉&lt;/p&gt;


&lt;h2&gt;
  
  
  9. Properly Configure S3 Bucket Policies
&lt;/h2&gt;

&lt;p&gt;I've seen too many news headlines about exposed S3 buckets, so let's make sure yours isn't next! 🛡️&lt;/p&gt;
&lt;h3&gt;
  
  
  Block Public Access (No Exceptions!)
&lt;/h3&gt;

&lt;p&gt;No need to do anything, it's enabled by default. Just don't disable it 🙏 &lt;/p&gt;
&lt;h3&gt;
  
  
  Secure Bucket Policy Template
&lt;/h3&gt;

&lt;p&gt;Here's a solid starting point for your bucket policies, we are going to allow only HTTPS traffic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyHTTP",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::your-bucket",
                "arn:aws:s3:::your-bucket/*"
            ],
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Serving Content Securely
&lt;/h3&gt;

&lt;p&gt;Need to serve public content? Here's the right way: use &lt;strong&gt;CloudFront&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a CloudFront distribution&lt;/li&gt;
&lt;li&gt;Use Origin Access Control (OAC)&lt;/li&gt;
&lt;li&gt;Enable HTTPS only&lt;/li&gt;
&lt;li&gt;Set up custom domain if needed&lt;/li&gt;
&lt;li&gt;Update S3 Bucket Policy&lt;/li&gt;
&lt;/ul&gt;
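
&lt;p&gt;For the last step, the bucket policy only needs to trust the CloudFront service principal, scoped to your distribution (the bucket name, account ID, and distribution ID below are placeholders):&lt;/p&gt;

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAC",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your-bucket/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/EXAMPLE123"
                }
            }
        }
    ]
}
```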

&lt;h3&gt;
  
  
  Best Practices I Always Follow
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Use unique, non-guessable names&lt;/li&gt;
&lt;li&gt;Consider using prefixes for organization&lt;/li&gt;
&lt;li&gt;Use folders for access control&lt;/li&gt;
&lt;li&gt;Implement lifecycle policies&lt;/li&gt;
&lt;li&gt;Use presigned URLs for temporary access&lt;/li&gt;
&lt;li&gt;Implement VPC endpoints for internal access&lt;/li&gt;
&lt;li&gt;Use bucket policies for cross-account access (not roles)&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  10. Use VPCs and Network Segmentation
&lt;/h2&gt;

&lt;p&gt;Please, please, please don't leave your security groups wide open to the internet! 🏰 At least use your own IP address!&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Default VPC Is Not Your Friend
&lt;/h3&gt;

&lt;p&gt;The default VPC is like leaving your front door unlocked. Sure, it's convenient, but:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It's too permissive by default&lt;/li&gt;
&lt;li&gt;Everyone knows its structure&lt;/li&gt;
&lt;li&gt;It lacks proper segmentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That being said, you need to create your own VPC. AWS has a neat interface for doing it, but I highly suggest writing one with Terraform (or another IaC tool); it's really simple, and you can even &lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/latest" rel="noopener noreferrer"&gt;use a module from the community&lt;/a&gt;. Anyway, time to design our VPC:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Public Subnets&lt;/strong&gt;: Only for load balancers and bastion hosts, this subnet has an Internet Gateway&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private Subnets&lt;/strong&gt;: For your applications; if you need internet access, use a NAT Gateway or a NAT instance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database Subnets&lt;/strong&gt;: Isolated from everything else. Use managed database services so you don't need to download updates from the internet&lt;/li&gt;
&lt;/ol&gt;
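
&lt;p&gt;Using the community module linked above, the three-tier design can be sketched like this (the CIDRs and availability zones are example values):&lt;/p&gt;

```hcl
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "main"
  cidr = "10.0.0.0/16"

  azs              = ["eu-west-1a", "eu-west-1b"]
  public_subnets   = ["10.0.0.0/24", "10.0.1.0/24"]   # load balancers only
  private_subnets  = ["10.0.10.0/24", "10.0.11.0/24"] # applications
  database_subnets = ["10.0.20.0/24", "10.0.21.0/24"] # isolated databases

  # Outbound internet for private subnets without exposing them.
  enable_nat_gateway = true
  single_nat_gateway = true
}
```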

&lt;h3&gt;
  
  
  🚨 Security Groups - The Golden Rules 🚨
&lt;/h3&gt;

&lt;p&gt;This is crucial:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;❌ NEVER allow 0.0.0.0/0 to your security groups except for load balancers&lt;/li&gt;
&lt;li&gt;❌ NEVER open all ports (like port range 0-65535)&lt;/li&gt;
&lt;li&gt;❌ NEVER use the default security group&lt;/li&gt;
&lt;/ul&gt;
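
&lt;p&gt;To make the first rule concrete, here's a Terraform sketch of a security group locked to a single IP (the VPC ID and the address are placeholders; swap in your own /32):&lt;/p&gt;

```hcl
resource "aws_security_group" "ssh_from_me" {
  name        = "ssh-from-my-ip"
  description = "SSH allowed only from my own IP"
  vpc_id      = "vpc-0123456789abcdef0" # placeholder

  ingress {
    description = "SSH from a single /32, never 0.0.0.0/0"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.10/32"] # your IP here
  }
}
```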

&lt;p&gt;This alone will prevent most common attacks!&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And there you have it, folks. That was a lot to take in, wasn't it? But you'll thank me in the future, or maybe not, because with these best practices in place it's unlikely you'll get hacked!&lt;/p&gt;

&lt;p&gt;Anyway, if you're feeling overwhelmed, here's your bare minimum security checklist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Turn on CloudTrail&lt;/li&gt;
&lt;li&gt;✅ Block public access to S3 buckets&lt;/li&gt;
&lt;li&gt;✅ Never allow 0.0.0.0/0 in security groups (except for load balancers)&lt;/li&gt;
&lt;li&gt;✅ Use AWS Organizations for account management&lt;/li&gt;
&lt;li&gt;✅ Enable SSO with MFA&lt;/li&gt;
&lt;li&gt;✅ Enable MFA on root and IAM users&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Security isn't a one-time setup, it's a journey. Don't know where to start? Here is a complete baseline to get started 👇&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Here's the link to my GitHub repository: &lt;a href="https://github.com/Depaa/terraform-aws-security-baseline" rel="noopener noreferrer"&gt;https://github.com/Depaa/terraform-aws-security-baseline&lt;/a&gt; 😉&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;p.s. there's another great tool that can help you with the setup, it's called &lt;a href="https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html" rel="noopener noreferrer"&gt;AWS Control Tower&lt;/a&gt;. I didn't want to include too much information as we've covered a lot here already. If that's something that interests you, let me know.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you enjoyed this article, please let me know in the comment section or send me a DM. I'm always happy to chat! ✌️&lt;/p&gt;

&lt;p&gt;Thank you so much for reading! 🙏 Keep an eye out for more AWS-related posts, and feel free to connect with me on LinkedIn 👉 &lt;a href="https://www.linkedin.com/in/matteo-depascale/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/matteo-depascale/&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/WhatIsCloudTrail-Security.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/awscloudtrail/latest/userguide/WhatIsCloudTrail-Security.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-best-practices.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-best-practices.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Disclaimer: opinions expressed are solely my own and do not express the views or opinions of my employer.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>aws</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>Large Language Model Operations (LLMops) on AWS</title>
      <dc:creator>Matteo Depascale</dc:creator>
      <pubDate>Fri, 03 May 2024 12:02:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/large-language-model-operations-llmops-on-aws-4b86</link>
      <guid>https://dev.to/aws-builders/large-language-model-operations-llmops-on-aws-4b86</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;⚠️ Hey! This blog post was initially published on my own blog. Check it out from the source: &lt;a href="https://cloudnature.net/blog/large-language-model-operations-llmops-on-aws?utm_medium=social&amp;amp;utm_source=dev.to"&gt;https://cloudnature.net/blog/large-language-model-operations-llmops-on-aws&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Large Language Models (LLMs) are becoming increasingly powerful, and their applications are widespread across many industries, from content creation to healthcare. Generative AI is becoming the de facto standard for multiple tasks due to its ability to generate human-like text, images, videos, and more.&lt;/p&gt;

&lt;p&gt;For LLMs, the evergreen MLops might not be sufficient anymore. We need practices and processes designed specifically for Large Language Models. These practices and processes are called Large Language Model Operations, or LLMops for short. In this article, we will discuss what these operations are and, more importantly, how we can leverage them with AWS.&lt;/p&gt;

&lt;p&gt;This article serves as both a high-level overview for everyone and a deeper technical dive for those who want more detail.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are LLMops?
&lt;/h2&gt;

&lt;p&gt;LLMops, short for Large Language Model Operations, refer to the practice of leveraging large language models (LLMs) like GPT-3, Anthropic Claude, Mistral AI, and others to automate various tasks and workflows. The core idea behind LLMops is to use the powerful generation capabilities of LLMs to create software applications, APIs, and tools that can understand and generate human-like text, images, video, and audio.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Wild! I know, right? LLMs can even generate music, complete with instrumentals and vocals. I wonder what they will be capable of in a few years.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The purpose of LLMops is to augment and automate a wide range of generation-related tasks that were previously labor-intensive or required significant domain expertise. So, LLMops encompass all operations needed to enable and enhance the usage of LLMs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Components of LLMops
&lt;/h2&gt;

&lt;p&gt;We've introduced the operations; now let's look at what they actually are.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️ This is a series! I'll talk in depth about each one of these practices in separate articles.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Data Preparation
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F1af4e246-9cc3-407f-9193-73e02cc561bd" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F1af4e246-9cc3-407f-9193-73e02cc561bd" alt="Data preparation with AWS"&gt;&lt;/a&gt;&lt;br&gt;
Are you familiar with the phrase: "💩 in -&amp;gt; 💩 out"? &lt;br&gt;
The scope of data preparation is indeed to change that phrase into: "💩 in -&amp;gt; ✨out".&lt;/p&gt;

&lt;p&gt;In order to improve the quality of the data, there are a bunch of operations we can do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Text cleaning and normalization&lt;/strong&gt;: You don't want to train your AI with your customers' credit card numbers, do you? That's essentially what you are doing here: cleaning the "dirt" out of your data;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data deduplication&lt;/strong&gt;: Removing duplicates is always a good thing, right? Your bill will thank you later on 😉;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data augmentation&lt;/strong&gt;: Sometimes your data is not enough. You may need to add text classification or image description. Other times, you may need to generate generic synthetic data to include with your original data;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data filtering&lt;/strong&gt;: When you don't need every piece of information in your raw data, you can filter out the rest. This keeps unwanted data and unnecessary noise away from the LLM.&lt;/li&gt;
&lt;/ul&gt;
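&lt;p&gt;To make the first two operations concrete, here's a minimal Python sketch that redacts credit-card-like numbers and then drops exact duplicates. The regex and sample records are purely illustrative; real PII detection needs much more than this:&lt;/p&gt;

```python
import re

# Very rough credit-card pattern: 13 to 16 digits, optionally separated.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def clean(records):
    """Redact card-like numbers, then drop exact duplicates (keeping order)."""
    seen, out = set(), []
    for text in records:
        redacted = CARD_RE.sub("[REDACTED]", text.strip())
        if redacted not in seen:
            seen.add(redacted)
            out.append(redacted)
    return out

raw = [
    "Order shipped, paid with 4111 1111 1111 1111",
    "Order shipped, paid with 4111 1111 1111 1111",
    "Thanks for your feedback!",
]
print(clean(raw))  # ['Order shipped, paid with [REDACTED]', 'Thanks for your feedback!']
```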

&lt;h3&gt;
  
  
  Data Ingestion
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F86b15af0-4fd7-4646-82b6-d6161d73872c" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F86b15af0-4fd7-4646-82b6-d6161d73872c" alt="Data ingestion on AWS"&gt;&lt;/a&gt;&lt;br&gt;
Your data is cleaned; now it's time to send it off to storage. Data ingestion usually involves writing data to some kind of store. Common ingestion targets include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Database&lt;/strong&gt;: When you don't need fancy words like semantic search, a classic SQL or NoSQL database is just fine, even for AI-related tasks;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector database&lt;/strong&gt;: When you actually do need fancy words like semantic search, a vector database is the way to go;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cold storage&lt;/strong&gt;: After the data is processed, you may want to keep your raw data in some kind of cold storage, so you pay less while still having a copy in case something goes wrong;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metadata database&lt;/strong&gt;: It's always a good thing to track metadata in a database, like file locations or simply tagging these files, for instance.&lt;/li&gt;
&lt;/ul&gt;
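&lt;p&gt;To give a feel for what a vector database does under the hood, here's a toy semantic search in plain Python. In practice you'd use a managed store (OpenSearch Serverless, pgvector, and so on) and real embeddings from a model; the three-dimensional vectors here are stand-ins:&lt;/p&gt;

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "vector database": each record stores an embedding plus metadata.
store = [
    {"id": "doc-1", "text": "refund policy",  "embedding": [0.9, 0.1, 0.0]},
    {"id": "doc-2", "text": "shipping times", "embedding": [0.1, 0.9, 0.2]},
]

def search(query_embedding, k=1):
    """Return the k records whose embeddings are closest to the query."""
    ranked = sorted(store,
                    key=lambda record: cosine(query_embedding, record["embedding"]),
                    reverse=True)
    return ranked[:k]

print(search([0.8, 0.2, 0.1])[0]["id"])  # doc-1
```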

&lt;h3&gt;
  
  
  Model Evaluation
&lt;/h3&gt;

&lt;p&gt;Whatever you are doing with LLMs, you must employ some kind of evaluation mechanism to at least get a sense of whether the LLM answers your prompts properly.&lt;/p&gt;

&lt;p&gt;There are soooo many ways to evaluate; the most common benchmarks are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GLUE (General Language Understanding Evaluation) Benchmark&lt;/li&gt;
&lt;li&gt;SuperGLUE Benchmark&lt;/li&gt;
&lt;li&gt;HellaSwag&lt;/li&gt;
&lt;li&gt;TruthfulQA&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not only that, of course: there are also services, like Amazon Bedrock, that provide evaluation against your own prompts.&lt;/p&gt;
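&lt;p&gt;As a sketch of the idea (not any particular benchmark, nor the Bedrock evaluation feature), here's a tiny harness that scores a model against your own prompts. The toy model and dataset are made up:&lt;/p&gt;

```python
def evaluate(model, dataset):
    """Score a model: fraction of answers containing the expected keyword."""
    hits = sum(1 for question, expected in dataset
               if expected.lower() in model(question).lower())
    return hits / len(dataset)

# Stand-in "model" so the harness runs without a real endpoint.
def toy_model(question):
    return "Paris is the capital." if "France" in question else "I don't know."

dataset = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Italy?", "Rome"),
]
print(evaluate(toy_model, dataset))  # 0.5
```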

&lt;h3&gt;
  
  
  Model Deployment and Inference
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fbaeb0447-8c1f-47ea-bd17-b77ef31c66cc" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fbaeb0447-8c1f-47ea-bd17-b77ef31c66cc" alt="Model Deployment and Inference on AWS"&gt;&lt;/a&gt;&lt;br&gt;
When your model is ready, it's time to deploy and serve it. And that's really all there is to this step, nothing too fancy, I'm sorry 😉.&lt;br&gt;
Once served, you (or your user base) can start using your LLM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Model Fine-tuning
&lt;/h3&gt;

&lt;p&gt;Using a foundation model (FM) as-is may not be enough for you; in that case, you may want to consider improving (fine-tuning) the FM with your own dataset. You then get the power of a fully working LLM enriched with your data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Retrieval-Augmented Generation
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F3ef11a39-9661-4928-9b72-4e89e2a2775d" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F3ef11a39-9661-4928-9b72-4e89e2a2775d" alt="Retrieval-Augmented Generation on AWS"&gt;&lt;/a&gt;&lt;br&gt;
You don't always need to train your LLM on your own data. Sometimes Retrieval-Augmented Generation (RAG) is all you need &lt;em&gt;(cit.)&lt;/em&gt; to add dynamic text into your prompt.&lt;/p&gt;

&lt;p&gt;And that's it: with RAG we can customize our prompt with our data. One of its downsides is the limit on the context window size, which means your prompt can't exceed a certain number of tokens.&lt;/p&gt;
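&lt;p&gt;That context-window limit is exactly what a RAG pipeline has to manage: keep adding retrieved chunks to the prompt until the token budget runs out. Here's a minimal sketch, using whitespace word counts as a crude stand-in for a real tokenizer:&lt;/p&gt;

```python
def build_prompt(question, chunks, max_tokens=40):
    """Pack retrieved chunks into the prompt until the token budget runs out."""
    def count(text):                      # crude stand-in for a real tokenizer
        return len(text.split())
    budget = max_tokens - count(question)
    kept = []
    for chunk in chunks:
        if count(chunk) > budget:
            break                         # context window full: stop adding chunks
        kept.append(chunk)
        budget -= count(chunk)
    return "Context:\n" + "\n".join(kept) + "\n\nQuestion: " + question

chunks = [
    "Refunds are processed within 5 business days.",
    "Shipping to Europe takes up to two weeks.",
    "word " * 100,                        # too large for the remaining budget
]
prompt = build_prompt("What is the refund policy?", chunks)
print("Refunds" in prompt, "word word" in prompt)  # True False
```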

&lt;h3&gt;
  
  
  Model Monitoring
&lt;/h3&gt;

&lt;p&gt;Monitoring is essential in every aspect of the IT sector, and in this case, it is paramount (can I say that word, or is only AI allowed to? 😜).&lt;/p&gt;

&lt;p&gt;But what exactly is monitoring for Generative AI? Let's look at this list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;System monitoring&lt;/strong&gt;: Like every other system, you need to ensure your LLMs are up and running. So, if you deploy your LLM in your EKS cluster, you need to make sure it scales properly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: If someone is trying to hack your LLM, don't you want to know when it happens?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metrics to watch for&lt;/strong&gt;: Choosing the right KPIs to monitor is essential. You may want to monitor your LLM's sentiment, its security, how many times it hallucinates, or even the quality of your LLM responses. There are so many frameworks and algorithms worth talking about when dealing with metrics to monitor, but you get the point of how this works.&lt;/li&gt;
&lt;/ul&gt;
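&lt;p&gt;A minimal sketch of the "metrics to watch for" idea (the event names are hypothetical, and in practice you'd push these counters to CloudWatch instead of keeping them in memory):&lt;/p&gt;

```python
from collections import Counter

metrics = Counter()

def record(event):
    """Increment the KPI counters we chose to watch."""
    metrics[event] += 1

# Hypothetical stream of monitoring events from our LLM application.
for event in ["response_ok", "response_ok", "hallucination", "blocked_prompt"]:
    record(event)

hallucination_rate = metrics["hallucination"] / sum(metrics.values())
print(hallucination_rate)  # 0.25
```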

&lt;h3&gt;
  
  
  Model Governance and Review
&lt;/h3&gt;

&lt;p&gt;By tracking, documenting, monitoring, versioning, and controlling access to LLMs, you gain control over model inputs and outputs and can understand what affects them. Major companies struggle to keep their LLMs unbiased, and by introducing governance and human review, you can reduce the risk of bias in your LLMs.&lt;/p&gt;

&lt;p&gt;Granted that some models need governance more than others, but you don't want to destroy your company's reputation because your large language model said something it shouldn't have, do you? 😅&lt;/p&gt;

&lt;p&gt;Anyway, most of the time it's not about reputation; it's about &lt;strong&gt;responsible AI&lt;/strong&gt; and following its principles in order to create the safest version of your LLM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost Optimization
&lt;/h3&gt;

&lt;p&gt;Generative AI is expensive 💸. Usually, enterprises lack clarity about where the money actually goes. Is it the input or output tokens? The hardware? Maybe the monthly commitment? Clearly, you need a written strategy so that when dealing with costs, everyone speaks the same language.&lt;/p&gt;

&lt;p&gt;Apart from that, there are many strategies to mitigate LLM costs; to mention a few:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt compression&lt;/li&gt;
&lt;li&gt;Caching&lt;/li&gt;
&lt;li&gt;Use RAG instead of fine-tuning&lt;/li&gt;
&lt;li&gt;Deploy your own model instead of using a third-party one&lt;/li&gt;
&lt;/ul&gt;
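&lt;p&gt;Caching is the easiest of these to sketch: if the exact prompt was answered before, don't pay for it twice. This in-memory version is illustrative only; a real setup would use a persistent store like DynamoDB or ElastiCache, and the model function here is a stand-in:&lt;/p&gt;

```python
import hashlib

cache = {}
calls = 0

def cached_invoke(prompt, invoke):
    """Return a cached completion when this exact prompt was seen before."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in cache:
        cache[key] = invoke(prompt)
    return cache[key]

def expensive_model(prompt):
    """Stand-in for a paid model call; counts how often it actually runs."""
    global calls
    calls += 1
    return f"answer to: {prompt}"

cached_invoke("What is serverless?", expensive_model)
cached_invoke("What is serverless?", expensive_model)  # served from the cache
print(calls)  # 1
```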

&lt;h3&gt;
  
  
  Prompt Engineering and Management
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fbaeb0447-8c1f-47ea-bd17-b77ef31c66cc" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fbaeb0447-8c1f-47ea-bd17-b77ef31c66cc" alt="Prompt management on AWS"&gt;&lt;/a&gt;&lt;br&gt;
Prompt engineering doesn't stop at that simple phrase you write to your LLM. It's a large and complex topic involving:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Different prompt techniques&lt;/li&gt;
&lt;li&gt;Prompt security defenses&lt;/li&gt;
&lt;li&gt;Prompt versioning and tracking&lt;/li&gt;
&lt;li&gt;Prompt optimization and tuning&lt;/li&gt;
&lt;/ul&gt;
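&lt;p&gt;Prompt versioning and tracking, for instance, can start as something as simple as a registry. This is an illustrative in-memory sketch, not a real prompt-management tool:&lt;/p&gt;

```python
class PromptRegistry:
    """Keep every version of every prompt, so changes are tracked and reversible."""

    def __init__(self):
        self._versions = {}

    def register(self, name, template):
        versions = self._versions.setdefault(name, [])
        versions.append(template)
        return len(versions)  # version numbers start at 1

    def get(self, name, version=None):
        versions = self._versions[name]
        return versions[-1] if version is None else versions[version - 1]

registry = PromptRegistry()
registry.register("summarize", "Summarize this text: {text}")
registry.register("summarize", "Summarize this text in 3 bullet points: {text}")
print(registry.get("summarize", version=1))  # the original template
```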

&lt;h3&gt;
  
  
  Security and Compliance
&lt;/h3&gt;

&lt;p&gt;I don't need to say that Generative AI security should be the backbone of your Generative AI application design, right? We should always design solutions with security in mind.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;... a few minutes after I spared you a long version of my pep talk 😜&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Securing your large language model means protecting it from prompt injection, prompt leaks, DDoS attacks on your infrastructure, or even restricting the types of content it should receive/answer to.&lt;/p&gt;

&lt;p&gt;There are many tools you can employ to get the job done. For instance, in your application, you can use &lt;strong&gt;guardrails&lt;/strong&gt; to restrict input and output so your LLM doesn't answer with biased or hallucinatory text.&lt;/p&gt;
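&lt;p&gt;A guardrail can start as a simple pre-flight check on the text going in and out. This sketch uses a hypothetical blocklist; a managed option like Guardrails for Amazon Bedrock is far more capable:&lt;/p&gt;

```python
BLOCKED_TOPICS = ("credit card", "password")  # hypothetical policy list

def guardrail(text):
    """Allow text only if it avoids every blocked topic."""
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

print(guardrail("Summarize this invoice"))  # True
print(guardrail("What is my password?"))    # False
```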

&lt;h3&gt;
  
  
  Model Lifecycle Management
&lt;/h3&gt;

&lt;p&gt;LLMs have lifecycle management too, from model versioning to rolling back updates, or even archiving and retiring old versions of your LLM. In short, lifecycle management is essential and it includes almost all of the previous points.&lt;/p&gt;

&lt;h2&gt;
  
  
  LLMops on AWS
&lt;/h2&gt;

&lt;p&gt;AWS is a leading public cloud provider and, as you can imagine, it offers all the tools you need to develop and run LLMops in the cloud. Throughout this article, you've probably noticed reference architectures showing how to build each solution using AWS services.&lt;/p&gt;

&lt;p&gt;At its core, Amazon Bedrock is a fully managed service that provides access to foundation models (FMs) created by Amazon and third-party model providers through an API. Additionally, we can fine-tune models, run AI agents, attach knowledge bases, and even add guardrails and review our own LLMs.&lt;/p&gt;

&lt;p&gt;Amazon Bedrock, along with a couple of other serverless services, can get us through each and every aspect of LLMops.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️ To know more about it, you can check out my blog post regarding Amazon Bedrock: 🔗 &lt;a href="https://cloudnature.net/blog/the-complete-guide-to-amazon-bedrock-for-generative-ai" rel="noopener noreferrer"&gt;The Complete Guide to Amazon Bedrock for Generative AI&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And "that" is it, folks. Hopefully, I was able to make a good recap of what LLMops are and how we can do it on AWS. You probably noticed I didn't go into details regarding infrastructure, procedures, etc., why is that? Because I wanted to write about them in detail in my upcoming articles 😉&lt;/p&gt;

&lt;p&gt;Future articles will be listed somewhere "here" and they will talk about each LLM operation in depth, with reference architecture and projects to follow along.&lt;/p&gt;

&lt;p&gt;If you enjoyed this article, please let me know in the comment section or send me a DM. I'm always happy to chat! ✌️&lt;/p&gt;

&lt;p&gt;Thank you so much for reading! 🙏 Keep an eye out for more AWS-related posts, and feel free to connect with me on LinkedIn 👉 &lt;a href="https://www.linkedin.com/in/matteo-depascale/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/matteo-depascale/&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Disclaimer: opinions expressed are solely my own and do not express the views or opinions of my employer.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>aws</category>
    </item>
    <item>
      <title>Building a Serverless Newsletter: Your Guide to AWS and Amazon SES</title>
      <dc:creator>Matteo Depascale</dc:creator>
      <pubDate>Thu, 28 Mar 2024 13:00:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-a-serverless-newsletter-your-guide-to-aws-and-amazon-ses-4emg</link>
      <guid>https://dev.to/aws-builders/building-a-serverless-newsletter-your-guide-to-aws-and-amazon-ses-4emg</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;⚠️ Hey! This blog post was initially published on my own blog. Check it out from the source: &lt;a href="https://cloudnature.net/blog/building-a-serverless-newsletter-your-guide-to-aws-and-amazon-ses" rel="noopener noreferrer"&gt;https://cloudnature.net/blog/building-a-serverless-newsletter-your-guide-to-aws-and-amazon-ses&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Discover the secrets of creating a serverless newsletter using AWS and Amazon SES. Dive into SST, Middy with Typescript, and master everything you need to know about SES for a seamless newsletter service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Have you ever wrestled with Amazon SES, firing off emails left and right, dealing with bounces, and handling complaints? I sure have. Then, a lightbulb moment hit 💡, I want to build my own newsletter infrastructure! Sure, there are quicker routes out there, but where's the fun in that?&lt;/p&gt;

&lt;p&gt;So, buckle up for this blog post. We're delving into serverless setups and everything there is to know about Amazon SES. Of course, we're making things work at scale!&lt;/p&gt;

&lt;p&gt;And hold tight, because one section is titled "Lambda? Nope, thank you!". Can serverless exist without Lambda functions? 🤔&lt;/p&gt;




&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fd96ebc56-110d-4c94-9a96-6278f74749f5" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fd96ebc56-110d-4c94-9a96-6278f74749f5" alt="Serverless Newsletter Infrastructure Overview"&gt;&lt;/a&gt;&lt;br&gt;
Creating a newsletter infrastructure is quite a puzzle. It needs to be quick, handle heavy traffic, and stay operational no matter what. So, why did I opt for a complete serverless approach? It's straightforward: serverless computing is the superhero of performance, resilience, and scalability 🦸. &lt;/p&gt;

&lt;p&gt;And hey, it's also budget-friendly. Newsletters don't need to be active 24/7; they wake up once a day, get the job done, and then rest until the next day. Let's break down the reasons behind choosing a serverless setup for this newsletter. But before we get there, let's tackle the elephant in the room by exploring three ways to create a newsletter.&lt;/p&gt;


&lt;h3&gt;
  
  
  The Fastest Way
&lt;/h3&gt;

&lt;p&gt;Do I really have to spell it out? Numerous newsletter providers exist, so many choices, really. 😐&lt;/p&gt;


&lt;h3&gt;
  
  
  The Simple Way
&lt;/h3&gt;

&lt;p&gt;Setting up a basic serverless newsletter infrastructure on AWS is a breeze, requiring just a few essential components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validate your sender identity in Amazon SES&lt;/li&gt;
&lt;li&gt;Implement a straightforward AWS Step Functions workflow for sending emails&lt;/li&gt;
&lt;li&gt;Set up a single web service to publish the newsletter&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With these in place, you can kick off your newsletter distribution. The beauty of this solution? In just 8-16 hours, you've built your custom newsletter infrastructure.&lt;/p&gt;

&lt;p&gt;But hold on, we're not aiming for a quick fix here, are we? We're on a mission to dedicate an entire weekend just to the HTML and CSS 🙄. Don't worry, I'm not trying to spook anyone. I believe in your champion-level skills, capable of centering a div better than I ever could.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Don't worry, I'll spare you the HTML and CSS details of the newsletter templates.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's shift our focus to the complete AWS infrastructure, which is the reason we're all gathered here 🚀.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Complete Way
&lt;/h2&gt;

&lt;p&gt;Building a serverless infrastructure doesn't have to be a complete mess. I'm not a fan of unnecessary complexity. Instead, let's keep it simple by assembling basic components, creating completeness, and minimizing confusion.&lt;/p&gt;

&lt;p&gt;So, what are these foundational bricks? Let's break them down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setting up Amazon SES&lt;/li&gt;
&lt;li&gt;Additional Amazon SES configurations for Bounces, Complaints, Clicks (don't worry, we'll dive into this shortly)&lt;/li&gt;
&lt;li&gt;API Web Services (AWS 😜) for seamless newsletter operations&lt;/li&gt;
&lt;li&gt;AWS Step function workflow for sending emails&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These bricks, when glued together, form the backbone of our newsletter.&lt;/p&gt;

&lt;p&gt;Now, let's delve into each element while keeping things clear and straightforward.&lt;/p&gt;


&lt;h2&gt;
  
  
  Amazon SES Setup
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F4186703b-1b8e-43e8-8e84-f64080ef70bf" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F4186703b-1b8e-43e8-8e84-f64080ef70bf" alt="Amazon SES Dashboard"&gt;&lt;/a&gt;&lt;br&gt;
Setting up SES (Simple Email Service) is the starting point for our newsletter infrastructure. It's the AWS service that powers our email delivery. Let's look at the infrastructure needed to send emails with AWS.&lt;/p&gt;
&lt;h3&gt;
  
  
  Domain Validation
&lt;/h3&gt;

&lt;p&gt;Before we jump into the technicalities, let's get our domain validated. Why? Email providers want to know who owns that email address. The drill? Domain validation through DNS (basically, adding a few CNAME records in your DNS provider). In my case, a quick validation of &lt;a href="https://cloudnature.net" rel="noopener noreferrer"&gt;https://cloudnature.net&lt;/a&gt; in Route53, and we're good to go.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Find the full identity validation process in the Infrastructure As Code section at the bottom of the GitHub repository.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But hold on, validation alone is not enough for email providers. To rank well with them, we need a bit more from Amazon SES:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DKIM (DomainKeys Identified Mail)&lt;/li&gt;
&lt;li&gt;Custom MAIL FROM (cue SPF authentication and DMARC compliance)&lt;/li&gt;
&lt;li&gt;BIMI compliance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, let's look into all of them. The names might be a mouthful, but they're harmless, I promise 😜&lt;/p&gt;
&lt;h4&gt;
  
  
  DomainKeys Identified Mail (DKIM)
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fd25b7597-d960-45d9-8b62-a747c7cb2a11" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fd25b7597-d960-45d9-8b62-a747c7cb2a11" alt="DomainKeys Identified Mail - DKIM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DKIM ensures email authenticity by adding a digital signature; it's your email's passport to a trustworthy inbox. DKIM is a security standard that ensures an email claiming to be from a specific domain was genuinely authorized by the domain owner.&lt;/p&gt;

&lt;p&gt;Opting for Easy DKIM means Amazon SES takes the lead. It generates a public-private key pair and includes a DKIM signature in every message sent from that identity. Your job? Well, nothing much, Amazon SES has got it covered!&lt;/p&gt;
&lt;h4&gt;
  
  
  Custom MAIL FROM
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F39b60146-d840-431a-bc84-eef4f1f57905" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F39b60146-d840-431a-bc84-eef4f1f57905" alt="Custom Email From result"&gt;&lt;/a&gt;&lt;br&gt;
Ever wondered about the addresses behind the emails you send? There's the From address, visible to the recipient, and then there's the MAIL FROM address, indicating the message's origin. By default, Amazon SES assigns a MAIL FROM domain for your outgoing messages, using a subdomain of &lt;em&gt;amazonses.com&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;But what if you want to add a personal touch to your MAIL FROM address? As seen in the picture above, adding your own MAIL FROM means you get rid of that ugly &lt;em&gt;"via amazonses.com"&lt;/em&gt;  tag near your name!&lt;/p&gt;

&lt;p&gt;For this, you simply need to comply with the Sender Policy Framework (&lt;strong&gt;SPF&lt;/strong&gt;), an email validation standard designed to prevent email spoofing. While at it, let's add another protocol: Domain-based Message Authentication, Reporting, and Conformance (&lt;strong&gt;DMARC&lt;/strong&gt;). DMARC, fancy name, simple job: it helps detect email spoofing, allowing you to guide the email provider on how to act.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Feedc88e2-4a86-431d-ab42-a5d2f8aaf6c9" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Feedc88e2-4a86-431d-ab42-a5d2f8aaf6c9" alt="DMARC &amp;amp; SPF - DNS Record"&gt;&lt;/a&gt;&lt;br&gt;
To check your DMARC compliance, visit &lt;a href="https://dmarcian.com/dmarc-inspector/" rel="noopener noreferrer"&gt;🔗 DMARC Inspector&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  Brand Indicators for Message Identification (BIMI)
&lt;/h4&gt;

&lt;p&gt;Give your emails a visual identity. Let's set up BIMI and become a recognized face in your recipient's inbox.&lt;/p&gt;

&lt;p&gt;At this point, we're almost there. But wouldn't it be nice to have a logo as your email profile image? That's where BIMI comes in. If you've followed the previous steps, your Amazon SES configuration is nearly ready.&lt;/p&gt;

&lt;p&gt;All we need are two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an SVG for our profile image.&lt;/li&gt;
&lt;li&gt;Add the BIMI record to our DNS registrar.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To convert your SVG logo (standard SVGs are not sufficient for BIMI, which requires the SVG Tiny Portable/Secure profile), download a simple yet efficient converter from the GitHub repository &lt;a href="https://github.com/authindicators/svg-ps-converters/tree/master" rel="noopener noreferrer"&gt;https://github.com/authindicators/svg-ps-converters/tree/master&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Afterward, upload your logo and add a DNS record like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;default._bimi.dev.cloudnature.net TXT v=BIMI1;l=https://cloudnature.net/logos/logo.svg;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And there you have it! Check your BIMI compliance at &lt;a href="https://bimigroup.org/bimi-generator/" rel="noopener noreferrer"&gt;🔗 Bimi Group&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Email Sending Challenges
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fe016c3dd-20f2-490b-8877-9d4e06c8571d" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fe016c3dd-20f2-490b-8877-9d4e06c8571d" alt="SES Topics for handling bounces, complaints, errors and clicks"&gt;&lt;/a&gt;&lt;br&gt;
Sending emails may seem like a straightforward task, but the journey from your server to your recipient's inbox can be a bumpy ride. In this section, we'll navigate through common challenges encountered while sending emails.&lt;/p&gt;

&lt;p&gt;We need to be careful because AWS policies are strict, so manual checking may not be enough. Here's how our automation flows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Some triggering event arrives&lt;/li&gt;
&lt;li&gt;Amazon SES triggers Amazon SNS&lt;/li&gt;
&lt;li&gt;Amazon SNS triggers AWS Lambda&lt;/li&gt;
&lt;li&gt;AWS Lambda runs custom logic to decide whether the email address should be removed from the newsletter list (for example, if the address doesn't exist); if so, it removes the record from the DynamoDB table&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's briefly go through each case 👇.&lt;/p&gt;
&lt;h3&gt;
  
  
  Handling Bounces
&lt;/h3&gt;

&lt;p&gt;Bounces in the email world are annoying but more common than you might think. They happen when an email cannot be delivered to the recipient, either due to an &lt;strong&gt;invalid email address&lt;/strong&gt; or a temporary issue. In our serverless setup, dealing with bounces is crucial for maintaining a clean and effective newsletter infrastructure.&lt;/p&gt;

&lt;p&gt;So, what's the drill when a bounce event pops up? There are different types of bounces, but for the hard ones, we remove that email address from our newsletter list.&lt;/p&gt;
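&lt;p&gt;The core of that bounce-handling Lambda is a small decision step. Here's a sketch based on the bounce notification shape Amazon SES publishes to SNS; the sample address is made up:&lt;/p&gt;

```python
import json

def recipients_to_remove(sns_message):
    """Return addresses to drop: only hard (Permanent) bounces qualify."""
    notification = json.loads(sns_message)
    if notification.get("notificationType") != "Bounce":
        return []
    bounce = notification["bounce"]
    if bounce["bounceType"] != "Permanent":
        return []
    return [r["emailAddress"] for r in bounce["bouncedRecipients"]]

# Sample payload in the shape SES publishes to the SNS topic.
message = json.dumps({
    "notificationType": "Bounce",
    "bounce": {
        "bounceType": "Permanent",
        "bouncedRecipients": [{"emailAddress": "nobody@example.com"}],
    },
})
print(recipients_to_remove(message))  # ['nobody@example.com']
```

&lt;p&gt;Soft (Transient) bounces are left alone; only hard bounces get the address deleted from the DynamoDB table.&lt;/p&gt;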
&lt;h3&gt;
  
  
  Addressing Complaints
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F2e1f4a07-c56c-4e95-af00-dd1f30e4cb56" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F2e1f4a07-c56c-4e95-af00-dd1f30e4cb56" alt="Spam Meter Score 10/10"&gt;&lt;/a&gt;&lt;br&gt;
Not every email recipient is thrilled to receive newsletters, and complaints may arise. Complaints occur when a recipient marks your email as &lt;strong&gt;spam&lt;/strong&gt; or unwanted. Managing complaints is key to maintaining a positive sender reputation and ensuring your emails land in the inbox.&lt;/p&gt;

&lt;p&gt;Now, what happens when a complaint event lands in our logs? We developers tend to ignore warnings, but treat this one like a yellow card: investigate why the email got the spam label. If it becomes a recurring event, consider removing that email address from your subscriber list.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;To check whether you are following every best practice, use this website: &lt;a href="https://www.mail-tester.com/" rel="noopener noreferrer"&gt;Mail Tester&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Troubleshooting Errors
&lt;/h3&gt;

&lt;p&gt;In the intricate world of email sending, errors can pop up unexpectedly. From server glitches to misconfigurations, troubleshooting errors is an essential part of maintaining a reliable newsletter infrastructure.&lt;/p&gt;

&lt;p&gt;Always keep a watchful eye on error logging. It's your troubleshooting toolkit: it helps you identify issues worth fixing.&lt;/p&gt;
&lt;h3&gt;
  
  
  Tracking Clicks Per Link
&lt;/h3&gt;

&lt;p&gt;Knowing how recipients engage with your newsletters is gold. Tracking clicks per link gives you insight into the effectiveness of your email campaigns. Amazon SES doesn't offer this out of the box, so we added an extra SNS topic and an AWS Lambda function to our serverless architecture. This duo saves records in our Amazon DynamoDB database, tracking every piece of information about a specific link.&lt;/p&gt;
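&lt;p&gt;To give an idea of what that Lambda stores, here's a minimal sketch that turns an SES click event (as published through the extra SNS topic) into a DynamoDB-ready record. The field names follow the SES event publishing format, while the table's key layout is purely an assumption of mine:&lt;/p&gt;

```typescript
// Sketch: map an SES "Click" event to a DynamoDB item. The pk/sk layout is a
// hypothetical single-table design: one partition per tracked link, sorted by time.
export function clickToRecord(event: any) {
  return {
    pk: 'LINK#' + event.click.link,        // which link was clicked
    sk: event.click.timestamp,             // when it was clicked
    email: event.mail.destination[0],      // who clicked it
    userAgent: event.click.userAgent,      // what client they used
  };
}
```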

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;A big shoutout to &lt;a href="https://www.linkedin.com/in/guilleojeda" rel="noopener noreferrer"&gt;Guillermo Ojeda&lt;/a&gt; for enlightening me on the significance of this metric during our conversation about must-have metrics for newsletters. Thanks Guillermo 🙏.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Newsletter API Web Services
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F81adc9e0-e083-4f88-ac3a-44a6d3203d4a" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F81adc9e0-e083-4f88-ac3a-44a6d3203d4a" alt="Middy with SST enhancing Serverless experience"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building the AWS serverless infrastructure was a piece of cache (sorry, I couldn't resist). Let's break down the stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;SST with CDK integration&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;TypeScript&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Middy&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tools are the unsung heroes that took my developer experience to a whole new level. I won't dive too deep here; you can find the full repository with all the goodies at the bottom.&lt;/p&gt;

&lt;p&gt;Now, which APIs are we talking about?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CRUD APIs&lt;/strong&gt; for admin operations on newsletter items&lt;/li&gt;
&lt;li&gt;Newsletter &lt;strong&gt;publish and unpublish&lt;/strong&gt; APIs&lt;/li&gt;
&lt;li&gt;Newsletter &lt;strong&gt;subscribe and unsubscribe&lt;/strong&gt; APIs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pretty standard stuff, right? For the Amazon DynamoDB connection, I opted for dynamodb-toolbox: a library that proved its worth and is likely to feature in my future projects too.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F6d323dbe-5c49-4c72-82bb-cc05ec27c83d" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F6d323dbe-5c49-4c72-82bb-cc05ec27c83d" alt="Postman API Documentation"&gt;&lt;/a&gt;&lt;br&gt;
Take a peek at a snippet of the Postman APIs developed for this project. One API in the mix serves as the documentation. If you're authenticated and give it a buzz, it'll hand you the JSON Postman collection, all decked out with documentation ready to use (just remember to include environment variables).&lt;/p&gt;


&lt;h2&gt;
  
  
  AWS Lambda functions? No, Thank You!
&lt;/h2&gt;

&lt;p&gt;Alright, let's kick off this chapter by addressing the elephant in the serverless room: AWS Lambda functions. Now, you might wonder, "Why didn't we use Lambda functions?" Well, the answer is as simple as our serverless setup: a single AWS &lt;strong&gt;Step&lt;/strong&gt; Function took the reins and orchestrated the entire show. Say goodbye to the need for countless Lambda functions or a single AWS Lambda function doing all the dirty work. Interested? Let's explore why one &lt;em&gt;not-so-big&lt;/em&gt; AWS Step Function stole the spotlight.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F262259c2-e639-4ffb-b050-b62d908a67d6" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F262259c2-e639-4ffb-b050-b62d908a67d6" alt="AWS Step Function for Serverless Infrastructure"&gt;&lt;/a&gt;&lt;br&gt;
This overview divides our step function into three main parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Workflow that sends the first email&lt;/strong&gt;: a critical step to ensure everything is functioning as expected. An initial test to our own email address helps us validate the integrity of our setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;List every subscriber from Amazon DynamoDB&lt;/strong&gt;: an essential task where we fetch the list of subscribers and organize them into batches of &lt;strong&gt;50 email addresses&lt;/strong&gt; each.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For each batch, send out emails with Amazon SES&lt;/strong&gt;: the final leg of our journey involves the systematic sending of emails in batches using Amazon SES.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, let's dive into each step and explore the challenges that may arise when building this intricate workflow with AWS Step Functions.&lt;/p&gt;
&lt;h3&gt;
  
  
  Sending Test Email
&lt;/h3&gt;

&lt;p&gt;This segment involves a straightforward process. We retrieve the newsletter item from Amazon DynamoDB, process and convert it into valid JSON, and then send the email using the Amazon SES SendBulkEmail V2 API.&lt;/p&gt;

&lt;p&gt;SendBulkEmail stands out as the best practice, ensuring that each recipient sees only their own name and email address in the To header of the messages they receive.&lt;/p&gt;

&lt;p&gt;This step proves exceptionally useful for seeing how the newsletter will truly appear in our email inbox. The simplicity and effectiveness of this approach allow for quick identification and correction of any HTML issues. Following this, we can unpublish the newsletter and reschedule its publish date.&lt;/p&gt;
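&lt;p&gt;As a rough sketch, the bulk-send call's input can be assembled like this. The field names follow the SES v2 SendBulkEmail API, while the sender address and template name are placeholder assumptions:&lt;/p&gt;

```typescript
// Hedged sketch of the input shape handed to SendBulkEmail (SES v2).
// Each entry wraps one recipient, so every reader only sees their own
// address in the To header.
export function buildBulkEmailInput(entries: any[], templateData: string) {
  return {
    FromEmailAddress: 'newsletter@example.com', // assumed sender identity
    DefaultContent: {
      Template: { TemplateName: 'newsletter-template', TemplateData: templateData },
    },
    BulkEmailEntries: entries, // [{ Destination: { ToAddresses: ['...'] } }, ...]
  };
}
```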
&lt;h3&gt;
  
  
  Processing Subscribers
&lt;/h3&gt;

&lt;p&gt;This phase proved to be the most intricate to develop! Pagination in Amazon DynamoDB works differently from many NoSQL or SQL databases: you simply tell the database about the last item you retrieved. Using that as a starting point, we can page through every single subscriber.&lt;/p&gt;
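&lt;p&gt;In plain TypeScript terms, the loop the state machine implements looks roughly like this (queryPage stands in for a DynamoDB Query call and is a hypothetical helper):&lt;/p&gt;

```typescript
// Sketch of DynamoDB pagination: keep querying, feeding the last evaluated key
// back in as the next starting point, until no key comes back.
export async function listAllSubscribers(queryPage: any) {
  const emails: string[] = [];
  let lastKey: any = undefined;
  do {
    const page = await queryPage(lastKey);   // Query with ExclusiveStartKey = lastKey
    emails.push(...page.items);
    lastKey = page.lastEvaluatedKey;         // undefined once the table is exhausted
  } while (lastKey);
  return emails;
}
```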

&lt;p&gt;The objective here is to produce a list of subscribed email addresses. However, to create that list, we need to go through a few rounds of Pass states. In my case, this was the workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;First Step&lt;/strong&gt;: Query elements from Amazon DynamoDB (not scan)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Second Step&lt;/strong&gt;: Create an array of items that merges previous items and new ones: States.Array($.dynamodbConfig.items[*], $.subscribers.Items[*].email.S)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Third Step&lt;/strong&gt;: The result is pretty ugly:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"items": [
      [
        "emailaddress3@example.com"
            ],
      [
        "emailaddress@example.com",
        "emailaddress1@example.com",
        "emailaddress2@example.com"
      ]
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This means we need to flatten this list like so: $.dynamodbConfig.items[*][*]&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fourth Step&lt;/strong&gt;: Check if there are more elements. If not, we are going to close this process. If, in fact, there are more subscribers, we save the items in another object and start all over again.&lt;/li&gt;
&lt;/ul&gt;
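&lt;p&gt;The merge-then-flatten dance above can also be expressed in plain TypeScript, which makes it easier to see what the Pass states actually compute:&lt;/p&gt;

```typescript
// States.Array(...) nests the accumulated items next to the new page, producing
// the "pretty ugly" shape shown earlier; the [*][*] path then flattens it.
export function mergePages(accumulated: string[], newPage: string[]): string[] {
  const nested = [accumulated, newPage]; // like States.Array($.dynamodbConfig.items[*], $.subscribers.Items[*].email.S)
  return nested.flat();                  // like $.dynamodbConfig.items[*][*]
}
```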
&lt;h3&gt;
  
  
  Sending Out Emails
&lt;/h3&gt;

&lt;p&gt;Now, we are almost ready to "hit" that send button and dispatch our newsletters. The workflow here is comparatively simpler than the previous one, but we need to make some adjustments to ensure we can call the Amazon SES SendBulkEmail at scale. &lt;/p&gt;

&lt;p&gt;The process begins with this input:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    [
        "emailaddress@example.com",
        "emailaddress1@example.com",
        "emailaddress2@example.com",
        "emailaddress3@example.com",
                ...
    ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what we need to achieve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a valid object with parameters like so:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
    {
      "Destination": {
        "ToAddresses": [
          "emailaddress@example.com"
        ]
      }
    },
        ...
  ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While this task is relatively straightforward, we need to loop through each email address and add the payload as shown above.&lt;/p&gt;
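&lt;p&gt;That loop boils down to a one-line map; here's a TypeScript sketch mirroring the structure shown above:&lt;/p&gt;

```typescript
// Wrap each address in its own Destination object, so SendBulkEmail shows
// every recipient only their own address in the To header.
export function toDestinations(emails: string[]) {
  return emails.map((email) => ({ Destination: { ToAddresses: [email] } }));
}
```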

&lt;ul&gt;
&lt;li&gt;Divide the object containing email addresses into batches of 50 items (due to Amazon SES hard quotas). Even better, we can use an AWS Step Functions intrinsic function like so: States.ArrayPartition($.toAddressDestinations, 50)&lt;/li&gt;
&lt;li&gt;For each batch, call the Amazon SES SendBulkEmail with the correct parameter structure. &lt;/li&gt;
&lt;/ul&gt;
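&lt;p&gt;For reference, here's what States.ArrayPartition does, rewritten in TypeScript; this kind of sketch is handy for unit-testing the batching logic outside the state machine:&lt;/p&gt;

```typescript
// Split a list into consecutive batches of at most `size` items,
// mirroring States.ArrayPartition($.toAddressDestinations, 50).
export function arrayPartition(items: unknown[], size: number): unknown[][] {
  const batches: unknown[][] = [];
  let rest = items.slice();
  while (rest.length > 0) {
    batches.push(rest.slice(0, size));
    rest = rest.slice(size);
  }
  return batches;
}
```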

&lt;p&gt;One thing that proved super useful was getting the parameter from outside the Map state. This is possible using &lt;em&gt;ItemSelector&lt;/em&gt; like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "ContextIndex.$": "$$.Map.Item.Index",
  "ContextValue.$": "$$.Map.Item.Value",
  "toAddressDestinations.$": "$.toAddressDestinations",
  "templateData.$": "$.templateData"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, our workflow is done. Before wrapping everything up, we need to set up a proper error flow. In this case, if there are any errors, my workflow stops, and I get alerted via email.&lt;/p&gt;

&lt;p&gt;See? A complete workflow without AWS Lambda functions!&lt;/p&gt;




&lt;h2&gt;
  
  
  Challenges
&lt;/h2&gt;

&lt;p&gt;During the process of building the serverless AWS infrastructure, I encountered several challenges, but I'm glad I was able to complete the AWS Step Function workflow without the need for AWS Lambda functions. &lt;/p&gt;

&lt;p&gt;Let's explore some of the challenges I faced (and no, I won't talk about the enormous time spent on HTML/CSS 😜):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Unfortunately, there is no built-in utility in AWS Step Functions to enable marshalling and unmarshalling results to and from Amazon DynamoDB. This means you need to handle .S or .N every time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can identify errors from SendBulkEmail by examining this object: $.BulkEmailEntryResults[*].Error. In my case, I used this object to list every error. If any were found, I raised an error, stopping the entire workflow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flattening the DynamoDB output requires two steps. Unfortunately, it's not possible to accomplish this in a single, straightforward step.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It's not possible to use AWS Step Function intrinsic functions within the Choice state. Consequently, in many cases, a new Pass step needed to be added.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Metrics
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fe7c58c46-51ee-4a1e-86c2-88892e4f3da1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fe7c58c46-51ee-4a1e-86c2-88892e4f3da1" alt="Amazon SES dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Visualization of metrics is crucial, and SES offers two main capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Metrics&lt;/strong&gt; for tracking Bounces, Complaints, Sends, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Virtual Deliverability Manager&lt;/strong&gt; (VDM) for Amazon SES.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The default metrics are fairly standard. Now, let's look into VDM functionality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Virtual Deliverability Manager for Amazon SES
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F3f10b8b1-9e3b-4650-a45c-0e42e4e83612" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F3f10b8b1-9e3b-4650-a45c-0e42e4e83612" alt="Virtual Deliverability Manager for Amazon SES dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;VDM provides a lot of metrics, including soft and hard bounce rates, email open rates, and more. It lacks a "clicks per link" metric, which we had to build ourselves. Apart from that, there aren't many fancy or custom metrics available; however, the standard insights provided are fundamental for any newsletter.&lt;/p&gt;

&lt;p&gt;Additionally, you can download a CSV file containing every single metric available in Amazon SES, which proves to be quite useful.&lt;/p&gt;




&lt;h2&gt;
  
  
  Total Pricing
&lt;/h2&gt;

&lt;p&gt;Let's quickly summarize the pricing for Amazon SES and AWS Step Functions. Here's a comparison:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon SES:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;1,000&lt;/strong&gt; emails: &lt;strong&gt;~$0.30&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10,000&lt;/strong&gt; emails: &lt;strong&gt;~$3&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1,000,000&lt;/strong&gt; emails: &lt;strong&gt;~$30&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Step Functions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;1,000&lt;/strong&gt; emails: ~2,150 executions = &lt;strong&gt;free&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10,000&lt;/strong&gt; emails: ~21,500 executions = &lt;strong&gt;~$0.50&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1,000,000&lt;/strong&gt; emails: ~2,150,000 executions = &lt;strong&gt;~$55&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
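&lt;p&gt;As a sanity check of those numbers: treating the "executions" above as state transitions, and assuming the Standard workflow price of $0.000025 per transition, roughly 2.15 transitions per email (inferred from the list), and the 4,000-transition monthly free tier, a quick calculation lands close to the estimates:&lt;/p&gt;

```typescript
// Back-of-the-envelope Step Functions cost model. The per-transition price and
// free tier are the publicly documented Standard workflow figures at the time
// of writing; the 2.15 ratio is inferred from the estimates above.
export function stepFunctionsCost(emails: number): number {
  const transitions = emails * 2.15;
  const billable = Math.max(0, transitions - 4000); // monthly free tier
  return billable * 0.000025;                       // dollars
}
```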

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️ The AWS Step Functions price could potentially be reduced to ~$3 by incorporating the "Compute to bulk destination" Map into the DynamoDB processing. This could be a notable improvement for the future.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Creating a serverless newsletter with Amazon SES and AWS Step Functions is a potent and cost-efficient solution. By leveraging SES for reliable email delivery and Step Functions for orchestration, we've built a scalable system without traditional Lambda functions.&lt;/p&gt;

&lt;p&gt;Well, now you know how to create a serverless newsletter on AWS. The only thing I didn't show you is how it turns out!&lt;/p&gt;

&lt;p&gt;Check the results 👉  &lt;a href="https://cloudnature.net/newsletters/subscribe" rel="noopener noreferrer"&gt;https://cloudnature.net/newsletters/subscribe&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;I'll post monthly, covering the latest AWS trends, community blog posts, and Cloud-related news. If you're an AWS Cloud Architect, Cloud Engineer, DevOps, or AI/ML Developer, you'll find valuable content.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can find the repository here: &lt;a href="https://github.com/Depaa/newsletter-manager-template" rel="noopener noreferrer"&gt;https://github.com/Depaa/newsletter-manager-template&lt;/a&gt; 😉.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you enjoyed this article, please let me know in the comment section or send me a DM. I'm always happy to chat! ✌️&lt;/p&gt;

&lt;p&gt;Thank you so much for reading! 🙏 Keep an eye out for more AWS-related posts, and feel free to connect with me on LinkedIn 👉 &lt;a href="https://www.linkedin.com/in/matteo-depascale/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/matteo-depascale/&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/ses/" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/ses/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://middy.js.org/" rel="noopener noreferrer"&gt;https://middy.js.org/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.sst.dev/" rel="noopener noreferrer"&gt;https://docs.sst.dev/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mail-tester.com/" rel="noopener noreferrer"&gt;https://www.mail-tester.com/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Disclaimer: opinions expressed are solely my own and do not express the views or opinions of my employer.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>webdev</category>
      <category>aws</category>
      <category>serverless</category>
      <category>programming</category>
    </item>
    <item>
      <title>Elevating Your AWS Lambda Game With Middy And TypeScript</title>
      <dc:creator>Matteo Depascale</dc:creator>
      <pubDate>Wed, 28 Feb 2024 13:02:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/elevating-your-aws-lambda-game-with-middy-and-typescript-pdj</link>
      <guid>https://dev.to/aws-builders/elevating-your-aws-lambda-game-with-middy-and-typescript-pdj</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the ever-evolving landscape of AWS Lambda development, efficiency is key. Picture this: you're building serverless functions, but the process feels a bit like assembling a puzzle without all the pieces. That's where Middy steps in as the missing link.&lt;/p&gt;

&lt;p&gt;AWS Lambda, while revolutionary, can pose challenges in terms of code organization, reusability, and overall development simplicity. Without the right tools, navigating these challenges might feel like traversing uncharted territory.&lt;/p&gt;

&lt;p&gt;Enter Middy, the middleware framework designed to streamline and enhance AWS Lambda functions. Middy acts as your development ally, addressing the pain points of Lambda development with its middleware architecture. In this blogpost you'll discover how Middy turns these challenges into opportunities for a more efficient and enjoyable serverless development experience. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️ This is the second part of the series focused on AWS Lambda and serverless. You don't need the first part, but if you are here you will probably like it. Here is the link: &lt;a href="https://cloudnature.net/blog/nextlevel-serverless-development-with-sst-and-typescript" rel="noopener noreferrer"&gt;https://cloudnature.net/blog/nextlevel-serverless-development-with-sst-and-typescript&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Setting the Stage
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F4d52a2bc-614f-4210-9d70-36b42dd92992" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F4d52a2bc-614f-4210-9d70-36b42dd92992" alt="Lambda execution environment lifecycle"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS Lambda functions, the backbone of serverless computing, offer scalable and cost-effective execution. Before diving into the power of Middy, it's crucial to grasp the fundamentals of Lambda functions.&lt;/p&gt;

&lt;p&gt;AWS Lambda allows you to run code without provisioning or managing servers, responding to events while automatically managing compute resources. You write the code, AWS handles the rest. Fantastic, right? However, like any tech, there are a few aspects to be mindful of.&lt;/p&gt;

&lt;p&gt;Here's a simple handler code snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const handler = async (event, context) =&amp;gt; {
  console.log("EVENT: \n" + JSON.stringify(event, null, 2));
  return context.logStreamName;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lambda functions respond to various events: S3, SNS, SQS, HTTP from API Gateway, and more. In this code we are printing the event that the AWS Lambda function &lt;em&gt;handled&lt;/em&gt;.&lt;br&gt;
Lambda development isn't always smooth sailing. Finding and addressing its challenges is crucial for achieving a robust and efficient serverless architecture. Let's explore them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input Logging&lt;/strong&gt;: debugging based on received events&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output Logging&lt;/strong&gt;: ensuring the response is correct&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Body Parsing&lt;/strong&gt;: yep, JSON.parse(event.body) for HTTP events&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP Body Response&lt;/strong&gt;: parsing responses with statusCode and message&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CORS Headers&lt;/strong&gt;: adding them in HTTP response&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Handling&lt;/strong&gt;: ah, the inevitable pain point (I can feel your pain)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event Validation&lt;/strong&gt;: ensuring events meet expected criteria&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event Body Typing&lt;/strong&gt;: especially in TypeScript, right?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Solving these challenges often involves developing your Lambda handler wrapper: wrapping in try...catch, logging events, handling errors, and reshaping responses. Well, I did all of this until I stumbled upon THE middleware solution: &lt;strong&gt;Middy&lt;/strong&gt;.&lt;/p&gt;
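&lt;p&gt;For context, such a hand-rolled wrapper usually ends up looking something like this sketch (and every project reinvents its own flavor of it):&lt;/p&gt;

```typescript
// A typical DIY Lambda wrapper: log the event, run the handler, shape the
// HTTP response, and turn any thrown error into a 500. Middy replaces all of
// this boilerplate with composable middlewares.
export const wrap = (handler: any) => async (event: any, context: any) => {
  console.log('EVENT:', JSON.stringify(event));
  try {
    const result = await handler(event, context);
    return { statusCode: 200, body: JSON.stringify(result) };
  } catch (err) {
    console.error(err);
    return { statusCode: 500, body: JSON.stringify({ message: 'Internal Server Error' }) };
  }
};
```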


&lt;h2&gt;
  
  
  Introducing Middy
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F6fd05598-2b6c-429c-baf2-40789c99e0de" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F6fd05598-2b6c-429c-baf2-40789c99e0de" alt="How Middy works"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Middy, a game-changer in Lambda development, introduces a middleware architecture that streamlines your serverless journey. Think of middleware as a series of functions that intercept the flow of data between the initial request and the final response. In the context of AWS Lambda, Middy seamlessly integrates into the execution flow, allowing you to augment, modify, or validate both the incoming events and outgoing responses. &lt;br&gt;
Each middleware handles a specific concern, contributing to a modular and maintainable codebase. This design facilitates the addition or removal of middleware based on your specific requirements.&lt;/p&gt;

&lt;p&gt;Let's revisit some of the challenges we identified earlier and discover how Middy rises to the occasion:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input Logging&lt;/strong&gt;: Middy has a dedicated middleware for that.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output Logging&lt;/strong&gt;: There's a middleware designed to handle that seamlessly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Body Parsing&lt;/strong&gt;: Yes, you guessed it, there's a middleware for parsing HTTP event bodies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP Body Response&lt;/strong&gt;: Once again, Middy steps in with a dedicated middleware.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we understand the workings of AWS Lambda and the capabilities Middy brings to the table, let's explore how we can combine them to overcome real-world challenges in serverless development.&lt;/p&gt;


&lt;h2&gt;
  
  
  Getting Started with Middy and TypeScript
&lt;/h2&gt;

&lt;p&gt;Getting Middy up and running is a breeze. Follow these simple steps to integrate Middy into your AWS Lambda environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install --save @middy/core
npm install --save-dev @types/aws-lambda
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For TypeScript, I found this issue, which is exactly what we need (after a few tweaks 🔨)&lt;br&gt;
🔗 &lt;a href="https://github.com/middyjs/middy/issues/316" rel="noopener noreferrer"&gt;https://github.com/middyjs/middy/issues/316&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our Middyfy wrapper sets the stage, providing a structured approach to typing events and handling requests. Now, let's extend it to accommodate essential Middy middleware:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import middy, { type MiddyfiedHandler } from '@middy/core'
import type { APIGatewayProxyEvent, APIGatewayProxyResult, Context, Handler as AWSHandler, SNSEvent } from 'aws-lambda'
import { type Entity } from 'dynamodb-onetable'
import { type OneField, type Paged } from 'dynamodb-onetable/dist/mjs/Model'

// Event is an APIGatewayProxyEvent with a typed body, pathParameters and queryStringParameters which depends on http-json-body-parser &amp;amp; json-schema-to-ts
// queryStringParameters and multiValueQueryStringParameters is non-nullable as we use http-event-normalizer
export interface Event&amp;lt;TBody, TPathParameters, TQueryStringParameters&amp;gt;
  extends Omit&amp;lt;APIGatewayProxyEvent, 'body' | 'pathParameters' | 'queryStringParameters'&amp;gt; {
  waitTimestamp: string | number | Date
  body: TBody
  pathParameters: TPathParameters
  queryStringParameters: TQueryStringParameters
  multiValueQueryStringParameters: NonNullable&amp;lt;APIGatewayProxyEvent['multiValueQueryStringParameters']&amp;gt;
}

// We are making use of http-response-serializer, so our body type can either be an Entity, an Array&amp;lt;Entity&amp;gt; or a string
interface Result extends Omit&amp;lt;APIGatewayProxyResult, 'body'&amp;gt; {
  body:
  | Entity&amp;lt;Record&amp;lt;string, OneField&amp;gt;&amp;gt;
  | Paged&amp;lt;Entity&amp;lt;Record&amp;lt;string, OneField&amp;gt;&amp;gt;&amp;gt;
  | string
  | Record&amp;lt;string, unknown&amp;gt;
}

// Handler type which gives us proper types on our event based on TBody and TPathParameters which are JSON schemas
export type Handler&amp;lt;TBody = void, TPathParameters = void, TQueryStringParameters = void&amp;gt; = AWSHandler&amp;lt;
Event&amp;lt;TBody, TPathParameters, TQueryStringParameters&amp;gt;,
Result
&amp;gt;

interface RequestSchema {
  properties?: {
    body?: Record&amp;lt;string, unknown&amp;gt; | null
    pathParameters?: Record&amp;lt;string, unknown&amp;gt; | null
    queryStringParameters?: Record&amp;lt;string, unknown&amp;gt; | null
  }
}

export const middyfy = (
  handler: Handler&amp;lt;never, never, never&amp;gt;,
  requestSchema: RequestSchema | null = null
): MiddyfiedHandler&amp;lt;Event&amp;lt;never, never, never&amp;gt;, Result, Error, Context&amp;gt; =&amp;gt; {
  const wrapper = middy(handler)

    // Attach desired Middy middlewares here
  wrapper.use(/* middleware 1 */);
  wrapper.use(/* middleware 2 */);
  // ... add more as needed

  return wrapper
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Dive into the Middy middleware ecosystem and choose the ones that align with your Lambda function's requirements. Whether it's input logging, output logging, handling HTTP responses... Middy has you covered.&lt;/p&gt;

&lt;p&gt;And this is our sample lambda function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { middyfy, type Handler } from '@core/libs/middyWrapper'
import type { FromSchema } from 'json-schema-to-ts'

export const bodySchema = {
  type: 'object',
  properties: {
    subject: { type: 'string', maxLength: 100 },
    content: { type: 'string', maxLength: 100 }
  },
  required: ['subject', 'content'],
  additionalProperties: false
} as const

export const schema = {
  type: 'object',
  properties: {
    body: bodySchema
  }
} as const

const main: Handler&amp;lt;FromSchema&amp;lt;typeof bodySchema&amp;gt;, void, void&amp;gt; = async (event) =&amp;gt; {
  return {
    statusCode: 200,
    body: { ...event }
  }
}

export const handler = middyfy(main, schema)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Did you copy everything? Perfect, because now we are going to add middlewares and get our hands dirty 😈&lt;/p&gt;




&lt;h2&gt;
  
  
  Leveraging Middy's Middleware
&lt;/h2&gt;

&lt;p&gt;Ready for the exciting part? Follow these simple steps to introduce essential middlewares (not all of them, there are a lot) and witness the transformation.&lt;/p&gt;

&lt;p&gt;Let's start by installing some middlewares. Open your terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install @middy/http-error-handler
npm install @middy/http-json-body-parser
npm install @middy/http-response-serializer
npm install @middy/validator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, import them into your Middyfy wrapper code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import httpErrorHandlerMiddleware from '@middy/http-error-handler'
import httpJsonBodyParserMiddleware from '@middy/http-json-body-parser'
import httpResponseSerializerMiddleware from '@middy/http-response-serializer'
import validatorMiddleware from '@middy/validator'
import { transpileSchema } from '@middy/validator/transpile'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break it down, one middleware at a time:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;http-json-body-parser&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wrapper
      .use(httpJsonBodyParserMiddleware())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;validator&lt;/strong&gt;: checks that the incoming event complies with the schema we specified in our Lambda function
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  if (requestSchema != null) {
    wrapper.use(validatorMiddleware({ eventSchema: transpileSchema(requestSchema) }))
      .use({
        onError: (request) =&amp;gt; {
          const response = request.response
          const error = request.error as any
          if (response.statusCode === 400) {
            response.headers = { ...response.headers, 'Content-Type': 'application/json' } // guard: headers may be undefined here
            response.body = JSON.stringify({ message: response.body, validationErrors: error.cause })
          }
        }
      })
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;http-error-handler&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wrapper
      .use(httpErrorHandlerMiddleware({}))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;http-response-serializer&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wrapper
      .use(
      httpResponseSerializerMiddleware({
        serializers: [
          {
            regex: /^application\/json$/,
            serializer: ({ body }) =&amp;gt; JSON.stringify(body)
          }
        ],
        defaultContentType: 'application/json'
      })
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With less than 100 lines of code, your Lambda functions are now robust and production-ready.&lt;/p&gt;
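&lt;p&gt;To see why this composes so cleanly, here is a minimal sketch of the before/after/onError middleware pattern that Middy implements. It's written in plain TypeScript with no dependencies; all names are illustrative, not Middy's actual internals:&lt;/p&gt;

```typescript
// Minimal sketch of a Middy-style middleware chain: each middleware can hook
// into the "before", "after", and "onError" phases around the handler.
type Request = { event: any; response?: any; error?: Error };
type Middleware = {
  before?: (req: Request) => void;
  after?: (req: Request) => void;
  onError?: (req: Request) => void;
};

function wrap(handler: (event: any) => Promise<any>, middlewares: Middleware[]) {
  return async (event: any) => {
    const req: Request = { event };
    try {
      for (const m of middlewares) m.before?.(req);               // "before" hooks in order
      req.response = await handler(req.event);
      for (const m of [...middlewares].reverse()) m.after?.(req); // "after" hooks in reverse
    } catch (err) {
      req.error = err as Error;
      for (const m of [...middlewares].reverse()) m.onError?.(req); // error hooks
    }
    return req.response;
  };
}

// Example: a json-body-parser-like middleware plus a catch-all error handler
const jsonBodyParser: Middleware = {
  before: (req) => { req.event.body = JSON.parse(req.event.body); }
};
const errorHandler: Middleware = {
  onError: (req) => { req.response = { statusCode: 500, body: 'Internal error' }; }
};

const handler = wrap(
  async (event) => ({ statusCode: 200, body: event.body }),
  [jsonBodyParser, errorHandler]
);
```

&lt;p&gt;Each middleware only touches the phase it cares about, which is exactly why chaining &lt;code&gt;.use(...)&lt;/code&gt; calls stays readable as the wrapper grows.&lt;/p&gt;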




&lt;h2&gt;
  
  
  Considerations
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fcdb84a8f-6450-4953-bb38-de3d13bf3a6a" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fcdb84a8f-6450-4953-bb38-de3d13bf3a6a" alt="Logs for input and output Middy middleware"&gt;&lt;/a&gt;&lt;br&gt;
The middlewares showcased in the image above illustrate the power of Middy in action. From logging input and output to enhancing security headers and serializing responses, Middy simplifies complex processes into an elegant solution.&lt;/p&gt;

&lt;p&gt;Having incorporated Middy into the production environment for my newsletter, I've witnessed a noticeable improvement in debugging and monitoring. The once complex task has become more streamlined and less prone to messy complications.&lt;/p&gt;

&lt;p&gt;A particularly noteworthy aspect is the ability to use a unified interface for input validation and TypeScript typing. This seemingly simple feature significantly elevates the developer experience.&lt;/p&gt;

&lt;p&gt;I really like the idea, and I plan to use Middy more in my future projects. Now that I have a new and ready-to-use library for my Middy wrapper, it's even more exciting! 🤩&lt;/p&gt;




&lt;h2&gt;
  
  
  Future Trends
&lt;/h2&gt;

&lt;p&gt;The future of AWS Lambda development with Middy holds more and more possibilities as serverless adoption grows. That growing adoption has attracted attention, and the sponsorships from AWS and FourTheorem signal Middy's rising significance in the serverless ecosystem.&lt;/p&gt;

&lt;p&gt;As serverless computing continues to gain traction, more developers are embracing the simplicity and efficiency it offers. This surge in popularity implies a broader user base for Middy, making it a go-to choice for Lambda development.&lt;/p&gt;

&lt;p&gt;The growing user base is likely to result in increased community contributions. This influx of developers brings diverse perspectives and ideas, which may lead to new and innovative middlewares. Expect a richer ecosystem with a wide array of middleware options for various use cases.&lt;/p&gt;

&lt;p&gt;While there are already plenty of HTTP-related middlewares, the future holds the promise of an expanded selection for other AWS services.&lt;/p&gt;

&lt;p&gt;Hopefully Middy becomes an integral part of the serverless development landscape; it really deserves it ✌️&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In closing, Middy emerges as a game-changer in the world of AWS Lambda development. As we explored Lambda, we encountered challenges like logging and error handling. Middy steps in to effortlessly tackle these hurdles with its modular middleware approach.&lt;/p&gt;

&lt;p&gt;Middy isn't just middleware; it's a tool that simplifies AWS Lambda complexities, taking serverless development to a whole new level.&lt;/p&gt;

&lt;p&gt;And there you have it, folks! Do I have your attention now? Get ready because the next article will delve into how I developed my Serverless newsletter infrastructure with SST and Middy (plus a really big Step Function) 💪&lt;/p&gt;

&lt;p&gt;If you enjoyed this article, please let me know in the comment section or send me a DM. I'm always happy to chat! ✌️&lt;/p&gt;

&lt;p&gt;Thank you so much for reading! 🙏 Keep an eye out for more AWS-related posts, and feel free to connect with me on LinkedIn 👉 &lt;a href="https://www.linkedin.com/in/matteo-depascale/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/matteo-depascale/&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://middy.js.org/" rel="noopener noreferrer"&gt;https://middy.js.org/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Disclaimer: opinions expressed are solely my own and do not express the views or opinions of my employer.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>typescript</category>
      <category>programming</category>
      <category>aws</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Next-Level Serverless Development with SST and TypeScript</title>
      <dc:creator>Matteo Depascale</dc:creator>
      <pubDate>Tue, 20 Feb 2024 13:00:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/next-level-serverless-development-with-sst-and-typescript-41c4</link>
      <guid>https://dev.to/aws-builders/next-level-serverless-development-with-sst-and-typescript-41c4</guid>
      <description>&lt;p&gt;Transform your Serverless workflow with SST and TypeScript. From backends to web apps, SST simplifies it all. Learn the future of Serverless today!&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In my recent project, I built a blog website from scratch, covering Infrastructure, Backend, and even Frontend. It was a big project with everything made custom. That project, combined with other projects I did in the past, revealed two problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;defining boundaries between infrastructure and application code&lt;/li&gt;
&lt;li&gt;the inadequacy of my utility toolbox for certain use cases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, these challenges find resolutions in two ✌️ handy tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SST&lt;/li&gt;
&lt;li&gt;Middy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This first part delves into SST, exploring its functionality and setting up a project.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️Note: Middy will be covered in the second part, and the third part unveils the open sourcing of my Newsletter infrastructure built with TypeScript using SST and Middy.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  SST
&lt;/h2&gt;

&lt;p&gt;SST acts as an Infrastructure as Code tool, wrapping AWS CDK while enhancing it with features tailored for building not only backends but also server-side applications with Svelte, Next.js, Solid, etc.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;User-Friendly CDK Constructs&lt;/strong&gt;: Say goodbye to complexity. SST provides optimized constructs for serverless apps. Want your Next.js Website? Just 1 line, that's it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Live Lambda Development (LLD)&lt;/strong&gt;: your instant Lambda upgrade! Develop in real time with Lambda function updates landing in the cloud within seconds, no more waiting. And the best part? SST seamlessly syncs your local and remote functions.&lt;br&gt;
Ever thought, "Great, now I have to dive into CloudWatch logs every time I call a function"? Not anymore! SST streams all logs to your local terminal. Convenient, right? Let's dive into the magic behind this cool feature.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  Live Lambda Development
&lt;/h3&gt;

&lt;p&gt;SST uses AWS IoT over WebSocket to bridge the connection between your machine and the AWS Lambda function. But how does this work?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Execute "sst dev" to deploy a "mock" version of your infrastructure, transforming your Lambda functions into stubs (a simpler version).&lt;/li&gt;
&lt;li&gt;A local WebSocket client starts up and links up with your AWS account.&lt;/li&gt;
&lt;li&gt;When a Lambda function in your app gets used, it sends out an event with the function's request.&lt;/li&gt;
&lt;li&gt;The local WebSocket client receives this event and acknowledges the request.&lt;/li&gt;
&lt;li&gt;It runs the function locally and then emits another event carrying the function's response.&lt;/li&gt;
&lt;li&gt;The stub Lambda function receives the event and responds with the answer, and voila!&lt;/li&gt;
&lt;/ol&gt;
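&lt;p&gt;The six steps above can be simulated in a few lines. This toy sketch (plain TypeScript, assuming nothing about SST's real implementation) uses an in-process event channel where SST uses AWS IoT over WebSocket:&lt;/p&gt;

```typescript
// Toy simulation of the Live Lambda Development relay: a deployed "stub"
// forwards the request over a channel, the local client runs the real
// handler, and the response travels back to the stub.
import { EventEmitter } from "node:events";

const channel = new EventEmitter(); // stands in for the IoT/WebSocket bridge

// Steps 2, 4, 5: the local client listens for forwarded requests,
// runs the function locally, and emits the response event.
function startLocalClient(localHandler: (event: object) => Promise<object>) {
  channel.on("request", async ({ id, event }: any) => {
    const response = await localHandler(event);
    channel.emit(`response:${id}`, response);
  });
}

// Steps 1, 3, 6: the deployed stub forwards the event and waits for the answer.
function stubLambda(event: object): Promise<object> {
  const id = Math.random().toString(36).slice(2);
  return new Promise((resolve) => {
    channel.once(`response:${id}`, resolve); // stub replies with the answer
    channel.emit("request", { id, event });
  });
}

startLocalClient(async (event) => ({ statusCode: 200, echo: event }));
```

&lt;p&gt;The stub never contains your business logic; it only relays events, which is why redeploys aren't needed while you edit code locally.&lt;/p&gt;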

&lt;p&gt;Moreover, it's significantly faster than CDK Watch and SAM Accelerate (SST claims 50-100x faster).&lt;/p&gt;

&lt;p&gt;I'll be honest; I was a bit skeptical initially. I really wanted to test AWS Lambda functions on my machine because I had always found myself comfortable that way. Oh boy, what I would have missed! After experiencing Live Lambda Development, I was genuinely amazed. No need to mock every AWS service in my environment. That's when I fell in love with this functionality.&lt;/p&gt;




&lt;h3&gt;
  
  
  Weighing the Pros and Cons
&lt;/h3&gt;

&lt;p&gt;Let's look at the major pros and cons of SST.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Live Lambda Development for real-time updates during development&lt;/li&gt;
&lt;li&gt;Higher-level serverless constructs with the flexibility to add custom CDK parameters when necessary&lt;/li&gt;
&lt;li&gt;Engage with a lively community on Discord and Slack&lt;/li&gt;
&lt;li&gt;Effortlessly reference resources across stacks&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Primarily tailored for AWS Lambda, web applications, and serverless scenarios. For more complex or enterprise-grade infrastructures, consider Terraform or plain CDK instead&lt;/li&gt;
&lt;li&gt;Keep in mind, it doesn't support multi-region stacks. If you're venturing into a multi-region serverless stack, you might need to find some workarounds&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Setting Up SST
&lt;/h3&gt;

&lt;p&gt;Let's kick things off by initializing the project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx create-sst@latest --template=examples/rest-api-dynamodb
cd my-sst-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fdf05c8ae-d520-49de-b563-65f06ef06f9e" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fdf05c8ae-d520-49de-b563-65f06ef06f9e" alt="Project structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's as simple as that! Now, let's quickly go over what just happened:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;sst.config.ts&lt;/strong&gt;: This file serves as your configuration hub for every stack you deploy. Here's a peek at how I prefer to set it up:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export default {
  config (_input) {
    return {
      name: 'newsletter-backend',
      region: 'eu-central-1',
      stage: _input.stage
    }
  },
  async stacks (app) {
    app.setDefaultFunctionProps({
      ...
    })

    Tags.of(app).add('env', app.stage)
    Tags.of(app).add('deploy', 'sst')
    Tags.of(app).add('project', app.name)

    await app.stack(DatabaseStack)
    app.stack(EmailStack)
    app.stack(AlertingStack).stack(SchedulerStack).stack(ApiPermissionStack).stack(ApiStack)
  }
} satisfies SSTConfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By setting it up this way, you can establish default parameters for every Lambda function and seamlessly add tags to your entire infrastructure. And hey, notice the async-await? That's right: deploy stacks in parallel, or await others when needed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;stacks/MyStack.ts&lt;/strong&gt;: This is where your infrastructure takes shape. You can have as many stacks as you want, all in TypeScript. Check out the example API in here. Yep! It's &lt;em&gt;that&lt;/em&gt; simple to build and deploy serverless infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;packages/&lt;/strong&gt;: Inside, you'll find two folders, &lt;strong&gt;core&lt;/strong&gt; for all your business logic and &lt;strong&gt;functions&lt;/strong&gt; where your AWS Lambda function code resides.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fantastic! As the next step, it's time to build our application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install
npm init @eslint/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Of course, we're adding a linter. We can't just run wild without something keeping us in check, right? 😜 Once ESLint is configured, your infrastructure is ready to make its cloud debut:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx sst dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And there you have it🪄! Watch as your application is built and a "mock" version is deployed directly to the cloud. Once done, grab the URL, paste it into Postman, and give it a spin.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! You've just laid the solid foundation of your serverless infrastructure. Now, it's time to roll up your sleeves and build something exciting. Get hands-on with the tool, and trust me, you're in for a lovely ride.&lt;/p&gt;

&lt;p&gt;And the excitement doesn't end here! In the next part, we'll level up by adding Middy as our handler middleware 💪&lt;/p&gt;

&lt;p&gt;If you enjoyed this article, please let me know in the comment section or send me a DM. I'm always happy to chat! ✌️&lt;/p&gt;

&lt;p&gt;Thank you so much for reading! 🙏 Keep an eye out for more AWS-related posts, and feel free to connect with me on LinkedIn 👉 &lt;a href="https://www.linkedin.com/in/matteo-depascale/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/matteo-depascale/&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.sst.dev/" rel="noopener noreferrer"&gt;https://docs.sst.dev/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Disclaimer: opinions expressed are solely my own and do not express the views or opinions of my employer.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>serverless</category>
      <category>programming</category>
      <category>devops</category>
      <category>sst</category>
    </item>
    <item>
      <title>Real-Time Insights: Monitoring Your Call Center On AWS</title>
      <dc:creator>Matteo Depascale</dc:creator>
      <pubDate>Wed, 17 Jan 2024 13:00:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/real-time-insights-monitoring-your-call-center-on-aws-41i1</link>
      <guid>https://dev.to/aws-builders/real-time-insights-monitoring-your-call-center-on-aws-41i1</guid>
      <description>&lt;p&gt;Gain a deeper understanding of your call center's performance in real-time on AWS. Explore monitoring tools, metrics, and best practices.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;🔗 This is the second part of this series about Amazon Connect, you can find the first part here: &lt;a href="https://cloudnature.net/blog/setting-up-your-first-call-center-on-aws-a-step-by-stepguide"&gt;https://cloudnature.net/blog/setting-up-your-first-call-center-on-aws-a-step-by-stepguide&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Call center monitoring is not just a feature; it’s a necessity. In the fast-paced world of customer support, businesses must maintain a real-time grasp on their operations. Effective monitoring allows you to stay ahead of issues, ensure customer satisfaction, and optimize resource utilization.&lt;/p&gt;

&lt;p&gt;Imagine your call center as a grand piano on a sold-out stage 🎹. Each call your agents handle is a note played on that piano. Effective monitoring acts as the pianist’s ear, allowing you to hit the right chords, keep the perfect tempo and produce melodies. Call center monitoring allows you to watch over your agents’ interactions, ensuring they hit the right notes. It enables you to identify and address any discordant sounds in real-time, thereby orchestrating customer satisfaction and resource utilization to a delightful tune 🎶.&lt;/p&gt;

&lt;p&gt;In the next section, we will explore how, much like a pianist with a trained ear, you can listen to the harmonies that shape your call center’s performance on the AWS stage.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️Note: As we saw in the first part of this series, a real call center is not built with just an Amazon Connect instance. This article covers only the monitoring of Amazon Connect instances and related phone numbers.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Key Metrics to Monitor
&lt;/h2&gt;

&lt;p&gt;We’ve discussed why monitoring is important. Now it’s time to understand the metrics we need to monitor in order to have a harmonious call center. We can divide the most critical metrics into three groups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service limit metrics&lt;/li&gt;
&lt;li&gt;Instance and Flow metrics&lt;/li&gt;
&lt;li&gt;Business metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Service Limit Metrics
&lt;/h3&gt;

&lt;h4&gt;
  
  
  CallsBreachingConcurrencyQuota — Count
&lt;/h4&gt;

&lt;p&gt;This metric tracks the total number of voice calls that exceed the concurrent calls quota for your instance. Exceeding quotas can lead to service disruptions and affect customer experience. Monitoring this metric helps you react as soon as possible, so you can ask the service team for a service limit increase.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Sum&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  ThrottledCalls — Count
&lt;/h4&gt;

&lt;p&gt;Throttled calls are voice calls rejected due to exceeding the maximum supported call rate. As above, by monitoring this metric you can tell when to ask for a service limit increase.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Sum&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  ConcurrentCallsPercentage — Percent
&lt;/h4&gt;

&lt;p&gt;This metric provides insights into the percentage of your concurrent active voice calls service quota being used. As you can see, unlike the two metrics above, &lt;strong&gt;this one helps you prevent&lt;/strong&gt; the issue, so you can plan capacity accordingly.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;80&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;300&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Maximum&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Instance and Flow Metrics
&lt;/h3&gt;

&lt;h4&gt;
  
  
  ToInstancePacketLossRate — Percent
&lt;/h4&gt;

&lt;p&gt;Packet loss can degrade call quality. A higher percentage means lower call quality, which could indicate network problems.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;0.03&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Average&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️ Note: This one is noteworthy. AWS suggests setting an alarm when packet loss exceeds 1% (0.01). I think it’s safe to say we can set a 3% threshold for one minute.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  CallRecordingUploadError — Count
&lt;/h4&gt;

&lt;p&gt;Failed call recording uploads can result in lost data and compliance issues. However, it’s important to note that when an upload error occurs, it doesn’t necessarily indicate a complete failure. Amazon Connect automatically retries the delivery within 24 hours, mitigating potential data loss.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Sum&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️ Note: in a real-world scenario, when encountering an upload error, you should reach out to AWS Support, providing them with the contact ID, contact flow logs, and CTR (Contact Trace Records). They will assist in resolving the issue and retrieving the recording for you, ensuring data integrity and compliance.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  ContactFlowErrors — Count
&lt;/h4&gt;

&lt;p&gt;Tracking flow errors helps you identify and correct issues in your call flow logic, ensuring smooth customer interactions.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Sum&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  ContactFlowFatalErrors — Count
&lt;/h4&gt;

&lt;p&gt;These errors indicate critical failures in call flow execution. Monitoring them is crucial for preventing disruptions in call center operations.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Sum&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️ Note: it’s crucial to understand that these errors originate from Amazon Connect’s side, which means there’s limited action we can take independently. We may not even have access to CloudWatch logs related to the error. In such cases, the recommended procedure is to notify the AWS Support team, providing details about the affected Contact Flow and a timeline of the issue. They will investigate and resolve the problem, and reach out to you with their findings.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  CallBackNotDialableNumber — Count
&lt;/h4&gt;

&lt;p&gt;This metric alerts you to instances where queued callbacks couldn’t be dialed due to geographic restrictions. Ensuring compliance with outbound call rules is essential.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Sum&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  MisconfiguredPhoneNumbers — Count
&lt;/h4&gt;

&lt;p&gt;Failed calls due to misconfigured phone numbers can result in missed opportunities. Monitoring this metric helps maintain call quality.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Sum&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Lambda Execution Duration — Milliseconds
&lt;/h4&gt;

&lt;p&gt;Tracking the execution time of Lambda functions in your AWS account helps you optimize their performance and reduce latency.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;6000&lt;/strong&gt; (6s)&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;2&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Maximum&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Lambda Execution Errors — Count
&lt;/h4&gt;

&lt;p&gt;Monitoring Lambda function failures is critical for identifying and resolving issues within your call center infrastructure.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Sum&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Custom: IncomingCalls — Count
&lt;/h4&gt;

&lt;p&gt;A sudden drop in incoming calls to a DID number (or TFN) could indicate issues with the carrier, and quick action is required to address them. Moreover, this metric can also be used to set an upper bound depending on your carrier provider’s capacity.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;1&lt;/strong&gt; &lt;em&gt;(depending on your workload)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Sum&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️ Note: How can we trigger an alert for this metric? We are testing “Anomaly detection”, alerting whenever the metric falls below the predicted range. However, since most call centers do not operate during the weekend, we cannot simply send notifications using SNS. Instead, we need to route the notification through a Lambda function first and incorporate custom logic to determine if our call center is open.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
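&lt;p&gt;As a sketch of that custom logic (the schedule, hours, and function names below are illustrative assumptions, not actual production code), the Lambda function receiving the alarm could gate the notification like this:&lt;/p&gt;

```typescript
// Hypothetical "is the call center open?" gate, assuming a Monday-to-Friday
// 08:00-18:00 UTC schedule. A Lambda receiving the CloudWatch alarm would
// call this check and only forward the notification (e.g. to SNS) when open.
const OPENING_HOUR_UTC = 8;  // assumed opening time
const CLOSING_HOUR_UTC = 18; // assumed closing time

function isCallCenterOpen(now: Date): boolean {
  const day = now.getUTCDay();            // 0 = Sunday, 6 = Saturday
  const hour = now.getUTCHours();
  const isWeekday = day >= 1 && day <= 5; // closed on weekends
  return isWeekday && hour >= OPENING_HOUR_UTC && hour < CLOSING_HOUR_UTC;
}

// Inside the Lambda handler, suppress weekend/after-hours anomaly alerts:
function shouldForwardAlarm(now: Date): boolean {
  return isCallCenterOpen(now);
}
```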

&lt;h3&gt;
  
  
  Business Metrics
&lt;/h3&gt;

&lt;h4&gt;
  
  
  LongestQueueWaitTime — Seconds
&lt;/h4&gt;

&lt;p&gt;Customer satisfaction depends on minimizing wait times. Monitoring this metric helps you identify and address queue bottlenecks in real time.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Sum&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  QueueSize — Count
&lt;/h4&gt;

&lt;p&gt;Knowing the number of contacts in the queue at any given moment is essential for resource allocation and managing customer expectations.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;100&lt;/strong&gt; &lt;em&gt;(depending on your SLAs)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Sum&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  QueueCapacityExceededError — Count
&lt;/h4&gt;

&lt;p&gt;Monitoring this metric helps prevent service disruptions caused by a full queue, ensuring smooth call center operation.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Sum&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  MissedCalls — Count
&lt;/h4&gt;

&lt;p&gt;Missed calls can result in lost business opportunities and dissatisfied customers. Keeping track of missed calls helps you reduce response times and improve service quality.&lt;/p&gt;

&lt;p&gt;⚖️Threshold: &lt;strong&gt;10&lt;/strong&gt; &lt;em&gt;(depending on your SLAs)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;📅Evaluation Period: &lt;strong&gt;1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⏳Period: &lt;strong&gt;60&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📈Statistic: &lt;strong&gt;Sum&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Monitoring with AWS
&lt;/h2&gt;

&lt;p&gt;Monitoring our call center on AWS is a straightforward process. We simply combine Amazon CloudWatch with Amazon SNS to orchestrate the perfect monitoring symphony, which means creating a fully automated notification service that triggers alerts as needed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️ Note: I won’t cover every single metric alarm (you can find them inside the repository). Instead, I’ll show one example of how to customize the monitoring of incoming calls which is the &lt;strong&gt;Custom: IncomingCalls — Count&lt;/strong&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s start by looking at an example of a metric alarm configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_metric_alarm" "concurrent_calls_percentage" {
  alarm_name          = "${var.project}-connect-concurrent-calls-percentage"
  alarm_description   = "Alarm for Connect Concurrent Calls Percentage"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  metric_name         = "ConcurrentCallsPercentage"
  namespace           = "AWS/Connect"
  statistic           = "Maximum"
  unit                = "Percent"
  dimensions = {
    InstanceId  = var.instance_id
    MetricGroup = "VoiceCalls"
  }
  evaluation_periods = 1
  threshold          = 80
  period             = 300

  alarm_actions      = var.sns_topic_arn != null ? [var.sns_topic_arn] : null
  treat_missing_data = "notBreaching"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This alarm is designed to monitor the “ConcurrentCallsPercentage” metric. It triggers when the maximum value reaches 80% within a single 5-minute evaluation period, generating an SNS email notification.&lt;/p&gt;
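&lt;p&gt;To make the evaluation concrete, here is a small sketch of how that alarm logic behaves. It mimics CloudWatch's documented semantics in plain TypeScript; it is not CloudWatch's actual code:&lt;/p&gt;

```typescript
// Sketch of the alarm above: take the Maximum statistic over each 300-second
// period, then breach when it meets the threshold for `evaluationPeriods`
// consecutive periods (GreaterThanOrEqualToThreshold).
type AlarmConfig = { threshold: number; evaluationPeriods: number };

// The "Maximum" statistic over the samples collected within one period
function maximum(datapoints: number[]): number {
  return Math.max(...datapoints);
}

function isBreaching(periodMaxima: number[], cfg: AlarmConfig): boolean {
  if (periodMaxima.length < cfg.evaluationPeriods) return false;
  // only the most recent `evaluationPeriods` periods are considered
  return periodMaxima
    .slice(-cfg.evaluationPeriods)
    .every((max) => max >= cfg.threshold);
}

// ConcurrentCallsPercentage samples within one 5-minute period:
const periodMax = maximum([42, 61, 83, 70]); // Maximum statistic
const alarmState = isBreaching([periodMax], { threshold: 80, evaluationPeriods: 1 });
```

&lt;p&gt;With &lt;code&gt;evaluation_periods = 1&lt;/code&gt;, a single 5-minute window peaking at or above 80% is enough to fire the alarm, which is exactly the early-warning behavior we want for capacity planning.&lt;/p&gt;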

&lt;p&gt;But our monitoring efforts don’t end here. To gain deeper insights into incoming calls and understand which numbers are being dialed and how often, we need to set up a custom metric filter. Here’s the code for it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_log_metric_filter" "custom_incoming_calls" {
  name           = "${var.project}-connect-incoming-calls"
  pattern        = "{ $.Parameters.Value != \"\" &amp;amp;&amp;amp; $.Parameters.Key = \"SystemEndpointAddress\" }"
  log_group_name = var.connect_log_group_name

  metric_transformation {
    name          = "IncomingCalls"
    namespace     = var.project
    value         = "1"
    unit          = "Count"
    dimensions = {
      IncomingCalls = "$.Parameters.Value"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the pattern for this metric filter is defined as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ $.Parameters.Value != "" &amp;amp;&amp;amp; $.Parameters.Key = "SystemEndpointAddress" }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
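&lt;p&gt;&lt;em&gt;To see what this pattern actually selects, here is a hedged TypeScript sketch of the same condition applied to a parsed contact flow log event (the event shape below is illustrative, not the full log schema):&lt;/em&gt;&lt;/p&gt;

```typescript
// The same condition the metric filter pattern expresses, as plain code:
// match events where Parameters.Key is "SystemEndpointAddress" and
// Parameters.Value is non-empty. The event shape is illustrative.
interface FlowLogEvent {
  Parameters?: { Key?: string; Value?: string };
}

function matchesIncomingCallFilter(event: FlowLogEvent): boolean {
  const params = event.Parameters;
  if (params === undefined) return false;
  if (params.Key !== "SystemEndpointAddress") return false;
  if (params.Value === undefined) return false;
  return params.Value !== "";
}

const sample: FlowLogEvent = {
  Parameters: { Key: "SystemEndpointAddress", Value: "+390123456789" },
};
console.log(matchesIncomingCallFilter(sample)); // true
```

&lt;p&gt;Each matching event then emits a count of 1 into the custom namespace, with the dialed number attached as a dimension, as defined in the metric transformation above.&lt;/p&gt;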



&lt;p&gt;To make this filter work, we must update the contact flow by adding a “Set Contact Attribute” block that contains the “System Dialed number” value, like so 👇&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vBGeh-Ms--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/68d1af32-d921-4a42-b8a7-9cc92d0513ab" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vBGeh-Ms--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/68d1af32-d921-4a42-b8a7-9cc92d0513ab" alt="How to set a custom attribute containing the dialed phone number" width="359" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By implementing this custom attribute, we are able to track incoming calls and gain valuable insights into which phone numbers are being dialed and how frequently.&lt;/p&gt;

&lt;p&gt;And there we have it, we have successfully monitored our call center on AWS🎉.&lt;/p&gt;




&lt;h2&gt;
  
  
  Data Visualization on AWS
&lt;/h2&gt;

&lt;p&gt;Yeah! You’re absolutely right, we’ve accomplished a lot so far, but we can’t call it a day yet! When it comes to monitoring a call center, one of the most crucial aspects is visualizing the metrics in a single, comprehensive dashboard. Fortunately, Amazon CloudWatch allows us to create such dashboards easily, much like a composer brings together various musical notes to create a symphony. Let’s take a look at an example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aLP7sANO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/fac62112-ab4a-4bc3-9d76-f4865bfd3d24" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aLP7sANO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/fac62112-ab4a-4bc3-9d76-f4865bfd3d24" alt="Amazon CloudWatch dashboard" width="720" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this dashboard in place, we have successfully completed the visualization aspect of our monitoring system. This real-time dashboard provides us with the ability to ensure that our call center is running smoothly, free from errors or slowdowns. 🦥&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️ Note: you have the option to create a custom dashboard in Amazon CloudWatch directly with Terraform. While some may find this convenient, others, like myself, prefer to create dashboards manually using the AWS Console and then either import them or copy and paste the structure into Terraform. If you’re curious about the Terraform approach, you can find more information here 🔗&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_dashboard"&gt;https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_dashboard&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There you have it, folks! With that, we’ve reached the final crescendo 🎶! While the first article covered the essential aspects of creating a robust and efficient call center infrastructure, in this post we’ve learned how to monitor a call center on AWS. From the initial setup and configuration of Amazon Connect to real-time monitoring and data visualization using Amazon CloudWatch, we’ve explored the key steps to ensure our call center operates seamlessly.&lt;/p&gt;

&lt;p&gt;By following this guide, you can not only establish a reliable call center but also gain valuable insights into its performance, ensuring that it continues to meet the needs of your customers and your business, creating a symphony of success. 🎹.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Here you can find the GitHub repository: &lt;a href="https://github.com/Depaa/amazon-connect-terraform/tree/main"&gt;https://github.com/Depaa/amazon-connect-terraform/tree/main&lt;/a&gt; 😉&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you enjoyed this article, please let me know in the comment section or send me a DM. I’m always happy to chat! ✌️&lt;/p&gt;

&lt;p&gt;Thank you so much for reading! 🙏 Keep an eye out for more AWS-related posts, and feel free to connect with me on LinkedIn 👉 &lt;a href="https://www.linkedin.com/in/matteo-depascale/"&gt;https://www.linkedin.com/in/matteo-depascale/&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/connect/latest/adminguide/monitoring-cloudwatch.html"&gt;https://docs.aws.amazon.com/connect/latest/adminguide/monitoring-cloudwatch.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/connect/latest/adminguide/architecture-guidance.html"&gt;https://docs.aws.amazon.com/connect/latest/adminguide/architecture-guidance.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Disclaimer: opinions expressed are solely my own and do not express the views or opinions of my employer.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>serverless</category>
      <category>contactcenter</category>
      <category>monitoring</category>
      <category>aws</category>
    </item>
    <item>
      <title>Setting Up Your First Call Center on AWS: A Step By Step Guide</title>
      <dc:creator>Matteo Depascale</dc:creator>
      <pubDate>Wed, 20 Dec 2023 13:00:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/setting-up-your-first-call-center-on-aws-a-step-by-step-guide-3j39</link>
      <guid>https://dev.to/aws-builders/setting-up-your-first-call-center-on-aws-a-step-by-step-guide-3j39</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Discover how to set up your first Amazon Connect instance with this step-by-step guide. Explore features, pricing, and secure call center management.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In today's digital age, where customer service is essential, having an efficient and scalable call center solution is crucial. Amazon Connect, a cloud-based contact center service by Amazon Web Services (AWS), offers a robust and flexible platform to create and manage your call center operations seamlessly.&lt;/p&gt;

&lt;p&gt;In this step-by-step guide, we will walk through the process of creating an Amazon Connect instance, exploring its key features, understanding the pricing model, ensuring the secure storage of call recordings and contact trace records, and after that, we can test it out!🛠️&lt;/p&gt;

&lt;p&gt;Whether it's a small business or a large enterprise, Amazon Connect certainly has a lot to bring to the table. Let's get started! 🚀&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️ About Me: I have been working with Amazon Connect since November 2022, and our instance successfully handles 10,000 concurrent calls every single day. Through this series, I aim to share my hands-on experience in building and managing a contact center on AWS.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  What is Amazon Connect
&lt;/h2&gt;

&lt;p&gt;Let's get down to the basics and understand what Amazon Connect is all about. Imagine we have a business, be it small or large, and we're looking to enhance our customer service operations. Well, Amazon Connect is the solution we've been searching for.&lt;/p&gt;

&lt;p&gt;In essence, Amazon Connect is a cloud-based, managed service provided by Amazon Web Services (AWS). It's designed to simplify how you handle customer interactions. One of the significant advantages of Amazon Connect is its ease of use. We won't need to invest in costly hardware💲, worry about maintenance, or navigate through intricate software installations. Everything is managed in the cloud, eliminating the need for outdated, on-premises systems.&lt;/p&gt;

&lt;p&gt;Okay, we are only scratching the surface. AWS's service has a lot more to offer. Yes, we can make inbound calls, but we can also handle outbound calls. We can use its chat functionality or manage agents' scheduling, even guiding them to improve customer satisfaction. It has many more features, but for now, we will cover only the basics. I don't want this post to become bloated with content 🌋.&lt;/p&gt;

&lt;p&gt;Now, let's address the question on everyone's mind: How much does Amazon Connect cost?&lt;/p&gt;




&lt;h2&gt;
  
  
  Amazon Connect Pricing
&lt;/h2&gt;

&lt;p&gt;The service offers a pay-as-you-go pricing model, which means you only pay for what you use! Essentially the billing motto is: you pay based on the minutes you spend talking📞. With that in mind, let's provide an example of a real bill, assuming our call center works exclusively in Italy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;End customer minutes used:&lt;/strong&gt; $0.018 per minute. This means that AWS charges based on the total accumulated minutes of voice calls that the contact center handles.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Outbound calls:&lt;/strong&gt; $0.0200 per minute.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inbound calls (DID):&lt;/strong&gt; $0.0040 per minute. I used DID as an example. If you had a Toll-Free or UIFN number, the price per minute changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claimed Phone Numbers (Toll-Free):&lt;/strong&gt; $0.32 per number per day. This charge applies when you claim a phone number. If you had a DID or UIFN number, the price changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To sum it up with an example, here are the assumptions for a 30-day month:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;10 Toll-Free numbers claimed = 10 * $0.32 * 30 days = $96&lt;/li&gt;
&lt;li&gt;On average, call duration is 5 minutes, and we have 500 calls per day, totaling 75,000 minutes = 75,000 * $0.018 = $1,350&lt;/li&gt;
&lt;li&gt;60,000 minutes are from Inbound calls = 60,000 * $0.0040 = $240&lt;/li&gt;
&lt;li&gt;15,000 minutes are from Outbound calls = 15,000 * $0.0200 = $300&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This leaves us with a grand total of $1,986 per month for 500 daily calls💲.&lt;/p&gt;
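&lt;p&gt;&lt;em&gt;As a quick sanity check of the arithmetic, here is a sketch using the same assumed rates. Note that the Toll-Free number charge is quoted per number per day, so a 30-day month multiplies it by 30:&lt;/em&gt;&lt;/p&gt;

```typescript
// Sanity check of the monthly bill, using the rates assumed above.
const endCustomerMinutes = 500 * 5 * 30; // 500 calls/day * 5 min * 30 days = 75,000

const serviceCharge = endCustomerMinutes * 0.018; // ~ $1,350
const inboundCharge = 60000 * 0.004;              // ~ $240
const outboundCharge = 15000 * 0.02;              // ~ $300
// The number charge is per number PER DAY: 10 numbers * $0.32 * 30 days.
const numbersCharge = 10 * 0.32 * 30;             // ~ $96

const total = serviceCharge + inboundCharge + outboundCharge + numbersCharge;
console.log(total.toFixed(2)); // "1986.00"
```

&lt;p&gt;Keep in mind these are list prices for a single region and number type; a different mix of DID, Toll-Free, and UIFN numbers will shift the totals.&lt;/p&gt;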

&lt;p&gt;Now that we've got a solid grasp of Amazon Connect pricing, it's time to roll up our sleeves and dive into the practical side of things🤿.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️Note: we are going to use Terraform as our Infrastructure as Code (IaC) tool. Every resource will be created using it.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Infrastructure
&lt;/h2&gt;

&lt;p&gt;It may seem simple at first - in fact, it is straightforward! However, to ensure the call center is fully prepared and compliant, we need to include the following essential resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;KMS Key for &lt;strong&gt;data encryption&lt;/strong&gt;. This is critical to secure sensitive data.&lt;/li&gt;
&lt;li&gt;S3 Bucket for storing &lt;strong&gt;call recordings&lt;/strong&gt;. This is crucial for compliance, as regulations often require call recordings to be stored for several years, depending on the location of the call center.&lt;/li&gt;
&lt;li&gt;Kinesis Stream with Lambda for retrieving &lt;strong&gt;Agent Events&lt;/strong&gt;. Agent Events play a vital role in custom agent logic.&lt;/li&gt;
&lt;li&gt;Kinesis Stream with Lambda for retrieving &lt;strong&gt;Contact Trace Records&lt;/strong&gt;. Compliance is not the only reason to use Contact Trace Records; you'll also discover their incredible potential when integrated into a Data Warehouse.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phone number&lt;/strong&gt; associated with a Contact Flow. Contact Flows define the customer experience within a contact center. They consist of a series of instructions that determine how customer interactions are handled and routed within the system. For instance, in the example below, we set the logging and recording behavior, play a prompt to customers, and then redirect them to available Agents.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q06lcLsv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/bfac6388-a5fe-4df7-a16d-c2c7277baf3b" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q06lcLsv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/bfac6388-a5fe-4df7-a16d-c2c7277baf3b" alt="Contact Flow Example" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These components work together to create a robust and compliant call center infrastructure. &lt;/p&gt;

&lt;p&gt;Oh, and one more thing! This architecture is designed for high availability across 3 different Availability Zones. Pretty cool, right? No added cost for in-region disaster recovery. ❤️️&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️Note: for the sake of simplicity, our Lambda function simply prints the event. Normally, you would want your Lambda function to incorporate custom logic for your CRM or even attach a Kinesis Firehose to the Kinesis Stream to save data to S3 for compliance purposes.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Creating Your Amazon Connect Instance
&lt;/h2&gt;

&lt;p&gt;As I mentioned before, it’s really straightforward!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_connect_instance" "this" {
  instance_alias            = [YOUR_INSTANCE_ALIAS]
  identity_management_type  = "CONNECT_MANAGED"
  inbound_calls_enabled     = true
  outbound_calls_enabled    = true
  contact_flow_logs_enabled = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yes, that’s really all there is to it! With these few lines, we were able to create our Call Center. Does it work? Yes, it does! It even includes a few Contact Flow examples you can run.&lt;/p&gt;

&lt;p&gt;But what do these lines mean? With this configuration, we’ve enabled contact flow logs for our instance, along with the ability to receive inbound and make outbound calls. We could have also enabled Contact Lens, but it’s not necessary for now.&lt;/p&gt;

&lt;p&gt;Anyway, the purpose of this post is to show how to set up a real call center. So, we can’t stop here, we have a few building blocks to create. 🧱&lt;/p&gt;




&lt;h2&gt;
  
  
  Securely Storing Call Recordings
&lt;/h2&gt;

&lt;p&gt;Guess what? We can natively store Call Recordings in Amazon S3 with just a few lines of code. But there are a couple of really handy features of S3 that we need to keep an eye on:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Object lock&lt;/strong&gt; configuration: this feature gives us an extra layer of protection against changes or deletions of recordings for compliance purposes. You can also set up the retention period as needed.&lt;/li&gt;
&lt;li&gt;Server-side &lt;strong&gt;encryption&lt;/strong&gt;: of course, there’s no real “Security” without KMS, so we need to ensure that we create a key and use it to encrypt everything.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the end, we’ll have this configuration for storing call recordings:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_connect_instance_storage_config" "call_recordings" {
  instance_id   = module.connect_instance.id
  resource_type = "CALL_RECORDINGS"

  storage_config {
    s3_config {
      bucket_name   = module.s3_bucket.s3_bucket_id
      bucket_prefix = "CallRecordings"

      encryption_config {
        encryption_type = "KMS"
        key_id          = module.kms_key.key_arn
      }
    }
    storage_type = "S3"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These steps ensure that your call recordings are securely stored in Amazon S3.&lt;/p&gt;




&lt;h2&gt;
  
  
  Saving Contact Trace Records and Agent Events
&lt;/h2&gt;

&lt;p&gt;I highly recommend saving them to S3 as well! But first, we need to go through a Kinesis Stream, which enables future custom real-time development. In this demo, I’ve created a Kinesis stream and a Lambda function that reads from the stream and prints records. Linking the Stream to Amazon Connect is seamless. Here’s the configuration for Contact Trace Records (it’s the same for Agent Events):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_connect_instance_storage_config" "contact_trace_records" {
  instance_id   = module.connect_instance.id
  resource_type = "CONTACT_TRACE_RECORDS"

  storage_config {
    kinesis_stream_config {
      stream_arn = module.kinesis_contact_trace_records.arn
    }
    storage_type = "KINESIS_STREAM"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
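&lt;p&gt;&lt;em&gt;For reference, the demo Lambda that reads from the stream and prints records can be sketched like this (assuming a Node.js runtime; the type names are illustrative, and Kinesis delivers each record’s payload base64-encoded):&lt;/em&gt;&lt;/p&gt;

```typescript
// Minimal sketch of a Lambda handler printing records from the Kinesis
// stream wired to Amazon Connect above (Node.js runtime assumed).
interface KinesisRecord {
  kinesis: { data: string };
}

interface KinesisEvent {
  Records: KinesisRecord[];
}

// Kinesis delivers each record's payload base64-encoded.
export function decodeRecord(data: string): string {
  return Buffer.from(data, "base64").toString("utf8");
}

export async function handler(event: KinesisEvent) {
  for (const record of event.Records) {
    // e.g. a contact trace record or an agent event, as a JSON string
    console.log(decodeRecord(record.kinesis.data));
  }
}
```

&lt;p&gt;In a real setup, this is where you’d plug in your CRM logic or fan the data out to S3 for compliance, as mentioned earlier.&lt;/p&gt;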



&lt;p&gt;And there you have it — our contact center is now ready to be safely used! 🚀&lt;/p&gt;




&lt;h2&gt;
  
  
  Configuring Phone Numbers
&lt;/h2&gt;

&lt;p&gt;A Call Center without a phone number is almost useless. Let’s explore how we can claim one and what steps we need to follow afterward.&lt;/p&gt;

&lt;p&gt;Claiming a phone number isn’t automated; you’ll need to open a ticket or request it directly in the Amazon Connect instance.&lt;/p&gt;

&lt;p&gt;First, we need to log in to our instance. Since it’s the first time, and we don’t have any users created, we’ll need to use the emergency access feature 🚨.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JvYD2mcs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/f3dc58ca-297d-480d-9553-2d71778b1193" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JvYD2mcs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/f3dc58ca-297d-480d-9553-2d71778b1193" alt="Emergency Log In" width="720" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, navigate to the &lt;strong&gt;“Phone numbers”&lt;/strong&gt; section and click the &lt;strong&gt;“Claim a number”&lt;/strong&gt; button in the top-right corner. Find the country from which you need the number, select the number, and you’re all set!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️Note: if there are no numbers available for that country, it’s because some countries require verification even for Toll-Free numbers. You can find more information for each country here: &lt;a href="https://docs.aws.amazon.com/connect/latest/adminguide/phone-number-requirements.html"&gt;https://docs.aws.amazon.com/connect/latest/adminguide/phone-number-requirements.html&lt;/a&gt;.&lt;br&gt;
P.S. You can select another country without verification requirements if needed.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With our first number claimed, the only thing left is to test our Call Center.&lt;/p&gt;




&lt;h2&gt;
  
  
  Testing the Call Center
&lt;/h2&gt;

&lt;p&gt;Now that we’ve claimed our phone number, we can associate it with a Contact Flow. Let’s do just that! Click on the newly claimed phone number and select “Sample recording behavior.”&lt;/p&gt;

&lt;p&gt;One more thing we need to do is have an Agent available. To do this, select “Agent Workspace” or “Contact Control Panel” from the top-right header bar, and then select “Available.”&lt;/p&gt;

&lt;p&gt;Now, we can call the number we claimed. After a few steps in the sample Contact Flow, we’ll call an agent (ourselves), and our Panel should ring ☎️.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bgiS5wBq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/82465637-fd42-4a56-97be-cb9d2f5865ec" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bgiS5wBq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/82465637-fd42-4a56-97be-cb9d2f5865ec" alt="Contact Control Panel with incoming call" width="330" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we’ve had a few minutes of fun talking to ourselves, we can end the call and check the results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The S3 bucket has a new folder named “CallRecordings,” which contains our recording.&lt;/li&gt;
&lt;li&gt;CloudWatch Logs has a log group named after your instance. With this sample flow, we won’t see any logs, but if we had added the “Set logging behavior” block to the contact flow, we would have seen everything that happened during the interaction.&lt;/li&gt;
&lt;li&gt;CloudWatch Logs has two new log groups for our Lambda functions. In these, we can find the agent events and contact trace records that were passed from the Kinesis Stream to our Lambda function.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pretty cool, isn’t it? With just a few configurations, we’ve managed to create a polished Call Center!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations, you’ve successfully embarked on the journey of setting up your very own Amazon Connect call center! But remember, setting up your call center is just the beginning. Amazon Connect offers endless possibilities for customization and optimization.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Here you can find the GitHub repository: &lt;a href="https://github.com/Depaa/amazon-connect-terraform/tree/main"&gt;https://github.com/Depaa/amazon-connect-terraform/tree/main&lt;/a&gt; 😉&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you enjoyed this article, please let me know in the comment section or send me a DM. I’m always happy to chat! ✌️&lt;/p&gt;

&lt;p&gt;Thank you so much for reading! 🙏 Keep an eye out for more AWS-related posts, and feel free to connect with me on LinkedIn 👉 &lt;a href="https://www.linkedin.com/in/matteo-depascale/"&gt;https://www.linkedin.com/in/matteo-depascale/&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/connect/latest/adminguide/what-is-amazon-connect.html"&gt;https://docs.aws.amazon.com/connect/latest/adminguide/what-is-amazon-connect.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/connect/latest/adminguide/phone-number-requirements.html"&gt;https://docs.aws.amazon.com/connect/latest/adminguide/phone-number-requirements.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/connect/pricing/"&gt;https://aws.amazon.com/connect/pricing/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>serverless</category>
      <category>contactcenter</category>
      <category>tutorial</category>
      <category>aws</category>
    </item>
    <item>
      <title>PartyRock-Powered Social Media AI Assistant</title>
      <dc:creator>Matteo Depascale</dc:creator>
      <pubDate>Thu, 16 Nov 2023 17:59:44 +0000</pubDate>
      <link>https://dev.to/aws-builders/partyrock-powered-social-media-ai-assistant-929</link>
      <guid>https://dev.to/aws-builders/partyrock-powered-social-media-ai-assistant-929</guid>
      <description>&lt;p&gt;Elevate your social media game with PartyRock! This Generative AI assistant crafts engaging posts, unleashing creativity for a standout online presence. Lean everything about PartyRock!&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the ever-changing realm of Generative Artificial Intelligence, a new tool has popped up. Its goal is to make learning about Generative AI models easier, mostly for prompt engineering with the models available through Amazon Bedrock. PartyRock isn't an official AWS service; it's a playground built on Amazon Bedrock, and it aims to let us experiment with different foundation models.&lt;/p&gt;

&lt;p&gt;In this blog post, we are going to learn how to use PartyRock with a pretty cool example (my one social media manager friend says it's cool, so I'm sticking with it) 😎.&lt;/p&gt;




&lt;h2&gt;
  
  
  Social Media Assistant
&lt;/h2&gt;

&lt;p&gt;Essentially, we are going to build a social media assistant that will help us write social media posts starting from a context, which could be a blog post, an announcement, etc. To keep it interesting, we are going to have tone suggestions, word limitations, and a customized post refinement. Yes, we are going to generate images as well 🔝.&lt;/p&gt;




&lt;h2&gt;
  
  
  Create New Applications
&lt;/h2&gt;

&lt;p&gt;Let's jump right into it and create our own application. Essentially, all we need is a great prompt, and we're done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1t0Lq04B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/17b80d44-d398-4ba9-8b31-c1cec24e2c30" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1t0Lq04B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/17b80d44-d398-4ba9-8b31-c1cec24e2c30" alt="PartyRock app builder interface" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we can see how we can use the "App Builder" to generate a custom application within PartyRock.&lt;/p&gt;

&lt;p&gt;This is my prompt: "This app will help you write better social media posts. By providing the context, the app will generate a social media post. Additionally, we can customize the social media post with our own ideas, and the app will generate a new version of our social media post."&lt;/p&gt;

&lt;p&gt;After hitting "Generate App," in a few seconds, we are going to see our app. The app will take a context and generate a social media post. After that, we can customize the generated post with our own ideas and suggestions, and the app will generate an updated social media post.&lt;/p&gt;




&lt;h2&gt;
  
  
  Adding Widgets
&lt;/h2&gt;

&lt;p&gt;There are five types of widgets we can add:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User Input&lt;/li&gt;
&lt;li&gt;Static Text&lt;/li&gt;
&lt;li&gt;Text Generation&lt;/li&gt;
&lt;li&gt;Image Generation&lt;/li&gt;
&lt;li&gt;Chatbot&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They are pretty much self-explanatory. Let's try to add a few widgets to our application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ymlZBxBy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/0574218e-ebe1-475f-b172-efb9267567d9" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ymlZBxBy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/0574218e-ebe1-475f-b172-efb9267567d9" alt="PartyRock widget interface" width="675" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this interface, I'm going to add a widget for selecting a tone and one for limiting the number of words (I'm looking at you Twitter/X). Then I want to add a widget for generating an image related to the prompt so that I can use it in the social media post. You know what could be really handy? Adding a chatbot that gives us feedback on our post ✅.&lt;/p&gt;




&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;After spending a few minutes playing with the prompt to find the best one, this is a gif that shows the entire application:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bndzJ67A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/a0e2eeb8-52c9-43a4-bec4-5e568651d261" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bndzJ67A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://content.cloudnature.net/images/a0e2eeb8-52c9-43a4-bec4-5e568651d261" alt="PartyRock Social Media Assistant demo" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Actually, I see myself using this tool to receive feedback on my own social media posts and improve their quality.&lt;/p&gt;




&lt;h2&gt;
  
  
  Integration
&lt;/h2&gt;

&lt;p&gt;Now that we've seen what this tool is capable of, I want to mention that we can also customize each individual widget in order to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Change inference parameters such as temperature and top p&lt;/li&gt;
&lt;li&gt;Change the foundation model used within the widget&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This tool isn't an official AWS service, but AWS built it. When talking with them, they said they're going to monitor usage and add features to PartyRock as demand grows. That said, I hope to see some kind of Retrieval Augmented Generation (RAG) widget, as well as image-to-image generation 🚀.&lt;/p&gt;




&lt;h2&gt;
  
  
  Pricing
&lt;/h2&gt;

&lt;p&gt;As of now, it has a generous free trial. In the future, I expect we'll sign in with an AWS account, so that costs are billed directly through Amazon Bedrock, or perhaps through an AWS Marketplace subscription.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Overall, it was a great experience to brainstorm a few ideas and try them out in a matter of seconds. I love the fact that I can reference inputs and outputs from other widgets. I see myself using this tool for testing a few ideas before actually starting to develop them, which means I can see them come to light with little to no effort.&lt;/p&gt;

&lt;p&gt;Want to use and clone my application? Here is the link: &lt;a href="https://partyrock.aws/u/Depa/cEEGq-AMY/Social-Media-Assistant"&gt;https://partyrock.aws/u/Depa/cEEGq-AMY/Social-Media-Assistant&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you enjoyed this article, let me know 🚀.&lt;/p&gt;

&lt;p&gt;Thank you so much for reading! 🙏 Keep an eye out for more AWS related posts, and feel free to connect with me on LinkedIn 👉 &lt;a href="https://www.linkedin.com/in/matteo-depascale/"&gt;https://www.linkedin.com/in/matteo-depascale/&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;If you want to try it out, here is the website:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://partyrock.aws/"&gt;https://partyrock.aws/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://partyrock.aws/guide/faq"&gt;https://partyrock.aws/guide/faq&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Disclaimer: opinions expressed are solely my own and do not express the views or opinions of my employer.&lt;/em&gt; &lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>generativeai</category>
      <category>partyrock</category>
      <category>amazonbedrock</category>
      <category>promptengineering</category>
    </item>
    <item>
      <title>Amazon Bedrock For JavaScript and TypeScript Developers</title>
      <dc:creator>Matteo Depascale</dc:creator>
      <pubDate>Mon, 06 Nov 2023 13:00:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/amazon-bedrock-for-javascript-and-typescript-developers-4m5b</link>
      <guid>https://dev.to/aws-builders/amazon-bedrock-for-javascript-and-typescript-developers-4m5b</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Hey! If you are here, it means you are planning to use Amazon Bedrock, and every example you see uses Python. So, you might have typed on Google something like '"Amazon" "Bedrock" "Typescript"' and somehow you ended up here (not complaining though 😜).&lt;/p&gt;

&lt;p&gt;This blog post covers examples in Node.js, both JavaScript and TypeScript, for using Amazon Bedrock SDK V3. You can expect two things from this blog post: examples and my thoughts.&lt;/p&gt;

&lt;p&gt;Did I mention there is one example for each model available in Amazon Bedrock? 🚀&lt;/p&gt;

&lt;p&gt;Let's jump right into it.&lt;/p&gt;

&lt;p&gt;What? Wait a sec! If you have never used Amazon Bedrock, I have an article waiting for you after this one: &lt;a href="https://cloudnature.net/blog/the-complete-guide-to-amazon-bedrock-for-generative-ai" rel="noopener noreferrer"&gt;The Complete Guide to Amazon Bedrock for Generative AI&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Initializing Clients
&lt;/h2&gt;

&lt;p&gt;"Clients"! Yes, you read it right, this time it's not my typo. We need to install two clients:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install @aws-sdk/client-bedrock
npm install @aws-sdk/client-bedrock-runtime
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;client-bedrock&lt;/strong&gt;: SDK for creating and managing Amazon Bedrock models;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;client-bedrock-runtime&lt;/strong&gt;: SDK for invoking Amazon Bedrock models and running inference on them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, let's import the libraries. Amazon Bedrock is available in different regions, so we need to select one, or it will use the default one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { BedrockRuntimeClient, InvokeModelCommand, InvokeModelCommandInput, InvokeModelCommandOutput, InvokeModelWithResponseStreamCommand, InvokeModelWithResponseStreamCommandInput, InvokeModelWithResponseStreamCommandOutput } from "@aws-sdk/client-bedrock-runtime";

import { BedrockClient, CreateModelCustomizationJobCommand, GetModelCustomizationJobCommand, ListFoundationModelsCommand, CreateModelCustomizationJobCommandInput, CreateModelCustomizationJobCommandOutput, GetModelCustomizationJobCommandInput, GetModelCustomizationJobCommandOutput, ListFoundationModelsCommandInput, ListFoundationModelsCommandOutput } from '@aws-sdk/client-bedrock';

// Give the two clients distinct names (declaring "const client" twice is an error);
// in the snippets below, "client" refers to whichever client matches the command being sent
const client = new BedrockRuntimeClient({ region: process.env.REGION || 'us-east-1' });
const bedrockClient = new BedrockClient({ region: process.env.REGION || 'us-east-1' });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that done, we're ready to start with the &lt;strong&gt;client-bedrock-runtime&lt;/strong&gt; SDK.&lt;/p&gt;

&lt;h2&gt;
  
  
  Invoke Model With Response Stream
&lt;/h2&gt;

&lt;p&gt;This API streams the Generative AI model response to us. For this example, I used the model Anthropic Claude Instant V1.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const MODEL_ID = process.env.MODEL_ID || 'anthropic.claude-instant-v1';
const PROMPT = process.env.PROMPT || 'Hi, who are you?';

const params: InvokeModelWithResponseStreamCommandInput = {
    modelId: MODEL_ID,
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({
      prompt: `\n\nHuman:${PROMPT}\n\nAssistant:`,
      max_tokens_to_sample: 300,
      temperature: 0.5,
      top_k: 250,
      top_p: 1,
    }),
};
const command = new InvokeModelWithResponseStreamCommand(params);
const res = await client.send(command);

const chunks = [];

for await (const event of res.body) {
    if (event.chunk &amp;amp;&amp;amp; event.chunk.bytes) {
        const chunk = JSON.parse(Buffer.from(event.chunk.bytes).toString("utf-8"));
        chunks.push(chunk.completion); // change this line
    } else if (
        event.internalServerException ||
        event.modelStreamErrorException ||
        event.throttlingException ||
        event.validationException
    ) {
        console.error(event);
        break;
    }
};
console.log({
    prompt: PROMPT,
    completion: chunks.join(''),
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above, we can see how to integrate the 'InvokeModelWithResponseStreamCommand' with TypeScript. One thing worth mentioning: if we are running inside a Lambda function, this snippet alone isn't very useful. It becomes useful when combined with &lt;strong&gt;Lambda response streaming&lt;/strong&gt;, so we can effectively stream the Generative AI model's response back to our users. You can find an example here: &lt;a href="https://serverlessland.com/patterns/lambda-streaming-ttfb-pipeline-sam" rel="noopener noreferrer"&gt;Lambda Streaming First Byte (TTFB) Pipeline&lt;/a&gt;.&lt;/p&gt;
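&lt;p&gt;As a side note, the streaming loop above can be wrapped in a small reusable helper. This is my own sketch, not code from the post's repository: it accepts any async iterable of Bedrock stream events, so you can unit-test it with a fake stream before wiring it to 'res.body'.&lt;br&gt;
&lt;/p&gt;

```javascript
// Sketch: collect an Anthropic-style streamed completion from an async
// iterable of Bedrock stream events (each chunk is a small JSON document).
const collectCompletion = async (eventStream) => {
  const chunks = [];
  for await (const event of eventStream) {
    if (event.chunk?.bytes) {
      // Decode the chunk bytes and keep the "completion" text field
      const parsed = JSON.parse(Buffer.from(event.chunk.bytes).toString("utf-8"));
      chunks.push(parsed.completion);
    } else if (
      event.internalServerException ||
      event.modelStreamErrorException ||
      event.throttlingException ||
      event.validationException
    ) {
      console.error(event);
      break;
    }
  }
  return chunks.join('');
};
```

&lt;p&gt;With the real SDK you would call it as 'const completion = await collectCompletion(res.body);'.&lt;/p&gt;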

&lt;h2&gt;
  
  
  Invoke Model
&lt;/h2&gt;

&lt;p&gt;Starting from the basics, we will go through every single model provider available in Amazon Bedrock as of the time I'm writing this blog post (I'll probably keep this post updated if I see you like it 🙂).&lt;/p&gt;

&lt;p&gt;Here is the complete structure to invoke the model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const MODEL_ID = process.env.MODEL_ID || '';
const PROMPT = process.env.PROMPT || '';

const params: InvokeModelCommandInput = {
    modelId: MODEL_ID,
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify(/*Here we place prompt and inference parameters, every model has its own structure 😩*/),
};
const command = new InvokeModelCommand(params);
const res = await client.send(command);
const jsonString = new TextDecoder().decode(res.body);
const modelRes = JSON.parse(jsonString);

const bodyRes = {
    prompt: PROMPT,
    completion: /* Here we place the response from "modelRes", every model has its own response 😩*/,
};
console.debug(bodyRes);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each model has its own body and response parameters. Starting from this code, I'll show the 'params' and how to retrieve the 'response' for each model provider.&lt;/p&gt;
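&lt;p&gt;The per-provider snippets that follow call an 'invokeModel(params)' helper that never gets defined in the post. Here is one hypothetical way to write it (my sketch, not the repository's code): the client and command class are injected, so the same wrapper works with the real 'InvokeModelCommand' or with fakes in tests.&lt;br&gt;
&lt;/p&gt;

```javascript
// Hypothetical helper assumed by the per-provider snippets below:
// wrap command creation and client.send in a single call.
const makeInvokeModel = (client, CommandCtor) => async (params) =>
  client.send(new CommandCtor(params));

// With the real SDK:
// const invokeModel = makeInvokeModel(client, InvokeModelCommand);
```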

&lt;p&gt;I understand that these are all different providers, and generative AI is still too young to have a strong standard, but having to map different parameters and different response objects can be quite confusing. Please, model providers, talk to each other and provide a single, user-friendly way for parameters and model responses 🙏.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️Depending on the region you are in, you may not see every model available.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Invoke Model: Anthropic
&lt;/h2&gt;

&lt;p&gt;Here are the models available for Anthropic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;"anthropic.claude-v1"&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;"anthropic.claude-instant-v1"&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;"anthropic.claude-v2"&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here is the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const MODEL_ID = process.env.MODEL_ID || 'anthropic.claude-instant-v1';
const PROMPT = process.env.PROMPT || 'Hi, who are you?';
const params: InvokeModelCommandInput = {
    modelId: MODEL_ID,
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({
      prompt: `\n\nHuman:${PROMPT}\n\nAssistant:`,
      max_tokens_to_sample: 300,
      temperature: 0.5,
      top_k: 250,
      top_p: 1,
    }),
};
const res = await invokeModel(params);
const jsonString = new TextDecoder().decode(res.body);
const modelRes = JSON.parse(jsonString);

const bodyRes = {
        prompt: PROMPT,
        completion: modelRes.completion,
};
console.debug(bodyRes);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Invoke Model: AI21 Labs
&lt;/h2&gt;

&lt;p&gt;Here are the models available for AI21 Labs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;"ai21.j2-mid-v1"&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;"ai21.j2-ultra-v1"&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here is the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const MODEL_ID = process.env.MODEL_ID || 'ai21.j2-mid';
const PROMPT = process.env.PROMPT || 'Hi, who are you?';
const params: InvokeModelCommandInput = {
    modelId: MODEL_ID,
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({
      prompt: PROMPT,
      maxTokens: 200,
      temperature: 0.7,
      topP: 1,
      stopSequences: [],
      countPenalty: { scale: 0 },
      presencePenalty: { scale: 0 },
      frequencyPenalty: { scale: 0 },
    }),
};
const res = await invokeModel(params);
const jsonString = new TextDecoder().decode(res.body);
const modelRes = JSON.parse(jsonString);

const bodyRes = {
        prompt: PROMPT,
        completion: modelRes.completions[0].data.text,
};
console.debug(bodyRes);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Invoke Model: Cohere
&lt;/h2&gt;

&lt;p&gt;Here are the models available for Cohere:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;"cohere.command-text-v14"&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here is the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const MODEL_ID = process.env.MODEL_ID || 'cohere.command-text-v14';
const PROMPT = process.env.PROMPT || 'Hi, who are you?';
const params: InvokeModelCommandInput = {
    modelId: MODEL_ID,
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({
      prompt: PROMPT,
      max_tokens: 400,
      temperature: 0.75,
      p: 0.01,
      k: 0,
      stop_sequences: [],
      return_likelihoods: "NONE",
    }),
};
const res = await invokeModel(params);
const jsonString = new TextDecoder().decode(res.body);
const modelRes = JSON.parse(jsonString);

const bodyRes = {
        prompt: PROMPT,
        completion: modelRes.generations[0].text,
};
console.debug(bodyRes);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Invoke Model: Amazon Text
&lt;/h2&gt;

&lt;p&gt;Here are the models available for Amazon Text:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;"amazon.titan-text-lite-v1"&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;"amazon.titan-text-express-v1"&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;"amazon.titan-text-agile-v1"&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here is the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const MODEL_ID = process.env.MODEL_ID || 'amazon.titan-text-lite-v1';
const PROMPT = process.env.PROMPT || 'Hi, who are you?';
const params: InvokeModelCommandInput = {
    modelId: MODEL_ID,
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({
      inputText: PROMPT,
      textGenerationConfig: {
        maxTokenCount: 300,
        stopSequences: [],
        temperature: 0,
        topP: 0.9,
      }
    }),
};
const res = await invokeModel(params);
const jsonString = new TextDecoder().decode(res.body);
const modelRes = JSON.parse(jsonString);

const bodyRes = {
        prompt: PROMPT,
        completion: modelRes.results[0].outputText,
};
console.debug(bodyRes);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️Those models are still in preview, but the documentation shows a detailed overview of the parameters Amazon's models expect and the responses they return.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Invoke Model: Amazon Embedding
&lt;/h2&gt;

&lt;p&gt;Here are the models available for Amazon Embedding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;"amazon.titan-embed-text-v1"&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here is the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const MODEL_ID = process.env.MODEL_ID || 'amazon.titan-embed-text-v1';
const PROMPT = process.env.PROMPT || 'Hi, who are you?';
const params: InvokeModelCommandInput = {
    modelId: MODEL_ID,
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({
      inputText: PROMPT,
    }),
};
const res = await invokeModel(params);
const jsonString = new TextDecoder().decode(res.body);
const modelRes = JSON.parse(jsonString);

const bodyRes = {
        prompt: PROMPT,
        embedding: modelRes.embedding,
};
console.debug(bodyRes);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Invoke Model: Stability AI
&lt;/h2&gt;

&lt;p&gt;Here are the models available for Stability AI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;"stability.stable-diffusion-xl-v0"&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here is the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const MODEL_ID = process.env.MODEL_ID || 'stability.stable-diffusion-xl-v0';
const PROMPT = process.env.PROMPT || 'Hi, who are you?';
const params: InvokeModelCommandInput = {
    modelId: MODEL_ID,
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({
      text_prompts: [{ text: PROMPT }],
      cfg_scale: 10,
      seed: 0,
      steps: 50,
    }),
};
const res = await invokeModel(params);
const jsonString = new TextDecoder().decode(res.body);
const modelRes = JSON.parse(jsonString);

const bodyRes = {
        prompt: PROMPT,
        image: modelRes.artifacts[0].base64,
};
console.debug(bodyRes);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, we can save the image in a file. I'll leave this code to the AI 🤖... I'm joking, it's in my repository 😜.&lt;/p&gt;

&lt;p&gt;On a side note, don't you want to know how Stable Diffusion XL V0 responds to the question "Hi, who are you?" Here's the result 👇&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F5d7f7f2c-a59c-45a3-ac7c-a211b2f866a4" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F5d7f7f2c-a59c-45a3-ac7c-a211b2f866a4" alt="Personification of Stability Diffusion XL V0"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Perfect, now that we have explored everything there is in the &lt;strong&gt;client-bedrock-runtime&lt;/strong&gt; SDK, it's time to learn how we can use the &lt;strong&gt;client-bedrock&lt;/strong&gt; SDK 🚅.&lt;/p&gt;
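&lt;p&gt;Since every provider maps its response differently, the mappings from the sections above can be consolidated into one function. This is a sketch of mine, using exactly the response shapes shown in the snippets; anything outside those shapes is an assumption.&lt;br&gt;
&lt;/p&gt;

```javascript
// One place for the per-provider response mappings shown above.
const extractCompletion = (modelId, modelRes) => {
  const provider = modelId.split('.')[0]; // e.g. "anthropic.claude-v2" -> "anthropic"
  switch (provider) {
    case 'anthropic': return modelRes.completion;
    case 'ai21': return modelRes.completions[0].data.text;
    case 'cohere': return modelRes.generations[0].text;
    case 'amazon':
      // Titan text models return "results"; the embedding model returns "embedding"
      return modelRes.results ? modelRes.results[0].outputText : modelRes.embedding;
    case 'stability': return modelRes.artifacts[0].base64;
    default: throw new Error(`Unknown provider: ${provider}`);
  }
};
```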

&lt;h2&gt;
  
  
  List Foundation Models
&lt;/h2&gt;

&lt;p&gt;The easiest of them all: it lists the foundation models available on Amazon Bedrock. Here's the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const params: ListFoundationModelsCommandInput = {
    byInferenceType: "ON_DEMAND", // or "PROVISIONED"
}
const command = new ListFoundationModelsCommand(params);
const res = await client.send(command);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First of all, shouldn't this API return every model without specifying parameters? Why did I use 'byInferenceType'?&lt;/p&gt;

&lt;p&gt;Unfortunately, this API has a bug: called without parameters, it throws an error related to the 'byProvider' parameter. Explicitly passing 'byProvider' also fails with a status 400 error, because the regex validating the model provider name is incorrect. 'byInferenceType' is the least impacting parameter to use if you are starting out with Amazon Bedrock 😊.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️Here's the issue: &lt;a href="https://github.com/aws/aws-sdk-js/issues/4519" rel="noopener noreferrer"&gt;https://github.com/aws/aws-sdk-js/issues/4519&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Create Model Customization Job
&lt;/h2&gt;

&lt;p&gt;This command was taken right from the V3 documentation, which is really well done. &lt;/p&gt;

&lt;p&gt;Here's the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const BUCKET_URI = process.env.BUCKET_URI || 's3://S3_BUCKET_NAME';
const ROLE_ARN = process.env.ROLE_ARN || 'arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME';
const BASE_MODEL_IDENTIFIER = process.env.BASE_MODEL_IDENTIFIER || 'arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-text-express-v1';

const now = new Date();
const params: CreateModelCustomizationJobCommandInput = {
    jobName: `job-${now.getTime()}`, // required
    customModelName: `titan-text-express-v1-${now.getTime()}`, // required
    roleArn: ROLE_ARN, // required
    baseModelIdentifier: BASE_MODEL_IDENTIFIER, // required
    jobTags: [ // TagList
      { // Tag
        key: 'bedrock', // required
        value: 'true', // required
      },
    ],
    customModelTags: [
      {
        key: 'custom-bedrock', // required
        value: 'true', // required
      },
    ],
    trainingDataConfig: {
      s3Uri: `${BUCKET_URI}/training/dataset.jsonl`, // required
    },
    outputDataConfig: {
      s3Uri: `${BUCKET_URI}/output`, // required
    },
    hyperParameters: { // required
      'epochCount': '1',
      'batchSize': '4',
      'learningRate': '0.02',
      'learningRateWarmupSteps': '0',
    },
    // customModelKmsKeyId: 'STRING_VALUE',
    // clientRequestToken: 'STRING_VALUE',
    // validationDataConfig: { // ValidationDataConfig
    //   validators: [ // Validators // required
    //     { // Validator
    //       s3Uri: 'STRING_VALUE', // required
    //     },
    //   ],
    // },
    // vpcConfig: { // VpcConfig
    //   subnetIds: [ // SubnetIds // required
    //     'STRING_VALUE',
    //   ],
    //   securityGroupIds: [ // SecurityGroupIds // required
    //     'STRING_VALUE',
    //   ],
    // },
  };
const command = new CreateModelCustomizationJobCommand(params);
const res = await client.send(command);

console.log(res.jobArn)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fine tuning is still in preview, but from this documentation &lt;a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/bedrock/command/CreateModelCustomizationJobCommand/" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/bedrock/command/CreateModelCustomizationJobCommand/&lt;/a&gt;, we can have a detailed sneak peek 👀 of what's coming.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Model Customization Job
&lt;/h2&gt;

&lt;p&gt;Nothing too special about this one: it gets the job information using its identifier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const JOB_ARN = process.env.JOB_NAME || 'JOB_NAME';
const params: GetModelCustomizationJobCommandInput = {
    jobIdentifier: JOB_ARN,
};
const command = new GetModelCustomizationJobCommand(params);
const res = await client.send(command);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we can check the 'status' and also the 'failureMessage', which is really handy to receive by email on Friday at 17.55 😈.&lt;/p&gt;
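&lt;p&gt;If you'd rather not wait for that Friday-evening email, you can poll the job status yourself. A hypothetical sketch (the status fetcher is injected, so the loop is testable without AWS; with the real SDK it would wrap 'GetModelCustomizationJobCommand'):&lt;br&gt;
&lt;/p&gt;

```javascript
// Poll the customization job until it leaves the "InProgress" state.
const waitForJob = async (getStatus, { intervalMs = 30000, maxAttempts = 60 } = {}) => {
  for (let attempt = maxAttempts; attempt > 0; attempt--) {
    const { status, failureMessage } = await getStatus();
    if (status === 'Completed') return status;
    if (status === 'Failed') throw new Error(failureMessage || 'Job failed');
    // Still in progress: wait before the next check
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for the customization job');
};
```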

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;⚠️There are other APIs from 'client-bedrock' that I won't cover because they are really simple or not as useful as these 3.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There you have it, folks! With these code snippets in mind, you can use Amazon Bedrock like a pro 💻. We went through both SDKs and found workarounds for their bugs. I think this was a nice ride, and hopefully you'll enjoy your own ride more after this article.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Here you can find the GitHub repository: &lt;a href="https://github.com/Depaa/amazon-bedrock-nodejs" rel="noopener noreferrer"&gt;https://github.com/Depaa/amazon-bedrock-nodejs&lt;/a&gt; 😉&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you enjoyed this article, please let me know in the comment section or send me a DM. I'm always happy to chat! ✌️&lt;/p&gt;

&lt;p&gt;Thank you so much for reading! 🙏 Keep an eye out for more AWS-related posts, and feel free to connect with me on LinkedIn 👉 &lt;a href="https://www.linkedin.com/in/matteo-depascale/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/matteo-depascale/&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/bedrock/" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/bedrock/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/bedrock-runtime/" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/bedrock-runtime/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/bedrock/latest/userguide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;https://aws.amazon.com/bedrock/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Disclaimer: opinions expressed are solely my own and do not represent the views or opinions of my employer.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>generativeai</category>
      <category>amazonbedrock</category>
      <category>javascript</category>
      <category>node</category>
    </item>
    <item>
      <title>Top 5 Techniques for Prompt Engineering</title>
      <dc:creator>Matteo Depascale</dc:creator>
      <pubDate>Thu, 02 Nov 2023 12:08:14 +0000</pubDate>
      <link>https://dev.to/aws-builders/top-5-techniques-for-prompt-engineering-1jf5</link>
      <guid>https://dev.to/aws-builders/top-5-techniques-for-prompt-engineering-1jf5</guid>
      <description>&lt;p&gt;Discover the top techniques for prompt engineering, including strategies for crafting effective prompts and optimizing output of your Generative AI model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Are you tired of feeling like your prompts are a bit… meh? Like, they’re not quite getting the job done, but you’re not sure why? Well, buckle up, because we’re about to dive headfirst into the wild world of &lt;strong&gt;prompt engineering&lt;/strong&gt;! &lt;/p&gt;

&lt;p&gt;With these top-secret* techniques, you’ll be whipping up prompts like a pro chef whips up a mean omelette. Your language models will be begging for a break from your never-ending barrage of thoughtful queries.&lt;/p&gt;

&lt;p&gt;*&lt;em&gt;Actually they are not secret, but they are top indeed&lt;/em&gt; 😜&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;em&gt;This post highlights 5 prompt engineering techniques you can incorporate into your use of Generative AI for better results. These methods are not necessarily best practices or tips; however, if you’re interested in learning more, feel free to send me a DM and I’ll be happy to give you some.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  What Makes Up a Prompt?
&lt;/h2&gt;

&lt;p&gt;Basically, a prompt is made up of the following pieces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Instruction&lt;/strong&gt;: Tell the model what to do.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context&lt;/strong&gt;: Give the model extra info to help it understand.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Input Data&lt;/strong&gt;: Provide the data the model will work with.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output Indicator&lt;/strong&gt;: Say what kind of result you want.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the basic parts of a prompt, but you might not always need all of them.&lt;/p&gt;
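&lt;p&gt;In code, assembling a prompt from these parts is just concatenation. A tiny sketch of mine (the part names mirror the list above; any part can be omitted):&lt;br&gt;
&lt;/p&gt;

```javascript
// Build a prompt from its four parts; missing parts are simply skipped.
const buildPrompt = ({ instruction, context, inputData, outputIndicator }) =>
  [instruction, context, inputData, outputIndicator]
    .filter(Boolean) // drop any part that was not provided
    .join('\n\n');
```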

&lt;p&gt;Let’s look at a few prompt engineering techniques to improve your game with Generative AI, starting from the most basic and moving to the most advanced.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️&lt;em&gt;Disclaimer: all tests have been done using llama-2 70b.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  1. Zero-Shot Prompting
&lt;/h2&gt;

&lt;p&gt;This is the simplest technique that you may unconsciously be using in your everyday routine. With this prompt technique, the model can create output without explicit training data for a specific task, thanks to its broad training dataset that enables it to make educated guesses.&lt;/p&gt;

&lt;p&gt;Here are a few examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input: 
  Summarize the main points of the following article: 'Optimizing Content Delivery: The Complete Guide Through S3 Caching and CloudFront.'
Output: 
  ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you google "S3" the answer is straightforward, but for this model it's a tricky question:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input: 
  What does S3 mean?
Output: 
  S3 is an abbreviation that can stand for several things, depending on the context in which it is used...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the ‘S3’ output isn’t quite what we had hoped for. Let’s explore another technique instead. 👇&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Few-Shot Prompting
&lt;/h2&gt;

&lt;p&gt;While Zero-Shot learning can be effective in some cases, it may not always work well, especially when dealing with more complex tasks. However, you can enhance this technique by providing additional context within your prompt. This way, your model will have a better understanding of what’s required and will be able to answer with greater accuracy.&lt;/p&gt;

&lt;p&gt;Think of it like teaching a dog new tricks with just a few treats — it’s challenging, but definitely possible 🐶.&lt;/p&gt;

&lt;p&gt;Let’s take a look at some examples to illustrate this further:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input: 
  EC2 means Elastic Compute Cloud
  AWS means Amazon Web Services
  What does S3 mean?

Output:
  S3 stands for Simple Storage Service. It's a cloud-based storage service...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With a bit more context, our model’s accuracy finally improves, and it gives us the correct answer ✅.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input:
  2 + 2 + x = 5
  10 + 3 + x = 14
  5 * x = x + 4
  Using the examples above, what is the value of x in 5 + 5 + x?

Output:
  ... The equations provided do not have a consistent variable or operation, making it impossible to determine the value of x...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, as you can see in this other prompt, even though we provided several examples, the model was unable to identify the solution of x = 1 ❌.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Chain-of-Thought Prompting
&lt;/h2&gt;

&lt;p&gt;The goal of this technique is to create a logical chain of thoughts, much like explaining things to a curious five-year-old. By breaking down the process step by step, we can help the model understand the reasoning behind the answers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input:
  2 + 2 + x = 5. Because 2 + 2 = 4 so 4 + x = 5, hence x = 1.
  10 + 3 + x = 14. Because 10 + 3 = 13 so 13 + x = 14, hence x = 1.
  5 * x = x + 4. If we had x = 1 than 5 = 1 + 4, hence x = 1.
  Using the examples above, what is the value of x in 5 + 5 + x?

Output:
  Based on the given examples, the value of x in 5 + 5 + x can be determined using the same logic.
  5 + 5 = 10
  10 + x = 10 + 1 = 11
  Therefore, the value of x in 5 + 5 + x is 1. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As shown in the answer, we were ultimately successful in getting the correct answer ✅ by walking the model through the problem-solving process.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Generated Knowledge Prompting
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Feb6daf30-886b-4d2b-bd8e-be44c02e2bcb" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Feb6daf30-886b-4d2b-bd8e-be44c02e2bcb" alt="“Generated Knowledge Prompting for Commonsense Reasoning” whitepaper."&gt;&lt;/a&gt;&lt;br&gt;
This technique involves generating an explanation to be used as part of the prompt, which is created using the AI itself 🤖. The idea is to utilize the AI’s ability to generate text to produce a prompt that includes relevant information and explanations 🔮. This can potentially lead to better outputs from the model.&lt;/p&gt;

&lt;p&gt;Let’s consider an example to demonstrate this technique. We’ll attempt to answer the following question:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input:
  After 5 minutes the AWS Lambda times out. Yes or no?

Output:
  No, the AWS Lambda function does not time out after 15 minutes. The default timeout for an AWS Lambda function is 15 seconds, but it can be configured to run for up to 15 minutes. After the configured timeout period has elapsed, the function will stop executing and return an error response.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Well, this is awkward! The answer is incorrect, but the explanation provided is spot on. Let’s generate some knowledge to obtain a more accurate response; I think the model needs it badly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input: 
  Use this information to generate "knowledge"
  Input: The AWS API Gateway timed out after 30 seconds.
  Knowledge: If a request takes longer than 30 seconds to process, it will automatically time out and return an error response.
  Input: The AWS Lambda function returns an error when the event is too large
  Knowledge: The maximum size of an event that can be processed by an AWS Lambda function is 6 megabytes (MB). If the event exceeds this limit, the function will return an error response. It is recommended to either reduce the size of the event or use a different approach to process larger events.
  Input: After 15 minutes the AWS Lambda times out.

Output:
  Knowledge: The AWS Lambda function has a timeout limit of 15 minutes. If the function does not complete execution within this time frame, it will automatically terminate and return an error response. It is recommended to optimize the function code to reduce execution time.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looks like we’re on the right track! Now, let’s ask the model to provide a compelling response.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input:
  Input: After 15 minutes the AWS Lambda times out. Yes or no?
  Knowledge: The AWS Lambda function has a timeout limit of 15 minutes. If the function does not complete execution within this time frame, it will automatically terminate and return an error response. It is recommended to optimize the function code to reduce execution time.
  Answer:

Output:
  Yes.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Voila! The answer is correct this time around, and it even provides a satisfactory explanation when asked to elaborate. 🎉&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Answer: Yes.
Explanation: According to the knowledge provided, the AWS Lambda function has a timeout limit of 15 minutes. This means that if the function does not complete execution within 15 minutes, it will automatically terminate and return an error response. Therefore, after 15 minutes, the AWS Lambda function will time out. It is important to note that it is recommended to optimize the function code to reduce execution time to avoid hitting the timeout limit.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
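&lt;p&gt;The two-stage flow above is easy to script. Here's a minimal Python sketch of stage two, which simply folds the generated knowledge back into the final question (how you obtain the knowledge string, i.e. the first model call, is up to you):&lt;/p&gt;

```python
def knowledge_prompt(question: str, knowledge: str) -> str:
    """Stage 2 of generated-knowledge prompting: fold the model-generated
    knowledge back into the final question before asking for the answer."""
    return f"Input: {question}\nKnowledge: {knowledge}\nAnswer:"

prompt = knowledge_prompt(
    "After 15 minutes the AWS Lambda times out. Yes or no?",
    "The AWS Lambda function has a timeout limit of 15 minutes.",
)
print(prompt)
```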






&lt;h2&gt;
  
  
  5. Self-Consistency Prompting
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F46a1fd17-480f-42e5-8444-e53e8f09662c" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F46a1fd17-480f-42e5-8444-e53e8f09662c" alt="“Self-consistency improves chain of thought reasoning in language models” whitepaper."&gt;&lt;/a&gt;&lt;br&gt;
The essence of this technique involves asking the model the same prompt repeatedly and taking the majority result as the final outcome. It is an improved version of Chain-of-Thought (CoT) and works best when used in conjunction with CoT.&lt;/p&gt;

&lt;p&gt;Self-Consistency Prompting has been shown to improve results on arithmetic, commonsense, and symbolic reasoning tasks.&lt;/p&gt;

&lt;p&gt;To illustrate how this technique works, let’s consider an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input: 
  If a store has 101 apples and 9 oranges, and it sells 6 apples and 4 oranges, how many fruits are left in the store?

Output:
  After the sales, the store has 95 apples (101 - 6) and 5 oranges (9 - 4).
  So, there are a total of 95 + 5 = 100 fruits left in the store.

Output #2:
  After the sales, the store has 95 apples (101 - 6) and 5 oranges (9 - 4).
  So, there are a total of 100 fruits left in the store: 95 apples + 5 oranges.

Output #3:
  First, we need to calculate the number of apples left in the store. We can do this by subtracting the number of apples sold (6) from the total number of apples (101):101 - 6 = 95
  Now, we need to calculate the number of oranges left in the store. We can do this by subtracting the number of oranges sold (4) from the total number of oranges (9): 9 - 4 = 5
  Finally, we add the number of apples and oranges left in the store to get the total number of fruits left: 95 + 5 = 100
  Therefore, there are 100 fruits left in the store after the sales.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we can see, the model consistently outputs 100 as the answer, which is the correct solution to the problem. Unfortunately, this time we are actually trying to get wrong answers, but this model is too smart for such a simple problem. However, by using &lt;strong&gt;llama2-7b&lt;/strong&gt;, we can generate more compelling (wrong 😈) outputs for our use case.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input: 
  If a store has 101 apples and 9 oranges, and it sells 6 apples and 4 oranges, how many fruits are left in the store?

Output:
  Great question! To find out how many fruits are left in the store after the sales, we need to subtract the number of fruits sold from the total number of fruits available.
  Let's start by identifying the total number of fruits available in the store:
  101 apples + 9 oranges = 110 fruits
  Now, let's calculate the number of fruits sold:
  6 apples + 4 oranges = 10 fruits
  So, there are 110 - 10 = 100 fruits left in the store after the sales.
  I hope this helps! Let me know if you have any other questions.

Output #2:
  Great question! I'm happy to help you with this. Let me see...
  So, if the store had 101 apples and 9 oranges initially, and it sold 6 apples and 4 oranges, that means the total number of fruits sold is 6 + 4 = 10.
  Now, let's calculate the number of fruits left in the store:
  101 - 10 = 91 fruits remaining in the store (including both apples and oranges).
  Therefore, there are 91 fruits left in the store after the sales.

Output #3:
  Thank you for asking! I'm here to help you with your question. However, I noticed that the information provided in the question doesn't add up correctly. The total number of fruits in the store is 101 apples + 9 oranges = 110 fruits.
  After selling 6 apples and 4 oranges, the remaining fruits in the store would be: 110 - 6 = 104 fruits (apples) and 104 - 4 = 100 fruits (oranges).
  So, there are 104 apples and 100 oranges left in the store after the sales.

Output #4:
  Great question! To find out how many fruits are left in the store after the sales, we need to subtract the number of fruits sold from the total number of fruits available.
  Let's start by identifying the total number of fruits available in the store:
  101 apples + 9 oranges = 110 fruits
  Now, let's calculate the number of fruits sold:
  6 apples + 4 oranges = 10 fruits
  So, there are 100 fruits left in the store (110 - 10).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The essence of Self-Consistency Prompting is to take the most common answer, which in this case is 100 fruits left, and treat it as the correct solution. Additionally, by combining this technique with Chain-of-Thought, we may be able to solve even more complex problems.&lt;/p&gt;
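&lt;p&gt;The voting step itself can be automated. Below is a rough Python sketch: the answer extractor is deliberately crude (it just grabs the last number in each completion), and in practice you would sample the completions from your model at a non-zero temperature:&lt;/p&gt;

```python
import re
from collections import Counter

def extract_answer(completion):
    """Crudely pull the final answer out of a completion: grab the last number."""
    numbers = re.findall(r"\d+", completion)
    return numbers[-1] if numbers else None

def self_consistent_answer(completions):
    """Majority vote over the answers extracted from the sampled completions."""
    votes = Counter(a for a in map(extract_answer, completions) if a is not None)
    return votes.most_common(1)[0][0]

# Final lines of the four llama2-7b outputs above:
sampled = [
    "So, there are 110 - 10 = 100 fruits left in the store after the sales.",
    "Therefore, there are 91 fruits left in the store after the sales.",
    "So, there are 104 apples and 100 oranges left in the store after the sales.",
    "So, there are 100 fruits left in the store (110 - 10).",
]
print(self_consistent_answer(sampled))  # majority answer: 100
```

&lt;p&gt;Run on the four sampled outputs, the majority vote lands on 100 even though one completion went off the rails.&lt;/p&gt;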




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There you have it, folks! With these top-notch prompt engineering techniques, you’ll be able to take your language model interactions to the next level. Don’t be afraid to get creative and think outside the box when it comes to crafting your prompts — sometimes the most unexpected questions yield the most hilarious and insightful responses! 😜&lt;/p&gt;

&lt;p&gt;If you enjoyed this article, let me know — I might just turn it into a series on prompt engineering 🚀.&lt;/p&gt;

&lt;p&gt;Thank you so much for reading! 🙏 Keep an eye out for more AWS related posts, and feel free to connect with me on LinkedIn 👉 &lt;a href="https://www.linkedin.com/in/matteo-depascale/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/matteo-depascale/&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;If you want to read more about these 5 techniques, you can find all the details here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learnprompting.org/" rel="noopener noreferrer"&gt;https://learnprompting.org/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.promptingguide.ai/" rel="noopener noreferrer"&gt;https://www.promptingguide.ai/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learningprompt.wiki/" rel="noopener noreferrer"&gt;https://learningprompt.wiki/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Screenshots and a few examples were taken from the research papers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chain-of-Thought Prompting Elicits Reasoning in Large Language Models -&amp;gt; &lt;a href="https://arxiv.org/abs/2201.11903" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2201.11903&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Generated Knowledge Prompting for Commonsense Reasoning -&amp;gt; &lt;a href="https://arxiv.org/abs/2110.08387" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2110.08387&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Self-Consistency Improves Chain of Thought Reasoning in Language Models -&amp;gt; &lt;a href="https://arxiv.org/abs/2203.11171" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2203.11171&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Disclaimer: opinions expressed are solely my own and do not express the views or opinions of my employer.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>promptengineering</category>
      <category>ai</category>
      <category>generativeai</category>
    </item>
    <item>
      <title>The Complete Guide to Amazon Bedrock for Generative AI</title>
      <dc:creator>Matteo Depascale</dc:creator>
      <pubDate>Wed, 04 Oct 2023 12:00:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/the-complete-guide-to-amazon-bedrock-for-generative-ai-2ohk</link>
      <guid>https://dev.to/aws-builders/the-complete-guide-to-amazon-bedrock-for-generative-ai-2ohk</guid>
      <description>&lt;p&gt;Revolutionize your generative AI development with Amazon Bedrock. Benefit from the top-notch security measures, compliance standards, and reliable infrastructure provided by Amazon Bedrock. See what's going on from a Solution Architect perspective.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;I had early access to Amazon Bedrock, and I'm excited to share my experience with you in this blog post. As a user, I've always been impressed with the power and versatility of generative AI, and here I hope to help others better understand Amazon Bedrock. &lt;br&gt;
Over the course of the preview, I provided tons of feedback to AWS regarding the use of Amazon Bedrock, its SDK, and their models. This means that I already have a decent amount of experience with this service 😏.&lt;/p&gt;

&lt;p&gt;Here's a sneak peek at what I'll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is Amazon Bedrock?&lt;/li&gt;
&lt;li&gt;Features&lt;/li&gt;
&lt;li&gt;Pricing&lt;/li&gt;
&lt;li&gt;Security&lt;/li&gt;
&lt;li&gt;Compliance&lt;/li&gt;
&lt;li&gt;Resilience&lt;/li&gt;
&lt;li&gt;Monitoring&lt;/li&gt;
&lt;li&gt;Using Amazon Bedrock&lt;/li&gt;
&lt;li&gt;Fine-tuning&lt;/li&gt;
&lt;li&gt;Inference parameters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Buckle up because we're about to embark on a journey into Amazon Bedrock!&lt;/p&gt;
&lt;h2&gt;
  
  
  What is Amazon Bedrock?
&lt;/h2&gt;

&lt;p&gt;Amazon Bedrock is a fully managed service that provides access to foundation models (FMs) created by Amazon and third-party model providers through an API.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;em&gt;Bedrock has a few other features, such as Agents and Embeddings. I'll cover them in part 2 when I gain hands-on experience with them. (Perhaps in part 3 or 4, because as I'm exploring GenAI offerings on AWS, I discover more topics to talk about.👀)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Features of Amazon Bedrock
&lt;/h2&gt;

&lt;p&gt;As of the time I'm writing this, Amazon Bedrock offers the following capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Text playground - A text generation application in the AWS console;&lt;/li&gt;
&lt;li&gt;Image playground - An image generation application in the AWS console;&lt;/li&gt;
&lt;li&gt;Amazon Bedrock API - Explore it using the AWS CLI or utilize the API to access the foundation models;&lt;/li&gt;
&lt;li&gt;Embeddings - Generate embeddings from the Titan text embedding model using the API;&lt;/li&gt;
&lt;li&gt;Model fine-tuning - Create a training dataset and fine-tune an Amazon Bedrock model;&lt;/li&gt;
&lt;li&gt;Agents - Although still in preview, agents can execute complex tasks by dynamically invoking APIs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Amazon Bedrock currently supports a wide range of models. I won't list them all here because I have a hunch that I'll need to update this list every few months.&lt;/p&gt;
&lt;h2&gt;
  
  
  Pricing of Amazon Bedrock
&lt;/h2&gt;

&lt;p&gt;This time, AWS has made pricing quite straightforward for us. The pricing structure is based on the specific model and its provider. To find the pricing details, you can navigate to the AWS console section labeled "Providers" (or you can consult the documentation, although we both know it can be a bit &lt;em&gt;intimidating&lt;/em&gt; 😜).&lt;/p&gt;

&lt;p&gt;Looking at an example of pricing from the Stable Diffusion XL by Stability.ai:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;$0.018/image (step size &amp;lt;= 50, resolution &amp;lt; 512 x 512)&lt;/li&gt;
&lt;li&gt;$0.036/image (step size 51-150, resolution &amp;lt; 512 x 512)&lt;/li&gt;
&lt;li&gt;$0.036/image (step size &amp;lt;= 50, resolution &amp;gt; 512 x 512)&lt;/li&gt;
&lt;li&gt;$0.072/image (step size 51-150, resolution &amp;gt; 512 x 512)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As we can see, the pricing follows AWS' well-known model of On-Demand pricing, which means that the cost for generating a single image can range from $0.018 to $0.072.&lt;br&gt;
Additionally, you have the option to request "Provisioned throughput" for a model, guaranteeing you a specified level of throughput at a fixed cost. For instance, you can pay $35 per hour with a 6-month commitment for the Claude model.&lt;/p&gt;
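&lt;p&gt;To make those on-demand numbers concrete, here's a tiny Python helper encoding the SDXL price table above (prices as listed at the time of writing; treating exactly 512 x 512 as the cheaper tier is my assumption, so always check the official pricing page):&lt;/p&gt;

```python
def sdxl_image_cost(steps: int, width: int, height: int) -> float:
    """On-demand price per image for Stable Diffusion XL, per the table above.
    Boundary assumption: exactly 512 x 512 counts as the cheaper resolution tier."""
    large = width > 512 or height > 512
    many_steps = steps > 50          # the "step size 51-150" tier
    if large and many_steps:
        return 0.072
    if large or many_steps:
        return 0.036
    return 0.018

# e.g. a batch of 1,000 images at 50 steps, 512 x 512:
print(f"${1000 * sdxl_image_cost(50, 512, 512):.2f}")  # $18.00
```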
&lt;h2&gt;
  
  
  Security of Amazon Bedrock
&lt;/h2&gt;

&lt;p&gt;Security has been a hot topic since the birth of Generative AI🔥. From the beginning, AWS states that security is a shared responsibility between us and them. Essentially, they manage the servers, so we don't need to do anything, but we are responsible for encrypting traffic in transit, at rest, and so on. You can find all the details here: &lt;a href="https://aws.amazon.com/compliance/shared-responsibility-model/" rel="noopener noreferrer"&gt;https://aws.amazon.com/compliance/shared-responsibility-model/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Amazon Bedrock uses encryption to protect data at rest and in transit. For data at rest, it encrypts prompts and responses using service-managed keys (similar to S3/RDS, etc.), while encryption in transit is secured using TLS 1.2. Additionally, we can use our own KMS key.&lt;/p&gt;

&lt;p&gt;AWS handles most security aspects; we only need to set up the proper role before calling the Amazon Bedrock API. The service integrates fully with IAM, similar to other AWS services. A few things to note about the IAM integration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It doesn't have a resource-based policy (e.g., like specifying a bucket policy in S3).&lt;/li&gt;
&lt;li&gt;It partially supports Attribute-based access control (ABAC) using tags in policies, allowing us to restrict service access based on AWS tags.&lt;/li&gt;
&lt;li&gt;Using policies, we can easily allow or restrict access to our model. This is useful, especially for multi-tenant use cases.&lt;/li&gt;
&lt;/ul&gt;
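&lt;p&gt;As an illustration of that last point, a least-privilege identity policy that only allows invoking a single model could look like the sketch below. The model ARN is illustrative; verify action names and ARN formats against the Bedrock documentation:&lt;/p&gt;

```python
import json

# Hypothetical identity policy: allow invoking only Claude v2 in us-east-1.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
        }
    ],
}
print(json.dumps(policy, indent=2))
```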
&lt;h2&gt;
  
  
  Compliance of Amazon Bedrock
&lt;/h2&gt;

&lt;p&gt;All of our data remains private within our AWS account, and, most importantly, it's &lt;strong&gt;not shared&lt;/strong&gt; with third-party model providers, nor is it used by AWS itself (IMHO, this should be the default behavior for every provider).&lt;/p&gt;

&lt;p&gt;Additionally, you have the option to configure a VPC endpoint to establish a private connection to Amazon Bedrock over the AWS network, ensuring private connectivity.&lt;/p&gt;

&lt;p&gt;Furthermore, you can use CloudWatch to track usage and CloudTrail to monitor API activity, enhancing your control and visibility over the service.&lt;/p&gt;

&lt;p&gt;Regarding compliance, Amazon Bedrock aligns with common standards such as GDPR and HIPAA, meeting expected compliance requirements.&lt;br&gt;
AWS has met my expectations, and at this point, I'm hopeful that they will expand the service to more regions.&lt;/p&gt;
&lt;h2&gt;
  
  
  Resilience of Amazon Bedrock
&lt;/h2&gt;

&lt;p&gt;Amazon Bedrock is a fully managed service, which means we can use it without worrying about its infrastructure. It is automatically patched and is highly available by default across all Availability Zones (AZs) in the region. In the event of any issues, AWS provides notifications through the Service Health Dashboard.&lt;/p&gt;
&lt;h2&gt;
  
  
  Quotas
&lt;/h2&gt;

&lt;p&gt;To be honest, I was expecting the Requests Per Second (RPS) quotas to be higher than they actually are. Currently, Anthropic Claude V2 offers a little over 1.5 RPS, while Amazon Titan Express provides 66.6 RPS. However, it's worth noting that these quotas are region-based. This means that if we require additional RPS, we could deploy Bedrock in other regions or accounts and combine multiple Bedrock deployments.&lt;/p&gt;
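&lt;p&gt;A naive way to combine quotas is to rotate invocations across the regions where you've deployed Bedrock. A minimal sketch (the region list is illustrative; check actual availability first):&lt;/p&gt;

```python
from itertools import cycle

# Naive fan-out: rotate invocations across the regions where Bedrock is deployed.
regions = cycle(["us-east-1", "us-west-2"])

def next_region() -> str:
    """Pick the region for the next invocation, round-robin style."""
    return next(regions)

print([next_region() for _ in range(4)])  # alternates between the two regions
```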

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;em&gt;Of course, this solution does come with several implications. For instance, if you are using your custom model in account/region A, can account/region B gain access to it?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Monitoring Amazon Bedrock
&lt;/h2&gt;

&lt;p&gt;We can monitor Amazon Bedrock using Amazon CloudWatch, which collects data and provides near real-time metrics that can also be graphed within the service console. Additionally, CloudWatch enables us to create alarms that trigger when specific thresholds are reached. With these alarms, we have the flexibility to take various actions, such as sending an email or blocking access, to address potential issues.&lt;/p&gt;

&lt;p&gt;When looking at the metrics, there are several interesting ones:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Invocations&lt;/strong&gt; - personally, I find limited use for this metric. It could be more valuable if we could differentiate metrics based on more specific criteria, but for now, customization is something we need to handle independently;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;InvocationLatency&lt;/strong&gt; - this metric is valuable for monitoring the performance of our GenAI applications. However, it's important to note that it's a global metric, which means it aggregates data for all GenAI applications, potentially affecting accuracy;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;InvocationClientErrors&lt;/strong&gt; - this metric is essential for identifying issues when our GenAI applications encounter problems from our end;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;InvocationServerErrors&lt;/strong&gt; - this metric triggers whenever AWS experiences errors. Since Amazon Bedrock is a managed service, the primary purpose of this metric is to prompt the opening of a support case 😂;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;InvocationThrottles&lt;/strong&gt; - this one is self-explanatory.&lt;/li&gt;
&lt;/ul&gt;
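&lt;p&gt;As a concrete example, here's roughly what the parameters for a CloudWatch alarm on InvocationThrottles could look like. The alarm name, threshold, and SNS topic are placeholders; the commented-out call needs CloudWatch permissions:&lt;/p&gt;

```python
# Parameters for a CloudWatch alarm on Bedrock throttling; alarm name,
# threshold, and the (commented) SNS topic ARN are placeholders to adapt.
alarm = {
    "AlarmName": "bedrock-invocation-throttles",
    "Namespace": "AWS/Bedrock",
    "MetricName": "InvocationThrottles",
    "Statistic": "Sum",
    "Period": 300,                 # evaluate 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 10,
    "ComparisonOperator": "GreaterThanThreshold",
    # "AlarmActions": ["arn:aws:sns:us-east-1:[ACCOUNT_ID]:ops-alerts"],
}
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```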

&lt;p&gt;Additionally, AWS offers the "Model invocation logging" feature &lt;em&gt;in preview&lt;/em&gt;. This feature collects invocation logs, model input data, and output data for all invocations, sending them to CloudWatch Logs and S3.&lt;/p&gt;

&lt;p&gt;Lastly, you can configure an EventBridge event to receive notifications for actions within Bedrock, such as when a job stops running.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;em&gt;Before we begin using Bedrock, it's important to grant access to models within Bedrock. To do so, we can simply go to "Model Access/Edit" and select the models to use.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Using Amazon Bedrock
&lt;/h2&gt;

&lt;p&gt;Finally, we can dive into Amazon Bedrock. Before I begin, please note that I won't be demonstrating how to use the service from the AWS CLI or the AWS Console. Instead, I'll focus on the AWS SDK and provide a few lines of code to illustrate how the service functions.&lt;br&gt;
Let's get started!&lt;br&gt;
First, we'll examine an example of how to use Amazon Bedrock with the Python boto3 SDK. To utilize Amazon Bedrock, we need to specify the region (keep in mind that it's not available in every region, so it's essential to check availability first), the model ID, and, of course, the prompt.&lt;/p&gt;

&lt;p&gt;Here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json
bedrock = boto3.client(service_name='bedrock-runtime')

body = json.dumps({
    "prompt": "\n\nHuman:explain black holes to 8th graders\n\nAssistant:",
    "max_tokens_to_sample": 300,
    "temperature": 0.1,
    "top_p": 0.9,
})

modelId = 'anthropic.claude-v2'
accept = 'application/json'
contentType = 'application/json'

response = bedrock.invoke_model(body=body, modelId=modelId, accept=accept, contentType=contentType)

response_body = json.loads(response.get('body').read())

# text
print(response_body.get('completion'))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which gives the output below 👇&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F85c0aa7d-bfe8-451a-abbe-dea327707ca0" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2F85c0aa7d-bfe8-451a-abbe-dea327707ca0" alt="Bedrock API result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, you can also stream the response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json

bedrock = boto3.client(service_name='bedrock-runtime')

body = json.dumps({
    'prompt': '\n\nHuman:write an essay for living on mars in 1000 words\n\nAssistant:',
    'max_tokens_to_sample': 100
})

response = bedrock.invoke_model_with_response_stream(
    modelId='anthropic.claude-v2', 
    body=body
)

stream = response.get('body')
if stream:
    for event in stream:
        chunk = event.get('chunk')
        if chunk:
            print(json.loads(chunk.get('bytes').decode()))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, you can observe that the output is streamed chunk by chunk, and in this case, there were 3 chunks. 👇&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fc2381cfc-1c4c-4f7e-ba12-e7b714bf8221" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontent.cloudnature.net%2Fimages%2Fc2381cfc-1c4c-4f7e-ba12-e7b714bf8221" alt="Bedrock API result in streaming"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'd like to highlight a small detail: each model has its own specific requirements. For instance, the amazon.titan model requires the inputText parameter, while if you're using anthropic.claude-instant-v1, you need to set the prompt and max_tokens_to_sample values to receive the output. To get started, you can explore examples directly within the Amazon Bedrock console, as they are highly informative.&lt;/p&gt;
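&lt;p&gt;To illustrate that detail, here are two small helpers that build the provider-specific request bodies (field names follow the console examples; double-check them for the exact model version you use):&lt;/p&gt;

```python
import json

def titan_body(prompt: str) -> str:
    """amazon.titan text models expect an "inputText" field."""
    return json.dumps({"inputText": prompt})

def claude_body(prompt: str, max_tokens: int = 300) -> str:
    """anthropic.claude models expect "prompt" and "max_tokens_to_sample"."""
    return json.dumps({
        "prompt": f"\n\nHuman:{prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })

print(titan_body("explain black holes to 8th graders"))
```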

&lt;p&gt;It would have been convenient if the SDK offered a standardized interface across all LLMs. Unfortunately, it appears we need to handle this ourselves, for example with LangChain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fine-tuning Amazon Bedrock
&lt;/h2&gt;

&lt;p&gt;Now, let's delve into the process of fine-tuning a generic Foundation Model to align it with our specific dataset.&lt;/p&gt;

&lt;p&gt;To fine-tune our model, we require a training dataset in JSONL format, structured like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  {"input": "[OUR_PROMPT]", "output": "[OUR_EXPECTED_RESPONSE]"}
  {"input": "[OUR_PROMPT_2]", "output": "[OUR_EXPECTED_RESPONSE_2]"}
  {"input": "[OUR_PROMPT_3]", "output": "[OUR_EXPECTED_RESPONSE_3]"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
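&lt;p&gt;Each line must be a complete, standalone JSON object. A quick Python sketch that builds a valid JSONL payload (the example pairs are placeholders):&lt;/p&gt;

```python
import json

# Placeholder pairs; a real dataset holds your own prompt/response examples.
examples = [
    {"input": "What is the Lambda timeout limit?", "output": "15 minutes."},
    {"input": "What is the maximum Lambda event size?", "output": "6 MB."},
]

# One complete JSON object per line: no commas between lines, no enclosing array.
jsonl = "\n".join(json.dumps(example) for example in examples)
print(jsonl)
```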



&lt;p&gt;To perform the actual fine-tuning, you can use the script below 👇&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Before running it, please remember to change the role ARN. You need an actual role capable of reading and writing on S3 and using Amazon Bedrock. Moreover, you need to specify the S3 bucket from which the data will be retrieved.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json
import calendar
import time

bedrock = boto3.client(service_name="bedrock", region_name="us-east-1")
current_GMT = time.gmtime()

# required
jobName = "job-" + str(calendar.timegm(current_GMT))
customModelName = "[CUSTOM_NAME]-" + str(calendar.timegm(current_GMT))
roleArn = "arn:aws:iam::[YOUR_ACCOUNT_ID]:role/[YOUR_ROLE_NAME]"
baseModelIdentifier = "arn:aws:bedrock:us-east-1::foundation-model/[MODEL_ID]"
trainingDataConfig = {"s3Uri": "s3://[YOUR_BUCKET_NAME]/[YOUR_TRAINING_DATASET_FILE_PATH]"}
outputDataConfig = {"s3Uri": "s3://[YOUR_BUCKET_NAME]/output/dataset.json"}
hyperParameters = {
    "epochCount": "1",
    "batchSize": "1",
    "learningRate": "0.005",
    "learningRateWarmupSteps": "0",
}

# optional
clientRequestToken = ""
jobTags = [{"key": "bedrock", "value": "true"}]
customModelTags = [{"key": "bedrock", "value": "true"}]
validationDataConfig = {
    "validators": [
        {
            "name": "bedrock-validator",
            "s3Uri": "s3://[YOUR_BUCKET_NAME]/[YOUR_VALIDATOR_FILE_PATH]",
        }
    ]
}

response = bedrock.create_model_customization_job(
    jobName=jobName,
    customModelName=customModelName,
    roleArn=roleArn,
    baseModelIdentifier=baseModelIdentifier,
    trainingDataConfig=trainingDataConfig,
    outputDataConfig=outputDataConfig,
    hyperParameters=hyperParameters,
    # jobTags=jobTags,
    # customModelTags=customModelTags,
    # validationDataConfig=validationDataConfig,
)
print(response)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have all the necessary permissions and files set up correctly, you can simply wait… and wait… still waiting? Hopefully there are no errors and you are still just waiting 😜. While you wait, you can check the status directly in the AWS Console or with the AWS SDK.&lt;br&gt;
At some point, the model will complete its training, and then we can start using it. You can interact with it via the SDK or the Console. Typically, there's a brief waiting period when running the first prompt because the model needs to load. However, once it's ready, you can submit queries, and this time it will respond based on our training data.&lt;/p&gt;
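&lt;p&gt;If you'd rather poll from code than refresh the console, a small sketch along these lines works (it assumes a bedrock client created as in the script above; the status values come from the API documentation):&lt;/p&gt;

```python
import time

def wait_for_job(client, job_name: str, poll_seconds: int = 60) -> str:
    """Poll a Bedrock model-customization job until it leaves InProgress.
    Returns the terminal status, e.g. Completed, Failed, or Stopped."""
    while True:
        status = client.get_model_customization_job(jobIdentifier=job_name)["status"]
        if status != "InProgress":
            return status
        time.sleep(poll_seconds)

# bedrock = boto3.client("bedrock", region_name="us-east-1")
# print(wait_for_job(bedrock, jobName))
```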

&lt;h2&gt;
  
  
  Inference parameters
&lt;/h2&gt;

&lt;p&gt;To generate high quality and accurate responses, we need to tune the parameters. There are a lot of them. I'll explain the most common ones I found in Bedrock so we can hit the ground running. I won't explain every bit of them, just what to expect when we change them up.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Temperature&lt;/strong&gt;: controls the randomness of the generated response. A higher value means more randomness, whereas a lower value produces more deterministic responses that stick to the most likely tokens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Top P&lt;/strong&gt;: acts like a filter on candidate tokens. The value goes from 0 to 1; with 0.9, the model samples the next token only from the smallest set of tokens whose cumulative probability reaches 90%, cutting off the unlikely tail.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Top K&lt;/strong&gt;: similar to Top P, but instead of working with a probability mass, it specifies an absolute number of candidate tokens, like 10 or 2.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Presence penalty&lt;/strong&gt;: reduces the probability of generating tokens that have already appeared in either the prompt or the completion. This helps prevent constant repetition.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Count penalty&lt;/strong&gt;: similar to the one above, but it takes into account how often a token appears in both the prompt and the completion. Tokens that appear more frequently will be less likely to be generated again.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frequency penalty&lt;/strong&gt;: like the two above, but it considers the overall frequency within a specified text length.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Penalize special token&lt;/strong&gt;: allows us to exempt specific types of tokens (such as punctuation) from the penalties described above.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stop sequence&lt;/strong&gt;: our model's handbrake. For a chatbot, we can specify "Human:" so that generation stops when it's the user's turn to answer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hopefully these parameters cover all the knowledge you need to craft your own prompts and tweak them to get the best possible answers.&lt;/p&gt;
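&lt;p&gt;To tie this back to the SDK, here's how a few of these knobs map onto a request body for an anthropic.claude model on Bedrock (the values are illustrative starting points, not recommendations):&lt;/p&gt;

```python
import json

# Illustrative starting values for an anthropic.claude request body on Bedrock.
body = json.dumps({
    "prompt": "\n\nHuman:name three planets\n\nAssistant:",
    "max_tokens_to_sample": 200,
    "temperature": 0.2,                # low randomness
    "top_p": 0.9,                      # smallest token set covering 90% probability
    "top_k": 250,                      # capped at the 250 most likely tokens
    "stop_sequences": ["\n\nHuman:"],  # the handbrake: stop at the user's turn
})
print(body)
```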

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There you have it, folks! Hopefully, this blog post has set you on the right track and provided you with all the information you need to kickstart your project. I aimed to offer a Solution Architect's perspective in this post. In the next one, we'll look into real use cases and projects, so stay tuned.&lt;/p&gt;

&lt;p&gt;If you enjoyed this article, please let me know in the comment section or send me a DM. I'm always happy to chat! ✌️&lt;/p&gt;

&lt;p&gt;Thank you so much for reading! 🙏 Keep an eye out for more AWS related posts, and feel free to connect with me on LinkedIn 👉 &lt;a href="https://www.linkedin.com/in/matteo-depascale/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/matteo-depascale/&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/a4bdb007-5600-4368-81c5-ff5b4154f518/en-US" rel="noopener noreferrer"&gt;https://catalog.us-east-1.prod.workshops.aws/workshops/a4bdb007-5600-4368-81c5-ff5b4154f518/en-US&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-service.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-service.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://python.langchain.com/docs/modules/model_io/models/llms/" rel="noopener noreferrer"&gt;https://python.langchain.com/docs/modules/model_io/models/llms/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;https://aws.amazon.com/bedrock/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Disclaimer: opinions expressed are solely my own and do not express the views or opinions of my employer.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>bedrock</category>
      <category>aws</category>
      <category>ai</category>
      <category>generativeai</category>
    </item>
    <item>
      <title>From SPA to SSR on AWS: Boost Your Website’s SEO</title>
      <dc:creator>Matteo Depascale</dc:creator>
      <pubDate>Thu, 29 Jun 2023 06:58:14 +0000</pubDate>
      <link>https://dev.to/aws-builders/from-spa-to-ssr-on-aws-boost-your-websites-seo-4fo2</link>
      <guid>https://dev.to/aws-builders/from-spa-to-ssr-on-aws-boost-your-websites-seo-4fo2</guid>
      <description>&lt;p&gt;Learn how to convert your Single-Page Application (SPA) to Server-Side Rendering (SSR) on AWS. Explore the benefits of SSR and its impact on search engine optimization. Additionally, discover key techniques for implementing SEO, analytics, and user tracking to improve privacy, performance, and user engagement.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;We have one of the fastest serverless websites out there! However, having a fast website alone doesn't guarantee traffic, views, and engagements. That's where SEO and analytics come into play. In this article, I will guide you through the most efficient practices to help you achieve your goals without the need to read hundreds of articles. Let's get started so you can hit the ground running!🏃&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with SPA
&lt;/h2&gt;

&lt;p&gt;Yes, you are right! Why does there always have to be a problem?&lt;br&gt;
Let's put it this way: you're reading an exceptional article, one of the best, and you decide to share it with your friends. You click the share button and choose your favorite social media platform. A new window or tab pops up, and you write a captivating caption like, "Look, guys, this article is lit 🔥" and click "Publish."&lt;br&gt;
How disappointing is it when the article doesn't have a title or a preview image? Believe me, &lt;strong&gt;a lot&lt;/strong&gt;!&lt;br&gt;
Alright, the lack of article previews may be a minor issue that you can overlook. However, let's look at 2 more related problems.&lt;/p&gt;
&lt;h3&gt;
  
  
  Lack of Meta Tags
&lt;/h3&gt;

&lt;p&gt;You know when you search on Google for something like "optimizing content delivery: S3 caching and CloudFront" and the article you're referring to appears right after the search query?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7d8b7a0lbehctc8zlxr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7d8b7a0lbehctc8zlxr4.png" alt="Figure 1: Google search results." width="715" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, with SPA, this doesn't happen. Since you have a &lt;strong&gt;Single Page&lt;/strong&gt;, search engines only have access to a single set of meta tags, resulting in only one title and one description. If you're running a blog, it's like running a full marathon with a huge rock on your back 🥉.&lt;/p&gt;
&lt;h3&gt;
  
  
  Slower Load Times
&lt;/h3&gt;

&lt;p&gt;Although our cache helps mitigate this issue, SPA still needs to load all JavaScript files before the website can run. It's not as critical as the lack of meta tags because caching can fix it, but it's still a factor to consider.&lt;/p&gt;
&lt;h2&gt;
  
  
  The solution is SSR
&lt;/h2&gt;

&lt;p&gt;Server-Side Rendering (SSR) provides the solution to the problems faced by SPA. SSR involves rendering the website on the server and sending back a fully-rendered HTML to the client. This approach allows search crawlers to easily access and understand the content. Moreover, SSR enables customization of each page's meta tags and significantly improves initial page load times, resulting in better performance.&lt;br&gt;
Nowadays, there are numerous frameworks to choose from. If you have been following my series, I opted for SvelteKit, which fortunately supports both SSR and SPA. This compatibility should make the transition relatively smooth and painless 👌.&lt;/p&gt;
&lt;h2&gt;
  
  
  Converting SPA to SSR on AWS
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h09hn6j5o01zsusp1kh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h09hn6j5o01zsusp1kh.jpg" alt="Figure 2: Quick and easy meme!" width="500" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As it turned out, the process was quite the opposite of painless: a bloodbath 🩸.&lt;br&gt;
For SSR with SvelteKit on AWS, there weren't many readily available libraries. The one that did exist was not actively maintained and didn't provide the level of customization I needed. Therefore, I took the initiative and forked the library to create my own solution.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;You can find the original library here: &lt;a href="https://github.com/MikeBild/sveltekit-adapter-aws"&gt;https://github.com/MikeBild/sveltekit-adapter-aws&lt;/a&gt;. You can see the author knows its stuff.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Armed with a lot of patience, I dug deep into the project and identified numerous areas for improvement and updates. The existing infrastructure was well-built and impressive. Here's an overview of the initial architecture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69h9niqzwjnks24wld1m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69h9niqzwjnks24wld1m.jpg" alt="Figure 3: SvelteKit Adapter Architecture." width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And this is how I managed to make it more complicated than it should be:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1336he1ntsru0f7miec.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1336he1ntsru0f7miec.jpg" alt="Figure 4: Updated SvelteKit Adapter Architecture." width="800" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, I needed to integrate it with our existing CloudFront distribution, which added complexity to the process. I had to create a custom resource to preserve all the new origins and cache behaviors within the existing distribution.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;⚠️ If you don't know what I'm talking about, you are missing out on a really good post! You can find it here 👇&lt;/em&gt;&lt;br&gt;
&lt;a href="https://depascalematteo.medium.com/building-a-lightning-fast-serverless-blog-on-aws-with-sveltekit-part-4-5ca74d6dfa4a"&gt;https://depascalematteo.medium.com/building-a-lightning-fast-serverless-blog-on-aws-with-sveltekit-part-4-5ca74d6dfa4a&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;You can find the updated npm package here: &lt;a href="https://www.npmjs.com/package/@mdepascale/sveltekit-aws-adapter"&gt;https://www.npmjs.com/package/@mdepascale/sveltekit-aws-adapter&lt;/a&gt;&lt;br&gt;
While the documentation may not be extensive at the moment, I'll make sure to document it as best I can. Additionally, I welcome any contributions, so feel free to open pull requests to enhance the library.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
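&lt;p&gt;As a rough sketch of how you'd wire the adapter into a SvelteKit project (the option names are hypothetical, so check the package's README for the real configuration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// svelte.config.js — a minimal sketch, not the package's documented config
import adapter from '@mdepascale/sveltekit-aws-adapter';

/** @type {import('@sveltejs/kit').Config} */
const config = {
  kit: {
    // Options (e.g. existing CloudFront distribution id) go here;
    // refer to the package documentation for the actual option names.
    adapter: adapter()
  }
};

export default config;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;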
&lt;h2&gt;
  
  
  SSR Results
&lt;/h2&gt;

&lt;p&gt;The journey of converting from SPA to SSR was an enjoyable one, but now it's time to evaluate if it was truly worth it. To determine the impact of SSR, I used &lt;a href="https://www.linkedin.com/post-inspector/"&gt;https://www.linkedin.com/post-inspector/&lt;/a&gt; to check if crawlers can preview images and titles.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2u270qt4b68yp9nnlmeo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2u270qt4b68yp9nnlmeo.png" alt="Figure 5: SSR Results." width="706" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The results were positive, as crawlers were able to preview the images and titles. This is a significant victory ✅.&lt;br&gt;
Next, I assessed the performance of our website using Lighthouse scores 🚀.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpb097x67uq5xicwcnp85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpb097x67uq5xicwcnp85.png" alt="Figure 6: Lighthouse score results." width="417" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The migration to SSR yielded excellent results, with our website achieving the desired scores. &lt;br&gt;
Now we are fully prepared for the next phase 🏆!&lt;/p&gt;
&lt;h2&gt;
  
  
  Top SEO Best Practices for Bloggers
&lt;/h2&gt;

&lt;p&gt;Now, let's delve into some of the top SEO best practices for bloggers.&lt;/p&gt;
&lt;h3&gt;
  
  
  Title
&lt;/h3&gt;

&lt;p&gt;Ensure that the title is between 50–60 characters in length and includes our primary keywords.&lt;/p&gt;
&lt;h3&gt;
  
  
  Description
&lt;/h3&gt;

&lt;p&gt;Keep the description within the range of 150–160 characters and incorporate our primary keywords once or even twice.&lt;/p&gt;
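&lt;p&gt;In SvelteKit, the title and description guidelines above translate into per-page tags. Here's a sketch with hypothetical values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;svelte:head&amp;gt;
  &amp;lt;!-- ~50–60 characters, primary keywords included --&amp;gt;
  &amp;lt;title&amp;gt;From SPA to SSR on AWS: Boost Your Website's SEO&amp;lt;/title&amp;gt;
  &amp;lt;!-- ~150–160 characters, primary keywords once or twice --&amp;gt;
  &amp;lt;meta name="description" content="Learn how to convert your SPA to SSR on AWS, and why server-side rendering improves search engine optimization." /&amp;gt;
&amp;lt;/svelte:head&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;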
&lt;h3&gt;
  
  
  Content
&lt;/h3&gt;

&lt;p&gt;The sweet spot tends to be around 1500 words. If the content exceeds this limit, consider splitting it up. However, it's important to note that there may be some exceptions to this guideline, as it can vary based on the subject matter and context.&lt;/p&gt;
&lt;h3&gt;
  
  
  Header tags
&lt;/h3&gt;

&lt;p&gt;Utilize header tags (H2, H3, H4) effectively throughout the content. Remember to maintain proper order, and reserve H1 for page titles.&lt;/p&gt;
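&lt;p&gt;For instance, a properly ordered heading structure looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;h1&amp;gt;Page title (only one per page)&amp;lt;/h1&amp;gt;
&amp;lt;h2&amp;gt;Main section&amp;lt;/h2&amp;gt;
&amp;lt;h3&amp;gt;Subsection&amp;lt;/h3&amp;gt;
&amp;lt;h2&amp;gt;Another main section&amp;lt;/h2&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;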
&lt;h3&gt;
  
  
  Sitemap.xml
&lt;/h3&gt;

&lt;p&gt;Creating a sitemap.xml file helps list all the pages on our website, enabling search engine bots to crawl our website more efficiently.&lt;/p&gt;
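&lt;p&gt;A minimal sitemap.xml looks like this (the URLs and dates are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="UTF-8"?&amp;gt;
&amp;lt;urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"&amp;gt;
  &amp;lt;url&amp;gt;
    &amp;lt;loc&amp;gt;https://yourdomain.com/blog/my-first-post&amp;lt;/loc&amp;gt;
    &amp;lt;lastmod&amp;gt;2023-06-29&amp;lt;/lastmod&amp;gt;
  &amp;lt;/url&amp;gt;
&amp;lt;/urlset&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;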
&lt;h3&gt;
  
  
  External links
&lt;/h3&gt;

&lt;p&gt;Include links to reputable sources within our content. This not only enhances the credibility of our website but also earns bonus points when other websites reference our content.&lt;/p&gt;
&lt;h3&gt;
  
  
  Content freshness
&lt;/h3&gt;

&lt;p&gt;Search engines prioritize fresh and up-to-date content. Therefore, updating our content ensures it stays relevant. Moreover, it will help us study consistently 🤓.&lt;/p&gt;
&lt;h2&gt;
  
  
  Privacy-Friendly Analytics
&lt;/h2&gt;

&lt;p&gt;Without analytics, it's like walking in the dark without any light. We need to track visits, visit duration, user behavior, and so on.&lt;br&gt;
Nowadays, before reading something online, you need to accept (or deny, if they let you) cookies, privacy policies, TOS, newsletter signups… I just want to read a few interesting paragraphs, I don't want to buy a car!&lt;br&gt;
That's where &lt;a href="https://plausible.io/"&gt;Plausible&lt;/a&gt; comes in. Let's explore why it rocks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Really easy to use;&lt;/li&gt;
&lt;li&gt;Privacy friendly;&lt;/li&gt;
&lt;li&gt;Lightweight library;&lt;/li&gt;
&lt;li&gt;Open source;&lt;/li&gt;
&lt;li&gt;Open source (Yeah, I really like this part).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's look at some of the things we can do with Plausible.&lt;/p&gt;
&lt;h3&gt;
  
  
  Starting with Plausible
&lt;/h3&gt;

&lt;p&gt;Using Plausible is straightforward. Simply copy and paste the following script in your app.html file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;script defer data-domain="yourdomain.com" src="https://plausible.io/js/script.js"&amp;gt;&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And just like that, it starts recording analytics data right away!&lt;/p&gt;

&lt;h3&gt;
  
  
  Proxy through CloudFront
&lt;/h3&gt;

&lt;p&gt;Sometimes ad blockers block analytics scripts (thanks, bad analytics!). To overcome this issue, we can set up a proxy from our domain to the analytics domain.&lt;br&gt;
Here's an example: if your domain is my.domain.com, you can proxy every request from my.domain.com/api/event to the analytics provider's domain.&lt;br&gt;
With CloudFront, this process becomes easy. You'll need to create a new origin and two custom behaviors. Here's the guide, if you are looking for it: &lt;a href="https://plausible.io/docs/proxy/guides/cloudfront"&gt;https://plausible.io/docs/proxy/guides/cloudfront&lt;/a&gt;&lt;/p&gt;
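&lt;p&gt;The new origin and the two behaviors can be sketched with the CDK like this (assuming an existing &lt;code&gt;distribution&lt;/code&gt; of type &lt;code&gt;cloudfront.Distribution&lt;/code&gt;; the cache settings below are my assumptions, so follow the Plausible guide for the exact values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// A sketch of the Plausible proxy, not the official guide's code.
const cloudfront = require('aws-cdk-lib/aws-cloudfront');
const origins = require('aws-cdk-lib/aws-cloudfront-origins');

// One new origin pointing at the analytics domain.
const plausibleOrigin = new origins.HttpOrigin('plausible.io');

// Behavior 1: serve the tracking script through our domain.
distribution.addBehavior('/js/script.*', plausibleOrigin, {
  viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
  cachePolicy: cloudfront.CachePolicy.CACHING_OPTIMIZED,
});

// Behavior 2: forward events; POSTs must be allowed and never cached.
distribution.addBehavior('/api/event', plausibleOrigin, {
  allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
  cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
  viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;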
&lt;h3&gt;
  
  
  Add outbound links tracking
&lt;/h3&gt;

&lt;p&gt;Tracking outbound links can provide valuable information, from monitoring link clicks, to assessing the effectiveness of affiliate marketing campaigns. Here's an example script to add outbound links tracking:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;script
   defer
   data-domain="yourdomain.com"
   src="https://plausible.io/js/script.outbound-links.js"
&amp;gt;&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, adding ".outbound-links" to the script's URL was enough!&lt;/p&gt;

&lt;h3&gt;
  
  
  Find out how many 404s 
&lt;/h3&gt;

&lt;p&gt;Plausible natively supports tracking 404 errors. To set it up, create a goal named "404" and add the following two scripts. The first one goes inside app.html:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;script&amp;gt;
    window.plausible = window.plausible || function() { 
      (window.plausible.q = window.plausible.q || []).push(arguments) 
    }
&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the second one goes inside your 404 page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;script&amp;gt;
    plausible("404", { 
      props: { 
        path: document.location.pathname
      } 
    });
&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What about custom events?
&lt;/h3&gt;

&lt;p&gt;Implementing custom events with Plausible is simple. We can modify our HTML classes as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- before --&amp;gt;
&amp;lt;button class="some-existing-class"&amp;gt;Click Me&amp;lt;/button&amp;gt;
&amp;lt;!-- after --&amp;gt;
&amp;lt;button class="some-existing-class plausible-event-name=Button+Click"&amp;gt;Click Me&amp;lt;/button&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just make sure to create a custom event goal named "Button Click" (without the "+").&lt;br&gt;
Plausible provides an easy-to-use, privacy-friendly, lightweight, and open-source analytics solution. By following these steps, you can gather valuable data while respecting user privacy.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;You can find all the code here: &lt;a href="https://github.com/Depaa/sveltekit-blog-template"&gt;https://github.com/Depaa/sveltekit-blog-template&lt;/a&gt; 😉. I'll update this repository with every new version, if you like it drop a star ⭐.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this blog post, we've highlighted the issues with SPAs and provided a guide on converting to SSR on AWS. One key advantage of SSR is improved SEO, driving traffic to our website. However, SEO is incomplete without analytics. That's why we delve into Plausible, a privacy-friendly analytics solution. With Plausible, we can gather valuable data and gain insights to make sense of our SEO efforts. By combining SSR, SEO best practices, and privacy-friendly analytics, we create a powerful recipe for growth and success 💯.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Shout out to &lt;a href="https://www.linkedin.com/in/gianmarco-pettenuzzo/"&gt;Gianmarco Pettenuzzo&lt;/a&gt;, who's helping me develop the website ✨&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This series is a journey to production, and we have many more blog posts planned. Here's the list, which I'll try to keep updated:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ &lt;em&gt;&lt;a href="https://cloudnature.net/blog/blog-series-from-zero-to-hero-part-1-serverless-infrastructure-on-aws-for-blogwebsite"&gt;Serverless infrastructure on AWS for website&lt;/a&gt;;&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;✅ &lt;em&gt;&lt;a href="https://cloudnature.net/blog/blog-series-from-zero-to-hero-part-2-serverless-backend-api-on-aws-for-blog-website"&gt;Serverless backend api on AWS for website&lt;/a&gt;;&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;✅ &lt;em&gt;&lt;a href="https://cloudnature.net/blog/building-a-high-performing-static-backoffice-on-aws-with-sveltekit--part-3"&gt;Building a high performing static backoffice on AWS with SvelteKit&lt;/a&gt;;&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;✅ &lt;em&gt;&lt;a href="https://cloudnature.net/blog/building-a-lightningfast-serverless-blog-on-aws-part-4"&gt;Building a lightning fast serverless blog on AWS with SvelteKit&lt;/a&gt;;&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;✅ &lt;em&gt;From SPA to SSR on AWS: Boost Your Website's SEO;&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Setting up pipelines, monitoring, and alerting for website;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Deploying website to production;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Implementing Disaster Recovery strategies while going multiregion with serverless;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;… you see, there are a lot planned, and if you want to add more points just DM me or comment right below.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you so much for reading! 🙏 I will keep posting different AWS architecture from time to time so follow me on DevTo ✨ or on LinkedIn 👉 &lt;a href="https://www.linkedin.com/in/matteo-depascale/"&gt;https://www.linkedin.com/in/matteo-depascale/&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>programming</category>
      <category>serverless</category>
      <category>aws</category>
      <category>svelte</category>
    </item>
  </channel>
</rss>
