<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jens Båvenmark</title>
    <description>The latest articles on DEV Community by Jens Båvenmark (@jbvk).</description>
    <link>https://dev.to/jbvk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2954618%2F46f30ef2-bcdd-4c38-ac0a-ad74f8f79f78.png</url>
      <title>DEV Community: Jens Båvenmark</title>
      <link>https://dev.to/jbvk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jbvk"/>
    <language>en</language>
    <item>
      <title>New pricing model for CloudFront</title>
      <dc:creator>Jens Båvenmark</dc:creator>
      <pubDate>Tue, 18 Nov 2025 21:58:37 +0000</pubDate>
      <link>https://dev.to/aws-builders/new-pricing-model-for-cloudfront-213k</link>
      <guid>https://dev.to/aws-builders/new-pricing-model-for-cloudfront-213k</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsvo65tkgsiayexx85ut.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsvo65tkgsiayexx85ut.png" alt="Fixed-Price Plans for CloudFront" width="700" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS just released a new pricing model for CloudFront. Now you can pay a fixed price.&lt;/p&gt;

&lt;p&gt;As most of you know, calculating costs in AWS can be hard, especially for resources where traffic is the cost driver (like CloudFront), so this change will make estimating a lot easier.&lt;/p&gt;

&lt;p&gt;So how will this work then?&lt;/p&gt;

&lt;p&gt;There will be different plans with varying costs that include CloudFront and other services in the pricing. You will select the plan you want when deploying your CloudFront Distribution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1osqkzd6kujdhw0qa3p5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1osqkzd6kujdhw0qa3p5.png" alt="Flat-Rate security and delivery plans" width="700" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For current distributions, you can “migrate” them to the plan of your choice by selecting to switch to a plan for your distribution in the console (right now, the console is the only place to set plans). If the current deployment configuration matches what the plan provides, you will be able to use that plan.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxz1a2strcttj2umpebxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxz1a2strcttj2umpebxl.png" alt="Migrate to plan" width="700" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Included Services
&lt;/h2&gt;

&lt;p&gt;The plans include the services below, but not all plans include all of them; some are reserved for the higher tiers (see the image below for details about each plan).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CloudFront CDN&lt;/li&gt;
&lt;li&gt;WAF and DDoS protection&lt;/li&gt;
&lt;li&gt;DNS (Route 53)&lt;/li&gt;
&lt;li&gt;TLS Certificate (ACM)&lt;/li&gt;
&lt;li&gt;CloudWatch Log Ingestion (storage costs for logs still apply)&lt;/li&gt;
&lt;li&gt;Serverless Edge Compute (CloudFront Functions, not Lambda@Edge)&lt;/li&gt;
&lt;li&gt;Bot Management (Business plan and above)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will also get S3 credits that can be used across all your accounts' S3 costs.&lt;/p&gt;

&lt;p&gt;And data transfer to CloudFront is automatically waived.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Different Plans
&lt;/h2&gt;

&lt;p&gt;There are four different plans, as well as the old pay-as-you-go pricing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9o3jpd79434caouff52q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9o3jpd79434caouff52q.png" alt="The different plans" width="700" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the image shows, it is easy to see what each plan includes, and every higher plan includes the services of the lower plans.&lt;/p&gt;

&lt;h3&gt;
  
  
  Free Plan
&lt;/h3&gt;

&lt;p&gt;This is, in my opinion, the best part of these pricing plans. The Free plan will make it possible for people learning AWS or running small workloads to use CloudFront without having to worry about getting a large bill at the end of the month.&lt;/p&gt;

&lt;p&gt;And what you get in the free tier is not a bad setup for many small companies and PoC sites. The included WAF makes it easy to secure your distributions, and you can add up to 5 different rules.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmaap60rgia7asb04lb2k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmaap60rgia7asb04lb2k.png" alt="The 5 free waf rules for free plan" width="700" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, 5 GB of S3 storage is available for free per CloudFront Distribution. But before you start planning to eliminate your S3 bill by deploying 100 Free distributions, you should know that every account is limited to three Free Plan distributions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pro Plan
&lt;/h3&gt;

&lt;p&gt;For $15/month, this plan will likely be the most popular choice for companies with medium traffic. It supports 10 million requests / 50 TB per month and includes WAF rule groups protecting WordPress, PHP, and SQL databases, as well as logging support.&lt;/p&gt;

&lt;h3&gt;
  
  
  Business Plan
&lt;/h3&gt;

&lt;p&gt;Jumping up in cost to $200/month, the Business Plan will not be for everyone, but it will be a must for more heavily trafficked sites (125M requests / 50 TB per month). It also comes with Bot Management and more advanced DDoS protection, and it is the first plan to support Private Origins in your VPC (a great security feature: your ALB no longer needs to sit in public subnets with public IPs).&lt;/p&gt;

&lt;h3&gt;
  
  
  Premium Plan
&lt;/h3&gt;

&lt;p&gt;For sites with extremely high traffic (500M requests / 50 TB per month), or where you require even greater security (like regex-based WAF filtering), the Premium Plan can be yours for $1000/month.&lt;/p&gt;

&lt;p&gt;With it, you will have features such as automatic origin failover, mTLS for end-users, and high-speed private origin routing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pay-as-you-go Pricing
&lt;/h3&gt;

&lt;p&gt;If you don't want to choose a plan, you can continue to use Pay-as-you-go pricing. Current distributions will continue on this “plan” if not “migrated” to one of the other plans.&lt;/p&gt;

&lt;p&gt;So, when would you want to use this “plan”? When your distribution has such a heavy load that even the Premium Plan is not enough (over 500M requests or 50 TB per month), or when you need a setting on the services that is not included in the other plans, like Lambda@Edge.&lt;/p&gt;

&lt;p&gt;But if you don't need that, you should select a plan that matches your traffic.&lt;/p&gt;
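&lt;p&gt;&lt;em&gt;To sanity-check that choice, a rough break-even calculation is enough. Here is a minimal sketch in Python; the per-GB and per-request rates are illustrative assumptions, not official CloudFront prices, so plug in the real rates from the pricing page:&lt;/em&gt;&lt;/p&gt;

```python
# Rough break-even sketch: flat-rate plan vs. pay-as-you-go.
# The per-GB and per-10k-request rates below are illustrative
# assumptions, NOT official CloudFront prices.

PLANS = {"Free": 0, "Pro": 15, "Business": 200, "Premium": 1000}  # $/month

def payg_estimate(gb_out, requests, per_gb=0.085, per_10k_requests=0.01):
    """Estimate a monthly pay-as-you-go bill (illustrative rates)."""
    return gb_out * per_gb + (requests / 10_000) * per_10k_requests

def cheaper_option(gb_out, requests, plan):
    """Return which option is cheaper for the given monthly traffic."""
    return "plan" if payg_estimate(gb_out, requests) > PLANS[plan] else "pay-as-you-go"

# Example: roughly 1 TB out and 5M requests per month vs. the Pro plan.
print(cheaper_option(1_000, 5_000_000, "Pro"))  # → plan
```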

&lt;h2&gt;
  
  
  Common Questions
&lt;/h2&gt;

&lt;p&gt;Change can be scary, and many technicians will have questions, especially when deciding whether to migrate current distributions to a plan. So I will answer the questions I can here.&lt;/p&gt;

&lt;h3&gt;
  
  
  What happens if I exceed the usage allowance?
&lt;/h3&gt;

&lt;p&gt;If your usage exceeds the limit of your plan, your distribution will continue to work and receive traffic just as before. But you may experience reduced performance. You will not be charged anything more than your set monthly cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can I change plans whenever?
&lt;/h3&gt;

&lt;p&gt;You can deploy a new distribution or “migrate” an existing distribution and select a plan at any time, and the cost of that plan will be applied to the current billing cycle.&lt;/p&gt;

&lt;p&gt;If you want to change plans (downgrade or upgrade), the change will take effect in the next billing cycle.&lt;/p&gt;

&lt;p&gt;If you cancel a plan, the cancellation takes effect at the beginning of the next billing cycle, when the distribution reverts to pay-as-you-go pricing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can I have as many plans as I want?
&lt;/h3&gt;

&lt;p&gt;From the start, you will be able to have three free plans and 100 paid plans per account.&lt;/p&gt;

&lt;p&gt;I expect the paid-plan limit can be raised once enough customers request it, but for now, this is the limit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can I monitor if I am exceeding my plan's limit?
&lt;/h3&gt;

&lt;p&gt;Yes, there will be CloudWatch metrics for this, and AWS will also send you an email if you are approaching your plan's limit.&lt;/p&gt;
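&lt;p&gt;&lt;em&gt;If you want your own early warning on top of that, the check itself is trivial to script. A sketch of the logic, using the request allowances mentioned above (the 80% threshold is an arbitrary choice, and in practice you would feed it the request metrics from CloudWatch):&lt;/em&gt;&lt;/p&gt;

```python
# Sketch: warn when month-to-date usage approaches a plan's allowance.
# Request allowances are the ones mentioned above; the 80% warning
# threshold is an arbitrary choice.

PLAN_REQUEST_LIMITS = {
    "Pro": 10_000_000,
    "Business": 125_000_000,
    "Premium": 500_000_000,
}

def plan_utilization(requests_mtd, plan):
    """Fraction of the plan's monthly request allowance used so far."""
    return requests_mtd / PLAN_REQUEST_LIMITS[plan]

def should_warn(requests_mtd, plan, threshold=0.8):
    """True when usage has crossed the warning threshold."""
    return plan_utilization(requests_mtd, plan) >= threshold

print(should_warn(9_000_000, "Pro"))  # → True (90% of the Pro allowance)
```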

&lt;h3&gt;
  
  
  Can I use this with Amplify?
&lt;/h3&gt;

&lt;p&gt;No, not right now; Amplify has its own pricing model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can I move a Plan to another distribution?
&lt;/h3&gt;

&lt;p&gt;No. A plan is connected to a distribution and cannot be moved to another distribution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This is a significant improvement from AWS, making CloudFront costs easier to predict.&lt;/p&gt;

&lt;p&gt;Of course, there are some “features” I would have liked: the possibility to manage this with IaC, a solution for handling Blue/Green deployments without needing two different plans, and the ability to switch back to pay-as-you-go from a set plan whenever you want. But I believe these will be implemented once feedback reaches AWS after the release and they see what limitations customers are facing.&lt;/p&gt;

&lt;p&gt;All information in this blog post reflects what was available at the time of release on the 17th of November 2025. If something changes, I will try to update this blog post, but as always, check the &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; for the latest information.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudfront</category>
    </item>
    <item>
      <title>AWS Capabilities by Region</title>
      <dc:creator>Jens Båvenmark</dc:creator>
      <pubDate>Thu, 06 Nov 2025 22:47:38 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-capabilities-by-region-3ffb</link>
      <guid>https://dev.to/aws-builders/aws-capabilities-by-region-3ffb</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vvhfhr727uh21gv5n1a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vvhfhr727uh21gv5n1a.png" alt="AWS Capabilities by Region" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS just released a new service in Builder Center, &lt;a href="https://builder.aws.com/build/capabilities" rel="noopener noreferrer"&gt;AWS Capabilities by Region&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I believe this is a service many might miss, but for those who know it exists, it will be integral when deciding where to place workloads for migrations, disaster recovery regions, and expansions.&lt;/p&gt;

&lt;p&gt;So what does the service do? It lets you compare what services and features exist in different regions. And this will make planning much easier and help avoid unnecessary work in regions that won't support your workload.&lt;/p&gt;

&lt;p&gt;For example, it is not uncommon when implementing a disaster recovery (DR) region that you, in the middle of setting it up, run into issues where a service/feature available in the main region doesn't exist in the DR region, and you need to pivot to another region.&lt;/p&gt;

&lt;p&gt;And by using this service in the planning stage, we can avoid issues like that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Service and features
&lt;/h2&gt;

&lt;p&gt;The main feature of the service is the Service and features view, where you can compare different regions and see what services and features they offer.&lt;/p&gt;

&lt;p&gt;To use it, you select what regions you want to check and either select all services or specific ones.&lt;/p&gt;

&lt;p&gt;You will see a comparison of the availability of the service and its features between the regions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxcqgiy4jawffjw8lynyw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxcqgiy4jawffjw8lynyw.png" alt="View of Service and features view" width="700" height="535"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One other great feature is that you will also be able to see when a service/feature is planned to be deployed to the region.&lt;/p&gt;

&lt;p&gt;When I have used it, I have added all the services I am using, then selected the region I am currently in, and then selected the regions I am planning to migrate to. I quickly get an overview of which services and features will and will not work.&lt;/p&gt;
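&lt;p&gt;&lt;em&gt;The comparison view itself is a console tool, but if you want something similar in a script, AWS also publishes service availability per region as public SSM parameters under the /aws/service/global-infrastructure namespace. A rough sketch (assumes boto3 and AWS credentials; the service names are the short names used in those parameter paths):&lt;/em&gt;&lt;/p&gt;

```python
# Sketch: list the regions a service is available in, using the public
# SSM parameters under /aws/service/global-infrastructure.
# Needs boto3 and AWS credentials to actually run the lookup.

def service_regions_path(service):
    """Build the public SSM parameter path for a service's regions."""
    return f"/aws/service/global-infrastructure/services/{service}/regions"

def list_service_regions(service):
    import boto3  # imported here so the sketch loads without boto3 installed
    ssm = boto3.client("ssm", region_name="us-east-1")
    regions = []
    for page in ssm.get_paginator("get_parameters_by_path").paginate(
        Path=service_regions_path(service)
    ):
        regions.extend(p["Value"] for p in page["Parameters"])
    return sorted(regions)

if __name__ == "__main__":
    print(list_service_regions("s3"))
```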

&lt;h3&gt;
  
  
  API Operations
&lt;/h3&gt;

&lt;p&gt;Another feature of the service lets you see which API operations are available in each region and compare regions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F345wjytxs90du99jo21m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F345wjytxs90du99jo21m.png" alt="View of API Operations view" width="700" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  CloudFormation resources
&lt;/h3&gt;

&lt;p&gt;You can also view and compare available CloudFormation resources in the regions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqez1ohe9d9etn1doccm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqez1ohe9d9etn1doccm6.png" alt="view of CloudFormation view" width="700" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This is a great new service that I will use a lot when planning where to place workloads. It will make it easy to get an overview of where the resources you need are available.&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>Create AWS Diagrams with Kiro</title>
      <dc:creator>Jens Båvenmark</dc:creator>
      <pubDate>Mon, 28 Jul 2025 06:33:55 +0000</pubDate>
      <link>https://dev.to/aws-builders/create-aws-diagrams-with-kiro-29g5</link>
      <guid>https://dev.to/aws-builders/create-aws-diagrams-with-kiro-29g5</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wi4tcwzyy2t1ux9fuod.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wi4tcwzyy2t1ux9fuod.png" alt="Create AWS Diagrams with Kiro" width="700" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A short while ago, I wrote a blog post about &lt;a href="https://medium.com/@jens.bavenmark/create-aws-diagrams-with-python-and-q-in-the-cli-03c6f1c5acfc" rel="noopener noreferrer"&gt;Creating AWS Diagrams with Python and Amazon Q Developer in the CLI.&lt;/a&gt; After posting that blog, I got a question about whether I had tried to do the same with Kiro (Kiro is an AI‑powered IDE from AWS) with AWS MCP for diagrams ( &lt;a href="https://awslabs.github.io/mcp/servers/aws-diagram-mcp-server" rel="noopener noreferrer"&gt;https://awslabs.github.io/mcp/servers/aws-diagram-mcp-server&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;I had not, but when I heard about the MCP for Diagrams (which I will refer to as the MCP moving forward), I had to test it and decided to write this blog with the results. The MCP, as the name states, uses the same Python library, Diagrams, that we used in the previous blog to create the diagrams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set up the environment
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;First of all, I want to make it clear that I am a beginner with Kiro, and this was my first “project” with the IDE. So there might be better ways to set this up, but this is how I did it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We will need to install Kiro. Go to &lt;a href="https://kiro.dev/" rel="noopener noreferrer"&gt;https://kiro.dev/&lt;/a&gt; and follow the instructions to download and install it for your OS.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;At the time of writing this blog there is a waitlist to be able to download Kiro.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We then need to install the dependencies of Diagrams, which are Graphviz and Python (I will assume you have Python installed, otherwise, there are many guides on how to install it).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install graphviz # For Mac
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;If you are not using a Mac, follow the Graphviz&lt;/em&gt; &lt;a href="https://www.graphviz.org/download/" rel="noopener noreferrer"&gt;&lt;em&gt;instructions&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on how to install it.&lt;/em&gt;&lt;/p&gt;
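&lt;p&gt;&lt;em&gt;If you want to confirm that the dot binary from Graphviz actually ended up on your PATH before Kiro tries to render anything, a quick stdlib check is enough:&lt;/em&gt;&lt;/p&gt;

```python
# Quick check that the Graphviz "dot" binary is on PATH.
import shutil

def graphviz_available():
    """True if the dot executable that Diagrams needs can be found."""
    return shutil.which("dot") is not None

print("Graphviz found" if graphviz_available() else "Install Graphviz first")
```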

&lt;p&gt;We are then ready to configure Kiro to use the MCP and begin building diagrams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting the MCP configuration
&lt;/h2&gt;

&lt;p&gt;Before we can configure Kiro, we will need the MCPs configuration. For the AWS Diagram MCP Server, it can be found &lt;a href="https://awslabs.github.io/mcp/servers/aws-diagram-mcp-server" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The configuration we are looking for is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "mcpServers": {
    "awslabs.aws-diagram-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-diagram-mcp-server"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      },
      "autoApprove": [],
      "disabled": false
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configuring Kiro
&lt;/h3&gt;

&lt;p&gt;To use the MCP, we will need to configure Kiro to use it. You can &lt;a href="https://kiro.dev/docs/mcp/configuration/" rel="noopener noreferrer"&gt;configure&lt;/a&gt; MCP on a User level or on a Workspace level. We will configure it on a Workspace level in this blog.&lt;/p&gt;

&lt;p&gt;Open Kiro and select the Project we are going to use. For me, it was the Terraform module that I wanted to create the diagram for.&lt;/p&gt;

&lt;p&gt;Click on the Kiro logo in the left menubar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6mljr5qzdm8rbmaxjvz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6mljr5qzdm8rbmaxjvz.png" alt="Logo of Kiro in menubar" width="104" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will then gain access to additional Kiro configuration. At the bottom of that list, you will see MCP Servers. Select to edit this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy11m3siuvt1h0rz50mxk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy11m3siuvt1h0rz50mxk.png" alt="MCP Server list" width="684" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the JSON document that opens, select Workspace Config.&lt;/p&gt;

&lt;p&gt;Update it with the config we got from the MCP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ss0w7elmj7hnbrjrnzd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ss0w7elmj7hnbrjrnzd.png" alt="JSON added to MCP config" width="700" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you have saved it, you will see that it appears in the MCP Servers list.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F702l4xewpydhk3aa660u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F702l4xewpydhk3aa660u.png" alt="MCP server appears in list" width="700" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you click on one of the tools, for example, generate_diagram, a prompt will be automatically added to the session to ask Kiro to test the MCP tool: &lt;em&gt;Help me test the MCP tool generate_diagram from the awslabs.aws-diagram-mcp-server server.&lt;/em&gt; Now Kiro will test the tool and “learn” from it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; I'll help you test the generate_diagram tool from the awslabs.aws-diagram-mcp-server server.
&amp;gt; Let me start by getting some example code to understand the syntax, then generate a test diagram.
...
&amp;gt; Called MCP tool
...
&amp;gt; get_diagram_examples
&amp;gt; Now let me test the tool by generating a simple AWS architecture diagram:
...
&amp;gt; Called MCP tool
...
&amp;gt; generate_diagram
&amp;gt; Great! Now let me test with a more complex diagram using custom styling:
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is technically not needed. I tested adding the MCP without testing the tools, and it worked just as well. It might have taken some more time, though.&lt;/p&gt;

&lt;p&gt;You are now ready to use the MCP to create diagrams with Kiro.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Diagram
&lt;/h2&gt;

&lt;p&gt;I gave Kiro the simple prompt: &lt;em&gt;Can you now create me an AWS Diagram from the Terraform Module we are currently in.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It then started to create the diagram. It didn’t run into the import errors we had when not using the MCP, so the diagram was created a lot quicker. The only problem was that it was not in the workspace.&lt;/p&gt;

&lt;p&gt;When I asked Kiro where it was, it informed me that it was in the /tmp/ folder. And when I informed Kiro I wanted it in the current folder, it copied it there, but ran into some issues and got stuck. This happened multiple times when copying files, so it's probably something that will get fixed in a future version.&lt;/p&gt;

&lt;p&gt;But the diagram was created and looked good. Not perfect, but a great start and probably good enough for most.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Diagram
&lt;/h3&gt;

&lt;p&gt;Here is the finished diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frko5skbo6spzs7v8w5qy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frko5skbo6spzs7v8w5qy.png" alt="AWS Diagram for S3 backend" width="700" height="1199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Code
&lt;/h3&gt;

&lt;p&gt;Here is the code Kiro generated with the help of the MCP. As you will see, the code is not runnable by itself in its current form: it is missing imports, and according to Kiro, this is by design. The guide from the MCP instructs it to ignore imports and focus solely on the code. But if we want functional code, we only need to ask Kiro to fix this, so it’s not a big thing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;with Diagram("Terraform S3 Backend - Service Architecture", show=False, direction="TB"):
    # Entry Point
    terraform_cli = General("Terraform CLI")

    with Cluster("AWS Services"):
        # Primary Services
        with Cluster("Storage Services"):
            s3 = S3("Amazon S3")
            s3_bucket = S3("Backend Bucket")

        with Cluster("Database Services"):
            dynamodb = Dynamodb("Amazon DynamoDB")
            lock_table = DynamodbTable("terraform_state")

    with Cluster("Resource Configuration"):
        # S3 Features
        with Cluster("S3 Configuration"):
            versioning = S3("Versioning: Enabled")

        # DynamoDB Features
        with Cluster("DynamoDB Configuration"):
            hash_key = DynamodbAttribute("LockID (String)")
            capacity = Dynamodb("Lock")

    # Service Relationships
    terraform_cli &amp;gt;&amp;gt; Edge(label="stores state", color="green", style="bold") &amp;gt;&amp;gt; s3
    (
        terraform_cli
        &amp;gt;&amp;gt; Edge(label="manages locks", color="orange", style="bold")
        &amp;gt;&amp;gt; dynamodb
    )

    # Service to Resource Mapping
    s3 &amp;gt;&amp;gt; Edge(label="contains", color="green") &amp;gt;&amp;gt; s3_bucket
    dynamodb &amp;gt;&amp;gt; Edge(label="contains", color="orange") &amp;gt;&amp;gt; lock_table

    # Configuration Relationships
    (s3_bucket &amp;gt;&amp;gt; Edge(label="configured with", color="blue") &amp;gt;&amp;gt; [versioning])
    lock_table &amp;gt;&amp;gt; Edge(label="configured with", color="blue") &amp;gt;&amp;gt; [hash_key, capacity]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
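&lt;p&gt;&lt;em&gt;If you do want to run the generated script by hand, the missing imports would look roughly like this; the module paths follow the Diagrams library's layout, so verify the class names against its documentation:&lt;/em&gt;&lt;/p&gt;

```python
# Imports the generated script would need to run standalone
# (module paths per the Diagrams library; verify against its docs).
from diagrams import Cluster, Diagram, Edge
from diagrams.aws.database import Dynamodb, DynamodbAttribute, DynamodbTable
from diagrams.aws.general import General
from diagrams.aws.storage import S3
```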



&lt;h2&gt;
  
  
  Using the MCP with Amazon Q Developer
&lt;/h2&gt;

&lt;p&gt;When reading the instructions for setting up the MCP, I saw that it can be used with Amazon Q Developer as well. So I configured Q to use the MCP and asked it to create the same diagram. It was just as easy as with Kiro.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Q to use the MCP
&lt;/h3&gt;

&lt;p&gt;To configure Q to use the MCP, we need to update a configuration file with the MCP configuration. Add the same MCP configuration as we added to Kiro to &lt;code&gt;~/.aws/amazonq/mcp.json&lt;/code&gt; for a User configuration, or to &lt;code&gt;.amazonq/mcp.json&lt;/code&gt; in your workspace folder for a Workspace configuration.&lt;/p&gt;

&lt;p&gt;To test that the configuration works, we can list the loaded MCP servers in Q; we should see awslabs.aws-diagram-mcp-server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; qchat mcp listbash
&amp;gt; workspace:
  /Users/USERNAME/WORKSPACE/.amazonq/mcp.json
    (empty)

🌍 global:
  /Users/USERNAME/.aws/amazonq/mcp.json
    • awslabs.aws-diagram-mcp-server uvx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we can ask Q to create the diagram just as we did before, and it did so without any issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Diagram
&lt;/h3&gt;

&lt;p&gt;Here is the Diagram from Q Developer using the MCP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsggm28ynegwj6zcgb1vz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsggm28ynegwj6zcgb1vz.png" alt="AWS Diagram for S3 Backend" width="700" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Using the MCP with either Kiro or Amazon Q works really well and makes it easier to create the diagrams.&lt;/p&gt;

&lt;p&gt;I tested both with more advanced Terraform modules, and it works just as well, almost better in fact. The output can get a little weird sometimes, so I would suggest asking the AI to create multiple diagrams at different detail levels for you to choose from.&lt;/p&gt;

&lt;p&gt;Some of the issues I ran into with Kiro (diagrams landing in the wrong folder, code not created in the workspace, code not runnable manually) can all be handled with better prompting; my prompting was really basic.&lt;/p&gt;

&lt;p&gt;So if you are looking into building Diagrams with Code and using AI, I would suggest configuring it to use the AWS MCP for diagrams.&lt;/p&gt;

&lt;p&gt;But as always, when using AI to build something, don’t forget to learn the code it uses so you can modify it and address any issues with it.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ai</category>
      <category>kiro</category>
      <category>mcp</category>
    </item>
    <item>
      <title>Create AWS Diagrams with Python and Q in the CLI</title>
      <dc:creator>Jens Båvenmark</dc:creator>
      <pubDate>Thu, 24 Jul 2025 18:36:21 +0000</pubDate>
      <link>https://dev.to/aws-builders/create-aws-diagrams-with-python-and-q-in-the-cli-31d0</link>
      <guid>https://dev.to/aws-builders/create-aws-diagrams-with-python-and-q-in-the-cli-31d0</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fybw5fsco4kuwq41qelr2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fybw5fsco4kuwq41qelr2.png" alt="Create AWS Diagrams with Python and Q in the CLI" width="700" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can utilize Amazon Q Developer in the CLI to create your AWS Diagram from your Terraform code.&lt;/p&gt;

&lt;p&gt;This is how I made my first Diagrams with Code with the help of Amazon Q.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Just a quick note: This is not an “AI can create everything for me and I don’t need to know anything” blog. I used AI to help me create some diagrams, and it was so convenient that I wanted to share the process.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;The last part I always have to do after creating a Terraform module for AWS is to spend some time in draw.io to create a diagram of the AWS resources and their connections within the module for the documentation. Later, when I have to make a change to the module, I need to find the source document for the diagram to be able to update it. And this is not a good way to keep the documentation in the code.&lt;/p&gt;

&lt;p&gt;I had read about Diagram as Code but never looked into it. During a push to update the documentation of all my Terraform modules, I decided to see if it could help me not only speed up the work but also make the diagrams more accessible and easier to update.&lt;/p&gt;

&lt;p&gt;Since I often use Python, I decided to look into Diagrams (&lt;a href="https://diagrams.mingrammer.com/" rel="noopener noreferrer"&gt;https://diagrams.mingrammer.com&lt;/a&gt;) and was impressed by how easy the code was to understand. I started writing diagrams for my Terraform modules, and it worked well.&lt;/p&gt;

&lt;p&gt;But as always, I wondered if I could speed it up some more with help from AI.&lt;/p&gt;

&lt;p&gt;So I went to my “trusty” sidekick ChatGPT to ask it to create the code for me (to be honest, it is not as trusty anymore). I explained the module and the resources in it, and it created the code for me. When I ran the code, it failed with an import error (every Node (resource) you want to use in your diagram needs to be imported). I went back to ChatGPT with the error and got new code. And it failed again. After doing this back and forth a couple of times, my patience was spent.&lt;/p&gt;

&lt;p&gt;But then I remembered that I had started testing Amazon Q Developer in my CLI and wondered if it would be a better option. And that is what this blog post is about.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the environment
&lt;/h2&gt;

&lt;p&gt;First, let's install the programs and set up the environment. These instructions are for macOS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Q Developer
&lt;/h3&gt;

&lt;p&gt;You will need an AWS Builder ID (or IAM Identity Center user) to use Amazon Q Developer. The Builder ID is free. To create one, follow AWS &lt;a href="https://docs.aws.amazon.com/signin/latest/userguide/create-aws_builder_id.html" rel="noopener noreferrer"&gt;instructions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Follow AWS &lt;a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/command-line-installing.html" rel="noopener noreferrer"&gt;instructions&lt;/a&gt; to install Amazon Q Developer and give it access to the CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Environment and Diagrams
&lt;/h3&gt;

&lt;p&gt;Create a folder for your code and diagrams.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir terraform_diagrams
cd terraform_diagrams
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I suggest setting up a virtual environment with venv before installing dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 -m venv virtual_env
source virtual_env/bin/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we will install Diagrams and its dependencies. Diagrams requires Graphviz to render the diagrams.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install graphviz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we will install Diagrams.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install diagrams
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we are ready to create some diagrams. I would suggest reading the official &lt;a href="https://diagrams.mingrammer.com/docs/guides/diagram" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; for Diagrams so you are aware of how it works, since that will help you understand the code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon Q Developer creates diagrams
&lt;/h2&gt;

&lt;p&gt;This time, I started Q on the CLI and asked it to review a Terraform module and create the code for the diagram for me.&lt;/p&gt;

&lt;p&gt;My prompt was: &lt;em&gt;Can you check this Terraform module ${Path To Module} and create the code for me to create a AWS Diagram with Python diagrams package.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Q then started reading the files in the module and the README.md file to understand what the module was used for and how it worked.&lt;/p&gt;

&lt;p&gt;Then it checked whether Diagrams was installed and stated it would install it otherwise (I'm not sure if it would actually try, but I wouldn't suggest allowing AI to install packages on your computer). Since it was installed, it continued by creating the code and running it to create the diagrams.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You will be prompted to allow Q to perform actions, and I would suggest not letting all actions run without reviewing them. It will take some more time but give you more control.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And Q got the same import error as ChatGPT did (importing the SNS node). But here is where Q really impressed me. It tried to fix the imports twice and failed both times. It then decided to check the Diagrams package itself for the available AWS modules.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; Let's check the available modules in the diagrams package
&amp;gt; find virtual_env/lib/python3.12/site-packages/diagrams/aws -name "*.py" | grep -v "__" | sort
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It then checked through the modules and their classes, looking for the SNS node class, and found it in the integration module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; Let's check the available classes in the engagement module
&amp;gt; grep -A 50 "class" virtual_env/lib/python3.12/site-packages/diagrams/aws/engagement.py
....
&amp;gt; Let's check the integration module for SNS
&amp;gt; grep -A 50 "class" virtual_env/lib/python3.12/site-packages/diagrams/aws/integration.py
....
&amp;gt; Great! Now I can see that SNS is in the integration module. Let's update our script
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It then updated the code and ran it again, now getting new errors because other imports were also wrong. It went through the modules again, found the correct node classes, and updated the code until it worked.&lt;/p&gt;

&lt;p&gt;It then ran the final code and created a diagram for my Terraform module, which creates an S3 bucket and DynamoDB to use as a Terraform backend.&lt;/p&gt;

&lt;p&gt;Was the diagram perfect? No. But it was a great start. And if you then ask Q to fix the issues you have with the diagram, or fix them yourself, you can get it to the point where you are ready to add it to your documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  The diagram
&lt;/h3&gt;

&lt;p&gt;Here is the diagram that Q created for me.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8o7y2lwh6mmzqhs33q7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8o7y2lwh6mmzqhs33q7.png" alt="Diagram of Terraform module" width="700" height="1006"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The code
&lt;/h3&gt;

&lt;p&gt;Here is the code that Q created for this diagram. Note that it imports classes that are not used, so it is not perfect, but good enough.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from diagrams import Diagram, Cluster, Edge
from diagrams.aws.storage import S3
from diagrams.aws.database import DynamodbTable
from diagrams.aws.management import Config
from diagrams.aws.devtools import Codebuild
from diagrams.aws.compute import Lambda
from diagrams.onprem.client import User

# Create the diagram
with Diagram("AWS S3 Backend for Terraform State", show=True, direction="TB"):

    # Terraform users/clients
    with Cluster("Terraform Clients"):
        terraform_users = [
            User("Developer 1"),
            User("Developer 2"),
            User("CI/CD Pipeline"),
        ]

    # Terraform state management
    with Cluster("Terraform State Management"):
        terraform = Config("Terraform")

    # S3 Backend components
    with Cluster("S3 Backend Infrastructure"):
        # S3 bucket with versioning
        s3_bucket = S3("State Bucket\nwith Versioning")

        # DynamoDB for state locking
        dynamodb = DynamodbTable("DynamoDB\nState Lock Table")

    # Connect components
    for user in terraform_users:
        user &amp;gt;&amp;gt; Edge(label="terraform apply") &amp;gt;&amp;gt; terraform

    terraform &amp;gt;&amp;gt; Edge(label="read/write state") &amp;gt;&amp;gt; s3_bucket
    terraform &amp;gt;&amp;gt; Edge(label="acquire/release lock") &amp;gt;&amp;gt; dynamodb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;I was quite amazed that Q looked through the actual package for the classes we needed and found them. It worked much better than I predicted, and I didn't have to do the regular AI back-and-forth dance.&lt;/p&gt;

&lt;p&gt;But don't just let Q create the diagrams for you; learn the code as well, so you can polish the diagram after Q creates the initial version.&lt;/p&gt;

&lt;p&gt;A final note when using Amazon Q in the CLI: remember to commit your changes between runs, as it can overwrite the files it’s working on, and you may end up trading working code for broken code if you are not careful. But you can use Amazon Q to commit the files for you.&lt;/p&gt;
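&lt;p&gt;A minimal sketch of that checkpoint habit (the repository, file name, and commit message below are just examples); in your real project you only need the two git commands before each Q session:&lt;/p&gt;

```shell
# Demonstration in a throwaway repo; in practice, run the add/commit pair
# inside your actual project before letting Q edit files.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo 'print("diagram placeholder")' > diagram.py
# The checkpoint: stage everything and commit before the next Q run.
git add -A
git commit -q -m "checkpoint before Amazon Q run"
git log --oneline
```

If a Q run then breaks the script, `git checkout -- diagram.py` restores the last working version.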

</description>
      <category>aws</category>
      <category>ai</category>
      <category>diagrams</category>
    </item>
    <item>
      <title>LinkedIn for New Technicians</title>
      <dc:creator>Jens Båvenmark</dc:creator>
      <pubDate>Mon, 21 Jul 2025 11:01:37 +0000</pubDate>
      <link>https://dev.to/jbvk/linkedin-for-new-technicians-1oel</link>
      <guid>https://dev.to/jbvk/linkedin-for-new-technicians-1oel</guid>
      <description>&lt;p&gt;Create the best profile you can to set yourself up for success.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxwtmd99wyfwvexz706m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxwtmd99wyfwvexz706m.png" alt="Image of Junior Technicians using LinkedIn" width="700" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my last blog post, “ &lt;a href="https://medium.com/new-ish-to-newbies-navigating-devops-together/growing-your-professional-persona-122a160adbbc" rel="noopener noreferrer"&gt;Growing your Professional Persona&lt;/a&gt;”, for &lt;a href="https://medium.com/new-ish-to-newbies-navigating-devops-together" rel="noopener noreferrer"&gt;New-ish to Newbies: Navigating DevOps Together&lt;/a&gt;, I discussed the importance of LinkedIn. However, I felt that there was more to say about LinkedIn, so I have decided to focus this post on your LinkedIn profile and how it can help you as a new IT technician. The post is focused on helping new technicians, but many of the tips can also help more experienced technicians.&lt;/p&gt;

&lt;p&gt;I have spoken with several IT recruiters, and many more responded to a questionnaire I sent to gather the information I wrote about in this post. This information comes directly from the individuals who will review your profile and decide whether to proceed with you for their available positions. All the recruiters work with technicians in Sweden, but some also recruit for companies outside the country.&lt;/p&gt;

&lt;p&gt;We will look at these different parts of the profile and what to think about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Headline&lt;/li&gt;
&lt;li&gt;About&lt;/li&gt;
&lt;li&gt;Experience&lt;/li&gt;
&lt;li&gt;Projects&lt;/li&gt;
&lt;li&gt;Skills&lt;/li&gt;
&lt;li&gt;Recommendations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I will outline the key points that recruiters emphasized in each section, as well as common mistakes to avoid.&lt;/p&gt;

&lt;p&gt;I will then provide a summary of the key points.&lt;/p&gt;

&lt;p&gt;On some of the questions, the recruiters had very divided opinions, and I will be sure to note that when presenting their answers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Headline Section
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Key points recruiters find attractive in LinkedIn Headlines
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Mention your area of expertise or specialization (e.g., “Java Developer,” “Azure Architect,” or “Senior DevOps Engineer”).&lt;/li&gt;
&lt;li&gt;Avoid overly generic or broad titles (e.g., refrain from using “IT Engineer” or “Consultant” without providing context).&lt;/li&gt;
&lt;li&gt;Use job titles that directly match or closely align with the roles you want to be found for. Recruiters often search using specific skills or job titles.&lt;/li&gt;
&lt;li&gt;Indicate seniority when applicable (e.g., “Senior DevOps Engineer,” “Chief Cloud Architect”).&lt;/li&gt;
&lt;li&gt;It’s beneficial if you reflect genuine enthusiasm or career goals in your headline (e.g., “Aspiring DevOps Professional” or “DevOps Specialist with Love for Automation”).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Common mistakes to avoid
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Avoid overstating or exaggerating your seniority or expertise. Don’t claim advanced titles like “Cloud Architect” if your experience doesn’t match.&lt;/li&gt;
&lt;li&gt;Headlines that include too many certifications or details appear cluttered. Keep it succinct and easy to read.&lt;/li&gt;
&lt;li&gt;Don’t use overly broad or vague titles (“Consultant” without clear context can confuse recruiters).&lt;/li&gt;
&lt;li&gt;Ensure your title accurately represents your role and responsibilities.&lt;/li&gt;
&lt;li&gt;Avoid explicitly mentioning “Junior” in the title. Instead, position internships or entry-level roles in a professional and appealing way.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Asking recruiters to rate different headlines
&lt;/h3&gt;

&lt;p&gt;I gave the recruiters four different headlines I had seen on LinkedIn and asked them to rate them from 1 to 5.&lt;/p&gt;

&lt;p&gt;“CloudOps Engineer | AWS Certified (5x) | Terraform Certified | Focus on Secure Automation” — &lt;strong&gt;4.25&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“CloudOps Engineer @ Company” — &lt;strong&gt;3.88&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“CloudOps Engineer” — &lt;strong&gt;3.25&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“Creating the future for digital work” — &lt;strong&gt;1.38&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;A concise and clear headline is most attractive.&lt;/p&gt;

&lt;p&gt;Ensure your job title is clear and specific, matching the position you are seeking. Specify your area of expertise, but do not exaggerate your seniority or expertise level.&lt;/p&gt;

&lt;h2&gt;
  
  
  About Section
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Compelling Details &amp;amp; Themes Recruiters Look For
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Clearly state your current role, primary responsibilities, and technical expertise.&lt;/li&gt;
&lt;li&gt;Clearly state what motivates you professionally and your career aspirations.&lt;/li&gt;
&lt;li&gt;Share your passion or enthusiasm for specific technologies or areas within the tech industry.&lt;/li&gt;
&lt;li&gt;Briefly describe your significant accomplishments and how you personally contributed to them.&lt;/li&gt;
&lt;li&gt;Recruiters highly value soft skills — qualities such as teamwork, problem-solving, communication, and adaptability.&lt;/li&gt;
&lt;li&gt;Include personal touches to stand out, but keep them relevant and meaningful (avoid clichés like “coffee lover”).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Recommended Length &amp;amp; Detail Level
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Keep it direct, clear, and easy to read.&lt;/li&gt;
&lt;li&gt;Approximately 3–5 lines or a short, focused paragraph is ideal.&lt;/li&gt;
&lt;li&gt;Avoid “fluff” — every detail included should add genuine value or insight.&lt;/li&gt;
&lt;li&gt;Provide enough detail to distinguish yourself from peers, especially for new technicians with limited experience.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Common Pitfalls &amp;amp; Clichés to Avoid
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Avoid overly polished AI-written bios; recruiters can easily identify inauthentic content.&lt;/li&gt;
&lt;li&gt;Utilize AI tools for support, but ensure the content accurately reflects your genuine personality and professional identity.&lt;/li&gt;
&lt;li&gt;Write in the first person to maintain authenticity and engagement.&lt;/li&gt;
&lt;li&gt;Avoid extensive personal hobbies or childhood anecdotes that don’t directly relate to your professional skills.&lt;/li&gt;
&lt;li&gt;Steer clear of clichés such as “building computers since childhood” or vague statements like “I want to join a company to develop my skills.”&lt;/li&gt;
&lt;li&gt;Avoid begging language; emphasize what value you bring rather than what you seek.&lt;/li&gt;
&lt;li&gt;Don’t leave the “About” section empty or overly vague; it’s a key opportunity to highlight your distinct professional profile.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Where the headline makes the recruiter click on your profile, the About section will make them start to know you.&lt;/p&gt;

&lt;p&gt;Keep it clear and easy to read. Do not write a book and avoid fluff.&lt;/p&gt;

&lt;p&gt;Clearly state your current role, expertise, and what motivates you. Do not forget about your soft skills.&lt;/p&gt;

&lt;p&gt;Only the first three lines are shown unless the reader expands the section, so make sure the most important information is there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experience Section
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Duration &amp;amp; Job Stability&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;How long you’ve worked at each job to assess stability and progression.&lt;/li&gt;
&lt;li&gt;Identifying a clear career progression or “red thread.”&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Job Titles &amp;amp; Keywords&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A quick scan of your job titles to see immediate alignment with roles they’re recruiting for.&lt;/li&gt;
&lt;li&gt;Relevant technical keywords and specific technologies used (e.g., AWS, Azure, Terraform, programming languages).&lt;/li&gt;
&lt;li&gt;Some recruiters interpret skill order as an indicator of proficiency, so list your most relevant skills prominently.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Responsibilities are generally preferred, but accomplishments are valuable as well
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Recruiters strongly prefer to see a clear summary of your role’s primary responsibilities outlined.&lt;/li&gt;
&lt;li&gt;Responsibilities provide immediate insight into what you have done regularly and can quickly show alignment with a new role.&lt;/li&gt;
&lt;li&gt;Specific projects or achievements demonstrate the tangible value you’ve provided in your roles.&lt;/li&gt;
&lt;li&gt;Ideal descriptions blend both but emphasize your daily responsibilities clearly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Balanced Approach&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The best practice is to combine responsibilities (primary) and accomplishments/projects (secondary but significant) to convey comprehensive professional value clearly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Commonly Overlooked Details &amp;amp; Pitfalls:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Avoid overly vague descriptions (e.g., “I worked as a Java developer” alone isn’t sufficient).&lt;/li&gt;
&lt;li&gt;Not clearly stating specific tasks, tools, and technologies used.&lt;/li&gt;
&lt;li&gt;Essential skills and accomplishments should appear at the beginning of the description. Recruiters often skim-read quickly and may miss details placed later in the text.&lt;/li&gt;
&lt;li&gt;Avoid descriptions that merely state what the company does. Recruiters want to know what &lt;strong&gt;you&lt;/strong&gt; did specifically, how you did it, and why it matters professionally.&lt;/li&gt;
&lt;li&gt;Ensure descriptions directly relate to the job roles you want to attract, as irrelevant experiences dilute the profile’s focus.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Be specific when describing your responsibilities and the tasks you performed in the position and the technologies you worked with.&lt;/p&gt;

&lt;p&gt;While it can be hard as a Junior Technician to have a clear career progression in IT, try to focus on the skills in the jobs where you have worked that closely align with the requirements for the career you are looking for. An earlier job might not have required the technical skills needed, but many of the soft skills or management skills you developed can be “translated” to a new career.&lt;/p&gt;

&lt;h2&gt;
  
  
  Projects Section
&lt;/h2&gt;

&lt;p&gt;The importance of projects was split among the recruiters. Some saw them as important, while others didn't see the importance of them, especially if they were not directly relevant to the role they were looking for.&lt;/p&gt;

&lt;h3&gt;
  
  
  Importance of Personal/Open-source Projects:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Average importance rating&lt;/strong&gt;: 5 out of 10 (mixed opinions)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High importance&lt;/strong&gt;: Some recruiters value these projects highly (ratings: 7, 8, 9) as indicators of initiative, skill, and passion.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low importance&lt;/strong&gt;: Others view them as less impactful (ratings: 1, 3) unless directly relevant to the role.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Most Relevant Project Details Recruiters Look For
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Purpose/Problem&lt;/strong&gt;: Clearly describe what problem the project aims to solve.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Role/Responsibility&lt;/strong&gt;: Specifically outline your individual contributions and responsibilities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tech Stack&lt;/strong&gt;: Clearly state the technologies/tools used and justify their choice.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Results/Impact&lt;/strong&gt;: Summarize what was achieved or the current status of the project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relevance&lt;/strong&gt;: Explain how the project aligns with your career goals or targeted roles.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Recruiters prefer clarity and specificity in these details to better assess your capabilities and independence in handling tasks relevant to potential roles.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It’s beneficial to include links to GitHub, websites, or demos if they clearly showcase relevant, updated, and high-quality work.&lt;/li&gt;
&lt;li&gt;Avoid linking if your profiles or projects are outdated, incomplete, or do not positively enhance your overall professional image.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;While some recruiters do not see the Projects section as that important, some do. And when you describe your project, be sure to clearly explain what problem it solves and how you solved it. Include links. Keep in mind that many recruiters are not technicians themselves, so the explanations need to be clear to non-technicians.&lt;/p&gt;

&lt;h2&gt;
  
  
  Skills Section
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Importance of Listing Skills
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Average importance rating&lt;/strong&gt;: 8.6 out of 10&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Most recruiters strongly emphasize the importance of explicitly listing skills due to:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enhanced visibility through searchability (keywords).&lt;/li&gt;
&lt;li&gt;Immediate clarity of your primary technical competencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Recruiter Perception of Skill Endorsements
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Moderate Impact:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Skill endorsements have some influence but are not decisive.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Positive Impact&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Endorsements by relevant colleagues or credible sources strengthen perceived credibility.&lt;/li&gt;
&lt;li&gt;High endorsements in key skills can make candidates more attractive initially.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Limited Impact&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Recruiters generally don’t rely heavily on endorsements, especially if the profile clearly outlines skills elsewhere.&lt;/li&gt;
&lt;li&gt;Generic endorsements or those from non-technical contacts have minimal value.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Listing your skills is important, especially for recruiters searching for specific skills. It also clarifies the technical competencies you possess.&lt;/p&gt;

&lt;p&gt;Endorsements of skills can have some influence, but they need to come from colleagues or other credible sources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Recommendations Section
&lt;/h2&gt;

&lt;p&gt;The importance of recommendations was also a point of contention, with recruiters divided on the issue.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do Recruiters Read Recommendations?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Most recruiters rarely or occasionally glance at recommendations, but they’re generally not decisive.&lt;/li&gt;
&lt;li&gt;Some recruiters read them, but often after already making an initial decision on whether to contact the candidate or not.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Impact of Recommendations:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Average impact rating&lt;/strong&gt;: 5.5 out of 10 (varied significantly from very low to very high importance)&lt;/li&gt;
&lt;li&gt;Some recruiters rate them highly impactful (8–10), while others see minimal influence (2–6).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Should Technicians Actively Seek Recommendations?
&lt;/h3&gt;

&lt;p&gt;Yes, but selectively.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Recommendations from well-respected individuals or those known in the tech community lend credibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Prioritize obtaining recommendations from:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Technical colleagues or peers with direct experience of your skills.&lt;/li&gt;
&lt;li&gt;Managers who can describe both technical and soft skills, though technical endorsements typically carry more weight.&lt;/li&gt;
&lt;li&gt;Recent, relevant recommendations (avoid outdated ones).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What Makes a Recommendation Credible and Impactful?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Specific and skill-focused&lt;/strong&gt;: Clearly describes the candidate’s technical skills, strengths, and the value they brought to projects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Positive yet authentic&lt;/strong&gt;: Uniformly positive recommendations make candidates appealing, but they must feel genuine and specific rather than generic praise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tech-focused descriptions&lt;/strong&gt;: Recommendations emphasizing direct technical contributions or problem-solving abilities are most impactful.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Recommendations can be impactful with the right recruiter, but they will probably not prompt one to reach out if your profile has not piqued their interest up to that point. However, if it has, they can help carry you the rest of the way.&lt;/p&gt;

&lt;p&gt;Ensure your recommendations are recent and from individuals with relevant experience in your field.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;I hope this will help some of you on your IT journey. It became a lot more text than I had anticipated, but after compiling all the answers from the questionnaire (with help from AI), I didn't want to edit too much away, as the recruiters had a lot to say about the different sections.&lt;/p&gt;

&lt;p&gt;Two important notes that the recruiters made that I want to share as the final points are:&lt;/p&gt;

&lt;p&gt;Ensure your LinkedIn profile aligns with your CV, and make it personal. The LinkedIn profile is about you.&lt;/p&gt;

</description>
      <category>career</category>
    </item>
    <item>
      <title>AWS Alert Validation - Lambda</title>
      <dc:creator>Jens Båvenmark</dc:creator>
      <pubDate>Mon, 19 May 2025 08:02:58 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-alert-validation-lambda-243l</link>
      <guid>https://dev.to/aws-builders/aws-alert-validation-lambda-243l</guid>
      <description>&lt;p&gt;We are continuing the blog series about testing your AWS alarms. The first part of the series, which looked at CloudWatch actions and EC2 alarms, can be found &lt;a href="https://dev.to/aws-builders/aws-alert-validation-ec2-45bn"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An untested alarm is not one you can trust.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This time we will look at alarms for your Lambda functions. As before, we will test the alarms by “breaking” the Lambda so you get the same outcome as when a real issue occurs.&lt;/p&gt;

&lt;p&gt;Since this is Lambda, we will add code (or entire Lambdas) to make the Lambda act as we want. I will use Python, but the logic works for all the other supported languages.&lt;/p&gt;
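&lt;p&gt;As a sketch of the shape every test Lambda in this series follows (the event-based switch and the names here are hypothetical, for illustration only; the real examples in the repo each hard-code a single failure mode): each handler deliberately misbehaves in exactly one way so that only one alarm fires.&lt;/p&gt;

```python
# Hypothetical illustration of the test-Lambda pattern; real test Lambdas
# hard-code one failure mode instead of switching on the event payload.
def lambda_handler(event, context):
    failure_mode = event.get("mode", "error")  # illustrative switch only
    if failure_mode == "error":
        # Raising an exception increments the Lambda "Errors" metric,
        # which is what the Error alarm watches.
        raise Exception("Triggered error alarm for testing purposes.")
    return {"statusCode": 200, "body": "no failure injected"}
```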

&lt;p&gt;I have an &lt;a href="https://github.com/JBVK/AWSAlertValidation" rel="noopener noreferrer"&gt;examples repo&lt;/a&gt; where you can find Terraform code to deploy Lambas and required resources to AWS to test the alarms. You will need to connect your alarms to the Lambda functions, though.&lt;/p&gt;

&lt;p&gt;Remember to lower the thresholds on your alarms so they trigger more easily. If they trigger for one value, they will trigger for your real value as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda alarms
&lt;/h2&gt;

&lt;p&gt;The alarms we are going to look at are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Error Alarm&lt;/li&gt;
&lt;li&gt;Throttling Alarm&lt;/li&gt;
&lt;li&gt;Timeout Alarm&lt;/li&gt;
&lt;li&gt;High Duration Alarm&lt;/li&gt;
&lt;li&gt;Out of Memory Alarm&lt;/li&gt;
&lt;li&gt;Log Alarm&lt;/li&gt;
&lt;li&gt;Failed Lambda message to DLQ Alarm&lt;/li&gt;
&lt;li&gt;Dead Letter Failure Alarm&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Error Alarm
&lt;/h3&gt;

&lt;p&gt;The Error alarm is the most common Lambda alarm. To test it we just need to crash the Lambda or exit it in a TODO state.&lt;/p&gt;

&lt;p&gt;We will manage this by running a Lambda that is monitored with this code snippet, making it crash:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def lambda_handler(event, context):
    raise Exception("Triggered error alarm for testing purposes.")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An example Lambda can be found &lt;a href="https://github.com/JBVK/AWSAlertValidation/tree/main/Lambda/ErrorAlarm" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To trigger the alarm, invoke the Lambda with this CLI command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws lambda invoke --function-name {FunctionName} outfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Throttling Alarm
&lt;/h3&gt;

&lt;p&gt;To test throttling, we need to deploy a Lambda with a reserved concurrency limit set to one, so only one Lambda can be run at a time.&lt;/p&gt;

&lt;p&gt;I would suggest having the Lambda run a sleep or similar to keep it running so you can trigger multiple runs easily.&lt;/p&gt;

&lt;p&gt;Example Lambda and Terraform can be found &lt;a href="https://github.com/JBVK/AWSAlertValidation/tree/main/Lambda/ThrottlingAlarm" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To trigger the alarm, we will need to invoke the Lambda at least two time by running this command simultaneously in multiple terminals.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws lambda invoke --function-name {FunctionName} outfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Timeout Alarm
&lt;/h3&gt;

&lt;p&gt;If you are monitoring for Lambda timeouts (timeout creates log entries that can be checked for with a metric filter) we can test that alarm with just adding a sleep in a Lambda that is longer than the configured timeout.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import time

def lambda_handler(event, context):
    time.sleep(25)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example Lambda and Terraform can be found &lt;a href="https://github.com/JBVK/AWSAlertValidation/tree/main/Lambda/TimeoutAlarm" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To trigger the alarm, invoke the Lambda with this CLI command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws lambda invoke --function-name {FunctionName} outfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  High Duration Alarm
&lt;/h3&gt;

&lt;p&gt;To test for Lambda that takes a long time to finish (high duration), we will use the same setup as for Timeout Alarms, but we will set the Lambda timeout to longer than the sleep set in the Lambda.&lt;/p&gt;

&lt;p&gt;Make sure to set your alarm threshold lower than the time set to sleep in the Lambda function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import time

def lambda_handler(event, context):
    time.sleep(15)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example Lambda and Terraform can be found &lt;a href="https://github.com/JBVK/AWSAlertValidation/tree/main/Lambda/HighDurationAlarm" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To trigger the alarm, invoke the Lambda with this CLI command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws lambda invoke --function-name {FunctionName} outfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Out Of Memory Alarm
&lt;/h3&gt;

&lt;p&gt;If you are monitoring for Out Of Memory (OOM) events on your Lambdas (when your Lambdas are using more memory than they are assigned, they will crash and log: Error Type: Runtime.OutOfMemory), we will run a Lambda that will use more memory than it has been assigned.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def lambda_handler(event, context):
    mem_size_mb = 128

    # Allocate memory slightly over the limit
    bytes_to_allocate = (mem_size_mb + 10) * 1024 * 1024  # exceed by 10 MB
    memory_hog = "X" * bytes_to_allocate
    return len(memory_hog)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example Lambda and Terraform can be found &lt;a href="https://github.com/JBVK/AWSAlertValidation/tree/main/Lambda/OutOfMemoryAlarm" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To trigger the alarm, invoke the Lambda with this CLI command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws lambda invoke --function-name {FunctionName} outfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Log Alarm
&lt;/h3&gt;

&lt;p&gt;To test log alarms from Lambdas, we just need to run a Lambda that logs what your metric filter is checking for.&lt;/p&gt;

&lt;p&gt;So if for example, you have a metric filter for the string “This is a test log line” run this code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info("This is a test log line")

    return {"statusCode": 200, "body": "Test completed successfully."}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example Lambda and Terraform can be found &lt;a href="https://github.com/JBVK/AWSAlertValidation/tree/main/Lambda/LogAlarm" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To trigger the alarm, invoke the Lambda with this CLI command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws lambda invoke --function-name {FunctionName} outfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Failed Lambda async message to DLQ alarm
&lt;/h3&gt;

&lt;p&gt;If you send the event triggering the Lambda to an SQS Dead Letter Queue (DLQ) if the Lambda fails, and monitor whether the DLQ gets messages, we can test it the same way we did with testing error alarms.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def lambda_handler(event, context):
    raise Exception("Triggered error alarm for testing purposes.")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example Lambda and Terraform can be found &lt;a href="https://github.com/JBVK/AWSAlertValidation/tree/main/Lambda/DLQAlarm" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To trigger the alarm, invoke the Lambda with this CLI command (DLQ only works for asynchronous invocations).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws lambda invoke --function-name {FunctionName}--invocation-type Event output.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The DLQ can take a little time to report the message, so do not stress if you don't see the message there straight away.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dead Letter Errors Alarm
&lt;/h3&gt;

&lt;p&gt;If you are monitoring for failure to send messages to the DLQ (DeadLetterError) in case of async event failures with your Lambda, we can test it almost the same way as with the DLQ alarm above.&lt;/p&gt;

&lt;p&gt;The difference will be that we will remove the IAM permission for the Lambda to publish the message to the DLQ. This will trigger any monitoring set on the metric DeadLetterErrors.&lt;/p&gt;

&lt;p&gt;Example Lambda and Terraform can be found &lt;a href="https://github.com/JBVK/AWSAlertValidation/tree/main/Lambda/DeadLetterErrorAlarm" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;After deploying the Lambda with the DLQ, you will need to remove the permissions for the Lambda to post to the DLQ. Terraform will block the setup of the Lambda with DLQ configured if it doesn't have access to post to it.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Testing that your alarms work as you expect can save you a lot of headaches in the future.&lt;/p&gt;

&lt;p&gt;I hope that these tests will make your Lambda monitoring more secure.&lt;/p&gt;

&lt;p&gt;This was the second part in this series. In the &lt;a href="https://dev.to/aws-builders/aws-alert-validation-ec2-45bn"&gt;first part&lt;/a&gt;, we looked at tests for EC2 alarms and CloudWatch actions. In the upcoming part we will look at alarms for other resources.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudwatch</category>
      <category>lambda</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>AWS Alert Validation - EC2</title>
      <dc:creator>Jens Båvenmark</dc:creator>
      <pubDate>Thu, 08 May 2025 11:23:31 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-alert-validation-ec2-45bn</link>
      <guid>https://dev.to/aws-builders/aws-alert-validation-ec2-45bn</guid>
      <description>&lt;p&gt;For monitoring, the golden rule (at least in my opinion) is that an untested alarm is not one you can trust.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An untested alarm is not one you can trust&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this blog series, I will describe different ways you can test your monitoring, both metric and service monitoring, to be sure that it works as you want.&lt;/p&gt;

&lt;p&gt;So, what will that entail? Well, you will need to break stuff—at least enough so that your alarms will trigger.&lt;/p&gt;

&lt;p&gt;Testing monitoring can be time-consuming especially if you want to test real alarms. Usually, you don’t want to get notified for every spike on an EC2, but more if it persists over a set time. And checking if scheduled tasks work will require you to wait for the schedule to run.&lt;/p&gt;

&lt;p&gt;Before we start going through different tests, I suggest you run these tests in a non-production account, and that all monitoring (and its dependencies) is deployed with IaC. That way, you can be sure that the monitoring you have tested in your Dev account will work in your production account as well.&lt;/p&gt;

&lt;p&gt;In this first part of this blog series, we will examine testing EC2 alarms and ensuring that your CloudWatch alarm actions are triggered correctly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing CloudWatch Actions
&lt;/h2&gt;

&lt;p&gt;One common thing many want to test is whether they will receive a notification when their CloudWatch alarm is triggered, whether it triggers their Lambda as expected, and whether CloudWatch can trigger the action they have specified.&lt;/p&gt;

&lt;p&gt;We usually want to test this without triggering the real alarm, as that can be time-consuming. We can easily do this with the AWS CLI.&lt;/p&gt;

&lt;p&gt;With the CLI, we will change the state of the CloudWatch alarm to Alarm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudwatch set-alarm-state --alarm-name "AlarmName" --state-reason "Testing alarm" --state-value ALARM
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will trigger the action you specified on your alarm when in state ALARM.&lt;/p&gt;

&lt;p&gt;You will also get to test the action you set for the OK state. When your CloudWatch alarm checks the required metric against the threshold within the specified period for the alarm, it returns an OK state (since the metric should be at an OK level compared to the threshold) and triggers the action.&lt;/p&gt;

&lt;p&gt;If you don’t want to wait for the period to pass, you can also send the same CLI command again, but with the OK status.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudwatch set-alarm-state --alarm-name "AlarmName" --state-reason "Testing alarm" --state-value OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to test for actions for missing data (insufficient data), then set the state to INSUFFICIENT_DATA.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudwatch set-alarm-state --alarm-name "AlarmName" --state-reason "Testing alarm" --state-value INSUFFICIENT_DATA
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing EC2 larms
&lt;/h2&gt;

&lt;p&gt;We will look at how you can test the most common alarms for EC2 by triggering them by increasing the metric monitored by utilizing special applications or commands to mimic usage on the EC2 instance (we will test on Linux instances)&lt;/p&gt;

&lt;p&gt;The application we will use is called &lt;em&gt;stress-ng.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing stress-ng
&lt;/h3&gt;

&lt;p&gt;To install &lt;em&gt;stress-ng&lt;/em&gt;, run this command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Linux/RHEL/CentOS/Fedora/Rocky&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dnf install stress-ng
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Ubuntu/Debian&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install stress-ng
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You don't need to do anything more than install the application. We will look into the commands when testing the different alarms.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Before starting testing the alarms, I suggest you modify the thresholds on your alarms to make them easier to trigger. If they trigger on a higher or lower threshold, they will trigger on the correct threshold as well.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;All &lt;em&gt;stress-ng&lt;/em&gt; commands are “run forever,” so remember to cancel them with CTRL+c when your alarm triggers.&lt;/p&gt;

&lt;h2&gt;
  
  
  CPU
&lt;/h2&gt;

&lt;p&gt;To test CPU alarms, we will mimic CPU load with the &lt;em&gt;stress-ng&lt;/em&gt; application. In these examples, we will trigger a CPU usage alarm by running all cores on the EC2 to a set percentage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo stress-ng --cpu {Number of cpus} --cpu-load {Load in percentage per cpu}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All tests are run on a burstable EC2 instance with two cores.&lt;/p&gt;

&lt;h3&gt;
  
  
  CPU Usage
&lt;/h3&gt;

&lt;p&gt;To test CPU usage, I have lowered the alarm's threshold to 50%, so I will run the test at 75%.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo stress-ng --cpu 2 --cpu-load 75
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  CPU Load
&lt;/h3&gt;

&lt;p&gt;To test CPU Load, we will run the test with more CPU threads than the instance has.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo stress-ng --cpu 4--cpu-load 100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  CPU Credits
&lt;/h3&gt;

&lt;p&gt;If you have a burstable instance and want to test the CPU Credits alarm, we run the test on the CPU with a high load. Remember to raise the alarm's threshold to limit the time you will need to wait until it triggers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo stress-ng --cpu 2 --cpu-load 100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Memory
&lt;/h2&gt;

&lt;p&gt;To test memory alarms, we will mimic Memory usage with the &lt;em&gt;stress-ng&lt;/em&gt; application. In this example, we will trigger memory usage with the vm flag and set the available memory usage to a set percentage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo stress-ng --vm {Number of workers to use memory} --vm-bytes {Bytes or percent of available memory} --vm-keep
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using more memory than the instance has will result in OOM (Out Of Memory).&lt;/p&gt;

&lt;h3&gt;
  
  
  Memory Usage
&lt;/h3&gt;

&lt;p&gt;To test Memory Usage, we will run two workers using 80% of the available memory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo stress-ng --vm 2 --vm-bytes 80% --vm-keep
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Swap Usage
&lt;/h3&gt;

&lt;p&gt;To test Swap Usage, we will run one worker using 150% of total memory to get swap to be used quickly. The command will retrieve the total memory and multiply it by 1.5. Remember that this can cause OOM issues.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  sudo stress-ng --vm 1 --vm-bytes $(awk '/MemTotal/ {print int($2 * 1.5) "k"}' /proc/meminfo) --vm-keep
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Disk
&lt;/h2&gt;

&lt;p&gt;To test disk alarms, we will create disk usage with fallocate or dd to create a dummy file of a specific size.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo fallocate -l {Size of file} {Path to file}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If fallocate doesn't work on your Linux distribution, you can use dd instead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dd if=/dev/zero of={Path to file} bs={Size of block} count={number of count}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Disk Usage
&lt;/h3&gt;

&lt;p&gt;To test disk usage, we will create a dummy file of a specific size on the disk you are monitoring, raising the disk usage above the threshold you have set for your alarm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo fallocate -l 2G /var/filldisk.img
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If fallocate doesn't work, use dd instead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dd if=/dev/zero of={Path to file} bs={Size of block} count={number of count}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Testing that your alarms work as you expect can save you a lot of headaches in the future. The tests we have done here are not unique to AWS since all are done with Linux tools.&lt;/p&gt;

&lt;p&gt;This was the first post in this series, and in the upcoming posts, we will look at testing alarms for other AWS resources.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudwatch</category>
      <category>ec2</category>
      <category>alarm</category>
    </item>
  </channel>
</rss>
