<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matt Bacchi</title>
    <description>The latest articles on DEV Community by Matt Bacchi (@mbacchi).</description>
    <link>https://dev.to/mbacchi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F50678%2Fa76099bb-e16d-4a2d-bb89-186bd729c319.png</url>
      <title>DEV Community: Matt Bacchi</title>
      <link>https://dev.to/mbacchi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mbacchi"/>
    <language>en</language>
    <item>
      <title>Using the VSCode Claude Code Extension with Bedrock and Claude Sonnet 4.5</title>
      <dc:creator>Matt Bacchi</dc:creator>
      <pubDate>Fri, 02 Jan 2026 03:55:36 +0000</pubDate>
      <link>https://dev.to/aws-builders/using-the-vscode-claude-code-extension-with-bedrock-and-claude-sonnet-45-2d69</link>
      <guid>https://dev.to/aws-builders/using-the-vscode-claude-code-extension-with-bedrock-and-claude-sonnet-45-2d69</guid>
      <description>&lt;p&gt;Lots of folks use the Claude IDE, or the Claude Code VSCode extension. Unfortunately, your prompts and completions are used (by default) to train Claude models. &lt;a href="https://code.claude.com/docs/en/data-usage#data-training-policy" rel="noopener noreferrer"&gt;[0]&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS Bedrock, on the other hand, doesn't use your prompts and completions to train any AWS models or give them to 3rd parties. &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/data-protection.html" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For these reasons (privacy, data sovereignty), I'm more inclined to use Bedrock as the LLM behind my IDE. Today we'll go over how to set up the VSCode Claude Code extension with AWS Bedrock and the Claude Sonnet 4.5 foundation model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In order to use the Claude Code VSCode extension with Bedrock and the Claude Sonnet 4.5 model, we need to perform these tasks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up AWS IAM permissions to allow Bedrock usage&lt;/li&gt;
&lt;li&gt;Install and configure the Claude Code VSCode extension&lt;/li&gt;
&lt;li&gt;Integrate AWS credentials and configure the extension&lt;/li&gt;
&lt;li&gt;Test that our setup works&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To do this, we'll use Terraform to create our AWS IAM user and policies (but you could use the AWS console.)&lt;/p&gt;

&lt;p&gt;Then we'll integrate this all together and verify it works as expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unfortunate Missing Features and a Bug/Regression with the Claude Code Extension
&lt;/h2&gt;

&lt;p&gt;AWS Bedrock can generate short-lived API tokens via its SDK. &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/api-keys-generate.html#api-keys-refresh-short-term" rel="noopener noreferrer"&gt;[2]&lt;/a&gt; Claude Code does support two methods for automatic AWS credential refresh, but Bedrock API tokens are not one of them.&lt;/p&gt;

&lt;p&gt;If it supported this feature, it would be the best solution from a security perspective: tokens would expire after 12 hours, and the extension would automatically refresh them the next time it was used.&lt;/p&gt;

&lt;p&gt;Instead, it only supports AWS SSO (or rather, AWS IAM Identity Center) for its &lt;code&gt;awsAuthRefresh&lt;/code&gt; option, and AWS IAM credentials for its &lt;code&gt;awsCredentialsExport&lt;/code&gt; refresh method. &lt;a href="https://code.claude.com/docs/en/amazon-bedrock#advanced-credential-configuration" rel="noopener noreferrer"&gt;[3]&lt;/a&gt; This gap is a poor decision, or at least an oversight, by the Claude Code development team.&lt;/p&gt;

&lt;p&gt;Unfortunately, a more egregious issue is that they claim the &lt;code&gt;awsCredentialsExport&lt;/code&gt; refresh method is functional when it is not. Whether it's a regression, a bug, or something that never worked, I couldn't get it working after a couple of hours. (That included conversing with the Claude Code VSCode extension itself, using a working AWS profile; it couldn't suggest a workaround either.) On top of these setbacks, the Claude Code settings file (&lt;code&gt;~/.claude/settings.json&lt;/code&gt;) didn't work for me, so I had to set all Claude Code extension configuration options in the VSCode settings file.&lt;/p&gt;

&lt;p&gt;Since AWS Identity Center is overkill for a personal account, refreshing Bedrock API tokens isn't supported in Claude Code, and the AWS credential export method for automatically refreshing credentials isn't functional, I'll settle for the lowest common denominator and use an AWS profile in my configuration. I don't love this method because it relies on a long-lived AWS IAM user credential (an access key and secret access key), but I can't do better given the current state of the Claude Code VSCode extension.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create IAM User and Bedrock Permissions
&lt;/h2&gt;

&lt;p&gt;Create an IAM user and attach an IAM policy to it with this Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_user"&lt;/span&gt; &lt;span class="s2"&gt;"bedrock_user"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"bedrock-user"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_access_key"&lt;/span&gt; &lt;span class="s2"&gt;"bedrock"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bedrock_user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_policy_document"&lt;/span&gt; &lt;span class="s2"&gt;"bedrock"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;statement&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;effect&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
    &lt;span class="nx"&gt;actions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="s2"&gt;"bedrock:InvokeModel"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"bedrock:ListFoundationModels"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"bedrock:ListInferenceProfiles"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"bedrock:InvokeModelWithResponseStream"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="c1"&gt;#   "bedrock:CallWithBearerToken" # required if using Bedrock API token which we're not doing here&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;statement&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;effect&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
    &lt;span class="nx"&gt;actions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="s2"&gt;"aws-marketplace:ViewSubscriptions"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"aws-marketplace:Subscribe"&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;condition&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;test&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"StringEquals"&lt;/span&gt;
      &lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"aws:CalledViaLast"&lt;/span&gt;
      &lt;span class="nx"&gt;values&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"bedrock.amazonaws.com"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_user_policy"&lt;/span&gt; &lt;span class="s2"&gt;"bedrock"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"bedrock"&lt;/span&gt;
  &lt;span class="nx"&gt;user&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bedrock_user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_iam_policy_document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bedrock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setup AWS Profile
&lt;/h2&gt;

&lt;p&gt;I typically use &lt;code&gt;aws-vault&lt;/code&gt; to manage my AWS credentials, for enhanced security and short-lived credentials. But here we'll have to use the standard AWS method: storing long-lived credentials in the &lt;code&gt;~/.aws/credentials&lt;/code&gt; file, and accessing that profile from the Claude Code extension.&lt;/p&gt;

&lt;p&gt;The Terraform above created the IAM user and policy. Now you need an access key to use as the credential in your profile: go to the AWS console, create an access key for this user, and follow the instructions for the CLI use case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6oj30f6e3y91po67a47.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6oj30f6e3y91po67a47.png" alt=" " width="800" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating it, copy your Access Key and Secret Access Key somewhere for safekeeping (I use a password manager for this purpose.)&lt;/p&gt;

&lt;p&gt;To set up your profile in the &lt;code&gt;~/.aws/credentials&lt;/code&gt; file, use the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws configure &lt;span class="nt"&gt;--profile&lt;/span&gt; YOUR_AWS_PROFILE_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will prompt you for the AWS Access Key and Secret Access Key, which will be added to the &lt;code&gt;~/.aws/credentials&lt;/code&gt; file under the profile name you specified in the command (choose wisely!)&lt;/p&gt;
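
&lt;p&gt;The resulting entry in &lt;code&gt;~/.aws/credentials&lt;/code&gt; will look roughly like this (the values shown are placeholders, not real keys):&lt;/p&gt;

```ini
[YOUR_AWS_PROFILE_NAME]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```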

&lt;h2&gt;
  
  
  Enable Bedrock Claude Sonnet 4.5 and Validate Access
&lt;/h2&gt;

&lt;p&gt;After creating the IAM user and policy and setting up the profile, log in to the AWS console. Choose the AWS region you normally use, but be aware that these Bedrock foundation models aren't available in every AWS region. I used &lt;code&gt;us-west-2&lt;/code&gt;, but &lt;code&gt;us-east-1&lt;/code&gt; and &lt;code&gt;us-east-2&lt;/code&gt; are also supported.&lt;/p&gt;

&lt;p&gt;Anthropic requires first-time customers to submit use-case details before invoking a model; this is done once per account, or once at the organization's management account. &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html" rel="noopener noreferrer"&gt;[4]&lt;/a&gt; This is a rather antiquated policy in the cloud era, but required nonetheless.&lt;/p&gt;

&lt;p&gt;Go to the Bedrock section of the AWS console, then to "Chat/Text Playground" and select Anthropic Claude Sonnet 4.5. You'll be presented with a dialog to fill out and enable the foundation model.&lt;/p&gt;
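
&lt;p&gt;Before wiring up VSCode, you can sanity-check Bedrock access from the CLI. This is a hypothetical smoke test: the request body follows the Bedrock Anthropic Messages format, and the &lt;code&gt;aws bedrock-runtime invoke-model&lt;/code&gt; call (shown commented out) needs the credentials, profile name, and region set up earlier:&lt;/p&gt;

```shell
# Build a minimal Anthropic Messages request body for Bedrock.
body='{"anthropic_version":"bedrock-2023-05-31","max_tokens":50,"messages":[{"role":"user","content":"Say hello"}]}'

# Validate the JSON locally before spending an API call on a typo.
printf '%s' "$body" | python3 -m json.tool > /dev/null
echo "body ok"

# Then invoke the model (requires working AWS credentials for the profile):
#   aws bedrock-runtime invoke-model --region us-west-2 --profile YOUR_AWS_PROFILE_NAME \
#     --model-id us.anthropic.claude-sonnet-4-5-20250929-v1:0 \
#     --cli-binary-format raw-in-base64-out --body "$body" response.json
```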

&lt;h2&gt;
  
  
  Install and Configure VSCode
&lt;/h2&gt;

&lt;p&gt;Install the VSCode extension:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;code &lt;span class="nt"&gt;--install-extension&lt;/span&gt; anthropic.claude-code
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Identify your Bedrock Claude Sonnet 4.5 inference profile ARN:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws bedrock list-inference-profiles  &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2 &lt;span class="nt"&gt;--profile&lt;/span&gt; YOUR_AWS_PROFILE_NAME &lt;span class="nt"&gt;--no-cli-pager&lt;/span&gt; | jq &lt;span class="s1"&gt;'.inferenceProfileSummaries | .[] | select(.inferenceProfileId | match("us.anthropic.claude-sonnet-4-5-20250929-v1:0")) | .inferenceProfileArn'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: This assumes you're in the US; if you're using another region, use the global Anthropic Claude Sonnet inference profile name in the above command.&lt;/p&gt;
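
&lt;p&gt;To double-check that the ARN you copied matches the region you enabled the model in, here's a tiny hypothetical helper (pure string handling, no AWS calls; the ARN below is illustrative):&lt;/p&gt;

```shell
# The region is the 4th colon-separated field of any ARN.
arn_region() {
  printf '%s\n' "$1" | cut -d: -f4
}

arn_region "arn:aws:bedrock:us-west-2:123456789012:inference-profile/us.anthropic.claude-sonnet-4-5-20250929-v1:0"
# prints: us-west-2
```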

&lt;p&gt;Add the following to your VSCode user settings.json file (usually this is &lt;code&gt;~/.config/Code/User/settings.json&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"claudeCode.selectedModel"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"us.anthropic.claude-sonnet-4-5-20250929-v1:0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"claudeCode.environmentVariables"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS_PROFILE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"YOUR_AWS_PROFILE_NAME"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS_REGION"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"YOUR_AWS_REGION_FROM_ABOVE"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"BEDROCK_MODEL_ID"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"INFERENCE_PROFILE_ARN_FROM_ABOVE"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CLAUDE_CODE_USE_BEDROCK"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"claudeCode.disableLoginPrompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Did it work?
&lt;/h2&gt;

&lt;p&gt;You'll probably have to restart your VSCode session if it was running. Then, open the Claude window and type a question or request and you should see a successful response like below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk48clugkaul6elsqx166.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk48clugkaul6elsqx166.png" alt=" " width="318" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We did it! I'm not thrilled with the missing, broken, and overlooked features of the Claude Code VSCode extension around AWS IAM credentials, but it works fine for the time being. I'll revisit this and report back when these issues are resolved and we can all begin using short-lived credentials with the extension.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;I borrowed some information from this incredibly detailed blog post by Vasko Kelkocev. &lt;a href="https://aws.plainenglish.io/configuring-claude-code-extension-with-aws-bedrock-and-how-you-can-avoid-my-mistakes-090dbed5215b" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But even though that blog was written in October 2025, it was already out of date by the time I found it. I had to add more IAM permissions to get the extension to work with Bedrock (specifically &lt;code&gt;bedrock:InvokeModelWithResponseStream&lt;/code&gt;), and there were some other issues with the configuration I had to play with. Thanks for the great blog Vasko.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>claudecode</category>
      <category>vscode</category>
      <category>bedrock</category>
    </item>
    <item>
      <title>Firecracker Virtualization Overview</title>
      <dc:creator>Matt Bacchi</dc:creator>
      <pubDate>Thu, 11 Dec 2025 22:52:19 +0000</pubDate>
      <link>https://dev.to/aws-builders/firecracker-virtualization-overview-1hif</link>
      <guid>https://dev.to/aws-builders/firecracker-virtualization-overview-1hif</guid>
      <description>&lt;p&gt;Firecracker is an open source virtualization technology created by Amazon Web Services (AWS) which underpins their AWS Lambda Functions as a Service (FaaS) serverless product.&lt;/p&gt;

&lt;p&gt;Firecracker was open sourced in 2018 &lt;a href="https://aws.amazon.com/about-aws/whats-new/2018/11/firecracker-lightweight-virtualization-for-serverless-computing/" rel="noopener noreferrer"&gt;[0]&lt;/a&gt;, making it possible for anyone to use this extremely fast and reliable system for their own projects and use cases.&lt;/p&gt;

&lt;p&gt;I've been researching the ecosystem lately and am impressed at the flexibility architected into the Firecracker code which enables it to be used in many ways. We might have expected them to design it such that it only works in their very tightly controlled environment. But the fact that it's not specialized to just the AWS Lambda use case means that it can be leveraged by anyone from AWS scale to a home lab running a single VM.&lt;/p&gt;

&lt;p&gt;Let's explore the capabilities of Firecracker and the various methods of using it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Firecracker High Level Overview
&lt;/h2&gt;

&lt;p&gt;There are many good quick-start documents &lt;a href="https://github.com/firecracker-microvm/firecracker/blob/main/docs/getting-started.md" rel="noopener noreferrer"&gt;[1]&lt;/a&gt; and blogs describing how to install and start a single Firecracker microVM instance. Because those resources are readily available, I won't repeat that here.&lt;/p&gt;

&lt;p&gt;The basic requirements are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Firecracker binary&lt;/li&gt;
&lt;li&gt;kernel&lt;/li&gt;
&lt;li&gt;rootfs&lt;/li&gt;
&lt;li&gt;networking configuration (optional)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When you start a Firecracker VM by executing the &lt;code&gt;firecracker&lt;/code&gt; program, you are running a single VM instance, managed through the firecracker process you launched. You can send subsequent management commands to that process through its API, over a unix socket. When you are done with the VM, you can (and should) stop it "gracefully".&lt;/p&gt;
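
&lt;p&gt;That "graceful" stop is itself just another API action. For x86 guests, per the actions documentation, you can PUT the following body to the &lt;code&gt;/actions&lt;/code&gt; endpoint on the VM's socket to send Ctrl+Alt+Del to the guest (the guest OS must handle the keypress):&lt;/p&gt;

```json
{
  "action_type": "SendCtrlAltDel"
}
```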

&lt;p&gt;Multiple firecracker processes can be executed at one time, which translates to running multiple VMs. Presumably this is how AWS runs millions of Lambda functions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;p&gt;There are a couple ways to configure your VM:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sending HTTP requests via the API socket &lt;a href="https://github.com/firecracker-microvm/firecracker/blob/main/docs/api_requests/actions.md" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Using the configuration file &lt;a href="https://github.com/firecracker-microvm/firecracker/blob/main/docs/getting-started.md#configuring-the-microvm-without-sending-api-requests" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you use the configuration file to set the VM guest kernel and rootfs, you can still use the socket to send API requests. These two methods may seem to conflict, but to me they look like deliberate parts of a highly flexible design.&lt;/p&gt;
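
&lt;p&gt;As a sketch of option 2, a minimal config file passed with &lt;code&gt;--config-file&lt;/code&gt; might look like this. The paths are placeholders, and the &lt;code&gt;machine-config&lt;/code&gt; values are assumptions for a tiny guest; adjust for your kernel and rootfs:&lt;/p&gt;

```json
{
  "boot-source": {
    "kernel_image_path": "/tmp/firecracker-test/hello-vmlinux.bin",
    "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"
  },
  "drives": [
    {
      "drive_id": "rootfs",
      "path_on_host": "/tmp/firecracker-test/hello-rootfs.ext4",
      "is_root_device": true,
      "is_read_only": false
    }
  ],
  "machine-config": {
    "vcpu_count": 1,
    "mem_size_mib": 128
  }
}
```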

&lt;p&gt;The HTTP API is complemented by what appears to be an official Go SDK &lt;a href="https://pkg.go.dev/github.com/firecracker-microvm/firecracker-go-sdk" rel="noopener noreferrer"&gt;[4]&lt;/a&gt; that you can use to manage your Firecracker VM instances. A likely scenario is that AWS uses this Go SDK to provision and control their Lambda functions via the HTTP API exposed by each Firecracker process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Initialization
&lt;/h2&gt;

&lt;p&gt;Firecracker is also flexible in how it can be initialized. Options include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;command line execution&lt;/li&gt;
&lt;li&gt;systemd&lt;/li&gt;
&lt;li&gt;jailer&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first two options are common to many Linux tools. But the diverse use cases AWS itself has for a multi-purpose virtualization technology like Firecracker meant they had every reason to design it to be versatile. That adaptability extends from the configuration methods to initialization, and likely beyond.&lt;/p&gt;
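
&lt;p&gt;As a hypothetical illustration of the systemd route (the unit name, socket path, and config file below are my assumptions, not from the Firecracker docs):&lt;/p&gt;

```ini
# /etc/systemd/system/firecracker-demo.service (hypothetical)
[Unit]
Description=Firecracker microVM (demo)
After=network.target

[Service]
ExecStart=/usr/bin/firecracker --api-sock /run/firecracker-demo.socket --config-file /etc/firecracker/demo.json
Restart=on-failure

[Install]
WantedBy=multi-user.target
```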

&lt;h2&gt;
  
  
  Enhanced Isolation using Jailer
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;jailer&lt;/code&gt; tool &lt;a href="https://github.com/firecracker-microvm/firecracker/blob/main/docs/jailer.md" rel="noopener noreferrer"&gt;[5]&lt;/a&gt;, which is purpose-built for Firecracker, further isolates each firecracker process in its own chroot jail directory. This security mechanism has been used for years on Unix systems and provides an additional layer of protection in a multi-tenant environment.&lt;/p&gt;

&lt;p&gt;We won't go into detail on it here; it isn't necessary for your home lab unless you're running untrusted code (i.e. code not written by you.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Demonstration
&lt;/h2&gt;

&lt;p&gt;To quickly show what we've discussed above, here's a small bash script that starts a Firecracker microVM using a previously downloaded kernel and rootfs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;

&lt;span class="nv"&gt;firecracker&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/usr/bin/firecracker"&lt;/span&gt;

&lt;span class="nv"&gt;uuid&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;/usr/bin/uuidgen&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="nb"&gt;nohup&lt;/span&gt; &lt;span class="nv"&gt;$firecracker&lt;/span&gt; &lt;span class="nt"&gt;--api-sock&lt;/span&gt; /tmp/&lt;span class="nv"&gt;$uuid&lt;/span&gt;.socket &amp;amp;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"created firecracker process running on socket: /tmp/&lt;/span&gt;&lt;span class="nv"&gt;$uuid&lt;/span&gt;&lt;span class="s2"&gt;.socket"&lt;/span&gt;

curl &lt;span class="nt"&gt;--unix-socket&lt;/span&gt; /tmp/&lt;span class="nv"&gt;$uuid&lt;/span&gt;.socket &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-X&lt;/span&gt; PUT &lt;span class="s1"&gt;'http://localhost/boot-source'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Accept: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
        "kernel_image_path": "/tmp/firecracker-test/hello-vmlinux.bin",
        "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"
    }'&lt;/span&gt;

curl &lt;span class="nt"&gt;--unix-socket&lt;/span&gt; /tmp/&lt;span class="nv"&gt;$uuid&lt;/span&gt;.socket &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-X&lt;/span&gt; PUT &lt;span class="s1"&gt;'http://localhost/drives/rootfs'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Accept: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
        "drive_id": "rootfs",
        "path_on_host": "/tmp/firecracker-test/hello-rootfs.ext4",
        "is_root_device": true,
        "is_read_only": false
    }'&lt;/span&gt;

curl &lt;span class="nt"&gt;--unix-socket&lt;/span&gt; /tmp/&lt;span class="nv"&gt;$uuid&lt;/span&gt;.socket &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-X&lt;/span&gt; PUT &lt;span class="s1"&gt;'http://localhost/actions'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-H&lt;/span&gt;  &lt;span class="s1"&gt;'Accept: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-H&lt;/span&gt;  &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
        "action_type": "InstanceStart"
    }'&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Started VM instance."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running it we get the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;user@temphost:~/data/firecracker-test&lt;span class="nv"&gt;$ &lt;/span&gt;./startvm.sh 
created firecracker process running on socket: /tmp/b214c708-e506-455b-a98d-647939a0ef0d.socket
&lt;span class="nb"&gt;nohup&lt;/span&gt;: appending output to &lt;span class="s1"&gt;'nohup.out'&lt;/span&gt;
HTTP/1.1 204 
Server: Firecracker API
Connection: keep-alive

HTTP/1.1 204 
Server: Firecracker API
Connection: keep-alive

HTTP/1.1 204 
Server: Firecracker API
Connection: keep-alive

Started VM instance.
user@temphost:~/data/firecracker-test&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;tail &lt;/span&gt;nohup.out 
 &lt;span class="o"&gt;[&lt;/span&gt; ok &lt;span class="o"&gt;]&lt;/span&gt;
 &lt;span class="k"&gt;*&lt;/span&gt; Mounting persistent storage &lt;span class="o"&gt;(&lt;/span&gt;pstore&lt;span class="o"&gt;)&lt;/span&gt; filesystem ...
 &lt;span class="o"&gt;[&lt;/span&gt; ok &lt;span class="o"&gt;]&lt;/span&gt;
Starting default runlevel
&lt;span class="o"&gt;[&lt;/span&gt;    1.088111] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2879b124109, max_idle_ns: 440795245440 ns

Welcome to Alpine Linux 3.8
Kernel 4.14.55-84.37.amzn2.x86_64 on an x86_64 &lt;span class="o"&gt;(&lt;/span&gt;ttyS0&lt;span class="o"&gt;)&lt;/span&gt;

localhost login: 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see from the login prompt in the &lt;code&gt;nohup.out&lt;/code&gt; file that the VM is running and ready to accept logins. We didn't set up SSH keys or networking in this attempt, so we're SOL on actually logging in. But that's OK, because we were just trying to show how straightforward this can be.&lt;/p&gt;

&lt;p&gt;Notice we created a random UUID to use in the firecracker unix socket path, which is how we communicate with the process via its HTTP API. After starting the firecracker process, we make &lt;code&gt;curl&lt;/code&gt; requests to configure the kernel and rootfs, and then initiate the &lt;code&gt;InstanceStart&lt;/code&gt; action.&lt;/p&gt;

&lt;p&gt;Granted, this is a very rudimentary example. But it shows how easily firecracker can be used if you understand the fundamentals.&lt;/p&gt;

&lt;h2&gt;
  
  
  Firecracker Ecosystem Discrepancies Compared to Lambda
&lt;/h2&gt;

&lt;p&gt;In multiple AWS re:Invent talks, Lambda architecture has been detailed by AWS employees at a high level. One example is in Julian Wood's 2022 session &lt;a href="https://youtu.be/0_jfH6qijVY?si=GRy8QqQiUwajQKhk&amp;amp;t=636" rel="noopener noreferrer"&gt;A closer look at AWS Lambda (SVS404-R)&lt;/a&gt;. At 10:36 in the video, he references the Lambda data plane, which consists of a host of interrelated services for synchronous vs. asynchronous Lambda invocations. These services are:&lt;/p&gt;

&lt;h3&gt;
  
  
  Synchronous
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Frontend Invoke Service&lt;/li&gt;
&lt;li&gt;Counting Service&lt;/li&gt;
&lt;li&gt;Assignment Service&lt;/li&gt;
&lt;li&gt;Worker Hosts&lt;/li&gt;
&lt;li&gt;Placement Service&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Asynchronous
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Poller Fleet&lt;/li&gt;
&lt;li&gt;Queue Manager&lt;/li&gt;
&lt;li&gt;State Manager&lt;/li&gt;
&lt;li&gt;Stream Tracker&lt;/li&gt;
&lt;li&gt;Leasing Service&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The Cold Truth
&lt;/h3&gt;

&lt;p&gt;These administrative services, which run millions or billions of AWS Lambda invocations, are obviously not available for Firecracker. It makes sense that AWS released Firecracker as open source, but they're unlikely to ever release the surrounding services that prop Firecracker up in their environment and make AWS Lambda highly performant and reliable. That gap is something I'm interested in: some of the features that streamline Firecracker in the Lambda use case could be rebuilt in the open, making it easier to manage for those of us who find this tool exciting and wish it came with batteries included.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Over the next few weeks (and months) I'll be playing with this some more. I'll likely be posting as I do this to give a different perspective on firecracker than the standard quick start blogs out there already.&lt;/p&gt;

&lt;p&gt;Hope you got something out of this today!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>firecracker</category>
      <category>virtualization</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Installing Nvidia Datacenter GPU Manager on Amazon Linux 2023</title>
      <dc:creator>Matt Bacchi</dc:creator>
      <pubDate>Sat, 01 Feb 2025 15:51:13 +0000</pubDate>
      <link>https://dev.to/aws-builders/installing-nvidia-datacenter-gpu-manager-on-amazon-linux-2023-16hk</link>
      <guid>https://dev.to/aws-builders/installing-nvidia-datacenter-gpu-manager-on-amazon-linux-2023-16hk</guid>
      <description>&lt;p&gt;A short post discussing the installation of Nvidia Datacenter GPU Manager on Amazon Linux 2023. I recently had to figure this out using sketchy documentation, so I'm hoping this helps some folks out there doing similar head scratching.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: This blog post does not include all steps to install Nvidia drivers. I assume you already have driver package(s) required for your application installed.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Datacenter GPU Manager
&lt;/h2&gt;

&lt;p&gt;Nvidia &lt;a href="https://docs.nvidia.com/data-center-gpu-manager-dcgm/index.html" rel="noopener noreferrer"&gt;Datacenter GPU Manager&lt;/a&gt; is a suite of tools used for managing and monitoring Nvidia GPUs in the data center.&lt;/p&gt;

&lt;p&gt;In our environment we use Nvidia Datacenter GPU Manager as a prerequisite for the DCGM Prometheus metrics exporter used to send metrics to Datadog, named &lt;a href="https://github.com/NVIDIA/dcgm-exporter" rel="noopener noreferrer"&gt;dcgm-exporter&lt;/a&gt;. I won't go into detail on installing &lt;code&gt;dcgm-exporter&lt;/code&gt;; the &lt;a href="https://github.com/NVIDIA/dcgm-exporter" rel="noopener noreferrer"&gt;instructions in the GitHub README&lt;/a&gt; are fairly straightforward to follow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon Linux 2 End Of Life
&lt;/h2&gt;

&lt;p&gt;The Amazon Linux 2 (AL2) &lt;a href="https://aws.amazon.com/amazon-linux-2/faqs/" rel="noopener noreferrer"&gt;EOL date is 6/30/2025&lt;/a&gt; (June 30, 2025). The replacement is Amazon Linux 2023 (AL2023) &lt;a href="https://aws.amazon.com/linux/amazon-linux-2023/" rel="noopener noreferrer"&gt;[3]&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing DCGM on AL2023
&lt;/h2&gt;

&lt;p&gt;Installing Nvidia Datacenter GPU Manager on AL2 was documented, but when applying that documentation to AL2023, it wasn't clear what value to use for the &lt;code&gt;&amp;lt;distro&amp;gt;&lt;/code&gt; placeholder &lt;a href="https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#network-repo-installation-for-amazon-linux" rel="noopener noreferrer"&gt;[4]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In order to understand which distribution Amazon Linux 2023 most closely relates to, you have to look in the &lt;a href="https://docs.aws.amazon.com/linux/al2023/ug/compare-with-al2.html#building-on-fedora" rel="noopener noreferrer"&gt;Amazon Linux 2023 User Guide&lt;/a&gt;. It notes that AL2023 is "sourced from multiple versions of Fedora...including CentOS 9."&lt;/p&gt;

&lt;p&gt;The Nvidia installation instructions therefore suggest enabling the repository with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;dnf config-manager &lt;span class="nt"&gt;--add-repo&lt;/span&gt; https://developer.download.nvidia.com/compute/cuda/repos/&amp;lt;distro&amp;gt;/x86_64/cuda-&amp;lt;distro&amp;gt;.repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because AL2023 is derived from Fedora/CentOS 9, the &lt;code&gt;&amp;lt;distro&amp;gt;&lt;/code&gt; parameter is going to be &lt;code&gt;rhel9&lt;/code&gt;. So our command to install the repository instead becomes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;dnf config-manager &lt;span class="nt"&gt;--add-repo&lt;/span&gt; https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/cuda-rhel9.repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can run the install command for the &lt;code&gt;datacenter-gpu-manager&lt;/code&gt; package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;dnf &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; datacenter-gpu-manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
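&lt;p&gt;After the install, a quick sanity check is worthwhile. Here's a sketch, assuming the DCGM host engine ships as the &lt;code&gt;nvidia-dcgm&lt;/code&gt; service unit (confirm the unit name against the package version you installed):&lt;/p&gt;

```shell
# Sanity-check the install. The 'nvidia-dcgm' unit name is an assumption;
# confirm it against the datacenter-gpu-manager package you installed.
check_dcgm() {
    if command -v dcgmi > /dev/null; then
        sudo systemctl enable --now nvidia-dcgm   # start the DCGM host engine
        dcgmi discovery -l                        # list GPUs visible to DCGM
    else
        echo "dcgmi not found; install datacenter-gpu-manager first"
    fi
}
check_dcgm
```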



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The documentation was not clear about which distribution value to use for AL2023. Replace &lt;code&gt;&amp;lt;distro&amp;gt;&lt;/code&gt; with &lt;code&gt;rhel9&lt;/code&gt; in the repo installation command, and you'll be able to install the &lt;code&gt;datacenter-gpu-manager&lt;/code&gt; package.&lt;/p&gt;




&lt;p&gt;Cover photo by &lt;a href="https://unsplash.com/@christianw" rel="noopener noreferrer"&gt;Christian Wiediger&lt;/a&gt; on &lt;a href="https://unsplash.com/" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>gpu</category>
      <category>nvidia</category>
    </item>
    <item>
      <title>Switching to the Terraform S3 Backend with Native State File Locks</title>
      <dc:creator>Matt Bacchi</dc:creator>
      <pubDate>Wed, 08 Jan 2025 15:56:03 +0000</pubDate>
      <link>https://dev.to/aws-builders/switching-to-the-terraform-s3-backend-with-native-state-file-locks-3h44</link>
      <guid>https://dev.to/aws-builders/switching-to-the-terraform-s3-backend-with-native-state-file-locks-3h44</guid>
      <description>&lt;p&gt;Terraform is a flexible, cloud agnostic infrastructure as code (IaC) tool. As it constructs infrastructure resources, it builds a ledger used to track resources that have successfully been created as well as additional metadata (such as &lt;code&gt;id&lt;/code&gt;.) Terraform stores this state in a binary formatted file with the extension &lt;code&gt;.tfstate&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Terraform S3 Backend
&lt;/h2&gt;

&lt;p&gt;The Terraform state file described above is, by default, stored in the same directory as the Terraform infrastructure definition files you wrote. But with this state only on your local computer, it is vulnerable to being lost or overwritten, and it cannot be shared with or managed by other team members. Using a distributed storage mechanism for this state file is straightforward with Terraform, which provides many &lt;a href="https://developer.hashicorp.com/terraform/language/backend#backend-types" rel="noopener noreferrer"&gt;backend options&lt;/a&gt;. For AWS users, the &lt;a href="https://developer.hashicorp.com/terraform/language/backend/s3" rel="noopener noreferrer"&gt;Terraform S3 Backend&lt;/a&gt; allows storing this state file in AWS S3.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is State File Locking
&lt;/h2&gt;

&lt;p&gt;So now we know this state file is stored in distributed object storage (AWS S3), and more than one user can manage resources within it. But to safely manage this state file, we need a locking mechanism (often &lt;a href="https://en.wikipedia.org/wiki/Lock_(computer_science)" rel="noopener noreferrer"&gt;called a mutex in computing&lt;/a&gt;) that prevents multiple users from writing to it at once. If we allowed more than one user to write to it at the same time, we would have a mess, potentially losing track of resources that were created and meant to be recorded in this state file.&lt;/p&gt;

&lt;p&gt;Terraform state locking capability has been available for the &lt;a href="https://developer.hashicorp.com/terraform/language/backend/s3#state-locking" rel="noopener noreferrer"&gt;S3 backend&lt;/a&gt; for quite some time. But unfortunately it has required an additional DynamoDB table to be created that tracked the state file locking status.&lt;/p&gt;

&lt;p&gt;Until now.&lt;/p&gt;

&lt;p&gt;This DynamoDB table is an extra resource that seemed tangential to Terraform state management and complicated configuring your backend. That requirement has been rendered obsolete by a recent feature added to AWS S3: &lt;strong&gt;conditional writes&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS S3 Conditional Writes
&lt;/h2&gt;

&lt;p&gt;In August 2024, AWS announced the addition of the &lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3-conditional-writes/" rel="noopener noreferrer"&gt;S3 Conditional Writes feature&lt;/a&gt;. It lets an S3 client ask S3 to check for the existence of an object before writing it, and to fail the write if the object already exists. In that case, &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/conditional-writes.html#conditional-error-response" rel="noopener noreferrer"&gt;S3 returns&lt;/a&gt; a &lt;code&gt;412 Precondition Failed&lt;/code&gt; error response.&lt;/p&gt;
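&lt;p&gt;To make the semantics concrete, here's a tiny local stand-in for a conditional write, using a file in place of an S3 object. Note that real S3 performs this existence check atomically on the server side (via an &lt;code&gt;If-None-Match: *&lt;/code&gt; precondition); this sketch only mirrors the observable behavior:&lt;/p&gt;

```shell
# Local sketch of conditional-write semantics: succeed only if the object
# (a file here) does not exist yet; otherwise report 412 and fail.
conditional_put() {
    key="$1"; value="$2"
    if [ -e "$key" ]; then
        echo "412 Precondition Failed"
        return 1
    fi
    printf '%s\n' "$value" > "$key"
}
```

&lt;p&gt;Calling it twice with the same key succeeds the first time and fails with the 412 message the second time, which is exactly the behavior a state-locking client relies on to claim the lock.&lt;/p&gt;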

&lt;h2&gt;
  
  
  S3 Conditional Write Support Added in Terraform v1.10
&lt;/h2&gt;

&lt;p&gt;Support for S3 Conditional Writes was added in Terraform &lt;a href="https://github.com/hashicorp/terraform/releases/tag/v1.10.0" rel="noopener noreferrer"&gt;release v1.10&lt;/a&gt;. (If you want some great background and architecture detail from the developer Bruno Schaatsbergen about the implementation, look &lt;a href="https://github.com/hashicorp/terraform/pull/35661" rel="noopener noreferrer"&gt;here&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;Thankfully this is completely transparent to the Terraform user (unless Terraform returns an error while attempting to lock the state file).&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring the S3 Backend to Use Native State File Locking
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://developer.hashicorp.com/terraform/language/backend/s3#state-locking" rel="noopener noreferrer"&gt;Terraform documentation&lt;/a&gt; describes the new configuration parameter &lt;code&gt;use_lockfile&lt;/code&gt; to enable S3 state locking. It also currently describes the old DynamoDB method as still available. (It's common for software to support both an old and new related feature for some time until all users can migrate to the new methodology.)&lt;/p&gt;

&lt;p&gt;This means you can actually use both locking mechanisms at the same time. But that is unnecessary and could lead to confusion and problems. I recommend replacing your old DynamoDB locking configuration with S3 state locking immediately: it is cheaper (no extra DynamoDB table, or reads and writes to that table, to pay for) and less error prone.&lt;/p&gt;

&lt;p&gt;Here's how to change your Terraform backend configuration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform Configuration Specifics
&lt;/h3&gt;

&lt;p&gt;The old DynamoDB method used a configuration parameter named &lt;code&gt;dynamodb_table&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The new S3 state locking method uses a configuration parameter named &lt;code&gt;use_lockfile&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Both are covered in the current Terraform &lt;a href="https://developer.hashicorp.com/terraform/language/backend/s3#state-locking" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Version Constraints
&lt;/h4&gt;

&lt;p&gt;We also recommend that when you switch to the S3 native state locking method, you set the Terraform configuration parameter &lt;code&gt;required_version&lt;/code&gt; to the minimum version that supports S3 native state file locking. Users running an earlier version of Terraform won't be able to use this feature and will see errors if the &lt;code&gt;use_lockfile&lt;/code&gt; parameter is enabled. Setting &lt;code&gt;required_version&lt;/code&gt; to a minimum of v1.10 makes your configuration more resilient by preventing anyone from attempting to create or update resources with an older Terraform version. Think of it as a prerequisite.&lt;/p&gt;

&lt;p&gt;This Terraform version constraints configuration is documented &lt;a href="https://developer.hashicorp.com/terraform/language/terraform#terraform-required_version" rel="noopener noreferrer"&gt;here&lt;/a&gt;, and looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;  &lt;span class="nx"&gt;required_version&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 1.10"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Sample Configuration
&lt;/h4&gt;

&lt;p&gt;With all this background information about the configuration parameters, here's a sample Terraform configuration with both the old and new parameters present:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-west-2"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="s2"&gt;"s3"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;encrypt&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;bucket&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tfstate-lock-test-0bhfxn8x1"&lt;/span&gt;
    &lt;span class="nx"&gt;region&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-west-2"&lt;/span&gt;
    &lt;span class="nx"&gt;key&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"example/terraform-state-lock-test.tfstate"&lt;/span&gt;
    &lt;span class="nx"&gt;dynamodb_table&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tfstate-lock-test"&lt;/span&gt;
    &lt;span class="nx"&gt;use_lockfile&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"5.82.2"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;# This sets the version constraint to a minimum of 1.10 for native state file locking support&lt;/span&gt;
  &lt;span class="nx"&gt;required_version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 1.10"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In order to switch away from the old DynamoDB locking method, remove the &lt;code&gt;dynamodb_table&lt;/code&gt; configuration parameter. Remember that after any backend configuration change, you'll need to run &lt;code&gt;terraform init&lt;/code&gt; again for the change to take effect.&lt;/p&gt;

&lt;h2&gt;
  
  
  State File Locking in Action
&lt;/h2&gt;

&lt;p&gt;We now know how to configure Terraform S3 native state file locking, but how does it behave, and what will you see if Terraform cannot acquire the lock on the state file?&lt;/p&gt;

&lt;p&gt;I've tested both methods and will show you the output from each when state file locking fails.&lt;/p&gt;

&lt;h3&gt;
  
  
  Error from DynamoDB State File Locking
&lt;/h3&gt;

&lt;p&gt;The old DynamoDB state file locking method would return an error such as the one below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform apply plan.out 
Acquiring state lock. This may take a few moments...
╷
│ Error: Error acquiring the state lock
│ 
│ Error message: operation error DynamoDB: PutItem, https response error StatusCode: 400, RequestID: CP9U4IRC04OBONKIQHM4LUOLLJVV4KQNSO5AEMVJF66Q9ASUAAJG, ConditionalCheckFailedException: The conditional request failed
│ Lock Info:
│   ID:        39f64263-4ad8-a563-faf7-f28f8a042a00
│   Path:      tfstate-lock-test-0bhfxn8x1/example/terraform-state-lock-test.tfstate
│   Operation: OperationTypeApply
│   Who:       user@hostname
│   Version:   1.10.3
│   Created:   2025-01-08 03:38:13.121564614 +0000 UTC
│   Info:      
│ 
│ 
│ Terraform acquires a state lock to protect the state from being written
│ by multiple &lt;span class="nb"&gt;users &lt;/span&gt;at the same time. Please resolve the issue above and try
│ again. For most commands, you can disable locking with the &lt;span class="s2"&gt;"-lock=false"&lt;/span&gt;
│ flag, but this is not recommended.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Error from S3 Backend Native State File Locking
&lt;/h3&gt;

&lt;p&gt;The new S3 backend native state file locking method will return an error that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform apply plan2.out
╷
│ Error: Error acquiring the state lock
│ 
│ Error message: operation error S3: PutObject, https response error StatusCode: 412, RequestID: N0BGGAFN8V1N2WCQ, HostID: 4G32Xus/86u8CehvNbvzv8NoqiyTvsBWGYBXYK6E8Vn0E4+wom+6Jm6WFVUFSaCE7C1TBP5Vauo&lt;span class="o"&gt;=&lt;/span&gt;, api error PreconditionFailed: At least one of the pre-conditions you specified did not hold
│ Lock Info:
│   ID:        837482e8-441e-9ff6-d30b-333ee83d8fc4
│   Path:      tfstate-lock-test-0bhfxn8x1/example/terraform-state-lock-test.tfstate
│   Operation: OperationTypeApply
│   Who:       user@hostname
│   Version:   1.10.3
│   Created:   2025-01-08 03:43:11.691479913 +0000 UTC
│   Info:      
│ 
│ 
│ Terraform acquires a state lock to protect the state from being written
│ by multiple &lt;span class="nb"&gt;users &lt;/span&gt;at the same time. Please resolve the issue above and try
│ again. For most commands, you can disable locking with the &lt;span class="s2"&gt;"-lock=false"&lt;/span&gt;
│ flag, but this is not recommended.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Dealing with Stale State File Locks
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: After publishing this blog, I was asked whether the &lt;code&gt;terraform force-unlock&lt;/code&gt; command still worked. I tested this and can say it performs as expected with the new S3 native state file locking mechanism. Here's an example session showing this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform apply plan2.out
╷
│ Error: Error acquiring the state lock
│ 
│ Error message: operation error S3: PutObject, https response error StatusCode: 412, RequestID: NGQM2VGSTSDWPCZF, HostID: dzPZArnTy31oVeuVLI8Dm61HXnuL6M3R2tlWFe2suztP0zkh4Bwv/eJFBLqVfitAI40I5BvIeds&lt;span class="o"&gt;=&lt;/span&gt;, api error PreconditionFailed: At least one of the pre-conditions you specified did not hold
│ Lock Info:
│   ID:        bde40e3b-2bfb-f577-fea5-44923c9d5275
│   Path:      tfstate-lock-test-0bhfxn8x1/example/terraform-state-lock-test.tfstate
│   Operation: OperationTypeApply
│   Who:       user@hostname
│   Version:   1.10.3
│   Created:   2025-01-08 16:55:35.808464751 +0000 UTC
│   Info:      
│ 
│ 
│ Terraform acquires a state lock to protect the state from being written
│ by multiple &lt;span class="nb"&gt;users &lt;/span&gt;at the same time. Please resolve the issue above and try
│ again. For most commands, you can disable locking with the &lt;span class="s2"&gt;"-lock=false"&lt;/span&gt;
│ flag, but this is not recommended.
╵
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can remove the lock, but only do this if you know the lock is stale. First note the lock ID shown above, then run the &lt;code&gt;force-unlock&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform force-unlock bde40e3b-2bfb-f577-fea5-44923c9d5275
Do you really want to force-unlock?
  Terraform will remove the lock on the remote state.
  This will allow &lt;span class="nb"&gt;local &lt;/span&gt;Terraform commands to modify this state, even though it
  may still be &lt;span class="k"&gt;in &lt;/span&gt;use. Only &lt;span class="s1"&gt;'yes'&lt;/span&gt; will be accepted to confirm.

  Enter a value: &lt;span class="nb"&gt;yes

&lt;/span&gt;Terraform state has been successfully unlocked!

The state has been unlocked, and Terraform commands should now be able to
obtain a new lock on the remote state.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Delete Your Old DynamoDB Tables
&lt;/h2&gt;

&lt;p&gt;Now that you've switched from using the old Terraform DynamoDB locking to the new S3 native state file locking, you can remove the old DynamoDB table used to track these locks!&lt;/p&gt;

&lt;p&gt;Yay, one less resource to manage and be charged by AWS for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Hopefully you see the advantage of using the new Terraform S3 backend native state file locking mechanism, and how to configure it for your environment.&lt;/p&gt;

&lt;p&gt;Happy Terraforming!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>s3</category>
    </item>
    <item>
      <title>A Survey of Serverless Sustainability Trends</title>
      <dc:creator>Matt Bacchi</dc:creator>
      <pubDate>Mon, 30 Dec 2024 02:13:44 +0000</pubDate>
      <link>https://dev.to/aws-builders/a-survey-of-serverless-sustainability-trends-14i8</link>
      <guid>https://dev.to/aws-builders/a-survey-of-serverless-sustainability-trends-14i8</guid>
      <description>&lt;p&gt;As we bring 2024 to a close, and after an invigorating week at AWS re:Invent, many will be writing their year in review summaries. I've decided to dedicate those column inches to the state of serverless sustainability today. The observant among us are quite aware of how the artificial intelligence (AI) craze has wormed it's way into every product, industry and conversation over the past year. It's been making headlines as shuttered power plants like Three Mile Island are &lt;a href="https://www.cnn.com/2024/09/20/energy/three-mile-island-microsoft-ai/index.html" rel="noopener noreferrer"&gt;reopened&lt;/a&gt;. And it's on a collision course with the world's &lt;a href="https://www.wired.com/story/true-cost-generative-ai-data-centers-energy/" rel="noopener noreferrer"&gt;climate change goals&lt;/a&gt;. (AWS also &lt;a href="https://www.aboutamazon.com/news/sustainability/amazon-nuclear-small-modular-reactor-net-carbon-zero" rel="noopener noreferrer"&gt;announced in October&lt;/a&gt; they signed agreements for 3 small modular nuclear reactor projects for their own data centers.)&lt;/p&gt;

&lt;p&gt;Running servers in data centers is an energy-intensive process. Every search for a restaurant or game score requires a response from a server. And a webserver that gets 5 hits per day draws roughly the same constant power as a busy one, because provisioned hardware burns energy whether or not traffic arrives. Serverless technologies like AWS Lambda, S3, and DynamoDB can be used to reduce that constant power usage when there is no demand.&lt;/p&gt;

&lt;p&gt;Today we'll discuss the current state of affairs with serverless sustainability, and perhaps a little about sustainable computing in general.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Sustainability
&lt;/h2&gt;

&lt;p&gt;In the context of technology and computing in particular, sustainability equates primarily to power usage and carbon emissions. The computing industry has been on an upward trend in energy use since, well, the beginning of the computer era. During that time, increases in computational capabilities (faster processors) have driven higher demands for electricity. The next (current?) era of computing is going to require rethinking these needs by reducing the power requirements of computers and data centers, recycling hardware, and becoming better climate citizens. (AWS discusses some of their sustainability improvements related to the circular economy in this re:Invent talk "Advancing sustainable AWS infrastructure to power AI solutions" &lt;a href="https://youtu.be/Fq5Gi-_G6T4?t=303" rel="noopener noreferrer"&gt;here&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;AWS defines sustainability as one of the 6 pillars of their &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/sustainability.html" rel="noopener noreferrer"&gt;Serverless Applications Lens for the AWS Well-Architected Framework&lt;/a&gt;. Their Well-Architected Framework &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/sustainability-pillar/sustainability-pillar.html" rel="noopener noreferrer"&gt;documentation covering Sustainability indicates&lt;/a&gt; their focus on environmental sustainability in this pillar. It's important that they make that distinction, and even more telling that they are making a large commitment to improving energy efficiency and carbon emissions in their own operations. This document (and &lt;a href="https://sustainability.aboutamazon.com/carbon-reduction-aws.pdf" rel="noopener noreferrer"&gt;others they've released [PDF]&lt;/a&gt;) lay out their goals as well as identify how AWS cloud customers can optimize their operations to become more sustainable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sustainable Workloads
&lt;/h2&gt;

&lt;p&gt;As is common when working with AWS, they frame cloud sustainability in terms of the &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/sustainability-pillar/the-shared-responsibility-model.html" rel="noopener noreferrer"&gt;shared responsibility model&lt;/a&gt;. This is akin to their approach to security in the cloud, so it makes sense that they want users to take their part seriously.&lt;/p&gt;

&lt;h3&gt;
  
  
  Region Selection
&lt;/h3&gt;

&lt;p&gt;One way to make your application more sustainable is to use the least carbon-intensive AWS region available, taking into account other business requirements such as latency. If you are a small company with customers in only one country or region, you may not be able to take advantage of this, but if business requirements don't rule out a more remote location, this technique is open to you. Some AWS regions sit in locations with a higher percentage of renewable energy entering the power grid. Latency should of course factor into choosing one of these regions, but carbon intensity must be part of the decision too.&lt;/p&gt;

&lt;p&gt;How, then, do we choose a region for sustainability? A good reference is this AWS Architecture Blog: &lt;a href="https://aws.amazon.com/blogs/architecture/how-to-select-a-region-for-your-workload-based-on-sustainability-goals/" rel="noopener noreferrer"&gt;How to select a Region for your workload based on sustainability goals&lt;/a&gt;. They suggest using the site &lt;a href="https://electricitymaps.com" rel="noopener noreferrer"&gt;electricitymaps.com&lt;/a&gt; to identify carbon intensity and renewable energy percentage for each regional electricity grid.&lt;/p&gt;

&lt;p&gt;On the &lt;a href="https://app.electricitymaps.com/map/24h" rel="noopener noreferrer"&gt;electricitymaps 24 hour climate impact map&lt;/a&gt;, hovering over the PJM region (which covers Virginia, home to the &lt;code&gt;us-east-1&lt;/code&gt; AWS region) shows a carbon intensity (as of 12/21/2024) of 409g CO&lt;sub&gt;2&lt;/sub&gt;/kWh and a renewable energy mix of 8%. If we now look at the California power grid (home to the AWS &lt;code&gt;us-west-1&lt;/code&gt; region), we see 139g CO&lt;sub&gt;2&lt;/sub&gt;/kWh and a renewable energy mix of 67%.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjl0ftstnhw5y1s9r5e7l.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjl0ftstnhw5y1s9r5e7l.jpg" alt="Power Grid Carbon Intensity 12/21/2024" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Over-provisioned Capacity and Time Shifting
&lt;/h2&gt;

&lt;p&gt;Earlier this year I ran across a podcast from the &lt;a href="https://greensoftware.foundation/" rel="noopener noreferrer"&gt;Green Software Foundation&lt;/a&gt; called Environment Variables. In one &lt;a href="https://podcasts.castplus.fm/e/vnwkr1kn-greening-serverless" rel="noopener noreferrer"&gt;episode&lt;/a&gt; the guest Kate Goldenring talked about the sustainability merits of serverless. She pointed out a statistic from the &lt;a href="https://sysdig.com/blog/2023-cloud-native-security-usage-report/" rel="noopener noreferrer"&gt;Sysdig 2023 Cloud Native Security and Usage Report&lt;/a&gt; showing that 69% of requested CPU resources go unused. That means as an industry we're over-provisioning by more than two-thirds. The goal, according to Kate, should be higher hardware utilization.&lt;/p&gt;

&lt;p&gt;This supports what we've been saying about how running applications on servers (such as EC2) or containers is less efficient than serverless. Provisioning for peak capacity is what makes these "always on" deployments wasteful, because traffic is rarely at its expected peak. Serverless technologies that utilize multi-tenancy are intentionally more sustainable than traditional deployment methods because they enable higher hardware utilization.&lt;/p&gt;

&lt;p&gt;It's idealistic to assume we can perfectly utilize hardware via multi-tenancy, but one approach Meta has begun using in their internal private cloud functions product called &lt;a href="https://read.engineerscodex.com/p/meta-xfaas-serverless-functions-explained" rel="noopener noreferrer"&gt;XFaaS&lt;/a&gt; (similar to AWS Lambda) is a technique called "time shifting". Instead of running all workloads immediately, they will delay some operations that are not time sensitive. This increases the average utilization of their hardware due to scheduling these delay tolerant functions during off-peak times, ultimately lowering their peaks and raising their troughs. That equates to a higher average utilization.&lt;/p&gt;

&lt;p&gt;This introduces an alternative approach to managing sustainability, one seldom discussed in the serverless realm as far as I've seen. In many event driven systems, events don't require immediate processing and are delay tolerant by design. This characteristic lends itself well to delaying the processing until utilization in your system is below a certain threshold (i.e., during off hours). Of course this is only possible for data that isn't expected to be immediately consistent. This type of processing emerged in the mainframe computing era and was called "batch processing". It's still in use today: have you ever wondered why your bank statement can take up to 24 hours to reflect purchases? Banks process payments overnight or in some other batch methodology.&lt;/p&gt;
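&lt;p&gt;At its core, time shifting boils down to a single decision per job. The toy sketch below is my own illustration, not Meta's actual XFaaS scheduler, and the 60% threshold is a hypothetical value:&lt;/p&gt;

```python
# Toy sketch of a time-shifting decision; not Meta's actual XFaaS scheduler.
# The 60% threshold is a hypothetical value you'd tune for your own fleet.
UTILIZATION_THRESHOLD = 0.60

def should_run_now(cpu_utilization: float, delay_tolerant: bool) -> bool:
    """Run time-sensitive work immediately; defer delay-tolerant work while
    the fleet is busy, which raises average utilization over time."""
    if not delay_tolerant:
        return True
    return cpu_utilization < UTILIZATION_THRESHOLD
```

&lt;p&gt;Deferred jobs would go back into a queue and be retried when utilization drops, lowering the peaks and raising the troughs as described above.&lt;/p&gt;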

&lt;h2&gt;
  
  
  Time Shifting on Kubernetes
&lt;/h2&gt;

&lt;p&gt;Although I don't consider Kubernetes to be serverless by definition, I happened across a Kubernetes admission controller recently that accepts delay tolerant workloads. This paradigm is atypical for Kubernetes (which usually processes a request as soon as it's received), but the pattern hints at the potential future of all workload processing systems.&lt;/p&gt;

&lt;p&gt;You may have a request sent to your backend system that doesn't require immediate processing, but rather is capable of running whenever it is most efficient to do so. The &lt;a href="https://arxiv.org/abs/2205.02895" rel="noopener noreferrer"&gt;research paper describing Cucumber&lt;/a&gt; and a (two-year-old but unmodified) &lt;a href="https://github.com/dos-group/cucumber" rel="noopener noreferrer"&gt;reference implementation&lt;/a&gt; are available. While Cucumber is geared towards running workloads when excess solar power is being generated, the concept could be extended to many other signals, such as how heavily loaded your servers are.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;You've just read about a few of the trends developing in the sustainable serverless computing arena. I hope this has provided some new information for you to delve into on your own. Thanks and have a great 2025!&lt;/p&gt;




&lt;p&gt;Cover photo by &lt;a href="https://pixabay.com/users/fabersam-98886" rel="noopener noreferrer"&gt;Samuel Faber&lt;/a&gt; from &lt;a href="https://pixabay.com/" rel="noopener noreferrer"&gt;Pixabay&lt;/a&gt;&lt;/p&gt;

</description>
      <category>sustainability</category>
      <category>lambda</category>
      <category>serverless</category>
      <category>aws</category>
    </item>
    <item>
      <title>Limitations of AWS EC2 Image Builder Lifecycle Policies</title>
      <dc:creator>Matt Bacchi</dc:creator>
      <pubDate>Tue, 13 Aug 2024 13:13:14 +0000</pubDate>
      <link>https://dev.to/aws-builders/limitations-of-aws-ec2-image-builder-lifecycle-policies-2me6</link>
      <guid>https://dev.to/aws-builders/limitations-of-aws-ec2-image-builder-lifecycle-policies-2me6</guid>
      <description>&lt;p&gt;EC2 Image Builder Lifecycle Policies is a fairly new AWS feature that enables automatic cleanup of old EC2 Image Builder pipeline artifacts. If you haven't used EC2 Image Builder, it's quite handy for creating customized AWS Machine Images (AMIs) for your EC2 deployments.&lt;/p&gt;

&lt;p&gt;This could be a really valuable tool that simplifies the AMI build process. But there are a number of artifacts left behind after your AMI is built that potentially cost money and require manual effort to clean up.&lt;/p&gt;

&lt;p&gt;Enter EC2 Image Builder Lifecycle Policies. This feature is advertised as a way to automatically tidy these artifacts. Unfortunately it's almost unusable if you build a large number of AMIs constructed from numerous recipes.&lt;/p&gt;

&lt;p&gt;Below I'll describe a high frequency use case that illustrates the limitations of EC2 Image Builder Lifecycle Policies. You can draw your own conclusions about the utility of the service as it stands today.&lt;/p&gt;

&lt;h2&gt;
  
  
  Standard use case
&lt;/h2&gt;

&lt;p&gt;The process of building AWS EC2 Amazon Machine Images (AMIs) can be rather involved. You start with a base image, add required packages and user data, then build, test, and finally deploy your custom AMI. The deployment step isn't even straightforward, as it potentially requires disk volumes, network interfaces, and additional resources which necessitate experimentation. Even after you've done all this, managing the state of these AMIs and their related resources still isn't complete. These resources all need to be cleaned up. And because AMIs tend to be created frequently to meet security objectives, the entire process can occur at a high cadence.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why so many AMIs?
&lt;/h3&gt;

&lt;p&gt;We all know how frequently CVEs (&lt;a href="https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures" rel="noopener noreferrer"&gt;Common Vulnerabilities and Exposures&lt;/a&gt;) can be released, and how quickly management expects them to be mitigated. Lower severity security updates come out at an even more unrelenting pace. For production infrastructure at enterprise scale, updates need to be routinely applied and be extremely reliable. This process is simplified by automating it all with an EC2 Image Builder pipeline executed on a regular schedule.&lt;/p&gt;

&lt;p&gt;But after these AMIs and related resources have been superseded by newer ones they need to be removed. Until now there was no method provided by AWS to perform this action. You needed to develop your own tooling for this purpose.&lt;/p&gt;

&lt;h2&gt;
  
  
  EC2 Image Builder Lifecycle Policies
&lt;/h2&gt;

&lt;p&gt;Lifecycle Policies adds a necessary (heretofore missing) feature to EC2 Image Builder. Having AWS remove your unused resources is a great idea. But the execution leaves something to be desired. For the sake of illustration, let's also define what "related resources" means here.&lt;/p&gt;

&lt;p&gt;Every AMI built via EC2 Image Builder requires that an image recipe and an image pipeline be created. An image recipe consists of AWS managed components or custom (user generated) components that perform the AMI build. The pipeline runs your recipe and, upon success, generates an output image that references the output AMI, the infrastructure configuration, the distribution settings, and a CloudWatch log stream (which includes pipeline output for debugging purposes).&lt;/p&gt;

&lt;p&gt;When you modify the image recipe, a new recipe version is created and the previous versions become obsolete. The now unused recipe and its related resources (such as the output image as well as the AMI) remain active in AWS indefinitely and won't be removed until you perform that action yourself. (I've highlighted this specific point for a reason that will become obvious later.)&lt;/p&gt;

&lt;p&gt;If you thought you merely needed to delete the AMI when you were done with it, you would leave all these related Image Builder resources in your account. (Granted they don't cost a lot, but it makes sense to remove unused resources, if only to avoid hitting quota limits at some point in the future.)&lt;/p&gt;

&lt;p&gt;While EC2 Image Builder Lifecycle Policies makes this last cleanup bit somewhat more straightforward, it isn't a one-size-fits-all solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations of the current Lifecycle Policies service
&lt;/h2&gt;

&lt;p&gt;To create a lifecycle policy for AMI image resources, the &lt;a href="https://docs.aws.amazon.com/imagebuilder/latest/userguide/create-lifecycle-policies.html#create-lifecycle-policy-ami" rel="noopener noreferrer"&gt;documentation says&lt;/a&gt; you must "apply lifecycle rules to image resources based on the recipe that created them, select up to 50 recipe versions for the policy."&lt;/p&gt;

&lt;p&gt;This is unwieldy because you have to add new recipe versions explicitly each time you create a new image recipe. If you use Infrastructure as Code (IaC) tools to do this (such as Terraform, CloudFormation or CDK), it becomes slightly less manual, but still requires manual effort (or at least in-depth awareness of the process). And even if you add these recipes to your Lifecycle Policy manually, there's a limit of 50 recipe versions per policy. This isn't as irritating if you only have a dozen recipes in total, but the problem intensifies with scale.&lt;/p&gt;

&lt;p&gt;Also, note that I'm referring to lifecycle policies for AMI image resources; I don't currently use EC2 Image Builder to create container image resources.&lt;/p&gt;

&lt;p&gt;The limitations as I see them are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;The maximum number of recipes that the &lt;code&gt;LifecyclePolicyResourceSelection&lt;/code&gt; API (&lt;a href="https://docs.aws.amazon.com/imagebuilder/latest/APIReference/API_LifecyclePolicyResourceSelection.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;) accepts as selection criteria is 50. This limit is arbitrary; for a user with a large number of image recipes it is far too small, and adding every recipe to the rule scope parameter becomes a burden.&lt;/p&gt;

&lt;p&gt;I've verified this is not adjustable in the AWS Service Quotas self service quota increase dashboard in the AWS Console (or the Service Quotas API).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the EC2 Image Builder Lifecycle Policies AWS Console, when selecting the recipes to apply the Rule Scope, the drop down list only displays a few current recipes. It shows a spinner and indicates it is "Loading Recipes" (see screenshot) but the additional recipes never show up.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WzblTXCW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://bacchi.org/images/rule-scope.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WzblTXCW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://bacchi.org/images/rule-scope.png" alt="Rule Scope" width="452" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Recommended Solutions (Feature Requests)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;An obvious solution to the first limitation above (the 50 recipe maximum problem) is allowing more than 50 recipes in the selection criteria Rule Scope.&lt;/p&gt;

&lt;p&gt;Of course, there's a much more user-friendly and comprehensive approach. The most ergonomic proposal would be to allow a wildcard or pattern-matching syntax. This would let power users identify their recipes in the Rule Scope with something like the JSON below (the &lt;a href="https://docs.aws.amazon.com/imagebuilder/latest/APIReference/API_LifecyclePolicyResourceSelectionRecipe.html" rel="noopener noreferrer"&gt;API documentation requires&lt;/a&gt; &lt;code&gt;name&lt;/code&gt;/&lt;code&gt;semanticVersion&lt;/code&gt; key-value pairs). Otherwise you would still have to list every single version you've created that should be subject to the rule scope.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ubuntu-recipe"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"semanticVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;My rationale for these suggestions is as follows: if AWS only raised the 50-recipe limit on the selection criteria Rule Scope, users would still have to insert every single recipe version into the &lt;code&gt;LifecyclePolicyResourceSelection&lt;/code&gt; mentioned above. Allowing a wildcard is much more approachable: users would only need to add the recipe name and a wildcard to the selection criteria Rule Scope once, and never have to think about it again.&lt;/p&gt;

&lt;p&gt;This single feature would make EC2 Image Builder Lifecycle Policies much more useful and less of a burden for users who actively manage recipes and need a tool to clean these resources up (without resorting to writing something from scratch.)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Regarding the AWS Console Rule Scope drop down problem: it seems the first 6 results are returned in the request, but pagination never fetches any more. In fact, if you open your browser developer console, you can see that 6 results are returned from the &lt;code&gt;ListImageRecipes&lt;/code&gt; request, and no more are ever returned.&lt;/p&gt;

&lt;p&gt;Thankfully the AWS CLI command &lt;code&gt;aws imagebuilder update-lifecycle-policy&lt;/code&gt; (and obviously the API) allows you to provide a list of recipes, but the AWS Console is typically the first exposure that users have to how a feature works. Fixing this issue would be helpful to users newly exposed to Lifecycle Policies.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
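&lt;p&gt;In the meantime, the limit can be scripted around with the API. The sketch below (Python with boto3; the chunking helpers are mine, and extracting the version from the recipe ARN's last path component is an assumption about the &lt;code&gt;ListImageRecipes&lt;/code&gt; response shape that you should verify against your account) enumerates every recipe version and groups them into 50-entry batches, one batch per policy:&lt;/p&gt;

```python
def chunk(items, size=50):
    """Split a recipe list into groups of at most `size` (the Rule Scope limit)."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def to_selection(recipes):
    """Shape (name, version) pairs into LifecyclePolicyResourceSelection recipe entries."""
    return [{"name": name, "semanticVersion": version} for name, version in recipes]

def list_all_recipe_versions(ib):
    """Enumerate every recipe version using a boto3 'imagebuilder' client.
    Assumption: the version is the last path component of the recipe ARN."""
    recipes, token = [], None
    while True:
        page = ib.list_image_recipes(**({"nextToken": token} if token else {}))
        for summary in page["imageRecipeSummaryList"]:
            recipes.append((summary["name"], summary["arn"].rsplit("/", 1)[-1]))
        token = page.get("nextToken")
        if token is None:
            break
    return recipes
```

&lt;p&gt;Each group from &lt;code&gt;chunk(list_all_recipe_versions(boto3.client("imagebuilder")))&lt;/code&gt; could then be fed to &lt;code&gt;aws imagebuilder update-lifecycle-policy&lt;/code&gt;, but this is exactly the kind of from-scratch tooling a wildcard syntax would make unnecessary.&lt;/p&gt;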

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While EC2 Image Builder Lifecycle Policies has the potential to be a great service, it is currently hampered by its user experience (UX). Adding functionality that makes it easier to add recipes to policies would be a major improvement for early adopters and power users.&lt;/p&gt;

</description>
      <category>ec2</category>
      <category>imagebuilder</category>
      <category>awswishlist</category>
      <category>aws</category>
    </item>
    <item>
      <title>Event Driven Processing of ip-ranges.json</title>
      <dc:creator>Matt Bacchi</dc:creator>
      <pubDate>Thu, 01 Feb 2024 22:32:10 +0000</pubDate>
      <link>https://dev.to/mbacchi/event-driven-processing-of-ip-rangesjson-16i1</link>
      <guid>https://dev.to/mbacchi/event-driven-processing-of-ip-rangesjson-16i1</guid>
      <description>&lt;p&gt;Imagine you have a security group that needs to allow all IP addresses of AWS EC2 instances. Or imagine you have to allow IP addresses of Github Actions runners so that only your CI workers connect to your VPC. Both of those IP address ranges change regularly, and need to be updated (usually by hand.)&lt;/p&gt;

&lt;p&gt;If we want to automate these security group updates, how can we figure out when these IP address ranges have changed? AWS sends an SNS notification every time its &lt;code&gt;ip-ranges.json&lt;/code&gt; list changes. That notification can be used to initiate an automated procedure to update our security group.&lt;/p&gt;

&lt;p&gt;What we're describing is an &lt;a href="https://aws.amazon.com/event-driven-architecture/"&gt;event driven architecture&lt;/a&gt;. In event driven architectures, an event producer causes an event to be created. A downstream event consumer handles the event and may trigger further events.&lt;/p&gt;

&lt;p&gt;In this 2 part blog post series, we'll cover event driven architectures. In this first part we'll build the IP address range processing component: it processes the AWS &lt;code&gt;ip-ranges.json&lt;/code&gt; file and inserts the entries into a DynamoDB table. The second post in this series will insert the IP ranges into AWS security groups.&lt;/p&gt;

&lt;p&gt;After the next blog post, you'll be able to manage IP address ranges in your environment using Event Driven Architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Event Driven Architecture?
&lt;/h2&gt;

&lt;p&gt;The Wikipedia page for &lt;a href="https://en.wikipedia.org/wiki/Event-driven_architecture"&gt;event driven architecture&lt;/a&gt; (or EDA) describes it as "a software architecture paradigm concerning the production and detection of events". This is in contrast to software that is focused on its own state and doesn't concern itself with external state changes. Event driven architectures are often made up of loosely coupled components that act independently on the events they're concerned with.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why use EDA for data processing?
&lt;/h3&gt;

&lt;p&gt;Data processing is often well suited to the EDA model because it's rarely necessary to synchronously perform a data processing task. If a user purchases something in a shopping cart, certainly the interaction with the bank or credit card needs to be synchronous and immediate. But if the website catalogs all purchases in a "top ten products" list from all user purchases, that processing can be done asynchronously at a later, and possibly less busy, time of the day. It's common for data engineering teams to run extract, transform, load (ETL) jobs that process data off hours to avoid contention for compute resources. This can all be done with an event driven model, where the list of purchased items in our example would be sent to an event bus (such as AWS EventBridge or Apache Kafka). Consumers of that type of event would then act on it depending on their function.&lt;/p&gt;

&lt;h2&gt;
  
  
  SNS notification for ip-ranges.json
&lt;/h2&gt;

&lt;p&gt;As mentioned above, anyone can subscribe to SNS notifications for the &lt;code&gt;ip-ranges.json&lt;/code&gt; list. When a change occurs to that list the SNS notification is sent out. It acts as the event producer in this case, causing anyone who is subscribed to the notification list to receive the event.&lt;/p&gt;

&lt;p&gt;In our example today, we'll set up an AWS Lambda function to process the events stemming from this SNS notification. Our function will be the event consumer in this scenario.&lt;/p&gt;
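&lt;p&gt;A minimal consumer might look like the sketch below (Python; the &lt;code&gt;url&lt;/code&gt; field follows AWS's documented notification payload for the AmazonIpSpaceChanged topic, but treat the exact message shape as an assumption to verify before relying on it):&lt;/p&gt;

```python
import json
from urllib.request import urlopen

def parse_sns_event(event):
    """Unwrap the SNS envelope Lambda receives and decode the JSON payload."""
    return json.loads(event["Records"][0]["Sns"]["Message"])

def handler(event, context=None):
    """Event consumer: fetch the updated ip-ranges.json named in the notification."""
    message = parse_sns_event(event)
    with urlopen(message["url"]) as resp:  # e.g. https://ip-ranges.amazonaws.com/ip-ranges.json
        ranges = json.load(resp)
    return {"syncToken": ranges["syncToken"], "prefixes": len(ranges["prefixes"])}
```

&lt;p&gt;The real function would go on to write each prefix into DynamoDB rather than just counting them.&lt;/p&gt;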

&lt;h2&gt;
  
  
  Data organization of the ip-ranges.json file
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;ip-ranges.json&lt;/code&gt; file &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/aws-ip-ranges.html#aws-ip-syntax"&gt;syntax&lt;/a&gt; consists of a creation date (to indicate last update time) and a list of IPv4 and IPv6 address ranges. These look like the following sample:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"ip_prefix"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"52.4.0.0/14"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"region"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"EC2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"network_border_group"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We want to maintain all of this information, and in fact enhance it with the date stamp (or &lt;code&gt;synctoken&lt;/code&gt;) which is included in the &lt;code&gt;ip-ranges.json&lt;/code&gt; file as mentioned above.&lt;/p&gt;

&lt;h2&gt;
  
  
  DynamoDB Single Table Design
&lt;/h2&gt;

&lt;p&gt;We'll be inserting all of these IP address ranges into a DynamoDB table for future access and processing (triggered via events.) DynamoDB is great for serverless applications because it allows a large number of stateless connections via HTTP. Using a single DynamoDB table, we'll store all of this data in a way that can be easily queried by creating composite keys that pre-join the data fields and make lookups extremely fast. This concept is discussed quite extensively, but Alex Debrie (an AWS Data Hero) has a &lt;a href="https://www.alexdebrie.com/posts/dynamodb-single-table/"&gt;blog post&lt;/a&gt; that covers these ideas quite well.&lt;/p&gt;

&lt;p&gt;We want to enable queries that return IP prefixes based on AWS region and service. To do that, we'll format the data with both a primary key (PK) and sort key (SK), as well as a synctoken attribute that is completely separate from the IP prefixes. The synctoken will let us remove all IP prefixes carrying a given synctoken, because we'll create a &lt;a href="https://www.trek10.com/blog/the-ten-rules-for-data-modeling-with-dynamodb"&gt;global secondary index&lt;/a&gt; on it to allow fast queries of the items containing a particular synctoken. We can add non-normalized data to the same DynamoDB table because it isn't an SQL database; it's more like a key-value or wide-column store, generally referred to as NoSQL.&lt;/p&gt;

&lt;p&gt;Here is what our primary and secondary key layout will look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp84su7fx7lcl8qo5321b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp84su7fx7lcl8qo5321b.png" alt="Primary Key and Attributes" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;
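&lt;p&gt;As a sketch of the shaping step (the exact PK/SK format here is illustrative; adapt it to the layout shown above), each &lt;code&gt;ip-ranges.json&lt;/code&gt; entry could be keyed by region and service with the CIDR as the sort key:&lt;/p&gt;

```python
def to_item(entry: dict, synctoken: str) -> dict:
    """Shape one ip-ranges.json prefix entry into the single-table layout.
    The composite PK pre-joins region and service so lookups by both are fast;
    the synctoken attribute feeds the global secondary index."""
    return {
        "PK": f"{entry['region']}#{entry['service']}",
        "SK": entry["ip_prefix"],
        "synctoken": synctoken,
        "network_border_group": entry["network_border_group"],
    }
```

&lt;p&gt;A boto3 &lt;code&gt;put_item&lt;/code&gt; call per shaped entry would then populate the table.&lt;/p&gt;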

&lt;h2&gt;
  
  
  Lambda Function Overview
&lt;/h2&gt;

&lt;p&gt;We'll use a Lambda function that gets triggered by &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/aws-ip-ranges.html#subscribe-notifications"&gt;an SNS notification&lt;/a&gt; whenever the IP ranges JSON changes. The SNS subscription, Lambda function, DynamoDB table and all the AWS infrastructure are configured using the AWS Cloud Development Kit (CDK). The Lambda function itself does one thing: it downloads the JSON, cycles through every item, and adds it to the DynamoDB table.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS CDK configuration
&lt;/h2&gt;

&lt;p&gt;The configuration used to create the Lambda function and other infrastructure is in the GitHub repository &lt;a href="https://github.com/mbacchi/eda-ip-ranges/tree/main"&gt;here&lt;/a&gt;. Instructions on how to deploy the infrastructure are also in that repository.&lt;/p&gt;

&lt;p&gt;Once deployed, the Lambda function will do nothing until the SNS notification invokes it the next time the JSON file changes. Below you can see a screenshot of recent invocations of the function in my environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpip2ys9trawx0r9trf0f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpip2ys9trawx0r9trf0f.png" alt="Cloudwatch Metrics of Lambda Function Invocations" width="800" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;In this first part of the blog series, you saw how we used Event Driven Architecture to respond to an event and perform some data processing. In the next post we'll use these IP address ranges to update security groups allowing traffic for certain AWS services.&lt;/p&gt;

&lt;p&gt;Thanks for reading and have a great day!&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS &lt;a href="https://aws.amazon.com/blogs/security/how-to-automatically-update-your-security-groups-for-amazon-cloudfront-and-aws-waf-by-using-aws-lambda/"&gt;blog
post&lt;/a&gt;
on "How to Automatically Update Your Security Groups for Amazon CloudFront and
AWS WAF by Using AWS Lambda"&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Cover photo by &lt;a href="https://unsplash.com/@thephotographytimes"&gt;Dushyant Kumar&lt;/a&gt; on &lt;a href="https://unsplash.com"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>lambda</category>
      <category>serverless</category>
      <category>eventdriven</category>
      <category>aws</category>
    </item>
    <item>
      <title>S3 Bucket Takeover Neutralization</title>
      <dc:creator>Matt Bacchi</dc:creator>
      <pubDate>Tue, 30 Jan 2024 14:07:57 +0000</pubDate>
      <link>https://dev.to/mbacchi/s3-bucket-takeover-neutralization-4d1l</link>
      <guid>https://dev.to/mbacchi/s3-bucket-takeover-neutralization-4d1l</guid>
      <description>&lt;p&gt;There's been a recent &lt;a href="https://checkmarx.com/blog/hijacking-s3-buckets-new-attack-technique-exploited-in-the-wild-by-supply-chain-attackers/"&gt;uptick in the number of S3 buckets&lt;/a&gt; that have become "hijacked" or "taken over".&lt;/p&gt;

&lt;p&gt;How can an attacker take over your S3 bucket? What can you do to prevent this on your buckets?&lt;/p&gt;

&lt;p&gt;We'll answer these questions and provide details on how you can neutralize the threat of S3 bucket takeovers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Hijacking?
&lt;/h2&gt;

&lt;p&gt;How can your S3 bucket get hijacked?&lt;/p&gt;

&lt;p&gt;It should be noted that the term "hijacked" is a misnomer. S3 buckets which have ended up in this state were managed poorly by the owner. The original owner had to choose to give up ownership of the S3 bucket, prior to it being claimed by a bad actor. If you have an active AWS S3 bucket in your account, this cannot occur unless you relinquish control. These are categorized as &lt;a href="https://www.crowdstrike.com/cybersecurity-101/cyberattacks/supply-chain-attacks/"&gt;software supply chain attacks&lt;/a&gt;, where the attacker hijacks open source software or software updates, eventually compromising target computers.&lt;/p&gt;

&lt;p&gt;AWS uses a &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html"&gt;global namespace&lt;/a&gt; for S3 buckets. If a bucket name "mattbacchi" is available and I requisition it, no one else in the world can use that bucket name. A global namespace has some downsides, namely that everyone competes for the same bucket names, and a single bucket name cannot be owned by more than one AWS account.&lt;/p&gt;

&lt;p&gt;This surge in exploit/hijack/takeover incidents is ultimately down to this global namespace. If you owned an S3 bucket but then deleted it, someone else could subsequently recreate it in their account (after a short period of time).&lt;/p&gt;

&lt;p&gt;If the deleted bucket was public and contained trusted files (such as npm packages), someone with sinister intentions could recreate the bucket and masquerade as you. If they named their tainted files the same as your trusted files, users might download the malicious content assuming it was your original content. This is how the above poisoned "bignum" &lt;a href="https://checkmarx.com/blog/hijacking-s3-buckets-new-attack-technique-exploited-in-the-wild-by-supply-chain-attackers/"&gt;npm package exploit&lt;/a&gt; worked. The S3 bucket owned by someone the "bignum" software developer trusted was abandoned; an attacker subsequently recreated it with malicious content.&lt;/p&gt;

&lt;p&gt;These attacks cannot occur if S3 buckets are managed with better data lifecycle hygiene.&lt;/p&gt;

&lt;h2&gt;
  
  
  Preventing S3 Bucket Takeover
&lt;/h2&gt;

&lt;p&gt;In order to neutralize this threat, I recommend never deleting your S3 buckets.&lt;/p&gt;

&lt;p&gt;As described above, the only S3 buckets vulnerable to this attack vector are those that have been deleted and are no longer active. The logical defense is to never relinquish your S3 buckets; instead, maintain the bucket and its name forever.&lt;/p&gt;

&lt;p&gt;Won't this get expensive, you may ask? AWS &lt;a href="https://aws.amazon.com/s3/pricing/"&gt;bills customers for S3&lt;/a&gt; primarily based on storage. If you aren't storing any data in the bucket, it generates essentially no charge. In this way we retain an S3 bucket name so that it doesn't fall into the wrong hands.&lt;/p&gt;

&lt;p&gt;Of course, this may not be necessary for all S3 buckets you own. I recommend using this technique for any buckets that are public and have been used in production software applications. The "bignum" npm package bucket above is a prime example, retaining that bucket would have prevented the attack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decommissioning Process
&lt;/h2&gt;

&lt;p&gt;Developing an official process to manage this in your organization is important.&lt;/p&gt;

&lt;p&gt;If you intend to retain your S3 buckets in order to prevent takeover, you must first develop documentation and communicate this policy. Also consider how you will implement identifiers or signposts indicating to administrators, who may not be familiar with the process, that the bucket must be retained. I've personally used documentation, an AWS tagging scheme, and a couple of other methods to prevent a bucket from being accidentally relinquished.&lt;/p&gt;

&lt;p&gt;One method of making this obvious to infrastructure engineers (i.e. devops or platform engineers) is to create a zero byte file in the bucket that identifies it as persistent. I have suggested the file be named &lt;code&gt;DO_NOT_DELETE_S3_BUCKET_PERSISTENT&lt;/code&gt;, a fairly large signpost to prevent people from deleting it. A sample command to do this via the AWS CLI looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;touch &lt;/span&gt;DO_NOT_DELETE_S3_BUCKET_PERSISTENT&lt;span class="p"&gt;;&lt;/span&gt; aws s3 &lt;span class="nb"&gt;cp &lt;/span&gt;DO_NOT_DELETE_S3_BUCKET_PERSISTENT s3://BUCKETNAME/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another method is to create an AWS tag on the bucket such as &lt;code&gt;persistent=true&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Finally, if you want a more durable mechanism, add an S3 bucket policy that denies the &lt;code&gt;s3:DeleteBucket&lt;/code&gt; and &lt;code&gt;s3:PutObject&lt;/code&gt; actions. This prevents removing the bucket unless the user is an administrator, who must first remove the &lt;code&gt;s3:DeleteBucket&lt;/code&gt; denial from the bucket policy. They must manually perform that update, and hopefully the extra step makes them realize the bucket shouldn't be deleted.&lt;/p&gt;
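&lt;p&gt;A bucket policy along these lines could do it. This is a sketch only (the bucket name is a placeholder, and you should confirm the statement against your own access patterns before applying it, since the blanket Deny also blocks your own principals):&lt;/p&gt;

```python
import json

def retention_policy(bucket: str) -> str:
    """Build a bucket policy denying s3:DeleteBucket and s3:PutObject.
    s3:DeleteBucket acts on the bucket ARN; s3:PutObject acts on object ARNs."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "RetainBucketName",
            "Effect": "Deny",
            "Principal": "*",
            "Action": ["s3:DeleteBucket", "s3:PutObject"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",      # the bucket itself
                f"arn:aws:s3:::{bucket}/*",    # objects within it
            ],
        }],
    })

# To apply (requires AWS credentials):
#   boto3.client("s3").put_bucket_policy(Bucket="BUCKETNAME",
#                                        Policy=retention_policy("BUCKETNAME"))
```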

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Today we've discussed the cause of S3 bucket hijacking and some methods of preventing it. Hopefully this motivates you to perform some analysis of your S3 buckets and apply these suggestions in your environment.&lt;/p&gt;

&lt;p&gt;Thanks for reading and have a great day!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>security</category>
    </item>
    <item>
      <title>Bundling Go Lambda Functions with the AWS CDK</title>
      <dc:creator>Matt Bacchi</dc:creator>
      <pubDate>Fri, 22 Dec 2023 01:38:03 +0000</pubDate>
      <link>https://dev.to/mbacchi/bundling-go-lambda-functions-with-the-aws-cdk-4gee</link>
      <guid>https://dev.to/mbacchi/bundling-go-lambda-functions-with-the-aws-cdk-4gee</guid>
      <description>&lt;p&gt;Recently the Lambda Go runtime has changed from using the Go 1.x managed runtime to using the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-golang.html#golang-al1"&gt;provided runtimes&lt;/a&gt; which have been historically used for custom runtimes (i.e. Rust.) The former &lt;code&gt;go1.x&lt;/code&gt; runtime is being deprecated on January 8, 2024 (quite soon) and the new runtimes &lt;code&gt;provided.al2023&lt;/code&gt; or &lt;code&gt;provided.al2&lt;/code&gt; are expected to be used.&lt;/p&gt;

&lt;p&gt;With the introduction of these new runtimes, all of our Go binaries must &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/golang-handler.html#golang-handler-naming"&gt;now be called&lt;/a&gt; &lt;code&gt;bootstrap&lt;/code&gt; and be located at the root of the zip file used to deploy the function.&lt;/p&gt;

&lt;p&gt;If we have two functions built from separate source code, the &lt;code&gt;go1.x&lt;/code&gt; naming convention allowed us to name the handlers differently (e.g. &lt;code&gt;create&lt;/code&gt; vs. &lt;code&gt;list&lt;/code&gt;). Now we must name both function handlers &lt;code&gt;bootstrap&lt;/code&gt;, potentially causing confusion during the bundling process.&lt;/p&gt;

&lt;p&gt;In today's blog post, we'll describe how to bundle two different Go functions with the same name in a single CDK stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS CDK Deployment Using Typescript
&lt;/h2&gt;

&lt;p&gt;I'll be using TypeScript as the language for the AWS CDK deployment. It might seem strange to mix languages between the CDK deployment (TypeScript) and the Lambda function (Go), but TypeScript is the CDK's primary language and most examples are written in it. I'm using Go for the function itself because many deployments need a compiled language for task duration and performance (not necessarily to improve cold start time).&lt;/p&gt;

&lt;h2&gt;
  
  
  Source Organization
&lt;/h2&gt;

&lt;p&gt;As described earlier, in the past we could keep the Go function source files in the same directory and compile them separately to different file names. Now that the handler binary must always be named &lt;code&gt;bootstrap&lt;/code&gt;, the functions can no longer be output to the same directory.&lt;/p&gt;

&lt;p&gt;The layout of the files now looks like the following, with each function in its own subdirectory in the &lt;code&gt;lib/functions&lt;/code&gt; directory:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pXjO_YGy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6eegcesf1ika7qppsphk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pXjO_YGy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6eegcesf1ika7qppsphk.png" alt="Source code layout " width="434" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Bundling Go Functions
&lt;/h2&gt;

&lt;p&gt;Now that we have the function code in separate directories, we can bundle them using the Alpha CDK construct for a &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-lambda-go-alpha-readme.html"&gt;Go Function&lt;/a&gt;, shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BasicGoLambdaStack&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;Stack&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;StackProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TableV2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Table&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;partitionKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AttributeType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;STRING&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;removalPolicy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;RemovalPolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DESTROY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createFunction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;GoFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;CreateFunction&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;lib/functions/create&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DYNAMODB_TABLE&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tableName&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;logRetention&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;RetentionDays&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ONE_WEEK&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;listFunction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;GoFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ListFunction&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;lib/functions/list&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DYNAMODB_TABLE&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tableName&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;logRetention&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;RetentionDays&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ONE_WEEK&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above configuration is fairly straightforward. But before I found the &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-lambda-go-alpha-readme.html"&gt;aws-lambda-go-alpha&lt;/a&gt; module, I attempted (mostly unsuccessfully) to use the &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_lambda.Function.html"&gt;aws-lambda Function construct&lt;/a&gt;, which required using the &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_lambda.Code.html#static-fromwbrassetpath-options"&gt;Code.fromAsset&lt;/a&gt; method with the &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.ILocalBundling.html"&gt;ILocalBundling interface&lt;/a&gt;. This was extremely uncooperative: I found &lt;a href="https://github.com/aws-samples/cdk-lambda-bundling/blob/main/lib/cdk-bundling-lambda-stack.ts"&gt;multiple blogs&lt;/a&gt; and suggestions on how to configure it, and in the end I did get it working, but the solution I came up with was ugly and actually packaged the Go source file in the zip file alongside the bootstrap handler. Not cool.&lt;/p&gt;

&lt;p&gt;It's great that the CDK now supports Go Lambda functions as a full-blown construct. &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_lambda_nodejs.NodejsFunction.html"&gt;NodeJS functions&lt;/a&gt; have been supported this way for much longer, and it makes sense to support Go the same way, because the aggravation I endured packaging these Go handlers with the &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_lambda.Function.html"&gt;standard Lambda Function construct&lt;/a&gt; was a serious pain in the ass.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verifying the Output
&lt;/h2&gt;

&lt;p&gt;Now that we have these defined, we can package the artifacts using &lt;code&gt;cdk synth&lt;/code&gt; and verify that the &lt;code&gt;bootstrap&lt;/code&gt; binary was created for both functions.&lt;/p&gt;

&lt;p&gt;We can see here that the first messages output from the &lt;code&gt;cdk synth&lt;/code&gt; command are the two functions being bundled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;cdk synth
Bundling asset BasicGoLambdaStack/CreateFunction/Code/Stage...
Bundling asset BasicGoLambdaStack/ListFunction/Code/Stage...
Resources:
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And when we look in the &lt;code&gt;cdk.out&lt;/code&gt; directory, where our intermediate CDK files are placed during packaging, we see that the two Lambda function output directories itemized in the stack asset JSON file (called &lt;code&gt;BasicGoLambdaStack.assets.json&lt;/code&gt;) have &lt;code&gt;bootstrap&lt;/code&gt; binaries in them (ignore the third &lt;code&gt;.js&lt;/code&gt; asset; that's for the CloudWatch Logs retention custom resource):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;tree cdk.out/
cdk.out/
├── asset.4e26bf2d0a26f2097fb2b261f22bb51e3f6b4b52635777b1e54edbd8e2d58c35
│   └── index.js
├── asset.b0bae29696f98febe5e6a655c4f61466ad71ebfa0071b46a15e917a1d307333c
│   └── bootstrap
├── asset.f153e459ffb70033083e7507aaa06c00f4b13a792fad71433c11b5f050078e74
│   └── bootstrap
├── BasicGoLambdaStack.assets.json
├── BasicGoLambdaStack.template.json
├── cdk.out
├── manifest.json
└── tree.json

4 directories, 8 files
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running &lt;code&gt;cdk synth&lt;/code&gt; to verify things, we can deploy with &lt;code&gt;cdk deploy&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Success
&lt;/h2&gt;

&lt;p&gt;Now we can execute the functions and see them working as expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; https://someurl.execute-api.us-west-2.amazonaws.com/todos &lt;span class="nt"&gt;--data&lt;/span&gt; &lt;span class="s1"&gt;'{ "title": "stuff", "details": "really some stuff"}'&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"id"&lt;/span&gt;:&lt;span class="s2"&gt;"80cf566a-937b-4aef-98ef-48b84ee75c01"&lt;/span&gt;,&lt;span class="s2"&gt;"title"&lt;/span&gt;:&lt;span class="s2"&gt;"stuff"&lt;/span&gt;,&lt;span class="s2"&gt;"details"&lt;/span&gt;:&lt;span class="s2"&gt;"really some stuff"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; GET https://someurl.execute-api.us-west-2.amazonaws.com/todos
&lt;span class="o"&gt;[{&lt;/span&gt;&lt;span class="s2"&gt;"id"&lt;/span&gt;:&lt;span class="s2"&gt;"80cf566a-937b-4aef-98ef-48b84ee75c01"&lt;/span&gt;,&lt;span class="s2"&gt;"title"&lt;/span&gt;:&lt;span class="s2"&gt;"stuff"&lt;/span&gt;,&lt;span class="s2"&gt;"details"&lt;/span&gt;:&lt;span class="s2"&gt;"really some stuff"&lt;/span&gt;&lt;span class="o"&gt;}]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The new requirement to use the provided runtimes for Go Lambda functions poses some minor challenges (especially if you have to migrate already-running Go 1.x Lambda functions), but thankfully the &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-lambda-go-alpha-readme.html"&gt;aws-lambda-go-alpha&lt;/a&gt; module makes bundling much more straightforward than doing it manually.&lt;/p&gt;

&lt;p&gt;I hope you've gotten a good understanding of how to bundle Go Lambda functions by reading this. I know I learned a lot, and this will come in handy as I use Go for Lambdas going forward.&lt;/p&gt;




&lt;p&gt;Cover photo by &lt;a href="https://unsplash.com/@jessefotograaf"&gt;Jesse De Meulenaere&lt;/a&gt; on &lt;a href="https://unsplash.com"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cdk</category>
      <category>lambda</category>
      <category>go</category>
    </item>
    <item>
      <title>A Gentle Introduction to AWS Lambda</title>
      <dc:creator>Matt Bacchi</dc:creator>
      <pubDate>Wed, 02 Feb 2022 14:44:48 +0000</pubDate>
      <link>https://dev.to/aws-builders/a-gentle-introduction-to-aws-lambda-20eb</link>
      <guid>https://dev.to/aws-builders/a-gentle-introduction-to-aws-lambda-20eb</guid>
      <description>&lt;p&gt;You've probably heard of AWS Lambda and serverless by now. But what is Lambda all about? The short definition of AWS Lambda is a "Functions as a Service" (FaaS) technology. The longer and more complicated answer is that Lambda is a lightweight runtime that requires no infrastructure to be defined by the developer.&lt;/p&gt;

&lt;p&gt;FaaS allows developers to build software features quickly with less emphasis on the question of &lt;em&gt;where&lt;/em&gt; and &lt;em&gt;how&lt;/em&gt; they run. This allows for more focus on the business logic that users interact with. Functions as a Service is the most recent innovation in a long line of virtualization technologies. Compute virtualization started with physical servers, then moved to virtual machines (VMs), which grew into Infrastructure as a Service (IaaS). Platform as a Service (PaaS) came next, with companies like Heroku allowing developers to deploy without caring about infrastructure specifics. Now we're arriving at FaaS, which promises to abstract all these details away.  &lt;a href="https://dashbird.io/blog/origin-of-serverless/" rel="noopener noreferrer"&gt;Here&lt;/a&gt; are a &lt;a href="https://joshfechter.com/iaas-paas-saas/" rel="noopener noreferrer"&gt;couple&lt;/a&gt; of good descriptions of these origins and where we are as an industry today.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda vs. Serverless Terminology
&lt;/h2&gt;

&lt;p&gt;The terms serverless and AWS Lambda are often used interchangeably, but they aren't exactly the same thing. Serverless refers to a collection of tools that require no infrastructure configurations from the developer, including databases, queues, functions and gateways as a service. AWS Lambda is only the FaaS component in this list. That means there are many more products such as API gateways, message queueing systems, notification tools, and serverless databases that fill out the serverless landscape. In this blog post we'll only work with a couple of these, namely AWS Lambda and AWS API Gateway for our demonstration.&lt;/p&gt;

&lt;h2&gt;
  
  
  FaaS Platforms
&lt;/h2&gt;

&lt;p&gt;There are many &lt;a href="https://fauna.com/blog/comparison-faas-providers" rel="noopener noreferrer"&gt;FaaS providers&lt;/a&gt; in the industry.  From the big players like Google, IBM, and AWS to the smaller ones like the open source OpenWhisk and OpenFaaS, they all provide similar functionality (generally) with different developer experiences.&lt;/p&gt;

&lt;p&gt;There are also Edge computing FaaS providers, which instead of executing the code in one central data center, run the code in a data center nearest the user, known as the "Edge". The players in this area of serverless technology are companies like Cloudflare, Lambda@Edge (AWS), and Fastly.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Lambda in Plain English
&lt;/h2&gt;

&lt;p&gt;After reading the above definitions of serverless, Functions as a Service, and AWS Lambda, your understanding might still be cloudy (no pun intended). I'll attempt to explain it in simpler terms.&lt;/p&gt;

&lt;p&gt;The description on the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/welcome.html" rel="noopener noreferrer"&gt;AWS documentation website&lt;/a&gt; says "Lambda is a compute service that lets you run code without provisioning or managing servers." The essence of Lambda is that AWS built all the scaffolding to allow you and I to send them a single piece of code, which they will run for us.&lt;/p&gt;

&lt;p&gt;We don't need to build servers or container images in order to run this single piece of code. All we have to think about is writing the code and a small SAM template file that defines the requirements of the deployment, and then shipping it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is it Really a Single Function?
&lt;/h3&gt;

&lt;p&gt;In order to wrap your brain around this idea of functions as a service, think of a single method or function in a small program. The function is called by another function, and may return a result.&lt;/p&gt;

&lt;p&gt;Below you see a simple Python program in which there is a parent function (&lt;code&gt;main()&lt;/code&gt;) calling a child function (&lt;code&gt;childfunction()&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;childfunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inputstring&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;inputstring&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hello&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;world&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hi&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;childfunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hello&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;value: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output of this program looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;python hello.py
value: world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In its simplest form, AWS Lambda can be thought of as a single child function being called by a parent function. AWS owns the parent function, which executes your function (the child function) in its own virtual environment.&lt;/p&gt;

&lt;p&gt;Your objective during the rest of this blog post is to write, deploy and execute a single Lambda function.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's Write One Bite Sized Function
&lt;/h2&gt;

&lt;p&gt;For this example, we'll write a small function that does nothing particularly important, but demonstrates the mechanics of running a function in AWS Lambda.  You are probably familiar with shell environment variables. In your terminal you can print out the environment variables defined in your session, or even create new environment variables.&lt;/p&gt;

&lt;p&gt;The below terminal command prints out an environment variable showing the user name that you are logged in as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$USER&lt;/span&gt;
mbacchi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What Will the Function Do?
&lt;/h3&gt;

&lt;p&gt;The function we'll deploy will print out the default &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-runtime" rel="noopener noreferrer"&gt;runtime environment variables&lt;/a&gt; in the AWS Lambda execution environment.&lt;/p&gt;

&lt;p&gt;Similar to the terminal session above, our Lambda function can access the default environment variables such as &lt;code&gt;AWS_LAMBDA_FUNCTION_NAME&lt;/code&gt;, &lt;code&gt;AWS_LAMBDA_LOG_GROUP_NAME&lt;/code&gt;, or &lt;code&gt;TZ&lt;/code&gt;. But how do we see the output of &lt;code&gt;print&lt;/code&gt; statements in a Lambda? Anything that gets printed to STDOUT &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/python-logging.html" rel="noopener noreferrer"&gt;will be logged to CloudWatch Logs&lt;/a&gt; using the log group name of the function.&lt;/p&gt;
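&lt;p&gt;A minimal sketch of such a handler might look like this (the function in the repository may differ in details); anything it prints ends up in CloudWatch Logs:&lt;/p&gt;

```python
import json
import os

def lambda_handler(event, context):
    """Print the Lambda runtime environment variables.

    print() output is written to the function's CloudWatch log group.
    """
    env = {
        key: value
        for key, value in os.environ.items()
        if key.startswith("AWS_LAMBDA") or key == "TZ"
    }
    for key, value in env.items():
        print(f"{key}={value}")
    return {"statusCode": 200, "body": json.dumps(env)}
```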

&lt;h2&gt;
  
  
  How Does Lambda Work?
&lt;/h2&gt;

&lt;p&gt;The concepts that you'll need to understand are described in the AWS &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-concepts.html" rel="noopener noreferrer"&gt;Lambda documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here are some of the most important details you'll need to write your Lambda functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Invocation:&lt;/strong&gt; The function is &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-invocation.html" rel="noopener noreferrer"&gt;invoked&lt;/a&gt; by one of the triggers that AWS provides. It can also be triggered by cron like &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/services-cloudwatchevents-expressions.html" rel="noopener noreferrer"&gt;scheduling events&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Entrypoint:&lt;/strong&gt; Your Lambda function needs to implement a &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/python-handler.html" rel="noopener noreferrer"&gt;handler&lt;/a&gt; which is the method (or function) that gets called by AWS when triggered.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment:&lt;/strong&gt; The handler is passed an &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/python-handler.html#python-handler-how" rel="noopener noreferrer"&gt;event object and context object&lt;/a&gt; from AWS that provides your function both data to be processed and context (information) about how it was called and other clues that you can use in your code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response:&lt;/strong&gt; In certain cases you will want to synchronously &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/python-handler.html#python-handler-return" rel="noopener noreferrer"&gt;return a value&lt;/a&gt; from your function, but that is optional. If you are writing an API backend you will return a value through API Gateway. If instead you are performing some scheduled data processing you may not return a value.&lt;/li&gt;
&lt;/ul&gt;
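&lt;p&gt;Putting those pieces together, a Python handler that reads the event, uses the context, and returns a synchronous response can be sketched as follows (the event shape here is hypothetical):&lt;/p&gt;

```python
import json

def lambda_handler(event, context):
    """Handler: receives an event dict and a context object from the Lambda runtime."""
    name = event.get("name", "world")  # data to process arrives in the event
    # the context carries metadata about the invocation; fall back when run locally
    request_id = getattr(context, "aws_request_id", "local")
    # synchronous response, e.g. what an API Gateway integration would return
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}", "requestId": request_id}),
    }
```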

&lt;h2&gt;
  
  
  Background on Associated Serverless Components
&lt;/h2&gt;

&lt;p&gt;We mentioned earlier how serverless isn't just AWS Lambda, and in this project we'll use API Gateway and CloudWatch Logs, but only to enable us to show how our Lambda function executes.&lt;/p&gt;

&lt;p&gt;AWS &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html" rel="noopener noreferrer"&gt;API Gateway&lt;/a&gt; is, as the name suggests, an API gateway product. But it allows for more than just API calls: it supports generic HTTP/HTTPS traffic, REST APIs, and WebSocket APIs. You can even host a standard web server with API Gateway and Lambda; you aren't limited to APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html" rel="noopener noreferrer"&gt;CloudWatch Logs&lt;/a&gt; is a logging product from AWS that allows all other AWS products to log their activity to a central location. In our example we'll be looking at the output of our Lambda function to see the environment variables available in the Lambda itself during runtime.&lt;/p&gt;

&lt;p&gt;We're not going to cover these in any more detail because they're not the main focus of this post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sample Lambda Function
&lt;/h2&gt;

&lt;p&gt;For this example deployment, we'll use &lt;a href="https://github.com/mbacchi/lambda-env-vars" rel="noopener noreferrer"&gt;a GitHub repository that has our AWS Lambda function&lt;/a&gt; defined. The Lambda function in this repo was written with the intention of being used for demonstration purposes, and as a simple function that can be deployed when testing out CI/CD or the many serverless deployment tools available today (such as &lt;a href="https://github.com/serverless/serverless" rel="noopener noreferrer"&gt;Serverless Framework&lt;/a&gt;, &lt;a href="https://github.com/aws/aws-cdk" rel="noopener noreferrer"&gt;Cloud Development Kit (CDK)&lt;/a&gt; etc.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying a Lambda Function with AWS SAM
&lt;/h2&gt;

&lt;p&gt;AWS SAM is the &lt;a href="https://aws.amazon.com/serverless/sam/" rel="noopener noreferrer"&gt;Serverless Application Model&lt;/a&gt;, which allows you to &lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-specification-template-anatomy.html" rel="noopener noreferrer"&gt;write a template&lt;/a&gt; describing the Lambda function, and any related resources you need to run your Lambda. The SAM template uses the YAML markup language and is based on AWS Cloudformation. After reading the template, SAM performs the deployment for you, and you can watch Cloudformation events to identify what the outcome was.&lt;/p&gt;

&lt;p&gt;We are going to use the &lt;a href="https://github.com/mbacchi/lambda-env-vars/blob/master/template.yaml" rel="noopener noreferrer"&gt;template&lt;/a&gt; in the above repository to deploy the function. I won't go into detail breaking down all of the items in the template for now. That information can be found in the documentation above or in other blogs. But I do want to highlight the two resources used here, the &lt;a href="https://github.com/mbacchi/lambda-env-vars/blob/main/template.yaml#L14" rel="noopener noreferrer"&gt;LambdaEnvVarsFunction&lt;/a&gt; which is of the type &lt;code&gt;AWS::Serverless::Function&lt;/code&gt; and the &lt;a href="https://github.com/mbacchi/lambda-env-vars/blob/main/template.yaml#L26" rel="noopener noreferrer"&gt;LambdaLogGroup&lt;/a&gt;, which is the definition of our &lt;code&gt;AWS::Logs::LogGroup&lt;/code&gt; CloudWatch log group, where our Lambda output will be collected.&lt;/p&gt;
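&lt;p&gt;For orientation, a stripped-down SAM template pairing a function with its log group might look like the following. The handler path, runtime, and retention period here are illustrative; the template in the repository is the authoritative version:&lt;/p&gt;

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  LambdaEnvVarsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler   # module.function implementing the handler
      Runtime: python3.8
      CodeUri: src/

  LambdaLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub /aws/lambda/${LambdaEnvVarsFunction}
      RetentionInDays: 7
```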

&lt;h2&gt;
  
  
  Install SAM CLI
&lt;/h2&gt;

&lt;p&gt;You'll need to install a few components before running SAM. I would recommend running this in a Python virtual environment (venv), and I will provide basic steps here to set that up.&lt;/p&gt;

&lt;p&gt;From your terminal run these commands (this is on a Linux system):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Step 1: Make sure you have Python 3.8 (you may have to install this first, on Fedora Linux the command would be &lt;code&gt;sudo dnf install python3.8&lt;/code&gt;)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nv"&gt;$ &lt;/span&gt;which python3.8
  /usr/bin/python3.8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Step 2: Create a Python 3.8 virtual environment
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nv"&gt;$ &lt;/span&gt;python3.8 &lt;span class="nt"&gt;-m&lt;/span&gt; venv venv38
  &lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls &lt;/span&gt;venv38
  bin  include  lib  lib64  pyvenv.cfg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Step 3: Activate the virtual environment
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; venv38/bin/activate
  &lt;span class="nv"&gt;$ &lt;/span&gt;which python
  ~/FAKE_PATH/lambda-env-vars7/venv38/bin/python
  &lt;span class="nv"&gt;$ &lt;/span&gt;which pip
  ~/FAKE_PATH/lambda-env-vars7/venv38/bin/pip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Step 4: Install the SAM CLI &lt;strong&gt;NOTE: The SAM CLI requires Docker for local invocation; pip does not install Docker, so install it separately if you don't already have it&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nv"&gt;$ &lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;aws-sam-cli
  Collecting aws-sam-cli
    Using cached aws_sam_cli-1.37.0-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;5.0 MB&lt;span class="o"&gt;)&lt;/span&gt;
  Collecting &lt;span class="nv"&gt;boto3&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;1.&lt;span class="k"&gt;*&lt;/span&gt;,&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;1.18.32
    Using cached boto3-1.20.46-py3-none-any.whl &lt;span class="o"&gt;(&lt;/span&gt;131 kB&lt;span class="o"&gt;)&lt;/span&gt;
  ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can move on to the next step, where we run the Lambda function locally.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running Your Lambda Function with SAM Local
&lt;/h2&gt;

&lt;p&gt;Before we deploy to AWS itself, we want to test our code by invoking the Lambda function locally. What is this magic, you ask? SAM provides a command called &lt;code&gt;sam local invoke&lt;/code&gt;, which runs your function locally in a Docker container so you can verify it behaves as expected before deploying to AWS.&lt;/p&gt;
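
&lt;p&gt;For reference, the function being invoked (&lt;code&gt;app.lambda_handler&lt;/code&gt;) does little more than echo the runtime's environment variables. A minimal sketch, reconstructed from the invocation output rather than the repository's exact code, looks like this:&lt;/p&gt;

```python
import json
import os


def lambda_handler(event, context):
    """Return the Lambda runtime's environment variables as a JSON body."""
    env = dict(os.environ)
    # This print statement is what shows up in CloudWatch Logs later
    print(f"Hello there this is the body: {env}")
    return {"statusCode": 200, "body": json.dumps(env)}
```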

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; We assume you haven't left the terminal session in step 4 above. If you have, activate your Python virtual environment as in step 3 above.&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;sam local invoke&lt;/code&gt; as &lt;a href="https://github.com/mbacchi/lambda-env-vars#testing-using-sam-local" rel="noopener noreferrer"&gt;described in the Git repository&lt;/a&gt; (this uses the &lt;a href="https://github.com/mbacchi/lambda-env-vars/blob/main/events/event.json" rel="noopener noreferrer"&gt;test events&lt;/a&gt; supplied with the Lambda function in the repository):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;sam &lt;span class="nb"&gt;local &lt;/span&gt;invoke LambdaEnvVarsFunction &lt;span class="nt"&gt;--event&lt;/span&gt; events/event.json
Invoking app.lambda_handler &lt;span class="o"&gt;(&lt;/span&gt;python3.8&lt;span class="o"&gt;)&lt;/span&gt;
Skip pulling image and use &lt;span class="nb"&gt;local &lt;/span&gt;one: public.ecr.aws/sam/emulation-python3.8:rapid-1.37.0-x86_64.

Mounting /home/mbacchi/data/repos/mbacchi/lambda-env-vars7/.aws-sam/build/LambdaEnvVarsFunction as /var/task:ro,delegated inside runtime container
START RequestId: f97086cf-e8b6-4d9f-b2bc-b627ca5961e6 Version: &lt;span class="nv"&gt;$LATEST&lt;/span&gt;
Hello there this is the body: &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s1"&gt;'_HANDLER'&lt;/span&gt;: &lt;span class="s1"&gt;'app.lambda_handler'&lt;/span&gt;, &lt;span class="s1"&gt;'AWS_REGION'&lt;/span&gt;: &lt;span class="s1"&gt;'us-east-1'&lt;/span&gt;, &lt;span class="s1"&gt;'AWS_EXECUTION_ENV'&lt;/span&gt;: &lt;span class="s1"&gt;'AWS_Lambda_python3.8'&lt;/span&gt;, &lt;span class="s1"&gt;'AWS_LAMBDA_FUNCTION_NAME'&lt;/span&gt;: &lt;span class="s1"&gt;'LambdaEnvVarsFunction'&lt;/span&gt;, &lt;span class="s1"&gt;'AWS_LAMBDA_FUNCTION_MEMORY_SIZE'&lt;/span&gt;: &lt;span class="s1"&gt;'128'&lt;/span&gt;, &lt;span class="s1"&gt;'AWS_LAMBDA_FUNCTION_VERSION'&lt;/span&gt;: &lt;span class="s1"&gt;'$LATEST'&lt;/span&gt;, &lt;span class="s1"&gt;'AWS_LAMBDA_LOG_GROUP_NAME'&lt;/span&gt;: &lt;span class="s1"&gt;'aws/lambda/LambdaEnvVarsFunction'&lt;/span&gt;, &lt;span class="s1"&gt;'AWS_LAMBDA_LOG_STREAM_NAME'&lt;/span&gt;: &lt;span class="s1"&gt;'$LATEST'&lt;/span&gt;, &lt;span class="s1"&gt;'LANG'&lt;/span&gt;: &lt;span class="s1"&gt;'en_US.UTF-8'&lt;/span&gt;, &lt;span class="s1"&gt;'TZ'&lt;/span&gt;: &lt;span class="s1"&gt;':/etc/localtime'&lt;/span&gt;, &lt;span class="s1"&gt;'LAMBDA_TASK_ROOT'&lt;/span&gt;: &lt;span class="s1"&gt;'/var/task'&lt;/span&gt;, &lt;span class="s1"&gt;'LAMBDA_RUNTIME_DIR'&lt;/span&gt;: &lt;span class="s1"&gt;'/var/runtime'&lt;/span&gt;, &lt;span class="s1"&gt;'PATH'&lt;/span&gt;: &lt;span class="s1"&gt;'/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin'&lt;/span&gt;, &lt;span class="s1"&gt;'LD_LIBRARY_PATH'&lt;/span&gt;: &lt;span class="s1"&gt;'/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib'&lt;/span&gt;, &lt;span class="s1"&gt;'PYTHONPATH'&lt;/span&gt;: &lt;span class="s1"&gt;'/var/runtime'&lt;/span&gt;, &lt;span class="s1"&gt;'AWS_LAMBDA_RUNTIME_API'&lt;/span&gt;: &lt;span class="s1"&gt;'127.0.0.1:9001'&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
END RequestId: f97086cf-e8b6-4d9f-b2bc-b627ca5961e6
REPORT RequestId: f97086cf-e8b6-4d9f-b2bc-b627ca5961e6  Init Duration: 0.11 ms  Duration: 100.72 ms Billed Duration: 101 ms Memory Size: 128 MB Max Memory Used: 128 MB 
&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"statusCode"&lt;/span&gt;: 200, &lt;span class="s2"&gt;"body"&lt;/span&gt;: &lt;span class="s2"&gt;"{&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;_HANDLER&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;app.lambda_handler&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;AWS_REGION&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;us-east-1&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;AWS_EXECUTION_ENV&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;AWS_Lambda_python3.8&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;AWS_LAMBDA_FUNCTION_NAME&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;LambdaEnvVarsFunction&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;AWS_LAMBDA_FUNCTION_MEMORY_SIZE&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;128&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;AWS_LAMBDA_FUNCTION_VERSION&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: 
&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$LATEST&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;AWS_LAMBDA_LOG_GROUP_NAME&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;aws/lambda/LambdaEnvVarsFunction&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;AWS_LAMBDA_LOG_STREAM_NAME&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$LATEST&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;LANG&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;en_US.UTF-8&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;TZ&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:/etc/localtime&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;LAMBDA_TASK_ROOT&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/var/task&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;LAMBDA_RUNTIME_DIR&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span 
class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/var/runtime&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;PATH&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;LD_LIBRARY_PATH&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;PYTHONPATH&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/var/runtime&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;AWS_LAMBDA_RUNTIME_API&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;127.0.0.1:9001&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;}"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy to AWS
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; Before deploying you need to set up your &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html" rel="noopener noreferrer"&gt;AWS credentials&lt;/a&gt; properly.&lt;/p&gt;

&lt;p&gt;In order to deploy the Lambda function, we'll follow the steps in the &lt;a href="https://github.com/mbacchi/lambda-env-vars#deploying-to-aws" rel="noopener noreferrer"&gt;lambda-env-vars&lt;/a&gt; repository:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Step 1: Run &lt;code&gt;sam build&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nv"&gt;$ &lt;/span&gt;sam build
  Building codeuri: /home/mbacchi/data/repos/mbacchi/lambda-env-vars7/env_vars runtime: python3.8 metadata: &lt;span class="o"&gt;{}&lt;/span&gt; architecture: x86_64 functions: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'LambdaEnvVarsFunction'&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
  Running PythonPipBuilder:ResolveDependencies
  Running PythonPipBuilder:CopySource

  Build Succeeded

  Built Artifacts  : .aws-sam/build
  Built Template   : .aws-sam/build/template.yaml

  Commands you can use next
  &lt;span class="o"&gt;=========================&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Invoke Function: sam &lt;span class="nb"&gt;local &lt;/span&gt;invoke
  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Test Function &lt;span class="k"&gt;in &lt;/span&gt;the Cloud: sam &lt;span class="nb"&gt;sync&lt;/span&gt; &lt;span class="nt"&gt;--stack-name&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;stack-name&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--watch&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Deploy: sam deploy &lt;span class="nt"&gt;--guided&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Step 2: Run &lt;code&gt;sam package&lt;/code&gt; (this creates an S3 bucket, which you'll need to remove later)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nv"&gt;$ &lt;/span&gt;sam package &lt;span class="nt"&gt;--output-template-file&lt;/span&gt; packaged.yaml &lt;span class="nt"&gt;--resolve-s3&lt;/span&gt;
    Creating the required resources...
    Successfully created!

      Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-61z4v277mwy0
      A different default S3 bucket can be &lt;span class="nb"&gt;set &lt;/span&gt;&lt;span class="k"&gt;in &lt;/span&gt;samconfig.toml
      Or by specifying &lt;span class="nt"&gt;--s3-bucket&lt;/span&gt; explicitly.
  Uploading to eb6c7534d67a5b044653e48f2802a548  452695 / 452695  &lt;span class="o"&gt;(&lt;/span&gt;100.00%&lt;span class="o"&gt;)&lt;/span&gt;

  Successfully packaged artifacts and wrote output template to file packaged.yaml.
  Execute the following &lt;span class="nb"&gt;command &lt;/span&gt;to deploy the packaged template
  sam deploy &lt;span class="nt"&gt;--template-file&lt;/span&gt; /home/mbacchi/data/repos/mbacchi/lambda-env-vars7/packaged.yaml &lt;span class="nt"&gt;--stack-name&lt;/span&gt; &amp;lt;YOUR STACK NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Step 3: Run &lt;code&gt;sam deploy&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nv"&gt;$ &lt;/span&gt;sam deploy &lt;span class="nt"&gt;--template-file&lt;/span&gt; packaged.yaml &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-2 &lt;span class="nt"&gt;--capabilities&lt;/span&gt; CAPABILITY_IAM &lt;span class="nt"&gt;--stack-name&lt;/span&gt; lambda-env-vars &lt;span class="nt"&gt;--resolve-s3&lt;/span&gt;

      Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-61z4v277mwy0
      A different default S3 bucket can be &lt;span class="nb"&gt;set &lt;/span&gt;&lt;span class="k"&gt;in &lt;/span&gt;samconfig.toml
      Or by specifying &lt;span class="nt"&gt;--s3-bucket&lt;/span&gt; explicitly.

    Deploying with following values
    &lt;span class="o"&gt;===============================&lt;/span&gt;
    Stack name                   : lambda-env-vars
    Region                       : us-east-2
    Confirm changeset            : False
    Disable rollback             : False
    Deployment s3 bucket         : aws-sam-cli-managed-default-samclisourcebucket-61z4v277mwy0
    Capabilities                 : &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"CAPABILITY_IAM"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
    Parameter overrides          : &lt;span class="o"&gt;{}&lt;/span&gt;
    Signing Profiles             : &lt;span class="o"&gt;{}&lt;/span&gt;

  Initiating deployment
  &lt;span class="o"&gt;=====================&lt;/span&gt;
  Uploading to 77524ed205c6d82d5487433831c24c0f.template  1635 / 1635  &lt;span class="o"&gt;(&lt;/span&gt;100.00%&lt;span class="o"&gt;)&lt;/span&gt;

  Waiting &lt;span class="k"&gt;for &lt;/span&gt;changeset to be created..

  CloudFormation stack changeset
  &lt;span class="nt"&gt;-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
  Operation                                                  LogicalResourceId                                          ResourceType                                               Replacement                                              
  &lt;span class="nt"&gt;-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
  + Add                                                      LambdaEnvVarsFunctionLambdaEnvVarsPermissionProd           AWS::Lambda::Permission                                    N/A                                                      
  + Add                                                      LambdaEnvVarsFunctionRole                                  AWS::IAM::Role                                             N/A                                                      
  + Add                                                      LambdaEnvVarsFunction                                      AWS::Lambda::Function                                      N/A                                                      
  + Add                                                      LambdaLogGroup                                             AWS::Logs::LogGroup                                        N/A                                                      
  + Add                                                      ServerlessRestApiDeploymentcaa6ada684                      AWS::ApiGateway::Deployment                                N/A                                                      
  + Add                                                      ServerlessRestApiProdStage                                 AWS::ApiGateway::Stage                                     N/A                                                      
  + Add                                                      ServerlessRestApi                                          AWS::ApiGateway::RestApi                                   N/A                                                      
  &lt;span class="nt"&gt;-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;

  Changeset created successfully. arn:aws:cloudformation:us-east-2:592431548397:changeSet/samcli-deploy1643607514/85a31faf-1358-47ff-9eeb-4428020bb482


  2022-01-30 22:38:46 - Waiting &lt;span class="k"&gt;for &lt;/span&gt;stack create/update to &lt;span class="nb"&gt;complete

  &lt;/span&gt;CloudFormation events from stack operations
  &lt;span class="nt"&gt;-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
  ResourceStatus                                             ResourceType                                               LogicalResourceId                                          ResourceStatusReason                                     
  &lt;span class="nt"&gt;-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
  CREATE_IN_PROGRESS                                         AWS::IAM::Role                                             LambdaEnvVarsFunctionRole                                  -                                                        
  CREATE_IN_PROGRESS                                         AWS::IAM::Role                                             LambdaEnvVarsFunctionRole                                  Resource creation Initiated                              
  CREATE_COMPLETE                                            AWS::IAM::Role                                             LambdaEnvVarsFunctionRole                                  -                                                        
  CREATE_IN_PROGRESS                                         AWS::Lambda::Function                                      LambdaEnvVarsFunction                                      -                                                        
  CREATE_IN_PROGRESS                                         AWS::Lambda::Function                                      LambdaEnvVarsFunction                                      Resource creation Initiated                              
  CREATE_COMPLETE                                            AWS::Lambda::Function                                      LambdaEnvVarsFunction                                      -                                                        
  CREATE_IN_PROGRESS                                         AWS::Logs::LogGroup                                        LambdaLogGroup                                             -                                                        
  CREATE_IN_PROGRESS                                         AWS::ApiGateway::RestApi                                   ServerlessRestApi                                          -                                                        
  CREATE_IN_PROGRESS                                         AWS::ApiGateway::RestApi                                   ServerlessRestApi                                          Resource creation Initiated                              
  CREATE_COMPLETE                                            AWS::ApiGateway::RestApi                                   ServerlessRestApi                                          -                                                        
  CREATE_IN_PROGRESS                                         AWS::Logs::LogGroup                                        LambdaLogGroup                                             Resource creation Initiated                              
  CREATE_IN_PROGRESS                                         AWS::ApiGateway::Deployment                                ServerlessRestApiDeploymentcaa6ada684                      -                                                        
  CREATE_IN_PROGRESS                                         AWS::Lambda::Permission                                    LambdaEnvVarsFunctionLambdaEnvVarsPermissionProd           Resource creation Initiated                              
  CREATE_IN_PROGRESS                                         AWS::Lambda::Permission                                    LambdaEnvVarsFunctionLambdaEnvVarsPermissionProd           -                                                        
  CREATE_COMPLETE                                            AWS::Logs::LogGroup                                        LambdaLogGroup                                             -                                                        
  CREATE_IN_PROGRESS                                         AWS::ApiGateway::Deployment                                ServerlessRestApiDeploymentcaa6ada684                      Resource creation Initiated                              
  CREATE_COMPLETE                                            AWS::ApiGateway::Deployment                                ServerlessRestApiDeploymentcaa6ada684                      -                                                        
  CREATE_IN_PROGRESS                                         AWS::ApiGateway::Stage                                     ServerlessRestApiProdStage                                 -                                                        
  CREATE_IN_PROGRESS                                         AWS::ApiGateway::Stage                                     ServerlessRestApiProdStage                                 Resource creation Initiated                              
  CREATE_COMPLETE                                            AWS::ApiGateway::Stage                                     ServerlessRestApiProdStage                                 -                                                        
  CREATE_COMPLETE                                            AWS::Lambda::Permission                                    LambdaEnvVarsFunctionLambdaEnvVarsPermissionProd           -                                                        
  CREATE_COMPLETE                                            AWS::CloudFormation::Stack                                 lambda-env-vars                                            -                                                        
  &lt;span class="nt"&gt;-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;

  CloudFormation outputs from deployed stack
  &lt;span class="nt"&gt;-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
  Outputs                                                                                                                                                                                                                                   
  &lt;span class="nt"&gt;-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
  Key                 LambdaEnvVarsApi                                                                                                                                                                                                      
  Description         API Gateway endpoint URL &lt;span class="k"&gt;for &lt;/span&gt;Prod stage &lt;span class="k"&gt;for &lt;/span&gt;Lambda Env Vars &lt;span class="k"&gt;function                                                                                                                                                  
  &lt;/span&gt;Value               https://e4cusxvy7f.execute-api.us-east-2.amazonaws.com/Prod/env_vars/                                                                                                                                                 

  Key                 LambdaLogGroup                                                                                                                                                                                                        
  Description         Cloudwatch Log Group ARN                                                                                                                                                                                              
  Value               /aws/lambda/lambda-env-vars-LambdaEnvVarsFunction-6r0qwkWGeZSp                                                                                                                                                        

  Key                 LambdaEnvVarsFunctionIamRole                                                                                                                                                                                          
  Description         Implicit IAM Role created &lt;span class="k"&gt;for &lt;/span&gt;Lambda Env Vars &lt;span class="k"&gt;function                                                                                                                                                                
  &lt;/span&gt;Value               arn:aws:iam::592431548397:role/lambda-env-vars-LambdaEnvVarsFunctionRole-L05B77IT7WK0                                                                                                                                 

  Key                 LambdaEnvVarsFunction                                                                                                                                                                                                 
  Description         Lambda Env Vars Function ARN                                                                                                                                                                                          
  Value               arn:aws:lambda:us-east-2:592431548397:function:lambda-env-vars-LambdaEnvVarsFunction-6r0qwkWGeZSp                                                                                                                     
  &lt;span class="nt"&gt;-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;

  Successfully created/updated stack - lambda-env-vars &lt;span class="k"&gt;in &lt;/span&gt;us-east-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;sam deploy&lt;/code&gt; command has deployed the Lambda function to AWS and returned the API Gateway endpoint, which we can use to invoke the Lambda with an API call. We need the API Gateway URL from the above output; look for the &lt;code&gt;LambdaEnvVarsApi&lt;/code&gt; key. In our example, this is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://e4cusxvy7f.execute-api.us-east-2.amazonaws.com/Prod/env_vars/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  GET Request to Invoke the Lambda Function
&lt;/h2&gt;

&lt;p&gt;Now that we have the API Gateway URL (also known as the endpoint), we can use a tool like curl to perform an HTTP GET request against it. Because we defined the &lt;code&gt;Events&lt;/code&gt; &lt;a href="https://github.com/mbacchi/lambda-env-vars/blob/main/template.yaml#L20-L25" rel="noopener noreferrer"&gt;stanza&lt;/a&gt; with a &lt;code&gt;Type&lt;/code&gt; of &lt;code&gt;Api&lt;/code&gt; and a &lt;code&gt;Method&lt;/code&gt; of &lt;code&gt;get&lt;/code&gt;, SAM has deployed an API Gateway resource to trigger our function. (A more detailed discussion of this implicit API Gateway resource can be found &lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-api.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;We will use &lt;code&gt;curl&lt;/code&gt; to perform a &lt;code&gt;GET&lt;/code&gt; request as below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; GET https://e4cusxvy7f.execute-api.us-east-2.amazonaws.com/Prod/env_vars/
&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"_HANDLER"&lt;/span&gt;: &lt;span class="s2"&gt;"app.lambda_handler"&lt;/span&gt;, &lt;span class="s2"&gt;"AWS_REGION"&lt;/span&gt;: &lt;span class="s2"&gt;"us-east-2"&lt;/span&gt;, &lt;span class="s2"&gt;"AWS_EXECUTION_ENV"&lt;/span&gt;: &lt;span class="s2"&gt;"AWS_Lambda_python3.8"&lt;/span&gt;, &lt;span class="s2"&gt;"AWS_LAMBDA_FUNCTION_NAME"&lt;/span&gt;: &lt;span class="s2"&gt;"lambda-env-vars-LambdaEnvVarsFunction-6r0qwkWGeZSp"&lt;/span&gt;, &lt;span class="s2"&gt;"AWS_LAMBDA_FUNCTION_MEMORY_SIZE"&lt;/span&gt;: &lt;span class="s2"&gt;"128"&lt;/span&gt;, &lt;span class="s2"&gt;"AWS_LAMBDA_FUNCTION_VERSION"&lt;/span&gt;: &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LATEST&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;, &lt;span class="s2"&gt;"AWS_LAMBDA_LOG_GROUP_NAME"&lt;/span&gt;: &lt;span class="s2"&gt;"/aws/lambda/lambda-env-vars-LambdaEnvVarsFunction-6r0qwkWGeZSp"&lt;/span&gt;, &lt;span class="s2"&gt;"AWS_LAMBDA_LOG_STREAM_NAME"&lt;/span&gt;: &lt;span class="s2"&gt;"2022/01/31/[&lt;/span&gt;&lt;span class="nv"&gt;$LATEST&lt;/span&gt;&lt;span class="s2"&gt;]c2b9b9c953154c8daa7c9b2c0637b70e"&lt;/span&gt;, &lt;span class="s2"&gt;"LANG"&lt;/span&gt;: &lt;span class="s2"&gt;"en_US.UTF-8"&lt;/span&gt;, &lt;span class="s2"&gt;"TZ"&lt;/span&gt;: &lt;span class="s2"&gt;":UTC"&lt;/span&gt;, &lt;span class="s2"&gt;"LAMBDA_TASK_ROOT"&lt;/span&gt;: &lt;span class="s2"&gt;"/var/task"&lt;/span&gt;, &lt;span class="s2"&gt;"LAMBDA_RUNTIME_DIR"&lt;/span&gt;: &lt;span class="s2"&gt;"/var/runtime"&lt;/span&gt;, &lt;span class="s2"&gt;"PATH"&lt;/span&gt;: &lt;span class="s2"&gt;"/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin"&lt;/span&gt;, &lt;span class="s2"&gt;"LD_LIBRARY_PATH"&lt;/span&gt;: &lt;span class="s2"&gt;"/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib"&lt;/span&gt;, &lt;span class="s2"&gt;"PYTHONPATH"&lt;/span&gt;: &lt;span 
class="s2"&gt;"/var/runtime"&lt;/span&gt;, &lt;span class="s2"&gt;"AWS_LAMBDA_RUNTIME_API"&lt;/span&gt;: &lt;span class="s2"&gt;"127.0.0.1:9001"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On the second line, we can see the response JSON, which is what the Lambda function returned to us via the API Gateway. And we see all the environment variables we expected, just as in the &lt;code&gt;sam local invoke&lt;/code&gt; command earlier.&lt;/p&gt;

&lt;p&gt;But how can we observe the output of the &lt;a href="https://github.com/mbacchi/lambda-env-vars/blob/main/env_vars/app.py#L50" rel="noopener noreferrer"&gt;print statement at line 50&lt;/a&gt;?&lt;/p&gt;

&lt;p&gt;To do that we look at the CloudWatch log output. Browse to your AWS console, and search for the Cloudwatch log group name that was in the &lt;code&gt;Outputs&lt;/code&gt; section of the &lt;code&gt;sam deploy&lt;/code&gt; command output.&lt;/p&gt;

&lt;p&gt;In our case the LambdaLogGroup name was: &lt;code&gt;/aws/lambda/lambda-env-vars-LambdaEnvVarsFunction-6r0qwkWGeZSp&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;When I browse there in CloudWatch, I see the "Hello there" line in the output, which was sent to CloudWatch because we performed a &lt;code&gt;print&lt;/code&gt; to standard output within the Lambda function code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbacchi.org%2Fimages%2Fcloudwatch-log-output.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbacchi.org%2Fimages%2Fcloudwatch-log-output.png" alt="CloudWatch Log Output"&gt;&lt;/a&gt;&lt;/p&gt;
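The mechanics behind that log line can be sketched with a stripped-down handler (a hypothetical simplification of the tutorial's &lt;code&gt;app.py&lt;/code&gt;, not the actual source):

```python
import json
import os

def lambda_handler(event, context):
    # Anything written to stdout inside a Lambda function is captured by
    # the runtime and forwarded to the function's CloudWatch log stream.
    print("Hello there")

    # Return the runtime's environment variables as the response body,
    # which is what the curl command above received via API Gateway.
    return {
        "statusCode": 200,
        "body": json.dumps(dict(os.environ)),
    }
```

Invoking the handler exercises exactly this path: the `print` lands in the logs, and the return value becomes the HTTP response.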

&lt;h2&gt;
  
  
  What Just Occurred? What's Next?
&lt;/h2&gt;

&lt;p&gt;Congratulations, you just deployed and invoked your first Lambda function.&lt;/p&gt;

&lt;p&gt;Take some time to examine the output of the function in CloudWatch Logs, as well as what was returned during your &lt;code&gt;curl&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;Challenge 1: If you've heard of the &lt;a href="https://www.postman.com/" rel="noopener noreferrer"&gt;Postman&lt;/a&gt; tool, you can also call the API endpoint from there. Give it a shot.&lt;/p&gt;

&lt;p&gt;Challenge 2: You can display the HTTP headers when calling the API endpoint. How would you see the headers with the &lt;code&gt;curl&lt;/code&gt; command? How would you see them with Postman?&lt;/p&gt;

&lt;p&gt;Challenge 3: Read more about Lambda functions, SAM, the Serverless Framework, and the CDK. Build your own Lambda and deploy it. Let me know how it goes on Twitter &lt;a href="https://twitter.com/fshwsprr" rel="noopener noreferrer"&gt;@fshwsprr&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean Up AWS Resources
&lt;/h2&gt;

&lt;p&gt;When you are done playing around with this Lambda function, run these commands to remove your resources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;aws cloudformation delete-stack --stack-name lambda-env-vars --region us-east-2&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aws s3 rb s3://aws-sam-cli-managed-default-samclisourcebucket-61z4v277mwy0 --force&lt;/code&gt; &lt;strong&gt;NOTE:&lt;/strong&gt; This requires you to look at the output of your &lt;code&gt;sam package&lt;/code&gt; or &lt;code&gt;sam deploy&lt;/code&gt; command above to get the name of the S3 bucket to remove!&lt;/li&gt;
&lt;li&gt;Finally, you can deactivate your python virtual environment with the command: &lt;code&gt;deactivate&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;If you've read this far, the important takeaways about AWS Lambda, in my opinion, are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The concept that Lambda functions are only one part of the cloud serverless landscape.&lt;/li&gt;
&lt;li&gt;Understanding invocation options and how your single function gets called by AWS.
&lt;/li&gt;
&lt;li&gt;The fact that you can locally invoke the function in SAM Local before you deploy to AWS.&lt;/li&gt;
&lt;li&gt;Anything you print inside your function will be written to CloudWatch Logs (assuming you created a Log Group for your Lambda).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks for reading! Enjoy your serverless and Lambda journey!&lt;/p&gt;

&lt;p&gt;Cover photo by &lt;a href="https://unsplash.com/@chuklanov" rel="noopener noreferrer"&gt;Avel Chuklanov&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/learn" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>serverless</category>
      <category>python</category>
    </item>
    <item>
      <title>How to Avoid CIDR Conflicts in AWS Sagemaker Notebooks</title>
      <dc:creator>Matt Bacchi</dc:creator>
      <pubDate>Thu, 13 Jan 2022 15:06:33 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-avoid-cidr-conflicts-in-aws-sagemaker-notebooks-4kaf</link>
      <guid>https://dev.to/aws-builders/how-to-avoid-cidr-conflicts-in-aws-sagemaker-notebooks-4kaf</guid>
      <description>&lt;p&gt;Networking can sometimes be quite complicated. Despite the oft-repeated joke that "It's always DNS", sometimes your problem is even more difficult to diagnose than DNS.&lt;/p&gt;

&lt;p&gt;According to Wikipedia, Classless Inter-Domain Routing (or &lt;a href="https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing"&gt;CIDR&lt;/a&gt;) "is the method for allocating IP addresses and for IP routing" on the internet and on private networks. If two networks' CIDR ranges conflict, it can cause headaches that make DNS problems look like child's play.&lt;/p&gt;

&lt;p&gt;This is a story about how I unknowingly created a CIDR conflict, and I hope it will be useful to help you avoid CIDR conflicts in Sagemaker Notebooks in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS Sagemaker
&lt;/h2&gt;

&lt;p&gt;AWS Sagemaker is a popular Machine Learning platform that provides preconfigured environments which allow you to begin training ML models quickly.&lt;/p&gt;

&lt;p&gt;Sagemaker Notebooks provide the platform to build and train your models. Under the covers an AWS Sagemaker Notebook is an EC2 instance running a Jupyter notebook packaged with numerous libraries and algorithms. This saves users from having to configure compute, storage, and libraries themselves.&lt;/p&gt;

&lt;p&gt;A benefit of Sagemaker Notebooks is that you can run generic Python commands alongside your more complex ML code within the Jupyter notebooks. For example, while testing out some connectivity issues recently I was able to use the ubiquitous Python Requests library to perform an HTTP GET request within the Sagemaker Notebook to verify I was able to communicate with a webserver I launched in another AWS account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a VPC
&lt;/h2&gt;

&lt;p&gt;When I create new projects or resources in AWS accounts, I find it useful to create a new VPC just for the resources in that specific project. This provides network isolation, security and connectivity to the specific components needed for the project. During VPC creation you need to provide a CIDR range that the VPC will use.&lt;/p&gt;

&lt;p&gt;CIDR ranges should be drawn from the private address spaces that &lt;a href="https://datatracker.ietf.org/doc/html/rfc1918#section-3"&gt;IETF RFC 1918&lt;/a&gt; defines. This means that you choose the CIDR range, but it must come from &lt;code&gt;192.168.0.0/16&lt;/code&gt; (256 Class C networks), &lt;code&gt;172.16.0.0/12&lt;/code&gt; (16 Class B networks), or &lt;code&gt;10.0.0.0/8&lt;/code&gt; (one Class A network).&lt;/p&gt;
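If you want to sanity-check a candidate range against the RFC 1918 blocks, Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module can do it in a few lines (a sketch; the function name is mine):

```python
import ipaddress

# The three private address blocks defined by RFC 1918.
RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(cidr: str) -> bool:
    """Return True if the given CIDR lies entirely within an RFC 1918 block."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(block) for block in RFC1918_BLOCKS)

print(is_rfc1918("172.17.0.0/16"))  # True: inside 172.16.0.0/12
print(is_rfc1918("8.8.8.0/24"))     # False: public address space
```

Note that `172.17.0.0/16` passes this check, which is precisely why nothing warned me about the choice I was about to make.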

&lt;p&gt;The day I was working on these Sagemaker resources, I randomly chose the range &lt;code&gt;172.17.0.0&lt;/code&gt; for this VPC, and this fateful decision would come to haunt me.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sagemaker Notebooks
&lt;/h2&gt;

&lt;p&gt;As described briefly above, Sagemaker Notebooks run on an AWS EC2 instance, and provide preconfigured tools for ML use cases. Part of that tooling is a wonderful open source product called &lt;a href="http://jupyter.org/"&gt;Jupyter Notebook&lt;/a&gt;, which is a web-based Python experimentation environment. Jupyter Notebooks run inside Docker on the Sagemaker Notebook EC2 instance.&lt;/p&gt;

&lt;p&gt;When you configure Docker on an EC2 instance or any workstation/laptop/Linux machine, it sets up virtual networking of its own to provide network connectivity to the Docker containers. This local virtual networking sets up a bridge network interface that allows communication between the Docker network and the local machine as well as the internet if allowed. The &lt;a href="https://docs.docker.com/network/bridge/"&gt;default Docker bridge&lt;/a&gt; network uses the range &lt;code&gt;172.17.0.0&lt;/code&gt; for the &lt;code&gt;docker0&lt;/code&gt; network interface.&lt;/p&gt;
&lt;h2&gt;
  
  
  CIDR Range Conflict Causes Connectivity Problem
&lt;/h2&gt;

&lt;p&gt;Now that the background is set, you can see why a connectivity problem was destined to occur on the Sagemaker Notebook instance. Since the VPC CIDR range utilized &lt;code&gt;172.17.0.0&lt;/code&gt;, that meant all EC2 instances or network interfaces created in that VPC would be provided with an IP address within the &lt;code&gt;172.17.0.0&lt;/code&gt; range.&lt;/p&gt;

&lt;p&gt;Because the Jupyter Notebook running in Docker on the Sagemaker Notebook EC2 instance used the Docker bridge network &lt;code&gt;172.17.0.0&lt;/code&gt; and listened for all traffic sent to that destination, it took precedence over traffic bound for the outside world. Any network packets sent to the default route (&lt;code&gt;0.0.0.0&lt;/code&gt;) via the default gateway (&lt;code&gt;172.17.112.1&lt;/code&gt;) were actually intercepted by that same Docker bridge network, and never sent outside it.&lt;/p&gt;

&lt;p&gt;This is shown via the &lt;code&gt;route -n&lt;/code&gt; command on the Sagemaker Notebook EC2 instance terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;sh-4.2$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;route &lt;span class="nt"&gt;-n&lt;/span&gt;
&lt;span class="go"&gt;Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.112.1    0.0.0.0         UG    10002  0        0 eth2
169.254.0.0     0.0.0.0         255.255.255.0   U     0      0        0 veth_def_agent
169.254.169.254 0.0.0.0         255.255.255.255 UH    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.17.112.0    0.0.0.0         255.255.240.0   U     0      0        0 eth2
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-9bb29e923d05
192.168.0.0     0.0.0.0         255.255.0.0     U     0      0        0 bridge0
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Was This Unexpected?
&lt;/h2&gt;

&lt;p&gt;What could have been done to avoid this, you ask?&lt;/p&gt;

&lt;p&gt;As I mentioned earlier, the &lt;a href="https://docs.docker.com/network/bridge/"&gt;default Docker bridge&lt;/a&gt; network uses the range &lt;code&gt;172.17.0.0&lt;/code&gt; for the &lt;code&gt;docker0&lt;/code&gt; network interface. It's fine that AWS simply used that default without modifying it, but that implementation detail should have been documented in the Sagemaker Notebook documentation.&lt;/p&gt;

&lt;p&gt;Ideally, though, my suggested resolution goes further: if the AWS end user (aka customer) creates a Sagemaker Notebook in a VPC whose CIDR range matches the default Docker bridge range (&lt;code&gt;172.17.0.0&lt;/code&gt;), AWS should automatically configure the Docker bridge network to use a different range so the conflict never arises.&lt;/p&gt;
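On a stock Docker install, that kind of remapping is possible today via the `bip` setting in `/etc/docker/daemon.json`, which moves the `docker0` bridge onto a different range. A sketch (pick a replacement range that avoids your own networks; also note that AWS manages the Docker configuration on Sagemaker Notebook instances, so a manual change there may not survive):

```json
{
  "bip": "192.168.200.1/24"
}
```

After editing the file, restart the Docker daemon (e.g. `sudo systemctl restart docker`) for the new bridge range to take effect.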

&lt;h2&gt;
  
  
  Wrap Up
&lt;/h2&gt;

&lt;p&gt;Ultimately the responsibility was on me to create the VPC and the Sagemaker Notebook and make sure they worked correctly. But this was made far more difficult by choices AWS made on my behalf. Because AWS Sagemaker did not document the Docker bridge network range it used, and did not programmatically configure an alternate Docker bridge range when the customer selected that same CIDR range for their VPC, I was left troubleshooting a nasty problem.&lt;/p&gt;

&lt;p&gt;Please be aware this is still an open issue. (I say that figuratively, not literally: there is no AWS Support ticket open, and AWS has not committed to fixing this in any way.)&lt;/p&gt;

&lt;p&gt;If you want to avoid CIDR conflicts on Sagemaker Notebooks in your AWS VPC, make sure you use a CIDR range other than &lt;code&gt;172.17.0.0&lt;/code&gt;.&lt;/p&gt;
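That check is easy to automate before you ever create the VPC. For example, a sketch using Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module (the function name is mine):

```python
import ipaddress

# Default range of Docker's docker0 bridge network, per Docker's docs.
DOCKER_BRIDGE = ipaddress.ip_network("172.17.0.0/16")

def conflicts_with_docker_bridge(vpc_cidr: str) -> bool:
    """Return True if a proposed VPC CIDR overlaps the default docker0 range."""
    return ipaddress.ip_network(vpc_cidr).overlaps(DOCKER_BRIDGE)

print(conflicts_with_docker_bridge("172.17.0.0/16"))  # True: the conflict in this post
print(conflicts_with_docker_bridge("10.42.0.0/16"))   # False: a safe choice
```

Running this against a proposed range takes seconds, and would have saved me the troubleshooting session described above.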

</description>
      <category>aws</category>
      <category>sagemaker</category>
      <category>machinelearning</category>
      <category>vpc</category>
    </item>
  </channel>
</rss>
