<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Carlo Mencarelli</title>
    <description>The latest articles on DEV Community by Carlo Mencarelli (@mencarellic).</description>
    <link>https://dev.to/mencarellic</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F828237%2F6eb30a27-184f-4792-a36c-288f834b2e63.jpeg</url>
      <title>DEV Community: Carlo Mencarelli</title>
      <link>https://dev.to/mencarellic</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mencarellic"/>
    <language>en</language>
    <item>
      <title>AWS Amplify Through An Infrastructure Lens</title>
      <dc:creator>Carlo Mencarelli</dc:creator>
      <pubDate>Wed, 19 Apr 2023 04:20:33 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-amplify-through-an-infrastructure-lens-28mo</link>
      <guid>https://dev.to/aws-builders/aws-amplify-through-an-infrastructure-lens-28mo</guid>
      <description>&lt;p&gt;Every company wants an app these days, and creating one is getting easier every day. Amazon has specific solutions for this, advertising that you can "Build full-stack web and mobile apps in hours." And it's "Easy to start, easy to scale."&lt;/p&gt;

&lt;p&gt;I spent a few weeks toying with Amplify. I wanted to learn how easy it was to build a mobile app. I'm not a front-end developer; I've spent the last ten years doing systems and infrastructure engineering. My programming experience is limited to scripting or side projects with no audience beyond myself.&lt;/p&gt;

&lt;p&gt;In this write-up, I present my thoughts on Amplify as a platform for building mobile apps.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Project
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujmj3bps3z55u4kx9ncb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujmj3bps3z55u4kx9ncb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I chose to create a water consumption tracking app, for no reason other than that it exercised the pieces I cared about: it could require authentication, it needed data persistence, and I could get away with a minimal GUI.&lt;/p&gt;

&lt;p&gt;I chose React Native somewhat on a whim. Some of my friends in the industry consider it a good solution, and I had no experience with it. Plus, it's cross-platform, so I could theoretically ship both an iOS and an Android app if I ever released this to the public.&lt;/p&gt;




&lt;h2&gt;
  
  
  Documentation
&lt;/h2&gt;

&lt;p&gt;In my opinion, the &lt;a href="https://docs.amplify.aws/start/" rel="noopener noreferrer"&gt;Amplify docs&lt;/a&gt; are a cut above the rest of the AWS documentation. They have a modern feel and are easy to consume, with panels for important callouts, code blocks with highlighting, and up-to-date UI screenshots.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Development Process
&lt;/h2&gt;

&lt;p&gt;When creating an Amplify project, I was impressed with the CLI that the Amplify team maintains. You start a new Amplify project with &lt;code&gt;amplify init&lt;/code&gt;, which walks you through several options for your app, and then add storage, APIs, databases, and more using &lt;code&gt;amplify add&lt;/code&gt;. The CLI does its best to infer sensible values. For example, it knew I was building a React Native app and selected the appropriate framework.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71n8nl2s2y31mqvarw6r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71n8nl2s2y31mqvarw6r.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, I added authentication to the app. Running &lt;code&gt;amplify add auth&lt;/code&gt; presents a short questionnaire. An excellent quality-of-life feature of the Amplify CLI is the "I want to learn more" option available on many of the wizard questions, which gives the user a brief explanation of the question and the possible answers.&lt;/p&gt;

&lt;p&gt;Speaking of authentication, the Amplify library makes connecting Cognito super easy. Import the library and wrap the App component in &lt;code&gt;App.tsx&lt;/code&gt;: &lt;code&gt;export default withAuthenticator(App);&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqk5rqdmqaytxp5t7bwqv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqk5rqdmqaytxp5t7bwqv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After adding various components using the add command, you deploy them with &lt;code&gt;amplify push&lt;/code&gt;. This pushes your configuration to AWS, which deploys via CloudFormation. It's generally a smooth process and simplifies the Cloud infrastructure components so an app developer can focus on building their app using their language of choice.&lt;/p&gt;
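&lt;p&gt;For reference, the whole loop looks roughly like this. These are real Amplify CLI subcommands, but the block below defines a dry-run stub that only echoes each command, so the sequence can be previewed without an AWS account; remove the stub to run it for real.&lt;/p&gt;

```shell
# Dry-run stub: echoes each command instead of executing it.
# Delete this function to run the real Amplify CLI.
amplify() { echo "(dry-run) amplify $*"; }

amplify init          # initialize the project; the CLI auto-detects the framework
amplify add auth      # Cognito-backed authentication
amplify add api       # GraphQL API via AppSync
amplify add storage   # S3 and/or DynamoDB storage
amplify status        # review what will change
amplify push          # deploy the configuration via CloudFormation
```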

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhggw9sb0qmsptsvd9wt6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhggw9sb0qmsptsvd9wt6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am not generally impressed with the AWS Console experience. It's inconsistent, dated, and buggy. My go-to solution is usually infrastructure as code or the AWS CLI. However! Amplify Studio is an outstanding example of what the console could be. It raises the bar for how an AWS service team can build and maintain a console experience. In a lot of ways, it feels like a completely different product, which makes sense given Amplify's target audience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcru605u8nlqr87gg5nb5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcru605u8nlqr87gg5nb5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Infrastructure
&lt;/h2&gt;

&lt;p&gt;So what about the infrastructure that's created? Does it follow best practices and the Well-Architected pillars? Using the Amplify CLI, I made a Cognito user pool, S3 buckets, a DynamoDB table, and a GraphQL API with AppSync.&lt;/p&gt;

&lt;p&gt;The S3 bucket is created to hold user-uploaded content, public or private. It manages this separation with an IAM policy that grants access to specific app users via the &lt;code&gt;cognito-identity.amazonaws.com:sub&lt;/code&gt; IAM condition key. The bucket doesn't have versioning or logging enabled and uses the default S3-managed encryption key rather than a KMS key. That's fine for many apps, but it deserves consideration depending on what the app is meant to do.&lt;/p&gt;

&lt;p&gt;When creating the DynamoDB table, the Amplify CLI lets you define your columns and partition key, plus any sort keys, global secondary indexes, and Lambda triggers, which is a nice touch at creation time. The table uses the AWS-owned KMS key for DynamoDB, which, as with S3, may or may not be an issue for your use case. On the plus side, the table is created in "On-Demand" capacity mode, which is excellent for unknown loads.&lt;/p&gt;
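&lt;p&gt;If the AWS-owned key ever becomes a concern, the table can be moved to a customer-managed KMS key after the fact. The table name and key alias below are hypothetical, and the &lt;code&gt;aws&lt;/code&gt; function is a dry-run stub that just echoes the commands; remove it to run them for real:&lt;/p&gt;

```shell
# Dry-run stub: echoes each command instead of executing it.
aws() { echo "(dry-run) aws $*"; }

TABLE="WaterLog-dev"   # hypothetical name of the Amplify-generated table

# Confirm the billing mode Amplify picked (On-Demand shows as PAY_PER_REQUEST)
aws dynamodb describe-table --table-name "$TABLE" \
  --query 'Table.BillingModeSummary.BillingMode'

# Re-encrypt the table under a customer-managed KMS key
aws dynamodb update-table --table-name "$TABLE" \
  --sse-specification Enabled=true,SSEType=KMS,KMSMasterKeyId=alias/my-app-key
```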

&lt;p&gt;AppSync is deployed with a simple configuration: no logging, WAF, or X-Ray enabled. However, when creating the API, you can have Amplify generate all of the operations for it, which was great for speeding up development time. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73uifdk3l9yqxaibczue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73uifdk3l9yqxaibczue.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;Looking at the created infrastructure through a more security-minded lens, I ran one of my favorite tools, &lt;a href="https://github.com/prowler-cloud/prowler" rel="noopener noreferrer"&gt;Prowler&lt;/a&gt;, with Security Hub as a second opinion. They both generally found the same problems. &lt;/p&gt;

&lt;p&gt;Regarding S3, a lot of what you'd expect to alert did: MFA delete missing, versioning disabled, SSE-KMS not used, and access logging off, to name a few. None of these are deal breakers, and they are easy enough to enable and understand with little to no experience. I'd argue KMS is the most complicated to get right, and as stated above, the use case would need to justify making it a required control.&lt;/p&gt;
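&lt;p&gt;Most of the S3 findings can be cleared with a handful of calls. A sketch, with a hypothetical bucket name and a dry-run &lt;code&gt;aws&lt;/code&gt; stub that just echoes each command (remove the stub to apply the changes):&lt;/p&gt;

```shell
# Dry-run stub: echoes each command instead of executing it.
aws() { echo "(dry-run) aws $*"; }

BUCKET="myapp-storage-dev"   # hypothetical Amplify-generated bucket name

# Enable versioning (also a prerequisite for MFA delete)
aws s3api put-bucket-versioning --bucket "$BUCKET" \
  --versioning-configuration Status=Enabled

# Switch default encryption from SSE-S3 to SSE-KMS
aws s3api put-bucket-encryption --bucket "$BUCKET" \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"}}]}'

# Turn on server access logging to a separate, pre-existing log bucket
aws s3api put-bucket-logging --bucket "$BUCKET" \
  --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"myapp-logs","TargetPrefix":"s3/"}}'
```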

&lt;p&gt;Beyond this, Prowler and Security Hub caught some misconfigurations with CloudWatch logging and IAM. The IAM findings have no relation to Amplify, but if I created an account for an app, I'd still have some security work to do here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxe9wixap990a8def93g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxe9wixap990a8def93g.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What I'd Like To See More Of
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;p&gt;Security is paramount on today's Internet. Some intelligent defaults are in place, but there's no easy way, in the CLI or the console, to enable features such as WAF or better IAM and account-level practices in a single step. Amazon knows what these practices are, since they alert in Security Hub. Making them easy to adopt via Amplify would go a long way toward protecting the app and its developers from potential compromise.&lt;/p&gt;

&lt;p&gt;That's not to say Amazon should shoulder all the responsibility, but it's clear that Amplify is aimed at audiences that may not have a robust cloud or security background.&lt;/p&gt;

&lt;h3&gt;
  
  
  More Service Tie-Ins
&lt;/h3&gt;

&lt;p&gt;Amplify has rich tie-ins with multiple standard services, including AppSync, DynamoDB, and Cognito. It can also hook into some of Amazon's ML offerings, including Rekognition, Textract, Translate, and Pinpoint. I would love to see Amplify expand to connect to more services. What if I need to connect to a legacy API through API Gateway, or to an ElastiCache cluster? Sure, nothing stops me from doing that, but Amplify loses some of its charm that way.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Amplify is an excellent platform for solo developers or small app teams. It's a novel approach to app development that breaks down many barriers and lets you get started very fast. I'd be curious to find case studies of large, million-user apps that leverage Amplify, just to see how they approach development, security, and operations.&lt;/p&gt;

&lt;p&gt;Whether you are a developer with an idea or someone who has never touched a line of code, you could get started reasonably quickly, which is impressive.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>builtwithamplify</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How To Make AWS Budgets Work For You</title>
      <dc:creator>Carlo Mencarelli</dc:creator>
      <pubDate>Sun, 21 Aug 2022 14:03:07 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-make-aws-budgets-work-for-you-2n1f</link>
      <guid>https://dev.to/aws-builders/how-to-make-aws-budgets-work-for-you-2n1f</guid>
      <description>&lt;h3&gt;
  
  
  TLDR
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;By spending a few minutes of your time now, you can prevent a billing catastrophe in the future!&lt;/li&gt;
&lt;li&gt;There’s a lot of flexibility in what you can alert on and how, which lets you fit budgets to your needs.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Why Use AWS Budgets?
&lt;/h3&gt;

&lt;p&gt;A few days ago, I was speaking with a colleague who had been toying with &lt;a href="https://aws.amazon.com/textract/"&gt;AWS Textract&lt;/a&gt; and was surprised when an AWS rep contacted him a few days later trying to sell him business-tier support. When he logged in to his account, he saw that he had managed to accumulate a $16,000 AWS bill due to an oversight in how he set up his code. Instead of testing Textract against a few documents, he ran it against thousands, each a hundred pages long. This got me thinking about a &lt;a href="https://carlo.cloud/reducing-aws-kms-costs-with-s3-bucket-encryption-1f934603e921"&gt;post I had written&lt;/a&gt; not too long ago about keeping up with technology trends and how failing to do so caused a massive spike in KMS costs.&lt;/p&gt;

&lt;p&gt;Both of these costly mistakes could have been prevented with an easy setup of the &lt;a href="https://aws.amazon.com/aws-cost-management/aws-budgets/"&gt;AWS Budgets&lt;/a&gt; service.&lt;/p&gt;




&lt;h3&gt;
  
  
  Creating a Budget
&lt;/h3&gt;

&lt;p&gt;Creating a basic budget is pretty straightforward. There’s a workflow that can be followed in the &lt;a href="https://us-east-1.console.aws.amazon.com/billing/home#/budgets/overview"&gt;AWS console&lt;/a&gt;. There are different options for types of budgets, but the primary one is the Cost Budget. That’s not to say the others aren’t useful, but the cost budget lets you convert usage into actual spend and notify on what’s important: “How much of your money are you spending?”&lt;/p&gt;

&lt;p&gt;Although it doesn’t report or alert on raw usage, the cost budget has more options for filtering and scoping than the usage budget, which is an interesting choice by the AWS team. The cost budget scope contains many of the same dimensions as Cost Explorer.&lt;/p&gt;

&lt;p&gt;After selecting the cost budget, you’re presented with a simple web form asking for the budget name, amount, and scope. Putting the name aside, let’s talk about the Budget Amount panel.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Budget Amount panel
&lt;/h4&gt;

&lt;p&gt;This panel has a few settings. None are overly complex, but I’ll briefly go into each one.&lt;/p&gt;

&lt;p&gt;The period selection is pretty simple: How often do you want the budget to reset? You can select daily, monthly, quarterly, or annually. I’ve found monthly to be a good sweet spot, though daily works well for personal accounts. Quarterly and annual periods are ones you’re more likely to encounter for enterprise or financial-planning budgets.&lt;/p&gt;

&lt;p&gt;The renewal type is also straightforward. Do you want the budget to recur or expire at some time in the future? Then you’ll select the start (and end if you pick an expiring budget) dates.&lt;/p&gt;

&lt;p&gt;The budgeting method is the most complex option on this part of the form. Fixed is the easiest: set a single amount for the budget period selected above. Planned lets you set a value for each month or autogenerate values based on percentage growth. Auto-adjusting is a newer option that leverages forecasted values or historical data to generate your budget value. It’s an interesting approach for those who want to eliminate the budget maintenance that comes with monthly growth or optimization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--77pGmHTX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wcwy8y351aub5tux3elz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--77pGmHTX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wcwy8y351aub5tux3elz.png" alt="The Budget Amount Panel" width="880" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Budget Scope selection
&lt;/h4&gt;

&lt;p&gt;Next is the scope selection. Simple on the surface, but it contains a lot of potential for a very detailed selection of filters. This is also where you can exclude certain types of charges, such as credits, taxes, support, etc. If you need a simple, account-wide budget, then this is fine to leave alone, but what if you wanted to set up a budget to monitor your EC2 spot usage or your data transfer costs out of Lambda for US-West-2 and US-East-2? Well, you can!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pBIY-ZqE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rrr0xpq28rjmrg2fiyqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pBIY-ZqE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rrr0xpq28rjmrg2fiyqn.png" alt="A Simple Budget Scope" width="880" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m4FV9uLa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tbjw3bnvsvgv3royjfpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m4FV9uLa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tbjw3bnvsvgv3royjfpv.png" alt="A More Complex Budget Scope" width="880" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Configuring Alerts
&lt;/h4&gt;

&lt;p&gt;What good are budgets if they don’t notify anyone when triggered? You can create numerous alerts on both forecasted and actual values, delivered via email, SNS, or AWS Chatbot. I’d recommend setting up one for actual values and one for forecasted values. AWS cost forecasting isn’t always the most accurate, but it can be an additional layer of detection against things going awry in your account.&lt;/p&gt;
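&lt;p&gt;The same budget-plus-alert setup can be scripted with the AWS CLI. Below is a sketch of a $5,000 monthly cost budget with an actual-spend alert at 95%; the account ID and email address are placeholders, and the &lt;code&gt;aws&lt;/code&gt; function is a dry-run stub that echoes instead of executing:&lt;/p&gt;

```shell
# Dry-run stub: echoes each command instead of executing it.
aws() { echo "(dry-run) aws $*"; }

aws budgets create-budget \
  --account-id 111122223333 \
  --budget '{"BudgetName":"AccountWide","BudgetLimit":{"Amount":"5000","Unit":"USD"},"TimeUnit":"MONTHLY","BudgetType":"COST"}' \
  --notifications-with-subscribers '[{"Notification":{"NotificationType":"ACTUAL","ComparisonOperator":"GREATER_THAN","Threshold":95,"ThresholdType":"PERCENTAGE"},"Subscribers":[{"SubscriptionType":"EMAIL","Address":"me@example.com"}]}]'
```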

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YLDDO18N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nnob4e4h66lua7rd0haz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YLDDO18N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nnob4e4h66lua7rd0haz.png" alt="The AWS Budgets Alert Panel" width="880" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Configuring Actions
&lt;/h4&gt;

&lt;p&gt;Budget actions are fascinating. From one perspective, it’s incredibly interesting that, based on the alerts configured in the previous step, you can apply a Service Control Policy (SCP) or even shut down RDS or EC2 instances. However, it’s also very limited. You must select specific instance IDs to shut down EC2 instances, which doesn’t work in a dynamic environment where instances are terminated and recreated. The actions also don’t allow for anything more complex than that. If, for example, a budget alert could trigger a Lambda function or Step Functions state machine, you’d be able to take many more kinds of action.&lt;/p&gt;
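&lt;p&gt;For completeness, here’s roughly what creating a budget action looks like from the CLI, in this case applying an SCP when actual spend crosses 95%. The policy and OU IDs, role ARN, and email are placeholders, and the &lt;code&gt;aws&lt;/code&gt; function is a dry-run stub; treat this as a sketch rather than a drop-in command:&lt;/p&gt;

```shell
# Dry-run stub: echoes each command instead of executing it.
aws() { echo "(dry-run) aws $*"; }

aws budgets create-budget-action \
  --account-id 111122223333 \
  --budget-name "AccountWide" \
  --notification-type ACTUAL \
  --action-type APPLY_SCP_POLICY \
  --action-threshold ActionThresholdValue=95,ActionThresholdType=PERCENTAGE \
  --definition '{"ScpActionDefinition":{"PolicyId":"p-examplepolicy","TargetIds":["ou-exampleou"]}}' \
  --execution-role-arn arn:aws:iam::111122223333:role/BudgetActionRole \
  --approval-model AUTOMATIC \
  --subscribers '[{"SubscriptionType":"EMAIL","Address":"me@example.com"}]'
```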




&lt;h3&gt;
  
  
  Scaling Budget Creation (Using Terraform)
&lt;/h3&gt;

&lt;p&gt;If you only have one account, you may only need one budget. However, that’s not always the case. Budget creation and management are fully compatible with Terraform. In fact, for some, it may be simpler to understand and put together budgets in Terraform than through the Console wizard.&lt;/p&gt;

&lt;p&gt;Terraform has two related resources: &lt;code&gt;aws_budgets_budget&lt;/code&gt; (&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/budgets_budget"&gt;Doc Link&lt;/a&gt;) and &lt;code&gt;aws_budgets_budget_action&lt;/code&gt; (&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/budgets_budget_action"&gt;Doc Link&lt;/a&gt;). Below is a brief example similar to the example built in the GUI above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_budgets_budget"&lt;/span&gt; &lt;span class="s2"&gt;"account"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AccountWide"&lt;/span&gt;
  &lt;span class="nx"&gt;budget_type&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"COST"&lt;/span&gt;
  &lt;span class="nx"&gt;limit_amount&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"5000"&lt;/span&gt;
  &lt;span class="nx"&gt;limit_unit&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"USD"&lt;/span&gt;
  &lt;span class="nx"&gt;time_unit&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"MONTHLY"&lt;/span&gt;

  &lt;span class="nx"&gt;notification&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;comparison_operator&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"GREATER_THAN"&lt;/span&gt;
    &lt;span class="nx"&gt;threshold&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;95&lt;/span&gt;
    &lt;span class="nx"&gt;threshold_type&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"PERCENTAGE"&lt;/span&gt;
    &lt;span class="nx"&gt;notification_type&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ACTUAL"&lt;/span&gt;
    &lt;span class="nx"&gt;subscriber_email_addresses&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"me@carlo.cloud"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;notification&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;comparison_operator&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"GREATER_THAN"&lt;/span&gt;
    &lt;span class="nx"&gt;threshold&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;110&lt;/span&gt;
    &lt;span class="nx"&gt;threshold_type&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"PERCENTAGE"&lt;/span&gt;
    &lt;span class="nx"&gt;notification_type&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"FORECASTED"&lt;/span&gt;
    &lt;span class="nx"&gt;subscriber_email_addresses&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"me@carlo.cloud"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Wrapping Up
&lt;/h3&gt;

&lt;p&gt;That’s all there is to it; this is a simple but often overlooked service in AWS. It can give some peace of mind, so you never need to receive that call from finance or AWS asking if you want to buy business-level support due to your prolific usage of a service you never intended to use.&lt;/p&gt;




&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cost-management/latest/userguide/budgets-create.html"&gt;AWS - Budgets Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>Linux’s Cloud Init — Benefits, Quirks, and Drawbacks</title>
      <dc:creator>Carlo Mencarelli</dc:creator>
      <pubDate>Mon, 13 Jun 2022 16:55:20 +0000</pubDate>
      <link>https://dev.to/aws-builders/linuxs-cloud-init-benefits-quirks-and-drawbacks-1ld0</link>
      <guid>https://dev.to/aws-builders/linuxs-cloud-init-benefits-quirks-and-drawbacks-1ld0</guid>
      <description>&lt;h3&gt;
  
  
  Originally created by Canonical for Ubuntu on AWS EC2, it’s now the de facto early boot configuration method
&lt;/h3&gt;




&lt;h3&gt;
  
  
  TLDR
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Init is an invaluable resource for Cloud Engineers and Software Developers alike.&lt;/li&gt;
&lt;li&gt;It's a straightforward service on the surface but is highly customizable to whatever needs an org may have.&lt;/li&gt;
&lt;li&gt;Cloud Init isn't only AWS EC2 user data; it also handles network configuration and vendor configuration, and consumes instance metadata services.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;You're probably using Cloud Init and don't even realize it. Created by Canonical in the early days of EC2, it helped revolutionize how we treat our servers and how runtime initialization is conducted. Since its inception, it has been one of the primary methods of early configurations for our infrastructure. It's also run in the stacks of every major public cloud provider and many private cloud environments like &lt;a href="https://linuxcontainers.org/lxd/"&gt;LXD&lt;/a&gt;, &lt;a href="https://www.linux-kvm.org/page/Main_Page"&gt;KVM&lt;/a&gt;, and &lt;a href="https://www.openstack.org/"&gt;OpenStack&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Cloud Init allows engineers to reduce or even eliminate package installs and configuration during application deployment. "Why should I need to install ImageMagick on every single Rails deployment?" Similarly, Cloud Init can provide breathing room between OS image builds: since security patching can happen as part of Cloud Init, you can rotate your AMI on a more manageable basis, such as a weekly cadence.&lt;/p&gt;




&lt;h3&gt;
  
  
  How Does Cloud Init Work
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Cloud Init Stages
&lt;/h4&gt;

&lt;p&gt;Cloud Init works in several stages.&lt;/p&gt;

&lt;p&gt;First, for systemd machines, is the &lt;code&gt;Generator&lt;/code&gt; stage. If you're unfamiliar with systemd, a generator is a binary executed early in the boot process to dynamically generate unit files, symlinks, and more. Cloud Init's generator determines if the rest of the Cloud Init process should continue. If so, Cloud Init is included in the list of boot goals for the system.&lt;/p&gt;

&lt;p&gt;Next is the &lt;code&gt;Local&lt;/code&gt; phase. This phase runs the &lt;code&gt;cloud-init-local.service&lt;/code&gt; systemd service and runs as early as possible. Essentially its entire purpose is to locate data sources and generate (or apply) networking configurations for the system. It's worth noting that this phase blocks much of the boot process, including the network initialization.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Network&lt;/code&gt; phase continues the Cloud Init boot. This phase relies on networking being up (and, by association, the &lt;code&gt;Local&lt;/code&gt; phase). This stage will run any &lt;code&gt;cloud_init&lt;/code&gt; modules found. These might be things such as &lt;code&gt;mount&lt;/code&gt; and &lt;code&gt;bootcmd&lt;/code&gt; options.&lt;/p&gt;

&lt;p&gt;After the &lt;code&gt;Network&lt;/code&gt; phase comes the &lt;code&gt;Config&lt;/code&gt; phase, which runs the modules that don't affect any other stages. Specifically, it runs the &lt;code&gt;cloud_config&lt;/code&gt; modules in the Cloud Init config directory. &lt;code&gt;runcmd&lt;/code&gt; is included in this step.&lt;/p&gt;

&lt;p&gt;Cloud Init closes out with the &lt;code&gt;Final&lt;/code&gt; phase. Running any &lt;code&gt;cloud_final&lt;/code&gt; modules, this phase runs as late as possible. It is the stage that includes any user data scripts and configuration management tooling (Puppet, Chef, etc.).&lt;/p&gt;
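&lt;p&gt;On a running instance you can see these stages and their timings directly. The subcommands below ship with Cloud Init; the dry-run fallback is only there so the snippet can be previewed on a machine without Cloud Init installed:&lt;/p&gt;

```shell
# Fall back to a dry-run echo when cloud-init isn't installed,
# so this can be previewed anywhere.
ci="cloud-init"
command -v cloud-init >/dev/null || ci="echo (dry-run) cloud-init"

$ci status --long     # overall result of the boot stages
$ci analyze show      # per-stage and per-module timings
$ci analyze blame     # modules sorted by time consumed
```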

&lt;h4&gt;
  
  
  Instance Metadata
&lt;/h4&gt;

&lt;p&gt;Each server using Cloud Init also has a collection of data that Cloud Init uses to configure the instance. This includes what we generally think of as &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html"&gt;instance metadata&lt;/a&gt; on EC2 instances, but more besides. &lt;/p&gt;

&lt;p&gt;Some providers will create or attach a config drive containing metadata service information files. &lt;a href="https://docs.openstack.org/nova/latest/admin/config-drive.html"&gt;OpenStack&lt;/a&gt; is an example of one such provider.&lt;/p&gt;

&lt;p&gt;While we usually interact with user data, cloud providers can also supply &lt;a href="https://cloudinit.readthedocs.io/en/latest/topics/vendordata.html"&gt;vendor data&lt;/a&gt;. The idea is the same as user data: it exists to let the cloud provider customize the image at runtime. Typical vendor data tasks might involve setting the instance's hostname or configuring package repository paths. Vendor data can be disabled if desired, and user data overrides vendor data when Cloud Init determines the final configuration. &lt;/p&gt;
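&lt;p&gt;You can poke at the EC2 flavor of this data source yourself from any instance. A quick sketch (the calls are bounded with &lt;code&gt;--max-time&lt;/code&gt; so it degrades gracefully off-EC2, and note that instances enforcing IMDSv2 additionally require a session token):&lt;/p&gt;

```shell
# Query the EC2 instance metadata service that Cloud Init itself consumes.
imds="http://169.254.169.254/latest"

instance_id=$(curl -sf --max-time 2 "$imds/meta-data/instance-id" || echo "(not on EC2)")
user_data=$(curl -sf --max-time 2 "$imds/user-data" || echo "(no user data or not on EC2)")

echo "instance:  $instance_id"
echo "user data: $user_data"
```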




&lt;h3&gt;
  
  
  Getting Started with Cloud Init
&lt;/h3&gt;

&lt;p&gt;Cloud Init can be driven in two ways: with a shell script or a YAML-formatted cloud-config file. Both approaches are pretty straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/sh&lt;/span&gt;

&lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nt"&gt;--assumeyes&lt;/span&gt; &lt;span class="nt"&gt;--security&lt;/span&gt; update-minimal

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, the equivalent cloud-config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#cloud-config&lt;/span&gt;

&lt;span class="na"&gt;runcmd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;sudo yum --assumeyes --security update-minimal&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script option is pretty easy to understand. As mentioned above, it's executed in the &lt;code&gt;Final&lt;/code&gt; phase. The cloud-config option is more interesting since you can set up modules to run in the different phases, such as the &lt;code&gt;bootcmd&lt;/code&gt; option. Check out the &lt;a href="https://cloudinit.readthedocs.io/en/latest/topics/modules.html"&gt;module reference page&lt;/a&gt; for a complete list of available modules. There is also a great list of example configurations on the &lt;a href="https://cloudinit.readthedocs.io/en/latest/topics/examples.html"&gt;cloud-config examples page&lt;/a&gt;.&lt;/p&gt;
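&lt;p&gt;To make that concrete, here is a small, hypothetical cloud-config that touches multiple phases: &lt;code&gt;bootcmd&lt;/code&gt; runs early on every boot, while &lt;code&gt;packages&lt;/code&gt; and &lt;code&gt;runcmd&lt;/code&gt; run in later phases (the log path and package name are only illustrations):&lt;/p&gt;

```yaml
#cloud-config
# bootcmd runs early in the boot process, on every boot
bootcmd:
  - echo "boot phase ran" >> /var/log/phase-check.log

# package installation happens in a later phase
packages:
  - htop

# runcmd runs once per instance, during the Final phase
runcmd:
  - [sh, -c, 'echo "final phase ran" >> /var/log/phase-check.log']
```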




&lt;h3&gt;
  
  
  Disabling Cloud Init
&lt;/h3&gt;

&lt;p&gt;If, for some reason, you want to, you can prevent Cloud Init from running. This can be accomplished in a couple of different ways. The easiest is to add a file during the AMI build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;touch&lt;/span&gt; /etc/cloud/cloud-init.disabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also add a parameter to the kernel command line (what you see in &lt;code&gt;/proc/cmdline&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cloud-init&lt;span class="o"&gt;=&lt;/span&gt;disabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's also possible to disable user data by setting the &lt;code&gt;allow_userdata&lt;/code&gt; parameter in &lt;code&gt;/etc/cloud/cloud.cfg&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;allow_userdata: &lt;span class="nb"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Troubleshooting Cloud Init
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Logs
&lt;/h4&gt;

&lt;p&gt;Occasionally, you may want to dig deeper into Cloud Init. Maybe your user data isn't executing the way you expect, or the run is taking longer than it should. Fortunately, Cloud Init records a lot of detail for debugging.&lt;/p&gt;

&lt;p&gt;The main logs are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;/var/log/cloud-init.log&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/var/log/cloud-init-output.log&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These logs can be parsed with the &lt;code&gt;analyze&lt;/code&gt; sub-command of the &lt;code&gt;cloud-init&lt;/code&gt; command, which can help turn them into a more usable format.&lt;/p&gt;
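&lt;p&gt;A minimal sketch of using &lt;code&gt;analyze&lt;/code&gt;, guarded so it degrades gracefully on a machine without Cloud Init:&lt;/p&gt;

```shell
# Summarize boot timing from the Cloud Init logs, if cloud-init is present
if command -v cloud-init >/dev/null; then
  cloud-init analyze show   # per-stage timing summary
  cloud-init analyze blame  # events ordered by time consumed
else
  echo "cloud-init not installed"
fi
```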

&lt;p&gt;There are also logs in the &lt;code&gt;/run/cloud-init&lt;/code&gt; directory. These logs are more related to some of the inner workings and decisions of Cloud Init.&lt;/p&gt;

&lt;h4&gt;
  
  
  Data Files
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;/var/lib/cloud/&lt;/code&gt; directory is where the data files are kept. A handy file in this directory is &lt;code&gt;status.json&lt;/code&gt;. It includes the stages that ran and the start/finish times for each one (in epoch format).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;ec2-user@ip-10-0-0-60 data]&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /var/lib/cloud/data/status.json
&lt;span class="o"&gt;{&lt;/span&gt;
 &lt;span class="s2"&gt;"v1"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"datasource"&lt;/span&gt;: &lt;span class="s2"&gt;"DataSourceEc2"&lt;/span&gt;,
  &lt;span class="s2"&gt;"init"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
   &lt;span class="s2"&gt;"errors"&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;,
   &lt;span class="s2"&gt;"finished"&lt;/span&gt;: 1655096178.478916,
   &lt;span class="s2"&gt;"start"&lt;/span&gt;: 1655096152.503821
  &lt;span class="o"&gt;}&lt;/span&gt;,
  &lt;span class="s2"&gt;"init-local"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
   &lt;span class="s2"&gt;"errors"&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;,
   &lt;span class="s2"&gt;"finished"&lt;/span&gt;: 1655096151.389412,
...File snipped &lt;span class="k"&gt;for &lt;/span&gt;brevity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
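&lt;p&gt;Since the timestamps are in epoch format, a quick way to make them readable is GNU &lt;code&gt;date&lt;/code&gt; (here using the &lt;code&gt;init&lt;/code&gt; start time from the sample above, truncated to whole seconds):&lt;/p&gt;

```shell
# Convert an epoch timestamp from status.json to a human-readable UTC time
date -u -d @1655096152
# prints: Mon Jun 13 04:55:52 UTC 2022
```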



&lt;h4&gt;
  
  
  Configuration Files
&lt;/h4&gt;

&lt;p&gt;Config files are kept in &lt;code&gt;/etc/cloud/cloud.cfg&lt;/code&gt; and the &lt;code&gt;/etc/cloud/cloud.cfg.d/&lt;/code&gt; directory. &lt;/p&gt;




&lt;h3&gt;
  
  
  Useful Cloud Init Commands to Know
&lt;/h3&gt;

&lt;p&gt;Systems equipped with Cloud Init come with a binary used to interact with it. The command to use is &lt;code&gt;cloud-init&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;One of the most useful commands is &lt;code&gt;cloud-init status&lt;/code&gt; which returns the status of the Cloud Init run. An optional &lt;code&gt;--long&lt;/code&gt; flag grants more detail:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;ec2-user@ip-10-0-0-41 ~]# &lt;span class="nb"&gt;sudo &lt;/span&gt;cloud-init status
status: running
&lt;span class="o"&gt;[&lt;/span&gt;ec2-user@ip-10-0-0-41 ~]# &lt;span class="nb"&gt;sudo &lt;/span&gt;cloud-init status &lt;span class="nt"&gt;--long&lt;/span&gt;
status: &lt;span class="k"&gt;done
&lt;/span&gt;&lt;span class="nb"&gt;time&lt;/span&gt;: Mon, 13 Jun 2022 04:47:45 +0000
detail:
DataSourceEc2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;cloud-init status&lt;/code&gt; command also has another great flag: &lt;code&gt;--wait&lt;/code&gt;. This flag waits until Cloud Init has completed before returning. It's helpful if you are using AWS CodeDeploy or a configuration management system that phones home on startup but &lt;em&gt;isn't&lt;/em&gt; tied to Cloud Init for some reason. There is a very real chance that CodeDeploy may start before Cloud Init is finished, which means any configuration, binaries, or environment variables set by your user data script would not be available yet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;ec2-user@ip-10-0-0-41 ~]&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;cloud-init status &lt;span class="nt"&gt;--wait&lt;/span&gt;
..................
status: &lt;span class="k"&gt;done&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another useful command is &lt;code&gt;cloud-init query&lt;/code&gt; which references the cached instance metadata that was captured by Cloud Init:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;ec2-user@ip-10-0-0-41 ~]&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;cloud-init query cloud_name
aws
&lt;span class="o"&gt;[&lt;/span&gt;ec2-user@ip-10-0-0-41 ~]&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;cloud-init query availability_zone
us-west-2b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Wrap Up
&lt;/h3&gt;

&lt;p&gt;Knowing more about Cloud Init and how to properly leverage it can be extremely advantageous to multiple facets of an org. It can make Cloud Engineers and System Administrators' lives easier by reducing the need for configuration tooling and AMI rotations. It can also speed up application deployments.&lt;/p&gt;

&lt;p&gt;The documentation for Cloud Init is pretty in-depth and a valuable resource. It has great details on many of the cloud providers' implementations of the metadata service. The documentation also has information about creating custom modules that can be injected and executed just like &lt;code&gt;runcmd&lt;/code&gt; or &lt;code&gt;mounts&lt;/code&gt;. &lt;/p&gt;




&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://cloud-init.io/"&gt;Cloud Init&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloudinit.readthedocs.io/en/latest/"&gt;ReadTheDocs - Cloud Init&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html"&gt;AWS - EC2 User Data and Cloud Init&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.hashicorp.com/resources/cloudinit-the-good-parts"&gt;Hashicorp Talk -  Cloud Init: The Good Parts&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>security</category>
      <category>devops</category>
    </item>
    <item>
      <title>Reducing KMS Costs with S3 Bucket Encryption</title>
      <dc:creator>Carlo Mencarelli</dc:creator>
      <pubDate>Mon, 18 Apr 2022 13:29:33 +0000</pubDate>
      <link>https://dev.to/aws-builders/reducing-kms-costs-with-s3-bucket-encryption-2hi8</link>
      <guid>https://dev.to/aws-builders/reducing-kms-costs-with-s3-bucket-encryption-2hi8</guid>
      <description>&lt;h3&gt;
  
  
  A tale of why keeping up in the industry is critical
&lt;/h3&gt;

&lt;h3&gt;
  
  
  TLDR
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;When encrypting at scale, customer-managed keys in KMS can be very costly. Using bucket keys will reduce the calls to KMS, which will reduce costs.&lt;/li&gt;
&lt;li&gt;The implementation of bucket keys brought the cost down from $1,500 per day in KMS requests alone to $300 per day (a savings of roughly $36,000 per month!)&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;I want to share a story that might help others from a technical perspective, but that also carries a broader point: even when everything is running, there is still reason to continually improve existing infrastructure:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You get an alert from your finance team. Last month's AWS bill had an unexpected spike in cost. You pull up cost explorer and immediately see the increase. When you dig in, you see a spike in the expenses to KMS. It crept up from the average of a few hundred to a few thousand dollars per day.&lt;/p&gt;

&lt;p&gt;It's not the end of the world, but you need to figure it out. You'll investigate it when you have time between different projects.&lt;/p&gt;

&lt;p&gt;A few days later, you get a message in Slack: "We're hitting our KMS limit in production."&lt;/p&gt;

&lt;p&gt;If you've never looked, the default rate limit for symmetric cryptographic requests in KMS is 50,000 requests/second.&lt;/p&gt;

&lt;p&gt;Your cost issue just became a production incident though.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The incident itself isn't too significant, but the discovery of what was happening was. &lt;/p&gt;




&lt;h3&gt;
  
  
  What was making 50,000 KMS requests/second?
&lt;/h3&gt;

&lt;p&gt;The KMS monitoring isn't great. CloudWatch doesn't even split out the operations, so you can't get the rate of just &lt;code&gt;kms:decrypt&lt;/code&gt; calls. The &lt;a href="https://docs.aws.amazon.com/kms/latest/developerguide/monitoring-overview.html" rel="noopener noreferrer"&gt;monitoring overview page of the KMS developer guide&lt;/a&gt; briefly hints at this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AWS KMS API activity for &lt;em&gt;data plane&lt;/em&gt; operations. These are &lt;a href="https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#cryptographic-operations" rel="noopener noreferrer"&gt;cryptographic operations&lt;/a&gt; that use a KMS key, such as &lt;a href="https://docs.aws.amazon.com/kms/latest/APIReference/API_Decrypt.html" rel="noopener noreferrer"&gt;Decrypt&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/kms/latest/APIReference/API_Encrypt.html" rel="noopener noreferrer"&gt;Encrypt&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/kms/latest/APIReference/API_ReEncrypt.html" rel="noopener noreferrer"&gt;ReEncrypt&lt;/a&gt;, and &lt;a href="https://docs.aws.amazon.com/kms/latest/APIReference/API_GenerateDataKey.html" rel="noopener noreferrer"&gt;GenerateDataKey&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So when looking at CloudWatch, you see a single &lt;code&gt;CryptographicOperationsSymmetric&lt;/code&gt; metric under &lt;code&gt;All &amp;gt; Usage &amp;gt; By AWS Resource&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Fortunately, we had a timeframe: the spikes were happening every hour, on the hour. With nothing standing out in the applications, we turned to CloudTrail logs. The logs showed an abnormally large number of requests from our Quicksight service role to KMS, with the encryption context being our data warehouse.&lt;/p&gt;

&lt;p&gt;That presented a wrinkle: we needed to maintain the freshness of our SPICE dataset, but we weren't given many options. The team mitigated the issue by spreading out the refreshes so we wouldn't hit the rate limit anymore. Still, we were seeing hundreds of millions of requests to KMS daily, which is expensive based on &lt;a href="https://aws.amazon.com/kms/pricing/" rel="noopener noreferrer"&gt;current prices&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;500,000,000 requests/day * $0.03 per 10,000/requests = $1,500 per day
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
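&lt;p&gt;That back-of-the-envelope math is easy to script if you want to sanity-check your own bill (working in cents to keep the shell arithmetic integral):&lt;/p&gt;

```shell
# Daily KMS request cost at $0.03 per 10,000 requests
requests_per_day=500000000
price_per_10k_cents=3
cost_cents=$(( requests_per_day / 10000 * price_per_10k_cents ))
echo "\$$(( cost_cents / 100 )) per day"
# prints: $1500 per day
```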






&lt;h3&gt;
  
  
  Enter S3 Bucket Keys
&lt;/h3&gt;

&lt;p&gt;In December of 2020, Amazon introduced &lt;a href="https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-s3-bucket-keys-reduce-the-costs-of-server-side-encryption-with-aws-key-management-service-sse-kms/" rel="noopener noreferrer"&gt;S3 Bucket Keys&lt;/a&gt;, advertised as a method to reduce requests to KMS (and the associated costs). &lt;/p&gt;

&lt;p&gt;The implementation is simple. In the AWS console, it's a single check box: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3q6ornx0t5rs8iuhiqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3q6ornx0t5rs8iuhiqq.png" alt="AWS S3 Encryption configuration screen showing Bucket Key disabled"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Terraform, it's equally simple using the &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_server_side_encryption_configuration" rel="noopener noreferrer"&gt;aws_s3_bucket_server_side_encryption_configuration&lt;/a&gt; resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket_server_side_encryption_configuration"&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mybucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bucket&lt;/span&gt;

  &lt;span class="nx"&gt;rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;apply_server_side_encryption_by_default&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;kms_master_key_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_kms_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mykey&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
      &lt;span class="nx"&gt;sse_algorithm&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"aws:kms"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;bucket_key_enabled&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above solved the problem for new objects going forward; however, our data warehouse fetched the entire data set on each refresh, and the bucket contained terabytes of data that had been encrypted before the bucket key existed. That meant we were still seeing millions of KMS requests. AWS calls this out in the &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-key.html" rel="noopener noreferrer"&gt;S3 Bucket Key documentation&lt;/a&gt;: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When you configure an S3 Bucket Key, objects that are already in the bucket do not use the S3 Bucket Key. To configure an S3 Bucket Key for existing objects, you can use a COPY operation&lt;/p&gt;
&lt;/blockquote&gt;
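&lt;p&gt;For smaller buckets, that COPY can be as simple as an in-place recursive copy with the AWS CLI. This is only a sketch (the bucket name and key variable are placeholders); for terabytes of data, an S3 Batch Operation is the better tool:&lt;/p&gt;

```shell
# Rewrite each object in place so existing data starts using the bucket key
# (placeholder bucket name and key ARN; requires AWS credentials)
aws s3 cp "s3://my-warehouse-bucket/" "s3://my-warehouse-bucket/" \
  --recursive \
  --sse aws:kms \
  --sse-kms-key-id "$KMS_KEY_ARN"
```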

&lt;p&gt;So we used an S3 Batch Operation to re-encrypt the data within a few hours. The effect was noticeable immediately. Looking back, it's even more distinct:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vd9063ioe9si1vzqlec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vd9063ioe9si1vzqlec.png" alt="An AWS Cloudwatch resource graph showing the number of cryptographic operations spiking to millions of requests per minute before smoothing out after the fix."&gt;&lt;/a&gt; &lt;/p&gt;




&lt;h3&gt;
  
  
  The Results
&lt;/h3&gt;

&lt;p&gt;With essentially a one-line code change and a straightforward batch operation, we reduced our KMS requests from, on average, 500M requests per day to 38M requests per day. The bill showed similar results, $1,500 per day back to $300 per day in KMS requests:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5rz6lcvi5yleglsurtz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5rz6lcvi5yleglsurtz.png" alt="An AWS Cost Explorer past showing KMS costs over $1,000 per day before dropping to $300 per day."&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  This Story is Never Finished
&lt;/h3&gt;

&lt;p&gt;The production incident was resolved and accounting would be happy, but the story isn't finished.&lt;/p&gt;

&lt;p&gt;The story is never finished.&lt;/p&gt;

&lt;p&gt;We live in a cycle of continuous improvement.&lt;/p&gt;

&lt;p&gt;As long as Amazon, Microsoft, and the rest of the industry continue to iterate and improve, it will fall onto the engineering staff of companies to keep their knowledge relevant and constantly improve. Our implementation and feedback to the industry feed the next big release.&lt;/p&gt;

&lt;p&gt;It's easy to fall into feature factory mode, where you only track new code and features going into the environment. It's essential to take a step back sometimes and consider the larger picture. How has the landscape changed since you created that Cloudfront distribution? What new features can be leveraged to improve engineering productivity and customer experience? It falls to engineering leadership to guide and mentor the rest of the staff in these thoughts and approaches. It won't only benefit the company but also you, your engineers, and others with whom you share the learnings.&lt;/p&gt;




&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-key.html" rel="noopener noreferrer"&gt;AWS S3 - Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-s3-bucket-keys-reduce-the-costs-of-server-side-encryption-with-aws-key-management-service-sse-kms/" rel="noopener noreferrer"&gt;Amazon S3 Bucket Keys reduce the costs of Server-Side Encryption with AWS Key Management Service (SSE-KMS) &lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
      <category>cloudskills</category>
    </item>
    <item>
      <title>Multi-Region S3 Strategies</title>
      <dc:creator>Carlo Mencarelli</dc:creator>
      <pubDate>Fri, 01 Apr 2022 13:51:13 +0000</pubDate>
      <link>https://dev.to/aws-builders/multi-region-s3-strategies-4inh</link>
      <guid>https://dev.to/aws-builders/multi-region-s3-strategies-4inh</guid>
      <description>&lt;h2&gt;
  
  
  TLDR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Whether starting a fresh project or enhancing a project that has a lot of mileage, the data layer should be one of the first ones tackled with respect to multi-region and disaster recovery.&lt;/li&gt;
&lt;li&gt;Greenfield projects are straightforward to configure and manage. It can be as few as just 2 or 3 more Terraform resources. For more flexibility at the cost of simplicity, we can add bi-directional replication and multi-region access points.&lt;/li&gt;
&lt;li&gt;Existing projects are a little more complicated but can be handled without too much more work.&lt;/li&gt;
&lt;li&gt;Check out the sample repos here:

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/mencarellic/terraform-aws-s3-multi-region/tree/main/day-0-dual-deployment"&gt;mencarellic/terraform-aws-s3-multi-region/day-0-dual-deployment&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mencarellic/terraform-aws-s3-multi-region/tree/main/day-0-bucket-replication"&gt;mencarellic/terraform-aws-s3-multi-region/day-0-bucket-replication&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mencarellic/terraform-aws-s3-multi-region/tree/main/day-1-existing-replication"&gt;mencarellic/terraform-aws-s3-multi-region/day-1-existing-replication&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setting the Stage
&lt;/h2&gt;

&lt;p&gt;Whether you are starting a project greenfield or working with a bucket that is celebrating its thirteenth birthday, you should consider what your data layer looks like through a multi-region and disaster recovery lens. Of course, S3 touts its durability (99.999999999% — 11 9’s!), and through its multiple availability zone design there is very high availability; however, we have certainly seen regional S3 outages.&lt;/p&gt;

&lt;p&gt;In this article, I’ll explore what implementing multi-region S3 looks like for both existing and new buckets.&lt;/p&gt;

&lt;h2&gt;
  
  
  New Buckets
&lt;/h2&gt;

&lt;p&gt;Starting with the easy scenario first:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Assume you are at a company based out of a single region, us-east-2. The company has been successful in a single region and has weathered most major Amazon outages with minimal reputational damage. The company is starting up a new web project, and you’re tasked with creating the S3 buckets for the static assets. The catch is that your manager asked you to design it with disaster recovery in mind in case us-east-2 ever has an outage.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;The Approaches&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At the start, the solution seems simple: create two buckets and just deploy to both of them.&lt;/p&gt;

&lt;p&gt;In reality, that’s not exactly true. There’s a decision that needs to be made before any buckets are created: do you augment your deployment pipeline to deploy to both buckets, or do you leverage AWS-native S3 bucket replication? There are definitely benefits and drawbacks to each approach. For example, what if the deployment pipeline breaks halfway through uploading the change to the bucket in the DR region, or what if a manual change is made in us-east-2 and replicated to us-west-2? I won’t discuss the pros and cons too much, since the choice ultimately falls to the design paradigms of you and the business.&lt;/p&gt;

&lt;p&gt;Setting the stage for the examples below, each one will have a pair of versioned, encrypted buckets that have public access disabled. Additionally, a KMS key for each region will be created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploying to Multiple Buckets with Your CI/CD Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vt6xqsqk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AXMQsn9vwynW7Jv-P" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vt6xqsqk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AXMQsn9vwynW7Jv-P" alt="A flow diagram with a server labeled CI/CD on the left with arrows to two boxes labeled us-west-2 and us-east-2 with an S3 bucket inside each. The S3 buckets then have arrows pointing to a single box labeled “App/CDN/etc”" width="880" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leveraging your CI/CD pipeline to push to both buckets simultaneously.&lt;/p&gt;

&lt;p&gt;This is probably the more straightforward system to set up. In most deployment systems, you can add additional targets to deploy your artifact. The two buckets in AWS are treated as separate and do not interact with each other in any way.&lt;/p&gt;

&lt;p&gt;The Terraform is equally simple. It’s only two buckets. You can see the sample here: &lt;a href="https://github.com/mencarellic/terraform-aws-s3-multi-region/tree/main/day-0-dual-deployment"&gt;mencarellic/terraform-aws-s3-multi-region/day-0-dual-deployment&lt;/a&gt;&lt;/p&gt;
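&lt;p&gt;As a hypothetical illustration, the pipeline step could be as simple as syncing the build output to each bucket in turn (the bucket names and &lt;code&gt;./dist&lt;/code&gt; path here are made up):&lt;/p&gt;

```shell
# Push the same artifacts to both regional buckets so neither region drifts
for bucket in app-assets-us-east-2 app-assets-us-west-2; do
  aws s3 sync ./dist "s3://${bucket}/" --delete
done
```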

&lt;p&gt;&lt;strong&gt;Using S3 MRAP and Bi-Directional Replication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xFb8RCxX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AfvmOkrKYJ-sMYu5T" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xFb8RCxX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2AfvmOkrKYJ-sMYu5T" alt="img" width="880" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Letting AWS do the replication across region for you.&lt;/p&gt;

&lt;p&gt;This strategy is a little more complex, but I think it has the added benefit of not needing your CI/CD tool to implement data resiliency. Let AWS and S3 worry about that; after all, they &lt;em&gt;are&lt;/em&gt; the professionals at it.&lt;/p&gt;

&lt;p&gt;Your artifact building pipeline pushes the artifact to two (or more) buckets that are tied together with a &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiRegionAccessPointRequests.html"&gt;multi-region access point&lt;/a&gt;. The buckets use bi-directional replication, and from there, your deployment tool uses the same access point to deploy the code. This method has the additional safety of continued operations even if a region’s S3 service is down.&lt;/p&gt;

&lt;p&gt;The Terraform for this is a little more complex, mainly due to the inclusion of the &lt;code&gt;aws_s3control_multi_region_access_point&lt;/code&gt; resource and the replication configuration to support bi-directional replication of the buckets.&lt;/p&gt;

&lt;p&gt;First, the multi-region access point resource. This is a straightforward resource, just probably not common yet, since it has a pretty narrow use case and is relatively new (re:Invent 2021). You can find the Terraform docs for the resource here: &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3control_multi_region_access_point"&gt;aws_s3control_multi_region_access_point&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3control_multi_region_access_point" "app-artifact" {
  details {
    name = "app-artifact"

    region {
      bucket = aws_s3_bucket.app-artifact-east-2.id
    }

    region {
      bucket = aws_s3_bucket.app-artifact-west-2.id
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, the replication configuration. The configuration exists in both buckets since we’re doing bi-directional replication. The replication also needs an IAM role in order to occur.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: In versions before the &lt;a href="https://github.com/hashicorp/terraform-provider-aws/issues/20433"&gt;4.X refactor of the AWS provider&lt;/a&gt;, this was much more difficult to achieve. To apply bi-directional replication, you would either have to create the buckets first and then add the configuration, or accept that the first Terraform plan/apply would fail, since there is a race condition: each bucket’s replication configuration requires the destination bucket to already exist before it can complete.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The configuration for the replication resource is straightforward as well (found here: &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_replication_configuration"&gt;aws_s3_bucket_replication_configuration&lt;/a&gt;). The catches come from the IAM policy and the KMS encryption.&lt;/p&gt;
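&lt;p&gt;For reference, one direction of that replication might look roughly like the following (mirror it for the other bucket). The resource names here are illustrative, and you should verify the exact schema against the provider docs:&lt;/p&gt;

```hcl
resource "aws_s3_bucket_replication_configuration" "east-to-west" {
  role   = aws_iam_role.replication.arn
  bucket = aws_s3_bucket.app-artifact-east-2.id

  rule {
    id     = "app-artifact-east-to-west"
    status = "Enabled"

    # Needed so objects encrypted with SSE-KMS are replicated
    source_selection_criteria {
      sse_kms_encrypted_objects {
        status = "Enabled"
      }
    }

    destination {
      bucket = aws_s3_bucket.app-artifact-west-2.arn
      encryption_configuration {
        replica_kms_key_id = aws_kms_key.bucket-encryption-west-2.arn
      }
    }
  }
}
```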

&lt;p&gt;In your IAM policy, you’ll want to ensure you’re granting permission for the S3 actions for both buckets and their child objects:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;statement {
    sid = "CrossRegionReplication"
    actions = [
      "s3:ListBucket",
      "s3:GetReplicationConfiguration",
      "s3:GetObjectVersionForReplication",
      "s3:GetObjectVersionAcl",
      "s3:GetObjectVersionTagging",
      "s3:GetObjectRetention",
      "s3:GetObjectLegalHold",
      "s3:ReplicateObject",
      "s3:ReplicateDelete",
      "s3:ReplicateTags",
      "s3:GetObjectVersionTagging",
      "s3:ObjectOwnerOverrideToBucketOwner"
    ]
    resources = [
      aws_s3_bucket.app-artifact-east-2.arn,
      aws_s3_bucket.app-artifact-west-2.arn,
      "${aws_s3_bucket.app-artifact-east-2.arn}/*",
      "${aws_s3_bucket.app-artifact-west-2.arn}/*",
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’re doing KMS/SSE, you can enforce it with the following conditions in the above statement:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    condition {
      test     = "StringLikeIfExists"
      variable = "s3:x-amz-server-side-encryption"
      values   = ["aws:kms", "AES256"]
    }
    condition {
      test     = "StringLikeIfExists"
      variable = "s3:x-amz-server-side-encryption-aws-kms-key-id"
      values = [
        aws_kms_key.bucket-encryption-west-2.arn,
        aws_kms_key.bucket-encryption-east-2.arn
      ]
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which, speaking of encryption, you need to remember to include &lt;code&gt;kms:Encrypt&lt;/code&gt; and &lt;code&gt;kms:Decrypt&lt;/code&gt; permissions in your IAM policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;statement {
    sid = "CrossRegionEncryption"
    actions = [
      "kms:Encrypt",
      "kms:Decrypt"
    ]
    resources = [
      aws_kms_key.bucket-encryption-west-2.arn,
      aws_kms_key.bucket-encryption-east-2.arn
    ]
    condition {
      test     = "StringLike"
      variable = "kms:ViaService"
      values = [
        "s3.${aws_s3_bucket.app-artifact-east-2.region}.amazonaws.com",
        "s3.${aws_s3_bucket.app-artifact-west-2.region}.amazonaws.com"
      ]
    }
    condition {
      test     = "StringLike"
      variable = "kms:EncryptionContext:aws:s3:arn"
      values = [
        "${aws_s3_bucket.app-artifact-east-2.arn}/*",
        "${aws_s3_bucket.app-artifact-west-2.arn}/*"
      ]
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2bEMOdK2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2A9jYVJ8X_OXSTPU405Kg2OA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2bEMOdK2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2A9jYVJ8X_OXSTPU405Kg2OA.png" alt="img" width="880" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS S3 MRAP showing two-way replication between our application buckets. Success!&lt;/p&gt;

&lt;p&gt;The full set of Terraform code can be found at &lt;a href="https://github.com/mencarellic/terraform-aws-s3-multi-region/tree/main/day-0-bucket-replication"&gt;mencarellic/terraform-aws-s3-multi-region/day-0-bucket-replication&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Existing Buckets
&lt;/h2&gt;

&lt;p&gt;Now the interesting scenario:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The new application you helped build to be multi-region from day 0 has taken off, and now you’ve been asked to help add disaster recovery to the legacy apps. Sounds easy, except the existing legacy bucket has terabytes of data that would need to be replicated.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Realistically, an application might not have terabytes of data that are absolutely required for complete disaster recovery, but who wants to sift through countless files in a bucket almost as old as S3 itself to find the relevant ones?&lt;/p&gt;

&lt;p&gt;Your existing application’s infrastructure looks pretty standard: S3 bucket, public access block, KMS encryption. The first thing we need to do is stand up the secondary region’s bucket. We should also enable replication so that anything written from today forward is replicated. This is done exactly as above: with an &lt;code&gt;aws_s3_bucket_replication_configuration&lt;/code&gt; resource.&lt;/p&gt;
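&lt;p&gt;As a sketch (the bucket and role names here are hypothetical, and versioning must already be enabled on both buckets), the replication configuration for the legacy bucket might look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical names: legacy-east-2 is the existing bucket,
# legacy-west-2 is the new secondary-region bucket.
resource "aws_s3_bucket_replication_configuration" "legacy" {
  bucket = aws_s3_bucket.legacy-east-2.id
  role   = aws_iam_role.replication.arn

  rule {
    id     = "legacy-to-west-2"
    status = "Enabled"

    destination {
      bucket = aws_s3_bucket.legacy-west-2.arn
    }
  }

  provider = aws.east-2
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;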

&lt;p&gt;Next, you’ll want to get an inventory of everything in the source bucket. We’ll do this with the &lt;code&gt;aws_s3_bucket_inventory&lt;/code&gt; resource (documentation here: &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_inventory"&gt;aws_s3_bucket_inventory&lt;/a&gt;). You’ll notice that the destination is actually a whole separate bucket. When taking on this endeavor, you’ll probably want all of your inventories to go to the same place so you know where they all are.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket_inventory" "app-artifact-east-2" {
  bucket = aws_s3_bucket.app-artifact-east-2.id
  name                     = "EntireBucketDaily"
  included_object_versions = "Current"

  schedule {
    frequency = "Daily"
  }

  destination {
    bucket {
      format     = "CSV"
      bucket_arn = aws_s3_bucket.inventory.arn
    }
  }

  provider = aws.east-2
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you have an inventory file, you can use an S3 batch operation to copy the files listed in it from the legacy bucket to the new bucket.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: The COPY batch operation is new as of February 8, 2022. You can read more about it in the AWS News post here: &lt;a href="https://aws.amazon.com/blogs/aws/new-replicate-existing-objects-with-amazon-s3-batch-replication/"&gt;NEW — Replicate Existing Objects with Amazon S3 Batch Replication&lt;/a&gt;. Before this release, there were two options, and neither was very good: manually copy the files with a script that takes time and costs money, or open a support ticket and hope AWS can get to it. When my team opened such a request before batch job support existed, we were told it would be several weeks before they could get to it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Regrettably, we can’t configure batch operations via Terraform just yet. There is an open issue in the Terraform provider though: &lt;a href="https://github.com/hashicorp/terraform-provider-aws/issues/18538"&gt;Feature Request: Support for S3 Batch Operations&lt;/a&gt;. We can, however, configure the IAM role for the batch operation.&lt;/p&gt;

&lt;p&gt;The IAM role is pretty easy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An assume role policy for &lt;code&gt;batchoperations.s3.amazonaws.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;s3:GetObject*&lt;/code&gt; for the source bucket&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;s3:PutObject*&lt;/code&gt; for the destination bucket&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;s3:GetObject*&lt;/code&gt; for the inventory bucket&lt;/li&gt;
&lt;li&gt;If you are configuring a job report, you’ll want to allow &lt;code&gt;s3:PutObject&lt;/code&gt; for an S3 destination too. I use the inventory bucket for this&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kms:Encrypt&lt;/code&gt; and &lt;code&gt;kms:Decrypt&lt;/code&gt; for encryption and decryption if there is encryption enabled&lt;/li&gt;
&lt;/ul&gt;
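&lt;p&gt;Putting those bullets together, a minimal sketch of the role might look like this (the bucket and role names are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_iam_policy_document" "batch-assume" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["batchoperations.s3.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "batch-copy" {
  name               = "s3-batch-copy"
  assume_role_policy = data.aws_iam_policy_document.batch-assume.json
}

data "aws_iam_policy_document" "batch-copy" {
  statement {
    sid     = "ReadSourceAndInventory"
    actions = ["s3:GetObject", "s3:GetObjectVersion", "s3:GetObjectTagging"]
    resources = [
      "${aws_s3_bucket.legacy-east-2.arn}/*",
      "${aws_s3_bucket.inventory.arn}/*",
    ]
  }

  statement {
    # PutObject covers both the copied objects and the job report
    sid     = "WriteDestinationAndReport"
    actions = ["s3:PutObject"]
    resources = [
      "${aws_s3_bucket.legacy-west-2.arn}/*",
      "${aws_s3_bucket.inventory.arn}/*",
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;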

&lt;p&gt;After the buckets and role are correctly configured, you’ll want to navigate to the S3 Batch Operations console.&lt;/p&gt;

&lt;p&gt;If desired, you could change tags, metadata, or storage class. But if we use this for an active/active configuration, we’ll just leave everything as the default value.&lt;/p&gt;

&lt;p&gt;After the job is created, it runs through a series of checks and then waits for your confirmation before executing. After confirmation, the job starts, and you can monitor it from the job status page. It runs pretty quickly: in my sample, I copied 1,000 objects in 14 seconds. Admittedly, they were essentially empty objects, but that is still pretty fast.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_isiH7bz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AK-305oVGrxBdnkiwL8kOSA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_isiH7bz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AK-305oVGrxBdnkiwL8kOSA.png" alt="The batch job status page with the overview and status groups." width="880" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The job status page with an overview and high-level status&lt;/p&gt;

&lt;p&gt;As for other ways to replicate existing buckets, the alternative mentioned above is to set up replication between the source and destination buckets, then use a Lambda function or batch job that touches each file to trigger replication from the original bucket to the new one. You could do the same from an EC2 instance or even a local laptop, though those options are less recommended.&lt;/p&gt;

&lt;p&gt;There are downsides to this approach, mainly that you’re recreating something that already exists natively in AWS, and I’m sure you have better things to do than reinvent the wheel.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/language/modules"&gt;Terraform Docs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>sre</category>
      <category>terraform</category>
    </item>
    <item>
      <title>The Case for the Terraform Seed Workspace</title>
      <dc:creator>Carlo Mencarelli</dc:creator>
      <pubDate>Mon, 21 Mar 2022 18:30:35 +0000</pubDate>
      <link>https://dev.to/aws-builders/the-case-for-the-terraform-seed-workspace-46jn</link>
      <guid>https://dev.to/aws-builders/the-case-for-the-terraform-seed-workspace-46jn</guid>
      <description>&lt;h2&gt;
  
  
  Why not use Terraform to manage your Terraform?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  TLDR
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Terraform collaboration tools such as &lt;a href="https://cloud.hashicorp.com/products/terraform"&gt;Terraform Cloud&lt;/a&gt;, &lt;a href="https://scalr.io/"&gt;Scalr&lt;/a&gt;, and &lt;a href="https://spacelift.io/"&gt;SpaceLift&lt;/a&gt; offer the ability to be managed by Terraform.&lt;/li&gt;
&lt;li&gt;Managing your infrastructure automation tool, while more overhead, brings in many benefits ranging from consistency to security.&lt;/li&gt;
&lt;li&gt;Terraform Cloud Demo repo: &lt;a href="https://github.com/mencarellic/terraform-cloud-workspace"&gt;mencarellic/terraform-cloud-workspace&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Scalr Demo repo: &lt;a href="https://github.com/mencarellic/terraform-scalr-workspace"&gt;mencarellic/terraform-scalr-workspace&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Spacelift Demo repo: &lt;a href="https://github.com/mencarellic/terraform-spacelift-workspace"&gt;mencarellic/terraform-spacelift-workspace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;When working with a production Terraform configuration, there will inevitably be a time when you outgrow the single workspace pattern. Whether you split workspaces by account, region, service, or something else, soon you'll find that you need to break your Terraform configuration apart.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;An aside on the term workspace&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The word workspace in this post follows the Terraform Cloud and Scalr definition instead of the Terraform CLI definition. Hashicorp has a blurb here: &lt;a href="https://www.terraform.io/cloud-docs/workspaces#terraform-cloud-vs-terraform-cli-workspaces"&gt;Terraform Cloud vs. Terraform CLI Workspaces&lt;/a&gt;, but it boils down to this:&lt;/p&gt;

&lt;p&gt;Terraform CLI workspaces are infrastructure as code split into different directories and have different state files. Terraform Cloud and Scalr workspaces are similar in that they have individual state files; however, these tools also provide more features like specific role-based access control and variable configurations.&lt;/p&gt;

&lt;p&gt;Also worth noting is that in Spacelift, the equivalent of a workspace is a stack.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IGI84Tcp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l9cglppfsckoj0y5p1j4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IGI84Tcp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l9cglppfsckoj0y5p1j4.png" alt="A snippet of a Terraform Cloud metric side bar showing 9,595 resources and an average apply time of 17 minutes." width="800" height="754"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Terraform configuration I work with daily consists of 31 active workspaces and 36,747 resources as of this writing. Two of those workspaces have over 9,000 resources apiece, and as you can imagine, they take quite a bit of time to run. Logically breaking these workspaces into smaller chunks will speed up those Terraform plans and applies. It also helps unlock safe developer inclusion, since I can give an application team access to a workspace that contains only their infrastructure. Suddenly, one of my 9,000-resource workspaces turns into 12 workspaces of 750 resources each, with the added benefit that my engineering teams can iterate faster and independently of each other.&lt;/p&gt;

&lt;p&gt;Of course, if I'm managing thirty or even fifty workspaces, I probably have repeated variables across them. The seed workspace lets me manage all of these variables (including sensitive ones) via code. I can also manage teams, tags, VCS configuration, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p0DTyv6S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dv8byiw6u2pkvnjxsnru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p0DTyv6S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dv8byiw6u2pkvnjxsnru.png" alt="Screenshot of Terraform Cloud showing the Seed workspace and seven workspaces created with Terraform." width="800" height="684"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The pattern lets me leverage Terraform patterns to maintain consistency and state on a critical layer of my tooling. The three demo repos I created are all very straightforward and should be easy to comprehend. The primary difficulty is that these providers aren't as thoroughly tested as the major Terraform providers like AWS or AzureRM, so the documentation has quirks. For example, in the Terraform Cloud demo I originally created teams for a more fleshed-out sample, but the &lt;a href="https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/team"&gt;provider documentation&lt;/a&gt; for the &lt;code&gt;tfe_team&lt;/code&gt; resource doesn't mention that the capability isn't available in the tier I was using.&lt;/p&gt;

&lt;p&gt;In my Terraform Cloud Workspace demo, you can see I define four key values. The values for the keys ultimately live in Terraform Cloud's variable section which I can lock down using the built-in access controls. But since those values are a part of my seed workspace, I can assign them to the workspaces I create programmatically which makes rotation a breeze as well.&lt;/p&gt;

&lt;p&gt;The fifth variable I define is actually a global Terraform version that I apply to each of my workspaces. This is just an example. I could do the same with branch name, SSH key, etc. This pattern is all about keeping consistency in a repeatable and secure method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"terraform-version"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Global Terraform version to use"&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.1.4"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"azure-dev-key"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Development key for Azure"&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"azure-prod-key"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Production key for Azure"&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"aws-dev-key"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Development key for AWS"&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"aws-prod-key"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Production key for AWS"&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
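&lt;p&gt;With the variables declared, wiring a value into every managed workspace is one &lt;code&gt;tfe_variable&lt;/code&gt; resource per workspace. A rough sketch, assuming the seed creates its workspaces via a hypothetical &lt;code&gt;for_each&lt;/code&gt; map named &lt;code&gt;tfe_workspace.managed&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Assumes a seed resource like:
#   resource "tfe_workspace" "managed" { for_each = ... }
resource "tfe_variable" "aws-dev-key" {
  for_each = tfe_workspace.managed

  key          = "AWS_DEV_KEY"
  value        = var.aws-dev-key
  category     = "env"
  sensitive    = true
  workspace_id = each.value.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;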



&lt;p&gt;Managing variables for your Terraform tooling with Terraform ends up carrying a lot of overhead. Hashicorp has introduced something named &lt;a href="https://www.terraform.io/cloud-docs/workspaces/variables#scope"&gt;variable sets&lt;/a&gt; to Terraform Cloud. In Scalr, you can do something similar with shell variables in your account dashboard or in your environment configuration. It's a different way of doing the same thing and works just as well as what I did above.&lt;/p&gt;
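&lt;p&gt;For comparison, a rough sketch of a variable set using the &lt;code&gt;tfe&lt;/code&gt; provider might look like the following (the set and organization names are hypothetical; check that your provider version supports these resources):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "tfe_variable_set" "global-keys" {
  name         = "global-keys"
  organization = "my-org"   # hypothetical organization name
  global       = true       # attach to every workspace in the org
}

resource "tfe_variable" "aws-prod-key" {
  key             = "AWS_PROD_KEY"
  value           = var.aws-prod-key
  category        = "env"
  sensitive       = true
  variable_set_id = tfe_variable_set.global-keys.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;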




&lt;h3&gt;
  
  
  The Cost
&lt;/h3&gt;

&lt;p&gt;There is a downside to this pattern. If you are just getting started or only need a fast prototype or change, why would you want to go through opening up a PR against your seed workspace and running it through Terraform? It's certainly much faster to create a new workspace and wire it up with the existing VCS connection. The only answer I can provide is consistency. If you are a team of yourself or just two or three, this pattern makes less sense. But once the use of Terraform is growing and you begin to have tiers of access and experience levels with your Terraform usage, perhaps this pattern makes sense.&lt;/p&gt;




&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/cloud-docs"&gt;Hashicorp Terraform Cloud Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.toScalr%20Documentation"&gt;Scalr Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.toSpacelift%20Documentation"&gt;Spacelift Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>terraform</category>
      <category>cloud</category>
      <category>productivity</category>
      <category>iac</category>
    </item>
    <item>
      <title>Terraform Module Testing with LocalStack and GitHub Actions</title>
      <dc:creator>Carlo Mencarelli</dc:creator>
      <pubDate>Mon, 21 Mar 2022 18:09:44 +0000</pubDate>
      <link>https://dev.to/aws-builders/terraform-module-testing-with-localstack-and-github-actions-7i8</link>
      <guid>https://dev.to/aws-builders/terraform-module-testing-with-localstack-and-github-actions-7i8</guid>
      <description>&lt;p&gt;Testing your infrastructure as code is just as important as testing your application code. And it doesn’t have to be a nightmare!&lt;/p&gt;

&lt;h3&gt;
  
  
  TLDR
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;With a combination of LocalStack and GitHub Actions, you can do easy and effective testing for most of your Terraform code&lt;/li&gt;
&lt;li&gt;This method reduces external dependencies on test libraries and the need to know best practices for languages like Ruby and Go.&lt;/li&gt;
&lt;li&gt;Given this is essentially a mocked AWS service, there are limitations in the service offering.&lt;/li&gt;
&lt;li&gt;See demo repo here: &lt;a href="https://github.com/mencarellic/terraform-aws-module"&gt;mencarellic/terraform-aws-module&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;When I first saw production Terraform code, it was a single repository split into three directories: one for the production account, one for a development account, and one for the modules. Fast forward a few years, and I’m now with a new company with a similar format, except we use two clouds and have 18 different environments across those clouds.&lt;/p&gt;

&lt;p&gt;When I checked earlier, our modules sub-directory was 38,312 lines across 57 modules. Needless to say, we’re in the midst of moving from a single repo to a repo per module and leveraging Terraform Cloud’s Private Registry. Now the question is: when I push changes to a module, how do I ensure that module will validate, plan, and apply successfully without too much interruption to the release workflow? Validate and plan are pretty straightforward; I can test those locally with the CLI. Applies are trickier, and no one wants to debug Terraform failures while the rest of the team is trying to publish their changes.&lt;/p&gt;

&lt;p&gt;There are plenty of patterns, such as using sandbox or testing accounts and Terraform test frameworks like &lt;a href="https://github.com/newcontext-oss/kitchen-terraform" rel="noopener noreferrer"&gt;Kitchen&lt;/a&gt; or &lt;a href="https://terratest.gruntwork.io/" rel="noopener noreferrer"&gt;Terratest&lt;/a&gt;. But I don’t want to introduce another account and I definitely don’t want to introduce more code to manage. That’s where &lt;a href="https://localstack.cloud/" rel="noopener noreferrer"&gt;LocalStack&lt;/a&gt; fits in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynjlkeh6vp0w8ejk6n3s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynjlkeh6vp0w8ejk6n3s.png" alt="A match made in the cloud"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  LocalStack
&lt;/h3&gt;

&lt;p&gt;LocalStack self-describes in their &lt;a href="https://github.com/localstack/localstack" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; as “a fully functional local AWS cloud stack.” It is a feature-rich emulation of AWS that runs in a container, with pretty wide coverage. The full list of services can be found here: &lt;a href="https://docs.localstack.cloud/aws/feature-coverage/" rel="noopener noreferrer"&gt;AWS Service Feature Coverage&lt;/a&gt;. The community version contains a reasonable number of services that can cover a lot of different modules, while the Pro feature set adds some major parts of the AWS stack (notably API Gateway v2, CloudFront, and RDS).&lt;/p&gt;

&lt;p&gt;The setup for LocalStack is pretty straightforward: pull the Docker image and run it. The development team released a CLI tool that helps with orchestration, so what we end up running is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pip install localstack
$ docker pull localstack/localstack
$ localstack start -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For setting up Terraform to work with LocalStack, the team over there has an &lt;a href="https://docs.localstack.cloud/integrations/terraform/" rel="noopener noreferrer"&gt;integration page with details&lt;/a&gt;, though I’ll cover some here as well.&lt;/p&gt;

&lt;p&gt;The main thing that you need to know is that you’ll be setting some custom endpoints in your provider configuration to tell Terraform to reach out to localhost instead of AWS for the plan and apply steps.&lt;/p&gt;
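&lt;p&gt;A minimal provider block for LocalStack, adapted from their integration docs (dummy credentials, validation checks skipped), looks roughly like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region     = "us-east-1"
  access_key = "test"   # LocalStack accepts any dummy credentials
  secret_key = "test"

  # Don't hit real AWS for credential or metadata checks
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    s3  = "http://s3.localhost.localstack.cloud:4566"
    sts = "http://localhost:4566"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;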




&lt;h3&gt;
  
  
  GitHub Action
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/mencarellic/terraform-aws-module/blob/main/.github/workflows/terraform-apply.yml" rel="noopener noreferrer"&gt;GitHub Action workflow file&lt;/a&gt; really only needs to do three things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install Terraform&lt;/li&gt;
&lt;li&gt;Install the LocalStack CLI and start the Docker container (see above)&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;terraform apply -auto-approve&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The workflow file I have does a couple of other things based on personal preference, but you can boil it down to fewer than twenty lines if you really want to.&lt;/p&gt;

&lt;p&gt;Some of the additional things I do (and why) are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ignore the &lt;code&gt;main&lt;/code&gt; branch, since I only want this to run on pushes to feature branches&lt;/li&gt;
&lt;li&gt;Use &lt;a href="https://asdf-vm.com/" rel="noopener noreferrer"&gt;ASDF&lt;/a&gt; and a &lt;code&gt;.tool-versions&lt;/code&gt; file to manage my Terraform version. You could also use the &lt;a href="https://github.com/hashicorp/setup-terraform" rel="noopener noreferrer"&gt;GitHub Action hashicorp/setup-terraform&lt;/a&gt; if desired&lt;/li&gt;
&lt;li&gt;Run a &lt;code&gt;terraform plan&lt;/code&gt; before the apply to make sure I can see in the logs whether a failure is a plan or apply error&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Bringing It All Together
&lt;/h3&gt;

&lt;p&gt;Once you get your GitHub Action workflow file setup, you just need to add a directory where you can place your tests. Inside the directory, you can have a single file where you define your provider and module block.&lt;/p&gt;

&lt;p&gt;You’ll need to force the provider to target your LocalStack endpoints instead of the live AWS endpoints. You can do that with the &lt;code&gt;endpoints&lt;/code&gt; block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;endpoints {
    apigateway     = "http://localhost:4566"
    apigatewayv2   = "http://localhost:4566"
    cloudformation = "http://localhost:4566"
    cloudwatch     = "http://localhost:4566"
    dynamodb       = "http://localhost:4566"
    ec2            = "http://localhost:4566"
    es             = "http://localhost:4566"
    elasticache    = "http://localhost:4566"
    firehose       = "http://localhost:4566"
    iam            = "http://localhost:4566"
    kinesis        = "http://localhost:4566"
    kms            = "http://localhost:4566"
    lambda         = "http://localhost:4566"
    rds            = "http://localhost:4566"
    redshift       = "http://localhost:4566"
    route53        = "http://localhost:4566"
    s3             = "http://s3.localhost.localstack.cloud:4566"
    secretsmanager = "http://localhost:4566"
    ses            = "http://localhost:4566"
    sns            = "http://localhost:4566"
    sqs            = "http://localhost:4566"
    ssm            = "http://localhost:4566"
    stepfunctions  = "http://localhost:4566"
    sts            = "http://localhost:4566"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that you can open up some PRs to test your &lt;a href="https://github.com/mencarellic/terraform-aws-module/pull/3" rel="noopener noreferrer"&gt;positive&lt;/a&gt; and &lt;a href="https://github.com/mencarellic/terraform-aws-module/pull/4" rel="noopener noreferrer"&gt;negative&lt;/a&gt; test cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9kpl3wlg0jrzdlamkvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9kpl3wlg0jrzdlamkvm.png" alt="GitHub PR checks showing that all have passed, including “terraform-apply-with-localstack"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  References and Docs I Used
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/language/modules" rel="noopener noreferrer"&gt;Terraform Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.localstack.cloud/" rel="noopener noreferrer"&gt;LocalStack Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.localstack.cloud/integrations/terraform/" rel="noopener noreferrer"&gt;LocalStack Terraform Integration Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/actions" rel="noopener noreferrer"&gt;GitHub Actions Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/mencarellic/terraform-aws-module" rel="noopener noreferrer"&gt;Example Repo (mencarellic/terraform-aws-module)
&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>terraform</category>
      <category>github</category>
      <category>cloud</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
