<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jessica Aparecida Bueno</title>
    <description>The latest articles on DEV Community by Jessica Aparecida Bueno (@jessicaapbueno).</description>
    <link>https://dev.to/jessicaapbueno</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1455462%2Fb8a3ef16-4dc7-4587-ad85-80d889fce562.png</url>
      <title>DEV Community: Jessica Aparecida Bueno</title>
      <link>https://dev.to/jessicaapbueno</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jessicaapbueno"/>
    <language>en</language>
    <item>
      <title>FinOps Hands-On: AWS Costs + CO₂e in Every PR with Infracost &amp; Terraform</title>
      <dc:creator>Jessica Aparecida Bueno</dc:creator>
      <pubDate>Sat, 02 May 2026 14:45:53 +0000</pubDate>
      <link>https://dev.to/jessicaapbueno/finops-hands-on-aws-costs-co-e-in-every-pr-with-infracost-terraform-4hjj</link>
      <guid>https://dev.to/jessicaapbueno/finops-hands-on-aws-costs-co-e-in-every-pr-with-infracost-terraform-4hjj</guid>
      <description>&lt;p&gt;Have you ever felt that knot in your stomach when running &lt;code&gt;terraform apply&lt;/code&gt;, unsure of how many zeros will show up on your AWS bill at the end of the month? As a Cloud professional transitioning into DevOps, my goal isn't just to build infrastructure that works, but infrastructure that is sustainable and governed.&lt;/p&gt;

&lt;p&gt;In this article, I'll walk you through a hands-on project where I integrated Infracost directly into my CI/CD pipeline. The goal is simple yet powerful: every time I propose a change via Pull Request, the system automatically calculates the financial impact and posts a detailed comment. This is FinOps in action!&lt;/p&gt;




&lt;h2&gt;
  
  
  What is Infracost?
&lt;/h2&gt;

&lt;p&gt;For those unfamiliar, Infracost estimates cloud costs before deployment and brings those estimates into your engineering workflow. It generates cost breakdowns for Terraform, CloudFormation, and AWS CDK code, flagging FinOps issues aligned with well-architected best practices so they can be fixed before the merge.&lt;/p&gt;

&lt;p&gt;Fully compatible with AWS, Azure, and Google Cloud, it's essential for multicloud setups. Dive deeper in the &lt;a href="https://www.infracost.io/docs/" rel="noopener noreferrer"&gt;official Infracost docs&lt;/a&gt;.&lt;/p&gt;
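&lt;p&gt;If you want to see what Infracost produces before touching CI, the CLI alone already gives useful feedback. A minimal local sketch, assuming the Infracost CLI is installed and &lt;code&gt;INFRACOST_API_KEY&lt;/code&gt; is configured:&lt;/p&gt;

```shell
# Cost breakdown for the Terraform project in the current directory
infracost breakdown --path . --format table

# Save a baseline, then compare costs after editing the code
infracost breakdown --path . --format json --out-file baseline.json
infracost diff --path . --compare-to baseline.json
```

&lt;p&gt;The &lt;code&gt;diff&lt;/code&gt; output is the same comparison the pipeline will later post as a PR comment.&lt;/p&gt;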




&lt;h2&gt;
  
  
  Alignment with AWS Well-Architected Framework
&lt;/h2&gt;

&lt;p&gt;My cloud journey taught me that building on AWS goes beyond deploying resources—it's about excellence standards. This project is rooted in the &lt;a href="https://aws.amazon.com/architecture/well-architected/" rel="noopener noreferrer"&gt;AWS Well-Architected Framework&lt;/a&gt;, focusing on two vital pillars:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost Optimization&lt;/strong&gt;: Shift cost visibility from the end-of-month invoice to the moment the code is written, catching oversized resources before they are ever created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sustainability (GreenOps)&lt;/strong&gt;: Use Infracost to estimate CO₂e emissions, enabling eco-friendly architectural choices.&lt;/p&gt;




&lt;h2&gt;
  
  
  Project Architecture
&lt;/h2&gt;

&lt;p&gt;To make it easier to understand how everything connects, I’ve prepared this diagram illustrating our automation flow:&lt;/p&gt;

&lt;p&gt;As you can see in the diagram, the flow centers on the CI/CD pipeline:&lt;/p&gt;

&lt;h4&gt;
  
  
  The CI/CD Pipeline
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Pull Request&lt;/strong&gt;: Upon pushing the code to GitHub, a GitHub Actions workflow is triggered.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Analysis&lt;/strong&gt;: The pipeline executes a &lt;code&gt;terraform plan&lt;/code&gt;, and Infracost analyzes this plan, posting the resulting cost table directly as a PR comment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozhvaowrpf9z5cgagyti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozhvaowrpf9z5cgagyti.png" alt=" " width="800" height="291"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Project Structure: Organizing Folders and Files
&lt;/h2&gt;

&lt;p&gt;Organization is the foundation of any Infrastructure as Code (IaC) project. A well-defined structure simplifies automation and ensures that the infrastructure is scalable and easy for other developers to understand. To make this lab simple to replicate, the project is built on a modular structure.&lt;/p&gt;

&lt;p&gt;Modularization is more than just organization; it is a best practice that allows different layers — such as Network, Security, and Application — to be managed independently, ensuring a proper separation of concerns.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;infracost_aws/
├── .github/
│   └── workflows/
│       └── infracost.yml      # Pipeline automation
├── modules/
│   ├── network/               # Network layer (VPC, Subnets)
│   └── compute/               # Cost-generating resources (EC2, RDS, S3)
├── main.tf                    # Module orchestrator
├── providers.tf               # Identity and Governance
├── terraform.tfvars           # Variable values
└── variables.tf               # Global definitions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Cloud Contract: &lt;code&gt;providers.tf&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Every Terraform project begins with the providers file. This is where we establish the connection with the cloud provider and set the governance strategy right from "day zero."&lt;/p&gt;

&lt;p&gt;Here is the configuration I used for this project:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 5.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;

  &lt;span class="c1"&gt;# The FinOps and Governance pillar starts here!&lt;/span&gt;
  &lt;span class="nx"&gt;default_tags&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;Project&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"FinOps-Infracost-Study"&lt;/span&gt;
      &lt;span class="nx"&gt;ManagedBy&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Terraform"&lt;/span&gt;
      &lt;span class="nx"&gt;Environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Dev"&lt;/span&gt;
      &lt;span class="nx"&gt;Service&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"FinOps-Project"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Why is this configuration strategic?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Provider Versioning&lt;/strong&gt;: Constraining the version with &lt;code&gt;~&amp;gt; 5.0&lt;/code&gt; allows minor and patch updates within the 5.x series while blocking a future major release from breaking the code, keeping the pipeline stable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Magic of &lt;code&gt;default_tags&lt;/code&gt;&lt;/strong&gt;: This is the most efficient way to implement cost traceability. By declaring tags in the provider block, all compatible resources (EC2, RDS, S3, etc.) automatically inherit these labels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hands-on FinOps&lt;/strong&gt;: With &lt;code&gt;Environment&lt;/code&gt; and &lt;code&gt;Service&lt;/code&gt; tags, finance teams can filter invoices with surgical precision, eliminating "orphan" resources and simplifying audits.&lt;/p&gt;


&lt;h2&gt;
  
  
  Variables: The Power of Parameterization
&lt;/h2&gt;

&lt;p&gt;In Infrastructure as Code, variables act like function arguments. They allow us to write generic code that can be adapted to different environments (Dev, Stage, Prod) without rewriting the core logic. To keep the project organized and simple to replicate, I split this logic into two distinct files:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;variables.tf&lt;/code&gt;: The "Contract"
In this file, I define which variables my project accepts, their types (string, number, list), and an optional description. It’s the "blueprint" telling Terraform what to expect.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"aws_region"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS region where resources will be created"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"instance_type"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"EC2 instance type"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"db_instance_class"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"RDS database instance class"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;terraform.tfvars&lt;/code&gt;: The "Control Panel"
This is where the FinOps magic happens. Instead of searching through lines of code for instance sizes, I centralize all values here.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="nx"&gt;aws_region&lt;/span&gt;        &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="nx"&gt;instance_type&lt;/span&gt;     &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"c6g.2xlarge"&lt;/span&gt;
&lt;span class="nx"&gt;db_instance_class&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"db.t4g.medium"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Why separate definitions from values?
&lt;/h2&gt;

&lt;p&gt;Adhering to Cloud excellence standards makes this separation essential. It provides three fundamental benefits:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FinOps Agility&lt;/strong&gt;: If Infracost warns me that costs are too high, I don't need to hunt for instance types across multiple &lt;code&gt;.tf&lt;/code&gt; files. I change only &lt;code&gt;terraform.tfvars&lt;/code&gt;, push the update, and see the new cost calculation instantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security and Reusability&lt;/strong&gt;: The core code (modules) remains untouched; only the values change. This prevents accidental architecture errors and helps avoid hardcoding sensitive data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Standardization&lt;/strong&gt;: For those replicating this lab, having a single configuration file makes learning much more intuitive. You focus on the parameters that truly impact infrastructure and costs.&lt;/p&gt;
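&lt;p&gt;One way to take this pattern further (the file name here is hypothetical, not part of this lab) is to keep one &lt;code&gt;.tfvars&lt;/code&gt; file per environment and select it at plan time with &lt;code&gt;terraform plan -var-file="dev.tfvars"&lt;/code&gt;:&lt;/p&gt;

```terraform
# dev.tfvars: cheap, burstable instances for experimentation,
# while terraform.tfvars keeps the larger production-like sizing.
aws_region        = "us-east-1"
instance_type     = "t4g.micro"
db_instance_class = "db.t4g.micro"
```

&lt;p&gt;The module code stays identical; only the parameter file changes between Dev and Prod.&lt;/p&gt;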


&lt;h2&gt;
  
  
  The Orchestrator: &lt;code&gt;main.tf (Root)&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The main.tf file at the root of the project acts as the conductor of an orchestra. It doesn’t create resources directly; instead, it calls the modules we’ve defined and passes the necessary information between them.&lt;/p&gt;

&lt;p&gt;In this project, modularization is a strategic choice. I separate the Network (base infrastructure) from the Compute (where costs are more volatile). Here is how the module calls look:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Network Module: Creates VPC, Subnets, and Gateways&lt;/span&gt;
&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"network"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"./modules/network"&lt;/span&gt;
  &lt;span class="nx"&gt;aws_region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_region&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Compute Module: Creates EC2, RDS, and S3&lt;/span&gt;
&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"compute"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"./modules/compute"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;network&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_id&lt;/span&gt;
  &lt;span class="nx"&gt;public_subnet_id&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;network&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_subnet_id&lt;/span&gt;
  &lt;span class="nx"&gt;private_subnet_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;network&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;private_subnet_id&lt;/span&gt;

  &lt;span class="c1"&gt;# Passing the cost-related variables defined in terraform.tfvars&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;instance_type&lt;/span&gt;
  &lt;span class="nx"&gt;db_instance_class&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db_instance_class&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Why Module Integration Matters
&lt;/h2&gt;

&lt;p&gt;The key takeaway here is interdependency. Notice that the compute module receives the &lt;code&gt;vpc_id&lt;/code&gt; and subnet IDs directly from the network module outputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security by Design&lt;/strong&gt;: By passing the &lt;code&gt;private_subnet_id&lt;/code&gt; to the RDS database, I ensure it is never exposed to the public internet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flexibility&lt;/strong&gt;: If I need to overhaul the network architecture, my compute module remains untouched as long as the network outputs remain consistent.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Resource Layer: &lt;code&gt;modules/compute/recursos.tf&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Now that we understand how the orchestrator organizes the execution, let's dive into the layer where the "magic" (and the cost) actually happens: the Compute module.&lt;/p&gt;

&lt;p&gt;In this file, I have configured three fundamental AWS services: EC2, RDS, and S3. The key point here is not just creating the resource, but how the variables we defined earlier in terraform.tfvars are applied to control costs.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# EC2 Instance for the Web Server&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"web_server"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ami-0c101f26f147fa7fd"&lt;/span&gt; &lt;span class="c1"&gt;# Amazon Linux 2023&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;instance_type&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_subnet_id&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"WebServer-FinOps"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# RDS Database (PostgreSQL)&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_db_instance"&lt;/span&gt; &lt;span class="s2"&gt;"database"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;allocated_storage&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;
  &lt;span class="nx"&gt;db_name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"finopsdb"&lt;/span&gt;
  &lt;span class="nx"&gt;engine&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"postgres"&lt;/span&gt;
  &lt;span class="nx"&gt;engine_version&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"16.1"&lt;/span&gt;
  &lt;span class="nx"&gt;instance_class&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db_instance_class&lt;/span&gt;
  &lt;span class="nx"&gt;username&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"admin"&lt;/span&gt;
  &lt;span class="nx"&gt;password&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"password123"&lt;/span&gt; &lt;span class="c1"&gt;# In prod, use Secrets Manager!&lt;/span&gt;
  &lt;span class="nx"&gt;parameter_group_name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"default.postgres16"&lt;/span&gt;
  &lt;span class="nx"&gt;skip_final_snapshot&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;db_subnet_group_name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_db_subnet_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_security_group_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db_sg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;storage_type&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"gp3"&lt;/span&gt; &lt;span class="c1"&gt;# Strategic FinOps decision&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# S3 Bucket for Static Assets&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket"&lt;/span&gt; &lt;span class="s2"&gt;"assets"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"finops-project-assets-jessica"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Strategic Decisions in the Code:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Instance Flexibility&lt;/strong&gt;: Both EC2 and RDS take their classes/types from variables, so I can switch from an expensive family (like C5) to a more efficient one (like Graviton/C6g) by editing a single text file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Smart Storage (gp2 vs gp3)&lt;/strong&gt;: In the RDS resource, I set &lt;code&gt;storage_type&lt;/code&gt; to &lt;code&gt;gp3&lt;/code&gt;. This is a classic FinOps recommendation: gp3 is typically around 20% cheaper than gp2 and lets you provision performance independently of storage size.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security and Isolation&lt;/strong&gt;: The database is attached to a private subnet, so it is never reachable from the public internet; sound network design from the start avoids the far greater cost of a breach.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
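&lt;p&gt;As a side note on gp3: if the database later needs more I/O, throughput and IOPS can be raised without resizing the disk. A sketch of the relevant &lt;code&gt;aws_db_instance&lt;/code&gt; arguments (the values are illustrative, not what this lab deploys):&lt;/p&gt;

```terraform
resource "aws_db_instance" "database" {
  # ... rest of the configuration as shown above ...
  storage_type       = "gp3"
  allocated_storage  = 400   # custom gp3 performance requires at least 400 GiB on most engines
  iops               = 12000 # provisioned independently of storage size
  storage_throughput = 500   # MiB/s, also independent of storage size
}
```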


&lt;h2&gt;
  
  
  Visualizing Results: &lt;code&gt;outputs.tf&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;After Terraform orchestrates our entire AWS infrastructure, we need a way to retrieve essential data for our daily operations. Outputs act as the "receipt" of the operation.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"web_server_public_ip"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Public IP address of the EC2 instance"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;compute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web_server_public_ip&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"rds_endpoint"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Connection endpoint for the RDS database"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;compute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rds_endpoint&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Inside the Modules: Network and Compute
&lt;/h2&gt;

&lt;p&gt;To ensure our architecture is scalable, each folder within modules/ contains its own set of files. This allows me to test the network module in isolation before even considering the servers.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Network Module (modules/network/)
This is the foundation. Without a configured VPC and subnets, we have nowhere to "park" our instances.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;variables.tf&lt;/code&gt;: This is where I define parameters like the VPC CIDR block.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;main.tf&lt;/code&gt;: Contains the logic to create the VPC, Internet Gateway, and Subnets (public and private).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;outputs.tf&lt;/code&gt;: This is the crucial part. I need to "export" the VPC ID and Subnet IDs so the compute module can use them.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# modules/network/outputs.tf&lt;/span&gt;
&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"vpc_id"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"public_subnet_id"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"private_subnet_id"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;private&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Compute Module (modules/compute/)&lt;br&gt;
This is where Infracost focuses most of its analysis, as it houses the resources that generate direct hourly usage charges.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;variables.tf&lt;/code&gt;: It receives the IDs coming from the network module and the instance classes defined in the root terraform.tfvars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;recursos.tf&lt;/code&gt;: (Which we have already detailed) where we create the EC2, RDS, and S3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;outputs.tf&lt;/code&gt;: Returns information such as the server's public IP so I can access it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Just as the network module exports IDs so we can build our resources, the compute module must return essential information for us to interact with the infrastructure after provisioning.&lt;/p&gt;

&lt;p&gt;Since this project focuses on FinOps and governance, having well-defined outputs helps with quick auditing of what has been created.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# modules/compute/outputs.tf&lt;/span&gt;

&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"web_server_public_ip"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Public IP address for SSH or HTTP access"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web_server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_ip&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"rds_endpoint"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Connection endpoint for the database"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_db_instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;database&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;endpoint&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Why does this matter?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Connectivity: Without the public IP or the database endpoint, the infrastructure exists in AWS but remains inaccessible.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Transparency: In GitHub Actions, these values can be displayed at the end of the pipeline, confirming that the resources mapped by Infracost were indeed created as planned.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
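&lt;p&gt;For the transparency point, a small step at the end of an apply job (a sketch; the step name is illustrative) is enough to print these outputs in the workflow log:&lt;/p&gt;

```yaml
- name: Show provisioned endpoints
  run: terraform output
```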


&lt;h2&gt;
  
  
  Technical Breakdown: GitHub Actions &amp;amp; Security
&lt;/h2&gt;
&lt;h4&gt;
  
  
  Workload Identity Federation (OIDC)
&lt;/h4&gt;

&lt;p&gt;Security is non-negotiable. By using OIDC, we eliminate the need for persistent AWS access keys. The &lt;code&gt;id-token: write&lt;/code&gt; permission allows GitHub Actions to request short-lived, temporary tokens directly from AWS, ensuring a "keyless" and highly secure authentication flow.&lt;/p&gt;

&lt;p&gt;In our code, this is enabled by this block:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;   &lt;span class="c1"&gt;# Obrigatório para solicitar o token JWT da AWS&lt;/span&gt;
  &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;    &lt;span class="c1"&gt;# Para ler o código do repositório&lt;/span&gt;
  &lt;span class="na"&gt;pull-requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt; &lt;span class="c1"&gt;# Para o Infracost comentar no PR&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
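&lt;p&gt;On the workflow side, the role created in the next section is assumed through the official &lt;code&gt;aws-actions/configure-aws-credentials&lt;/code&gt; action, so no long-lived access keys are stored as secrets. A sketch, assuming the role ARN is kept in a repository secret named &lt;code&gt;AWS_ROLE_ARN&lt;/code&gt;:&lt;/p&gt;

```yaml
- name: Configure AWS credentials (OIDC)
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
    aws-region: us-east-1
```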

&lt;h2&gt;
  
  
  Setting Up the "Bridge" in AWS (Console)
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Step 1: Create the Identity Provider (OIDC)
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Go to the IAM service in the AWS Console.&lt;/li&gt;
&lt;li&gt; In the left sidebar, click Identity Providers.&lt;/li&gt;
&lt;li&gt; Click Add provider.&lt;/li&gt;
&lt;li&gt; Select OpenID Connect.&lt;/li&gt;
&lt;li&gt; For Provider URL, paste &lt;code&gt;https://token.actions.githubusercontent.com&lt;/code&gt; and click Get thumbprint.&lt;/li&gt;
&lt;li&gt; For Audience, type &lt;code&gt;sts.amazonaws.com&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; Click Add provider.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Step 2: Create the IAM Role for GitHub Actions
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;In the IAM sidebar, click Roles and then Create role.&lt;/li&gt;
&lt;li&gt;Select Web Identity.&lt;/li&gt;
&lt;li&gt;Under Identity provider, select the one you just created.&lt;/li&gt;
&lt;li&gt;Under Audience, select sts.amazonaws.com.&lt;/li&gt;
&lt;li&gt;Click Next, add the necessary permissions (e.g., specific Terraform policies), and click Next.&lt;/li&gt;
&lt;li&gt;Name the role (e.g., GitHubActionsInfracostRole) and create it.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Step 3: Refine the Trust Policy
&lt;/h3&gt;

&lt;p&gt;To ensure only your repository can assume this role, edit the Trust Relationship:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open your new Role, go to the Trust relationships tab, and click Edit trust policy.&lt;/li&gt;
&lt;li&gt;Ensure the sub condition points strictly to your GitHub repository as shown in the JSON block above.&lt;/li&gt;
&lt;/ol&gt;
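&lt;p&gt;As a reference, a trust policy scoped to a single repository generally looks like the sketch below. The account ID and the &lt;code&gt;repo:&lt;/code&gt; path are placeholders you must replace with your own values:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:YourUser/your-repo:*"
        }
      }
    }
  ]
}
```

&lt;p&gt;The &lt;code&gt;sub&lt;/code&gt; condition is what prevents any other GitHub repository from assuming the role; tighten the &lt;code&gt;*&lt;/code&gt; to a specific branch (for example &lt;code&gt;ref:refs/heads/main&lt;/code&gt;) for stricter control.&lt;/p&gt;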


&lt;h2&gt;
  
  
  Workflow Jobs Breakdown
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Baseline Main: Captures the current infrastructure cost from the main branch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Diff Analysis: Compares your new Pull Request code against the baseline to calculate the exact financial impact.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Post Results: Delivers a visual cost breakdown directly into the PR. The &lt;code&gt;continue-on-error: true&lt;/code&gt; setting ensures that minor commenting issues don't block critical deployments.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
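&lt;p&gt;As a rough sketch of how these three jobs can be condensed into a single workflow job, assuming the official &lt;code&gt;infracost/actions/setup&lt;/code&gt; action and an &lt;code&gt;INFRACOST_API_KEY&lt;/code&gt; repository secret:&lt;/p&gt;

```yaml
jobs:
  infracost:
    runs-on: ubuntu-latest
    steps:
      - uses: infracost/actions/setup@v3
        with:
          api-key: ${{ secrets.INFRACOST_API_KEY }}

      # Baseline: cost of the infrastructure currently on the base branch
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.base.ref }}
      - run: infracost breakdown --path=. --format=json --out-file=/tmp/infracost-base.json

      # Diff: compare the PR branch against the baseline
      - uses: actions/checkout@v4
      - run: |
          infracost diff --path=. \
            --compare-to=/tmp/infracost-base.json \
            --format=json --out-file=/tmp/infracost.json

      # Post: comment the cost breakdown on the PR
      - run: |
          infracost comment github --path=/tmp/infracost.json \
            --repo=$GITHUB_REPOSITORY \
            --pull-request=${{ github.event.pull_request.number }} \
            --github-token=${{ secrets.GITHUB_TOKEN }} \
            --behavior=update
        continue-on-error: true
```

&lt;p&gt;The &lt;code&gt;--behavior=update&lt;/code&gt; flag keeps a single, always-current cost comment on the PR instead of posting a new one on every push.&lt;/p&gt;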


&lt;h2&gt;
  
  
  Why Developer Feedback Matters
&lt;/h2&gt;

&lt;p&gt;Bringing cost visibility into the PR phase allows for immediate course correction. It empowers developers to align architectural choices with Cost Optimization and Sustainability (CO₂e) before a single cent is spent.&lt;/p&gt;


&lt;h2&gt;
  
  
  Impact on Pull Request (PR)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9tbn7ait5br0p82p049.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9tbn7ait5br0p82p049.png" alt=" " width="800" height="786"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Facspfsz69urawghx3m8h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Facspfsz69urawghx3m8h.png" alt=" " width="670" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Strategic Insight: Notice that by switching from a t3.micro instance to an m5.large, Infracost immediately alerts us to the spike in monthly costs and CO₂e emissions. Having this visibility empowers developers to make informed architectural choices, such as opting for Graviton instances or gp3 storage, directly impacting the business's bottom line and sustainability goals.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsl16wwukxvl3mqk9lk0i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsl16wwukxvl3mqk9lk0i.png" alt=" " width="800" height="763"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmf1qg4494mixzcu2grle.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmf1qg4494mixzcu2grle.png" alt=" " width="796" height="434"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Conclusion: The Future of Infrastructure is Conscious
&lt;/h2&gt;

&lt;p&gt;Implementing FinOps is not just about reducing the bill at the end of the month; it’s about fostering a culture of accountability and transparency. Throughout this lab, we’ve seen that it is entirely possible to bridge security (via OIDC), governance (via tags), and cost visibility directly within the development lifecycle. By bringing cost estimation into the Pull Request, we eliminate the financial "blind spot."&lt;br&gt;
Developers evolve from simply consuming resources to becoming conscious architects, capable of pivoting toward more efficient solutions, such as Graviton instances or gp3 storage, before a single cent is even spent. This project serves as a practical example of how technology can be leveraged to build more sustainable and well-managed systems.&lt;br&gt;
Ultimately, every CO₂e emission avoided and every dollar optimized contributes to a more mature and resilient operation.&lt;/p&gt;

&lt;p&gt;I hope this guide assists fellow students and professionals on their cloud journey! 🚀&lt;/p&gt;



&lt;p&gt;📂 Project Repository&lt;/p&gt;

&lt;p&gt;All the code detailed in this article, including the networking and compute modules and the GitHub Actions workflow, is available on my GitHub. Feel free to clone, test, and contribute!&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/JessicaApBueno" rel="noopener noreferrer"&gt;
        JessicaApBueno
      &lt;/a&gt; / &lt;a href="https://github.com/JessicaApBueno/infracost_aws" rel="noopener noreferrer"&gt;
        infracost_aws
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;FinOps Hands-On: Custos AWS + CO₂e em Todo PR com Infracost &amp;amp; Terraform&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a href="https://dev.to/jessicaapbueno/finops-hands-on-aws-costs-co-e-in-every-pr-with-infracost-terraform-4hjj" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/7ed4fe17943f45000218ac73151fd2c60f3c4f561af3bac75ba76949c38b90d8/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4465762e746f2d3041304130413f7374796c653d666f722d7468652d6261646765266c6f676f3d646576746f266c6f676f436f6c6f723d7768697465" alt="Dev.to"&gt;&lt;/a&gt;
&lt;a href="https://medium.com/@buenojessicaaparecida/finops-hands-on-custos-aws-co%E2%82%82e-em-todo-pr-com-infracost-terraform-760583674ecf" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/701ed3dd2f73ceed1fb2fca31e73cfc9745e4c16705ea19832c50a167349d7cc/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4d656469756d2d3132313030453f7374796c653d666f722d7468652d6261646765266c6f676f3d6d656469756d266c6f676f436f6c6f723d7768697465" alt="Medium"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This repository contains a hands-on &lt;strong&gt;FinOps on AWS&lt;/strong&gt; lab using &lt;strong&gt;Terraform&lt;/strong&gt;, &lt;strong&gt;GitHub Actions&lt;/strong&gt;, and &lt;strong&gt;Infracost&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The idea is to bring cost visibility closer to development, so that every Pull Request shows the financial impact of the change before deployment.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Objetivo&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;The project demonstrates how to integrate:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; for infrastructure provisioning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infracost&lt;/strong&gt; for cost estimation before deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Actions&lt;/strong&gt; to automate the CI/CD flow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OIDC&lt;/strong&gt; for secure AWS authentication without long-lived keys.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standardized tags&lt;/strong&gt; for governance and traceability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CO₂e&lt;/strong&gt; to support more sustainable decisions.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Arquitetura&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;The infrastructure is organized into modules:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;modules/network&lt;/code&gt;: creates the network foundation with the VPC, subnets, and connectivity components.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;modules/compute&lt;/code&gt;: creates the application and database resources.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.github/workflows/infracost.yml&lt;/code&gt;: runs the cost workflow on each Pull Request.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;providers.tf&lt;/code&gt;…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/JessicaApBueno/infracost_aws" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


</description>
      <category>aws</category>
      <category>devops</category>
      <category>finops</category>
      <category>sustainability</category>
    </item>
    <item>
      <title>Cloud Simplified: Hands-on DevSecOps Lab with Terraform and LocalStack</title>
      <dc:creator>Jessica Aparecida Bueno</dc:creator>
      <pubDate>Thu, 19 Mar 2026 01:54:52 +0000</pubDate>
      <link>https://dev.to/jessicaapbueno/cloud-simplified-hands-on-devsecops-lab-with-terraform-and-localstack-1k2e</link>
      <guid>https://dev.to/jessicaapbueno/cloud-simplified-hands-on-devsecops-lab-with-terraform-and-localstack-1k2e</guid>
      <description>&lt;h1&gt;
  
  
  Demystifying the Cloud: A Practical DevSecOps Lab with Terraform and LocalStack
&lt;/h1&gt;

&lt;p&gt;Imagine being able to test your entire AWS infrastructure with rigorous security validation and professional automation, without spending a single cent.&lt;/p&gt;

&lt;p&gt;In the real world, cloud mistakes are expensive. That's why simulation environments and well-structured CI/CD pipelines are "game changers" for a Cloud Engineer.&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 The Protagonist: LocalStack
&lt;/h2&gt;

&lt;p&gt;To make this zero-cost environment possible, I used &lt;strong&gt;LocalStack&lt;/strong&gt;. It is a cloud service emulator that runs in a single Docker container on your local machine or within CI/CD environments.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How does it work?&lt;/strong&gt; LocalStack intercepts the API calls you would send to the real AWS and processes them locally. As far as your Terraform code is concerned, it is talking to the real cloud, but the data and resources never leave your controlled environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How to start locally?&lt;/strong&gt; If you want to run it manually on your machine for quick experiments, just run:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;localstack start &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;In this project, however, LocalStack is started automatically by the GitHub Actions pipeline&lt;/strong&gt; — you don't need to run it locally for the CI/CD flow to work. The local command above is optional, useful for manual testing before pushing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Official Documentation:&lt;/strong&gt; To explore all supported services, you can access the &lt;a href="https://docs.localstack.cloud" rel="noopener noreferrer"&gt;LocalStack Documentation&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
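&lt;p&gt;For context, a &lt;code&gt;provider.tf&lt;/code&gt; that points Terraform at LocalStack typically looks something like the sketch below. The dummy credentials and the edge port &lt;code&gt;4566&lt;/code&gt; are LocalStack defaults; adjust the &lt;code&gt;endpoints&lt;/code&gt; block to the services you emulate:&lt;/p&gt;

```terraform
# provider.tf - point the AWS provider at LocalStack instead of real AWS
provider "aws" {
  region     = "us-east-1"
  access_key = "test" # LocalStack accepts any dummy credentials
  secret_key = "test"

  # Skip checks that only make sense against the real AWS APIs
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requested_account_id   = true
  s3_use_path_style           = true

  endpoints {
    s3 = "http://localhost:4566" # LocalStack's single edge port
  }
}
```

&lt;p&gt;With this in place, &lt;code&gt;terraform plan&lt;/code&gt; and &lt;code&gt;terraform apply&lt;/code&gt; run unchanged; only the provider configuration decides whether you hit the emulator or the real cloud.&lt;/p&gt;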




&lt;h2&gt;
  
  
  🌟 Overview
&lt;/h2&gt;

&lt;p&gt;This project was born from the need to unite theory and practice in a &lt;strong&gt;DevSecOps&lt;/strong&gt; scenario. The main goal isn't just to "create a resource," but to build a learning journey on how &lt;strong&gt;IaC (Infrastructure as Code)&lt;/strong&gt; and &lt;strong&gt;Automation&lt;/strong&gt; technologies integrate into the daily routine of a technology team.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛡️ The "Sec" in DevSecOps: Why is this not just another automation project?
&lt;/h2&gt;

&lt;p&gt;We often hear the term &lt;strong&gt;DevOps&lt;/strong&gt;, but when we add &lt;strong&gt;Sec (Security)&lt;/strong&gt; in the middle, we are talking about a paradigm shift called &lt;strong&gt;Shift Left&lt;/strong&gt;. In practice, this means bringing security to the beginning of the development cycle rather than leaving it as a final task before deployment.&lt;/p&gt;

&lt;p&gt;In this lab, security is not optional; it is a &lt;strong&gt;structural part of the pipeline&lt;/strong&gt;. Here's how I transformed a delivery flow into a &lt;strong&gt;&lt;em&gt;secure&lt;/em&gt;&lt;/strong&gt; delivery flow:&lt;/p&gt;

&lt;h3&gt;
  
  
  Shift Left with Static Analysis (tfsec)
&lt;/h3&gt;

&lt;p&gt;Unlike a traditional flow where you would create the resource and then run a scanner, here I used &lt;strong&gt;tfsec&lt;/strong&gt; directly in GitHub Actions — running it before LocalStack even starts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The code is analyzed even before any resource is simulated or created.&lt;/li&gt;
&lt;li&gt;In this lab, &lt;code&gt;soft_fail: true&lt;/code&gt; is configured so that security warnings appear in the logs without blocking the pipeline — allowing you to observe and learn from each finding. In a real production environment, this flag would be removed, making &lt;code&gt;tfsec&lt;/code&gt; a hard gate: any critical vulnerability would immediately stop the pipeline.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Secure Infrastructure by Design (Hardening)
&lt;/h3&gt;

&lt;p&gt;The developed S3 module doesn't just focus on creating the bucket, but on its &lt;strong&gt;Hardening&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Public Access Block&lt;/strong&gt;: I implemented features that prevent the bucket from being accidentally exposed to the internet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AES256 Encryption&lt;/strong&gt;: Ensures that data at rest is always protected.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Versioning&lt;/strong&gt;: Added a recovery layer against accidental deletions or malicious attacks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Security as a Troubleshooting Culture
&lt;/h3&gt;

&lt;p&gt;During development, the pipeline "broke" several times due to &lt;code&gt;tfsec&lt;/code&gt; alerts. In a common scenario, a developer might simply disable the scanner. Here, the approach was &lt;strong&gt;remediation&lt;/strong&gt;: understanding the flagged risk (such as the missing &lt;code&gt;public_access_block&lt;/code&gt;) and updating the code to meet industry security standards.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"DevSecOps is not about tools; it's about not allowing delivery speed to compromise data integrity."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🎯 Why this Lab?
&lt;/h2&gt;

&lt;p&gt;Often, when studying the cloud, we run up against the fear of costs or the complexity of setting up a pipeline from scratch. This lab was designed to be a &lt;strong&gt;safe learning environment&lt;/strong&gt;, where I applied concepts of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Modularization&lt;/strong&gt;: How to organize files so that the code is reusable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Preventive Security (Shift Left)&lt;/strong&gt;: How to use scanning tools (tfsec) to block security errors before the resource even exists.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local Simulation&lt;/strong&gt;: How to bypass physical and financial limitations using LocalStack to emulate complex AWS services.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  🛠️ The Tech Stack: The Automation "Engine"
&lt;/h2&gt;

&lt;p&gt;For this ecosystem to work in harmony, I selected industry-standard tools that complement each other perfectly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt;: The choice for IaC. It allows defining infrastructure through declarative code, ensuring the environment is replicable and free from error-prone manual configurations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LocalStack&lt;/strong&gt;: Emulates a complete AWS cloud inside a Docker container on the GitHub runner, allowing testing of resources like S3 without generating real costs on the AWS account.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Actions&lt;/strong&gt;: The CI/CD engine. It orchestrates the execution of validation, security, and simulation jobs on every &lt;code&gt;git push&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tfsec&lt;/strong&gt;: The static analysis tool that ensures the "Sec" of DevSecOps, scanning the code for insecure configurations before deployment.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📂 Organization and Structure: Thinking at Scale
&lt;/h2&gt;

&lt;p&gt;A professional infrastructure project cannot be a "single file." The folder organization reflects the maturity of the code and facilitates maintenance by other engineers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;localstack-terraform-lab/
├── .github/workflows/
│   └── terraform.yml       # Where the automation magic happens
├── modules/
│   └── s3-bucket/          # Our reusable and secure component
│       ├── main.tf         # Security logic and S3 resources
│       └── variables.tf    # Module parameterization
├── main.tf                 # Entry point (calling modules)
├── provider.tf             # LocalStack and AWS Provider configuration
└── variables.tf            # Global project variables
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this structure, the highlight goes to the &lt;code&gt;modules/&lt;/code&gt; folder. Instead of creating the bucket directly in the root, the logic was isolated within a &lt;strong&gt;reusable module&lt;/strong&gt;. This means that if 10 more buckets are needed tomorrow, they will all strictly follow the same security standard we defined once.&lt;/p&gt;
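&lt;p&gt;To illustrate that reuse, the root &lt;code&gt;main.tf&lt;/code&gt; could instantiate the module twice with nothing but a different name each time (the bucket names here are hypothetical):&lt;/p&gt;

```terraform
# main.tf (root) - the same hardened module, instantiated twice
module "app_logs" {
  source      = "./modules/s3-bucket"
  bucket_name = "my-app-logs"
}

module "app_backups" {
  source      = "./modules/s3-bucket"
  bucket_name = "my-app-backups"
}
```

&lt;p&gt;Both buckets automatically inherit the public access block, versioning, and encryption defined once inside the module.&lt;/p&gt;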




&lt;h2&gt;
  
  
  🏗️ Dissecting the Pipeline: The &lt;code&gt;terraform.yml&lt;/code&gt; Workflow
&lt;/h2&gt;

&lt;p&gt;The heart of this automation resides in the &lt;code&gt;.github/workflows/terraform.yml&lt;/code&gt; file. It is divided into three major pillars (Jobs) that ensure project integrity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Header and Trigger
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Professional CI&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;name&lt;/strong&gt;: The name that will appear in the GitHub "Actions" tab. Choosing a professional name helps quickly identify the purpose of the automation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;on: [push]&lt;/strong&gt;: Defines the trigger. Every time new code is sent to the repository, the pipeline comes to life automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Job 1: Check Code Quality (Validation)
&lt;/h3&gt;

&lt;p&gt;This is the first quality filter. It ensures the code is well-written before anything else happens.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;validate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Check&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Code&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Quality"&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/setup-terraform@v3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform init -backend=false&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform validate&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;runs-on: ubuntu-latest&lt;/strong&gt;: Tells GitHub to provision a clean Linux virtual machine to run these commands.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;actions/checkout@v4&lt;/strong&gt;: Makes the virtual machine download your code from the repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;terraform init -backend=false&lt;/strong&gt;: A crucial lesson. Since we use local modules, Terraform needs to "install" them before validating. We pass &lt;code&gt;-backend=false&lt;/code&gt; because no real cloud connection is needed at this stage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;terraform validate&lt;/strong&gt;: The command that checks for missing brackets, typos, or incorrectly declared variables.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Job 2: Security Scan
&lt;/h3&gt;

&lt;p&gt;This is where DevOps becomes &lt;strong&gt;DevSecOps&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;security&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Security&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Scan"&lt;/span&gt;
    &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;validate&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run tfsec&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aquasecurity/tfsec-action@v1.0.0&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;soft_fail&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;needs: validate&lt;/strong&gt;: Creates the dependency between jobs. The Security Scan only starts if the Quality Validation passes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;aquasecurity/tfsec-action&lt;/strong&gt;: The "security inspector." It scans the code for vulnerabilities, such as S3 buckets without encryption or with public access enabled.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;soft_fail: true&lt;/strong&gt;: A deliberate choice for this lab context. It allows security warnings to appear in the logs without blocking the pipeline, so you can observe all findings and plan your remediations. &lt;strong&gt;Important&lt;/strong&gt;: in a real production pipeline, this flag should be removed — making tfsec a hard blocker that stops the pipeline on any critical finding.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Job 3: LocalStack Plan (Simulation)
&lt;/h3&gt;

&lt;p&gt;The final stage, where the infrastructure is simulated in the zero-cost cloud.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;terraform-plan&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;LocalStack&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Plan"&lt;/span&gt;
    &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;security&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Start LocalStack&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localstack/setup-localstack@main&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/setup-terraform@v3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Init &amp;amp; Plan&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;terraform init&lt;/span&gt;
          &lt;span class="s"&gt;terraform plan&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/span&gt;
          &lt;span class="na"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/span&gt;
          &lt;span class="na"&gt;AWS_DEFAULT_REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;localstack/setup-localstack&lt;/strong&gt;: Starts a Docker container with LocalStack, creating an AWS simulator inside the GitHub runner.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;terraform plan&lt;/strong&gt;: Generates the execution plan, showing exactly what would be created in real AWS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;env &amp;amp; secrets&lt;/strong&gt;: Uses GitHub &lt;strong&gt;Repository Secrets&lt;/strong&gt; to inject credentials safely, simulating exactly how you would protect access keys in a real production project.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📦 Hands-on: Building a Secure S3 Module
&lt;/h2&gt;

&lt;p&gt;To keep the project organized and scalable, I developed an &lt;strong&gt;isolated module&lt;/strong&gt; for S3. My intention was to create a security standard I could replicate in any other project. Here is the &lt;code&gt;main.tf&lt;/code&gt; of the module, block by block:&lt;/p&gt;

&lt;h3&gt;
  
  
  The Base Resource
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bucket_name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is where the bucket itself is defined. I used a variable (&lt;code&gt;var.bucket_name&lt;/code&gt;) to make the module flexible, allowing different names to be set without changing the module's internal security logic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Public Access Block (The "Vault Lock")
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket_public_access_block"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

  &lt;span class="nx"&gt;block_public_acls&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;block_public_policy&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;ignore_public_acls&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;restrict_public_buckets&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These four security locks ensure that even if someone tries to change permissions manually, the bucket will remain private, preventing sensitive data from being accidentally exposed on the internet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Versioning
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket_versioning"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;versioning_configuration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Enabled"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With versioning enabled, it's possible to recover previous versions of deleted or accidentally modified files. It's an essential protection layer against human error or ransomware attacks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Encryption at Rest
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket_server_side_encryption_configuration"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

  &lt;span class="nx"&gt;rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;apply_server_side_encryption_by_default&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;sse_algorithm&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AES256"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This block was added to address the security findings identified by &lt;strong&gt;tfsec&lt;/strong&gt; during the CI/CD pipeline execution. AES256 encryption ensures that all files stored on disk (or simulated in LocalStack) are protected by default.&lt;/p&gt;
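&lt;p&gt;If you ever need customer-managed keys instead of the free SSE-S3 default, the same resource accepts a KMS variant. The sketch below is an alternative, not what the project deploys, and it assumes a hypothetical &lt;code&gt;aws_kms_key.this&lt;/code&gt; resource defined elsewhere.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;# Sketch: SSE-KMS variant (assumes a hypothetical aws_kms_key.this resource)
resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  bucket = aws_s3_bucket.this.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.this.arn
    }

    # S3 Bucket Keys reduce the number of KMS requests, lowering cost
    bucket_key_enabled = true
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;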




&lt;h2&gt;
  
  
  📥 Flexibility with &lt;code&gt;variables.tf&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;To avoid a "hardcoded" project, I followed one of Terraform's golden rules: &lt;strong&gt;never leave fixed values in the middle of the code&lt;/strong&gt;. Think of variables as function parameters — they allow you to use the same module to create different buckets just by changing the name at the "entry point," without touching the security logic you already validated.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"bucket_name"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Unique name of the S3 bucket"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Description (Living Documentation)&lt;/strong&gt;: In a real team scenario, this avoids any doubt about what that field expects to receive — for other engineers and for your future self.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Type Constraint (Type Safety)&lt;/strong&gt;: By defining the type as &lt;code&gt;string&lt;/code&gt;, Terraform validates the input. If a list or number is accidentally passed, Terraform warns you immediately, preventing strange errors during deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Working with variables is what separates a simple script from professional, scalable infrastructure."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
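&lt;p&gt;Type checking can go one step further with a &lt;code&gt;validation&lt;/code&gt; block, so a bad name fails at &lt;code&gt;terraform plan&lt;/code&gt; instead of at the S3 API. This sketch extends the variable above; the regex is my approximation of the S3 naming rules, not something the project enforces.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;variable "bucket_name" {
  description = "Unique name of the S3 bucket"
  type        = string

  validation {
    # Approximates S3 naming rules: 3-63 chars, lowercase letters,
    # digits, dots, and hyphens, starting and ending alphanumerically
    condition     = can(regex("^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$", var.bucket_name))
    error_message = "bucket_name must be 3-63 characters: lowercase letters, numbers, dots, and hyphens."
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;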




&lt;h2&gt;
  
  
  🏗️ The Command Center: Root Files
&lt;/h2&gt;

&lt;p&gt;With the security modules properly built and "locked," it was time to organize the &lt;strong&gt;Command Center&lt;/strong&gt; of the project: the root folder. This is where the pieces connect to LocalStack. Separating the files into &lt;code&gt;provider.tf&lt;/code&gt;, &lt;code&gt;main.tf&lt;/code&gt;, and &lt;code&gt;variables.tf&lt;/code&gt; ensures that each one has a single responsibility, avoiding the hard-to-maintain "monolith."&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;provider.tf&lt;/code&gt;: The Bridge and the Security Lock
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The Terraform Block: Ensuring the Correct Version
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 5.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why this?&lt;/strong&gt; During testing, I noticed that more recent versions (v6.x) of the AWS Provider produced XML protocol errors when talking to LocalStack v4. Pinning the version to &lt;code&gt;~&amp;gt; 5.0&lt;/code&gt; ensured stability and avoided the &lt;strong&gt;MalformedXML&lt;/strong&gt; error.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Provider Block: Pointing to LocalStack
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt;                      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
  &lt;span class="nx"&gt;access_key&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"test"&lt;/span&gt;
  &lt;span class="nx"&gt;secret_key&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"test"&lt;/span&gt;
  &lt;span class="nx"&gt;s3_use_path_style&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;skip_credentials_validation&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;skip_metadata_api_check&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;skip_requesting_account_id&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;endpoints&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;s3&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"http://localhost:4566"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Skip configurations&lt;/strong&gt;: These flags (&lt;code&gt;skip_credentials_validation&lt;/code&gt;, &lt;code&gt;skip_metadata_api_check&lt;/code&gt;, &lt;code&gt;skip_requesting_account_id&lt;/code&gt;) are &lt;strong&gt;specific to the LocalStack test environment&lt;/strong&gt;. They tell Terraform not to try validating the &lt;code&gt;"test"&lt;/code&gt; keys against real AWS servers. &lt;strong&gt;Never use these settings in a real production provider configuration.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Path Style&lt;/strong&gt;: &lt;code&gt;s3_use_path_style = true&lt;/code&gt; because LocalStack handles path-style URLs (bucket name in the URL path) more reliably than the virtual-hosted subdomain format used in the real cloud.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Endpoints&lt;/strong&gt;: The most important line — it redirects all S3 traffic to &lt;code&gt;localhost:4566&lt;/code&gt;, where LocalStack is listening.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔐 What About Secrets?
&lt;/h3&gt;

&lt;p&gt;Although the code uses &lt;code&gt;"test"&lt;/code&gt; keys (which LocalStack accepts by default), I took care to configure &lt;strong&gt;GitHub Secrets&lt;/strong&gt; in my CI/CD pipeline. This means the real access values are never exposed in the code — they are injected via environment variables, simulating exactly how you would protect access keys for a real AWS account.&lt;/p&gt;
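&lt;p&gt;For a real account, one way those secrets could reach the provider is through &lt;code&gt;sensitive&lt;/code&gt; variables; Terraform automatically reads any environment variable prefixed with &lt;code&gt;TF_VAR_&lt;/code&gt;, which the pipeline can export from GitHub Secrets. The variable names below are hypothetical, and this sketch would replace the literal keys shown in &lt;code&gt;provider.tf&lt;/code&gt; rather than coexist with them.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;# Sketch: hypothetical sensitive variables fed from GitHub Secrets.
# The pipeline exports TF_VAR_aws_access_key / TF_VAR_aws_secret_key,
# which Terraform picks up automatically.
variable "aws_access_key" {
  type      = string
  sensitive = true # redacted from plan and apply output
}

variable "aws_secret_key" {
  type      = string
  sensitive = true
}

provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  # ...remaining settings as in the provider.tf shown above
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;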

&lt;h3&gt;
  
  
  &lt;code&gt;main.tf&lt;/code&gt;: The Conductor
&lt;/h3&gt;

&lt;p&gt;If the modules are the musicians, the root &lt;code&gt;main.tf&lt;/code&gt; is the conductor. It doesn't create the bucket by itself; it &lt;strong&gt;calls&lt;/strong&gt; the module I developed and passes the necessary instructions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"my_bucket"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"./modules/s3-bucket"&lt;/span&gt;
  &lt;span class="nx"&gt;bucket_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bucket_name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;source&lt;/strong&gt;: Points to the path of the module folder.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Variable passing&lt;/strong&gt;: The global variable &lt;code&gt;bucket_name&lt;/code&gt; is passed directly into the module's &lt;code&gt;bucket_name&lt;/code&gt; input, creating a clean and consistent hierarchy.&lt;/li&gt;
&lt;/ul&gt;
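&lt;p&gt;This is where the module investment pays off: the same "musician" can play more than once. A sketch with hypothetical bucket names shows how two hardened buckets could be created without duplicating any of the security logic:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;# Sketch: reusing the same module for two buckets (hypothetical names)
module "logs_bucket" {
  source      = "./modules/s3-bucket"
  bucket_name = "my-devsecops-logs-bucket"
}

module "artifacts_bucket" {
  source      = "./modules/s3-bucket"
  bucket_name = "my-devsecops-artifacts-bucket"
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;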

&lt;h3&gt;
  
  
  &lt;code&gt;variables.tf&lt;/code&gt;: The Global Control Panel
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"bucket_name"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Bucket name defined at the root level"&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-devsecops-study-bucket"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file at the root concentrates all definitions that might change from one environment to another, acting as the project's central control panel.&lt;/p&gt;
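&lt;p&gt;In practice, the default only covers the study environment. Another environment could override it without touching any &lt;code&gt;.tf&lt;/code&gt; file, for example through a hypothetical &lt;code&gt;prod.tfvars&lt;/code&gt; passed with &lt;code&gt;-var-file&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;# Sketch: prod.tfvars (hypothetical), applied with:
#   terraform apply -var-file=prod.tfvars
bucket_name = "my-devsecops-prod-bucket"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;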




&lt;h2&gt;
  
  
  🔗 How Does It All Connect?
&lt;/h2&gt;

&lt;p&gt;Unlike a manual process where you would run commands one by one in your terminal, the intelligence here lies in the &lt;strong&gt;automation&lt;/strong&gt;. The flow runs like a set of synchronized gears every time I &lt;code&gt;git push&lt;/code&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The Trigger&lt;/strong&gt;: The moment I send my code to GitHub, the CI/CD pipeline identifies the change and starts the jobs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Inspection&lt;/strong&gt;: Before even thinking about creating the bucket, GitHub Actions runs the validation and security scan. If there's an error in the S3 module or an exposed bucket, the pipeline stops here, protecting the environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scenario Building&lt;/strong&gt;: Once security gives the green light, GitHub starts a &lt;strong&gt;LocalStack&lt;/strong&gt; container within the execution environment itself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Silent Connection&lt;/strong&gt;: The &lt;code&gt;provider.tf&lt;/code&gt; file acts as a GPS, guiding Terraform to the local container using the credentials configured in the repository &lt;strong&gt;Secrets&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plan Delivery&lt;/strong&gt;: The global &lt;code&gt;main.tf&lt;/code&gt; calls the S3 module, injects the name defined in &lt;code&gt;variables.tf&lt;/code&gt;, and Terraform generates the final &lt;strong&gt;Plan&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqnfbo55l2owt6op1wli.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqnfbo55l2owt6op1wli.png" alt="Pipeline flow diagram" width="784" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This structure allows me to test different configurations simply by changing the code and pushing to the repository — with agility and the confidence that the infrastructure is &lt;strong&gt;secure by design&lt;/strong&gt; and automatically validated.&lt;/p&gt;




&lt;h2&gt;
  
  
  🏁 Conclusion: What Do I Take from This Lab?
&lt;/h2&gt;

&lt;p&gt;Finishing this project brought me much greater clarity on the role of automation for a Cloud Engineer. More than just writing &lt;code&gt;.tf&lt;/code&gt; files, I understood that true excellence lies in creating processes that are &lt;strong&gt;secure, repeatable, and cost-efficient&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My biggest lessons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security is not the end, it's the beginning&lt;/strong&gt;: Implementing &lt;code&gt;tfsec&lt;/code&gt; taught me that "Shift Left" isn't just a buzzword; it saved me time and prevented vulnerable infrastructure from ever being deployed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Troubleshooting is part of learning&lt;/strong&gt;: Solving the &lt;strong&gt;MalformedXML&lt;/strong&gt; error by pinning the AWS Provider to v5.x and adjusting the LocalStack v4 configuration gave me the confidence to handle the real-world "traps" that arise when integrating different tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero Cost, Maximum Value&lt;/strong&gt;: LocalStack proved to be an indispensable ally. The freedom to fail, destroy, and rebuild a simulated environment accelerated my learning without the worry of an AWS bill at the end of the month.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The journey to the cloud is continuous, and tools like Terraform and GitHub Actions are the engines that allow me to navigate with safety and agility.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Did you like this project?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can check out the full code in my repository: &lt;a href="https://github.com/JessicaApBueno/localstack-terraform-lab" rel="noopener noreferrer"&gt;JessicaApBueno/localstack-terraform-lab&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devsecops</category>
      <category>localstack</category>
      <category>tfsec</category>
    </item>
  </channel>
</rss>
