Demystifying the Cloud: A Practical DevSecOps Lab with Terraform and LocalStack
Imagine being able to test your entire AWS infrastructure with rigorous security validation and professional automation, without spending a single cent.
In the real world, cloud mistakes are expensive. That's why simulation environments and well-structured CI/CD pipelines are "game changers" for a Cloud Engineer.
🚀 The Protagonist: LocalStack
To make this zero-cost environment possible, I used LocalStack. It is a cloud service emulator that runs in a single Docker container on your local machine or within CI/CD environments.
- How does it work? LocalStack intercepts the API calls you would send to the real AWS and processes them locally. For your Terraform, it's as if it were talking to the true cloud, but the data and resources never leave your controlled environment.
- How to start locally? If you want to run it manually on your machine for quick experiments, just run:
```shell
localstack start -d
```
In this project, however, LocalStack is started automatically by the GitHub Actions pipeline — you don't need to run it locally for the CI/CD flow to work. The local command above is optional, useful for manual testing before pushing.
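If you do start it locally, it helps to confirm the emulator is actually up before running Terraform. A quick sanity check, assuming the default port 4566 and the `awslocal` wrapper (from the separate `awscli-local` package) installed:

```shell
# Query the health endpoint exposed by the LocalStack container
curl -s http://localhost:4566/_localstack/health

# List buckets through the emulator; awslocal wraps the AWS CLI
# and points it at http://localhost:4566 automatically
awslocal s3 ls
```

If the health endpoint responds and `awslocal s3 ls` returns without a connection error, Terraform will be able to reach the same endpoint.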
- Official Documentation: To explore all supported services, you can access the LocalStack Documentation.
🌟 Overview
This project was born from the need to unite theory and practice in a DevSecOps scenario. The main goal isn't just to "create a resource," but to build a learning journey on how IaC (Infrastructure as Code) and Automation technologies integrate into the daily routine of a technology team.
🛡️ The "Sec" in DevSecOps: Why is this not just another automation project?
We often hear the term DevOps, but when we add Sec (Security) in the middle, we are talking about a paradigm shift called Shift Left. In practice, this means bringing security to the beginning of the development cycle rather than leaving it as a final task before deployment.
In this lab, security is not optional; it is a structural part of the pipeline. Here's how I transformed a delivery flow into a secure delivery flow:
Shift Left with Static Analysis (tfsec)
Unlike a traditional flow where you would create the resource and then run a scanner, here I used tfsec directly in GitHub Actions — running it before LocalStack even starts:
- The code is analyzed even before any resource is simulated or created.
- In this lab, `soft_fail: true` is configured so that security warnings appear in the logs without blocking the pipeline, allowing you to observe and learn from each finding. In a real production environment, this flag would be removed, making `tfsec` a hard gate: any critical vulnerability would immediately stop the pipeline.
Secure Infrastructure by Design (Hardening)
The developed S3 module doesn't just focus on creating the bucket, but on its Hardening:
- Public Access Block: I implemented features that prevent the bucket from being accidentally exposed to the internet.
- AES256 Encryption: Ensures that data at rest is always protected.
- Versioning: Added a recovery layer against accidental deletions or malicious attacks.
Security as a Troubleshooting Culture
During development, the pipeline "broke" several times due to tfsec alerts. In a common scenario, a developer might simply disable the scanner. Here, the approach was remediation: understanding the pointed risk (like the lack of public_access_block) and updating the code to meet market security standards.
"DevSecOps is not about tools; it's about not allowing delivery speed to compromise data integrity."
🎯 Why this Lab?
Often, when studying cloud, we hit the fear of costs or the complexity of setting up a pipeline from scratch. This lab was designed to be a safe learning environment, where I applied concepts of:
- Modularization: How to organize files so that the code is reusable.
- Preventive Security (Shift Left): How to use scanning tools (tfsec) to block security errors before the resource even exists.
- Local Simulation: How to bypass physical and financial limitations using LocalStack to emulate complex AWS services.
🛠️ The Tech Stack: The Automation "Engine"
For this ecosystem to work in harmony, I selected industry-standard tools that complement each other perfectly:
- Terraform: The choice for IaC. It allows defining infrastructure through declarative code, ensuring the environment is replicable and free from error-prone manual configurations.
- LocalStack: Emulates a complete AWS cloud inside a Docker container on the GitHub runner, allowing testing of resources like S3 without generating real costs on the AWS account.
- GitHub Actions: The CI/CD engine. It orchestrates the execution of validation, security, and simulation jobs on every `git push`.
- tfsec: The static analysis tool that provides the "Sec" in DevSecOps, scanning the code for insecure configurations before deployment.
📂 Organization and Structure: Thinking at Scale
A professional infrastructure project cannot be a "single file." The folder organization reflects the maturity of the code and facilitates maintenance by other engineers.
```
localstack-terraform-lab/
├── .github/workflows/
│   └── terraform.yml     # Where the automation magic happens
├── modules/
│   └── s3-bucket/        # Our reusable and secure component
│       ├── main.tf       # Security logic and S3 resources
│       └── variables.tf  # Module parameterization
├── main.tf               # Entry point (calling modules)
├── provider.tf           # LocalStack and AWS Provider configuration
└── variables.tf          # Global project variables
```
In this structure, the highlight goes to the modules/ folder. Instead of creating the bucket directly in the root, the logic was isolated within a reusable module. This means that if 10 more buckets are needed tomorrow, they will all strictly follow the same security standard we defined once.
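To make that reuse concrete, here is a sketch (with hypothetical bucket names) of how the root configuration could stamp out additional buckets that all inherit the same hardening:

```hcl
# Each call reuses the hardened module; only the name changes.
module "logs_bucket" {
  source      = "./modules/s3-bucket"
  bucket_name = "my-devsecops-logs-bucket" # hypothetical name
}

module "backups_bucket" {
  source      = "./modules/s3-bucket"
  bucket_name = "my-devsecops-backups-bucket" # hypothetical name
}
```

Every bucket created this way gets the public access block, versioning, and encryption for free, because that logic lives inside the module.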
🏗️ Dissecting the Pipeline: The terraform.yml Workflow
The heart of this automation resides in the .github/workflows/terraform.yml file. It is divided into three major pillars (Jobs) that ensure project integrity.
Header and Trigger
```yaml
name: Terraform Professional CI
on: [push]
```
- name: The name that will appear in the GitHub "Actions" tab. Choosing a professional name helps quickly identify the purpose of the automation.
- on: [push]: Defines the trigger. Every time new code is sent to the repository, the pipeline comes to life automatically.
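As a sketch of a common refinement (not part of this lab's workflow), the trigger can be narrowed so the pipeline only runs for pushes and pull requests targeting the main branch, instead of every branch:

```yaml
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
```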
Job 1: Check Code Quality (Validation)
This is the first quality filter. It ensures the code is well-written before anything else happens.
```yaml
jobs:
  validate:
    name: "Check Code Quality"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init -backend=false
      - run: terraform validate
```
- runs-on: ubuntu-latest: Tells GitHub to provision a clean Linux virtual machine to run these commands.
- actions/checkout@v4: Makes the virtual machine download your code from the repository.
- `terraform init -backend=false`: A crucial lesson. Since we use local modules, Terraform needs to "install" them before validating. We pass `-backend=false` because no real cloud connection is needed at this stage.
- `terraform validate`: The command that catches missing brackets, typos, and incorrectly declared variables.
Job 2: Security Scan
This is where DevOps becomes DevSecOps.
```yaml
security:
  name: "Security Scan"
  needs: validate
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Run tfsec
      uses: aquasecurity/tfsec-action@v1.0.0
      with:
        soft_fail: true
```
- needs: validate: Creates the dependency between jobs. The Security Scan only starts if the Quality Validation passes.
- aquasecurity/tfsec-action: The "security inspector." It scans the code for vulnerabilities, such as S3 buckets without encryption or with public access enabled.
- soft_fail: true: A deliberate choice for this lab context. It allows security warnings to appear in the logs without blocking the pipeline, so you can observe all findings and plan your remediations. Important: in a real production pipeline, this flag should be removed — making tfsec a hard blocker that stops the pipeline on any critical finding.
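For reference, the production-style hard gate is simply the same step without the flag. A sketch (the action fails the step on findings when `soft_fail` is not set):

```yaml
- name: Run tfsec (hard gate)
  uses: aquasecurity/tfsec-action@v1.0.0
  # no soft_fail: any finding fails this step,
  # and the terraform-plan job never runs
```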
Job 3: LocalStack Plan (Simulation)
The final stage, where the infrastructure is simulated in the zero-cost cloud.
```yaml
terraform-plan:
  name: "LocalStack Plan"
  needs: security
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Start LocalStack
      uses: localstack/setup-localstack@main
    - uses: hashicorp/setup-terraform@v3
    - name: Terraform Init & Plan
      run: |
        terraform init
        terraform plan
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_DEFAULT_REGION: us-east-1
```
- localstack/setup-localstack: Starts a Docker container with LocalStack, creating an AWS simulator inside the GitHub runner.
- terraform plan: Generates the execution plan, showing exactly what would be created in real AWS.
- env & secrets: Uses GitHub Repository Secrets to inject credentials safely, simulating exactly how you would protect access keys in a real production project.
📦 Hands-on: Building a Secure S3 Module
To keep the project organized and scalable, I developed an isolated module for S3. My intention was to create a security standard I could replicate in any other project. Here is the main.tf of the module, block by block:
The Base Resource
```hcl
resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name
}
```
This is where the bucket itself is defined. I used a variable (var.bucket_name) to make the module flexible, allowing different names to be set without changing the module's internal security logic.
Public Access Block (The "Vault Lock")
```hcl
resource "aws_s3_bucket_public_access_block" "this" {
  bucket                  = aws_s3_bucket.this.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```
These four flags act as security locks, ensuring that even if someone later tries to change permissions manually, the bucket remains private — preventing sensitive data from being accidentally exposed on the internet.
Versioning
```hcl
resource "aws_s3_bucket_versioning" "this" {
  bucket = aws_s3_bucket.this.id
  versioning_configuration {
    status = "Enabled"
  }
}
```
With versioning enabled, it's possible to recover previous versions of deleted or accidentally modified files. It's an essential protection layer against human error or ransomware attacks.
Encryption at Rest
```hcl
resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  bucket = aws_s3_bucket.this.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```
This block was added to address the security findings identified by tfsec during the CI/CD pipeline execution. AES256 encryption ensures that all files stored on disk (or simulated in LocalStack) are protected by default.
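A natural extension of the module (not part of the original lab; a sketch) is an `outputs.tf` so callers can reference the bucket that was created:

```hcl
# modules/s3-bucket/outputs.tf (illustrative addition)
output "bucket_id" {
  description = "Name (ID) of the hardened bucket"
  value       = aws_s3_bucket.this.id
}

output "bucket_arn" {
  description = "ARN of the hardened bucket"
  value       = aws_s3_bucket.this.arn
}
```

With these outputs, other root-level resources (a bucket policy, a CloudTrail target, etc.) can consume `module.my_bucket.bucket_arn` without reaching into the module's internals.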
📥 Flexibility with variables.tf
To avoid a "hardcoded" project, I followed one of Terraform's golden rules: never leave fixed values in the middle of the code. Think of variables as function parameters — they allow you to use the same module to create different buckets just by changing the name at the "entry point," without touching the security logic you already validated.
```hcl
variable "bucket_name" {
  description = "Unique name of the S3 bucket"
  type        = string
}
```
- Description (Living Documentation): In a real team scenario, this avoids any doubt about what that field expects to receive — for other engineers and for your future self.
- Type Constraint (Type Safety): By declaring the type as `string`, Terraform validates the input. If a list or number is accidentally passed, Terraform warns you immediately, preventing strange errors during deployment.
"Working with variables is what separates a simple script from professional, scalable infrastructure."
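Type constraints can be pushed further with Terraform's `validation` block. A sketch, assuming you also want to enforce S3's lowercase naming rules at plan time:

```hcl
variable "bucket_name" {
  description = "Unique name of the S3 bucket"
  type        = string

  validation {
    # S3 names: 3-63 chars, lowercase letters, digits, dots, hyphens
    condition     = can(regex("^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$", var.bucket_name))
    error_message = "Bucket names must be 3-63 characters of lowercase letters, numbers, dots, or hyphens."
  }
}
```

An invalid name then fails `terraform plan` with the custom message, long before any API call is made.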
🏗️ The Command Center: Root Files
With the security modules properly built and "locked," it was time to organize the Command Center of the project: the root folder. This is where the pieces connect to LocalStack. Separating the files into provider.tf, main.tf, and variables.tf ensures that each one has a single responsibility, avoiding the hard-to-maintain "monolith."
provider.tf: The Bridge and the Security Lock
The Terraform Block: Ensuring the Correct Version
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```
Why this? During testing, I noticed that the more recent versions (v6.x) of the AWS Provider produced XML protocol errors when talking to LocalStack v4. By pinning the version to ~> 5.0, I ensured stability and avoided the MalformedXML error.
The Provider Block: Pointing to LocalStack
```hcl
provider "aws" {
  region     = "us-east-1"
  access_key = "test"
  secret_key = "test"

  s3_use_path_style           = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    s3 = "http://localhost:4566"
  }
}
```
- Skip configurations: The flags `skip_credentials_validation`, `skip_metadata_api_check`, and `skip_requesting_account_id` are specific to the LocalStack test environment. They tell Terraform not to try validating the `"test"` keys against real AWS servers. Never use these settings in a real production provider configuration.
- Path Style: `s3_use_path_style = true` is set because LocalStack handles this URL format for buckets better than the subdomain (virtual-hosted) format used in the real cloud.
- Endpoints: The most important block — it redirects all S3 traffic to `localhost:4566`, where LocalStack is listening.
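You can confirm the redirection works with the plain AWS CLI by pointing it at the same endpoint. A sketch, assuming LocalStack is running locally (the `test` keys mirror the provider configuration above; LocalStack does not validate them):

```shell
# Any value works for the keys; LocalStack accepts them as-is
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_DEFAULT_REGION=us-east-1

# Talk to LocalStack instead of real AWS
aws --endpoint-url=http://localhost:4566 s3 ls
```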
🔐 What About Secrets?
Although the code uses "test" keys (which LocalStack accepts by default), I took care to configure GitHub Secrets in my CI/CD pipeline. This means the real access values are never exposed in the code — they are injected via environment variables, simulating exactly how you would protect access keys for a real AWS account.
main.tf: The Conductor
If the modules are the musicians, the root main.tf is the conductor. It doesn't create the bucket by itself; it calls the module I developed and passes the necessary instructions.
```hcl
module "my_bucket" {
  source      = "./modules/s3-bucket"
  bucket_name = var.bucket_name
}
```
- source: Points to the path of the module folder.
- Variable passing: The global variable `bucket_name` is passed directly into the module's `bucket_name` input, creating a clean and consistent hierarchy.
variables.tf: The Global Control Panel
```hcl
variable "bucket_name" {
  description = "Bucket name defined at the root level"
  default     = "my-devsecops-study-bucket"
}
```
This file at the root concentrates all definitions that might change from one environment to another, acting as the project's central control panel.
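Because the variable has a default, overriding it per environment is trivial. A sketch of a `terraform.tfvars` file (hypothetical name and value) that takes precedence over the default:

```hcl
# terraform.tfvars (illustrative; loaded automatically by Terraform)
bucket_name = "my-devsecops-dev-bucket"
```

The same override works for a single run via the CLI, e.g. `terraform plan -var='bucket_name=my-devsecops-dev-bucket'`, without touching any file.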
🔗 How Does It All Connect?
Unlike a manual process where you would run commands one by one in your terminal, the intelligence here lies in the automation. The flow works like a synchronized gear every time I perform a git push:
- The Trigger: The moment I send my code to GitHub, the CI/CD pipeline identifies the change and starts the jobs.
- The Inspection: Before even thinking about creating the bucket, GitHub Actions runs the validation and security scan. If there's an error in the S3 module or an exposed bucket, the pipeline stops here, protecting the environment.
- Scenario Building: Once security gives the green light, GitHub starts a LocalStack container within the execution environment itself.
- The Silent Connection: The `provider.tf` file acts as a GPS, guiding Terraform to the local container using the credentials configured in the repository Secrets.
- Plan Delivery: The root `main.tf` calls the S3 module, injects the name defined in `variables.tf`, and Terraform generates the final Plan.
This structure allows me to test different configurations simply by changing the code and pushing to the repository — with agility and the confidence that the infrastructure is secure by design and automatically validated.
🏁 Conclusion: What Do I Take from This Lab?
Finishing this project brought me much greater clarity on the role of automation for a Cloud Engineer. More than just writing .tf files, I understood that true excellence lies in creating processes that are secure, repeatable, and cost-efficient.
My biggest lessons:
- Security is not the end, it's the beginning: Implementing `tfsec` taught me that "Shift Left" isn't just a buzzword; it saved me time and prevented vulnerable infrastructure from ever being deployed.
- Troubleshooting is part of learning: Solving the MalformedXML error by pinning the AWS Provider to v5.x for compatibility with LocalStack v4 gave me the confidence to handle the real-world "traps" that arise when integrating different tools.
- Zero Cost, Maximum Value: LocalStack proved to be an indispensable ally. The freedom to fail, destroy, and rebuild a simulated environment accelerated my learning without the worry of an AWS bill at the end of the month.
The journey to the cloud is continuous, and tools like Terraform and GitHub Actions are the engines that allow me to navigate with safety and agility.
Did you like this project?
You can check out the full code in my repository: JessicaApBueno/localstack-terraform-lab
