<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Udoh Deborah</title>
    <description>The latest articles on DEV Community by Udoh Deborah (@udoh_deborah_b1e484c474bf).</description>
    <link>https://dev.to/udoh_deborah_b1e484c474bf</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1722392%2F01db5057-b68a-48e9-ae90-e147fd3384e1.png</url>
      <title>DEV Community: Udoh Deborah</title>
      <link>https://dev.to/udoh_deborah_b1e484c474bf</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/udoh_deborah_b1e484c474bf"/>
    <language>en</language>
    <item>
      <title>Managing Terraform State: Best Practices for DevOps</title>
      <dc:creator>Udoh Deborah</dc:creator>
      <pubDate>Tue, 31 Mar 2026 20:39:35 +0000</pubDate>
      <link>https://dev.to/udoh_deborah_b1e484c474bf/managing-terraform-state-best-practices-for-devops-35nn</link>
      <guid>https://dev.to/udoh_deborah_b1e484c474bf/managing-terraform-state-best-practices-for-devops-35nn</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;If Day 5 was about building scaled infrastructure, Day 6 was about understanding what holds it all together: Terraform state.&lt;/p&gt;

&lt;p&gt;Today I migrated from local state to a fully remote S3 backend with state locking, and the difference between the two is not a small thing. It is the difference between infrastructure you can trust and infrastructure that is one concurrent run away from disaster.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Terraform State?
&lt;/h2&gt;

&lt;p&gt;Every time you run &lt;code&gt;terraform apply&lt;/code&gt;, Terraform writes a file called &lt;code&gt;terraform.tfstate&lt;/code&gt;. This JSON file is Terraform's complete record of everything it manages — every resource, every attribute, every dependency. It is not a log. It is the source of truth.&lt;/p&gt;
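&lt;p&gt;To make that concrete, here is a trimmed, hand-edited illustration of the shape of a &lt;code&gt;terraform.tfstate&lt;/code&gt; file — the resource names, IDs, and version numbers below are placeholders, not output from my run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "version": 4,
  "terraform_version": "1.10.0",
  "serial": 12,
  "lineage": "a1b2c3d4-example-lineage",
  "resources": [
    {
      "type": "aws_lb",
      "name": "web",
      "instances": [
        {
          "attributes": {
            "arn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/example",
            "idle_timeout": 60
          }
        }
      ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;serial&lt;/code&gt; field increments on every write, which is how Terraform detects that it is working from a stale copy.&lt;/p&gt;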

&lt;p&gt;When you run &lt;code&gt;terraform plan&lt;/code&gt;, Terraform does three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reads your configuration code&lt;/li&gt;
&lt;li&gt;Reads the state file&lt;/li&gt;
&lt;li&gt;Queries real AWS infrastructure&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It then calculates the difference between what your code says should exist and what actually exists. Without state, none of this is possible. Terraform would have no way to know what it already created.&lt;/p&gt;
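&lt;p&gt;The result of that comparison shows up as the familiar plan symbols. For example (illustrative output, not from this project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# aws_instance.example will be updated in-place
~ resource "aws_instance" "example" {
    ~ instance_type = "t2.micro" -&amp;gt; "t3.micro"
      id            = "i-0abc123example"
  }

Plan: 0 to add, 1 to change, 0 to destroy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;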

&lt;h3&gt;
  
  
  What the state file actually stores
&lt;/h3&gt;

&lt;p&gt;After applying my Day 6 infrastructure, I ran &lt;code&gt;terraform state show aws_lb.web&lt;/code&gt; and was surprised by how much detail was recorded. Every attribute AWS returns for the load balancer is stored — not just the ones I configured. Fields like &lt;code&gt;desync_mitigation_mode&lt;/code&gt;, &lt;code&gt;idle_timeout&lt;/code&gt;, &lt;code&gt;preserve_host_header&lt;/code&gt;, and &lt;code&gt;xff_header_processing&lt;/code&gt; were all there, even though I never set them in my config.&lt;/p&gt;

&lt;p&gt;Running &lt;code&gt;terraform state list&lt;/code&gt; showed every resource Terraform was tracking:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_ami&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;amazon_linux&lt;/span&gt;
&lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_subnets&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default&lt;/span&gt;
&lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default&lt;/span&gt;
&lt;span class="nx"&gt;aws_autoscaling_group&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web&lt;/span&gt;
&lt;span class="nx"&gt;aws_launch_template&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web&lt;/span&gt;
&lt;span class="nx"&gt;aws_lb&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web&lt;/span&gt;
&lt;span class="nx"&gt;aws_lb_listener&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;http&lt;/span&gt;
&lt;span class="nx"&gt;aws_lb_target_group&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web&lt;/span&gt;
&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;alb&lt;/span&gt;
&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;instance&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why Local State Breaks Down
&lt;/h2&gt;

&lt;p&gt;Local state works fine when you are the only person touching the infrastructure. The moment a second person gets involved, everything breaks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Concurrent runs&lt;/strong&gt; — Two engineers run &lt;code&gt;terraform apply&lt;/code&gt; at the same time. Both read the same local state, make different changes, and write back conflicting versions. State is now corrupted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lost state&lt;/strong&gt; — An engineer runs apply on their laptop and the laptop dies. The state file is gone. Terraform no longer knows what it created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No locking&lt;/strong&gt; — Local state has no locking mechanism. Nothing stops two operations from running simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secrets in plaintext&lt;/strong&gt; — The state file stores sensitive values like passwords and access keys in plaintext JSON. Committing it to Git exposes those secrets to everyone with repo access — and to anyone who ever had access, since Git history is permanent.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Solution: Remote State with S3 and DynamoDB
&lt;/h2&gt;

&lt;p&gt;The fix is to store state remotely in AWS S3, with locking handled either by a DynamoDB table (the long-standing approach) or by S3 itself on recent Terraform versions. Every engineer and every CI/CD pipeline reads and writes the same state file, and only one operation can hold the lock at a time.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Bootstrap Problem
&lt;/h3&gt;

&lt;p&gt;Here is the challenge: you cannot use Terraform to create the S3 bucket that Terraform itself needs as a backend. The bucket has to exist before &lt;code&gt;terraform init&lt;/code&gt; can use it.&lt;/p&gt;

&lt;p&gt;The solution is to split the setup into two separate configurations. First, a &lt;code&gt;backend-setup&lt;/code&gt; folder creates the S3 bucket and DynamoDB table using local state. Once those exist, the main configuration can use them as its backend.&lt;/p&gt;
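&lt;p&gt;In practice the bootstrap flow looks like this (folder names assumed to match the layout described above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;# Step 1: create the bucket and lock table using local state
cd backend-setup
terraform init
terraform apply

# Step 2: initialise the main config against the new S3 backend
cd ..
terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;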

&lt;h3&gt;
  
  
  Creating the S3 Bucket and DynamoDB Table
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket"&lt;/span&gt; &lt;span class="s2"&gt;"terraform_state"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-state-585706661633"&lt;/span&gt;

  &lt;span class="nx"&gt;lifecycle&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;prevent_destroy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket_versioning"&lt;/span&gt; &lt;span class="s2"&gt;"enabled"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;terraform_state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;versioning_configuration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Enabled"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket_server_side_encryption_configuration"&lt;/span&gt; &lt;span class="s2"&gt;"default"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;terraform_state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;apply_server_side_encryption_by_default&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;sse_algorithm&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AES256"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket_public_access_block"&lt;/span&gt; &lt;span class="s2"&gt;"public_access"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;terraform_state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;block_public_acls&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;block_public_policy&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;ignore_public_acls&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;restrict_public_buckets&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_dynamodb_table"&lt;/span&gt; &lt;span class="s2"&gt;"terraform_locks"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-state-locks"&lt;/span&gt;
  &lt;span class="nx"&gt;billing_mode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"PAY_PER_REQUEST"&lt;/span&gt;
  &lt;span class="nx"&gt;hash_key&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"LockID"&lt;/span&gt;

  &lt;span class="nx"&gt;attribute&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"LockID"&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"S"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key decisions here:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;prevent_destroy = true&lt;/code&gt; — stops anyone from accidentally deleting the state bucket with &lt;code&gt;terraform destroy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Versioning enabled — every version of the state file is kept, so you can roll back if something goes wrong&lt;/li&gt;
&lt;li&gt;Server-side encryption — state is encrypted at rest using AES256&lt;/li&gt;
&lt;li&gt;Public access blocked — the state bucket is never accessible from the internet&lt;/li&gt;
&lt;/ul&gt;
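&lt;p&gt;Versioning is only useful if you know how to get an old version back. A sketch of the recovery path using the AWS CLI (the version ID below is a placeholder — take the real one from the first command's output):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;# List every stored version of the state object
aws s3api list-object-versions \
  --bucket terraform-state-585706661633 \
  --prefix day6/terraform.tfstate

# Download a specific older version for inspection or restore
aws s3api get-object \
  --bucket terraform-state-585706661633 \
  --key day6/terraform.tfstate \
  --version-id EXAMPLEVERSIONID \
  terraform.tfstate.backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;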

&lt;h3&gt;
  
  
  Configuring the Backend
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="s2"&gt;"s3"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;bucket&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-state-585706661633"&lt;/span&gt;
    &lt;span class="nx"&gt;key&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"day6/terraform.tfstate"&lt;/span&gt;
    &lt;span class="nx"&gt;region&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
    &lt;span class="nx"&gt;use_lockfile&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;encrypt&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every argument matters here. &lt;code&gt;bucket&lt;/code&gt; is where state lives. &lt;code&gt;key&lt;/code&gt; is the path inside the bucket — using a path like &lt;code&gt;day6/terraform.tfstate&lt;/code&gt; means multiple projects can share one bucket without overwriting each other. &lt;code&gt;use_lockfile&lt;/code&gt; enables S3-native state locking. &lt;code&gt;encrypt&lt;/code&gt; ensures the state object is encrypted at rest in S3; transit is already protected because Terraform talks to S3 over HTTPS.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The older &lt;code&gt;dynamodb_table&lt;/code&gt; parameter is deprecated in recent Terraform releases (S3-native locking arrived in Terraform 1.10, and 1.11 deprecated the DynamoDB option). Use &lt;code&gt;use_lockfile = true&lt;/code&gt; instead — it achieves the same locking behaviour using S3 natively.&lt;/p&gt;
&lt;/blockquote&gt;
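&lt;p&gt;If you already have local state when you add this backend block, Terraform will offer to copy it into S3 during initialisation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;terraform init -migrate-state
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Terraform prompts for confirmation before the copy, so the local file is never silently discarded.&lt;/p&gt;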

&lt;h2&gt;
  
  
  Proof It Worked
&lt;/h2&gt;

&lt;p&gt;After running &lt;code&gt;terraform apply&lt;/code&gt;, the infrastructure came up successfully with state stored remotely in S3.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Terminal output showing successful apply with the state lock releasing:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/owzmwk03aa4ktpuz5ogm.png" rel="noopener noreferrer"&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/owzmwk03aa4ktpuz5ogm.png" alt="Terminal output showing successful apply and state lock release"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The terminal shows all 7 resources created and "Releasing state lock" confirming the lock was acquired and released correctly.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ALB response in browser confirming Day 6 infrastructure is live:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bf0rgpwwn592nizzvf0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bf0rgpwwn592nizzvf0.png" alt="Browser showing the ALB response page confirming remote state in S3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The page explicitly confirms state is stored in the S3 remote backend.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Checking the S3 bucket confirmed the state file was there:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;aws s3 ls s3://terraform-state-585706661633/day6/
2026-03-31 08:29:18      28315 terraform.tfstate
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;28KB, versioned, encrypted, and safely stored in S3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing State Locking
&lt;/h2&gt;

&lt;p&gt;To prove locking works, I opened two terminals pointing at the same configuration. Terminal 1 ran &lt;code&gt;terraform apply&lt;/code&gt;. Immediately, Terminal 2 ran &lt;code&gt;terraform plan&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Terminal 2 was blocked with a lock error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: Error acquiring the state lock

Error message: ConditionalRequestFailed: The conditional request failed
Lock Info:
  Path:      terraform-state-585706661633/day6/terraform.tfstate.tflock
  Operation: OperationTypeApply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is exactly the behaviour you want in a team environment. No two operations can touch state simultaneously: the second one fails immediately (or, with &lt;code&gt;-lock-timeout&lt;/code&gt;, keeps retrying) until the first releases the lock.&lt;/p&gt;
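&lt;p&gt;When a lock is held by a crashed run rather than a live one, Terraform also has a built-in escape hatch. The lock ID is printed in the error message; the value below is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;terraform force-unlock LOCK_ID_FROM_ERROR
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is safer than deleting the lock object by hand, because Terraform checks the ID before releasing.&lt;/p&gt;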




&lt;h2&gt;
  
  
  Errors I Hit and How I Fixed Them
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;S3 bucket does not exist on terraform init&lt;/strong&gt; — I ran &lt;code&gt;terraform init&lt;/code&gt; in the root folder before the S3 bucket existed. Fix: run the &lt;code&gt;backend-setup&lt;/code&gt; config first to create the bucket, then init the main config.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deprecated &lt;code&gt;dynamodb_table&lt;/code&gt; parameter&lt;/strong&gt; — Recent Terraform releases deprecate this in favour of &lt;code&gt;use_lockfile = true&lt;/code&gt;. Updated the backend block accordingly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stuck state lock after DNS failure&lt;/strong&gt; — A DNS drop mid-apply left a &lt;code&gt;.tflock&lt;/code&gt; file in S3. Fix: &lt;code&gt;aws s3 rm s3://terraform-state-585706661633/day6/terraform.tfstate.tflock&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Intermittent DNS failures&lt;/strong&gt; — An unstable internet connection caused repeated &lt;code&gt;no such host&lt;/code&gt; errors. Fix: wait for connection to stabilise and retry — the infrastructure and state were always fine, just the network dropping temporarily.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Terraform state is the source of truth — treat it with the same care as your database&lt;/li&gt;
&lt;li&gt;Never commit &lt;code&gt;terraform.tfstate&lt;/code&gt; to Git — use remote state from day one&lt;/li&gt;
&lt;li&gt;The bootstrap problem is real — always create your backend infrastructure in a separate config&lt;/li&gt;
&lt;li&gt;State locking is not optional in a team environment — it is what prevents catastrophic corruption&lt;/li&gt;
&lt;li&gt;S3 versioning is your safety net — it lets you recover from a bad apply by rolling back to a previous state version&lt;/li&gt;
&lt;/ul&gt;
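&lt;p&gt;A minimal &lt;code&gt;.gitignore&lt;/code&gt; that enforces the second takeaway — a common community pattern, so adjust the &lt;code&gt;*.tfvars&lt;/code&gt; line to your team's conventions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Local state and backups must never reach Git
terraform.tfstate
terraform.tfstate.backup
*.tfstate
*.tfstate.*

# Local provider cache
.terraform/

# Variable files that may hold secrets
*.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;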

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Managing High Traffic Applications with AWS Elastic Load Balancer and Terraform</title>
      <dc:creator>Udoh Deborah</dc:creator>
      <pubDate>Mon, 30 Mar 2026 16:49:16 +0000</pubDate>
      <link>https://dev.to/udoh_deborah_b1e484c474bf/managing-high-traffic-applications-with-aws-elastic-load-balancer-and-terraform-233n</link>
      <guid>https://dev.to/udoh_deborah_b1e484c474bf/managing-high-traffic-applications-with-aws-elastic-load-balancer-and-terraform-233n</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;On Day 5 of the 30-Day Terraform Challenge, I tackled two of the most important concepts in production infrastructure: scaling with an AWS Application Load Balancer (ALB) and understanding Terraform state. By the end of the day, I had a fully load-balanced cluster running across multiple availability zones — and a much deeper understanding of what Terraform is actually doing behind the scenes.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;A fully production-ready scaled infrastructure consisting of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Application Load Balancer accepting public HTTP traffic on port 80&lt;/li&gt;
&lt;li&gt;An Auto Scaling Group running a minimum of 2 EC2 instances across multiple AZs&lt;/li&gt;
&lt;li&gt;A Target Group with HTTP health checks ensuring only healthy instances receive traffic&lt;/li&gt;
&lt;li&gt;Security Groups that restrict direct instance access — only the ALB can talk to the instances&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  ALB + ASG Setup Walkthrough
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Architecture
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Internet
    │
    ▼
[ ALB - port 80 ]
    │
    ▼
[ Target Group ]
    │         │
    ▼         ▼
[EC2 - AZ-a] [EC2 - AZ-b]
  (port 8080) (port 8080)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ALB sits in front of the ASG. All public traffic hits the ALB on port 80, which forwards it to healthy instances in the target group on port 8080. Instances are not directly accessible from the internet — their security group only allows traffic from the ALB security group.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Groups — The Right Way
&lt;/h3&gt;

&lt;p&gt;A common mistake is opening instance security groups to 0.0.0.0/0. The correct pattern is to reference the ALB security group directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ALB accepts traffic from the internet&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"alb"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-day5-alb-sg"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;egress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Instances only accept traffic FROM the ALB security group&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"instance"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-day5-instance-sg"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;server_port&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;server_port&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;security_groups&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;alb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;egress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Launch Template and User Data
&lt;/h2&gt;

&lt;p&gt;Each instance runs a simple Python HTTP server on port 8080, started via a User Data script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_launch_template"&lt;/span&gt; &lt;span class="s2"&gt;"web"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name_prefix&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-day5-"&lt;/span&gt;
  &lt;span class="nx"&gt;image_id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_ami&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;amazon_linux&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;instance_type&lt;/span&gt;

  &lt;span class="nx"&gt;vpc_security_group_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;user_data&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;base64encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
    #!/bin/bash
    mkdir -p /var/www
    cat &amp;gt; /var/www/index.html &amp;lt;&amp;lt;HTML
    &amp;lt;html&amp;gt;
      &amp;lt;body&amp;gt;
        &amp;lt;h1&amp;gt;Hello from $(hostname)&amp;lt;/h1&amp;gt;
        &amp;lt;p&amp;gt;Instance ID: $(curl -s http://169.254.169.254/latest/meta-data/instance-id)&amp;lt;/p&amp;gt;
        &amp;lt;p&amp;gt;AZ: $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)&amp;lt;/p&amp;gt;
      &amp;lt;/body&amp;gt;
    &amp;lt;/html&amp;gt;
    HTML
    cd /var/www
    nohup python3 -m http.server 8080 &amp;amp;
&lt;/span&gt;&lt;span class="no"&gt;  EOF
&lt;/span&gt;  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Auto Scaling Group
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_autoscaling_group"&lt;/span&gt; &lt;span class="s2"&gt;"web"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-day5-asg"&lt;/span&gt;
  &lt;span class="nx"&gt;min_size&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;min_size&lt;/span&gt;
  &lt;span class="nx"&gt;max_size&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;max_size&lt;/span&gt;
  &lt;span class="nx"&gt;desired_capacity&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;min_size&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_zone_identifier&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_subnets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ids&lt;/span&gt;
  &lt;span class="nx"&gt;target_group_arns&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_lb_target_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;health_check_type&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ELB"&lt;/span&gt;
  &lt;span class="nx"&gt;health_check_grace_period&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;
  &lt;span class="nx"&gt;wait_for_capacity_timeout&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10m"&lt;/span&gt;

  &lt;span class="nx"&gt;launch_template&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_launch_template&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
    &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="s2"&gt;Latest"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Application Load Balancer and Target Group
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_lb"&lt;/span&gt; &lt;span class="s2"&gt;"web"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-day5-alb"&lt;/span&gt;
  &lt;span class="nx"&gt;internal&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="nx"&gt;load_balancer_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"application"&lt;/span&gt;
  &lt;span class="nx"&gt;security_groups&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;alb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;subnets&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_subnets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ids&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_lb_target_group"&lt;/span&gt; &lt;span class="s2"&gt;"web"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-day5-tg"&lt;/span&gt;
  &lt;span class="nx"&gt;port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;server_port&lt;/span&gt;
  &lt;span class="nx"&gt;protocol&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HTTP"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

  &lt;span class="nx"&gt;health_check&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;enabled&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;path&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/"&lt;/span&gt;
    &lt;span class="nx"&gt;port&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"traffic-port"&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HTTP"&lt;/span&gt;
    &lt;span class="nx"&gt;healthy_threshold&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="nx"&gt;unhealthy_threshold&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="nx"&gt;interval&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;
    &lt;span class="nx"&gt;timeout&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;
    &lt;span class="nx"&gt;matcher&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"200"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_lb_listener"&lt;/span&gt; &lt;span class="s2"&gt;"http"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;load_balancer_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_lb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
  &lt;span class="nx"&gt;port&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
  &lt;span class="nx"&gt;protocol&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HTTP"&lt;/span&gt;

  &lt;span class="nx"&gt;default_action&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"forward"&lt;/span&gt;
    &lt;span class="nx"&gt;target_group_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_lb_target_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Proof It Worked
&lt;/h2&gt;

&lt;p&gt;After &lt;code&gt;terraform apply&lt;/code&gt; completed, hitting the ALB DNS name in the browser returned:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h0hhsc601uyw5pl8nn3e.png" rel="noopener noreferrer"&gt;Screenshot: response from the ALB in the browser&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Refreshing the page cycled through different instance IDs and AZs — confirming the load balancer was distributing traffic across the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform State Deep Dive
&lt;/h2&gt;

&lt;p&gt;What is Terraform State?&lt;br&gt;
When you run &lt;code&gt;terraform apply&lt;/code&gt;, Terraform creates a file called &lt;code&gt;terraform.tfstate&lt;/code&gt;. This JSON file is Terraform's source of truth — it maps every resource in your configuration to the real resource that exists in AWS.&lt;br&gt;
Without state, Terraform would have no way to know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which resources it already created&lt;/li&gt;
&lt;li&gt;What the current configuration of those resources is&lt;/li&gt;
&lt;li&gt;What needs to change when you update your code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What the State File Contains&lt;br&gt;
Opening &lt;code&gt;terraform.tfstate&lt;/code&gt; after the Day 5 deployment revealed detailed information about every resource:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resource type and name — e.g. &lt;code&gt;aws_lb.web&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Provider metadata — which provider manages the resource&lt;/li&gt;
&lt;li&gt;All attributes — ARNs, IDs, DNS names, tags, ports, every setting&lt;/li&gt;
&lt;li&gt;Dependencies — which resources depend on which&lt;/li&gt;
&lt;/ul&gt;
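&lt;p&gt;You rarely need to open the raw JSON by hand; the Terraform CLI can query state for you. A couple of read-only commands, shown here against the resource names used in this post:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List every resource tracked in the current state
terraform state list

# Show all recorded attributes for a single resource
terraform state show aws_lb.web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;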

&lt;p&gt;It is essentially a complete snapshot of your infrastructure at the time of the last apply.&lt;/p&gt;

&lt;p&gt;Why State Must Never Be Committed to Git&lt;br&gt;
The state file contains sensitive data — resource IDs, ARNs, and potentially secrets if you have outputs exposing them. Beyond security, committing state to Git causes serious problems in team environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Two engineers apply at the same time — state gets corrupted&lt;/li&gt;
&lt;li&gt;Someone applies from an old branch — state goes out of sync&lt;/li&gt;
&lt;li&gt;Merge conflicts in state files are nearly impossible to resolve safely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The solution is remote state — storing state in S3, Terraform Cloud, or another backend that supports locking.&lt;/p&gt;
&lt;h2&gt;
  
  
  State Locking
&lt;/h2&gt;

&lt;p&gt;State locking prevents two operations from running simultaneously against the same state. Without it, two engineers running &lt;code&gt;terraform apply&lt;/code&gt; at the same time can corrupt the state file permanently. With an S3 remote backend, a DynamoDB table provides the lock:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="s2"&gt;"s3"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;bucket&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-terraform-state"&lt;/span&gt;
    &lt;span class="nx"&gt;key&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"day5/terraform.tfstate"&lt;/span&gt;
    &lt;span class="nx"&gt;region&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
    &lt;span class="nx"&gt;dynamodb_table&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-locks"&lt;/span&gt;
    &lt;span class="nx"&gt;encrypt&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
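&lt;p&gt;One detail the backend block hides: the bucket and the lock table must already exist before &lt;code&gt;terraform init&lt;/code&gt; can use them. A minimal bootstrap sketch (resource names are my own; the lock table's primary key must be a string attribute named LockID):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;resource "aws_s3_bucket" "tf_state" {
  bucket = "my-terraform-state"
}

# Versioning lets you recover earlier state revisions
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# The lock table: Terraform requires a string hash key named LockID
resource "aws_dynamodb_table" "tf_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;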



&lt;h2&gt;
  
  
  State Experiments
&lt;/h2&gt;

&lt;p&gt;Experiment 1 — Manual State Tampering&lt;br&gt;
I manually edited &lt;code&gt;terraform.tfstate&lt;/code&gt; and changed the Day tag value from "5" to "99", then ran &lt;code&gt;terraform plan&lt;/code&gt;.&lt;br&gt;
Terraform immediately detected the discrepancy. It compared the state file (which said Day=99) against the configuration code (which said Day=5) and proposed to update the tag back to 5.&lt;/p&gt;

&lt;p&gt;Key insight: Terraform always reconciles three things: your code, the state file, and real infrastructure. When state and code disagree, Terraform treats the code as the desired state and proposes changes to match it.&lt;br&gt;
After running &lt;code&gt;terraform apply&lt;/code&gt;, the tag was corrected and state was back in sync.&lt;/p&gt;

&lt;p&gt;Experiment 2 — Infrastructure Drift via AWS Console&lt;br&gt;
I manually changed the Day tag on a running EC2 instance directly in the AWS Console from 5 to MANUAL, without touching any Terraform code.&lt;br&gt;
Running &lt;code&gt;terraform plan&lt;/code&gt; detected the drift immediately. Even though the code and the state file both said Day=5, Terraform queried AWS directly and saw that the real value was MANUAL. It proposed to revert the tag back to 5.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bggtk6sngy48xlhmanu3.png" rel="noopener noreferrer"&gt;Screenshot: terraform plan detecting the manual tag change&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Key insight: Terraform does not rely solely on the state file — it also refreshes against real infrastructure on every plan. This is how it detects drift caused by manual changes outside of Terraform.&lt;br&gt;
Running &lt;code&gt;terraform apply&lt;/code&gt; corrected the drift automatically.&lt;/p&gt;
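&lt;p&gt;Reverting is not the only way to handle drift. If the manual change was intentional, a refresh-only run (available since Terraform 0.15.4) writes the real-world values into state instead of changing the infrastructure:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Preview what would be recorded, without touching infrastructure
terraform plan -refresh-only

# Accept the out-of-band changes into the state file
terraform apply -refresh-only
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;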
&lt;h2&gt;
  
  
  Errors I Hit and How I Fixed Them
&lt;/h2&gt;

&lt;p&gt;Error 1 — 502 Bad Gateway&lt;br&gt;
Cause: The EC2 instances had no web server running, so the ALB had no healthy targets to route traffic to.&lt;br&gt;
Fix: Added a User Data script to the Launch Template that starts a Python HTTP server on port 8080 on every instance boot.&lt;/p&gt;
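&lt;p&gt;The post does not include the exact script, so treat this as an illustrative sketch of the fix. Note that launch template user data must be Base64-encoded, which &lt;code&gt;base64encode()&lt;/code&gt; handles:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;resource "aws_launch_template" "web" {
  # ... AMI, instance type, security groups ...

  # Hypothetical boot script: start a simple HTTP server on port 8080
  user_data = base64encode(join("\n", [
    "#!/bin/bash",
    "mkdir -p /srv/www",
    "echo \"Hello from Terraform\" | tee /srv/www/index.html",
    "cd /srv/www",
    "nohup python3 -m http.server 8080 &amp;amp;",
  ]))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;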

&lt;p&gt;Error 2 — Instance type not supported in us-east-1e&lt;br&gt;
&lt;code&gt;Your requested instance type (t3.micro) is not supported in your requested Availability Zone (us-east-1e)&lt;/code&gt;&lt;br&gt;
Cause: The aws_subnets data source was fetching all subnets, including one in us-east-1e, which does not support t3.micro.&lt;br&gt;
Fix: Added an AZ filter to the subnets data source to explicitly exclude us-east-1e:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_subnets"&lt;/span&gt; &lt;span class="s2"&gt;"default"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"vpc-id"&lt;/span&gt;
    &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;default&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"availabilityZone"&lt;/span&gt;
    &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"us-east-1a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1b"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1c"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1d"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1f"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The ALB + ASG pattern is the foundation of every scalable AWS architecture&lt;/li&gt;
&lt;li&gt;Security groups should reference each other, not open 0.0.0.0/0 to everything&lt;/li&gt;
&lt;li&gt;Terraform state is the source of truth — understand it before you trust it&lt;/li&gt;
&lt;li&gt;Never commit terraform.tfstate to Git — use remote state with locking&lt;/li&gt;
&lt;li&gt;Drift happens — terraform plan is your best tool for detecting and fixing it&lt;/li&gt;
&lt;/ul&gt;
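&lt;p&gt;A practical way to enforce the "never commit state" takeaway is a .gitignore entry in every Terraform repository, for example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Local state and its backup
terraform.tfstate
terraform.tfstate.backup

# Downloaded providers and modules
.terraform/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;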

</description>
    </item>
    <item>
      <title>Day 4: Scaling to the Clouds – Building a Self-Healing Cluster on AWS with Terraform</title>
      <dc:creator>Udoh Deborah</dc:creator>
      <pubDate>Thu, 26 Mar 2026 19:12:17 +0000</pubDate>
      <link>https://dev.to/udoh_deborah_b1e484c474bf/day-4-scaling-to-the-clouds-building-a-self-healing-cluster-on-aws-with-terraform-2ag1</link>
      <guid>https://dev.to/udoh_deborah_b1e484c474bf/day-4-scaling-to-the-clouds-building-a-self-healing-cluster-on-aws-with-terraform-2ag1</guid>
      <description>&lt;p&gt;After successfully launching a single web server on Day 3, today was all about High Availability (HA). I moved away from the "Single Point of Failure" model and built a distributed system that can handle traffic spikes and hardware failures automatically.&lt;/p&gt;

&lt;p&gt;The Architecture: From One to Many&lt;br&gt;
Instead of one lonely EC2 instance, I deployed an Application Load Balancer (ALB) sitting in front of an Auto Scaling Group (ASG).&lt;/p&gt;

&lt;p&gt;The Stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Infrastructure as Code: Terraform&lt;/li&gt;
&lt;li&gt;Compute: AWS EC2 (t3.micro)&lt;/li&gt;
&lt;li&gt;Scaling: Auto Scaling Group (min: 2, max: 5)&lt;/li&gt;
&lt;li&gt;Networking: Application Load Balancer (ALB)&lt;/li&gt;
&lt;li&gt;Configuration: Launch Templates with Base64 User Data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Reality Check: Troubleshooting the "502 Bad Gateway"&lt;br&gt;
If you think Cloud Engineering is just writing code and hitting "Apply," Day 4 will humble you. I ran into several "final bosses" today:&lt;/p&gt;

&lt;p&gt;The Launch Template Trap: Unlike standard EC2 instances, Launch Templates require user data to be Base64 encoded. Without this, the bash script never runs, and the server never starts.&lt;/p&gt;
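&lt;p&gt;In Terraform, the safe pattern is to let the provider do the encoding. A sketch, assuming a launch template named web and a script file user_data.sh:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;resource "aws_launch_template" "web" {
  name_prefix   = "web-"
  instance_type = "t3.micro"

  # aws_instance encodes user_data for you; aws_launch_template does not,
  # so a plain string here means the script never runs.
  user_data = base64encode(file("user_data.sh"))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;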

&lt;p&gt;The Silent Firewall: I had my security groups open for inbound traffic, but I forgot the egress (outbound) rules. If the ALB can't "talk" to the instances to check their health, it marks them as unhealthy and gives you a 502 Bad Gateway.&lt;/p&gt;
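&lt;p&gt;The fix, sketched below with hypothetical group names alb and instance, is to have the two security groups reference each other instead of opening wide CIDR ranges:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;# ALB may reach the instances on the app port (egress)
resource "aws_security_group_rule" "alb_to_instances" {
  type                     = "egress"
  from_port                = 8080
  to_port                  = 8080
  protocol                 = "tcp"
  security_group_id        = aws_security_group.alb.id
  source_security_group_id = aws_security_group.instance.id
}

# Instances accept traffic only from the ALB (ingress)
resource "aws_security_group_rule" "instances_from_alb" {
  type                     = "ingress"
  from_port                = 8080
  to_port                  = 8080
  protocol                 = "tcp"
  security_group_id        = aws_security_group.instance.id
  source_security_group_id = aws_security_group.alb.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;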

&lt;p&gt;Target Group Health Checks: I learned that the "Health Check" is a literal conversation. If the timeout is too short, the instance gets killed before it even finishes booting. Relaxing the health check intervals was the key to stability.&lt;/p&gt;

&lt;p&gt;Key Takeaways&lt;br&gt;
DRY (Don't Repeat Yourself): Used Terraform variables to ensure the Load Balancer and the EC2 instances were always aligned on the same port (8080).&lt;/p&gt;
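&lt;p&gt;That alignment comes from a single variable that every port reference reads; a sketch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;variable "server_port" {
  description = "Port the web servers listen on"
  type        = number
  default     = 8080
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The target group's port, the instance security group's from_port/to_port, and the user data script can then all reference var.server_port, so they can never drift apart.&lt;/p&gt;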

&lt;p&gt;Self-Healing: I tested the ASG by manually terminating an instance, and watched as AWS automatically detected the loss and spun up a replacement.&lt;/p&gt;

&lt;p&gt;Decoupling: By using an ALB, my "users" only ever see one DNS name, even if the servers behind it are constantly changing.&lt;/p&gt;
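&lt;p&gt;Exposing that single entry point is a one-liner, assuming the load balancer resource is named aws_lb.web:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;output "alb_dns_name" {
  description = "Stable public entry point for the cluster"
  value       = aws_lb.web.dns_name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;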

&lt;p&gt;What’s Next?&lt;br&gt;
Day 4 was a masterclass in networking and state management. Tomorrow, I’ll be diving into Terraform State and how to manage these resources in a team environment without causing a "state-file war."&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Deploying Your First Server with Terraform: A Beginner’s Guide</title>
      <dc:creator>Udoh Deborah</dc:creator>
      <pubDate>Wed, 25 Mar 2026 20:31:09 +0000</pubDate>
      <link>https://dev.to/udoh_deborah_b1e484c474bf/deploying-your-first-server-with-terraform-a-beginners-guide-300</link>
      <guid>https://dev.to/udoh_deborah_b1e484c474bf/deploying-your-first-server-with-terraform-a-beginners-guide-300</guid>
      <description>&lt;p&gt;Deploying Your First Server with Terraform: A Beginner’s Guide&lt;br&gt;
Today, I stopped clicking around the console and started writing code.&lt;/p&gt;

&lt;p&gt;Welcome to Day 3 of my journey with the #30DayTerraformChallenge. Up until now, everything was conceptual. But today, the rubber hit the road: I successfully deployed a real, live virtual server (EC2) on AWS using only Terraform.&lt;/p&gt;

&lt;p&gt;It felt like magic. But like any good magic trick, there's logic, syntax, and a few failed attempts behind the scenes. In this post, I will walk you through exactly how I did it, what the code looks like, what the basic commands do, and the errors that almost stopped me.&lt;/p&gt;

&lt;p&gt;What We Are Building Today&lt;br&gt;
We aren't just creating a blank virtual machine. The goal of this task is to spin up a basic, accessible Web Server.&lt;/p&gt;

&lt;p&gt;To do that, our configuration file needs to describe three key components working together:&lt;/p&gt;

&lt;p&gt;The Environment (Provider): Tell Terraform to talk to AWS and which "room" (region) to work in.&lt;/p&gt;

&lt;p&gt;The Firewall (Security Group): Tell AWS to open Port 80, allowing regular web traffic (HTTP) from the internet to reach our server.&lt;/p&gt;

&lt;p&gt;The Server (EC2 Instance): This is the virtual machine itself. We will select an eligible "Free Tier" instance type (t2.micro) and add a small User Data script to automatically install Apache and launch a "Hello World" webpage.&lt;/p&gt;

&lt;p&gt;The Architecture (Visualized)&lt;br&gt;
This diagram shows exactly how my code relates to real AWS resources and how a user (like you!) connects to the web page over the internet.&lt;/p&gt;

&lt;p&gt;Internet Gateway: This is the entrance. Web traffic flows from the User's browser to our network.&lt;/p&gt;

&lt;p&gt;Security Group: This is the stateful firewall perimeter surrounding the server. My code opens an "Ingress" path for Port 80 (HTTP) to allow incoming requests.&lt;/p&gt;

&lt;p&gt;EC2 Instance: The core virtual server running Amazon Linux 2023. This is what we are deploying.&lt;/p&gt;
&lt;h2&gt;
  
  
  Deconstructing the Code
&lt;/h2&gt;

&lt;p&gt;This is the entire content of my main.tf file. When writing this, I avoided copy-pasting and typed it all out manually, which is crucial for building muscle memory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;##  1. THE PROVIDER BLOCK&lt;/span&gt;
&lt;span class="c1"&gt;# This tells Terraform that our "cloud platform of choice" is AWS and we want to deploy in N. Virginia (us-east-1). Think of this as the initial connection "handshake."&lt;/span&gt;

&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 5.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# 2. THE AMI DATA SOURCE&lt;/span&gt;
&lt;span class="c1"&gt;# I learned early on that AMI IDs from tutorials expire. This block automatically searches AWS to find the current, compatible, and FREE-TIER ELIGIBLE image for my region.&lt;/span&gt;

&lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_ami"&lt;/span&gt; &lt;span class="s2"&gt;"latest_amazon_linux"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;most_recent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;owners&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"amazon"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"name"&lt;/span&gt;
    &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"al2023-ami-202*-x86_64"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"virtualization-type"&lt;/span&gt;
    &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"hvm"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# 3. THE SECURITY GROUP RESOURCE&lt;/span&gt;
&lt;span class="c1"&gt;# This is our firewall. It defines what can come "in" (ingress) and what can go "out" (egress). &lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"web_sg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"day3-web-sg"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow HTTP inbound traffic on Port 80"&lt;/span&gt;

  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# Open to ALL IP addresses&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;egress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt; &lt;span class="c1"&gt;# Represents ALL Protocols (TCP, UDP, etc.)&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# 4. THE EC2 INSTANCE RESOURCE&lt;/span&gt;
&lt;span class="c1"&gt;# This is the star of the show. Note how we reference the outputs of other blocks rather than hardcoding.&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"web_server"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_ami&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;latest_amazon_linux&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="c1"&gt;# Links to the dynamic search result&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t2.micro"&lt;/span&gt;                      &lt;span class="c1"&gt;# Falls under AWS Free Tier&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_security_group_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web_sg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# Links the firewall to the server&lt;/span&gt;

  &lt;span class="c1"&gt;# User Data Script: Runs only once on the first boot&lt;/span&gt;
  &lt;span class="nx"&gt;user_data&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
              #!/bin/bash
              dnf update -y
              dnf install -y httpd
              systemctl start httpd
              systemctl enable httpd
              echo "&amp;lt;h1&amp;gt;Terraform Day 3: Server Live!&amp;lt;/h1&amp;gt;" &amp;gt; /var/www/html/index.html
&lt;/span&gt;&lt;span class="no"&gt;              EOF

&lt;/span&gt;  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Terraform-Day3-Server"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# 5. THE PUBLIC IP OUTPUT&lt;/span&gt;
&lt;span class="c1"&gt;# This prints the final Public IP address in my terminal so I don't have to log into the AWS Console to find it.&lt;/span&gt;

&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"public_ip"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web_server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_ip&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Running the Workflow
&lt;/h2&gt;

&lt;p&gt;Once my code was ready, I ran the foundational Terraform commands in my VS Code terminal.&lt;/p&gt;

&lt;p&gt;1.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(The Handshake)&lt;br&gt;
This initializes the directory. It reads your provider block, goes online, downloads the necessary AWS plugin (the provider package), and creates the .terraform directory.&lt;/p&gt;

&lt;p&gt;[IMAGE PLACEHOLDER: Insert screenshot of terraform init output here]&lt;/p&gt;

&lt;p&gt;2.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(The Dry Run)&lt;br&gt;
This is the most critical step. It compares your desired state (your code) with the current state (an empty AWS account) and tells you exactly what it will do. This is your insurance policy against accidental deletions. My plan confirmed: "Plan: 2 to add, 0 to change, 0 to destroy."&lt;/p&gt;

&lt;p&gt;3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform apply 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(The Provisioning)&lt;br&gt;
This executes the plan. After review, I typed yes and hit Enter. For about 30 seconds, I watched my terminal spin, creating the security group and then the server.&lt;/p&gt;

&lt;p&gt;[IMAGE PLACEHOLDER: Insert screenshot of "Apply complete!" with Public IP output here]&lt;/p&gt;

&lt;p&gt;Finally, the glorious green text: "Apply complete! Resources: 2 added, 0 changed, 0 destroyed."&lt;/p&gt;

&lt;p&gt;Success and Challenges (The Part You Came For)&lt;br&gt;
Confirmation:&lt;br&gt;
I copied the public_ip output from my terminal ([your IP here, e.g., 3.84.152.1]) and pasted it into my browser. Success! The page loaded instantly with my custom message.&lt;/p&gt;

&lt;p&gt;[IMAGE PLACEHOLDER: Insert screenshot of the webpage loading in your browser]&lt;/p&gt;

&lt;p&gt;Challenges:&lt;br&gt;
It wasn't all smooth sailing. I encountered three major errors that initially stopped me.&lt;/p&gt;

&lt;p&gt;Permission Error: My initial apply failed with UnauthorizedOperation.&lt;/p&gt;

&lt;p&gt;Fix: I forgot that my newly created IAM user had zero permissions. I went to the AWS IAM console and attached the AdministratorAccess policy directly to the user to allow the code to create resources.&lt;/p&gt;

&lt;p&gt;Environment Pathing Error: When I first tried to run terraform init, the terminal gave an error that the command wasn't recognized.&lt;/p&gt;

&lt;p&gt;Fix: My initial installation script had a small error. I navigated to my desktop and downloaded a pre-configured, corrected zip file. Once unzipped, I used the PowerShell command Set-Location to move into the correct terraform-day3 folder, where the executable was properly pathed.&lt;/p&gt;

&lt;p&gt;AMI Compatibility Error: My deployment repeatedly failed with the message InvalidParameterCombination. It stated the instance type was not eligible for Free Tier.&lt;/p&gt;

&lt;p&gt;Fix: I was using an AMI ID from a months-old tutorial. That specific image was deprecated and no longer supported the t2.micro. I resolved this definitively by adding the data "aws_ami" block to dynamically fetch the latest compatible Amazon Linux AMI. Lesson learned: Never hardcode AMI IDs.&lt;/p&gt;
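&lt;p&gt;For anyone who hit the same wall, the data source looks roughly like this; the name filter below targets Amazon Linux 2023 and is only illustrative, so adjust it to the image family you need:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}

# then reference it as: ami = data.aws_ami.amazon_linux.id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;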
&lt;h2&gt;
  
  
  The Cleanup
&lt;/h2&gt;

&lt;p&gt;To avoid any unexpected AWS charges, I ran the final command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After reviewing the plan (which confirmed "2 to destroy"), I typed yes. Within a minute, Terraform wiped the slate clean.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion and Day 4 Preview
&lt;/h2&gt;

&lt;p&gt;Infrastructure as Code feels like a superpower. Going from a blank text file to a live server without ever touching the console interface is incredibly powerful for consistency, speed, and recovery.&lt;/p&gt;

&lt;p&gt;Stay tuned for Day 4, where we will move past single resources and learn how to manage state files and collaborate effectively.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>terraform</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Day 2: Step-by-Step Guide to Setting Up Terraform, AWS CLI, and Your AWS Environment</title>
      <dc:creator>Udoh Deborah</dc:creator>
      <pubDate>Sat, 21 Mar 2026 19:25:09 +0000</pubDate>
      <link>https://dev.to/udoh_deborah_b1e484c474bf/day-2-setting-up-your-terraform-environment-5883</link>
      <guid>https://dev.to/udoh_deborah_b1e484c474bf/day-2-setting-up-your-terraform-environment-5883</guid>
      <description>&lt;h1&gt;
  
  
  Step-by-Step Guide to Setting Up Terraform, AWS CLI, and Your AWS Environment
&lt;/h1&gt;

&lt;p&gt;Getting started with Terraform can feel overwhelming at first, especially when you have to connect multiple tools like AWS, the CLI, and your local environment. But once your setup is done correctly, everything else becomes much easier.&lt;/p&gt;

&lt;p&gt;In this guide, I’ll walk through how I set up my environment step-by-step so I’m ready to start deploying real infrastructure using Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Set Up Your AWS Account
&lt;/h2&gt;

&lt;p&gt;If you don’t already have an AWS account, create one and make sure to secure it properly.&lt;/p&gt;

&lt;p&gt;The first thing I did after creating my account was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable &lt;strong&gt;Multi-Factor Authentication (MFA)&lt;/strong&gt; on the root account&lt;/li&gt;
&lt;li&gt;Set up a &lt;strong&gt;billing alert&lt;/strong&gt; to avoid unexpected charges&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is important because even small mistakes in cloud environments can lead to costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Create an IAM User for Terraform
&lt;/h2&gt;

&lt;p&gt;Instead of using the root account (which is not recommended), I created a dedicated &lt;strong&gt;IAM user&lt;/strong&gt; for Terraform.&lt;/p&gt;

&lt;p&gt;Key steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enabled &lt;strong&gt;programmatic access&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Attached appropriate permissions (for learning, I used broad access, but in production, least privilege is best)&lt;/li&gt;
&lt;li&gt;Saved the &lt;strong&gt;Access Key ID&lt;/strong&gt; and &lt;strong&gt;Secret Access Key&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This IAM user will be used by Terraform to interact with AWS securely.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Install and Configure AWS CLI
&lt;/h2&gt;

&lt;p&gt;Next, I installed the AWS CLI, which allows me to interact with AWS from the terminal.&lt;/p&gt;

&lt;p&gt;After installation, I configured it using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I entered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access Key&lt;/li&gt;
&lt;li&gt;Secret Key&lt;/li&gt;
&lt;li&gt;Default region (I chose &lt;code&gt;us-east-1&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Output format (json)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To confirm everything was working, I ran:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws sts get-caller-identity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This returned my AWS account details, which confirmed that authentication was successful.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Install Terraform
&lt;/h2&gt;

&lt;p&gt;I then installed Terraform on my machine.&lt;/p&gt;

&lt;p&gt;To confirm installation, I ran:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="nx"&gt;version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this worked, I knew Terraform was ready to use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Connect Terraform to AWS
&lt;/h2&gt;

&lt;p&gt;One thing I learned is that Terraform doesn’t need separate credentials if AWS CLI is already configured.&lt;/p&gt;

&lt;p&gt;Terraform automatically uses the credentials stored by the AWS CLI.&lt;/p&gt;

&lt;p&gt;This makes things much simpler because once your CLI is working, Terraform can immediately interact with AWS.&lt;/p&gt;
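&lt;p&gt;To illustrate, a provider block can stay this minimal, because the credentials come from the AWS CLI configuration (the region is just an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-east-1" # credentials are read from the AWS CLI config/credentials files
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;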

&lt;h2&gt;
  
  
  Step 6: Set Up Visual Studio Code
&lt;/h2&gt;

&lt;p&gt;To make development easier, I installed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HashiCorp Terraform extension&lt;/li&gt;
&lt;li&gt;AWS Toolkit extension&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These help with syntax highlighting, validation, and managing resources more efficiently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Issues and Fixes
&lt;/h2&gt;

&lt;p&gt;One issue I ran into was Terraform not recognizing AWS credentials.&lt;/p&gt;

&lt;p&gt;This was resolved by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Re-running &lt;code&gt;aws configure&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Making sure the correct IAM user credentials were used&lt;/li&gt;
&lt;li&gt;Verifying setup using &lt;code&gt;aws sts get-caller-identity&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This step is important because most Terraform errors at the beginning come from misconfigured credentials.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Always use an &lt;strong&gt;IAM user&lt;/strong&gt;, not the root account&lt;/li&gt;
&lt;li&gt;Verify your setup early using CLI commands&lt;/li&gt;
&lt;li&gt;Terraform depends on AWS CLI configuration for authentication&lt;/li&gt;
&lt;li&gt;A proper setup saves time and prevents errors later&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Day 2 was all about getting the foundation right. It might not feel as exciting as deploying infrastructure, but this setup is what makes everything else possible.&lt;/p&gt;

&lt;p&gt;Now that everything is configured and working, I’m ready to start building real infrastructure with Terraform.&lt;/p&gt;

</description>
      <category>terraform</category>
    </item>
    <item>
      <title>Day 1 – 30 Days Terraform Challenge</title>
      <dc:creator>Udoh Deborah</dc:creator>
      <pubDate>Mon, 16 Mar 2026 22:13:19 +0000</pubDate>
      <link>https://dev.to/udoh_deborah_b1e484c474bf/day-1-30-days-terraform-challenge-2fam</link>
      <guid>https://dev.to/udoh_deborah_b1e484c474bf/day-1-30-days-terraform-challenge-2fam</guid>
      <description>&lt;p&gt;What is Infrastructure as Code and Why It’s Transforming DevOps&lt;/p&gt;

&lt;p&gt;Modern software moves fast. Applications are deployed multiple times a day, teams work across different environments, and infrastructure needs to scale quickly. Managing servers and cloud resources manually simply cannot keep up with this pace. This is where Infrastructure as Code (IaC) comes in.&lt;/p&gt;

&lt;p&gt;Infrastructure as Code is the practice of managing and provisioning infrastructure using code instead of manual processes. Instead of logging into a cloud console and creating servers, networks, or databases one by one, engineers write configuration files that define what infrastructure should look like. These files can then be executed to automatically create and manage the resources.&lt;/p&gt;

&lt;p&gt;In simple terms, IaC allows infrastructure to be version-controlled, repeatable, and automated, just like application code.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem IaC Solves
&lt;/h2&gt;

&lt;p&gt;Before IaC became widely adopted, infrastructure was often created manually. Engineers would log into servers, configure environments step by step, and deploy resources through web dashboards.&lt;/p&gt;

&lt;p&gt;This approach created several problems:&lt;br&gt;
    • Inconsistency between environments (development, staging, production)&lt;br&gt;
    • Human errors during manual configuration&lt;br&gt;
    • Difficult troubleshooting because changes were not always tracked&lt;br&gt;
    • Slow deployments due to manual setup processes&lt;/p&gt;

&lt;p&gt;Infrastructure as Code solves these problems by allowing teams to define infrastructure in a structured, repeatable way. Once written, the same configuration can be used to create identical environments anywhere.&lt;/p&gt;

&lt;h2&gt;
  
  
  Declarative vs Imperative Infrastructure
&lt;/h2&gt;

&lt;p&gt;When working with Infrastructure as Code, there are two main approaches: imperative and declarative.&lt;/p&gt;

&lt;p&gt;An imperative approach requires you to write instructions step by step. You explicitly tell the system how to create each resource.&lt;/p&gt;

&lt;p&gt;A declarative approach, which Terraform uses, focuses on describing the desired final state. Instead of listing every step, you simply declare what resources should exist, and Terraform figures out how to create or modify them.&lt;/p&gt;

&lt;p&gt;For example, instead of saying:&lt;br&gt;
    • Create a server&lt;br&gt;
    • Configure networking&lt;br&gt;
    • Attach storage&lt;/p&gt;

&lt;p&gt;You define something like:&lt;/p&gt;

&lt;p&gt;“I want a server with this configuration.”&lt;/p&gt;

&lt;p&gt;Terraform then handles the process of building it.&lt;/p&gt;
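&lt;p&gt;As a quick sketch, the declarative version of that request is just a resource block; the AMI ID below is a placeholder, not a real image:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "example" {
  ami           = "ami-xxxxxxxx" # placeholder, use a real AMI for your region
  instance_type = "t2.micro"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;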

&lt;p&gt;This makes infrastructure management simpler, safer, and easier to maintain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Terraform is Worth Learning
&lt;/h2&gt;

&lt;p&gt;Terraform is one of the most popular Infrastructure as Code tools because it is cloud-agnostic and highly scalable.&lt;/p&gt;

&lt;p&gt;With Terraform, you can manage infrastructure across multiple platforms including AWS, Azure, Google Cloud, and many others using the same workflow.&lt;/p&gt;

&lt;p&gt;Some key advantages of Terraform include:&lt;br&gt;
    • Automation of infrastructure provisioning&lt;br&gt;
    • Consistent environments&lt;br&gt;
    • Version control through Git&lt;br&gt;
    • Infrastructure visibility through state management&lt;br&gt;
    • Ability to manage complex systems easily&lt;/p&gt;

&lt;p&gt;Terraform also uses a declarative configuration language called HCL (HashiCorp Configuration Language), which is relatively easy to read and write.&lt;/p&gt;

&lt;p&gt;Because of these benefits, Terraform has become a core tool in many DevOps and cloud engineering workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Goals for the 30-Day Terraform Challenge
&lt;/h2&gt;

&lt;p&gt;As someone actively building skills in cloud engineering and DevOps, I joined the 30-Day Terraform Challenge to deepen my practical understanding of Infrastructure as Code.&lt;/p&gt;

&lt;p&gt;My goals for this challenge are simple:&lt;br&gt;
    • Gain hands-on experience writing Terraform configurations&lt;br&gt;
    • Understand how infrastructure can be automated and scaled&lt;br&gt;
    • Learn best practices for managing cloud infrastructure&lt;br&gt;
    • Build real-world projects that strengthen my DevOps skills&lt;/p&gt;

&lt;p&gt;I believe the best way to learn cloud technologies is by building consistently and sharing the journey.&lt;/p&gt;

&lt;p&gt;This challenge is just the beginning.&lt;/p&gt;

&lt;p&gt;Over the next 30 days, I’ll be exploring Terraform deeper, creating infrastructure using code, and documenting what I learn along the way.&lt;/p&gt;

&lt;p&gt;If you’re also learning Terraform or DevOps, feel free to follow along.&lt;/p&gt;

&lt;p&gt;Let’s keep building.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>beginners</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Day 72 : Grafana</title>
      <dc:creator>Udoh Deborah</dc:creator>
      <pubDate>Sun, 26 Oct 2025 14:37:43 +0000</pubDate>
      <link>https://dev.to/udoh_deborah_b1e484c474bf/day-72-grafana-3j6n</link>
      <guid>https://dev.to/udoh_deborah_b1e484c474bf/day-72-grafana-3j6n</guid>
      <description>&lt;p&gt;Day 72 — Monitoring with Grafana&lt;/p&gt;

&lt;p&gt;1️⃣ What is Grafana?&lt;/p&gt;

&lt;p&gt;Grafana is an open-source monitoring and visualization platform that allows you to query, analyze, and display metrics from different data sources in real-time dashboards.&lt;br&gt;
It helps teams observe infrastructure, applications, and services through interactive charts and alerts.&lt;/p&gt;

&lt;p&gt;2️⃣ Why Grafana?&lt;/p&gt;

&lt;p&gt;You can’t manually monitor infrastructure 24/7 — Grafana does it smartly by providing:&lt;br&gt;
    • Centralized dashboards for multiple environments.&lt;br&gt;
    • Real-time metric visualization.&lt;br&gt;
    • Custom alerts that notify you when something’s wrong.&lt;br&gt;
    • Easy integration with tools like Prometheus, AWS CloudWatch, InfluxDB, and more.&lt;/p&gt;

&lt;p&gt;In short — Grafana turns raw monitoring data into insightful visuals and alerts.&lt;/p&gt;

&lt;p&gt;3️⃣ Features of Grafana&lt;br&gt;
    • Interactive Dashboards — Create panels, graphs, and charts with drag-and-drop.&lt;br&gt;
    • Alerting System — Send notifications via email, Slack, PagerDuty, etc.&lt;br&gt;
    • Multi-Source Support — Connect to more than 30 data sources like Prometheus, MySQL, Elasticsearch, AWS CloudWatch, and Loki.&lt;br&gt;
    • Plugins — Extend functionality with custom panels and data source plugins.&lt;br&gt;
    • User Management — Control who can view, edit, or administer dashboards.&lt;br&gt;
    • Annotations — Mark events on graphs to visualize changes (like deployments or outages).&lt;/p&gt;

&lt;p&gt;4️⃣ What Type of Monitoring Can Be Done via Grafana?&lt;br&gt;
    • Infrastructure Monitoring — Servers, CPU, Memory, Disk, Network.&lt;br&gt;
    • Application Monitoring — Microservices, APIs, latency, error rates.&lt;br&gt;
    • Cloud Monitoring — AWS, Azure, GCP metrics via native integrations.&lt;br&gt;
    • Container Monitoring — Kubernetes, Docker metrics through Prometheus.&lt;br&gt;
    • Business Metrics — Custom metrics like sales, traffic, user signups, etc.&lt;/p&gt;

&lt;p&gt;5️⃣ Databases That Work with Grafana&lt;/p&gt;

&lt;p&gt;Grafana connects to a wide range of data sources, including:&lt;br&gt;
    • Prometheus&lt;br&gt;
    • InfluxDB&lt;br&gt;
    • Graphite&lt;br&gt;
    • Elasticsearch&lt;br&gt;
    • AWS CloudWatch&lt;br&gt;
    • MySQL / PostgreSQL&lt;br&gt;
    • Loki (for logs)&lt;/p&gt;

&lt;p&gt;6️⃣ Metrics and Visualizations in Grafana&lt;br&gt;
    • Metrics → Quantitative data points collected from systems (e.g., CPU usage, requests per second, memory utilization).&lt;br&gt;
    • Visualizations → How you display those metrics (graphs, heatmaps, gauges, tables, etc.).&lt;/p&gt;

&lt;p&gt;Grafana turns metrics into actionable visualizations.&lt;/p&gt;

&lt;p&gt;7️⃣ Grafana vs Prometheus&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Feature&lt;/th&gt;&lt;th&gt;Grafana&lt;/th&gt;&lt;th&gt;Prometheus&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Purpose&lt;/td&gt;&lt;td&gt;Visualization &amp;amp; dashboarding&lt;/td&gt;&lt;td&gt;Data collection &amp;amp; storage&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Function&lt;/td&gt;&lt;td&gt;Displays metrics &amp;amp; alerts visually&lt;/td&gt;&lt;td&gt;Scrapes metrics from targets&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Data storage&lt;/td&gt;&lt;td&gt;No data storage (reads from sources)&lt;/td&gt;&lt;td&gt;Built-in time-series database&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Usage&lt;/td&gt;&lt;td&gt;Used for dashboards and alerts&lt;/td&gt;&lt;td&gt;Used for monitoring &amp;amp; metric scraping&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Integration&lt;/td&gt;&lt;td&gt;Uses Prometheus as a data source&lt;/td&gt;&lt;td&gt;Exposes data to Grafana&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;In essence — Prometheus collects data, while Grafana visualizes it.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>grafana</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Day 71 - Terraform Interview Questions</title>
      <dc:creator>Udoh Deborah</dc:creator>
      <pubDate>Fri, 24 Oct 2025 13:02:08 +0000</pubDate>
      <link>https://dev.to/udoh_deborah_b1e484c474bf/day-71-terraform-interview-questions-4c9o</link>
      <guid>https://dev.to/udoh_deborah_b1e484c474bf/day-71-terraform-interview-questions-4c9o</guid>
      <description>&lt;p&gt;Today’s focus is on preparing for Terraform-related interview questions. These are some of the common ones you might encounter and how to approach them with confidence.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;What is Terraform and how is it different from other IaC tools?&lt;br&gt;
Terraform is an open-source Infrastructure as Code tool by HashiCorp that allows you to define and provision infrastructure using a declarative configuration language. It differs from other tools because it’s cloud-agnostic, maintains a state file, uses a declarative syntax, and supports immutable infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How do you call a main.tf module?&lt;br&gt;
By using a module block that references the module path or source. For example:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "ec2_instance" {
  source = "./modules/ec2"
  instance_type = "t2.micro"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;What is Sentinel and where can it be used?
Sentinel is HashiCorp’s policy-as-code framework used to enforce compliance and governance. It helps define rules before Terraform applies changes, such as enforcing tagging standards, ensuring encryption, or restricting certain resource types.&lt;/li&gt;
&lt;li&gt;How to create multiple instances of the same resource?
You can use the count or for_each meta-arguments. For example:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "web" {
  count = 3
  ami   = "ami-0abcd"
  instance_type = "t2.micro"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;How to enable debug messages to find provider loading paths?
Set the environment variable TF_LOG=TRACE. This enables detailed debug logs showing how Terraform loads providers and modules.&lt;/li&gt;
&lt;li&gt;How to exclude a specific resource during destroy?
Use the -target flag to specify what to destroy or preserve. Alternatively, comment out the resource you wish to keep before running terraform destroy.&lt;/li&gt;
&lt;li&gt;Which module stores the .tfstate file in S3?
The S3 backend is used for remote state storage.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "global/s3/terraform.tfstate"
    region = "us-east-1"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="8"&gt;
&lt;li&gt;How to manage sensitive data like API keys or passwords?
Use sensitive variables, external secret managers, or environment variables. Avoid committing sensitive data to version control.&lt;/li&gt;
&lt;li&gt;How to provision an S3 bucket and a user with read/write access?
Use the aws_s3_bucket, aws_iam_user, aws_iam_policy, and aws_iam_policy_attachment resources to configure access permissions for the user.&lt;/li&gt;
&lt;li&gt;Who maintains Terraform providers?
Terraform providers are maintained by HashiCorp, cloud vendors, or the community. They are available through the Terraform Registry.&lt;/li&gt;
&lt;li&gt;How to export data from one module to another?
Use outputs from one module and pass them as input variables to another.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "vpc_id" {
  value = aws_vpc.main.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then reference it as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vpc_id = module.vpc.vpc_id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Day 71 reinforces how Terraform’s strength lies not only in writing configurations but also in understanding its internal workings, governance features, and modular structure. Knowing these concepts helps you build scalable, secure, and maintainable infrastructure systems.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Day 70 - Terraform Modules</title>
      <dc:creator>Udoh Deborah</dc:creator>
      <pubDate>Fri, 24 Oct 2025 12:55:00 +0000</pubDate>
      <link>https://dev.to/udoh_deborah_b1e484c474bf/day-70-terraform-modules-2k53</link>
      <guid>https://dev.to/udoh_deborah_b1e484c474bf/day-70-terraform-modules-2k53</guid>
      <description>&lt;p&gt;modules are where Terraform starts to feel like real engineering. Below is a practical, step-by-step workflow follow to design, build, test and publish reusable Terraform modules.&lt;/p&gt;

&lt;p&gt;1) Decide module responsibility&lt;br&gt;
    1.  Pick a single, focused purpose (networking, ec2, rds, alb, etc.).&lt;br&gt;
    2.  Keep modules small and opinionated enough to enforce best practices, but configurable via variables.&lt;/p&gt;

&lt;p&gt;Goal: one module = one responsibility.&lt;/p&gt;

&lt;p&gt;2) Create module directory structure&lt;/p&gt;

&lt;p&gt;Create a folder for the module and a small, consistent layout:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;modules/
  my-server/
    README.md
    main.tf
    variables.tf
    outputs.tf
    examples/
      simple/
        main.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;examples/simple is for a runnable example that shows how to consume the module.&lt;/p&gt;

&lt;p&gt;3) Write module files (what to include)&lt;br&gt;
    • main.tf: resource definitions (no hardcoded values).&lt;br&gt;
    • variables.tf: declare all inputs, provide clear descriptions and sensible defaults where appropriate. Mark sensitive variables if needed.&lt;br&gt;
    • outputs.tf: expose IDs/ARNs/important values consumers will need.&lt;br&gt;
    • README.md: usage, inputs, outputs, example, constraints and notes.&lt;/p&gt;

&lt;p&gt;Keep the module idempotent and avoid side effects.&lt;/p&gt;

&lt;p&gt;4) Make the module configurable (variables best practices)&lt;br&gt;
    • Give descriptive names and a description for every variable.&lt;br&gt;
    • Use types (string, number, list(string), map(string), object({...})) for clarity.&lt;br&gt;
    • Provide reasonable defaults for optional values; require explicit values for critical ones (e.g., vpc_id).&lt;/p&gt;

&lt;p&gt;Example pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "instance_type" {
  type    = string
  default = "t3.micro"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;5) Expose useful outputs&lt;/p&gt;

&lt;p&gt;Only output what consumers need: resource IDs, ARNs, connection info. Keep outputs minimal and stable. Example: instance_id, subnet_id, security_group_id.&lt;/p&gt;
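&lt;p&gt;A minimal output following that guidance might look like this (the resource address aws_instance.this is assumed; adjust it to whatever your module actually creates):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "instance_id" {
  description = "ID of the instance created by this module"
  value       = aws_instance.this.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;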

&lt;p&gt;6) Add an example (and test locally)&lt;/p&gt;

&lt;p&gt;Create modules/my-server/examples/simple/main.tf that calls your module with realistic variable values. This is how people (and CI) will sanity-check the module.&lt;/p&gt;

&lt;p&gt;Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd modules/my-server/examples/simple
terraform init
terraform validate
terraform plan -out plan.tfplan
terraform apply plan.tfplan
# check resources, then:
terraform destroy -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;7) Format and validate&lt;/p&gt;

&lt;p&gt;Before committing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform fmt -recursive
terraform validate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install and run terraform-docs to auto-generate README sections:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform-docs md . &amp;gt; README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Keep a human-written intro + generated inputs/outputs.)&lt;/p&gt;

&lt;p&gt;8) Version control and collaboration&lt;br&gt;
    • Keep modules in Git (e.g., git repo with modules/ or individual repos per module).&lt;br&gt;
    • Use semantic versioning for published modules (v1.0.0).&lt;br&gt;
    • Tag releases (git tag v1.0.0 and push tags) so consumers can reference git::https://...//?ref=v1.0.0.&lt;/p&gt;
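&lt;p&gt;Tagging and publishing a release is only two commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git tag v1.0.0
git push origin v1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;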

&lt;p&gt;9) CI: run plan &amp;amp; tests on PR&lt;/p&gt;

&lt;p&gt;Set up CI that:&lt;br&gt;
    1.  Runs terraform init and terraform validate for module and examples.&lt;br&gt;
    2.  Runs terraform fmt -check.&lt;br&gt;
    3.  Optionally runs integration tests:&lt;br&gt;
    • Terratest (Go) for real cloud checks (recommended for modules that create real infra).&lt;br&gt;
    • Or lightweight smoke tests: terraform apply against a short-lived test workspace and terraform destroy.&lt;/p&gt;

&lt;p&gt;Keep credentials secure (CI secrets, temporary accounts).&lt;/p&gt;

&lt;p&gt;10) Documentation &amp;amp; discoverability&lt;br&gt;
    • Write a clear README: purpose, usage, inputs, outputs, example, constraints.&lt;br&gt;
    • Add use-cases and recommended defaults.&lt;br&gt;
    • Add CHANGELOG.md for breaking changes.&lt;/p&gt;

&lt;p&gt;11) Publishing and consumption&lt;br&gt;
    • Consume locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "server" {
  source = "../modules/my-server"
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• Consume from Git:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "server" {
  source = "git::https://github.com/you/terraform-modules.git//my-server?ref=v1.0.0"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• Publish to Terraform Registry (public or private) if you want organization-wide reuse.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;12) Maintenance &amp;amp; versioning policy&lt;br&gt;
    • Keep modules backward compatible when possible.&lt;br&gt;
    • For breaking changes bump major version; provide migration notes.&lt;br&gt;
    • Use terraform state mv guidance in docs if consumers need to migrate resources after structural changes.&lt;/p&gt;
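&lt;p&gt;For example, if a resource moves into a module between releases, consumers can realign their state without destroying anything; the addresses here are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform state mv 'aws_instance.web' 'module.server.aws_instance.web'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;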

&lt;p&gt;13) Testing checklist before release&lt;br&gt;
    • terraform fmt passed&lt;br&gt;
    • terraform validate passed&lt;br&gt;
    • Example apply/destroy succeeded&lt;br&gt;
    • README autogenerated/updated (terraform-docs)&lt;br&gt;
    • CI checks green&lt;br&gt;
    • Module documented with inputs/outputs and examples&lt;br&gt;
    • Release tag created&lt;/p&gt;

&lt;p&gt;14) Example workflow: make a network module then use it&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Implement modules/network with VPC/subnets/route tables.&lt;/li&gt;
&lt;li&gt;Add example for modules/network/examples/simple and verify apply/destroy.&lt;/li&gt;
&lt;li&gt;In root project main.tf reference module via source = "../modules/network".&lt;/li&gt;
&lt;li&gt;Run root terraform init → plan → apply.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Day 69 : Meta Arguments in Terraform</title>
      <dc:creator>Udoh Deborah</dc:creator>
      <pubDate>Fri, 24 Oct 2025 12:31:51 +0000</pubDate>
      <link>https://dev.to/udoh_deborah_b1e484c474bf/day-69-meta-arguments-in-terraform-4ead</link>
      <guid>https://dev.to/udoh_deborah_b1e484c474bf/day-69-meta-arguments-in-terraform-4ead</guid>
      <description>&lt;p&gt;A. two ready-to-run Terraform examples (one count demo, one for_each demo), &lt;/p&gt;

&lt;p&gt;B. step-by-step instructions to run and inspect results, &lt;/p&gt;

&lt;p&gt;C.  a clear explanation of meta-arguments and best practices, and (D) cleanup &amp;amp; cost warnings. Replace  (AMI, region, key name, etc.) before running.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Quick safety note: the AWS EC2 examples will create real instances that can incur cost. If you only want to experiment without charges, use the local_file or null_resource variants (I include a lightweight local demo below).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A — Example 1 — count (create N identical resources)&lt;/p&gt;

&lt;p&gt;Purpose: create N identical resources (same config). Use count when you want a number of identical copies.&lt;/p&gt;

&lt;p&gt;Create folder day69-count/ and files:&lt;/p&gt;

&lt;p&gt;main.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.0"
    }
  }
}
provider "aws" {
  region = var.region
}

resource "aws_instance" "server" {
  count         = var.instance_count
  ami           = var.instance_ami
  instance_type = var.instance_type
  key_name      = var.key_name

  tags = {
    Name = "server-${count.index}"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;variables.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "region" {
  type    = string
  default = "us-east-1"
}

variable "instance_count" {
  type    = number
  default = 2
}

variable "instance_ami" {
  type    = string
  default = "ami-08c40ec9ead489470" # replace for your region
}

variable "instance_type" {
  type    = string
  default = "t2.micro"
}

variable "key_name" {
  type = string
  default = "&amp;lt;YOUR_KEY_PAIR&amp;gt;"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;outputs.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "instance_ids" {
  value = aws_instance.server[*].id
}
output "instance_public_ips" {
  value = aws_instance.server[*].public_ip
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform plan -out plan.tfplan
terraform apply "plan.tfplan"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inspect addresses / outputs:&lt;br&gt;
• Resources created are referenced as aws_instance.server[0], aws_instance.server[1], etc.&lt;br&gt;
• count.index inside the block gives the zero-based index.&lt;/p&gt;

&lt;p&gt;Change count:&lt;br&gt;
• Update var.instance_count = 4 (or pass -var="instance_count=4"), run terraform apply — Terraform will create additional instances for indices 2 and 3.&lt;/p&gt;
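&lt;p&gt;Passing the override on the command line looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -var="instance_count=4"
terraform apply -var="instance_count=4"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;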

&lt;p&gt;B — Example 2 — for_each (create resources with distinct values)&lt;/p&gt;

&lt;p&gt;Purpose: multiple similar resources which have different attributes (AMIs, names, or other per-item configuration). Use for_each with a map or set-of-strings so each instance is keyed and stable.&lt;/p&gt;

&lt;p&gt;Create folder day69-foreach/ and files:&lt;/p&gt;

&lt;p&gt;main.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = { source = "hashicorp/aws", version = "~&amp;gt; 5.0" }
  }
}
provider "aws" {
  region = var.region
}

# map of named instances -&amp;gt; ami
variable "servers_map" {
  type = map(string)
  default = {
    "linux"  = "ami-0b0dcb5067f052a63"
    "ubuntu" = "ami-08c40ec9ead489470"
  }
}

resource "aws_instance" "server" {
  for_each      = var.servers_map
  ami           = each.value
  instance_type = var.instance_type
  key_name      = var.key_name

  tags = {
    Name = "server-${each.key}"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;variables.tf&lt;/p&gt;

&lt;p&gt;variable "region" {&lt;br&gt;
  type    = string&lt;br&gt;
  default = "us-east-1"&lt;br&gt;
}&lt;br&gt;
variable "instance_type" {&lt;br&gt;
  type    = string&lt;br&gt;
  default = "t2.micro"&lt;br&gt;
}&lt;br&gt;
variable "key_name" {&lt;br&gt;
  type = string&lt;br&gt;
  default = ""&lt;br&gt;
}&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;outputs.tf&lt;/p&gt;

&lt;p&gt;output "server_public_ips" {&lt;br&gt;
  value = { for k, r in aws_instance.server : k =&amp;gt; r.public_ip }&lt;br&gt;
}&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Run: same terraform init &amp;amp;&amp;amp; terraform apply.

Inspect addresses / outputs:
    • Resources are referenced by key: e.g. aws_instance.server["linux"], aws_instance.server["ubuntu"].
    • each.key and each.value available inside the block.

Change for_each map:
    • Add or remove keys in servers_map then terraform apply. Terraform will create or destroy only the specific keyed resources — stable mapping reduces accidental replacements.



C — Lightweight demo (no AWS charges) — use local_file and null_resource

If you want to learn semantics without creating cloud resources, try this local demo:

Folder day69-local/:

main.tf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;resource "local_file" "count_files" {&lt;br&gt;
  count    = 3&lt;br&gt;
  filename = "${path.module}/count_file_${count.index}.txt"&lt;br&gt;
  content  = "file created with count index ${count.index}"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;locals {&lt;br&gt;
  items = {&lt;br&gt;
    "one" = "first"&lt;br&gt;
    "two" = "second"&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "local_file" "foreach_files" {&lt;br&gt;
  for_each = local.items&lt;br&gt;
  filename = "${path.module}/foreach_${each.key}.txt"&lt;br&gt;
  content  = "for_each created ${each.key} =&amp;gt; ${each.value}"&lt;br&gt;
}&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Run terraform init &amp;amp;&amp;amp; terraform apply — you’ll get files in your folder; no cloud cost.



D — Meta-arguments: what they are &amp;amp; how to use them

Meta-arguments are special arguments accepted by resource/module blocks that control Terraform language behavior. Common meta-arguments include:
• count — number of instances (integer). Addresses use numeric indices: resource.type.name[index].
• for_each — iterate over a map or set; produces one instance per key. Addresses use keys: resource.type.name["key"].
• depends_on — explicit dependency list (useful in rare cases).
• provider — select which provider configuration to use.
• lifecycle — control create_before_destroy, prevent_destroy, ignore_changes.
• provisioner — (discouraged for most cases) run local/remote commands.

Count vs For_each — when to use which
• Use count:
• When you just need N identical copies.
• When order/indices are fine and attributes are identical.
• Example: count = 4 to create 4 identical web servers.
• Use for_each:
• When each instance needs unique attributes (different AMIs, names, sizes).
• When you want stable resource identity across changes (keys are stable).
• Use a map if you need key→value pairs (each.key and each.value).
• Use a set of strings if only names/IDs are needed.

Addressing differences
• count → aws_instance.foo[0], aws_instance.foo[1]
• for_each → aws_instance.foo["linux"], aws_instance.foo["ubuntu"]

State &amp;amp; lifecycle implications
• Changing count may cause Terraform to destroy the highest indexed resources if you reduce the number — which can be destructive.
• for_each keyed resources are more stable when adding/removing different keys.
• If you convert from count → for_each or vice versa, Terraform will typically want to recreate resources (state address changes). Use terraform state mv to migrate state safely if needed.

Tips &amp;amp; best practices
• Prefer for_each (map) for heterogeneous resources or when identity matters.
• Use toset() or tolist() conversions to ensure predictable input types.
• Avoid dynamic index assumptions; don’t depend on count.index to persist a machine’s identity over time.
• Keep resource blocks small and idempotent.
• Use lifecycle { create_before_destroy = true } when replacing resources that affect availability (but be careful).
• For large fleets, consider modules to group repeated logic and pass counts/for_each into module blocks.



E — Example: multiple key/value iteration (map with structured attributes)

If instances need multiple attributes:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;variable "servers" {&lt;br&gt;
  default = {&lt;br&gt;
    web1 = { ami = "ami-aaa", instance_type = "t3.micro" }&lt;br&gt;
    web2 = { ami = "ami-bbb", instance_type = "t2.micro" }&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_instance" "web" {&lt;br&gt;
  for_each = var.servers&lt;br&gt;
  ami      = each.value.ami&lt;br&gt;
  instance_type = each.value.instance_type&lt;br&gt;
  tags = { Name = each.key }&lt;br&gt;
}&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


This is powerful — each.key becomes your stable identifier.



F — How to demonstrate for an interview (suggested steps)
1. Show the count example: instance_count = 2 apply, then change to 4 and apply — show terraform plan then apply.
2. Show the for_each example: add a new map key and terraform apply — point out only that key’s resource is created.
3. Show terraform state list and how addresses look (aws_instance.server[0] vs aws_instance.server["linux"]).
4. Explain migration: demonstrate terraform state mv if you rename keys or move from count→for_each.
5. Discuss real-world choice: use for_each for stable identity (e.g., DB replicas with different roles), count for identical worker nodes when identity is irrelevant (but note the stability issues).



G — Cleanup &amp;amp; cost control
• For AWS examples: terraform destroy -auto-approve to remove resources.
• Always confirm what will be destroyed with terraform plan -destroy.
• Use small instance types (e.g., t3.micro) and a single AZ for demos, or use local_file demo to avoid charges.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
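&lt;p&gt;A count-to-for_each migration (step 4 above) can be sketched with terraform state mv; the resource addresses here are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# move existing state entries so Terraform does not destroy and recreate them
terraform state mv 'aws_instance.server[0]' 'aws_instance.server["linux"]'
terraform state mv 'aws_instance.server[1]' 'aws_instance.server["ubuntu"]'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;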

</description>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Day 68 — Scaling with Terraform</title>
      <dc:creator>Udoh Deborah</dc:creator>
      <pubDate>Mon, 20 Oct 2025 16:55:44 +0000</pubDate>
      <link>https://dev.to/udoh_deborah_b1e484c474bf/day-68-scaling-with-terraform-4ig1</link>
      <guid>https://dev.to/udoh_deborah_b1e484c474bf/day-68-scaling-with-terraform-4ig1</guid>
      <description>&lt;p&gt;Important: replace every  with your actual values (region, AMI, key pair name, VPC/subnet IDs). The AMI in your example may be region-specific — verify it for your region.&lt;/p&gt;

&lt;p&gt;Project layout&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;day68-autoscaling/
  ├─ main.tf
  ├─ variables.tf
  ├─ outputs.tf
  └─ terraform.tfvars   (optional)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variables.tf

variable "region" {
  type    = string
  default = "us-east-1"
}

variable "ami" {
  type    = string
  default = "ami-005f9685cb30f234b" # replace if not available in your region
}

variable "instance_type" {
  type    = string
  default = "t2.micro"
}

variable "key_name" {
  type    = string
  default = "&amp;lt;YOUR_KEY_PAIR_NAME&amp;gt;"
}

variable "vpc_id" {
  type = string
  default = "&amp;lt;YOUR_VPC_ID&amp;gt;"
}

variable "public_subnet_ids" {
  type = list(string)
  default = ["&amp;lt;PUBLIC_SUBNET_ID_1&amp;gt;", "&amp;lt;PUBLIC_SUBNET_ID_2&amp;gt;"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;main.tf&lt;/p&gt;

&lt;p&gt;This creates a security group (SSH &amp;amp; HTTP), a Classic ELB (optional, but included because the original example referenced a load balancer), a launch configuration, and an Auto Scaling Group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {

  required_providers {
    aws = { source = "hashicorp/aws", version = "~&amp;gt; 5.0" }
  }
}

provider "aws" {
  region = var.region
}

# --- Security group for web instances ---
resource "aws_security_group" "web_server" {
  name        = "day68-web-sg"
  description = "Allow SSH and HTTP"
  vpc_id      = var.vpc_id

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]    # For demo only. Restrict in production.
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = { Name = "day68-web-sg" }
}

# --- Optional Classic ELB to attach to ASG (example) ---
resource "aws_elb" "web_server_lb" {
  name               = "day68-web-elb"
  subnets            = var.public_subnet_ids
  security_groups    = [aws_security_group.web_server.id]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:80/"
    interval            = 30
  }

  tags = { Name = "day68-web-elb" }
}

# --- Launch configuration (legacy) ---
resource "aws_launch_configuration" "web_server_lc" {
  name_prefix   = "day68-lc-"
  image_id      = var.ami
  instance_type = var.instance_type
  key_name      = var.key_name
  security_groups = [aws_security_group.web_server.id]

  user_data = &amp;lt;&amp;lt;-EOF
              #!/bin/bash
              # simple web page bootstrap
              if command -v yum &amp;gt;/dev/null 2&amp;gt;&amp;amp;1; then
                yum update -y
                yum install -y httpd
                systemctl enable httpd
                systemctl start httpd
                echo "&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;You're doing really Great&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;" &amp;gt; /var/www/html/index.html
              elif command -v apt-get &amp;gt;/dev/null 2&amp;gt;&amp;amp;1; then
                apt-get update -y
                apt-get install -y apache2
                systemctl enable apache2
                systemctl start apache2
                echo "&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;You're doing really Great&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;" &amp;gt; /var/www/html/index.html
              fi
              EOF

  lifecycle {
    create_before_destroy = true
  }
}

# --- Auto Scaling Group ---
resource "aws_autoscaling_group" "web_server_asg" {
  name                      = "web-server-asg"
  launch_configuration      = aws_launch_configuration.web_server_lc.name
  min_size                  = 1
  max_size                  = 3
  desired_capacity          = 2
  vpc_zone_identifier       = var.public_subnet_ids
  health_check_type         = "EC2"
  health_check_grace_period = 120

  # attach ELB (classic) created above
  load_balancers = [aws_elb.web_server_lb.name]

  tag {
    key                 = "Name"
    value               = "day68-web"
    propagate_at_launch = true
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;outputs.tf

output "asg_name" {
  value = aws_autoscaling_group.web_server_asg.name
}

output "elb_dns_name" {
  value = aws_elb.web_server_lb.dns_name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it (commands)&lt;br&gt;
A. Initialize:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd day68-autoscaling
terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;B. Validate &amp;amp; plan:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform validate
terraform plan -out plan.tfplan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;C. Apply:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply "plan.tfplan"
# or: terraform apply -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform will create the ELB, Launch Configuration, and ASG which launches instances in the two public subnets.&lt;/p&gt;

&lt;p&gt;Test scaling (console + CLI)&lt;/p&gt;

&lt;p&gt;Console&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to EC2 → Auto Scaling Groups and select web-server-asg.&lt;/li&gt;
&lt;li&gt;Click Edit, change Desired capacity to 3, then Save.&lt;/li&gt;
&lt;li&gt;Wait a few minutes for instances to launch.&lt;/li&gt;
&lt;li&gt;Verify in EC2 → Instances.&lt;/li&gt;
&lt;li&gt;Change desired capacity back to 1 to scale down and confirm termination.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;AWS CLI (optional)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# scale up
aws autoscaling set-desired-capacity --auto-scaling-group-name web-server-asg --desired-capacity 3 --honor-cooldown


# scale down
aws autoscaling set-desired-capacity --auto-scaling-group-name web-server-asg --desired-capacity 1 --honor-cooldown
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify web page&lt;br&gt;
    • If you used ELB, open the ELB DNS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://$(terraform output -raw elb_dns_name)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see “You’re doing really Great”.&lt;/p&gt;

&lt;p&gt;• Or open the public IP of any instance.&lt;/p&gt;

&lt;p&gt;Cleanup&lt;/p&gt;

&lt;p&gt;To remove resources and stop charges:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Troubleshooting tips&lt;br&gt;
• ASG not launching instances: check subnet IDs, IAM permissions, and AMI availability in region.&lt;br&gt;
• User data script failed: inspect instance system logs in EC2 console or SSH and check /var/log/cloud-init-output.log.&lt;br&gt;
• Health check failures: increase health_check_grace_period to allow setup time.&lt;br&gt;
• SSH exposed to 0.0.0.0/0: change ingress to your IP only for security.&lt;/p&gt;

&lt;p&gt;Notes &amp;amp; best practice recommendations&lt;br&gt;
• aws_launch_configuration is legacy. Prefer aws_launch_template for new projects — launch templates are more flexible (and needed for many modern features). I can provide a launch_template + ASG + ALB example if you want.&lt;br&gt;
• For HTTP scaling, prefer an ALB (Application Load Balancer) + Target Group instead of Classic ELB. ALB integrates with target tracking and metrics better.&lt;br&gt;
• In production, restrict SSH to your IP or use SSM Session Manager instead of opening port 22.&lt;br&gt;
• For autoscaling based on metrics, add scaling policies (target tracking or step policies) to auto-adjust based on CPU, request counts, etc.&lt;/p&gt;
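&lt;p&gt;For reference, a minimal launch-template sketch (placeholders only, not applied above; user_data.sh stands in for the bootstrap script from the launch configuration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_launch_template" "web" {
  name_prefix            = "day68-lt-"
  image_id               = var.ami
  instance_type          = var.instance_type
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.web_server.id]

  # launch templates expect user data base64-encoded
  user_data = base64encode(file("user_data.sh"))
}

resource "aws_autoscaling_group" "web_server_asg_lt" {
  min_size            = 1
  max_size            = 3
  desired_capacity    = 2
  vpc_zone_identifier = var.public_subnet_ids

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;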

</description>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Day 67 : AWS S3 Bucket creation and management using Terraform</title>
      <dc:creator>Udoh Deborah</dc:creator>
      <pubDate>Fri, 17 Oct 2025 14:54:51 +0000</pubDate>
      <link>https://dev.to/udoh_deborah_b1e484c474bf/day-67-aws-s3-bucket-creation-and-management-using-terraform-1aop</link>
      <guid>https://dev.to/udoh_deborah_b1e484c474bf/day-67-aws-s3-bucket-creation-and-management-using-terraform-1aop</guid>
      <description>&lt;p&gt;Before you start (checks)&lt;br&gt;
    • Have AWS CLI configured (aws configure) or set AWS_* env vars.&lt;br&gt;
    • Terraform installed (terraform -v).&lt;br&gt;
    • Recommended: use an S3 remote backend for state in real projects (not required for testing).&lt;/p&gt;

&lt;p&gt;1) Create Terraform files (example structure)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;day67-s3/
  ├─ main.tf
  ├─ variables.tf
  ├─ outputs.tf
  └─ terraform.tfvars   # optional: put bucket_name, region etc here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;2) variables.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "region" {
  type    = string
  default = "us-east-1"
}

variable "bucket_name" {
  type = string
  # bucket names must be globally unique
  default = "my-unique-day67-bucket-&amp;lt;your-unique-suffix&amp;gt;"
}

variable "read_only_principal_arn" {
  description = "ARN of IAM user or role to grant read-only access"
  type        = string
  default     = "arn:aws:iam::&amp;lt;ACCOUNT_ID&amp;gt;:user/&amp;lt;USERNAME&amp;gt;"  # replace
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) main.tf — bucket, versioning, policy, (optional) public access block&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = var.region
}

# S3 bucket
resource "aws_s3_bucket" "site" {
  bucket = var.bucket_name

  # If you plan to host a static website, uncomment the website block and adjust
  # website {
  #   index_document = "index.html"
  # }
}

# IMPORTANT: Public access via ACL/policy also requires that Block Public
# Access allows it. Account-level settings can still override this; here we
# manage it at the bucket level:
resource "aws_s3_bucket_public_access_block" "no_block" {
  bucket = aws_s3_bucket.site.id

  # To allow public access, set all to false. BE CAREFUL in production.
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}

# Enable versioning
resource "aws_s3_bucket_versioning" "v" {
  bucket = aws_s3_bucket.site.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Make bucket objects public-read by default using an ACL on the bucket (optional)
# Note: Many modern setups prefer a bucket policy only. New buckets have ACLs
# disabled by default (Object Ownership = BucketOwnerEnforced), so using an ACL
# also requires an aws_s3_bucket_ownership_controls resource that permits ACLs,
# and the public access block above must allow it.
resource "aws_s3_bucket_acl" "acl" {
  bucket = aws_s3_bucket.site.id
  acl    = "public-read"
}

# Bucket policy to allow public GetObject (if you want fully public)
data "aws_iam_policy_document" "public_read" {
  statement {
    sid     = "AllowPublicGetObject"
    effect  = "Allow"
    principals {
      type        = "AWS"
      identifiers = ["*"]
    }
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.site.arn}/*"]
  }
}

resource "aws_s3_bucket_policy" "public_policy" {
  bucket = aws_s3_bucket.site.id
  policy = data.aws_iam_policy_document.public_read.json
}

# Bucket policy granting read-only to a specific IAM user/role
data "aws_iam_policy_document" "read_only_for_principal" {
  statement {
    sid = "AllowReadForSpecificPrincipal"
    effect = "Allow"
    principals {
      type        = "AWS"
      identifiers = [var.read_only_principal_arn]
    }
    actions = [
      "s3:GetObject",
      "s3:ListBucket"
    ]
    # ListBucket must reference bucket arn, GetObject references objects
    resources = [
      aws_s3_bucket.site.arn,
      "${aws_s3_bucket.site.arn}/*"
    ]
  }
}

# NOTE: A bucket has only ONE bucket policy. Declaring this alongside
# aws_s3_bucket_policy.public_policy above means each apply overwrites the
# other; keep only one of the two (see the notes below).
resource "aws_s3_bucket_policy" "principal_policy" {
  bucket = aws_s3_bucket.site.id
  policy = data.aws_iam_policy_document.read_only_for_principal.json
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notes&lt;br&gt;
    • The configuration above defines both a public policy (everyone) and a principal-specific policy, but a bucket can hold only one bucket policy, so the two aws_s3_bucket_policy resources overwrite each other on every apply. Keep one or the other: if the bucket should be fully public, keep public_policy; if only a specific IAM user/role should read, remove public_policy and aws_s3_bucket_acl and rely on principal_policy alone.&lt;br&gt;
    • Modern AWS best practice: avoid public buckets unless necessary. Instead, grant access to specific principals or use CloudFront with signed URLs.&lt;/p&gt;

&lt;p&gt;4) outputs.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "bucket_name" {
  value = aws_s3_bucket.site.bucket
}

output "bucket_arn" {
  value = aws_s3_bucket.site.arn
}

output "website_endpoint" {
  value = aws_s3_bucket.site.website_endpoint
  description = "Empty unless you enabled static website hosting."
  sensitive   = false
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5) Initialize &amp;amp; apply (commands)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd day67-s3
terraform init
terraform validate
terraform plan -out plan.tfplan
terraform apply "plan.tfplan"
# or terraform apply -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Watch outputs for bucket name/ARN. If apply fails with public access errors, check AWS Account Public Access Block settings — account-level block may prevent public policy/ACL.&lt;/p&gt;
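&lt;p&gt;You can inspect those Block Public Access settings with the AWS CLI (placeholders as before):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# bucket-level setting
aws s3api get-public-access-block --bucket &amp;lt;your-bucket-name&amp;gt;

# account-level setting
aws s3control get-public-access-block --account-id &amp;lt;ACCOUNT_ID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;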

&lt;p&gt;6) Test public read access&lt;br&gt;
    1.  Upload a sample file (AWS CLI):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 cp sample.txt s3://&amp;lt;your-bucket-name&amp;gt;/sample.txt --acl public-read
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Open in browser:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://&amp;lt;your-bucket-name&amp;gt;.s3.amazonaws.com/sample.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the object is publicly accessible, you’ll see its content.&lt;/p&gt;

&lt;p&gt;If you used static website hosting, URL is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://&amp;lt;your-bucket-name&amp;gt;.s3-website-&amp;lt;region&amp;gt;.amazonaws.com/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;7) Verify versioning&lt;/p&gt;

&lt;p&gt;Use AWS CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3api get-bucket-versioning --bucket &amp;lt;your-bucket-name&amp;gt;
# Expected: Status: Enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upload an object, then upload again; list versions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3api list-object-versions --bucket &amp;lt;your-bucket-name&amp;gt; --prefix sample.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;8) Clean up (optional)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# remove objects and versions first, then destroy
aws s3 rm s3://&amp;lt;your-bucket-name&amp;gt; --recursive
terraform destroy -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;S3 buckets with versioning require special handling to delete versions before bucket deletion.&lt;/p&gt;
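&lt;p&gt;For disposable buckets, one option is Terraform’s force_destroy argument on the bucket resource, which lets terraform destroy remove all objects and versions for you (use with care — a sketch of the change):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "site" {
  bucket        = var.bucket_name
  force_destroy = true   # deletes ALL objects and versions on destroy
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;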

</description>
      <category>terraform</category>
      <category>s3</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
