<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Manish R Warang</title>
    <description>The latest articles on DEV Community by Manish R Warang (@g33kzone).</description>
    <link>https://dev.to/g33kzone</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F305940%2F06fac106-eef0-47e4-9905-271d317f0d8e.jpg</url>
      <title>DEV Community: Manish R Warang</title>
      <link>https://dev.to/g33kzone</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/g33kzone"/>
    <language>en</language>
    <item>
      <title>TOON: The Token Ninja</title>
      <dc:creator>Manish R Warang</dc:creator>
      <pubDate>Fri, 21 Nov 2025 05:22:09 +0000</pubDate>
      <link>https://dev.to/g33kzone/toon-the-token-ninja-3134</link>
      <guid>https://dev.to/g33kzone/toon-the-token-ninja-3134</guid>
      <description>&lt;p&gt;&lt;em&gt;Because your LLM doesnt need another 4,000-character JSON blob to cry about.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Scene Every Cloud Architect Knows Too Well
&lt;/h3&gt;

&lt;p&gt;It's Friday evening. You've just wrapped up a solid week of architecture reviews and CI/CD firefighting.&lt;/p&gt;

&lt;p&gt;Then suddenly, PagerDuty pings.&lt;br&gt;&lt;br&gt;
 Your AI-powered deployment validator has failed again.&lt;/p&gt;

&lt;p&gt;The LLM was supposed to summarise an Azure App Service deployment manifest.&lt;br&gt;&lt;br&gt;
 Instead, it choked on a 19KB JSON blob and replied:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;I cannot process this PowerPoint file.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You check the logs.&lt;/p&gt;

&lt;p&gt;The JSON blob fed into the LLM has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;42 services&lt;/li&gt;
&lt;li&gt;118 policy rules&lt;/li&gt;
&lt;li&gt;7 nested objects spelling doom&lt;/li&gt;
&lt;li&gt;And a token bill that could fund a small island nation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Meanwhile, your FinOps team messages you:&lt;br&gt;&lt;br&gt;
 Why did &lt;em&gt;one&lt;/em&gt; prompt consume &lt;strong&gt;18,432 tokens&lt;/strong&gt;?!&lt;/p&gt;

&lt;p&gt;Just as you stare into the void, contemplating a career in the tea-making business…&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TOON walks in quietly.&lt;br&gt;&lt;br&gt;
Like a minimalist samurai.&lt;br&gt;&lt;br&gt;
Ready to solve your token trauma.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Meet TOON: Token-Oriented Object Notation
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;A compact, human-readable, LLM-friendly format designed for the AI era.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;TOON is what happens when JSON goes to coding bootcamp, takes minimalism seriously, and stops hoarding punctuation.&lt;/p&gt;

&lt;p&gt;It's designed to serialise structured data in a way that LLMs understand more easily, using fewer tokens and with fewer hallucinations.&lt;/p&gt;

&lt;p&gt;If JSON is a 20-page resume,&lt;br&gt;&lt;br&gt;
TOON is the clean 1-page CV that actually gets read.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why TOON Matters (Especially If You Live in the Cloud)
&lt;/h3&gt;

&lt;p&gt;LLMs struggle with noisy punctuation, long prompts, deeply nested brackets, and repeated field names.&lt;br&gt;&lt;br&gt;
 TOON fixes that by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Removing unnecessary syntax&lt;/li&gt;
&lt;li&gt;Flattening repetitive structures&lt;/li&gt;
&lt;li&gt;Compressing arrays into tabular form&lt;/li&gt;
&lt;li&gt;Reducing token count by &lt;strong&gt;30–60%&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For engineers dealing with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API specs&lt;/li&gt;
&lt;li&gt;Kubernetes manifests&lt;/li&gt;
&lt;li&gt;Azure Bicep outputs&lt;/li&gt;
&lt;li&gt;CI/CD summaries&lt;/li&gt;
&lt;li&gt;Observability logs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;TOON becomes an instant superpower.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Few Cloud &amp;amp; DevOps Examples
&lt;/h3&gt;

&lt;p&gt;Example 1: Kubernetes Pod Health Summary&lt;/p&gt;

&lt;p&gt;JSON&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "namespace": "prod",
  "pods": [
    { "name": "web-7f984d", "status": "Running", "restarts": 1 },
    { "name": "payments-4f29d1", "status": "CrashLoopBackOff", "restarts": 5 }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;TOON&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace: prod
pods[2]{name,status,restarts}:
  web-7f984d,Running,1
  payments-4f29d1,CrashLoopBackOff,5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Example 2: Observability Metrics (from official TOON benchmarks)&lt;/p&gt;

&lt;p&gt;JSON (5 rows)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "metrics": [
    {
      "date": "2025-01-01",
      "views": 5715,
      "clicks": 211,
      "conversions": 28,
      "revenue": 7976.46,
      "bounceRate": 0.47
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;TOON&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;metrics[5]{date,views,clicks,conversions,revenue,bounceRate}:
  2025-01-01,5715,211,28,7976.46,0.47
  ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Perfect for telemetry, Grafana streams, log analysis, and dashboards.&lt;/p&gt;

&lt;h3&gt;
  
  
  When Should You Use TOON? (Straight from the Spec)
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Use TOON When:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Your data is &lt;strong&gt;mostly flat or semi-flat&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;You're sending structured data to &lt;strong&gt;an LLM&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;You want to &lt;strong&gt;reduce token costs&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;You need to handle &lt;strong&gt;large arrays of similar objects&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;You're feeding logs, metrics, or infra configs to an AI agent&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Avoid TOON When:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Your structure is &lt;strong&gt;deeply nested&lt;/strong&gt; (complex trees, recursive objects)&lt;/li&gt;
&lt;li&gt;You need strong schema validation&lt;/li&gt;
&lt;li&gt;You're interacting with APIs expecting &lt;strong&gt;strict JSON&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Your data has &lt;strong&gt;highly irregular shapes&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A good rule:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;If JSON looks like a maze, keep JSON.&lt;br&gt;&lt;br&gt;
If JSON looks like a spreadsheet, use TOON.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  TOON vs JSON vs YAML: Developer Cage Match
&lt;/h3&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Feature&lt;/th&gt;&lt;th&gt;JSON&lt;/th&gt;&lt;th&gt;YAML&lt;/th&gt;&lt;th&gt;TOON&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Readability&lt;/td&gt;&lt;td&gt;Good&lt;/td&gt;&lt;td&gt;Great&lt;/td&gt;&lt;td&gt;Excellent&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Token Efficiency&lt;/td&gt;&lt;td&gt;Meh&lt;/td&gt;&lt;td&gt;Better&lt;/td&gt;&lt;td&gt;🔥 Best&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;LLM Accuracy&lt;/td&gt;&lt;td&gt;Decent&lt;/td&gt;&lt;td&gt;Moderate&lt;/td&gt;&lt;td&gt;Highest&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Human Happiness&lt;/td&gt;&lt;td&gt;Medium&lt;/td&gt;&lt;td&gt;Low (indentation PTSD)&lt;/td&gt;&lt;td&gt;High&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Ideal Use&lt;/td&gt;&lt;td&gt;APIs&lt;/td&gt;&lt;td&gt;Configs&lt;/td&gt;&lt;td&gt;AI prompts&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h3&gt;
  
  
  Installation &amp;amp; Quick Start (Developers Love This Part)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  CLI (no installation required)
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx @toon-format/cli input.json -o output.toon
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Pipe from stdin
&lt;/h4&gt;

&lt;p&gt;echo '{"name":"Ada"}' | npx @toon-format/cli&lt;/p&gt;

&lt;h4&gt;
  
  
  Library (TypeScript)
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install @toon-format/toon
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Usage
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { encode } from '@toon-format/toon';

console.log(encode({
  users: [
    { id: 1, name: 'Alice' },
    { id: 2, name: 'Bob' }
  ]
}));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
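&lt;p&gt;Going by the tabular array encoding shown in the examples above, that call should print something like this (output shape inferred from the TOON spec's examples, not verified against a specific library version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;users[2]{id,name}:
  1,Alice
  2,Bob
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;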

&lt;h3&gt;
  
  
  Final Thoughts: The Future Is TOON-Shaped
&lt;/h3&gt;

&lt;p&gt;TOON isn't just another data format.&lt;br&gt;&lt;br&gt;
 It's a &lt;strong&gt;practical upgrade&lt;/strong&gt; for anyone using LLMs in Cloud or DevOps.&lt;/p&gt;

&lt;p&gt;It reduces token costs.&lt;br&gt;&lt;br&gt;
It improves AI accuracy.&lt;br&gt;&lt;br&gt;
It removes JSON noise.&lt;br&gt;&lt;br&gt;
It cleans up your prompts.&lt;br&gt;&lt;br&gt;
And most importantly…&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It gives your AI agents a fighting chance to understand your infrastructure.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Try converting one of your ugly JSON prompts today.&lt;br&gt;&lt;br&gt;
Your LLM, and your token bill, will thank you.&lt;/p&gt;


</description>
      <category>llm</category>
      <category>tooling</category>
      <category>devops</category>
      <category>azure</category>
    </item>
    <item>
      <title>Secure by Design: Integrating Security Policies from Code to Cloud with Terraform</title>
      <dc:creator>Manish R Warang</dc:creator>
      <pubDate>Fri, 17 Jan 2025 04:03:59 +0000</pubDate>
      <link>https://dev.to/g33kzone/secure-by-design-integrating-security-policies-from-code-to-cloud-with-terraform-2opg</link>
      <guid>https://dev.to/g33kzone/secure-by-design-integrating-security-policies-from-code-to-cloud-with-terraform-2opg</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AedgUH7nvGeV6J2HG61b9cg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AedgUH7nvGeV6J2HG61b9cg.jpeg" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction: A Balancing Act Between Agility and Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the fast-paced world of Cloud computing, security has often been treated as an afterthought, a necessary evil tacked on after infrastructure is already deployed. But with Cloud-native architectures growing more complex by the day, this approach simply doesn’t cut it anymore. We can no longer afford to rely on traditional security practices to safeguard our sprawling infrastructure. Instead, what if we could bake security directly into the infrastructure provisioning process? Welcome to the era of “Secure by Design”, where Terraform allows us to enforce security policies at every stage — from code development to Cloud deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 1: The Evolution of Cloud Security and Infrastructure as Code (IaC)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Back in the days when on-premise servers roamed the Earth, security was a well-guarded perimeter. Firewalls, intrusion detection systems, and manual audits dominated the scene. But as we transitioned to Cloud-native environments, security became everyone’s responsibility — or in other words, nobody’s problem.&lt;/p&gt;

&lt;p&gt;Infrastructure as Code (IaC) tools like Terraform revolutionized the way we manage infrastructure, but it also introduced new challenges. Mismanaged credentials? Publicly exposed S3 buckets? Misconfigured access policies? If you’ve spent enough time in Cloud environments, you’ve seen it all. It’s no wonder that &lt;strong&gt;IaC without security by design&lt;/strong&gt; often results in unpredictable configurations that might look like a security nightmare.&lt;/p&gt;

&lt;p&gt;Luckily, Terraform is not just about infrastructure automation; it’s about integrating security as an intrinsic part of that automation. It helps ensure that your Cloud infrastructure is deployed securely, efficiently, and most importantly, consistently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 2: Terraform’s Core Principles: The Foundation of Secure Infrastructure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before diving deeper into security features, let’s understand how Terraform’s fundamental principles naturally align with, and enhance security practices. These principles aren’t just architectural choices — they’re the bedrock upon which we build secure infrastructure.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Declarative Configuration&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The declarative nature of Terraform isn’t just about simplicity — it’s your first line of defence. By declaring the desired state rather than writing procedural steps, you reduce the risk of configuration drift and security misconfigurations. When your infrastructure is defined as code, security becomes visible, reviewable, and most importantly, reproducible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Impact&lt;/strong&gt; : Every resource’s security configuration is explicitly declared, making it impossible to “accidentally” skip security settings. For instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "data_bucket" {
  bucket = "secure-company-data"
  versioning {
    enabled = true # Explicitly declared security feature
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256" # Explicit encryption requirement
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;State Management&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Terraform’s state management isn’t just about tracking resources — it’s about maintaining security consistency. The state file contains your infrastructure’s entire security posture, making it a crucial security artifact that needs protection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practice&lt;/strong&gt; : Always use remote state storage with encryption and proper access controls:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket = "terraform-state-secure"
    key = "prod/terraform.tfstate"
    region = "us-west-2"
    encrypt = true
    kms_key_id = "arn:aws:kms:us-west-2:111122223333:key/1234abcd…"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Resource Graph&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Terraform’s resource graph isn’t just for dependency management — it’s a security relationship mapper. Understanding these relationships is crucial for security, because it helps identify potential security implications of resource changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Application&lt;/strong&gt; : Use the graph to visualize security group dependencies and ensure proper network segmentation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Database security group depends on application security group
resource "aws_security_group_rule" "db_ingress" {
  security_group_id = aws_security_group.database.id
  type = "ingress"
  from_port = 5432
  to_port = 5432
  protocol = "tcp"
  source_security_group_id = aws_security_group.application.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;Provider Architecture&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Terraform’s provider architecture isn’t just about multi-Cloud support — it’s about standardizing security across different platforms. Each provider enforces platform-specific security best practices while maintaining a consistent security approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation example&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-west-2"
  assume_role {
    role_arn = "arn:aws:iam::ACCOUNT_ID:role/TerraformExecutionRole"
    session_name = "TerraformSecureSession"
  }
  default_tags {
    tags = {
      Environment = "Production"
      SecurityLevel = "High"
      DataClassification = "Confidential"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Section 3: Key Security Features in Terraform — And Why You Should Care&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Terraform, despite its innocent-looking HCL syntax, is the unsung hero of secure Cloud infrastructures. But let’s not just take its word for it. Let’s break down the &lt;strong&gt;key security features&lt;/strong&gt; Terraform offers and see how they directly translate into real-world use cases.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Provider Authentication and Authorization&lt;/strong&gt; : Terraform ensures secure communication with Cloud providers like AWS, Azure and Google Cloud through provider-specific authentication methods. Imagine managing multiple Cloud providers — Terraform helps you authenticate securely without juggling credentials like hot potatoes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Managing Secrets and Sensitive Data&lt;/strong&gt; : Secrets management is the Achilles’ heel of infrastructure security. Storing sensitive data like API keys in plain text is like handing over your house keys to a stranger. With Terraform, you can integrate tools like HashiCorp Vault or AWS Secrets Manager to keep those secrets, well, secret.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Use Case&lt;/strong&gt; : In a multi-Cloud environment, where sensitive credentials need to be shared across platforms, Terraform can abstract these credentials securely, so they are never exposed, ensuring compliance with security standards.&lt;/p&gt;
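&lt;p&gt;As a minimal sketch of that pattern (the Vault mount, secret path and resource names here are illustrative), a secret can be read through a data source so the value never appears in your code or repository:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative only: mount, path and names are hypothetical
data "vault_kv_secret_v2" "db" {
  mount = "secret"
  name  = "prod/database"
}

resource "aws_db_instance" "app" {
  identifier        = "app-db"
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "app"
  # Resolved from Vault at plan/apply time, never committed to the repo
  password          = data.vault_kv_secret_v2.db.data["password"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;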

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Remote State Storage and Encryption&lt;/strong&gt; : Storing your Terraform state remotely, especially for larger teams, is non-negotiable. Using Terraform’s remote state with encryption ensures that your state files (which hold the keys to your infrastructure) are both available and secure.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Use Case&lt;/strong&gt; : Consider an enterprise spanning different geographies — remote state storage backed by strong encryption ensures that your infrastructure’s blueprint is safe, even when collaborating across regions.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;
&lt;strong&gt;IAM and Access Policies&lt;/strong&gt; : Following the principle of least privilege, Terraform allows you to manage IAM policies across various Cloud providers. Over-provisioning access is a rookie mistake, and Terraform’s fine-grained control makes sure that your users only get the access they deserve.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Use Case&lt;/strong&gt; : For a financial services company managing multiple AWS accounts, IAM policies controlled via Terraform ensure that users and services only have access to what they truly need, reducing the risk of a breach.&lt;/p&gt;
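&lt;p&gt;A minimal sketch of what least privilege looks like in Terraform (bucket and policy names are illustrative): the policy grants read-only access to one bucket and nothing else.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Read-only access to a single bucket, nothing more
data "aws_iam_policy_document" "read_reports" {
  statement {
    effect  = "Allow"
    actions = ["s3:GetObject", "s3:ListBucket"]
    resources = [
      "arn:aws:s3:::finance-reports",
      "arn:aws:s3:::finance-reports/*"
    ]
  }
}

resource "aws_iam_policy" "read_reports" {
  name   = "finance-reports-read-only"
  policy = data.aws_iam_policy_document.read_reports.json
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;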

&lt;p&gt;&lt;strong&gt;Section 4: Integrating Security Policies into Terraform Workflows — Because Automation Without Security is a Liability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So now that we’ve established that Terraform can handle security, how do we automate it? Enter &lt;strong&gt;Policy as Code&lt;/strong&gt; , where security policies are encoded into your Terraform workflows to ensure that nothing slips through the cracks. Tools like HashiCorp Sentinel and Open Policy Agent (OPA) enable policy enforcement as part of your infrastructure’s lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 1: Enforcing Encryption for Storage&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "secure_bucket" {
  bucket = "secure-data-storage"
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, Terraform ensures that your S3 bucket has server-side encryption enabled by default. No more excuses for “Oops, I forgot to turn on encryption!”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 2: Restricting Public Access to Resources&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "secure_sg" {
  name = "no_public_access"
  description = "Security group with restricted public access"
  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["10.0.0.0/16"] # No public IP allowed
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This security group example makes sure that only internal IPs from your network can access certain services — effectively shutting the door to the outside world. Let’s see someone misconfigure that!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 5: Securing the Entire Lifecycle — From Code to Cloud&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security in Development&lt;/strong&gt; : Developers often leave security for later. Bad move. By integrating Terraform early in the development phase, they can proactively implement best practices like encryption, access restrictions and logging from day one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 3: Secure CI/CD Pipelines&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_codebuild_project" "secure_build" {
  name = "secure-ci-pipeline"
  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image = "aws/codebuild/standard:4.0"
    environment_variable {
      name = "SECURE_VAR"
      value = "sensitive_value"
      type = "SECRETS_MANAGER"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Terraform, CI/CD pipelines don’t just automate deployments — they enforce security as they go. This pipeline ensures sensitive variables are stored securely, so every deployment step is auditable and safe.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Post-Deployment Monitoring&lt;/strong&gt; : Once your infrastructure is deployed, the job isn’t done. Terraform integrates with Cloud-native security tools like AWS Security Hub and Azure Security Centre to continuously monitor your environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 4: Continuous Monitoring with Terraform&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_securityhub_account" "example" {
  enable_default_standards = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple snippet ensures that Security Hub is activated in your AWS account, constantly scanning for security threats. Sleep tight — Terraform’s got your back.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 6: Bringing It All Together — Native Principles Meet Security Practices&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The magic happens when we combine Terraform’s native principles with security features. This integration creates a robust security framework that’s both powerful and maintainable:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Modular Security&lt;/strong&gt; : Use Terraform modules to create reusable security configurations that enforce your organization’s standards:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "secure_vpc" {
  source = "./modules/secure-vpc"
  environment = "prod"
  enable_flow_logs = true
  enable_network_firewall = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Workspaces for Security Isolation&lt;/strong&gt; : Leverage Terraform workspaces to maintain separate security contexts for different environments:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform workspace select prod
# Production-specific security configurations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Data Sources for Security Compliance&lt;/strong&gt; : Use data sources to query existing security configurations and ensure compliance:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_security_group" "existing" {
  name = "production-baseline"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion: Security is Not Optional&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the end, security isn’t just a feature — it’s a necessity. By adopting a “Secure by Design” approach with Terraform, you’re not just building infrastructure; you’re building a fortress that evolves with your needs. Don’t wait until it’s too late — start integrating security policies into your Terraform workflows today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt; :&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Start enforcing security policies in your Terraform workflows from day one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use tools like HashiCorp Sentinel or OPA to automate security checks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implement secure CI/CD pipelines to ensure continuous security throughout the deployment lifecycle.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leverage Cloud-native security tools for continuous post-deployment monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Embrace Terraform’s native principles as the foundation for secure infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Call to Action&lt;/strong&gt; : Evaluate your current Terraform setup. It’s time to stop playing catch-up with security and start being proactive. Go ahead — give your infrastructure the security makeover it deserves, built on the solid foundation of Terraform’s core principles.&lt;/p&gt;




</description>
      <category>devops</category>
      <category>cloud</category>
      <category>engineering</category>
      <category>terraform</category>
    </item>
    <item>
      <title>The Terraform Fork in the Road</title>
      <dc:creator>Manish R Warang</dc:creator>
      <pubDate>Mon, 23 Sep 2024 09:39:38 +0000</pubDate>
      <link>https://dev.to/g33kzone/the-terraform-fork-in-the-road-532n</link>
      <guid>https://dev.to/g33kzone/the-terraform-fork-in-the-road-532n</guid>
      <description>&lt;h3&gt;
  
  
  A Multi-Cloud Architect’s Guide to Choosing Your IaC Path
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkobdsaqn9ix3tne7pmcd.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkobdsaqn9ix3tne7pmcd.jpeg" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Introduction
&lt;/h4&gt;

&lt;p&gt;As a multi-cloud architect in the industry, I have the privilege of guiding organizations through complex Cloud infrastructure decisions. Today, we’re facing a pivotal moment in the Infrastructure as Code (IaC) landscape, prompted by significant changes involving Terraform and the rise of OpenTofu. This blog aims to provide insights into the ongoing debate between sticking with Terraform or migrating to the open-source alternative, OpenTofu. Given recent events such as IBM’s acquisition of HashiCorp and Terraform’s licensing changes, it’s crucial to assess the implications for your organization. While there’s no one-size-fits-all answer, understanding the nuances will help you make an informed decision that aligns with your unique needs and risk tolerance.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Terraform Landscape: Before and After
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Terraform’s Rise&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Terraform has long been the de-facto standard for Infrastructure as Code (IaC). Its declarative configuration language, extensive provider ecosystem, and ability to manage infrastructure across multiple Cloud platforms have made it indispensable for DevOps teams worldwide. Its open-source nature fostered a robust community, contributing to its rapid adoption and evolution. Organizations of all sizes have relied on Terraform for its stability, scalability and extensive documentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IBM Acquisition and BSL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The landscape changed dramatically when HashiCorp moved Terraform to the Business Source License (BSL), followed by IBM’s acquisition of HashiCorp. The BSL, while allowing free use under certain conditions, introduces restrictions that could impact enterprises relying heavily on Terraform. This shift has raised questions about the future trajectory of Terraform, particularly concerning potential cost implications and the prioritization of features that serve IBM’s strategic goals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Community Response&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In response to these changes, the community has rallied around OpenTofu, an open-source fork of Terraform. OpenTofu aims to maintain the open-source ethos that originally made Terraform popular. It promises to be a free and community-driven alternative, addressing concerns about vendor lock-in and licensing costs. This development has sparked a significant debate within the DevOps community about the best path forward.&lt;/p&gt;

&lt;h4&gt;
  
  
  Key Considerations for Decision-Makers
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Cost&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OpenTofu&lt;/strong&gt; : As an open-source project, OpenTofu eliminates licensing costs, which can be a significant factor for organizations, especially those operating at scale. This cost-saving can be redirected towards other strategic initiatives or infrastructure improvements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; : The introduction of the BSL necessitates a careful evaluation of the potential financial impact. Organizations must assess whether the benefits of staying with Terraform outweigh the costs associated with its new licensing model.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Feature Parity and Roadmap&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OpenTofu&lt;/strong&gt; : It is crucial to assess whether OpenTofu’s current feature set meets your organization’s requirements. While OpenTofu aims to replicate Terraform’s capabilities, it is still in its nascent stage. Evaluating its roadmap and community support is essential to ensure it aligns with your long-term goals.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; : Under IBM’s stewardship, Terraform’s future roadmap may prioritize enterprise features that align with IBM’s strategic objectives. Organizations need to consider whether these priorities align with their own goals, and whether they can trust IBM to continue developing Terraform in a way that meets their needs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Community and Support&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OpenTofu&lt;/strong&gt; : While OpenTofu benefits from a growing community eager to contribute, it may not yet match Terraform’s maturity in terms of documentation, plugins and support. However, the open-source community has a track record of quickly filling gaps and addressing issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; : Terraform’s extensive community, comprehensive documentation and professional support options make it a reliable choice for organizations requiring robust support and resources. This established ecosystem can be crucial for enterprises that need guaranteed reliability and quick resolution of issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Risk Tolerance and Vendor Lock-in&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OpenTofu&lt;/strong&gt; : One of the main advantages of OpenTofu is the lower risk of vendor lock-in. Organizations retain full control over their IaC tool, which can be a significant advantage in terms of flexibility and autonomy. However, there are potential concerns about long-term support and development continuity that need to be addressed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; : Reliance on a proprietary tool owned by a large corporation like IBM introduces certain risks, such as changes in licensing, pricing, or development focus. Organizations need to evaluate their comfort level with these risks and the potential impact on their operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Migration Effort&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OpenTofu&lt;/strong&gt; : Designed for compatibility with Terraform, OpenTofu aims to minimize migration complexity. However, the specific effort required will depend on the intricacies of your current infrastructure and how deeply integrated Terraform is within your systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; : Staying with Terraform avoids the immediate cost and effort of migration, but requires careful consideration of the long-term implications of the BSL and IBM’s strategic direction.&lt;/li&gt;
&lt;/ul&gt;
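
&lt;p&gt;As a rough illustration of that compatibility, the same provider-pinning block is understood by both CLIs; only the binary you run (&lt;code&gt;terraform&lt;/code&gt; vs &lt;code&gt;tofu&lt;/code&gt;) changes. The version constraints below are placeholders, not recommendations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This block is read identically by `terraform init` and `tofu init`.
# A migration pilot can start by running `tofu init` and `tofu plan`
# against an existing configuration and diffing the plan output.
terraform {
  required_version = "&gt;= 1.6.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&gt; 5.0"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;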

&lt;h4&gt;
  
  
  Recommendations
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;For Organizations Highly Invested in Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Evaluate the Impact of the BSL&lt;/strong&gt; : Conduct a thorough analysis of how the BSL affects your organization. Consider both the direct costs and the potential indirect effects on your operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Engage with HashiCorp&lt;/strong&gt; : Reach out to HashiCorp to understand their licensing terms and future plans. This engagement can provide clarity and help you negotiate terms that align with your needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explore OpenTofu as an Alternative&lt;/strong&gt; : If cost or vendor lock-in are major concerns, start exploring OpenTofu as a potential alternative. Pilot projects can help assess its suitability and ease of migration.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;For Organizations Starting New IaC Projects&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Compare Features and Community Support&lt;/strong&gt; : Conduct a detailed comparison of Terraform and OpenTofu in terms of features, community support and ecosystem maturity. This comparison should include both current capabilities and future roadmaps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consider Long-Term Costs and Risks&lt;/strong&gt; : Factor in the long-term cost implications and your organization’s risk appetite. Assess whether the potential savings and flexibility of OpenTofu outweigh the stability and support offered by Terraform.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;The decision between Terraform and OpenTofu is nuanced and depends on your organization’s unique circumstances. While Terraform’s established ecosystem and professional support make it a compelling choice, the cost implications of the BSL and potential vendor lock-in under IBM’s ownership are significant considerations. OpenTofu offers a promising open-source alternative that can reduce costs and increase flexibility, but it requires careful evaluation of its maturity and community support.&lt;/p&gt;

&lt;p&gt;I encourage you to actively assess your options, engage with both communities, and make an informed choice that aligns with your organization’s goals and values. The landscape of Cloud infrastructure management is rapidly evolving, and staying informed and adaptable is key to navigating these changes successfully.&lt;/p&gt;




</description>
      <category>terraform</category>
      <category>engineering</category>
      <category>technology</category>
      <category>devops</category>
    </item>
    <item>
      <title>12 Cloud Commandments: Applying 12 Factor App Principles to Master Terraform — Part 4</title>
      <dc:creator>Manish R Warang</dc:creator>
      <pubDate>Mon, 24 Jun 2024 12:47:11 +0000</pubDate>
      <link>https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-4-3n02</link>
      <guid>https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-4-3n02</guid>
      <description>&lt;h3&gt;
  
  
  12 Cloud Commandments: Applying 12 Factor App Principles to Master Terraform — Part 4
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Processes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the previous &lt;a href="https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-3-2446-temp-slug-7981531"&gt;part&lt;/a&gt; of this series, we delved into how the 12 Factor App principles advocate for executing tasks as stateless, isolated processes. This approach helps prevent resource inefficiencies and system failures. By ensuring that background processes remain lightweight, scalable and resilient, we safeguard our infrastructure’s stability and efficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Port Binding&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Port binding was discussed as a crucial practice for ensuring efficient traffic distribution and high availability. We highlighted scenarios like port collisions and ephemeral port issues, emphasizing the importance of dynamic port mapping to avoid conflicts and ensure seamless scaling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Concurrency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Concurrency involves managing multiple processes efficiently to prevent resource collisions. Using strategies like Terraform workspaces and state locking mechanisms helps maintain resource isolation and prevent conflicts, thus ensuring the stability and integrity of the infrastructure.&lt;/p&gt;
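
&lt;p&gt;To recap the mechanics, the state locking mentioned above is typically configured on the backend. A minimal sketch (bucket and table names are placeholders, not real resources):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket         = "example-terraform-state"   # placeholder bucket name
    key            = "app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-terraform-locks"   # DynamoDB table used for state locking
    encrypt        = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;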

&lt;p&gt;In this part, we continue our exploration of the 12 Factor App principles in the context of Terraform, focusing on the remaining principles and their practical applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Disposability — Maximize robustness with fast start-up and graceful shutdown
&lt;/h3&gt;

&lt;p&gt;It’s the magic trick of your infrastructure show — it’s all about making things disappear without a trace. But beware the magician who forgets to clean up after themselves — keep your disposability tidy and your stage clear for the next act!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A04_pgxghADVJUvuuyLXI8Q.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A04_pgxghADVJUvuuyLXI8Q.jpeg" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 1: The “Server Zombie Apocalypse”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine you’re managing a fleet of servers in the Cloud, each responsible for a critical component of your application. Suddenly, one of them decides to play dead in the middle of the night, just like a tired toddler refusing to sleep. You try everything to revive it, from CPR (Cloud Provider Resuscitation) to offering it virtual chicken soup, but nothing works. Now you’re stuck in a server zombie apocalypse, where one unresponsive instance threatens to bring down your entire application like a house of cards. It’s like a horror movie where the villain is a stubborn piece of hardware instead of a masked maniac.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2: The “Patchwork Quilt of Pain”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Picture this: you’re responsible for maintaining a legacy system that’s been around since the dawn of Cloud computing. Every time you need to apply a security patch or update a component, it’s like performing surgery on a Frankenstein’s monster of code. You’re forced to patch up holes and stitch together workarounds just to keep the system limping along like a wounded gazelle. The result? A patchwork quilt of pain, where every fix introduces new vulnerabilities and instability. It’s like trying to remodel a house made of Jenga blocks, where every change threatens to bring the whole structure crashing down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 3: Lack of graceful shutdown handling in Terraform deployments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One common challenge faced by Cloud and DevOps engineers is ensuring the graceful disposal of resources when scaling down or updating infrastructure. Without proper handling, abrupt termination of resources can lead to data loss or service disruption. Let’s take the example of an auto-scaling group in AWS managed by Terraform. When instances are terminated due to scaling down or updating, there might be ongoing processes or connections that need to be gracefully handled to prevent data loss or service interruptions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_autoscaling_group" "example" {
// Other configurations…
lifecycle {
// Graceful handling of instances during scale down
ignore_changes = ["desired_capacity"]
create_before_destroy = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By utilizing Terraform’s &lt;strong&gt;lifecycle&lt;/strong&gt; block, engineers can ensure that instances are gracefully replaced during updates or scaling down, allowing ongoing processes to complete without interruption.&lt;/p&gt;
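
&lt;p&gt;For draining in-flight work specifically, that replacement ordering can be paired with an Auto Scaling lifecycle hook, which holds terminating instances in a wait state until cleanup finishes. A minimal sketch, assuming the group above (the hook name and timeout are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_autoscaling_lifecycle_hook" "drain" {
  name                   = "drain-before-terminate"
  autoscaling_group_name = aws_autoscaling_group.example.name
  lifecycle_transition   = "autoscaling:EC2_INSTANCE_TERMINATING"

  # Keep the instance in Terminating:Wait for up to 5 minutes so
  # shutdown scripts can flush logs and close connections.
  heartbeat_timeout = 300
  default_result    = "CONTINUE"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;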

&lt;p&gt;In conclusion, the “Disposability” principle of the Twelve-Factor App is crucial for maintaining resilience and efficiency in Cloud infrastructure managed with Terraform. As Cloud and DevOps engineers, we face the challenge of gracefully disposing of resources during scaling down or updates. Without proper handling, abrupt terminations can lead to data loss or service disruption. By adhering to disposability principles, such as managing connections and processes effectively, we ensure a smoother transition during scaling events. With Terraform, leveraging features like lifecycle hooks and graceful termination policies, we can mitigate risks and enhance the reliability of our infrastructure. Embracing disposability not only fosters resilience, but also paves the way for seamless scaling and maintenance in dynamic Cloud environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dev/Prod Parity — Keep development, staging and production environments as similar as possible
&lt;/h3&gt;

&lt;p&gt;These are the twin towers of your infrastructure skyline — they should look alike, walk alike, and talk alike. But watch out for the evil twin who tries to steal the spotlight — keep your dev and prod environments in sync and your audiences guessing!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 1: The “Works on My Machine” Conundrum&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
 You’ve spent hours perfecting your code in the cozy confines of your development environment. Everything seems to be running smoothly, and you’re ready to unleash your masterpiece into the wild. But as soon as you push it to production, disaster strikes. Your app crashes, users revolt, and chaos reigns supreme. What went wrong? It turns out, your development environment was a utopian paradise compared to the harsh realities of production. Different configurations, dependencies and environments can turn your code into a ticking time bomb. It’s like sending a penguin to the desert and expecting it to thrive — it’s just not going to happen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2: The “Mystery Bug” Saga&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
 You’re on a mission to hunt down a pesky bug that’s been wreaking havoc in your production environment. You scour through logs, comb through code, and even consult the ancient scrolls of Stack Overflow. But no matter how hard you try, the bug remains elusive, like a ghost haunting your server room. The culprit? Dev/Prod disparity strikes again. What worked perfectly in your development environment is now causing chaos in production, thanks to subtle differences in configurations or dependencies. It’s like trying to solve a murder mystery where the clues keep changing every time you look away. Without parity between environments, you’re left scratching your head and cursing the heavens for your misfortune.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 3: Configuration Drift in Terraform Environments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine you’ve diligently crafted your Terraform infrastructure code to provision resources for both your development and production environments. However, over time, subtle differences creep in between these environments due to manual changes or updates made directly in the console. This configuration drift can lead to inconsistencies and unexpected behavior when deploying new features or scaling up/down resources. For instance, let’s say in your development environment, you’ve set up a smaller instance type for cost efficiency, while in production, you’ve opted for larger instances to handle higher traffic loads. If these configurations aren’t reflected accurately in your Terraform code, you risk deploying changes that work perfectly in development, but fail in production, disrupting service availability.&lt;/p&gt;
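
&lt;p&gt;One way to keep such intentional differences explicit, rather than letting them drift in the console, is to express them as variables fed by per-environment &lt;code&gt;.tfvars&lt;/code&gt; files. A minimal sketch (file names and instance types are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# variables.tf -- one definition shared by every environment
variable "instance_type" {
  description = "EC2 instance type for the app tier"
  type        = string
}

# dev.tfvars
#   instance_type = "t3.small"

# prod.tfvars
#   instance_type = "m5.large"

# Applied with:
#   terraform apply -var-file=dev.tfvars    # development
#   terraform apply -var-file=prod.tfvars   # production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;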

&lt;p&gt;In a nutshell, ensuring “Dev/Prod Parity” within your Terraform codebase is paramount for maintaining consistency and predictability across environments. By adhering to this 12 factor principle, you safeguard against the insidious drift that often plagues development and production setups. Imagine your development environment humming along smoothly with cost-efficient setups, only to encounter deployment nightmares when transitioning to production due to unnoticed disparities. With Dev/Prod Parity, you mitigate these risks, ensuring seamless scalability and reliable deployments. So, let’s champion this principle in our Terraform workflows, empowering teams to confidently manage infrastructure with cohesion and clarity, from development inception to production excellence.&lt;/p&gt;
&lt;h3&gt;
  
  
  Logs — Treat logs as event streams
&lt;/h3&gt;

&lt;p&gt;They are the breadcrumbs of your infrastructure fairy tale — they lead you from the beginning to the happily ever after. But beware the breadcrumbs that lead you astray — keep your logs clear and your story straight!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futpw8d3u91nzxai3gg37.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futpw8d3u91nzxai3gg37.jpeg" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 1: The Mystery of the Vanishing Logs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Picture this: you’ve just deployed a shiny new microservice into the Cloud, confident that it’ll revolutionize your company’s workflow. But as soon as it goes live, something strange happens — your logs start disappearing faster than socks in a dryer. You check your logging platform, only to find cryptic error messages like “404: Logs not found.” It’s like trying to solve a mystery with no clues and no detective. Without proper logs, debugging becomes a game of “Where’s Waldo?”, except Waldo is your crucial debugging information, and he’s nowhere to be found.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2: The Great Flood of Logs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You’ve finally tracked down the source of that pesky bug that’s been plaguing your application for weeks. Victory is within your grasp…until you realize your logging system is spewing out more messages than a malfunctioning sprinkler system. Your logs are flooding your monitoring tools faster than you can say “log rotation.” It’s like trying to take a sip of water from a fire hose — overwhelming, messy and definitely not sustainable. Sorting through this deluge of information is like searching for a needle in a haystack, except the haystack is made of more needles than a porcupine on a sewing spree.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 3: Security Risks Due to Insecure Log Handling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Another common challenge faced by Cloud and DevOps engineers in Terraform environments is the potential security risks associated with insecure log handling practices. Inadequate protection of sensitive information, such as API keys, passwords, or access tokens, within log files can expose the infrastructure to unauthorized access and data breaches. Consider a scenario where a Terraform script responsible for provisioning Cloud resources inadvertently logs confidential credentials in plaintext format. Without proper encryption or masking mechanisms in place, these sensitive details become vulnerable to exploitation by malicious actors, compromising the integrity and confidentiality of the entire infrastructure.&lt;/p&gt;
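
&lt;p&gt;Terraform offers a first line of defence here: marking variables and outputs as &lt;code&gt;sensitive&lt;/code&gt; so their values are redacted from plan and apply output. A minimal sketch (the names are illustrative; note that sensitive values still land in the state file, which must be protected separately):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "db_password" {
  description = "Database admin password"
  type        = string
  sensitive   = true   # redacted in plan/apply output
}

output "db_connection" {
  value     = "postgres://admin@db.example.internal:5432/app"
  sensitive = true     # prevents the output from being printed in logs
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;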

&lt;p&gt;In conclusion, implementing the Logs principle of the 12 Factor App within Terraform scripts is paramount for ensuring robust security measures. By adhering to this principle, we mitigate the significant risk posed by insecure log handling practices. Safeguarding sensitive information like API keys and passwords within logs is imperative to prevent unauthorized access and potential data breaches. Incorporating encryption or masking mechanisms within Terraform code enhances confidentiality and integrity, fortifying the entire infrastructure against malicious exploits. As Cloud and DevOps practitioners, embracing the Logs principle not only aligns with best practices, but also underscores our commitment to building resilient and secure Cloud environments.&lt;/p&gt;
&lt;h3&gt;
  
  
  Admin processes — Run admin/management tasks as one-off processes
&lt;/h3&gt;

&lt;p&gt;Admin processes are the gatekeepers of your infrastructure castle — they hold the keys to the kingdom and keep the riff-raff at bay.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi41xk1az1soj0uso8p5n.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi41xk1az1soj0uso8p5n.jpeg" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 1: The “Sudo Spree” Syndrome&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
 Imagine you’re the proud owner of the keys to the kingdom, bestowed upon you by the mighty ‘sudo’ command. You wield this power like a medieval knight with a broadsword, granting you access to every nook and cranny of your Cloud kingdom. But with great power comes great temptation, and soon you find yourself on a ‘sudo spree,’ granting elevated privileges left and right faster than a squirrel on a caffeine high. Before you know it, chaos ensues. Resources are misconfigured, security holes abound, and your once pristine infrastructure resembles a medieval battlefield. It’s a cautionary tale of the dangers of unchecked admin access, where even the mightiest knights can fall prey to the allure of absolute power.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2: The “Secrets Stash” Conundrum&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
 In the quest to secure your kingdom, you’ve amassed a treasure trove of secrets: API keys, passwords, encryption keys, you name it. Like a dragon guarding its hoard, you stash these secrets away in the deepest, darkest corners of your infrastructure, hoping to keep them safe from prying eyes. But as any seasoned adventurer will tell you, secrets have a way of slipping through the cracks. Whether it’s a misplaced configuration file or a disgruntled employee turned rogue, your secrets are as vulnerable as a castle made of sand. And when they inevitably fall into the wrong hands, it’s not just your kingdom that’s at risk, but the entire realm of Cloud and DevOps. It’s a tale as old as time: the struggle to balance security with accessibility, where even the mightiest fortresses can crumble at the slightest misstep.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 3: Lack of Immutable Infrastructure in Terraform Workflows&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Another challenge faced by Cloud and DevOps engineers is maintaining immutable infrastructure while managing Terraform workflows, which aligns with the Admin processes principle. In traditional setups, updates to infrastructure are often made in-place, directly modifying existing resources. However, this approach can introduce inconsistencies and make it challenging to rollback changes if issues arise. Suppose you’re deploying a Kubernetes cluster using Terraform, and during an update, a misconfiguration causes instability. Reverting to the previous state becomes cumbersome without proper versioning and immutable infrastructure practices.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Example Terraform code for deploying a Kubernetes cluster
// using immutable infrastructure
// Define Kubernetes cluster resources
resource "aws_eks_cluster" "example" {
  name = "example-cluster"
  role_arn = aws_iam_role.example.arn
  version = var.kubernetes_version
  vpc_config {
  subnet_ids = var.subnet_ids
    }
}
// Define immutable update process using Terraform
resource "null_resource" "k8s_update" {
  triggers = {
  eks_cluster_version = aws_eks_cluster.example.version
  desired_version = var.desired_kubernetes_version
    }
  provisioner "local-exec" {
  command = "kubectl apply -f k8s_manifests/"
    }
  provisioner "local-exec" {
  command = "kubectl rollout restart deployment - all"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To mitigate this, it’s crucial to adopt immutable infrastructure principles in Terraform workflows. Rather than modifying existing resources, treat infrastructure as disposable and recreate it entirely for updates.&lt;/p&gt;

&lt;p&gt;In conclusion, the Admin processes principle of the 12 Factor App highlights the significance of treating administrative tasks as one-off processes that run against a single, immutable release. When applied to Terraform workflows, this principle underscores the importance of maintaining immutable infrastructure to ensure consistency and reliability. By adhering to immutable infrastructure practices, Cloud and DevOps engineers can mitigate the risks associated with in-place updates, enabling smoother rollbacks and better management of resources. Embracing this principle not only enhances the stability of infrastructure but also streamlines the management of Terraform code, empowering teams to confidently navigate the complexities of Cloud provisioning.&lt;/p&gt;

&lt;p&gt;In wrapping up, adopting the 12 Factor App principles in Terraform development is more than just a trend — it’s a game-changer for Cloud and DevOps engineers. These principles offer a roadmap for building resilient, scalable and maintainable infrastructure in the Cloud era. By applying concepts like declarative formats, dependency management, and strict isolation, teams can elevate their Terraform workflows to new heights of efficiency and reliability.&lt;/p&gt;

&lt;p&gt;Whether you’re provisioning Kubernetes clusters or spinning up serverless functions, integrating these principles ensures consistency across environments, and simplifies the management of complex infrastructure-as-code projects. From enhancing collaboration to enabling seamless deployments, the 12 Factor App principles serve as a guiding light for modern Cloud architecture.&lt;/p&gt;

&lt;p&gt;So, as we forge ahead in this era of rapid innovation, let’s not forget the timeless wisdom embedded in these principles. Let’s embrace them, iterate upon them, and together, let’s build the resilient infrastructure of tomorrow, one Terraform module at a time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More Read&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-1-57oh-temp-slug-6392089"&gt;Part 1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-2-430o-temp-slug-4993577"&gt;Part 2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-3-2446-temp-slug-7981531"&gt;Part 3&lt;/a&gt;&lt;/p&gt;




</description>
      <category>terraform</category>
      <category>devops</category>
      <category>engineering</category>
      <category>cloud</category>
    </item>
    <item>
      <title>12 Cloud Commandments: Applying 12 Factor App Principles to Master Terraform — Part 3</title>
      <dc:creator>Manish R Warang</dc:creator>
      <pubDate>Wed, 05 Jun 2024 10:48:02 +0000</pubDate>
      <link>https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-3-36p1</link>
      <guid>https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-3-36p1</guid>
      <description>&lt;h3&gt;
  
  
  12 Cloud Commandments: Applying 12 Factor App Principles to Master Terraform — Part 3
&lt;/h3&gt;

&lt;p&gt;In the previous &lt;a href="https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-2-430o-temp-slug-4993577"&gt;part&lt;/a&gt; of this series, we explored the initial steps to applying the 12 Factor App principles in Terraform code. We discussed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Config&lt;/strong&gt; : Emphasizing the separation of configuration from code, enabling dynamic and environment-specific settings. This enhances the reliability and scalability of your infrastructure provisioning process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backing Services&lt;/strong&gt; : Treating backing services such as databases as attached resources. This involves securely managing credentials and leveraging tools like AWS Secrets Manager to ensure portability and maintainability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build, Release, Run&lt;/strong&gt; : Strictly separating these stages to prevent configuration drift and maintain consistency across environments. This ensures what you build is exactly what you release and run, fostering a robust deployment process.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These foundational principles help in creating a scalable, maintainable, and secure infrastructure using Terraform.&lt;/p&gt;
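
&lt;p&gt;As a quick refresher on the backing-services point, credentials can be pulled at plan time from a secrets store instead of being hardcoded. A minimal sketch using AWS Secrets Manager (the secret name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Look up an existing secret instead of embedding credentials in code
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "example/app/db-credentials"   # placeholder secret name
}

# The decoded value can then be passed to the backing service
locals {
  db_creds = jsondecode(data.aws_secretsmanager_secret_version.db.secret_string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;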

&lt;h3&gt;
  
  
  Processes — Execute the app as one or more stateless processes
&lt;/h3&gt;

&lt;p&gt;Processes are the choreographers of your Cloud ballet — they keep everyone in step and moving in the right direction. But watch out for the prima donna who insists on stealing the spotlight — keep your processes lean and your performance flawless!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fi45tuv8k0sqf0vxtlk.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fi45tuv8k0sqf0vxtlk.jpeg" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 1: The “Zombie Apocalypse” of Processes&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
 Imagine this: you’re managing a fleet of Cloud servers, each running multiple services to keep your application humming along. But as time goes by, you start noticing something eerie — zombie processes lurking in the shadows. These undead remnants of past deployments refuse to die gracefully, hogging precious resources and haunting your system like ghosts of past deployments. They slow down your servers, drain your budgets, and turn your Cloud infrastructure into a graveyard of wasted compute power. It’s a nightmare straight out of a horror movie, except instead of brains, these zombies crave CPU cycles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2: The “Whack-a-Mole” Conundrum&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
 Ever felt like you’re playing an endless game of whack-a-mole with your Cloud infrastructure? You squash one pesky process causing trouble, only for another one to pop up somewhere else. It’s like trying to plug leaks in a sinking ship with duct tape — a never-ending cycle of firefighting and frustration. You scale up to handle increased traffic, only to find that your processes aren’t playing nice with each other, leading to bottlenecks and slowdowns. It’s enough to make even the most seasoned DevOps engineer feel like they’re chasing their own tail in a never-ending game of cat and mouse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 3: Unmanaged Background Processes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In a typical Cloud infrastructure setup managed through Terraform, engineers often need to provision various resources like virtual machines, databases, and networking components. Sometimes, these resources require background processes for tasks such as data migration, log streaming, or periodic clean-up. However, if these background processes are not properly managed, they can lead to resource wastage, performance issues, or even unexpected downtime. Consider a scenario where a DevOps team provisions a set of EC2 instances using Terraform to host a microservices architecture. Each instance needs to run a background process for log aggregation and forwarding to a centralized monitoring system. Without proper management, these background processes might consume excessive CPU or memory, impacting the performance of the microservices. Furthermore, if one of these processes crashes or hangs, it can disrupt the entire application’s functionality.&lt;/p&gt;
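
&lt;p&gt;One mitigation is to make the background process supervised and resource-bounded from the moment the instance boots, rather than launching it ad hoc. A sketch using &lt;code&gt;user_data&lt;/code&gt; to register the log forwarder as a systemd unit (the binary path, unit name and memory limit are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "service" {
  ami           = var.ami_id
  instance_type = "t3.micro"

  # Supervise the log forwarder: restart it on crash, cap its memory
  user_data = &lt;&lt;-EOF
    #!/bin/bash
    cat &gt; /etc/systemd/system/log-forwarder.service &lt;&lt;'UNIT'
    [Service]
    ExecStart=/usr/local/bin/log-forwarder
    Restart=on-failure
    MemoryMax=256M
    [Install]
    WantedBy=multi-user.target
    UNIT
    systemctl enable --now log-forwarder.service
  EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;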

&lt;p&gt;In the realm of Terraform coding for Cloud infrastructure, the “Processes” principle from the 12 Factor App offers a crucial beacon of guidance. With its focus on executing tasks as stateless, isolated processes, it serves as a guardrail against resource inefficiencies and potential system failures. Picture this: DevOps teams spinning up EC2 instances, each necessitating background processes for vital functions like log aggregation. Without adherence to this principle, these processes could turn into resource hogs, jeopardizing the performance of our microservices. But by embracing the 12 Factor approach, we ensure that our processes remain lightweight, scalable and resilient. So, as we code our Terraform configurations, let’s keep the essence of “Processes” alive, safeguarding our infrastructure’s stability and efficiency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Port Binding — Export services via port binding
&lt;/h3&gt;

&lt;p&gt;Port binding is like musical chairs for your applications — they need a seat at the table to make beautiful music together. But don’t let them fight over the best spot — keep your ports organized and your applications harmonizing like a well-tuned orchestra!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkcte9m1i2flnukqg96o.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkcte9m1i2flnukqg96o.jpeg" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 1: The “Port Collision” Conundrum&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine this scenario: you’re deploying a shiny new microservice into your Cloud environment, complete with all the bells and whistles. Everything seems to be going smoothly until you hit a snag — port collision. You see, in the wild world of Cloud infrastructure, ports are like prime real estate — everyone wants a piece of the action. But when two services try to stake their claim on the same port, chaos ensues. It’s like trying to fit two puzzle pieces into the same slot — it just doesn’t work.&lt;/p&gt;

&lt;p&gt;As a DevOps engineer, you find yourself playing referee in this port showdown, trying to untangle the mess without disrupting the flow of traffic. You’re juggling firewall rules, network configurations, and angry service owners, all while praying that your troubleshooting skills are sharper than a katana. It’s a high-stakes game of “Portopoly,” where one wrong move could send your entire infrastructure crashing down like a house of cards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2: The “Ephemeral Port Panic” Predicament&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Picture this: you’ve spent weeks fine-tuning your Cloud infrastructure, optimizing every last byte for peak performance. But just when you think you’ve got it all figured out, along comes the dreaded ephemeral port panic. Ephemeral ports are like the wild cards of the networking world — they come and go as they please, leaving chaos in their wake. One minute, your service is happily chugging along, and the next, it’s stuck in a port purgatory, unable to communicate with the outside world.&lt;/p&gt;

&lt;p&gt;As a Cloud and Infrastructure Architect, you find yourself on the front lines of this ephemeral port panic, desperately trying to keep your services afloat amidst a sea of random port assignments. You’re wrangling load balancers, tweaking security groups, and praying to the demo gods for mercy, all while cursing the ephemeral nature of it all. It’s a wild ride through the port jungle, where the only law is Murphy’s Law — if something can go wrong, it will.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 3: Managing Port Allocation in Load Balancing Environments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In a load-balanced environment, ensuring proper port binding is crucial for efficient traffic distribution and high availability. Without adhering to the 12 Factor App principle of Port Binding, you might face challenges in managing port allocations across different instances of your application behind the load balancer. For instance, if you’re using Terraform to provision EC2 instances in AWS and configure them behind an Elastic Load Balancer (ELB), specifying static ports in your Terraform code can lead to port conflicts or inefficient resource utilization. Consider a scenario where you have multiple EC2 instances serving the same application behind an ELB. Without dynamic port binding mechanisms, you might configure each instance to listen on a predefined port, such as port 80. However, as your application scales and additional instances are launched, conflicts may arise as each instance competes for the same port. By following the Port Binding principle and utilizing features like dynamic port mapping in Terraform, you can ensure that each instance binds to a unique port dynamically, allowing the load balancer to efficiently distribute traffic without encountering port conflicts.&lt;/p&gt;
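&lt;p&gt;One common way to realize this (a sketch, assuming ECS on EC2 in bridge networking behind an Application Load Balancer; names are illustrative) is dynamic host port mapping: setting &lt;strong&gt;hostPort&lt;/strong&gt; to 0 lets the container agent pick a free ephemeral port on each instance, so multiple tasks can share a host without colliding on port 80:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecs_task_definition" "app" {
  family       = "app"
  network_mode = "bridge"

  container_definitions = jsonencode([{
    name   = "app"
    image  = "nginx:stable"
    memory = 128
    # hostPort = 0 requests a dynamic ephemeral port per task;
    # the load balancer registers whichever port was assigned.
    portMappings = [{ containerPort = 80, hostPort = 0 }]
  }])
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;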

&lt;p&gt;In conclusion, adopting the Port Binding principle within your Terraform code is essential for seamless scalability and efficient resource utilization in Cloud environments. By dynamically assigning ports to instances behind load balancers, you mitigate the risk of port conflicts and ensure optimal traffic distribution. Terraform’s dynamic port mapping capabilities empower you to embrace this principle effectively, enabling smoother deployments and enhancing the overall reliability of your infrastructure. Embracing Port Binding not only aligns with 12 Factor App principles, but also fosters agility and resilience within your Cloud architecture. So, let’s bind wisely, scale effortlessly, and keep our applications running smoothly in the Cloud.&lt;/p&gt;

&lt;h3&gt;
  
  
  Concurrency — Scale out via the process model
&lt;/h3&gt;

&lt;p&gt;The juggling act of your infrastructure circus — it’s all about keeping multiple balls in the air without dropping a single one. But watch out for the clown who thinks they can juggle flaming torches — keep your concurrency safe and your audience entertained!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbirg6mfwxjg6ktlgg1pf.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbirg6mfwxjg6ktlgg1pf.jpeg" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 1: The “Racecar Deployment” Conundrum&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine you’re deploying updates to your Cloud infrastructure, sprinting towards the finish line like a Formula 1 driver. Everything seems to be going smoothly, until you hit a curveball: multiple engineers deploying changes simultaneously. Suddenly, it’s less like a race and more like rush hour traffic in downtown LA. Requests collide, resources clash, and chaos ensues. You find yourself stuck in a deadlock, waiting for one deployment to finish before another can even start. It’s a racecar deployment, where everyone’s trying to be the first across the finish line, but nobody’s getting anywhere fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2: The “Herding Cats” Debacle&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ever tried to coordinate with a group of DevOps engineers, each with their own agenda and timeline? It’s like herding cats on a caffeine bender. One engineer wants to deploy updates to the database, while another insists on rolling out changes to the networking infrastructure. Meanwhile, you’re stuck in the middle, trying to keep everyone moving in the same direction. But just when you think you’ve got everyone on the same page, someone decides to throw a spanner in the works by deploying their changes without warning. It’s a circus of chaos, where coordination is about as likely as finding a unicorn in your server room.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 3: Resource Collisions during Parallel Execution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine a scenario where a team of DevOps engineers is managing a large-scale infrastructure on AWS using Terraform. They have multiple modules defining various resources like EC2 instances, RDS databases, and security groups. Each engineer works on different modules simultaneously to speed up development and deployment. However, without careful coordination, this concurrent development can lead to resource collisions. For example, two engineers might inadvertently attempt to create resources with the same name or in the same subnet, causing conflicts and potentially breaking the infrastructure. To address this, engineers must implement strategies to ensure resource isolation and prevent collisions. One approach is to use Terraform workspaces or the state locking mechanisms provided by Terraform backends, such as the Amazon S3 backend (which locks state via a DynamoDB table) or HashiCorp Consul. By utilizing these features, engineers can ensure that only one process can modify the state at a time, preventing conflicts and maintaining the integrity of the infrastructure.&lt;/p&gt;
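&lt;p&gt;As a concrete sketch (the bucket and table names are illustrative), an S3 backend paired with a DynamoDB table gives Terraform state locking, so two engineers cannot apply against the same state at once:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket  = "example-terraform-state"   # illustrative name
    key     = "prod/network/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true

    # DynamoDB table (with a LockID hash key) used for state locking
    dynamodb_table = "terraform-locks"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Workspaces then give each engineer or environment an isolated state within the same configuration, e.g. &lt;strong&gt;terraform workspace new staging&lt;/strong&gt;.&lt;/p&gt;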

&lt;p&gt;In conclusion, while the allure of concurrent development in Cloud infrastructure management is undeniable for accelerating deployment, it introduces a critical challenge: concurrency. Without proper handling, simultaneous work on Terraform modules can lead to resource collisions, jeopardizing the stability of the infrastructure. Fortunately, by embracing the 12 Factor App principle of Concurrency, DevOps teams can implement strategies like Terraform workspaces and state locking mechanisms to ensure resource isolation and prevent conflicts. These measures not only maintain the integrity of the infrastructure but also foster smoother collaboration among engineers. So, remember, when it comes to managing Cloud infrastructure with Terraform, prioritizing concurrency management isn’t just a best practice — it’s a necessity for sustained efficiency and reliability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More Reads&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-1-57oh-temp-slug-6392089"&gt;Part 1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-2-430o-temp-slug-4993577"&gt;Part 2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Part 4&lt;/p&gt;




</description>
      <category>engineering</category>
      <category>terraform</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>12 Cloud Commandments: Applying 12 Factor App Principles to Master Terraform — Part 2</title>
      <dc:creator>Manish R Warang</dc:creator>
      <pubDate>Tue, 28 May 2024 07:20:31 +0000</pubDate>
      <link>https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-2-3khc</link>
      <guid>https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-2-3khc</guid>
      <description>&lt;h3&gt;
  
  
  12 Cloud Commandments: Applying 12 Factor App Principles to Master Terraform — Part 2
&lt;/h3&gt;

&lt;p&gt;In &lt;a href="https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-1-57oh-temp-slug-6392089"&gt;Part 1&lt;/a&gt;, we introduced the integration of the 12 Factor App principles with Terraform to optimize Cloud infrastructure management. We began with an overview of these principles, highlighting their relevance in building scalable and maintainable applications.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Codebase&lt;/strong&gt; principle emphasized maintaining a single codebase with multiple deploys, ensuring a clean and organized environment. We also discussed the &lt;strong&gt;Dependencies&lt;/strong&gt; principle, focusing on the need to explicitly declare and isolate dependencies to prevent hidden or unmanaged issues.&lt;/p&gt;

&lt;p&gt;These foundational concepts set the stage for effective Infrastructure as Code practices using Terraform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Config — Store configuration in the environment
&lt;/h3&gt;

&lt;p&gt;Configurations are the seasoning of your infrastructure — they add flavor and personality, but too much can leave a bad taste in your mouth. Keep it simple, sprinkle just enough to enhance the flavor, and avoid drowning your dish in a sea of spices. Your infrastructure will thank you for it!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl73fmh4wwxc9t7auhyos.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl73fmh4wwxc9t7auhyos.jpeg" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Scenario 1: The “Configuration Chaos Carnival”
&lt;/h4&gt;

&lt;p&gt;Ever felt like you’re juggling a dozen flaming torches while riding a unicycle on a tightrope? Welcome to the Configuration Chaos Carnival! You have configurations spread across multiple files, environments, and even secret management systems. Keeping track of which setting applies where is like herding cats during a fireworks display — chaotic, unpredictable and downright dangerous. One wrong move could set off a chain reaction of misconfigurations, turning your infrastructure into a virtual circus of errors. It’s enough to make even the most seasoned Cloud or DevOps engineer question their sanity.&lt;/p&gt;
&lt;h4&gt;
  
  
  Scenario 2: The “Secrets Slip-Up”
&lt;/h4&gt;

&lt;p&gt;Picture this: you’re deploying a critical application to production, and everything is going smoothly. But as you reach the final stages, you realize you forgot to properly manage your secrets. Your database passwords are hardcoded in plain text, API keys are floating around in Slack channels, and SSH keys are stored in a folder cleverly named “Not_Secrets.” It’s a security nightmare waiting to happen, like leaving the keys to your Ferrari in the ignition with a sign saying “Free Joyrides.”&lt;/p&gt;
&lt;h4&gt;
  
  
  Scenario 3: Environment-specific Configuration
&lt;/h4&gt;

&lt;p&gt;Another challenge arises when managing environment-specific configuration in Terraform. In a typical application deployment pipeline, you may have multiple environments such as development, staging and production. Each environment requires different configuration settings, such as database endpoints, API URLs or feature flags. Hardcoding these settings directly into Terraform code leads to maintenance overhead and potential errors when promoting code across environments.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "example" {
  ami = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  subnet_id = "subnet-029b2e75"
  security_groups = ["${var.security_group_name}"]
  tags = {
  Name = "ExampleInstance"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the ami and subnet_id values are hardcoded. While this may work for a specific environment, it becomes cumbersome to manage when deploying to different environments with unique network configurations.&lt;/p&gt;
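&lt;p&gt;A sketch of the variable-based alternative (variable names are illustrative): move the environment-specific values into input variables and supply them per environment through &lt;strong&gt;.tfvars&lt;/strong&gt; files:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "subnet_id" {
  type        = string
  description = "Subnet for this environment"
}

variable "instance_type" {
  type    = string
  default = "t2.micro"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = var.instance_type
  subnet_id     = var.subnet_id
  tags = {
    Name = "ExampleInstance"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running &lt;strong&gt;terraform apply -var-file="prod.tfvars"&lt;/strong&gt; then selects the right configuration for each environment without touching the code.&lt;/p&gt;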

&lt;p&gt;In conclusion, adhering to the “Config” principle of the 12 Factor App is paramount, especially in the context of Terraform code. By separating configuration from code, we mitigate the headaches of managing environment-specific settings. Leveraging tools like Terraform’s input variables, or even external configuration management systems, enables seamless configuration across development, staging and production environments. This approach not only streamlines maintenance, but also enhances the reliability and scalability of our infrastructure provisioning process. Embracing the “Config” principle empowers DevOps teams to efficiently manage dynamic infrastructure requirements while fostering a culture of agility and resilience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Backing Services — Treat backing services as attached resources
&lt;/h3&gt;

&lt;p&gt;They are like your trusty sous chefs — they handle the grunt work so you can focus on the main course. Whether it’s a database slicing and dicing your data or a CDN spreading your content far and wide, choose your sidekicks wisely. After all, no one wants a sous chef who burns the soufflé!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Folcsmisi6ezotv4fy47n.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Folcsmisi6ezotv4fy47n.jpeg" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Scenario 1: The Vanishing Database Conundrum
&lt;/h4&gt;

&lt;p&gt;Imagine this scenario: you’re in the final stages of deploying a new application to the Cloud. Everything seems to be going smoothly until it’s time to connect to the database. You enter the credentials provided by your colleague, only to be met with a dreaded error message: “Connection refused.” After several frantic Slack messages and a few cups of coffee, you discover that the database instance has mysteriously vanished into the digital ether. Turns out, your colleague forgot to renew the subscription, and now you’re left scrambling to spin up a new instance and migrate the data before the stakeholders start breathing down your neck. It’s like trying to catch smoke with a butterfly net — frustrating, futile, and just a tad ridiculous.&lt;/p&gt;

&lt;h4&gt;
  
  
  Scenario 2: The Shadowy Service Blackout
&lt;/h4&gt;

&lt;p&gt;Picture this: you’re knee-deep in troubleshooting a production issue that’s causing your application to crash more frequently than a Windows 95 computer. After hours of digging through logs and scratching your head, you finally uncover the culprit — a third-party service that your application relies on for critical functionality. But here’s the kicker: the service provider has suddenly gone dark, with no updates on their status page, and no response to your frantic support tickets. Now you’re stuck in a digital purgatory, waiting for the service to resurface like a submarine in distress. Meanwhile, your users are flooding your inbox with complaints, and your boss is giving you the stink eye from across the office. It’s a lesson in the importance of vetting and monitoring backing services, lest you find yourself adrift in a sea of uncertainty.&lt;/p&gt;

&lt;h4&gt;
  
  
  Scenario 3: Managing Database Credentials in Terraform Code
&lt;/h4&gt;

&lt;p&gt;DevOps engineers face challenges in securely managing database credentials in Cloud infrastructure provisioning. To follow the 12 Factor App’s “Backing Service” principle, backing services like databases should be treated as attached resources. However, hardcoding sensitive information into Terraform configurations violates security best practices and makes the infrastructure less portable and maintainable. To address this, Terraform can be used to retrieve database credentials from secure secret management services like AWS Secrets Manager or HashiCorp Vault, separating infrastructure provisioning concerns from sensitive data management, adhering to the principle of treating backing services as attached resources.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "external" "database_credentials" {
  program = ["bash", "${path.module}/scripts/get_database_credentials.sh"]
}

resource "aws_db_instance" "example" {
  # Other configuration options...

  username = data.external.database_credentials.result.username
  password = data.external.database_credentials.result.password

  # Database instance configuration...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this example, the &lt;strong&gt;get_database_credentials.sh&lt;/strong&gt; script retrieves database credentials from a secure source and outputs them in a format that Terraform can consume. By separating credential management from infrastructure code, you adhere to the principles of security, scalability, and portability advocated by the 12 Factor App.&lt;/p&gt;
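&lt;p&gt;For teams on AWS, a native alternative to the external-script approach is the &lt;strong&gt;aws_secretsmanager_secret_version&lt;/strong&gt; data source (the secret name below is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_secretsmanager_secret_version" "db" {
  secret_id = "prod/app/db-credentials"   # illustrative secret name
}

locals {
  # The secret is stored as JSON with username/password keys
  db_creds = jsondecode(data.aws_secretsmanager_secret_version.db.secret_string)
}

resource "aws_db_instance" "example" {
  # Other configuration options...
  username = local.db_creds.username
  password = local.db_creds.password
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;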

&lt;p&gt;In wrapping up, the “Backing Services” principle from the 12 Factor App provides a crucial framework for securely managing infrastructure resources such as databases in Cloud infrastructure provisioning. DevOps engineers often grapple with the challenge of safeguarding sensitive credentials, while ensuring infrastructure portability and maintainability. Terraform emerges as a powerful ally in this regard, enabling the separation of concerns by retrieving credentials from secure secret management services like AWS Secrets Manager or HashiCorp Vault. By adhering to this principle, we not only enhance security, but also streamline the management of backing services, fostering a robust and agile Cloud infrastructure ecosystem. Let’s embrace Terraform’s potential to uphold these principles and propel our Cloud and DevOps endeavors to new heights.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build, Release, Run — Strictly separate build and run stages
&lt;/h3&gt;

&lt;p&gt;Ah, the three musketeers of deployment — build, release and run. Like a well-choreographed dance routine, they work in harmony to bring your creation to life. But beware the rogue dancer who steps on everyone’s toes — keep your deployments smooth and your audiences applauding!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcss8vxvt80et80ej18f.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcss8vxvt80et80ej18f.jpeg" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Scenario 1: The “Release Roulette”
&lt;/h4&gt;

&lt;p&gt;Imagine this scenario: you’re preparing to deploy a new feature to production. You’ve meticulously built and tested the code on your local machine, but as soon as it hits the production environment, chaos ensues. The application crashes, users start complaining, and you’re left scrambling to figure out what went wrong. Turns out, the code that worked perfectly in your development environment doesn’t play nice with the dependencies and configurations in the production environment. It’s like trying to fit a square peg into a round hole, except the peg is your code and the hole is production. This release roulette not only disrupts the user experience, but also leaves you questioning your life choices as a DevOps engineer.&lt;/p&gt;

&lt;h4&gt;
  
  
  Scenario 2: The “Version Vortex”
&lt;/h4&gt;

&lt;p&gt;Picture this: you’re knee-deep in managing multiple versions of your application across different environments. You have one version running in development, another in staging, and a different one in production. Keeping track of which version is where feels like playing a game of “Whac-A-Mole” with your sanity. You make a change to fix a bug in the staging environment, only to realize it’s the wrong version, and now you’ve introduced a new bug into production. It’s a never-ending version vortex that sucks you in deeper with each deployment, leaving you feeling more lost than a tourist without a map. Trying to maintain consistency and control across environments becomes a constant battle, with version numbers swirling around you like a tornado of confusion.&lt;/p&gt;

&lt;h4&gt;
  
  
  Scenario 3: Configuration Drift in Production Environment
&lt;/h4&gt;

&lt;p&gt;Cloud and DevOps engineers face the challenge of maintaining consistency between different environments, particularly in dynamic Cloud infrastructure. The “Build, Release, Run” principle is crucial in ensuring environment parity by separating the build phase, where infrastructure configurations are defined in Terraform code, from the release and run phases, where these configurations are applied to different environments. This helps mitigate the risk of configuration drift.  &lt;/p&gt;

&lt;p&gt;A continuous delivery approach with Terraform pipelines enables automated deployments from a centralized source of truth, such as a version-controlled repository. Changes to the infrastructure are tested in lower environments before being promoted to production, reducing the likelihood of unexpected configuration drift. Tools like Terraform Enterprise offer features like state locking and version control, providing visibility and control over changes made to the production environment, thus maintaining consistency and reliability.&lt;/p&gt;
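&lt;p&gt;In CLI terms, the separation can be sketched as producing an immutable plan artifact at build time and applying exactly that artifact at run time (pipeline specifics will vary):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build: plan against the versioned code, producing an artifact
terraform init -input=false
terraform plan -input=false -out=release.tfplan

# Release: archive release.tfplan with the commit it was built from,
# and promote that same artifact through staging to production

# Run: apply the reviewed artifact -- no re-planning at deploy time
terraform apply -input=false release.tfplan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;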

&lt;p&gt;In conclusion, adopting the “Build, Release, Run” principle within Terraform coding not only addresses the challenge of maintaining consistency across various Cloud environments, but also fortifies the reliability of your infrastructure. By segregating the build phase from release and run phases, you effectively combat configuration drift, ensuring that what you build is exactly what you release and run. Embracing continuous delivery methodologies through Terraform pipelines automates deployments, fostering a culture of agility and reliability. With tools like Terraform Enterprise providing robust features such as state locking and version control, you gain unparalleled visibility and control over your infrastructure changes. In essence, integrating 12 Factor App principles into Terraform workflows elevates not just efficiency, but the integrity of your Cloud infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More Reads&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-1-57oh-temp-slug-6392089"&gt;Part 1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Part 3&lt;/p&gt;

&lt;p&gt;Part 4&lt;/p&gt;




</description>
      <category>engineering</category>
      <category>devops</category>
      <category>terraform</category>
      <category>cloud</category>
    </item>
    <item>
      <title>12 Cloud Commandments: Applying 12 Factor App Principles to Master Terraform — Part 1</title>
      <dc:creator>Manish R Warang</dc:creator>
      <pubDate>Tue, 21 May 2024 06:32:51 +0000</pubDate>
      <link>https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-1-149c</link>
      <guid>https://dev.to/g33kzone/12-cloud-commandments-applying-12-factor-app-principles-to-master-terraform-part-1-149c</guid>
      <description>&lt;h3&gt;
  
  
  12 Cloud Commandments: Applying 12 Factor App Principles to Master Terraform — Part 1
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fku3s1ey7n2rv9znk720n.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fku3s1ey7n2rv9znk720n.jpeg" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Remember the days when deploying a new app meant wrestling with servers, deciphering cryptic configuration files, and hoping you didn’t trigger a rogue hamster on the wheel powering your infrastructure? Those days, thankfully, are fading faster than a Snapchat filter. Modern Cloud architecture and DevOps practices have evolved into a sleek, efficient dance between code and infrastructure. We’re talking automation, standardization, and a whole lot less server-induced sweat.&lt;/p&gt;

&lt;p&gt;But amidst this technological tango, one set of principles still shines brightly: the 12 Factor App. These timeless guidelines, born in the early days of the Cloud, remain as relevant as ever. They’re like the secret sauce that keeps your applications scalable, reliable and deployable with the click of a button.&lt;/p&gt;

&lt;p&gt;So, buckle up, because, in this blog post, we’re about to do something pretty cool: &lt;strong&gt;we’re going to marry the power of Terraform, an open-source Infrastructure as Code (IaC) tool, with the wisdom of the 12 Factor App principles.&lt;/strong&gt; Think of it as a match made in DevOps heaven. We’ll show you how each principle can be applied within the context of Terraform, creating an infrastructure that’s as nimble and adaptable as your code itself.&lt;/p&gt;

&lt;h3&gt;
  
  
  12 Factor App Principles
&lt;/h3&gt;

&lt;p&gt;The twelve-factor app is a methodology for building software-as-a-service apps that are designed to be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scalable&lt;/li&gt;
&lt;li&gt;Maintainable&lt;/li&gt;
&lt;li&gt;Portable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The principles guide developers in designing Cloud-native applications, focusing on codebases, dependencies, configuration and processes, ensuring productivity, dynamic scaling and resilience, and promoting a consistent approach to modern Cloud platforms.&lt;/p&gt;

&lt;h4&gt;
  
  
  Some key aspects of the twelve-factor methodology include:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Strict separation of config from code&lt;/li&gt;
&lt;li&gt;Ability to scale out via the process model&lt;/li&gt;
&lt;li&gt;Loose coupling between app components&lt;/li&gt;
&lt;li&gt;Designing stateless processes&lt;/li&gt;
&lt;li&gt;Ease of deploying/rolling back versions&lt;/li&gt;
&lt;li&gt;Minimizing divergence between development, staging, production&lt;/li&gt;
&lt;li&gt;Easy portability between execution environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will now examine the amalgamation of two potent techniques: Terraform, an open-source Infrastructure as Code (IaC) tool, and the “12 Factor App” principles. This union provides a significant synergy that enables us to create robust, repeatable infrastructure at scale.&lt;/p&gt;

&lt;p&gt;So, let’s get started, shall we?&lt;/p&gt;

&lt;h3&gt;
  
  
  Codebase — One Codebase Tracked in Revision Control, Many Deploys
&lt;/h3&gt;

&lt;p&gt;Think of your codebase as the ultimate recipe for your Cloud infrastructure. Just like a well-loved cookbook, it should be organized, easy to follow, and free of spaghetti code. When everyone’s on the same page and singing from the same hymn sheet, you’ll avoid the chaos of mismatched ingredients and cooking disasters. Keep it tidy, folks — no one likes a messy kitchen!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5fdka6j9qwqetkaz8g7.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5fdka6j9qwqetkaz8g7.jpeg" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario 1: The “Spaghetti Stack” Dilemma
&lt;/h3&gt;

&lt;p&gt;Picture this: you inherit a project from a colleague who left the company to pursue their dream. As you dive into the codebase, you realize it’s more tangled than a plate of spaghetti at an Italian wedding. Modules, functions, and configurations are all mixed up, making it impossible to decipher what does what. This spaghetti stack not only confuses you, but also increases the risk of introducing bugs or making unintended changes. It’s like trying to find a needle in a haystack, except that the haystack is made of needles.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario 2: The “Version Control Vortex”
&lt;/h3&gt;

&lt;p&gt;Ever experienced the frustration of trying to track changes in a project where everyone seems to have their own version of reality? It’s like playing a game of “Chinese Whispers” but with code. One engineer adds a new feature in their local environment, another tweaks a configuration file on the server directly, and chaos ensues. Before you know it, you’re knee-deep in merge conflicts, lost commits and version control mayhem. It’s a nightmare for collaboration, and leads to more headaches than a hangover on a Monday morning.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario 3: Terraform Configuration Monolith
&lt;/h3&gt;

&lt;p&gt;A new DevOps team managing a sprawling AWS infrastructure encounters a monolithic configuration file in the Terraform codebase. This file contains every aspect of the infrastructure, from networking to compute instances, making it intimidating to navigate and risky to change. A simple typo could take down critical services, and understanding the impact of a change becomes a difficult task. For example, updating security group rules for a specific service can feel like a game of “Where’s Waldo”, increasing the likelihood of human error and downtime.&lt;/p&gt;

&lt;p&gt;The 12-Factor App principle of “Codebase” comes to the rescue. Treat your Terraform code like the precious Infrastructure as Code (IaC) it is. Store it in a central repository like Git, the knight in shining armor of version control. This allows for easy collaboration, disaster recovery, and a sigh of relief knowing you can revert to a working version, if needed.&lt;/p&gt;

&lt;p&gt;In conclusion, embracing the “Codebase” principle of the 12 Factor App not only promotes clarity and manageability, but also mitigates risks inherent in sprawling infrastructure configurations. By modularizing Terraform code into discrete components, teams can effectively tackle complexity, reducing the likelihood of human error and downtime. A monolithic configuration file transforms into a navigable landscape, where updates are precise and impact assessments are straightforward. With this approach, managing AWS infrastructure becomes less of a daunting task and more of a streamlined operation, empowering DevOps teams to wield Terraform with confidence and efficiency.&lt;/p&gt;
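&lt;p&gt;As a minimal sketch of that modular layout (the module names and variables here are hypothetical, not from any real project), the monolith can be split into per-concern child modules wired together from a small root configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# main.tf - root module composing per-concern child modules
module "network" {
  source   = "./modules/network"
  vpc_cidr = "10.0.0.0/16"
}

module "compute" {
  source    = "./modules/compute"
  subnet_id = module.network.private_subnet_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Each child module owns a single concern, so a security group tweak touches one small file instead of the entire configuration.&lt;/p&gt;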

&lt;h4&gt;
  
  
  Dependencies — Explicitly declare and isolate dependencies
&lt;/h4&gt;

&lt;p&gt;They are like the ingredients in your favorite dish — you need them to make the magic happen, but too many can spoil the broth. Keep a close eye on what you’re adding to the pot, and make sure each ingredient pulls its weight. After all, no one wants a soufflé that collapses under the weight of too many eggs!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb2x6avd9iihdwmdzsr4.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb2x6avd9iihdwmdzsr4.jpeg" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 1: The “Dependency Domino Effect”&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
You’re happily coding away, when suddenly a wild dependency appears! You install it without a second thought, only to realize it brings along its own entourage of dependencies. Before you know it, your project has more dependencies than a family reunion. Each update becomes a game of Russian roulette, as you pray that one tiny change doesn’t send the entire stack tumbling down like a house of cards. It’s dependency management, but with all the stress of a high-stakes poker game.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2: The “Versioning Volcano”&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Picture this: you’re tasked with updating a library in your project to fix a critical security vulnerability. Easy, right? Wrong. Turns out, that library is a critical piece of the puzzle, used in multiple modules across your application. You update it in one place, only to realize that it breaks compatibility with another module that’s still using an older version. Now you’re stuck in a versioning volcano, trying to balance stability with security while dodging eruptions left and right. It’s a precarious game of “Jenga,” where one wrong move could bring the whole tower crashing down.&lt;/p&gt;
&lt;h3&gt;
  
  
  Scenario 3: Uncontrolled Module Dependencies
&lt;/h3&gt;

&lt;p&gt;In the world of Cloud and DevOps, managing dependencies in Terraform modules is crucial for maintaining a scalable and efficient Infrastructure as Code (IaC) setup. Consider a scenario where you have a main Terraform module that provisions AWS resources such as EC2 instances and RDS databases, and it relies on a separate module for managing security groups. Now, imagine that the security group module is updated frequently with new features or fixes. Without careful versioning and dependency management, these updates could unintentionally break the main module, leading to deployment failures or security vulnerabilities.&lt;/p&gt;

&lt;p&gt;To illustrate, let’s say you have a Terraform configuration like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "main" {
source = "./main_module"
// other configurations
security_group_id = module.security_group.id
}
module "security_group" {
source = "./security_group_module"
// other configurations
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the main module depends on the security group module. However, if the security group module undergoes significant changes without proper versioning or testing, it could introduce unexpected behavior or errors in the main module, disrupting the infrastructure deployment process.&lt;/p&gt;
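&lt;p&gt;One mitigation, sketched below under the assumption that the module lives in a Git repository or a module registry (the URL, organization, and version numbers are illustrative), is to pin the dependency to an immutable reference so updates are adopted deliberately after testing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Pin a Git-hosted module to a specific tag
module "security_group" {
  source = "git::https://example.com/security_group_module.git?ref=v1.2.0"
}

# Or pin a registry module with a version constraint
module "security_group_registry" {
  source  = "app.terraform.io/example-org/security-group/aws"
  version = "~&amp;gt; 1.2"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;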

&lt;p&gt;In the ever-evolving landscape of Cloud and DevOps, mastering dependency management in Terraform is non-negotiable. Picture this: your main Terraform module, orchestrating vital AWS resources, is only as robust as its supporting modules, such as those handling security groups. Yet without diligent versioning and dependency control, each update to these ancillary modules becomes a high-stakes gamble, risking deployment hiccups or, worse, security breaches. Embracing the 12 Factor App principle of Dependencies isn’t just about ticking boxes; it’s about safeguarding your infrastructure’s integrity and agility. So, let’s commit to meticulous dependency management, ensuring our Terraform setups scale reliably in the face of constant change.&lt;/p&gt;

</description>
      <category>engineering</category>
      <category>terraform</category>
      <category>cloud</category>
      <category>technology</category>
    </item>
    <item>
      <title>Build AWS AMI with HashiCorp Packer using GitHub Actions</title>
      <dc:creator>Manish R Warang</dc:creator>
      <pubDate>Tue, 10 Aug 2021 07:02:53 +0000</pubDate>
      <link>https://dev.to/g33kzone/build-aws-ami-with-hashicorp-packer-using-github-actions-5f86</link>
      <guid>https://dev.to/g33kzone/build-aws-ami-with-hashicorp-packer-using-github-actions-5f86</guid>
      <description>&lt;p&gt;Security best practices recommend using the latest base images (e.g. AWS AMIs) for spinning up VMs in the cloud. Up-to-date software patches reduce the risk of a security breach. This emphasizes a strong need to set up an Image Factory to automate the image creation process with the latest software versions.&lt;/p&gt;

&lt;p&gt;Also, every organization has a standard list of software to be installed on a given image. Only essential software should be part of the list, to reduce the attack surface from a security standpoint. This software can be installed either after the VM is created or while the VM is being created. However, both of these techniques are time-consuming and hence not recommended.&lt;/p&gt;

&lt;p&gt;It is preferable to bake this software into the base image itself. &lt;a href="https://www.packer.io/"&gt;HashiCorp Packer&lt;/a&gt; is an open-source tool that specializes in building automated machine images for multiple platforms from a single source configuration.&lt;/p&gt;

&lt;h4&gt;
  
  
  HashiCorp Packer Installation
&lt;/h4&gt;

&lt;p&gt;Refer to HashiCorp &lt;a href="https://learn.hashicorp.com/tutorials/packer/get-started-install-cli?in=packer/aws-get-started"&gt;documentation&lt;/a&gt; for Packer installation based on your hardware OS.&lt;/p&gt;

&lt;h4&gt;
  
  
  GitHub Actions
&lt;/h4&gt;

&lt;p&gt;For this demo, we will use GitHub Actions to create a CI/CD pipeline that automates this workflow and eventually pushes the baked image (AMI) to AWS. GitHub Actions is a platform for automating tasks within the software development lifecycle. It is an &lt;code&gt;event-driven&lt;/code&gt; framework, which means we can run a series of commands in response to a given event, or schedule them for one-off or repetitive tasks (e.g. executing a test suite on pull request creation, adding labels to issues, lint checks, etc.).&lt;/p&gt;

&lt;p&gt;Actions are defined in YAML files, which allows workflows to be triggered by any GitHub event, such as pull request creation, code commits, and much more.&lt;/p&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AWS User with Programmatic access

&lt;ul&gt;
&lt;li&gt;AWS Access Key ID&lt;/li&gt;
&lt;li&gt;AWS Secret Access Key&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;AWS IAM Privileges to create EC2 Instance (create, modify and delete EC2 instances). Refer &lt;a href="https://www.packer.io/docs/builders/amazon#iam-task-or-instance-role"&gt;documentation&lt;/a&gt; for the full list of IAM permissions required to run the amazon-ebs builder.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this post, we will bake OpenJDK (Java 8) into our Ubuntu base image and push it to AWS. Packer configurations can be written in HCL (&lt;code&gt;*.pkr.hcl&lt;/code&gt;) and JSON (&lt;code&gt;*.pkr.json&lt;/code&gt;) formats. We will use the HCL format for this demo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mK1CBMaI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1628578949157/M78aWQCCb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mK1CBMaI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1628578949157/M78aWQCCb.png" alt="Packer Flow.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Reference GitHub repository - &lt;a href="https://github.com/g33kzone/pkr-aws-ubuntu-java"&gt;pkr-aws-ubuntu-java&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Code Time
&lt;/h4&gt;

&lt;p&gt;Let us start writing the Packer configuration. (I am using a &lt;code&gt;Linux&lt;/code&gt; machine for this demo)&lt;/p&gt;

&lt;h4&gt;
  
  
  Packer Configuration
&lt;/h4&gt;

&lt;p&gt;Create Project folder &lt;code&gt;pkr-aws-ubuntu-java&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;pkr-aws-ubuntu-java &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="nv"&gt;$_&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a file named &lt;code&gt;aws-demo.pkr.hcl&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;touch &lt;/span&gt;aws-demo.pkr.hcl

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open your favorite IDE (e.g. VSCode). Copy the below code in &lt;code&gt;aws-demo.pkr.hcl&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;packer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_plugins&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;amazon&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;gt;= 0.0.2"&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"github.com/hashicorp/amazon"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;packer {}&lt;/code&gt; block contains Packer settings, including a required Packer version. The &lt;code&gt;required_plugins&lt;/code&gt; block inside the &lt;code&gt;packer&lt;/code&gt; block specifies the plugins required by the template to build your image. Each plugin block contains a &lt;code&gt;version&lt;/code&gt; and a &lt;code&gt;source&lt;/code&gt; attribute.&lt;/p&gt;
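&lt;p&gt;The &lt;code&gt;packer&lt;/code&gt; block can also pin the Packer CLI version itself via the &lt;code&gt;required_version&lt;/code&gt; attribute; the constraint below is only an example, not part of this demo's configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;packer {
  required_version = "&amp;gt;= 1.7.0"
  required_plugins {
    amazon = {
      version = "&amp;gt;= 0.0.2"
      source  = "github.com/hashicorp/amazon"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;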

&lt;h4&gt;
  
  
  Source block
&lt;/h4&gt;

&lt;p&gt;The source block configures a specific &lt;code&gt;builder&lt;/code&gt; plugin, which is then invoked by the &lt;code&gt;build&lt;/code&gt; block. Source blocks use builders and communicators to define virtualization type, image launch type, etc.&lt;/p&gt;

&lt;p&gt;Copy the following code to &lt;code&gt;aws-demo.pkr.hcl&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"ami_prefix"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"packer-aws-ubuntu-java"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;locals&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;timestamp&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;regex_replace&lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;timestamp&lt;/span&gt;&lt;span class="err"&gt;(),&lt;/span&gt; &lt;span class="s2"&gt;"[- TZ:]"&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="s2"&gt;"amazon-ebs"&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu_java"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${var.ami_prefix}-${local.timestamp}"&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t2.micro"&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
  &lt;span class="nx"&gt;source_ami_filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;filters&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*"&lt;/span&gt;
      &lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;device&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ebs"&lt;/span&gt;
      &lt;span class="nx"&gt;virtualization&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hvm"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;most_recent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;owners&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"099720109477"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;ssh_username&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The variable &lt;code&gt;ami_prefix&lt;/code&gt; defines the prefix of the AMI name. The local value &lt;code&gt;timestamp&lt;/code&gt; ensures the AMI name is unique.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;amazon-ebs&lt;/code&gt; builder launches the source AMI, runs provisioners within the resulting instance, and then repackages it into an EBS-backed AMI. This configuration launches a &lt;code&gt;t2.micro&lt;/code&gt; instance in the us-east-1 region using an &lt;code&gt;ubuntu:xenial&lt;/code&gt; AMI as the base image.&lt;/p&gt;

&lt;p&gt;It creates an AMI named &lt;code&gt;packer-aws-ubuntu-java&lt;/code&gt; with the timestamp appended. AMI names must be unique; otherwise the build will fail with an error.&lt;/p&gt;

&lt;p&gt;It also uses the SSH communicator, specified by the &lt;code&gt;ssh_username&lt;/code&gt; attribute. Packer is then able to SSH into the EC2 instance using a temporary key pair and security group to provision the instance.&lt;/p&gt;

&lt;h4&gt;
  
  
  Build Block
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;build&lt;/code&gt; block defines what Packer should do with the EC2 instance after it launches.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;build&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"packer-ubuntu"&lt;/span&gt;
  &lt;span class="nx"&gt;sources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"source.amazon-ebs.ubuntu_java"&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"shell"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="nx"&gt;inline&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="s2"&gt;"echo Install Open JDK 8 - START"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"sleep 10"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"sudo apt-get update"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"sudo apt-get install -y openjdk-8-jdk"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"echo Install Open JDK 8 - SUCCESS"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;provisioner&lt;/code&gt; block helps automate modifications to your base image. It supports shell scripts, file uploads, and integrations with modern configuration management tools such as Ansible, Chef, etc.&lt;/p&gt;

&lt;p&gt;The above provisioner defines a shell provisioner and installs Open JDK 8 in the base image.&lt;/p&gt;

&lt;p&gt;The final file &lt;code&gt;aws-demo.pkr.hcl&lt;/code&gt; should look as below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;packer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_plugins&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;amazon&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;gt;= 0.0.2"&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"github.com/hashicorp/amazon"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"ami_prefix"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"packer-aws-ubuntu-java"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;locals&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;timestamp&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;regex_replace&lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;timestamp&lt;/span&gt;&lt;span class="err"&gt;(),&lt;/span&gt; &lt;span class="s2"&gt;"[- TZ:]"&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="s2"&gt;"amazon-ebs"&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu_java"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${var.ami_prefix}-${local.timestamp}"&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t2.micro"&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
  &lt;span class="nx"&gt;source_ami_filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;filters&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*"&lt;/span&gt;
      &lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;device&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ebs"&lt;/span&gt;
      &lt;span class="nx"&gt;virtualization&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hvm"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;most_recent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;owners&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"099720109477"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;ssh_username&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;build&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"packer-ubuntu"&lt;/span&gt;
  &lt;span class="nx"&gt;sources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"source.amazon-ebs.ubuntu_java"&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"shell"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="nx"&gt;inline&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="s2"&gt;"echo Install Open JDK 8 - START"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"sleep 10"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"sudo apt-get update"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"sudo apt-get install -y openjdk-8-jdk"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"echo Install Open JDK 8 - SUCCESS"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  GitHub Actions
&lt;/h4&gt;

&lt;p&gt;Create a new file in the &lt;code&gt;.github/workflows&lt;/code&gt; directory named &lt;code&gt;github-actions-packer.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XrYY5P5y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1628519387479/29r1YArQr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XrYY5P5y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1628519387479/29r1YArQr.png" alt="Folder Structure.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will schedule this workflow to run in the wee hours, say 04:00 UTC (GitHub Actions cron schedules run in UTC).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;name&lt;/code&gt; - The name of your workflow. GitHub displays the names of your workflows on your repository's actions page - "AWS AMI using Packer Config"&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS AMI using Packer Config&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;on&lt;/code&gt; - (Required) The name of the GitHub event that triggers the workflow. We have configured the workflow to trigger on a schedule.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# * is a special character in YAML so you have to quote this string&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cron&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;4&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;jobs&lt;/code&gt; - A workflow run is made up of one or more jobs. These jobs can run in parallel or sequentially. Each job executes in a runner environment specified by runs-on.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;job name&lt;/code&gt; - The name of the job displayed on GitHub.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;runs-on&lt;/code&gt; - (Required) Determines the type of machine to run the job on. The machine can be either a GitHub-hosted runner or a self-hosted runner. Available GitHub-hosted runner types are: windows-latest / windows-2019 / windows-2016 / ubuntu-latest / ubuntu-20.04 etc.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;packer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;packer&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;steps&lt;/code&gt; - A sequence of tasks within a job. Steps can execute commands, set up tasks, or run an action from your repository, a public repository, or an action published in a Docker registry.&lt;/p&gt;

&lt;p&gt;The first step is to check out the source code in the runner environment.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Checkout v2&lt;/code&gt; - This action checks out your repository under &lt;code&gt;$GITHUB_WORKSPACE&lt;/code&gt;, so your workflow can access it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout Repository&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To ensure access to the AWS Cloud environment, we need to configure &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; and &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt; in the runner environment. The values for these variables will be configured as GitHub Secrets in a later section.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Configure AWS Credentials&lt;/code&gt; - This action configures AWS credential and region environment variables for use in other GitHub Actions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS Credentials&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;aws-access-key-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/span&gt;
          &lt;span class="na"&gt;aws-secret-access-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/span&gt;
          &lt;span class="c1"&gt;# aws-session-token: ${{ secrets.AWS_SESSION_TOKEN }} &lt;/span&gt;
          &lt;span class="c1"&gt;# if you have/need it&lt;/span&gt;
          &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Init&lt;/code&gt; initializes the Packer configuration used in the GitHub action workflow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="c1"&gt;# Initialize Packer templates&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Initialize Packer Template&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/packer-github-actions@master&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;init&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Validate&lt;/code&gt; checks whether the template is syntactically valid; it will fail with an error otherwise.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="c1"&gt;# validate templates&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Validate Template&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/packer-github-actions@master&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;validate&lt;/span&gt;
          &lt;span class="na"&gt;arguments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-syntax-only&lt;/span&gt;
          &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-demo.pkr.hcl&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Build&lt;/code&gt; executes the Packer configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="c1"&gt;# build artifact&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build Artifact&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/packer-github-actions@master&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
          &lt;span class="na"&gt;arguments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-color=false&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-on-error=abort"&lt;/span&gt;
          &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-demo.pkr.hcl&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;PACKER_LOG&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The complete file &lt;code&gt;github-actions-packer.yml&lt;/code&gt; will look as below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS AMI using Packer Config&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# * is a special character in YAML so you have to quote this string&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cron&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;4&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;packer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;packer&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout Repository&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS Credentials&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;aws-access-key-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/span&gt;
          &lt;span class="na"&gt;aws-secret-access-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/span&gt;
          &lt;span class="c1"&gt;# aws-session-token: ${{ secrets.AWS_SESSION_TOKEN }} &lt;/span&gt;
          &lt;span class="c1"&gt;# if you have/need it&lt;/span&gt;
          &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;

      &lt;span class="c1"&gt;# Initialize Packer templates&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Initialize Packer Template&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/packer-github-actions@master&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;init&lt;/span&gt;

      &lt;span class="c1"&gt;# validate templates&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Validate Template&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/packer-github-actions@master&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;validate&lt;/span&gt;
          &lt;span class="na"&gt;arguments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-syntax-only&lt;/span&gt;
          &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-demo.pkr.hcl&lt;/span&gt;

      &lt;span class="c1"&gt;# build artifact&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build Artifact&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/packer-github-actions@master&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
          &lt;span class="na"&gt;arguments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-color=false&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-on-error=abort"&lt;/span&gt;
          &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-demo.pkr.hcl&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;PACKER_LOG&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The source code is now ready to be pushed to the GitHub repository. As configured, the workflow will be triggered daily at 04:00 UTC.&lt;/p&gt;
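&lt;p&gt;A minimal sketch of pushing the workflow file (the commit message is a placeholder; note that scheduled workflows only run from the repository's default branch):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git add .github/workflows/github-actions-packer.yml
git commit -m "Add Packer AMI build workflow"
git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;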

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q6LnuL8z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1628576817615/Kfu1V7PYy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q6LnuL8z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1628576817615/Kfu1V7PYy.png" alt="Packer Actions 1.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4A1W4x0z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1628576910401/Fq8fWOlw7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4A1W4x0z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1628576910401/Fq8fWOlw7.png" alt="Packer Actions 2.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pTGfn-d9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1628576917881/UD6XM51kx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pTGfn-d9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1628576917881/UD6XM51kx.png" alt="Packer Actions 3.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JMVqxngb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1628577773100/-nN3Wi9AI.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JMVqxngb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1628577773100/-nN3Wi9AI.png" alt="Packer Actions 4.png"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>packer</category>
      <category>aws</category>
      <category>githubactions</category>
      <category>github</category>
    </item>
    <item>
      <title>Terraform -Automate CI/CD Workflows via GitHub Actions</title>
      <dc:creator>Manish R Warang</dc:creator>
      <pubDate>Sun, 18 Jul 2021 14:45:39 +0000</pubDate>
      <link>https://dev.to/g33kzone/terraform-automate-ci-cd-workflows-via-github-actions-307j</link>
      <guid>https://dev.to/g33kzone/terraform-automate-ci-cd-workflows-via-github-actions-307j</guid>
      <description>&lt;p&gt;This article will set up a CI/CD pipeline for our Terraform source code ( &lt;a href="https://dev.to/g33kzone/terraform-code-structure-3nhj"&gt;refer post&lt;/a&gt; ) to spin AWS EC2 instance. The aim is to automate our development workflow by building the DevOps pipeline using GitHub Actions.&lt;/p&gt;

&lt;h4&gt;
  
  
  GitHub Actions
&lt;/h4&gt;

&lt;p&gt;Before we proceed further, let's understand GitHub Actions. It is a platform to automate tasks within the software development lifecycle. It's an &lt;code&gt;event-driven&lt;/code&gt; framework, which means we can run a series of commands in response to a given event, or schedule them for one-off or repetitive tasks (e.g. executing a test suite on pull request creation, adding labels to issues, lint checks, etc.).&lt;/p&gt;

&lt;p&gt;It is fully integrated into GitHub, which gives the added advantage of keeping the source code and the CI/CD pipeline execution on the same platform. The CI/CD pipeline is one of the automation workflow offerings that streamline the overall software development and delivery process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PiogCv8o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626437267724/igD_xUWLM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PiogCv8o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626437267724/igD_xUWLM.png" alt="GitHub Flow.png"&gt;&lt;/a&gt;We can apply GitHub Actions, essentially to any stage of GitHub flow.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure CI/CD&lt;/li&gt;
&lt;li&gt;Execute a specific automated task when an issue is opened&lt;/li&gt;
&lt;li&gt;Generate automated reminders for pull requests based on owners or reviewers&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Workflow Strategy
&lt;/h4&gt;

&lt;p&gt;GitHub allows us to create workflows in the following ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a YAML config file (*.yml or *.yaml) within the GitHub repository.&lt;/li&gt;
&lt;li&gt;Create the workflow via the Actions tab in the GitHub repository's web interface.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VTpllC8L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626432916028/rEbYq59JZ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VTpllC8L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626432916028/rEbYq59JZ.png" alt="Github Actions Snapshot.png"&gt;&lt;/a&gt;Clicking on &lt;code&gt;Set up this workflow&lt;/code&gt; will pre-fill the required Terraform workflow.&lt;/p&gt;

&lt;p&gt;For this article, we will focus on the first approach.&lt;/p&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt; &lt;a href="https://docs.github.com/en/get-started/quickstart/fork-a-repo"&gt;Fork &lt;/a&gt; the Github  Repository -  &lt;a href="https://github.com/g33kzone/tf-aws-ec2"&gt;tf-aws-ec2&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;AWS User with Programmatic access

&lt;ul&gt;
&lt;li&gt;AWS Access Key ID&lt;/li&gt;
&lt;li&gt;AWS Secret Access Key&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;AWS IAM Privileges to create EC2 Instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is recommended to create a feature branch and check it out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; github-actions-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a new file in the &lt;code&gt;.github/workflows&lt;/code&gt; directory named &lt;code&gt;github-actions-demo.yml&lt;/code&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3p66v5qI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626438445661/Ia6w_M--s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3p66v5qI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626438445661/Ia6w_M--s.png" alt="Folder Structure.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let's start writing the configuration in the YAML file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;name&lt;/code&gt; - The name of your workflow. GitHub displays the names of your workflows on your repository's actions page - "Terraform Build Demo"&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Terraform&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Build&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Demo'&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;on&lt;/code&gt; - (&lt;em&gt;Required&lt;/em&gt;) The name of the GitHub event that triggers the workflow. We have configured the workflow to trigger on &lt;code&gt;Pull Request&lt;/code&gt; and &lt;code&gt;Push&lt;/code&gt; events targeting the &lt;code&gt;main&lt;/code&gt; branch.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Pull Request&lt;/code&gt; event - Triggered when a pull request is raised from the feature branch&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Push&lt;/code&gt; event - Triggered when the Pull Request is merged into the &lt;code&gt;main&lt;/code&gt; branch.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;jobs&lt;/code&gt; - A workflow run is made up of one or more jobs. These jobs can run in parallel or sequentially. Each job executes in a runner environment specified by &lt;code&gt;runs-on&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;job name&lt;/code&gt; - The name of the job displayed on GitHub.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;runs-on&lt;/code&gt; - (&lt;em&gt;Required&lt;/em&gt;) Determines the type of machine to run the job on. The machine can be either a GitHub-hosted runner or a self-hosted runner. Available GitHub-hosted runner types are: windows-latest / windows-2019 / windows-2016 / ubuntu-latest / ubuntu-20.04 etc.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;environment&lt;/code&gt; - The environment that the job references. All environment protection rules must pass before a job referencing the environment is sent to a runner.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;terraform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TF&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;GitHub&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Actions&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Demo'&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;defaults.run&lt;/code&gt; - Helps define default shell and working-directory options for all run steps in a workflow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;defaults&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bash&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;steps&lt;/code&gt; - A sequence of tasks, called steps, within a job. Steps can execute commands, set up tasks, or run an action defined in your repository, in a public repository, or published in a Docker registry.&lt;/p&gt;

&lt;p&gt;The first step is to check out the source code in the runner environment.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Checkout V2&lt;/code&gt;- This action checks out your repository under $GITHUB_WORKSPACE, so your workflow can access it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second step is to set up Terraform CLI in the runner environment.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;setup-terraform&lt;/code&gt; is a JavaScript action that sets up the Terraform CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Setup Terraform&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/setup-terraform@v1&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;terraform_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.0.0&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To ensure access to the AWS Cloud environment we need to configure &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; and &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt; in the runner environment. The values for these variables will be configured as GitHub Secrets in the below section.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Configure AWS Credentials&lt;/code&gt; - This action configures AWS credential and region environment variables for use in other GitHub Actions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS Credentials&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v1&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;aws-access-key-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/span&gt;
        &lt;span class="na"&gt;aws-secret-access-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/span&gt;
        &lt;span class="c1"&gt;# aws-session-token: ${{ secrets.AWS_SESSION_TOKEN }} &lt;/span&gt;
        &lt;span class="c1"&gt;# if you have/need it&lt;/span&gt;
        &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The runner environment is now configured. We can now add the Terraform commands.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;run&lt;/code&gt; - Runs command-line programs using the operating system's shell.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Terraform Init&lt;/code&gt; initializes the configuration used in the GitHub action workflow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Init&lt;/span&gt;
  &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;init&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform init&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Terraform Format&lt;/code&gt; checks whether the configuration has been properly formatted. It will throw an error if the configuration isn't properly formatted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Format&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fmt&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform fmt -check&lt;/span&gt;
      &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;TF_ACTION_WORKING_DIR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
      &lt;span class="na"&gt;continue-on-error&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Terraform Validate&lt;/code&gt; validates the configuration used in the GitHub action workflow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Validate&lt;/span&gt;
  &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;validate&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform validate -no-color&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Terraform Plan&lt;/code&gt; generates a Terraform plan.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This step only runs on pull requests. The PR generates a plan. When the PR is merged, that plan will be applied.&lt;/li&gt;
&lt;li&gt;This step will continue even when it errors, which allows the next step to display the plan error message even if this step fails.&lt;/li&gt;
&lt;/ul&gt;
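
&lt;p&gt;A sketch of the plan step consistent with the description above (the exact &lt;code&gt;run&lt;/code&gt; arguments are an assumption):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;- name: Terraform Plan
  id: plan
  if: github.event_name == 'pull_request'
  run: terraform plan -no-color
  continue-on-error: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;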

&lt;p&gt;&lt;code&gt;Terraform Plan Status&lt;/code&gt; fails the workflow (with exit code 1) if the plan was not generated successfully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Plan Status&lt;/span&gt;
  &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;steps.plan.outcome == 'failure'&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;exit &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Terraform Apply&lt;/code&gt; applies the configuration. This step will only run when a commit is pushed to &lt;code&gt;main&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Apply&lt;/span&gt;
  &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github.ref == 'refs/heads/main' &amp;amp;&amp;amp; github.event_name == 'push'&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform apply -auto-approve&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The complete file &lt;code&gt;github-actions-demo.yml&lt;/code&gt; will look as below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Terraform&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Build&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Demo'&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;terraform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TF&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;GitHub&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Actions&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Demo'&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;

    &lt;span class="na"&gt;defaults&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bash&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Setup Terraform&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/setup-terraform@v1&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;terraform_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.0.0&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS Credentials&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v1&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;aws-access-key-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/span&gt;
        &lt;span class="na"&gt;aws-secret-access-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/span&gt;
        &lt;span class="c1"&gt;# aws-session-token: ${{ secrets.AWS_SESSION_TOKEN }} &lt;/span&gt;
        &lt;span class="c1"&gt;# if you have/need it&lt;/span&gt;
        &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Init&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;init&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform init&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Format&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fmt&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform fmt -check&lt;/span&gt;
      &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;TF_ACTION_WORKING_DIR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
      &lt;span class="na"&gt;continue-on-error&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Validate&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;validate&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform validate -no-color&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Plan&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;plan&lt;/span&gt;
      &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github.event_name == 'pull_request'&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform plan -no-color&lt;/span&gt;
      &lt;span class="na"&gt;continue-on-error&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Plan Status&lt;/span&gt;
      &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;steps.plan.outcome == 'failure'&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;exit &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Apply&lt;/span&gt;
      &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github.ref == 'refs/heads/main' &amp;amp;&amp;amp; github.event_name == 'push'&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform apply -auto-approve&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to configure the AWS credentials so that they are accessible to the GitHub Actions workflow. The best way is to configure them as &lt;a href="https://docs.github.com/en/actions/reference/encrypted-secrets#creating-encrypted-secrets-for-a-repository"&gt;GitHub Secrets&lt;/a&gt; on the repository.&lt;/p&gt;

&lt;p&gt;In the GitHub web console, navigate to your repository --&amp;gt; &lt;code&gt;Settings&lt;/code&gt; --&amp;gt; &lt;code&gt;Secrets&lt;/code&gt; (left nav bar) --&amp;gt; click &lt;code&gt;New Repository Secret&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Configure values for the following variables as GitHub Secrets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS_ACCESS_KEY_ID&lt;/li&gt;
&lt;li&gt;AWS_SECRET_ACCESS_KEY&lt;/li&gt;
&lt;li&gt;PERSONAL_ACCESS_TOKEN&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For Personal Access Token (PAT) generation, refer to the following &lt;a href="https://docs.github.com/en/github/authenticating-to-github/keeping-your-account-and-data-secure/creating-a-personal-access-token"&gt;GitHub docs&lt;/a&gt;.&lt;/p&gt;
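&lt;p&gt;If you prefer the command line, the same secrets can be created with the GitHub CLI. A minimal sketch, assuming &lt;code&gt;gh&lt;/code&gt; is installed and authenticated against the repository; the secret values shown are placeholders:&lt;/p&gt;

```shell
# Store the AWS credentials and PAT as repository secrets
# (the --body values below are placeholders, not real keys).
gh secret set AWS_ACCESS_KEY_ID --body "your-access-key-id"
gh secret set AWS_SECRET_ACCESS_KEY --body "your-secret-access-key"
gh secret set PERSONAL_ACCESS_TOKEN --body "your-personal-access-token"
```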

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wuFzoERq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626614780733/CqPjncbCa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wuFzoERq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626614780733/CqPjncbCa.png" alt="GitHub Secrets.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Raise a pull request for the new branch via the web console. Refer to the following &lt;a href="https://docs.github.com/en/github/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests"&gt;GitHub docs&lt;/a&gt; for more information. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OM0qZpgP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626614933973/uYfDorqFq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OM0qZpgP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626614933973/uYfDorqFq.png" alt="GitHub PR.png"&gt;&lt;/a&gt;&lt;/p&gt;
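&lt;p&gt;As an alternative to the web console, the pull request can also be raised with the GitHub CLI. A sketch, assuming the feature branch has already been pushed; the title and body are placeholders:&lt;/p&gt;

```shell
# Open a pull request from the current branch into main.
gh pr create --base main --title "Terraform EC2 demo" --body "Provision demo EC2 instance via Terraform"
```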

&lt;p&gt;Once the PR is raised, the GitHub Actions Job is triggered for the Pull Request event.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qSBCjB0q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626615338537/fy_40p2M_.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qSBCjB0q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626615338537/fy_40p2M_.png" alt="Triggered.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On clicking the &lt;code&gt;Details&lt;/code&gt; link, we can see all the executed &lt;code&gt;Steps&lt;/code&gt; and their corresponding logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--omlOr5zC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626615503172/HPlT6lihq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--omlOr5zC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626615503172/HPlT6lihq.png" alt="GitHub Actions Steps.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Observations:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;Terraform Plan&lt;/code&gt; step succeeded, so the &lt;code&gt;Terraform Plan Status&lt;/code&gt; step was skipped by its &lt;code&gt;failure&lt;/code&gt; filter condition.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;Terraform Apply&lt;/code&gt; step was also skipped, as it is configured to run only on the &lt;code&gt;PUSH&lt;/code&gt; event.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Merge Pull Requests (PUSH event)
&lt;/h4&gt;

&lt;p&gt;On merging the pull request into the &lt;code&gt;main&lt;/code&gt; branch, the configured GitHub Actions workflow is triggered again, this time for the &lt;code&gt;PUSH&lt;/code&gt; event.&lt;/p&gt;
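&lt;p&gt;The merge can likewise be performed from the GitHub CLI. A sketch, assuming a hypothetical pull request number 1:&lt;/p&gt;

```shell
# Merge pull request #1 into the base branch with a merge commit.
gh pr merge 1 --merge
```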

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DF6k0WzY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626617708503/1lDuLpZcl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DF6k0WzY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626617708503/1lDuLpZcl.png" alt="Github Push.png"&gt;&lt;/a&gt;Click on the second workflow run of &lt;code&gt;GitHub Actions Demo&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0Mn_sHiF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626618186801/bgH9SLVGl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0Mn_sHiF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626618186801/bgH9SLVGl.png" alt="Github Push Job Success.png"&gt;&lt;/a&gt;Click on &lt;code&gt;TF GitHub Actions Demo&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jxKK3X_E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626618296600/JxEWNRdKV.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jxKK3X_E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626618296600/JxEWNRdKV.png" alt="GitHub TF Apply.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Observations:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;Terraform Plan&lt;/code&gt; step was skipped, as it is triggered only on the &lt;code&gt;PULL Request&lt;/code&gt; event.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;Terraform Apply&lt;/code&gt; step executed successfully, as it is configured to run on the &lt;code&gt;PUSH&lt;/code&gt; event. As a result, an AWS EC2 instance was created.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Fv8Jk1Fh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626618925260/j1vCcOvTp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Fv8Jk1Fh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1626618925260/j1vCcOvTp.png" alt="AWS Console.png"&gt;&lt;/a&gt;The existing setup is now capable of handling any changes (new/updates). The Terraform scripts will be automatically deployed to AWS Cloud once source code is merged into the &lt;code&gt;main&lt;/code&gt; branch as demonstrated above.&lt;/p&gt;

&lt;h4&gt;
  
  
  Destroy resources
&lt;/h4&gt;

&lt;p&gt;Remember to destroy the resources (i.e. the AWS EC2 instance) created for this tutorial to avoid incurring any costs.&lt;/p&gt;
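&lt;p&gt;A minimal clean-up sketch, run from a checkout of the project with access to the Terraform state:&lt;/p&gt;

```shell
# Destroy all resources tracked in the Terraform state.
terraform destroy -auto-approve
```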

&lt;p&gt;Refer to the GitHub &lt;a href="https://github.com/g33kzone/tf-aws-ec2-github-actions"&gt;Repo&lt;/a&gt; for the source code demonstrated in this post.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>githubactions</category>
      <category>github</category>
    </item>
    <item>
      <title>Terraform - Code Structure</title>
      <dc:creator>Manish R Warang</dc:creator>
      <pubDate>Mon, 12 Jul 2021 16:22:16 +0000</pubDate>
      <link>https://dev.to/g33kzone/terraform-code-structure-3nhj</link>
      <guid>https://dev.to/g33kzone/terraform-code-structure-3nhj</guid>
      <description>&lt;p&gt;The previous Terraform &lt;a href="https://dev.to/g33kzone/terraform-getting-started-26d1"&gt;blog&lt;/a&gt; gave us a basic introduction to Terraform. We also discussed writing a simple Terraform code to create an AWS EC2 instance with minimal code.&lt;/p&gt;

&lt;p&gt;Maintaining the entire codebase in a single &lt;code&gt;main.tf&lt;/code&gt; file is fine for beginners. However, this approach leads to maintainability issues as the underlying infrastructure grows. Moreover, in practical situations, you may also need to deal with multiple environments (e.g. DTAP - dev/test/acceptance/production).&lt;/p&gt;

&lt;p&gt;Other deciding factors for code modularization include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Project Complexity

&lt;ul&gt;
&lt;li&gt;Count of Terraform Providers involved&lt;/li&gt;
&lt;li&gt;Count of Infra resources to be maintained by Terraform&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Cadence of Infrastructure changes - daily / weekly / monthly&lt;/li&gt;

&lt;li&gt;Deployment Strategy - CI/CD Pipeline, GitOps, etc.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;It is recommended to logically split the source code as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;provider.tf&lt;/code&gt; - contains provider configuration in root module&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;main.tf&lt;/code&gt; - call modules, locals, and data sources to create all resources&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;variables.tf&lt;/code&gt; - contains variable declarations used in &lt;code&gt;main.tf&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;outputs.tf&lt;/code&gt; - contains outputs from the resources created in &lt;code&gt;main.tf&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform.tfvars&lt;/code&gt; - contains variable definitions to provide default variable values. Terraform will automatically load variables from those files.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let us refer to the initial Terraform code and understand how we can logically break it down further. &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1626077688967%2FdxrLiwZKB.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1626077688967%2FdxrLiwZKB.png" alt="terraform-getting-started.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown in the above image,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Red block will be part of &lt;code&gt;provider.tf&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Green block will be part of &lt;code&gt;main.tf&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Using Input Variable Values
&lt;/h4&gt;

&lt;p&gt;Since we treat Terraform as Infrastructure as Code, we should avoid hardcoded values; declare them in &lt;code&gt;variables.tf&lt;/code&gt; instead. The following block must be defined for every variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"aws_region"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS Region"&lt;/span&gt;
&lt;span class="c1"&gt;# default value is optional.&lt;/span&gt;
    &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Variables on the Command Line
&lt;/h4&gt;

&lt;p&gt;A default value can be configured for a variable (as shown above). However, we can override this value at runtime:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform apply &lt;span class="nt"&gt;-var&lt;/span&gt; &lt;span class="nv"&gt;aws_region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"eu-west-1"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Environment Variables
&lt;/h4&gt;

&lt;p&gt;Terraform also searches its own process environment for variables named &lt;code&gt;TF_VAR_&lt;/code&gt; followed by the name of a declared variable. This can be useful when running Terraform in automation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;TF_VAR_aws_region &lt;span class="o"&gt;=&lt;/span&gt; eu-west-1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Variable Definitions (.tfvars) Files
&lt;/h4&gt;

&lt;p&gt;When many values need to be set, it is more convenient to specify them in a variable definitions file (with a filename ending in either &lt;code&gt;.tfvars&lt;/code&gt; or &lt;code&gt;.tfvars.json&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Terraform also automatically loads a number of variable definitions files if they are present:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Files named exactly &lt;code&gt;terraform.tfvars&lt;/code&gt; or &lt;code&gt;terraform.tfvars.json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Any files with names ending in &lt;code&gt;.auto.tfvars&lt;/code&gt; or &lt;code&gt;.auto.tfvars.json&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
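&lt;p&gt;Variable definitions files with any other name must be passed explicitly. A sketch, assuming a hypothetical &lt;code&gt;dev.tfvars&lt;/code&gt; file:&lt;/p&gt;

```shell
# Load variable definitions from a non-default file.
terraform apply -var-file="dev.tfvars"
```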

&lt;p&gt;The declared variable can then replace the hardcoded value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_region&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this restructuring, our code will look as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1626105252916%2FQVBsFkYTW.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1626105252916%2FQVBsFkYTW.png" alt="Terraform File Structure.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  provider.tf
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt;3.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_region&lt;/span&gt;
  &lt;span class="nx"&gt;default_tags&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"Environment"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dev"&lt;/span&gt;
      &lt;span class="s2"&gt;"Owner"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"g33kzone"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  variables.tf
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"aws_region"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS Region"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"instance_type"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS EC2 Instance Type"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"aws_ec2_ami"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"EC2 AMI for Amazon Linux 2"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  main.tf
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"web"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;instance_type&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_ec2_ami&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Name"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"aws-ec2-demo"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  terraform.tfvars
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="nx"&gt;aws_region&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t2.micro"&lt;/span&gt;
&lt;span class="nx"&gt;aws_ec2_ami&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ami-04d29b6f966df1537"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are done with the coding. Let us execute this Terraform code with the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# open your shell in the same project folder&lt;/span&gt;

&lt;span class="c"&gt;# download the terraform core components &lt;/span&gt;
&lt;span class="c"&gt;# and initialize terraform in this directory&lt;/span&gt;
terraform init


&lt;span class="c"&gt;# Validate changes to be made in AWS after the execution&lt;/span&gt;
terraform plan


&lt;span class="c"&gt;# -auto-approve is used to skip manual approval prompt&lt;/span&gt;
terraform apply &lt;span class="nt"&gt;-auto-approve&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Very Important
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Clean Up
&lt;/h3&gt;

&lt;p&gt;Do not forget to delete the infrastructure created to avoid incurring any costs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# running this command will destroy all the resources&lt;/span&gt;
terraform destroy &lt;span class="nt"&gt;-auto-approve&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Github Repo - &lt;a href="https://github.com/g33kzone/tf-aws-ec2/tree/modular-beginner" rel="noopener noreferrer"&gt;https://github.com/g33kzone/tf-aws-ec2.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In future posts, I will delve into Terraform Modules.&lt;/p&gt;

</description>
      <category>terraform</category>
    </item>
    <item>
      <title>Configuring Jenkins on AWS EC2 Instance</title>
      <dc:creator>Manish R Warang</dc:creator>
      <pubDate>Thu, 01 Jul 2021 19:26:54 +0000</pubDate>
      <link>https://dev.to/g33kzone/configuring-jenkins-on-aws-ec2-instance-3apl</link>
      <guid>https://dev.to/g33kzone/configuring-jenkins-on-aws-ec2-instance-3apl</guid>
      <description>&lt;p&gt;This post provides an overview of the steps required to install and run  &lt;a href="https://www.jenkins.io/"&gt;Jenkins&lt;/a&gt;  Server on Amazon Linux 2. By the end of the post, we will have a working Jenkins Server ready to configure pipelines.&lt;/p&gt;

&lt;p&gt;Jenkins can be installed in &lt;a href="https://www.jenkins.io/doc/book/installing/"&gt;numerous ways&lt;/a&gt; - native system packages, Docker, or a standalone executable. This post focuses on native package installation.&lt;/p&gt;

&lt;p&gt;Jenkins recommends the following &lt;a href="https://www.jenkins.io/doc/book/installing/linux/"&gt;minimum hardware requirements&lt;/a&gt; for server configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 CPU&lt;/li&gt;
&lt;li&gt;256 MB of RAM&lt;/li&gt;
&lt;li&gt;1 GB of drive space (although 10 GB is a recommended minimum if running Jenkins as a Docker container)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For demo purposes, we can select the &lt;code&gt;t2.micro&lt;/code&gt; instance type as it satisfies the above minimum requirements.&lt;/p&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AWS Cloud Account (Free tier will work)&lt;/li&gt;
&lt;li&gt;An EC2 instance running with the &lt;code&gt;t2.micro&lt;/code&gt; instance type&lt;/li&gt;
&lt;li&gt;AWS key-pair to &lt;code&gt;SSH&lt;/code&gt; into the EC2 instance&lt;/li&gt;
&lt;li&gt;Configure firewalls (security groups) to allow SSH access into the EC2 instance&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Installation Steps
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Step 1.&lt;/strong&gt; Initiate an SSH session into your Amazon Linux - EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0PRCxXz0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624863709569/NozCbQHxx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0PRCxXz0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624863709569/NozCbQHxx.png" alt="Screenshot 2021-06-28 at 12.29.27 PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2.&lt;/strong&gt; Update existing installed packages on the EC2 instance&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;yum update &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3.&lt;/strong&gt; Install OpenJDK 11, which is required for the Jenkins installation&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;amazon-linux-extras &lt;span class="nb"&gt;install &lt;/span&gt;java-openjdk11 &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4.&lt;/strong&gt; Add Jenkins repo to Amazon Linux 2 server&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/yum.repos.d/jenkins.repo&lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
[jenkins]
name=Jenkins
baseurl=http://pkg.jenkins.io/redhat
gpgcheck=0
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 5.&lt;/strong&gt; Import the Jenkins repo GPG key&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;rpm &lt;span class="nt"&gt;--import&lt;/span&gt; https://jenkins-ci.org/redhat/jenkins-ci.org.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 6.&lt;/strong&gt; Update list of repositories&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;yum repolist
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 7.&lt;/strong&gt; Install Jenkins in Amazon Linux 2 server&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nb"&gt;install &lt;/span&gt;jenkins &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 8.&lt;/strong&gt; Start Jenkins service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 9.&lt;/strong&gt; Enable Jenkins service to start at OS boot&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 10.&lt;/strong&gt; Confirm that the Jenkins Service is up and running&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lcjn2lGX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624864854868/en47NNOpY.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lcjn2lGX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624864854868/en47NNOpY.png" alt="Screenshot 2021-06-28 at 12.47.13 PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 11.&lt;/strong&gt; Validate that the Jenkins service is configured to autostart on system reboot.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl is-enabled jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W88_0ohG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624865006355/prIa1BQ9G.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W88_0ohG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624865006355/prIa1BQ9G.png" alt="Screenshot 2021-06-28 at 12.51.18 PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 12.&lt;/strong&gt; Access the Jenkins server on the EC2 instance&lt;br&gt;
The Jenkins service binds to port 8080 by default, so it will be accessible at&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;http://[server-ip-or-hostname]:8080
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
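Note that the EC2 security group attached to the instance must allow inbound TCP on port 8080 before a browser can reach Jenkins. A quick reachability probe, sketched in bash using its built-in `/dev/tcp` pseudo-device (`port_open` is a hypothetical helper, not a Jenkins tool):

```shell
# port_open HOST PORT: succeed if a TCP connection can be established.
# Opening bash's /dev/tcp/HOST/PORT pseudo-device attempts a connection;
# the subshell keeps the file descriptor from leaking into the caller.
port_open() {
  ( : > "/dev/tcp/$1/$2" ) 2>/dev/null
}

# On the EC2 instance itself (Jenkins listens on 8080 by default):
#   port_open localhost 8080; echo "port reachable: $?"
```

If the port is reachable locally but not from your browser, the security group (or an OS firewall) is the usual culprit.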



&lt;p&gt;On first access, Jenkins displays the &lt;code&gt;Unlock Jenkins&lt;/code&gt; screen to authenticate the installation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fgCgwp0f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624866113425/q2ERaD2Pu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fgCgwp0f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624866113425/q2ERaD2Pu.png" alt="Screenshot 2021-06-28 at 1.02.15 PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 13.&lt;/strong&gt; Fetch the password to unlock Jenkins.&lt;br&gt;
The following command prints the required password (root access is needed, since the secrets file is not world-readable)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /var/lib/jenkins/secrets/initialAdminPassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
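When scripting the setup, it helps to confirm that what was read really looks like the initial admin password (typically a single 32-character lowercase hex token). A minimal sketch; `check_initial_password` is a hypothetical helper, not a Jenkins command:

```shell
# check_initial_password FILE: read the initial admin password from FILE
# and verify it has the expected shape (32 lowercase hex characters).
check_initial_password() {
  local pw
  pw=$(cat "$1" 2>/dev/null) || return 1        # fail if FILE is unreadable
  pw=$(printf '%s' "$pw" | tr -d '[:space:]')   # strip surrounding whitespace
  printf '%s' "$pw" | grep -Eq '^[0-9a-f]{32}$'
}

# Example on the instance (run as root, since the file is root-readable):
#   check_initial_password /var/lib/jenkins/secrets/initialAdminPassword
```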



&lt;p&gt;Copy the automatically generated alphanumeric password printed by the command above. On the Unlock Jenkins page, paste this password into the Administrator password field and click &lt;strong&gt;Continue&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 14.&lt;/strong&gt; Customize Jenkins&lt;br&gt;
It is recommended to select &lt;code&gt;Install suggested plugins&lt;/code&gt;, which installs the recommended set of plugins based on the most common use cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SgbafRJU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624880620152/bVw_YaEOn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SgbafRJU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624880620152/bVw_YaEOn.png" alt="Screenshot 2021-06-28 at 5.08.29 PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The setup wizard shows the progress of the Jenkins plugin installation&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8B-65Op---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624880667331/3hPQc-7yx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8B-65Op---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624880667331/3hPQc-7yx.png" alt="Screenshot 2021-06-28 at 5.12.11 PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 15.&lt;/strong&gt; Create the First Admin User&lt;br&gt;
Enter the relevant details to create the admin user and click &lt;strong&gt;Continue&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PSQWu86U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624880805475/inX_iiXBA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PSQWu86U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624880805475/inX_iiXBA.png" alt="Screenshot 2021-06-28 at 5.13.41 PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 16.&lt;/strong&gt; Configure a Valid DNS Name (if required)&lt;br&gt;
The Jenkins instance access URL is displayed on the screen that follows; it can be replaced with a valid DNS name. For the purposes of this demo, we leave it unchanged and click &lt;strong&gt;Save and Finish&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KnwtObPM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624881145697/a2vgnJz3F.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KnwtObPM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624881145697/a2vgnJz3F.png" alt="Screenshot 2021-06-28 at 5.17.35 PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 17.&lt;/strong&gt; Jenkins is ready! You have now successfully configured Jenkins.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J3vPyFk8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624881227147/ZTybKtcS5Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J3vPyFk8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624881227147/ZTybKtcS5Q.png" alt="Screenshot 2021-06-28 at 5.20.58 PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jenkins Dashboard&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZZqaHOAi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624881410926/Fkv07tg3J.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZZqaHOAi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624881410926/Fkv07tg3J.png" alt="Screenshot 2021-06-28 at 5.24.40 PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>devops</category>
    </item>
    <item>
      <title>Jenkins - Getting Started</title>
      <dc:creator>Manish R Warang</dc:creator>
      <pubDate>Thu, 01 Jul 2021 19:23:11 +0000</pubDate>
      <link>https://dev.to/g33kzone/jenkins-getting-started-43cd</link>
      <guid>https://dev.to/g33kzone/jenkins-getting-started-43cd</guid>
      <description>&lt;p&gt;&lt;a href="https://www.jenkins.io/"&gt;Jenkins&lt;/a&gt;  is an open-source automation server written in Java. It supports the entire software delivery lifecycle, including build, document, test, package, stage, deployment, static code analysis, and deployments via plugins. Several tools (e.g. unit tests, report generation, SCM, etc.) can be integrated with Jenkins via plugins. Jenkins has been the most widely adopted tool for CI/CD, thanks to its energetic, active community. This Jenkins community offers more than  &lt;a href="https://plugins.jenkins.io/"&gt;1800+ plugins&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The Jenkins project was started in 2004 (originally called Hudson) by &lt;a href="https://en.wikipedia.org/wiki/Kohsuke_Kawaguchi"&gt;Kohsuke Kawaguchi&lt;/a&gt;. Initially created for Continuous Integration (CI), Jenkins today can orchestrate the entire software delivery pipeline, known as Continuous Delivery (CD), and go even further to provide Continuous Deployment.&lt;/p&gt;

&lt;p&gt;Following is a diagrammatic representation of the Jenkins pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r9avZQYC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624257504548/I57qV2b2C.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r9avZQYC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624257504548/I57qV2b2C.png" alt="Screenshot 2021-06-21 at 12.05.24 PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In many scenarios, a single Jenkins server isn't sufficient to handle multiple code commits triggering a large number of pipelines. In such scenarios, a distributed Jenkins architecture (controller and agents) can be leveraged.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lIuosdl2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624270752573/KWy4PcFFk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lIuosdl2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1624270752573/KWy4PcFFk.png" alt="Screenshot 2021-06-21 at 3.45.47 PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>devops</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
