<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Derek Morgan</title>
    <description>The latest articles on DEV Community by Derek Morgan (@morethancertified).</description>
    <link>https://dev.to/morethancertified</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1049617%2Fe525ac90-0792-4597-9443-ca4d9c1d6f0c.png</url>
      <title>DEV Community: Derek Morgan</title>
      <link>https://dev.to/morethancertified</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/morethancertified"/>
    <language>en</language>
    <item>
      <title>Developer Self-Service with Resourcely</title>
      <dc:creator>Derek Morgan</dc:creator>
      <pubDate>Tue, 10 Dec 2024 16:10:35 +0000</pubDate>
      <link>https://dev.to/morethancertified/developer-self-service-with-resourcely-5hcn</link>
      <guid>https://dev.to/morethancertified/developer-self-service-with-resourcely-5hcn</guid>
      <description>&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Scenario
&lt;/h3&gt;

&lt;p&gt;Thanks to the success of Infrastructure-as-Code tools, more organizations are allowing their developers to deploy infrastructure resources themselves. These resources might include AWS S3 buckets for storage, containers for applications, serverless functions, and anything else needed. While this paradigm shift can reduce the overall load on the infrastructure team, you must weigh several trade-offs when shifting that load: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Productivity waste due to context switching.
&lt;/li&gt;
&lt;li&gt;Security risks due to expanded access to infrastructure.
&lt;/li&gt;
&lt;li&gt;Reliability and stability risks due to misconfigurations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Developing a robust self-service infrastructure platform using Resourcely's Blueprints and Guardrails can address all of these concerns with proper planning. &lt;/p&gt;

&lt;p&gt;Let's examine a simple AWS S3 deployment to see how Resourcely's features can empower your developers to deploy their infrastructure securely. &lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To integrate Resourcely into your deployment pipeline, you'll need to configure a few things. In this example, I'll use a GitHub Actions Pipeline.&lt;/p&gt;

&lt;h4&gt;
  
  
  GitHub Repository
&lt;/h4&gt;

&lt;p&gt;You'll need to configure a GitHub repository and provide access to Resourcely. If you haven't already done this, you can find more information here: &lt;a href="https://docs.resourcely.io/integrate/source-code-management/github" rel="noopener noreferrer"&gt;https://docs.resourcely.io/integrate/source-code-management/github&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Terraform code
&lt;/h4&gt;

&lt;p&gt;Start with some basic Terraform code. We'll add more to it later, but for now it just verifies that the workflow in the subsequent steps works:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_region" "current" {}

output "current_region" {
  value = data.aws_region.current.name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Terraform backend
&lt;/h4&gt;

&lt;p&gt;You'll need a &lt;a href="https://developer.hashicorp.com/terraform/language/backend" rel="noopener noreferrer"&gt;backend&lt;/a&gt; configured for Terraform. You can use any backend you wish, such as one using &lt;a href="https://developer.hashicorp.com/terraform/language/backend/s3" rel="noopener noreferrer"&gt;S3 and DynamoDB&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;versions.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "~&amp;gt; 1.9.5"
  backend "s3" {
    bucket         = "resourcely-tf-backend"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "ResourcelyTerraformLocks"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  GitHub Actions Workflow
&lt;/h4&gt;

&lt;p&gt;You'll then need to configure a GitHub Actions Workflow that authenticates to AWS (I prefer OIDC), deploys Terraform, and allows Resourcely to analyze the plan. You can find more information on configuring your workflow and the necessary role here: &lt;a href="https://docs.resourcely.io/integrate/terraform-integration/github-actions/local-plan/aws-with-openid-connect" rel="noopener noreferrer"&gt;https://docs.resourcely.io/integrate/terraform-integration/github-actions/local-plan/aws-with-openid-connect&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An example of the workflow is below. Ensure you set the &lt;code&gt;ROLE_TO_ASSUME&lt;/code&gt; secret and the &lt;code&gt;AWS_REGION&lt;/code&gt; variable: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;.github/workflows/main.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Plan and Apply Terraform

on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["main"]

permissions:
  id-token: write
  contents: read

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    environment: production

    defaults:
      run:
        shell: bash

    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
            role-to-assume: ${{ secrets.ROLE_TO_ASSUME }}
            aws-region: ${{ vars.AWS_REGION }}

      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Terraform Init
        run: terraform init

      - name: Terraform Plan
        run: terraform plan -out=plan.raw

      - name: Convert the plan to JSON
        id: planToJson
        run: terraform show -json plan.raw

      - name: Save JSON to a file
        uses: fishcharlie/CmdToFile@v1.0.0
        with:
          data: ${{ steps.planToJson.outputs.stdout }}
          output: plan.json

      - name: Upload Terraform Plan Output
        uses: actions/upload-artifact@v4
        with:
          name: plan-file
          path: plan.json

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' &amp;amp;&amp;amp; github.event_name == 'push'
        run: terraform apply -auto-approve -input=false

  resourcely-ci:
    needs: terraform
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Download Terraform Plan Output
        uses: actions/download-artifact@v4
        with:
          name: plan-file
          path: tf-plan-files/

      - name: Resourcely CI
        uses: Resourcely-Inc/resourcely-action@v1
        with:
          resourcely_api_token: ${{ secrets.RESOURCELY_API_TOKEN }}
          resourcely_api_host: "https://api.resourcely.io"
          tf_plan_directory: "tf-plan-files"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once everything is in place and you've tested the workflow, let's start building with Resourcely!&lt;/p&gt;

&lt;h3&gt;
  
  
  Blueprints
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What are blueprints?
&lt;/h4&gt;

&lt;p&gt;Resourcely's Blueprints provide a practical way to simplify and standardize cloud resource deployment using customizable templates that generate Terraform configurations. With Blueprints, teams can create consistent, secure, and compliant infrastructure setups, making the deployment process more efficient and improving collaboration across projects.&lt;/p&gt;

&lt;h4&gt;
  
  
  Configure blueprints
&lt;/h4&gt;

&lt;p&gt;In this tutorial, we'll import an existing module to start our Blueprint. We'll use Resourcely's "Foundry" to author the Blueprint. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhwzm5x3w7h1841jdv1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhwzm5x3w7h1841jdv1e.png" alt="Resourcely Foundry" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the &lt;code&gt;S3 Bucket&lt;/code&gt; option. That should add code similar to the one below to your console:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
constants:
  __name: "{{ bucket }}_{{ __guid }}"
---

resource "aws_s3_bucket" "{{ __name }}" {
  bucket = "{{ bucket }}"
}

resource "aws_s3_bucket_public_access_block" "{{ __name }}" {
    bucket = aws_s3_bucket.{{ __name }}.id

    block_public_acls       = true
    block_public_policy     = true
    ignore_public_acls      = true
    restrict_public_buckets = true
}

resource "aws_s3_bucket_ownership_controls" "{{ __name }}" {
  bucket = aws_s3_bucket.{{ __name }}.id

  rule {
    object_ownership = "BucketOwnerEnforced"
  }
}

resource "aws_s3_bucket_versioning" "{{ __name }}" {
    bucket = aws_s3_bucket.{{ __name }}.id
    versioning_configuration {
        status = "{{ versioning_configuration_status }}"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The templating syntax helps ensure each deployment's bucket names and resource IDs are unique. The "constants" section is similar to a "locals" block in HCL. In this case, it generates a unique name by appending the system-defined "GUID" to the user-defined bucket name. &lt;/p&gt;
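&lt;p&gt;As a concrete illustration (the bucket name and GUID here are hypothetical), a user who enters a bucket of &lt;code&gt;team-logs&lt;/code&gt; on a deployment whose GUID is &lt;code&gt;a1b2&lt;/code&gt; would get rendered Terraform along these lines:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "team-logs_a1b2" {
  bucket = "team-logs"
}

resource "aws_s3_bucket_versioning" "team-logs_a1b2" {
  bucket = aws_s3_bucket.team-logs_a1b2.id
  versioning_configuration {
    status = "Enabled"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;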

&lt;h4&gt;
  
  
  Developer Experience
&lt;/h4&gt;

&lt;p&gt;When a user deploys a Blueprint, they're not required to modify any code directly. Instead, Resourcely presents users with a clean UI that allows them to define the template's variables easily. To define the bucket in this example, click the "Developer Experience" tab. Resourcely presents you with a text input field for the &lt;code&gt;Bucket&lt;/code&gt; and a dropdown for the "Versioning configuration status":  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07m6xcyrhl0s8r02gq0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07m6xcyrhl0s8r02gq0r.png" alt="Resourcely Blueprints" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you define these items, you can preview the Terraform code that will be created by clicking on the "Terraform" tab:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fooctty6y1uebvefqbstg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fooctty6y1uebvefqbstg.png" alt="Image description" width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The GUID of the deployment is appended to all resource IDs, enabling the resources to be deployed multiple times without creating overlapping bucket names. For more information about authoring Blueprints and available variables, see &lt;a href="https://docs.resourcely.io/build/setting-up-blueprints/authoring-your-own-blueprints#constants-and-special-variables" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Modifying the Blueprint
&lt;/h4&gt;

&lt;p&gt;Blueprints are straightforward to modify manually, but some very cool features simplify the process of maximizing the developer-friendliness of your blueprints. One of these quality-of-life features is the ability to generate "tags" dynamically based on value type. If you highlight an attribute of one of the resources, you can click the "Use Selection" dropdown followed by the "Generate tag" option to generate a tag. You can also simply right-click the attribute to do the same. In the following image, I select the &lt;code&gt;block_public_acls&lt;/code&gt; attribute and generate a tag for it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoyqblazukmnemcrw9vn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoyqblazukmnemcrw9vn.png" alt="Generating Tags" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you've done this, Resourcely creates a new variable. Feel free to edit this as you wish:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1u395kybrnyg5krpb5h3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1u395kybrnyg5krpb5h3.png" alt="Resourcely Tags" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The "Developer Experience" tab now has the new boolean added:  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fus4boakghqg5exixba8h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fus4boakghqg5exixba8h.png" alt="Resourcely DevEx" width="800" height="611"&gt;&lt;/a&gt;&lt;/p&gt;
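&lt;p&gt;To illustrate what tag generation does to the template (the generated variable name is Resourcely's choice and may differ from this sketch), the hard-coded value is swapped for a tag reference that the form field then drives:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket_public_access_block" "{{ __name }}" {
    bucket = aws_s3_bucket.{{ __name }}.id

    block_public_acls       = {{ block_public_acls }}
    block_public_policy     = true
    ignore_public_acls      = true
    restrict_public_buckets = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;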

&lt;p&gt;While this is a limited example, creating a simple developer experience from even complicated Terraform code could hardly be easier. Typically, you won't parameterize every single attribute in your code, as that wouldn't make for a great experience, so we'll add some Guardrails to ensure no one makes changes that could cause your deployment to become non-compliant. &lt;/p&gt;

&lt;h3&gt;
  
  
  Guardrails
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What are guardrails?
&lt;/h4&gt;

&lt;p&gt;Resourcely's Guardrails are infrastructure policies seamlessly integrated into developer workflows. They allow you to define rules enforced within your existing CI pipeline, ensuring compliance during deployment. Guardrails can be customized to consider specific contexts and route any Terraform configurations that violate these rules for approval. By integrating Guardrails with Blueprints, developers receive immediate feedback and guidance during configuration, promoting secure and efficient deployments. &lt;/p&gt;

&lt;h4&gt;
  
  
  The "Really" Policy Language
&lt;/h4&gt;

&lt;p&gt;You create guardrails using the innovative "Really" policy language. If you've used other policy languages, such as Rego, you'll find that the Really policy language is a fantastic alternative. I won't dive too deep into the differences since Travis McPeak already has this fantastic post: &lt;a href="https://www.resourcely.io/post/announcing-really" rel="noopener noreferrer"&gt;Announcing Really&lt;/a&gt;. The Really policy language is one of the first things that drew me to Resourcely. I've written countless lines of Rego in my Open Policy Agent policies, and I would have loved to cut those lines down significantly with Really's more concise syntax. Check out the Really docs &lt;a href="https://docs.resourcely.io/build/setting-up-guardrails/authoring-your-own-guardrails" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;
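&lt;p&gt;To give you a taste of the syntax before we author one, here's a sketch of what a versioning guardrail can look like in Really (treat the exact keywords as an approximation and verify against the Really docs and the Foundry's output):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GUARDRAIL "S3 bucket versioning must be enabled"
  WHEN aws_s3_bucket_versioning
    REQUIRE versioning_configuration.status = "Enabled"
  OVERRIDE WITH APPROVAL @default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;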

&lt;h4&gt;
  
  
  Add guardrails
&lt;/h4&gt;

&lt;p&gt;To author a guardrail, it's easiest to begin in the Foundry. Click on the "Author a Guardrail" tab, "Select a Guardrail starter" from the dropdown menu, and choose "[S3] Bucket Versioning Enabled." Once you've done that, you will see the generated Really policy-as-code language:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0osga0g24zuyy669dnio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0osga0g24zuyy669dnio.png" alt="Really Policy-as-Code" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can choose an approver to authorize overrides for the Guardrail. If you haven't created any, "default" will work just fine. Once you set this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click "Create Guardrail" at the top left of the screen.
&lt;/li&gt;
&lt;li&gt;Decide whether you want this Guardrail to be "Active," "Inactive," or "Evaluate Only."
&lt;/li&gt;
&lt;li&gt;Click "Yes, create Guardrail."
&lt;/li&gt;
&lt;li&gt;Choose "Use Guardrail in Blueprint" on the following screen, and this Guardrail is ready to go!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5d3cc6vak9lxlu270b7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5d3cc6vak9lxlu270b7o.png" alt="Resourcely Guardrail" width="713" height="592"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Create A Custom Guardrail
&lt;/h4&gt;

&lt;p&gt;Creating a Guardrail from a starter is a great way to get things kicked off, but there certainly won't be a starter Guardrail for every attribute you need to enforce. Let's create a custom Guardrail that forces the "object_ownership" attribute within the "aws_s3_bucket_ownership_controls" resource to be "BucketOwnerEnforced":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket_ownership_controls" "{{ __name }}" {
  bucket = aws_s3_bucket.{{ __name }}.id

  rule {
    object_ownership = "BucketOwnerEnforced"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, select the "object_ownership" attribute within the resource, click on the "Use Selection" dropdown, and select "Generate Guardrail."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdkycw2u9ocs6iqxz181.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdkycw2u9ocs6iqxz181.png" alt="Generate Guardrail" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Voila! That simple action generated everything needed to enforce that "object_ownership" setting. Feel free to change it if you so desire, but otherwise, click on "Define Metadata" and complete the fields:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6f30hdr36vaog1snrte5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6f30hdr36vaog1snrte5.png" alt="Define Metadata" width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once that's finished, click the "Create Guardrail" button again and choose "Use Guardrail in Blueprint" to add it to the list of usable Guardrails. &lt;/p&gt;
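&lt;p&gt;For reference, the policy generated for this attribute should resemble something like the sketch below (the Foundry's actual output is authoritative; this is an approximation of the Really syntax):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GUARDRAIL "Enforce bucket owner object ownership"
  WHEN aws_s3_bucket_ownership_controls
    REQUIRE rule.object_ownership = "BucketOwnerEnforced"
  OVERRIDE WITH APPROVAL @default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;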

&lt;h4&gt;
  
  
  Managing Guardrail Attachments
&lt;/h4&gt;

&lt;p&gt;Once you create the Guardrails and set them to "active," they're automatically attached to the current Blueprint. If you need to add or remove a Guardrail from a Blueprint, you must edit the Blueprint in Foundry. To disable a Guardrail, simply toggle the slider next to the Guardrail, and it will disappear from the Guardrail list: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5pxdba4kwzgo7cz67hu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5pxdba4kwzgo7cz67hu.png" alt="Disable Guardrail" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Final Touches
&lt;/h4&gt;

&lt;p&gt;Now that we've created the Blueprint, generated custom tags, and created Guardrails, let's finalize the process by defining metadata:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmccbzxqqs822eq5p7xj1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmccbzxqqs822eq5p7xj1.png" alt="Define Metadata" width="800" height="536"&gt;&lt;/a&gt;&lt;br&gt;
Finally, click "Create Blueprint" at the top right and publish the Blueprint. If you're satisfied with everything, choose "Use Blueprint in a Pull Request," and we'll deploy! &lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Admin Deployment
&lt;/h4&gt;

&lt;p&gt;As long as you configure your repository information correctly, you should be able to pass in the information requested without any issues: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmj17vlr8iglfeoxpiq56.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmj17vlr8iglfeoxpiq56.png" alt="Deployment" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll notice that the "AWS s3 bucket public access block name block public acls" (I know, I could have been more succinct there) option is "true" and can be toggled to "false." You may also notice that "Versioning configuration status" cannot be changed. The Guardrails are live and already benefiting your users: instead of discovering violations after you open the PR, you immediately know what your Guardrails allow you to do. &lt;/p&gt;

&lt;p&gt;Go ahead and click "Continue." You may see existing code if you've pushed code before using Resourcely. Scroll down, and you'll see any additional code in green: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2svq62d7e9la8fxabs2g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2svq62d7e9la8fxabs2g.png" alt="Deployment" width="800" height="624"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If everything looks good, go ahead and use the button at the top right to Open the pull request, fill in the information, and submit! A status page will greet you. &lt;/p&gt;

&lt;p&gt;If the Resourcely action is successful in GitHub, the PR should be Approved!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbsp17vepnqw871nvo33.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbsp17vepnqw871nvo33.png" alt="Approved PR" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6p1uv38e5nz63ulk4a01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6p1uv38e5nz63ulk4a01.png" alt="Approved PR" width="800" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Dev Deployment
&lt;/h4&gt;

&lt;p&gt;Before we close this guide, let's examine the developer experience by deploying our new S3 Blueprint through the eyes of "Jane Dev." &lt;/p&gt;

&lt;p&gt;Jane will see a resources page similar to what you just saw, but much more limited. The developer only has the tools they need to deploy. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0x9apzs45yw50hous29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0x9apzs45yw50hous29.png" alt="Developer View" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;She'll click on the same "Create Pull Request" button at the top right:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvs0i2cz1upgl4vofrhiu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvs0i2cz1upgl4vofrhiu.png" alt="Create PR" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow the steps, and you'll see the same process as before. Open the pull request on the final step: GitHub will run the pipeline, and Resourcely will approve the PR if everything goes well:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8go5ocsdau5fg1ni46kc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8go5ocsdau5fg1ni46kc.png" alt="Image description" width="680" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Merge the PR, and Jane Dev's bucket is deployed with all settings enforced:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7yvjyocq29l9dfvf38q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7yvjyocq29l9dfvf38q.png" alt="Image description" width="800" height="727"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This tutorial covered several ways Resourcely makes life easier for Infrastructure-as-Code developers who need resources they can deploy effortlessly. Simple niceties such as tag generation make it quick and easy to parameterize attribute values that weren't previously variables. Guardrails ensure that you can enforce any attribute. With Blueprints tying everything together, I don't know of another product that makes it easier to deploy compliant resources quickly. Be sure to check out Resourcely, which offers an insanely generous free tier: &lt;a href="https://portal.resourcely.io" rel="noopener noreferrer"&gt;https://portal.resourcely.io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this tutorial as much as I enjoyed writing it! Until next time, cheers! &lt;/p&gt;

</description>
      <category>terraform</category>
      <category>policy</category>
      <category>devex</category>
      <category>aws</category>
    </item>
    <item>
      <title>Containerizing Terraform</title>
      <dc:creator>Derek Morgan</dc:creator>
      <pubDate>Thu, 13 Jun 2024 13:55:58 +0000</pubDate>
      <link>https://dev.to/morethancertified/containerizing-terraform-3h3e</link>
      <guid>https://dev.to/morethancertified/containerizing-terraform-3h3e</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Like most software, Terraform can behave differently from one machine to the next, which is aggravating. Terraform itself is pretty solid, but dealing with multiple providers, provisioners, keys, variables, and every other piece of entropy can become a management headache!&lt;/p&gt;

&lt;p&gt;Note: Before we get started, this technique is best used on Linux, macOS, or Microsoft’s Windows Subsystem for Linux 2. WSL1, or doing this straight from PowerShell, probably isn’t the best route. You might be able to get it to work, but it’s best if you’re running Ubuntu on WSL2. The instructions to get that wired up are here: &lt;a href="https://docs.docker.com/docker-for-windows/install/"&gt;https://docs.docker.com/docker-for-windows/install/&lt;/a&gt;&lt;br&gt;
You’ll also need to install Docker:&lt;br&gt;
&lt;a href="https://docs.docker.com/get-docker/"&gt;https://docs.docker.com/get-docker/&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Enter Containers!
&lt;/h1&gt;

&lt;p&gt;So, how does Docker fit into this scenario and potentially solve our woes? As when using it in automation, Docker can run a container as an ad-hoc process: the container runs, completes its purpose, and is then removed. Utilizing the &lt;code&gt;hashicorp/terraform&lt;/code&gt; container, we can run the latest version of Terraform with a simple command! Although there’s an extra layer of abstraction that can complicate things depending on what you’re deploying, most, if not all, of these issues can be overcome with a few clever Docker Run flags. Now, before everyone skewers me for only mentioning Docker, I want to make it perfectly clear that I am aware there are other runtimes. However, Docker is still the most popular, so I’ll be using it for this article. Feel free to use any runtime you wish as long as the features are the same.&lt;/p&gt;
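&lt;p&gt;For example, a run of &lt;code&gt;terraform init&lt;/code&gt; through the container can look like the sketch below. The image's entrypoint is the &lt;code&gt;terraform&lt;/code&gt; binary, so everything after the image name is passed straight to Terraform; mounting the current directory gives it your code and state:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Mount the project directory and run "terraform init" inside the container
docker run --rm -it \
  -v "$(pwd)":/workspace \
  -w /workspace \
  hashicorp/terraform:latest init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;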

&lt;p&gt;Ok, let’s build something! As many of you know by now, I like to build stuff vs. talk about it. We’ll keep it simple this round, but it will illustrate several snags and solutions you may encounter while running Terraform in Docker: we’ll deploy a Docker image and container using Terraform. Go ahead and create a main.tf file and add some Terraform code:&lt;/p&gt;

&lt;p&gt;Note: If you want to learn how to write deployments like this and much more, check out my course!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

provider "docker" {}

resource "null_resource" "dockervol" {
  provisioner "local-exec" {
    command = "echo ${docker_container.nodered_container.name} &amp;gt;&amp;gt; containers.txt"
  }
  provisioner "local-exec" {
    command = "rm -f containers.txt"
    when = destroy
  }
}

resource "docker_image" "nodered_image" {
  name = "nodered/node-red"
}

resource "random_string" "random" {
  length  = 4
  special = false
  upper   = false
}

resource "docker_container" "nodered_container" {
  name  = join("-", ["nodered", random_string.random.result])
  image = docker_image.nodered_image.image_id # use .latest on kreuzwerker/docker provider v2.x
  ports {
    internal = 1880
    external = 1880
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ok, this code creates a NodeRED container from the NodeRED image and then creates a containers.txt file containing the name of the container, illustrating that the Terraform binary still has access to your local filesystem. The container will also be exposed on port 1880, so feel free to access it at &lt;a href="http://localhost:1880"&gt;http://localhost:1880&lt;/a&gt; if you wish to play around with it, but make sure you add a volume if you want to do anything fancy, as the data will not persist. Once the deployment is destroyed, everything, including the containers.txt file, will be removed.&lt;/p&gt;
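&lt;p&gt;If you do want Node-RED’s data to persist, a minimal sketch of adding a volume with the kreuzwerker/docker provider might look like the following (the resource names are illustrative; &lt;code&gt;image_id&lt;/code&gt; is the attribute name on provider v3, while older versions used &lt;code&gt;latest&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# a named Docker volume to hold Node-RED's /data directory
resource "docker_volume" "nodered_data" {
  name = "nodered-data"
}

# mount the volume so flows survive container recreation
resource "docker_container" "nodered_persistent" {
  name  = "nodered-persistent"
  image = docker_image.nodered_image.image_id
  ports {
    internal = 1880
    external = 1880
  }
  volumes {
    volume_name    = docker_volume.nodered_data.name
    container_path = "/data"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;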

&lt;p&gt;So now that you have your file created and code inserted, let’s get down to business!&lt;/p&gt;

&lt;h1&gt;
  
  
  Using the Terraform Docker Container
&lt;/h1&gt;

&lt;p&gt;Typically, you would install Terraform using apt or by downloading the binary, but this time, we will do it the fun way. Unfortunately, you still need to install Docker, so ensure you’ve done that. Once everything is installed, let’s get to work! You can check out the Terraform Container docs here: &lt;a href="https://hub.docker.com/r/hashicorp/terraform"&gt;https://hub.docker.com/r/hashicorp/terraform&lt;/a&gt;&lt;br&gt;
As you can see, the docs are pretty bare, especially by HashiCorp standards. Their docs are typically phenomenal, but I guess they focus more on the binary itself than on containerized use cases. So, let’s make this thing useful!&lt;br&gt;
First, let’s go ahead and pull the latest image. Run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker pull hashicorp/terraform:light&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And you should see the image being pulled:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7qg73curdc6h1m19qy8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7qg73curdc6h1m19qy8.png" alt="Docker pulling Terraform Image" width="800" height="187"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, if you run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker history --no-trunc hashicorp/terraform:light&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fektulzz7627ef6dlcae1.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fektulzz7627ef6dlcae1.PNG" alt="Image description" width="800" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see the “ENTRYPOINT” directive is set to &lt;code&gt;["/bin/terraform"]&lt;/code&gt;. This shows that when you run this container, it will run the terraform command. This is exactly what we’re looking for. So, let’s try it by running the container. We’ll tell Docker to remove the container when it exits with &lt;code&gt;--rm&lt;/code&gt; and to attach an interactive terminal with &lt;code&gt;-it&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run --rm -it hashicorp/terraform:light version&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;So this is great; we now know that Terraform is working just as if the binary were installed on our machine, well, almost. Go ahead and run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run --rm -it hashicorp/terraform:light init&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkp2ibvg8htfbzj278g3z.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkp2ibvg8htfbzj278g3z.PNG" alt="terraform init" width="800" height="104"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, that’s not what we were hoping for! Since Terraform is running within a container, it cannot access the files in our current directory. Let’s remedy that by mounting the current working directory as a volume, using the &lt;code&gt;$PWD&lt;/code&gt; environment variable (the “present working directory”). We’ll mount it at /data within the container and set /data as the working directory. This will give the container read/write access to our current directory:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run --rm -it -v $PWD:/data -w /data hashicorp/terraform:light init&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sn8cn5nga7b8a18rz7p.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sn8cn5nga7b8a18rz7p.PNG" alt="Docker run" width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alright, so we’re closer! Initialization was successful, and all of our providers have been installed! And, if you look at your directory, you can see the Terraform files we expect after a fresh init:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F923yv6l37c9iurd71hs6.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F923yv6l37c9iurd71hs6.PNG" alt="terraform init" width="204" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alright, so now init works, let’s go ahead and attempt a plan and see what breaks next:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42bs0nni1jw0wbm2obpg.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42bs0nni1jw0wbm2obpg.PNG" alt="terraform plan" width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;D'oh!&lt;br&gt;
So now we have another issue to solve. We need to connect our Docker container to the machine's local Docker socket. I’ll admit I did not come up with the exact syntax on my own. I used the blog linked below, and I think you’ll find a lot of other interesting tidbits there that may come in handy as you make this solution work for you:&lt;br&gt;
&lt;a href="https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/"&gt;https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/&lt;/a&gt;&lt;br&gt;
To utilize our machine’s local Docker socket within the container, we need to add the socket as a volume to the Docker container like so:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run --rm -it -v $PWD:/data -w /data -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker hashicorp/terraform:light plan&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now run that command, and let’s see what happens:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fioik7707ceyp5ft9hjbe.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fioik7707ceyp5ft9hjbe.PNG" alt="terraform plan working" width="739" height="78"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Awesome! It worked! So, let’s apply this puppy!&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run --rm -it -v $PWD:/data -w /data -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker hashicorp/terraform:light apply --auto-approve&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6k57eqbisauef7rr09sh.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6k57eqbisauef7rr09sh.PNG" alt="terraform apply" width="800" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We did it! Nice! Everything appears to have applied just fine! If you run a docker ps, you’ll see that the container is up and running:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F77c4uz6rl5oe5jufcmhk.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F77c4uz6rl5oe5jufcmhk.PNG" alt="container running" width="800" height="60"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if you open containers.txt, you should see the name of the running container within. Before we destroy this stack, let’s make this a little bit easier using an alias. Go ahead and run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;alias tform='docker run --rm -it -v $PWD:/data -w /data -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker hashicorp/terraform:light'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Note the single quotes: they prevent &lt;code&gt;$PWD&lt;/code&gt; from expanding when the alias is defined, so it resolves to whatever directory you run the command from. You shouldn’t see any output. Once you’ve done that, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tform state list&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should see all of your resources listed! We’ve now simplified the command extensively, and we can now run that entire Docker string by using one command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxcyp36xxlhty4occ2w5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxcyp36xxlhty4occ2w5.PNG" alt="terraform resources" width="625" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Perfect! Now, go ahead and destroy:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tform destroy --auto-approve&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde2fsywm6xizro4dqid2.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde2fsywm6xizro4dqid2.PNG" alt="Image description" width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we’ve seen how this works, let’s make this setup a little more permanent. Depending on your OS, you may want to add this command to your .bashrc file to ensure it persists across reboots, logouts, etc. So, if you’re on an OS that supports this file, let’s do this:&lt;br&gt;
Within your ~/.bashrc file, add this line to the very bottom:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;alias tform='docker run --rm -it -v $PWD:/data -w /data -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker hashicorp/terraform:light'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And that’s all you need to do! Now, anytime you log back in as your user, you’ll be greeted with your fancy new command!&lt;br&gt;
Alright! So now you’ve got an excellent way to utilize Terraform, manage versioning, and deploy in automation with ease!&lt;/p&gt;

&lt;h1&gt;
  
  
  Other Fun Things
&lt;/h1&gt;

&lt;p&gt;Well, that’s super neat! Definitely play around with that; there are many things you can do involving automation and custom Dockerfiles. For instance, if you require the Python binary, you can potentially create a new Dockerfile from the Python image and add the files from the Terraform image into it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Dockerfile
FROM python
COPY --from=hashicorp/terraform:light /bin/terraform /bin/
ENTRYPOINT ["/bin/terraform"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can do the same with Jenkins and other CI/CD platforms as well. The possibilities are endless! You can, of course, use any other &lt;code&gt;docker run&lt;/code&gt; argument too, such as environment variables. Terraform only exposes environment variables prefixed with &lt;code&gt;TF_VAR_&lt;/code&gt; as input variables, so if you need to pass one in, you can run something like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run --rm -it -v $PWD:/data -w /data -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker -e TF_VAR_tz=Europe/London hashicorp/terraform:light&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then, you can reference that value within your Terraform configuration as &lt;code&gt;var.tz&lt;/code&gt; once a matching variable is declared. But I’ll let you experiment with that.&lt;/p&gt;
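&lt;p&gt;Terraform only surfaces environment variables prefixed with &lt;code&gt;TF_VAR_&lt;/code&gt; as input variables, so a minimal sketch of the receiving side (the &lt;code&gt;tz&lt;/code&gt; variable name is just an example) might be:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# TF_VAR_tz from the environment populates var.tz
variable "tz" {
  type    = string
  default = "UTC"
}

output "timezone" {
  value = var.tz
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;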

&lt;p&gt;Alright, so that’s all for this article. If you liked it, please check out my course at &lt;a href="https://courses.morethancertified.com/p/mtc-terraform"&gt;https://courses.morethancertified.com/p/mtc-terraform&lt;/a&gt; to learn a lot more about Terraform, and don’t forget to Terraform Apply Yourself!&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources and More Reading
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://medium.com/@audun.nes/how-to-use-the-official-terraform-docker-image-2609982114b9"&gt;https://medium.com/@audun.nes/how-to-use-the-official-terraform-docker-image-2609982114b9&lt;/a&gt;&lt;br&gt;
&lt;a href="https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/"&gt;https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://nodered.org/docs/getting-started/docker"&gt;https://nodered.org/docs/getting-started/docker&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.reddit.com/r/docker/comments/bugpt0/running_terraform_in_docker/"&gt;https://www.reddit.com/r/docker/comments/bugpt0/running_terraform_in_docker/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.docker.com/get-docker/"&gt;https://docs.docker.com/get-docker/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.terraform.io/downloads.html"&gt;https://www.terraform.io/downloads.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://hub.docker.com/r/hashicorp/terraform/"&gt;https://hub.docker.com/r/hashicorp/terraform/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://courses.morethancertified.com/p/mtc-terraform"&gt;https://courses.morethancertified.com/p/mtc-terraform&lt;/a&gt;&lt;br&gt;
&lt;a href="https://courses.morethancertified.com/p/mtc-docker"&gt;https://courses.morethancertified.com/p/mtc-docker&lt;/a&gt;&lt;br&gt;
&lt;a href="https://youtube.com/morethancertified"&gt;https://youtube.com/morethancertified&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>containers</category>
      <category>terraform</category>
      <category>docker</category>
    </item>
    <item>
      <title>Infrastructure Drift in the Cloud</title>
      <dc:creator>Derek Morgan</dc:creator>
      <pubDate>Thu, 12 Oct 2023 12:38:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/infrastructure-drift-in-the-cloud-1fj1</link>
      <guid>https://dev.to/aws-builders/infrastructure-drift-in-the-cloud-1fj1</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8enl31d6a8999aib1d1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8enl31d6a8999aib1d1.jpg" alt="Image description" width="500" height="750"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Configuration Drift?
&lt;/h1&gt;

&lt;p&gt;If you’ve spent time with infrastructure-as-code, you’ve probably heard about configuration drift. For those new to the topic, configuration drift is when your actual infrastructure deviates from the desired infrastructure. You usually script this infrastructure using tools such as HashiCorp Terraform, OpenTofu, AWS CloudFormation, Azure’s ARM/Bicep, Ansible, et al. Drift happens when an engineer or a process changes something outside these scripts. Think of manually adding a security rule from the console to allow access to a server for an emergency patch at 0300. &lt;/p&gt;

&lt;p&gt;Obviously, you can just lock all engineers out of the console, but honestly,  things (sic) happen, and stuff (sic) goes down! Outages, security breaches, and surprise updates can lead to a need for immediate changes. When the clock is ticking, someone must take action, possibly without time to read your (obviously well-commented) infrastructure code. &lt;/p&gt;

&lt;p&gt;Before I explain how to prevent and mitigate drift, I want to remind you that configuration drift is not always bad! In the above example, that emergency patch may prevent a zero-day exploit launched by a bad actor who has a habit of attacking companies in your industry. A significant security risk obviously outweighs keeping your infrastructure scripts clean. After the emergency has subsided, someone must either add this rule to the infrastructure code or remove it entirely. &lt;/p&gt;

&lt;h1&gt;
  
  
  Prevention, Detection, and Mitigation Methods
&lt;/h1&gt;

&lt;p&gt;Luckily, there are several guardrails you can put into place to help prevent drift or at least mitigate it efficiently when it occurs. The following are some essential strategies and tools to prevent, detect, and mitigate infrastructure drift. &lt;/p&gt;

&lt;h2&gt;
  
  
  Have a Strict Change-Control Policy
&lt;/h2&gt;

&lt;p&gt;A change-control policy should be the bedrock for any company that deals with changes, you know, all companies. The complexity can vary by scale and the possible impact of a change. Overcomplicating change management can lead to a loss of productivity and agility. A fantastic example is the Change Management Policy doc provided by Gitlab: &lt;a href="https://handbook.gitlab.com/handbook/security/change-management-policy/"&gt;https://handbook.gitlab.com/handbook/security/change-management-policy/&lt;/a&gt;&lt;br&gt;
This document is a great template to get you started on your own change management policy. &lt;/p&gt;

&lt;h2&gt;
  
  
  Keep all IaC and config files in Version Control
&lt;/h2&gt;

&lt;p&gt;Hopefully, you keep all infrastructure files in a VCS, such as GitHub or GitLab. Your organization should have a well-documented process for modifying these files, such as PR and comment policies, branch guidelines, permissions, approval policies, etc. Tagging commits properly, having policy frameworks statically scan the code for changes to mission-critical resources and require approvals accordingly, and always ensuring those approvers are available when needed are all great ways to ensure a smooth VCS strategy. &lt;/p&gt;

&lt;h2&gt;
  
  
  Use Configuration Monitoring Tools
&lt;/h2&gt;

&lt;p&gt;Configuration monitoring tools, such as AWS Config, Azure Application Change Analysis, and others, are crucial for maintaining a tight infrastructure policy. They monitor your infrastructure and notify you based on default and custom rules. Many of these tools are not free, so ensure accounting is also involved! Despite this, these tools are pretty much required to keep your infrastructure in check, and they’re worth the small cost to operate. &lt;/p&gt;
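&lt;p&gt;These rules can themselves be managed as code. As a hedged sketch (assuming an AWS Config configuration recorder is already enabled in the account), a managed Config rule in Terraform might look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# AWS managed rule that flags publicly readable S3 buckets
resource "aws_config_config_rule" "s3_public_read" {
  name = "s3-bucket-public-read-prohibited"

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_PUBLIC_READ_PROHIBITED"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;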

&lt;h2&gt;
  
  
  Never Share Credentials
&lt;/h2&gt;

&lt;p&gt;I’m not going to use this section to pad my word count. I think it’s pretty self-explanatory. I’ll just say that OIDC, zero-trust, and IAM roles are vastly superior to passwords. Unfortunately,  this isn’t a perfect world, and passwords still exist. So, if you still have to use passwords, don’t share them. &lt;/p&gt;

&lt;h2&gt;
  
  
  Run Scheduled Plans
&lt;/h2&gt;

&lt;p&gt;Not all IaC tools have the notion of a “plan,” but most do. Running a plan on a scheduled interval allows you to see if anything about the resources known to the tool has changed. &lt;br&gt;
Most tools, such as Terraform, can only check the resources they know about for drift. Due to this, a successful plan may not tell the whole story. You should include tools, like AWS Config, in your strategy to see the entire picture. If you encounter resources outside of your Terraform State due to an emergency or a click-happy engineer, you can always use Terraform’s Import Blocks (v1.5.0+) feature to import those resources. &lt;/p&gt;
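&lt;p&gt;As a sketch of those import blocks (the resource address and IDs below are placeholders, not values from this article), you declare the import in configuration and then plan as usual:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# bring a console-created instance under Terraform management (v1.5.0+)
import {
  to = aws_instance.clicked_in_console
  id = "i-0123456789abcdef0"
}

resource "aws_instance" "clicked_in_console" {
  # match the real instance's settings, or let Terraform draft them:
  #   terraform plan -generate-config-out=generated.tf
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;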

&lt;h2&gt;
  
  
  Use drift detection tools such as driftctl
&lt;/h2&gt;

&lt;p&gt;Scheduled plans are a rudimentary but somewhat effective method to detect drift. If you really want to step up your drift-detection game, driftctl is a tool that can help. With driftctl, you can check for resources outside Terraform, simplifying the import process. Unfortunately, driftctl is currently in “maintenance mode” and will no longer be actively supported. CloudQuery is one alternative, and several other Terraform Automation and Collaboration Tools (TACOS) include drift detection, so if the end-of-life of driftctl is a concern, it may not be the tool for you. &lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;As you can see from this non-exhaustive list, there are many methods to manage drift, intentional or unintentional. How your organization handles infrastructure drift can be complicated, as numerous moving parts exist. Many of these strategies can become quite pricey, which is always a hurdle for organizations of any size. Overall, having a solid plan is the most important thing you can do. If you want to learn more about DevOps, including Terraform, Docker, Jenkins, Ansible, and more, check out my courses at &lt;a href="https://courses.morethancertified.com"&gt;https://courses.morethancertified.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;See my LinkedIn post that inspired this article here: &lt;a href="https://www.linkedin.com/posts/derekm1215_infrastructure-drift-in-the-cloud-more-activity-7118210758550683648-P8zf"&gt;https://www.linkedin.com/posts/derekm1215_infrastructure-drift-in-the-cloud-more-activity-7118210758550683648-P8zf&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>iac</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Self-Service AWS Infrastructure using Spacelift</title>
      <dc:creator>Derek Morgan</dc:creator>
      <pubDate>Tue, 11 Apr 2023 11:34:42 +0000</pubDate>
      <link>https://dev.to/aws-builders/self-service-aws-infrastructure-using-spacelift-1de7</link>
      <guid>https://dev.to/aws-builders/self-service-aws-infrastructure-using-spacelift-1de7</guid>
      <description>&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;In this article of the Self-Service AWS Infrastructure for Your Devs series, we're going to deploy our VPC and the peered Client VPC using Spacelift and several of its features. This will be the easiest method of the series: state is fully managed, authentication with GitHub is handled for you, authentication to AWS is simple, and the Blueprints feature provides an excellent self-service interface for your devs with very little effort. We'll do the initial setup with a few clicks in the GUI and write everything else in Terraform. Let's get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Spacelift as Code
&lt;/h2&gt;

&lt;p&gt;First, we're going to create the code needed to deploy all of the assets. Once we've done that, we'll create the initial Spacelift admin stack and deploy everything. This will all be created as a monorepo in GitHub, but you can structure it however you see fit if you have other organizational requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Admin Stack Repository Code
&lt;/h3&gt;

&lt;p&gt;This Terraform code will create the infrastructure stack and the custom Blueprint the developers can use to deploy client VPCs. If you don't name your repository &lt;code&gt;aws-self-service&lt;/code&gt; and use all of the same directory names, ensure you modify all references within the code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ./administrative/providers.tf

terraform {
  required_providers {
    spacelift = {
      source = "spacelift-io/spacelift"
    }
  }
}

provider "spacelift" {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ./administrative/stacks.tf

# this data source will retrieve the stack_id of the admin stack 
# we will create next.
data "spacelift_stack" "admin" {
  stack_id = "admin"
}
# Check the attributes below for your VCS settings
# Learn more about stacks here: 
# https://docs.spacelift.io/concepts/stack/
resource "spacelift_stack" "shared_infra" {

  autodeploy        = false
  branch            = "main"
  project_root      = "shared_infra"
  description       = "Core Infra Stack"
  name              = "shared-infra"
  space_id          = "root"
  repository        = "aws-self-service"
  terraform_version = "1.2.9"
  labels            = ["managed"]
}

# You will create the `dev-context` further in the post. 
# More about contexts here: 
# https://docs.spacelift.io/concepts/configuration/context
resource "spacelift_context_attachment" "attachment" {
  context_id = "dev-context"
  stack_id   = spacelift_stack.shared_infra.id
  priority   = 0
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ./administrative/blueprints.tf
# More about Blueprints here: 
# https://docs.spacelift.io/concepts/blueprint/

locals {
  bprint = file("${path.root}/blueprints/client_vpc.tftpl")
}

resource "spacelift_blueprint" "client_vpc" {
    name = "Client VPC"
    description = "Stack to create a new child VPC"
    space = "root"
    template = local.bprint
    state = "PUBLISHED"
    labels = ["client"]
}
# output the rendered template for troubleshooting purposes
output "bprint" {
  value = local.bprint
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ./administrative/blueprints/client_vpc.tftpl
# inputs are used to create input fields
inputs:
  - id: client_name
    name: Client name  
  - id: vpc_cidr
    name: CIDR of the VPC
    type: select
# You could use a data source here to iterate over a list of 
# available subnets that don't overlap with the main.
    default: 10.1.0.0/16
    options:
      - 10.2.0.0/16
      - 10.3.0.0/16
      - 10.4.0.0/16
  - id: region
    name: Choose AWS region
    type: select
# ensure you set these appropriately
    options:
      - us-east-1
      - us-east-2
  - id: trigger_run
    name: Trigger a run upon stack creation
    type: boolean
    default: false
stack:
  name: ${{ inputs.client_name }}-stack
# More info about Spaces here: 
# https://docs.spacelift.io/concepts/spaces/
  space: root
  description: &amp;gt;
    Stack created from a blueprint by ${{ context.user.name }} logged in as ${{ context.user.login }}
  labels:
    - "blueprints/${{ context.blueprint.name }}"
# Uncomment the vcs section below and add your information.
  vcs:
   branch: main
   repository: aws-self-service
   project_root: client_vpc
   provider: GITHUB
  vendor:
    terraform:
      manage_state: true
# Use your preferred version of Terraform here
      version: "1.4.0"
  attachments:
    contexts:
      - id: dev-context
        priority: 1
  environment:
    variables:
      - name: TF_VAR_client_name
        value: ${{ inputs.client_name }}
      - name: TF_VAR_vpc_cidr
        value: ${{ inputs.vpc_cidr }}
      - name: TF_VAR_region
        value: ${{ inputs.region }}
options:
  trigger_run: ${{ inputs.trigger_run }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you have created all of the code, commit it to your Git repository that contains the code from the first part of this series. If you do not wish to make modifications to VCS settings in the code, make sure you name your repo &lt;code&gt;aws-self-service&lt;/code&gt; and your directories the same as what you see in the code snippets above. This is a relatively intermediate article, so I won't go into depth on how to do this. If you have any questions, feel free to reach out. &lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the Spacelift Admin Stack
&lt;/h2&gt;

&lt;p&gt;Setting up Spacelift is easy. If you don't have an account yet, you can see the "Getting Started" documentation here to get you up to speed quickly: &lt;a href="https://docs.spacelift.io/getting-started" rel="noopener noreferrer"&gt;https://docs.spacelift.io/getting-started&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you have your account set up, follow the steps below. &lt;/p&gt;

&lt;h3&gt;
  
  
  1. In the console, click on Add stack
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flg2jdyp9j2jmn72da995.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flg2jdyp9j2jmn72da995.png" alt="Adding a stack in the Spacelift Console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Configure your repository settings
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhl379fnldlbnwkbns5bj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhl379fnldlbnwkbns5bj.png" alt="Configuring repository settings in the Spacelift console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Customize any settings you need and click continue
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5em2hnxv3k7oqe9kjav.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5em2hnxv3k7oqe9kjav.png" alt="Customizing stack settings in the Spacelift console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Toggle Administrative and save
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjgldun3prf4kk3ntmvz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjgldun3prf4kk3ntmvz.png" alt="Enabling administrative options"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the AWS Credentials Context
&lt;/h2&gt;

&lt;p&gt;There are multiple ways to provide AWS credentials to our stacks. In Spacelift, you can create a "Cloud Integration" that assumes a temporary IAM role and uses it to create resources. That is the preferred route, but in the interest of simplicity and focus, I'm going to pass the credentials in manually. With this method, be sure to rotate your keys frequently and disable them when not in use. Spacelift is a very secure product, but it's always better to be cautious. &lt;/p&gt;

&lt;p&gt;We're going to use a "Spacelift Context" to store the keys as environment variables that can be accessed by any stack to which the Context is attached. To create the new Context, head to the Contexts pane on the left, fill out the necessary information, create the variables, add their values, and designate them as "secret" as shown in the image below. Unless you wish to modify the code above, ensure you use "dev-context" as the name:&lt;/p&gt;
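&lt;p&gt;If you'd rather manage the Context itself with Terraform instead of clicking through the console, a minimal sketch using the Spacelift provider might look like the following. This is an illustrative example, not part of the series code; double-check the resource arguments (especially &lt;code&gt;write_only&lt;/code&gt;, which marks a value as secret) against the provider version you use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# dev_context.tf (illustrative sketch, not part of the series repo)
resource "spacelift_context" "dev" {
  name        = "dev-context"
  description = "Shared AWS credentials for self-service stacks"
}

# write_only = true keeps the value secret in the Spacelift UI and API
resource "spacelift_environment_variable" "aws_access_key" {
  context_id = spacelift_context.dev.id
  name       = "AWS_ACCESS_KEY_ID"
  value      = var.aws_access_key_id
  write_only = true
}

resource "spacelift_environment_variable" "aws_secret_key" {
  context_id = spacelift_context.dev.id
  name       = "AWS_SECRET_ACCESS_KEY"
  value      = var.aws_secret_access_key
  write_only = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;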

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tu3qlm2xlg4wtjvudhl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tu3qlm2xlg4wtjvudhl.png" alt="Creating a Spacelift context"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's Deploy!
&lt;/h2&gt;

&lt;p&gt;Once your code is in your repository, the Admin stack is connected to that code, and the Context has been created, it's time to finally deploy! This is going to deploy:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The &lt;code&gt;shared-infra&lt;/code&gt; stack that will deploy the shared VPC. &lt;/li&gt;
&lt;li&gt;A Spacelift Blueprint that lets you enter the information needed to create a stack deploying another VPC, which automatically peers with the shared-services VPC. &lt;/li&gt;
&lt;/ol&gt;
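&lt;p&gt;Under the hood, the Admin stack creates both of those with the Spacelift Terraform provider. A highly simplified sketch of the two resources is shown below; the names and arguments are illustrative and may differ from the actual code in the repo:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch only: the real definitions live in the repo's admin code.

# The child stack that deploys the shared VPC
resource "spacelift_stack" "shared_infra" {
  name         = "shared-infra"
  repository   = "aws-self-service"
  branch       = "main"
  project_root = "shared_infra"
}

# The Blueprint that developers fill out to create client VPC stacks
resource "spacelift_blueprint" "client_vpc" {
  name     = "client-vpc"
  space    = "root"
  state    = "PUBLISHED"
  template = file("${path.module}/blueprint.yaml")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;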

&lt;p&gt;Once the configuration is finished, go ahead and trigger the Admin stack and let's check it out! You should see your Admin stack and your new shared-infra stack:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnv7o41iggtt74zzcjfkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnv7o41iggtt74zzcjfkl.png" alt="Triggering an admin stack in Spacelift"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And your new Blueprint:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtphf8h4lfg1lrenrost.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtphf8h4lfg1lrenrost.png" alt="A Blueprint in the Spacelift console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once those resources have been deployed, it's time to deploy the shared-services VPC. Trigger the shared-infra stack, then verify the resources were created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wnuho8t8km7em4an6oa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wnuho8t8km7em4an6oa.png" alt="Deploy the VPC"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once that's done, your self-service setup is ready! Head over to the Blueprints tab, fill out the necessary information, and create the stack:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F022o2uqf8rej1w34ulu5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F022o2uqf8rej1w34ulu5.png" alt="Create a stack from a Blueprint in Spacelift"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the stack is created, you can trigger it and you'll have your very own self-service VPC! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2s069rjkwfiqstvb2qg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2s069rjkwfiqstvb2qg.png" alt="See your new deployed VPC"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Self-Service AWS Infrastructure for Your Devs</title>
      <dc:creator>Derek Morgan</dc:creator>
      <pubDate>Wed, 29 Mar 2023 18:04:08 +0000</pubDate>
      <link>https://dev.to/aws-builders/creating-modular-vpcs-with-dynamic-peering-in-terraform-m8g</link>
      <guid>https://dev.to/aws-builders/creating-modular-vpcs-with-dynamic-peering-in-terraform-m8g</guid>
      <description>&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;If you're a developer or "developer adjacent," you've probably heard about "self-service" infrastructure. As the infrastructure required to launch even simple apps becomes more distributed and specialized, deploying those apps gets harder for the people who write them. Previously, you could just upload the new code to your html directory, and BAM! The new site is up! Now an app usually requires multiple pieces of infrastructure to be provisioned before it can be delivered. This lets companies deploy only the infrastructure they need and pay for it only when they need it, but it adds overhead for the developer. To combat this, DevOps teams and their subset, "Platform Engineering," are working on abstracting this process away so developers can create infrastructure themselves, on the fly, without the headaches. In this article, we'll cover different strategies to build a self-service platform that deploys a simple peered VPC in AWS without the developer making any complicated decisions. So let's kick this off! &lt;/p&gt;

&lt;h2&gt;
  
  
  The Terraform Code
&lt;/h2&gt;

&lt;p&gt;As I mentioned before, this is not a series that will cover a sophisticated AWS SaaS deployment. We will keep the AWS code very simple and focus more on the pipeline and the self-service aspects than the deployed infrastructure. So we're going to stick to a simple VPC:&lt;/p&gt;

&lt;h3&gt;
  
  
  Main VPC Code
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ./shared_infra/main.tf
provider "aws" {
  region = var.region
}

resource "aws_vpc" "main" {
  cidr_block       = "10.0.0.0/16"
  tags = {
    Name = "main"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
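&lt;p&gt;Since the client stacks later discover this VPC by its &lt;code&gt;Name&lt;/code&gt; tag, you could optionally expose its ID as an output for easier verification. This file is not part of the original code; it's an illustrative addition:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ./shared_infra/outputs.tf (optional, illustrative)
output "main_vpc_id" {
  description = "ID of the shared VPC; client stacks look it up by its Name tag"
  value       = aws_vpc.main.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;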





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ./shared_infra/variables.tf
# Set the appropriate default region here.
variable "region" {
    type = "string"
    default = "us-east-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ./shared_infra/versions.tf
# specify different versions if appropriate. 

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.15.0"
    }

    random = {
      source  = "hashicorp/random"
      version = "3.1.0"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Client VPC Code
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ./client_infra/main.tf
provider "aws" {
  region = var.region
}

data "aws_vpc" "main" {
  filter {
    name   = "tag:Name"
    values = ["main"]
  }
}

resource "aws_vpc" "client_vpc" {
  cidr_block = var.vpc_cidr

  tags = {
    Name = "client_vpc"
  }
}

resource "aws_vpc_peering_connection" "this" {
  peer_vpc_id = aws_vpc.client_vpc.id
  vpc_id      = data.aws_vpc.main.id

  auto_accept = true

  tags = {
    Name = "VPC Peering"
  }
}

resource "aws_route" "requester" {
  route_table_id         = data.aws_vpc.main.main_route_table_id
  destination_cidr_block = aws_vpc.client_vpc.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.this.id
}

resource "aws_route" "accepter" {
  route_table_id         = aws_vpc.client_vpc.main_route_table_id
  destination_cidr_block = data.aws_vpc.main.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.this.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
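&lt;p&gt;To make the peering easier to verify after a run, you could also expose a few outputs from the client configuration. Again, this file is illustrative and not part of the original code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ./client_infra/outputs.tf (optional, illustrative)
output "client_vpc_id" {
  description = "ID of the newly created client VPC"
  value       = aws_vpc.client_vpc.id
}

output "peering_connection_id" {
  description = "ID of the peering connection back to the shared VPC"
  value       = aws_vpc_peering_connection.this.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;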





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ./client_vpc/variables.tf
variable "region" {
    type = string
    default = "us-east-1"
}

variable "vpc_cidr" {
    type = string
    default = "10.1.0.0/16"
}

variable "client_name" {
    type = string
    default = "cool-client"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ./client_infra/versions.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.15.0"
    }

    random = {
      source  = "hashicorp/random"
      version = "3.1.0"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deployment
&lt;/h2&gt;

&lt;p&gt;This code is simple enough, and to deploy it, all you have to do is give the client VPC a unique name and &lt;code&gt;terraform apply&lt;/code&gt; each piece individually. Easy peasy, right? Of course it is, for the one who wrote it! If you have many developers, though, expecting everyone to deploy this correctly each time they add a client is, unfortunately, a pipe dream. How can we set this up so developers can deploy easily and securely? I'll cover the overall approaches here, then we'll deep dive into them as we continue the series.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Managed Deployment Way
&lt;/h3&gt;

&lt;p&gt;This method offers a lot of flexibility and a much simpler deployment process. Using &lt;a href="https://spacelift.io" rel="noopener noreferrer"&gt;Spacelift&lt;/a&gt; and its &lt;a href="https://spacelift.io/blog/introducing-spacelift-blueprints" rel="noopener noreferrer"&gt;Blueprints&lt;/a&gt; feature, we'll be able to deploy a frontend for developers to use and a deployment pipeline that can deploy Terraform, CloudFormation, Pulumi, and more. We'll focus on Terraform here and build everything out, including the Spacelift resources, using Terraform. This method is incredibly flexible, but of course, it is not a free solution: the Blueprints feature requires an enterprise account. I'll show how to deploy everything before the Blueprints for those without an enterprise account, then add the Blueprints customization by the end. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcc90fxoqz53r600woqcm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcc90fxoqz53r600woqcm.png" alt="Diagram illustrating Peered VPCs in AWS and necessary Spacelift infrastructure to create self-service"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check out the next post in this series to see the managed way.&lt;/p&gt;

&lt;h3&gt;
  
  
  The AWS Code* Services + Terraform way
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Coming soon!
&lt;/h4&gt;

&lt;h3&gt;
  
  
  The Serverless + CloudFormation Way
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Coming Soon!
&lt;/h4&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
