DEV Community

shah-angita for platform Engineers

Integrating Terraform with CI/CD Pipelines

In recent years, there has been a significant shift towards automation of infrastructure deployment processes. One popular tool that has emerged as a key player in this space is Terraform, an open-source infrastructure as code (IaC) software tool developed by HashiCorp. This article will explore how Terraform can be integrated into continuous integration and delivery (CI/CD) pipelines using GitHub Actions as an example.

Preparing Terraform Configuration

Before integrating Terraform with your CI/CD pipeline, you must first create a configuration file that defines the desired infrastructure resources. For instance, here's a simple Terraform configuration file written in HCL (HashiCorp Configuration Language):

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}

This example creates an AWS EC2 instance with a specific AMI and instance type.
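Hard-coded values such as the instance type can be lifted into input variables, which makes the same configuration reusable across environments. Here's a sketch (the variable name and default are illustrative, not part of the example above):

```hcl
# Declare an input variable with a sensible default;
# override it with -var="instance_type=..." or a TF_VAR_instance_type
# environment variable at plan/apply time.
variable "instance_type" {
  description = "EC2 instance type to launch"
  type        = string
  default     = "t2.micro"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = var.instance_type
}
```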

Setting up GitHub Actions Workflow

Next, you'll want to set up a GitHub Actions workflow that runs Terraform commands based on triggers such as push events or pull requests. To do so, create a new YAML file under .github/workflows/ in your repository, e.g., named terraform.yml. Here's an example workflow definition:

name: Terraform Deploy

on:
  push:
    branches:
      - main

jobs:
  deploy:
    name: Deploy Infrastructure
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Check Terraform Version
        run: terraform --version | grep 'Terraform v1.'

      - name: Initialize Terraform
        run: terraform init -input=false
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Plan Changes
        run: terraform plan -input=false -out=planfile
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Apply Changes
        run: terraform apply -input=false planfile
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

In this example, we define a workflow that deploys infrastructure whenever changes are pushed to the main branch. It checks out the repository, installs Terraform, initializes the working directory, plans the changes, and applies the saved plan, with AWS credentials supplied to each Terraform step from repository secrets rather than being committed to the configuration.
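Many teams also run the plan step on pull requests, so proposed changes are reviewed before they reach main. A minimal sketch of the adjusted trigger section (same job layout as above):

```yaml
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
```

With both triggers enabled, the apply step can be guarded with `if: github.ref == 'refs/heads/main' && github.event_name == 'push'` so pull requests only produce a plan and never apply changes.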

Securing AWS Credentials

It's crucial to secure your AWS credentials when integrating Terraform with a CI/CD pipeline. In this example, we use GitHub Actions secrets to store AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY securely. Here's how you can set up these secrets:

  1. Navigate to your repository's Settings page.
  2. In the left sidebar, click "Secrets and variables", then "Actions".
  3. Add two new repository secrets named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
  4. Paste the respective AWS credentials into their value fields.
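The steps above can also be done from the command line with the GitHub CLI (a sketch; it assumes `gh` is installed and authenticated against your repository):

```shell
# Each command prompts for the secret value on stdin,
# so the credentials never appear in your shell history.
gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY
```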

After configuring the secrets, update your Terraform configuration by adding a backend block so that state is stored remotely instead of on the runner's local disk. Remote state is essential in CI, since each workflow run starts on a fresh machine:

terraform {
  backend "s3" {
    bucket = "<your-bucket-name>"
    key    = "path/to/terraform.tfstate"
    region = "us-east-1"
  }
}

Replace <your-bucket-name> with an existing S3 bucket name where you want to store the Terraform state file, then re-run terraform init so Terraform configures the remote backend. Make sure the IAM identity whose credentials the pipeline uses (the one behind AWS_ACCESS_KEY_ID) has permission to read from and write to this S3 bucket.
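Remote state on S3 pairs well with state locking, which prevents two concurrent pipeline runs from corrupting the state file. A sketch using the backend's DynamoDB locking support (the table name here is illustrative; the table must already exist with a string partition key named LockID):

```hcl
terraform {
  backend "s3" {
    bucket         = "<your-bucket-name>"
    key            = "path/to/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # illustrative; table needs a "LockID" string partition key
    encrypt        = true              # encrypt the state object at rest
  }
}
```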

Conclusion

Integrating Terraform with CI/CD pipelines enables infrastructure automation and ensures consistent deployments across various environments. By following best practices such as storing sensitive data securely and utilizing remote backends for state management, teams can build robust and scalable systems while maintaining control over their infrastructure deployment processes.
