Thakur Rishabh Singh

Deploying Infrastructure on AWS with Terraform and AWS CodePipeline
(#CloudGuruChallenge Series) (Part 1/3)

This post is about creating a CI/CD pipeline on AWS with CodePipeline that deploys infrastructure on AWS using Terraform.

Contents

  1. Project Overview
  2. Setting up Terraform
  3. Setting up AWS CodePipeline
    • Source stage
    • Terraform Plan step
    • Manual Approval step
    • Terraform Apply stage
    • Deploy stage
  4. Final View of the Pipeline
  5. Conclusion

1. Project Overview

This project is the first part of the series #CloudGuruChallenge – Event-Driven Python on AWS. Here we deploy an S3 bucket and a Lambda function. The Lambda function will be part of an AWS Step Functions workflow, which will be developed in the next part of this series, while the S3 bucket is used to store the Lambda deployment package.
To automate the process, Terraform is used for IaC (Infrastructure as Code) and AWS CodePipeline for CI/CD.

2. Setting up Terraform

The following are the required steps to start working with Terraform on AWS:

  • Create an S3 bucket which will store the Terraform state file.
  • Create a DynamoDB table with on-demand capacity and a partition key named LockID (type String); a CLI sketch follows below.
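
If you want to script these two steps, a minimal AWS CLI sketch is shown below (the bucket name and region are placeholders; enabling versioning on the state bucket is optional but recommended):

aws s3 mb s3://YOUR-BUCKET-NAME --region us-east-1

# Optional: keep a history of state files
aws s3api put-bucket-versioning --bucket YOUR-BUCKET-NAME \
    --versioning-configuration Status=Enabled

aws dynamodb create-table \
    --table-name terraform-state-lock \
    --attribute-definitions AttributeName=LockID,AttributeType=S \
    --key-schema AttributeName=LockID,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST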

The above steps configure Terraform with S3 as the backend. The provider.tf and backends.tf files are shown below.

provider.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}



backends.tf

terraform {
  backend "s3" {
    bucket = "YOUR-BUCKET-NAME"
    key    = "terraform.tfstate"
    region = "YOUR-REGION-NAME"
    dynamodb_table = "terraform-state-lock"
  }
}
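
With the backend files in place, you can sanity-check the configuration locally before wiring it into the pipeline (this assumes your AWS credentials are already configured):

terraform init
terraform validate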

Create a sample lambda_function.py (a minimal placeholder handler is sketched below) and zip it in the same directory as the .tf files with the name lambda_function_payload.zip. After that, the IaC for S3 and Lambda can be written as shown in S3.tf and Lambda.tf:
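
A minimal sketch of such a placeholder handler (the real function code is pushed later by the pipeline's Deploy stage, so this only needs to be valid Python):

lambda_function.py

import json

def lambda_handler(event, context):
    # Placeholder handler; the Deploy stage replaces this code later
    # via aws lambda update-function-code.
    print("Received event:", json.dumps(event, default=str))
    return {"statusCode": 200, "body": json.dumps("Hello from Lambda!")}

Zip it alongside the .tf files with, for example, zip lambda_function_payload.zip lambda_function.py.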

S3.tf

resource "aws_s3_bucket" "lambda_s3_buckets" {
    bucket = "YOUR-BUCKET-NAME"
    acl    = "private"
    force_destroy = true

}

resource "aws_s3_bucket_object" "object" {
    bucket = "YOUR-BUCKET-NAME"
    key    = "YOUR-PATH-TO-STORE-LAMBDA-DEPLOYMENT"
    source = "lambda_function_payload.zip"

    depends_on = [
    aws_s3_bucket.lambda_s3_buckets,
    ]
}

Lambda.tf

resource "aws_lambda_function" "state_machine_lambdas" {
    function_name = "YOUR-FUNCTION-NAME"
    role          = "ROLE-ARN"
    handler       = "lambda_function.lambda_handler" 
    s3_bucket = "BUCKET-NAME_WITH-LAMBDA-DEPLOYMENT"
    s3_key    = "PATH-TO-LAMBDA-DEPLOYMENT"

    runtime = "python3.8"
    depends_on = [
    aws_s3_bucket_object.object,
    ]
}
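
The ROLE-ARN placeholder must point to an IAM role that the Lambda service can assume. If you want to manage that role in Terraform as well, a minimal sketch looks like this (the resource and role names here are illustrative):

resource "aws_iam_role" "lambda_exec" {
  name = "state-machine-lambda-role"

  # Trust policy that lets the Lambda service assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

# Basic execution policy so the function can write CloudWatch logs
resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = aws_iam_role.lambda_exec.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

With this in place, role = aws_iam_role.lambda_exec.arn can replace the hard-coded ARN.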

Store all the .tf files in a folder named terraform. Note that the buildspec files below reference this folder via -chdir, so the folder name and its case must match exactly.

3. Setting up AWS CodePipeline

AWS CodePipeline will be used for CI/CD (Continuous Integration/Continuous Delivery). The AWS free tier allows one free pipeline per month. Our pipeline consists of five stages: Source, Build (Terraform Plan), Build (Manual Approval), Terraform Apply, and Deploy.

Source Stage

Here GitHub is used as the source repository for the pipeline. The configuration for this stage is shown below.

Stage 1: Source Configuration

  1. GitHub (Version 2) is selected as the source provider. You can choose others, such as AWS CodeCommit. For GitHub you need to create a connection, which is a straightforward process through the console.
  2. The repository name and branch name must be set; the pipeline is then triggered whenever there is a code change on that specific branch of the repository.

Build Stage (Terraform Plan Step)

The remainder of the pipeline consists of two builds (Terraform Plan and Terraform Apply), a manual approval step, and a deploy step. The first build is for terraform plan. AWS CodeBuild is used for creating the build projects. To set this up, add a new action group under the build stage in the pipeline, as shown below.

Adding a build action to CodePipeline

The action's project must be an AWS CodeBuild project, which can be created by clicking Create project; this opens the wizard shown below.

CodeBuild Project For Terraform Plan

The important configuration choices above are:

  • selecting 3 GB of memory with 2 vCPUs (which is included in the free tier)
  • the service role ARN (which gives Terraform permission to provision AWS resources); it must contain all the permissions Terraform needs, such as access to S3, Lambda, etc.
  • the environment variable TF_COMMAND_P (set to plan), which is used in the buildspec file
  • the path to the buildspec.yml file, which contains the build commands

After configuring the above, the Git repository must contain a buildspec file with the necessary commands. It is shown below.

Terraform_Plan.yml

version: 0.1

phases:
  install:
    commands:
      - "apt install unzip -y"
      - "wget https://releases.hashicorp.com/terraform/0.14.10/terraform_0.14.10_linux_amd64.zip"
      - "unzip terraform_0.14.10_linux_amd64.zip"
      - "mv terraform /usr/local/bin/"
  pre_build:
    commands:
      # Initialize against the S3 backend; the directory passed to -chdir
      # must match the folder holding the .tf files (here: terraform)
      - terraform -chdir=terraform init -input=false
  build:
    commands:
      # TF_COMMAND_P is set to "plan" in the CodeBuild environment variables
      - terraform -chdir=terraform $TF_COMMAND_P -input=false -no-color
  post_build:
    commands:
      - echo terraform $TF_COMMAND_P completed on `date`

Manual Approval Step

Create a manual approval action after the Terraform Plan action, as shown below.

Manual Approval Build Stage

You must create an SNS topic with a subscription to your email address to receive notifications about approvals and rejections; a CLI sketch follows below.
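
If you prefer to create the topic from the command line, a minimal sketch looks like this (the topic name, account ID, and email address are placeholders; the subscription must be confirmed from your inbox):

aws sns create-topic --name pipeline-approvals

aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:123456789012:pipeline-approvals \
    --protocol email \
    --notification-endpoint you@example.com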

Terraform Apply Stage

Follow a process similar to that for Terraform Plan to create the Terraform Apply build stage, with the following buildspec file (here the environment variable is TF_COMMAND_A, set to apply).

Terraform_Apply.yml

version: 0.1

phases:
  install:
    commands:
      - "apt install unzip -y"
      - "wget https://releases.hashicorp.com/terraform/0.14.10/terraform_0.14.10_linux_amd64.zip"
      - "unzip terraform_0.14.10_linux_amd64.zip"
      - "mv terraform /usr/local/bin/"
  pre_build:
    commands:
      - terraform -chdir=terraform init -input=false
  build:
    commands:
      # TF_COMMAND_A is set to "apply"; -auto-approve skips the interactive prompt
      - terraform -chdir=terraform $TF_COMMAND_A -input=false -auto-approve
  post_build:
    commands:
      - echo terraform $TF_COMMAND_A completed on `date`

Deploy Stage

The final stage is Deploy, where CodeBuild is used to deploy code to Lambda via the S3 bucket. Create a folder in your repo called lambda_code and store your lambda_function.py in it. Then create a new CodeBuild project; all the steps for the CodeBuild project are the same as before. The buildspec file for the project is shown below.

deploy_state_machine_code.yml

version: 0.1

phases:
  pre_build:
    commands:
      # Package the function code into a zip for upload
      - mkdir -p ./lambda_code/zipped
      - zip -r -j lambda_code/zipped/lambda-function-payload.zip lambda_code/*
  build:
    commands:
      # Upload the zip to S3, then point the Lambda function at the new object;
      # YOUR-S3-KEY-TO-FILE must reference the zip object itself, e.g.
      # YOUR-S3-KEY/lambda-function-payload.zip
      - aws s3 sync ./lambda_code/zipped s3://YOUR-BUCKET-NAME/YOUR-S3-KEY --delete
      - aws lambda update-function-code --function-name YOUR-FUNCTION-NAME --s3-bucket YOUR-BUCKET-NAME --s3-key YOUR-S3-KEY-TO-FILE
  post_build:
    commands:
      - echo state_machine_code was deployed to lambda from S3 bucket on `date`

4. Final View of the Pipeline

After the above steps the pipeline should look something like the one shown below.

Final View of the Pipeline

Pushing to GitHub will trigger the pipeline, and the build process can be viewed through the Details option on each build action.

5. Conclusion

In this post I have covered how to deploy infrastructure on AWS with Terraform through AWS CodePipeline. In the next part of the series, I'll show how to set up an AWS Step Functions workflow triggered by a CloudWatch event. Stay tuned!

Top comments (4)

Ashish Patel

This is a very useful article. I am following the same steps, but I am getting the error below.

"COMMAND_EXECUTION_ERROR: Error while executing command: terraform -chdir=Terraform init -input=false. Reason: exit status 1"

dev-to-uploads.s3.amazonaws.com/up...

mconnaker

Hey Ashish, I was able to discover the problem here. In your yml files, remove the -chdir=Terraform. It is attempting to change directory and is unable to find it. Removing this allowed me to run plan and apply successfully!

gkrrepo

Hi,

I have multiple builds. In the first build, the Terraform files from my repo get deployed. When I create a new .tf file and upload it to the Git repo, the trigger is activated and the plan build succeeds, while the apply fails with the error below:

Error: creating Amazon S3 (Simple Storage) Bucket (pipeline-artifacts-gkr-13): BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: FMVJKZJ5BSBQJHFH, host id: 8xzIzU1j5KQ7KjYVM6kDIB9vnOmFqIpMXqPjWyjN/DejK7jTi71j7LgfVPy56EPMYxrjVxCwsOM=

on g.tf line 1, in resource "aws_s3_bucket" "codepipeline_artifacmts_gkr_c":
1: resource "aws_s3_bucket" "codepipeline_artifacmts_gkr_c" {

kelosol

Trying to follow this...

... Create a sample lambda_function.py and zip it

Is there a copy of this file "lambda_function.py" ?

Thanks