
Speed up and save cost for Terraform Deployments with AWS CodeBuild's Lambda Compute

This article is all about how you can make your Terraform deployments faster and reduce your AWS bill using AWS CodeBuild's Lambda compute.

This article will help those who are using Terraform for their IaC on AWS and are aiming to optimise the AWS infrastructure provisioning process.

To go a step further for my readers, I have also shared a strategy built on the concepts of this blog that will help you improve your CI/CD pipelines.

Motivation

  • AWS recently released two updates related to CodeBuild, and these updates are the reason for writing this blog.

  • [ Nov 6, 2023 ] AWS CodeBuild now supports AWS Lambda compute

    • What does that mean: AWS CodeBuild users now have the option to employ AWS Lambda for building and testing their software packages.
    • This new compute mode choice offers quicker builds thanks to Lambda's almost instantaneous start-up times. Additionally, customers can benefit from cost savings since Lambda compute is billed per second of usage.

Until now, the only compute option for CodeBuild was EC2, but with this update we can use Lambda as compute.

  • [ Mar 19, 2024 ] AWS CodeBuild now supports custom images for AWS Lambda compute

    • What does that mean: AWS CodeBuild has enhanced its capabilities by enabling the utilization of container images stored in Amazon ECR repositories for projects configured to operate on Lambda compute.
    • Previously, users were restricted to leveraging managed container images supplied by AWS CodeBuild. These AWS-managed container images come equipped with support for tools like AWS CLI, AWS SAM CLI, and a range of programming language runtimes.

Until now, Lambda compute only allowed AWS-managed images, but with this update we can use custom images with Lambda compute in CodeBuild.

Prerequisites

  • Knowledge of Terraform
  • Basic working of CodeBuild
  • Understanding of core AWS and its operations

Let's get started.

Create ECR repository

  • As mentioned in the update, we need to create an ECR repository to host the custom image. I will name it terraform (a CLI sketch follows the screenshot below).

Terraform ecr repository
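If you prefer the CLI, a minimal sketch of the same step might look like this (the region and the cicd profile are the ones used later in this post; adjust both for your setup):

# Create the ECR repository that will hold the custom Terraform image
aws ecr create-repository \
  --repository-name terraform \
  --region eu-west-1 \
  --profile cicd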

Pull the Terraform docker image


docker pull hashicorp/terraform  


Push image to ECR repository

  • Retrieve an authentication token and authenticate your Docker client to your registry (replace the account ID with your own AWS account ID):
aws ecr get-login-password --region eu-west-1 --profile cicd | docker login --username AWS --password-stdin 123456789.dkr.ecr.eu-west-1.amazonaws.com
  • Tag your image so you can push it to this repository:
docker tag hashicorp/terraform 123456789.dkr.ecr.eu-west-1.amazonaws.com/terraform:latest
  • Push the custom terraform image:
docker push 123456789.dkr.ecr.eu-west-1.amazonaws.com/terraform:latest

ECR with terraform image

Create S3 buckets for Terraform code and backend

  • In this step, we will create two S3 buckets: one for storing the Terraform code and another as the Terraform backend for storing state files. Both use the default settings except versioning, which I will enable on both buckets (a CLI sketch follows the screenshots below).

  • I will call them terraform-deployment-test-lambda-compute and terraform-deployment-test-lambda-compute-backend.

s3 bucket for tf

s3 bucket for tf backend
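For reference, a rough CLI equivalent of these console steps could look like the sketch below; the bucket names and region come from this walkthrough, and the LocationConstraint is required for any region other than us-east-1:

# Bucket for the Terraform source zip
aws s3api create-bucket \
  --bucket terraform-deployment-test-lambda-compute \
  --region eu-west-1 \
  --create-bucket-configuration LocationConstraint=eu-west-1 \
  --profile cicd
aws s3api put-bucket-versioning \
  --bucket terraform-deployment-test-lambda-compute \
  --versioning-configuration Status=Enabled \
  --profile cicd

# Bucket for the Terraform backend (state files)
aws s3api create-bucket \
  --bucket terraform-deployment-test-lambda-compute-backend \
  --region eu-west-1 \
  --create-bucket-configuration LocationConstraint=eu-west-1 \
  --profile cicd
aws s3api put-bucket-versioning \
  --bucket terraform-deployment-test-lambda-compute-backend \
  --versioning-configuration Status=Enabled \
  --profile cicd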

Create a CodePipeline

  • To automate the process and demonstrate the strategy, I will create a small pipeline.

codepipeline create

  • Add source stage as s3.
  • Select terraform-deployment-test-lambda-compute.
  • Give an arbitrary name for the source object. Since I will be uploading a zip file, I will use tf.zip.

codepipeline source stage

Create CodeBuild Project with Lambda as Compute

  • After creating the source stage for the pipeline, you will see the next screen to add the build stage. Select CodeBuild as the action provider and create a new project, or choose an existing one if you already have a CodeBuild project.

Pipeline screen

  • While creating the CodeBuild project, use terraform-deployment-test-lambda-compute as the project name.
  • CodePipeline's source stage will provide the input artifact (tf.zip) for CodeBuild.

codebuild creation

  • For Environment, set Environment image to Custom, Compute to Lambda, and Environment type to Linux Lambda.

codebuild Environment

  • Choose Amazon ECR as the image registry and select the terraform repository created in the first step.

  • Choose ECR Image as latest.

  • For Service role, keep the default setting of creating a new service role (a CLI sketch of the whole project configuration follows the screenshot below).

codebuild environment
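For readers who script their setup, here is a hedged sketch of what an equivalent aws codebuild create-project call might look like. The service role ARN is hypothetical, and the environment values (LINUX_LAMBDA_CONTAINER, BUILD_LAMBDA_2GB) reflect my understanding of the CodeBuild API for Lambda compute, so verify them against the current documentation; the buildspec itself is added inline in the next step:

# Sketch only: create the Lambda-compute CodeBuild project from the CLI
aws codebuild create-project \
  --name terraform-deployment-test-lambda-compute \
  --source type=CODEPIPELINE \
  --artifacts type=CODEPIPELINE \
  --environment "type=LINUX_LAMBDA_CONTAINER,computeType=BUILD_LAMBDA_2GB,image=123456789.dkr.ecr.eu-west-1.amazonaws.com/terraform:latest,imagePullCredentialsType=SERVICE_ROLE" \
  --service-role arn:aws:iam::123456789:role/codebuild-terraform-lambda-service-role \
  --region eu-west-1 \
  --profile cicd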

  • For the buildspec, choose insert build commands and add the following YAML. All it does is initialise Terraform and apply the Terraform configuration.
version: 0.2

phases:
  build:
    commands:
      - terraform init
      - terraform apply -auto-approve -no-color -input=false


buildspec

lambda codebuild project config

  • Leave everything else as default and select Create build project at the bottom. After creation, you will return to the CodePipeline screen with the CodeBuild project name auto-filled. Select Next.

codebuild project created screen

  • As the next step, skip the deploy stage.

skip deploy stage

  • After Reviewing, select Create to create the Pipeline.

Modify the service role for the CodeBuild project

  • Go to CodeBuild, find the terraform-deployment-test-lambda-compute project, and click on the service role link created automatically by CodeBuild.

codebuild service link role policies

  • Since CodeBuild will deploy Terraform resources across the account in this case, we need to attach the AdministratorAccess policy so that it can deploy any kind of resource.

  • This method makes even more sense for cross-account deployments, because in that case you can restrict permissions to only assuming a role in the account where Terraform deploys resources (a hedged policy sketch follows).
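As a sketch of that restricted setup, the CodeBuild service role could carry a policy like the one below instead of AdministratorAccess; the target account ID and role name are placeholders for whatever role Terraform assumes in the deployment account:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<TARGET_ACCOUNT_ID>:role/terraform-deploy-role"
    }
  ]
}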

Terraform code


terraform {
  backend "s3" {
    bucket = "terraform-deployment-test-lambda-compute-backend"
    region = "eu-west-1"
    key    = "terraform-deployment.tfstate"
  }
}

resource "aws_s3_bucket" "deploy-bucket" {
  bucket = "test-lambda-compute-backend-test-bucket"

  tags = {
    Project     = "For CI CD deploy test"
    Environment = "Dev"
  }
}

resource "aws_s3_bucket" "deploy-bucket-two" {
  bucket = "test-lambda-compute-backend-test-bucket-two"

  tags = {
    Project     = "For CI CD deploy test"
    Environment = "Dev"
  }
}

  • Here I am using the backend bucket for storing the Terraform state.

  • In this Terraform code, I am creating two S3 buckets.

  • You can of course divide your code into modules just like any other regular Terraform project.

zip -r tf.zip ./     
  • Then I zip this Terraform code into a file called tf.zip and upload it to the S3 bucket (the upload command is sketched below the screenshot). This upload triggers CodePipeline, which runs terraform apply inside CodeBuild on Lambda compute and creates the two S3 buckets as per the configuration.

upload tf.zip
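Roughly, the same zip-and-upload step from the CLI would be the following; the object key has to match the file name configured in the pipeline's source stage (tf.zip):

# Package the Terraform code and push it to the source bucket
zip -r tf.zip ./
aws s3 cp tf.zip s3://terraform-deployment-test-lambda-compute/tf.zip --profile cicd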

Pipeline in Action

  • After uploading tf.zip, the pipeline gets automatically triggered and runs.

pipeline state

  • We can also see the action details of CodeBuild, where it initialises the S3 backend for the Terraform state and creates the two S3 buckets.

terraform init and backend

terraform apply

  • We can confirm the created buckets in the console too (or from the CLI, as shown below).

s3 buckets
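A quick way to check the same thing from the CLI, for example:

# List the buckets created by this pipeline run
aws s3 ls --profile cicd | grep test-lambda-compute-backend-test-bucket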

EC2 compute vs Lambda compute

  • The documentation says Lambda compute provides faster builds and saves cost, so I wanted to test the same operation with an EC2 compute CodeBuild project.

  • The buildspec commands for the CodeBuild project with EC2 compute are as follows. They are not identical, because with Lambda our Docker image comes from HashiCorp and already contains the terraform binary, whereas with EC2 compute we need to install Terraform first.

version: 0.2
env:
  variables:
    TERRAFORM_VERSION: 1.5.4

phases:
  install:
    commands:
      - echo "======== install terraform ========"
      - wget --quiet https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip
      - unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip
      - mv terraform /usr/local/bin/

  build:
    commands:
      - terraform init -no-color -input=false
      - terraform apply -auto-approve -no-color -input=false


Ec2 project

  • Edit the EC2 CodeBuild project's service role and add the AdministratorAccess policy, as it will be used to create resources.

  • All the other steps remain the same (except the buildspec file); just create another CodeBuild project with EC2 compute and swap it into the pipeline's build action in place of the Lambda compute CodeBuild project.

edit the pipeline

add ec2 codebuild project

  • Delete the buckets by commenting out the bucket resources in the Terraform code, regenerating tf.zip, and uploading it to S3.
  • Once the buckets are deleted, uncomment the Terraform code, regenerate tf.zip, and upload it to S3 again. The pipeline will run.

ec2 state

  • We can see that even though Lambda compute is configured with a default memory of 2 GB while EC2 compute is configured with a default memory of 3 GB, the Lambda-based deployment (below image) is still faster in every sense.

lambda deployment state

Limitations of Lambda as compute

limitations

Lambda compute does not support tools that need root permissions, Docker, or writing outside /tmp; builds time out after 15 minutes; and it lacks the LINUX_GPU_CONTAINER environment type and features like caching, VPC connectivity, EFS, and SSH access.

Bonus: Strategy

  • As you have seen, if your infrastructure deployment workload takes less than 15 minutes, CodeBuild's Lambda compute with CodePipeline can be used to create a fully serverless CD pipeline.

  • Such a pipeline can also deploy infrastructure cross-account (a provider sketch follows this list). You can go further by adding alerts on failure and success, adding manual approval actions, handling approvals with Lambda functions, and integrating with Slack using CodePipeline features.

  • If your workload grows beyond 15 minutes, you can split the CodeBuild steps into multiple stages (subsections) within CodePipeline.
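As a rough sketch of the cross-account idea, the Terraform AWS provider in the deployed code could assume a role in the target account, so the CodeBuild service role itself only needs sts:AssumeRole on that role. The role ARN below is a placeholder:

provider "aws" {
  region = "eu-west-1"

  # Assume a deployment role in the target account instead of
  # granting the CodeBuild service role broad permissions directly
  assume_role {
    role_arn = "arn:aws:iam::<TARGET_ACCOUNT_ID>:role/terraform-deploy-role"
  }
}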

From DevOps Perspective

  • With this feature, the possibilities for running operations on code-based events become endless (considering the limitations of Lambda, of course).

  • You can not only build application code artifacts but also deploy infrastructure more quickly and cost-effectively thanks to AWS Lambda.

I would like to hear how other members are planning to use this update. Please try it out, and feel free to post your feedback or any questions.

Top comments (2)

Darryl Ruggles

I will likely take another look at using Codebuild now. Supporting lambda compute and custom images makes it a lot more attractive. Thanks for the article!

Jatin Mehrotra

Yes, it not only makes it attractive but also flexible: you can pass information between stages using CodePipeline, making it very powerful to integrate with Slack.