
Using CodeBuild and CodePipeline to Deploy AWS Applications Easily

Learn how these two AWS features work together in perfect harmony. Author: Jay Allen

One of the core challenges of modern microservices architecture is deployment. Application development teams require repeatable methods for shipping application changes to customers at scale.

In this article, we'll look at two key features in AWS - AWS CodeBuild and AWS CodePipeline - that you can use to build, test, package, and deploy your applications. We'll also walk through a hands-on example using a sample REST API app written in Flask. Finally, we'll talk a bit about how TinyStacks makes using these features of AWS even easier than ever.

Continuous Integration and Continuous Deployment

Shipping software in a repeatable and reliable manner at scale is a complex effort. At a minimum, achieving this goal requires:

  • Building, validating, and packaging a new version of our application for deployment; and
  • Deploying our application through various stages (dev, test, stage, and production) and running it through various manual and automated validation procedures before making it available to users.

Continuous integration (CI) is the process of preparing a deployable application artifact - usually in an automated fashion triggered by a check-in to a code repository such as Git. Continuous deployment (CD) is the process of releasing this artifact into a given release stage.

AWS CodeBuild

Our first challenge, then, is how to trigger the creation of a new artifact for deployment whenever we check in a releasable change. At a minimum, this requires:

  • Assembling the executable artifacts themselves. For languages such as Java and C#, this means compiling our applications into executable bytecode. For scripted languages, it might mean writing out dynamically generated configuration files and downloading required dependent packages from a package manager repository (such as NPM for Node.js or Pip for Python).
  • Validating our code. This most often involves running unit tests to ensure our code changes are sound and don't introduce nasty regressions before deployment.
  • Packaging the application. Depending on the target runtime environment, packaging might mean creating a ZIP file of our contents or preparing a Docker container containing a fully configured runtime environment and our application code.

All of this, of course, requires some sort of computing capacity - such as Amazon EC2 instances or Docker containers - that's configured with the build tools and environments our application framework requires. In the good ol' days, we'd have to create and configure all of this ourselves and keep it running 24/7 in our data centers - often at considerable expense.

Fortunately, we now have AWS CodeBuild, a continuous integration service that makes automating this process easy and efficient. CodeBuild provides managed compute capacity that we can use to orchestrate a build and packaging process of arbitrary complexity.

Using CodeBuild, we can define a CI process that triggers automatically upon a Git check-in. CodeBuild runs a pre-built Linux or Windows build environment as a Docker image that is terminated immediately after our build completes. This means we only pay for exactly the compute capacity our build requires. We can even supply our own Docker container if our build requires specialized tooling or configuration. Build artifacts can be written out to Amazon S3 and the detailed output from our build process sent to Amazon CloudWatch Logs for analysis and debugging.
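Builds are described declaratively in a YAML buildspec file checked into your repository. As a quick illustration of the format - this is a hypothetical example, not the buildspec we'll use later - a minimal buildspec for a Python project might look like this:

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      # Install dependencies and run unit tests before packaging
      - pip install -r requirements.txt
      - python -m pytest
  build:
    commands:
      # Bundle the application for deployment
      - zip -r app.zip .

artifacts:
  files:
    # CodeBuild uploads the listed files to the configured S3 location
    - app.zip
```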

AWS CodePipeline

While CodeBuild handles our CI piece, we still need a way to deploy our packaged bits. This involves standing up all of the AWS services required to deploy our application. It also means we need the flexibility to deploy to any stage in our development process (e.g., dev/test/stage/prod).

AWS CodePipeline is AWS's comprehensive continuous delivery solution. With CodePipeline, you can chain together multiple complex tasks into a single pipeline that builds, tests, and deploys your application. CodePipeline integrates with multiple AWS and third-party services, including GitHub, AWS CodeCommit, CodeBuild, AWS CloudFormation, Amazon S3, and many others.

Walkthrough: Deploying a Flask App

Let's see how you might combine CodeBuild and CodePipeline with a real-world example. The following walkthrough assumes you already have an AWS account with the AWS CLI installed locally and configured.

We'll use a basic CRUD (Create/Read/Update/Delete) REST API application written in the Python Flask framework. This app is written and maintained by TinyStacks; you can access it and read a full description of the app and its REST API on GitHub.

As I mentioned, CodePipeline integrates with a plethora of other services. For today, we'll keep our example simple: our pipeline will consist of two CodeBuild steps:

  • Our build step uses a Dockerfile to build a Docker image from a base Linux Python image. It then pushes this image into an Amazon Elastic Container Registry (ECR) repository in our AWS account.
  • Our release step takes the Docker image from our ECR repository and creates a task for it on an Amazon Elastic Container Service (ECS) cluster running in our account.

(If you're unfamiliar with Docker and the structure of a Dockerfile, go check out Francesco Ciulla's great article and video where he dissects a Dockerfile line by line.)
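For reference, a typical Dockerfile for a small Flask app looks something like the sketch below; the actual Dockerfile in the TinyStacks repo may differ in its details:

```dockerfile
# Start from a slim Python base image
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy in the application code
COPY . .

# Flask's default port
EXPOSE 5000

CMD ["python", "app.py"]
```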

Setting Up The Prerequisites

To run our pipeline, we'll first need to stand up some AWS infrastructure, as documented in the README file for our Flask application. We'll do this using a combination of the AWS CLI and the AWS console.

Before we start, we'll need an Amazon ECR repository for storing Docker images. If you don't already have one in your AWS account, create a repository using the AWS CLI. (We use the name ts-flask-test here so that it matches the IAM policy and environment variables later in this walkthrough.)

```bash
aws ecr create-repository --repository-name "ts-flask-test"
```
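Take note of the repositoryUri field in the command's output - it forms the basis of the environment variables we'll configure later. You can look it up again at any time:

```bash
# Retrieve the URI of the repository we just created
aws ecr describe-repositories --repository-names "ts-flask-test" \
  --query "repositories[0].repositoryUri" --output text
```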

You'll also need an ECS cluster, as well as a service definition that will run your Docker container. We'll provide some pointers for how to set this up after we stand up the first part of our pipeline.

Create the CodeBuild Service Role

Next, let's create our IAM service role for our CodeBuild projects. The service role will be assumed by CodeBuild when it runs our build projects, giving it permission to access AWS resources in our account on our behalf. In our case, we need to grant access to the ECR repository we just created, and to the ECS cluster that we'll create later.

In the step after this, we'll create our pipeline in the AWS console. We'll create our CodeBuild IAM role separately so we can reuse it across our two CodeBuild projects.

First, we need to create a service role that CodeBuild has permission to assume. To do this, save the following JSON trust policy as codebuild-trust-policy.json:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "codebuild.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Then, create the role from your command line with the following call:

```bash
aws iam create-role --role-name codebuild-flask-role --assume-role-policy-document file://codebuild-trust-policy.json
```

Next, we need to add an IAM role policy that specifies the permissions our role grants. Save the following file to disk as codebuild-policy.json. This policy is designed to provide the minimum permissions required to run your CodeBuild tasks. Be sure to replace AWS_ACCOUNT_ID with your actual account ID throughout, and adjust the region (us-west-2) if you're working elsewhere.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:GetRepositoryPolicy",
                "ecr:DescribeRepositories",
                "ecr:ListImages",
                "ecr:DescribeImages",
                "ecr:BatchGetImage",
                "ecr:GetLifecyclePolicy",
                "ecr:GetLifecyclePolicyPreview",
                "ecr:ListTagsForResource",
                "ecr:DescribeImageScanFindings"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Stmt1629150505031",
            "Action": [
                "ecr:PutImage",
                "ecr:TagResource",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:ecr:us-west-2:AWS_ACCOUNT_ID:repository/ts-flask-test"
        },
        {
            "Sid": "EcsManageTasks",
            "Action": [
                "ecs:ListTasks",
                "ecs:RegisterContainerInstance",
                "ecs:RegisterTaskDefinition",
                "ecs:RunTask",
                "ecs:StartTask",
                "ecs:StopTask",
                "ecs:TagResource"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:ecs:us-west-2:AWS_ACCOUNT_ID:cluster/default"
        },
        {
            "Sid": "UpdateFlaskService",
            "Action": [
                "ecs:UpdateService"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:ecs:us-west-2:AWS_ACCOUNT_ID:service/default/ts-flask-test"
        }
    ]
}
```

You can then attach this policy to the role you just created with the following command:

```bash
aws iam put-role-policy --role-name codebuild-flask-role --policy-name CodeBuildFlaskPolicy --policy-document file://codebuild-policy.json
```
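If you want a sanity check, you can read the policy back with a standard IAM call:

```bash
# Print the inline policy we just attached to the CodeBuild role
aws iam get-role-policy --role-name codebuild-flask-role --policy-name CodeBuildFlaskPolicy
```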

Fork the GitHub Repo

Before creating your pipeline, you'll want your own copy of the Flask sample application from the TinyStacks repo. This is necessary because you'll connect CodePipeline to GitHub and use this repo as your build source. To create a fork, navigate to the repo and click the **Fork** button in the upper right corner.


Create the Pipeline

While you can create a pipeline from the command line, this requires constructing a JSON parameters file whose structure is somewhat complex and daunting. So for now, we'll use the console.
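For the curious, the JSON you'd feed to aws codepipeline create-pipeline has roughly this shape - a heavily trimmed skeleton with placeholder names, not something you need for this walkthrough. Each "..." stands in for a full action definition with its own actionTypeId, configuration, and artifact wiring:

```json
{
  "pipeline": {
    "name": "flask-pipeline",
    "roleArn": "arn:aws:iam::AWS_ACCOUNT_ID:role/my-pipeline-service-role",
    "artifactStore": { "type": "S3", "location": "my-artifact-bucket" },
    "stages": [
      { "name": "Source",  "actions": [ "..." ] },
      { "name": "Build",   "actions": [ "..." ] },
      { "name": "Release", "actions": [ "..." ] }
    ]
  }
}
```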

In the AWS Console, navigate to CodePipeline. Then, click Create pipeline.


On the next page, give your pipeline a sensible name. For Service role, choose New service role. This is the service role for the pipeline itself, which is separate from the service roles used for the CodeBuild projects you'll create shortly. We don't need any special permissions here, so we'll let the AWS console create this for us.


In the next step, you'll need to connect to your GitHub repo to pull the source code into your pipeline. Under Source provider, select GitHub (Version 2). Under Connection, if you haven't used your GitHub account with AWS before, click the Connect to GitHub button and connect your AWS and GitHub accounts; otherwise, select an existing connection. Finally, under Repository name, select the aws-docker-templates-flask repository that you just forked to use it as the source for your builds.

Once done, click the Next button.


Configuring the Builds

Configure the Build Step

On the next screen, you'll configure your build job. Remember that the sample repo contains two CodeBuild buildspec files, build.yml and release.yml, so we will create two CodeBuild projects (a build project and a release project) that execute one after the other in separate stages.

From the Build provider drop-down, select AWS CodeBuild. To create the first build step, next to Project name, select Create project. A window will pop up for creating the CodeBuild project.

There are several configuration elements here that it's important we get right. First, in Project name, give your project a sensible name, such as FlaskBuildStep.

Next, scroll down to Environment. Here, you specify the Docker image that CodeBuild is going to use to build your project. Leave the Environment image field set to Managed image and select an Amazon Linux 2 build environment as depicted below. Make sure you also select the Privileged flag, as this is required to build a Docker container inside of CodeBuild's own Docker container.


Next, under Service role, select Existing service role and choose the service role that you created in an earlier step using the AWS CLI.


Next, expand the Additional configuration section to configure the environment variables for your build. This job uses the build.yml file we discussed earlier, which defines two environment variables (example values follow the list):

  • ECR_ENDPOINT: The address of the Amazon ECR registry to publish to. This variable takes the format: aws_account_id.dkr.ecr.region.amazonaws.com
  • ECR_IMAGE_URL: The registry address plus the name of the container image you are publishing. This should take the format: aws_account_id.dkr.ecr.region.amazonaws.com/ts-flask-test
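For example, with a hypothetical account ID of 123456789012 and the us-west-2 region used in the IAM policy above, the values would be:

```bash
ECR_ENDPOINT=123456789012.dkr.ecr.us-west-2.amazonaws.com
ECR_IMAGE_URL=123456789012.dkr.ecr.us-west-2.amazonaws.com/ts-flask-test
```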


Next, under Buildspec, we need to specify the CodeBuild YAML file that will be used in this build job. By default, CodeBuild looks for a file named buildspec.yml. Our file is called build.yml, so we will specify that instead.
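The authoritative build.yml ships with the sample repo. Conceptually, a Docker build-and-push buildspec like it does something along these lines (a sketch, not the repo's exact file):

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      # Authenticate the Docker client against our ECR registry
      - aws ecr get-login-password | docker login --username AWS --password-stdin $ECR_ENDPOINT
  build:
    commands:
      # Build the Flask image from the repo's Dockerfile
      - docker build -t $ECR_IMAGE_URL:latest .
  post_build:
    commands:
      # Publish the image to ECR
      - docker push $ECR_IMAGE_URL:latest
```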


Just one more configuration option and you're done! Under Logs, specify a CloudWatch Logs group name and stream name (e.g., FlaskTest and FlaskTest1). Then, click Continue to CodePipeline.


Back at the Add build stage screen, click Next. The next screen is for the Add deploy stage step. This is useful when you want to use CodePipeline to finish the deployment using another AWS technology such as AWS CloudFormation or AWS CodeDeploy. In our project, we will use CodeBuild as our deployment mechanism, so simply click Skip deploy stage, and then click Skip.

On the Review screen, click Create. This will kick off your pipeline with your first CodeBuild project. If everything has been configured correctly, you'll see the pipeline will successfully build your container and push it to your ECR repository. If you click on Details for the build, you'll see a rich set of build logs, as well as a status by build phase.


Standing Up Your ECS Cluster and Service

The release step we add below will update our service with a new Docker image whenever we revise our application. Before this will work, you'll need to stand up an ECS cluster running an initial version of your service.

Standing up an ECS cluster is beyond the scope of this article. However, you can start a Fargate cluster and create a service with the Docker image you just uploaded to your ECR repository pretty easily. Zayd has all of the details in his article on creating ECS clusters on AWS, so check it out if you need assistance.
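If you'd like a head start from the command line, the rough shape of a minimal Fargate setup is sketched below. The subnet and security group IDs are placeholders, and the sketch assumes you've already registered a task definition named ts-flask-test that points at the image in your ECR repository:

```bash
# Create the cluster (the IAM policy above assumes the name "default")
aws ecs create-cluster --cluster-name default

# Create a service running one instance of the task definition
aws ecs create-service \
  --cluster default \
  --service-name ts-flask-test \
  --task-definition ts-flask-test \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234],assignPublicIp=ENABLED}"
```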

Adding the Release Step

This completes our build step. However, we still need to add a stage to our pipeline that updates our service on ECS. As discussed earlier, this involves creating another CodeBuild project - except this time, it will run the release.yml file.
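As with build.yml, the authoritative release.yml ships with the sample repo. Conceptually, it does something like the following sketch (the exact file may differ; the environment variables are the ones described below):

```yaml
version: 0.2

phases:
  build:
    commands:
      # Authenticate against ECR, then promote the image from the previous stage tag to the new one
      - aws ecr get-login-password | docker login --username AWS --password-stdin $ECR_ENDPOINT
      - docker pull $ECR_IMAGE_URL:$PREVIOUS_STAGE_NAME
      - docker tag $ECR_IMAGE_URL:$PREVIOUS_STAGE_NAME $ECR_IMAGE_URL:$STAGE_NAME
      - docker push $ECR_IMAGE_URL:$STAGE_NAME
      # Force a new deployment so the service picks up the updated image
      - aws ecs update-service --cluster $CLUSTER_ARN --service $SERVICE_NAME --force-new-deployment
```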

Go to your pipeline and click Edit. Then from there, click the Add stage button.


Name your new stage Release, then click Add stage. Now, within the new stage, click + Add action group.


Configure your new action group as a CodeBuild action exactly as you did for the build stage; however, make the following changes:

  • Buildspec file: specify release.yml as your buildspec.
  • Environment variables: Specify the following variables (example values follow this list):
    • ECR_ENDPOINT and ECR_IMAGE_URL: Set these as you did before.
    • PREVIOUS_STAGE_NAME and STAGE_NAME: Set both of these to latest.
    • SERVICE_NAME: The name of the ECS service to update.
    • CLUSTER_ARN: The Amazon Resource Name (ARN) of your ECS cluster.
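Continuing the earlier example values (hypothetical account ID 123456789012 and the us-west-2 region used in the IAM policy), these would look like:

```bash
SERVICE_NAME=ts-flask-test
CLUSTER_ARN=arn:aws:ecs:us-west-2:123456789012:cluster/default
```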

Once you've made these configuration changes, save them and re-run your pipeline from the beginning. You should see your pipeline run all the way through and your service updated with the latest image.

To see the real benefits of this setup, commit a change to your fork of the Flask application. Within a minute, you will see your pipeline kick off automatically as it builds your change and publishes it directly to your ECS cluster. That's the true power of CI/CD automation!
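For example, from your local clone of your fork (assuming your default branch is main):

```bash
# Any pushed commit on the connected branch triggers the pipeline
git commit -am "Tweak an API response"
git push origin main
```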

An Even FASTER Way to Deploy

In the real world, of course, we wouldn't have gone through all of these manual steps to create our pipeline. We would normally develop an AWS CloudFormation template or a script in a language such as Python that automated the setup and teardown of this entire infrastructure in any development stage we required.

Developing such repeatable infrastructure, however, is usually a time-consuming effort involving lots of trial and error. That's why TinyStacks simplifies the process even further, providing production-ready stacks of infrastructure and code. You can skip the slow process of standing up infrastructure and instead focus on the specific needs of your application. For more details, contact us today!
