
CI/CD pipeline for ECS application

Any modern application these days needs an automated deployment process. Usually it is set up via a webhook; other times we need to trigger the deployment manually, sometimes even requiring approval from more than one person. In this article we will learn how to build a CI/CD pipeline for an ECS Fargate application utilising the AWS Code Pipeline, Code Build and Code Deploy services. By the end of this article, we will be able to automatically deploy a new version of our application by simply pushing new commits to the master branch of a GitHub repository.

Starting point

[Diagram: load-balanced ECS Fargate architecture]

As you can see in the diagram, we're starting off with a load-balanced Fargate cluster inside a VPC. The application is secured with an SSL certificate on a custom domain. Download the code from the previous article about adding an SSL certificate to a Fargate app if you'd like to follow along. Keep in mind you will need an SSL certificate and a hosted zone to get this to work. If you'd like to see the finished code for this article, you can find it on GitHub.

Configuring Github Credentials

Starting off, we need to define where our application's source code lives. In this case, we're going to use GitHub.

// lib/pipeline.stack.ts

interface Props extends StackProps {}
const secretConfig = {
  arn: "arn:aws:secretsmanager:eu-central-1:145719986153:secret:github/token",
  id: "github/token",
}
export class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, private readonly props: Props) {
    super(scope, id, props)
    new GitHubSourceCredentials(this, "code-build-credentials", {
      accessToken: SecretValue.secretsManager(secretConfig.id),
    })
  }
}

As you can see, we're loading a GitHub token from Secrets Manager. Here's a quick guide on how to generate and store a personal access token. This piece of code tells Code Build which credentials to use when communicating with the GitHub API. It's important to note that AWS allows only one GitHub credential per account per region.
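For reference, storing the token is a one-liner with the AWS CLI. This is just a sketch, assuming the secret name matches secretConfig.id; the token value is a placeholder:

aws secretsmanager create-secret --name github/token --secret-string "<your-github-personal-access-token>"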

Code Source

Now that Code Build is authorized to communicate with GitHub on our behalf, we can define where it should take the code from.

// lib/pipeline.stack.ts
const githubConfig = {
  owner: "exanubes",
  repo: "ecs-fargate-ci-cd-pipeline",
  branch: "master",
}

const source = Source.gitHub({
  owner: githubConfig.owner,
  repo: githubConfig.repo,
  webhook: true,
  webhookFilters: [
    FilterGroup.inEventOf(EventAction.PUSH).andBranchIs(githubConfig.branch),
  ],
})

This is how we subscribe to a GitHub webhook that will trigger an event every time someone pushes to the master branch of the ecs-fargate-ci-cd-pipeline repo that belongs to exanubes.
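Webhook filters can be narrowed down further if needed. As a hypothetical example (the backend/.* path pattern is an assumption about the repo layout), this filter group would only trigger a build when a push to master actually touches backend files:

// hypothetical: only react to pushes that change backend files
const backendOnlyFilter = FilterGroup.inEventOf(EventAction.PUSH)
  .andBranchIs(githubConfig.branch)
  .andFilePathIs("backend/.*")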

Build spec

// lib/pipeline.stack.ts

private getBuildSpec() {
    return BuildSpec.fromObject({
        version: '0.2',
        env: {
            shell: 'bash'
        },
        phases: {
            pre_build: {
                commands: [
                    'echo logging in to AWS ECR',
                    'aws --version',
                    'echo $AWS_STACK_REGION',
                    'echo $CONTAINER_NAME',
                    'aws ecr get-login-password --region ${AWS_STACK_REGION} | docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_STACK_REGION}.amazonaws.com',
                    'COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)',
                    'echo $COMMIT_HASH',
                    'IMAGE_TAG=${COMMIT_HASH:=latest}',
                    'echo $IMAGE_TAG'
                ],
            },
            build: {
                commands: [
                    'echo Build started on `date`',
                    'echo Build Docker image',
                    'docker build -f ${CODEBUILD_SRC_DIR}/backend/Dockerfile -t ${REPOSITORY_URI}:latest ./backend',
                    'echo Running "docker tag ${REPOSITORY_URI}:latest ${REPOSITORY_URI}:${IMAGE_TAG}"',
                    'docker tag ${REPOSITORY_URI}:latest ${REPOSITORY_URI}:${IMAGE_TAG}'
                ],
            },
            post_build: {
                commands: [
                    'echo Build completed on `date`',
                    'echo Push Docker image',
                    'docker push ${REPOSITORY_URI}:latest',
                    'docker push ${REPOSITORY_URI}:${IMAGE_TAG}',
                    'printf "[{\\"name\\": \\"$CONTAINER_NAME\\", \\"imageUri\\": \\"$REPOSITORY_URI:$IMAGE_TAG\\"}]" > docker_image_definition.json'
                ]
            }
        },
        artifacts: {
            files: ['docker_image_definition.json']
        },
    })
}

Lots of cool things happening here. First of all, the build spec is a set of instructions telling Code Build how to build our project. AWS provides us with life cycle hooks so we can decide at which point in the build process certain actions should happen. The version key defines the version of the build spec API; 0.2 is the most recent version at the time of writing.

Starting out, we have the pre_build life cycle hook, where we're checking our environment, logging in to Docker with ECR credentials and creating an image tag from the commit SHA.

Moving over to the build stage of the life cycle, it's time to build the Docker image. The repository I'm using contains both backend and infrastructure code, so we have to define the path to the Dockerfile as well as the build context, both of which point inside the /backend directory. Then we just tag the image and move over to our post_build commands.

Now we take the REPOSITORY_URI variable and the IMAGE_TAG generated in the pre_build step and push our image under both the latest tag and our own generated tag in order to keep track of image versions. Last but not least, we generate an image definitions file that holds the name of our container and the image URI. This is necessary to substitute the image section of our Task Definition with the new image, and it will also be the artifact/output of Code Build.
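For reference, given the printf above, the generated docker_image_definition.json would look something like this, formatted here for readability (container name, account id and tag are placeholder values):

[
  {
    "name": "backend-container",
    "imageUri": "145719986153.dkr.ecr.eu-central-1.amazonaws.com/my-repo:3fa2b19"
  }
]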

Build config

// lib/pipeline.stack.ts

const stack = Stack.of(this)
const buildSpec = this.getBuildSpec()

const project = new Project(this, "project", {
  projectName: "pipeline-project",
  buildSpec,
  source,
  environment: {
    buildImage: LinuxBuildImage.AMAZON_LINUX_2_ARM_2,
    privileged: true,
  },
  environmentVariables: {
    REPOSITORY_URI: {
      value: props.repository.repositoryUri,
    },
    AWS_ACCOUNT_ID: {
      value: stack.account,
    },
    AWS_STACK_REGION: {
      value: stack.region,
    },
    GITHUB_AUTH_TOKEN: {
      type: BuildEnvironmentVariableType.SECRETS_MANAGER,
      value: secretConfig.arn,
    },
    CONTAINER_NAME: {
      value: props.container.containerName,
    },
  },
})

Since we know where to look for the application's source code and we know how to build the app, we can now create a Code Build project to put it all together.

Our Fargate application runs on the ARM64 architecture and, unfortunately, the version of Docker in Code Build does not support defining a target platform when building an image. This is why we have to explicitly set buildImage to a Linux ARM64 platform image in the environment settings.

In the build spec we're using quite a few environment variables, and this is where we set them. Some of them come from other stacks, we also use the ARN of the GitHub secret we created in the beginning, as well as some stack config. For the latter, we're going to use Stack.of(this) to get the scope of the current stack.

We also have to update our Props interface as we're relying on components from other stacks.

// lib/pipeline.stack.ts
interface Props extends StackProps {
  repository: IRepository
  service: IBaseService
  cluster: ICluster
  container: ContainerDefinition
}

Permissions

As with everything in AWS, Code Build needs the relevant permissions to be able to interface with other services.

// lib/pipeline.stack.ts

project.addToRolePolicy(
  new PolicyStatement({
    actions: ["secretsmanager:GetSecretValue"],
    resources: [secretConfig.arn],
  })
)
props.repository.grantPullPush(project.grantPrincipal)

First off, we have to give the Code Build project access to our GitHub access token in Secrets Manager. The project also needs to pull from and push to the ECR repository, which is what grantPullPush takes care of.
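For the curious, grantPullPush saves us from writing the ECR policy by hand. A roughly equivalent manual statement (the action list is approximate, not an exact copy of what CDK generates) would look like this:

project.addToRolePolicy(
  new PolicyStatement({
    actions: [
      // pulling image layers
      "ecr:BatchCheckLayerAvailability",
      "ecr:GetDownloadUrlForLayer",
      "ecr:BatchGetImage",
      // pushing image layers
      "ecr:PutImage",
      "ecr:InitiateLayerUpload",
      "ecr:UploadLayerPart",
      "ecr:CompleteLayerUpload",
    ],
    resources: [props.repository.repositoryArn],
  })
)

Note that the docker login step additionally requires ecr:GetAuthorizationToken, which is not scoped to a single repository and therefore has to be granted on all resources.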

Defining pipeline actions

We can finally start creating the different actions for our pipeline. We will have a source action – downloading code from GitHub – a build action – building the Docker image and pushing it to the ECR repository – and, to finish it off, a deploy action that takes the output of the build action – the BuildOutput artifact – and uses it to deploy a new version of our application on ECS.

// lib/pipeline.stack.ts
const artifacts = {
  source: new Artifact("Source"),
  build: new Artifact("BuildOutput"),
}
const pipelineActions = {
  source: new GitHubSourceAction({
    actionName: "Github",
    owner: githubConfig.owner,
    repo: githubConfig.repo,
    branch: githubConfig.branch,
    oauthToken: SecretValue.secretsManager(secretConfig.id), // same secret as the Code Build credentials
    output: artifacts.source,
    trigger: GitHubTrigger.WEBHOOK,
  }),
  build: new CodeBuildAction({
    actionName: "CodeBuild",
    project,
    input: artifacts.source,
    outputs: [artifacts.build],
  }),
  deploy: new EcsDeployAction({
    actionName: "ECSDeploy",
    service: props.service,
    imageFile: new ArtifactPath(
      artifacts.build,
      "docker_image_definition.json"
    ),
  }),
}

Let's start by defining our artifacts – the outputs of particular actions in the pipeline – which will be stored in Code Pipeline's S3 bucket. Each subsequent action uses the previous action's artifact, so this is very important. In this pipeline we only have two artifacts: the first one is the code downloaded from the repository. The Source artifact is the input for the build action, and as per the build spec we output the image definitions file – docker_image_definition.json – as the BuildOutput artifact. That file is then passed to the deploy action to be used during deployment.

Defining pipeline stages

Now that we have everything set up, we can finally define the pipeline stages and assign the appropriate actions to them.

// lib/pipeline.stack.ts
const pipeline = new Pipeline(this, "DeployPipeline", {
  pipelineName: `exanubes-pipeline`,
  stages: [
    { stageName: "Source", actions: [pipelineActions.source] },
    { stageName: "Build", actions: [pipelineActions.build] },
    { stageName: "Deploy", actions: [pipelineActions.deploy] },
  ],
})

In this simple example we have only one action per stage, but nothing's stopping us from adding another action to the Build stage, for example one that runs integration tests – see the sketch below.
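Such a second Build-stage action could be sketched like this (testProject is a hypothetical second Code Build project with its own test build spec, defined elsewhere):

// hypothetical integration test action for the Build stage
const integrationTests = new CodeBuildAction({
  actionName: "IntegrationTests",
  project: testProject, // assumed to be defined elsewhere
  input: artifacts.source,
  runOrder: 2, // run after the image build action
})

// { stageName: "Build", actions: [pipelineActions.build, integrationTests] }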

Deployment

Now that our pipeline is ready, there are still a few things we need to sort out. Because our pipeline needs environment variables such as the account id and region, we have to explicitly pass those to our stacks.

// bin/infrastructure.stack.ts
import {
  getAccountId,
  getRegion,
  resolveCurrentUserOwnerName,
} from "@exanubes/cdk-utils"

async function start(): Promise<void> {
  const owner = await resolveCurrentUserOwnerName()
  const account = await getAccountId()
  const region = await getRegion()
  const env: Environment = { account, region }
  const app = new cdk.App()
  const ecr = new EcrStack(app, EcrStack.name, { env })
  const vpc = new VpcStack(app, VpcStack.name, { env })
  const ecs = new ElasticContainerStack(app, ElasticContainerStack.name, {
    vpc: vpc.vpc,
    repository: ecr.repository,
    env,
  })
  new Route53Stack(app, Route53Stack.name, {
    loadBalancer: ecs.loadBalancer,
    env,
  })
  new PipelineStack(app, PipelineStack.name, {
    repository: ecr.repository,
    service: ecs.service,
    cluster: ecs.cluster,
    container: ecs.container,
    env,
  })
  Tags.of(app).add("owner", owner)
}

start().catch(error => {
  console.log(error)
  process.exit(1)
})

Here we're getting the account and region from utility functions imported from @exanubes/cdk-utils, which use AWS SDK v3. Then I pass them to every stack: errors often occur when one stack has env set explicitly and others do not, and those errors can be quite difficult to debug, so I recommend passing env to all stacks.
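If you'd rather avoid the extra dependency, a rough equivalent using the AWS SDK v3 directly could look like this (a sketch that assumes default credential and region resolution):

import { STSClient, GetCallerIdentityCommand } from "@aws-sdk/client-sts"

async function getEnv(): Promise<Environment> {
  const sts = new STSClient({})
  // the caller identity contains the account id of the current credentials
  const { Account } = await sts.send(new GetCallerIdentityCommand({}))
  const region = process.env.AWS_REGION ?? process.env.AWS_DEFAULT_REGION
  return { account: Account, region }
}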

It's important to note that we're also passing repository, service, cluster and container to the pipeline stack, as it relies on references to these resources. Do make sure they're exposed on your ElasticContainerStack instance.

// lib/elastic-container.stack.ts
public readonly container: ContainerDefinition
public readonly service: FargateService
public readonly cluster: Cluster
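If your stack doesn't expose them yet, the constructor assignments might look roughly like this (class and resource names are assumptions based on a typical setup, not the exact code from the previous article):

// inside ElasticContainerStack's constructor – illustrative only
this.cluster = new Cluster(this, "cluster", { vpc: props.vpc })
const taskDefinition = new FargateTaskDefinition(this, "task-definition")
this.container = taskDefinition.addContainer("backend", {
  image: ContainerImage.fromEcrRepository(props.repository),
})
this.service = new FargateService(this, "service", {
  cluster: this.cluster,
  taskDefinition,
})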

I've also added an owner tag – your account name – to each resource for good measure.

Now we can build and deploy

npm run build && npm run cdk:deploy -- --all

Keep in mind you will have to upload a first image to ECR, otherwise the deployment will hang on ElasticContainerStack. This can be done during the deployment, as stacks are deployed in sequence and ECR is the first one. After deploying the infrastructure, try making changes in the app.service.ts file and see if the automatic deployment works when you push to your repository.
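Pushing that first image manually could look roughly like this (the repository name is a placeholder, use the URI from your ECR stack):

aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 145719986153.dkr.ecr.eu-central-1.amazonaws.com
docker build -t 145719986153.dkr.ecr.eu-central-1.amazonaws.com/ecs-fargate:latest ./backend
docker push 145719986153.dkr.ecr.eu-central-1.amazonaws.com/ecs-fargate:latest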

Don't forget to destroy the stack when you're done

npm run cdk:destroy -- --all

Summary

[Diagram: load-balanced ECS Fargate with CI/CD pipeline]

In this article we were able to take our existing application infrastructure and add a CI/CD pipeline to it. First we defined how Code Build should talk to the GitHub API by setting up credentials. Then we created a build spec instructing Code Build on how to build our application, utilising the different life cycle hooks of the process. Last but not least, we output a BuildOutput artifact that Code Deploy can then use to update our app.
