(Chester) Htoo Lynn
Deploying container applications on AWS with CI/CD pipelines

In this blog, we will be creating a cloud environment on Amazon Web Services to deploy a web application: a simple Vite app. The application will be containerised using Docker and pushed to an Amazon ECR registry, which an Amazon ECS task definition will later use to run a service on ECS Fargate. We will also set up a CI/CD pipeline using GitHub Actions so that whenever a change is committed to the main branch (production), it triggers an automatic Docker image build and updates the ECS task to use the latest image from our ECR repository.

Architecture Overview

The architecture overview will look something like this.

Architecture

Let's dive right into it.

Pre-requisites

Before we get into the good stuff, we first need to make sure we have the required tools on our local machine: Docker, Terraform, the AWS CLI (configured with your credentials), and Node.js with pnpm.

Folder Structure

We will be using a (sort-of) monorepo approach for this project. We will have a terraform folder for our infrastructure, and an app folder for our web application. We will also have a .github/workflows folder for our Github Actions workflow files. So it will look something like this.

your-project
├── .github
│   └── workflows
├── app
└── terraform

Creating the Web Application

We don't need a fancy, fully-functional web application for this project; we just need a simple app that we can deploy, build a Docker image from, and tweak a little to test our CI/CD pipeline. So we will just use a simple React boilerplate created with Vite. You can create your own or use any other boilerplate you like. Let's go into our working directory (any folder you like) and create a new Vite application. I will be using pnpm for this project and here is a link to their installation guide.

pnpm create vite app/
cd app/
pnpm install

We'll clean up a few things in the App.tsx file and run pnpm run dev. If everything is working fine, you should be able to see the web application running on localhost:3000.

Resources: Don't worry about the code; all of it can be found on GitHub here. I will also link all the resources either in the code comments or at the end of the blog. Don't forget to star the repo and share this article if you find it useful 😄

Web application running

Cool, great! Now we've got the application up and running.

Dockerizing the Web Application

Now, let's create a Dockerfile to build a Docker image.

FROM --platform=linux/amd64 node:18-alpine
RUN ["npm", "install", "-g", "pnpm"]
COPY package.json /vite-app/
COPY . /vite-app
WORKDIR /vite-app
RUN ["pnpm", "install"]
CMD ["pnpm", "dev"]
EXPOSE 8000

Let's go through the Dockerfile line by line.

  • FROM --platform=linux/amd64 node:18-alpine: We are using the node:18-alpine image as our base image, and specifying the platform as linux/amd64 because this image will run on ECS Fargate, where we are provisioning a Linux x86_64 environment. If we don't specify the platform, the build defaults to my machine's platform, macOS on Apple Silicon (darwin/arm64), and the image would fail to run on Fargate.
  • RUN ["npm", "install", "-g", "pnpm"]: We are installing pnpm globally. We will be using pnpm to install our dependencies. You can use npm or yarn if you like.
  • COPY package.json /vite-app/: We are copying the package.json file to the /vite-app directory in our docker image.
  • COPY . /vite-app: We are copying the rest of the files to the /vite-app directory in our docker image.
  • WORKDIR /vite-app: We are setting the working directory to /vite-app.
  • RUN ["pnpm", "install"]: We are installing the dependencies.
  • CMD ["pnpm", "dev"]: We are running the dev script.
  • EXPOSE 8000: We are declaring that the container listens on port 8000. Note that EXPOSE is purely informational; the port is actually published with docker run -p (or the ECS port mapping).
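As an aside: the Dockerfile above ships the Vite dev server, which keeps this demo simple but isn't what you'd normally run in production. A more typical production image uses a multi-stage build to serve the compiled static assets. A rough sketch, assuming the default `dist` output directory and a `pnpm-lock.yaml` in the repo:

```dockerfile
# Build stage: install dependencies and compile the static assets
FROM --platform=linux/amd64 node:18-alpine AS build
RUN npm install -g pnpm
WORKDIR /vite-app
COPY package.json pnpm-lock.yaml ./
RUN pnpm install
COPY . .
RUN pnpm build

# Serve stage: ship only the built files behind nginx
FROM --platform=linux/amd64 nginx:alpine
COPY --from=build /vite-app/dist /usr/share/nginx/html
EXPOSE 80
```

We'll stick with the dev-server image for the rest of the post, since the goal here is the pipeline rather than the image itself.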

We also don't want to include node_modules in our docker image, so we will add it to our .dockerignore file.

node_modules

Now, let's build our docker image using docker build -t vite-app:latest . (make sure you are in the app directory). You should be able to see the docker image when you run docker images.

REPOSITORY             TAG              IMAGE ID       CREATED        SIZE
vite-app               latest           4dd38de114b8   42 hours ago   390MB

Now, let's run the docker image using docker run -p 3000:8000 vite-app:latest. You should be able to see the web application running on localhost:3000 if you have setup everything correctly.
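One detail worth calling out: for the -p 3000:8000 mapping to work, the dev server inside the container has to listen on 0.0.0.0:8000 rather than Vite's defaults (localhost:5173). The exact config in the repo may differ (the post shows the app on localhost:3000 locally), but a vite.config.ts along these lines would do it:

```typescript
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

// Dev-server settings so the app is reachable from outside the container.
export default defineConfig({
  plugins: [react()],
  server: {
    host: '0.0.0.0', // bind to all interfaces, not just localhost, so port mapping works
    port: 8000,      // match the port in the Dockerfile's EXPOSE and the ECS port mapping
  },
})
```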

Setting up AWS Environment

Now, let's set up our AWS environment. We will be using Terraform to create our infrastructure. We will be creating the following main resources:

  • Amazon ECR private repository
  • Amazon ECS cluster
  • Amazon ECS task definition
  • Amazon ECS service
  • Some IAM roles and policies

Let's first create a couple of files that we plan to use in our terraform/ directory.

cd terraform/
touch main.tf providers.tf variables.tf outputs.tf main.tfvars iam.tf sg.tf vpc.tf

We'll be using the AWS provider for Terraform, so let's add the following to our providers.tf file.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.22.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}

In our main.tf, we will be creating some of our core services, which are: Amazon ECR private repository, Amazon ECS cluster, Amazon ECS task definition, and Amazon ECS service.


module "vite_app_repository" {
  source  = "terraform-aws-modules/ecr/aws"
  version = "1.6.0"

  repository_name                 = "vite-app-repository"
  repository_type                 = "private"
  repository_image_tag_mutability = "IMMUTABLE"
  create_lifecycle_policy         = true

  # only keep the latest 5 images
  repository_lifecycle_policy = jsonencode({
    rules = [
      {
        rulePriority = 1
        description  = "Expire images by count"
        selection = {
          tagStatus   = "any"
          countType   = "imageCountMoreThan"
          countNumber = 5
        }
        action = {
          type = "expire"
        }
      }
    ]
  })

  tags = var.common_tags
}

resource "aws_ecs_cluster" "vite_app_cluster" {
  name = var.ecs_cluster_name
  tags = var.common_tags
}

resource "aws_ecs_task_definition" "vite_app_runner" {
  family                   = var.ecs_task_definition_name
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "512"
  memory                   = "1024"
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn
  container_definitions = jsonencode([
    {
      name   = var.ecs_container_name
      image  = "${module.vite_app_repository.repository_url}:latest"
      cpu    = 512
      memory = 1024
      portMappings = [
        {
          containerPort = 8000
          hostPort      = 8000
          protocol      = "tcp"
        }
      ]
      essential = true
    }
  ])
  tags = var.common_tags
}

resource "aws_ecs_service" "vite_app_service" {
  name            = var.ecs_service_name
  cluster         = aws_ecs_cluster.vite_app_cluster.id
  task_definition = aws_ecs_task_definition.vite_app_runner.arn
  launch_type     = "FARGATE"
  desired_count   = 1

  network_configuration {
    subnets          = module.vite_app_vpc.public_subnets
    security_groups  = [module.web_access_sg.security_group_id]
    assign_public_ip = true
  }

  tags = var.common_tags
}


So, let's go through our main.tf file.

First, we create an ECR repository using the terraform-aws-modules/ecr/aws module, with a lifecycle policy that keeps only the latest 5 images. We then create an ECS cluster, a Fargate task definition that points at the :latest image in that repository, and an ECS service that runs the task in public subnets with a security group allowing traffic from the internet (see below). Note that with IMMUTABLE tag mutability, a given tag (including latest) can only be pushed once; the CI pipeline below avoids this by tagging each image with the commit SHA.

We also need some IAM permissions to allow the ECS task to pull the Docker image from our ECR repository, plus an IAM user that our GitHub Actions workflow will use to push images to ECR. We will be creating a couple of IAM roles and policies in our iam.tf file. I won't be putting the code here in the blog, but you can see it here on GitHub. We will also need a security group for our ECS service to allow traffic from the internet, which lives in sg.tf. You can see the security group resource here, and you can get the Terraform code for the VPC here.
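The full sg.tf and vpc.tf live on GitHub; as a rough sketch of what the security group side might look like, using the community security-group module (the module name web_access_sg and the VPC output are inferred from the references in main.tf; the exact version and rules are assumptions):

```hcl
module "web_access_sg" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "5.1.0"

  name   = "web-access-sg"
  vpc_id = module.vite_app_vpc.vpc_id

  # allow the container port from anywhere on the internet
  ingress_with_cidr_blocks = [
    {
      from_port   = 8000
      to_port     = 8000
      protocol    = "tcp"
      cidr_blocks = "0.0.0.0/0"
    }
  ]

  # allow all outbound traffic (needed, among other things, to pull the image from ECR)
  egress_rules = ["all-all"]

  tags = var.common_tags
}
```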

We will also need a few variables, so let's create a variables.tf file; the values can then be provided per environment. Since we only have one environment for now, we will store them in a main.tfvars file.

aws_region               = "eu-west-1"
ecs_task_definition_name = "vite-app-runner"
ecs_container_name       = "vite-app"
ecs_cluster_name         = "vite-app-cluster"
ecs_service_name         = "vite-app-service"
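The matching declarations in variables.tf might look like the following (a sketch; the common_tags default is an assumption, inferred from its use in main.tf):

```hcl
variable "aws_region" {
  description = "AWS region to deploy into"
  type        = string
}

variable "ecs_task_definition_name" {
  description = "Family name of the ECS task definition"
  type        = string
}

variable "ecs_container_name" {
  description = "Name of the container inside the task definition"
  type        = string
}

variable "ecs_cluster_name" {
  description = "Name of the ECS cluster"
  type        = string
}

variable "ecs_service_name" {
  description = "Name of the ECS service"
  type        = string
}

variable "common_tags" {
  description = "Tags applied to every resource"
  type        = map(string)
  default = {
    Project = "vite-app"
  }
}
```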

We will also need to create an outputs.tf file to output some of the resources that we will be using later.

output "ecr_repo_url" {
  value = module.vite_app_repository.repository_url
}

output "github_actions_user_access_key_id" {
  value = aws_iam_access_key.github_actions_user_access_key.id
}

output "github_actions_user_access_secret_key" {
  value     = aws_iam_access_key.github_actions_user_access_key.secret
  sensitive = true
}

Now that we've got the AWS infrastructure defined, let's run terraform init to initialise our Terraform project. Then we can run terraform plan -var-file=main.tfvars to see which resources will be created. If everything looks good, we can run terraform apply -var-file=main.tfvars to create them.

Apply complete! Resources: 26 added, 0 changed, 0 destroyed.

Outputs:

ecr_repo_url = "********.dkr.ecr.eu-west-1.amazonaws.com/vite-app-repository"
github_actions_user_access_key_id = "**********"
github_actions_user_access_secret_key = <sensitive>

Now, we have our AWS infrastructure in place. We can now push our docker image to our ECR repository and run our ECS service.

Pushing Docker Image to ECR

Now, let's push our Docker image to our ECR repository. First, log in to ECR, then tag the local image with the repository URL, and finally push it. (You can also see these commands by opening your ECR repository and clicking on View push commands.)

aws ecr get-login-password --region your-region | docker login --username AWS --password-stdin your-account-id.dkr.ecr.your-region.amazonaws.com
docker tag vite-app:latest your-account-id.dkr.ecr.your-region.amazonaws.com/vite-app-repository:latest
docker push your-account-id.dkr.ecr.your-region.amazonaws.com/vite-app-repository:latest

If everything is working fine, you should be able to see the image in your ECR repository.

If everything works as expected, you should be able to see a task running in your ECS cluster. If you go into the task's networking section, you will see this:

ECS Task

If you open that Public IP in your browser, and go to port 8000, you should be able to see the web application running.

Web application

GitHub Actions

We don't want to do the entire process manually every time we make a change to our web application, so let's create a GitHub Actions workflow that does it for us. The workflow will detect changes pushed to the main branch, build the Docker image, push it to our private ECR repository, and update the ECS service to use the new image. We will be creating a deploy.yml file in our .github/workflows directory.

name: Push image to Amazon ECR and deploy to ECS

on:
  push:
    branches:
      - main
      - master

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4.1.1

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4.0.1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Login to Amazon ECR
        uses: aws-actions/amazon-ecr-login@v2.0.1
        id: login-ecr

      - name: Set outputs
        id: vars
        run: echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT

      - name: Build, tag and Push image to Amazon ECR
        id: build-and-tag-docker-image
        working-directory: ./app
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ secrets.AWS_ECR_REPOSITORY }}
          IMAGE_TAG: git-${{ steps.vars.outputs.sha_short }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "IMAGE_URI=${{ env.ECR_REGISTRY }}/${{ env.ECR_REPOSITORY }}:${{ env.IMAGE_TAG }}" >> $GITHUB_OUTPUT

      - name: Download task definition
        run: |
          aws ecs describe-task-definition \
          --task-definition ${{ secrets.AWS_ECS_TASK_DEFINITION_NAME }} \
          --query taskDefinition \
          --output json > taskDefinition.json

      - name: Fill in the new image ID in the Amazon ECS task definition
        id: update-task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1.1.3
        with:
          task-definition: taskDefinition.json
          container-name: ${{ secrets.AWS_ECS_CONTAINER_NAME }}
          image: ${{ steps.build-and-tag-docker-image.outputs.IMAGE_URI }}

      - name: Deploy Amazon ECS task definition
        id: deploy-ecs
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1.4.11
        with:
          task-definition: ${{ steps.update-task-def.outputs.task-definition }}
          service: ${{ secrets.AWS_ECS_SERVICE_NAME }}
          cluster: ${{ secrets.AWS_ECS_CLUSTER_NAME }}
          wait-for-service-stability: true

Let's go through each step.

  1. With the checkout action, we check out the code from the repository.
  2. We need some sort of programmatic access to our AWS account (which is why we created a github-actions-user IAM user in our iam.tf file earlier), so we configure our AWS credentials using the aws-actions/configure-aws-credentials action.
  3. We then log in to our ECR repository using the aws-actions/amazon-ecr-login action.
  4. We build our Docker image, tag it, and push it to our ECR repository. We use the git rev-parse --short HEAD command to get the short SHA of the commit that triggered the workflow, and use that short SHA as the image tag.
  5. We then download the current ECS task definition with the aws ecs describe-task-definition command, saving the output to a file called taskDefinition.json.
  6. Finally, we render a new revision of the task definition with the new image URI using the aws-actions/amazon-ecs-render-task-definition action, and deploy it to the service using the aws-actions/amazon-ecs-deploy-task-definition action, waiting for the service to stabilise.

After waiting for a few minutes, you should be able to see the new docker image in your ECR repository. You should also be able to see the new docker image being used in your ECS task definition.

You will notice a few environment variables that are being used in this workflow. We will be storing these environment variables in our Github repository secrets. You can access your repository secrets by going to Settings > Secrets > New repository secret. We will be storing the following secrets:

Github secrets

Most of these secrets mirror values from our terraform/main.tfvars file and are passed into the workflow as variables. We also need to store our AWS credentials (the access key ID and secret key from the Terraform outputs) in our GitHub repository secrets.

Github actions

Great, now that everything's in place, let's test out our pipeline. We'll remove the smiley face from App.tsx inside the Vite application and push the change. As soon as we push, we'll see it trigger the GitHub Actions workflow.

Trigger

After waiting for a while, let's go into the ECS console, open our newly running task, click on the Public IP, and go to port 8000. You should be able to see the web application running without the smiley face.

Without smiley face

Conclusion

And that's it! We've successfully created a CI/CD pipeline using GitHub Actions to deploy a simple web application to ECS. The application is a simple Vite app that is dockerized and pushed to ECR. The pipeline is triggered on every push to the main branch: it builds the Docker image, pushes it to ECR, and updates the ECS service with the new image.

Let me know if you have any questions or suggestions. You can also find the code on GitHub here. Don't forget to star the repo and share this article if you find it useful 😄

Top comments (4)

Juin

Great article on using pipelines for deploying containerized applications on the cloud, keep it coming!

(Chester) Htoo Lynn

Thanks Wei. Appreciate it :D

Lázaro Vinícius de Oliveira Bonifácio

Very good article. I have one question: why did you choose a module for the ECR declaration, but not for ECS?

(Chester) Htoo Lynn

I decided to use a module for the ECR since the logic for the repository is not that complex and can be encapsulated into one module, but the ECS side has building blocks like tasks, services and task definitions, and wrapping those in modules might introduce unnecessary complexity.