Introduction:
In today’s world, deploying an application to a production environment with zero downtime is crucial. Blue-green deployment is one of the methods that help achieve it. In this post, we will implement a blue-green deployment to an Amazon ECS service, driven from GitLab with the help of AWS CodePipeline.
About the project:
Blue-Green Deployment is a deployment strategy that enables you to release new versions of your application with zero downtime and minimal risk of failure. This tutorial implements Blue-Green Deployment using GitLab CI and AWS CodePipeline, a fully managed continuous integration and delivery service. We will use Terraform to create the necessary infrastructure, including an Elastic Container Service (ECS) cluster, an Elastic Container Registry (ECR) repository, and an S3 bucket for storing artifacts. We will configure the GitLab project to trigger the AWS CodePipeline and deploy the application automatically. We will also use AWS CodeDeploy to swap traffic seamlessly from the old version of the application to the new one.
The project structure:
├── appspec.yml
├── buildspec.yml
├── Dockerfile
├── env_aws_Codepipeline_ECS
│ ├── backend.tf
│ ├── deregister_task_definition.sh
│ ├── ec2_user_data.tpl
│ ├── main.tf
│ └── variables.tf
├── images
├── index.html
├── LICENSE
├── README.md
└── taskdef.json
The infrastructure schema:
Prerequisites:
To follow along with this tutorial, you will need the following:
A GitLab account with an active project.
An AWS account with permissions to create resources, a DNS domain in Route53, and an S3 bucket for storing the Terraform backend.
The Terraform CLI and AWS CLI installed on your local machine. Before proceeding, make sure your AWS CLI is up to date or freshly installed; this matters because deleting task definitions has only been possible since February 2023, and this project relies on it during teardown. You can find more information about this change in the AWS documentation.
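As a quick sanity check (a minimal sketch, assuming a Bourne-compatible shell), you can verify that the installed AWS CLI already exposes the delete-task-definitions subcommand used later during teardown:
aws --version
# Exits with an error if the CLI is too old to know this subcommand:
aws ecs delete-task-definitions help > /dev/null && echo "delete-task-definitions is available"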
Deployment:
Step 1: Create an SSH Key Pair
To allow connections to the EC2 instances, create an SSH key pair:
ssh-keygen -t rsa -b 4096 -f codepipeline
Step 2. Configure connectivity from GitLab CI to CodeCommit
So that GitLab CI can clone the repository and push it to CodeCommit, set up an SSH connection to the CodeCommit repository and configure an IAM user with access to CodeCommit; more information here.
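For illustration, here is a minimal sketch of that setup, assuming the key pair from Step 1 and a hypothetical IAM user named codecommit-user (replace the placeholders with your own values):
# Upload the public key to the IAM user that will access CodeCommit; note the
# SSHPublicKeyId in the output, it becomes the SSH user (and the AWS_SSH_USER CI variable).
aws iam upload-ssh-public-key \
    --user-name codecommit-user \
    --ssh-public-key-body file://codepipeline.pub
# Local SSH config for testing the connection (GitLab CI builds its own config
# from the SSH_PRIVATE_KEY and AWS_SSH_USER variables, see .gitlab-ci.yml below):
cat >> ~/.ssh/config <<'EOF'
Host git-codecommit.*.amazonaws.com
  User <SSHPublicKeyId>
  IdentityFile <path-to-the-private-key-from-step-1>
EOF
ssh git-codecommit.eu-central-1.amazonaws.com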
Step 3. Clone the repository and define the S3 bucket.
Clone the repository, go to the repository folder env_aws_Codepipeline_ECS, and define the S3 bucket in backend.tf.
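For example (the bucket name below is a placeholder; use the S3 bucket you prepared for the Terraform state):
git clone https://gitlab.com/Andr1500/cicd_bluegreen_codepipeline_ecs.git
cd cicd_bluegreen_codepipeline_ecs/env_aws_Codepipeline_ECS
# Edit backend.tf and point the S3 backend at your own bucket, e.g.:
#   bucket = "my-terraform-state-bucket"
#   region = "eu-central-1"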
Step 4. Apply infrastructure configuration with Terraform
Build the AWS environment: set your AWS credentials, define the necessary variables in variables.tf, then run terraform init, terraform plan, and terraform apply.
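A minimal sketch of that sequence (the credentials are placeholders; a named AWS profile works just as well):
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export AWS_DEFAULT_REGION="eu-central-1"
terraform init    # initialize the S3 backend and download providers
terraform plan    # review the resources that will be created
terraform apply   # create the infrastructure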
After the infrastructure has been created successfully, open your FQDN in a browser and you should see the default Nginx page.
To receive emails about pipeline changes from the SNS topic, you must confirm your subscription: you will receive a subscription confirmation notification, and once you confirm it, emails from the SNS topic will start arriving.
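Two quick checks, with the FQDN and topic ARN as placeholders for your own values:
# The default Nginx page should be reachable via your domain:
curl -I http://<your-fqdn>
# The SNS subscription should no longer show "PendingConfirmation":
aws sns list-subscriptions-by-topic --topic-arn <your-topic-arn>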
Step 5. Push the repository to your GitLab account
Create a new project in GitLab. In your local Git repository, add the GitLab repository as a remote. Push your local repository to GitLab.
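For example, assuming a new, empty GitLab project (replace the namespace and project name with your own):
git remote add gitlab git@gitlab.com:<your-namespace>/<your-project>.git
git push gitlab --all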
Step 6. Set up the necessary variables for the CI/CD pipeline
Add the necessary variables (such as SSH_PRIVATE_KEY and AWS_SSH_USER, which are used by the pipeline below) in GitLab Settings -> CI/CD -> Variables.
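This is normally done through the GitLab UI; as an alternative, the same variables can be created through the GitLab API (the access token and project ID are placeholders):
curl --request POST \
     --header "PRIVATE-TOKEN: <your-access-token>" \
     --form "key=AWS_SSH_USER" \
     --form "value=<SSHPublicKeyId>" \
     "https://gitlab.com/api/v4/projects/<project-id>/variables"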
Step 7. Run the CI/CD pipeline.
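The pipeline runs automatically on every push. If it did not already start after the push in Step 5, one simple way to trigger it is an empty commit:
git commit --allow-empty -m "Trigger CI/CD pipeline"
git push gitlab <branch-name>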
CI/CD pipeline:
GitLab pipeline stages:
A working set of CI release GitLab workflows is provided in .gitlab-ci.yml. The pipeline consists of two stages:
sast
Runs SAST checks against the repository files (Terraform, YAML, Dockerfile, shell, HTML).
push_to_codecommit
Clones the repository from GitLab and pushes it to the AWS CodeCommit repo.
.gitlab-ci.yml configuration:
# variables
variables:
  AWS_DEFAULT_REGION: "eu-central-1"
  GITLAB_REPO: "https://gitlab.com/Andr1500/cicd_bluegreen_codepipeline_ecs.git"
  CODECOMMIT_REPO: "ssh://git-codecommit.$AWS_DEFAULT_REGION.amazonaws.com/v1/repos/from_gitlab"
  REPO_DIR: cicd_bluegreen_codepipeline_ecs

stages:
  - sast
  - push_to_codecommit

include:
  - template: Security/SAST.gitlab-ci.yml

# SAST check
sast:
  stage: sast
  image: registry.gitlab.com/security-products/semgrep:3
  script:
    - /analyzer run --config="type:terraform" --config="type:yaml" --config="type:dockerfile" --config="type:shell" --config="type:html" --json > gl-sast-report.json
  artifacts:
    reports:
      sast: gl-sast-report.json
  allow_failure: true

# Clone repo from GitLab to CodeCommit
push_to_codecommit:
  stage: push_to_codecommit
  before_script:
    - mkdir ~/.ssh/
    - chmod 700 ~/.ssh
    - ssh-keyscan -t rsa git-codecommit.$AWS_DEFAULT_REGION.amazonaws.com >> ~/.ssh/known_hosts
    - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - echo -e "Host git-codecommit.*.amazonaws.com\n User $AWS_SSH_USER\n PreferredAuthentications publickey\n IdentityFile ~/.ssh/id_rsa" > ~/.ssh/config
    - rm -rf ~/.git
    - git clone -b $CI_COMMIT_BRANCH $GITLAB_REPO
    - cd $REPO_DIR/
  script:
    - git push $CODECOMMIT_REPO --all
This cloning of the repository from GitLab to CodeCommit is necessary because CodePipeline doesn’t support GitLab as a source for now; more information here.
The main part of the CI/CD pipeline is done by AWS CodePipeline with stages:
Source — CodeCommit,
Build — CodeBuild,
Deploy — CodeDeploy.
CodePipeline stages configuration:
stage {
  name = "Source"

  action {
    name             = "Source"
    category         = "Source"
    owner            = "AWS"
    provider         = "CodeCommit"
    input_artifacts  = []
    version          = "1"
    output_artifacts = ["SourceArtifact"]

    configuration = {
      RepositoryName       = var.repository_name
      BranchName           = var.branch_name
      PollForSourceChanges = true
    }
  }
}

stage {
  name = "Build"

  action {
    name             = "Build"
    category         = "Build"
    owner            = "AWS"
    provider         = "CodeBuild"
    version          = "1"
    input_artifacts  = ["SourceArtifact"]
    output_artifacts = ["BuildArtifact"]

    configuration = {
      ProjectName = "${aws_codebuild_project.codebuild_project.name}"
    }
  }
}

stage {
  name = "Deploy"

  action {
    name            = "ExternalDeploy"
    category        = "Deploy"
    owner           = "AWS"
    provider        = "CodeDeployToECS"
    input_artifacts = ["BuildArtifact"]
    version         = "1"

    configuration = {
      ApplicationName                = aws_codedeploy_app.codedeploy_app.name
      DeploymentGroupName            = aws_codedeploy_deployment_group.deployment_group.deployment_group_name
      TaskDefinitionTemplateArtifact = "BuildArtifact"
      TaskDefinitionTemplatePath     = "taskdef.json"
      AppSpecTemplateArtifact        = "BuildArtifact"
      AppSpecTemplatePath            = "appspec.yml"
    }
  }
}
If the deployment is successful, traffic is shifted and the new version of the application becomes available. The default waiting time before the old task set is terminated is configured to 10 minutes.
When I first started using CodeDeploy, I occasionally hit the Docker Hub rate limit because the image pulls were not authenticated with Docker Hub. As a result, I switched to using the AWS ECR public registry.
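For reference, the public Nginx image can be pulled from the ECR Public Gallery instead of Docker Hub (the exact image used in this project may differ):
docker pull public.ecr.aws/nginx/nginx:latest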
CodePipeline relies on several configuration files: buildspec.yml (CodeBuild), appspec.yml (CodeDeploy), and taskdef.json (the ECS task definition template). You can find more information about these configuration files here.
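As a rough illustration of what the build stage does (a sketch only; the actual buildspec.yml in the repository is authoritative, and the account ID and repository name below are placeholders), CodeBuild logs in to ECR, builds and pushes the image, and passes taskdef.json and appspec.yml on to CodeDeploy as artifacts:
aws ecr get-login-password --region eu-central-1 \
    | docker login --username AWS --password-stdin <account-id>.dkr.ecr.eu-central-1.amazonaws.com
docker build -t <account-id>.dkr.ecr.eu-central-1.amazonaws.com/<ecr-repo>:latest .
docker push <account-id>.dkr.ecr.eu-central-1.amazonaws.com/<ecr-repo>:latest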
Deregistering task definition revisions:
When we destroy the ECS cluster with Terraform, Terraform removes the resources it created, but it won’t automatically deregister the task definition revisions that were created outside of Terraform (by CodePipeline). To remove all task definition revisions while destroying the infrastructure, a Terraform null_resource triggers the bash script deregister_task_definition.sh, which deregisters and then deletes them.
Terraform null resource:
resource "null_resource" "deregister_task_definition" {
triggers = {
invokes_me_everytime = uuid()
TASK_NAME = var.service_name,
REGION = var.region
}
provisioner "local-exec" {
when = destroy
command = "/bin/bash deregister_task_definition.sh"
environment = {
TASK_NAME = format("%s", self.triggers.TASK_NAME)
REGION = self.triggers.REGION
}
}
}
Bash script deregister_task_definition.sh:
#!/usr/bin/env bash

get_task_definition_arns() {
  aws ecs list-task-definitions \
    --region "$REGION" \
    --family-prefix "$TASK_NAME" \
    --status ACTIVE \
    --sort DESC \
    | jq -r '.taskDefinitionArns[]'

  aws ecs list-task-definitions \
    --region "$REGION" \
    --family-prefix "$TASK_NAME" \
    --status INACTIVE \
    --sort DESC \
    | jq -r '.taskDefinitionArns[]'
}

delete_task_definition() {
  local arn=$1

  aws ecs deregister-task-definition \
    --region "$REGION" \
    --task-definition "${arn}" > /dev/null

  aws ecs delete-task-definitions \
    --region "$REGION" \
    --task-definition "${arn}" > /dev/null
}

for arn in $(get_task_definition_arns)
do
  echo "Deregistering and deleting ${arn}..."
  delete_task_definition "${arn}"
done
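The same cleanup can also be run by hand, for example if terraform destroy was interrupted (the service name and region are the values defined in variables.tf):
TASK_NAME=<service-name> REGION=eu-central-1 /bin/bash deregister_task_definition.sh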
Conclusion:
Blue-Green Deployment is an essential technique for deploying updates to a web application with zero downtime and minimal risk of failure. By implementing this technique with GitLab CI and AWS CodePipeline on an AWS ECS cluster, we can automate the deployment process and minimize the chances of human error. In this tutorial, we’ve gone through the process of setting up the necessary infrastructure on AWS using Terraform, and we’ve configured the GitLab project to trigger the AWS CodePipeline. We’ve also learned how to use AWS CodeDeploy to swap traffic from the old to the new version of the application without any interruptions.
If you found this post helpful and interesting, please click the reaction button below to show your support for the author. Feel free to use and share this post!
You can also support me with a virtual coffee https://www.buymeacoffee.com/andrworld1500 .