In past articles, we've focused a lot on deployments to servers (Amazon EC2 instances in AWS).
However, in today's fast-paced and ever-evolving world of software development, containerization has become a popular choice for deploying applications due to its scalability, portability, and ease of management.
Amazon ECS (Elastic Container Service), a highly scalable and fully managed container orchestration service provided by AWS, offers a robust platform for running and managing containers at scale.
Amazon ECR (Elastic Container Registry), on the other hand, is an AWS-managed container image registry service that is secure, scalable, and reliable. It supports private repositories with resource-based permissions using AWS IAM, allowing IAM users and AWS services to securely access your container repositories and images.
By leveraging the power of ECS and the security features of ECR, you can confidently push your containerized application to a private ECR repository, and deploy this application using ECS.
In this step-by-step guide, we will walk through the process of deploying a containerized app to Amazon ECS using a Docker image stored in a private ECR repository.
Here are some things to note, though, before we get started.
Disclaimer
a) Given that we'll use Terraform and Terragrunt to provision our infrastructure, familiarity with these two is required to be able to follow along. You can reference one of my previous articles to get some basics.
b) Given that we'll use GitHub Actions to automate the provisioning of our infrastructure, familiarity with the tool is required to be able to follow along as well.
c) Some basic understanding of Docker and container orchestration will also help to follow along.
These are the steps we'll follow to deploy our containerized app:
Create a private ECR repo and push a Docker image to it.
Write code to provision infrastructure.
Version our infrastructure code with GitHub.
Create a GitHub Actions workflow and delegate the infrastructure provisioning task to it.
Add a GitHub Actions workflow job to destroy our infrastructure when we're done.
1. Create a private ECR repo and push a Docker image to it
For simplicity, we'll create our ECR repo manually, and then push an Nginx image to it.
a) Make sure you have the AWS CLI configured locally, and Docker installed as well.
b) Pull the latest version of the nginx Docker image using the command below:
docker pull nginx
c) Access the ECR console from the region you intend to create your ECS cluster.
d) Select Repositories under the Private registry section in the sidebar.
e) Click on the Create repository button then make sure the Private radio option is selected.
f) Enter your private ECR repository name, like ecs-demo.
g) From your local device, run the following command to log in to your private ECR repo. Be sure to replace <region> and <account_id> with the appropriate values for your setup:
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account_id>.dkr.ecr.<region>.amazonaws.com
h) Tag the nginx image appropriately so that it can be pushed to your private ECR repo:
docker tag nginx:latest <account_id>.dkr.ecr.<region>.amazonaws.com/<repo_name>:latest
i) Push the newly tagged image to your private ECR repo:
docker push <account_id>.dkr.ecr.<region>.amazonaws.com/<repo_name>:latest
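If you'd rather script these steps, here's a minimal sketch that wraps the pull, login, tag, and push commands above into a single file; the ACCOUNT_ID, REGION, and REPO_NAME values are placeholders for your own account ID, region, and repository name:

#!/usr/bin/env bash
set -euo pipefail

# Placeholders -- replace with your own values
ACCOUNT_ID="<account_id>"
REGION="<region>"
REPO_NAME="ecs-demo"

# Quick check that the AWS CLI is configured with valid credentials
aws sts get-caller-identity

# Authenticate Docker against your private ECR registry
aws ecr get-login-password --region "$REGION" | \
  docker login --username AWS --password-stdin "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com"

# Pull, tag, and push the nginx image
docker pull nginx
docker tag nginx:latest "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$REPO_NAME:latest"
docker push "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$REPO_NAME:latest"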
2. Write code to provision infrastructure
In previous articles, we wrote Terraform code for most of the building blocks we'll be using now (VPC, Internet Gateway, Route Table, Subnet, NACL), as well as for the Security Group building block.
You can use those articles for reference, as here we'll focus on the building blocks for an IAM Role, an ECS Cluster, an ECS Task Definition, and an ECS Service.
One file shared by all building blocks is provider.tf, shown below:
provider.tf
terraform {
  required_version = ">= 1.4.2"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  access_key = var.AWS_ACCESS_KEY_ID
  secret_key = var.AWS_SECRET_ACCESS_KEY
  region     = var.AWS_REGION
}
We can now start writing the other Terraform code for our building blocks.
a) IAM Role
The IAM role will be used to define permissions that IAM entities will have.
variables.tf
variable "AWS_ACCESS_KEY_ID" {
type = string
}
variable "AWS_SECRET_ACCESS_KEY" {
type = string
}
variable "AWS_REGION" {
type = string
}
variable "principals" {
type = list(object({
type = string
identifiers = list(string)
}))
}
variable "is_external" {
type = bool
default = false
}
variable "condition" {
type = object({
test = string
variable = string
values = list(string)
})
default = {
test = "test"
variable = "variable"
values = ["values"]
}
}
variable "role_name" {
type = string
}
variable "policy_name" {
type = string
}
variable "policy_statements" {
type = list(object({
sid = string
actions = list(string)
resources = list(string)
}))
}
main.tf
data "aws_iam_policy_document" "assume_role" {
statement {
effect = "Allow"
dynamic "principals" {
for_each = { for principal in var.principals : principal.type => principal }
content {
type = principals.value.type
identifiers = principals.value.identifiers
}
}
actions = ["sts:AssumeRole"]
dynamic "condition" {
for_each = var.is_external ? [var.condition] : []
content {
test = condition.value.test
variable = condition.value.variable
values = condition.value.values
}
}
}
}
data "aws_iam_policy_document" "policy_document" {
dynamic "statement" {
for_each = { for statement in var.policy_statements : statement.sid => statement }
content {
effect = "Allow"
actions = statement.value.actions
resources = statement.value.resources
}
}
}
resource "aws_iam_role" "role" {
name = var.role_name
assume_role_policy = data.aws_iam_policy_document.assume_role.json
}
resource "aws_iam_role_policy" "policy" {
name = var.policy_name
role = aws_iam_role.role.id
policy = data.aws_iam_policy_document.policy_document.json
}
outputs.tf
output "role_arn" {
value = aws_iam_role.role.arn
}
output "role_name" {
value = aws_iam_role.role.name
}
output "unique_id" {
value = aws_iam_role.role.unique_id
}
b) ECS Cluster
The ECS cluster is the main component where your containerized application will reside.
variables.tf
variable "AWS_ACCESS_KEY_ID" {
type = string
}
variable "AWS_SECRET_ACCESS_KEY" {
type = string
}
variable "AWS_REGION" {
type = string
}
variable "name" {
type = string
description = "(Required) Name of the cluster (up to 255 letters, numbers, hyphens, and underscores)"
}
variable "setting" {
type = object({
name = optional(string, "containerInsights")
value = optional(string, "enabled")
})
description = "(Optional) Configuration block(s) with cluster settings. For example, this can be used to enable CloudWatch Container Insights for a cluster."
}
main.tf
# ECS Cluster
resource "aws_ecs_cluster" "cluster" {
name = var.name
setting {
name = var.setting.name
value = var.setting.value
}
}
outputs.tf
output "arn" {
value = aws_ecs_cluster.cluster.arn
}
output "id" {
value = aws_ecs_cluster.cluster.id
}
c) ECS Task Definition
The ECS task definition is a blueprint for your application that describes the parameters and container(s) that form your application.
variables.tf
variable "AWS_ACCESS_KEY_ID" {
type = string
}
variable "AWS_SECRET_ACCESS_KEY" {
type = string
}
variable "AWS_REGION" {
type = string
}
variable "family" {
type = string
description = "(Required) A unique name for your task definition."
}
variable "container_definitions_path" {
type = string
description = "Path to a JSON file containing a list of valid container definitions"
}
variable "network_mode" {
type = string
description = "(Optional) Docker networking mode to use for the containers in the task. Valid values are none, bridge, awsvpc, and host."
default = "awsvpc"
}
variable "compatibilities" {
type = list(string)
description = "(Optional) Set of launch types required by the task. The valid values are EC2 and FARGATE."
default = ["FARGATE"]
}
variable "cpu" {
type = number
description = "(Optional) Number of cpu units used by the task. If the requires_compatibilities is FARGATE this field is required."
default = null
}
variable "memory" {
type = number
description = "(Optional) Amount (in MiB) of memory used by the task. If the requires_compatibilities is FARGATE this field is required."
default = null
}
variable "task_role_arn" {
type = string
description = "(Optional) ARN of IAM role that allows your Amazon ECS container task to make calls to other AWS services."
default = null
}
variable "execution_role_arn" {
type = string
description = "(Optional) ARN of the task execution role that the Amazon ECS container agent and the Docker daemon can assume."
}
main.tf
# ECS Task Definition
resource "aws_ecs_task_definition" "task_definition" {
family = var.family
container_definitions = file(var.container_definitions_path)
network_mode = var.network_mode
requires_compatibilities = var.compatibilities
cpu = var.cpu
memory = var.memory
task_role_arn = var.task_role_arn
execution_role_arn = var.execution_role_arn
}
outputs.tf
output "arn" {
value = aws_ecs_task_definition.task_definition.arn
}
output "revision" {
value = aws_ecs_task_definition.task_definition.revision
}
d) ECS Service
The ECS service can be used to run and maintain a specified number of instances of a task definition simultaneously in an ECS cluster.
variables.tf
variable "AWS_ACCESS_KEY_ID" {
type = string
}
variable "AWS_SECRET_ACCESS_KEY" {
type = string
}
variable "AWS_REGION" {
type = string
}
variable "name" {
type = string
description = "(Required) Name of the service (up to 255 letters, numbers, hyphens, and underscores)"
}
variable "cluster_arn" {
type = string
description = "(Optional) ARN of an ECS cluster."
}
variable "task_definition_arn" {
type = string
description = "(Optional) Family and revision (family:revision) or full ARN of the task definition that you want to run in your service. Required unless using the EXTERNAL deployment controller. If a revision is not specified, the latest ACTIVE revision is used."
}
variable "desired_count" {
type = number
description = "(Optional) Number of instances of the task definition to place and keep running. Defaults to 0. Do not specify if using the DAEMON scheduling strategy."
}
variable "launch_type" {
type = string
description = "(Optional) Launch type on which to run your service. The valid values are EC2, FARGATE, and EXTERNAL. Defaults to EC2."
default = "FARGATE"
}
variable "force_new_deployment" {
type = bool
description = "(Optional) Enable to force a new task deployment of the service. This can be used to update tasks to use a newer Docker image with same image/tag combination (e.g., myimage:latest), roll Fargate tasks onto a newer platform version, or immediately deploy ordered_placement_strategy and placement_constraints updates."
default = true
}
variable "network_configuration" {
type = object({
subnets = list(string)
security_groups = optional(list(string))
assign_public_ip = optional(bool)
})
description = "(Optional) Network configuration for the service. This parameter is required for task definitions that use the awsvpc network mode to receive their own Elastic Network Interface, and it is not supported for other network modes."
}
main.tf
# ECS Service
resource "aws_ecs_service" "service" {
name = var.name
cluster = var.cluster_arn
task_definition = var.task_definition_arn
desired_count = var.desired_count
launch_type = var.launch_type
force_new_deployment = var.force_new_deployment
network_configuration {
subnets = var.network_configuration.subnets
security_groups = var.network_configuration.security_groups
assign_public_ip = var.network_configuration.assign_public_ip
}
}
outputs.tf
output "arn" {
value = aws_ecs_service.service.id
}
output "name" {
value = aws_ecs_service.service.name
}
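Before moving on, you can sanity-check each building block locally with Terraform's built-in validation. This is only a rough sketch; the directory names below are assumptions about where each building block's code lives, so adjust them to your own layout:

# Run from the directory containing the building block folders (names assumed)
for module in iam-role ecs-cluster ecs-task-definition ecs-service; do
  (
    cd "$module"
    terraform init -backend=false   # install providers without configuring a backend
    terraform validate              # catch syntax and type errors early
    terraform fmt -check            # verify canonical formatting
  )
done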
With all the building blocks in place, we can now write our Terragrunt code that will orchestrate the provisioning of our infrastructure.
The code will have the following directory structure:
infra-live/
  dev/
    ecs-cluster/
      terragrunt.hcl
    ecs-service/
      terragrunt.hcl
    ecs-task-definition/
      container-definitions.json
      terragrunt.hcl
    internet-gateway/
      terragrunt.hcl
    nacl/
      terragrunt.hcl
    public-route-table/
      terragrunt.hcl
    public-subnets/
      terragrunt.hcl
    security-group/
      terragrunt.hcl
    task-role/
      terragrunt.hcl
    vpc/
      terragrunt.hcl
  terragrunt.hcl
Now we'll fill our files with appropriate code.
Root terragrunt.hcl file
Our root terragrunt.hcl file will contain the configuration for our remote Terraform state. We'll use an S3 bucket in AWS to store our Terraform state file, and the name of our S3 bucket must be unique for it to be successfully created. My S3 bucket is in the N. Virginia region (us-east-1).
infra-live/terragrunt.hcl
generate "backend" {
path = "backend.tf"
if_exists = "overwrite_terragrunt"
contents = <<EOF
terraform {
backend "s3" {
bucket = "<unique_bucket_name>"
key = "infra-live/${path_relative_to_include()}/terraform.tfstate"
region = "us-east-1"
encrypt = true
}
}
EOF
}
NB: Make sure to replace <unique_bucket_name> with the name of the S3 bucket you will have created in your AWS account.
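If you haven't created the state bucket yet, you can do it quickly from the CLI. This is a minimal sketch; the bucket name is a placeholder, and enabling versioning is optional but a good safety net for state files:

# Create the state bucket (for regions other than us-east-1, add
# --create-bucket-configuration LocationConstraint=<region>)
aws s3api create-bucket --bucket <unique_bucket_name> --region us-east-1

# Optional: keep previous versions of the state file
aws s3api put-bucket-versioning \
  --bucket <unique_bucket_name> \
  --versioning-configuration Status=Enabled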
a) VPC
At the core of it all, our ECS cluster components will reside within a VPC, which is why we need this.
infra-live/dev/vpc/terragrunt.hcl
include "root" {
path = find_in_parent_folders()
}
terraform {
source = <git_repo_url>
}
inputs = {
vpc_cidr = "10.0.0.0/16"
vpc_name = "vpc-dev"
enable_dns_hostnames = true
vpc_tags = {}
}
In this Terragrunt file (and in the subsequent files), replace the terraform source value with the URL of the Git repository hosting your building block's code (we'll get to versioning our infrastructure code soon).
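For reference, a Git source in Terraform/Terragrunt typically uses the git:: syntax sketched below; the organization, repository name, and tag here are purely hypothetical, so substitute your own:

terraform {
  # SSH-style Git source pinned to a tag (hypothetical org, repo, and tag)
  source = "git::ssh://git@github.com/<your_org>/terraform-aws-vpc.git?ref=v1.0.0"
}

Pinning to a tag (ref) keeps your environments reproducible even as the building block repos evolve.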
b) Internet Gateway
infra-live/dev/internet-gateway/terragrunt.hcl
include "root" {
path = find_in_parent_folders()
}
terraform {
source = <git_repo_url>
}
dependency "vpc" {
config_path = "../vpc"
}
inputs = {
vpc_id = dependency.vpc.outputs.vpc_id
name = "igw-dev"
tags = {}
}
c) Public Route Table
infra-live/dev/public-route-table/terragrunt.hcl
include "root" {
path = find_in_parent_folders()
}
terraform {
source = <git_repo_url>
}
dependency "vpc" {
config_path = "../vpc"
}
dependency "igw" {
config_path = "../internet-gateway"
}
inputs = {
route_tables = [
{
name = "public-rt-dev"
vpc_id = dependency.vpc.outputs.vpc_id
is_igw_rt = true
routes = [
{
cidr_block = "0.0.0.0/0"
igw_id = dependency.igw.outputs.igw_id
}
]
tags = {}
}
]
}
d) Public Subnets
infra-live/dev/public-subnets/terragrunt.hcl
include "root" {
path = find_in_parent_folders()
}
terraform {
source = <git_repo_url>
}
dependency "vpc" {
config_path = "../vpc"
}
dependency "public-route-table" {
config_path = "../public-route-table"
}
inputs = {
subnets = [
{
name = "public-subnet"
vpc_id = dependency.vpc.outputs.vpc_id
cidr_block = "10.0.1.0/24"
availability_zone = "us-east-1a"
map_public_ip_on_launch = true
private_dns_hostname_type_on_launch = "resource-name"
is_public = true
route_table_id = dependency.public-route-table.outputs.route_table_ids[0]
tags = {}
}
]
}
e) NACL
infra-live/dev/nacl/terragrunt.hcl
include "root" {
path = find_in_parent_folders()
}
terraform {
source = <git_repo_url>
}
dependency "vpc" {
config_path = "../vpc"
}
dependency "public-subnets" {
config_path = "../public-subnets"
}
inputs = {
_vpc_id = dependency.vpc.outputs.vpc_id
nacls = [
{
name = "public-nacl"
vpc_id = dependency.vpc.outputs.vpc_id
egress = [
{
protocol = "tcp"
rule_no = 100
action = "allow"
cidr_block = "0.0.0.0/0"
from_port = 80
to_port = 80
},
{
protocol = "tcp"
rule_no = 200
action = "allow"
cidr_block = "0.0.0.0/0"
from_port = 443
to_port = 443
}
]
ingress = [
{
protocol = "tcp"
rule_no = 100
action = "allow"
cidr_block = "0.0.0.0/0"
from_port = 80
to_port = 80
},
{
protocol = "tcp"
rule_no = 200
action = "allow"
cidr_block = "0.0.0.0/0"
from_port = 443
to_port = 443
}
]
subnet_id = dependency.public-subnets.outputs.public_subnets[0]
tags = {}
}
]
}
f) Security Group
infra-live/dev/security-group/terragrunt.hcl
include "root" {
path = find_in_parent_folders()
}
terraform {
source = <git_repo_url>
}
dependency "vpc" {
config_path = "../vpc"
}
dependency "public-subnets" {
config_path = "../public-subnets"
}
inputs = {
vpc_id = dependency.vpc.outputs.vpc_id
name = "public-sg"
description = "Web security group"
ingress_rules = [
{
protocol = "tcp"
from_port = 80
to_port = 80
cidr_blocks = ["0.0.0.0/0"]
},
{
protocol = "tcp"
from_port = 443
to_port = 443
cidr_blocks = ["0.0.0.0/0"]
}
]
egress_rules = [
{
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
]
tags = {}
}
g) Task Role
This IAM role gives ECR and CloudWatch permissions.
infra-live/dev/task-role/terragrunt.hcl
include "root" {
path = find_in_parent_folders()
}
terraform {
source = <git_repo_url>
}
inputs = {
principals = [
{
type = "Service"
identifiers = ["ecs-tasks.amazonaws.com"]
}
]
role_name = "ECSTaskExecutionRole"
policy_name = "ECRTaskExecutionPolicy"
policy_statements = [
{
sid = "ECRPermissions"
actions = [
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:DescribeImages",
"ecr:DescribeImageScanFindings",
"ecr:DescribeRepositories",
"ecr:GetAuthorizationToken",
"ecr:GetDownloadUrlForLayer",
"ecr:GetLifecyclePolicy",
"ecr:GetLifecyclePolicyPreview",
"ecr:GetRepositoryPolicy",
"ecr:ListImages",
"ecr:ListTagsForResource"
]
resources = ["*"]
},
{
sid = "CloudWatchLogsPermissions"
actions = [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents",
"logs:GetLogEvents",
"logs:FilterLogEvents",
],
resources = ["*"]
}
]
}
h) ECS Cluster
infra-live/dev/ecs-cluster/terragrunt.hcl
include "root" {
path = find_in_parent_folders()
}
terraform {
source = <git_repo_url>
}
inputs = {
name = "ecs-demo"
setting = {
name = "containerInsights"
value = "enabled"
}
}
i) ECS Task Definition
The ECS task definition references a JSON file that contains the actual container definition configuration.
Be sure to replace <ecr_image_uri> with the actual URI of your Docker image in your private ECR repo.
infra-live/dev/ecs-task-definition/container-definitions.json
[
  {
    "name": "ecs-demo",
    "image": "<ecr_image_uri>",
    "cpu": 512,
    "memory": 2048,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "ecs-demo",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "ecs-demo"
      }
    }
  }
]
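If you're unsure of your image URI, you can look it up with the AWS CLI; this sketch assumes the ecs-demo repository name used earlier, and you'd append a tag such as :latest to the returned value:

# Prints something like <account_id>.dkr.ecr.<region>.amazonaws.com/ecs-demo
aws ecr describe-repositories \
  --repository-names ecs-demo \
  --query 'repositories[0].repositoryUri' \
  --output text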
infra-live/dev/ecs-task-definition/terragrunt.hcl
include "root" {
path = find_in_parent_folders()
}
terraform {
source = <git_repo_url>
}
dependency "task_role" {
config_path = "../task-role"
}
inputs = {
family = "ecs-demo-task-definition"
container_definitions_path = "./container-definitions.json"
network_mode = "awsvpc"
compatibilities = ["FARGATE"]
cpu = 512
memory = 2048
task_role_arn = dependency.task_role.outputs.role_arn
execution_role_arn = dependency.task_role.outputs.role_arn
}
j) ECS Service
The ECS service lets us determine how many instances of our task definition we want (desired_count) and which launch type we want for our ECS tasks (EC2 or FARGATE). We've selected FARGATE as our launch type, since that's the focus of this article.
infra-live/dev/ecs-service/terragrunt.hcl
include "root" {
path = find_in_parent_folders()
}
terraform {
source = <git_repo_url>
}
dependency "ecs_cluster" {
config_path = "../ecs-cluster"
}
dependency "ecs_task_definition" {
config_path = "../ecs-task-definition"
}
dependency "public_subnets" {
config_path = "../public-subnets"
}
dependency "security_group" {
config_path = "../security-group"
}
inputs = {
name = "ecs-demo-service"
cluster_arn = dependency.ecs_cluster.outputs.arn
task_definition_arn = dependency.ecs_task_definition.outputs.arn
desired_count = 2
launch_type = "FARGATE"
force_new_deployment = true
network_configuration = {
subnets = [dependency.public_subnets.outputs.public_subnets[0]]
security_groups = [dependency.security_group.outputs.security_group_id]
assign_public_ip = true
}
}
3. Version our infrastructure code with GitHub
You can use this article as a reference to create repositories for our building blocks' code and Terragrunt code.
After versioning the building blocks, be sure to update the terraform source in the terragrunt.hcl files of the Terragrunt project with the GitHub URLs for the corresponding building blocks. You can then push these changes to your Terragrunt project's GitHub repo.
4. GitHub Actions workflow for infrastructure provisioning
With our code written and versioned, we can now create a workflow that will be triggered whenever we push code to the main branch.
We'll first need to configure some secrets in our GitHub infra-live repository settings.
Once again, you can use this article for a step-by-step guide on how to do so.
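If you prefer the terminal over the web UI, recent versions of the GitHub CLI can set the same secrets and variables used by the workflow below; the private key path here is just an example:

# Repository secrets (values are prompted for or piped in)
gh secret set AWS_SECRET_ACCESS_KEY
gh secret set SSH_PRIVATE_KEY < ~/.ssh/id_ed25519

# Repository variables
gh variable set AWS_ACCESS_KEY_ID --body "<your_access_key_id>"
gh variable set AWS_DEFAULT_REGION --body "us-east-1"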
We can then create a .github/workflows directory in the root directory of our infra-live project, and then create a YAML file within this directory which we'll call configure.yml (you can name it whatever you want, as long as it has a .yml extension).
infra-live/.github/workflows/configure.yml
name: Configure

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false

      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v

      - name: Apply Terraform changes
        run: |
          cd dev
          terragrunt run-all apply -auto-approve --terragrunt-non-interactive -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -var AWS_REGION=$AWS_DEFAULT_REGION
        env:
          AWS_ACCESS_KEY_ID: ${{ vars.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ vars.AWS_DEFAULT_REGION }}
So our configure.yml file is executed whenever code is pushed to the main branch or a pull request targeting the main branch is opened or updated. We then have an apply job, running on the latest version of Ubuntu, which checks out our infra-live GitHub repo, sets up SSH on the GitHub runner so it can pull our building blocks' code from their various repositories, installs Terraform and Terragrunt, and then applies our Terragrunt configuration.
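Before relying on the pipeline, you can preview locally what the apply job will do, using the same Terragrunt command with plan instead of apply. A minimal sketch, assuming your AWS credentials and region are already exported as the environment variables used below:

cd infra-live/dev

# Preview the changes without applying anything
terragrunt run-all plan --terragrunt-non-interactive \
  -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  -var AWS_REGION=$AWS_DEFAULT_REGION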
Here's some sample output from the execution of our pipeline after pushing code to the main branch:
Below, we can see our service trying to spin up two tasks, since our ECS service configuration has a desired_count of 2.
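You can also confirm this from the CLI, using the cluster and service names from this guide:

# List the tasks the service has started
aws ecs list-tasks --cluster ecs-demo --service-name ecs-demo-service

# Compare the desired and running task counts
aws ecs describe-services --cluster ecs-demo --services ecs-demo-service \
  --query 'services[0].{desired:desiredCount,running:runningCount}'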
5. GitHub Actions destroy job
Having provisioned our infrastructure for illustration purposes, we may now want to easily destroy it all to avoid incurring costs.
We can do this by adding a job to our GitHub Actions workflow whose task is to destroy the provisioned infrastructure, and configuring it to be triggered manually.
We'll start by adding a workflow_dispatch block to our on block. This block also allows us to configure inputs whose values we can define when triggering the workflow manually. In our case, we define a destroy input, which is essentially a dropdown element with two options: true and false. Selecting true should run the destroy job, whereas selecting false should run the apply job.
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
  workflow_dispatch:
    inputs:
      destroy:
        description: 'Run Terragrunt destroy command'
        required: true
        default: 'false'
        type: choice
        options:
          - true
          - false
We now need to add a condition to our apply job which will cause it to only run if a) we haven't defined the destroy input, or b) we have selected false as the value for our destroy input.
jobs:
  apply:
    if: ${{ !inputs.destroy || inputs.destroy == 'false' }}
    runs-on: ubuntu-latest
    ...
We can now add a destroy job which will only run if we select true as the value of our destroy input.
  destroy:
    if: ${{ inputs.destroy == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false

      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v

      - name: Destroy Terraform changes
        run: |
          cd dev
          terragrunt run-all destroy -auto-approve --terragrunt-non-interactive -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -var AWS_REGION=$AWS_DEFAULT_REGION
        env:
          AWS_ACCESS_KEY_ID: ${{ vars.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ vars.AWS_DEFAULT_REGION }}
So our full configure.yml file should look like this:
name: Configure

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
  workflow_dispatch:
    inputs:
      destroy:
        description: 'Run Terragrunt destroy command'
        required: true
        default: 'false'
        type: choice
        options:
          - true
          - false

jobs:
  apply:
    if: ${{ !inputs.destroy || inputs.destroy == 'false' }}
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false

      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v

      - name: Apply Terraform changes
        run: |
          cd dev
          terragrunt run-all apply -auto-approve --terragrunt-non-interactive -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -var AWS_REGION=$AWS_DEFAULT_REGION
        env:
          AWS_ACCESS_KEY_ID: ${{ vars.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ vars.AWS_DEFAULT_REGION }}

  destroy:
    if: ${{ inputs.destroy == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false

      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v

      - name: Destroy Terraform changes
        run: |
          cd dev
          terragrunt run-all destroy -auto-approve --terragrunt-non-interactive -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -var AWS_REGION=$AWS_DEFAULT_REGION
        env:
          AWS_ACCESS_KEY_ID: ${{ vars.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ vars.AWS_DEFAULT_REGION }}
We can then commit and push our code. To see the change in the GitHub interface, go to your GitHub repo, select the Actions tab, and select the Configure workflow in the left sidebar menu (note that pushing the code to your main branch will still trigger an automatic execution of your pipeline).
If we select true and click the green Run workflow button, a pipeline will be executed, running just the destroy job.
When the pipeline execution is done, you can check the AWS console to confirm that the ECS cluster and its components have been deleted.
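You can also verify this from the CLI. Note that the ECR repository was created manually in step 1, so the destroy job won't remove it; delete it separately if you no longer need it:

# The ecs-demo cluster should no longer be listed
aws ecs list-clusters --region <region>

# Optional cleanup: delete the manually created ECR repo and its images
aws ecr delete-repository --repository-name ecs-demo --region <region> --force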
You could recreate the cluster by following the same approach, selecting false instead of true when triggering the workflow manually to provision the resources again.
And that's it! I hope this helps you in your tech journey.
If you have any questions or remarks, feel free to leave them in the comments section.