Introduction
This article describes how to dockerize an application using the AWS ECS (Elastic Container Service) Fargate launch type and distribute the incoming traffic with an Application Load Balancer. Since we are in the age of containerization, it is worth understanding: why containers? What is their purpose? What are the benefits?
What Are Containers & Containerization?
A container is a lightweight, isolated environment that runs on top of a host (for example a virtual machine) once a container runtime such as Docker is installed. A container does not ship its own operating system, but that doesn't mean it works without one :) it shares the kernel of its host. A virtual machine virtualizes the underlying hardware, whereas a container virtualizes the operating system of the host machine; that is why it has no explicit OS of its own and stays lightweight.
Packaging the application code and its dependencies into an image, and running it as a container with the required port mappings to the host machine, is what we call containerization.
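As a minimal illustration (not part of the project code, using a hypothetical image name myapp), containerizing an app whose Dockerfile sits in the current directory and which listens on port 3000 looks like this:
# Build an image from the Dockerfile in the current directory
docker build -t myapp:latest .
# Run a container and map host port 3000 to container port 3000
docker run -d -p 3000:3000 --name myapp myapp:latest
# Verify the container is up and the app responds
docker ps
curl http://localhost:3000/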
Purpose:
Containers are lightweight and easy to manage in almost every respect. In the traditional approach, the whole application was deployed on a single host machine, so any outage or problem affected every component of the application; that style of architecture is known as a Monolithic Architecture. Containerization opens up a different approach: each application component can be broken out and deployed in its own isolated container, so an outage in one container does not spill over into the others. Only the affected component needs to be troubleshooted, and a dedicated engineering team can focus on that particular issue.
Architectural Diagram
Services Used:
ECS FARGATE
VPC
Application Load Balancer
CloudWatch Log Groups
What Is ECS:
Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster, either on Amazon Elastic Compute Cloud (Amazon EC2) instances or on serverless AWS Fargate capacity, which is what we use here.
Task Definition:
A task definition is a blueprint that defines the parameters for running containers: the Docker image to use, the container CPU and memory, the launch type (EC2 or Fargate), the IAM roles, the logging configuration, the Docker networking mode, and so on.
Tasks:
A task is a running instantiation of a task definition: containers are launched from the referenced image with the same CPU/memory and other settings specified in the task definition.
Services:
A service is the ECS component that manages the tasks (i.e. the containers) based on health checks. It continuously monitors the health of the running tasks; when a task is reported unhealthy (for example by the load balancer health check), the service stops it and launches a replacement from the same task definition so the desired count is maintained.
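As an illustration (a sanity-check sketch, not part of the Terraform code), once everything is deployed the relationship between a service, its task definition, and its running tasks can be inspected with the AWS CLI; the cluster and service names below match the Terraform values used later in this article:
# Show desired/running task counts and the task definition used by the service
aws ecs describe-services --cluster ecs-cluster --services ecs-service \
  --query 'services[0].{desired:desiredCount,running:runningCount,taskDef:taskDefinition}'
# List the task ARNs currently launched by the service
aws ecs list-tasks --cluster ecs-cluster --service-name ecs-service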
Deployment Of Resources
Here, the deployment of all resources has been codified using Terraform and a GitLab CI pipeline. We have created a separate repository for each resource to keep things transparent and simple.
Terraform Provider Details:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "4.67.0"
}
}
}
provider "aws" {
# Configuration options
region = "us-east-1"
}
1. ECR:
Here, an image repository has been created in the ECR service; the Docker image is pushed there and maintained with versioned tags.
var.tf
variable "ecr_repo" {
description = "Name of repository"
default = "aws-ecs-reactjs-personal-portfolio"
}
variable "ecr_tags" {
type = map(any)
default = {
"AppName" = "ReactJS"
"Env" = "Dev"
}
}
ecr.tf
resource "aws_ecr_repository" "aws-ecr" {
name = var.ecr_repo
tags = var.ecr_tags
}
output.tf
output "ecr_arn" {
value = aws_ecr_repository.aws-ecr.arn
}
output "ecr_registry_id" {
value = aws_ecr_repository.aws-ecr.registry_id
}
output "ecr_url" {
value = aws_ecr_repository.aws-ecr.repository_url
}
Pipeline File
default:
tags:
- gitlab-runner-test
stages:
- image_repo_create
Image_Repository_Build:
stage: image_repo_create
script:
- terraform init
- terraform plan
- terraform apply --auto-approve
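Once the pipeline has run, the new repository can be double-checked with the AWS CLI (a quick sanity check, not part of the pipeline itself):
# Confirm the ECR repository exists and note its URI for later pushes
aws ecr describe-repositories --repository-names aws-ecs-reactjs-personal-portfolio \
  --query 'repositories[0].repositoryUri' --output text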
2. Image Creation:
Here, to build the application image, we have prepared a Dockerfile that builds a custom image of the application, which the pipeline then pushes to the ECR registry in AWS.
Application Code:
Dockerfile
# Use the Alpine-based Node.js image
FROM node:alpine
# Set the working directory
WORKDIR /app
# Copy the dependency manifests
COPY package.json ./
COPY package-lock.json ./
# Copy the rest of the project files
COPY . .
# Install the dependencies
RUN npm install
# Start the application
CMD [ "npm", "run", "start" ]
Pipeline File
default:
tags:
- gitlab-runner-test
stages:
- image_build
Image_Build:
variables:
region: "us-east-1"
stage: image_build
script:
- export aws_account_id=$(aws sts get-caller-identity --query 'Account' --output text)
- sudo docker build -t reactjs-image .
- sudo docker tag reactjs-image:latest $aws_account_id.dkr.ecr.$region.amazonaws.com/aws-ecs-reactjs-personal-portfolio:latest
- aws ecr get-login-password --region $region | sudo docker login --username AWS --password-stdin $aws_account_id.dkr.ecr.$region.amazonaws.com
- sudo docker push $aws_account_id.dkr.ecr.$region.amazonaws.com/aws-ecs-reactjs-personal-portfolio:latest
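Optionally, the image can be smoke-tested locally before relying on the push; a minimal sketch assuming the React app listens on port 3000 (the container name portfolio-test is arbitrary):
# Run the freshly built image locally, mapping port 3000
sudo docker run -d -p 3000:3000 --name portfolio-test reactjs-image:latest
# Confirm the app responds, then clean up
curl -I http://localhost:3000/
sudo docker rm -f portfolio-test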
3. Application Load Balancer:
var.tf
variable "TG_conf" {
type = object({
name = string
port = string
protocol = string
target_type = string
enabled = bool
healthy_threshold = string
interval = string
path = string
})
}
variable "ALB_conf" {
type = object({
name = string
internal = bool
load_balancer_type = string
ip_address_type = string
})
}
variable "Listener_conf" {
type = map(object({
port = string
protocol = string
type = string
priority = number
}))
}
variable "alb_tags" {
description = "provides the tags for ALB"
type = object({
Environment = string
Email = string
Type = string
Owner = string
})
default = {
Email = "dasanirban0806@gmail.com"
Environment = "Production"
Owner = "Anirban Das"
Type = "External"
}
}
terraform.tfvars
TG_conf = {
enabled = true
healthy_threshold = "2"
interval = "30"
name = "TargetGroup-External"
port = "3000"
protocol = "HTTP"
target_type = "ip"
path = "/home"
}
ALB_conf = {
internal = false
ip_address_type = "ipv4"
load_balancer_type = "application"
name = "ALB-External"
}
Listener_conf = {
"1" = {
port = "80"
priority = 100
protocol = "HTTP"
type = "forward"
}
}
data.tf
data "aws_security_group" "ext_alb" {
filter {
name = "tag:Name"
values = ["InternetFacing-ALB"]
}
}
# vpc details :
data "aws_vpc" "this_vpc" {
state = "available"
filter {
name = "tag:Name"
values = ["custom-vpc"]
}
}
# subnets details :
data "aws_subnet" "web_subnet_1a" {
vpc_id = data.aws_vpc.this_vpc.id
filter {
name = "tag:Name"
values = ["weblayer-pub1-1a"]
}
}
data "aws_subnet" "web_subnet_1b" {
vpc_id = data.aws_vpc.this_vpc.id
filter {
name = "tag:Name"
values = ["weblayer-pub2-1b"]
}
}
ext_alb.tf
resource "aws_lb_target_group" "this_tg" {
name = var.TG_conf["name"]
port = var.TG_conf["port"]
protocol = var.TG_conf["protocol"]
vpc_id = data.aws_vpc.this_vpc.id
health_check {
enabled = var.TG_conf["enabled"]
healthy_threshold = var.TG_conf["healthy_threshold"]
interval = var.TG_conf["interval"]
path = var.TG_conf["path"]
}
target_type = var.TG_conf["target_type"]
tags = {
Attached_ALB_dns = aws_lb.this_alb.dns_name
}
}
resource "aws_lb" "this_alb" {
name = var.ALB_conf["name"]
load_balancer_type = var.ALB_conf["load_balancer_type"]
ip_address_type = var.ALB_conf["ip_address_type"]
internal = var.ALB_conf["internal"]
security_groups = [data.aws_security_group.ext_alb.id]
subnets = [data.aws_subnet.web_subnet_1a.id, data.aws_subnet.web_subnet_1b.id]
tags = merge(var.alb_tags)
}
resource "aws_lb_listener" "this_alb_lis" {
for_each = var.Listener_conf
load_balancer_arn = aws_lb.this_alb.arn
port = each.value["port"]
protocol = each.value["protocol"]
default_action {
type = each.value["type"]
target_group_arn = aws_lb_target_group.this_tg.arn
}
}
output.tf
output "arn" {
value = [aws_lb.this_alb.arn]
}
output "dns_name" {
value = [aws_lb.this_alb.dns_name]
}
Pipeline File
default:
tags:
- gitlab-runner-test
stages:
- external_alb_create
Ext_ALB:
stage: external_alb_create
script:
- terraform init
- terraform plan
- terraform apply --auto-approve
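Once this stack is applied, the load balancer and target group can be verified from the CLI before ECS is wired to them; a small sanity check using the names from terraform.tfvars above:
# Fetch the DNS name of the external ALB
aws elbv2 describe-load-balancers --names ALB-External \
  --query 'LoadBalancers[0].DNSName' --output text
# Inspect the target group; targets stay empty until the ECS service registers tasks
aws elbv2 describe-target-groups --names TargetGroup-External \
  --query 'TargetGroups[0].TargetGroupArn' --output text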
4. ECS:
var.tf
variable "region" {
type = string
default = "us-east-1"
}
variable "ecs_role" {
description = "ecs roles"
default = "ecsTaskExecutionRole"
}
variable "ecs_details" {
description = "details of ECS cluster"
type = object({
Name = string
logging = string
cloud_watch_encryption_enabled = bool
})
}
variable "ecs_task_def" {
description = "defines the configurations of task definition"
type = object({
family = string
cont_name = string
cpu = number
memory = number
essential = bool
logdriver = string
containerport = number
networkmode = string
requires_compatibilities = list(string)
})
}
variable "ecsservice" {
description = "defines the configuration of ecs service"
type = object({
name = string
launch_type = string
scheduling_strategy = string
desired_count = number
force_new_deployment = bool
})
}
variable "cw_log_grp" {
description = "defines the log group in cloudwatch"
type = string
default = ""
}
variable "kms_key" {
description = "defines the kms key"
type = object({
description = string
deletion_window_in_days = number
})
}
variable "custom_tags" {
description = "defines common tags"
type = map(any)
default = {
AppName = "ReactJS"
Env = "Dev"
}
}
terraform.tfvars
ecs_details = {
Name = "ecs-cluster"
logging = "OVERRIDE"
cloud_watch_encryption_enabled = true
}
ecs_task_def = {
family = "custom-task-definition"
cont_name = "ReactJS-Container"
cpu = 256
memory = 512
essential = true
logdriver = "awslogs"
containerport = 3000
networkmode = "awsvpc"
requires_compatibilities = ["FARGATE",]
}
ecsservice = {
name = "ecs-service"
launch_type = "FARGATE"
scheduling_strategy = "REPLICA"
desired_count = 2
force_new_deployment = true
}
cw_log_grp = "cloudwatch-log-group-ecs-cluster"
kms_key = {
description = "log group encryption"
deletion_window_in_days = 7
}
iam.tf
resource "aws_iam_role" "ecsTaskExecutionRole" {
name = var.ecs_role
assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
}
data "aws_iam_policy_document" "assume_role_policy" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ecs-tasks.amazonaws.com"]
}
}
}
resource "aws_iam_role_policy_attachment" "ecsTaskExecutionRole_policy" {
role = aws_iam_role.ecsTaskExecutionRole.name
# AWS-managed policy for the ECS task execution role (ECR image pull + CloudWatch Logs)
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
data.tf
data "aws_ecr_repository" "ecr" {
name = "aws-ecs-reactjs-personal-portfolio"
}
# vpc details :
data "aws_vpc" "this_vpc" {
state = "available"
filter {
name = "tag:Name"
values = ["custom-vpc"]
}
}
data "aws_security_group" "sg" {
filter {
name = "tag:Name"
values = ["WebSG"]
}
}
data "aws_security_group" "ext_alb" {
filter {
name = "tag:Name"
values = ["InternetFacing-ALB"]
}
}
# subnets details :
data "aws_subnet" "web_subnet_1a" {
vpc_id = data.aws_vpc.this_vpc.id
filter {
name = "tag:Name"
values = ["weblayer-pub1-1a"]
}
}
data "aws_subnet" "web_subnet_1b" {
vpc_id = data.aws_vpc.this_vpc.id
filter {
name = "tag:Name"
values = ["weblayer-pub2-1b"]
}
}
# Fetching the details of target group:
data "aws_lb_target_group" "this_tg" {
name = "TargetGroup-External"
}
data "aws_lb" "this_lb" {
name = "ALB-External"
}
data "aws_lb_listener" "this_lb_listener" {
load_balancer_arn = data.aws_lb.this_lb.arn
port = 80
}
cw_log_group.tf
resource "aws_cloudwatch_log_group" "log-group" {
name = var.cw_log_grp
tags = var.custom_tags
}
aws_kms_key.tf
resource "aws_kms_key" "kms" {
description = var.kms_key["description"]
deletion_window_in_days = var.kms_key["deletion_window_in_days"]
tags = var.custom_tags
}
ecs.tf
resource "aws_ecs_cluster" "aws-ecs-cluster" {
name = var.ecs_details["Name"]
configuration {
execute_command_configuration {
kms_key_id = aws_kms_key.kms.arn
logging = var.ecs_details["logging"]
log_configuration {
cloud_watch_encryption_enabled = true
cloud_watch_log_group_name = aws_cloudwatch_log_group.log-group.name
}
}
}
tags = var.custom_tags
}
resource "aws_ecs_task_definition" "taskdef" {
family = var.ecs_task_def["family"]
container_definitions = jsonencode([
{
"name": var.ecs_task_def["cont_name"],
"image": "${data.aws_ecr_repository.ecr.repository_url}:latest",
"essential": var.ecs_task_def["essential"],
"logConfiguration": {
"logDriver": var.ecs_task_def["logdriver"],
"options": {
"awslogs-group": aws_cloudwatch_log_group.log-group.name,
"awslogs-region": var.region,
"awslogs-stream-prefix": "app-prd"
}
},
"portMappings": [
{
"containerPort": var.ecs_task_def["containerport"]
}
],
"cpu": var.ecs_task_def["cpu"],
"memory": var.ecs_task_def["memory"]
}
])
requires_compatibilities = var.ecs_task_def["requires_compatibilities"]
network_mode = var.ecs_task_def["networkmode"]
memory = var.ecs_task_def["memory"]
cpu = var.ecs_task_def["cpu"]
execution_role_arn = aws_iam_role.ecsTaskExecutionRole.arn
task_role_arn = aws_iam_role.ecsTaskExecutionRole.arn
tags = var.custom_tags
}
resource "aws_ecs_service" "aws_ecs-service" {
name = var.ecsservice["name"]
cluster = aws_ecs_cluster.aws-ecs-cluster.id
task_definition = aws_ecs_task_definition.taskdef.family
launch_type = var.ecsservice["launch_type"]
scheduling_strategy = var.ecsservice["scheduling_strategy"]
desired_count = var.ecsservice["desired_count"]
force_new_deployment = var.ecsservice["force_new_deployment"]
health_check_grace_period_seconds = 100
network_configuration {
subnets = [data.aws_subnet.web_subnet_1a.id, data.aws_subnet.web_subnet_1b.id]
assign_public_ip = true
security_groups = [
data.aws_security_group.sg.id
]
}
load_balancer {
target_group_arn = data.aws_lb_target_group.this_tg.id
container_name = "ReactJS-Container"
container_port = 3000
}
depends_on = [
data.aws_lb_listener.this_lb_listener,
]
}
output.tf
output "ecs_arn" {
value = aws_ecs_cluster.aws-ecs-cluster.id
}
output "cw_log_group_arn" {
value = aws_cloudwatch_log_group.log-group.arn
}
output "kms_id" {
value = aws_kms_key.kms.id
}
output "kms_arn" {
value = aws_kms_key.kms.arn
}
Pipeline File
default:
tags:
- gitlab-runner-test
stages:
- ecs_cluster_build
ECS_Cluster_Build:
stage: ecs_cluster_build
script:
- terraform init
- terraform plan
- terraform apply --auto-approve
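With everything applied, the deployment can be verified end to end from the CLI; a sketch using the names and health-check path defined in the Terraform configuration above:
# Wait until the ECS service reaches a steady state
aws ecs wait services-stable --cluster ecs-cluster --services ecs-service
# Check that the tasks registered behind the ALB are passing health checks
TG_ARN=$(aws elbv2 describe-target-groups --names TargetGroup-External \
  --query 'TargetGroups[0].TargetGroupArn' --output text)
aws elbv2 describe-target-health --target-group-arn $TG_ARN \
  --query 'TargetHealthDescriptions[].TargetHealth.State'
# Hit the application through the load balancer
ALB_DNS=$(aws elbv2 describe-load-balancers --names ALB-External \
  --query 'LoadBalancers[0].DNSName' --output text)
curl -I http://$ALB_DNS/home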
Now we are coming to an end, having deployed all the required resources in the sequence above. Once the ECS tasks are reported healthy, we can hit the load balancer URL; the screenshot below shows the application running fine.
Benefits
With Fargate, the customer is billed only for the resources the containers consume, not for underlying hosts, which reduces cost compared to the earlier approach.
Because they are lightweight, containers can serve user requests with very low latency, and there is no separate guest operating system inside the container to boot, patch, or crash.
Containers can be configured with volumes that reside on the host machine, so application code and data can live outside the container, removing that storage requirement from inside the container itself.
Since containers support the microservices approach, application components can be broken down into multiple isolated pieces deployed in different containers, which reduces the probability of an outage taking down the entire application.
That's all about containerization; I hope it has been helpful to all of you. In this article I kept the whole setup very simple, which is why I chose public subnets and containers with public IPs, but in a real-world scenario they cannot stay public: they must, of course, be placed in private subnets to keep them safe from external networks.
So in coming articles I will definitely try to cover the same kind of architecture with more security and compliance considerations.
GitHub Link: https://github.com/dasanirban834/aws-ecs-reactjs-personal-portfolio
Thanks!! 🙂🙂