DEV Community

Paloma Lataliza for AWS Community Builders

Posted on • Originally published at Medium

Your containerized application with IAC on AWS — Pt.2

Hi everyone! In this article, we’ll create our Terraform modules. Next, we’ll deploy our application to AWS Fargate using the modules we create here, together with Terragrunt.

TERRAFORM
In this article, we will establish our directory structure and write our Terraform module code. In part 3, we will put everything together and use Terraform in conjunction with Terragrunt.

DIRECTORIES
To use Terraform and Terragrunt, our code needs to be organized at the directory level:

app
modules
    ├── amazon_vpc
    ├── aws_loadbalancer
    ├── aws_fargate
    ├── aws_roles
    ├── aws_ecs_cluster
    ├── aws_targetgroup
    └── aws_certificate_manager
terragrunt
    └── dev
        └── us-east-1
            ├── aws_ecs
            │   ├── cluster
            │   └── service
            ├── aws_loadbalancer
            ├── amazon_vpc
            ├── aws_targetgroup
            ├── aws_roles
            ├── aws_certificate_manager
            └── terragrunt.hcl
  • app: This is our infrastructure’s primary directory.
  • modules: Each AWS resource or service has its own subdirectory within this directory. The modules live here, organized by resource: VPC, load balancer, ECS, etc.
  • Terraform subdirectories: Module-specific Terraform files are located in subdirectories like amazon_vpc and aws_loadbalancer.
  • Terragrunt: Terragrunt configurations are kept in this directory.
  • dev: Stands for the configuration of the development environment.
  • us-east-1: Configurations unique to the AWS region “us-east-1”.
  • Terragrunt subdirectories: Environment- and region-specific options for individual services may be found in the aws_ecs, aws_loadbalancer, amazon_vpc, etc. folders.
  • terragrunt.hcl: This is our Terragrunt configuration file, where we will keep the backend configuration and the settings that apply to all services in the “us-east-1” region of the development environment.
  • Module files: Each module subdirectory contains three files: main.tf, variables.tf, and _outputs.tf. The roles module also uses a _data.tf.
  • main.tf: The hub of the module, where AWS resources are defined and configured.
  • variables.tf: Defines the variables the module uses, allowing customization and reuse.
  • _outputs.tf: Declares which of the module’s outputs will be accessible to other modules or to the Terraform project as a whole.
  • _data.tf: Uses data sources to look up information about existing resources or services.
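Putting these files together, a module call from a root configuration might look like the sketch below. The module path and the variable values are illustrative; they match the VPC module we define in this article.

```hcl
// Hypothetical root-level usage of the amazon_vpc module defined below
module "vpc" {
  source = "../modules/amazon_vpc"

  env          = "dev"
  project_name = "myapp"

  vpc_cidr_block             = "10.0.0.0/16"
  public_subnet1_cidr_block  = "10.0.1.0/24"
  public_subnet2_cidr_block  = "10.0.2.0/24"
  private_subnet1_cidr_block = "10.0.3.0/24"
  private_subnet2_cidr_block = "10.0.4.0/24"
  availability_zone1         = "us-east-1a"
  availability_zone2         = "us-east-1b"
  tags                       = { Team = "platform" }
}

// Anything declared in _outputs.tf becomes available as module.vpc.<output>
output "vpc_id" {
  value = module.vpc.vpc_id
}
```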

RESOURCES
The following are the AWS resources that we will use:

  • VPC
  • SUBNETS
  • ROUTE TABLE
  • INTERNET GATEWAY
  • NAT GATEWAY
  • ELASTIC IP
  • ECR
  • SECURITY GROUP
  • APPLICATION LOAD BALANCER
  • FARGATE
  • ROUTE53
  • ACM
  • TERRAFORM MODULES

VPC
Let’s get started with the VPC module. It provides the network connectivity that every one of our applications depends on.

modules
 ├── amazon_vpc

main.tf

// Create VPC
resource "aws_vpc" "vpc" {
  cidr_block           = var.vpc_cidr_block
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}-vpc"
    },
    var.tags,
  )
}


// Create public subnet1 for VPC
resource "aws_subnet" "public_subnet1" {
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = var.public_subnet1_cidr_block
  availability_zone = var.availability_zone1

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}-public-subnet1"
    },
    var.tags,
  )
}


// Create public subnet2 for VPC
resource "aws_subnet" "public_subnet2" {
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = var.public_subnet2_cidr_block
  availability_zone = var.availability_zone2

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}-public-subnet2"
    },
    var.tags,
  )
}


// Create private subnet1 for VPC
resource "aws_subnet" "private_subnet1" {
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = var.private_subnet1_cidr_block
  availability_zone = var.availability_zone1

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}-private-subnet1"
    },
    var.tags,
  )
}


// Create private subnet2 for VPC
resource "aws_subnet" "private_subnet2" {
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = var.private_subnet2_cidr_block
  availability_zone = var.availability_zone2

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}-private-subnet2"
    },
    var.tags,
  )
}


// Create Internet gateway
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.vpc.id

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}"
    },
    var.tags,
  )
}


// Create IGW route in the VPC default route table
resource "aws_default_route_table" "vpc_default_rtb" {
  default_route_table_id = aws_vpc.vpc.default_route_table_id

  # Internet gtw route
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}-vpc-default-rtb"
    },
    var.tags,
  )
}


// Associate public subnet1 with the default route table
resource "aws_route_table_association" "public_subnet1_rtb_association" {
  subnet_id      = aws_subnet.public_subnet1.id
  route_table_id = aws_default_route_table.vpc_default_rtb.id
}

# Associate public subnet2 with the default route table
resource "aws_route_table_association" "public_subnet2_rtb_association" {
  subnet_id      = aws_subnet.public_subnet2.id
  route_table_id = aws_default_route_table.vpc_default_rtb.id
}

# Create custom private route table 1 
resource "aws_route_table" "private_rtb1" {
  vpc_id = aws_vpc.vpc.id

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}-rtb1"
    },
    var.tags,
  )
}


// Create custom private route table 2
resource "aws_route_table" "private_rtb2" {
  vpc_id = aws_vpc.vpc.id

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}-rtb2"
    },
    var.tags,
  )
}


// Create EIP for NAT gateway 1
resource "aws_eip" "eip1" {
  domain = "vpc"

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}-eip1"
    },
    var.tags,
  )
}


// Create EIP for NAT gateway 2
resource "aws_eip" "eip2" {
  domain = "vpc"

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}-eip2"
    },
    var.tags,
  )
}


// Create NAT gateway 1 in public subnet1
resource "aws_nat_gateway" "nat_gtw1" {
  allocation_id = aws_eip.eip1.id
  subnet_id     = aws_subnet.public_subnet1.id

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}-nat-gtw1"
    },
    var.tags,
  )
}


// Create NAT gateway 2 in public subnet2
resource "aws_nat_gateway" "nat_gtw2" {
  allocation_id = aws_eip.eip2.id
  subnet_id     = aws_subnet.public_subnet2.id

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}-nat-gtw2"
    },
    var.tags,
  )
}


// Route private route table 1 traffic through NAT gateway 1
resource "aws_route" "private_rtb1_nat_gtw1" {
  route_table_id         = aws_route_table.private_rtb1.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.nat_gtw1.id
}



// Route private route table 2 traffic through NAT gateway 2
resource "aws_route" "private_rtb2_nat_gtw2" {
  route_table_id         = aws_route_table.private_rtb2.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.nat_gtw2.id
}



// Associate private subnet1 with private route table 1
resource "aws_route_table_association" "private_subnet1_rtb_association" {
  subnet_id      = aws_subnet.private_subnet1.id
  route_table_id = aws_route_table.private_rtb1.id
}



// Associate private subnet2 with private route table 2
resource "aws_route_table_association" "private_subnet2_rtb_association" {
  subnet_id      = aws_subnet.private_subnet2.id
  route_table_id = aws_route_table.private_rtb2.id
}



// Default security group allowing all traffic within the VPC
resource "aws_security_group" "default" {
  name        = "${var.env}-${var.project_name}-sg-vpc"
  description = "Default security group to allow inbound/outbound from the VPC"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    from_port = 0
    to_port   = 0
    protocol  = "-1"
    self      = true
  }

  egress {
    from_port = 0
    to_port   = 0
    protocol  = "-1"
    self      = true
  }
}

variables.tf

variable "vpc_cidr_block" {
}

variable "public_subnet1_cidr_block" {
}

variable "public_subnet2_cidr_block" {
}

variable "private_subnet1_cidr_block" {
}

variable "private_subnet2_cidr_block" {
}

variable "availability_zone1" {
}

variable "availability_zone2" {
}

variable "project_name" {
}

variable "env" {
}

variable "tags" {
  type = map(string)
}


_outputs.tf

output "vpc_arn" {
  value = aws_vpc.vpc.arn
}

output "vpc_id" {
  value = aws_vpc.vpc.id
}

output "vpc_main_rtb" {
  value = aws_vpc.vpc.main_route_table_id
}

output "vpc_cidr_block" {
  value = aws_vpc.vpc.cidr_block
}


output "public_subnet1_id" {
  value = aws_subnet.public_subnet1.id
}

output "public_subnet1_cidr_block" {
  value = aws_subnet.public_subnet1.cidr_block
}

output "public_subnet1_az" {
  value = aws_subnet.public_subnet1.availability_zone
}

output "public_subnet1_az_id" {
  value = aws_subnet.public_subnet1.availability_zone_id
}


output "public_subnet2_id" {
  value = aws_subnet.public_subnet2.id
}

output "public_subnet2_cidr_block" {
  value = aws_subnet.public_subnet2.cidr_block
}

output "public_subnet2_az" {
  value = aws_subnet.public_subnet2.availability_zone
}

output "public_subnet2_az_id" {
  value = aws_subnet.public_subnet2.availability_zone_id
}

output "private_subnet1_id" {
  value = aws_subnet.private_subnet1.id
}

output "private_subnet1_cidr_block" {
  value = aws_subnet.private_subnet1.cidr_block
}

output "private_subnet1_az" {
  value = aws_subnet.private_subnet1.availability_zone
}

output "private_subnet1_az_id" {
  value = aws_subnet.private_subnet1.availability_zone_id
}

output "private_subnet2_id" {
  value = aws_subnet.private_subnet2.id
}

output "private_subnet2_cidr_block" {
  value = aws_subnet.private_subnet2.cidr_block
}

output "private_subnet2_az" {
  value = aws_subnet.private_subnet2.availability_zone
}

output "private_subnet2_az_id" {
  value = aws_subnet.private_subnet2.availability_zone_id
}

output "igw_id" {
  value = aws_internet_gateway.igw.id
}

output "default_rtb_id" {
  value = aws_default_route_table.vpc_default_rtb.id
}
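These outputs are what downstream modules consume. As an illustrative sketch (module names such as `vpc` and `acm`, and values like the domain and priority, are assumptions), the load balancer module defined later in this article could be wired up like this:

```hcl
// Illustrative wiring: feed the VPC module's outputs into the ALB module
module "alb" {
  source = "../modules/aws_loadbalancer"

  env          = "dev"
  project_name = "myapp"

  vpc_id      = module.vpc.vpc_id
  subnet_id_1 = module.vpc.public_subnet1_id
  subnet_id_2 = module.vpc.public_subnet2_id

  certificate_arn        = module.acm.acm_arn
  domain_name            = "example.com"
  priority_listener_rule = 100
}
```

In part 3 we will express this wiring with Terragrunt dependencies instead of a single root module.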

IAM PERMISSIONS
We need to create permissions for our services.

modules
 ├── aws_roles

_data.tf

data "aws_iam_policy_document" "ecs_service_role" {
  statement {
    actions = [
      "application-autoscaling:DeleteScalingPolicy",
      "application-autoscaling:DeregisterScalableTarget",
      "application-autoscaling:DescribeScalableTargets",
      "application-autoscaling:DescribeScalingActivities",
      "application-autoscaling:DescribeScalingPolicies",
      "application-autoscaling:PutScalingPolicy",
      "application-autoscaling:RegisterScalableTarget",
      "autoscaling:UpdateAutoScalingGroup",
      "autoscaling:CreateAutoScalingGroup",
      "autoscaling:CreateLaunchConfiguration",
      "autoscaling:DeleteAutoScalingGroup",
      "autoscaling:DeleteLaunchConfiguration",
      "autoscaling:Describe*",
      "ec2:CreateNetworkInterface",
      "ec2:DescribeDhcpOptions",
      "ec2:DescribeNetworkInterfaces",
      "ec2:DeleteNetworkInterface",
      "ec2:DescribeSubnets",
      "ec2:DescribeSecurityGroups",
      "ec2:DescribeVpcs",
      "ec2:AssociateRouteTable",
      "ec2:AttachInternetGateway",
      "ec2:AuthorizeSecurityGroupIngress",
      "ec2:CancelSpotFleetRequests",
      "ec2:CreateInternetGateway",
      "ec2:CreateLaunchTemplate",
      "ec2:CreateRoute",
      "ec2:CreateRouteTable",
      "ec2:CreateSecurityGroup",
      "ec2:CreateSubnet",
      "ec2:CreateVpc",
      "ec2:DeleteLaunchTemplate",
      "ec2:DeleteSubnet",
      "ec2:DeleteVpc",
      "ec2:Describe*",
      "ec2:DetachInternetGateway",
      "ec2:DisassociateRouteTable",
      "ec2:ModifySubnetAttribute",
      "ec2:ModifyVpcAttribute",
      "ec2:RunInstances",
      "ec2:RequestSpotFleet",
      "codebuild:BatchGetBuilds",
      "codebuild:StartBuild",
      "s3:GetObject",
      "s3:GetObjectVersion",
      "s3:GetBucketVersioning",
      "s3:PutObject",
      "s3:PutObjectAcl",
      "s3:ListBucket",
      "es:ESHttpPost",
      "ecr:*",
      "ecs:*",
      "ec2:*",
      "sqs:*",
      "cloudwatch:*",
      "logs:*",
      "iam:PassRole",
      "elasticloadbalancing:Describe*",
      "iam:AttachRolePolicy",
      "iam:CreateRole",
      "iam:GetPolicy",
      "iam:GetPolicyVersion",
      "iam:GetRole",
      "iam:ListAttachedRolePolicies",
      "iam:ListRoles",
      "iam:ListGroups",
      "iam:ListUsers",
      "iam:ListInstanceProfiles",
      "elasticfilesystem:*",
      "secretsmanager:GetSecretValue",
      "ssm:GetParameters",
      "ssm:GetParameter",
      "ssm:GetParametersByPath",
      "kms:Decrypt",
      "dynamodb:GetItem",
      "dynamodb:PutItem",
      "dynamodb:UpdateItem",
      "dynamodb:DeleteItem",
      "dynamodb:Query",
      "dynamodb:Scan",
    ]

    sid       = "1"
    effect    = "Allow"
    resources = ["*"]
  }
}

main.tf

// Create policy
resource "aws_iam_policy" "ecs_service_policy" {
  name   = "${var.env}-${var.project_name}-policy"
  path   = "/"
  policy = data.aws_iam_policy_document.ecs_service_role.json
}

// Create IAM role
resource "aws_iam_role" "ecs_service_role" {
  name                  = "${var.env}-${var.project_name}-role"
  force_detach_policies = "true"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": [
          "ecs.amazonaws.com",
          "ecs-tasks.amazonaws.com",
          "codebuild.amazonaws.com",
          "codepipeline.amazonaws.com",
          "ecs.application-autoscaling.amazonaws.com",
          "ec2.amazonaws.com",
          "ecr.amazonaws.com"
        ]
      }
    }
  ]
}
EOF

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}"
    },
    var.tags,
  )
}

// Note: aws_iam_policy_attachment manages a policy's attachments exclusively;
// aws_iam_role_policy_attachment is usually the safer choice
resource "aws_iam_policy_attachment" "ecs_service_role_attachment_policy" {
  name       = "${var.env}-${var.project_name}-policy-attachment"
  roles      = [aws_iam_role.ecs_service_role.name]
  policy_arn = aws_iam_policy.ecs_service_policy.arn
}

variables.tf

variable "env" {
}

variable "project_name" {
}


variable "tags" {
  type    = map(string)
  default = {}
}

_outputs.tf

output "ecs_role_arn" {
  value = aws_iam_role.ecs_service_role.arn
}

AWS CERTIFICATE MANAGER

modules
 ├── aws_certificate_manager

We will also need a domain already configured in a Route 53 hosted zone on AWS. With the domain in place, we will issue a valid TLS certificate in our account.
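Rather than hardcoding the hosted zone ID in the validation records, the zone can be looked up with a data source. A minimal sketch, assuming var.domain_name matches the hosted zone's name:

```hcl
// Look up the existing public hosted zone by name so the zone ID
// does not need to be hardcoded (assumes var.domain_name is the zone name)
data "aws_route53_zone" "main" {
  name         = var.domain_name
  private_zone = false
}

// The validation records could then use data.aws_route53_zone.main.zone_id
```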

main.tf

// Create the certificate
resource "aws_acm_certificate" "cert" {
  domain_name       = "*.${var.domain_name}"
  validation_method = "DNS"

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}"
    },
    var.tags,
  )
  lifecycle {
    create_before_destroy = true
  }
}


// DNS records for certificate validation
resource "aws_route53_record" "record_certificate_validation" {
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = "Z08676461KWRT5RHNLSKS" # hosted zone ID; ideally a variable or data source
}

variables.tf

variable "env" {
}

variable "domain_name" {
}

variable "project_name" {
}

variable "tags" {
  type    = map(string)
  default = {}
}

_outputs.tf

output "acm_arn" {
  value = aws_acm_certificate.cert.arn
}

AWS LOAD BALANCER
Here, we will create an application load balancer that will handle the balancing of our applications.

modules
 ├── aws_loadbalancer

main.tf

// Create AWS ALB
resource "aws_lb" "alb" {
  load_balancer_type         = "application"
  internal                   = var.alb_internal
  name                       = "${var.env}-alb-${var.project_name}"
  subnets                    = [var.subnet_id_1, var.subnet_id_2]
  drop_invalid_header_fields = var.alb_drop_invalid_header_fields

  security_groups = [
    aws_security_group.alb.id,
  ]

  idle_timeout = 400

  dynamic "access_logs" {
    for_each = compact([var.lb_access_logs_bucket])

    content {
      bucket  = var.lb_access_logs_bucket
      prefix  = var.lb_access_logs_prefix
      enabled = true
    }
  }

  tags = {
    Name = "${var.env}-alb-${var.project_name}"
  }
}



// Create security group for the ALB
resource "aws_security_group" "alb" {
  name        = "${var.env}-sg-alb-${var.project_name}"
  description = "SG for ECS ALB"
  vpc_id      = var.vpc_id

  revoke_rules_on_delete = true

  ingress {
    description = "TLS from VPC"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]

  }

  ingress {
    description = "HTTP from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]

  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "${var.env}-alb-${var.project_name}"
  }
}

// Create default target group for the ALB
resource "aws_alb_target_group" "target_group" {
  name        = "${var.env}-tg-default-alb"
  port        = 80
  protocol    = "HTTP"
  target_type = "ip"
  vpc_id      = var.vpc_id

  lifecycle {
    create_before_destroy = true
  }

  tags = merge(
    {
      "Name" = "${var.env}-tg-${var.project_name}"
    },
    var.tags,
  )
}


// Create HTTPS listener
resource "aws_alb_listener" "listener_ssl" {
  load_balancer_arn = aws_lb.alb.arn
  port              = "443"
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = var.certificate_arn

  default_action {
    target_group_arn = aws_alb_target_group.target_group.arn
    type             = "forward"
  }
  depends_on = [
    aws_alb_target_group.target_group
  ]
}


// Default host-based rule for the HTTPS listener
resource "aws_alb_listener_rule" "ssl_listener_rule" {
  action {
    target_group_arn = aws_alb_target_group.target_group.arn
    type             = "forward"
  }

  condition {
    host_header {
      values = ["default.${var.domain_name}"]
    }
  }

  priority     = var.priority_listener_rule
  listener_arn = aws_alb_listener.listener_ssl.arn

  depends_on = [
    aws_alb_listener.listener_ssl,
    aws_alb_target_group.target_group
  ]
}


// Create HTTP listener (redirects to HTTPS)
resource "aws_lb_listener" "listener_http" {
  load_balancer_arn = aws_lb.alb.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }

} 

variables.tf

variable "alb" {
  default = true
}

variable "alb_http_listener" {
  default = true
}

variable "alb_sg_allow_test_listener" {
  default = true
}

variable "alb_sg_allow_egress_https_world" {
  default = true
}

variable "alb_only" {
  default = false
}

variable "alb_ssl_policy" {
  default = "ELBSecurityPolicy-2016-08"
  type    = string
}

variable "alb_internal_ssl_policy" {
  default = "ELBSecurityPolicy-TLS-1-2-Ext-2018-06"
  type    = string
}

variable "alb_drop_invalid_header_fields" {
  default = true
  type    = bool
}

variable "lb_access_logs_bucket" {
  type    = string
  default = ""
}

variable "lb_access_logs_prefix" {
  type    = string
  default = ""
}

variable "vpc_id" {
  type    = string
  default = ""
}

variable "subnet_id_1" {
  type    = string
  default = ""
}

variable "subnet_id_2" {
  type    = string
  default = ""
}

variable "project_name" {
  type    = string
  default = ""
}

variable "env" {
  type    = string
  default = ""
}

variable "alb_internal" {
  type    = bool
  default = false
}

variable "certificate_arn" {
  type    = string
  default = ""
}

variable "tags" {
  type    = map(string)
  default = {}
}

variable "priority_listener_rule" {
}

variable "domain_name" {
}

_outputs.tf

output "alb_arn" {
  value = aws_lb.alb.arn
}

output "alb_dns_name" {
  value = aws_lb.alb.dns_name
}


output "alb_secgrp_id" {
  value = aws_security_group.alb.id
}


output "alb_arn_suffix" {
  value = trimspace(regex(".*loadbalancer/(.*)", aws_lb.alb.arn)[0])
}

output "listener_ssl_arn" {
  value = aws_alb_listener.listener_ssl.arn
}

AWS TARGET GROUP
Moving forward, let’s look at the code that makes up our target group.

modules
 ├── aws_targetgroup

main.tf

// Create target group
resource "aws_alb_target_group" "target_group" {
  name        = "${var.env}-tg-${var.project_name}"
  port        = 80
  protocol    = "HTTP"
  target_type = "ip"
  vpc_id      = var.vpc_id

  health_check {
    matcher             = "200-299"
    path                = var.health_check_path
    port                = var.container_port
    protocol            = "HTTP"
    unhealthy_threshold = 8
    timeout             = 10
  }


  lifecycle {
    create_before_destroy = true
  }

  tags = merge(
    {
      "Name" = "${var.env}-tg-${var.project_name}"
    },
    var.tags,
  )
}



// Create HTTPS listener rule
resource "aws_alb_listener_rule" "ssl_listener_rule" {
  action {
    target_group_arn = aws_alb_target_group.target_group.arn
    type             = "forward"
  }

  condition {
    host_header {
      values = [var.host_headers]
    }
  }

  priority     = var.priority_listener_rule
  listener_arn = var.listener_ssl_arn

}

variables.tf

variable "project_name" {
}

variable "env" {
}

variable "certificate_arn" {
}

variable "tags" {
  description = "Map of tags to apply to the resources."
  type        = map(string)
  default     = {}
}

variable "vpc_id" {
}

variable "subnet_id_1" {
}

variable "subnet_id_2" {
}

variable "listener_ssl_arn" {
}

variable "priority_listener_rule" {
}

variable "host_headers" {
}

variable "health_check_path" {
}

variable "container_port" {
}

_outputs.tf

output "tg_alb_arn" {
  value = aws_alb_target_group.target_group.arn
}

output "tg_arn_suffix" {
  value = regex(".*:(.*)", aws_alb_target_group.target_group.arn)[0]
}

ECS and ECR
All of the container configuration happens here. We will build an ECS cluster first, and then a Fargate service with all the necessary components. In addition to the cluster and service, we will create an ECR repository to host our application image.

ECS CLUSTER
modules
 ├── aws_ecs_cluster

main.tf

// Create ECS cluster
resource "aws_ecs_cluster" "ecs" {
  name = "${var.env}-${var.project_name}"

  setting {
    name  = "containerInsights"
    value = var.container_insights ? "enabled" : "disabled"
  }

  lifecycle {
    ignore_changes = [
      tags
    ]
  }

}

variables.tf

variable "project_name" {
  type        = string
  default     = ""
}

variable "env" {
  type        = string
  default     = ""
}

variable "container_insights" {
  type        = bool
  default     = false
}

_outputs.tf

output "cluster_name" {
  value = aws_ecs_cluster.ecs.name
}

output "cluster_arn" {
  value = aws_ecs_cluster.ecs.arn
}

FARGATE

modules
 ├── aws_fargate

main.tf

// Create ECR repository
resource "aws_ecr_repository" "ecs_cluster_ecr" {
  name = "${var.env}-${var.project_name}"

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}"
    },
    var.tags,
  )
}

// Create Route 53 record pointing the application hostname at the ALB
resource "aws_route53_record" "record_sonic" {
  zone_id = "Z08676461KWRT5RHNLSKS" # hosted zone ID; ideally a variable or data source
  name    = var.host_headers
  type    = "CNAME"
  ttl     = 300
  records = [var.alb_dns_name]
}

// Create task definition
resource "aws_ecs_task_definition" "ecs_task_definition" {
  family = "${var.env}-task-def-${var.project_name}"

  container_definitions = <<DEFINITION
[
  {
    "name":  "${var.env}-${var.project_name}" ,
    "image": "${var.aws_account_id}.dkr.ecr.${var.region}.amazonaws.com/${var.env}-${var.project_name}:latest",
    "essential": true,
    "memoryReservation": 64,
    "portMappings": [{
      "containerPort": ${var.container_port}
    }],
    "environment": [
      {
        "name": "ENV_PORT",
        "value": "${var.container_port}"
      },
      {
        "name": "ENVIRONMENT",
        "value": "${var.env}"
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "ecs-${var.env}-${var.project_name}",
        "awslogs-region": "${var.region}",
        "awslogs-create-group": "true",
        "awslogs-stream-prefix": "${var.env}-${var.project_name}"
      }
    }
  }
]

DEFINITION


  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  task_role_arn            = var.ecs_role_arn
  execution_role_arn       = var.ecs_role_arn
  cpu                      = var.container_vcpu
  memory                   = var.container_memory
}


// Create Fargate service
resource "aws_ecs_service" "ecs_service" {
  name            = "${var.env}-${var.project_name}-service"
  cluster         = var.cluster_arn
  task_definition = aws_ecs_task_definition.ecs_task_definition.arn
  desired_count   = var.instance_count
  launch_type     = "FARGATE"

  load_balancer {
    target_group_arn = var.target_group_arn
    container_name   = "${var.env}-${var.project_name}"
    container_port   = var.container_port
  }

  network_configuration {
    security_groups  = [aws_security_group.sg_ecs.id]
    subnets          = [var.subnet_id_1, var.subnet_id_2]
    assign_public_ip = false
  }

  deployment_minimum_healthy_percent = 50
  deployment_maximum_percent         = 400

  tags = merge(
    {
      "Name" = "${var.env}-${var.project_name}"
    },
    var.tags,
  )
}



// Create security group for the ECS tasks
resource "aws_security_group" "sg_ecs" {
  name                   = "${var.env}-sg-ecs-${var.project_name}"
  description            = "SG for ECS"
  vpc_id                 = var.vpc_id
  revoke_rules_on_delete = true

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "${var.env}-sg-ecs-${var.project_name}"
  }
}

// SG rule ALB
resource "aws_security_group_rule" "rule_ecs_alb" {
  description              = "from ALB"
  type                     = "ingress"
  from_port                = 0
  to_port                  = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.sg_ecs.id
  source_security_group_id = var.sg_alb
}

// SG rule ECS
resource "aws_security_group_rule" "in_ecs_nodes" {
  description              = "from ECS"
  type                     = "ingress"
  from_port                = 0
  to_port                  = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.sg_ecs.id
  source_security_group_id = aws_security_group.sg_ecs.id
}
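As a side note, the heredoc JSON used for container_definitions above can also be written with jsonencode(), which lets Terraform catch JSON syntax errors at plan time. A sketch of the same argument (values mirror the heredoc; this is an alternative, not the version used here):

```hcl
  // Equivalent container_definitions expressed with jsonencode()
  container_definitions = jsonencode([
    {
      name              = "${var.env}-${var.project_name}"
      image             = "${var.aws_account_id}.dkr.ecr.${var.region}.amazonaws.com/${var.env}-${var.project_name}:latest"
      essential         = true
      memoryReservation = 64
      portMappings      = [{ containerPort = tonumber(var.container_port) }]
      environment = [
        { name = "ENV_PORT", value = tostring(var.container_port) },
        { name = "ENVIRONMENT", value = var.env }
      ]
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = "ecs-${var.env}-${var.project_name}"
          "awslogs-region"        = var.region
          "awslogs-create-group"  = "true"
          "awslogs-stream-prefix" = "${var.env}-${var.project_name}"
        }
      }
    }
  ])
```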

variables.tf

variable "env" {
}

variable "region" {
}

variable "project_name" {
}

variable "container_port" {
}

variable "instance_count" {
}

variable "container_vcpu" {
}

variable "container_memory" {
}

variable "vpc_id" {
}

variable "subnet_id_1" {
}

variable "subnet_id_2" {
}

variable "aws_account_id" {
}

variable "tags" {
  type    = map(string)
  default = {}
}

variable "ecs_role_arn" {
}

variable "target_group_arn" {
}

variable "sg_alb" {
}

variable "cluster_arn" {
}


variable "host_headers" {
}

variable "alb_dns_name" {
}


_outputs.tf

output "sg_ecs" {
  value = aws_security_group.sg_ecs.id
}

output "service_name" {
  value = aws_ecs_service.ecs_service.name
}

Our modules are ready. In the next part, we will write the Terragrunt HCL files and apply our code. See ya!
