In this article, I am going to show you how to deploy a Docker (CentOS) image on an ECS cluster with Terraform.
What is ECS?
Amazon Elastic Container Service (Amazon ECS) is a highly scalable and fast container management service. You can use it to run, stop, and manage containers on a cluster.
Here, we are going to focus on only four main components of ECS (a condensed Terraform sketch follows the list).
- ECS Cluster: An Amazon ECS cluster is a logical grouping of tasks or services.
- Task: A task is the instantiation of a task definition within a cluster.
- Task Definition: A task definition is a JSON-format text file that describes one or more containers (up to ten) that form your application. The task definition functions as a blueprint for your application.
- ECS Service: An Amazon ECS service runs and maintains your desired number of tasks simultaneously in an Amazon ECS cluster.
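To make the relationship between these pieces concrete, here is a condensed sketch of how they map onto Terraform resources. Arguments are trimmed for brevity; the full, working configuration appears in main.tf later in this article.
# Condensed sketch: how the four ECS components map to Terraform resources.
# (Trimmed for illustration; see main.tf below for the complete configuration.)

# ECS cluster: a logical grouping of tasks or services
resource "aws_ecs_cluster" "cluster" {
  name = "centos-cluster"
}

# Task definition: the blueprint describing the container(s) to run
resource "aws_ecs_task_definition" "ecs_task" {
  family                = "service"
  container_definitions = jsonencode([{ name = "centos", image = "centos:7", essential = true }])
}

# ECS service: runs and maintains the desired number of tasks on the cluster
resource "aws_ecs_service" "ecs_service" {
  name            = "project-service"
  cluster         = aws_ecs_cluster.cluster.id
  task_definition = aws_ecs_task_definition.ecs_task.arn # each running task is an instance of this
  desired_count   = 1
}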
What is Fargate?
AWS Fargate is a serverless technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers.
What is Terraform?
HashiCorp Terraform is an infrastructure as code (IaC) tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share.
What is Docker?
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly.
Here is the link to my GitHub code for this article: ECS_Dockerimage_Terraform
Let’s get started!
Objectives:
Your team needs you to deploy a Docker container with a CentOS image:
- Pull a CentOS image from the Docker registry.
- Create an ECS cluster that runs the Docker image, using Terraform.
Pre-requisites:
- An AWS user account with admin access (not the root account).
- A Cloud9 IDE, which comes with Terraform and Docker preinstalled.
- A Docker / Docker Hub account.
Resources Used:
For this article, I referred to the ECS documentation and the ECS (Elastic Container Service) section of the Terraform AWS provider documentation.
Steps for implementation of this project:
- Create a directory for this project
- Create the following files in your project directory
providers.tf
main.tf
vpc.tf
variables.tf
outputs.tf
.gitignore
terraform.tfvars
- Provision Infrastructure
- Verify Resources created from AWS Console
- Clean up
Create a directory for this project
mkdir ECS_Dockerimage_Terraform
cd ECS_Dockerimage_Terraform
Create the following files in your project directory
— ECS_Dockerimage_Terraform
- providers.tf
This file contains two providers:
- Docker, for pulling the CentOS image (see the optional docker_image sketch after this file)
- AWS, for creating the ECS resources
Note: In our aws provider block we set our access_key and secret_access_key variables.
# --- Terraform_projects/ECS_Dockerimage_Terraform/providers.tf ---
# Configure the Docker & AWS providers
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 2.20.0"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }
}

provider "docker" {}

provider "aws" {
  region     = var.region
  access_key = var.access_key
  secret_key = var.secret_access_key
}
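Note that providers.tf configures the Docker provider, but none of the project files below actually declare a resource that uses it. If you also want Terraform to pull the CentOS image onto the machine running Terraform, a minimal sketch using the provider's docker_image resource (not part of the original repo) could look like this:
# Optional: pull the CentOS image locally with the Docker provider.
# Not part of the original project files; shown only as a sketch.
resource "docker_image" "centos" {
  name         = "centos:7" # image pulled from Docker Hub
  keep_locally = true       # keep the local image on "terraform destroy"
}
With this in place, terraform apply would pull centos:7 locally, independently of the ECS deployment.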
- main.tf
This file contains the resources that create the AWS ECS cluster and its components.
# --- Terraform_projects/ECS_Dockerimage_Terraform/main.tf ---
#----- Create ECS cluster
resource "aws_ecs_cluster" "cluster" {
  name = "centos-cluster"
}

resource "aws_ecs_cluster_capacity_providers" "cluster" {
  cluster_name       = aws_ecs_cluster.cluster.name
  capacity_providers = ["FARGATE"]

  default_capacity_provider_strategy {
    base              = 1
    weight            = 100
    capacity_provider = "FARGATE"
  }
}

# ECS service and its details
resource "aws_ecs_service" "ecs_service" {
  name            = "project-service"
  cluster         = aws_ecs_cluster.cluster.id
  task_definition = aws_ecs_task_definition.ecs_task.arn
  launch_type     = "FARGATE"
  desired_count   = 1

  network_configuration {
    subnets = [aws_subnet.private_east_a.id, aws_subnet.private_east_b.id]
  }
}

# Task definition
resource "aws_ecs_task_definition" "ecs_task" {
  family                   = "service"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE", "EC2"]
  cpu                      = 512
  memory                   = 2048

  container_definitions = <<DEFINITION
[
  {
    "name" : "centos",
    "image" : "centos:7",
    "cpu" : 512,
    "memory" : 2048,
    "essential" : true,
    "portMappings" : [
      {
        "containerPort" : 80,
        "hostPort" : 80
      }
    ]
  }
]
DEFINITION
}
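As a side note, the container definitions above could also be written with Terraform's jsonencode() function instead of the heredoc; that keeps everything in HCL, so typos are caught by terraform validate rather than surfacing later at plan or apply time. A minimal equivalent sketch:
# Equivalent container_definitions written with jsonencode() instead of a heredoc.
container_definitions = jsonencode([
  {
    name      = "centos"
    image     = "centos:7"
    cpu       = 512
    memory    = 2048
    essential = true
    portMappings = [
      {
        containerPort = 80
        hostPort      = 80
      }
    ]
  }
])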
- vpc.tf
This file contains our VPC and subnet definitions.
# --- Terraform_projects/ECS_Dockerimage_Terraform/vpc.tf ---
#------- Create VPC for ECS
resource "aws_vpc" "project_ecs" {
  cidr_block = var.cidr

  tags = {
    Name = "Project ECS"
  }
}

#-------- Create private subnets for ECS
resource "aws_subnet" "private_east_a" {
  vpc_id            = aws_vpc.project_ecs.id
  cidr_block        = var.private_cidr_a
  availability_zone = var.region_a

  tags = {
    Name = "Private East A"
  }
}

resource "aws_subnet" "private_east_b" {
  vpc_id            = aws_vpc.project_ecs.id
  cidr_block        = var.private_cidr_b
  availability_zone = var.region_b

  tags = {
    Name = "Private East B"
  }
}
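One caveat worth calling out: these are private subnets in a VPC with no internet gateway or NAT gateway, so Fargate tasks placed in them cannot reach Docker Hub to pull centos:7, which is most likely why the containers show up as Pending later in this walkthrough. If you want the image pull to succeed, one option is to add a public subnet with a NAT gateway and route the private subnets through it. This is a sketch only, not part of the original project, and the 10.0.3.0/24 public subnet CIDR is an assumption:
# Optional (not in the original project): give the private subnets outbound
# internet access through a NAT gateway so Fargate can pull images.

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.project_ecs.id
}

resource "aws_subnet" "public_east_a" {
  vpc_id            = aws_vpc.project_ecs.id
  cidr_block        = "10.0.3.0/24" # assumed CIDR, adjust as needed
  availability_zone = var.region_a
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.project_ecs.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public_a" {
  subnet_id      = aws_subnet.public_east_a.id
  route_table_id = aws_route_table.public.id
}

resource "aws_eip" "nat" {
  vpc = true # AWS provider 4.x syntax for a VPC Elastic IP
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_east_a.id
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.project_ecs.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}

resource "aws_route_table_association" "private_a" {
  subnet_id      = aws_subnet.private_east_a.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "private_b" {
  subnet_id      = aws_subnet.private_east_b.id
  route_table_id = aws_route_table.private.id
}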
- variables.tf
This file contains variables.
Note: Here we have declared variables for our access key and secret access key.
# --- Terraform_projects/ECS_Dockerimage_Terraform/variables.tf ---
variable "region" {
  description = "Region to use for AWS resources"
  type        = string
  default     = "us-east-1"
}

variable "region_a" {
  description = "Availability zone for the first private subnet"
  type        = string
  default     = "us-east-1a"
}

variable "region_b" {
  description = "Availability zone for the second private subnet"
  type        = string
  default     = "us-east-1b"
}

variable "cidr" {
  description = "CIDR range for the created VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "private_cidr_a" {
  description = "CIDR range for private subnet A"
  type        = string
  default     = "10.0.1.0/24"
}

variable "private_cidr_b" {
  description = "CIDR range for private subnet B"
  type        = string
  default     = "10.0.2.0/24"
}

variable "access_key" {
  type      = string
  sensitive = true
}

variable "secret_access_key" {
  type      = string
  sensitive = true
}
Setting sensitive = true means these values are not displayed as plain text in CLI output (plan and apply). Note that Terraform still stores them in plain text in the state file, so the state file itself must be protected. We also do not put the actual key values in this file; they are supplied in a separate file, terraform.tfvars (or can be passed as TF_VAR_access_key / TF_VAR_secret_access_key environment variables).
- terraform.tfvars
Here we provide the values for our access key and secret access key. This file is ignored by git thanks to the *.tfvars rule in the .gitignore file below, so it will not be pushed to your repo.
# --- Terraform_projects/ECS_Dockerimage_Terraform/terraform.tfvars ---
access_key        = "<your access key>"
secret_access_key = "<your secret access key>"
- outputs.tf
This file declares output values, which Terraform prints after terraform apply and which you can query later with terraform output.
# --- Terraform_projects/ECS_Dockerimage_Terraform/outputs.tf ---
# This will display the name of the cluster.
output "aws_ecs_cluster" {
  value       = aws_ecs_cluster.cluster.name
  description = "The name of the cluster"
}

# Compute serverless engine for ECS.
output "aws_ecs_cluster_capacity_providers" {
  value       = aws_ecs_cluster_capacity_providers.cluster.capacity_providers
  description = "Compute serverless engine for ECS"
}
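Optionally, you can expose more values the same way; for example, an output for the service name (not part of the original file) could look like this:
# Optional extra output (not in the original outputs.tf).
output "aws_ecs_service" {
  value       = aws_ecs_service.ecs_service.name
  description = "The name of the ECS service"
}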
- .gitignore
This file will ensure that our sensitive information is ignored by git.
# --- Terraform_projects/ECS_Dockerimage_Terraform/.gitignore ---
# Local .terraform directories
**/.terraform/*
# Ignore local .git directories
**/.git/*
# .tfstate files
*.tfstate
*.tfstate.*
# Crash log files
crash.log
crash.*.log
# Exclude all .tfvars files, which are likely to contain sensitive data, such as
# password, private keys, and other secrets. These should not be part of version
# control as they are data points which are potentially sensitive and subject
# to change depending on the environment.
*.tfvars
*.tfvars.json
# Ignore override files as they are usually used to override resources locally and so
# are not checked in
override.tf
override.tf.json
*_override.tf
*_override.tf.json
# Include override files you do wish to add to version control using negated pattern
# !example_override.tf
# Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan
# example: *tfplan*
# Ignore CLI configuration files
.terraformrc
terraform.rc
Provision Infrastructure
- Run terraform init -> initializes the directory and pulls down the providers and modules from the registry so your configuration can work properly.
- Run terraform fmt -> reformats your configuration in the standard style, making sure the spacing and everything else is formatted correctly.
- Run terraform validate -> catches syntax errors, version errors, and other issues.
- Run terraform plan -> does a dry run of your plan so you can see what it is actually going to do and which resources will be created.
- Run terraform apply -> applies your configuration to the provider to create your infrastructure. Type yes when prompted.
- Once the apply completes, Terraform lists the outputs from the outputs.tf file for the root module, as defined in the code.
- Run terraform state list -> shows the 7 resources that were added.
Verify Resources created from the AWS Console
- From the ECS console, click centos-cluster.
- On the Services tab, click the service (project-service, launch type FARGATE).
- On the Tasks tab, click the task.
- On the Task Details tab, we will see our pending containers. Notice that the pending container's image is centos:7. (The containers are likely to stay pending here because the private subnets have no route to the internet, so Fargate cannot pull the image from Docker Hub; see the optional NAT gateway sketch after vpc.tf above.)
- From the left panel, click Task Definitions to confirm the task definition, and open the VPC console to confirm the VPC and subnets were created.
Clean up
- Run terraform destroy --auto-approve -> destroys your infrastructure without prompting you to type "yes".