Introduction
In this tutorial we will deploy a full Docker Compose file directly to EC2 using Terraform or OpenTofu (the open-source fork of Terraform).
What? How?
Glad you asked. Before jumping in, it's better to have a clear idea about how we are going to do it in the first place.
So, we will create a "launch template". It's basically a template that we can use to create any number of EC2 instances we want; we just have to configure it once.
The launch template will be configured to provision (basically copy) your compose file to the desired location on the server (EC2) using user data, and a cloud-init config will do the heavy lifting there. What if the image tag changes? The compose file will reference an environment variable, and we will supply the desired tag in the user data when the app is started.
Now that we are clear on what we are gonna do, let's start.
Directory Structure
We will keep the compose file in the app directory and the .tf files in the Terraform directory. This is the structure we will follow -
project-root/
├── app/
│   ├── docker-compose.yml
│   └── other-app-files...
└── Terraform/
    ├── main.tf
    ├── variables.tf
    └── other-terraform-files...
The Docker Compose file
I have created a simple app that shows your IP address on port 8080.
Here's the compose file -
version: '3'
services:
  my_app:
    image: "ashraftheminhaj/ip-fetcher:${TAG}"
    container_name: my_app
    ports:
      - "8080:8080"
Terraform/Tofu files
Let's look at the launch_template.tf and ec2.tf files first -
launch_template.tf
In the provision config below, we send the docker-compose file to ec2-user's home directory (ec2-user is the default user on that EC2 instance/server).
Then we install Docker and Docker Compose, and finally run the app while passing the desired Docker image tag -
resource "aws_iam_role" "ec2_role" {
name = "${local.component}-role-${var.component_postfix}"
assume_role_policy = jsonencode(
{
"Version" : "2012-10-17",
"Statement" : [
{
"Action" : "sts:AssumeRole",
"Principal" : {
"Service" : "ec2.amazonaws.com"
},
"Effect" : "Allow",
"Sid" : ""
}
]
})
}
resource "aws_iam_role_policy" "ec2_policy" {
name = "${local.component}-policy-${var.component_postfix}"
role = aws_iam_role.ec2_role.id
policy = jsonencode(
{
"Version" : "2012-10-17",
"Statement" : [
{
"Effect" : "Allow",
"Action" : [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogStreams",
"logs:PutLogEvents"
],
"Resource" : "arn:aws:logs:${var.aws_region}:${data.aws_caller_identity.current.account_id}:*"
}
]
})
}
# cloudinit configs - user data
locals {
  provision_config = <<-EOF
    #cloud-config
    ${jsonencode({
      write_files = [
        {
          path        = "/home/ec2-user/docker-compose.yml"
          permissions = "0644"
          encoding    = "b64"
          content     = filebase64("../app/docker-compose.yml")
        },
      ]
    })
    }
  EOF
}
data "cloudinit_config" "config" {
gzip = false
base64_encode = true
part {
content_type = "text/cloud-config"
filename = "cloud-config-cred-provision.yaml"
content = local.provision_config
}
part {
content_type = "text/x-shellscript"
filename = "setup_dependencies.sh"
content = <<-EOF
#!/bin/bash
cd /home/ec2-user/
sudo yum update -y
sudo yum install docker -y
sudo service docker start
sudo usermod -a -G docker ec2-user
sudo systemctl enable docker.service
sudo systemctl start docker.service
sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# docker-compose version
touch i_ran.txt
# sudo curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
# sudo unzip awscliv2.zip
# sudo ./aws/install
# sudo docker login --username=user --password-stdin < dockerhub_token.txt
sudo -E TAG=${local.docker_image_tag} docker-compose up
EOF
}
}
resource "aws_iam_instance_profile" "instance_profile" {
name = local.ec2_profile
role = aws_iam_role.ec2_role.name
tags = {
app = "${var.component_prefix}"
Name = "${var.component_prefix}-${var.component_name}-ec2prof-${var.component_postfix}"
env = "${var.component_postfix}"
}
}
resource "aws_launch_template" "machine_template" {
name = local.ec2_launch_template
image_id = var.ami_id
instance_type = var.instance_type
key_name = var.ssh_key
user_data = data.cloudinit_config.config.rendered
vpc_security_group_ids = [aws_security_group.ec2_security_group.id]
metadata_options {
http_tokens = "required"
}
iam_instance_profile {
name = aws_iam_instance_profile.instance_profile.name
}
tag_specifications {
resource_type = "instance"
tags = {
Name = "${local.component}-${var.component_postfix}" # name of the ec2 instance
Source = "Autoscaling"
}
}
monitoring {
enabled = false
}
tags = {
app = "${var.component_prefix}"
env = "${var.component_postfix}"
}
}
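Side note: the Source = "Autoscaling" tag hints at the other thing launch templates are good for. If one instance ever stops being enough, the same template can back an Auto Scaling group; a minimal sketch, with an illustrative resource name and sizes -
resource "aws_autoscaling_group" "app_asg" {
  name                = "${local.component}-asg-${var.component_postfix}"
  min_size            = 1
  max_size            = 2
  desired_capacity    = 1
  vpc_zone_identifier = data.aws_subnets.subnet.ids # default subnets from vpc.tf

  launch_template {
    id      = aws_launch_template.machine_template.id
    version = "$Latest"
  }
}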
ec2.tf
Using the launch template we created before, we can now deploy an EC2 instance. The instance's name will be "test-min-instance"; feel free to change it to suit your needs.
resource "aws_instance" "ec2_instance" {
launch_template {
id = aws_launch_template.machine_template.id
version = "$Latest"
}
iam_instance_profile = aws_iam_instance_profile.instance_profile.id
tags = {
Name = "test-min-instance"
}
lifecycle {
create_before_destroy = true
}
}
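Optionally, an output saves you a trip to the console to look up the IP address later; this is an extra I like to add, not something the setup depends on -
output "instance_public_ip" {
  value = aws_instance.ec2_instance.public_ip
}
After an apply, terraform output instance_public_ip prints the address.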
Now let's create the other .tf files.
main.tf
provider "aws" {
region = var.aws_region
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.30"
}
}
}
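By default the state file stays on your machine, which is fine for this experiment. If the project is ever shared or run from CI, a remote backend is worth having; a minimal sketch using S3 (the bucket name is a placeholder you would create yourself, and the block can sit next to the required_providers one) -
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # placeholder, create your own
    key    = "ec2-compose/terraform.tfstate"
    region = "ap-southeast-1"
  }
}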
vpc.tf
To keep this simple, let's use the default VPC and subnets. It does the job.
# use default ones for testing only
data "aws_vpc" "vpc" {
  default = true
}

data "aws_subnets" "subnet" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.vpc.id]
  }
}
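For a single test instance the subnet lookup is optional, since an instance without a subnet_id lands in one of the default subnets anyway. If you want to pin the placement explicitly, you could pass one of the discovered subnets to the instance; a one-line sketch -
# optional, inside resource "aws_instance" "ec2_instance" in ec2.tf
subnet_id = data.aws_subnets.subnet.ids[0]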
security_group.tf
I am opening port 8080 (for the app) and port 22 (for SSH access) -
resource "aws_security_group" "ec2_security_group" {
name = "${local.component}-ec2-sg-${var.component_postfix}"
description = "Public internet access"
vpc_id = data.aws_vpc.vpc.id
dynamic "ingress" {
for_each = [22, var.ec2_ingress_port]
iterator = port
content {
description = "Allow inbound traffic"
from_port = port.value
to_port = port.value
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
egress {
description = "Allow outbound traffic"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
  tags = {
    Name      = "${local.component}-ec2-sg-${var.component_postfix}"
    Role      = "public"
    ManagedBy = "terraform"
    app       = var.component_prefix
    env       = var.component_postfix
  }
}
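Opening port 22 to 0.0.0.0/0 is fine for a short-lived test, but if the instance sticks around you might want SSH limited to your own address. A sketch of a stricter rule (you would drop 22 from the dynamic for_each above and add this block instead; the CIDR is a documentation placeholder) -
ingress {
  description = "SSH from my IP only"
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["203.0.113.10/32"] # replace with your own IP
}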
locals.tf
Just a file to build resource names in a consistent way. Ideally, during CI/CD, a file is generated with the image tag that was just built; I kept the commented line there in case someone needs to see how that is done -
locals {
  component           = "${var.component_prefix}-${var.component_name}"
  ec2_profile         = "${var.component_prefix}-${var.component_name}-${var.ec2_profile}-${var.component_postfix}"
  ec2_launch_template = "${var.component_prefix}-${var.component_name}-${var.ec2_launch_template}-${var.component_postfix}"
  s3_origin_id        = "${var.component_prefix}-bucket-oid-${var.component_postfix}"

  docker_image_tag = "latest"
  # docker_image_tag = trimspace(file("../scripts/tmp/docker_image_tag.txt"))
}
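If your pipeline passes the tag on the command line rather than writing a file, a variable works just as well; a small sketch (the variable name is my own choice) -
variable "docker_image_tag" {
  default = "latest"
}

# and in locals.tf:
# docker_image_tag = var.docker_image_tag
Then terraform apply -var="docker_image_tag=v1.2.3" sets the tag at deploy time.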
variables.tf
Here we define all the variables and their default values -
variable "aws_region" {
default = "ap-southeast-1"
}
variable "component_prefix" {
default = "test"
}
variable "component_name" {
default = "min"
}
variable "component_postfix" {
default = "dev"
}
# ec2
variable "ec2_profile" {
default = "profile"
}
variable "ec2_launch_template" {
default = "launch-template"
}
variable "ami_id" {
default = "ami-0a481e6d13af82399"
description = "amazon linux 2023"
}
variable "instance_type" {
default = "t2.micro"
}
variable "ssh_key" {
default = "min-test-key"
sensitive = true
}
variable "slow_start_period" {
default = 140
}
variable "ec2_ingress_port" {
default = 8080
}
Deploy
Now, from the Terraform directory, you can just run
terraform init
terraform plan
to review the planned changes and catch any mistakes. Then -
terraform apply
to actually make the changes.
Now go to your AWS console and copy the public IP address of the instance. In a browser, go to "ec2-ip:8080" (for me it was "http://52.221.212.76:8080/") and you should see the app respond with your IP address.
Note: destroy things before the bill piles up.
terraform destroy --auto-approve
For OpenTofu users, the commands are the same -
tofu init
tofu plan
tofu apply
tofu destroy
Conclusion
EC2 is very powerful, and with this setup we can ditch ECS and take full control of our deployment. I hope this helps someone. Find the source code here. Happy coding!