Stephane Noutsa for AWS Community Builders

Terraform & Terragrunt to Deploy a Web Server with Amazon EC2

Disclaimer

  1. A basic understanding of the AWS cloud, Terraform, and Terragrunt is needed to follow along with this tutorial.
  2. This article builds on my previous two articles, so you'll need to go through them first.

In this article, we'll use Terraform & Terragrunt to deploy an Apache web server to an EC2 instance in the public subnet of a VPC. As stated in the disclaimer above, this article builds on my two previous articles, whose links are provided there.

An EC2 (Elastic Compute Cloud) instance is a virtual server in AWS. It lets you run applications and services on the AWS cloud infrastructure and provides computing resources, such as CPU, memory, storage, and networking capabilities, that can be easily configured and scaled to match your requirements. You can think of an EC2 instance as a virtual machine in the cloud.

By the end of this article, we'll be able to access the Apache web server deployed to our EC2 instance by using its public IP address or its public DNS name.
Below are the different components we'll create to reach our objective:

  1. Security group building block
  2. SSH key pair building block
  3. EC2 instance profile building block
  4. EC2 instance building block
  5. Security group module in VPC orchestration Terragrunt code
  6. Web server orchestration Terragrunt code

Our building blocks will have the same common files as described in this article, although the variables.tf files will have additional variables in them.
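
For reference, since our Terragrunt modules will pass AWS credentials as inputs, the common files likely look something like this minimal sketch (the exact content comes from the previous articles, so treat this as an assumption rather than the canonical version):

provider "aws" {
  # Credentials and region are received as input variables (set via Terragrunt inputs)
  region     = var.AWS_REGION
  access_key = var.AWS_ACCESS_KEY_ID
  secret_key = var.AWS_SECRET_ACCESS_KEY
}

variable "AWS_ACCESS_KEY_ID" {
  type      = string
  sensitive = true
}

variable "AWS_SECRET_ACCESS_KEY" {
  type      = string
  sensitive = true
}

variable "AWS_REGION" {
  type = string
}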

1. Security group building block

This building block will be used to set a firewall (security rules) on our EC2 instance. It will allow us to define multiple ingress and egress rules at once for any security group that we create.

main.tf

resource "aws_security_group" "security_group" {
  name        = var.name
  description = var.description
  vpc_id      = var.vpc_id

  # Ingress rules
  dynamic "ingress" {
    for_each = var.ingress_rules
    content {
      from_port   = ingress.value.from_port
      to_port     = ingress.value.to_port
      protocol    = ingress.value.protocol
      cidr_blocks = ingress.value.cidr_blocks
    }
  }

  # Egress rules
  dynamic "egress" {
    for_each = var.egress_rules
    content {
      from_port   = egress.value.from_port
      to_port     = egress.value.to_port
      protocol    = egress.value.protocol
      cidr_blocks = egress.value.cidr_blocks
    }
  }

  tags = merge(var.tags, {
    Name = var.name
  })
}

output "security_group_id" {
  value = aws_security_group.security_group.id
}

variables.tf (additional variables)

variable "vpc_id" {
  type = string
}

variable "name" {
  type = string
}

variable "description" {
  type = string
}

variable "ingress_rules" {
  type = list(object({
    protocol    = string
    from_port   = number
    to_port     = number
    cidr_blocks = list(string)
  }))
  default = []
}

variable "egress_rules" {
  type = list(object({
    protocol    = string
    from_port   = number
    to_port     = number
    cidr_blocks = list(string)
  }))
  default = []
}

variable "tags" {
  type = map(string)
}

2. SSH key pair building block

This building block will allow us to create key pairs that we'll use to SSH into our EC2 instance. We'll first need to use OpenSSH to manually create a key pair, then provide the public key as an input to this building block (in the corresponding Terragrunt module).

This article shows you how to create a key pair on macOS and Linux:
https://docs.digitalocean.com/products/droplets/how-to/add-ssh-keys/create-with-openssh/
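
For example, a key pair can be generated with OpenSSH like this (the file name apache-server is just an illustrative choice):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/apache-server -C "apache-server"

This creates a private key at ~/.ssh/apache-server and a public key at ~/.ssh/apache-server.pub; the content of the .pub file is what we'll pass as the public_key input.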

variables.tf (additional variables)

variable "key_name" {
  type = string
}

variable "public_key" {
  type = string
}

variable "tags" {
  type = map(string)
}

main.tf

resource "aws_key_pair" "ssh" {
  key_name   = var.key_name
  public_key = var.public_key

  tags = merge(var.tags, {
    Name = var.key_name
  })
}

output "key_name" {
  value = aws_key_pair.ssh.key_name
}

output "key_pair_id" {
  value = aws_key_pair.ssh.key_pair_id
}

output "key_pair_arn" {
  value = aws_key_pair.ssh.arn
}

NB: We actually don't need this because our EC2 instance profile's role will allow our EC2 instance to be managed by Systems Manager (an AWS service), which will allow us to log into our instance using Session Manager (a Systems Manager feature) without needing an SSH key pair.
(This key pair will be used in the next article where Ansible gets involved, so stay alert for that one 😉)

3. EC2 instance profile building block

An EC2 instance profile in AWS is a container for an IAM (Identity and Access Management) role that you can assign to an EC2 instance. It provides the necessary permissions for the instance to access other AWS services and resources securely.

For the purpose of this article, our instance profile will be assigned a role with permissions to be managed by Systems Manager.

variables.tf (additional variables)

variable "iam_policy_statements" {
  type = list(object({
    sid    = string
    effect = string
    principals = object({
      type        = optional(string)
      identifiers = list(string)
    })
    actions   = list(string)
    resources = list(string)
  }))
}

variable "iam_role_name" {
  type = string
}

variable "iam_role_description" {
  type = string
}

variable "iam_role_path" {
  type = string
}

variable "other_policy_arns" {
  type = list(string)
}

variable "instance_profile_name" {
  type = string
}

variable "tags" {
  type = map(string)
}

main.tf

# IAM Policy
data "aws_iam_policy_document" "iam_policy" {
  dynamic "statement" {
    for_each = { for statement in var.iam_policy_statements : statement.sid => statement }

    content {
      sid    = statement.value.sid
      effect = statement.value.effect

      principals {
        type        = statement.value.principals.type
        identifiers = statement.value.principals.identifiers
      }

      actions   = statement.value.actions
      resources = statement.value.resources
    }
  }
}

# IAM Role
resource "aws_iam_role" "iam_role" {
  name               = var.iam_role_name
  description        = var.iam_role_description
  path               = var.iam_role_path
  assume_role_policy = data.aws_iam_policy_document.iam_policy.json

  tags = {
    Name = var.iam_role_name
  }
}

# Attach more policies to role
resource "aws_iam_role_policy_attachment" "other_policies" {
  for_each = toset(var.other_policy_arns)

  role       = aws_iam_role.iam_role.name
  policy_arn = each.value
}

# EC2 Instance Profile
resource "aws_iam_instance_profile" "instance_profile" {
  name = var.instance_profile_name
  role = aws_iam_role.iam_role.name

  tags = merge(var.tags, {
    Name = var.instance_profile_name
  })
}

output "instance_profile_name" {
  value = aws_iam_instance_profile.instance_profile.name
}

4. EC2 instance building block

This building block will create the virtual machine where the Apache web server will be deployed.

variables.tf (additional variables)

variable "most_recent_ami" {
  type = bool
}

variable "owners" {
  type = list(string)
}

variable "ami_name_filter" {
  type = string
}

variable "ami_values_filter" {
  type = list(string)
}

variable "instance_profile_name" {
  type = string
}

variable "instance_type" {
  type = string
}

variable "subnet_id" {
  type = string
}

variable "associate_public_ip_address" {
  type = bool
}

variable "vpc_security_group_ids" {
  type = list(string)
}

variable "has_user_data" {
  type = bool
}

variable "user_data_path" {
  type = string
}

variable "user_data_replace_on_change" {
  type = bool
}

variable "instance_name" {
  type = string
}

variable "uses_ssh" {
  type = bool
}

variable "key_name" {
  type = string
}

variable "tags" {
  type = map(string)
}

main.tf

# AMI
data "aws_ami" "ami" {
  most_recent = var.most_recent_ami
  owners      = var.owners

  filter {
    name   = var.ami_name_filter
    values = var.ami_values_filter
  }
}

# EC2 Instance
resource "aws_instance" "instance" {
  ami                         = data.aws_ami.ami.id
  associate_public_ip_address = var.associate_public_ip_address
  iam_instance_profile        = var.instance_profile_name
  instance_type               = var.instance_type
  key_name                    = var.uses_ssh ? var.key_name : null
  subnet_id                   = var.subnet_id
  user_data                   = var.has_user_data ? file(var.user_data_path) : null
  user_data_replace_on_change = var.has_user_data ? var.user_data_replace_on_change : null
  vpc_security_group_ids      = var.vpc_security_group_ids

  tags = merge(var.tags, {
    Name = var.instance_name
  })
}

output "instance_id" {
  value = aws_instance.instance.id
}

output "instance_arn" {
  value = aws_instance.instance.arn
}

output "instance_private_ip" {
  value = aws_instance.instance.private_ip
}

output "instance_public_ip" {
  value = aws_instance.instance.public_ip
}

output "instance_public_dns" {
  value = aws_instance.instance.public_dns
}

5. Security group module in VPC orchestration Terragrunt code

In the vpc-live/dev/ directory that we created in the previous article, we'll create a new directory called security-group that will contain a terragrunt.hcl file.

Directory structure

vpc-live/
  dev/
    ... (previous modules)
    security-group/
      terragrunt.hcl
  terragrunt.hcl

vpc-live/dev/security-group/terragrunt.hcl

include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "<path_to_local_security_group_building_block_or_git_repo_url>"
}

dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  AWS_ACCESS_KEY_ID = "<your_aws_access_key_id>"
  AWS_SECRET_ACCESS_KEY = "<your_aws_secret_access_key>"
  AWS_REGION = "<your_aws_region>"
  vpc_id = dependency.vpc.outputs.vpc_id
  name = "dev-sg"
  description = "Allow HTTP (80), HTTPS (443) and SSH (22)"
  ingress_rules = [
    {
      protocol    = "tcp"
      from_port   = 80
      to_port     = 80
      cidr_blocks = ["0.0.0.0/0"]
    },
    {
      protocol    = "tcp"
      from_port   = 443
      to_port     = 443
      cidr_blocks = ["0.0.0.0/0"]
    },
    {
      protocol    = "tcp"
      from_port   = 22
      to_port     = 22
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]
  egress_rules = [
    {
      protocol    = "tcp"
      from_port   = 80
      to_port     = 80
      cidr_blocks = ["0.0.0.0/0"]
    },
    {
      protocol    = "tcp"
      from_port   = 443
      to_port     = 443
      cidr_blocks = ["0.0.0.0/0"]
    },
    {
      protocol    = "tcp"
      from_port   = 22
      to_port     = 22
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]
  tags = {}
}

This module will create a security group that allows internet traffic on ports 80 (HTTP), 443 (HTTPS), and 22 (SSH).

After adding this, we can run the command below from the vpc-live/dev/ directory to create the security group (enter y when prompted to confirm the creation of the resource).
Be sure to set the appropriate values for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION, and DO NOT commit these values to a Git repository.

terragrunt run-all apply

Below is part of the output of the above command which shows that the security group has been created:

(Screenshot: Terragrunt output showing the security group creation)
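
If you'd also like to confirm from the command line, the AWS CLI can look the security group up by name (assuming you kept the name dev-sg):

aws ec2 describe-security-groups --filters Name=group-name,Values=dev-sg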

6. Web server orchestration Terragrunt code

To proceed, we'll first need to copy the ID of one public subnet in our VPC and the ID of our newly created security group. Don't use the IDs shown in the screenshots, as those won't work for you.

Our Terragrunt code will have the following directory structure:

ec2-live/
  dev/
    apache-server/
      ec2-key-pair/
        terragrunt.hcl
      ec2-web-server/
        terragrunt.hcl
        user-data.sh
      ssm-instance-profile/
        terragrunt.hcl
      terragrunt.hcl

The content of the terragrunt.hcl files will be shared below.
Notice that the ec2-web-server subdirectory contains a script (user-data.sh). This script will deploy the Apache web server to our EC2 instance, as we'll see further down.

ec2-live/dev/apache-server/terragrunt.hcl

generate "backend" {
  path      = "backend.tf"
  if_exists = "overwrite_terragrunt"
  contents = <<EOF
terraform {
  backend "s3" {
    bucket         = "<s3_bucket_name>"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
  }
}
EOF
}

The above file, which is the root Terragrunt file, defines the backend configuration and stores the Terraform state file in an S3 bucket that you must have already created manually (its name replaces the placeholder in the configuration above).
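
If you haven't created that bucket yet, one way to do it is with the AWS CLI (bucket names are globally unique, so replace the placeholder with your own name):

aws s3 mb s3://<s3_bucket_name> --region us-east-1
aws s3api put-bucket-versioning --bucket <s3_bucket_name> --versioning-configuration Status=Enabled

Enabling versioning is optional but recommended for state buckets, since it lets you recover previous versions of the state file.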

ec2-live/dev/apache-server/ec2-key-pair/terragrunt.hcl

include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "<path_to_local_key_pair_building_block_or_git_repo_url>"
}

inputs = {
  AWS_ACCESS_KEY_ID = "<your_aws_access_key_id>"
  AWS_SECRET_ACCESS_KEY = "<your_aws_secret_access_key>"
  AWS_REGION = "<your_aws_region>"
  key_name = "Apache server SSH key pair"
  public_key = "<your_ssh_public_key>"
  tags = {}
}

This module will create the key pair that will be used to SSH into the EC2 instance.
Be sure to replace the source value in the terraform block with the path to your local building block or the URL of the Git repo hosting the building block's code.
Also, replace the public_key value in the inputs section with the content of your SSH public key.
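
As an illustration, a Git-hosted source uses Terraform's generic Git syntax (the repository URL and subdirectory below are placeholders):

terraform {
  source = "git::https://github.com/<your_username>/<your_repo>.git//ssh-key-pair?ref=main"
}

The double slash separates the repository from the subdirectory containing the building block, and ref pins a branch or tag.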

ec2-live/dev/apache-server/ssm-instance-profile/terragrunt.hcl

include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "<path_to_local_ec2_instance_profile_building_block_or_git_repo_url>"
}

inputs = {
  AWS_ACCESS_KEY_ID = "<your_aws_access_key_id>"
  AWS_SECRET_ACCESS_KEY = "<your_aws_secret_access_key>"
  AWS_REGION = "<your_aws_region>"
  iam_policy_statements = [
    {
      sid = "AllowEC2AssumeRole"
      effect = "Allow"
      principals = {
        type        = "Service"
        identifiers = ["ec2.amazonaws.com"]
      }
      actions   = ["sts:AssumeRole"]
      resources = []
    }
  ]
  iam_role_name = "EC2RoleForSSM"
  iam_role_description = "Allows EC2 instance to be managed by Systems Manager"
  iam_role_path = "/"
  other_policy_arns = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"]
  instance_profile_name = "EC2InstanceProfileForSSM"
  tags = {
    Name = "dev-ssm-instance-profile"
  }
}

This module uses an AWS-managed IAM policy (AmazonSSMManagedInstanceCore) that grants Systems Manager the permissions it needs to manage an EC2 instance. The instance profile building block attaches this policy to an IAM role (named EC2RoleForSSM here), which it then attaches to the created instance profile (named EC2InstanceProfileForSSM here).
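
For reference, the assume-role (trust) policy that the aws_iam_policy_document data source generates from the iam_policy_statements input above should look roughly like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEC2AssumeRole",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}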

ec2-live/dev/apache-server/ec2-web-server/terragrunt.hcl

include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "<path_to_local_ec2_instance_building_block_or_git_repo_url>"
}

dependency "key-pair" {
  config_path = "../ec2-key-pair" # Path to Terragrunt ec2-key-pair module
}

dependency "instance-profile" {
  config_path = "../ssm-instance-profile" # Path to Terragrunt ssm-instance-profile module
}

inputs = {
  AWS_ACCESS_KEY_ID = "<your_aws_access_key_id>"
  AWS_SECRET_ACCESS_KEY = "<your_aws_secret_access_key>"
  AWS_REGION = "<your_aws_region>"
  most_recent_ami = true
  owners = ["amazon"]
  ami_name_filter = "name"
  ami_values_filter = ["al2023-ami-2023.*-x86_64"]
  instance_profile_name = dependency.instance-profile.outputs.instance_profile_name
  instance_type = "t3.micro"
  subnet_id = "<copied_subnet_id>"
  associate_public_ip_address = true # Set to true so that our instance can be assigned a public IP address
  vpc_security_group_ids = ["<copied_security_group_id>"]
  has_user_data = true
  user_data_path = "user-data.sh"
  user_data_replace_on_change = true
  instance_name = "Apache Server"
  uses_ssh = true # Set to true so that the building block knows to use the key_name input below
  key_name = dependency.key-pair.outputs.key_name
  tags = {}
}

This module depends on both the EC2 key pair and EC2 instance profile modules, as indicated by the dependency blocks (and by the values of the instance_profile_name and key_name inputs).
It will use the most recent Amazon Linux 2023 AMI (per the most_recent_ami, owners, ami_name_filter, and ami_values_filter inputs) and create a t3.micro instance in a public subnet of our VPC (paste the subnet ID we copied earlier as the value of the subnet_id input), protected by the security group we created above (paste its ID into the vpc_security_group_ids list).

The user_data_path input expects to receive the path to a script that will be executed only when the EC2 instance is first created. This script is the user-data.sh file that will contain instructions to deploy an Apache web server to our EC2 instance as shown below:

ec2-live/dev/apache-server/ec2-web-server/user-data.sh

#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "<h1>Hello World from $(hostname -f)</h1>" > /var/www/html/index.html

This script does the following:
a) Updates the Amazon Linux 2023 system (yum update -y)
b) Installs httpd which is the Apache web server (yum install -y httpd)
c) Starts the Apache service (systemctl start httpd)
d) Ensures the Apache service is started whenever the server restarts (systemctl enable httpd)
e) Writes the string "<h1>Hello World from $(hostname -f)</h1>" into the index.html file located in the /var/www/html/ directory. This will make the server display the string as a heading, with $(hostname -f) replaced by the hostname of the EC2 instance.

Putting it all together

Our Terraform and Terragrunt configuration is now ready, so we can create the resources using the following Terragrunt command from within the ec2-live/dev/apache-server/ directory. Enter y when prompted to confirm the creation of the resources.

terragrunt run-all apply

The last output lines following the successful execution of this command should look like this:

(Screenshot: Terragrunt apply output)

From the list of outputs, we are most concerned with instance_public_dns and instance_public_ip, whose values allow us to access our web server from a browser.
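
If you no longer have the apply output handy, you can retrieve these values later by running terragrunt output from the ec2-live/dev/apache-server/ec2-web-server/ directory, then test the server from a terminal (the IP address below is a placeholder):

terragrunt output instance_public_ip
curl http://<instance_public_ip>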

(Screenshot: Accessing the web server via its public IP address)

(Screenshot: Accessing the web server via its public DNS name)

As you can see, both the public IP address and the public DNS name return the same result when accessed from a browser.
You can also see that the message matches the one set in the user data script, with $(hostname -f) replaced by the hostname of the created EC2 instance.

Bonus - Systems Manager

We can now access the AWS management console to check if our EC2 instance is managed by Systems Manager. To do this, we need to:

  1. Log in to the AWS management console.
  2. Search for the Systems Manager service in the search bar and select the Systems Manager option presented. This takes us to the Systems Manager console, where we can scroll down the menu and select Session Manager (under the Node Management section).
  3. Click the Start session button; we'll be presented with a list of target instances.
  4. Our instance will be in this list, so select its radio button and click the Start session button at the bottom to log in to the instance.
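
Alternatively, if you have the AWS CLI and the Session Manager plugin installed, you can start a session directly from your terminal (replace the placeholder with your instance ID, which is also available as the instance_id output):

aws ssm start-session --target <instance_id>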

Voilà!

Conclusion

Now that we can easily deploy an Apache web server to an EC2 instance using Terraform and Terragrunt, we should delete the resources we created to avoid incurring unexpected costs. We should do this from both the ec2-live/dev/apache-server and vpc-live/dev directories (destroy the EC2 resources first, since they depend on the VPC's subnet and security group) using the command below. Enter y when prompted to confirm the destruction of these resources.

terragrunt run-all destroy

In the next article, we'll create a second instance in a private subnet, and see how to use Ansible, a configuration management tool, to manage the configuration of our public and private instances.

Until then, happy coding!
