This article walks through a complete implementation of creating one or more EC2 instances with Terraform modules, each pre-configured to mount a shared EFS volume.
EFS (Elastic File System) is a network-based file storage service that allows multiple servers to use the same shared storage.
Pre-requisites
- An AWS access key and secret key must be configured on the machine (or any other form of AWS authentication).
- An S3 bucket to store the tfstate files.
Configure the S3 backend
Create a backend.tf file with the following code:
terraform {
  backend "s3" {
    bucket = "terraform-tfstate" # your bucket name
    key    = "tfstate"
    region = "us-east-1" # your region
  }
}
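As an optional hardening step, the S3 backend can also encrypt the state file at rest and lock it during runs so concurrent applies don't corrupt it. A sketch, assuming you have created a DynamoDB table named `terraform-locks` with a `LockID` string partition key (the table name is my own example):

```hcl
terraform {
  backend "s3" {
    bucket         = "terraform-tfstate"
    key            = "tfstate"
    region         = "us-east-1"
    encrypt        = true              # server-side encrypt the state object in S3
    dynamodb_table = "terraform-locks" # hypothetical table used for state locking
  }
}
```

Run `terraform init` after any backend change so Terraform can reconfigure it.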
EFS (Elastic File System)
Create an efs.tf file.
We create the EFS file system and mount targets in the subnets where we will deploy our EC2 servers. A mount target is essentially an endpoint that provides access to the EFS file system: it allows clients (such as EC2 instances) to mount EFS into their local file systems, making it accessible just like a local directory.
resource "aws_efs_file_system" "efs" {
  creation_token   = "terraform-efs"
  performance_mode = "generalPurpose"

  lifecycle_policy {
    transition_to_ia = "AFTER_90_DAYS"
  }
}

resource "aws_efs_mount_target" "efs-mt" {
  file_system_id  = aws_efs_file_system.efs.id
  subnet_id       = data.aws_subnet.public_subnet.id
  security_groups = [aws_security_group.efs.id]
}
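If you later spread instances across several subnets, EFS needs one mount target per Availability Zone. A minimal sketch, assuming a `local.subnet_ids` list of the subnet IDs you want to cover (not defined in this article):

```hcl
# One mount target per subnet; EFS allows at most one per Availability Zone.
resource "aws_efs_mount_target" "efs_mt_multi" {
  for_each        = toset(local.subnet_ids) # hypothetical list of subnet IDs
  file_system_id  = aws_efs_file_system.efs.id
  subnet_id       = each.value
  security_groups = [aws_security_group.efs.id]
}
```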
- A security group associated with the EFS mount target that allows NFS traffic between the EC2 instances and EFS.
resource "aws_security_group" "efs" {
  name        = "efs-sg"
  description = "Allow inbound efs traffic from ec2"
  vpc_id      = data.aws_vpc.default_vpc.id

  ingress {
    security_groups = [aws_security_group.ec2.id] # the sg attached to the ec2 instances
    from_port       = 2049 # NFS listens on port 2049
    to_port         = 2049
    protocol        = "tcp"
  }

  egress {
    security_groups = [aws_security_group.ec2.id]
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
  }
}
IAM role
- Create a role.tf file.
- This role grants full access to EFS and can only be assumed by an EC2 instance.
- An IAM instance profile is also required so the role can be attached to an EC2 instance.
resource "aws_iam_role" "ec2_role" {
  name = "ec2-role-for-efs-access"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      },
    ]
  })
}

# aws_iam_role_policy_attachment attaches the policy to this role only.
# The older aws_iam_policy_attachment claims exclusive ownership of the
# policy's attachments account-wide and can detach it from other entities.
resource "aws_iam_role_policy_attachment" "efs_policy" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonElasticFileSystemFullAccess"
}

resource "aws_iam_instance_profile" "ec2_instance_profile" {
  name = "ec2-instance-profile-for-efs-access"
  role = aws_iam_role.ec2_role.name
}
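`AmazonElasticFileSystemFullAccess` is broad; for production you may prefer a least-privilege inline policy scoped to the one file system created above. A sketch (the policy name and statement are my own, not part of the original setup):

```hcl
resource "aws_iam_role_policy" "efs_client" {
  name = "efs-client-access" # hypothetical policy name
  role = aws_iam_role.ec2_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "elasticfilesystem:ClientMount", # mount read-only
          "elasticfilesystem:ClientWrite"  # mount read-write
        ]
        Resource = aws_efs_file_system.efs.arn
      }
    ]
  })
}
```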
EC2 Module
Create a folder ec2 with the following files inside.
- A resource.tf that accepts values and creates an EC2 instance:
resource "aws_instance" "web" {
  ami                    = var.ami
  instance_type          = var.instance_type
  subnet_id              = var.subnet_id
  key_name               = var.key_name
  vpc_security_group_ids = [var.security_group_id]
  iam_instance_profile   = var.iam_instance_profile

  # path.module resolves to this module's directory (ec2/), so the
  # template path works regardless of where the project is checked out.
  user_data = templatefile("${path.module}/script.sh", {
    efs_id = var.efs_id
  })

  tags = {
    Name = var.instance_name
  }
}
- A vars.tf file with the variables the module uses:
variable "ami" {
  type = string
}

variable "instance_type" {
  type = string
}

variable "subnet_id" {
  type = string
}

variable "key_name" {
  type = string
}

variable "security_group_id" {
  type = string
}

variable "instance_name" {
  type = string
}

variable "efs_id" {
  type = string
}

variable "iam_instance_profile" {
  type = string
}
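Not required for this walkthrough, but a small outputs.tf inside the module makes the created instance easier to reference from the root configuration (a sketch; the output names are my own):

```hcl
# Expose the instance so the root module can print or reuse these values.
output "instance_id" {
  value = aws_instance.web.id
}

output "public_ip" {
  value = aws_instance.web.public_ip
}
```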
- A shell script (script.sh) that is executed when an instance is provisioned. These commands automate mounting the EFS storage on the /efs directory of the EC2 instance.
#!/bin/bash
# efs_id is interpolated by Terraform's templatefile() before the script runs
EFS_ID="${efs_id}"

# amazon-efs-utils provides the "efs" mount helper referenced in /etc/fstab
sudo yum install -y amazon-efs-utils && \
sudo mkdir -p /efs && \
echo "$EFS_ID:/ /efs efs defaults,_netdev 0 0" | sudo tee -a /etc/fstab && \
sudo mount /efs
Calling the EC2 module
- Create instance.tf.
- Retrieve the existing VPC, subnet, and key pair using Terraform data sources:
data "aws_vpc" "default_vpc" {
  id = "<YOUR VPC ID>"
}

data "aws_subnet" "public_subnet" {
  vpc_id            = data.aws_vpc.default_vpc.id
  availability_zone = "us-east-1d"
}

data "aws_key_pair" "ec2_key_pair" {
  key_name = "test"
}
</data>
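If you are working in the account's default VPC, you can avoid hardcoding the VPC ID entirely (an alternative sketch, not part of the original setup):

```hcl
# Looks up the default VPC in the current region by its "default" flag.
data "aws_vpc" "default_vpc" {
  default = true
}
```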
- Create a security group for the EC2 instances:
resource "aws_security_group" "ec2" {
  name        = "terraform-sg"
  description = "Allow ssh inbound traffic"
  vpc_id      = data.aws_vpc.default_vpc.id

  ingress {
    description = "incoming traffic from anywhere using ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
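Opening port 22 to 0.0.0.0/0 is convenient for a demo, but in practice you would restrict SSH to your own address. A sketch, assuming an `admin_cidr` variable that you define and supply yourself:

```hcl
variable "admin_cidr" {
  type        = string
  description = "CIDR allowed to SSH in, e.g. your public IP as x.x.x.x/32"
}

# Inside resource "aws_security_group" "ec2", replace the open ingress rule:
ingress {
  description = "SSH from the admin CIDR only"
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = [var.admin_cidr]
}
```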
- Call the ec2 module with values for the variables defined in ec2/vars.tf. Update these values as per your use case:
module "ec2" {
  source     = "./ec2"
  depends_on = [aws_efs_mount_target.efs-mt]

  instance_name        = "terraform-ec2-1"
  subnet_id            = data.aws_subnet.public_subnet.id
  ami                  = "ami-012967cc5a8c9f891"
  instance_type        = "t2.micro"
  key_name             = data.aws_key_pair.ec2_key_pair.key_name
  security_group_id    = aws_security_group.ec2.id
  efs_id               = aws_efs_file_system.efs.id
  iam_instance_profile = aws_iam_instance_profile.ec2_instance_profile.name
}
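To create more than one instance, the same module can be instantiated once per name with for_each (a sketch; the instance names below are examples):

```hcl
module "ec2_fleet" {
  source     = "./ec2"
  for_each   = toset(["terraform-ec2-1", "terraform-ec2-2"]) # example names
  depends_on = [aws_efs_mount_target.efs-mt]

  instance_name        = each.key
  subnet_id            = data.aws_subnet.public_subnet.id
  ami                  = "ami-012967cc5a8c9f891"
  instance_type        = "t2.micro"
  key_name             = data.aws_key_pair.ec2_key_pair.key_name
  security_group_id    = aws_security_group.ec2.id
  efs_id               = aws_efs_file_system.efs.id
  iam_instance_profile = aws_iam_instance_profile.ec2_instance_profile.name
}
```

Every instance mounts the same EFS file system, so files written under /efs on one server are visible on the others. Run `terraform init`, `terraform plan`, and `terraform apply` to provision everything.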
Voilà, it's done!
You now have a scalable, reliable, and robust setup that you can customise to create EC2 instances across multiple subnets, all sharing the same network storage through EFS.
Do share your feedback :)