With this project, I’m creating AWS resources—specifically EC2 instances—using GitHub Actions. That’s the core of it, but I’m using advanced methodologies to get it done.
Normally, people just hardcode the AWS Access Key ID and Secret Access Key. This is not good practice! If someone gets hold of these values, your resources can be compromised. To avoid this, I used AWS OpenID Connect (OIDC). With OIDC, there's no need to store long-lived AWS credentials as GitHub Actions secret variables. It's a much more secure "handshake" between GitHub and AWS.
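To make the "handshake" concrete, here is a sketch of what the GitHub Actions side looks like with OIDC (the workflow filename and AWS account ID are placeholders; the role itself is created further down in this post):

```yaml
# .github/workflows/terraform.yml -- sketch; 111122223333 is a placeholder account ID
name: Terraform Deploy

on:
  push:
    branches: [main]

permissions:
  id-token: write   # required so the job can request an OIDC token from GitHub
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Exchanges the short-lived GitHub OIDC token for temporary AWS credentials;
      # no access keys are stored anywhere.
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::111122223333:role/GitHubActionsTerraformRole
          aws-region: ap-south-1
      - run: terraform init && terraform apply -auto-approve
```

The key lines are `permissions: id-token: write` and `role-to-assume`; there is no `aws-access-key-id` or `aws-secret-access-key` anywhere in the workflow.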
terraform {
  required_version = "~> 1.14" # the Terraform version installed on my Mac

  backend "s3" {
    bucket         = "terraform-state-hemantpatil-123456789"
    key            = "sre-project/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "terraform_state_lock"
    encrypt        = true
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6.0" # latest major version of the AWS provider
    }
  }
}
Those who already know Terraform will understand this easily, but I will explain it once again. When we want to create cloud resources, we need to pin two versions: the Terraform version we downloaded on our laptop (required_version), and the version of the AWS provider Terraform should download for us (required_providers).
In this code, the main special thing I did was storing the state file in an S3 bucket with a DynamoDB table as the locking mechanism. First, let's understand what a state file is. The Terraform state file is like a real-time map of your infrastructure: it records what you have already created in the cloud. If you change or move a resource, Terraform notes it down there; the file itself is stored in JSON format. Most of the time, people keep the state file on their local machine (laptop), but that is not good practice, because if you lose this state file, you lose track of the cloud resources you have created. So it is good practice to store it in an S3 bucket.
I also used DynamoDB table locking. The point of this locking system is that only one "terraform apply" can run against the state at a time. The problem it solves is this: imagine two engineers, A and B, run "terraform apply" at the same moment; with locking, only one of them can acquire the lock and proceed. Terraform writes a LockID entry to the table for the running operation, so concurrent runs can never corrupt the state with conflicting writes.
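If you try this yourself, the second apply fails fast with a lock error. A sketch of what that looks like (the lock ID shown by Terraform is a UUID, represented here by a placeholder):

```shell
# Terminal A: starts an apply and acquires the lock entry in DynamoDB
terraform apply

# Terminal B: a second apply started at the same time is rejected immediately:
terraform apply
#   Error: Error acquiring the state lock
#   Lock Info:
#     ID: <LOCK_ID>
#     ...

# If a crashed run ever leaves a stale lock behind, it can be released manually:
terraform force-unlock <LOCK_ID>
```

`force-unlock` should only be used when you are sure no other apply is actually running.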
resource "aws_s3_bucket" "terraform_state" {
  bucket = "terraform-state-hemantpatil-123456789" # name of the S3 bucket where I store my Terraform state file

  lifecycle {
    prevent_destroy = true
  }

  tags = {
    Name = "Terraform state bucket"
  }
}

resource "aws_s3_bucket_versioning" "versioning" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "terraform_lock" {
  name         = "terraform_state_lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name = "Terraform state lock table"
  }
}
This is the code for creating the S3 bucket and DynamoDB table. For the S3 bucket in particular, I added the lifecycle argument prevent_destroy = true because the state stored there is too important to lose; this ensures no one can accidentally destroy that bucket. For backup, I also enabled versioning on the S3 bucket, so every change to the state file keeps the previous version recoverable. In the DynamoDB table, I'm using LockID as the hash key, which is the attribute Terraform uses for its lock entries.
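As a sketch of why versioning matters (assuming the AWS CLI is configured; the version ID is a placeholder): if a bad apply ever corrupts the state, you can list the old versions of the state object and pull one back.

```shell
# List all stored versions of the state file
aws s3api list-object-versions \
  --bucket terraform-state-hemantpatil-123456789 \
  --prefix sre-project/terraform.tfstate

# Download a specific older version to inspect or restore
aws s3api get-object \
  --bucket terraform-state-hemantpatil-123456789 \
  --key sre-project/terraform.tfstate \
  --version-id <VERSION_ID> \
  restored.tfstate
```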
1. THE SCANNER: Go to the internet and get GitHub's current security certificate.
data "tls_certificate" "github" {
  url = "https://token.actions.githubusercontent.com/.well-known/openid-configuration"
}
2. THE TRUST CENTER: Tell AWS IAM to recognize GitHub as a valid login source.
resource "aws_iam_openid_connect_provider" "github" {
  url            = "https://token.actions.githubusercontent.com"
  client_id_list = ["sts.amazonaws.com"] # the standard "audience" for AWS

  # Use the thumbprint we just fetched automatically
  thumbprint_list = [data.tls_certificate.github.certificates[0].sha1_fingerprint]
}
3. THE UNIFORM: Create the Role that GitHub will "assume"
resource "aws_iam_role" "github_actions_role" {
  name = "GitHubActionsTerraformRole"

  # THE TRUST POLICY: the logic that checks the "ID card" at the gate
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRoleWithWebIdentity" # allows login via the GitHub OIDC token
        Effect = "Allow"
        Principal = {
          Federated = aws_iam_openid_connect_provider.github.arn
        }
        Condition = {
          StringLike = {
            # THE LOCK: only workflows from this repo can assume the role
            "token.actions.githubusercontent.com:sub" = "repo:Hemantp1234/sre-project1:*"
          }
        }
      }
    ]
  })
}
4. THE PERMISSIONS: Give this role the "Master Keys" (Admin Access)
resource "aws_iam_role_policy_attachment" "admin_access" {
  role       = aws_iam_role.github_actions_role.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}
This code is huge and it looks scary, but don't worry, I will explain each portion. Let's begin.
Step 1) Getting the TLS certificate. Every secure web app uses TLS (the modern successor to SSL certificates). In order to trust GitHub, AWS needs GitHub's current TLS certificate, so we fetch it here. In the next step, we will see why we need it.
Step 2) Getting the thumbprint (hash value) of that certificate. Here, we are connecting GitHub and AWS so that GitHub Actions can create cloud resources on AWS. For that, AWS has to trust GitHub's OIDC tokens, which we set up with the aws_iam_openid_connect_provider resource. We pass it the SHA-1 thumbprint of the GitHub TLS certificate so there is no need to authenticate manually every time.
Step 3) Creating the role. We need to create a role for GitHub to assume. In the role's trust policy, I set it up so only workflows from my GitHub repo (Hemantp1234/sre-project1) can assume it. I'm attaching "AdministratorAccess" to this role because only then can the code in that repo create any resource it needs in the cloud.
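AdministratorAccess is the easy route for a demo, but it is far more power than an EC2 pipeline needs. As a sketch (the narrower policy choice here is my assumption, not part of the original setup), the attachment could be scoped down like this:

```hcl
# Narrower alternative to AdministratorAccess: EC2 permissions only.
# AmazonEC2FullAccess is an AWS-managed policy; swap in whatever the pipeline truly needs.
resource "aws_iam_role_policy_attachment" "ec2_only" {
  role       = aws_iam_role.github_actions_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
}
```

Note that the pipeline also needs access to the S3 state bucket and the DynamoDB lock table, so in practice a small extra policy for those would be required as well.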
resource "aws_key_pair" "deployer" {
  key_name   = "sre-project-key"
  public_key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA5b4UiDxZlMZt+xFyvfUfpWnx8jSwqmvJeXwoRLddPW hemantpatil@Hemants-MacBook-Air.local"
}
1. First, tell Terraform to find your Default VPC
data "aws_vpc" "default" {
  default = true
}
2. Define the Security Group that allows SSH
resource "aws_security_group" "allow_ssh" {
  name        = "allow_ssh_access"
  description = "Allow SSH inbound traffic"
  vpc_id      = data.aws_vpc.default.id # links it to the default VPC

  ingress {
    description = "SSH from anywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_ssh"
  }
}
This code creates the key pair and security group for my EC2 instance. If we want to get into an EC2 instance with the SSH protocol, we need to create a key pair. Here, key pair means we need both a public key and a private key.
When we create these keys in the AWS Console, AWS generates the pair and we download the private key to our machine. However, if you want to create EC2 instances with Terraform, you first create both the public and private keys on your laptop, and then you upload only the public key to AWS using the aws_key_pair resource. Remember, the private key always stays on your laptop. In the code, I am sending the public key to AWS so the instance can authenticate against the private key on my machine.
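The local key-generation step can be sketched like this (the file path and comment string are examples, not from the original setup):

```shell
# Remove any leftover demo key so ssh-keygen does not prompt to overwrite
rm -f /tmp/sre-demo-key /tmp/sre-demo-key.pub

# Generate an ed25519 key pair; -N "" means no passphrase (use one for real keys)
ssh-keygen -t ed25519 -f /tmp/sre-demo-key -N "" -C "hemantpatil@laptop"

# The private key (/tmp/sre-demo-key) stays on the laptop.
# The public key is the value that goes into aws_key_pair's public_key argument:
cat /tmp/sre-demo-key.pub
```

The single line printed by the final `cat` is exactly what you paste into the Terraform resource.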
In this code, I'm also using the default VPC. I am allowing traffic from the internet for both ingress and egress. Ingress means network traffic coming into the EC2 instance from outside, and egress is the opposite (traffic going out from the instance). I am allowing everyone, which is why the CIDR block is ["0.0.0.0/0"].
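Opening port 22 to 0.0.0.0/0 is fine for a throwaway demo but risky in production. As a sketch, the ingress block could be tightened to a single address (the IP below is a documentation placeholder):

```hcl
ingress {
  description = "SSH only from my IP"
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["203.0.113.25/32"] # placeholder: replace with your own public IP
}
```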
1. Find the latest Amazon Linux 2023 AMI
data "aws_ami" "amazon_linux_2023" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-2023*-kernel-6.1-x86_64"]
  }
}
2. Launch the EC2 Instance
resource "aws_instance" "sre_server" {
  ami           = data.aws_ami.amazon_linux_2023.id
  instance_type = "t3.micro" # free-tier eligible

  # Attach the SSH key from Task 2.1
  key_name = aws_key_pair.deployer.key_name

  # Attach the security group from Task 2.2
  vpc_security_group_ids = [aws_security_group.allow_ssh.id]

  # Tagging is essential for SREs to track resources
  tags = {
    Name        = "SRE-Project-Server"
    Environment = "Dev"
    ManagedBy   = "Terraform"
  }
}
3. Output the Public IP so you can SSH into it later
output "instance_public_ip" {
  value = aws_instance.sre_server.public_ip
}
In this code, I'm just getting a fresh EC2 instance from AWS. I added filters so that I only get the latest Amazon Linux 2023 AMI. After that, I attach the key pair and the security group from the default VPC to the instance. Finally, I added an output so I can see the public IP address in my terminal as soon as the instance is ready.
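Once the apply finishes, connecting looks roughly like this (the key path is an example; Amazon Linux 2023 AMIs use the ec2-user login):

```shell
# Read the IP straight from the Terraform output
terraform output -raw instance_public_ip

# SSH in with the private half of the key pair (path is an example)
ssh -i ~/.ssh/sre-project-key ec2-user@<PUBLIC_IP>
```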