Amarachi Iheanacho

Provision an AWS EC2 jumphost using Terraform and GitHub Actions

Modern application development demands a level of agility that traditional architectures simply can't support. Today's systems must enable teams to deploy changes quickly, safely, and efficiently. They must scale up or down in response to demand. And much of the work of DevOps involves figuring out how to achieve all of this in a repeatable and reliable way.

This article kicks off a four-part series on building a secure and scalable DevSecOps pipeline for deploying a quiz application to an Amazon Elastic Kubernetes Service (EKS) cluster. Throughout the series, you’ll leverage Infrastructure as Code (IaC) with Terraform, implement CI/CD using GitHub Actions, adopt GitOps practices through ArgoCD, and harness the scalability of Amazon EKS.

In this first part, we'll focus on setting up a secure EC2 jumphost. This jumphost will act as a controlled, auditable access point to the EKS cluster you’ll deploy later in the series. Rather than exposing your entire cluster to the internet, the jumphost provides a secure gateway for administrative access.

You’ll use Terraform to define the jumphost infrastructure, including compute resources, networking, and security rules. To make the setup fully automated, you’ll integrate GitHub Actions so that any change to the Terraform configuration triggers a workflow. This ensures that your infrastructure remains consistent, version-controlled, and easily reproducible.

By the end of this guide, you’ll have hands-on experience provisioning a hardened EC2 jumphost on AWS, entirely automated through Terraform and GitHub Actions, laying the foundation for a secure and scalable DevSecOps pipeline.

What this series will contain

This four-part series will walk you through building a modern DevSecOps pipeline for a containerized quiz application. Here's what each part will cover:

  1. Provision a secure EC2 jumphost using Terraform and GitHub Actions (this article).
  2. Build a CI/CD pipeline that tests your application and pushes Docker images to Amazon ECR.
  3. Set up an Amazon EKS cluster and deploy the application with ArgoCD.
  4. Add monitoring and observability using Prometheus and Grafana.

Clone the quiz application

This series builds on the quiz application. You can clone the repository here: Quiz application GitHub

Check out this GitHub repository to view the complete code for the entire series: Three-tier DevSecOps Project GitHub

Prerequisites

To get the most out of this article, you must have the following:

  • A basic understanding of Git.
  • A GitHub account. If you don’t have one, you can create one here.
  • An AWS account. If you don’t have one, you can sign up for a free account here.
  • A basic understanding of GitHub Actions and Terraform.

Project structure

In this article, you will create a jumphost, which you'll use to access an EKS cluster in later parts of the series.

After cloning and pulling the quiz project from GitHub, add the following folders and files to the project structure:

  • terraform folder: Inside this folder, create three files, main.tf, outputs.tf, and variables.tf, to store all the Terraform configurations for the project.
  • scripts folder: This folder should contain the jumphost_init.sh file, which will include commands for installing the necessary packages on the jumphost.
  • .github/workflows folder: Create a terraform.yaml file in this folder to define the GitHub Actions configuration for the Terraform pipeline.

Once you've added these files, your project structure should look like this:

.
├── .github/
│   └── workflows/
│       └── terraform.yaml
├── backend
├── docker-compose.yaml
├── frontend
├── scripts
│   └── jumphost_init.sh
└── terraform
    ├── main.tf
    ├── outputs.tf
    └── variables.tf


Pre-project setup checklist

You need to do the following before diving into this project:

  • Create an AWS access key and secret access key.
  • Set up an S3 bucket.
  • Generate a public SSH key.
  • Add your credentials (AWS access key, secret access key, and S3 bucket) to your GitHub Actions secrets.

Create an AWS access key and secret access key

These keys provide Terraform and GitHub Actions with programmatic access to your AWS account, allowing them to read or write data to your account.

Follow these steps to create your AWS keys:

  1. Sign in to your AWS account, either as an IAM user or a root user.
  2. In the AWS console, search for IAM and select it.

  3. Click Users in the sidebar, and select the Create user button.

  4. Enter a username (e.g., jumphost-terraform) and click Next.

  5. Choose the Attach policies directly option.

  6. Search for the AdministratorAccess policy and select it. (Note: This is just for the demo; in a real-world scenario, practice least privilege access.)

  7. Click Next, review your user settings, and click Create user.

  8. You should now see your user listed. Select the newly created user.

  9. Navigate to the Security credentials tab.

  10. Click Create access key in the Access key section.

  11. Choose Third-party service, check the confirmation box, and click Next.

  12. Optionally, add a description for the key.

  13. Click Create access key.

  14. Copy your Access key and Secret access key, and store them in a secure location (you won’t be able to view them again after leaving this page).

  15. Click Done.

With that, you have created your Access and Secret access keys.
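
If you'd also like to run Terraform from your own machine before the pipeline exists, one option (not required for this tutorial) is to export the keys as environment variables that the AWS CLI and the Terraform AWS provider pick up automatically. The values below are placeholders for the keys you just created:

export AWS_ACCESS_KEY_ID="<your access key>"         # placeholder: paste the access key you just created
export AWS_SECRET_ACCESS_KEY="<your secret key>"     # placeholder: paste the secret access key
export AWS_DEFAULT_REGION="us-east-1"                # the region used throughout this tutorial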

Create an S3 bucket

Next, create an S3 bucket to store Terraform's state files, which represent the current state of your infrastructure. By default, Terraform stores these files locally, but using S3 ensures that the state is centralized and accessible. Follow these steps to create your S3 bucket:

  1. In the AWS console, search for S3 and select it.
  2. Click Create bucket under the General purpose buckets section.

  3. Choose a globally unique name for your bucket.

  4. Click Create bucket.
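
If you prefer the command line over the console, roughly the same result can be achieved with the AWS CLI; the bucket name below is a placeholder and must be globally unique:

aws s3api create-bucket --bucket <your-unique-bucket-name> --region us-east-1

# Optional but useful for Terraform state: keep previous versions of the state file
aws s3api put-bucket-versioning --bucket <your-unique-bucket-name> --versioning-configuration Status=Enabled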

Create an SSH key

You’ll need an SSH key to securely access your EC2 instance. Here's how to create one:

  1. Open your terminal and create the ~/.ssh directory if it doesn’t already exist, then navigate into it:
mkdir -p ~/.ssh  # Create the directory if it doesn't exist (no-op if it does)
cd ~/.ssh        # Navigate into it

2. Run the following command to generate your SSH key:

ssh-keygen -t ed25519

3. When prompted for the file location to save the key, enter a name for the key (e.g., key) and press Enter.

4. Press Enter for the rest of the prompts to accept the defaults. Your SSH key is now generated.

5. To view and copy your public key, run:

cat <name of the key>.pub

For example, if you had entered key when prompted, your command would be:

cat key.pub

Copy and save the contents of this file; it is your public key, and you'll need it to provision and later SSH into your instance.

Add the credentials to your GitHub secrets

Now that you have your AWS access key, secret access key, and S3 bucket, you need to add them as secrets in GitHub Actions for your CI/CD pipeline.
Follow these steps to add your credentials to GitHub Secrets:

  1. Log in to your GitHub account.
  2. Navigate to your cloned repository.
  3. Click the Settings tab.
  4. In the left sidebar, click Secrets and variables, then select Actions.

  5. Click New repository secret.

  6. Add each of the following secrets with the corresponding values:

    • Name: AWS_ACCESS_KEY_ID | Secret: the AWS access key ID you created earlier
    • Name: AWS_SECRET_ACCESS_KEY | Secret: the AWS secret access key you created earlier
    • Name: BUCKET_TF | Secret: the name of your S3 bucket
    • Name: AWS_REGION | Secret: the AWS region you are deploying to (e.g., us-east-1), referenced later by the workflow
  7. After entering each secret, click Add secret.
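
If you use the GitHub CLI, the same secrets can be added from your terminal instead of the web UI. Each command prompts you to paste the corresponding value, and <your-username>/<your-repo> is a placeholder for your repository:

gh secret set AWS_ACCESS_KEY_ID --repo <your-username>/<your-repo>
gh secret set AWS_SECRET_ACCESS_KEY --repo <your-username>/<your-repo>
gh secret set BUCKET_TF --repo <your-username>/<your-repo>
gh secret set AWS_REGION --repo <your-username>/<your-repo>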

Provisioning the jumphost with Terraform

To provision the jumphost, we’ll use Terraform with a modular and organized setup. The configuration is divided into three core files: main.tf, variables.tf, and outputs.tf, each serving a specific purpose:

  • main.tf defines the infrastructure resources. In this case, it describes the EC2 instance that Terraform will provision.
  • variables.tf declares reusable input variables, allowing you to customize the configuration in main.tf easily. This makes your setup more flexible and maintainable.
  • outputs.tf specifies the output values you want Terraform to return after provisioning, such as public IPs or instance IDs.

In the next step, you’ll copy the relevant code snippets into each of these files.

Define your Terraform variables in your variables.tf file

Create your Terraform variables by copying and pasting these variables in your variables.tf file, replacing the <your public key> with the public key you generated earlier:

variable "region" {
 default = "us-east-1"
}

variable "vpc_cidr" {
 default = "10.0.0.0/16"
}

variable "instance_type" {
 default = "t3.micro"
}

variable "ami_name_filter" {
 default = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
}

variable "allowed_ssh_cidr" {
 default = "0.0.0.0/0"
}

variable "key_name" {
 default = "jumphost_key"
}

variable "public_key" {
 description = "Your SSH public key"
 default     = "<your public key>" # Add your public key here
}

variable "environment" {
 default = "DevOpsProject"
}

variable "owner" {
 default = "Amarachi"
}


Here’s a breakdown of what each variable does:

  • region: Specifies the AWS region where resources will be deployed. Default is us-east-1.
  • vpc_cidr: Sets the IP range for the Virtual Private Cloud (VPC). Default is 10.0.0.0/16.
  • instance_type: Defines the EC2 instance type. We’re using t3.micro for a cost-effective option suitable for lightweight tasks.
  • ami_name_filter: Filters the Amazon Machine Image (AMI) for Ubuntu 22.04 (Jammy). Terraform will pick the latest version matching this pattern.
  • allowed_ssh_cidr: Determines which IP ranges are allowed to SSH into the instance. The default (0.0.0.0/0) allows access from anywhere—fine for testing, but should be tightened for production environments.
  • key_name: The name of your SSH key pair used to access the EC2 instance.
  • public_key: Defines a default SSH public key value that will be used to provision access to the jumphost.
  • environment: A tag to help identify which environment (e.g., Dev, Staging, Prod) the resources belong to.
  • owner: Tags resources with the owner’s name for accountability and resource tracking.
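
If you'd rather not paste your public key directly into variables.tf, note that Terraform also reads values from TF_VAR_-prefixed environment variables or -var flags, so any of these defaults can be overridden at plan or apply time. A minimal sketch, assuming the key pair you generated earlier is named key:

export TF_VAR_public_key="$(cat ~/.ssh/key.pub)"    # overrides the public_key default without editing the file

terraform plan -var="owner=YourName" -var="environment=Staging"   # override other defaults inline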

Next, you’ll continue by configuring the infrastructure in main.tf and defining outputs in outputs.tf.

Set your EC2 instance configuration using Terraform

Paste the following code into your main.tf file, replacing <name of the bucket> with the actual name of your bucket. This will define the properties of your EC2 instance.
https://gist.github.com/Iheanacho-ai/ff01f2e0ff30e30b95a6a4b5576c73d8

Here is a structured breakdown of what each Terraform block in the main.tf file does (if you're already familiar with Terraform, feel free to skip ahead to the Define your outputs in the outputs.tf file section):

Terraform block

terraform {
 required_providers {
   aws = {
     source  = "hashicorp/aws"
     version = "~> 5.0"
   }
 }

 backend "s3" {
   bucket = "amara-jumphost"
   key    = "terraform.tfstate"
   region = "us-east-1"
 }
}


The terraform block above defines two things:

  • Provider Configuration: The required_providers block specifies that the AWS provider will be used.
  • Remote State Backend: The backend "s3" block stores the current Terraform state file in an S3 bucket. This is the same bucket you created at the start of the project.

AWS provider

# This is the AWS provider
provider "aws" {
 region = var.region
}



The provider "aws" block configures the AWS provider to operate in the region specified by the region variable in your variables.tf file.

Ubuntu AMI

# Get latest Ubuntu AMI
data "aws_ami" "ubuntu" {
 most_recent = true

 filter {
   name   = "name"
   values = [var.ami_name_filter]
 }

 filter {
   name   = "virtualization-type"
   values = ["hvm"]
 }

 owners = ["099720109477"]
}




The code block above dynamically fetches the latest Ubuntu AMI available in AWS. Here is a breakdown of the data block:

  • data "aws_ami" "ubuntu": Declares a data source block to look up an AWS AMI named "ubuntu".

    • most_recent = true: Ensures that Terraform selects the most recently created AMI from all the results from the filters.
  • filter block #1 — Name filter:

    • name = "name": Filters AMIs by name.
    • values = [var.ami_name_filter]: Uses the ami_name_filter variable defined in the variables.tf file to match AMI names for the filter.
  • filter block #2 — Virtualization type filter:

    • name = "virtualization-type": Filters by virtualization type.
    • values = ["hvm"]: Ensures only Hardware Virtual Machine (HVM) AMIs are returned, which is the standard for most modern EC2 instances.
  • owners = ["099720109477"]: Limits the search to AMIs owned by Canonical, the publisher of Ubuntu. This ID is Canonical’s official AWS account.

Networking Setup

This setup defines the networking infrastructure in which your EC2 instance will live. It includes the following key components:

  • Virtual Private Cloud (VPC): An isolated, configurable network that provides foundational connectivity for your AWS resources.
  • Subnet: A segmented IP range within the VPC that dictates availability zones and routing for your EC2 instance.
  • Internet Gateway: Allows the EC2 instance to send and receive traffic from the internet by routing it in and out of the VPC.

Let's break down how each of these components is defined in Terraform.

  • VPC
resource "aws_vpc" "jumphost_vpc" {
 cidr_block           = var.vpc_cidr
 enable_dns_hostnames = true
 enable_dns_support   = true
}

This resource defines a dedicated VPC named jumphost_vpc with the CIDR block specified in your variables.tf file. DNS hostnames and DNS support are enabled to allow for easier internal and external name resolution.

  • Subnet
resource "aws_subnet" "jumphost_subnet" {
 cidr_block = cidrsubnet(aws_vpc.jumphost_vpc.cidr_block, 4, 1)
 vpc_id     = aws_vpc.jumphost_vpc.id
}




This Terraform block creates a subnet named jumphost_subnet within the jumphost_vpc VPC. It calculates the subnet’s CIDR block by dividing the VPC’s CIDR range into 16 smaller subnets (by increasing the subnet mask by 4 bits) and selects the second subnet (index 1). The subnet is associated with the VPC by referencing its ID.

Note: Terraform automatically handles the creation and referencing of these resource IDs.
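
If you want to see exactly which range that expression produces, you can evaluate it in terraform console (run from an initialized Terraform directory). With the default vpc_cidr of 10.0.0.0/16, it resolves to 10.0.16.0/20:

$ terraform console
> cidrsubnet("10.0.0.0/16", 4, 1)
"10.0.16.0/20"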

  • Internet Gateway and Routing
resource "aws_internet_gateway" "jumphost_igw" {
 vpc_id = aws_vpc.jumphost_vpc.id

 tags = {
   Name = "main"
 }
}

resource "aws_route_table" "jumphost_route_table" {
 vpc_id = aws_vpc.jumphost_vpc.id

 route {
   cidr_block = "0.0.0.0/0"
   gateway_id = aws_internet_gateway.jumphost_igw.id
 }
}

resource "aws_route_table_association" "jumphost_route_table_assoc" {
 subnet_id      = aws_subnet.jumphost_subnet.id
 route_table_id = aws_route_table.jumphost_route_table.id
}



This Terraform configuration enables internet access for a subnet using the following AWS networking components:

  • aws_internet_gateway.jumphost_igw: Creates an Internet Gateway and attaches it to the specified VPC using vpc_id = aws_vpc.jumphost_vpc.id. This allows the VPC to communicate with the internet.

  • aws_route_table.jumphost_route_table: Creates a route table for the VPC. It includes a route that directs all outbound traffic (0.0.0.0/0) through the Internet Gateway created earlier.

  • aws_route_table_association.jumphost_route_table_assoc: Associates the route table with a specific subnet, enabling instances within that subnet to use the routing rules, i.e., to access the internet via the Internet Gateway.

 subnet_id      = aws_subnet.jumphost_subnet.id
 route_table_id = aws_route_table.jumphost_route_table.id

Security Group

# Security Group
resource "aws_security_group" "jumphost_SG" {
 name   = "jumphost_SG"
 vpc_id = aws_vpc.jumphost_vpc.id

 ingress {
   cidr_blocks = [var.allowed_ssh_cidr]
   from_port   = 22
   to_port     = 22
   protocol    = "tcp"
 }

 egress {
   from_port   = 0
   to_port     = 0
 protocol    = "-1"
   cidr_blocks = ["0.0.0.0/0"]
 }
}

This Terraform block creates a Security Group in AWS named jumphost_SG that will be associated with the jumphost_vpc VPC. With this security group, you specify:

  • Ingress Rule (Incoming traffic): This rule allows SSH access (port 22) from the IP range specified by the variable var.allowed_ssh_cidr.
  • Egress Rule (Outgoing traffic): This allows all outbound traffic to any IP address (0.0.0.0/0).
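
As noted earlier for the allowed_ssh_cidr variable, 0.0.0.0/0 is only reasonable for a short-lived demo. One way to tighten it without touching the code is to look up your public IP and pass it as a /32 at apply time; the IP below is a placeholder:

curl -s https://checkip.amazonaws.com                       # prints your current public IP

terraform apply -var="allowed_ssh_cidr=203.0.113.10/32"     # replace with the IP returned above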

Key pair

resource "aws_key_pair" "jumphost_key" {
 key_name   = var.key_name
 public_key = var.public_key
}

This Terraform block registers the SSH public key you generated earlier as an AWS key pair; the EC2 instance references this key pair so you can connect to it via SSH.
Here’s a breakdown of the components:

  • resource "aws_key_pair" "jumphost_key": Defines a new AWS EC2 key pair resource named jumphost_key.

  • key_name = var.key_name: Sets the key pair’s name using the value provided in the key_name variable defined in variables.tf.

  • public_key = var.public_key: Passes your previously created public SSH key to AWS using the public_key variable.

IAM Role & Instance Profile

resource "aws_iam_role" "jumphost_role" {
 name = "jumphost_role"

 assume_role_policy = jsonencode({
   Version = "2012-10-17"
   Statement = [
     {
       Action = "sts:AssumeRole"
       Effect = "Allow"
       Sid    = ""
       Principal = {
         Service = "ec2.amazonaws.com"
       }
     },
   ]
 })

 tags = {
   tag-key = "jumphost_tag_value"
 }
}

# This is an AWS IAM policy

resource "aws_iam_role_policy_attachment" "administrator_access_attach" {
 role       = aws_iam_role.jumphost_role.name
 policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}

resource "aws_iam_instance_profile" "jumphost_instance_profile" {
 name = "jumphost_instance_profile"
 role = aws_iam_role.jumphost_role.name
}




This Terraform code block sets up IAM permissions for your jump host with three component blocks:

  • aws_iam_role "jumphost_role": The role includes a trust policy that allows EC2 instances to assume the role using the sts:AssumeRole action. The trusted service is ec2.amazonaws.com.
  • aws_iam_role_policy_attachment "administrator_access_attach": This block attaches the AWS-managed AdministratorAccess policy to the jumphost_role, granting it full administrative permissions.
  • aws_iam_instance_profile "jumphost_instance_profile": This block creates an instance profile that links the IAM role to the jumphost EC2 instance.

This profile can be attached to EC2 instances so they can inherit the role’s permissions at runtime.
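
A quick way to confirm the instance profile took effect, once you can log in to the jumphost at the end of this article, is to ask AWS which identity the instance is acting as; the ARN in the output should reference jumphost_role:

# Run on the jumphost after SSH-ing in (the AWS CLI is installed by the init script)
aws sts get-caller-identity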

EC2 Instance

# EC2 Instance
resource "aws_instance" "jumphost" {
 ami                         = data.aws_ami.ubuntu.id
 instance_type               = var.instance_type
 associate_public_ip_address = true
 key_name                    = aws_key_pair.jumphost_key.key_name
 vpc_security_group_ids      = [aws_security_group.jumphost_SG.id]
 subnet_id                   = aws_subnet.jumphost_subnet.id
 iam_instance_profile        = aws_iam_instance_profile.jumphost_instance_profile.name

 tags = {
   Name        = "Jumphost"
   Environment = var.environment
   Owner       = var.owner
 }

 user_data = file("../scripts/jumphost_init.sh")
}



This Terraform block provisions the EC2 instance that will serve as your jumphost. It brings together all the components you defined earlier and ties them into a single resource:

  • ami = data.aws_ami.ubuntu.id: Uses the Ubuntu AMI you previously retrieved to launch the instance.

  • instance_type = var.instance_type: Specifies the EC2 instance type based on your provided variable.

  • associate_public_ip_address = true: Assigns a public IP address to the instance, allowing direct access over the internet.

  • key_name = aws_key_pair.jumphost_key.key_name: Associates the instance with the SSH key pair for secure remote access.

  • vpc_security_group_ids = [aws_security_group.jumphost_SG.id]: Attaches the instance to the predefined security group, controlling inbound and outbound traffic rules.

  • subnet_id = aws_subnet.jumphost_subnet.id: Places the instance in the designated subnet.

  • iam_instance_profile = aws_iam_instance_profile.jumphost_instance_profile.name: Applies the IAM instance profile, granting the instance appropriate permissions.

  • tags = {...}: Adds metadata for organizational purposes—such as the instance name, environment, and owner.

  • user_data = file("../scripts/jumphost_init.sh"): Runs a startup script upon instance launch, allowing for automated configuration and initialization.

In summary, this block ties everything together to create a fully functional jumphost with networking, access control, permissions, and startup configuration all pre-configured.

Define your outputs in the outputs.tf file

The outputs.tf file allows you to expose key information about your infrastructure after running terraform apply. These outputs make it easy to retrieve critical values such as public IP addresses or instance IDs, useful for debugging, connectivity, or automation.

To define outputs for your jumphost, paste the following code into your outputs.tf file:

output "jumphost_public_ip" {
 description = "The public IP of the jumphost"
 value       = aws_instance.jumphost.public_ip
}

output "jumphost_ssm_instance_id" {
 description = "Instance ID for use with AWS SSM"
 value       = aws_instance.jumphost.id
}


Let’s break down what each output does:

  • output "jumphost_public_ip": This outputs the public IP address of the EC2 jumphost. It's particularly useful when you need to SSH into the instance or connect via tools that require the IP.

    • description: Describes the purpose of the output value.
    • value: Points to aws_instance.jumphost.public_ip, which fetches the actual IP address of the jumphost.
  • output "jumphost_ssm_instance_id": This outputs the EC2 instance ID, a key value if you plan to use AWS Systems Manager (SSM) for session-based access, allowing you to connect without SSH keys.

    • description: Describes the purpose of the output value.
    • value: Refers to aws_instance.jumphost.id, which returns the unique identifier of the EC2 instance. This is useful for SSM sessions and automation scripts.
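
When you run Terraform locally, these values can be read back at any time with the terraform output command; in the GitHub Actions pipeline they also appear at the end of the apply step's log:

terraform output                            # list all outputs
terraform output -raw jumphost_public_ip    # print just the IP, handy for scripting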

Creating the initialization script

Once your Terraform configurations for the EC2 instance are in place, the next step is to create an initialization script that runs after the instance is launched. This script prepares the jumphost for interacting with the AWS EKS cluster, whether for creating clusters or running monitoring tools like Prometheus and Grafana, by installing the necessary dependencies.

The required tools are:

  • kubectl: Command-line tool for managing Kubernetes clusters and workloads.
  • helm: Package manager for Kubernetes, used to install charts such as Prometheus, Grafana, nginx-ingress, and more.
  • eksctl: Utility for creating and managing EKS clusters, including the underlying infrastructure.
  • awscli: AWS Command Line Interface for authenticating and managing AWS resources.

Note: If you're not planning to follow the full tutorial series, you may safely skip this section.

To install these dependencies, copy and paste the following script into your scripts/jumphost_init.sh file:
https://gist.github.com/Iheanacho-ai/cd3833e34add7d1b7cc67257dbaef104

The script prepares the instance with tools and services needed for interacting with and managing an Amazon EKS (Kubernetes) cluster.

Here's a breakdown of what it does, step by step:

Script setup and logging

#!/bin/bash
# For Ubuntu 22.04

set -e # Exit script immediately on first error.

# Log all output to file
exec >> /var/log/init-script.log 2>&1

echo "Starting initialization script..."

What this does:

  • #!/bin/bash: Specifies Bash as the interpreter for this script.
  • set -e: Ensures the script halts on any error, preventing unintended consequences.
  • exec >> /var/log/init-script.log 2>&1: Logs all output, both standard and error, to /var/log/init-script.log for easier troubleshooting.
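
Because all output is redirected to /var/log/init-script.log, that file is the first place to look if a tool on the jumphost is missing or misconfigured. After connecting to the instance, you could inspect it like this (Ubuntu also captures user data output in /var/log/cloud-init-output.log):

# On the jumphost, after SSH-ing in
sudo tail -n 50 /var/log/init-script.log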

Update the operating system

# Update system
sudo apt update -y
sudo apt upgrade -y

The script above updates the system's package lists and upgrades all installed packages to their latest versions. This step helps prevent compatibility or security issues during subsequent installations.

Install AWS CLI

# Install AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip -y
unzip awscliv2.zip
sudo ./aws/install
echo "AWS CLI installed. Remember to configure credentials with 'aws configure'"

The bash script block above:

  • Downloads the AWS CLI installation archive.
  • Installs the unzip utility if it’s not already present.
  • Extracts the archive and installs the CLI.
  • Echoes a reminder to configure your credentials for AWS access with aws configure.

Install kubectl (Kubernetes CLI)

# Install Kubectl
sudo apt update
sudo apt install curl -y
sudo curl -LO "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"
sudo chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client

This script installs the Kubernetes command-line tool, kubectl. It allows you to interact with and manage Kubernetes resources within your EKS cluster.

Install eksctl

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version

This script installs eksctl.

Install Helm

sudo snap install helm --classic

This command installs Helm, a powerful package manager for Kubernetes. You'll use Helm to deploy applications like Prometheus, Grafana, and NGINX Ingress using pre-configured packages called “charts.”

Add Helm repositories

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

These commands add external Helm repositories to your environment:

  • prometheus-community: for monitoring tools like Prometheus
  • grafana: for powerful visualization dashboards
  • ingress-nginx: for managing external access to your Kubernetes services

Finally, helm repo update refreshes the list of available charts so you can install the latest versions.

Install Prometheus

helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace

This command installs Prometheus using the kube-prometheus-stack Helm chart into the monitoring namespace. If the namespace doesn't already exist, it will be created automatically.

Install Grafana

helm install grafana grafana/grafana --namespace monitoring --create-namespace

This command installs Grafana in the same monitoring namespace. Grafana provides a powerful dashboard interface to visualize metrics collected by Prometheus.

Install the NGINX Ingress Controller

helm install ingress-nginx ingress-nginx/ingress-nginx

The command above installs the NGINX Ingress Controller, which acts as a gateway for routing external HTTP and HTTPS traffic to services running inside your EKS cluster.

Finish Script

helm repo update
echo "Initialization script completed successfully."

This section:

  • Updates your local Helm chart repository cache, ensuring access to the latest versions of available charts.
  • Prints a confirmation message to indicate that the initialization script has finished running successfully.

Automating Terraform deployments with GitHub Actions

Once you’ve written your Terraform configuration and initialization script, the next step is to automate the deployment process using GitHub Actions. This ensures that any changes made to your Terraform files are automatically applied to your infrastructure, keeping everything up to date without manual intervention.

To set this up, create a GitHub Actions workflow by copying the following YAML snippet into your .github/workflows/terraform.yaml:

https://gist.github.com/Iheanacho-ai/9d71fe4759dc8621de97967958cd60a5

Here is a breakdown of the GitHub Actions pipeline:

Workflow Name:

name: Terraform Jumphost Configuration

This line gives your workflow a descriptive name, "Terraform Jumphost Configuration," which will be visible in your GitHub Actions tab.

Triggers (on):

on:
 push:
   branches:
     - main
   paths:
     - terraform/**
 pull_request:
   branches:
     - main
   paths:
     - terraform/**


This section defines when this workflow will be automatically triggered:

  • push: The workflow runs when code is pushed to the repository.
    • branches: - main: Specifically, it will only trigger when commits are pushed to the main branch.
    • paths: - terraform/**: It only runs if the changes affect files inside the terraform/ directory. The ** wildcard ensures all nested files and subdirectories are included.
  • pull_request: The workflow also runs when a pull request is opened or updated.
    • branches: - main: It will only trigger for pull requests that target the main branch.
    • paths: - terraform/**: As with the push event, it only runs if changes are made within the terraform/ directory.

Environment Variables (env):

env:
 AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
 AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
 BUCKET_TF_STATE: ${{ secrets.BUCKET_TF }}
 AWS_REGION: ${{ secrets.AWS_REGION }}
 TF_LOG: DEBUG

This section defines environment variables that are accessible to all jobs within the workflow. Sensitive values are securely pulled from GitHub Secrets using the secrets context (secrets.NAME), ensuring credentials are not exposed in plain text.
Here's what each variable does:

  • AWS_ACCESS_KEY_ID: Stores your AWS access key ID, securely retrieved from GitHub Secrets.
  • AWS_SECRET_ACCESS_KEY: Stores your AWS secret access key, also pulled from Secrets.
  • BUCKET_TF_STATE: Specifies the name of the S3 bucket where your Terraform state file will be stored.
  • AWS_REGION: Sets the AWS region for your operations (e.g., us-east-1).
  • TF_LOG: Enables debug-level logging for Terraform, which provides detailed output useful for troubleshooting.

Jobs (jobs):

 terraform:
   name: "Apply Terraform configuration on changes"
   runs-on: ubuntu-latest
   defaults:
     run:
       shell: bash
       working-directory: ./terraform

This section defines the tasks, called "jobs", that will be executed in your GitHub Actions workflow. In this case, there's a single job named terraform:

  • name: "Apply Terraform configuration on changes": A descriptive name for the job.
  • runs-on: ubuntu-latest: Specifies that the job will run in a clean, virtual machine hosted on the latest version of Ubuntu provided by GitHub Actions.
  • defaults: Defines default settings for all run steps within this job.
    • shell: bash: Ensures that the commands in the run steps are executed using the Bash shell.
    • working-directory: ./terraform: Sets the current working directory for all subsequent run steps within this job to the terraform directory in your repository. This is crucial because your Terraform configuration files are located there.

Steps (steps):

   steps:
   - name: Checkout code
     uses: actions/checkout@v4

   - name: Setup Terraform
     uses: hashicorp/setup-terraform@v3

   - name: Terraform Init
     run: terraform init -backend-config="bucket=$BUCKET_TF_STATE"

   - name: Terraform Format
     run: terraform fmt -check
     continue-on-error: true

   - name: Terraform Validate
     run: terraform validate

   - name: Terraform Plan
     id: plan
     run: terraform plan -no-color -input=false -out planfile
     continue-on-error: true

   - name: Terraform Plan Status
     if: steps.plan.outcome == 'failure'
     run: exit 1

   - name: Terraform Apply
     if: github.ref == 'refs/heads/main' && github.event_name == 'push'
     run: terraform apply -auto-approve -input=false -parallelism=1 planfile

This section defines the individual steps that will be executed within the terraform job, in sequential order:

  1. Checkout code: This step leverages the official actions/checkout action (version 4) to clone your repository into the GitHub Actions runner, making your code available for the workflow.
  2. Setup Terraform: Here, the hashicorp/setup-terraform action (version 3) is used to install and configure the Terraform CLI on the runner environment.
  3. Terraform Init: This command initializes Terraform and configures the backend. Specifically, it uses the -backend-config option to point to your S3 bucket ($BUCKET_TF_STATE) where the Terraform state is stored securely.
  4. Terraform Format: The terraform fmt -check command verifies that your Terraform code conforms to standard formatting conventions. The setting continue-on-error: true allows the workflow to proceed even if formatting issues are detected, preventing the entire job from failing at this stage.
  5. Terraform Validate: This step runs terraform validate to ensure that the Terraform configuration files are syntactically correct and internally consistent.
  6. Terraform Plan: This generates an execution plan with the command terraform plan -no-color -input=false -out planfile.

    • -no-color disables colored output for clearer logs.
    • -input=false prevents Terraform from prompting for input interactively.
    • -out planfile saves the generated plan to a file named planfile, ensuring that the apply step runs exactly what was planned.
    • Similar to the formatting step, continue-on-error: true lets the workflow continue even if the plan generation encounters errors.
  7. Terraform Plan Status: This step acts as a gatekeeper by checking the outcome of the plan step. If the plan failed (steps.plan.outcome == 'failure'), it runs exit 1 to terminate the job immediately, preventing a potentially harmful apply.

  8. Terraform Apply: The final step applies the Terraform changes, but only when two conditions are met:

    • The workflow was triggered by a push to the main branch (github.ref == 'refs/heads/main').
    • The event type is a push event (github.event_name == 'push').

    When both conditions hold, the command terraform apply -auto-approve -input=false -parallelism=1 planfile applies the saved execution plan:

    • -auto-approve skips manual confirmation.
    • -input=false avoids interactive prompts.
    • -parallelism=1 limits resource creation/modification to one at a time to avoid race conditions or ordering issues, though it may slow execution.

Running your CI/CD pipeline

Once you’ve configured your pipeline, the next step is to trigger it to create the Terraform infrastructure. To do so, push your code to GitHub. GitHub will automatically detect your push, read the workflow file in .github/workflows, and run the pipeline.

Refer to the GitHub documentation for guidance on pushing your locally hosted code to GitHub.

After pushing your code, go to your project repository on GitHub and click on the Actions tab to monitor your workflow.

Note: If you do not see any workflow in the Actions tab, double-check the folder name and make sure your terraform.yaml file is correctly located in the workflows folder within the .github directory.

Once the workflow completes, your infrastructure should be provisioned.

Verify and review your project

To confirm that Terraform has successfully provisioned your infrastructure, follow these steps:

  1. Sign in to your AWS Console: Access your AWS console, then search for "EC2" in the search bar.
  2. Check for Your EC2 Instance: After searching, you should see your EC2 instance listed in the console.
  3. Get Your EC2 Instance's Public IP: To SSH into your EC2 instance, you need the public IP address. You can find this in your AWS console or from your GitHub workflow. In this case, we'll retrieve it from the GitHub workflow.

To get the public IP address from the GitHub workflow:

a. Click on your workflow run on the Actions page.

b. Select your job in the sidebar.

c. Expand the Terraform Apply step.

d. Scroll to the end of the step, and you should see the jumphost_public_ip value in the outputs section.

e. Copy this value; it’s your EC2 instance's public IP.

  4. SSH into Your EC2 Instance: Now that you have the public IP, you can SSH into your EC2 instance. Use the following command:

ssh -i <path to your private key> ubuntu@<public_ip>

Replace the placeholders:

  • <path to your private key>: The location of the private key you created earlier, typically ~/.ssh/<name of the key>.
  • <public_ip>: The public IP you copied from the GitHub workflow.

For example, if your key's path is ~/.ssh/key and your public IP is 3.81.145.221, the command would look like this:

ssh -i ~/.ssh/key ubuntu@3.81.145.221

With this, you should now be logged into your EC2 instance, provisioned by Terraform.
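
While you're logged in, you can quickly confirm that the initialization script installed the expected tooling; each command should print a version rather than "command not found":

aws --version
kubectl version --client
eksctl version
helm version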

Final thoughts

Terraform enables consistent provisioning, management, and versioning of infrastructure across multiple cloud providers. Regardless of who runs the pipeline that triggers the Terraform configuration you set up in this tutorial, the infrastructure will always be identical, with the same configuration. By automating these processes, Terraform reduces the potential for manual errors, boosts efficiency, and ensures your infrastructure is reproducible and scalable.

In this tutorial, you’ve created a jumphost on AWS, a secure server that will facilitate controlled and secure access to an EKS cluster you’ll set up later in the series.

But this is just the beginning of what you can achieve with Terraform and AWS. To dive deeper, check out the official Terraform with AWS documentation.
