Oloruntobi Olurombi

Automated EC2 Control Plane and EKS Cluster Deployment Using Terraform and GitHub Actions

In the fast-paced world of DevOps, automation is key to managing and scaling cloud infrastructure efficiently. Setting up an EC2 control plane and EKS cluster manually can be time-consuming, error-prone, and difficult to maintain, especially as infrastructure grows. That’s where tools like Terraform and GitHub Actions come into play. By leveraging Infrastructure as Code (IaC) with Terraform and automating deployments through Continuous Integration and Continuous Deployment (CI/CD) pipelines with GitHub Actions, we can streamline the entire process, reduce human error, and ensure consistent, repeatable infrastructure setups.

Terraform allows us to define infrastructure in code, making it easy to version, share, and collaborate on infrastructure changes. It supports a wide range of cloud providers, making it a powerful tool for managing cloud resources like EC2 instances and EKS clusters. On the other hand, GitHub Actions provides a flexible CI/CD platform directly integrated with our codebase, enabling us to automate everything from code testing to infrastructure provisioning.

In this article, I’ll walk you through the steps to automate the deployment of an EC2 control plane and EKS cluster using Terraform, integrated with GitHub Actions. You’ll learn how to define your infrastructure in Terraform, set up GitHub Actions workflows to automate the deployment process, and implement best practices to ensure your infrastructure is scalable, secure, and resilient.

Whether you're a seasoned DevOps engineer looking to enhance your toolkit or a cloud enthusiast eager to dive into automation, this guide will provide you with practical insights and hands-on techniques to get started. By the end of this article, you’ll be equipped with the knowledge to automate your cloud infrastructure deployments, allowing you to focus more on innovation and less on manual setup.

Prerequisites

Before diving into the automation process, ensure you have the following set up:

  • Create an AWS Account: If you haven’t already, sign up for an AWS account. This will be your gateway to provisioning and managing cloud resources.

  • Install AWS CLI: The AWS Command Line Interface (CLI) is a powerful tool that allows you to interact with AWS services from your terminal. Make sure you have it installed and configured with your credentials.

  • Create an S3 Bucket: Terraform uses a state file to keep track of your infrastructure. Create an S3 bucket to store this state file securely. This is crucial for maintaining consistency and enabling collaboration when managing your infrastructure.

  • Create IAM Access Keys (Access Key ID and Secret Access Key): Generate an IAM access key pair so that Terraform and GitHub Actions can authenticate and manage your AWS resources. Make sure you have both the access key ID and the secret access key ready.

  • Create a Key Pair: If you plan to SSH into your EC2 instances, create a key pair in AWS. This will allow you to securely connect to your instances once they’re up and running.

These prerequisites lay the foundation for the automated deployment process. With these in place, you’ll be ready to follow along with the Terraform configurations and GitHub Actions workflows that will bring your EC2 control plane and EKS cluster to life.
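
If you prefer the terminal, the last three items can be done with the AWS CLI. Here’s a minimal sketch, assuming the bucket name and key pair name used later in this article (terraform-state-bucket-tobi and bastion) and the us-east-1 region:

# Configure the CLI with your IAM access key and secret key
aws configure

# Create the S3 bucket that will hold the Terraform state
# (us-east-1 needs no LocationConstraint; other regions do)
aws s3api create-bucket --bucket terraform-state-bucket-tobi --region us-east-1

# Turn on versioning so earlier state files can be recovered
aws s3api put-bucket-versioning --bucket terraform-state-bucket-tobi \
  --versioning-configuration Status=Enabled

# Create the key pair for SSH access and save the private key locally
aws ec2 create-key-pair --key-name bastion \
  --query 'KeyMaterial' --output text > bastion.pem
chmod 400 bastion.pem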

Project Structure

To keep everything organised, we’ll structure our project as follows:

Eks-setup-github-action/
├── iam_roles.tf
├── main.tf
├── provider.tf
├── routing.tf
├── security_groups.tf
├── variables.tf
└── vpc.tf
  • iam_roles.tf: This file contains the IAM roles and policies needed for your EC2 instances and EKS cluster to function securely.

  • main.tf: The main configuration file where we’ll define the resources that Terraform will provision, such as the EC2 instances and EKS cluster.

  • provider.tf: This file specifies the AWS provider and its configuration, including region and authentication details.

  • routing.tf: Manages the routing tables and routes within the VPC, ensuring that traffic is correctly routed between subnets and other AWS resources.

  • security_groups.tf: Defines the security groups, controlling inbound and outbound traffic to our EC2 instances and other components.

  • variables.tf: A centralized place for defining variables used across your Terraform configuration, making it easier to manage and reuse values.

  • vpc.tf: Contains the configuration for Virtual Private Cloud (VPC), including subnets, internet gateways, and other networking components.

By organising our Terraform files this way, we’ll ensure that our infrastructure code is modular, maintainable, and easy to understand. Each file has a specific purpose, making it easier to manage changes and scale the setup as needed.

Now, let’s start by looking at the provider.tf file, which is the foundation of our Terraform project.

The provider.tf File

The provider.tf file is crucial in any Terraform project as it defines the provider configuration, which in this case is AWS. This file tells Terraform which cloud provider to use and how to authenticate with it.

Below is the content of our provider.tf file:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region     = var.region
  access_key = var.aws_access_key_id
  secret_key = var.aws_secret_access_key
}


Deep Dive:

  • terraform { required_providers { ... } }: This block specifies that we're using the AWS provider from HashiCorp's registry. The version constraint (~> 5.0) ensures that we use a version compatible with our configuration, avoiding breaking changes from newer versions.

  • provider "aws" { ... }: This block configures the AWS provider with the necessary credentials and region. The region, access_key, and secret_key are pulled from variables (var.region, var.aws_access_key_id, and var.aws_secret_access_key), making the configuration flexible and secure.

  • region: The AWS region where our resources will be deployed, such as us-east-1 (the default set in variables.tf).

  • access_key and secret_key: These are the AWS credentials that Terraform uses to authenticate and manage infrastructure. By using variables, you can avoid hardcoding sensitive information directly in your configuration files.

This provider.tf setup ensures that Terraform can communicate with AWS using the credentials and region you've specified. It also makes it easy to switch regions or credentials by simply changing the variable values, keeping your infrastructure code adaptable and secure.
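
Because the credentials are plain Terraform variables, you can also supply them through TF_VAR_-prefixed environment variables, which Terraform reads automatically — handy in CI, where you never want secrets on disk. A quick sketch:

# Terraform maps TF_VAR_<name> to the variable of the same name
export TF_VAR_region="us-east-1"
export TF_VAR_aws_access_key_id="<your-access-key-id>"
export TF_VAR_aws_secret_access_key="<your-secret-access-key>"
terraform plan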

With our provider configured, we can now move on to the core of our Terraform setup.

The vpc.tf File

The vpc.tf file contains the configuration for our Virtual Private Cloud (VPC), including the VPC itself and its associated subnets. Proper network setup is crucial for the secure and efficient operation of our EC2 instances and EKS cluster.

Here’s a look at the content of vpc.tf:

# Provides a VPC resource
resource "aws_vpc" "main" {
  cidr_block       = var.vpc_cidr_block
  instance_tenancy = "default"

  tags = {
    Name = var.tags_vpc
  }
}

# Provides VPC Public subnet resources
resource "aws_subnet" "public_subnet_1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.p_s_1_cidr_block
  availability_zone       = var.az_a
  map_public_ip_on_launch = true

  tags = {
    Name = var.tags_public_subnet_1
  }
}

resource "aws_subnet" "public_subnet_2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.p_s_2_cidr_block
  availability_zone       = var.az_b
  map_public_ip_on_launch = true

  tags = {
    Name = var.tags_public_subnet_2
  }
}

resource "aws_subnet" "public_subnet_3" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.p_s_3_cidr_block
  availability_zone       = var.az_c
  map_public_ip_on_launch = true

  tags = {
    Name = var.tags_public_subnet_3
  }

Enter fullscreen mode Exit fullscreen mode

Deep Dive:

  • aws_vpc "main": Creates the VPC with the CIDR block supplied by var.vpc_cidr_block and default instance tenancy. Every other network resource in this project lives inside this VPC.

  • aws_subnet "public_subnet_1", "public_subnet_2", and "public_subnet_3": Define public subnets spread across three availability zones (var.az_a, var.az_b, var.az_c). EKS requires subnets in at least two availability zones, and spreading across three improves resilience. These subnets have the map_public_ip_on_launch attribute set to true, which assigns public IP addresses to instances launched in them.

  • aws_subnet "private_subnet_1", "private_subnet_2", and "private_subnet_3": Define private subnets within the VPC. Instances in these subnets are not assigned public IPs and have no direct access to the internet.

Note that the Internet Gateway, route tables, and security groups that complete this network live in separate files — routing.tf and security_groups.tf — which we cover below.

The vpc.tf file sets up the networking environment for your infrastructure. By defining the VPC and its public and private subnets, you ensure that your resources are correctly networked. This modular approach allows you to maintain and scale your infrastructure effectively while keeping your network configuration organised and manageable.

Now it is time to start building our resources.

The main.tf File

The main.tf file is where we define the actual resources that Terraform will provision. This file includes configurations for the EC2 instance, EKS cluster, and associated components.

Here’s a look at the content of main.tf:

terraform {
  backend "s3" {
    bucket = "terraform-state-bucket-tobi"
    key    = "terraform.tfstate"
    region = "us-east-1"
    encrypt = true 
    #profile = "tobi"
  }
}

# Provides an EC2 instance resource
data "aws_ami" "amazon-linux-2" {
  most_recent = true

  filter {
    name   = "owner-alias"
    values = ["amazon"]
  }

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm*"]
  }
}

# Provides an EC2 Instance for Control Plane
resource "aws_instance" "control_plane" {
  depends_on = ["aws_internet_gateway.igw"]

  ami                         = data.aws_ami.amazon-linux-2.id
  instance_type               = var.instance_type
  associate_public_ip_address = true
  iam_instance_profile        = aws_iam_instance_profile.ec2_instance_profile.name
  key_name                    = "bastion"
  vpc_security_group_ids      = [aws_security_group.main_sg.id]
  subnet_id                   = aws_subnet.public_subnet_1.id
}

# Provides an EKS Cluster
resource "aws_eks_cluster" "eks_cluster" {
  name     = var.cluster_name
  role_arn = aws_iam_role.eks_cluster_role.arn

  version = "1.28"

  vpc_config {
    subnet_ids = [
      aws_subnet.public_subnet_1.id,
      aws_subnet.public_subnet_2.id,
      aws_subnet.public_subnet_3.id
    ]
  }

  depends_on = [
    aws_iam_role_policy_attachment.eks_cluster_policy_attachment,
    aws_iam_role_policy_attachment.eks_service_policy_attachment,
  ]
}

# Provides an EKS Node Group 
resource "aws_eks_node_group" "eks_node_group" {
  cluster_name    = aws_eks_cluster.eks_cluster.name
  node_group_name = var.node_group_name
  node_role_arn   = aws_iam_role.eks_node_group_role.arn
  subnet_ids      = [
    aws_subnet.public_subnet_1.id,
    aws_subnet.public_subnet_2.id,
    aws_subnet.public_subnet_3.id
  ]

  scaling_config {
    desired_size = 2
    max_size     = 2
    min_size     = 2
  }

  update_config {
    max_unavailable = 1
  }

  depends_on = [
    aws_iam_role_policy_attachment.eks_worker_node_policy_attachment,
    aws_iam_role_policy_attachment.eks_cni_policy_attachment,
    aws_iam_role_policy_attachment.ec2_container_registry_readonly,
  ]
}

# Output Resources
output "endpoint" {
  value = aws_eks_cluster.eks_cluster.endpoint
}

output "ec2_public_ip" {
  value = aws_instance.control_plane.public_ip 
}

output "ec2_instance_id" {
  value = aws_instance.control_plane.id
}

output "eks_cluster_name" {
  value = aws_eks_cluster.eks_cluster.name
}


Deep Dive:

  • terraform { backend "s3" { ... } }: Configures the backend to use an S3 bucket for storing the Terraform state file. This ensures that your state is centralized and protected with encryption.

  • data "aws_ami" "amazon-linux-2": Fetches the latest Amazon Linux 2 AMI ID, which is used for the EC2 instance.

  • resource "aws_instance" "control_plane": Defines an EC2 instance that will act as the control plane. It specifies the AMI, instance type, and other settings.

  • resource "aws_eks_cluster" "eks_cluster": Configures the EKS cluster, including its version and VPC configuration. It also ensures that IAM roles and policies are in place before and after cluster creation.

  • resource "aws_eks_node_group" "eks_node_group": Sets up an EKS node group with scaling and update configurations. This node group will run your containerized workloads.

  • output "endpoint": Outputs the EKS cluster endpoint URL, which is necessary for accessing the cluster.

  • output "ec2_public_ip": Outputs the public IP address of the EC2 instance.

  • output "ec2_instance_id": Outputs the ID of the EC2 instance.

  • output "eks_cluster_name": Outputs the name of the EKS cluster.

The main.tf file provides a comprehensive setup for your EC2 control plane and EKS cluster, ensuring that all necessary resources are created and configured. With this file, you can easily manage and scale your infrastructure while maintaining a clean and organised configuration.
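
Once terraform apply completes, these outputs are easy to read back from the CLI — the GitHub Actions workflow later in this article does exactly this with the -json flag. For example:

terraform output -raw ec2_public_ip    # public IP of the control plane instance
terraform output -raw eks_cluster_name # name of the EKS cluster
terraform output -json                 # all outputs as JSON, for scripting with jq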

With our primary resources defined, we now turn our attention to the IAM roles required for our setup. IAM roles are crucial for managing permissions and access controls for both our EC2 instances and EKS cluster. The iam_roles.tf file handles the creation of these roles and their associated policies.

The iam_roles.tf File

The iam_roles.tf file defines the IAM roles and policies necessary for the EC2 instance, EKS cluster, and worker nodes. This setup ensures that each component has the appropriate permissions to interact with other AWS services securely.

Here’s the content of iam_roles.tf:

# Declare the aws_caller_identity data source
data "aws_caller_identity" "current" {}

# IAM Role For EC2
resource "aws_iam_role" "ec2_instance_role" {
    name = var.ec2_instance_role_name

    assume_role_policy = jsonencode({
        Version = "2012-10-17",
        Statement = [
            {
                Action = "sts:AssumeRole",
                Effect = "Allow",
                Principal = {
                    Service = "ec2.amazonaws.com"
                }
            }
        ]
    })
}

# Policies For EC2 IAM Role

# Attach Policies 
resource "aws_iam_role_policy_attachment" "ec2_full_access" {
    role = aws_iam_role.ec2_instance_role.name
    policy_arn = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
}

resource "aws_iam_role_policy_attachment" "ec2_read_only_access" {
    role = aws_iam_role.ec2_instance_role.name 
    policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess"
}

# Create an Instance Profile (for attaching the role to an EC2 instance)
resource "aws_iam_instance_profile" "ec2_instance_profile" {
    name = var.ec2_instance_profile
    role = aws_iam_role.ec2_instance_role.name
}

# IAM Role for EKS Cluster Plane 
resource "aws_iam_role" "eks_cluster_role" {
    name = var.eks_cluster_role_name

    assume_role_policy = jsonencode({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "eks.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    })
}

resource "aws_iam_role_policy_attachment" "eks_cluster_policy_attachment" {
    policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
    role = aws_iam_role.eks_cluster_role.name 
}

resource "aws_iam_role_policy_attachment" "eks_service_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
  role = aws_iam_role.eks_cluster_role.name
}

# IAM Role for Worker node
resource "aws_iam_role" "eks_node_group_role" {
    name = var.eks_node_group_role_name

    assume_role_policy = jsonencode({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "ec2.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    })
}

resource "aws_iam_role_policy_attachment" "eks_worker_node_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_node_group_role.name
}

resource "aws_iam_role_policy_attachment" "eks_cni_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_node_group_role.name
}

resource "aws_iam_role_policy_attachment" "ec2_container_registry_readonly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_node_group_role.name
}

# Create a Policy That Allows The eks:DescribeCluster Action
resource "aws_iam_policy" "eks_describe_cluster_policy" {
  name        = var.eks_describe_cluster_policy_name
  description = "Policy to allow describing EKS clusters"
  policy      = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action   = "eks:DescribeCluster"
        Effect   = "Allow"
        Resource = "arn:aws:eks:${var.region}:${data.aws_caller_identity.current.account_id}:cluster/${var.cluster_name}"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "eks_describe_cluster_policy_attachment" {
  role       = aws_iam_role.ec2_instance_role.name
  policy_arn = aws_iam_policy.eks_describe_cluster_policy.arn
}

  • data "aws_caller_identity" "current": Retrieves information about the current AWS account. This is used to dynamically reference the account ID in policies.
  • IAM Roles for EC2:
  • aws_iam_role "ec2_instance_role": Defines a role for the EC2 instance with an assume role policy allowing EC2 to assume this role.

  • aws_iam_role_policy_attachment "ec2_full_access" and ec2_read_only_access: Attach policies to the EC2 role to provide full and read-only access to EC2 resources.

  • aws_iam_instance_profile "ec2_instance_profile": Creates an instance profile to attach the EC2 role to the EC2 instance.

  • IAM Roles for EKS Cluster:
  • aws_iam_role "eks_cluster_role": Defines a role for the EKS cluster with an assume role policy allowing EKS to assume this role.

  • aws_iam_role_policy_attachment "eks_cluster_policy_attachment" and eks_service_policy_attachment: Attach policies necessary for the EKS cluster to operate.

  • IAM Roles for EKS Worker Nodes:
  • aws_iam_role "eks_node_group_role": Defines a role for EKS worker nodes with an assume role policy allowing EC2 to assume this role.

  • aws_iam_role_policy_attachment "eks_worker_node_policy_attachment", eks_cni_policy_attachment, and ec2_container_registry_readonly: Attach policies to the worker node role to allow it to interact with EKS, manage networking, and access container images.

  • aws_iam_policy "eks_describe_cluster_policy" and aws_iam_role_policy_attachment "eks_describe_cluster_policy_attachment": Create a policy to allow describing EKS clusters and attach it to the EC2 role.

The iam_roles.tf file ensures that each component of your infrastructure has the appropriate permissions to operate effectively and securely. Proper role and policy management is essential for maintaining a secure and functional environment.
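
Once the instance profile is attached, you can sanity-check these permissions from the EC2 instance itself. A small sketch, assuming the default names from variables.tf:

# Confirm the instance has assumed the EC2 role via its instance profile
aws sts get-caller-identity

# Exercise the eks:DescribeCluster permission granted above
aws eks describe-cluster --name EKSCluster --region us-east-1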

Now that we’ve set up the IAM roles, let’s move on to configuring the networking components of our infrastructure. Proper routing and internet access are crucial for ensuring that our resources can communicate as needed.

The routing.tf File

The routing.tf file handles the creation of routing tables and internet gateways for your VPC. This setup ensures that your public and private subnets can route traffic appropriately.

Here’s the content of the routing.tf file:

# Provides a resource to create a VPC routing table
resource "aws_route_table" "public_rt" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = {
    Name = var.tags_public_rt
  }
}

# Provides a resource to create an association between a route table and Public subnets
resource "aws_route_table_association" "public_subnet_1_association" {
    subnet_id = aws_subnet.public_subnet_1.id
    route_table_id = aws_route_table.public_rt.id 
}

resource "aws_route_table_association" "public_subnet_2_association" {
    subnet_id = aws_subnet.public_subnet_2.id
    route_table_id = aws_route_table.public_rt.id 
}

resource "aws_route_table_association" "public_subnet_3_association" {
    subnet_id = aws_subnet.public_subnet_3.id
    route_table_id = aws_route_table.public_rt.id 
}

# Provides a resource to create a VPC Internet Gateway
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = var.tags_igw
  }
}

# Provides a resource to create a private route table 
resource "aws_route_table" "private_rt" {
    vpc_id = aws_vpc.main.id
}

# Provides a resource to create an association between a route table and Private subnets
resource "aws_route_table_association" "private_subnet_1_association" {
    subnet_id = aws_subnet.private_subnet_1.id
    route_table_id = aws_route_table.private_rt.id 
}

resource "aws_route_table_association" "private_subnet_2_association" {
    subnet_id = aws_subnet.private_subnet_2.id
    route_table_id = aws_route_table.private_rt.id 
}

resource "aws_route_table_association" "private_subnet_3_association" {
    subnet_id = aws_subnet.private_subnet_3.id
    route_table_id = aws_route_table.private_rt.id 
}

Deep Dive:

  • aws_route_table "public_rt": Creates a routing table for public subnets. This routing table has a default route (0.0.0.0/0) that directs traffic to the internet gateway, allowing instances in public subnets to access the internet.

  • aws_route_table_association: Associates the public routing table with the public subnets. This ensures that traffic from these subnets is routed correctly according to the public_rt table.

  • aws_internet_gateway "igw": Creates an internet gateway and attaches it to the VPC. This gateway allows instances in the public subnets to access the internet.

  • aws_route_table "private_rt": Creates a routing table for private subnets. This table does not have a route to the internet gateway, ensuring that private subnets do not have direct internet access.

  • aws_route_table_association: Associates the private routing table with the private subnets, ensuring that traffic is routed according to the private_rt table.

The routing.tf file ensures that your VPC is properly configured to handle traffic both internally and to/from the internet. With the routing and internet access in place, your infrastructure will be able to interact with external resources and services as needed.
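
After applying, you can confirm the routes landed as expected with the AWS CLI, filtering by the Name tag set in variables.tf:

# The public route table should show a 0.0.0.0/0 route to the internet gateway
aws ec2 describe-route-tables \
  --filters "Name=tag:Name,Values=public-route-table" \
  --query 'RouteTables[].Routes'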

The security_groups.tf File

The security_groups.tf file defines the security groups required for your infrastructure. Security groups act as virtual firewalls that control the inbound and outbound traffic to your resources, such as EC2 instances and EKS clusters.

Here’s the content of the security_groups.tf file:

# Provides a security group 
resource "aws_security_group" "main_sg" {
    name = "main_sg"
    description = var.main_sg_description
    vpc_id = aws_vpc.main.id 

    ingress  {
        description = "ssh access"
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    ingress  {
        description = "Kubernetes API access"
        from_port = 443
        to_port = 443 
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }

    tags = {
        Name = var.tags_main_sg_eks
    }
}


Deep Dive:

  • aws_security_group "main_sg": Defines a security group named main_sg associated with your VPC. This security group is crucial for controlling access to your EC2 instances and EKS cluster.
  • Ingress Rules:
  • SSH Access: Allows incoming traffic on port 22 (SSH) from any IP address (0.0.0.0/0). This is essential for remote access to your EC2 instances.

  • Kubernetes API Access: Allows incoming traffic on port 443 (HTTPS) from any IP address (0.0.0.0/0). This rule is necessary for accessing the Kubernetes API server if you’re using EKS.

  • Egress Rules:

  • Outbound Traffic: Allows all outbound traffic (protocol "-1") to any IP address (0.0.0.0/0). This ensures that instances can initiate outbound connections to the internet or other resources.

The security_groups.tf file is vital for defining the security posture of your infrastructure. By setting up appropriate security groups, you ensure that your resources are protected from unwanted access while allowing necessary communication for their operation.
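
Opening port 22 to 0.0.0.0/0 is fine for a short-lived demo, but in production you would normally restrict SSH to a known address. One approach, sketched here with ssh_ingress_cidr as a hypothetical variable you would add to variables.tf and reference in the ingress block:

# Look up your current public IP
MY_IP=$(curl -s https://checkip.amazonaws.com)

# Hypothetical: pass it in once the SSH ingress CIDR is parameterised
terraform apply -var="ssh_ingress_cidr=${MY_IP}/32"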

With the security groups set up to manage traffic to and from your resources, it's important to ensure that all the necessary variables are defined. These variables help to parameterize your Terraform configurations, making them flexible and easier to manage.

The variables.tf File

The variables.tf file contains the variable definitions used across your Terraform configurations. These variables provide default values and can be overridden as needed.

Here’s the content of the variables.tf file:

variable "region" {
    type = string 
    default = "us-east-1"
}

variable "bucket_name" {
    type = string 
    default = "terraform-state-bucket-tobi"
}

variable "aws_access_key_id" {
    type = string
    default = ""
}

variable "aws_secret_access_key" {
    type = string
    default = ""
}

variable "tags_vpc" {
    type = string 
    default = "main-vpc-eks"
}

variable "tags_public_rt" {
    type = string 
    default = "public-route-table"
}

variable "tags_igw" {
    type = string 
    default = "internet-gateway"
}

variable "tags_public_subnet_1" {
    type = string 
    default = "public-subnet-1"
}

variable "tags_public_subnet_2" {
    type = string 
    default = "public-subnet-2"
}

variable "tags_public_subnet_3" {
    type = string 
    default = "public-subnet-3"
}

variable "tags_private_subnet_1" {
    type = string 
    default = "private-subnet-1"
}

variable "tags_private_subnet_2" {
    type = string 
    default = "private-subnet-2"
}

variable "tags_private_subnet_3" {
    type = string 
    default = "private-subnet-3"
}

variable "tags_main_sg_eks" {
    type = string
    default = "main-sg-eks"
}

variable "instance_type" {
    type = string 
    default = "t2.micro"
}

variable "cluster_name" {
    type = string 
    default = "EKSCluster"
}

variable "node_group_name" {
    type = string 
    default = "SlaveNode"
}

variable "vpc_cidr_block" {
    type = string 
    default = "10.0.0.0/16"
}

variable "p_s_1_cidr_block" {
    type = string 
    default = "10.0.1.0/24"
}

variable "az_a" {
    type = string 
    default = "us-east-1a"
}

variable "p_s_2_cidr_block" {
    type = string 
    default = "10.0.2.0/24"
}

variable "az_b" {
    type = string 
    default = "us-east-1b"
}

variable "p_s_3_cidr_block" {
    type = string 
    default = "10.0.3.0/24"
}

variable "az_c" {
    type = string 
    default = "us-east-1c"
}

variable "private_s_1_cidr_block" {
    type = string 
    default = "10.0.4.0/24"
}

variable "az_private_a" {
    type = string 
    default = "us-east-1c"
}

variable "private_s_2_cidr_block" {
    type = string 
    default = "10.0.5.0/24"
}

variable "az_private_b" {
    type = string 
    default = "us-east-1c"
}

variable "private_s_3_cidr_block" {
    type = string 
    default = "10.0.6.0/24"
}

variable "az_private_c" {
    type = string 
    default = "us-east-1c"
}

variable "main_sg_description" {
    type = string 
    default = "Allow TLS inbound traffic and all outbound traffic"
}

variable "ec2_instance_role_name" {
    type = string 
    default = "ec2-instance-role"
}

variable "ec2_instance_profile" {
    type = string 
    default = "ec2-instance-profile"
}

variable "eks_cluster_role_name" {
    type = string 
    default = "eksclusterrole-2"
}

variable "eks_node_group_role_name" {
    type = string 
    default = "eks-node-group-role"
}

variable "eks_describe_cluster_policy_name" {
    type = string 
    default = "eks-describe-cluster-policy"
}


Deep Dive:

  • Region and Access Keys: Define your AWS region and credentials for Terraform to interact with AWS.

  • Tags: Tagging helps to identify and manage resources, making it easier to organize and maintain them.

  • Instance and Cluster Details: Variables for instance types, cluster names, and other configurations help to customize your deployments.

  • CIDR Blocks and Availability Zones: Define your network ranges and availability zones for proper network segmentation.

The variables.tf file is essential for defining all the customizable parameters that Terraform uses to create and manage your infrastructure. It helps in keeping your Terraform configurations flexible and adaptable to different environments or requirements.
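
Any of these defaults can be overridden at plan or apply time without editing the file. For example:

# Override individual values on the command line
terraform plan -var="region=us-west-2" -var="instance_type=t3.small"

# Or keep environment-specific values in a tfvars file
terraform plan -var-file="prod.tfvars"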

With your infrastructure set up using Terraform, it’s time to automate the deployment and management of your resources through Continuous Integration (CI) and Continuous Deployment (CD). Leveraging GitHub Actions for CI/CD allows you to automate the processes of building, testing, and deploying your infrastructure, making your workflow more efficient and less error-prone.

CI/CD Integration: Automating Deployment and Management

In this section, we will introduce how to set up GitHub Actions workflows to manage your Terraform configurations and automate the deployment of your Elastic Kubernetes Service (EKS) cluster.

Adding GitHub Secrets

Before you can use GitHub Actions to deploy your infrastructure, you need to add sensitive information like AWS credentials and SSH keys as secrets in your GitHub repository. These secrets will be used in the workflow to interact with AWS and other services securely.

To add secrets to your GitHub repository:
  • Navigate to Your Repository: Go to the repository where you want to add secrets.

  • Access Repository Settings: Click on the “Settings” tab in the repository.

  • Select Secrets and Variables: In the left sidebar, click on "Secrets and variables" and then "Actions".

  • Add New Secrets: Click on the "New repository secret" button. Add the following secrets:

  • AWS_ACCESS_KEY_ID: Your AWS Access Key ID.

  • AWS_SECRET_ACCESS_KEY: Your AWS Secret Access Key.

  • AWS_REGION: The AWS region where your resources are located.

  • EKS_CLUSTER_NAME: The name of your EKS cluster.

  • EC2_SSH_PRIVATE_KEY: The private SSH key for accessing your EC2 instances.
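
If you prefer the terminal, the GitHub CLI can add the same secrets (assuming gh is installed and authenticated against your repository):

gh secret set AWS_ACCESS_KEY_ID --body "your-access-key-id"
gh secret set AWS_SECRET_ACCESS_KEY --body "your-secret-access-key"
gh secret set AWS_REGION --body "us-east-1"
gh secret set EKS_CLUSTER_NAME --body "EKSCluster"
# Read a multi-line private key from the file created earlier
gh secret set EC2_SSH_PRIVATE_KEY < bastion.pem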

GitHub Actions Workflow for EKS Setup

To automate the setup of your EKS cluster, we will create a GitHub Actions workflow that performs the following steps:

  • Log in to AWS: Configure AWS credentials.

  • Initialize Terraform: Prepare Terraform for execution.

  • Plan Terraform Deployment: Preview the changes Terraform will make.

  • Apply Terraform: Provision the infrastructure defined in your Terraform configuration.

  • Install Tools on EC2: Ensure necessary tools like AWS CLI and kubectl are installed on the EC2 instance.

Here is the content of the GitHub Actions workflow file eks-setup.yaml:

name: Set up EKS With Terraform

on: push

env: 
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_REGION: ${{ secrets.AWS_REGION }}
  EKS_CLUSTER_NAME: ${{ secrets.EKS_CLUSTER_NAME }}

jobs:
  LogInToAWS:
    runs-on: ubuntu-latest
    steps:
    - name: Configure credentials
      uses: aws-actions/configure-aws-credentials@v4
      with:
        aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
        aws-region: ${{ env.AWS_REGION }}

  TerraformInit:
    runs-on: ubuntu-latest
    needs: LogInToAWS
    steps:
    - name: Checkout
      uses: actions/checkout@v4

    - name: Initialize Terraform
      run: terraform init

  TerraformPlan:
    runs-on: ubuntu-latest
    needs: TerraformInit
    steps:
    - name: Checkout
      uses: actions/checkout@v4

    - name: Initialize Terraform
      run: terraform init

    - name: Plan Terraform
      run: terraform plan

  TerraformApply-InstallTools:
    runs-on: ubuntu-latest
    needs: TerraformPlan
    steps:
    - name: Checkout
      uses: actions/checkout@v4

    - name: Initialize Terraform (each job runs on a fresh runner)
      run: terraform init

    - name: Apply Terraform
      run: terraform apply -auto-approve

    - name: Get EC2 Public IP
      id: get_public_ip
      run: |
        terraform output -json > tf_output.json
        EC2_PUBLIC_IP=$(jq -r '.ec2_public_ip.value' tf_output.json)
        if [ -z "$EC2_PUBLIC_IP" ]; then
          echo "Error: EC2 Public IP is empty."
          exit 1
        fi
        echo "EC2_PUBLIC_IP=$EC2_PUBLIC_IP" >> $GITHUB_ENV
        echo "Captured EC2 Public IP: $EC2_PUBLIC_IP"

    - name: Ensure EC2 Public IP is Not Empty
      if: ${{ env.EC2_PUBLIC_IP == '' }}
      run: |
        echo "Error: EC2 Public IP is empty. Exiting."
        exit 1    

    - name: Install SSH Client
      run: sudo apt-get update && sudo apt-get install -y sshpass

    - name: Setup SSH Key
      run: |
        echo "${{ secrets.EC2_SSH_PRIVATE_KEY }}" > /tmp/private_key
        chmod 600 /tmp/private_key

    - name: Wait for EC2 Instance to be Ready
      run: sleep 100 

    - name: Print Environment Variables
      run: |
        echo "EC2_PUBLIC_IP=${{ env.EC2_PUBLIC_IP }}"

    - name: Debug Public IP
      run: |
        echo "Public IP: ${{ env.EC2_PUBLIC_IP }}"

    - name: SSH and Install AWS CLI and kubectl
      run: |
        set -x
        ssh -o StrictHostKeyChecking=no -i /tmp/private_key ec2-user@${{ env.EC2_PUBLIC_IP }} << 'EOF'
          sudo yum install -y unzip

          # Check if AWS CLI exists
          if ! command -v aws &> /dev/null
          then
            echo "AWS CLI not found, installing..."
            curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
            unzip awscliv2.zip
            sudo ./aws/install 
          else
            echo "AWS CLI already installed"
          fi

          # Check if kubectl exists
          if ! command -v kubectl &> /dev/null
          then
            echo "kubectl not found, installing..."
            curl -O "https://s3.us-west-2.amazonaws.com/amazon-eks/1.28.11/2024-07-12/bin/linux/amd64/kubectl"
            chmod +x ./kubectl
            mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
          else
            echo "kubectl already installed"
          fi

          kubectl version --client
          #aws eks update-kubeconfig --region ${{ env.AWS_REGION }} --name ${{ env.EKS_CLUSTER_NAME }}
        EOF

    - name: Verify EC2 Instance State 
      run: |
        INSTANCE_STATE=$(aws ec2 describe-instances --instance-ids $(terraform output -raw ec2_instance_id) --query 'Reservations[*].Instances[*].State.Name' --output text)  
        echo "EC2 Instance State: $INSTANCE_STATE"
        if [[ "$INSTANCE_STATE" != "running" ]]; then
          echo "Error: EC2 instance is not running."
          exit 1
        fi 


Deep Dive of the Workflow

LogInToAWS: Configures AWS credentials to allow the workflow to interact with your AWS account. (Since later jobs run on separate runners, it is the workflow-level env block that actually supplies credentials to the Terraform steps.)

TerraformInit: Initialises Terraform to prepare it for execution. Because each GitHub Actions job runs on a fresh runner, terraform init is repeated at the start of every subsequent job as well.

TerraformPlan: Plans the changes Terraform will make, allowing you to review them before applying.

TerraformApply-InstallTools: Applies the Terraform configurations, retrieves the EC2 public IP, and ensures necessary tools are installed on the EC2 instance.

By integrating this GitHub Actions workflow into your project and configuring the required secrets, you ensure that your infrastructure is provisioned consistently and managed efficiently with automated processes. This setup minimizes manual intervention, reduces the risk of errors, and accelerates the deployment of your applications.
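
When the pipeline goes green, you can verify the cluster end to end from the control plane instance — this is the step the workflow leaves commented out. Assuming the bastion.pem key pair and the outputs shown earlier:

# SSH into the control plane instance (substitute the ec2_public_ip output)
ssh -i bastion.pem ec2-user@<ec2_public_ip>

# Point kubectl at the new cluster, then list the worker nodes
aws eks update-kubeconfig --region us-east-1 --name EKSCluster
kubectl get nodes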

Pushing Code to Trigger the Pipeline

Once the setup is complete, the final step is to push your code to the repository to trigger the pipeline.

Here’s how you can do it:

  • Add Your Files:
  • Ensure all your Terraform configuration files and the GitHub Actions workflow file (eks-setup.yaml) are added to your local repository.
  • Commit Your Changes:
  • Use Git commands to commit your changes. For example:
git add .
git commit -m "Add Terraform configurations and GitHub Actions workflow"
  • Push to Repository:
  • Push your changes to the remote repository on GitHub:
git push origin main

  • Verify Pipeline Execution:

After pushing your code, navigate to the "Actions" tab in your GitHub repository to monitor the progress of your workflow. The pipeline will automatically trigger, executing the defined steps, from initialising and planning Terraform to applying the configurations and setting up necessary tools.


Summary

By following the steps outlined in this article, you have set up a robust and automated deployment pipeline. We started by defining and managing your infrastructure with Terraform, then moved on to integrate GitHub Actions for a seamless CI/CD process. Finally, by pushing your code to the repository, you triggered the pipeline, ensuring your infrastructure deployment is automated and consistent.

Conclusion

Automating your infrastructure deployment with Terraform and GitHub Actions brings significant benefits, including reduced manual intervention, improved efficiency, and enhanced consistency. By leveraging these tools, you maintain infrastructure as code, which simplifies version control and collaboration.

I hope this guide has equipped you with the knowledge and skills needed to implement a fully automated deployment pipeline for your cloud infrastructure. If you have any questions or need further assistance, please feel free to reach out or leave a comment.

☕️ If this article helped you avoid a tech meltdown or gave you a lightbulb moment, feel free to buy me a coffee! It keeps my code clean, my deployments smooth, and my spirit caffeinated. Help fuel the magic here!

Happy deploying!
