AzeematRaji
Building Smarter CI/CD: Automating Kubernetes Deployments with GitHub Actions

Overview

This project focuses on implementing automated Continuous Integration (CI) and Continuous Deployment (CD) workflows for both the frontend and backend applications of a web platform using GitHub Actions. The goal is to enhance development efficiency, enforce code quality standards, and streamline deployment to a Kubernetes cluster.

Deliverables

  1. Frontend CI/CD Pipelines:
  • Automate testing, linting, building, and deployment of the frontend application.
  • Ensure only code changes in the frontend application trigger workflows.
  • Implement conditional job execution to build and deploy only if linting and tests pass.
  • Tag Docker images with the Git SHA for traceability.
  2. Backend CI/CD Pipelines:
  • Establish a similar automated workflow for backend application development.
  • Enforce code quality checks and test validation before building the application.
  • Ensure Kubernetes deployments use the latest tagged Docker image based on the commit SHA.

Prerequisites

Steps to achieve this:

  • Deploy an EKS cluster on AWS using Terraform

    • Create a terraform/ directory
    • containing a main.tf which will create:
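For orientation, the resulting layout looks like this (each file is shown in full below):

terraform/
├── main.tf        # VPC, EKS cluster and node group, ECR repos, IAM user
├── variables.tf   # k8s_version, enable_private, AZ letters
├── outputs.tf     # ECR URLs, cluster name/version, IAM user ARN
└── providers.tf   # AWS provider pinned to us-east-1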

VPC

resource "aws_vpc" "vpc" {
  tags = {
    "Name" = "udacity"
  }
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

# Create an internet gateway
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.vpc.id
}

# Create a public subnet
resource "aws_subnet" "public_subnet" {
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1${var.public_az}"
  map_public_ip_on_launch = true
  tags = {
    Name = "udacity-public"
  }
}

# Create public route table
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = {
    Name = "public"
  }
}

# Associate the route table
resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public_subnet.id
  route_table_id = aws_route_table.public.id
}

# Create a private subnet
resource "aws_subnet" "private_subnet" {
  vpc_id            = aws_vpc.vpc.id
  availability_zone = "us-east-1${var.private_az}"
  cidr_block        = "10.0.2.0/24"
  tags = {
    Name = "udacity-private"
  }
}

# Create private route table
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name = "private"
  }
}

# Associate private route table
resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private_subnet.id
  route_table_id = aws_route_table.private.id
}

# Create EKS endpoint for private access
resource "aws_vpc_endpoint" "eks" {
  count               = var.enable_private == true ? 1 : 0 # only enable when private
  vpc_id              = aws_vpc.vpc.id
  service_name        = "com.amazonaws.us-east-1.eks"
  vpc_endpoint_type   = "Interface"
  security_group_ids  = [aws_eks_cluster.main.vpc_config.0.cluster_security_group_id]
  subnet_ids          = [aws_subnet.private_subnet.id]
  private_dns_enabled = true
}

# Create EC2 endpoint for private access
resource "aws_vpc_endpoint" "ec2" {
  count               = var.enable_private == true ? 1 : 0
  vpc_id              = aws_vpc.vpc.id
  service_name        = "com.amazonaws.us-east-1.ec2"
  vpc_endpoint_type   = "Interface"
  security_group_ids  = [aws_eks_cluster.main.vpc_config.0.cluster_security_group_id]
  subnet_ids          = [aws_subnet.private_subnet.id]
  private_dns_enabled = true
}

resource "aws_vpc_endpoint" "ecr-dkr-endpoint" {
  count               = var.enable_private == true ? 1 : 0
  vpc_id              = aws_vpc.vpc.id
  service_name        = "com.amazonaws.us-east-1.ecr.dkr"
  vpc_endpoint_type   = "Interface"
  security_group_ids  = [aws_eks_cluster.main.vpc_config.0.cluster_security_group_id]
  subnet_ids          = [aws_subnet.private_subnet.id]
  private_dns_enabled = true
}

resource "aws_vpc_endpoint" "ecr-api-endpoint" {
  count               = var.enable_private == true ? 1 : 0
  vpc_id              = aws_vpc.vpc.id
  service_name        = "com.amazonaws.us-east-1.ecr.api"
  vpc_endpoint_type   = "Interface"
  security_group_ids  = [aws_eks_cluster.main.vpc_config.0.cluster_security_group_id]
  subnet_ids          = [aws_subnet.private_subnet.id]
  private_dns_enabled = true
}

EKS resources

# Create an EKS cluster
resource "aws_eks_cluster" "main" {
  name     = "cluster"
  version  = var.k8s_version
  role_arn = aws_iam_role.eks_cluster.arn
  vpc_config {
    subnet_ids              = [aws_subnet.private_subnet.id, aws_subnet.public_subnet.id]
    endpoint_public_access  = var.enable_private == true ? false : true
    endpoint_private_access = true
  }
  depends_on = [aws_iam_role_policy_attachment.eks_cluster, aws_iam_role_policy_attachment.eks_service]
}


# Create an IAM role for the EKS cluster
resource "aws_iam_role" "eks_cluster" {
  name = "eks_cluster_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "eks.amazonaws.com"
        }
      }
    ]
  })
}

# Attach policies to the EKS cluster IAM role
resource "aws_iam_role_policy_attachment" "eks_cluster" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster.name
}

resource "aws_iam_role_policy_attachment" "eks_service" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = aws_iam_role.eks_cluster.name
}



# EKS Node Group

# Track latest release for the given k8s version
data "aws_ssm_parameter" "eks_ami_release_version" {
  name = "/aws/service/eks/optimized-ami/${aws_eks_cluster.main.version}/amazon-linux-2/recommended/release_version"
}

resource "aws_eks_node_group" "main" {
  node_group_name = "udacity"
  cluster_name    = aws_eks_cluster.main.name
  version         = aws_eks_cluster.main.version
  node_role_arn   = aws_iam_role.node_group.arn
  subnet_ids      = [var.enable_private == true ? aws_subnet.private_subnet.id : aws_subnet.public_subnet.id]
  release_version = nonsensitive(data.aws_ssm_parameter.eks_ami_release_version.value)
  instance_types  = ["t3.small"]

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }


  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.node_group_policy,
    aws_iam_role_policy_attachment.cni_policy,
    aws_iam_role_policy_attachment.ecr_policy,
  ]

  lifecycle {
    ignore_changes = [scaling_config.0.desired_size]
  }
}

// IAM Configuration
resource "aws_iam_role" "node_group" {
  name               = "udacity-node-group"
  assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
}

resource "aws_iam_role_policy_attachment" "node_group_policy" {
  role       = aws_iam_role.node_group.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "cni_policy" {
  role       = aws_iam_role.node_group.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "ecr_policy" {
  role       = aws_iam_role.node_group.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

data "aws_iam_policy_document" "assume_role_policy" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

ECR repositories

# ECR Repositories

resource "aws_ecr_repository" "frontend" {
  name                 = "frontend"
  image_tag_mutability = "MUTABLE"
  force_delete         = true

  image_scanning_configuration {
    scan_on_push = true
  }
}

resource "aws_ecr_repository" "backend" {
  name                 = "backend"
  image_tag_mutability = "MUTABLE"
  force_delete         = true

  image_scanning_configuration {
    scan_on_push = true
  }
}

GitHub Actions user for interacting with AWS from the pipeline.

# GitHub Actions user

resource "aws_iam_user" "github_action_user" {
  name = "github-action-user"
}

resource "aws_iam_user_policy" "github_action_user_permission" {
  user   = aws_iam_user.github_action_user.name
  policy = data.aws_iam_policy_document.github_policy.json
}

data "aws_iam_policy_document" "github_policy" {
  statement {
    effect    = "Allow"
    actions   = ["ecr:*", "eks:*", "ec2:*", "iam:GetUser"]
    resources = ["*"]
  }
}
  • outputs.tf
output "frontend_ecr" {
  value = aws_ecr_repository.frontend.repository_url
}
output "backend_ecr" {
  value = aws_ecr_repository.backend.repository_url
}

output "cluster_name" {
  value = aws_eks_cluster.main.name
}

output "cluster_version" {
  value = aws_eks_cluster.main.version
}

output "github_action_user_arn" {
  value = aws_iam_user.github_action_user.arn
}
  • variables.tf
variable "k8s_version" {
  default = "1.25"
}

variable "enable_private" {
  default = false
}

variable "public_az" {
  type        = string
  description = "Change this to a letter a-f only if you encounter an error during setup"
  default     = "a"
}

variable "private_az" {
  type        = string
  description = "Change this to a letter a-f only if you encounter an error during setup"
  default     = "b"
}
  • providers.tf
provider "aws" {
  region = "us-east-1"
}

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "4.55.0"
    }
  }
}
  • run terraform init to initialize the working directory
  • terraform plan to preview and validate the changes before applying
  • terraform apply --auto-approve to apply without a confirmation prompt

This provisions our infrastructure.
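Once the apply completes, the values declared in outputs.tf can be read back; they feed the secrets and names used in the pipelines below:

terraform output frontend_ecr            # ECR URL for frontend images
terraform output backend_ecr             # ECR URL for backend images
terraform output cluster_name            # used by aws eks update-kubeconfig
terraform output github_action_user_arn  # consumed by init.sh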

  • Run init.sh, which adds the github-action-user IAM user ARN to the Kubernetes aws-auth configuration, allowing that user to execute kubectl commands against the cluster.
#!/bin/bash
set -e -o pipefail

echo "Fetching IAM github-action-user ARN"
userarn=$(aws iam get-user --user-name github-action-user | jq -r .User.Arn)

# Download tool for manipulating aws-auth
echo "Downloading tool..."
curl -X GET -L https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.6.2/aws-iam-authenticator_0.6.2_linux_amd64 -o aws-iam-authenticator
chmod +x aws-iam-authenticator

echo "Updating permissions"
./aws-iam-authenticator add user --userarn="${userarn}" --username=github-action-role --groups=system:masters --kubeconfig="$HOME"/.kube/config --prompt=false

echo "Cleaning up"
rm aws-iam-authenticator
echo "Done!"

This script fetches the IAM user ARN, downloads the aws-iam-authenticator tool, adds the ARN to the cluster's authentication configuration, removes the tool, and prints a Done status.
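Behind the scenes, aws-iam-authenticator appends an entry to the aws-auth ConfigMap in the kube-system namespace; the result looks roughly like this (a sketch based on the script's arguments):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::<ACCOUNT_ID>:user/github-action-user
      username: github-action-role
      groups:
        - system:masters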

  • Generate AWS access keys for Github Action user

    • Go to AWS console, navigate to the IAM service
    • Under users, the github-action-user user account should be there
    • Click the account and go to Security Credentials
    • Under Access keys select Create access key
    • Select Application running outside AWS and click Next, then Create access key to finish creating the keys
    • Finally, make sure to copy these keys for storing in GitHub Secrets
    • In the GitHub repository settings, open Secrets and variables, then Actions, and paste these keys into the repo secrets (or create them from the CLI, as sketched below)
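Alternatively, the same keys can be generated with the AWS CLI instead of the console:

# Create access keys for the pipeline user (the secret is shown only once)
aws iam create-access-key --user-name github-action-user
# Store AccessKeyId and SecretAccessKey as the AWS_ACCESS_KEY_ID and
# AWS_SECRET_ACCESS_KEY repository secrets.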
  • Backend CI/CD pipeline

name: Backend CD

on:
  push:
    branches:
      - main
    paths:
      - 'starter/backend/**'
  workflow_dispatch:

jobs:
  lint:
    name: Lint
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Cache pipenv dependencies
        uses: actions/cache@v3
        with:
          path: ~/.cache/pipenv
          key: ${{ runner.os }}-pipenv-${{ hashFiles('starter/backend/Pipfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-pipenv-

      - name: Install dependencies
        run: pip install pipenv && pipenv install --dev
        working-directory: starter/backend

      - name: Run lint
        run: pipenv run lint
        working-directory: starter/backend

  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
          cache: 'pipenv'

      - name: Install pipenv
        run: curl https://raw.githubusercontent.com/pypa/pipenv/master/get-pipenv.py | python
      - run: pipenv install
        working-directory: starter/backend

      - name: Run tests
        run: pipenv run test
        working-directory: starter/backend

  build-and-push:
    name: Build Docker Image
    runs-on: ubuntu-latest
    needs: 
      - lint
      - test
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
          cache: 'pipenv'

      - name: Install pipenv
        run: curl https://raw.githubusercontent.com/pypa/pipenv/master/get-pipenv.py | python
      - run: pipenv install
        working-directory: starter/backend

      - name: Build Docker image
        run: |
          docker build \
            --tag mp-backend:latest \
            .
        working-directory: starter/backend

      - name: Set up AWS CLI
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Amazon ECR Login
        uses: aws-actions/amazon-ecr-login@v1

      - name: Tag and Push Docker Image
        run: |
          docker tag mp-backend:latest ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/backend:${{ github.sha }}
          docker push ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/backend:${{ github.sha }}

  deploy:
    name: Deploy to EKS
    runs-on: ubuntu-latest
    needs: build-and-push
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up AWS CLI
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Amazon ECR Login
        uses: aws-actions/amazon-ecr-login@v1

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'v1.25.0'

      - name: Update kubeconfig
        run: |
          aws eks --region us-east-1 update-kubeconfig --name cluster

      - name: Set image in Kubernetes manifests using Kustomize
        run: |
          kustomize edit set image backend=${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/backend:${{ github.sha }}
        working-directory: starter/backend/k8s

      - name: Deploy to EKS using Kustomize
        run: |
          kustomize build | kubectl apply -f -
        working-directory: starter/backend/k8s
  • This runs the lint and test jobs in parallel.
  • lint checks out the code, sets up Python, installs dependencies using pipenv, and then runs a linting tool to ensure the code adheres to style guidelines.
  • test runs the test suite to ensure the application functions correctly.
  • build-and-push then builds the Docker image and pushes it to the backend repository created in ECR, tagged with the commit SHA.
  • deploy applies the manifests in starter/backend/k8s to the cluster.
  • Kustomize manages and overlays YAML files, enabling reusable and environment-specific configurations; a sketch of the expected kustomization.yaml follows.
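The k8s manifests themselves are not shown in this post, but for kustomize edit set image backend=... to work, starter/backend/k8s needs a kustomization.yaml along these lines (a hypothetical sketch, not the actual starter file):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml   # assumed manifest names
  - service.yaml
images:
  - name: backend     # placeholder image name referenced in the Deployment
    newName: <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/backend
    newTag: <GIT_SHA>  # rewritten by kustomize edit set image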

Before pushing this code to the GitHub repository:

  • add AWS_ACCOUNT_ID to the GitHub secrets

After pushing the code, the workflow should be triggered and run successfully:

  • run kubectl get svc to get the URL of the backend service
  • add it as the BACKEND_URL secret in the GitHub repo; the frontend app needs it to connect to the backend API (see the snippet below)
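Assuming the backend Service is named backend and exposed as a LoadBalancer (an assumption about the starter manifests), the URL can be extracted directly:

# Hostname of the backend LoadBalancer (assumes a Service named "backend")
kubectl get svc backend -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# Prefix with http:// and save it as the BACKEND_URL GitHub secret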

  • Frontend CI/CD pipeline

name: Frontend CD

on:
  push:
    branches:
      - main
    paths:
      - 'starter/frontend/**'
  workflow_dispatch:

jobs:
  lint:
    name: Lint
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 'lts/*'

      - name: Cache npm dependencies
        uses: actions/cache@v3
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('starter/frontend/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install dependencies
        run: npm ci
        working-directory: starter/frontend

      - name: Run lint
        run: npm run lint
        working-directory: starter/frontend

  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 'lts/*'

      - name: Cache npm dependencies
        uses: actions/cache@v3
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('starter/frontend/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install dependencies
        run: npm ci
        working-directory: starter/frontend

      - name: Run tests
        run: CI=true npm test
        working-directory: starter/frontend

  build-and-push:
    name: Build Docker Image
    runs-on: ubuntu-latest
    needs: 
      - lint
      - test
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 'lts/*'

      - name: Cache npm dependencies
        uses: actions/cache@v3
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('starter/frontend/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install dependencies
        run: npm ci
        working-directory: starter/frontend

      - name: Build Docker image
        run: |
          docker build \
            --build-arg REACT_APP_MOVIE_API_URL=${{ secrets.BACKEND_URL }} \
            --tag mp-frontend:latest \
            .
        working-directory: starter/frontend

      - name: Set up AWS CLI
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Amazon ECR Login
        uses: aws-actions/amazon-ecr-login@v1

      - name: Tag and Push Docker Image
        run: |
          docker tag mp-frontend:latest ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/frontend:${{ github.sha }}
          docker push ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/frontend:${{ github.sha }}

  deploy:
    name: Deploy to EKS
    runs-on: ubuntu-latest
    needs: build-and-push
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up AWS CLI
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Amazon ECR Login
        uses: aws-actions/amazon-ecr-login@v1

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'v1.25.0'

      - name: Update kubeconfig
        run: |
          aws eks --region us-east-1 update-kubeconfig --name cluster

      - name: Set image in Kubernetes manifests using Kustomize
        run: |
          kustomize edit set image frontend=${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/frontend:${{ github.sha }}
        working-directory: starter/frontend/k8s

      - name: Deploy to EKS using Kustomize
        run: |
          kustomize build | kubectl apply -f -
        working-directory: starter/frontend/k8s

This mirrors what the backend CI/CD pipeline does.

  • It references the BACKEND_URL secret as a build argument so the frontend can reach the backend API endpoint (sketched below).
  • The workflow is triggered on pushes that touch the frontend paths.
  • Run kubectl get svc, open the URL of the frontend service in a browser, and the application should be live and displaying the data correctly.
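For the --build-arg to take effect, the frontend Dockerfile must declare the argument and expose it to the React build, roughly like this (a sketch, not the actual starter Dockerfile):

ARG REACT_APP_MOVIE_API_URL
ENV REACT_APP_MOVIE_API_URL=$REACT_APP_MOVIE_API_URL
RUN npm run build   # CRA bakes REACT_APP_* variables in at build time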

Best practices

  • Created a dedicated github-action-user for enhanced security:
    This user is granted only the permissions the workflow needs (ECR, EKS, EC2), reducing potential risks; in production the wildcard actions should be tightened further.

  • Utilized terraform plan for validation:
    Surfaces configuration errors and shows every proposed change before it is applied, so failures are caught early instead of discovered after deployment.

Conclusion

This project demonstrates a modern DevOps approach to managing web application development, emphasizing automation, efficiency, and scalability.

  • Faster release cycles with automated workflows.
  • Enhanced code quality through linting and test validation.
  • Improved traceability and reliability with SHA-tagged Docker images.
  • Simplified deployment process to Kubernetes, reducing manual errors.

Check out my GitHub.
