A simple CI/CD pipeline for EKS using CodePipeline

(Pipeline architecture diagram)

A developer pushes a commit to their repository. This can be an application change or a change to the Kubernetes infrastructure, e.g. the Deployment. The change triggers the pipeline, and CodeBuild builds a new container image and pushes it to ECR. We then run kubectl apply -f app.yaml, app.yaml being the manifest that contains the Kubernetes infrastructure. This command is run by CodeBuild and triggers an update in Kubernetes.

The code for this tutorial is on GitHub.

(Repository contents)

src: contains a simple HTML file, which is the application in this demo.
Dockerfile: takes the contents of the src folder and builds a Docker image (see the sketch after this list).
app.yaml: contains the Kubernetes infrastructure, i.e. the Deployment, Service and Ingress.
buildspec.yml: used by CodeBuild to build the image and push it to ECR. It is the brains of the entire pipeline.
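
A rough sketch of what such a Dockerfile might look like (not the exact file in the repository), assuming the php:7.3-apache base image mentioned later and Apache's default web root:

# Hypothetical sketch: build an image that serves the contents of src with php:7.3-apache.
FROM php:7.3-apache
COPY src/ /var/www/html/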

buildspec.yml

We define a few environment variables in CodeBuild.

(CodeBuild environment variables)

These variables hold the URI of the ECR repository, the region we are working in and the EKS cluster name. Eventually, we need to run kubectl apply -f app.yaml to apply changes to the cluster, so we also need an IAM role (EKS_KUBECTL_ROLE_ARN) that is authorized to make changes to the cluster.
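
For reference, the variables might look something like this. The account ID, repository name, region, cluster name and role name below are placeholders, not the real values:

REPOSITORY_URI=111122223333.dkr.ecr.us-east-1.amazonaws.com/eks-demo-app
REGION=us-east-1
EKS_CLUSTER_NAME=demo-cluster
EKS_KUBECTL_ROLE_ARN=arn:aws:iam::111122223333:role/eks-kubectl-role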

This is the policy attached to that role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "eks:Describe*",
            "Resource": "*"
        }
    ]
}
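Because the buildspec later assumes this role with aws sts assume-role, the role also needs a trust policy that lets the CodeBuild service role (or another principal in your account) assume it. A rough sketch, with a placeholder account ID:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

In practice you would scope the Principal down to the specific CodeBuild service role rather than the whole account.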

You will need to add this role to your EKS cluster to authorize it to make changes.
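
One common way to do this is to map the role in the aws-auth ConfigMap, for example with kubectl edit -n kube-system configmap/aws-auth. A minimal sketch of the mapRoles entry, using the same placeholder ARN and granting cluster-admin style access via system:masters:

- rolearn: arn:aws:iam::111122223333:role/eks-kubectl-role
  username: codebuild
  groups:
    - system:masters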

I was getting throttling errors when pulling the php:7.3-apache image from Docker Hub. That is why I added DOCKERHUB_USERNAME and DOCKERHUB_PASSWORD as per this tutorial, to get around the rate limit.

version: 0.2

phases:
  install:
    commands:
      - echo "${DOCKERHUB_PASSWORD}" | docker login -u "${DOCKERHUB_USERNAME}" --password-stdin
      - echo "Installing jq, awscli and kubectl"
      - apt-get update && apt-get -y install jq
      - curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
      - unzip awscliv2.zip
      - ./aws/install
      - export PATH="$PATH:/usr/local/bin"
      - curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl
      - chmod +x ./kubectl
      - mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
      - export PATH="$PATH:$HOME/bin"

Here, we install jq to help us manipulate JSON later, as well as the AWS CLI and kubectl.

  pre_build:
    commands:
      - echo "Setting image in deployment and log in to ECR"
      - TAG="$(date +%Y-%m-%d.%H.%M.%S)-$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 5)"
      - sed -i 's@CONTAINER_IMAGE@'"$REPOSITORY_URI:$TAG"'@' app.yaml
      - export KUBECONFIG=$HOME/.kube/config

We come up with a container image tag based on the date and the commit ID that triggered the pipeline. In the Deployment section of app.yaml, the image is defined as the placeholder CONTAINER_IMAGE. We use sed to replace this placeholder with the repository URI and the tag we have just generated.
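
For illustration, the container spec in app.yaml's Deployment might look something like this before the build runs (the container name here is a placeholder):

    spec:
      containers:
        - name: app
          image: CONTAINER_IMAGE

After the sed command, CONTAINER_IMAGE becomes something like <repository URI>:2021-09-01.10.15.30-ab12c.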

  build:
    commands:
      - echo "building and tagging image"
      - docker build -t $REPOSITORY_URI .
      - docker tag $REPOSITORY_URI $REPOSITORY_URI:$TAG

We build and tag the image.

  post_build:
    commands:
      - echo "post_build"
      - aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $REPOSITORY_URI
      - docker push $REPOSITORY_URI:$TAG
      - CREDENTIALS=$(aws sts assume-role --role-arn $EKS_KUBECTL_ROLE_ARN --role-session-name codebuild-kubectl --duration-seconds 900)
      - export AWS_ACCESS_KEY_ID="$(echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId')"
      - export AWS_SECRET_ACCESS_KEY="$(echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey')"
      - export AWS_SESSION_TOKEN="$(echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken')"
      - export AWS_EXPIRATION=$(echo ${CREDENTIALS} | jq -r '.Credentials.Expiration')
      - aws eks update-kubeconfig --name $EKS_CLUSTER_NAME
      - kubectl apply -f app.yaml

We log in to ECR and push the image, then assume the IAM role we created earlier. The aws eks update-kubeconfig command generates the kubeconfig file for us. Finally, we run kubectl apply -f app.yaml and our cluster is updated.
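
If you also want the build to wait for the rollout and fail when the new pods do not become ready, you could add one more command after the apply. The deployment name below is a placeholder:

      - kubectl rollout status deployment/app --timeout=120s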
