Deploy to Kubernetes using Github Actions (including Slack notification)

Leandro Proença

In this guide we'll cover the full cycle of deploying to Kubernetes using Github Actions. Batteries included:

  • Kubernetes cluster running in AWS EKS
  • Docker images stored in AWS ECR
  • Bonus: notification to Slack

The Github workflow will be triggered at every commit on pull request, and its steps are described as follows:

  • git checkout
  • login to AWS ECR (credentials needed)
  • build Docker image
  • push Docker image to ECR
  • deploy to EKS using kubectl
  • send notification to Slack (needs webhook URL)

Github Action

Let's place our file under .github/workflows/release.yml. Then, we start by configuring the workflow trigger:

name: Release

on:
  pull_request:
    branches: [main]

This trigger runs when the pull request is opened and again at every commit pushed to it.

Next, we define the env variables that will be used across steps:

env:
  RELEASE_REVISION: "pr-${{ github.event.pull_request.number }}-${{ github.event.pull_request.head.sha }}"
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_REGION: ${{ secrets.AWS_REGION }}
  KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
  KUBE_NAMESPACE: production
  ECR_REPOSITORY: my-cool-application
  SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}

Env explained:

  • RELEASE_REVISION: the Docker image tag, built from the pull request number and the head commit SHA
  • AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_REGION: the AWS credentials, stored as repository secrets
  • KUBE_CONFIG_DATA: the kubeconfig for the EKS cluster, stored as a secret (base64-encoded)
  • KUBE_NAMESPACE: the Kubernetes namespace we deploy to
  • ECR_REPOSITORY: the name of the ECR repository that stores the Docker images
  • SLACK_WEBHOOK_URL: the Slack incoming webhook URL, stored as a secret
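To give an idea of how the kubeconfig secret can be produced: a minimal sketch, assuming the kubectl action used later expects KUBE_CONFIG_DATA to be a base64-encoded kubeconfig. The config content below is a stub; in practice you would encode your real ~/.kube/config.

```shell
# Stub kubeconfig; in practice: base64 -w 0 < ~/.kube/config
sample_config='apiVersion: v1
kind: Config'

# -w 0 (GNU coreutils) keeps the encoded output on a single line,
# which is what you paste into the repository secret
encoded=$(printf '%s' "$sample_config" | base64 -w 0)
echo "$encoded"

# The action decodes it back at run time:
printf '%s' "$encoded" | base64 -d
```
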

Now, let's start writing the job, after which we'll declare the steps:

jobs:                                            
  release:                                       
    name: Release                                
    runs-on: ubuntu-latest                       
    steps:                                       
     ... [steps go at this level]

Step - Cancel Previous Runs

This step instructs GitHub to cancel any in-progress run of this job in the repository, so only the latest commit gets built.

- name: Cancel Previous Runs               
  uses: styfle/cancel-workflow-action@0.4.1
  with:                                    
    access_token: ${{ github.token }}      

Step - Checkout

Checks out the repository at the pull request's head commit.

- name: Checkout                                  
  uses: actions/checkout@v2                       
  with:                                           
    ref: ${{ github.event.pull_request.head.sha }}

Step - Configure AWS credentials

This step uses the AWS credentials defined in the env section.

- name: Configure AWS credentials                          
  uses: aws-actions/configure-aws-credentials@v1           
  with:                                                    
    aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}        
    aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
    aws-region: ${{ env.AWS_REGION }}

Step - Login to AWS ECR

Performs the login to AWS ECR, using the AWS credentials configured in the previous step.

- name: Login to Amazon ECR            
  id: login-ecr                        
  uses: aws-actions/amazon-ecr-login@v1

Step - Setup Docker buildx cache

These two steps are very important for image build performance. A few key notes:

  • Github Actions, like every CI runner in the cloud, is ephemeral, which means a new instance is virtualized every time we perform a new workflow job
  • Due to this ephemerality, we cannot rely on the native Docker layer caching

Without a cache, build times would be very slow, since every layer in the Dockerfile would be rebuilt on each run.

But thanks to this action, we can use the BuildKit CLI to cache Docker layers. Combined with the native GitHub Actions cache, this gives us a cache strategy that optimizes build time.

- name: Set up Docker Buildx                             
  id: buildx                                             
  uses: docker/setup-buildx-action@master                
- name: Docker cache layers                              
  uses: actions/cache@v2                                 
  with:                                                  
    path: /tmp/.buildx-cache                             
    key: ${{ runner.os }}-single-buildx-${{ github.sha }}
    restore-keys: |                                      
      ${{ runner.os }}-single-buildx                     

Step - Build & Push the image to the registry

This step covers building the Docker image with buildx to optimize build time, and pushing it to AWS ECR, which was previously configured in "Login to Amazon ECR".

The example assumes we have a target named "release" in the Dockerfile.

- name: Build & Push Image                                                                                      
  env:                                                                                                          
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}                                                       
    RELEASE_IMAGE: ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:${{ env.RELEASE_REVISION }}
  run: |
    docker buildx create --use

    docker buildx build \                                
      --cache-from=type=local,src=/tmp/.buildx-cache \   
      --cache-to=type=local,dest=/tmp/.buildx-cache-new \
      --tag ${{ env.RELEASE_IMAGE }} \                           
      --target release \                                 
      --push \                                           
      .                                                  

    rm -rf /tmp/.buildx-cache
    mv /tmp/.buildx-cache-new /tmp/.buildx-cache

Step explained:

  • docker buildx create --use: creates a new build context for buildx and sets it as the current context
  • docker buildx build ...: builds the image using the cache configured/restored in the previous steps "Docker cache layers". After build, it uploads the image to the pre-configured registry using the --push option
  • rm -rf ... && mv ...: we have to renew the cache at every run, otherwise we may hit the 5GB storage limit of GitHub Actions

Step - Deploy to Kubernetes cluster

Once the image is in the registry, we can issue a command to Kubernetes to perform the deployment.

- name: Deploy to Kubernetes cluster                                                                            
  uses: kodermax/kubectl-aws-eks@master                                                                         
  env:                                                                                                          
    RELEASE_IMAGE: ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:${{ env.RELEASE_REVISION }}
  with:                                                                                                         
    args: set image deployment/my-pod app=${{ env.RELEASE_IMAGE }} --record -n $KUBE_NAMESPACE   

Here we use kubectl set image, but it could be kubectl rollout or any other command, as needed.
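For instance, a sketch of the same action running a different kubectl command, restarting the deployment instead of setting a new image (it reuses the hypothetical deployment name my-pod from the step above):

```yaml
- name: Restart Kubernetes deployment
  uses: kodermax/kubectl-aws-eks@master
  with:
    args: rollout restart deployment/my-pod -n $KUBE_NAMESPACE
```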

Additionally, we can include a step to check the deployment:

- name: Verify Kubernetes deployment                               
  uses: kodermax/kubectl-aws-eks@master                            
  with:                                                            
    args: rollout status deploy my-pod -n $KUBE_NAMESPACE 

Step - Slack notification

After a successful deployment, we can use this action to send a notification to Slack.

- name: Slack notification                                
  uses: rtCamp/action-slack-notify@master                 
  env:                                                    
    SLACK_CHANNEL: my_cool_channel                   
    SLACK_MESSAGE: 'Just deployed our cool application!'
    SLACK_TITLE: 'Deploy'                         
    SLACK_USERNAME: 'Some Bot'                           
    SLACK_ICON: "[icon URL]"
    SLACK_COLOR: '#228B22'                                
    SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_URL }}       
    MSG_MINIMAL: true  

Wrapping up

In this guide we configured the full cycle of building a Docker image, uploading it to a registry, performing the deployment to Kubernetes and sending a notification to Slack.

In the upcoming posts we'll see how to optimize build time of dependencies' installation inside the Docker image (a.k.a bundle install for Ruby developers), using a cache strategy that relies on AWS S3.

Top comments (16)

Xtos

This is great! Can we add a revert/rollback in the event of a failure after deployment ?

Leandro Proença

yup, that’s perfectly possible

Xtos

I’d love to see how it will fetch the rollback sha(one previous to the failed deployment), and the previous ECR image. Would this be added in a new post, or to the current? :-)

Leandro Proença

A Kubernetes Deployment keeps the last N revisions, i.e. Docker images. So a rollback is simply switching back to the previous image already present in the cluster. Is that what you're looking for, or am I missing something?
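As a sketch, reusing the action and deployment name from the post, a rollback step could look like:

```yaml
# Roll back to the previous revision kept by the Deployment
- name: Rollback Kubernetes deployment
  uses: kodermax/kubectl-aws-eks@master
  with:
    args: rollout undo deployment/my-pod -n $KUBE_NAMESPACE
```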

Xtos

Yes, very similar; I'm just trying to incorporate a rollback option into what you've developed :-)

As a final step, I tweaked the GitHub Action step with:

helm upgrade my-application -f helm/values-${{ env.ENV_NAME }}.yaml --set image.tag="${{ env.SHA }}" helm/ -n ${{ env.ENV_NAME }}-apps -v 20

I like your "Verify Kubernetes deployment" step; based on its output, I'm trying to see how I can roll back in the event of a failed deployment.

For example, a GitHub Action step using kubectl rollout history or kubectl set image, but the latter uses --record, which will be deprecated.

Much appreciated...BTW, been enjoying all your articles, and looking forward to see your statefulsets on k8s!

Leandro Proença

Okay, I got it. Later today I'll check your good insights and see how to fit them into this example, thanks!

Amin Djawadi

Thank you Leandro, It was really helpful.

jijothottungal

I tried these steps but I'm failing at the build stage, even after trying some solutions from the internet. Could you please help?

==

0s
Run actions/cache@v2
with:
path: /tmp/.buildx-cache
key: Linux-single-buildx-a0e66f3b0efdba7f889770a006153d36b421b028
restore-keys: Linux-single-buildx

env:
RELEASE_REVISION: pr--
AWS_ACCESS_KEY_ID: ***
AWS_SECRET_ACCESS_KEY: ***
AWS_REGION: ***
KUBE_CONFIG_DATA: ***
KUBE_NAMESPACE: default
ECR_REPOSITORY: app-api
SLACK_WEBHOOK_URL: ***
AWS_DEFAULT_REGION: ***
Cache not found for input keys: Linux-single-buildx-a0e66f3b0efdba7f889770a006153d36b421b028, Linux-single-buildx
2s
Run docker buildx create --use
docker buildx create --use
docker buildx build \

--cache-from=type=local,src=/tmp/.buildx-cache \

--cache-to=type=local,dest=/tmp/.buildx-cache-new \
--tag .dkr.ecr..amazonaws.com/app-api:pr-- \

--target build-stage \

--push \

.

  rm -rf /tmp/.buildx-cache
  mv /tmp/.buildx-cache-new /tmp/.buildx-cache

shell: /usr/bin/bash -e {0}
env:
RELEASE_REVISION: pr--
AWS_ACCESS_KEY_ID: ***
AWS_SECRET_ACCESS_KEY: ***
AWS_REGION: ***
KUBE_CONFIG_DATA: ***
KUBE_NAMESPACE: default
ECR_REPOSITORY: app-api
SLACK_WEBHOOK_URL: ***
AWS_DEFAULT_REGION: ***
ECR_REGISTRY: .dkr.ecr..amazonaws.com
RELEASE_IMAGE: .dkr.ecr..amazonaws.com/cleva-api:pr--
nice_benz
time="2021-11-09T17:04:14Z" level=warning msg="No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load"

1 [internal] booting buildkit

1 pulling image moby/buildkit:buildx-stable-1

1 pulling image moby/buildkit:buildx-stable-1 0.8s done

1 creating container buildx_buildkit_nice_benz0

1 creating container buildx_buildkit_nice_benz0 0.7s done

1 DONE 1.5s

error: unable to prepare context: path " " not found
Error: Process completed with exit code 1.

Arvind Narayan

This should help

- name: Set up Docker Buildx
  id: buildx
  uses: docker/setup-buildx-action@master
- name: Docker cache layers
  uses: actions/cache@v2
  with:
    path: /tmp/.buildx-cache
    key: ${{ runner.os }}-single-buildx-${{ github.sha }}
    restore-keys: |
      ${{ runner.os }}-single-buildx
- name: Build
  uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    tags: ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:${{ github.sha }}
    cache-from: type=local,src=/tmp/.buildx-cache
    cache-to: type=local,dest=/tmp/.buildx-cache-new
- name: Move cache
  run: |
    rm -rf /tmp/.buildx-cache
    mv /tmp/.buildx-cache-new /tmp/.buildx-cache
Leandro Silva

You're putting in great content, bud.
Keep going, I'll be back.

Leandro Proença

I enjoy and learn from your writings! Yes, do it more 💪🏻

Leandro Silva

Oh man, I really need to. It's been 3 months since my last post. Meh.
And btw, I'm entertaining the idea of writing in English. Let's see.

Vitaly Karasik

Thank you for your post!
BTW, is it possible to run a few commands inside one step, like "set image... ; get rollout status..."?

ahmedappout08

i got this error from deploying the image on k8s :
The connection to the server localhost:8080 was refused - did you specify the right host or port?
How can I fix it? Thanks in advance.

tiagovbarreto

Hi Leandro, nice article! Do you know how to deploy an image from docker hub into aws eks? Thks!

Devsam

In the "Deploy to Kubernetes cluster" action, I'm getting the error below:

Error from server (NotFound): deployments.apps "my-pod" not found