In this guide we'll cover the full cycle of deploying to Kubernetes using Github Actions, batteries included.
The Github workflow will be triggered at every commit on a pull request, and its steps are as follows:
- git checkout
- login to AWS ECR (credentials needed)
- build Docker image
- push Docker image to ECR
- deploy to EKS using kubectl
- send notification to Slack (needs webhook URL)
Github Action
Let's place our file under .github/workflows/release.yml. Then, we start by configuring the workflow trigger:
name: Release
on:
  pull_request:
    branches: [main]
This trigger will run right after we open the pull request and at every subsequent commit on it.
Next, we define the env variables that will be used across steps:
env:
  RELEASE_REVISION: "pr-${{ github.event.pull_request.number }}-${{ github.event.pull_request.head.sha }}"
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_REGION: ${{ secrets.AWS_REGION }}
  KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
  KUBE_NAMESPACE: production
  ECR_REPOSITORY: my-cool-application
  SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
Env explained:
- RELEASE_REVISION: the tag that we'll use on the Docker image
- AWS_ACCESS_KEY_ID | AWS_SECRET_ACCESS_KEY: used in the configure aws credentials action
- KUBE_CONFIG_DATA: used in the kubectl aws eks action (see the note below on generating it)
- ECR_REPOSITORY: used in the aws ecr action
- SLACK_WEBHOOK_URL: used in the slack notification action
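As a side note, the kubectl aws eks action used later expects KUBE_CONFIG_DATA to be a base64-encoded kubeconfig. Assuming your local kubeconfig already has access to the EKS cluster (an assumption on my part, the workflow does not set this up), the secret value can be generated roughly like this:
# Encode the kubeconfig so it can be stored as the KUBE_CONFIG_DATA secret.
# Assumes ~/.kube/config already contains credentials for the target EKS cluster.
cat $HOME/.kube/config | base64 | tr -d '\n'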
Now, let's start writing the job, after which we'll declare the steps:
jobs:
  release:
    name: Release
    runs-on: ubuntu-latest
    steps:
      ... [steps go at this level]
Step - Cancel Previous Runs
This step instructs Github to cancel any previous run of this workflow that is still in progress, so only the latest commit gets built.
- name: Cancel Previous Runs
  uses: styfle/cancel-workflow-action@0.4.1
  with:
    access_token: ${{ github.token }}
Step - Checkout
Performs the git checkout at the pull request's head commit (the same SHA used in RELEASE_REVISION).
- name: Checkout
  uses: actions/checkout@v2
  with:
    ref: ${{ github.event.pull_request.head.sha }}
Step - Configure AWS credentials
This step uses the AWS credentials defined in the env section.
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
    aws-region: ${{ env.AWS_REGION }}
Step - Login to AWS ECR
Performs the login to AWS ECR, using the AWS credentials configured in the previous step.
- name: Login to Amazon ECR
  id: login-ecr
  uses: aws-actions/amazon-ecr-login@v1
Step - Setup Docker buildx cache
These two steps are very important for image build performance. A few key notes:
- Github Actions, like every CI runner in the cloud, is ephemeral, which means a new instance is virtualized every time we perform a new workflow job
- Due to this ephemerality, we cannot rely on the native Docker layer caching
These constraints would make our builds very slow, since every layer in the Dockerfile would be rebuilt on every run.
But thanks to this action, we can use the BuildKit CLI to cache Docker layers. In combination with the native Github Actions cache, we can then rely on this cache strategy and optimize build time.
- name: Set up Docker Buildx
  id: buildx
  uses: docker/setup-buildx-action@master
- name: Docker cache layers
  uses: actions/cache@v2
  with:
    path: /tmp/.buildx-cache
    key: ${{ runner.os }}-single-buildx-${{ github.sha }}
    restore-keys: |
      ${{ runner.os }}-single-buildx
Step - Build & Push the image to the registry
This step covers building the Docker image with buildx to optimize build time, and pushing it to AWS ECR, which was previously configured in "Login to Amazon ECR".
The example assumes we have a target named "release" in the Dockerfile.
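For reference, a hypothetical multi-stage Dockerfile exposing such a release target might look roughly like the sketch below; the base image, dependencies and commands are illustrative placeholders only:
# Hypothetical multi-stage Dockerfile; the workflow only relies on the "release" target name.
FROM ruby:3.2-slim AS base
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
# The workflow builds this stage via --target release.
FROM base AS release
COPY . .
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]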
- name: Build & Push Image
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    RELEASE_IMAGE: ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:${{ env.RELEASE_REVISION }}
  run: |
    docker buildx create --use
    docker buildx build \
      --cache-from=type=local,src=/tmp/.buildx-cache \
      --cache-to=type=local,dest=/tmp/.buildx-cache-new \
      --tag ${{ env.RELEASE_IMAGE }} \
      --target release \
      --push \
      .
    rm -rf /tmp/.buildx-cache
    mv /tmp/.buildx-cache-new /tmp/.buildx-cache
Step explained:
- docker buildx create --use: creates a new build context for buildx and sets it as the current context
- docker buildx build ...: builds the image using the cache configured/restored in the previous "Docker cache layers" step. After the build, it uploads the image to the pre-configured registry via the --push option
- rm | mv ...: we have to renew the cache at every run, otherwise we may reach the 5GB storage limit of Github Actions
Step - Deploy to Kubernetes cluster
Once we have the image uploaded to the registry, we can send a command to Kubernetes to perform the deploy.
- name: Deploy to Kubernetes cluster
  uses: kodermax/kubectl-aws-eks@master
  env:
    RELEASE_IMAGE: ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:${{ env.RELEASE_REVISION }}
  with:
    args: set image deployment/my-pod app=${{ env.RELEASE_IMAGE }} --record -n $KUBE_NAMESPACE
Here we are using kubectl set image, but it could be kubectl rollout or any other command, as needed.
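For instance, if a release turns out to be broken, the same action could be used to roll back by passing a rollout command in args. A minimal sketch of the underlying kubectl commands, assuming the deployment is named my-pod as above:
# Roll back the deployment to its previous revision.
kubectl rollout undo deployment/my-pod -n production
# Or inspect the revision history first and roll back to a specific revision.
kubectl rollout history deployment/my-pod -n production
kubectl rollout undo deployment/my-pod --to-revision=2 -n production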
Additionally, we can include a step to check the deployment:
- name: Verify Kubernetes deployment
  uses: kodermax/kubectl-aws-eks@master
  with:
    args: rollout status deploy my-pod -n $KUBE_NAMESPACE
Step - Slack notification
After a successful deployment, we can use this action to send a notification to Slack.
- name: Slack notification
  uses: rtCamp/action-slack-notify@master
  env:
    SLACK_CHANNEL: my_cool_channel
    SLACK_MESSAGE: 'Just deployed our cool application!'
    SLACK_TITLE: 'Deploy'
    SLACK_USERNAME: 'Some Bot'
    SLACK_ICON: "[icon URL]"
    SLACK_COLOR: '#228B22'
    SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_URL }}
    MSG_MINIMAL: true
Wrapping up
In this guide we configured the full cycle of building a Docker image, uploading it to a registry, performing the deployment to Kubernetes and sending a notification to Slack.
In upcoming posts we'll see how to optimize the build time of installing dependencies inside the Docker image (a.k.a. bundle install for Ruby developers), using a cache strategy that relies on AWS S3.
Top comments (16)
This is great! Can we add a revert/rollback in the event of a failure after deployment?
yup, that’s perfectly possible
I’d love to see how it will fetch the rollback sha (the one previous to the failed deployment), and the previous ECR image. Would this be added in a new post, or to the current one? :-)
A Kubernetes deployment keeps N deploy versions, aka Docker images. So a rollout rollback is simply switching back to the previous image already present in the cluster. Is that what you’re looking for, or am I missing some point?
Yes, very similar, just trying to incorporate to what you have developed a rollback option :-)
As a final step, I tweaked the GitHub Action step with:
helm upgrade my-application -f helm/values-${{ env.ENV_NAME }}.yaml --set image.tag="${{ env.SHA }}" helm/ -n ${{ env.ENV_NAME }}-apps -v 20
I like your step to "Verify Kubernetes deployment"; based on that output, I'm trying to see how I can roll back in the event of a failed deployment.
For example, a GitHub Action that uses kubectl rollout history or kubectl set image, but the latter uses --record, which will be deprecated.
Much appreciated... BTW, been enjoying all your articles, and looking forward to seeing your StatefulSets on k8s!
Okay, I got it. Later today I’ll check your good insights and see how they fit into this example, thanks!
Thank you Leandro, it was really helpful.
I tried these steps but I'm failing at the build stage even after trying some solutions from the internet. Could you please help?
==
0s
Run actions/cache@v2
with:
path: /tmp/.buildx-cache
key: Linux-single-buildx-a0e66f3b0efdba7f889770a006153d36b421b028
restore-keys: Linux-single-buildx
env:
RELEASE_REVISION: pr--
AWS_ACCESS_KEY_ID: ***
AWS_SECRET_ACCESS_KEY: ***
AWS_REGION: ***
KUBE_CONFIG_DATA: ***
KUBE_NAMESPACE: default
ECR_REPOSITORY: app-api
SLACK_WEBHOOK_URL: ***
AWS_DEFAULT_REGION: ***
Cache not found for input keys: Linux-single-buildx-a0e66f3b0efdba7f889770a006153d36b421b028, Linux-single-buildx
2s
Run docker buildx create --use
docker buildx create --use
docker buildx build \
--cache-from=type=local,src=/tmp/.buildx-cache \
--cache-to=type=local,dest=/tmp/.buildx-cache-new \
--tag .dkr.ecr..amazonaws.com/app-api:pr-- \
--target build-stage \
--push \
.
shell: /usr/bin/bash -e {0}
env:
RELEASE_REVISION: pr--
AWS_ACCESS_KEY_ID: ***
AWS_SECRET_ACCESS_KEY: ***
AWS_REGION: ***
KUBE_CONFIG_DATA: ***
KUBE_NAMESPACE: default
ECR_REPOSITORY: app-api
SLACK_WEBHOOK_URL: ***
AWS_DEFAULT_REGION: ***
ECR_REGISTRY: .dkr.ecr..amazonaws.com
RELEASE_IMAGE: .dkr.ecr..amazonaws.com/cleva-api:pr--
nice_benz
time="2021-11-09T17:04:14Z" level=warning msg="No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load"
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 0.8s done
#1 creating container buildx_buildkit_nice_benz0
#1 creating container buildx_buildkit_nice_benz0 0.7s done
#1 DONE 1.5s
error: unable to prepare context: path " " not found
Error: Process completed with exit code 1.
This should help
You're putting in great content, bud.
Keep going, I'll be back.
I enjoy and learn with your writings! Yes, do it more 💪🏻
Oh man, I really need to. It's been 3 months since my last post. Meh.
And btw, I'm entertaining the idea of writing in English. Let's see.
Thank you for your post!
BTW, is it possible to run a few commands inside one step, like "set image... ; get rollout status..."?
I got this error from deploying the image on k8s:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
How can I fix it? Thanks in advance.
Hi Leandro, nice article! Do you know how to deploy an image from Docker Hub into AWS EKS? Thanks!
In "Deploy to Kubernetes cluster" action, I'm getting below error:
Error from server (NotFound): deployments.apps "my-pod" not found