David O' Connor

Kubernetes Resume Challenge - Google Cloud GKE

Background

One of the most valuable experiences for me last year was completing the Cloud Resume Challenge. The things I learnt proved very useful when working on different projects. So when I saw that the CRC's Forrest Brazeal and KodeKloud were coming together to create a Kubernetes Resume Challenge, I leapt at the opportunity to try it!

The Challenge

Deploy a scalable, consistent, and highly available e-commerce website with a database to Kubernetes using a cloud provider of your choosing.

What is Kubernetes and why is it important?

Let's go back to the 2000s. When you went to a company's website, it was probably hosted on a server that might be sitting in a room somewhere in their office. Think of a server as being like the computer you might be reading this on now, but dedicated to just one purpose - serving a website. But this can be wasteful - a server often has resources that go unused. And what if the business suddenly receives a lot of traffic? The website could go down and cost the company sales and revenue.

Enter Kubernetes. Kubernetes was invented at Google and is, at its core, a system for running and managing lots of lightweight, isolated mini-environments called containers. It allows you to do things like automatically scale your applications with traffic, monitor container health and replace containers that fail, and make more efficient use of your resources. It is very likely you have used a site or app that runs on Kubernetes - Airbnb, Spotify, and Reddit are three of the most prominent. And for IT professionals, a big advantage of Kubernetes is that it can run almost anywhere - including on major cloud providers such as Amazon Web Services, Microsoft Azure, or Google Cloud.

Week 1 - Steps 1 to 5

I had previous Kubernetes experience from one of my first projects and had recently made it a little over halfway through KodeKloud's CKA course, so I felt ready to start. I created a Dockerfile, built and pushed the image to DockerHub, and additionally created a K8s ConfigMap for the database init script.
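
To give a sense of what that looks like, a ConfigMap carrying a database init script might look roughly like the sketch below - the name, key, and SQL are illustrative rather than my exact files:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-init
data:
  db-load-script.sql: |
    CREATE DATABASE IF NOT EXISTS ecomdb;
    USE ecomdb;
    CREATE TABLE IF NOT EXISTS products (
      id INT AUTO_INCREMENT PRIMARY KEY,
      name VARCHAR(255),
      price DECIMAL(10, 2)
    );
```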

Although most of my experience is with AWS, I decided to try Google Cloud as I had heard great things about their Kubernetes service (GKE) and I knew that new users get free credits. To deploy a cluster, I used GCP's very useful and easy cluster creation wizard. I came across a small issue where my selected region did not have enough resources, but this was easily solved by switching to a different region. I installed the gcloud CLI and its GKE auth plugin, then updated my kubectl config to point at the new cluster.

I realised that to deploy the website I needed a database first. I created a Deployment using a MariaDB image, configured it to use the db-init ConfigMap to populate the database, and added a Service which would allow the frontend pods to connect to the database. When I deployed the website pods I noticed that they were unable to connect. I exec'd into one of the pods and checked the PHP code and environment variables, but it all looked fine. I then checked the MariaDB pod logs before realising it was actually an init-script issue. After fixing that, the connection was up and running.
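
For anyone following along, the database pieces looked roughly like the sketch below - the names, image tag, and credentials here are placeholders rather than my exact manifests (the password was moved into a Secret later in the challenge):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - name: mariadb
          image: mariadb:10.11        # placeholder tag
          env:
            - name: MARIADB_ROOT_PASSWORD
              value: changeme         # moved into a Secret in a later step
            - name: MARIADB_DATABASE
              value: ecomdb
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: db-init
              mountPath: /docker-entrypoint-initdb.d   # init scripts here run on first start
      volumes:
        - name: db-init
          configMap:
            name: db-init
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service    # the hostname the PHP frontend uses to reach the database
spec:
  selector:
    app: mariadb
  ports:
    - port: 3306
      targetPort: 3306
```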

The last step this week was to create a load balancer to expose the website to the internet. This was a quick and easy process as cloud providers have seamless K8s/LB integrations. I deployed the LB targeting the front-end pods on port 80, used the gcloud CLI to fetch the IP address, and successfully accessed the website.
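
The Service itself is tiny - something along these lines (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: website-service
spec:
  type: LoadBalancer     # GKE provisions a Google Cloud load balancer automatically
  selector:
    app: ecom-web
  ports:
    - port: 80
      targetPort: 80
```

Once the load balancer is provisioned, the address also shows up under EXTERNAL-IP in `kubectl get service website-service`.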

Week 2 - Steps 6 to 10

The first task was to add a feature toggle that enables dark mode for the website. I used my browser's developer tools to change various CSS rules to create a dark mode and created new stylesheets using these rules. I then wrote some simple PHP code that loads a different stylesheet depending on whether the FEATURE_DARK_MODE environment variable is set to true. I built a new Docker image with the changes, pushed it to DockerHub, and was able to successfully deploy the website with the new dark mode feature.
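
Wiring the toggle into the Deployment was just a matter of an environment variable on the web containers - roughly like this snippet (container spec only; the image name and tag are placeholders):

```yaml
containers:
  - name: ecom-web
    image: docker.io/<my-dockerhub-user>/ecom-web:v2   # placeholder image/tag
    env:
      - name: FEATURE_DARK_MODE
        value: "true"    # the PHP code switches stylesheets when this is "true"
```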

Next I manually scaled the application up and down, built a new image with a promotional banner, deployed the new image, and rolled back the application to the previously deployed image.

The last step this week was to implement autoscaling. I used the guide's commands to create a Horizontal Pod Autoscaler and then used Apache Bench to simulate traffic and CPU load. However, I noticed that the website pods were not scaling. After checking the HPA's status and events and Googling the behaviour, I realised it was because I had not set resource requests on the pods. I used kubectl top to check the current resource consumption and then set the requests based on those values. After experimenting with different values I was able to see the pods autoscaling up and back down when tested with Apache Bench.
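
The two pieces that made autoscaling work were the resource requests on the web containers and the HPA itself. A rough sketch of both is below - the numbers are the sort of values I landed on after experimenting, not a recommendation:

```yaml
# Excerpt from the web Deployment's container spec
resources:
  requests:
    cpu: 100m
    memory: 128Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ecom-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ecom-web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out when average CPU usage exceeds 50% of requests
```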

Week 3 - Steps 11 to 13

I added liveness and readiness probes to the application and was able to see Kubernetes delaying traffic to unready pods and restarting pods that became unhealthy. I also configured the database and website pods to pull credentials from a Kubernetes Secret. Finally, I created a GitHub repository and pushed my code.
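
The additions were all at the container level - something like the snippet below (paths, ports, and the Secret name are illustrative; I'm assuming here that the PHP app returns a 200 on its root path):

```yaml
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials   # hypothetical Secret name
        key: password
```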

Extra Credit

I also decided to try the extra credit steps. I started by adding a persistent volume to the database, which would allow data to persist across pod restarts and other disruptions. I found a useful MariaDB guide that I followed, first creating a PersistentVolumeClaim and then adding and mounting the volume in the database Deployment. To test, I logged into the DB pod and created a new entry. I then restarted the Deployment and saw that the new entry had persisted.
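
The claim itself is only a few lines - roughly like this (the name and storage size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi   # GKE's default StorageClass provisions a persistent disk for this

# In the MariaDB Deployment, the claim is then mounted over the data directory:
#   volumes:
#     - name: mariadb-data
#       persistentVolumeClaim:
#         claimName: mariadb-data
#   volumeMounts (on the container):
#     - name: mariadb-data
#       mountPath: /var/lib/mysql
```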

Next I used Helm to package the application. At a very high level, Helm is a way of streamlining the deployment of your Kubernetes code. I followed the Quick Start guide I found in the documentation. It was a bit daunting to convert all the Kubernetes templates to Helm, but I found a useful tool called Helmify that I used to create rough drafts. I went through the generated values and templates and changed them both for clarity and to parameterise as much as possible. Once I did that I deleted my previous deployment and created a new one using Helm. I was impressed by how quick and easy it was.
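
To give a flavour of the parameterisation, a values.yaml for this kind of chart might look something like the sketch below - the keys and defaults are examples of the kind of things worth exposing, not my exact file:

```yaml
image:
  repository: docker.io/<my-dockerhub-user>/ecom-web   # placeholder
  tag: v3
  pullPolicy: IfNotPresent

replicaCount: 2

featureDarkMode: true        # maps to the FEATURE_DARK_MODE env var

service:
  type: LoadBalancer
  port: 80

db:
  image: mariadb:10.11
  serviceName: mysql-service
  existingSecret: db-credentials
```

The templates then reference these with expressions like `{{ .Values.image.repository }}`, and a single `helm upgrade --install` rolls everything out.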

The last step was to implement a CI/CD pipeline which would allow me to automatically build and deploy code. It was quite easy to create a GitHub Actions job that built a Docker image and pushed it to DockerHub. However, deploying the Helm charts to GKE was a trickier process.

I followed a Google guide and started by creating a service account and adding the necessary IAM roles to it. I stored the generated JSON key securely in GitHub Secrets and used a GitHub Action to authenticate the job to GCP. However, this failed with an error about an auth plugin. While searching for a solution I found a very useful GitHub Action that allowed authentication without gcloud. After experimenting with a few Helm commands I was able to successfully deploy to GKE via GitHub Actions!
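
The overall shape of the workflow ended up something like the sketch below. The action names, versions, secret names, chart path, cluster name, and region here are my best reconstruction and placeholders rather than the exact contents of my workflow file:

```yaml
name: build-and-deploy
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push the image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: docker.io/${{ secrets.DOCKERHUB_USERNAME }}/ecom-web:${{ github.sha }}

      - name: Authenticate to Google Cloud
        uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}

      - name: Get GKE credentials for kubectl and Helm
        uses: google-github-actions/get-gke-credentials@v2
        with:
          cluster_name: my-cluster   # placeholder
          location: europe-west1     # placeholder

      - name: Deploy with Helm
        run: |
          helm upgrade --install ecom-site ./chart \
            --set image.tag=${{ github.sha }}
```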

Finished Product

Normal: (screenshot of the deployed store in its default light theme)

Dark Mode: (screenshot of the store with the dark mode toggle enabled)

Final Thoughts

This was a very enjoyable challenge that allowed me to use knowledge gained both from work projects and the Cloud Resume Challenge. Although I had Kubernetes experience, I had not used Helm or the Horizontal Pod Autoscaler before. After this challenge, I am really interested in exploring Helm further and will hopefully have more opportunities to use it and Kubernetes in the future.

GitHub Repository
