Kyle Williams

Diving Deeper into the Cloud: my journey through the Cloud Resume Challenge

Background:

Just over a year ago, I decided to switch up my career. I'd been working as an IT systems analyst at a large company. The role exposed me to a wide variety of technologies and was great for building diverse experience, but it wasn't quite as hands-on as I wanted. I found myself being drawn deeper into engineering and infrastructure.
I finished my bachelor's degree in 2022; one of my college courses was Cloud Foundations and included the AWS Certified Cloud Practitioner certification. I had a lot of fun getting more familiar with cloud technologies and doing some actual work with various AWS offerings. I already knew that cloud computing was powerful and not going anywhere, but I didn't have the practical experience to really appreciate all that the cloud has to offer. After earning my CCP certification, I knew I wanted to focus further on working with the cloud.
In planning the next steps for my career, I stumbled upon the Cloud Resume Challenge by Forrest Brazeal. The challenge offers a practical way to learn more about cloud technologies and demonstrate that knowledge. I quickly decided it was worth investing my time in completing the challenge.

For those not familiar, here are the requirements of the challenge:

  • Get an AWS Certification (already done!).
  • Create a resume website using HTML.
  • Use CSS for styling.
  • Host the site using AWS S3.
  • Use HTTPS via an AWS CloudFront distribution to securely serve the site content.
  • Use a custom domain name and set up DNS using AWS Route 53.
  • Write some JavaScript to implement a visitor counter on the site.
  • Set up a DynamoDB database to store the visitor count.
  • Build an API Gateway and a Lambda function to communicate between the website and the database.
  • Write the Lambda function in Python.
  • Implement Python tests.
  • Deploy the stack using Infrastructure-as-Code.
  • Use source control for the site.
  • Implement automated CI/CD for the front end and back end.
  • Write a blog post about the experience.

I followed Forrest’s recommended methodology and broke down the build into four phases:
Frontend, Backend, Integration, and finally, Automation. With these phases in mind, I got to work. Since automation is the last phase of the project, I went ahead and manually set everything up through the AWS Console first. This also helped me familiarize myself with the various services involved and how they all needed to be configured.

Frontend:

First, I decided to use the challenge as an opportunity to get another domain name (new project, new domain!). I decided to splurge a bit and was able to secure kyle.mn ('.mn' is officially Mongolia's TLD, but Minnesotans have also been using it for their own purposes (https://en.wikipedia.org/wiki/.mn), don'tcha know 😜). I created a Hosted Zone in Route 53 to start my DNS configuration.

I already had a basic portfolio site written in vanilla HTML and CSS (Jen Kramer's Getting Started with CSS course on Frontend Masters was an invaluable resource for refreshing my CSS). The site already had a link to download a PDF of my resume, but for the challenge the resume needed to be in HTML and CSS. This was a fun exercise in extending what I had already built, adding to and adapting my CSS to present a decent-looking resume page. I'm pretty happy with how it turned out. Since I was already working on the resume page, I went ahead and added a placeholder for the required visitor counter to the bottom of the page. I also started the JavaScript code to update the visitor counter on page load; all I needed to finish it was the URL of the API Gateway endpoint.

With most of the site developed, I uploaded the code to my AWS S3 bucket and got to work creating a CloudFront distribution. Rather than setting up the S3 bucket as a website endpoint, I decided to use Origin Access Control (OAC) to limit access to the S3 objects to only my CloudFront distribution. This meant I didn't have to enable public access to my S3 bucket (something AWS strongly warns against). Setting up OAC is pretty straightforward and is the recommended way to configure an S3 origin in CloudFront; the AWS console even provides the bucket policy you need to apply while setting up the distribution. I also requested an SSL/TLS certificate for my domain using AWS Certificate Manager and configured CloudFront to use it. I then created an alias record in Route 53 pointing my domain to the CloudFront distribution, and voila! The front end of my site was up and running.

Backend:

With my front end basically complete, I turned my attention to the back end. The task was to implement a visitor counter using API Gateway, a Lambda function, and DynamoDB. The challenge also recommends using Python for the function to add a little diversity to the project. I've been working on upping my Python skills, so this just made sense. I consulted the AWS docs and found the AWS SDK for Python, AKA Boto3.

I didn't want to simply track the number of hits on the page, since the counter could quickly show an inflated number after a few reloads. Instead, I decided to write a function that hashes the visitor's IP address and checks whether that hash is already in the database. If the hash isn't present, a record of the hash gets added, and then a count of the total number of hash records gets returned to the client. I had to do a bit of googling to get my DynamoDB query working correctly, but it ended up being simpler than I initially thought.

Once I got my function working, I wanted to write a test to make sure it would return the results I was expecting. Using moto, I was able to mock an instance of DynamoDB and write a Python unit test to run against my code. This was a bit challenging as well, since I hadn't worked with mocked resources before. Again, Stack Overflow and Google to the rescue.
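To give a sense of the approach, here's a minimal sketch of what a handler like this could look like. The table name, key name, and event shape are assumptions (it assumes an API Gateway HTTP API with the default payload format), not the exact code from my repo:

```python
# A rough sketch of the counter Lambda (illustrative names, not the repo's exact code).
import hashlib
import json
import os

import boto3


def _get_table():
    # In a real deployment the table name and region come from the Lambda environment.
    dynamodb = boto3.resource(
        "dynamodb", region_name=os.environ.get("AWS_REGION", "us-east-1")
    )
    return dynamodb.Table(os.environ.get("TABLE_NAME", "visitor-hashes"))


def lambda_handler(event, context):
    table = _get_table()

    # API Gateway (HTTP API, payload v2) includes the caller's IP in the request context.
    ip = event["requestContext"]["http"]["sourceIp"]
    ip_hash = hashlib.sha256(ip.encode()).hexdigest()

    # Only store the hash if this visitor hasn't been counted before.
    if "Item" not in table.get_item(Key={"id": ip_hash}):
        table.put_item(Item={"id": ip_hash})

    # Return the total number of unique visitor hashes.
    count = table.scan(Select="COUNT")["Count"]
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"count": count}),
    }
```

And a rough idea of the corresponding moto test, again with hypothetical names (the `counter` module below stands in for the Lambda handler file):

```python
# A sketch of a moto-based unit test (again, names are illustrative).
import os

import boto3
from moto import mock_aws  # moto >= 5; older versions expose mock_dynamodb instead

# Dummy credentials so nothing accidentally touches a real AWS account.
os.environ.setdefault("AWS_ACCESS_KEY_ID", "testing")
os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "testing")

import counter  # hypothetical module containing lambda_handler above


@mock_aws
def test_same_visitor_counted_once():
    # Create the mocked table the handler expects.
    boto3.client("dynamodb", region_name="us-east-1").create_table(
        TableName="visitor-hashes",
        KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
        BillingMode="PAY_PER_REQUEST",
    )

    event = {"requestContext": {"http": {"sourceIp": "203.0.113.7"}}}
    first = counter.lambda_handler(event, None)
    second = counter.lambda_handler(event, None)  # same IP, same hash

    assert first["statusCode"] == 200
    assert second["body"] == first["body"]  # a repeat visit shouldn't increase the count
```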
With the Lambda function working, I created an API Gateway and added an integration. Rather than using the default route, I set up a specific route for the counter. I confirmed functionality using a cool CLI tool called HTTPie. It's like Postman (another tool I use a lot), but HTTPie makes it incredibly simple to create API requests from the command line without a GUI and a lot of mouse-clicking.

Integration:

This was probably the most straightforward piece of the challenge. I added the URI of the gateway route to my frontend JavaScript, and now I've got a working visitor counter! I also set up some end-to-end tests using Cypress. The tests check that the site loads, that the resume page loads, and finally that the counter loads.

Automation and final configuration:

Automation is everything these days, and for good reason. I've learned all too well that manual processes dramatically increase the chances of errors. There are so many benefits to automation, including reduced downtime, quick recovery, and fewer human errors, that it'd be wild not to automate as much as possible.
I reviewed a few different options for implementing automation. Since I'm using AWS, CloudFormation and AWS SAM were, of course, options. However, after looking at a lot of job postings, I found that Terraform seems to be the popular choice, and for good reason: Terraform works with basically any cloud provider you throw at it, which helps prevent vendor lock-in, and it has a ton of modules available to extend its functionality.
I've also been committing all my code to a GitHub repository, and I knew that combining GitHub Actions with Terraform would get me to my desired end state.

I worked through each of the AWS services in my stack and set up Terraform configurations for each. Terraform is very well documented, and there are some powerful AWS modules available that helped me along. I'd regularly plan and apply my config as I went and check what was deployed against my manually configured stack. Over a few days, I was able to nail down Terraform configs that implement everything for the full cloud resume stack.

With my Terraform configuration complete, I took a moment to pause and consider how my solution would work in a real-world scenario. I wanted dedicated environments for development, staging/QA, and production. A common way to implement that on AWS is to give each environment its own account under AWS Organizations. I created three new member accounts, one for each environment, keeping my original account as the management ("root") account. I also set up SSO in AWS IAM Identity Center, allowing me to use a single login for all my separate accounts.
I then refactored my Terraform configuration to deploy to each of the accounts, depending on what happens in my GitHub repo. Each deployment also gets its own subdomain, based on the name of the working branch. With that all set up and working from my laptop, I got to work creating a few different GitHub Actions workflows to handle the CI/CD of my site:
  • Any new branch that gets created and pushed is automatically deployed to the development account. Any additional pushes to the branch are deployed and tested automatically. This gives me a dedicated development environment for any updates I'm working on (I've still got a few more ideas to implement!).
  • A pull request to the main branch deploys to the staging/QA environment. It also kicks off a Terraform plan against the main branch, so I can see what changes, if any, will be made to my underlying infrastructure.
  • Once a PR is merged into the main branch, the changes are automatically deployed to my main AWS account and production site. A testing job in my GitHub Actions workflow verifies that the site works after deployment, giving me quick feedback if something goes wrong, and I can quickly back out any changes if necessary.

Final Thoughts:

The Cloud Resume Challenge was a lot of fun. I got to dig deeper into a lot of popular AWS services and finally familiarize myself with Terraform. I'm excited to keep working on my site and to advance my career working with cloud technologies. The challenge has provided a great foundation for working in the cloud and for continuing to improve my skills.
I encourage anyone who wants to deepen their knowledge of the cloud to participate in the challenge!

My Resume: https://kyle.mn/resume.html
GitHub Repo: https://github.com/elykkyle/kyle.mn
