Trevor S

Growing Outside of Work: My Journey with the Cloud Resume Challenge

As someone who's relatively early in their software development career, I'm always looking for opportunities to grow my skillset. While I regularly work with AWS in my day-to-day, sometimes it's good to just play in the cloud, and in a big company where much of the infrastructure work is abstracted away from developers by a few core teams, that play time isn't always easy to come by.

That's why when I discovered the Cloud Resume Challenge by Forrest Brazeal, I knew I had to partake. This 16-step challenge helped refine my understanding of some key cloud components, and how they can interoperate to quickly stand up a basic web app. I figured since I already had an AWS Solutions Architect cert, it wouldn't hurt to spend some more time "walking the walk" on my cloud skills.

Feel free to check my site out at www.trevorscloudresume.com, as well as my frontend and backend repos.

Getting Started with the Frontend

If we want to deploy a resume site, the first step is to... build a resume site! This took me about an hour of messing about in VS Code to get an HTML skeleton laid out, with the idea being to recreate something similar to a traditional paper resume. The styling is focused on alignment and readability, and while I would like to go back and pretty it up later, this project wasn't about flexing my design chops.

Initial Deployment - AWS Console

Once I had my source files ready to rock, it was simple enough to upload everything to an S3 bucket and enable static website hosting. This works just fine for a basic static site, but the project calls for a CloudFront distribution to enable HTTPS, plus custom domain routing with Route53. I found this guide to be particularly helpful in getting everything stood up through the console.

As an added bonus, it shows you how to configure redirects so that users can visit your site with or without the www subdomain. (Try going straight to trevorscloudresume.com to see how it redirects you automatically!)

The Backend

Once my site was stood up, I needed to build out the user count API. Through the console, I set up a DynamoDB table and created a user count item. Getting my Lambda to interface with AWS resources was a breeze with the Boto3 SDK. You can see my Python code that increments the user count whenever someone visits the site here. The key is the update_item method that Boto3 provides.
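The handler itself is tiny. Here's a rough sketch of the pattern (the table name and key below are placeholders rather than my exact setup):

```python
import json
import os

import boto3

# Placeholder table name for illustration; the real one lives in my backend repo.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "visitor-count"))


def lambda_handler(event, context):
    # ADD increments the counter atomically on the DynamoDB side,
    # so there's no read-modify-write race between concurrent visits.
    response = table.update_item(
        Key={"id": "user_count"},
        UpdateExpression="ADD visit_count :inc",
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )

    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"count": int(response["Attributes"]["visit_count"])}),
    }
```

Letting DynamoDB do the increment server-side keeps the Lambda stateless and avoids a separate get_item call.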

Once the Lambda was in place, I needed to make it accessible as an API endpoint with API Gateway. Again, through the console, all of this setup was quick and easy. The only gotcha was making sure to enable CORS, but even that process was fairly straightforward.

My Foray into IaC

I had never touched Terraform, and my IaC experience prior to this was fairly limited. But boy, did I get plenty of practice here. Some parts were certainly easier than others, and HashiCorp has some very thorough documentation to guide you.

Here are some of the pain points I had and the lessons I learned:

  • If you're using Terraform to manage your Route53 domain, setting up the certificate validation can be a little tricky, but the aws_acm_certificate_validation documentation helped!
  • If you use Terraform to specify any DynamoDB item(s), there's a good chance any updates you make to those items via your API will get blown away on the next terraform apply, so be sure to only worry about setting up the initial table.
  • The biggest wall I hit by far was enabling CORS for my API. This module by squidfunk trivializes the process, but since it's a separate module, it may not trigger a re-deployment of your API depending on your configuration. I spent hours trying to figure out why I was having CORS issues when calling my API, only to realize the OPTIONS endpoint wasn't actually staged and deployed on each terraform apply. Here's how I set up my aws_api_gateway_deployment configuration to trigger a re-deploy whenever a change to the gateway config and/or my root Terraform file (where the CORS module lives) is detected.
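
In sketch form, it ends up looking something like this (the resource names and file path are placeholders, not my exact config):

```hcl
resource "aws_api_gateway_deployment" "this" {
  rest_api_id = aws_api_gateway_rest_api.visitor_api.id

  # Force a fresh deployment whenever the gateway wiring or the root
  # module file (where the CORS module is declared) changes.
  triggers = {
    redeployment = sha1(join(",", [
      jsonencode(aws_api_gateway_integration.get_count),
      filesha1("${path.root}/main.tf"),
    ]))
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_api_gateway_stage" "prod" {
  deployment_id = aws_api_gateway_deployment.this.id
  rest_api_id   = aws_api_gateway_rest_api.visitor_api.id
  stage_name    = "prod"
}
```

Anything hashed inside triggers forces a replacement of the deployment when it changes, which is what finally got the OPTIONS endpoint reliably staged on every apply.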

CI/CD With GitHub Actions

This part was a lot of fun. I've been using CI/CD pipelines everywhere I've worked since my first internship, but I never had the opportunity to build one from the ground up. For anyone who doesn't know, GitHub Actions lets you automate your builds, testing, and deployments. You can create workflows that run on GitHub-hosted Linux, Windows, or macOS machines, or on your own self-hosted runner if you so choose.
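
To give a flavor of what that looks like, here's a stripped-down version of a frontend deploy workflow (the bucket name, region, and paths are placeholders, not my real values):

```yaml
name: Deploy frontend

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Pull the repo contents onto the runner
      - uses: actions/checkout@v4

      # Authenticate against AWS using repo secrets
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      # Push the static site up to the hosting bucket
      - name: Sync site to S3
        run: aws s3 sync ./site s3://my-resume-bucket --delete
```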

GitHub's documentation is great, and setting up workflows to run my tests and deploy my changes was easy enough. To simplify your life, there are tons of pre-built actions on the marketplace. The ones that helped me out the most were:

One final note here: to make my existing Terraform state visible to the runners, I needed to set up a backend configuration and store my state files in an S3 bucket.
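
That part is just a small terraform block, roughly like this (the bucket and key are placeholders):

```hcl
terraform {
  backend "s3" {
    # Placeholder bucket/key; the state bucket has to exist before `terraform init`
    bucket = "my-terraform-state-bucket"
    key    = "cloud-resume/terraform.tfstate"
    region = "us-east-1"
  }
}
```

After adding it, terraform init offers to migrate the existing local state into the bucket, so the GitHub runners and my laptop share the same view of the infrastructure.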

Conclusion

If you got this far, thank you for reading! I learned a lot from this project and walked away with a convenient way to share my resume going forward. If you're new to the cloud and looking for hands-on experience building an application from start to finish, I highly recommend taking on the Cloud Resume Challenge yourself.

If you have any questions, feel free to reach me at trevor@trevorscloudresume.com.

Top comments (1)

Stephen Sennett

Well done on building your cloud resume and completing the challenge, mate! CORS is a surprising headache. Putting your skills to the test with a practical challenge after completing your certification is a great plan.