Chinedu Oji

Tackling the Cloud Resume Challenge

Introduction

In this article, I give an overview of the steps I took and the challenges I faced while completing the Cloud Resume Challenge.
After learning a lot about AWS and various DevOps tools, I decided it was time to build real projects, so I started searching for good ones to implement. While searching, I came across the Cloud Resume Challenge by Forrest Brazeal and decided to try it out.
The Cloud Resume Challenge is a hands-on project designed to help bridge the gap from cloud certification to cloud job. It incorporates many of the skills that real cloud and DevOps engineers use daily.
The challenge involves hosting a personal resume website with a visitor counter on Amazon S3, configuring HTTPS and DNS, and setting up CI/CD for deployment. Sounds easy, right? That's an oversimplification: in reality, it involves interacting with a lot of tools and services. And as a DevOps engineer, I wanted all my interactions with AWS to be done with IaC (Infrastructure as Code).
I will divide this post into three sections:

  • FrontEnd
  • BackEnd
  • IaC and CI/CD

FrontEnd

The FrontEnd part of the project involved the following steps:

  1. Designing the FrontEnd with HTML and CSS
    To design the FrontEnd, I took HTML and CSS crash courses to understand the fundamentals, which helped me design a basic resume page. I am not a designer by any means, or someone with an artistic eye, so my original design was as horrible as expected.
    My original site
    After seeing how ugly and bland the site was, I decided to go with a ready-made template. That decision brought about a problem later on in the project, which I will get into in the IaC section.
    After making the necessary edits to the template, my site was ready, and it was time to move on to the next step.

  2. Hosting on Amazon S3 as a static website
    To interact with AWS, I created an IAM user specifically for the project and, to enhance security, gave that user access to only the required services. I created an S3 bucket, manually uploaded my website's files, configured the bucket to host a static website, and got the website endpoint URL (a minimal sketch of that configuration follows this list). That was enough to host the site, but the project requires you to go further by using a custom domain name.
    Files uploaded to S3

  3. Configuring HTTPS and DNS
    I registered my domain name with Whogohost, a local hosting and domain registration company, and used AWS Certificate Manager (ACM) to request an SSL/TLS certificate for the domain. I also set up a CloudFront distribution to cache content and improve security by redirecting HTTP traffic to HTTPS. After doing all that, my domain name still wasn't pointing to my resume site, so I did some digging and found that you have to create a CNAME record with your DNS provider that points the domain to the CloudFront distribution.
    Images of CloudFront and ACM
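
For reference, here is a minimal boto3 sketch of the static-website configuration from step 2, assuming the bucket already exists and uses the usual index and error documents (the bucket name is hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name; the real bucket must already exist and
# allow public reads for static website hosting to work.
BUCKET = "my-resume-bucket"

# Enable static website hosting and set the index/error documents.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```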

My website https://resume.chxnedu.online was finally online and accessible.
The result of the FrontEnd section is a static resume website served over HTTPS, with a custom domain pointing to a CloudFront distribution.


BackEnd

The BackEnd section of the project involves setting up a DynamoDB table to store and update the visitor count, setting up an API as an intermediary between the web app and the database, writing Python code for a Lambda function that updates the value in the DynamoDB table, and writing tests to ensure the API is always functional. The steps I took:

  1. Setting up the DynamoDB table
    The DynamoDB table was simple; it only needs to hold the value of the visitor count. I used the AWS Console to create the table, then created an item and gave it a number attribute with the value of 1.
    DynamoDB Table

  2. Setting up the Lambda function and writing Python code
    Lambda is an event-driven, serverless computing platform provided by AWS. It is perfect for this use case because the code only needs to run when the website is visited, which is what triggers the visitor counter. I created a Lambda function and wrote the code in Python. The code increments the visitor counter's value by 1 and returns the new value as output (a sketch of the handler appears after this list). After testing the code and confirming it worked as expected, I needed to find the right trigger for the Lambda function.
    The Lambda function

  3. Placing an API Gateway in front of the Lambda function
    Having the JavaScript code communicate directly with the DynamoDB table is not good practice, which is why the Lambda function updates the table instead. And rather than having the JavaScript code trigger the Lambda function directly, an API sits in between. The API ensures that a call to an endpoint by the JavaScript code triggers the Lambda function, which returns the new value of the visitor counter. I used Amazon API Gateway to create the API and configured an /update_count endpoint which, when called, triggers the Lambda function.

  4. Setting up alarms and writing a good test for the API
    As an engineer, you always need to know when your code encounters an issue, and you can't be checking your deployment every minute of the day. Some tools will monitor your deployment and alert you when errors are encountered. To monitor my deployment, I used AWS CloudWatch because of how easily it integrates with other AWS services. The metrics I configured CloudWatch to alert me about:

    • The function invocation crashes or throws an error
    • The latency or response time is longer than usual
    • The Lambda function is invoked many times in a short period

    I set the three alarms up in CloudWatch (the first one is sketched after this list) and tested the error metric by breaking my code a bit, and it worked.
  5. Writing JavaScript code for the visitor counter
    To write the JavaScript code, I took a crash course and did a lot of research. I wrote a short script that fetches the current visitor count from the API endpoint and displays it on the website (the endpoint contract is sketched after this list).
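
To make step 2 concrete, here is a minimal sketch of what the Lambda handler can look like, assuming a hypothetical table named visitor-count with a string partition key id and a number attribute views (these names are illustrative, not necessarily the ones I used):

```python
import json

import boto3

# Hypothetical table and attribute names, for illustration only.
TABLE_NAME = "visitor-count"

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)

def lambda_handler(event, context):
    # Atomically increment the counter and read back the new value.
    response = table.update_item(
        Key={"id": "visitors"},
        UpdateExpression="ADD #views :inc",
        ExpressionAttributeNames={"#views": "views"},
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    count = int(response["Attributes"]["views"])

    # Shape the response for an API Gateway proxy integration, with a
    # CORS header so the browser can read the result.
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"count": count}),
    }
```

The ADD update expression makes the increment atomic, so concurrent visitors can't clobber each other's updates.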
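
For the alarms in step 4, the error metric can be wired up roughly like this; a hedged boto3 sketch, assuming an existing SNS topic for notifications (the function and topic names are hypothetical):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical names; replace with your function and SNS topic.
FUNCTION_NAME = "visitor-counter"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:alerts"

# Alarm whenever the Lambda function reports any error within a
# five-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="visitor-counter-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": FUNCTION_NAME}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC_ARN],
)
```

The latency and invocation-spike alarms follow the same pattern, using the Duration and Invocations metrics with different thresholds.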
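
The counter itself runs as JavaScript in the browser, but the request/response contract from step 5 is easy to sketch in Python; this assumes the endpoint responds to a POST with JSON like {"count": 42} (the URL is a placeholder):

```python
import requests

# Placeholder API Gateway endpoint URL.
API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/update_count"

# Each call increments the counter and returns the new value, which
# the site's JavaScript writes into the page.
response = requests.post(API_URL, timeout=5)
response.raise_for_status()
print(response.json()["count"])
```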

After I completed those steps, the FrontEnd and the BackEnd were seamlessly integrated: a visit to the website updates the visitor counter and displays the current count.

IaC and CI/CD

All my interactions with AWS so far had been through the web console, and as a DevOps engineer I found that unacceptable.
I created separate repositories to store my FrontEnd and BackEnd files and configured GitHub Actions in each repository to run Terraform and a Cypress test. I used Terraform Cloud as the Terraform backend (remote state) because of its seamless integration with my GitHub repositories.
While writing Terraform configurations for my resources, I encountered the problem I mentioned earlier in the article.
After you create an S3 bucket with Terraform, the site's files still have to be uploaded to it, and in Terraform that is done by declaring a resource for each file. I had a whole file tree to upload, which meant I would have to declare a resource manually for every file and folder. After some research and digging, I found a blog post that shows how to upload entire file trees cleverly using a few Terraform functions. I implemented this method and had the whole file tree uploaded to the S3 bucket.
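
My actual fix was in Terraform, but the idea translates directly: walk the tree, then upload each file with its relative path as the key and a guessed content type. A minimal Python/boto3 sketch of the same loop, with hypothetical bucket and directory names:

```python
import mimetypes
from pathlib import Path

import boto3

s3 = boto3.client("s3")

# Hypothetical names, for illustration only.
BUCKET = "my-resume-bucket"
SITE_DIR = Path("site")

# Walk the local tree and upload every file, preserving relative
# paths as object keys and guessing a sensible Content-Type.
for path in SITE_DIR.rglob("*"):
    if path.is_file():
        key = path.relative_to(SITE_DIR).as_posix()
        content_type, _ = mimetypes.guess_type(path.name)
        s3.upload_file(
            str(path),
            BUCKET,
            key,
            ExtraArgs={"ContentType": content_type or "binary/octet-stream"},
        )
```
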
With GitHub Actions in place, a push to either repository triggers a run that applies my Terraform configuration and runs a Cypress test.
Successful Backend run

With all of this set up, I had successfully implemented the Cloud Resume Challenge with a DevOps spin.
