DevOps Resume Challenge

Being involved in the product lifecycle as a Full Stack Engineer exposed me to the DevOps philosophy, which was right in line with my interest in automation. Coming out of college, I honestly didn't even know DevOps was a job title!

I reached out to a family friend who was in a similar role, looking for advice on how to transition into DevOps. He pointed me to The Cloud Resume Challenge, an initiative by Forrest Brazeal to help newcomers break into the cloud industry.

Perfect. I love projects -- they are the best way for me to get some hands-on experience with the tools, and the whole idea was to make a website for your resume, which is something I've always wanted to do.

Onwards to the Challenge.

1 . Your resume needs to have the AWS Cloud Practitioner certification on it

I was eager to get coding and this first step was in my way. Fine. After doing some research I found a great course on Udemy taught by Neal Davis. It was a great introduction to AWS and the cloud: setting up completely serverless apps, creating EC2 instances, databases, and security/permissions. I went through all the content in about a week and scheduled my test. Once test day came around, I passed and was now the proud owner of an AWS CCP certification!

First step down.

Next two steps.

2 . Your resume needs to be written in HTML
3 . Your resume needs to be styled with CSS.

I have some experience designing basic frontends at my job, so no worries there. HTML and CSS are comparatively weaker areas for me, and since I was eager to get started on the cloud side of the project, I opted to find a template to work off of and tweak to my liking. Thanks to André Fontenelle for the theme!

4 . Your HTML resume should be deployed online as an Amazon S3 static website.

That's not too bad. I learned from the AWS CCP exam that you can host static content in an S3 storage bucket. It's as easy as dragging the files into the bucket. AWS automatically creates an external endpoint for the bucket so you can view the website.

5 . The S3 website URL should use HTTPS for security.
6 . DNS - Point a custom DNS domain name to the CloudFront distribution, so your resume can be accessed at a custom domain.

This part was fun. AWS has tools to make this easy, but this was all brand new to me and reminded me I have a lot to learn on the networking side of things. First I bought my domain, LukeBonci.ch. I'm very proud of my clever use of my name and the Switzerland domain. I created a hosted zone in Route 53, which directs internet traffic for my domain. After following a YouTube tutorial I was able to create an SSL certificate and had my domain correctly pointing to the S3 bucket containing the website code. Success!

7 . Your resume should include a visitor counter written in JavaScript.
8 . The visitor counter will need to retrieve and update its count in a database somewhere.
9 . Do not communicate directly with DynamoDB from your JavaScript code. Instead, you will need to create an API that accepts requests from your web app and communicates with the database.

Having recently completed the AWS CCP exam, I already had an idea of how to get this done in a completely serverless fashion.

First I needed to set up a database in DynamoDB, a serverless NoSQL database, with a table for the visits. Easy enough. Next was the JavaScript code. I created two Lambda functions (serverless compute functions) that talk to the DynamoDB table I just created: one to get the number of visits, and another to add a visit to the table. The JS code uses the AWS SDK to retrieve and add entries in the database. Simple enough. Lastly, the API Gateway. I created an API Gateway with GET and POST methods and configured them to trigger their respective Lambda functions when called.
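For illustration, here is a minimal sketch of what the "add a visit" Lambda might look like, using the AWS SDK DocumentClient to atomically increment a counter. The file name, table name, and key are my own placeholders, not the project's actual code.

```javascript
// countVisit.js - hypothetical sketch of the "add a visit" Lambda (names are placeholders)
const AWS = require('aws-sdk');
const documentClient = new AWS.DynamoDB.DocumentClient();

exports.handler = async () => {
  // Atomically increment the single counter item in the visits table
  const result = await documentClient
    .update({
      TableName: 'visits',            // assumed table name
      Key: { id: 'resume-site' },     // assumed partition key
      UpdateExpression: 'ADD visits :inc',
      ExpressionAttributeValues: { ':inc': 1 },
      ReturnValues: 'UPDATED_NEW',
    })
    .promise();

  // Return the new count so the front end can display it
  return {
    statusCode: 200,
    body: JSON.stringify({ visits: result.Attributes.visits }),
  };
};
```

The "get" function is the same idea with a `get` call instead of `update`.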

So it was all set up. An API call would be made from my website to the API Gateway, which would trigger the Lambda function, which would then talk to the database. I added the calls to my website and tested it out.
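On the website side, the call can be as simple as a fetch to the API Gateway endpoint. A rough sketch, with a placeholder URL and element ID:

```javascript
// Hypothetical front-end snippet: record a visit and display the running total.
// The API URL and element ID are placeholders.
const API_URL = 'https://example.execute-api.us-east-1.amazonaws.com/prod/visits';

async function updateVisitCounter() {
  const response = await fetch(API_URL, { method: 'POST' });
  const data = await response.json();
  document.getElementById('visit-counter').textContent = data.visits;
}

updateVisitCounter();
```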

BUT WAIT, THERE'S CORS!

The beloved CORS error strikes again. I still don't fully understand the beast that is CORS; maybe one day we will understand each other. Luckily, API Gateway has an option to enable CORS, which adds the necessary response headers (and an OPTIONS preflight method) for me.
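The key piece is that the browser wants to see an Access-Control-Allow-Origin header on the response. If you were to return it from the Lambda itself instead of flipping the API Gateway switch, the response shape might look something like this (purely illustrative):

```javascript
// Hypothetical CORS-friendly Lambda response: the browser accepts the
// cross-origin call once Access-Control-Allow-Origin is present.
const response = {
  statusCode: 200,
  headers: {
    'Access-Control-Allow-Origin': 'https://lukebonci.ch', // or '*' while testing
    'Access-Control-Allow-Methods': 'GET,POST,OPTIONS',
  },
  body: JSON.stringify({ visits: 42 }),
};
```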

I now can tell when people visit my site! Success!

10 . Python

I wrote the Lambda functions in JavaScript instead. I'm sure it wouldn't be hard to switch the language. Perhaps I will come back to this at a later time.

11 . Tests. Include some tests for your Lambda functions.

Don't tell anyone, but I haven't gotten around to writing the tests yet. I am familiar with unit testing, but I got too excited about the next steps.
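When I do get to it, a minimal Jest-style unit test for the counting Lambda sketched above might look something like this (the file path and mocking details are assumptions):

```javascript
// Hypothetical Jest test for the "add a visit" Lambda, mocking the AWS SDK.
const mockUpdate = jest.fn().mockReturnValue({
  promise: () => Promise.resolve({ Attributes: { visits: 42 } }),
});

jest.mock('aws-sdk', () => ({
  DynamoDB: { DocumentClient: jest.fn(() => ({ update: mockUpdate })) },
}));

const { handler } = require('./countVisit'); // placeholder path

test('increments and returns the visit count', async () => {
  const response = await handler({});
  expect(mockUpdate).toHaveBeenCalled();
  expect(response.statusCode).toBe(200);
  expect(JSON.parse(response.body).visits).toBe(42);
});
```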

12 . Infrastructure as code. The DynamoDB table, API Gateway, and Lambda functions should be defined in an AWS Serverless Application Model (SAM) template.

Cool! I've been excited to learn about IaC. I followed a tutorial and looked at the documentation to get started. All you need to do is create a resource in the template for each of the components of the project:

  1. The S3 Bucket and its policy.
  2. The two Lambda functions. Make sure you attach policies to allow them access to DynamoDB!
  3. The DynamoDB table. Where is the API Gateway, you might ask? What's great about defining the Lambda functions in a SAM template is that if you point the functions to an API and define the API path and method, it will look to see if that resource exists and create it if it doesn't. Nifty.

I loved doing this part. Having all the infrastructure in a document, easily deployable with a simple bash script, is awesome. One command and it's all online and operational. It makes you feel powerful :)
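To give a feel for it, here is a stripped-down sketch of what such a SAM template can look like. The resource names, runtime, and paths are illustrative assumptions, not the project's actual template.

```yaml
# Hypothetical, trimmed-down SAM template (names, runtime, and paths are assumptions)
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  VisitsTable:
    Type: AWS::Serverless::SimpleTable      # the DynamoDB table
    Properties:
      PrimaryKey:
        Name: id
        Type: String

  CountVisitFunction:
    Type: AWS::Serverless::Function         # one of the two Lambda functions
    Properties:
      Handler: countVisit.handler
      Runtime: nodejs18.x
      CodeUri: ./src
      Policies:
        - DynamoDBCrudPolicy:               # gives the function access to the table
            TableName: !Ref VisitsTable
      Events:
        CountVisitApi:
          Type: Api                         # SAM creates the API Gateway implicitly
          Properties:
            Path: /visits
            Method: post
```

One `sam deploy` (or a short bash wrapper around it) and the whole backend is up.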

13 . Source control. You do not want to be updating either your back-end API or your front-end website by making calls from your laptop, though. You want them to update automatically whenever you make a change to the code.

This step I actually completed earlier on when creating my website. I knew that I would be making constant fixes to my site, and wanted to see it get posted to the S3 bucket when I pushed to master.

Jenkins was recommended to me. I created a basic Ubuntu EC2 instance in AWS and SSH'd into it to install Jenkins. I created a security group rule to open port 8080 (the default Jenkins port), and once I attached it to the Ubuntu server I was able to log in. After looking up some tutorials I figured out how to integrate Jenkins to push the code up to my S3 bucket. It just required a Jenkins plugin, granting it some AWS permissions, and creating a webhook on GitHub to notify my Jenkins server when code was pushed. This took a little while to set up, but once it was, it felt great.

The last two steps are CI/CD for the frontend and backend, which I ended up integrating with Jenkins after following some tutorials.

The only other trouble I ran into was the CloudFront cache. My website code was cached, and even when I made changes and pushed the new code to the S3 bucket, the live site wouldn't update. After some digging I found out you can change the caching settings on the files in the S3 bucket. I just set the cache age for all the files to 3 minutes and voila, it was fixed.

Final Words
All in all this was a really fun project. I learned a lot and feel a lot more confident in the basics of the cloud. I'm still very eager to learn more about containers, networking, pipelines, and the like. DevOps is a frontier I am very excited to explore. I would like to thank Forrest for creating the challenge, as I believe the very best way to learn anything is through actual projects: nothing beats hands-on experience.
