Jim Bledsoe

Beginning Forrest Brazeal's Resume Challenge

I ran across Forrest Brazeal after seeing some of his wonderful illustrations related to cloud technologies on LinkedIn. I later stumbled upon the Resume Challenge he had posted on his blog.

I started reading through the challenge, and it sounded like a wonderful idea and looked like a ton of fun. I was busy with work at the time, but have since found myself with plenty of time after the business I worked for folded in mid-2023.

I was in the process of studying for the AWS Solutions Architect Professional certification, so I could not devote full-time attention to the challenge, but it has made a wonderful distraction to clear my mind in between chunks of study.

I am partway through the exercise already, but decided (as requirement #14 indicates) to go ahead and start blogging about this adventure before I get too far into it.

As I write this now, I have already completed steps 1 and 2 - I have a website for my resume that was created with Terraform on AWS and is using a custom domain that I own from Namecheap.com - jimbledsoe.me. I will go ahead and post what I learned from the first day of the challenge, even though it is now nearly a week after the fact.

Setting up the Repository

I created a new private repository for the project and configured it the way I like my GitHub repositories: squash merges and nothing else.

The challenge describes using separate repositories for the various pieces, which is normal practice - website in one, API in another, and so on. That way the CI/CD pipeline deploys only what changed and nothing else, and there is not a jumble of code glued together.

But I decided to go against the grain on this one and keep everything in a single mono-repo, for the sake of wrapping a neat bow around the whole project. Since this is just a side project for me, I am more concerned with seeing all the parts in one place.

I also decided that I just might implement the same thing in multiple clouds some day, so I am also trying to provide enough structure and organization to make that easy as well. For now, it will only be deploying to AWS as the challenge describes.

Building an HTML Resume

I had just finished updating my resume before I decided to start working on this challenge, so I had a current one to start with. I normally write my master copy in Word, then publish it as a PDF.

Exporting that Word document to HTML proved to be somewhat of a pain when letting Word do the heavy lifting, so I ended up starting from scratch with the HTML structure and pasting in the blocks of content from each section by hand.

I organized everything with classed divs and then started in on the CSS. My Word version uses just a few fancy formatting features with styles and icons, and I wanted to reproduce it as closely as possible, just for the exercise.

And the CSS part proved to be the most challenging. I am not a CSS expert, but I do understand the concepts. Still, to this day, the display property is the one that removes the most hair from my head. I probably spent two hours fiddling with various display values before finally finding one that laid things out the way I wanted.

Implementing the icons was fairly easy. I just went to my old playbook of using Font Awesome, which provides the basic set and branding icons for free. I decided to download the current version and include the fonts and JavaScript within my project instead of relying on a CDN for the resources I needed.

Dusting off Terraform

I have used Terraform quite a bit in the past to create both AWS and GCP resources. I typically used Terraform even for AWS resources, just for the sake of only needing one tool. Since Terraform is the preferred IaC tool for GCP, I might as well use it for everything, and I will continue that trend for this project as well.

The tricky part of Terraform is storing your state in a shared location for team use. Even though I would be the only one working on this project, I wanted to do things "the right way" as a best practice and to avoid conflicting with GitHub Actions taking over these operations in the future. Plus, if GitHub will be managing the infrastructure, I don't want to have to check state into the repository.

I was used to using S3 and DynamoDB as the state and lock resources, so I decided to continue down that route. I manually created an S3 bucket and DynamoDB table in my personal account to hold the state. I also manually created an IAM user and access key to get up and running. The custom role for the GitHub user would start out with zero permissions, and I would only add permissions as needed. This is not the best approach with Terraform, as I would find out later.
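
For reference, the backend block boils down to something like this; the bucket and table names here are placeholders, not my real ones:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"   # placeholder bucket name
    key            = "cloud-resume/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock" # placeholder lock table
    encrypt        = true
  }
}
```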

I later found out about Terraform Cloud, which I have never used before. I went ahead and created an account, but I am not using it for anything just yet. I want to avoid using it, since in a regulated business setting, that would not be a viable option. I may explore it later just to see how the experience compares.

It didn't take too long to figure out why copy-and-paste from the Internet can paint you into a corner. I started out searching for blogs related to creating S3 websites with Terraform, as you do. There are plenty of examples out there - window-shopper beware.

Many of the examples were a bit out of date, and some of them contained bad practices. I knew that the early going would be fraught with errors and failures. That's all fine; this is a learning exercise, after all.

It did not take too long to get an S3 bucket created for the website, but I learned that there are now separate resources for some of the common features. For instance, the aws_s3_bucket resource cannot configure the S3 website feature - there is a separate aws_s3_bucket_website_configuration resource for that. In the end, I used five separate S3 bucket resources to set up encryption, versioning, the ACL, the public access block, and the bucket policy. It does make for smaller, easier-to-read code that way.
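
A rough sketch of the pattern looks like this (the bucket name is a placeholder, and the ACL, public access block, and policy resources follow the same shape):

```hcl
resource "aws_s3_bucket" "site" {
  bucket = "resume-site-example" # placeholder name
}

# The website settings now live in their own resource.
resource "aws_s3_bucket_website_configuration" "site" {
  bucket = aws_s3_bucket.site.id

  index_document {
    suffix = "index.html"
  }
}

# So does encryption at rest.
resource "aws_s3_bucket_server_side_encryption_configuration" "site" {
  bucket = aws_s3_bucket.site.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```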

A quick search revealed a very clever way to upload all of the HTML files into the bucket. It worked great, but with bucket versioning and encryption at rest enabled, the files wanted to re-upload every single time.

With versioning, the version ID is a property Terraform wanted to manage, and the more I got to thinking about it, I do not want to version this bucket at all - Git is my versioning. So I turned off bucket versioning.

With encryption, the MD5 will never match an object in an encrypted bucket if you use the etag property, but the source_hash property works great for this purpose. Now the HTML files upload only when they change.
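
Put together, the upload pattern looks roughly like this (the site/ directory is a placeholder for wherever the HTML lives):

```hcl
resource "aws_s3_object" "site_files" {
  # One object per file found under the site directory.
  for_each = fileset("${path.module}/site", "**")

  bucket = aws_s3_bucket.site.id
  key    = each.value
  source = "${path.module}/site/${each.value}"

  # etag never matches in an encrypted bucket, so compare against
  # the local file instead; uploads now happen only on change.
  source_hash = filemd5("${path.module}/site/${each.value}")
}
```

(In practice, content_type also needs to be set per file extension, or S3 serves everything as binary/octet-stream.)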

Adding CloudFront to the S3 Website

I struggled quite a bit at this stage. I knew I had a valid S3 bucket, but it was private, so I could not test it. Should I proceed with CloudFront, or make the bucket public for testing? I tried making it public, but it just did not seem to work, so I decided to move on with CloudFront and just use the default certificate for now.

A bit more Terraform code got a CloudFront distribution set up, followed by more testing. This is where Google again made things worse rather than better. After looking at a few samples on the internet, I decided to stick close to home and use the references that HashiCorp provides in the Terraform documentation.
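
The distribution itself ends up looking something like this trimmed-down sketch, pointing at the S3 website endpoint and using the default CloudFront certificate for now:

```hcl
resource "aws_cloudfront_distribution" "site" {
  enabled             = true
  default_root_object = "index.html"

  origin {
    domain_name = aws_s3_bucket_website_configuration.site.website_endpoint
    origin_id   = "s3-website"

    # The S3 website endpoint only speaks HTTP.
    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "s3-website"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```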

Things got much cleaner after this, but I still could not get it working. Every time I ran terraform apply, it would toggle the state of the website: one run would add the website, and the next run would remove it. That made testing very difficult, since I sometimes needed to apply twice just to test my changes.

I decided to keep moving forward with the CloudFront custom domain even though I could not yet test anything for real. I created a subdomain at my domain registrar, then created a matching public hosted zone on AWS.

The AWS certificate was failing to validate, so I started to poke around with the DNS to see if things were resolving. My subdomain was not resolving any of the DNS entries that AWS was making in Route 53. Then I realized I had never made the entries at my domain registrar to let the world know that AWS was in control of the subdomain. After adding the four NS entries from the AWS hosted zone to the subdomain on the registrar, the DNS queries were being properly resolved.

But further testing was still not working. The website would not load, but the certificate could now be issued, since DNS validation was working.
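
The certificate and its validation records reduce to something like this sketch; note that CloudFront certificates must be issued in us-east-1, hence the aliased provider:

```hcl
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1" # CloudFront certificates must live here
}

resource "aws_route53_zone" "sub" {
  name = "resume.jimbledsoe.me"
}
# The zone's four NS records still have to be copied to the registrar
# (by hand or via the registrar's own provider) for delegation to work.

resource "aws_acm_certificate" "site" {
  provider          = aws.us_east_1
  domain_name       = "resume.jimbledsoe.me"
  validation_method = "DNS"
}

# One validation record per domain on the certificate.
resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.site.domain_validation_options :
    dvo.domain_name => dvo
  }

  zone_id = aws_route53_zone.sub.zone_id
  name    = each.value.resource_record_name
  type    = each.value.resource_record_type
  records = [each.value.resource_record_value]
  ttl     = 60
}
```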

Using terraform plan was not sufficient. The resources looked good, but they were not working in the real world. I was having problems that looked like Terraform bugs that had been fixed long ago.

This is where I made the discovery that in the early setup of the AWS provider, I had pinned to an older version - 3.x. When I switched to 4.x, everything seemed to just magically work on the very first apply. I now had a working resume website being served on my custom subdomain through CloudFront. Stale Google examples bit me again and probably wasted two hours of my life that I will never get back.
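
The fix itself was a one-line change in the provider pin:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # was "~> 3.0", which kept toggling the website config
    }
  }
}
```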

Reflection

The major lesson is the realization that I am probably better off using the HashiCorp reference implementations from the documentation instead of outdated examples on the web. I wasted at least two hours debugging old bugs while working from new documentation. The documentation has examples; I should be using those first, before looking to the web for inspiration.

I wonder if ChatGPT would make this part better or worse? Can I tell how old the snippets are that ChatGPT would suggest? Maybe I need a completely separate implementation from ChatGPT one day just to find out.

One thing I found out after the fact is that there is a Terraform provider for Namecheap - my domain registrar. I wish I had bothered to look earlier; then I could have implemented the subdomain delegation with Terraform code, too, and not done any of that part by hand. Maybe I will revert all of this and re-implement it in Terraform one day.
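
I have not tried it yet, but going by the provider's documentation, the delegation records would look something like this (credentials omitted; the name-server address is a placeholder, and there would be one record per Route 53 name server):

```hcl
terraform {
  required_providers {
    namecheap = {
      source = "namecheap/namecheap"
    }
  }
}

resource "namecheap_domain_records" "resume" {
  domain = "jimbledsoe.me"
  mode   = "MERGE" # leave the domain's other records alone

  record {
    hostname = "resume"
    type     = "NS"
    address  = "ns-0000.awsdns-00.org." # placeholder Route 53 name server
  }
}
```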

One thing still left unresolved is IAM permissions. I thought that starting with a zero-permission policy and adding permissions only as the Terraform code expanded would be a good approach. It is one way to do it, but maybe not the most efficient.

For instance, I add new Terraform code for CloudFront. It fails because it needs to read something from the CloudFront API. So you see the error, go add the permission, and run again. Now it can read but can't write, so you add that one, too, and run again. Then a third error pops up trying to read or modify something else.

I think you can see that this is not an efficient workflow. It is not robust from a maintenance standpoint, either. I am finding the errors only as the Terraform provider flexes, which means I am only seeing the errors involved in creating and maintaining the resources. I am likely to find a separate set of errors when it is time to run terraform destroy or use some other feature of a resource.

The proper way to do this would be to add all of the required permissions for a resource at the same time I add the resource. I am hoping to find that list in the documentation, but I have not found it yet (nor have I looked very hard). If I want to write maintainable code, I don't want permission errors just evolving over time.

And now I think to myself: why has this not been a problem for me in the past? I think the answer is that I probably granted admin rights by service while developing Terraform code. Not the best practice, but if I am asking Terraform to create and maintain resources for a given service in a cloud provider, maybe that DOES constitute a need for admin access to that single service. I need to look into this as well.
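
If I go that route, a policy of that shape might look something like this sketch - admin over exactly the services Terraform touches and nothing else (the policy name is hypothetical):

```hcl
resource "aws_iam_policy" "terraform_deployer" {
  name = "terraform-deployer" # hypothetical name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      # Full control, but only over the services this project manages.
      Action = [
        "s3:*",
        "cloudfront:*",
        "route53:*",
        "acm:*",
      ]
      Resource = "*"
    }]
  })
}
```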

Conclusion

After a bit of hair-pulling with older versions of the Terraform AWS provider, the exercise itself has been fun so far.

I now have HTML code for a resume, and Terraform code that will create an S3 bucket to host it, copy the HTML files into it, and create a website and CloudFront distribution serving it on my own domain, resume.jimbledsoe.me.

I have created issues in my (private) repository for completing the challenge, so now I can easily set aside small chunks of time to chip away at it until it is complete, then see what other things I might want to do with it.

A big thank you and hats off to Forrest Brazeal for publishing this challenge. So far it has been great fun dusting off some skills I have not used for a while.
