Kyle Cramer

Yeah, I’m sure I can figure that out

Making the Move

I have been working in tech in some form or another since graduating from college in 2008. I did not get a degree in anything remotely related to technology, and I’ve never taken a single course in technology or programming. I am entirely self-taught, and have spent the last 14 years teaching myself whatever it is that I wanted or needed to know.

In 2023 it became clear to me that it was time to make a career change. Not away from tech, because I love it, but away from what I’ve been doing. So, in September of 2023 I took my first steps towards the cloud, and instead of working towards my AWS Cloud Practitioner, I decided to go for my Solutions Architect Associate (SAA). I chose the SAA because I felt it would give me a better jump start into the AWS Cloud space and a more in-depth understanding of the ecosystem as a whole. It was a challenge, but at the end of December 2023 I passed my AWS SAA exam. I was lucky enough to find a mentor to help guide me in my shift towards the cloud, and when I asked him what I should do next he immediately sent me the link for the Cloud Resume Challenge. So, off I went.

The Challenge

I felt some level of confidence going in, but when I sat down I immediately hit the ‘blank page problem’. I always struggle with blank pages, with coming up with ideas to create something out of nothing. I have a resume, and I know HTML and CSS. But I was already fixating on what I wanted it to look like and how I was going to present it in a way I liked, and I knew that if all I did was put together something basic that mimicked my PDF resume on a single page, I would come out the other side of this very unhappy. I also realized I could sink hours and hours into just the index.html and the CSS, waste the entire day on front-end web development, and never get into the meat of the challenge. I looked over some examples of other people’s end results, read through some of their blog posts, and decided my best path forward was to build off the experience of others. So I went looking for a template to act as a diving board to get me over that visual hurdle.

Pick yourself up by the Bootstrap

I found a Bootstrap template for online resumes. Simple, clean, modern, and something I liked and was comfortable with.

Within an hour I had made my way through the initial stages. I had my index.html, my CSS, and a little JavaScript from the Bootstrap template. I reworked the index.html with my resume content as well as some other changes I wanted. I tweaked the CSS to match what I liked, but I was already starting to get sidetracked (I put time into a custom favicon) and thinking about how I wanted to modify the content of my resume. I forced myself to stop, because it was time to step away from what I knew I knew and start working in the cloud.

Landing page for Cloud Resume

I set up my bucket, got all my files in, and was able to access them. I got HTTPS set up, not a problem. Time to choose my domain. The process was familiar to me from other things I’ve done. I relied entirely on Route 53 here, just to keep it simple. I selected and paid for my domain.
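For reference, the bucket half of this boils down to only a few lines of CloudFormation. This is a minimal sketch with a placeholder name; I did the real thing in the console, and a production bucket also needs a bucket policy or CloudFront origin access to actually be readable:

Resources:
  ResumeSiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-cloud-resume-bucket   # placeholder name
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: error.html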

Great… that was the easy part.

Working through the new

I got stuck setting up the API endpoint for CloudFront. I understood the concept, but my lack of hands-on experience made me question myself, so I read through AWS documentation and Medium posts for guidance. I had some difficulty setting up the custom URL with the CloudFront distribution, but after some searching I was able to find the missing pieces to make it work. My hang-up was in not creating all of the correct A records. I had also re-created my hosted zone while troubleshooting, which changed the values of my DNS servers. I had not accounted for this, so I spent some time working through things before realizing my records were still using the old values and needed to be updated. Once all that was sorted out, things were working properly.
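For anyone stuck on the same step, the missing piece was an alias A record pointing the custom domain at the distribution. A rough CloudFormation-style sketch (the domain and distribution values are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for CloudFront aliases):

  DomainAliasRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.        # placeholder domain
      Name: example.com.
      Type: A
      AliasTarget:
        DNSName: d1234abcdef8.cloudfront.net   # placeholder distribution
        HostedZoneId: Z2FDTNDATAQYW2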

With that the site was up and running. My profile picture on the site was broken, however, which annoyed me. I tried many times to upload a fixed version, and no matter what I did it just kept referencing the old, broken file. After a good 10 minutes I remembered that I had already set up my CloudFront distribution, and that it was caching what was on my site from my first visit. I initially thought about changing the TTL on my CloudFront distribution, but after reading the AWS documentation I realized that in the real world this would incur unnecessary costs; what I really needed to do was invalidate my CloudFront distribution so that it would pick up the new files. Trying to fix this broken image was a good reminder about the importance of CloudFront invalidation and finding the right solution for the right problem.
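The fix itself is a single call once you know to make it; a minimal sketch with the AWS CLI, where the distribution ID is a placeholder:

aws cloudfront create-invalidation --distribution-id E1234567890ABC --paths "/*"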

Learning to Count

I knew going into this that the visitor counter was going to be the most challenging aspect for me. I realized it would require a Lambda function at the very least, which means programming knowledge to set up said Lambda function. Despite my desire to learn, I do not fully know programming. I know enough to read and understand code, but not enough to write anything natively. I also knew it would require some way to increment that count, which meant holding the count somewhere, which meant a database. Again, I have worked with databases, but I have never set one up from scratch before. So once again I knew I needed to lean on others who have done the challenge.

I borrowed and tweaked code examples from both the Medium post I had used before and another site to get the entire setup working with Lambda, DynamoDB, and API Gateway. It took me a fair amount of time to get there, and I ultimately used a blend of the two resources to create both the functioning Lambda and the DynamoDB setup. But once I was done, it was working and properly tracking page visits in the database.
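I won’t reproduce the borrowed code here, but the heart of it is a single atomic counter update. A minimal sketch of that idea in Node.js (the table name, key, and attribute are placeholder assumptions, not the exact code I blended together):

// Visitor counter Lambda: atomically increments a single DynamoDB item.
const { DynamoDBClient, UpdateItemCommand } = require("@aws-sdk/client-dynamodb");
const client = new DynamoDBClient({});

exports.handler = async () => {
  // ADD creates the attribute on the first call, then increments it after that.
  const result = await client.send(new UpdateItemCommand({
    TableName: "cloud-resume-visitors", // placeholder table name
    Key: { id: { S: "site-count" } },   // placeholder key
    UpdateExpression: "ADD visits :inc",
    ExpressionAttributeValues: { ":inc": { N: "1" } },
    ReturnValues: "UPDATED_NEW",
  }));

  return {
    statusCode: 200,
    headers: { "Access-Control-Allow-Origin": "*" }, // so the site can call it
    body: JSON.stringify({ count: Number(result.Attributes.visits.N) }),
  };
};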

Once the counting was working properly, I had to get the visual counter on my site. I knew JavaScript would be a clean way to achieve this, but again my lack of coding experience meant I leaned on the same Medium post. Initially I placed my code in the main body of the site so I could focus on getting the counter to show at all. Once I had it working, I knew where I wanted it, so I grabbed the HTML I had worked out, moved it over to my left navigation area, and did some styling to get it positioned and looking the way I wanted. It felt really great once that was done, and I loved the clean and simple way it was referenced in my index.html file.

<div class="counter-number"></div>
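The JavaScript behind that div is just a small fetch against the API Gateway endpoint. Roughly sketched (the URL and response shape are placeholders for my actual endpoint):

// Ask the API for the updated count, then drop it into the div above.
fetch("https://example.execute-api.us-east-1.amazonaws.com/visits") // placeholder URL
  .then((response) => response.json())
  .then((data) => {
    document.querySelector(".counter-number").textContent = data.count;
  });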

Landing page for Cloud Resume with visitor counter

Template up!

Former2 was a suggested tool to help me package the work I had done into Infrastructure as Code (IaC) templates. Given that the suggestion came from someone mentoring me through this process, it made logical sense to start there, especially since this whole exercise is about learning by doing. I learn best when I have something tangible to work on and a real-life example relevant to my project to look at. I was able to use Former2 to package up all my work to date and to see and understand the differences and similarities between CloudFormation/SAM templates and Terraform.
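To give a taste of what that output looks like, here is a rough CloudFormation-style sketch of a DynamoDB table like the one behind my counter (the names and settings are illustrative, not Former2’s exact export):

Resources:
  VisitorTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: cloud-resume-visitors   # placeholder name
      BillingMode: PAY_PER_REQUEST       # on-demand keeps a tiny site nearly free
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH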

Former2 output side by side: CloudFormation template and Terraform template

Change it faster

When I started on this, I initially thought, “OK, this is going to be no big deal.” I figured the steps would be:

  1. Upload my files to GitHub
  2. Connect my S3 bucket to GitHub
  3. Make changes to files and watch the magic happen!

Step 1 almost went as expected (more on that later). Step 2… well, I hadn’t actually thought about HOW that was going to happen, and of course that really was the bulk of the work. I also realized that most sane people probably aren’t doing direct coding in GitHub, and that it was not exactly efficient to manually upload local files through the browser every time I planned to make a change. So, though not part of the actual challenge, I decided to take the opportunity to learn how to be a little more efficient around this.
Eventually, Step 3 really was how it went.

So here are the ACTUAL steps required to get it all to work, including the added step of code upload efficiency.

  1. Uploaded my files to GitHub via the browser.
  2. Created a GitHub Action - realized I needed to make a YAML file.
  3. Created the needed directory in the repo for workflows/actions.
  4. Set up secrets to avoid storing anything sensitive directly in the file.
  5. Set up the YAML file for the connection.
  6. Created an IAM user for the GitHub connection.
  7. Connected the GitHub repo to Visual Studio Code (VSCode).

Some of the mistakes I made along the way that caused me some headaches were:

Directories:
I had my directories in GitHub structured differently than they were in S3. I was trying to be cleaner in GitHub, so I put all my files into a main directory labeled “website” or something similar. This was not, however, the way the structure was set up in S3. Once I eventually figured that out, to keep things clean I just deleted everything from GitHub and re-uploaded from my computer. Which led to my second issue.

Deleting:
In my deleting of all the things, I also deleted the needed workflows folder and YAML file for the GitHub Actions.

Random bits:
There were other odds and ends of using borrowed code but forgetting to rename a piece of it, or having a spelling error; just little human-error bits like that.

YAML What?

My biggest hurdle, which took me quite some time to resolve, was around the GitHub Action and the YAML file. I attempted to write my own from start to finish but was unable to get it working properly. After getting everything set up correctly, I created a “test.txt” file to test my whole setup, from VSCode to S3. And it failed.

It failed 16 times.

Actions table in GitHub showing failed actions

This is not all of the failures; I ended up deleting the first 10 or so logs out of frustration.

After the first few failures I went looking for suggestions on how to better structure my file. I found one that worked, but I was not happy with it because it hardcoded my bucket location into the YAML file, which I thought would be better kept in a GitHub secret. So I looked for ways to tweak what I now had so that I could use secrets for anything I didn’t want front-facing. In the end I used a version built on a sync action created by JakeJarvis. I made sure to study what he was doing, and I have a basic understanding of what he put together, as well as something to try to recreate in the future.
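For anyone fighting the same battle, the working version boils down to something like this; a sketch rather than my exact file, with secret names that are my own assumptions:

name: Deploy site to S3
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Sync files to the S3 bucket
        uses: jakejarvis/s3-sync-action@master
        with:
          args: --delete   # remove files from the bucket that are gone from the repo
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: us-east-1   # placeholder region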

Almost to the finish line

At this point, I had everything I needed: a CloudFormation template for the back end if I ever needed to redeploy my cloud setup, and a functioning front-end CI/CD pipeline. And now, from experience, I knew I wanted to automate CloudFront invalidation so that any changes I push to my repo actually show on my site in a reasonable time, without shortening TTLs and generating unnecessary requests against CloudFront. Since this was a little extra, I looked around and found another GitHub Action, made by rewindio on GitHub, for CloudFront invalidation; borrowing from that, I added the invalidation to my pushes in GitHub. So now when I make a change and commit it, the files are updated and CloudFront is invalidated, allowing the changes to show automatically.
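I won’t reproduce rewindio’s action here, but as a rough sketch, an equivalent final step in the same workflow could call the AWS CLI directly (the secret name for the distribution ID is my own assumption):

      - name: Invalidate the CloudFront cache
        run: aws cloudfront create-invalidation --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} --paths "/*"
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: us-east-1   # placeholder region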

With that, I have reached the last step in the Challenge, which I am completing right now: this blog post.

Into the Cloud

The Cloud Resume Challenge was a valuable learning experience for me. Through practical experimentation, leveraging community resources, and embracing a mindset of continuous improvement, I was able to expand my skills in cloud technologies. While the challenge marked a significant milestone, my journey towards mastering these technologies continues.

As of the completion of this blog post, I am studying for my AWS SysOps Administrator Associate certification. I am also working through various AWS workshops and other projects to continue building and honing my hands-on skills and experience. And I have been enjoying my first steps into the GenAI space and prompt engineering, getting my feet wet with PartyRock and entering the PartyRock Hackathon as a way to push myself a little further in that realm as well.
