Introduction
A few months ago, I did not know what Terraform was. I had never written a Lambda function. I could not tell you the difference between an S3 bucket policy and an IAM role.
At the time, I was coming from a sales background. I had experience with cold outreach, lead generation, HubSpot certifications, and customer-facing work, but the closest I had gotten to "the cloud" was storing files on Google Drive.
The turning point was my brother-in-law. He works as a DevOps engineer at Compass/Foodbuy, and one day he walked me through what his job actually looks like. He showed me the dashboards, the pipelines, the infrastructure, and how all the different pieces work together behind the scenes.
I was hooked.
It felt like problem-solving at scale, and it clicked with something in me that sales alone was not fully satisfying.
He encouraged me to start by getting my AWS Solutions Architect Associate certification so I could build a real foundation. I studied, passed the exam, and then he pointed me toward the Cloud Resume Challenge as the next step.
The challenge gave me a way to take the theory I had learned and turn it into something real.
The idea behind it is simple: build a serverless resume website on AWS that covers the full stack. That means HTML, CSS, JavaScript, Python, a database, an API, infrastructure as code, CI/CD pipelines, and a custom domain with HTTPS.
Sixteen steps total.
I originally gave myself three days to finish it.
That was way too optimistic.
It ended up taking me eight days. Things broke. I got stuck. I had to backtrack and relearn concepts I thought I already understood. But I learned more in those eight days than I had in weeks of studying for the certification.
This post is about what I built, what broke, what I learned, and what I would do differently.
What I Actually Built
Before getting into the struggles, here is what the final project looks like from an architecture perspective:
Frontend: A multi-page resume website built with HTML, CSS, and JavaScript. It is hosted in an S3 bucket, served through CloudFront with HTTPS, and connected to a custom domain through Route 53.
Backend: An API Gateway HTTP API that triggers a Python Lambda function. The Lambda function increments a visitor counter stored in DynamoDB and returns the updated count.
Infrastructure as Code: Every major AWS resource is defined in Terraform. This includes the S3 bucket, CloudFront distribution, ACM certificate, DynamoDB table, Lambda function, API Gateway, Route 53 records, and IAM roles. I also used remote state stored in S3 with DynamoDB locking.
CI/CD: Two GitHub Actions pipelines using OIDC authentication, which means no stored AWS credentials. The backend pipeline runs Python tests and applies Terraform changes. The frontend pipeline syncs files to S3 and invalidates the CloudFront cache.
It sounds clean when I list it out like that.
The actual process was much messier.
The First Few Days: How Hard Can Terraform Be?
Very hard, especially when you have never used it before.
I started by setting up the Terraform remote state backend. This involved creating an S3 bucket and a DynamoDB table to store and lock Terraform's state file.
This creates a classic chicken-and-egg problem for beginners: you need Terraform to create the very resources Terraform needs to store its own state.
The solution was to create a small, separate Terraform configuration that uses local state to create the backend resources first. Then the main project can reference that backend afterward.
Kiro, an AI-powered coding assistant (more on this later), walked me through this and wrote the configs, but I still had to understand why we were doing it this way.
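For anyone curious what that bootstrap looks like in practice, here is a hedged sketch in Terraform. The bucket and table names are placeholders, not the ones from my project, and the real configs have more to them (versioning, encryption, and so on):

```hcl
# --- bootstrap config: runs with plain local state, since the
# --- remote backend does not exist yet.

resource "aws_s3_bucket" "tf_state" {
  bucket = "example-terraform-state" # placeholder name
}

resource "aws_dynamodb_table" "tf_lock" {
  name         = "terraform-locks" # placeholder name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # the key the S3 backend uses for its lock items

  attribute {
    name = "LockID"
    type = "S"
  }
}

# --- main project: once the bootstrap has applied, its backend
# --- block points at the resources created above.

terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "cloud-resume/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
  }
}
```

The two sections live in separate directories: you apply the bootstrap once with local state, then run terraform init in the main project to start using the remote backend.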
It took me longer than I would like to admit to understand the basic Terraform workflow:
terraform init
terraform plan
terraform apply
The first time I ran terraform plan and saw a wall of green text describing all the resources it wanted to create, I honestly did not know if that was a good thing or a bad thing.
The backend infrastructure came together faster than I expected. DynamoDB, Lambda, and API Gateway started to make sense once I saw how they connected. A big reason for that was Kiro handling the Terraform modules and helping me understand what each piece was doing.
The Lambda function itself was surprisingly simple. It is only about 15 lines of Python that increment a counter in DynamoDB and return the new value.
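A minimal sketch of the same idea looks like this. The table name, key schema, and attribute names here are assumptions for illustration, not necessarily what my project uses, and the optional table parameter is only there so the handler can be exercised without a live AWS account:

```python
import json


def handler(event, context, table=None):
    """Increment and return the visitor count stored in DynamoDB."""
    if table is None:
        import boto3  # imported lazily so the handler is testable without AWS

        # Placeholder table name; the real project defines its own.
        table = boto3.resource("dynamodb").Table("visitor-counter")

    # ADD atomically increments the attribute, creating it on the first visit,
    # so there is no read-modify-write race between concurrent invocations.
    response = table.update_item(
        Key={"id": "visitors"},
        UpdateExpression="ADD visit_count :inc",
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    count = int(response["Attributes"]["visit_count"])
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"count": count}),
    }
```

Using update_item with an ADD expression, rather than a get followed by a put, is what keeps the counter correct when two visitors load the page at the same time.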
But wiring everything together with the right IAM permissions was where things got tricky.
IAM was probably my biggest headache early on. Every service-to-service call needs an explicit permission, and if you get one policy wrong you usually get a vague "Access Denied" error that does not tell you which permission is missing.
Even with Kiro's help, I spent about an hour debugging why my Lambda function could not write to DynamoDB before we finally tracked it down to a typo in the table ARN.
That was frustrating, but it was also a useful lesson.
Midweek: DNS, Certificates, and a Lot of Waiting
Day 2 was supposed to be focused on the frontend.
Write the HTML and CSS, deploy it, and point a domain at it.
Simple enough, right?
Not exactly.
I registered my domain through Route 53, which went smoothly. After that, I needed an ACM certificate for HTTPS.
One important thing I learned is that ACM certificates for CloudFront have to be in us-east-1, no matter where your other resources are. I had everything in us-east-1 already, but I can definitely see how that would trip someone up.
The certificate uses DNS validation. ACM gives you a special CNAME record to add to your hosted zone (Route 53 can create it for you automatically), and then you wait for the certificate to validate.
Sometimes it takes 5 minutes. Sometimes it takes 30.
I probably refreshed the console 400 times.
CloudFront was another learning curve. The key concept I had to understand was Origin Access Control, or OAC. The idea is that you block public access to your S3 bucket and only allow CloudFront to read from it.
That means the website is only accessible through the CDN, not directly through S3. It is more secure, but it also means the bucket policy has to be exactly right.
Kiro set up the Terraform for this, but I still had to understand the flow well enough to troubleshoot when things did not connect the way they were supposed to.
The HTML and CSS were actually the fun part. I described the design I wanted: modern, dark-themed, with a sticky navigation bar, a hero section with my photo, and separate pages for Skills, Experience, Certifications, and Projects.
Kiro built it out from there.
I used the Inter typeface from Google Fonts and Font Awesome for the icons. It is nothing groundbreaking, but it looks professional, works well on mobile, and gives me something real to show.
The visitor counter JavaScript is simple. It makes a fetch() call to the API Gateway endpoint when the page loads, then displays the count in the footer.
Kiro also added sessionStorage caching so the counter does not increase every time someone clicks between pages. That was a small detail I probably would not have thought of on my own, but it made the final result feel much cleaner.
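The logic amounts to something like this sketch. The endpoint URL and storage key are placeholders, and the injectable storage and fetchFn parameters are just there to make the function easy to run outside a browser; on the real page they default to the browser globals:

```javascript
// Placeholder endpoint; the real site uses its own API Gateway URL.
const COUNTER_API = "https://example.execute-api.us-east-1.amazonaws.com/count";

// Fetch the visitor count at most once per browser session.
async function getVisitorCount(storage = sessionStorage, fetchFn = fetch) {
  const cached = storage.getItem("visitorCount");
  if (cached !== null) {
    return Number(cached); // already counted this session, reuse the value
  }

  const res = await fetchFn(COUNTER_API, { method: "POST" });
  const data = await res.json();
  storage.setItem("visitorCount", String(data.count));
  return data.count;
}
```

On the page itself you would call getVisitorCount() with no arguments when the page loads and write the result into the footer element.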
The Final Stretch: CI/CD and the OIDC Revelation
GitHub Actions was completely new to me.
I had used Git before for basic things like committing, pushing, and pulling, but I had never written a workflow file.
The biggest revelation was OIDC authentication.
Instead of storing AWS access keys as GitHub secrets, you can set up an OpenID Connect provider in AWS that trusts GitHub. When a workflow runs, GitHub gives it a short-lived token. AWS then exchanges that token for temporary credentials tied to a specific IAM role.
No long-lived secrets are stored anywhere.
Setting up the OIDC provider and trust policies was probably the most complex part of the bootstrap Terraform. It was also one of the areas where I leaned on Kiro the most.
The trust policy has to match the exact GitHub repository name. If you make a typo in the repo name, the workflow fails when it tries to assume the role.
Unfortunately, I learned that one the hard way.
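For context, the repo check lives in a condition on the IAM role's trust policy. A hedged Terraform sketch, with the OIDC provider reference and repo name as placeholders:

```hcl
data "aws_iam_policy_document" "github_actions_trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }

    # One wrong character in this repo string and the workflow's
    # AssumeRoleWithWebIdentity call fails.
    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:example-user/example-repo:*"]
    }
  }
}
```

The sub claim in GitHub's token encodes the repository (and optionally the branch or environment), which is what lets AWS restrict the role to one specific repo.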
Once CI/CD was working, the feedback loop became amazing.
I could push a change to the frontend repo, and within about a minute the site would update automatically. The workflow would sync files to S3, invalidate the CloudFront cache, and finish the deployment.
For the backend, pushing a change would run my Python tests and then apply any Terraform updates.
After a week of manually running commands, it felt like magic.
The Tool That Made This Possible: Kiro
I want to be completely transparent about something: I used Kiro, an AI coding assistant, throughout this entire project.
Without it, I honestly would not have been able to do this.
I am not a developer. I do not have a computer science degree. Before this project, I had never written a Terraform config, a Lambda function, or a GitHub Actions workflow.
Kiro helped me write the code, debug errors, understand what each piece of infrastructure was doing, and connect the dots between services I had only read about in study guides.
To be completely honest, Kiro did a lot of the heavy lifting on the more complex tasks.
The Terraform modules, IAM trust policies for OIDC, and GitHub Actions workflows were not things I wrote from scratch. I described what I needed, and Kiro helped build it. When things broke, it helped me debug them. When I needed to sync files to S3 and invalidate the CloudFront cache, it helped run the right commands.
Could I have done all of this without AI?
No.
Not in eight days, and probably not in eight weeks.
Terraform alone would have taken me months to learn well enough to write clean configs with remote state, modular design, and properly scoped IAM permissions.
But what I did do was drive the project.
I decided the architecture. I chose Terraform over SAM. I picked the design style for the website. I registered the domain, set up the AWS account, and made the decisions about what to build and how it should work.
When Kiro gave me code, I had to understand it well enough to know whether it was doing what I wanted. When something failed, I had to explain the problem clearly enough for us to fix it together.
I think being honest about this matters.
There is a temptation to downplay AI usage and pretend you hand-wrote every line yourself. But the reality of working in tech in 2026 is that AI is a tool. Knowing how to use it effectively is becoming a real skill.
You still have to know what to ask for. You still have to evaluate the output. You still have to recognize when something is wrong. You still have to understand the system well enough to make decisions.
This project taught me cloud architecture, AWS services, infrastructure as code, and deployment automation.
The fact that I had an AI partner along the way does not make that learning less real.
Was It Worth It?
Absolutely.
Before this project, cloud engineering felt like an abstract concept. I understood some of the terms from studying for the AWS certification, but I had not actually built anything real.
Now I can look at an architecture diagram and understand what each piece does and why it matters.
I can read Terraform configs and understand what they are going to create. I can follow Lambda function logic. I understand how CI/CD pipelines connect everything together.
More importantly, I have something real to show.
This is not just a tutorial I followed. It is a live website running on infrastructure I designed and deployed with Kiro's help.
When I talk about AWS in an interview, I can point to actual infrastructure and explain the architectural decisions behind it because I am the one who made them.
The Cloud Resume Challenge is supposed to be hard. It is designed to push you outside your comfort zone.
I thought I would finish it in three days. It took eight.
Some days I made huge progress. Other days I spent hours stuck on one IAM policy.
That is normal.
If you are thinking about doing the Cloud Resume Challenge, especially if you are coming from a non-technical background like I did, my advice is simple:
Just start.
You will not understand everything at first. You will get stuck. You will break things. You will probably spend too much time staring at error messages.
But you will figure it out as you go.
That is kind of the whole point.
Repos
You can check out my repos here: