TL;DR
- I built and deployed an AWS Cloud Resume Challenge project: a secure, static resume site with a live visitor counter.
- Frontend: HTML/CSS/JS hosted in S3 and served globally via CloudFront with HTTPS and a private S3 origin (OAI).
- Backend: API Gateway → Lambda → DynamoDB to increment and return the visitor count, then display it on the page.
- IaC + CI/CD: Provisioned resources with AWS SAM (CloudFormation under the hood) and automated deployments/testing with GitHub Actions.
- Production-minded extras: Added CloudWatch logs, metrics, alarms, a dashboard, and SNS email notifications for monitoring and alerting.
Table Of Contents
Introduction
The Problem
My Design
Implementation
Issues encountered
Lessons learnt
Future Work
Conclusion
Introduction
The Cloud Resume Challenge is a great introduction to cloud concepts and gives you a detailed specification of what to build. Even for experienced builders, it offers a refresher on basic cloud principles such as databases and serverless architecture. You can find more information about the AWS version of the challenge here.
Personally, I decided to attempt the challenge for three main reasons:
- I am already AWS Certified twice, which gives me a head start: the first step of the challenge is certification and the last step is a blog post.
- I already have AWS credits that I could utilize. I got these as one of the benefits of being an AWS Cloud Builder, which is also how I got the vouchers to get certified in the first place. If you’re interested in learning more about the program and how to join the upcoming cohort, please have a look at the guide here.
- I wanted to learn new technologies that I had not interacted with before such as SAM (Serverless Application Model).
The Problem
The problem statement of the Cloud Resume Challenge, in my own words, is: host your resume on the cloud, keep a record of the number of visitors to the page, and expose this number on your front end. It also encourages the use of Infrastructure as Code and CI/CD principles. For the AWS challenge, the specification includes the following:
- Certification
- HTML
- CSS
- Static Website
- HTTPS
- DNS (no custom domain yet – using CloudFront URL)
- JavaScript
- Database (DynamoDB)
- API (API Gateway + Lambda)
- Python
- Tests
- Infrastructure as Code (AWS SAM)
- Source Control (GitHub)
- CI/CD (Backend)
- CI/CD (Frontend)
- Blog Post
Spoiler alert: I did not complete all of the steps, but I will expound on that later in this blog post.
My Design
I made a simple design that included:
- The infrastructure defined on a SAM template
- The Frontend, consisting of HTML, CSS, and JS, hosted in an S3 bucket and served via CloudFront
- The Backend, consisting of the Lambda function, the DynamoDB table, and API Gateway
I also extended the design to add:
- CloudWatch metrics and alarms
- An SNS topic to send me emails when the CloudWatch alarms fire
I decided not to implement a custom domain name using Route 53 because of the cost. Everything else so far would cost me nothing thanks to my cloud credits; however, the credits do not cover the purchase of a domain.
Implementation
Regardless of the order of the specs, I needed to find an iterative way to build that worked for me. I started with the Frontend and tested it locally first. I then deployed all the resources I needed (for the Frontend) using the SAM template. Next, I implemented the Lambda function, API Gateway, and DynamoDB for the backend. I tested the Lambda locally using unit tests and the deployed API Gateway using integration tests. Only then did I add monitoring via CloudWatch logs, alarms, and a dashboard, plus an SNS topic to receive the alerts from CloudWatch.
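The core API Gateway → Lambda → DynamoDB flow can be sketched as a small handler. This is a minimal illustration, not the post's actual code: the table name `visitor-count`, the key `id: "resume"`, and the `visits` attribute are all placeholder assumptions.

```python
import json
import os


def build_response(count, status=200):
    """Shape the API Gateway proxy response, including the CORS header
    the browser needs when the site and API live on different domains."""
    return {
        "statusCode": status,
        "headers": {
            "Content-Type": "application/json",
            "Access-Control-Allow-Origin": "*",
        },
        "body": json.dumps({"count": count}),
    }


def handler(event, context):
    # boto3 is imported lazily here so the pure response-building logic
    # above can be unit-tested without the AWS SDK installed.
    import boto3

    table = boto3.resource("dynamodb").Table(
        os.environ.get("TABLE_NAME", "visitor-count")  # placeholder name
    )
    # Atomic counter: ADD increments in a single round trip, so
    # concurrent visitors never read-modify-write a stale value.
    result = table.update_item(
        Key={"id": "resume"},
        UpdateExpression="ADD visits :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return build_response(int(result["Attributes"]["visits"]))
```

The atomic `ADD` update is the reason a single-item table is enough here: DynamoDB serializes the increments server-side.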
As part of the extension, I added dark mode on the front end and a button to download my resume as a PDF. Initially, this just used the browser’s print-to-PDF functionality, but the resume kept coming out as two pages. Eventually, I made the button open a PDF version hosted online, which anyone can download.
Security
Security is a major issue to consider when designing any system. I addressed it in the following ways:
- I prevented direct access to the site via the S3 bucket URL. I implemented Origin Access Identity (OAI) so that only CloudFront is allowed to fetch objects from the bucket.
- CloudFront redirects HTTP to HTTPS. This ensures that all data in transit is encrypted.
- I implemented least-privilege permissions in IAM. For example, the Lambda function can only write to the DynamoDB table and publish CloudWatch metrics.
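In SAM/CloudFormation terms, the OAI lockdown looks roughly like the fragment below. This is a hedged sketch: the logical IDs `SiteBucket` and `SiteOAI` are placeholders, not necessarily what my template uses.

```yaml
SiteOAI:
  Type: AWS::CloudFront::CloudFrontOriginAccessIdentity
  Properties:
    CloudFrontOriginAccessIdentityConfig:
      Comment: Access identity for the resume site

SiteBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref SiteBucket
    PolicyDocument:
      Statement:
        # Only the CloudFront identity may read objects; the bucket
        # itself stays private, so the direct S3 URL returns 403.
        - Effect: Allow
          Principal:
            CanonicalUser: !GetAtt SiteOAI.S3CanonicalUserId
          Action: s3:GetObject
          Resource: !Sub "${SiteBucket.Arn}/*"
```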
Costs
What typically stays free / very low-cost for this project
Within normal personal-portfolio traffic, the costs tend to be near-zero because:
- S3 storage is tiny (a few files)
- CloudFront has a generous free tier (and static content is cheap)
- Lambda free tier includes lots of requests and compute time
- DynamoDB on-demand for a single item updated occasionally is tiny
- CloudWatch basic metrics are included; small log volume is cheap
Where cost risks can appear
Even “free-tier-friendly” setups can cost money if something spikes:
1. CloudWatch Logs
- If your Lambda logs a lot (especially per request), logs can grow.
- Log ingestion and retention may incur costs.
- Mitigation: set log retention (e.g., 7–14 days) and avoid noisy logs.
2. API Gateway
- It charges per request.
- A traffic spike or bot traffic can increase costs.
- Mitigation: rate limiting (usage plans), WAF, or CloudFront in front of API (advanced).
3. Lambda invocations
- Still cheap, but if your API is hammered, invocations increase.
- Mitigation: caching, bot protection, or reducing how often the browser calls /count.
4. CloudFront data transfer
- If you ever serve large files or heavy traffic, bandwidth is usually the cost driver.
- Mitigation: caching, compression, keep assets small.
CloudFront caching reduces origin traffic for static assets, which keeps performance high and costs low. This is because after the first request, many users are served from CloudFront edge locations instead of pulling from S3 every time.
The visitor counter remains dynamic and triggers Lambda; we could reduce those calls by caching the API response or only calling it once per session.
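As a concrete example of the log-retention mitigation mentioned above, the function’s log group can be declared explicitly in the SAM template with a short retention period. A sketch, assuming a function with the placeholder logical ID `CounterFunction`:

```yaml
CounterFunctionLogGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    # Must match the function's implicit log group name so the
    # retention setting applies to the logs Lambda actually writes.
    LogGroupName: !Sub "/aws/lambda/${CounterFunction}"
    RetentionInDays: 14
```

Without this resource, Lambda creates the log group on first invocation with retention set to "never expire".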
Issues encountered
Permissions & Least Privilege (IAM)
One of the first issues I ran into was related to IAM permissions. I started with a strict least-privilege policy for the Lambda function: it could only update a single DynamoDB table. That worked fine until I expanded the project to include custom CloudWatch metrics (for things like page views).
At that point, my integration tests began failing with HTTP 500 responses from the API. Unit tests still passed because they used mocked AWS services, but the deployed Lambda in AWS was failing at runtime. The root cause was that the Lambda role didn’t have permission to publish metrics to CloudWatch (cloudwatch:PutMetricData). Adding that permission fixed the 500s.
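In the SAM template, the fix amounts to one extra statement on the function’s role. A hedged sketch (resource names are placeholders; the DynamoDB access uses SAM’s built-in policy template):

```yaml
CounterFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      # SAM policy template: scoped CRUD access to one table only.
      - DynamoDBCrudPolicy:
          TableName: !Ref VisitorTable
      # The statement that fixed the 500s after adding custom metrics.
      - Statement:
          - Effect: Allow
            Action: cloudwatch:PutMetricData
            Resource: "*"  # PutMetricData does not support resource-level scoping
```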
Testing Strategy (Unit + Integration)
Testing was another area where the implementation forced me to think more clearly about what I was validating.
Unit tests ran fully offline using mocks (e.g., moto), allowing me to test the Lambda logic quickly and repeatedly.
Integration tests hit the live API Gateway endpoint and verified that DynamoDB was actually updating. This was useful because it caught problems unit tests could never detect, such as missing permissions, incorrect region configuration, and miswired resources.
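To illustrate what the unit-test layer validates, here is a minimal sketch using only the standard library’s `unittest.mock` (my actual tests used moto; the function name and key are placeholder assumptions). Injecting the table makes the counter logic testable fully offline:

```python
from unittest.mock import MagicMock


def increment_visits(table):
    """Core counter logic with the DynamoDB table injected,
    so tests can pass a fake instead of a real AWS resource."""
    result = table.update_item(
        Key={"id": "resume"},
        UpdateExpression="ADD visits :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return int(result["Attributes"]["visits"])


# A MagicMock stands in for the table, so this runs with no AWS access.
fake_table = MagicMock()
fake_table.update_item.return_value = {"Attributes": {"visits": 42}}
assert increment_visits(fake_table) == 42
```

What a mock like this can never catch is exactly what my integration tests caught: missing IAM permissions, wrong regions, and miswired resources.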
To keep CI/CD efficient and reduce noise, I configured the pipeline so tests run only when relevant code changes. For example, backend tests trigger on changes in backend folders or infrastructure templates, while deployment and integration tests only run on the main branch. That approach keeps pull requests fast while still protecting the main branch with “real” end-to-end validation.
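The path-filtering setup looks roughly like this in a GitHub Actions workflow trigger (the paths and branch name are my assumptions about the repo layout, not the exact workflow):

```yaml
on:
  push:
    branches: [main]        # deploy + integration tests only from main
    paths:
      - "backend/**"
      - "template.yaml"
  pull_request:
    paths:                   # PRs run fast unit tests only when
      - "backend/**"         # backend code or infra actually changed
      - "template.yaml"
```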
CI/CD Failures Due to DynamoDB Region / AWS Region Configuration
A particularly annoying CI issue came from region configuration. The Lambda and DynamoDB code relied on boto3’s default region discovery. Locally, I had a region configured, so everything seemed fine. But in CI/CD, boto3 sometimes didn’t resolve a region the way I expected, which caused failures like NoRegionError when the code tried to talk to DynamoDB.
The fix was to be explicit: set the region consistently via environment variables and ensure boto3 clients and resources use it. It was a good lesson in writing cloud code that behaves the same in three environments: local development, GitHub Actions, and AWS Lambda. When something works locally but fails in CI, it’s often because local credentials or configuration are hiding assumptions.
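One way to make the region explicit in code, so the behavior matches across all three environments (the fallback value and helper name are illustrative assumptions):

```python
import os


def resolve_region(default="us-east-1"):
    """Resolve the AWS region the same way in every environment:
    prefer AWS_REGION (set automatically inside Lambda), then
    AWS_DEFAULT_REGION (commonly set in CI), then an explicit
    fallback so boto3 never raises NoRegionError."""
    return (
        os.environ.get("AWS_REGION")
        or os.environ.get("AWS_DEFAULT_REGION")
        or default
    )


# The resolved region is then passed explicitly when creating clients:
#   boto3.resource("dynamodb", region_name=resolve_region())
```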
SNS Subscription Emails Going to Spam
After setting up monitoring alerts, I added an SNS topic to notify my email address. The infrastructure deployed fine, but I initially missed the subscription confirmation email because it landed in my spam folder. Since SNS won’t send alerts until the subscription is confirmed, this can silently break alerting.
Once I confirmed the subscription and moved the email out of spam, notifications started working.
Lessons learnt
- SAM templates still end up as standard CloudFormation resources. This means you can inspect the CloudFormation stack in the AWS console when debugging.
- Permissions and infrastructure management, just like code, are iterative.
- Observability and monitoring are part of the non-functional requirements; they can catch errors that tests won’t.
- CloudFront caching can help reduce costs and improve performance for static assets.
- Operational details still matter: if a user does not confirm an SNS subscription, it doesn’t matter how correct the infrastructure is.
Future Work
- Enable active tracing for the Lambda + API Gateway using AWS X-Ray to visualize request paths, latency, and failures end-to-end.
- Attach a custom domain to CloudFront using Route 53 + ACM
- Improve visitor counting: cumulative, daily, or unique visitors
Conclusion
Overall, I found the challenge fun and engaging, and I hope you decide to take it. Like me, you can take it for your own reasons, and you can even skip parts. Don’t let the lack of a certification or anything else stop you. You can view my deployed site here and the GitHub repository here.
