After weeks of effort, late-night debugging, and countless commits, my Cloud Resume Challenge is finally complete.
This project wasn’t just about hosting a résumé on AWS. It was about learning real-world cloud engineering practices, from IAM fundamentals to Infrastructure as Code, CI/CD pipelines, and supply chain security.
In this post, I’ll tie everything together, walk through the full journey, and share what I learned.
🧩 Breaking It Down - The 5 Chunks
Throughout the challenge, I split the work into five chunks. Here’s a recap:
🔹 Chunk 0 - Access, Credentials, and Certification Prep
- Got AWS Certified Cloud Practitioner.
- Created and secured an AWS Organization (1 org, 2 OUs, MFA everywhere).
- Configured billing alerts ($0.01 threshold 🚨).
🔹 Chunk 1 - Building the Front-End
- Created my résumé with Hugo (console theme for a Linux-terminal feel).
- Styled with HTML/CSS/JS.
- Deployed to S3, secured behind CloudFront (no public bucket access).
🔹 Chunk 2 - Building the API
- Added a visitor counter using DynamoDB + Lambda + API Gateway.
- Improved to track total + unique visits.
- Enabled S3 bucket versioning + lifecycle policies.
🔹 Chunk 3 - Front-End & Back-End Integration
- Built a GitHub Actions pipeline with OIDC (no long-term creds).
- Automated: build → deploy → invalidate CloudFront cache → run Playwright smoke tests.
- Pipeline permissions scoped to a single IAM role.
🔹 Chunk 4 - Building the Automation and CI
- Migrated infrastructure into Terraform for IaC.
- Hardened supply chain security (pinned Actions, checksum verification, minimal IAM).
- Documented with architecture diagrams.
👉 Read Chunk 4 Blog
🔹 Securing the Software Supply Chain
The software supply chain is everything that connects your laptop to production: the code you write, the dependencies you install, and the pipelines that deploy it all.
As recent high-profile incidents have shown, a single compromised dependency or misconfigured pipeline can cascade into a massive breach. This chunk was all about building trust and verifiability into that chain.
🧠 The Goal
The objective was to extend my Cloud Resume project with integrity checks, signed commits, automated scanning, and dependency validation; in short, to treat my résumé like production-grade software.
Here’s what I implemented:
🖋️ Signed Commits and Verified Merges
Every commit to my front-end and back-end repositories is now cryptographically signed using a GPG key.
You’ll see the “Verified” badge on each commit in GitHub: proof that the code changes are truly mine.
To take it a step further, I enforced branch protection rules:
✅ Only signed commits can be merged.
✅ Status checks must pass before merging.
This ensures no unsigned or unreviewed changes slip through.
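For anyone replicating this, the local setup is only a few commands. Here’s a minimal sketch assuming a fresh GPG key; the key ID below is a placeholder for whatever `gpg` prints for yours:

```bash
# Generate a new signing key (follow the interactive prompts)
gpg --full-generate-key

# Find the long key ID (the hex string after the algorithm, e.g. rsa4096/<KEYID>)
gpg --list-secret-keys --keyid-format=long

# Tell git to sign every commit with it (placeholder key ID)
git config --global user.signingkey 3AA5C34371567BD2
git config --global commit.gpgsign true

# Export the public key, then add it under GitHub Settings → SSH and GPG keys
gpg --armor --export 3AA5C34371567BD2
```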
🔍 Automated Code Scanning with CodeQL
Next, I enabled GitHub’s CodeQL analysis, a static analysis engine that detects insecure patterns in code.
My workflow now runs CodeQL scans automatically:
- On every pull request merge to `main`
- On a monthly schedule to catch dependency decay
Any “High” or “Critical” findings cause the build to fail automatically, protecting the integrity of my main branch.
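In my case the scans run through GitHub’s hosted CodeQL workflow, but you can approximate the same analysis locally with the CodeQL CLI. A rough sketch, assuming a JavaScript front-end and the CLI installed:

```bash
# Build a CodeQL database from the repository source
codeql database create codeql-db --language=javascript --source-root=.

# Run the standard JavaScript query pack and emit SARIF results
codeql database analyze codeql-db codeql/javascript-queries \
  --format=sarif-latest \
  --output=codeql-results.sarif
```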
🧾 Generating an SBOM (Software Bill of Materials)
An SBOM lists every dependency your project includes, like an ingredient list for your software.
During each CI/CD pipeline run, I used Syft to generate an SBOM for the Lambda back-end:
```bash
syft dir:. -o json > sbom.json
```
This ensures I know exactly what’s deployed: every library, its version, and its source.
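Once the SBOM exists, it’s easy to sanity-check. These `jq` one-liners assume the Syft JSON layout used above (an `.artifacts` array with `name` and `version` fields):

```bash
# How many dependencies ended up in the deployment package?
jq '.artifacts | length' sbom.json

# Spot-check the first few entries as name@version pairs
jq -r '.artifacts[:5][] | "\(.name)@\(.version)"' sbom.json
```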
🕵🏻 Automated Vulnerability Checks
To validate my SBOM, I layered in defense-in-depth vulnerability scans:
- Grype — scanned the SBOM for known CVEs across multiple databases.
- OSV API — queried Google’s Open Source Vulnerability database for any dependency issues.
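The Grype pass is essentially a one-liner against the SBOM from the previous step; `--fail-on high` makes the step exit non-zero on any High or Critical finding:

```bash
# Scan the SBOM against Grype's vulnerability databases
grype sbom:./sbom.json -o table --fail-on high
```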
And here’s the OSV API check from my GitHub Actions step:
echo "Running OSV API vulnerability scan..."
jq 'if .artifacts then .artifacts |= map(select(.name != null and .version != null and .version != "UNKNOWN")) else . end' sbom.json > sbom-clean.json
cat sbom-clean.json | jq -c '.artifacts[] | {name: .name, version: .version}' | while read -r pkg; do
NAME=$(echo "$pkg" | jq -r '.name')
VERSION=$(echo "$pkg" | jq -r '.version')
RESPONSE=$(curl -s -X POST "https://api.osv.dev/v1/query" \
-H "Content-Type: application/json" \
-d "{\"package\": {\"name\": \"$NAME\"}, \"version\": \"$VERSION\"}")
if echo "$RESPONSE" | jq -e '.vulns' | grep -q '.'; then
echo "⚠️ Vulnerability found in $NAME@$VERSION"
echo "$RESPONSE" | jq -r '.vulns[] | "• \(.id): \(.summary)"'
exit 1
fi
done
The job fails if any vulnerable dependency is detected, preventing insecure code from being deployed.
🧾 Artifact Signing
Since my API runs on AWS Lambda (not containers), I integrated AWS Lambda Code Signing directly into my Terraform definition.
If I had been using containers, I’d have used Cosign with AWS KMS to sign my images, verifying that my deployment artifacts were unaltered from build to deploy.
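For completeness, here’s roughly what that container path would look like. This is a sketch only, since my stack doesn’t actually ship images; the KMS alias and image reference are placeholders:

```bash
# Sign the image by digest with a key held in AWS KMS (the private key never leaves KMS)
cosign sign --key awskms:///alias/artifact-signing \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/resume-api@sha256:<digest>

# Verify the signature before deploying
cosign verify --key awskms:///alias/artifact-signing \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/resume-api@sha256:<digest>
```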
🧩 What This Achieved
By layering these protections, my pipeline now provides:
✅ Provenance — cryptographic verification from commit to deploy
✅ Integrity — dependencies scanned and tracked via SBOM
✅ Security Automation — no manual checks needed
✅ Trustworthiness — everything verifiable, auditable, and repeatable
This chunk transformed my project from “secure enough” to professionally hardened, aligned with real DevSecOps practices.
🏗️ Final Architecture
Here’s the big picture of the project:

Here is the front-end of the project:

Here is the back-end of the project:

Here is the S3 Lifecycle management of the project:

🌐 The Final Product
✨ Live Resume Website: https://www.trinityklein.dev/
📂 GitHub Repository: https://github.com/tlklein/portfolio-website
Both the code and live deployment are currently publicly available.
---
💡 Key Lessons Learned
This challenge taught me so much more than AWS commands and YAML syntax.
- Security First → MFA, least privilege, no long-term creds.
- Automation Wins → GitHub Actions reduced manual deployment risk.
- Infrastructure as Code → Terraform makes my project reproducible.
- Resilience Through Testing → Playwright ensured code stability.
- Documentation Matters → Architecture diagrams simplified communication.
This was more than a résumé site; it became a mini production system.
📚 Helpful Resources
If you’re inspired to start your own Cloud Resume Challenge, here are some key references:
- The Cloud Resume Challenge Official Site
- Terraform Extension
- Supply Chain Security Extension
- AWS Free Tier
🙌 Final Thoughts
Completing the Cloud Resume Challenge wasn’t easy. There were plenty of moments where I thought, “Why isn’t this working?!” But those moments became the most valuable, because they forced me to dig deeper, learn faster, and think like a real cloud engineer.
This challenge proved that building cloud-native projects requires:
- Technical skills 🛠️
- Security awareness 🔐
- Persistence 💪
And now, I have not only a working cloud résumé, but also a solid foundation for my cloud career.
🫰🏻 Let’s Connect
If you’re following this challenge, or just passing by, I’d love to connect!
I’m always happy to help if you need guidance, want to swap ideas, or just chat about tech. 🚀
I’m also open to new opportunities, so if you have any inquiries or collaborations in mind, let me know!
- 🐙 GitHub
- ✍️ Dev.to Blog
- ✉️ Email Me