Michael Burbank

My Resume in My Pocket 24/7. Powered by AWS

From phones and wallets to car keys and coffee receipts, there's never been a perfect way to carry a polished PDF resume.

So I built a resume I can carry 24/7. A live website on AWS, secured with HTTPS, delivered globally through a CDN, and backed by a serverless visitor counter. In this post, I break down the milestones I completed over 18 days, from S3 + CloudFront (OAC private origin) to API Gateway, Lambda, SAM (SAM CLI), and DynamoDB.

Live Site: https://resume.michael-burbank.com/

I designed and built a cloud-hosted resume that's secure, automated, and powered by a serverless backend, not just "another static webpage".

What this project includes:


HTML + CSS

(resume structure and styling)

I built my resume as a real webpage using HTML for structure and CSS for styling. This makes it easy to update and share, and gives me more flexibility than a PDF that can easily go missing or fall out of date.

Amazon S3

(private origin for static assets)

I store the resume's static files (HTML/CSS/JS) in an S3 bucket and use it as a private CloudFront origin. The bucket is not public; CloudFront accesses it using Origin Access Control (OAC), so objects can only be fetched through CloudFront.
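
As a rough sketch of what that restriction looks like (bucket name, account ID, and distribution ARN below are placeholders, not my actual values), the bucket policy only allows the CloudFront service principal to read objects when the request comes from one specific distribution:

```python
import json
import boto3

s3 = boto3.client("s3")

# Placeholder names -- substitute your own bucket and distribution ARN.
BUCKET = "resume-site-bucket"
DISTRIBUTION_ARN = "arn:aws:cloudfront::123456789012:distribution/EXAMPLE123"

# Allow only CloudFront (via OAC) to fetch objects from the private bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipalReadOnly",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"StringEquals": {"AWS:SourceArn": DISTRIBUTION_ARN}},
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```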

Amazon CloudFront

(HTTPS + CDN for my resume subdomain)

To serve the site securely over HTTPS and improve load times, I placed CloudFront in front of the S3 bucket. CloudFront caches the site at edge locations and delivers it globally. When the resume changes, I update the content in S3 and then create a CloudFront invalidation to force edge caches to fetch the latest version from the origin. CloudFront uses Origin Access Control (OAC) to securely access my private S3 bucket, enforced by a restrictive bucket policy.
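
The invalidation step can be a small scripted call in the pipeline. Here's a hedged boto3 sketch (the distribution ID is a placeholder):

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Invalidate everything so edge locations re-fetch the updated resume from S3.
# "EXAMPLE123" is a placeholder distribution ID.
cloudfront.create_invalidation(
    DistributionId="EXAMPLE123",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # must be unique per invalidation
    },
)
```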

Custom DNS

(resume.michael-burbank.com)

My main domain points to an Amazon EC2-hosted personal website running on Amazon Linux (AL) 2023. So for the Cloud Resume Challenge, I created the subdomain resume.michael-burbank.com and routed it to the CloudFront distribution that serves my S3 resume. This keeps the two sites separate while still living under the same domain.
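
Since the runtime diagram below starts at Route 53, the routing boils down to an alias record pointing the subdomain at the distribution. A hedged sketch (hosted zone ID and CloudFront domain are placeholders):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder: hosted zone for michael-burbank.com
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "resume.michael-burbank.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z2FDTNDATAQYW2",  # fixed zone ID used for CloudFront aliases
                        "DNSName": "d111111abcdef8.cloudfront.net",  # placeholder distribution domain
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```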

JavaScript

(visitor counter on page load)

Whenever a visitor loads the site, JavaScript calls my API and renders the updated visitor count on the page.

API Gateway

(REST API layer)

Instead of letting the browser communicate directly with DynamoDB, I used API Gateway to expose a REST endpoint the website can call over HTTPS. API Gateway invokes Lambda, and Lambda is the only component that reads and updates the DynamoDB table.
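
To make the contract concrete, here is a rough client-side sketch in Python (the endpoint URL is a placeholder); the site's JavaScript does the same thing with fetch on page load:

```python
import json
import urllib.request

# Placeholder endpoint -- the real one is the API Gateway invoke URL.
API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/visitor-count"

# POST asks the backend to increment the count and return the new value.
request = urllib.request.Request(API_URL, method="POST")
with urllib.request.urlopen(request) as response:
    body = json.loads(response.read())

print(body["count"])  # the API responds with JSON like {"count": 1234}
```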

AWS Lambda

(backend compute)

Lambda handles the visitor count logic. It runs only when invoked, increments the count, and returns the updated value.

Python + boto3

(AWS SDK inside Lambda)

Within my Lambda function, I used Python and the AWS SDK (boto3) to interact with DynamoDB and return a clean JSON response back to the website.
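
A minimal sketch of what that handler can look like (the table name, key, and attribute names are assumptions for illustration, not my exact code):

```python
import json
import boto3

# Assumed table and key names for illustration.
TABLE_NAME = "visitor-count"
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def lambda_handler(event, context):
    """Atomically increment the visitor count and return it as JSON."""
    response = table.update_item(
        Key={"id": "resume"},
        UpdateExpression="ADD visits :inc",
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    count = int(response["Attributes"]["visits"])  # DynamoDB returns a Decimal

    # API Gateway proxy integration expects statusCode/headers/body.
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json",
            "Access-Control-Allow-Origin": "https://resume.michael-burbank.com",
        },
        "body": json.dumps({"count": count}),
    }
```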

Amazon DynamoDB

(on-demand visitor count storage)

I used DynamoDB on-demand capacity to store the visitor count, keeping costs low and removing the need for capacity planning.
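
On-demand mode is just a billing setting on the table. In this project the table is defined in SAM, but in boto3 terms it looks roughly like this (table name is assumed):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Illustrative only -- the real table is created by SAM/CloudFormation.
dynamodb.create_table(
    TableName="visitor-count",
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand: pay per request, no capacity planning
)
```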

AWS SAM / SAM CLI

(Infrastructure-as-Code (IaC))

I defined the backend infrastructure (DynamoDB, API Gateway, and Lambda) using AWS SAM, so I can deploy with the SAM CLI instead of clicking around in the AWS console. Provisioning and configuring infrastructure with IaC is my favorite part of building software projects.

Testing

(Python tests for backend logic)

I wrote pytest tests for the Lambda logic so changes can be validated automatically before deploying to prod.
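
A hedged sketch of what such a test can look like, assuming a handler module like the one sketched above and stubbing the DynamoDB table with unittest.mock:

```python
import json
from decimal import Decimal
from unittest.mock import MagicMock, patch

import app  # hypothetical module containing lambda_handler


def test_handler_returns_incremented_count():
    fake_table = MagicMock()
    fake_table.update_item.return_value = {"Attributes": {"visits": Decimal("42")}}

    # Swap the real DynamoDB table for the mock during the test.
    with patch.object(app, "table", fake_table):
        response = app.lambda_handler({}, None)

    assert response["statusCode"] == 200
    assert json.loads(response["body"]) == {"count": 42}
```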

GitLab CI/CD

(pipelines for front and backend)

I initially configured a working pipeline with GitHub Actions, as the challenge suggests, then migrated to GitLab CI so both repos use a single, consistent CI/CD approach. Each repo uses a .gitlab-ci.yml pipeline to reduce manual steps and automate deployments while improving repeatability.

GitHub Mirroring

(one push to main updates both platforms)

Maintaining my GitHub presence is a must, given that many companies and developers reference GitHub more often than GitLab. My solution? Use two remote repositories, keeping the project aligned with the minimal requirements and the source-control milestone. I mirrored my GitLab repositories to GitHub. This was the first time I had ever configured two parallel, in-sync remote repositories. When I push to main in GitLab (origin) from the CLI, the same commits are automatically pushed to the GitHub repo(s), keeping GitHub in sync with GitLab. I configured this for both the frontend and backend repositories.


Architecture Diagrams

Below are the runtime request flows (frontend + backend), followed by the deployment pipeline that ships changes to AWS.

Runtime: Frontend request flow

(Browser → Route 53 → CloudFront → S3)

Runtime architecture - frontend: Route 53 → CloudFront (HTTPS + edge cache) → S3 origin protected by OAC
CloudFront terminates HTTPS using ACM, serves cached content from edge locations, and uses OAC so only CloudFront can fetch objects from the private S3 origin.

Runtime: Backend request flow

(Browser → API Gateway → Lambda → DynamoDB)

Runtime architecture - backend: Browser calls API Gateway over HTTPS → API Gateway invokes Lambda → Lambda reads/updates DynamoDB → returns JSON count
The browser calls API Gateway over HTTPS (CORS enabled). API Gateway invokes Lambda (proxy integration). Lambda uses boto3 to GetItem (GET) or UpdateItem (POST/PUT) in DynamoDB, then returns JSON { "count": n }. CloudWatch captures logs/metrics.

Deployment & CI/CD Pipeline

(GitLab → AWS)

The deployment diagram illustrates the CI/CD pipeline: GitLab as origin mirroring to GitHub, GitLab CI pipelines for the frontend and backend, S3 and CloudFront as deployment targets, and SAM/CloudFormation for the serverless resources (API Gateway, Lambda, DynamoDB).
GitLab CI runs tests, deploys serverless resources via SAM/CloudFormation, syncs static site assets to S3, and invalidates CloudFront so edge caches refresh.


Feedback and Self-Awareness

I learned a lot through this project - not just AWS services, but how the puzzle pieces fit together in a real production workflow. These were the biggest skills I gained while building and shipping my Cloud Resume Challenge end-to-end:

GitHub to GitLab migration

I started with GitHub Actions (as the challenge suggested) but migrated to GitLab CI mid-project to standardize my pipelines. That forced me to think through runner behavior, environment variables, and deployment steps instead of copying a template. It also gave me a real "change-in-flight" experience without breaking production.

AWS SAM / SAM CLI fundamentals (IaC)

Defining the backend in SAM strengthened my ability to treat infrastructure as versioned code. Instead of "click ops", I can deploy repeatably, roll out changes safely, and keep my API/Lambda/DynamoDB configuration consistent across updates.

Two remotes + GitHub mirroring from GitLab

I configured my workflows so one push to main in GitLab also syncs to GitHub. This helped me keep my GitHub presence active while using GitLab CI as the primary CI/CD platform instead of GitHub Actions. It was also my first time running a multi-remote workflow cleanly.

REST API design practices

I strengthened my ability to keep frontend and backend behavior separate while maintaining a clean contract between the two. Designing the endpoint around "increment and return the updated count" kept the frontend simple and the database logic server-side, where it belongs.

API Gateway (real API layer, no direct DB calls)

API Gateway became the front door: HTTPS access, routing, and a clean interface between the browser and Lambda. It made the whole design feel like a real system rather than "JavaScript communicating with the database".

GitLab CI pipelines (automation + repeatability)

Building pipelines for both the frontend and backend reinforced how much automation reduces human error. Once it worked, deployments stopped being a "process" and became a push-to-main routine.

Docker (Desktop) for local API and DB testing

Using Docker for local API/DB testing taught me how to validate logic without relying on cloud deployments for every change. That feedback loop is faster, cheaper, and much closer to how teams develop and test as part of the Software Development Life Cycle (SDLC).


If you’re doing the Cloud Resume Challenge, drop your link — I’ll check it out!
