zach beecher

AWS Cloud Resume Challenge 2025: An In-Depth Explanation

*If you don't know what the challenge is, here it is in all its glory: https://cloudresumechallenge.dev/docs/the-challenge/aws/*

I recommend building this out from scratch, doing deep research on your own and struggling through it, because that's how you'll learn. Then come back here to compare your approach against the detailed walkthrough below.

🌐 Frontend
Write your resume in HTML and style it with CSS
Host it as a static website on Amazon S3
Secure it with HTTPS via CloudFront
Point a custom domain to it using Route 53 or another DNS provider

🔒 Visitor Counter
Add a JavaScript-based counter to track site visits
Store the count in DynamoDB
Create a serverless API using API Gateway + AWS Lambda
Write the Lambda function in Python using the boto3 library
πŸ” DevOps & Testing
Use Infrastructure as Code (like Terraform or CloudFormation)
Set up CI/CD pipelines with GitHub Actions
Write unit tests for your backend code

**Week 1:** Converted my resume into HTML/CSS, then uploaded it to a free-tier S3 bucket. Bought a domain name on Namecheap ($1, what a deal); looking back, I should have spent the ~$20 on Route 53 instead, since Namecheap's URL redirect (DNS) isn't as robust as Route 53's. Set up CloudFront for caching. Success: resume on the web as a static site in an S3 bucket.

**Week 2:** JS-based counter on the website. Store the count in DynamoDB. Create a serverless API using API Gateway + Lambda. Write the Lambda function in Python using the boto3 library.

**Week 3:** DevOps time. GitHub Actions: set up a repo on GitHub with a CI/CD pipeline, and write unit tests for the backend code.

Why do all this? It demonstrates real-world cloud skills

Covers key AWS services: S3, Lambda, DynamoDB, CloudFront, Route 53 (or any DNS), IAM

Builds a portfolio project that’s actually worth showing off.

Helps prep for certifications like AWS Cloud Practitioner

Steps 1-3 are the easy part: write the HTML resume (GPT can draft one in 10 seconds), then host it on an S3 bucket (upload the files). I'd recommend buying the domain name through Route 53 (~$20), as going third-party creates more headaches, like needing to generate an SSL certificate, which can be tricky to get in a timely manner.
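For reference, here's roughly what the S3 side looks like from the AWS CLI. This is a sketch: the bucket name is a placeholder (bucket names are globally unique), and `./site` is assumed to be your local folder of resume files.

bash
# Create the bucket and enable static website hosting
aws s3 mb s3://your-resume-bucket --region us-east-1
aws s3 website s3://your-resume-bucket \
  --index-document index.html --error-document error.html

# Upload the resume files
aws s3 sync ./site s3://your-resume-bucket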

*Detailed steps below:*

Connecting an S3 Bucket to CloudFront for HTTPS
To enable HTTPS for your S3 website using CloudFront, follow these steps:

Prerequisites
An S3 bucket configured for static website hosting

Your website content uploaded to the bucket

Step-by-Step Configuration

  1. Create a CloudFront Distribution

Open the CloudFront console

Click "Create Distribution" (the older console had a "Web" > "Get Started" step; the current console goes straight to the origin form)

  2. Configure Origin Settings

Origin Domain Name: Select your S3 bucket from the dropdown (use the format bucket-name.s3.amazonaws.com for the REST API endpoint)

Origin Path: Leave blank unless your content is in a subfolder

Origin ID: Automatically generated (you can customize if needed)

Restrict Bucket Access: Select "Yes" for better security

Origin Access Identity: Create a new OAI or use an existing one (note: AWS now recommends the newer Origin Access Control (OAC) over legacy OAI)

Grant Read Permissions on Bucket: Select "Yes, Update Bucket Policy"

  3. Configure Default Cache Behavior Settings

Viewer Protocol Policy: Select "Redirect HTTP to HTTPS"

Allowed HTTP Methods: GET, HEAD (for static sites) or add others if needed

Cache Policy: Choose "CachingOptimized" for static content

Origin Request Policy: Leave as "None" for simple cases

  4. Configure Distribution Settings

Price Class: Choose based on your needs (e.g., "Use All Edge Locations")

Alternate Domain Names (CNAMEs): Enter your custom domain if you have one

SSL Certificate:

If using a custom domain, choose "Custom SSL Certificate" and select/request one from ACM

Otherwise, use "Default CloudFront Certificate"

Default Root Object: Enter "index.html" (or your default page)

  5. Create the Distribution

Click "Create Distribution"

Wait for deployment (this may take 10-40 minutes)

  6. Update Your DNS (if using a custom domain)

Create a CNAME record pointing your domain to the CloudFront distribution domain (e.g., d111111abcdef8.cloudfront.net)
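If your DNS lives in Route 53, an alias A record is a better fit than a CNAME (it works at the zone apex too). A sketch follows; the zone ID and domain are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted-zone ID that CloudFront aliases always use:

bash
aws route53 change-resource-record-sets \
  --hosted-zone-id YOUR_ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "resume.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d111111abcdef8.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'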

Important Notes:
After setup, access your site via the CloudFront URL (not the S3 URL) to benefit from HTTPS

If you previously used the S3 website endpoint (format: bucket-name.s3-website-region.amazonaws.com), you'll need to switch to the REST endpoint format for CloudFront

Clear cache in CloudFront if you update your content (or implement cache invalidation)

This setup provides HTTPS encryption, improved performance through caching, and better security by restricting direct access to your S3 bucket.
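If the custom-domain certificate is the part that bites you, note that requesting it is a single call. CloudFront only accepts ACM certificates issued in us-east-1; the domain below is a placeholder:

bash
aws acm request-certificate \
  --domain-name resume.example.com \
  --validation-method DNS \
  --region us-east-1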

**Now for the JavaScript part (step 7):**

Implementing a Visitor Counter with JavaScript for Your S3 Resume Website
Here's how to add a visitor counter to your static website hosted on S3 using JavaScript. Since S3 is static hosting (no server-side processing), we'll use a combination of JavaScript and AWS services to track visits.

Option 1: Simple Counter Using localStorage (Client-Side Only)
javascript
// visitor-counter.js
document.addEventListener('DOMContentLoaded', function() {
    // Check if localStorage is available
    if (typeof(Storage) !== "undefined") {
        // Get or initialize the counter (stored values come back as strings)
        let visitCount = parseInt(localStorage.getItem('visitCount') || '0', 10);

        // Increment the counter
        visitCount++;

        // Store the updated count
        localStorage.setItem('visitCount', visitCount);

        // Display the count
        document.getElementById('visitor-counter').textContent =
            `You are visitor number ${visitCount}`;
    } else {
        document.getElementById('visitor-counter').textContent =
            "Visitor counter not available";
    }
});
HTML:

html
<div id="visitor-counter">Loading visitor count...</div>

Limitation: This only tracks visits per browser/device.

Option 2: Using AWS Lambda + API Gateway + DynamoDB (More Accurate)
For a more accurate counter across all visitors:

  1. Set up the backend: Create a DynamoDB table:

Table name: VisitorCounts

Primary key: pageId (string)

Create a Lambda function (Node.js):

javascript
// Lambda function to update and return the visitor count
const AWS = require('aws-sdk'); // aws-sdk v2 is preinstalled only on Node.js <= 16.x runtimes
const dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
    const pageId = 'resume-page'; // or your unique identifier

    // Atomically increment the count
    const updateParams = {
        TableName: 'VisitorCounts',
        Key: { pageId },
        UpdateExpression: 'ADD #count :incr',
        ExpressionAttributeNames: { '#count': 'count' },
        ExpressionAttributeValues: { ':incr': 1 },
        ReturnValues: 'UPDATED_NEW'
    };

    try {
        const result = await dynamo.update(updateParams).promise();
        return {
            statusCode: 200,
            body: JSON.stringify({ count: result.Attributes.count }),
            headers: { 'Access-Control-Allow-Origin': '*' }
        };
    } catch (error) {
        return { statusCode: 500, body: JSON.stringify(error) };
    }
};
Create API Gateway:

REST API

Create a GET method that triggers your Lambda

Enable CORS

  2. Client-side JavaScript:

javascript
// visitor-counter.js
document.addEventListener('DOMContentLoaded', function() {
    const counterElement = document.getElementById('visitor-counter');

    // Replace with your API Gateway endpoint
    const apiUrl = 'https://your-api-id.execute-api.region.amazonaws.com/prod/counter';

    fetch(apiUrl)
        .then(response => response.json())
        .then(data => {
            counterElement.textContent =
                `You are visitor number ${data.count}`;
        })
        .catch(error => {
            console.error('Error fetching visitor count:', error);
            counterElement.textContent = 'Visitor count not available';
        });
});

  3. HTML Implementation:

html
<div id="visitor-counter">Loading visitor count...</div>

Option 3: Using AWS CloudFront + Lambda@Edge
For a more advanced solution with better performance:

Create a Lambda@Edge function that triggers on viewer requests

The function would update DynamoDB with each visit

Inject the count into your page

Security Considerations:
For the API Gateway solution, consider adding rate limiting

For production use, you might want to implement some bot detection

The localStorage solution is simplest but least accurate

**Step 8: set up DynamoDB (aka the database)**

Implementing a DynamoDB-Backed Visitor Counter for Your Resume Website
Let's set up a complete solution using DynamoDB to track visitors accurately. This approach will work with your S3-hosted static website and provide a persistent count across all visitors.

Step 1: Create the DynamoDB Table
Go to the DynamoDB console

Click "Create table"

Enter:

Table name: WebsiteVisitorCounts

Partition key: pageId (String)

Under "Table settings":

Select "Customize settings"

Capacity mode: "On-demand"

Click "Create table"

Step 2: Create a Lambda Function to Handle Counts
Go to the Lambda console

Click "Create function"

Select:

"Author from scratch"

Function name: updateVisitorCount

Runtime: Node.js (note: the aws-sdk v2 package used below is only preinstalled on runtimes up to Node.js 16.x; on newer runtimes, bundle it with your code or use SDK v3)

Click "Create function"

Paste this code:

javascript
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();
const TABLE_NAME = 'WebsiteVisitorCounts';

exports.handler = async (event) => {
    const pageId = 'resume-homepage'; // Unique ID for your page

    const params = {
        TableName: TABLE_NAME,
        Key: { pageId },
        UpdateExpression: 'SET #count = if_not_exists(#count, :start) + :incr',
        ExpressionAttributeNames: { '#count': 'count' },
        ExpressionAttributeValues: { ':incr': 1, ':start': 0 },
        ReturnValues: 'UPDATED_NEW'
    };

    try {
        const data = await dynamodb.update(params).promise();
        return {
            statusCode: 200,
            body: JSON.stringify({ count: data.Attributes.count }),
            headers: {
                'Access-Control-Allow-Origin': '*',
                'Access-Control-Allow-Methods': 'GET, OPTIONS'
            }
        };
    } catch (err) {
        console.error('Error:', err);
        return {
            statusCode: 500,
            body: JSON.stringify({ error: 'Could not update count' })
        };
    }
};
Step 3: Set Up API Gateway
Go to API Gateway console

Click "Create API" > "REST API" > "Build"

Choose:

Protocol: REST

Create new API

API name: VisitorCounterAPI

Endpoint Type: Regional

Click "Create API"

Create a new resource:

Actions > Create Resource

Resource name: counter

Click "Create Resource"

Create a GET method:

Select the /counter resource

Actions > Create Method > GET

Integration type: Lambda Function

Check "Use Lambda Proxy integration"

Select your updateVisitorCount Lambda

Click "Save"

Enable CORS:

Select the /counter resource

Actions > Enable CORS

Keep default settings

Click "Enable CORS and replace existing CORS headers"

Deploy the API:

Actions > Deploy API

Deployment stage: [New Stage]

Stage name: prod

Click "Deploy"

Step 4: Update Your Website JavaScript
Add this to your HTML:

html
<div id="visitor-counter">Loading visitor count...</div>

<script>
document.addEventListener('DOMContentLoaded', function() {
    // Replace with your actual API Gateway endpoint
    const apiUrl = 'https://your-api-id.execute-api.region.amazonaws.com/prod/counter';

    fetch(apiUrl)
        .then(response => response.json())
        .then(data => {
            document.getElementById('visitor-counter').textContent =
                `You are visitor number ${data.count}`;
        })
        .catch(error => {
            console.error('Error fetching visitor count:', error);
            document.getElementById('visitor-counter').textContent =
                'Visitor count not available';
        });
});
</script>

Step 5: Set Up Permissions
Go to your Lambda function

Under "Configuration" > "Permissions"

Click the execution role name to edit in IAM

Add these permissions:

dynamodb:UpdateItem

dynamodb:GetItem

(Attach the AmazonDynamoDBFullAccess policy if you prefer)

Step 6: Initial Test
Manually add an initial record to DynamoDB:

Go to DynamoDB console

Select your table

Click "Create item"

Enter:

pageId: "resume-homepage"

count: 0 (Number)

Save
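If you prefer the CLI, the same seed item can be written in one command (table and key names match the console steps above):

bash
aws dynamodb put-item \
  --table-name WebsiteVisitorCounts \
  --item '{"pageId": {"S": "resume-homepage"}, "count": {"N": "0"}}'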

Cost Considerations
With on-demand DynamoDB:

The AWS Free Tier includes 25 GB of DynamoDB storage; on-demand requests themselves are billed per use

Request pricing is roughly $1.25 per million writes and $0.25 per million reads

For a personal resume site, the counter will cost pennies at most

Advanced Options
Add caching: To reduce DynamoDB calls, you could:

Use localStorage to cache the count for returning visitors

Implement CloudFront caching for the API response

Enhance tracking: You could add:

Date-based tracking

Unique visitor detection (using cookies)

Geographic information

**API time! Lambda GET requests**

Creating a Secure API for Your Visitor Counter Using API Gateway and Lambda
I'll guide you through setting up a proper API architecture that prevents direct database access from your frontend JavaScript. Here's the complete implementation:

Architecture Overview
text
[S3 Website] → [API Gateway] → [Lambda] → [DynamoDB]
Step 1: Create the Lambda Function
Go to the Lambda Console

Click "Create function"

Configure:

Function name: visitorCounterAPI

Runtime: Node.js 18.x (heads-up: aws-sdk v2, used below, is not preinstalled on 18.x; bundle it with your deployment package or port the code to SDK v3)

Architecture: x86_64

Click "Create function"

Paste this code:

javascript
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();
const TABLE_NAME = process.env.TABLE_NAME || 'WebsiteVisitorCounts';

exports.handler = async (event) => {
    // Validate the HTTP method. REST proxy events expose event.httpMethod;
    // HTTP API (payload v2.0) events nest it under requestContext.http.method.
    const method = event.httpMethod ||
        (event.requestContext && event.requestContext.http && event.requestContext.http.method);
    if (method !== 'GET') {
        return {
            statusCode: 405,
            body: JSON.stringify({ error: 'Method Not Allowed' }),
            headers: { 'Access-Control-Allow-Origin': '*' }
        };
    }

    const pageId = 'resume-homepage'; // Your unique identifier

    try {
        // Atomic update of the counter
        const result = await dynamodb.update({
            TableName: TABLE_NAME,
            Key: { pageId },
            UpdateExpression: 'SET #count = if_not_exists(#count, :start) + :incr',
            ExpressionAttributeNames: { '#count': 'count' },
            ExpressionAttributeValues: { ':incr': 1, ':start': 0 },
            ReturnValues: 'UPDATED_NEW'
        }).promise();

        return {
            statusCode: 200,
            body: JSON.stringify({
                count: result.Attributes.count,
                timestamp: new Date().toISOString()
            }),
            headers: {
                'Access-Control-Allow-Origin': '*',
                'Content-Type': 'application/json'
            }
        };
    } catch (error) {
        console.error('DynamoDB Error:', error);
        return {
            statusCode: 500,
            body: JSON.stringify({ error: 'Internal Server Error' }),
            headers: { 'Access-Control-Allow-Origin': '*' }
        };
    }
};
Step 2: Configure API Gateway
Go to API Gateway Console

Create new HTTP API (not REST API for simpler CORS)

Click "Add integration"

Select Lambda

Choose your visitorCounterAPI function

API name: VisitorCounterAPI

Configure routes:

Path: /count

Method: GET

Click "Next" then "Create"

Step 3: Set Up CORS
In your API Gateway:

Select your API

Click "CORS" in the left menu

Configure:

Allow origins: * (or your specific domain)

Allow methods: GET

Allow headers: Content-Type

Click "Save"

Step 4: Set Up Lambda Permissions
Go to your Lambda function

Under "Configuration" > "Permissions"

Click the execution role name

Attach these policies:

AWSLambdaBasicExecutionRole

Create custom policy for DynamoDB access:

json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:UpdateItem",
"dynamodb:GetItem"
],
"Resource": "arn:aws:dynamodb:YOUR_REGION:YOUR_ACCOUNT_ID:table/WebsiteVisitorCounts"
}
]
}
Step 5: Implement Frontend JavaScript
javascript
// In your S3-hosted HTML file
async function updateVisitorCount() {
    const counterElement = document.getElementById('visitor-counter');
    if (!counterElement) return;

    try {
        // Replace with your actual API Gateway endpoint
        const apiUrl = 'https://your-api-id.execute-api.region.amazonaws.com/count';

        const response = await fetch(apiUrl, {
            method: 'GET',
            headers: { 'Content-Type': 'application/json' }
        });

        if (!response.ok) throw new Error('Network response was not ok');

        const data = await response.json();
        counterElement.textContent = `Visitor Count: ${data.count}`;
    } catch (error) {
        console.error('Error fetching visitor count:', error);
        counterElement.textContent = 'Visitor count unavailable';
    }
}

// Call when page loads
document.addEventListener('DOMContentLoaded', updateVisitorCount);
Step 6: Deploy and Test
Deploy your API:

In API Gateway, select your API

Click "Deploy"

Create new stage (e.g., "prod")

Note the invoke URL

Test the endpoint:

bash
curl https://your-api-id.execute-api.region.amazonaws.com/prod/count
Check DynamoDB to verify the count is incrementing
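You can do that last check from the CLI too (table and key names match the earlier setup):

bash
# Read the current counter value straight from DynamoDB
aws dynamodb get-item \
  --table-name WebsiteVisitorCounts \
  --key '{"pageId": {"S": "resume-homepage"}}'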

Security Enhancements
Rate Limiting:

In API Gateway, set stage-level throttling (HTTP APIs) or create a usage plan (REST APIs)

Set throttling to ~5 requests/second

API Keys (optional; natively supported on REST APIs only):

Create API key in API Gateway

Require key in your frontend requests

Input Validation:

The Lambda already validates it's a GET request

You could add more validation if needed

Cost Analysis
API Gateway: First 1 million requests/month are free ($1.00/million after)

Lambda: 1 million free requests/month ($0.20/million after)

DynamoDB: On-demand pricing, minimal cost for this use case

For a personal resume site, you'll likely stay well within free tier limits.

**Python time!**

Implementing the Visitor Counter in Python with Lambda and Boto3
Let's rewrite the Lambda function in Python using the boto3 library for AWS operations. This will give you exposure to Python for backend development while maintaining all the security benefits of the API Gateway architecture.

Python Lambda Function Code
python
import json
import os
from datetime import datetime

import boto3

dynamodb = boto3.resource('dynamodb')
table_name = os.environ.get('TABLE_NAME', 'WebsiteVisitorCounts')
table = dynamodb.Table(table_name)

def lambda_handler(event, context):
    # Validate the HTTP method. REST proxy events expose 'httpMethod';
    # HTTP API (payload v2.0) events nest it under requestContext.http.method.
    method = event.get('httpMethod') or \
        event.get('requestContext', {}).get('http', {}).get('method')
    if method != 'GET':
        return {
            'statusCode': 405,
            'body': json.dumps({'error': 'Method Not Allowed'}),
            'headers': {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Origin': '*'
            }
        }

    page_id = 'resume-homepage'  # Your unique identifier

    try:
        # Atomic update of the counter
        response = table.update_item(
            Key={'pageId': page_id},
            UpdateExpression='SET #count = if_not_exists(#count, :start) + :incr',
            ExpressionAttributeNames={'#count': 'count'},
            ExpressionAttributeValues={':incr': 1, ':start': 0},
            ReturnValues='UPDATED_NEW'
        )

        return {
            'statusCode': 200,
            'body': json.dumps({
                # DynamoDB returns numbers as Decimal, which json.dumps
                # can't serialize, so cast to int first
                'count': int(response['Attributes']['count']),
                'timestamp': datetime.utcnow().isoformat()
            }),
            'headers': {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Origin': '*'
            }
        }

    except Exception as e:
        print(f"Error updating DynamoDB: {str(e)}")
        return {
            'statusCode': 500,
            'body': json.dumps({'error': 'Internal Server Error'}),
            'headers': {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Origin': '*'
            }
        }

Key Python Concepts Used
Boto3 Library: AWS's official Python SDK for interacting with AWS services

Environment Variables: Using os.environ for configuration

Error Handling: Python's try-except blocks

Dictionary Manipulation: Working with AWS response objects

JSON Handling: The json module for serialization

Deployment Steps
Create the Lambda Function:

Go to AWS Lambda Console

Click "Create function"

Select "Python 3.9" or later runtime

Name: pythonVisitorCounter

Upload the Code:

Paste the Python code above into the inline editor

Or package it as a ZIP file if you have dependencies

Set Environment Variable:

In Configuration → Environment variables

Add: TABLE_NAME = WebsiteVisitorCounts

Set Permissions:

Ensure the Lambda execution role has permissions to:

dynamodb:UpdateItem

dynamodb:GetItem

Attach the AmazonDynamoDBFullAccess policy if needed

Connect to API Gateway:

Use the same API Gateway setup as before

Just change the Lambda integration to point to your Python function

Python-Specific Benefits
Cleaner Syntax: Python's readability makes the code more maintainable

Optional Type Hints: Better error checking with tools like mypy

Rich Ecosystem: Access to Python's extensive libraries if you need to add functionality

Better Error Handling: More granular exception handling than JavaScript

Testing the Python Function
You can test locally before deploying:

Install Python 3.9+

Install dependencies:

bash
pip install boto3
Create a test event (save as test_event.json):

json
{
"httpMethod": "GET"
}
Test locally (with AWS credentials configured):

python
import lambda_function
print(lambda_function.lambda_handler({"httpMethod": "GET"}, None))
Cost Considerations
The Python implementation has the same cost profile as the JavaScript version:

First 1M Lambda requests free

Very minimal DynamoDB costs

API Gateway free tier covers typical personal site traffic

Next Steps
Consider adding input validation

Add logging with Python's logging module (see the sketch after this list)

Implement caching if needed

Set up monitoring with CloudWatch
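As an example of that logging step, a minimal sketch (the counter logic itself is elided; Lambda forwards logging output to CloudWatch Logs automatically):

python
import logging

# Lambda preconfigures the root logger; just set the level
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info("Visitor counter invoked, method=%s", event.get('httpMethod'))
    # ... the counter logic from earlier goes here ...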

**Test time! Unit tests / integration tests**

Writing Tests for Your Python Lambda Function
Let's implement a comprehensive test suite for your visitor counter Lambda function using Python's built-in unittest framework. We'll test both the happy path and error scenarios.

Test Implementation
Create a new file named test_lambda_function.py with these tests:

python
import unittest
from unittest.mock import patch
import json

import lambda_function

class TestVisitorCounterLambda(unittest.TestCase):

    def setUp(self):
        self.event = {
            'httpMethod': 'GET',
            'headers': {}
        }
        self.context = {}

    # Patch the module-level table object: it is created at import time,
    # so patching dynamodb.Table afterwards would have no effect
    @patch('lambda_function.table')
    def test_successful_count_update(self, mock_table):
        # Configure the mock DynamoDB response
        mock_table.update_item.return_value = {
            'Attributes': {'count': 42},
            'ResponseMetadata': {'HTTPStatusCode': 200}
        }

        # Call the function
        result = lambda_function.lambda_handler(self.event, self.context)

        # Verify results
        self.assertEqual(result['statusCode'], 200)
        response_body = json.loads(result['body'])
        self.assertEqual(response_body['count'], 42)
        self.assertIn('timestamp', response_body)
        self.assertEqual(result['headers']['Content-Type'], 'application/json')

        # Verify DynamoDB was called correctly
        mock_table.update_item.assert_called_once_with(
            Key={'pageId': 'resume-homepage'},
            UpdateExpression='SET #count = if_not_exists(#count, :start) + :incr',
            ExpressionAttributeNames={'#count': 'count'},
            ExpressionAttributeValues={':incr': 1, ':start': 0},
            ReturnValues='UPDATED_NEW'
        )

    @patch('lambda_function.table')
    def test_wrong_http_method(self, mock_table):
        test_event = {'httpMethod': 'POST'}

        result = lambda_function.lambda_handler(test_event, self.context)

        self.assertEqual(result['statusCode'], 405)
        response_body = json.loads(result['body'])
        self.assertEqual(response_body['error'], 'Method Not Allowed')
        mock_table.update_item.assert_not_called()

    @patch('lambda_function.table')
    def test_dynamodb_error_handling(self, mock_table):
        # Simulate a DynamoDB error
        mock_table.update_item.side_effect = Exception('DB Error')

        result = lambda_function.lambda_handler(self.event, self.context)

        self.assertEqual(result['statusCode'], 500)
        response_body = json.loads(result['body'])
        self.assertEqual(response_body['error'], 'Internal Server Error')

    def test_custom_table_name(self):
        # TABLE_NAME is read once at import time, so reload the module
        # with the environment variable patched to test the override
        import importlib
        env = {'TABLE_NAME': 'custom-table', 'AWS_DEFAULT_REGION': 'us-east-1'}
        with patch.dict('os.environ', env):
            importlib.reload(lambda_function)
            self.assertEqual(lambda_function.table_name, 'custom-table')
        # Restore the module's default state for the other tests
        with patch.dict('os.environ', {'AWS_DEFAULT_REGION': 'us-east-1'}):
            importlib.reload(lambda_function)


if __name__ == '__main__':
    unittest.main()
Key Testing Concepts Covered
Mocking AWS Services: Using unittest.mock to simulate DynamoDB

Happy Path Testing: Verifying successful counter increments

Error Scenarios: Testing wrong HTTP methods and database failures

Configuration Testing: Checking environment variable handling

Response Validation: Ensuring proper status codes and headers

Test Dependencies
Add these to a requirements-test.txt file:

text
boto3==1.26.*
moto==4.1.*
pytest==7.4.*
pytest-cov==4.1.*
Running the Tests
Install test dependencies:

bash
pip install -r requirements-test.txt
Run tests with unittest:

bash
python -m unittest test_lambda_function.py
Or with pytest (for more detailed output):

bash
pytest test_lambda_function.py -v
For coverage report:

bash
pytest --cov=lambda_function test_lambda_function.py
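Since moto is already in the test dependencies, you can also run the handler against a fully mocked DynamoDB instead of hand-rolled mocks. A sketch using moto 4.x (the decorator is mock_aws in moto 5+; the region and table name are assumptions matching the earlier setup):

python
import json
import os

import boto3
from moto import mock_dynamodb


@mock_dynamodb
def test_counter_against_mocked_dynamodb():
    # Create the table the handler expects inside the mocked environment
    os.environ['TABLE_NAME'] = 'WebsiteVisitorCounts'
    os.environ['AWS_DEFAULT_REGION'] = 'us-east-1'
    boto3.resource('dynamodb').create_table(
        TableName='WebsiteVisitorCounts',
        KeySchema=[{'AttributeName': 'pageId', 'KeyType': 'HASH'}],
        AttributeDefinitions=[{'AttributeName': 'pageId', 'AttributeType': 'S'}],
        BillingMode='PAY_PER_REQUEST'
    )

    # Import (or reload) inside the mock so the module-level boto3
    # resource binds to the fake endpoint
    import importlib
    import lambda_function
    importlib.reload(lambda_function)

    result = lambda_function.lambda_handler({'httpMethod': 'GET'}, None)
    assert result['statusCode'] == 200
    assert json.loads(result['body'])['count'] == 1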
Advanced Testing Considerations
Integration Tests:

Deploy to a test AWS environment

Use actual API Gateway and DynamoDB resources

Test the full stack

Performance Tests:

Check Lambda cold start times

Verify DynamoDB latency

Security Tests:

Validate IAM permissions

Check for sensitive data exposure

Example Integration Test
python
import json
import os
import unittest

import boto3

class TestIntegration(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Set up a real DynamoDB table (runs once before all tests)
        cls.dynamodb = boto3.resource('dynamodb')
        cls.table_name = 'test-website-counts'
        cls.table = cls.dynamodb.create_table(
            TableName=cls.table_name,
            KeySchema=[{'AttributeName': 'pageId', 'KeyType': 'HASH'}],
            AttributeDefinitions=[{'AttributeName': 'pageId', 'AttributeType': 'S'}],
            BillingMode='PAY_PER_REQUEST'
        )
        cls.table.wait_until_exists()  # table creation is asynchronous

        # TABLE_NAME is read at import time, so set it before importing
        os.environ['TABLE_NAME'] = cls.table_name
        global lambda_function
        import lambda_function

    def test_real_dynamodb_integration(self):
        # Initial invocation
        response1 = lambda_function.lambda_handler({'httpMethod': 'GET'}, None)
        self.assertEqual(response1['statusCode'], 200)

        # Verify the increment works
        response2 = lambda_function.lambda_handler({'httpMethod': 'GET'}, None)
        count1 = json.loads(response1['body'])['count']
        count2 = json.loads(response2['body'])['count']
        self.assertEqual(count2, count1 + 1)

    @classmethod
    def tearDownClass(cls):
        # Clean up the test table
        cls.table.delete()

Test Organization Best Practices
Keep unit tests fast and isolated

Put integration tests in separate files

Use mocking for AWS services in unit tests

Test both success and failure paths

Include edge cases (empty inputs, rate limits, etc.)

**IaC time! Infrastructure as Code (the challenge suggests Terraform; I used AWS SAM)**

Implementing Infrastructure as Code with AWS SAM

Let's convert your entire visitor counter solution to Infrastructure as Code using AWS SAM (Serverless Application Model). This approach will make your infrastructure reproducible, version-controlled, and easily deployable.

AWS SAM Template (template.yaml)

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Visitor counter for resume website

Globals:
  Function:
    Timeout: 10
    Runtime: python3.9
    MemorySize: 128
    Environment:
      Variables:
        TABLE_NAME: !Ref VisitorCountTable

Resources:
  VisitorCounterFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: visitor_counter/
      Handler: lambda_function.lambda_handler
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref VisitorCountTable
      Events:
        ApiEvent:
          Type: Api
          Properties:
            Path: /count
            Method: GET
            RestApiId: !Ref VisitorCounterApi

  VisitorCounterApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      Cors:
        AllowMethods: "'GET'"
        AllowOrigin: "'*'"
        AllowHeaders: "'Content-Type'"

  VisitorCountTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: WebsiteVisitorCounts
      AttributeDefinitions:
        - AttributeName: pageId
          AttributeType: S
      KeySchema:
        - AttributeName: pageId
          KeyType: HASH
      BillingMode: PAY_PER_REQUEST

Outputs:
  ApiEndpoint:
    Description: "API Gateway endpoint URL for Prod stage"
    Value: !Sub "https://${VisitorCounterApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/count"
  LambdaFunctionName:
    Description: "Visitor Counter Lambda Function Name"
    Value: !Ref VisitorCounterFunction

Project Structure

/resume-visitor-counter/
├── template.yaml            # SAM template
├── visitor_counter/
│   ├── lambda_function.py   # Lambda code
│   └── requirements.txt     # Python dependencies
├── tests/                   # Test files
└── README.md                # Deployment instructions

Lambda Function Code (visitor_counter/lambda_function.py)

import json
import os
from datetime import datetime
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(os.environ['TABLE_NAME'])

def lambda_handler(event, context):
    # API Gateway proxy events wrap the HTTP request
    http_method = event.get('httpMethod', '')

    if http_method != 'GET':
        return {
            'statusCode': 405,
            'body': json.dumps({'error': 'Method Not Allowed'}),
            'headers': {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Origin': '*'
            }
        }

    page_id = 'resume-homepage'

    try:
        response = table.update_item(
            Key={'pageId': page_id},
            UpdateExpression='SET #count = if_not_exists(#count, :start) + :incr',
            ExpressionAttributeNames={'#count': 'count'},
            ExpressionAttributeValues={':incr': 1, ':start': 0},
            ReturnValues='UPDATED_NEW'
        )

        return {
            'statusCode': 200,
            'body': json.dumps({
                # Cast DynamoDB's Decimal so json.dumps can serialize it
                'count': int(response['Attributes']['count']),
                'timestamp': datetime.utcnow().isoformat()
            }),
            'headers': {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Origin': '*'
            }
        }

    except Exception as e:
        print(f"Error updating DynamoDB: {str(e)}")
        return {
            'statusCode': 500,
            'body': json.dumps({'error': 'Internal Server Error'}),
            'headers': {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Origin': '*'
            }
        }

Deployment Steps

  1. Install the AWS SAM CLI:

   pip install aws-sam-cli

  2. Initialize the project (if starting from scratch):

   sam init --runtime python3.9 --app-template hello-world --name resume-visitor-counter

  3. Build the application:

   sam build

  4. Deploy to AWS:

   sam deploy --guided

  • Follow the prompts to configure the stack name, AWS region, etc.
  • Confirm the deployment when prompted

  5. View the outputs:

   aws cloudformation describe-stacks \
     --stack-name resume-visitor-counter \
     --query 'Stacks[0].Outputs'
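Before deploying, you can also exercise the stack locally with the SAM CLI (Docker is required; the event file path is an assumption):

   # Invoke the function once with a canned event
   sam local invoke VisitorCounterFunction --event events/get.json

   # Or serve the API at http://127.0.0.1:3000/count
   sam local start-api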

Key Benefits of This Approach

  1. Reproducibility: Entire infrastructure defined in code
  2. Version Control: Track changes alongside application code
  3. Easy Updates: Modify template and redeploy
  4. Environment Parity: Same configuration across dev/test/prod
  5. Rollback Capability: Revert to previous versions if needed

Advanced IaC Features

  1. Parameters: Make configurations dynamic
   Parameters:
     TableName:
       Type: String
       Default: WebsiteVisitorCounts

  2. Custom Domains: Add your own domain to API Gateway
   Domain:
     DomainName: api.yourdomain.com
     CertificateArn: !Ref SSLCertificate
     Route53:
       HostedZoneId: !Ref HostedZone

  3. Multiple Environments: Use SAM with different configs (see the samconfig.toml sketch after this list)
   sam deploy --config-env production

  4. CI/CD Integration: Add to your deployment pipeline
   # Example GitHub Actions step
   - name: Deploy with SAM
     run: sam deploy --no-confirm-changeset
     env:
       AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
       AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
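For reference, the environments that --config-env selects are defined in samconfig.toml at the project root. A minimal sketch (the stack names are assumptions, not values from the deployed project):

   # samconfig.toml
   version = 0.1

   [default.deploy.parameters]
   stack_name = "resume-visitor-counter-dev"
   region = "us-east-1"
   capabilities = "CAPABILITY_IAM"
   resolve_s3 = true

   [production.deploy.parameters]
   stack_name = "resume-visitor-counter-prod"
   region = "us-east-1"
   capabilities = "CAPABILITY_IAM"
   resolve_s3 = true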


**Source control time! GitHub, version control, and CI/CD**

Setting Up CI/CD with GitHub Actions for Your Resume Website

Let's implement a complete CI/CD pipeline for both your frontend (S3 website) and backend (Lambda/DynamoDB/API Gateway) using GitHub Actions. This will automatically deploy changes whenever you push to your repository.

1. Repository Structure

resume-website/
├── backend/
│   ├── template.yaml       # SAM template
│   ├── lambda_function.py  # Lambda code
│   └── requirements.txt    # Python dependencies
├── frontend/
│   ├── index.html          # Your resume page
│   ├── styles.css          # CSS files
│   └── visitor-counter.js  # Frontend JavaScript
├── .github/
│   └── workflows/
│       ├── backend.yml     # Backend CI/CD workflow
│       └── frontend.yml    # Frontend CI/CD workflow
└── README.md

2. Backend CI/CD (GitHub Actions)

Create .github/workflows/backend.yml:

name: Backend Deployment

on:
  push:
    branches: [ main ]
    paths: 
      - 'backend/**'
      - '.github/workflows/backend.yml'

env:
  AWS_REGION: us-east-1
  STACK_NAME: resume-visitor-counter

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ secrets.AWS_IAM_ROLE }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Install SAM CLI
        run: pip install aws-sam-cli

      - name: Build SAM application
        working-directory: ./backend
        run: sam build

      - name: Deploy SAM application
        working-directory: ./backend
        run: sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --stack-name ${{ env.STACK_NAME }} --region ${{ env.AWS_REGION }} --capabilities CAPABILITY_IAM

      - name: Get API endpoint
        id: api
        working-directory: ./backend
        run: |
          API_URL=$(aws cloudformation describe-stacks \
            --stack-name ${{ env.STACK_NAME }} \
            --query 'Stacks[0].Outputs[?OutputKey==`ApiEndpoint`].OutputValue' \
            --output text)
          echo "API_URL=$API_URL" >> $GITHUB_ENV
          echo "url=$API_URL" >> $GITHUB_OUTPUT  # ::set-output is deprecated

      - name: Update frontend config
        if: always()
        run: |
          # Note: GITHUB_ENV only persists within this job; to feed the
          # frontend workflow, store the value as a repo variable (vars.API_URL)
          echo "VITE_API_URL=${{ env.API_URL }}" >> $GITHUB_ENV
          echo "API endpoint: ${{ env.API_URL }}"

3. Frontend CI/CD (GitHub Actions)

Create .github/workflows/frontend.yml:

name: Frontend Deployment

on:
  push:
    branches: [ main ]
    paths:
      - 'frontend/**'
      - '.github/workflows/frontend.yml'

env:
  AWS_REGION: us-east-1
  S3_BUCKET: your-resume-bucket-name
  CLOUDFRONT_DIST_ID: your-cloudfront-dist-id

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ secrets.AWS_IAM_ROLE }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Install dependencies
        working-directory: ./frontend
        run: npm install  # If using npm packages

      - name: Build static files
        working-directory: ./frontend
        run: |
          # Replace API endpoint in JavaScript
          if [ -n "${{ vars.API_URL }}" ]; then
            sed -i "s|const apiUrl = .*|const apiUrl = '${{ vars.API_URL }}';|" visitor-counter.js
          fi

      - name: Deploy to S3
        run: |
          aws s3 sync ./frontend s3://${{ env.S3_BUCKET }} --delete
          echo "Frontend deployed to: https://${{ env.S3_BUCKET }}.s3-website-${{ env.AWS_REGION }}.amazonaws.com"

      - name: Invalidate CloudFront cache
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ env.CLOUDFRONT_DIST_ID }} \
            --paths "/*"

4. Required GitHub Secrets

Set these in your GitHub repository settings (Settings > Secrets):

  1. AWS_IAM_ROLE: ARN of IAM role for deployment (e.g., arn:aws:iam::123456789012:role/GitHubActionsRole)
  2. AWS_REGION: Your AWS region (e.g., us-east-1)

5. AWS IAM Role Setup

Create an IAM role for GitHub Actions with these permissions (broad wildcards for simplicity; tighten them for least privilege):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:*",
        "lambda:*",
        "apigateway:*",
        "dynamodb:*",
        "s3:*",
        "cloudfront:*",
        "iam:*"
      ],
      "Resource": "*"
    }
  ]
}
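One thing the permissions policy above does not cover: for role-to-assume to work over OIDC, the role also needs a trust policy for GitHub's identity provider. A sketch (the account ID and repo path are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:your-user/resume-website:*"
        }
      }
    }
  ]
}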

6. Frontend JavaScript (visitor-counter.js)

Update it so the build step can inject the endpoint:

// The apiUrl below is a placeholder that the CI sed step rewrites at build
// time. (process.env does not exist in the browser without a bundler.)
const apiUrl = 'https://yourapi.execute-api.region.amazonaws.com/Prod/count';

document.addEventListener('DOMContentLoaded', function() {
    const counterElement = document.getElementById('visitor-counter');
    if (!counterElement) return;

    fetch(apiUrl)
        .then(response => response.json())
        .then(data => {
            counterElement.textContent = `Visitor Count: ${data.count}`;
        })
        .catch(error => {
            console.error('Error fetching visitor count:', error);
            counterElement.textContent = 'Visitor count unavailable';
        });
});

7. Workflow Improvements

  1. Add Testing Stage:
   - name: Run Python tests
     working-directory: ./backend
     run: |
       pip install pytest
       python -m pytest tests/
  2. Environment-Specific Deployments:
   strategy:
     matrix:
       environment: [dev, prod]
  3. Manual Approval for Production:
   deploy-prod:
     needs: test
     runs-on: ubuntu-latest
     environment: production
     steps:
       - name: Wait for approval
         uses: trstringer/manual-approval@v1
         with:
           secret: ${{ github.token }}

Key Benefits

  1. Automatic Deployments: Code changes trigger deployments
  2. Consistency: Same process for all environments
  3. Visibility: Clear deployment history in GitHub
  4. Rollback Capability: Revert to previous commits if needed
  5. Security: Least-privilege IAM roles

**GitHub Actions in depth**

Backend CI/CD Pipeline with Testing and SAM Deployment

Here's a complete GitHub Actions workflow that will:

  1. Run your Python tests on every push
  2. Only deploy if tests pass
  3. Package and deploy your SAM application to AWS

.github/workflows/backend-ci-cd.yml

name: Backend CI/CD Pipeline

on:
  push:
    branches: [ main ]
    paths:
      - 'backend/**'
      - '.github/workflows/backend-ci-cd.yml'

env:
  AWS_REGION: us-east-1
  STACK_NAME: resume-visitor-counter
  PYTHON_VERSION: '3.9'

jobs:
  test:
    name: Run Python Tests
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Python ${{ env.PYTHON_VERSION }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        working-directory: ./backend
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest boto3 moto

      - name: Run unit tests
        working-directory: ./backend
        run: |
          python -m pytest tests/ -v --cov=./ --cov-report=xml

      - name: Upload coverage report
        uses: codecov/codecov-action@v3
        with:
          file: ./backend/coverage.xml
          flags: unittests

  deploy:
    name: Deploy SAM Application
    needs: test
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ secrets.AWS_DEPLOY_ROLE }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Install SAM CLI
        run: pip install aws-sam-cli

      - name: Build SAM application
        working-directory: ./backend
        run: sam build --use-container

      - name: Run SAM validate
        working-directory: ./backend
        run: sam validate

      - name: Deploy SAM application
        working-directory: ./backend
        run: |
          sam deploy \
            --no-confirm-changeset \
            --no-fail-on-empty-changeset \
            --stack-name ${{ env.STACK_NAME }} \
            --region ${{ env.AWS_REGION }} \
            --capabilities CAPABILITY_IAM \
            --resolve-s3

      - name: Output API endpoint
        working-directory: ./backend
        run: |
          echo "API Endpoint:"
          aws cloudformation describe-stacks \
            --stack-name ${{ env.STACK_NAME }} \
            --query "Stacks[0].Outputs[?OutputKey=='ApiEndpoint'].OutputValue" \
            --output text

Required Setup

  1. AWS IAM Role:

    • Create a role with these permissions:
     {
       "Version": "2012-10-17",
       "Statement": [
         {
           "Effect": "Allow",
           "Action": [
             "cloudformation:*",
             "lambda:*",
             "apigateway:*",
             "dynamodb:*",
             "s3:*",
             "iam:PassRole"
           ],
           "Resource": "*"
         }
       ]
     }
    
  • Add the role ARN to GitHub Secrets as AWS_DEPLOY_ROLE
  2. Backend Directory Structure:
   backend/
   ├── template.yaml
   ├── lambda_function.py
   ├── requirements.txt
   └── tests/
       ├── test_lambda.py
       └── __init__.py

  3. requirements.txt:
   pytest==7.4.0
   pytest-cov==4.1.0
   moto==4.1.5
   boto3==1.26.118

Key Features

  1. Test Isolation:

    • Tests run in a separate job before deployment
    • Deployment only proceeds if tests pass (needs: test)
  2. Security:

    • Uses AWS IAM Role for credentials (not access keys)
    • Minimal required permissions
  3. Validation:

    • Runs sam validate to check template syntax
    • Uses containerized builds for consistency
  4. Visibility:

    • Outputs API endpoint after deployment
    • Generates code coverage reports
  5. Optimizations:

    • Python dependencies can be cached between runs via actions/setup-python's cache option (see the sketch after this list)
    • Uses --no-fail-on-empty-changeset to avoid failures when no changes are detected
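To actually turn on that dependency caching, one option is the built-in cache support in actions/setup-python (shown as a sketch; available in v4 of the action):

   - name: Set up Python
     uses: actions/setup-python@v4
     with:
       python-version: '3.9'
       cache: 'pip'   # caches pip downloads between workflow runs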

Advanced Options

  1. Environment-Specific Deployments:
   strategy:
     matrix:
       environment: [dev, staging, prod]
       include:
         - environment: dev
           stack_name: resume-dev
         - environment: staging
           stack_name: resume-staging
         - environment: prod
           stack_name: resume-prod
  2. Manual Approval for Production:
   deploy-prod:
     needs: test
     environment: production
     runs-on: ubuntu-latest
     steps:
       - name: Wait for approval
         uses: trstringer/manual-approval@v1
         with:
           secret: ${{ github.token }}
         if: github.ref == 'refs/heads/main'
  3. Integration Tests:
   integration-test:
     needs: deploy
     runs-on: ubuntu-latest
     steps:
       - name: Run integration tests
         run: |
           pip install requests
           python -m pytest integration_tests/ -v

**CI/CD pipelines for the frontend**

Frontend CI/CD Pipeline for S3 and CloudFront

Here's a secure GitHub Actions workflow that will automatically deploy your frontend changes to S3 and invalidate the CloudFront cache without exposing any credentials.

.github/workflows/frontend-ci-cd.yml

name: Frontend CI/CD Pipeline

on:
  push:
    branches: [ main ]
    paths:
      - '**'
      - '!.github/workflows/backend-*.yml'  # Exclude backend workflows

env:
  AWS_REGION: us-east-1
  S3_BUCKET: your-resume-bucket-name
  CLOUDFRONT_DIST_ID: your-distribution-id

jobs:
  deploy:
    name: Deploy to S3 and Invalidate CloudFront
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ secrets.AWS_FRONTEND_DEPLOY_ROLE }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Install dependencies (if using npm)
        run: npm ci
        if: hashFiles('package.json') != ''

      - name: Build project (if needed)
        run: npm run build
        if: hashFiles('package.json') != ''

      - name: Sync files to S3
        run: |
          aws s3 sync . s3://${{ env.S3_BUCKET }} \
            --delete \
            --exclude '.git/*' \
            --exclude '.github/*' \
            --exclude '*.yml' \
            --cache-control "max-age=31536000,public"  # Cache static assets

      - name: Create CloudFront invalidation
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ env.CLOUDFRONT_DIST_ID }} \
            --paths "/*"

      - name: Output website URL
        run: |
          echo "Website deployed to: https://$(aws cloudfront get-distribution \
            --id ${{ env.CLOUDFRONT_DIST_ID }} \
            --query 'Distribution.DomainName' \
            --output text)"

Required Setup

  1. Create a dedicated IAM role for frontend deployments with these permissions:
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [
           "s3:PutObject",
           "s3:GetObject",
           "s3:ListBucket",
           "s3:DeleteObject"
         ],
         "Resource": [
           "arn:aws:s3:::your-resume-bucket-name",
           "arn:aws:s3:::your-resume-bucket-name/*"
         ]
       },
       {
         "Effect": "Allow",
         "Action": [
           "cloudfront:CreateInvalidation",
           "cloudfront:GetDistribution"
         ],
         "Resource": "*"
       }
     ]
   }
  2. Add these GitHub Secrets (or repository variables for the non-sensitive ones):

    • AWS_FRONTEND_DEPLOY_ROLE: ARN of your IAM role
    • CLOUDFRONT_DIST_ID: Your CloudFront distribution ID
    • S3_BUCKET: Your S3 bucket name
  3. Repository structure:

   frontend-repo/
   ├── index.html
   ├── styles/
   │   └── main.css
   ├── scripts/
   │   └── visitor-counter.js
   └── .github/
       └── workflows/
           └── frontend-ci-cd.yml

Security Best Practices

  1. Never store credentials in code - Use GitHub Secrets and IAM roles
  2. Least privilege principle - Only grant required S3/CloudFront permissions
  3. Exclude sensitive files - The workflow excludes .git/ and .github/ from sync
  4. Use OIDC - GitHub's OpenID Connect for temporary credentials

Advanced Configuration Options

  1. Environment-specific deployments:
   strategy:
     matrix:
       environment: [staging, prod]
       include:
         - environment: staging
           s3_bucket: resume-staging
           cloudfront_id: ABC123
         - environment: prod
           s3_bucket: resume-prod
           cloudfront_id: XYZ789
  2. Cache control headers for different file types:
   - name: Upload HTML files
     run: |
       aws s3 sync . s3://${{ env.S3_BUCKET }} \
         --exclude "*" \
         --include "*.html" \
         --cache-control "max-age=3600,public"

   - name: Upload assets
     run: |
       aws s3 sync ./assets s3://${{ env.S3_BUCKET }}/assets \
         --cache-control "max-age=31536000,public"
  3. Preview deployments for pull requests:
   on:
     pull_request:
       branches: [ main ]
   jobs:
     deploy-preview:
       runs-on: ubuntu-latest
       steps:
         - name: Deploy to preview bucket
           run: aws s3 sync . s3://resume-preview-${{ github.event.number }}
