AWS Cloud Resume Challenge


1. Intro

A few days ago, I decided to take on the Cloud Resume Challenge. It's a great way to get exposure to multiple AWS services within a fun project. I'll be documenting how I deployed the project and what I learned along the way. If you're deciding to take on the challenge yourself, then hopefully you can use this as a resource to get started. Now, let's begin.

2. Project Initialization

Set up a project environment and initialize a git repo for it. Start with a frontend directory containing your index.html, script.js, and styles.css.

If you want this done quickly, you can copy and paste your resume into ChatGPT and have it generate these three files for a simple static website.
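For reference, here is roughly the layout the repo ends up with by the end of this post (the frontend, terraform, aws_lambda, and .github/workflows directories all come up in later sections):

.
├── .github/
│   └── workflows/
│       └── frontend-cicd.yaml
├── frontend/
│   ├── index.html
│   ├── script.js
│   └── styles.css
└── terraform/
    ├── provider.tf
    ├── main.tf
    └── aws_lambda/
        └── func.py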

3. S3

Create an AWS account. Navigate to the S3 service and create a bucket. The name you choose for your bucket must be globally unique across all of AWS, not just within your region. Once created, upload your files to the S3 bucket.
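If you'd rather script the upload than click through the console, here is a minimal boto3 sketch (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")

BUCKET = "your-unique-bucket-name"  # placeholder: use your own bucket name

# Upload each frontend file with the right Content-Type so browsers
# render them instead of downloading them.
files = {
    "index.html": "text/html",
    "styles.css": "text/css",
    "script.js": "application/javascript",
}

for name, content_type in files.items():
    s3.upload_file(
        f"frontend/{name}", BUCKET, name,
        ExtraArgs={"ContentType": content_type},
    )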

4. CloudFront

S3 alone will only serve your static website over the HTTP protocol. To use HTTPS, you will have to serve your content through CloudFront, a CDN (Content Delivery Network). Not only will this provide secure access to your website, but it will also deliver your content with low latency. CloudFront edge locations are global, and they cache your website so it can be served quickly and reliably from the edge location nearest each client.

Navigate to CloudFront from the AWS console and click "Create Distribution". Pick the origin domain (your S3 bucket). If you enabled Static Website Hosting on the S3 bucket, a button will appear recommending you use the website endpoint, but ignore it for our purposes, since we want CloudFront to access the S3 bucket directly.

Under "Origin Access Control", check the "Origin Access Control Setting (recommended)". We do this because we only want the bucket accessed by CloudFront and not the public.

Create a new OAC and select it.

Click the button that appears and says "Copy Policy".

In another window, navigate back to your S3 bucket, and under the "Permissions" tab, paste the policy into the "Bucket Policy" section.

It should look something like this:

{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<Your Bucket Name>/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::<Some Numbers>:distribution/<Your CloudFront Distribution>"
                }
            }
        }
    ]
}

In the CloudFront window, finish the configuration by requiring HTTPS under "Viewer Protocol Policy", leave the rest of the options at their defaults, and create the distribution.

When the distribution is created, make sure the default root object is set to index.html. At this point, you should be able to open the distribution domain name and see your resume website up and running.

IMPORTANT: You will now have a CloudFront distribution URL. Since your bucket is not public in this configuration, the HTML, CSS, and JS files can only be accessed through that CloudFront distribution, so your HTML link and script tags will need to be updated.

For example, my stylesheet link tag was

<link rel="stylesheet" href="styles.css">

and I changed it to

<link rel="stylesheet" href="https://d2qo85k2yv6bow.cloudfront.net/styles.css">

Once you update your link and script tags, re-upload your HTML file. You will also have to create an invalidation request in your CloudFront distribution so that it refreshes its cache. When you create the request, simply input "/*". This ensures CloudFront serves the latest version of your files (if you are constantly making changes and want to see them immediately on the website, you will have to repeatedly create invalidation requests).
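If you find yourself invalidating often, the same request can be scripted with boto3; a sketch, with the distribution ID as a placeholder:

import time
import boto3

cloudfront = boto3.client("cloudfront")

DISTRIBUTION_ID = "E1234567890ABC"  # placeholder: your distribution's ID

# Invalidate everything ("/*") so CloudFront refetches the latest files
# from S3. CallerReference just has to be unique per request.
cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),
    },
)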

5. Route 53

Your next step will be to route your own DNS domain name to the CloudFront distribution. Since I already had a domain name, I only needed to navigate to my hosted zone in Route 53 and create an A record: switch on "Alias", select "Alias to CloudFront distribution" from the dropdown, pick my distribution, keep the simple routing policy, and save.

Also, within the CloudFront distribution's settings, you have to request an SSL certificate for your domain and attach it. Note that CloudFront only accepts certificates from AWS Certificate Manager (ACM) in the us-east-1 region, so request it there.

And with that, your website should be up and running!

6. View Counter

To set up a view counter, we will now have to incorporate DynamoDB and Lambda, as well as write some JavaScript for our HTML. The idea is that when someone views our resume, the JavaScript sends a request to the Lambda function URL. The Lambda function is some Python code that retrieves and updates the count in the DynamoDB table, then returns it to your JavaScript.

7. DynamoDB

Navigate to the DynamoDB service and create a table.

Go to "Actions"" --> "Explore Items" and create an item.

Set the id (partition key) value to "1" (as a string, which is how the Lambda code below looks it up).

Create a number attribute, label it "views", and give it a value of 0.
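If you prefer to seed the item from code rather than the console, a boto3 sketch (the table name matches the Lambda code in the next section):

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("cloud-resume-challenge")

# Seed the single counter item: partition key "1", views starting at 0.
table.put_item(Item={"id": "1", "views": 0})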

8. Lambda

Next, we will create the Lambda function that can retrieve the data from DynamoDB and update it.

When creating the Lambda function in the AWS console, I chose the Python 3.12 runtime.

Enable the function URL and set the auth type to NONE. Doing so allows your Lambda function to be invoked by anyone who obtains the function URL. I chose to set the Lambda function up this way so I can test its functionality with my project without setting up API Gateway just yet.

Here is my Lambda function code:

import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("cloud-resume-challenge")

def lambda_handler(event, context):
    try:
        # Get the current view count from DynamoDB
        response = table.get_item(Key={
            "id": "1"
        })
        if 'Item' in response:
            views = int(response["Item"]["views"])
        else:
            views = 0  # Default to 0 if the item doesn't exist

        # Increment the view count
        views += 1

        # Update the view count in DynamoDB
        table.put_item(Item={
            "id": "1",
            "views": views
        })

        # Return the updated view count
        return {
            "statusCode": 200,
            "body": json.dumps({"views": views})
        }

    except Exception as e:
        print(f"Error: {e}")
        return {
            "statusCode": 500,
            "body": json.dumps({"error": str(e)})
        }

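One caveat with the get-then-put approach above: two requests arriving at the same time can read the same value and overwrite each other's increment. If you want to avoid that, DynamoDB can do the increment atomically with update_item; a sketch of what the read-increment-write would become ("views" is on DynamoDB's reserved-word list, hence the #v alias):

# Atomic alternative to get_item + put_item: DynamoDB performs the
# increment server-side, so concurrent requests can't lose counts.
response = table.update_item(
    Key={"id": "1"},
    UpdateExpression="ADD #v :inc",
    ExpressionAttributeNames={"#v": "views"},
    ExpressionAttributeValues={":inc": 1},
    ReturnValues="UPDATED_NEW",
)
views = int(response["Attributes"]["views"])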

Finally, in the "Configuration" tab, we need an execution role that has permission to invoke the DynamoDB table. To do this, you would navigate to IAM and create a new role. This role will need the "AmazonDynamoDBFullAccess" permission. Once created, attach the role to your Lambda function.

9. Javascript

Then, write some code into your script.js file. Something like this:

async function updateCounter() {
  try {
    let response = await fetch("Lambda Function URL"); // paste your Lambda function URL here
    let data = await response.json();
    const counter = document.getElementById("view-count");
    counter.innerText = data.views;
  } catch (error) {
    console.error('Error updating counter:', error);
  }
}

updateCounter();

The code sends a request to the Lambda function URL, parses the JSON response, and stores it in the "data" variable. I have an element with id="view-count" in my HTML, and its text gets set to data.views, the view count retrieved via the Lambda function URL.

10. CI/CD with GitHub Actions

We can create a CI/CD pipeline with GitHub Actions. Doing so will automatically update our S3 bucket files whenever code changes are pushed to GitHub.

To summarize, create a directory ".github" and within it another directory "workflows". Create a YAML file inside.

This is my "frontend-cicd.yaml" file:

on:
  push:
    branches:
    - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
    - uses: jakejarvis/s3-sync-action@master
      with:
        args: --follow-symlinks --delete
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REGION: 'us-east-1' #optional: defaults to us-east-1
        SOURCE_DIR: 'frontend' # optional: defaults to entire repository

Your GitHub repo will now have a new action, but you still have to set up your secrets: AWS_S3_BUCKET, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY.

Within your GitHub repo, navigate to Settings → Secrets and variables (under the Security section on the left side of the screen) → Actions.

These access keys are associated with your AWS user and will need to be retrieved from the AWS console.
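Since these keys end up stored in GitHub, it's worth scoping the IAM user down to just what the sync needs. A sketch of a sufficient policy for jakejarvis/s3-sync-action, which runs aws s3 sync --delete under the hood (bucket name is a placeholder):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::<Your Bucket Name>"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::<Your Bucket Name>/*"
        }
    ]
}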

11. Infrastructure as Code with Terraform

So far, we've been clicking around in the AWS console, setting permissions and configurations for multiple AWS services. It can all get confusing and messy very quickly. Terraform allows us to define our infrastructure in a code-based format, which lets us roll back configurations through version control and easily replicate our setup.

This was my first time using Terraform. For now, I just used it to create an API Gateway and re-create my Lambda function. So instead of my JavaScript hitting the public function URL of my Lambda function, it can hit my API Gateway, which will invoke the Lambda function. API Gateway has much better security controls, providing:

  • Authentication and Authorization through IAM, Cognito, API Keys
  • Throttling and Rate Limiting
  • Private Endpoints
  • Input Validation

After downloading Terraform onto my machine, I created a "terraform" folder in the root directory of my project. Then I created two files:

  • provider.tf
  • main.tf

Here is my provider.tf:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
  }
}

provider "aws" {
  access_key = "*****"
  secret_key = "*****"
  region     = "us-east-1"
}

I've made sure to omit this file from my GitHub repo using a .gitignore entry, since it would expose my AWS user's access key and secret key.

This file configures the provider that Terraform will use; in our case, AWS.
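As a side note, you can avoid putting credentials in the file at all: the AWS provider will pick them up from the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables or from ~/.aws/credentials, so the provider block can shrink to:

provider "aws" {
  # Credentials come from the environment or ~/.aws/credentials,
  # so nothing sensitive has to live in the repo.
  region = "us-east-1"
}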

Next, the main.tf:

data "archive_file" "zip_the_python_code" {
  type        = "zip"
  source_file = "${path.module}/aws_lambda/func.py"
  output_path = "${path.module}/aws_lambda/func.zip"
}

resource "aws_lambda_function" "myfunc" {
  filename         = data.archive_file.zip_the_python_code.output_path
  source_code_hash = data.archive_file.zip_the_python_code.output_base64sha256
  function_name    = "myfunc"
  role             = "arn:aws:iam::631242286372:role/service-role/cloud-resume-views-role-bnt3oikr"
  handler          = "func.lambda_handler"
  runtime          = "python3.12"
}

resource "aws_lambda_permission" "apigateway" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.myfunc.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "arn:aws:execute-api:us-east-1:${data.aws_caller_identity.current.account_id}:${aws_apigatewayv2_api.http_api.id}/*/GET/views"
}

resource "aws_apigatewayv2_api" "http_api" {
  name          = "views-http-api"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_integration" "lambda_integration" {
  api_id             = aws_apigatewayv2_api.http_api.id
  integration_type   = "AWS_PROXY"
  integration_uri    = aws_lambda_function.myfunc.invoke_arn
  integration_method = "POST"
  payload_format_version = "1.0" # Explicitly set payload format version
}

resource "aws_apigatewayv2_route" "default_route" {
  api_id    = aws_apigatewayv2_api.http_api.id
  route_key = "GET /views"
  target    = "integrations/${aws_apigatewayv2_integration.lambda_integration.id}"
}

resource "aws_apigatewayv2_stage" "default_stage" {
  api_id      = aws_apigatewayv2_api.http_api.id
  name        = "$default"
  auto_deploy = true
}

output "http_api_url" {
  value = aws_apigatewayv2_stage.default_stage.invoke_url
}

data "aws_caller_identity" "current" {}

The archive_file data source zips the Python code (func.py) into func.zip. The aws_lambda_function resource creates the Lambda function using this zip file. The aws_lambda_permission resource grants API Gateway permission to invoke the Lambda function. The aws_apigatewayv2_api, aws_apigatewayv2_integration, and aws_apigatewayv2_route resources set up an HTTP API Gateway that integrates with the Lambda function, and aws_apigatewayv2_stage deploys this API. The output block provides the API endpoint URL. Additionally, data "aws_caller_identity" "current" retrieves the AWS account details.

Before initializing and applying the Terraform code, I created another folder called "aws_lambda" inside the terraform directory (main.tf references it via path.module) and within it a file func.py. This is where the Lambda function code from earlier is pasted in.

With that in place, run the commands:

  • terraform init
  • terraform plan
  • terraform apply

After a few moments, my services and settings were configured in AWS.
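One follow-up after the apply: script.js should now hit the API Gateway endpoint instead of the Lambda function URL. Terraform prints the base URL via the http_api_url output, and the route we defined is GET /views, so the fetch call becomes something like this (the host below is a placeholder):

// Swap the Lambda function URL for the API Gateway endpoint from
// `terraform output http_api_url`, plus the /views route.
let response = await fetch("https://<api-id>.execute-api.us-east-1.amazonaws.com/views");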

One thing to note about this project: we can update the frontend code, commit and push to GitHub, invalidate the CloudFront cache, and see the changes applied. The Lambda function, however, requires the Terraform commands to be run again for changes to take effect (editing func.py changes the archive's source_code_hash, which is what tells Terraform to redeploy the function).

12. Conclusion

I still have some updates to make with Terraform to configure the rest of the services I am utilizing, but I feel confident about what I've been able to build so far. This challenge has significantly deepened my understanding of AWS, providing me with hands-on experience in managing and automating cloud infrastructure. The skills and knowledge I’ve gained will be invaluable as I continue to build scalable, secure, and efficient cloud architectures in my career. I am excited to further refine my setup and explore additional AWS services and Terraform capabilities.

And if you want to check out my project, click here!

13. Edits

The counter has stopped working and produced this error:

Access to fetch at 'https://g6thr4od50.execute-api.us-east-1.amazonaws.com/views' from origin 'https://andytran.click' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.

My browser is sending a request to the API Gateway, which is invoking my Lambda function, but my Lambda function isn't responding with the necessary CORS headers. The browser saw that the response didn't include the Access-Control-Allow-Origin header and blocked the response, resulting in a CORS error.

So I updated the Lambda function, adding this to both return statements:

"headers": {
                "Content-Type": "application/json",
                "Access-Control-Allow-Origin": "*"
            }

So the updated Lambda function looks like:

import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("cloud-resume-challenge")

def lambda_handler(event, context):
    try:
        # Get the current view count from DynamoDB
        response = table.get_item(Key={
            "id": "1"
        })
        if 'Item' in response:
            views = int(response["Item"]["views"])
        else:
            views = 0  # Default to 0 if the item doesn't exist

        # Increment the view count
        views += 1

        # Update the view count in DynamoDB
        table.put_item(Item={
            "id": "1",
            "views": views
        })

        # Return the updated view count with headers
        return {
            "statusCode": 200,
            "headers": {
                "Content-Type": "application/json",
                "Access-Control-Allow-Origin": "*"
            },
            "body": json.dumps({"views": views})
        }

    except Exception as e:
        print(f"Error: {e}")
        return {
            "statusCode": 500,
            "headers": {
                "Content-Type": "application/json",
                "Access-Control-Allow-Origin": "*"
            },
            "body": json.dumps({"error": str(e)})
        }


I also added burst/rate limiting to my API Gateway by updating the stage block in my main.tf file:

resource "aws_apigatewayv2_stage" "default_stage" {
  api_id      = aws_apigatewayv2_api.http_api.id
  name        = "$default"
  auto_deploy = true

  default_route_settings {
    throttling_burst_limit = 10
    throttling_rate_limit  = 5
  }
}
