DEV Community

Michael Bozhilov

How to ship dynamic web pages with AWS Lambda and set up a CI/CD pipeline for it using GitHub Actions

I've never been much of a photos-as-memories person. I tend to bind memories to songs, and unfortunately, I sometimes run out of songs to listen to. Heh, I have a solution to this problem! What better way to find a new song than to deploy a cloud function that fetches a random track from a playlist filled with the latest songs my friends have been listening to?

Lambda generated page

You can check it out on https://bzhlvvvs.com

In this article we will go over the following:

  • Setting up a lambda locally and deploying it
  • Templating and shipping dynamic pages to the browser
  • Setting up a continuous integration and continuous delivery (CI/CD) pipeline using GitHub Actions
  • Using a custom domain

GitHub repo

Setting up the lambda

Creating a user

Before we start running commands, you will need to install the AWS SAM CLI. To authenticate, you will also need an IAM user, which you can create from the IAM dashboard:

Example IAM User

I have given my user broad permissions via the PowerUserAccess managed policy. If you plan to share this user's credentials (for example with a CI pipeline), you might want to grant only the IAM permissions you need for deploying the lambda. Here's an example of a more granular IAM policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LambdaDeployment",
      "Effect": "Allow",
      "Action": [
        "lambda:CreateFunction",
        "lambda:UpdateFunctionCode",
        "lambda:UpdateFunctionConfiguration",
        "lambda:DeleteFunction"
      ],
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME"
    },
    {
      "Sid": "S3Deployment",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    },
    {
      "Sid": "CloudFormationDeployment",
      "Effect": "Allow",
      "Action": [
        "cloudformation:CreateChangeSet",
        "cloudformation:DescribeChangeSet",
        "cloudformation:ExecuteChangeSet",
        "cloudformation:DeleteChangeSet"
      ],
      "Resource": "arn:aws:cloudformation:REGION:ACCOUNT_ID:stack/STACK_NAME/*"
    }
  ]
}

You will now want to create an access key for authenticating as the newly created user.

Creating an access key

After generating your access key, you will need to run:

aws configure   # this is AWS CLI, not SAM CLI

Setting up user

Your credentials will then be saved in ~/.aws/credentials (assuming you're on Linux/macOS).
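For reference, the saved file is a plain INI file that looks roughly like this (placeholder values shown, not real keys):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```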

Initializing the project

To initialize the project, you need to run sam init

sam project initialization

You may pick the Hello World template with Python, and opt out of the tracing and monitoring services because they're paid 😆.

Assuming you've run sam init, you should now be able to run sam local start-api, which will spin up a local Flask server that acts as an API gateway.

API Gateway

You can test the endpoint by running:

curl http://localhost:3000/hello   # {"message": "hello world"}

From now on, I'll be explaining things using my lambda's code.

Configuring SAM template

This is the IaC template used for deploying the lambda.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  bzhlvvvs-spotify-lambda

  SAM Template for BZHLVVVS SPOTIFY Lambda 

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 5
# This value means that if your function takes more than 5 seconds
# to execute, the lambda will return an
# Internal Server Error

    MemorySize: 128
# Pretty straightforward, this is the memory allocated to your lambda.
# You need to increase this if your lambda
# runs data-intensive workloads,
# e.g. fetching large amounts of data and processing it

Resources:
  BzhlvvvsSpotifyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/  # The directory where your code is stored
      Handler: app.lambda_handler  # Function which handles the requests towards the lambda
      Runtime: python3.9
      Architectures:
        - x86_64
      Events:
        Request:
          Type: Api
          Properties:
            Path: /  
# Endpoint on which we want to reach the lambda. 
# e.g. / means `http://localhost:3000`, 
# /hello means `http://localhost:3000/hello`

            Method: get  # Expected http method type

Outputs:
  BzhlvvvsSpotifyFunction:
    Description: 'Lambda Function ARN'
    Value: !GetAtt BzhlvvvsSpotifyFunction.Arn
  BzhlvvvsSpotifyIamRole:
    Description: 'Implicit IAM Role'
    Value: !GetAtt BzhlvvvsSpotifyFunctionRole.Arn
  BzhlvvvsSpotifyApi:
    Description: 'API Gateway endpoint URL for Prod stage'
    Value: !Sub 'https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/'


To give your cloud function a custom name using this template, you simply need to update the following values within the template:

Resources:

  BzhlvvvsSpotifyFunction: # to {name}Function
  # e.g. ExampleModelFunction (name = ExampleModel)

Outputs:

  BzhlvvvsSpotifyFunction: # should have the same name as
  # the resource above
  BzhlvvvsSpotifyIamRole: # to {name}IamRole
  # e.g. ExampleModelIamRole
  BzhlvvvsSpotifyApi: # to {name}Api
  # e.g. ExampleModelApi

The last output provides the URL of the API Gateway endpoint for the "Prod" stage. API Gateway is a service for building and managing APIs; this output tells you the URL where your API is accessible.

Use of Makefile and some comments about API gateway

One way to simplify tasks and streamline your workflow is by creating a Makefile and documenting the frequently used commands.

# Makefile
.PHONY: aws-api
aws-api: 
    rm -rf .aws-sam && sam build && sam local start-api

Regrettably, the local API gateway won't automatically rebuild your project if you have previously run the build command and the .aws-sam directory, which contains the build package, is present in the repository. Deleting the directory makes the gateway rebuild your lambda function on each new API call, but note that new dependencies will still not be installed automatically.

To address this, one solution is to install all the requirements within the CodeUri directory, which in this case is /src. You can achieve this by running cd ./src && pip install -r requirements.txt -t ./. By doing so, you can avoid restarting the gateway every time you make a change, but you will need to handle the exclusion of dependencies manually.

Here is an example Makefile that includes a target for installing dependencies:

# Makefile
.PHONY: install-deps
install-deps:
    cd ./src && pip install -r requirements.txt -t ./ --upgrade

When using sam build, the dependencies will be installed for you. However, to see the effects of new changes in your lambda function, you will still need to restart the API.

Let's test a deployment

sam build --use-container && sam deploy --guided

The --guided flag will prompt you for the following information:

version = 0.1
[default.deploy.parameters]
stack_name = "bzhlvvvs-spotify"
resolve_s3 = true
s3_prefix = "bzhlvvvs-spotify"
region = "eu-north-1"
confirm_changeset = true
capabilities = "CAPABILITY_IAM"
image_repositories = []

which will then be saved in samconfig.toml and reused for each new deployment.

deployment output

After a successful deployment, you will receive a URL that directs you to the API gateway's endpoint. This URL allows you to access your lambda function.

How to ship dynamic pages to the browser

For the Spotify lambda scenario, the process involves sending requests to Spotify's API and retrieving information. I then use this gathered data to update a static HTML template.
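That flow can be sketched in a few lines of Python. This is a simplified sketch rather than the lambda's actual code: the endpoint and payload shape follow Spotify's public Web API, while the token parameter is assumed to come from the client-credentials flow (not shown):

```python
import json
import random
import urllib.request

SPOTIFY_API = "https://api.spotify.com/v1"


def fetch_playlist_items(playlist_id: str, token: str) -> list:
    """Fetch the raw track items of a playlist from Spotify's Web API."""
    req = urllib.request.Request(
        f"{SPOTIFY_API}/playlists/{playlist_id}/tracks",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["items"]


def pick_random_track(items: list) -> dict:
    """Reduce one randomly chosen playlist item to the fields the template needs."""
    track = random.choice(items)["track"]
    return {
        "name": track["name"],
        "artist": ", ".join(a["name"] for a in track["artists"]),
        "image": track["album"]["images"][0]["url"],
    }
```

urllib is used here only to keep the sketch dependency-free; the real lambda could just as well use requests.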

Setting up the HTML template

You may create placeholders like {user.name}, similar to how templating languages work, and insert the corresponding data before sending the response back to the user.

<p class="description">Selected for you <br/> from <a class="playlist-link" href="{playlist.public_url}">{playlist.name}</a></p>
   <img class="song-image" src="{track.image}" crossorigin="anonymous"/>
   <div>
      <h3 class="song-title">{track.name}</h3>
      <p class="artists">{track.artist}</p>
   </div>

The above code renders:

Code render example

You can easily achieve that by loading the index.html file (reference)

html = open("index.html", "r").read()

and passing the string to a function which will then replace all occurrences of the placeholders with their corresponding data (reference):

html = (
    html.replace("{artist.image}", "https://example.com")
        .replace("{track.name}", "song name")
        .replace("{track.artist}", "artist name")
)
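As the number of placeholders grows, the chained replace calls get unwieldy. One option is a small helper that walks a dict of placeholder-to-value pairs (a sketch, not the repo's actual code):

```python
def render(template: str, values: dict) -> str:
    """Replace every placeholder key in the template with its value."""
    for placeholder, value in values.items():
        template = template.replace(placeholder, value)
    return template


html = render(
    "<h3>{track.name}</h3><p>{track.artist}</p>",
    {"{track.name}": "song name", "{track.artist}": "artist name"},
)
```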

Lambda response body

In order to make the browser display the received HTML, all you need to do is change the type of content being sent back to the user (reference):

return {
    "headers": {"Content-Type": "text/html"},
    "statusCode": 200,
    "body": html
}

Setting up a CI/CD pipeline with GitHub Actions:

Brief description of GitHub Actions:

GitHub Actions workflows are declared within a repository's .github/workflows directory using YAML configuration files. These files define the workflow's name, trigger events, and the steps to be executed, enabling automated actions and continuous integration workflows.

In .github/workflows/cicd.yaml, we have the following setup:

on:
    push:
      branches:
        - main
      paths:
        - 'src/**'
        # - '!src/tests/**'
jobs:
    cicd:
      runs-on: ubuntu-latest
      env:
        CLIENT_ID: ${{ secrets.CLIENT_ID }}
        CLIENT_SECRET: ${{ secrets.CLIENT_SECRET }}
        LAMBDA_URL: https://bzhlvvvs.com
        TITLE: "@bzhlvvvs"
        FAVICON_URL: https://personal-misho.s3.eu-north-1.amazonaws.com/favicon.ico
        PLAYLIST_ID: 4qw4F3Mi3eGjXwLeKM5pYx
        LINKEDIN: https://www.linkedin.com/in/mbozhilov/
        GITHUB: https://github.com/asynchroza/bzhlvvvs-spotify
      steps:
        - uses: actions/checkout@v2
        - uses: actions/setup-python@v2
        - uses: aws-actions/setup-sam@v1
        - uses: aws-actions/configure-aws-credentials@v2
          with:
            aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
            aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
            aws-region: eu-north-1

        - run: make pipeline-deploy

Let's go over it line by line:

Workflow conditions:

on:
    push:
      branches:
        - main

The above snippet specifies that we want our workflow to be executed only when someone pushes to the main branch.

paths: - 'src/**' indicates that the workflow should be executed only when there are changes in any file within the specified src directory.

Here we finally declare our CI/CD job under the name cicd, but you can call it whatever you wish.

Setting up job

jobs:
    cicd:
      runs-on: ubuntu-latest  # we will be running the workflow on an ubuntu VM 

cicd job

Here you may find all available virtual machine images for GitHub Actions.
When developing native software for Windows, it is recommended to run tests on Windows VMs. However, since lambdas run on Amazon Linux VMs, using Ubuntu as the runner environment is a reasonable choice.

Environment

To ensure successful authentication with the Spotify API, the Spotify lambda requires certain environment variables. Additionally, since we load our environment variables locally from a .env.spotify file, which is not committed to the hosted repository, we need a way to write them to the lambda's .env.spotify file during the pipeline's execution.

 env:
        CLIENT_ID: ${{ secrets.CLIENT_ID }}
        CLIENT_SECRET: ${{ secrets.CLIENT_SECRET }}
        LAMBDA_URL: https://bzhlvvvs.com
        TITLE: "@bzhlvvvs"
        FAVICON_URL: https://personal-misho.s3.eu-north-1.amazonaws.com/favicon.ico
        PLAYLIST_ID: 4qw4F3Mi3eGjXwLeKM5pYx
        LINKEDIN: https://www.linkedin.com/in/mbozhilov/
        GITHUB: https://github.com/asynchroza/bzhlvvvs-spotify

Before talking about what ${{ secrets.CLIENT_ID }} represents, I will quickly go over the script which is writing these environment variables to the .env.spotify file:

#!/bin/bash

DIR=src/.env.spotify
# Write environment variables to .env file
echo "CLIENT_ID=\"$CLIENT_ID\"" >> $DIR
echo "CLIENT_SECRET=\"$CLIENT_SECRET\"" >> $DIR
echo "TITLE=\"$TITLE\"" >> $DIR
echo "FAVICON_URL=\"$FAVICON_URL\"" >> $DIR
echo "PLAYLIST_ID=\"$PLAYLIST_ID\"" >> $DIR
echo "LINKEDIN=\"$LINKEDIN\"" >> $DIR
echo "GITHUB=\"$GITHUB\"" >> $DIR

Since the workflow runs on a Linux virtual machine, we can use bash scripts to perform arbitrary actions. The env section in the GitHub Action loads the environment variables into the VM's environment, similar to how a .bashrc file works, so they can be referenced from anywhere as $VARIABLE and written to the src/.env.spotify file.

Now, let's talk about the significance of ${{ secrets.CLIENT_ID }} and ${{ secrets.CLIENT_SECRET }}. These are references to secrets that we have stored within our GitHub repository; specifically, we need them for authentication with Spotify's API. To set up these values, navigate to the Settings tab, then select Secrets and variables, and finally choose Actions.

Github actions

Then, after declaring your secret, you can reference it in the YAML file with ${{ secrets.NAME_OF_SECRET }}. You will later see that we reference our AWS credentials the same way.

Hardcoded environment variables:

TITLE: "@bzhlvvvs"
FAVICON_URL: https://personal-misho.s3.eu-north-1.amazonaws.com/favicon.ico
PLAYLIST_ID: 4qw4F3Mi3eGjXwLeKM5pYx
LINKEDIN: https://www.linkedin.com/in/mbozhilov/
GITHUB: https://github.com/asynchroza/bzhlvvvs-spotify

If you followed the lambda link I shared earlier, you will have seen two icons that link to LinkedIn and GitHub. To make the lambda easy for anyone to deploy, I have chosen to reference all the dynamic values from the .env.spotify file. Therefore, to populate these values using the previous bash script, you need to declare them within the Action. If you're curious about what these values do, you can refer to the lambda's readme.

Steps:

actions/checkout@v2:
This step checks out the repository code and makes it available for subsequent actions in the workflow.

actions/setup-python@v2:
This step sets up the Python environment for the workflow. It ensures the required version of Python is installed and configures the environment accordingly.

aws-actions/setup-sam@v1:
This step sets up the AWS Serverless Application Model (SAM) CLI tool. It installs SAM and its dependencies, allowing for the deployment and management of serverless applications on AWS.

aws-actions/configure-aws-credentials@v2:
This step configures the AWS credentials for the workflow. It sets up the necessary authentication details (such as access keys or IAM roles) to enable interaction with AWS services during the workflow execution.

Just like when you authenticate with AWS on your own computer, you also need to authenticate the Virtual Machine created by the Github action with AWS. You can use the same login details, but it's recommended to use an IAM user that doesn't have full PowerUser privileges. Instead, it should have only the necessary IAM policies to ensure a successful deployment.

aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: eu-north-1

Deployment commands

We've finally reached the part where we set up the commands for actually testing and deploying the lambda. You could declare the commands separately for better granularity, but I've declared them within a Makefile so that I can run them locally as well.

# cicd.yaml
- run: make pipeline-deploy

Note: Make comes preinstalled on Ubuntu machines. If you're using a different image, you might want to install it before running your phonies.

.PHONY: pipeline-deploy
pipeline-deploy: 
    ./write_env.sh && pip install -r src/requirements.txt && \
    pytest ./src && sam build --use-container && \
    sam deploy --no-fail-on-empty-changeset --no-confirm-changeset && \
    ./health_check.sh

Let's go over the commands step by step.

./write_env.sh

This runs the script that writes the workflow's environment variables to the .env.spotify file, which is used to load them into the lambda's environment at runtime.

pip install -r src/requirements.txt && pytest ./src

Install all Python dependencies so that we can run our unit/integration tests without missing-dependency exceptions. This is the continuous integration part of CI/CD. I'm assuming you're familiar with writing tests; if in doubt, you can refer to the sample unit tests for my lambda here. If we wanted integration tests for the Spotify lambda, we would write tests that attempt to retrieve a song from a playlist, using the environment-loaded credentials to authenticate with Spotify's API.
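As an illustration, a unit test for the templating step could look like this. Both render_page and the test are hypothetical stand-ins, not the repo's actual tests:

```python
# test_render.py — run with `pytest`


def render_page(html: str, track: dict) -> str:
    """Stand-in for the lambda's templating step."""
    return (
        html.replace("{track.name}", track["name"])
            .replace("{track.artist}", track["artist"])
    )


def test_placeholders_are_replaced():
    html = "<h3>{track.name}</h3><p>{track.artist}</p>"
    result = render_page(html, {"name": "song", "artist": "artist"})
    assert "{track.name}" not in result
    assert "<h3>song</h3>" in result
```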

sam deploy --no-fail-on-empty-changeset --no-confirm-changeset

This is the same deployment we ran locally, except we specify that the command shouldn't fail if the changeset is empty, and, since no one is there to confirm the changes interactively, that the changeset should be accepted by default.

Recently, I came across an interview task for a DevOps role that included a question about ensuring a successful deployment and preventing service exceptions. In our specific scenario, the simplest approach to achieve this is by sending a request to the service and verifying its response.

#!/bin/bash

URL="${LAMBDA_URL}"
response=$(curl -s -w "%{http_code}" "$URL")

status_code=${response: -3}  

if [ "$status_code" -eq 200 ]; then
    echo "Request successful (Status Code: 200)"
else
    echo "Request failed with the following status code: ${status_code}"
    echo "Response: ${response%???}"
    exit 1
fi

Within the provided script, we send a request to the lambda function and append the status code to the response. Our objective is to validate whether the status code corresponds to 200. If the status code differs from 200, we terminate the script with a failure to ensure that the GitHub action workflow also fails accordingly.

How to set up a custom domain

If you don't already have a domain, you can buy one from places like https://www.namecheap.com, https://www.domain.com, https://www.godaddy.com/.

Once you have your domain on hand, you will need to go to your AWS console and search for API Gateway.

API Gateway Console

Click on create and declare the subdomain/domain on which you want to be able to access your lambda.

Example domain

It can just as well be api.example.com or dev.example.com.

Important: Before proceeding, if you don't have an ACM certificate, you need to get one by clicking on the link here (which can be found on the same page under Endpoint configuration):

ACM certificate

You will need to request a new one:
AWS ACM request

By using the domain name:
AWS ACM request

Once you request the certificate, you will be asked to add these entries to your domain's DNS.

CNAME values

My domain is managed by Namecheap, so I use their Advanced DNS option to update my CNAME records.

DNS console

Once your certificate is verified, you may continue setting up your API Gateway custom domain, but this time select the issued certificate. It will be available in the dropdown.
Dropdown certificate

The next step is to map your domain to your lambda's stage.
Lambda stage

If you've followed this guide, your lambda is most probably deployed on the Prod stage of the API which is created upon deployment.

Add a path if you want your lambda to be accessible on a multi-level path such as api.example.com/v1/hello.


And finally 🎉🎉, we conclude by mapping your domain to the API gateway: configure a CNAME record for the root domain that points to the URL provided during the API stage mapping process.

In API Gateway console:
url

In your domain provider's console:
mapping domain to api gateway

Our task is complete!
At this point, your lambda function should be accessible via your customized domain.

If you have any questions, you can shoot me a message here.
