As a web developer with over 13 years of experience, I've witnessed the incredible evolution in code delivery. Uploading PHP files onto shared servers is almost ancient history; nowadays we embrace the power of CI/CD pipelines in the cloud.
Having a Continuous Integration/Continuous Delivery pipeline becomes invaluable when working on complex applications. It allows for faster and more frequent releases, automating the deployment process and minimizing human error. This not only saves us time but also ensures consistency across environments, from development and QA to production and everything in between.
In this post, I'll walk you through some exciting stuff. We'll create a Lambda to handle GET requests and an authorizer Lambda to validate JWT and control API access, both under the same API Gateway. And we'll finish strong by configuring GitHub Actions to effortlessly deploy our Lambdas to AWS. Ready? Let's get to it!
Prerequisites
Before diving in, please ensure that you meet the following requirements:
- An active AWS account with administrative IAM user access.
- AWS CLI installed on your machine.
- Configured AWS credentials for seamless interaction with AWS services.
- AWS SAM CLI installed to facilitate serverless application development.
- A GitHub repository to store your code.
- A local development environment already set up. In this guide, we'll be using Go.
- Docker installed on your machine for local testing purposes.
If you're starting from scratch and need assistance with the initial setup, you can refer to the Getting Started with AWS SAM guide in the AWS Documentation. It provides in-depth information to help you get up and running in no time.
Creating your lambdas
We'll be using SAM (Serverless Application Model) to create all of our resources. SAM is an open-source framework designed for building serverless applications on AWS. It utilizes a YAML format, which is a subset of the CloudFormation syntax. With SAM templates, you can easily define functions, APIs, databases, and various other AWS resources using just a few lines of code per resource. During deployment, SAM transforms and expands the SAM syntax into AWS CloudFormation syntax, simplifying the development process and accelerating serverless application building.
SAM offers a range of templates tailored for common use cases, and with the help of SAM CLI, creating a lambda function is very easy. You can start by using the "hello world" example and modify it according to your requirements.
Let's get started. Open your terminal and run sam init to launch the assistant.
$ sam init
You can preselect a particular runtime or package type when using the `sam init` experience.
Call `sam init --help` to learn more.
Which template source would you like to use?
1 - AWS Quick Start Templates
2 - Custom Template Location
Choice:
You'll be presented with two options. Choose option 1. Next, a list of templates will appear, and once again, choose option 1 (Hello World Example).
Now, it's time to select your runtime. For this guide, I'll select go1.x. As for the package type, let's choose zip.
The assistant will prompt you with a question about enabling X-Ray tracing and CloudWatch Application Insights for the function. Answer y to enable both.
Lastly, when asked for the name of your application, provide a suitable name. In my case, I'll name it hello-lambda.
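By the way, if you already know your answers, you can skip the assistant entirely and pass them as flags. This is just a sketch of the non-interactive form; double-check the options against sam init --help for your installed SAM CLI version:

$ sam init --name hello-lambda --runtime go1.x --app-template hello-world --package-type Zip --no-interactive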
SAM will create a new folder called hello-lambda with the following structure:
.
├── Makefile <-- Make to automate build
├── README.md <-- This instructions file
├── hello-world <-- Source code for a lambda function
│ ├── main.go <-- Lambda function code
│ └── main_test.go <-- Unit tests
└── template.yaml
Now, you can begin by renaming the hello-world references to the desired name for your lambda or project and adding the functionality you need. For the sake of simplicity, I'll leave this hello world lambda as is.
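For reference, this is roughly what the generated hello-world handler does (I'm paraphrasing the Quick Start template, so your generated main.go may differ slightly): it calls checkip.amazonaws.com and returns a greeting containing the IP it got back.

package main

import (
	"fmt"
	"io"
	"net/http"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler responds to API Gateway requests by fetching an IP from
// checkip.amazonaws.com and echoing it back in the response body.
func handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	resp, err := http.Get("https://checkip.amazonaws.com")
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: 500}, err
	}
	defer resp.Body.Close()

	ip, err := io.ReadAll(resp.Body)
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: 500}, err
	}

	return events.APIGatewayProxyResponse{
		StatusCode: 200,
		Body:       fmt.Sprintf("Hello, %s", ip),
	}, nil
}

func main() {
	lambda.Start(handler)
}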
Next, repeat the same process to create the foundation for a Lambda authorizer. I named my lambda lambda-authorizer and updated the folder and references from hello-world to authorizer.
Your folder structure should resemble something like this:
.
├── hello-lambda
│ ├── Makefile
│ ├── README.md
│ ├── hello-world
│ │ ├── main.go
│ │ └── main_test.go
│ └── template.yaml
└── lambda-authorizer
├── Makefile <-- Make to automate build
├── README.md <-- This instructions file
├── authorizer <-- Source code for a lambda function
│ ├── main.go <-- Lambda function code
│ └── main_test.go <-- Unit tests
└── template.yaml
Note: Make sure to update the Go version in the lambdas' go.mod files to ensure compatibility with the SAM tooling. It is recommended to use at least Go 1.18 (I'm using 1.20).
The code for the authorizer is quite straightforward. Currently, it simply validates the presence of a Bearer token in the Authorization header. An authorizer lambda must return an APIGatewayCustomAuthorizerResponse with either an Allow or Deny IAM policy.
Here is the complete code for the authorizer. Feel free to add any additional validation logic you require within the handleRequest method:
package main

import (
	"context"
	"strings"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handleRequest(ctx context.Context, event events.APIGatewayCustomAuthorizerRequest) (events.APIGatewayCustomAuthorizerResponse, error) {
	token := event.AuthorizationToken
	if token == "" || !strings.HasPrefix(token, "Bearer") {
		return generatePolicy("Deny", event.MethodArn), nil
	}
	return generatePolicy("Allow", event.MethodArn), nil
}

func main() {
	lambda.Start(handleRequest)
}

func generatePolicy(effect, resource string) events.APIGatewayCustomAuthorizerResponse {
	authResponse := events.APIGatewayCustomAuthorizerResponse{PrincipalID: "user"}
	if effect != "" && resource != "" {
		authResponse.PolicyDocument = events.APIGatewayCustomAuthorizerPolicy{
			Version: "2012-10-17",
			Statement: []events.IAMPolicyStatement{
				{
					Action:   []string{"execute-api:Invoke"},
					Effect:   effect,
					Resource: []string{resource},
				},
			},
		}
	}
	return authResponse
}
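If you want the authorizer to verify the token rather than only check for the Bearer prefix, one option is a signature check with a JWT library. The following is a minimal sketch and not part of the generated project: it assumes an HS256-signed token and the third-party github.com/golang-jwt/jwt/v5 package, with the signing secret in a hypothetical JWT_SECRET environment variable, so adapt it to your own signing algorithm and claims.

// Drop-in replacement for handleRequest in the file above. It reuses
// generatePolicy and needs three extra imports: "fmt", "os", and
// "github.com/golang-jwt/jwt/v5".
func handleRequest(ctx context.Context, event events.APIGatewayCustomAuthorizerRequest) (events.APIGatewayCustomAuthorizerResponse, error) {
	raw := strings.TrimPrefix(event.AuthorizationToken, "Bearer ")

	token, err := jwt.Parse(raw, func(t *jwt.Token) (interface{}, error) {
		// Reject tokens signed with an unexpected algorithm.
		if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("unexpected signing method: %v", t.Header["alg"])
		}
		// Hypothetical environment variable holding the HMAC signing secret.
		return []byte(os.Getenv("JWT_SECRET")), nil
	})
	if err != nil || !token.Valid {
		return generatePolicy("Deny", event.MethodArn), nil
	}
	return generatePolicy("Allow", event.MethodArn), nil
}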
Building and Packaging Lambdas with SAM
We have our lambdas ready, and technically, we can use the SAM command sam deploy to deploy each lambda individually to AWS. However, before doing that, we need to take a few steps to:
- Deploy our lambdas as part of the same API gateway.
- Create the necessary roles for SAM to create resources and enable interaction between our resources.
- Deploy everything with a single command.
To achieve this, let's create a new template.yaml file in the root folder, at the same level as hello-lambda and lambda-authorizer.
In this file, we'll define a few important components:
- The API gateway where our lambdas will be accessible.
- A role that grants permissions to our lambdas to write to CloudWatch.
- The build method for our lambdas.
To save time, let's copy the contents of the template.yaml file from hello-lambda and start adding our additional resources.
First, let's add the resource definition for the authorizer lambda. Note that it will have the same definition as the one in lambda-authorizer/template.yaml, but without the Events section.
AuthorizerFunction:
  Type: AWS::Serverless::Function
  Metadata:
    BuildMethod: makefile
  Properties:
    CodeUri: lambda-authorizer/
    Handler: authorizer/authorizer
    Runtime: go1.x
    Architectures:
      - x86_64
Then, we will define our API Gateway and specify the authorizer function:
HelloWorldApiGateway:
  Type: AWS::Serverless::Api
  Properties:
    StageName: v1
    Auth:
      DefaultAuthorizer: TokenAuthorizer
      Authorizers:
        TokenAuthorizer:
          FunctionArn: !GetAtt AuthorizerFunction.Arn
          Identity:
            ReauthorizeEvery: 0
Things to note here are:
- StageName is optional, but this value represents the name of the deployment stage for your REST API. It is typically used to distinguish different versions or environments of your API.
- FunctionArn, as the name implies, is the ARN of the authorizer lambda. Because we are just creating the lambda and don't know the ARN yet, we obtain the value with the GetAtt function. If you already have a lambda created and deployed, you can use its ARN here.
- ReauthorizeEvery: this property of Identity controls the cache for the authorizer lambda. Here, we are disabling the cache by setting the value to 0, which means every call passes through the authorizer lambda.
Next, let's add another resource. This resource will define the role our lambda will assume to be able to write to CloudWatch and send traces to X-Ray.
HelloLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action:
            - sts:AssumeRole
    Policies:
      - PolicyName: HelloLambdaRolePolicy
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - logs:CreateLogGroup
                - logs:CreateLogStream
                - logs:PutLogEvents
                - xray:PutTraceSegments
                - xray:PutTelemetryRecords
                - xray:GetSamplingRules
                - xray:GetSamplingTargets
              Resource: '*'
Then, we go back to the definition of our HelloWorldFunction to add the role to assume.
This is the final resource definition:
HelloWorldFunction:
  Type: AWS::Serverless::Function
  Metadata:
    BuildMethod: makefile
  Properties:
    CodeUri: hello-lambda/
    Handler: hello-world/hello-world
    Role: !GetAtt HelloLambdaRole.Arn
    Runtime: go1.x
    Architectures:
      - x86_64
    Events:
      CatchAll:
        Type: Api
        Properties:
          Path: /hello
          Method: GET
          RestApiId:
            Ref: HelloWorldApiGateway
Note that we removed the Environment section since we don't need it here, but you can use it to pass environment variable values to your lambda code.
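If you do need environment variables later, SAM accepts an Environment block under Properties. A minimal sketch (TABLE_NAME is just an example name, not something this project uses):

HelloWorldFunction:
  Type: AWS::Serverless::Function
  Properties:
    # ...same properties as above...
    Environment:
      Variables:
        TABLE_NAME: my-example-table # read it in Go with os.Getenv("TABLE_NAME")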
The values for CodeUri and Handler have also changed. The new values follow the pattern:
CodeUri: <folder created by sam init>/
Handler: <lambda name>/<binary filename>
We also added a new property, Metadata. This is used to specify a custom build command when compiling the lambda. However, we need to go into each of our lambdas' Makefiles and add the following:
# hello-lambda/Makefile
build-HelloWorldFunction:
	cd hello-world; GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o hello-world main.go
	mv hello-world $(ARTIFACTS_DIR)
.PHONY: build-HelloWorldFunction

# lambda-authorizer/Makefile
build-AuthorizerFunction:
	cd authorizer; GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o authorizer main.go
	mv authorizer $(ARTIFACTS_DIR)
.PHONY: build-AuthorizerFunction
What's happening here? Let's take hello-lambda as an example:
- The Makefile is located at the root of the hello-lambda folder, so the recipe first changes directory into hello-world, which is the folder containing our go.mod file and the lambda's source code.
- We set some flags for building the project:
  - GOOS=linux: sets the target operating system to Linux.
  - GOARCH=amd64: indicates that the resulting binary is optimized for 64-bit systems, specifically x86_64 processors.
  - CGO_ENABLED=0: commonly used when you want a pure Go binary without relying on external C libraries. It makes your Go application easier to compile and run on different platforms without worrying about compatibility issues with C libraries.
The last change to make in our global template.yaml file is to update the outputs to reference the API Gateway we just defined and the role we are creating:
Outputs:
  HelloWorldAPI:
    Description: API Gateway endpoint URL for Prod environment for First Function
    Value: !Sub "https://${HelloWorldApiGateway}.execute-api.${AWS::Region}.amazonaws.com/v1/hello/"
  HelloWorldFunction:
    Description: First Lambda Function ARN
    Value: !GetAtt HelloWorldFunction.Arn
  HelloWorldFunctionIamRole:
    Description: Explicit IAM Role created for Hello World function
    Value: !GetAtt HelloLambdaRole.Arn
You can view the final and complete template.yaml file here.
Now we can start checking whether everything works as expected. So let's build the project!
In your terminal, change directories to the project root and run:
$ sam build
If everything is correct, you should see something like:
Build Succeeded
Built Artifacts : .aws-sam/build
Built Template : .aws-sam/build/template.yaml
Commands you can use next
=========================
[*] Validate SAM template: sam validate
[*] Invoke Function: sam local invoke
[*] Test Function in the Cloud: sam sync --stack-name {{stack-name}} --watch
[*] Deploy: sam deploy --guided
If you encounter any errors, here are some common issues to check:
- Misspelled resource names or wrong paths in template.yaml.
- An incompatible (old) Go version.
- Misspelled attributes in template.yaml. A common error is using .arn instead of .Arn.
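Before deploying, you can also exercise the stack locally; this is where the Docker prerequisite comes in. A quick sketch of the commands (note that whether sam local start-api emulates the Lambda authorizer depends on your SAM CLI version, so the auth check may be skipped locally):

$ sam local start-api
$ curl http://127.0.0.1:3000/hello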
At this point, we should be able to deploy our lambdas using SAM CLI. You can do so by running:
$ sam deploy --guided
An assistant will ask you for some information about your project and your AWS account, such as the name to give the stack, the region for deployment, and if you need to review changes before applying. Once you provide the required details, you should be good to go! However, we don't want to manually deploy our lambdas. Let's see how we can set up a GitHub workflow and leverage some cool features of SAM.
After building our project and ensuring everything is in order, it's a good time to run:
$ sam pipeline init --bootstrap
This command will initiate an interactive assistant that guides you through creating the necessary AWS infrastructure resources for SAM to deploy your code. During the process, you will be prompted to select a user permissions provider. Choose OpenID Connect (OIDC), which will create a role for the GitHub workflow to deploy our project and monitor the creation process.
The assistant will also help you set up a multi-stage deployment. It will ask you which branches should be pushed to the Dev environment (usually dev or development) and which branch should be deployed to the Production environment (usually the main branch).
Once you complete the questionnaire, the bootstrapping process will begin. When it finishes, you'll have:
- A new CloudFormation Stack with the resources required by SAM and GitHub Actions.
- An IAM role assumed by AWS CloudFormation when applying the changeset.
- An IAM role assumed by the GitHub Workflow to deploy our artifacts.
- A pair of S3 buckets used to upload your AWS CloudFormation template and store output logs.
- A new GitHub workflow file that contains everything you need to run your deployment pipeline upon merging. Remember to add, commit, and push this file to your repository.
If you are the owner of both the AWS account and the repository, and you have successfully completed the bootstrap process, you're all set! You can now create a new pull request to the specified Dev or Production environment branch, and the GitHub workflow will kick off.
You can monitor the deployment process either in the Actions tab of your GitHub repository or in the AWS console, specifically in the CloudFormation service panel.
Note that in the logs displayed for the deployment job in GitHub, there is an "Outputs" section where you can find the API Gateway URL for the lambda function. Let's give it a try!
First, let's check if the authorizer lambda is working correctly by making a request to our API without an Authorization header:
Request
curl --location 'https://qk1d1uv590.execute-api.us-east-1.amazonaws.com/v1/hello/'
Response
We should receive an HTTP status code 401 (Unauthorized).
{
"message": "Unauthorized"
}
Now, let's add a JWT to our request:
Request
curl --location 'https://qk1d1uv590.execute-api.us-east-1.amazonaws.com/v1/hello/' \
--header 'Authorization: Bearer token.jwt.value'
Response
Hello, <your IP should appear here>
And there you have it! We did it! Let's recap and see what we have accomplished:
- Created a Lambda function
- Created an authorizer lambda
- Created a SAM template to manage all of our resources in one place
- Used SAM CLI to create the necessary roles for resource creation and deployment, as well as an OIDC role for GitHub Actions to assume and run our jobs
- Completed the setup of our CI/CD pipeline
But wait! There's more!
If you are the owner or admin of the AWS account, you can stop reading here. But what happens when you have a limited AWS role, or you don't need to deploy to multiple stages?
That was the case for me. I was working for a client (the owner of the AWS account), and my AWS role couldn't create resources "directly". Additionally, SAM was already bootstrapped, and I only needed to deploy to a single stage. As a result, I didn't benefit from running sam pipeline init --bootstrap or sam pipeline bootstrap, which would have set up the OIDC role, the GitHub Workflow definition, and the AWS stages.
So let's go back a bit and imagine that we just finished modifying our template.yaml file and ran sam build.
Setting Up GitHub Actions Workflow
Ok. Let's take a step back for a moment. What exactly are these GitHub Actions?
GitHub Actions is a feature of GitHub that allows us to automate our software development workflows within GitHub repositories. It can be used for continuous integration and continuous delivery (CI/CD) of code, as well as automating other software workflows such as building or testing code directly from GitHub.
To create and define a workflow, you need to create a YAML file in the .github/workflows/ folder of your repository. The name of the file doesn't matter. A workflow consists of one or more events that trigger the workflow, one or more jobs that execute on runner machines, and a series of steps within each job. Each step can run a script that you define or an action, which is a reusable extension that simplifies your workflow.
Now, let's create our workflow file. I'll create it at .github/workflows/dev-deploy.yaml.
First, we specify the permissions we want to give to the GitHub Workflow runner. These permissions modify the default permissions granted to the GITHUB_TOKEN, allowing us to add or remove access as required:
permissions:
id-token: write # This is required for requesting the JWT
contents: read # This is required for actions/checkout
Next, we indicate the event that will trigger this workflow:
on:
push:
branches:
- main
In this case, we're telling the Workflow runner to execute the workflow whenever there are new changes pushed to the main branch. You can utilize various events for different triggers.
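For instance, here is a sketch (not part of the workflow we're building in this post) that would also trigger on pull requests against main and allow manual runs from the Actions tab:

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
  workflow_dispatch: {}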
Now, let's move on to the main part of the workflow: the jobs. Each job runs in a runner environment specified by runs-on. A job consists of a sequence of tasks called steps. Steps can run commands, set up tasks, or execute an action within your repository, a public repository, or an action published in a Docker registry.
For more information about Jobs and Steps, you can refer to the official documentation site.
Now, let's take a look at the job and steps we'll be using to deploy our resources:
steps:
  - uses: actions/checkout@v3
  - uses: aws-actions/setup-sam@v2
    with:
      use-installer: true
  - uses: aws-actions/configure-aws-credentials@v2
    with:
      aws-region: us-east-1
      role-duration-seconds: 1800
      role-skip-session-tagging: true
      role-to-assume: arn:aws:iam::269174633178:role/aws-sam-cli-managed-dev-pipe-PipelineExecutionRole-1TGFGMZ8QQ4DO
  - uses: actions/setup-go@v2
    with:
      go-version: '1.20'
  - run: |
      # Define a list of folders (lambda_folder/code_folder) to run go mod tidy in.
      folders=(
        "lambda-authorizer/authorizer"
        "hello-lambda/hello-world"
      )
      # Loop through each folder and run 'go mod tidy'
      for folder in "${folders[@]}"; do
        echo "Running 'go mod tidy' in $folder"
        cd $folder
        go mod tidy -go=1.20
        cd ../..
      done
  - run: sam build
  # Prevent prompts and failure when the stack is unchanged
  - run: sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --role-arn arn:aws:iam::269174633178:role/aws-sam-cli-managed-dev-p-CloudFormationExecutionR-830GQUQ9J19P --stack-name sam-lambdas --resolve-s3 --s3-prefix "sam-lambdas" --region us-east-1 --capabilities CAPABILITY_IAM
Here, we have a single job, but several things are happening across its steps. First, we are utilizing some existing GitHub Actions. The uses keyword specifies an action that should be executed as part of the workflow, and with: <parameters> is a set of key-value pairs where the key is the input parameter name expected by the action and the value is what you want to assign to that parameter. In this case, we are using actions to check out the code, install the SAM CLI on the GitHub runner, configure AWS credentials via the OIDC role, and set up Go.
Lastly, the run command is used to execute command-line programs using the operating system's shell. As you can see here, we are running the go mod tidy command in each of our lambdas, ensuring that our project builds reliably and uses the correct versions of its dependencies.
The final run command deploys our project with the following parameters:
- --no-confirm-changeset: skips the prompt asking whether the AWS SAM CLI should deploy the computed changeset, so the deployment can run unattended.
- --no-fail-on-empty-changeset: returns a zero exit code when there are no changes to make to the stack. This prevents the workflow from being marked as failed if there are no changes in the project.
- --role-arn: the Amazon Resource Name (ARN) of an IAM role that AWS CloudFormation assumes when applying the changeset. In this case, I'm reusing the role created during the SAM bootstrap process. However, you can refer to this guide to create a role specifically for use in GitHub workflows.
- --stack-name: the name of the CloudFormation stack.
- --resolve-s3: automatically creates an Amazon S3 bucket to use for packaging and deploying during non-guided deployments.
- --s3-prefix: the prefix added to the names of the artifacts that are uploaded to the Amazon S3 bucket.
- --region: the AWS Region to deploy to.
- --capabilities: a list of capabilities that must be specified to allow AWS CloudFormation to create certain stacks. Here we need CAPABILITY_IAM because the stack creates IAM resources.
You can learn more about these and other sam deploy options in the official documentation.
And that's it! Once you have pushed your workflow file to your repository, you should see the workflow start. You can monitor its progress in the "Actions" tab of your repository.
Some common issues you might encounter are:
- "An error occurred (AccessDenied) when calling the PutObject operation: Access Denied."
  - This means the OIDC role doesn't have the necessary permissions. It should have access to most CloudFormation and S3 actions.
- "Missing option 'some option', 'sam deploy --guided' can be used to provide and save the needed parameters for future deployments."
  - As the message implies, an option might be missing in your sam deploy command.
Conclusion and Best Practices
We have covered a lot in this article! In conclusion, we explored the capabilities of AWS Serverless Application Model (SAM) templates for creating Lambdas, implementing Authorizer Lambdas, defining AWS resources in a stack, deploying our stack, and setting up a CI/CD pipeline with GitHub Workflows.
SAM templates are a powerful and efficient approach to developing serverless applications. They allow us to define our application's infrastructure and functions in a single template file. With SAM, we can easily manage and deploy our serverless applications, ensuring scalability, reliability, and fast evolution. When combined with GitHub Workflows, we have a seamless CI/CD pipeline that automates our deployments and keeps our application updates flowing smoothly. With these awesome tools at our disposal, we are ready to build and deploy apps like professionals!
However, this article only scratches the surface, and there are many more topics to explore. For example, we didn't cover best practices. When deploying Lambdas with SAM and GitHub Actions, there are some best practices and recommendations to consider:
- Continuous Integration and Deployment (CI/CD): We set up a CI/CD pipeline to deploy our stack, but it's common to use GitHub Actions to automate the build, test, and deployment process.
- Environment-specific Configuration: Use environment variables to manage configuration values such as API keys, database connections, and other sensitive information. GitHub Secrets or AWS Systems Manager Parameter Store can be used to securely store and retrieve these variables during the CI/CD process.
- Automated Testing: Implement automated tests for your Lambdas to ensure their functionality (see the test sketch after this list).
- Security and Permissions: Follow the principle of least privilege when assigning IAM roles and permissions to your Lambdas. Only grant the permissions required for their specific functionality. Avoid using long-lived AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) and use roles instead.
- Logging and Monitoring: Enable logging in your Lambdas and use CloudWatch Logs for log management. Implement monitoring and alerting using CloudWatch Alarms or third-party monitoring tools to detect issues and ensure your Lambdas work as intended.
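As a concrete starting point for the Automated Testing item above, here's a small table-driven test for the authorizer's handleRequest. It's only a sketch: it assumes the prefix-check version of the function shown earlier, and the MethodArn value is a made-up example.

package main

import (
	"context"
	"testing"

	"github.com/aws/aws-lambda-go/events"
)

// TestHandleRequest checks that the authorizer returns the expected IAM
// policy effect for a few representative Authorization header values.
func TestHandleRequest(t *testing.T) {
	cases := []struct {
		name       string
		token      string
		wantEffect string
	}{
		{name: "missing token", token: "", wantEffect: "Deny"},
		{name: "wrong scheme", token: "Basic abc123", wantEffect: "Deny"},
		{name: "bearer token", token: "Bearer token.jwt.value", wantEffect: "Allow"},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			event := events.APIGatewayCustomAuthorizerRequest{
				AuthorizationToken: tc.token,
				MethodArn:          "arn:aws:execute-api:us-east-1:123456789012:example/v1/GET/hello", // example ARN
			}
			resp, err := handleRequest(context.Background(), event)
			if err != nil {
				t.Fatalf("unexpected error: %v", err)
			}
			if got := resp.PolicyDocument.Statement[0].Effect; got != tc.wantEffect {
				t.Errorf("got effect %q, want %q", got, tc.wantEffect)
			}
		})
	}
}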
And that's it, friends! You can take a look at the repository with the full code here.
Some recommended reading
Would you like to know more? Here are some resources to dig deeper.
AWS SAM:
- AWS SAM Developer Guide - Official documentation for AWS SAM, providing an overview, concepts, and examples.
- AWS SAM GitHub Repository - The official GitHub repository for AWS SAM. It contains code samples, templates, and community contributions.
AWS Lambda:
- AWS Lambda Developer Guide - Official documentation for AWS Lambda, covering various aspects, including deployment, programming model, and best practices.
- AWS Lambda GitHub Repository - The official GitHub repository for AWS Lambda. It includes examples, guides, and resources specifically for working with Lambda in Go.
GitHub Actions:
- GitHub Actions Documentation - The official documentation for GitHub Actions, covering topics such as workflows, syntax, and available actions.
- GitHub Actions Marketplace - A marketplace of pre-built actions that can be used in your GitHub Actions workflows. You can search for actions related to AWS, SAM, or other specific needs.
Examples and Tutorials:
- AWS Serverless Application Repository - A repository of serverless applications built using AWS SAM. You can explore and deploy ready-to-use applications or learn from their implementation.
- GitHub Actions Examples - Example workflows that demonstrate the CI/CD features of GitHub Actions.