Alex Casalboni for AWS
How to FaaS like a pro: 12 less common ways to invoke your serverless functions on Amazon Web Services [Part 3]

“I really like the peace of mind of building in the cloud” cit. myself [Photo by Kaushik Panchal on Unsplash]

This is the last part of my How to FaaS like a pro series, where I discuss and showcase some less common ways to invoke your serverless functions with AWS Lambda.

You can find [Part 1] here — covering Amazon Cognito User Pools, AWS Config, Amazon Kinesis Data Firehose, and AWS CloudFormation.

And [Part 2] here — covering AWS IoT Button, Amazon Lex, Amazon CloudWatch Logs, and Amazon Aurora.

In the third part I will describe four more:

  1. AWS CodeDeploy — pre & post deployment hooks
  2. AWS CodePipeline — custom pipeline actions
  3. Amazon Pinpoint — custom segments & channels
  4. AWS ALB (Application Load Balancer) — HTTP target

9. AWS CodeDeploy (pre/post-deployment hooks)

CodeDeploy is part of the AWS Code Suite and allows you to automate software deployments to Amazon EC2, AWS Fargate, AWS Lambda, and even on-premises environments.

Not only does it enable features such as safe deployments for serverless functions, but it also integrates with Lambda to implement custom hooks. This means you can inject custom logic at different steps of a deployment in order to add validation, 3rd-party integrations, integration tests, etc. Each hook runs only once per deployment and can potentially trigger a rollback.

You can configure different lifecycle event hooks, depending on the compute platform (AWS Lambda, Amazon ECS, Amazon EC2 or on-premises).

AWS Lambda

  • BeforeAllowTraffic  — runs before traffic is shifted to the deployed Lambda function
  • AfterAllowTraffic  — runs after all traffic has been shifted

Amazon ECS & Amazon EC2/on-premises

See the full documentation here.

Amazon ECS and EC2 have a more complex deployment lifecycle, while Lambda follows a simple flow: Start > BeforeAllowTraffic > AllowTraffic > AfterAllowTraffic > End. In this flow, you can inject your custom logic before traffic is shifted to the new version of your Lambda function and after all traffic is shifted.

For example, we could run some integration tests in the BeforeAllowTraffic hook. And we could implement a 3rd-party integration (JIRA, Slack, email, etc.) in the AfterAllowTraffic hook.

Let’s have a look at a sample implementation of a Lambda hook for CodeDeploy:

const AWS = require('aws-sdk');
const codedeploy = new AWS.CodeDeploy();

exports.handler = async (event, context) => {
  const { DeploymentId, LifecycleEventHookExecutionId } = event;
  const functionToTest = process.env.NewVersion; // to be defined in CFN

  /* Enter validation tests here */
  const status = 'Succeeded'; // 'Succeeded' or 'Failed'

  // Pass AWS CodeDeploy the prepared validation test results.
  return await codedeploy.putLifecycleEventHookExecutionStatus({
    deploymentId: DeploymentId,
    lifecycleEventHookExecutionId: LifecycleEventHookExecutionId,
    status: status,
  }).promise();
};

The code snippet above doesn’t do much, but it shows you the overall hook structure:

  • It receives a DeploymentId and LifecycleEventHookExecutionId that you’ll use to invoke CodeDeploy’s PutLifecycleEventHookExecutionStatus API
  • The execution status can be either Succeeded or Failed
  • You can easily provide an environment variable to the hook function so that it knows which function we are deploying and what its ARN is (see the Python sketch below)
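
To make this more concrete, here's a minimal sketch of what those validation tests might look like, this time in Python with boto3 (the test payload and the pass/fail check are just placeholders):

import os
import json
import boto3

codedeploy = boto3.client('codedeploy')
lambda_client = boto3.client('lambda')

def handler(event, context):
    # invoke the newly deployed version (ARN provided via env variable)
    response = lambda_client.invoke(
        FunctionName=os.environ['NewVersion'],
        Payload=json.dumps({'test': True}),  # placeholder test payload
    )
    # naive check: fail the deployment if the function raised an error
    status = 'Failed' if response.get('FunctionError') else 'Succeeded'
    # report the result back to CodeDeploy
    return codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event['DeploymentId'],
        lifecycleEventHookExecutionId=event['LifecycleEventHookExecutionId'],
        status=status,
    )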

I’d recommend defining the hook functions in the same CloudFormation (or SAM) template as the function you’re deploying. This way it’s very easy to define fine-grained permissions and environment variables.

For example, let’s define an AWS SAM template with a simple Lambda function and its corresponding Lambda hook:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:

  myFunctionToBeDeployed:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      AutoPublishAlias: live
      DeploymentPreference:
        Type: Linear10PercentEvery1Minute
        Hooks:
          PreTraffic: !Ref preTrafficHook
      # ...
      # all the other properties here
      # ...

  preTrafficHook:
    Type: AWS::Serverless::Function
    Properties:
      Handler: preTrafficHook.handler
      Runtime: nodejs8.10
      DeploymentPreference:
        Enabled: false
      Environment:
        Variables:
          NewVersion: !Ref myFunctionToBeDeployed.Version
      Policies:
        - Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - codedeploy:PutLifecycleEventHookExecutionStatus
              Resource: !Sub 'arn:aws:codedeploy:${AWS::Region}:${AWS::AccountId}:deploymentgroup:${ServerlessDeploymentApplication}/*'
        - Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - lambda:InvokeFunction
              Resource: !Ref myFunctionToBeDeployed.Version

The template above defines two functions:

  1. myFunctionToBeDeployed is our target function, the one we’ll be deploying with AWS CodeDeploy
  2. preTrafficHook is our hook, invoked before traffic is shifted to myFunctionToBeDeployed during the deployment

I’ve configured two special properties on myFunctionToBeDeployed called DeploymentPreference and AutoPublishAlias. These properties allow us to specify which deployment type we want (linear, canary, etc.), which hooks will be invoked, and which alias will be used to shift traffic in a weighted fashion.

A few relevant details about the pre-traffic hook definition:

  • I am defining an environment variable named NewVersion which will contain the ARN of the newly deployed function, so that we can invoke it and run some tests
  • preTrafficHook needs IAM permissions to invoke the codedeploy:PutLifecycleEventHookExecutionStatus API and I’m providing fine-grained permissions by referencing the deployment group via ${ServerlessDeploymentApplication}
  • since we want to run some tests on the new version of myFunctionToBeDeployed, our hook will need IAM permissions to invoke the lambda:InvokeFunction API, and I’m providing fine-grained permissions by referencing myFunctionToBeDeployed.Version

In a real-world scenario, you may want to set up a proper timeout based on which tests you’re planning to run and how long you expect them to take.

In even more complex scenarios, you might even execute an AWS Step Functions state machine that will run multiple tasks in parallel before reporting the hook execution status back to CodeDeploy.
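
Just to sketch the idea in Python (assuming a hypothetical state machine that runs the validation tasks and reports the hook status itself):

import os
import json
import boto3

sfn = boto3.client('stepfunctions')

def handler(event, context):
    # kick off the validation state machine; its final state (not shown)
    # would call PutLifecycleEventHookExecutionStatus on our behalf
    sfn.start_execution(
        stateMachineArn=os.environ['ValidationStateMachineArn'],  # hypothetical
        input=json.dumps({
            'DeploymentId': event['DeploymentId'],
            'LifecycleEventHookExecutionId': event['LifecycleEventHookExecutionId'],
        }),
    )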

Last but not least, don’t forget that you can implement a very similar behaviour for non-serverless deployments involving Amazon ECS or EC2. In this case, you’ll have many more hooks available such as BeforeInstall, AfterInstall, ApplicationStop, DownloadBundle, ApplicationStart, ValidateService, etc. (full documentation here).

10. AWS CodePipeline (custom action)

CodePipeline is part of the AWS Code Suite and allows you to design and automate release pipelines (CI/CD). It integrates with the other Code Suite services such as CodeCommit, CodeBuild, and CodeDeploy, as well as popular 3rd-party services such as GitHub, CloudBees, Jenkins CI, TeamCity, BlazeMeter, Ghost Inspector, StormRunner Load, Runscope, and XebiaLabs.

In situations when built-in integrations don’t suit your needs, you can let CodePipeline integrate with your own Lambda functions as a pipeline stage. For example, you can use a Lambda function to verify if a website’s been deployed successfully, to create and delete resources on-demand at different stages of the pipeline, to back up resources before deployments, to swap CNAME values during a blue/green deployment, and so on.

Let’s have a look at a sample implementation of a Lambda stage for CodePipeline:

const http = require('http');
const AWS = require('aws-sdk');
const codepipeline = new AWS.CodePipeline();

exports.handler = async (event, context) => {
  // Retrieve event data
  const jobData = event["CodePipeline.job"];
  const jobId = jobData.id;
  const url = jobData.data.actionConfiguration.configuration.UserParameters;

  // validate URL (must contain http:// or https://)
  if (!url || (!url.includes('http://') && !url.includes('https://'))) {
    return await putJobFailure(jobId, 'Invalid URL: ' + url, context.invokeid);
  }

  try {
    const page = await fetchPage(url);
    if (page.statusCode === 200) {
      return await putJobSuccess(jobId, "Tests passed.");
    } else {
      return await putJobFailure(jobId, "Invalid status code: " + page.statusCode, context.invokeid);
    }
  } catch (error) {
    return await putJobFailure(jobId, "Couldn't fetch page", context.invokeid);
  }
};

const fetchPage = (url) => {
  const page = {
    body: '',
    statusCode: 0,
  };
  return new Promise((resolve, reject) => {
    http.get(url, (response) => {
      page.statusCode = response.statusCode;
      response.on('data', (chunk) => {
        page.body += chunk;
      });
      response.on('end', () => {
        resolve(page);
      });
    }).on('error', (error) => { // network errors fire on the request object
      reject(error);
    });
  });
};

const putJobSuccess = async (jobId, message) => {
  await codepipeline.putJobSuccessResult({
    jobId: jobId,
  }).promise();
  return message;
};

const putJobFailure = async (jobId, message, executionId) => {
  await codepipeline.putJobFailureResult({
    jobId: jobId,
    failureDetails: {
      message: JSON.stringify(message),
      type: 'JobFailed',
      externalExecutionId: executionId,
    },
  }).promise();
  return message;
};

The function will receive three main inputs:

  • id — the JobID required to report success or failure via API
  • data.actionConfiguration.configuration.UserParameters — the stage dynamic configuration; you can think of this as an environment variable that depends on the pipeline stage, so you could reuse the same function for dev, test, and prod pipelines
  • context.invokeid — the invocation ID related to this pipeline execution, useful for tracing and debugging in case of failure

In the simple code snippet above I am doing the following:

  1. Verify that the given URL is valid
  2. Fetch the URL via HTTP(S)
  3. Report success via the CodePipeline putJobSuccessResult API if the HTTP status is 200
  4. Report failure via the CodePipeline putJobFailureResult API in case of errors — using different error messages and contextual information

Of course, we could extend and improve the validation step, as well as the URL verification. Receiving a 200 status is a very minimal way to verify that our website was deployed successfully. Here we could add automated browser testing and any other custom logic.

It’s also worth remembering that you can implement this logic in any programming language supported by Lambda (or not). Here I’ve used Node.js but the overall structure wouldn’t change much in Python, Go, C#, Ruby, Java, PHP, etc.
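
For example, here's a sketch of how the same reporting logic could look in Python with boto3 (run_my_validation is a hypothetical placeholder for your own tests):

import boto3

codepipeline = boto3.client('codepipeline')

def run_my_validation(url):
    # placeholder: raise an exception if the validation fails
    assert url.startswith('http')

def handler(event, context):
    job = event['CodePipeline.job']
    url = job['data']['actionConfiguration']['configuration']['UserParameters']
    try:
        run_my_validation(url)  # hypothetical validation logic
        codepipeline.put_job_success_result(jobId=job['id'])
    except Exception as error:
        codepipeline.put_job_failure_result(
            jobId=job['id'],
            failureDetails={
                'message': str(error),
                'type': 'JobFailed',
                'externalExecutionId': context.aws_request_id,
            },
        )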

Now, let me show you how we can integrate all of this into a CloudFormation template (using AWS SAM as usual):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:

  myPipelineFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Policies:
        - AWSLambdaExecute # Managed Policy
        - Version: '2012-10-17' # Policy Document
          Statement:
            - Effect: Allow
              Action:
                - codepipeline:PutJobSuccessResult
                - codepipeline:PutJobFailureResult
              Resource: '*'
      # ...
      # all the other properties here
      # ...

  myPipelineLambdaPermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !Ref myPipelineFunction
      Action: lambda:InvokeFunction
      Principal: 'codepipeline.amazonaws.com'
      SourceAccount: !Ref 'AWS::AccountId'
      SourceArn: !Sub 'arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${myPipeline}'

  myPipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      ArtifactStore:
        Type: S3
        Location: my-pipeline-bucket
      RoleArn: !Ref MyIAMRoleForCodePipeline
      Stages:
        # - ... OTHER STAGES HERE (deploy the website) ...
        - Name: myCustomStage
          Actions:
            - Name: Lambda
              ActionTypeId:
                Category: Invoke
                Owner: AWS
                Provider: Lambda
                Version: '1'
              Configuration:
                FunctionName: !Ref myPipelineFunction
                UserParameters: 'http://mywebsite.com'
              OutputArtifacts: []
              InputArtifacts: []

In the template above I’ve defined three resources:

  • An AWS::Serverless::Function to implement our custom pipeline stage; note that it will require IAM permissions to invoke the two CodePipeline APIs
  • An AWS::CodePipeline::Pipeline where we’d normally add all our pipeline stages and actions; plus, I’m adding an action of type Invoke with provider Lambda that will invoke the myPipelineFunction function
  • An AWS::Lambda::Permission that grants CodePipeline permissions to invoke the Lambda function

One more thing to note: in this template I’m not including the IAM role for CodePipeline for brevity.

You can find more details and step-by-step instructions in the official documentation here.

11. Amazon Pinpoint (custom segments & channels)

Amazon Pinpoint is a managed service that allows you to send multi-channel personalized communications to your own customers.

Pinpoint natively supports many channels including email, SMS (in over 200 countries), voice (audio messages), and push notifications (Apple Push Notification service, Amazon Device Messaging, Firebase Cloud Messaging, and Baidu Cloud Push).

As you’d expect, Pinpoint allows you to define users/endpoints and messaging campaigns to communicate with your customers.

And here’s where it nicely integrates with AWS Lambda for two interesting use cases:

  1. Custom segments — it allows you to dynamically modify the campaign’s segment at delivery time, which means you can implement a Lambda function to filter out some of the users/endpoints to engage a more narrowly defined subset of users, or even to enrich users’ data with custom attributes (maybe coming from external systems)
  2. Custom channels — it allows you to integrate unsupported channels such as instant messaging services or web notifications, so you can implement a Lambda function that will take care of the message delivery outside of Amazon Pinpoint

Let’s dive into both use cases!

Note: both use cases are still in beta and some implementation details are still subject to change

11.A — How to define Custom Segments

We can connect a Lambda function to our Pinpoint Campaign and dynamically modify, reduce, or enrich our segment’s endpoints.

Our Lambda function will receive a structured event:

{
  "MessageConfiguration": {...},
  "ApplicationId": "ABC",
  "CampaignId": "XYZ",
  "TreatmentId": "XYZ2",
  "ActivityId": "123",
  "ScheduledTime": "2019-10-08T15:00:00.000Z",
  "Endpoints": {...}
}
Here, the Endpoints set will look similar to the following:
{
  "endpoint_id_1": {
    "ChannelType": "GCM",
    "Address": "4d5e6f1a2b3c4d5e6f7g8h9i0j1a2b3c",
    "EndpointStatus": "ACTIVE",
    "OptOut": "NONE",
    "Demographic": {
      "Make": "android"
    },
    "EffectiveDate": "2019-10-04T21:26:48.598Z",
    "User": {}
  },
  "endpoint_id_2": {
    "ChannelType": "APNS",
    "Address": "1a2b3c4d5e6f7g8h9i0j1a2b3c4d5e6f",
    "EndpointStatus": "ACTIVE",
    "OptOut": "NONE",
    "Demographic": {
      "Make": "apple"
    },
    "EffectiveDate": "2019-10-05T21:26:48.598Z",
    "User": {}
  }
}

The important section of the input event is the set of Endpoints. The expected output of our function is a new set of endpoints with the same structure. This new set might contain fewer endpoints and/or new attributes. Also note that our function will receive at most 50 endpoints in a batch fashion: if your segment contains more than 50 endpoints, the function will be invoked multiple times.

For example, let’s implement a custom segment that will include only the APNS channel (Apple) and generate a new custom attribute named CreditScore:

import random

def handler(event, context):
    # fetch endpoints from the input event
    endpoints = event['Endpoints']
    # iterate over a copy, since we may remove endpoints along the way
    for id, endpoint in list(endpoints.items()):
        print("Processing endpoint with id: %s" % id)
        # don't include endpoints if not APNS
        if endpoint['ChannelType'] != 'APNS':
            endpoints.pop(id)
            continue
        # generate new CreditScore if missing, only for active endpoints
        if endpoint['EndpointStatus'] == 'ACTIVE':
            # add 'Attributes' if missing
            endpoint['Attributes'] = endpoint.get('Attributes', {})
            # add new random CreditScore -> [0, 100]
            endpoint['Attributes']['CreditScore'] = random.randint(0, 100)
    print("New endpoints: %s" % endpoints)
    return endpoints

The code snippet above iterates over the given endpoints and dynamically modifies the set before returning it back to Amazon Pinpoint for delivery.

We exclude each endpoint that is not APNS (just as an example), and then generate a new CreditScore attribute for active endpoints only.

Let’s now define the CloudFormation template for our Pinpoint app:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:

  myHookFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      # ...
      # all the other properties here
      # ...

  myPinpointApp:
    Type: AWS::Pinpoint::App
    Properties:
      Name: MySampleApp

  myPinpointSegment:
    Type: AWS::Pinpoint::Segment
    Properties:
      ApplicationId: !Ref myPinpointApp
      Name: 'My segment'

  myPinpointAPNSChannel:
    Type: AWS::Pinpoint::APNSChannel
    Properties:
      ApplicationId: !Ref myPinpointApp
      # ...
      # all the other properties here
      # ...

  myPinpointCampaign:
    Type: AWS::Pinpoint::Campaign
    Properties:
      ApplicationId: !Ref myPinpointApp
      SegmentId: !GetAtt myPinpointSegment.SegmentId
      MessageConfiguration: {...}
      CampaignHook:
        LambdaFunctionName: !Ref myHookFunction
        Mode: FILTER
      Schedule:
        StartTime: '2019-11-01T15:00:00.000'
        TimeZone: UTC
      # ...
      # all the other properties here
      # ...

The important section of the template above is the CampaignHook attribute of the AWS::Pinpoint::Campaign resource. We are providing the Lambda function name and configuring it with Mode: FILTER. As we’ll see in the next section of this article, we are going to use Mode: DELIVERY to implement custom channels.

In case we had multiple campaigns that required the same custom segment, we could centralize the CampaignHook definition into an AWS::Pinpoint::ApplicationSettings resource:

myAppSettings:
  Type: AWS::Pinpoint::ApplicationSettings
  Properties:
    ApplicationId: !Ref myPinpointApp
    CampaignHook:
      LambdaFunctionName: !Ref myHookFunction
      Mode: FILTER

This way, all the campaigns in our Pinpoint application will inherit the same Lambda hook.
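
As a side note, if you ever need to apply the same hook outside of CloudFormation, the equivalent API call should look something like this sketch (the application ID and function name are placeholders):

import boto3

pinpoint = boto3.client('pinpoint')

# apply the same Lambda hook to all campaigns of the application
pinpoint.update_application_settings(
    ApplicationId='YOUR_PINPOINT_APP_ID',  # placeholder
    WriteApplicationSettingsRequest={
        'CampaignHook': {
            'LambdaFunctionName': 'my-hook-function-name',  # placeholder
            'Mode': 'FILTER',
        },
    },
)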

You can find the full documentation here.

11.B — How to define Custom Channels

We can connect a Lambda function to our Pinpoint Campaign to integrate unsupported channels. For example, Facebook Messenger or even your own website backend to show in-browser notifications.

To define a custom channel we can use the same mechanism described above for custom segments, but using Mode: DELIVERY in our CampaignHook configuration. The biggest difference is that Pinpoint won’t deliver messages itself, as our Lambda hook will take care of that.

Our function will receive batches of 50 endpoints, so if your segment contains more than 50 endpoints the function will be invoked multiple times (ceil(N/50) times, to be precise).

We will receive the same input event and Endpoints structure shown above for custom segments.


Our Lambda function will need to iterate through all the given Endpoints and deliver messages via API.

Let’s implement the Lambda function that will deliver messages to FB Messenger, in Node.js:

const https = require("https");

/* FB configuration (hard-coded here for brevity) */
const FB_ACCESS_TOKEN = "EAF...DZD";
const FB_PSID = "facebookMessengerPsid";
const FB_REQUEST = {
  host: "graph.facebook.com",
  path: "/v2.6/me/messages?access_token=" + FB_ACCESS_TOKEN,
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
};

const isFBActive = (user) =>
  user &&
  user.UserAttributes &&
  user.UserAttributes[FB_PSID];

exports.handler = async (event, context) => {
  const messagesToSend = [];
  if (event.Message && event.Endpoints) {
    for (const [id, endpoint] of Object.entries(event.Endpoints)) {
      if (isFBActive(endpoint.User)) {
        messagesToSend.push(deliver(event.Message, endpoint.User));
      }
    }
  }
  // run all deliveries in parallel and wait for their completion
  await Promise.all(messagesToSend);
  return {
    body: "ok",
    statusCode: 200,
  };
};

const deliver = (message, user) => {
  console.log("Sending message for user: ", user.UserId);
  const requestBody = JSON.stringify({
    recipient: {
      id: user.UserAttributes[FB_PSID][0],
    },
    message: {
      text: message["smsmessage"]["body"],
    },
  });
  return new Promise((resolve, reject) => {
    const req = https.request(FB_REQUEST, (response) => {
      let responseBody = '';
      response.on('data', (chunk) => {
        responseBody += chunk;
      });
      response.on('end', () => {
        resolve(responseBody);
      });
    });
    req.on('error', (error) => { // network errors fire on the request object
      reject(error);
    });
    req.write(requestBody);
    req.end();
  });
};

The code snippet above defines a few configuration parameters, hard-coded here for brevity; I’d recommend storing them in AWS SSM Parameter Store or AWS Secrets Manager instead (more on this below).

The Lambda handler is simply iterating over event.Endpoints and generating an async API call for each one. Then we run all the API calls in parallel and wait for their completion using await Promise.all(...).

You could start from this sample implementation for FB Messenger and adapt it for your own custom channel by editing the deliver(message, user) function.
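
For instance, here's a minimal sketch of how the access token could be fetched at runtime from SSM Parameter Store, in Python (assuming a hypothetical SecureString parameter named /pinpoint/fb-access-token):

import boto3

ssm = boto3.client('ssm')

def get_fb_access_token():
    # fetch and decrypt the token (hypothetical parameter name)
    response = ssm.get_parameter(
        Name='/pinpoint/fb-access-token',
        WithDecryption=True,
    )
    return response['Parameter']['Value']

You could fetch the value once outside of the handler and cache it across warm invocations.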

Let’s now define the CloudFormation template for our Pinpoint app:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:

  myHookFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      # ...
      # all the other properties here
      # ...

  myPinpointApp:
    Type: AWS::Pinpoint::App
    Properties:
      Name: MySampleApp

  myPinpointSegment:
    Type: AWS::Pinpoint::Segment
    Properties:
      ApplicationId: !Ref myPinpointApp
      Name: 'My segment'

  myPinpointCampaign:
    Type: AWS::Pinpoint::Campaign
    Properties:
      ApplicationId: !Ref myPinpointApp
      SegmentId: !GetAtt myPinpointSegment.SegmentId
      MessageConfiguration: {...}
      CampaignHook:
        LambdaFunctionName: !Ref myHookFunction
        Mode: DELIVERY
      Schedule:
        StartTime: '2019-11-01T15:00:00.000'
        TimeZone: UTC
      # ...
      # all the other properties here
      # ...

The overall structure is the same as for custom segments. There are only two main differences:

  • We don’t need to define a channel
  • We are using DELIVERY for the campaign hook mode

You can find the full documentation here.

12. AWS ALB (Application Load Balancer)

AWS ALB is one of the three types of load balancers supported by Elastic Load Balancing on AWS, together with Network Load Balancers and Classic Load Balancers.

ALB operates at Layer 7 of the OSI model, which means it has the ability to inspect packets and HTTP headers to optimize its job. It was announced in August 2016 and it introduced popular features such as content-based routing, support for container-based workloads, as well as for WebSockets and HTTP/2.

Since Nov 2018, ALB supports AWS Lambda too, which means you can invoke Lambda functions to serve HTTP(S) traffic behind your load balancer.

For example — thanks to the content-based routing feature — you could configure your existing application load balancer to serve all traffic under /my-new-feature with AWS Lambda, while all other paths are still served by Amazon EC2, Amazon ECS, or even on-premises servers.

While this is great to implement new features, it also opens up new interesting ways to evolve your compute architecture over time without necessarily refactoring the whole application. For example, by migrating one path/domain at a time transparently for your web or mobile clients.

If you’ve already used AWS Lambda with Amazon API Gateway, AWS ALB will look quite familiar, with a few minor differences.

Let’s have a look at the request/response structure:

{
  "requestContext": {
    "elb": {
      "targetGroupArn": "arn:aws:elasticloadbalancing:XXX:YYY:targetgroup/lambda-ZZZ"
    }
  },
  "httpMethod": "GET",
  "path": "/my-new-feature",
  "queryStringParameters": {
    "id": "123"
  },
  "headers": {
    "accept": "application/json",
    "accept-encoding": "gzip",
    "connection": "keep-alive",
    "host": "lambda-alb-123578498.us-east-2.elb.amazonaws.com",
    "upgrade-insecure-requests": "1",
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36",
    "x-amzn-trace-id": "Root=1-5c536348-3d683b8b04734faae651f476",
    "x-forwarded-for": "72.12.164.125",
    "x-forwarded-port": "80",
    "x-forwarded-proto": "http",
    "x-imforwards": "20"
  },
  "body": "",
  "isBase64Encoded": false
}

AWS ALB will invoke our Lambda functions synchronously and the event structure looks like the JSON object above, which includes all the request headers, its body, and some additional metadata about the request itself such as HTTP method, query string parameters, etc.

ALB expects our Lambda function to return a JSON object similar to the following:

{
  "statusCode": 200,
  "statusDescription": "200 OK",
  "isBase64Encoded": false,
  "headers": {
    "Content-Type": "application/json"
  },
  "body": "{\"message\": \"Hello world!\"}"
}

That’s it! As long as you apply a few minor changes to your Lambda function’s code, it’s quite straightforward to switch from Amazon API Gateway to AWS ALB. Most differences are related to the way you extract information from the input event and the way you compose the output object before it’s converted into a proper HTTP response. I’d personally recommend structuring your code by separating your business logic from the platform-specific input/output details (or the “adaptor”). This way, your business logic won’t change at all and you’ll just need to adapt how its inputs and outputs are provided.

For example, here’s how you could implement a simple Lambda function to work with both API Gateway and ALB:

def handler(event, context):
    input = extract_input_from_event('ALB', event)
    output = my_business_logic(input)
    return format_output_for('ALB', output)

def extract_input_from_event(alb_or_apigw, event):
    # extract data from input event
    pass

def format_output_for(alb_or_apigw, output):
    # return a json object
    pass

def my_business_logic(data):
    # implement business logic here
    pass

Now, I wouldn’t recommend this coding exercise unless you have a real-world use case where your function needs to handle both API Gateway and ALB requests. But keep this in mind when you implement your business logic so that switching in the future won’t be such a painful refactor.

For example, here’s how I would implement a simple Lambda function that returns Hello Alex! when I invoke the endpoint with a querystring such as ?name=Alex and returns Hello world! if no name is provided:

import json

def lambda_handler(event, context):
    name = get_name(event)  # extract inputs
    message = get_message(name)  # business logic
    return build_response(message)  # format output for ALB

def get_name(event):
    # fetch inputs from event (could be missing)
    return event['queryStringParameters'].get('name')

def get_message(name=None):
    # my business logic
    return "Hello %s!" % (name or 'world')

def build_response(message):
    # build response for ALB
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {
            "Content-Type": "application/json",
        },
        "body": json.dumps({"message": message})
    }

In this case, I’d only need to apply very minor changes to build_response if I wanted to integrate the same function with API Gateway.
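
For reference, here's what that variant could look like: API Gateway's Lambda proxy integration expects a very similar object, just without the statusDescription field (a sketch based on the function above):

import json

def build_response(message):
    # build the response for API Gateway (Lambda proxy integration)
    return {
        "statusCode": 200,
        "isBase64Encoded": False,
        "headers": {
            "Content-Type": "application/json",
        },
        "body": json.dumps({"message": message})
    }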

Now, let’s have a look at how we’d build our CloudFormation template. AWS SAM does not support ALB natively yet, so we’ll need to define a few raw CloudFormation resources:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
  Subnets:
    Type: List<AWS::EC2::Subnet::Id>
  VpcId:
    Type: AWS::EC2::VPC::Id
Resources:

  myFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      # ...
      # all the other properties here
      # ...

  myLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Scheme: internet-facing
      Subnets: !Ref Subnets
      SecurityGroups: [!Ref mySecurityGroup]

  myTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    DependsOn: myLambdaPermission
    Properties:
      TargetType: lambda
      Targets:
        - Id: !GetAtt myFunction.Arn

  myHttpListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref myLoadBalancer
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - TargetGroupArn: !Ref myTargetGroup
          Type: forward

  mySecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow http on port 80
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0

  myLambdaPermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt myFunction.Arn
      Action: lambda:InvokeFunction
      Principal: elasticloadbalancing.amazonaws.com

Outputs:
  DNSName:
    Value: !GetAtt myLoadBalancer.DNSName

The Application Load Balancer definition requires a list of EC2 subnets and a VPC. This is a good time to remind you that AWS ALB is not fully serverless, as it requires some infrastructure/networking to be managed and it’s priced by the hour. Also, it’s worth noting that we need to grant ALB permissions to invoke our function with a proper AWS::Lambda::Permission resource.

That said, let me share a few use cases where you may want to use AWS ALB to trigger your Lambda functions:

  1. You need a “hybrid” compute architecture including EC2, ECS, and Lambda under the same hostname — maybe to implement new features for a legacy system or to cost-optimize some infrequently used sub-systems
  2. Your APIs are under constant load and you are more comfortable with by-the-hour pricing (ALB) than a pay-per-request model (API Gateway) — this might be especially true if you don’t need many of the advanced features of API Gateway such as input validation, velocity templates, DDoS protection, canary deployments, etc.
  3. You need to implement some advanced routing logic — with ALB’s content-based routing rules you can route requests to different Lambda functions based on the request content (hostname, path, HTTP headers, HTTP method, query string, and source IP)
  4. You want to build a global multi-region and highly resilient application powered by AWS Global Accelerator — ALB can be configured as an accelerated endpoint using the AWS global network

Let me know if you can think of a different use case for ALB + Lambda.

You can read more about this topic on the official documentation.

Also, here you can find an ALB app on the Serverless Application Repository.

Conclusions

That’s all for part 3!

I sincerely hope you’ve enjoyed diving deep into AWS CodeDeploy, AWS CodePipeline, Amazon Pinpoint, and AWS Application Load Balancer.

Now you can customize your CI/CD pipelines, implement custom segments or channels for Amazon Pinpoint, and serve HTTP traffic through AWS ALB.

This is the last episode of this series and I’d recommend checking out the first two articles here and here if you haven’t read them yet, where I talked about integrating Lambda with Amazon Cognito User Pools, AWS Config, Amazon Kinesis Data Firehose, AWS CloudFormation, AWS IoT Button, Amazon Lex, Amazon CloudWatch Logs, and Amazon Aurora.

Thank you all for reading and sharing your feedback!

As usual, feel free to share and/or drop a comment below :)


Originally published on HackerNoon on Oct 30, 2019.

