DEV Community

Bryson Tyrrell for AWS Community Builders


Dynamically Configure Your Lambda@Edge Functions

Lambda@Edge is the service I find something new to gripe about every time I come back to it. Almost every time, the core of my issue is the inability to dynamically configure the running code.

This is a situation where I find the AWS blog doesn't help. Go look and you will find all sorts of examples where the configuration and/or secrets are hard-coded into the functions.

I don't consider this to be best practice at all. My serverless app should be redeployable without branching to commit different resource ARNs, or customizing the build process to inject values at deployment. My template - the definition of my app in CloudFormation - needs to be able to determine all of this using the standard tooling AWS provides.


There are two sets of event types for Lambda@Edge functions with CloudFront, four in total: viewer-request, origin-request, origin-response, and viewer-response (listed in the order they occur during the client request lifecycle). As the names imply, viewer- events occur on the client's side of the CloudFront distribution, while origin- events occur on the origin's side.

For Lambda@Edge, the trigger defines where our limitations are going to be. origin- events allow the most freedom: we can set function memory as high as we want, the timeout can be a full 30 seconds (the same as an API Gateway event source), and the function code can be up to 50 MB.

Switching to viewer- events severely restricts our resources. Function memory can only be the default 128 MB, the timeout cannot exceed 5 seconds, and the function code cannot be more than 1 MB.

Beyond this, both event types allow us network access. As long as our functions work within the bounds set on them, we are able to leverage AWS (or other) services in our code.

For any other serverless app we would rely on environment variables to configure our code at runtime: ARNs of required resources, paths or locations for reading in needed secrets, all the usual suspects. With Lambda@Edge we are not allowed to use environment variables. At all. Period.

Origin-Request Example

For origin-request event functions there is a trick available to us: origin custom headers. These are configured on the origin of our CloudFront distribution, and we can set their values dynamically in our CloudFormation template. Essentially, the custom headers become our missing environment variables.

Here's an example of one I implemented. The use case: my distribution serves content from a variety of S3 buckets across numerous AWS regions. I have an API, deployed in all of those regions, that takes a file identifier and returns the key name and bucket domain of where it is located. I need the Lambda@Edge function to take the file from the request, look it up in the API, and then set the origin the request should be served from. My implementation is a variation on this AWS blog post.

In this example, my API uses a Cognito User Pool for authentication. I created an app client specifically for the CloudFront distribution's use, and it can obtain access tokens using the client credentials flow.

I'm going to skip most of the template as the important piece is how we configure the default origin of the CloudFront distribution.

  - DomainName: !GetAtt DefaultBucket.DomainName # See comment below
    Id: default-origin
    OriginCustomHeaders:
      - HeaderName: api-domain
        HeaderValue: !Ref ApiDomainName # Template parameter
      - HeaderName: cognito-domain
        HeaderValue: !Ref UserPoolDomain # Template parameter
      - HeaderName: client-id
        HeaderValue: !Ref DistributionClientId
      - HeaderName: client-secret
        HeaderValue: !Ref DistributionClientSecret
    S3OriginConfig:
      OriginAccessIdentity: !Sub origin-access-identity/cloudfront/${CloudFrontOAI} # Template parameter

In this template I do have a DefaultBucket S3 bucket resource to go with this. You don't have to do that, but the DomainName attribute must be set. In any event, this origin never actually gets used, as our function routes to another origin or returns 404 responses.
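To make the routing concrete, here's a sketch of how an origin-request handler can repoint the request at a different S3 bucket. The helper name is mine, and the bucket domain and key are assumed to come from the lookup API's response; the dictionary shape follows CloudFront's origin-request event structure.

```python
def route_to_bucket(request, bucket_domain, key):
    """Repoint a CloudFront origin-request at a different S3 bucket.

    bucket_domain and key are assumed to come from the lookup API.
    """
    request["origin"] = {
        "s3": {
            "domainName": bucket_domain,
            "region": "",
            "authMethod": "none",
            "path": "",
            "customHeaders": {},
        }
    }
    # CloudFront requires the host header to match the new origin's domain.
    request["headers"]["host"] = [{"key": "Host", "value": bucket_domain}]
    request["uri"] = "/" + key.lstrip("/")
    return request
```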

This isn't without drawbacks. We are exposing the client credentials to CloudFront (though they are also visible in the Cognito console). The client that made the original request will never see these headers, because they're injected by CloudFront after it receives the request. Our Lambda@Edge function processes the event before the origin does, and we have the option of removing these headers as we modify the event and pass it back to CloudFront (thus preventing exposure to the origin).
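Reading a custom header inside the function, and stripping it so it never reaches the origin, can be sketched like this (the helper name is mine; the customHeaders shape matches CloudFront's origin-request event):

```python
def pop_custom_header(request, name):
    """Read an origin custom header from a CloudFront origin-request
    event, removing it so it is never forwarded to the origin."""
    origin = request["origin"]
    # The origin config is keyed "s3" or "custom" depending on origin type.
    config = origin.get("s3") or origin.get("custom")
    values = config.get("customHeaders", {}).pop(name, [])
    return values[0]["value"] if values else None


# Example usage inside a handler:
#   api_domain = pop_custom_header(request, "api-domain")
#   client_secret = pop_custom_header(request, "client-secret")
```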

Consider an adaptation of this pattern where you pass SSM or Secrets Manager paths that the code can read at runtime and cache for successive invocations.
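A minimal sketch of that adaptation, assuming the custom header carries an SSM parameter path (the helper and its cache are my own; `get_parameter` is the standard SSM API):

```python
_cache = {}


def get_config(param_path, fetch=None):
    """Resolve an SSM parameter path once per execution environment.

    The module-level cache persists across warm invocations, so the
    SSM call is only paid on cold starts.
    """
    if param_path not in _cache:
        if fetch is None:
            import boto3  # Imported lazily; available in the Lambda runtime.

            ssm = boto3.client("ssm")
            fetch = lambda p: ssm.get_parameter(Name=p, WithDecryption=True)[
                "Parameter"
            ]["Value"]
        _cache[param_path] = fetch(param_path)
    return _cache[param_path]
```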

Viewer Request Example

The next example I'm going to show really applies to the remaining three event types (custom headers apply only to origin-request events), but mine is specifically about implementing HTTP basic auth for resources behind my CloudFront distribution.

Here, my plan was to have the function look the user up in a DynamoDB table, perform the comparison, and allow or deny access. My challenge: with no environment variables, how does the function learn which table to query?

I'm not entirely sure how I was struck by this inspiration, but it occurred to me one day that the Lambda@Edge function does indeed already know exactly what it needs to talk to. That information is in the IAM role I assigned to it.

This solution can be distilled down very simply to granting the function the ability to read its own IAM role and then extract the DynamoDB ARN to configure the boto3 client. It feels wrong, but it works, and it works within the 128 MB memory and 5 second timeout limits imposed by this event type (including cold start invocations).

Let's look at the function's definition in my CloudFormation/SAM template first.

  DistributionAuthorizer:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.8
      Handler: index.lambda_handler
      CodeUri: ./src/distribution/authorizer
      Role: !GetAtt DistributionAuthorizerRole.Arn
      MemorySize: 128 # Max for viewer-request
      Timeout: 5 # Max for viewer-request
      AutoPublishAlias: live

  DistributionAuthorizerRole:
    Type: AWS::IAM::Role
    Properties:
      Path: "/"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
                - edgelambda.amazonaws.com # Required for Lambda@Edge
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: iam:GetRole
                Resource: !Sub arn:aws:iam::${AWS::AccountId}:role/${AWS::StackName}-DistributionAuthorizer-*
              - Effect: Allow
                Action: dynamodb:GetItem
                Resource: !GetAtt ServiceTable.Arn

All very standard for a Lambda@Edge function, but I have added a new permission for this: iam:GetRole. Because we have a chicken-and-egg scenario unfolding (I am granting a permission to read the IAM role that the permission is defined within), I'm taking advantage of how CloudFormation names resources ( {stack name}-{logical ID}-{random suffix} ) to scope the resource down as best I can without having to name it (it is best practice not to name your resources and to let CloudFormation do it for you).

To make this happen in code, I use STS GetCallerIdentity to find out the name of the IAM role behind the credentials my function was provided, then IAM GetRolePolicy to read the role's inline policy, and after parsing it I can find the ARN of the DynamoDB table and configure my client.

Here's the code.

import boto3

session = boto3.Session()
iam_client = session.client("iam")
sts_client = session.client("sts")

# The caller identity ARN looks like:
# arn:aws:sts::123456789012:assumed-role/<role-name>/<function-name>
ROLE_NAME = sts_client.get_caller_identity()["Arn"].split("/")[-2]
ROLE_POLICY = iam_client.get_role_policy(RoleName=ROLE_NAME, PolicyName="root")[
    "PolicyDocument"
]

# Scan the policy statements for the DynamoDB table ARN.
for statement in ROLE_POLICY["Statement"]:
    resources = statement["Resource"]
    if isinstance(resources, str):
        resources = [resources]
    for arn in resources:
        if arn.startswith("arn:aws:dynamodb:"):
            # arn:aws:dynamodb:<region>:<account>:table/<name>
            arn_parts = arn.split("/")
            TABLE_NAME = arn_parts[-1]
            TABLE_REGION = arn_parts[0].split(":")[3]

dynamodb_table = session.resource("dynamodb", region_name=TABLE_REGION).Table(
    TABLE_NAME
)


def lambda_handler(event, context):
    result = dynamodb_table.get_item(Key={"pk": "U#username", "sk": "A"})
    # Basic auth code will go here. L@E function always returns a 401 response for now.

    return {
        "status": "401",
        "statusDescription": "Unauthorized",
        "body": "Unauthorized",
        "headers": {
            "www-authenticate": [{"key": "WWW-Authenticate", "value": "Basic"}]
        },
    }

This is an incomplete HTTP basic authorizer, as it doesn't yet parse the Authorization header to get the username for the lookup, but it demonstrates all the working pieces: configuring boto3 by reading the IAM role policy and then performing a lookup against the table.
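For completeness, the missing parsing step could look like this (a hedged sketch of my own; the header shape follows CloudFront's viewer-request event format):

```python
import base64


def parse_basic_auth(headers):
    """Return (username, password) from a CloudFront request's headers,
    or None if there is no valid Basic Authorization header."""
    values = headers.get("authorization", [])
    if not values or not values[0]["value"].startswith("Basic "):
        return None
    try:
        decoded = base64.b64decode(values[0]["value"][6:]).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        # Malformed base64 or non-UTF-8 payload.
        return None
    username, _, password = decoded.partition(":")
    return username, password
```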

The runtime results, despite the number of AWS API calls being made (across three services), are encouraging:

# Cold Start
Duration: 171.38 ms Billed Duration: 172 ms Memory Size: 128 MB Max Memory Used: 80 MB  Init Duration: 519.87 ms

# Warm Executions
Duration: 24.46 ms  Billed Duration: 25 ms  Memory Size: 128 MB Max Memory Used: 80 MB
Duration: 43.25 ms  Billed Duration: 44 ms  Memory Size: 128 MB Max Memory Used: 80 MB
Duration: 25.76 ms  Billed Duration: 26 ms  Memory Size: 128 MB Max Memory Used: 80 MB
Duration: 30.47 ms  Billed Duration: 31 ms  Memory Size: 128 MB Max Memory Used: 80 MB

Well within the limits. It works.

It's not really good, though, is it?

We Shouldn't Have To Do This

All of the above might sound very clever, and if it does, that should tell us it shouldn't have to be this way. My issue with Lambda@Edge is that it isn't developer friendly. Look at the mental hoops I've jumped through to configure my Lambda@Edge functions the way I would a normal Lambda function. There's a massive ripple effect that comes from not having environmental configuration built in.

I've reached out on this topic before, and I've never received a really good answer on how to do it. If anyone from AWS's CloudFront, Lambda@Edge, CloudFormation, and/or SAM teams is stumbling across this, I ask you to reflect on what many other customers must also contend with for something that should be simple, straightforward, and easy for developers to do.

Top comments (5)

Rehan van der Merwe

Interesting solution. I also "inject" env variables for my Lambda@Edge with CloudFront headers.

One thing you need to watch out for is the rate limit on iam:GetRole. AWS will throttle that call if it is made too often, and the limit is quite low.

We actually create a CloudFront distribution per client, as they need a custom domain anyway. So maybe the better way is to create a distribution per client and then inject the table name in the headers. Then Lambda@Edge just uses that?

Bryson Tyrrell

@rehanvdm I would also say to stick with injecting values via custom headers when possible, but that only applies to Lambda@Edge functions attached to the origin-request event; you can't do that if you're in a scenario where you need viewer-request.

Rolling a distro per client is something I think breaks down at large scale. The default quota is 200 per account (that can of course be raised; I'm curious how high it really goes). When you have thousands of customers and you're issuing a unique domain for each of them, you're either possibly hitting an upper limit AWS allows, or juggling a multi-account strategy for your service.

The rate limit on iam and sts calls is very real. Keeping the calls in the function's init is about all that can be done to limit them to once per execution environment, but you're right that it's probably going to get hit on a traffic spike. I couldn't find any published docs on what that rate limit is, though.

Rehan van der Merwe

Agreed. So it only breaks down for B2C companies, or if you didn't plan to go multi-account from the beginning. That's the only way to avoid AWS-imposed limits. If I recall correctly, I read somewhere that a company could easily get a raise to 1,000 CloudFront distributions per account, but please don't hold me to that.

jamesshapiro profile image
James Shapiro

Great post. For the viewer-request example, I had to change the following lines to get this working for me:

Before: Action: iam:GetRole
After: Action: iam:GetRole*

Before: Resource: !Sub arn:aws:iam::${AWS::AccountId}:role/${AWS::StackName}-DistributionAuthorizer-*
After: Resource: !Sub arn:aws:iam::${AWS::AccountId}:role/${AWS::StackName}-DistributionAuthorizerRole-*

kmihaltsov

Awesome tricks!
The Lambda@Edge architects must have been drunk when they decided to ignore the env variables concept completely.