Lambda Function Got Ya's

Kameron Kales

I am working on a new product for a privacy focused startup. We protect customers from logging, tracking and all that bad stuff we've come to dislike.

We are starting with a chrome extension as an MVP.

(if you know of a chrome extension with >30,000 users that hasn't been updated in >18 months, email me. I will pay a referral fee.)

Inside our Chrome extension we're going to do a few different things, mainly protect your history. To do this, we need:

  • Account Creation
  • Account Log In
  • API to check against

I didn't want to manage all of the typical ECS/Docker/ELB/Route53 crap that comes along with getting a service out to customers these days. So I decided to write these services in a serverless style.

This post details some common mistakes I made, and how I have fixed them.

One thing to note: I generally do not like adding packages that introduce another "dialect" to get the job done. When I have done this, it has often abstracted away what is actually happening and made it a struggle to debug.

I want to stay in plain JavaScript, Python, or Go as much as possible. The bugs are more predictable and the docs are clearer. That is not to say my way is the best; just know that this blog will only focus on my personal approach.

Last administrative thing: I hate blogs that don't show the FULL code, so this one will. It doesn't help when half of what you need to reproduce is missing.

First thing to realize: when you create a Lambda function, you upload a zip file with a flat file structure.

If you're using Node, that means you must zip up your node_modules. If you're using Python, you do the same with the packages from your requirements.txt: install them into your package directory (for example with pip install -r requirements.txt -t .) and zip them alongside your code.

There are a few easy ways to do this:

In node => I created a file called build_upload.sh. That file houses the following code:

zip -r ./dev/lambda_function.zip ./node_modules/* ./main.js -x "./dev" && aws s3 cp ./dev/lambda_function.zip s3://$1-$2/$2/v$3/lambda_function.zip

This bash file takes 3 variables:

  • $1 = stage (for example, dev)
  • $2 = integration name (for example, stripe-integration)
  • $3 = version number (for example, 1.0.0)

I chose to set these variables up like this. The s3://$1-$2 portion combines to form my bucket name. If you named your bucket differently, please alter the example!

A working example with the variables filled in would be:

bash build_upload.sh dev stripe-integration 1.0.0

That fills in the code from above and produces:

zip -r ./dev/lambda_function.zip ./node_modules/* ./main.js -x "./dev" && aws s3 cp ./dev/lambda_function.zip s3://dev-stripe-integration/stripe-integration/v1.0.0/lambda_function.zip

The package then shows up versioned in your S3 bucket, which lets you roll back at any point. There are countless other setups you could get rolling, but this was the easiest way for me to get a working build-and-deploy sequence going.

In Python:

zip -r9 ../lambda.zip * -x "bin/*" requirements.txt setup.cfg

I'd suggest using Terraform to manage all of this. The best way to do this is to upload the code to an s3 bucket => grab the most recent version via Terraform => deploy onto Lambda.

I provided the Node example above showing how to do that. The Python one only builds the updated zip locally.
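
If you want the Python package to follow the same versioned S3 flow as the Node script, here is a minimal boto3 sketch (the bucket/key layout just mirrors my naming from above and the script name is made up, so adjust it to yours):

# upload_lambda.py -- rough equivalent of build_upload.sh for the Python package.
# The bucket/key layout (stage-name/name/vVERSION/lambda_function.zip) is an assumption.
import sys
import boto3

def upload(stage, name, version, zip_path="../lambda.zip"):
    s3 = boto3.client("s3")
    bucket = f"{stage}-{name}"                      # e.g. dev-stripe-integration
    key = f"{name}/v{version}/lambda_function.zip"  # e.g. stripe-integration/v1.0.0/...
    s3.upload_file(zip_path, bucket, key)
    print(f"uploaded {zip_path} to s3://{bucket}/{key}")

if __name__ == "__main__":
    # usage: python upload_lambda.py dev stripe-integration 1.0.0
    upload(*sys.argv[1:4])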

Here is the block for the Terraform resource:

resource "aws_lambda_function" "check_file_lambda" {
    filename = "${var.function_path}"
    function_name = "${var.function_name}"
    role = "${aws_iam_role.check_file_lambda.arn}"
    handler = "${var.handler}"
    runtime = "${var.runtime}"
    timeout = "${var.timeout}"
    source_code_hash = "${base64sha256(file("${var.function_path}"))}"
}

In Terraform, all of the ${var.function_path}-style references are variables. You declare them in a .tf file (commonly variables.tf), and you can supply their values in a terraform.tfvars file or at the prompt. A declaration looks like this:

variable "function_path" {
    description: "The path to the lambda function package you're deploying"
}

You can additionally add a default field:

variable "function_path" {
    description: "The path to the lambda function package you're deploying"
    default: "../../lambda_function.zip"
}

Without the default field, the command line will prompt you for a value when you run terraform plan or terraform apply.

Moving on, the one field we want to pay special attention to is source_code_hash. This takes a hash of the zipped source code, so a new version gets uploaded to Lambda whenever the package changes and we run terraform apply.

You can also do this other ways, but since I use Terraform for everything (s3, functions, cronjobs, database, routing, etc) I deploy changes through it. #TeamNeverUseAWSUI

That magic little field will make it so I can zip up changes and re-deploy from anywhere.
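
For intuition, that hash is just the base64-encoded SHA-256 of the zip (newer Terraform exposes it directly as filebase64sha256). Here is a quick sketch you can run locally to check whether a rebuilt package will actually change the hash; the path is the same placeholder used in the variable default above:

# check_hash.py -- rough sketch; prints the base64-encoded SHA-256 of the package,
# which is what source_code_hash compares against.
import base64
import hashlib

def source_code_hash(zip_path="../../lambda_function.zip"):
    with open(zip_path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return base64.b64encode(digest).decode()

if __name__ == "__main__":
    print(source_code_hash())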

You might run into not being able to edit the code in the online editor (the console won't edit larger deployment packages inline). My Python function didn't let me, but my Node package did. For this, I don't currently have a great solution, but if anyone does, please let me know!

I like to write function-based code. This means my code very often looks like this:

def initial_function():
    # blah blah blah
    # blah blah blah
    blah = ...
    format_data(blah)

def format_data(blah):
    # blah2 blah2 blah2
    # blah2 blah2 blah2
    blah2 = ...
    insert_data(blah2)

def insert_data(blah2):
    # blah3 blah3 blah3
    # blah3 blah3 blah3
    pass

One mistake I made early was not quite understanding how to integrate this into the lambda function syntax of:

def handler(event, context):

To do this, I altered the above such that the handler function calls the initial_function.

def handler(event, context):
    initial_function()

def initial_function():
    # blah blah blah
    # blah blah blah
    blah = ...
    format_data(blah)

def format_data(blah):
    # blah2 blah2 blah2
    # blah2 blah2 blah2
    blah2 = ...
    insert_data(blah2)

def insert_data(blah2):
    # blah3 blah3 blah3
    # blah3 blah3 blah3
    return

You can pass params from the event if you need. This is a basic example showing how to make your local code work in Lambda.
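
For example, if the function sits behind an API Gateway proxy integration, the query string shows up on the event under queryStringParameters and you can hand it straight to initial_function. A small sketch (the user_id param is made up for illustration):

# Rough sketch: the event shape depends on the trigger.
# With an API Gateway proxy integration, query params arrive under "queryStringParameters".
def handler(event, context):
    params = (event or {}).get("queryStringParameters") or {}
    initial_function(params.get("user_id"))  # "user_id" is a made-up example param

def initial_function(user_id):
    # ... use user_id wherever the blahs above hard-code things ...
    pass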

I did the exact same thing with Node, shown below:

One key reminder: once your function has finished its work, remember to call the callback. The callback(null, response) call is shown below:

exports.handler = function (event, context, callback) {
    // .......... bunch of code here
    // .......... bunch of code here
    // .......... bunch of code here
    // .......... bunch of code here
    callback(null, response);
};

Once you have gotten this far, you should have a working Lambda function that you can test through the console. Just click the Test button and create a fake test event (if your function depends on params, make the event more realistic).
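
Another option is to smoke-test the handler locally before touching the console. A minimal sketch, reusing the made-up API Gateway-style event from above:

# Rough local smoke test; the fake event shape and "user_id" value are assumptions.
if __name__ == "__main__":
    fake_event = {"queryStringParameters": {"user_id": "123"}}
    print(handler(fake_event, None))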

You can also create a file called invoke_lambda.sh:

aws lambda invoke --region=us-east-1 --function-name=$1 $1.txt

This takes one variable (as shown by $1): the function name. It outputs the result of the function into a file named after the function, i.e. function_name.txt.

You can then open it or read it with cat from your terminal.

This makes it super easy, once again, to figure out what is going on without opening Amazon's impossible UI.
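
If you would rather stay in Python than shell out to the AWS CLI, boto3 can do the same invoke. A rough sketch (the function name and payload are placeholders):

# Rough boto3 alternative to invoke_lambda.sh; function name and payload are placeholders.
import json
import boto3

client = boto3.client("lambda", region_name="us-east-1")
resp = client.invoke(
    FunctionName="stripe-integration",
    Payload=json.dumps({"queryStringParameters": {"user_id": "123"}}).encode(),
)
print(resp["Payload"].read().decode())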

One issue I ran into with our scraping infrastructure was Lambda timing out. The timeout is initially set to 3 seconds. I raised it to 30 and was golden once again.

You may also run out of memory, which is a common issue. I didn't have that problem, but if you do, there is a slider in the UI or a single field in Terraform to update.
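
The timeout already appears in the Terraform resource above, and memory is the memory_size argument on the same resource. If you just want to poke at either from code while experimenting, boto3 can update them too. A rough sketch with placeholder values:

# Rough sketch for experimenting; in my setup these values normally live in Terraform.
import boto3

client = boto3.client("lambda", region_name="us-east-1")
client.update_function_configuration(
    FunctionName="stripe-integration",  # placeholder name
    Timeout=30,       # seconds
    MemorySize=256,   # MB
)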

The next logical step is to put a custom domain in front of your Lambda function, so you can do something like:

https://api.kameronkales.com/stripe

to make a GET request against the stripe-integration Lambda function.

I can update this later if people want to know about that. It is pretty simple:

  • Request a certificate from AWS Certificate Manager
  • Do the verification (DNS or email)
  • Set up a CNAME so that your desired route is available at your CloudFront distribution URL
  • Map the /route to that domain in the Custom Domain Names section of API Gateway
