
Ed Miller for AWS Community Builders


Bearcam Companion: My First Lambda

I have been making progress on the Bearcam Companion web application. I have implemented most of the main React frontend components with the associated Amplify backends. However, some of the functionality which I had implemented in the UI should really be automated. This calls for one of the staples of serverless, AWS Lambda.

AWS Lambda

What is AWS Lambda? Here's what the AWS Lambda page says:

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use.

Creating a Lambda with Amplify CLI

The first thing I wanted to automate was running the object detection machine learning models on every new image. In a previous post, I described how I accomplished this using Amazon Rekognition from the UI. In my most recent post, I described how I upload images to S3 and update the Images table. Now I want to use the Images table update to trigger a Lambda to run Rekognition on the image and save the object detection results to the Objects table.

I created the Lambda using the Amplify CLI to add a function:

amplify add function

There are numerous options for setting up the Lambda, so read the documentation carefully. For my needs, here are some key settings:

  • Function Name: bcOnImagesFindObjects
  • Runtime: NodeJS
  • Function template: CRUD function for Amazon DynamoDB, since I will be reading from the Images table and saving the Rekognition results in the Objects table
  • Resource access: GraphQL endpoints for Images and Objects
  • Trigger: DynamoDB Lambda Trigger for Images

Developing the Lambda

After creation, the function template appears in your project under:

amplify/backend/function/<function-name>/src/index.js

The template provides a basic structure to build from. The trigger data arrives as a stream of records (multiple events can be batched into one invocation for efficiency). The first thing I did was parse the event records. I only care about INSERT events; from those I pull out the S3 information for the image. Here's my parseRecords() function:

function parseRecords (records) {
  var inserts = [];
  records.forEach(record => {
    if (record.eventName === "INSERT") {
      // get image info
      const imageS3obj = record.dynamodb.NewImage.file.M;
      const insert = {
        imageID: record.dynamodb.NewImage.id.S,
        Bucket: imageS3obj.bucket.S,
        Region: imageS3obj.region.S,
        Key: "public/" + imageS3obj.key.S
      };
      inserts.push(insert); // without this push, the function returns an empty array
    }
  });
  return (inserts);
}
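For reference, here is roughly what a trimmed INSERT record looks like in the DynamoDB attribute-value format the stream delivers (the id, bucket, and key values are invented for illustration):

```javascript
// A trimmed DynamoDB stream record in attribute-value format (values invented).
const record = {
  eventName: "INSERT",
  dynamodb: {
    NewImage: {
      id: { S: "abc-123" },
      file: {
        M: {
          bucket: { S: "bearcam-storage" },
          region: { S: "us-west-2" },
          key: { S: "images/snapshot-001.jpg" }
        }
      }
    }
  }
};

// Every value is wrapped in a type tag (S for string, M for map),
// which is why parseRecords() unwraps .S and .M at each step.
const imageS3obj = record.dynamodb.NewImage.file.M;
const insert = {
  imageID: record.dynamodb.NewImage.id.S,
  Bucket: imageS3obj.bucket.S,
  Region: imageS3obj.region.S,
  Key: "public/" + imageS3obj.key.S
};
```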

Next I loop through the images, calling processImage(), which sends each image to Rekognition for object detection via rekognition.detectLabels (the rekognition client is an AWS.Rekognition instance from the AWS SDK, and MinimumConfidence is a module-level threshold):

async function processImage(imageInfo) {
  const params = {
    Image: {
      S3Object: {
        Bucket: imageInfo.Bucket,
        Name: imageInfo.Key
      },
    },
    MinConfidence: MinimumConfidence
  }
  return await rekognition.detectLabels(params).promise();
}

For each result, I call parseDetections() to pull out the relevant bounding box information from the JSON response:

function parseDetections(detections) {
  var boxes = [];
  const labels = detections.Labels;
  labels.forEach(object => {
    object.Instances.forEach(instance => {
      var bb = instance.BoundingBox;
      const box = {
        Name: object.Name,
        Confidence: instance.Confidence,
        Width: bb.Width,
        Height: bb.Height,
        Left: bb.Left,
        Top: bb.Top
      }
      boxes.push(box);
    })
  })
  return (boxes);
}
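To make the unwrapping concrete, here is a pared-down detectLabels response of the shape parseDetections() walks (the label names and numbers are invented), along with the same flattening expressed compactly:

```javascript
// A pared-down Rekognition detectLabels response (values invented).
const detections = {
  Labels: [
    {
      Name: "Bear",
      Confidence: 99.2,
      Instances: [
        {
          Confidence: 98.7,
          // Bounding boxes are ratios of the image dimensions, 0..1.
          BoundingBox: { Width: 0.25, Height: 0.4, Left: 0.1, Top: 0.3 }
        }
      ]
    },
    // Scene-level labels have no Instances and contribute no boxes.
    { Name: "Outdoors", Confidence: 95.0, Instances: [] }
  ]
};

// The same flattening parseDetections() performs, as a flatMap:
const boxes = detections.Labels.flatMap(label =>
  label.Instances.map(({ Confidence, BoundingBox: bb }) => ({
    Name: label.Name,
    Confidence,
    Width: bb.Width,
    Height: bb.Height,
    Left: bb.Left,
    Top: bb.Top
  }))
);
```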

Finally, I save each box to the Objects table by using fetch() to POST the data to the appropriate GraphQL endpoint. The main handler looks like this:

exports.handler = async function(event, context, callback) {
  try { // Parse DynamoDB Images Records
    const inserts = parseRecords(event.Records);
    for (const insert of inserts) {
      // Call Rekognition on every new image
      const detections = await processImage(insert);
      const boxes = parseDetections(detections);
      for (const box of boxes) {
        // Save each bounding box to Objects
        const options = getFetchOptions(box, insert.imageID);
        const response = await fetch(GRAPHQL_ENDPOINT, options);
        const body = await response.json();
        if (body.errors) {
          console.log("GraphQL error", body);
        } else {
          console.log("GraphQL success");
        }
      }
    }
  } catch (err) {
    callback(err.message);
  }
  return { status: "complete" };
}
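The post doesn't show getFetchOptions(), so here is a minimal sketch of what such a helper might look like, assuming API-key auth on the AppSync endpoint and the Amplify-generated createObjects mutation. The mutation name, the imagesID foreign-key field, and the environment variable are my assumptions, not the author's actual code:

```javascript
// Hypothetical sketch of getFetchOptions(); mutation and field names assumed.
const CREATE_OBJECTS = /* GraphQL */ `
  mutation CreateObjects($input: CreateObjectsInput!) {
    createObjects(input: $input) { id }
  }
`;

function getFetchOptions(box, imageID, apiKey = process.env.API_KEY) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": apiKey // assuming API-key auth on the AppSync endpoint
    },
    body: JSON.stringify({
      query: CREATE_OBJECTS,
      variables: { input: { ...box, imagesID: imageID } } // imagesID: assumed FK name
    })
  };
}
```

Writing through AppSync this way (rather than straight to DynamoDB) lets the generated resolvers fill in the model's managed fields.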

Once complete, you can deploy the Lambda with amplify push. Of course it didn't work at first!

Testing the Lambda Locally

There are multiple ways to debug Lambdas. You can start testing locally with amplify mock function, which runs the Lambda on your machine and feeds it event data from a JSON file. I was able to capture a DynamoDB stream event from CloudWatch, which I used as my test JSON.

One of my main problems, and not for the first time, had to do with asynchronous functions. I still have some trouble with awaits and promises. I am mainly using await inside of async functions, but sometimes I find no data is coming back because I have somehow returned from the function before the data arrived.
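A classic version of that trap, as a sketch: Array.prototype.forEach does not wait for an async callback, so anything assembled inside it can be missing when the surrounding function returns, while a plain for...of loop awaits each step.

```javascript
// forEach fires the async callbacks and moves on; the snapshot taken at
// return time is still empty because no await has resolved yet.
async function withForEach(items) {
  const out = [];
  items.forEach(async (x) => {
    out.push(await Promise.resolve(x * 2)); // lands after we've already returned
  });
  return out.slice(); // snapshot at return time: []
}

// for...of awaits each iteration, so the data is there before returning.
async function withForOf(items) {
  const out = [];
  for (const x of items) {
    out.push(await Promise.resolve(x * 2));
  }
  return out.slice(); // snapshot at return time: [2, 4] for input [1, 2]
}
```

This is why the handler above uses for...of loops rather than forEach for the awaited Rekognition and fetch calls.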

Another problem I encountered was writing data directly to DynamoDB. This works, but it doesn't fill in the automatic fields that Amplify manages (such as createdAt and updatedAt). Instead, use the GraphQL endpoints to write through AppSync.

Testing the Lambda in the Console

One of the first problems I ran into when I did an amplify push to deploy the Lambda was a missing module. The following line was failing:

const fetch = require('node-fetch');

Not surprisingly, node-fetch is not part of the standard NodeJS runtime. Somehow I needed to include this package. I could either go to the src directory of the Lambda function and install the package there, or I could use a Lambda Layer. I chose the latter. More on that in a bit.

Once the Lambda is loading properly, you can test and modify code in the Lambda console:

Lambda Code Panel

You can test with pre-defined event JSON files, much as you can with amplify mock:

Lambda Test Panel

From this console, you can also access various monitor logs:

Lambda Monitor Panel

From the monitoring logs you can jump to the CloudWatch LogStream:

CloudWatch Log

Lambda Layers

Lambda Layers provide a means to share common libraries across multiple Lambdas. Here's a diagram from the Amplify docs on layers.

Lambda Layers diagram

With Amplify, you add a Lambda Layer much like you add a Lambda function (choose the Lambda layer option when prompted):

amplify add function

Once I have the layer, I can add packages with the appropriate package manager, in my case, npm for NodeJS:

npm i node-fetch

When I'm done setting up the Lambda Layer, I need to update the Lambda function to have it use the layer:

amplify update function

When I am done with everything, I can deploy the updated function and new layer with amplify push.

I still had an error related to JavaScript module formats: node-fetch 3.x is ESM-only and cannot be loaded with require(), so I had to downgrade to node-fetch 2.x. Once I did, I redeployed the Lambda Layer and updated the Lambda function to use the new version. I can see the trigger and layer information in the Lambda function overview:

Lambda Function Overview

Conclusion

In this post I described

  • Creating a Lambda function triggered by a change in a DynamoDB table
  • Testing the Lambda function locally and in the console
  • Implementing a Lambda Layer for common libraries

Overall, Amplify continues to impress by making it easy to deploy backend functionality. I was able to deploy a serverless function using a Lambda written in the same language as my frontend code. I still have some challenges with asynchronous functions, but that's more to do with my own inexperience with NodeJS/JavaScript.

Next time I will write about publishing my shiny new website. Follow along here and on Twitter (bluevalhalla).
