
Mariano Ramborger

Serverless (AWS) - How to trigger a Lambda function from an S3 bucket, and use it to update a DynamoDB table.

What is a Lambda on AWS?

If you are a developer or have some programming background, you may be familiar with lambda functions.
They are a useful feature in many of the most popular modern languages, and they are basically functions that lack an identifier, which is why they are also called anonymous functions. That last name may not be appropriate for certain languages, where not all anonymous functions are necessarily lambda functions, but that's a discussion for another day.
AWS Lambda takes inspiration from this concept, but it's fundamentally different. For starters, AWS Lambda is a service that lets you run code without having to provision servers, or even EC2 instances. As such, it is one of the cornerstones of Amazon's serverless offering, alongside API Gateway, DynamoDB and S3, to name a few.
AWS Lambda functions are event-driven, and as such they can be triggered and executed by a wide variety of events. In this article, we will create a Lambda function and configure it to trigger whenever an object is put into an S3 bucket. Then we'll use it to update a DynamoDB table.


Getting Started

We'll start on the AWS console. For this little project, we'll need an S3 bucket, so let's go ahead and create it.
In case you haven't heard of it, S3 stands for Simple Storage Service, and it allows us to store data as objects inside of structures named "buckets".
So we'll search for S3 on the AWS console and create a bucket.
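The console is all we need here, but if you would rather script this step, a rough equivalent with the AWS SDK for JavaScript (v2, the same SDK used later in this post) looks something like this. The bucket name is just a placeholder, since bucket names must be globally unique:

const AWS = require("aws-sdk");
const s3 = new AWS.S3({ region: "us-east-1" });

// Create the bucket that will later trigger our Lambda.
s3.createBucket({ Bucket: "my-lambda-trigger-bucket-example" })
  .promise()
  .then((data) => console.log("Bucket created:", data.Location))
  .catch((err) => console.error("Could not create bucket:", err));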


Today we don't need anything fancy, so let's just give it a unique name.
We will also need a DynamoDB table. DynamoDB is Amazon's serverless NoSQL database solution, and we will use it to keep track of the files uploaded to our bucket.
We'll go into "Create Table", then give it a name and a partition key.


We could also give it a sort key, like the bucket where the file is stored. That way we could receive a list ordered by bucket when we do a query or scan. However, that is not our focus today, and since we are using just the one bucket, we'll leave it unchecked.
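Again, the console is enough, but for reference, creating the same table with the SDK would look roughly like this. I'm assuming the table is called "files" with a partition key of "file_name", which is what the Lambda code further down expects:

const AWS = require("aws-sdk");
const dynamodb = new AWS.DynamoDB({ region: "us-east-1" });

const params = {
    TableName: "files",
    AttributeDefinitions: [{ AttributeName: "file_name", AttributeType: "S" }],
    KeySchema: [{ AttributeName: "file_name", KeyType: "HASH" }],
    BillingMode: "PAY_PER_REQUEST" // on-demand capacity, nothing to provision
};

// Create the table that will keep track of our uploaded files.
dynamodb.createTable(params)
    .promise()
    .then((data) => console.log("Table status:", data.TableDescription.TableStatus))
    .catch((err) => console.error("Could not create table:", err));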

Next, we'll search for Lambda on the console.


Here we can choose to build a Lambda from scratch, or build it using one of the many great templates provided by Amazon. We can also source it from a container image or a repository. This time we'll start from scratch with the latest version of Node.js.
We will be greeted by a function similar to this one:

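The exact boilerplate may vary a little depending on the runtime version, but it will be roughly:

exports.handler = async (event) => {
    // Return a simple 200 "OK" response for any invocation.
    const response = {
        statusCode: 200,
        body: JSON.stringify("OK"),
    };
    return response;
};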

It's just a simple function that returns an "OK" whenever it's triggered. Those of you familiar with Node may be wondering about that "handler" on the first line.

Anatomy of a Lambda function.

The handler is sort of an entry point for your Lambda. When the function is invoked, the handler method will be executed. The handler function also receives two parameters: event and context.
Event is a JSON object which provides event-related information that your Lambda can use to perform its tasks. Context, on the other hand, contains methods and properties that give you more information about your function and its invocation.
If at any point you want to test what your Lambda function is doing, you can press the "Test" button and try its functionality with parameters of your choosing.
We know we want our Lambda to be triggered each time somebody uploads an object to a specific S3 bucket. We could configure the bucket, then upload a file to see what kind of data our function receives. Fortunately, AWS already has a test event that mimics an S3 PUT trigger.
We just need to select the dropdown on the "Test" button and pick the "Amazon S3 Put" template.
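An abridged version of that template looks like this; the real event has a few more fields, but these are the ones our function will care about (note that the object key arrives URL-encoded):

{
    "Records": [
        {
            "eventSource": "aws:s3",
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {
                    "name": "example-bucket",
                    "arn": "arn:aws:s3:::example-bucket"
                },
                "object": {
                    "key": "test%2Fkey",
                    "size": 1024
                }
            }
        }
    ]
}

With that shape in mind, here is the code we'll put in our handler: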

var AWS = require("aws-sdk");

exports.handler = async (event) => {

        //Get event info. S3 object keys arrive URL-encoded, so decode them.
        let record = event.Records[0].s3
        let obj = record.object
        let params = {
            TableName: "files",
            Item : {
                    file_name : decodeURIComponent(obj.key.replace(/\+/g, " ")),
                    bucket_name: record.bucket.name,
                    bucket_arn: record.bucket.arn
            }
        }
        //Define our target DB.
        let newDoc = new AWS.DynamoDB.DocumentClient({ region: "us-east-1" })
        //Send the request and return the response.
        try {
             let result = await newDoc.put(params).promise()
             //put() resolves with an empty object unless ReturnValues is set,
             //so we return a 200 ourselves on success.
             let res = {
                 statusCode : 200,
                 body : JSON.stringify(result)
             }
             return res
        }
        catch (error) {
             let res = {
                 statusCode : error.statusCode || 500,
                 body : JSON.stringify(error)
             }
             return res
        }
}

This simple code captures the event info and writes it into our DynamoDB table.
However, if you go ahead and execute it, you will probably receive an error stating that your Lambda function doesn't have the required permissions to run this operation.
So we'll just head to IAM. Hopefully we'll be security-conscious and create a role with the minimum needed permissions for our Lambda.

Just kidding, for this tutorial full DynamoDB access will do!
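For reference, a scoped-down alternative to the managed full-access policy only needs dynamodb:PutItem on our table. Something along these lines, with the account ID being a placeholder:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "dynamodb:PutItem",
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/files"
        }
    ]
}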


Now everything should work. Let's add the S3 trigger to our function (Add trigger, pick S3, select our bucket and the PUT event type), publish our Lambda, and upload a file to the bucket to test it out!
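If you prefer to do the test upload from code rather than the console, a quick sketch with the SDK (the bucket and file names are placeholders) would be:

const AWS = require("aws-sdk");
const fs = require("fs");
const s3 = new AWS.S3({ region: "us-east-1" });

// Uploading any object should fire the PUT trigger and invoke our Lambda.
s3.putObject({
    Bucket: "my-lambda-trigger-bucket-example",
    Key: "hello.txt",
    Body: fs.readFileSync("./hello.txt")
})
    .promise()
    .then(() => console.log("Uploaded. Check the DynamoDB table for a new item."))
    .catch((err) => console.error("Upload failed:", err));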


Everything is working as it should!
Now, what we did was pretty basic, but this basic workflow (probably backed by API Gateway or orchestrated with Step Functions) can form the basis of a pretty complex serverless app.

Thanks for reading!
