awedis for AWS Community Builders


Trigger Lambda Function When New Image is Uploaded to S3 - (Let's Build 🏗️ Series)

Let's build a simple architecture where a Lambda function is triggered whenever a user uploads a new image to an S3 bucket.

📋 Note: This article is preparation for a more advanced one that will be published in the same series.

The main parts of this article:
1- Architecture Overview (Terraform)
2- About AWS Services (Info)
3- Technical Part (Code)
4- Result
5- Conclusion

Architecture Overview



variable "region" {
  default     = "eu-west-1"
  description = "AWS Region to deploy to"
}

terraform {
  required_version = "1.5.1"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.22.0"
    }
  }
}

provider "aws" {
  region = var.region
}

# S3 bucket that will receive the uploaded images
resource "aws_s3_bucket" "image_bucket" {
  bucket = "lets-build-1"
}

# IAM role that the Lambda function assumes at runtime
resource "aws_iam_role" "lambda_execution_role" {
  name = "lets-build-lambda-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = "sts:AssumeRole",
        Effect = "Allow",
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

# Basic execution permissions so the function can write logs to CloudWatch
resource "aws_iam_policy_attachment" "lambda_basic_execution" {
  name       = "lets-build-lambda-attachment"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
  roles      = [aws_iam_role.lambda_execution_role.name]
}

# The Lambda function itself, packaged locally as main.zip
resource "aws_lambda_function" "image_analysis_lambda" {
  filename         = "./main.zip"
  function_name    = "lets-build-function"
  handler          = "main"
  runtime          = "go1.x"
  role             = aws_iam_role.lambda_execution_role.arn
  memory_size      = 128
  timeout          = 3
  source_code_hash = filebase64sha256("./main.zip")
  environment {
    variables = {
      REGION = var.region
    }
  }
}

# Allow the S3 bucket to invoke the Lambda function
resource "aws_lambda_permission" "allow_bucket" {
  statement_id  = "AllowExecutionFromS3Bucket"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.image_analysis_lambda.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.image_bucket.arn
}

# Send ObjectCreated events for images/*.png objects to the function
resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.image_bucket.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.image_analysis_lambda.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "images/"
    filter_suffix       = ".png"
  }

  depends_on = [aws_lambda_permission.allow_bucket]
}



Hint: always run terraform fmt and terraform validate to clean up the syntax and check that everything is valid. After that you can run terraform plan to see the resources that will be created, and finally terraform apply to create them.
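For reference, the full workflow looks like this (terraform init only needs to run once to download the AWS provider):


terraform init
terraform fmt
terraform validate
terraform plan
terraform apply
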

About AWS Services

1- Amazon S3: To store the image and emit an event when a new object is uploaded.
2- AWS Lambda: To run our code whenever the S3 event triggers the function.
3- Identity and Access Management (IAM): To give Lambda its execution permissions and allow S3 to invoke the function.

Technical Part

Our fancy Go code 😜 does some simple logging.



package main

import (
    "context"
    "fmt"
    "github.com/aws/aws-lambda-go/lambda"
)

// handler just logs a fixed message; the event payload is ignored for now.
func handler(ctx context.Context) (string, error) {
    fmt.Println("Lambda function executed.")
    return "Lambda function executed successfully.", nil
}

func main() {
    lambda.Start(handler)
}


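If you later want to know which object actually triggered the invocation, the handler can also accept the S3 event payload. Here is a minimal sketch of that variant, assuming the events package that ships with aws-lambda-go:


package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

// handler logs the bucket and key of every object in the S3 notification.
func handler(ctx context.Context, s3Event events.S3Event) error {
    for _, record := range s3Event.Records {
        fmt.Printf("New object uploaded: s3://%s/%s\n",
            record.S3.Bucket.Name, record.S3.Object.Key)
    }
    return nil
}

func main() {
    lambda.Start(handler)
}


The build and packaging steps below stay exactly the same.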

Since we are not using any frameworks, we need to build the binary and create our own .zip file in order to upload the code to our Lambda function. You can already see in the Terraform configuration that I'm pointing to a main.zip file, which holds our built code.

Build your Go binary for the appropriate target platform:



GOARCH=amd64 GOOS=linux go build -o main



Create a ZIP file containing your main binary and any other dependencies:



zip main.zip main





Result

That's it. Let's try uploading a new image to our S3 bucket, and we can see the logs being printed out when we check Amazon CloudWatch.
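If you prefer the CLI over the console, an upload that matches the notification filters looks something like this (the file name is just an example):


aws s3 cp ./cat.png s3://lets-build-1/images/cat.png
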

Here is a simple screenshot from Amazon CloudWatch after I uploaded an image to my S3 bucket. Also remember that we are filtering on a prefix, which in our case is images/, and the file type should be .png.


Sweet, now it's your turn to get creative and build on top of this. A small reminder: the techniques and skills used in this article will also be used in my next one, which will cover more advanced services and skills.

Conclusion

I wanted to write this as a separate article since there are many use cases where this simple architecture can come in really handy. Hope you guys enjoyed it.

If you'd like more content like this, feel free to connect with me on LinkedIn.

Top comments (3)

Darryl Ruggles

A great example - thanks!

Reymond

Do we need to do anything else, or maybe upload the generated .zip file to AWS? Sorry for the noob questions.

awedis

Yes, for sure. If you look inside the Terraform configuration, you can notice that I'm pointing to my .zip file, so it gets uploaded when you run terraform apply.