itsmenilik

Build a Serverless Event-Driven Application - Pixelator3000

Introduction:

Starting a new AWS project is always invigorating. Lately, I've been diving into another cloud venture, driven by my keen interest in exploring the intricacies of event-driven architecture. I'm eager to share my experiences, particularly in configuring the Lambda role for specific event handling.

WHAT DOES THE ARCHITECTURE LOOK LIKE 🤔

[Architecture diagram]

I was experimenting with two AWS features: S3 event notifications and Lambda. You're going to see how I used both of those technologies to implement a simple event-driven serverless image-processing workflow.

HOW DOES EVERY COMPONENT FLOW?!

  1. We're going to configure S3 event notifications on an S3 bucket, which we'll call the source bucket.

  2. Now, when any image gets uploaded to this S3 bucket, an event is going to be generated.

  3. That event will be passed through to a Lambda function, which will be invoked (a sample event is shown after this list).

  4. That Lambda function will gather all of the data contained within that event object.

  5. It will load the original image and process it, in this case generating five different pixelated versions of that image and putting them into a processed bucket.
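
For context, here's an abbreviated sketch of the event record S3 sends for an object upload, trimmed to the fields this project actually reads (the bucket and object names are placeholders):

{
    "Records": [
        {
            "eventSource": "aws:s3",
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": { "name": "REPLACEME-source" },
                "object": { "key": "my-image.jpg" }
            }
        }
    ]
}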

Where Did You Learn This?


I am a big proponent of continuous learning and continuous development. So I did what anyone with internet access would do: I searched YouTube for videos that could teach me the processes related to event-driven architecture. Luckily, I came across LearnCantrill's videos and went through one of his Mini Projects.

Here is a link to his channel: https://www.youtube.com/@LearnCantrill

I was able to find him through one of my favorite AWS gurus, Be A Better Dev.

Here is a link to his channel: https://www.youtube.com/@BeABetterDev

LET'S GET STARTED 🤩

Stage 1: Create the S3 Buckets

[Screenshot: creating the source bucket]

[Screenshot: creating the processed bucket]

My journey began with the creation of the S3 buckets. This was familiar territory for me, as I had worked with S3 before.

I set up the source bucket to store images that would trigger events, forming the basis of my event-driven architecture.

I then set up the processed bucket, which stores the images once the Lambda function has finished pixelating them.
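
If you'd rather script this stage than click through the console, a minimal boto3 sketch looks like this (bucket names are the same placeholders used throughout the post, and I'm assuming the us-east-1 region, where create_bucket needs no location configuration):

import boto3

s3 = boto3.client('s3')

# Create both buckets; outside us-east-1 you must also pass
# CreateBucketConfiguration={'LocationConstraint': '<region>'}
s3.create_bucket(Bucket='REPLACEME-source')
s3.create_bucket(Bucket='REPLACEME-processed')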

Stage 2: Create the Lambda Role

Creating the Lambda Role was the first step that presented a unique challenge.

To build a serverless architecture, it's essential to grant the Lambda functions the necessary permissions to interact with AWS services.

In my case, I needed to configure the Lambda Role to allow access to S3 buckets and, more importantly, define the specific events that would trigger the Lambda function.

Here, I had to dive deep into AWS Identity and Access Management (IAM) policies and roles. I learned that IAM roles are crucial for security and least-privilege access.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::REPLACEME-source",
                "arn:aws:s3:::REPLACEME-source/*",
                "arn:aws:s3:::REPLACEME-processed",
                "arn:aws:s3:::REPLACEME-processed/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:us-east-1:YOURACCOUNTID:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:YOURACCOUNTID:log-group:/aws/lambda/pixelator:*"
            ]
        }
    ]
}

I crafted a policy that grants the Lambda function the permissions it needs: access to the source bucket to load the original image, access to CloudWatch Logs to record any status events, and access to the processed bucket to store the output.
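
One detail worth calling out: the permissions policy above is only half of the role. The role also needs a trust policy that lets the Lambda service assume it; the console creates this for you when you pick Lambda as the trusted entity. It looks like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}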


Stage 3: Create the Lambda Function

import os
import uuid
import boto3

from urllib.parse import unquote_plus
from PIL import Image

# bucket name for pixelated images
processed_bucket = os.environ['processed_bucket']

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    print(event)

    # get bucket and object key from the event object
    # (object keys in S3 events are URL-encoded, so decode them)
    source_bucket = event['Records'][0]['s3']['bucket']['name']
    key = unquote_plus(event['Records'][0]['s3']['object']['key'])

    # generate a temp name, and set the location for our original image
    object_key = str(uuid.uuid4()) + '-' + key
    img_download_path = '/tmp/{}'.format(object_key)

    # download the source image from S3 to a temp location within the execution environment
    with open(img_download_path, 'wb') as img_file:
        s3_client.download_fileobj(source_bucket, key, img_file)

    # biggify the pixels and store temp pixelated versions
    pixelate((8, 8), img_download_path, '/tmp/pixelated-8x8-{}'.format(object_key))
    pixelate((16, 16), img_download_path, '/tmp/pixelated-16x16-{}'.format(object_key))
    pixelate((32, 32), img_download_path, '/tmp/pixelated-32x32-{}'.format(object_key))
    pixelate((48, 48), img_download_path, '/tmp/pixelated-48x48-{}'.format(object_key))
    pixelate((64, 64), img_download_path, '/tmp/pixelated-64x64-{}'.format(object_key))

    # upload the pixelated versions to the destination bucket
    s3_client.upload_file('/tmp/pixelated-8x8-{}'.format(object_key), processed_bucket, 'pixelated-8x8-{}'.format(key))
    s3_client.upload_file('/tmp/pixelated-16x16-{}'.format(object_key), processed_bucket, 'pixelated-16x16-{}'.format(key))
    s3_client.upload_file('/tmp/pixelated-32x32-{}'.format(object_key), processed_bucket, 'pixelated-32x32-{}'.format(key))
    s3_client.upload_file('/tmp/pixelated-48x48-{}'.format(object_key), processed_bucket, 'pixelated-48x48-{}'.format(key))
    s3_client.upload_file('/tmp/pixelated-64x64-{}'.format(object_key), processed_bucket, 'pixelated-64x64-{}'.format(key))

def pixelate(pixelsize, image_path, pixelated_img_path):
    # shrink the image to a tiny size, then scale it back up to the original
    # size with nearest-neighbour sampling to get chunky pixels
    # (on Pillow >= 10 these constants live under Image.Resampling)
    img = Image.open(image_path)
    temp_img = img.resize(pixelsize, Image.BILINEAR)
    new_img = temp_img.resize(img.size, Image.NEAREST)
    new_img.save(pixelated_img_path)


Above is the Lambda function code, written to process the events generated by S3 bucket changes.

  1. First, it reads an environment variable from the Lambda runtime environment called processed_bucket. We have to provide it with the bucket in which to store the processed images.

  2. Then it uses the boto3 library to connect to S3. Remember, this is an event-driven workflow: when an object is uploaded to the source bucket, an event is generated, and that event is passed into the Lambda function when the function is invoked.

  3. Next, the function creates a temporary name for the object (a random string), sets a download path inside the /tmp folder of the Lambda runtime environment, and uses boto3 to download the original object.

  4. After that, it runs the pixelate function five times, generating five versions of the image with different pixel sizes. Essentially, all pixelate does is shrink the image and then scale it back up to its original size, which causes it to become pixelated.

  5. Finally, it uses boto3 again to upload each of those five new pixelated objects to the processed bucket.

Note: This Python code uses libraries that aren't included in the default Lambda runtime, so we have to zip the .py file together with the Pillow library as a deployment package (see the commands below).
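
For reference, here's roughly how I'd build that deployment zip locally. This assumes the handler file is named lambda_function.py; on a non-Linux machine you may need a Pillow wheel built for Lambda's Amazon Linux environment rather than your own platform:

mkdir package
pip install Pillow -t package/
cp lambda_function.py package/
cd package
zip -r ../pixelator.zip .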

Stage 4: Configure the Lambda Function & Trigger

When you're creating a Lambda function from the console UI, you can often enter the code directly, but because our function requires additional libraries, we need to upload the deployment zip instead.

Next, there are a few bits of configuration we need to change to make sure the function invokes without any issues.

Specifically, we have to set an environment variable so that the function knows where to place the processed images, i.e. the output of this function (a scripted equivalent is sketched below).

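If you prefer to script that configuration, boto3 exposes it directly. A minimal sketch, reusing the pixelator function name and placeholder bucket name from earlier in the post:

import boto3

lambda_client = boto3.client('lambda')

# point the function at the processed bucket via an environment variable
lambda_client.update_function_configuration(
    FunctionName='pixelator',
    Environment={'Variables': {'processed_bucket': 'REPLACEME-processed'}},
)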

Now that's the function configured. The next step is to configure the trigger: what causes this function to be invoked? We want it to be invoked based on S3 event notifications, so any time an object is uploaded to the source bucket, it invokes the Lambda function.


We now have an event notification configured on our S3 bucket.
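
Under the hood, the console is writing a notification configuration onto the source bucket (it also adds the resource-based permission that lets S3 invoke the function; if you script this, you'd call the Lambda add_permission API yourself first). A boto3 sketch, with the ARN and bucket name as placeholders:

import boto3

s3 = boto3.client('s3')

# invoke the pixelator function whenever any object is created in the source bucket
s3.put_bucket_notification_configuration(
    Bucket='REPLACEME-source',
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [
            {
                'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:YOURACCOUNTID:function:pixelator',
                'Events': ['s3:ObjectCreated:*'],
            }
        ]
    },
)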

Stage 5: Test and Monitor

After setting up the entire system, rigorous testing was essential. I uploaded files to the source bucket and closely monitored how the Lambda function responded to the events. It was fascinating to see how my architecture seamlessly processed events in real time, thanks to the power of serverless computing.

For monitoring, I relied on AWS CloudWatch to keep an eye on Lambda function executions, error rates, and the performance of my event-driven system.
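
A quick way to exercise the whole pipeline is to upload a test image and then list what landed in the processed bucket. A minimal sketch, with the file name and buckets as placeholders:

import time

import boto3

s3 = boto3.client('s3')

# upload a test image; this fires the event notification and invokes the function
s3.upload_file('test-image.jpg', 'REPLACEME-source', 'test-image.jpg')

# crude wait for the asynchronous invocation to finish
time.sleep(10)

# check for the five pixelated outputs
response = s3.list_objects_v2(Bucket='REPLACEME-processed')
for obj in response.get('Contents', []):
    print(obj['Key'])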


Stage 6: Cleanup

As responsible cloud users, it's crucial to clean up resources when they're no longer needed. After thoroughly testing and monitoring my architecture, I followed AWS best practices by deleting the S3 buckets, Lambda function, and associated resources.
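
If you scripted the build, you can script the teardown too. A sketch of the cleanup under the same placeholder names (note that S3 refuses to delete non-empty buckets, so they must be emptied first):

import boto3

s3 = boto3.resource('s3')

# empty and delete both buckets
for name in ['REPLACEME-source', 'REPLACEME-processed']:
    bucket = s3.Bucket(name)
    bucket.objects.all().delete()
    bucket.delete()

# remove the function itself (the IAM role and policy can be deleted from the console)
boto3.client('lambda').delete_function(FunctionName='pixelator')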

Final Thoughts:


Building a serverless event-driven architecture was a rewarding experience, and I gained valuable insights along the way. While I was familiar with creating S3 buckets, configuring the Lambda Role to handle specific events was a significant learning curve. AWS's robust IAM system and user-friendly interfaces made it possible for me to achieve this.

I hope my journey serves as a helpful guide for anyone looking to embark on a similar adventure in building serverless event-driven architectures.
