Image Processing Serverless Project using AWS Lambda (with Terraform)

In this tutorial, I'll show you how to build a production-ready serverless image processing pipeline that automatically creates multiple image variants when you upload a photo to S3.

What we'll build:

  • Automatic image processing triggered by S3 uploads
  • 5 different image variants (compressed, low-quality, WebP, PNG, thumbnail)
  • Email notifications via SNS
  • Complete Infrastructure as Code using Terraform
  • Cross-platform Lambda Layer build with Docker

Tech Stack:

  • AWS Lambda (Python 3.12)
  • AWS S3 (storage)
  • AWS SNS (notifications)
  • Terraform (infrastructure)
  • Docker (Lambda layer build)
  • Pillow (image processing)

Architecture Overview

The flow is simple:

  1. User uploads an image to the Source S3 Bucket
  2. S3 event triggers the Lambda Function
  3. Lambda (with Pillow layer) processes the image into 5 variants
  4. Processed images are saved to the Destination S3 Bucket
  5. SNS sends an email notification with processing details
  6. CloudWatch logs everything for monitoring

Why This Architecture?

Serverless Benefits
No Server Management

  • No EC2 instances to maintain
  • No patching or updates
  • Automatic scaling

Cost-Effective

  • Pay only for execution time
  • ~$0.14/month for 1,000 images (back-of-envelope estimate below)
  • Free tier covers most small projects
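
A quick back-of-envelope check on that number (a sketch with assumed values: ~3 s average processing time at 1,024 MB, us-east-1 on-demand Lambda pricing, free tier ignored):

# Rough monthly Lambda cost for 1,000 images (assumed numbers, not a quote)
invocations = 1_000
duration_s  = 3               # assumed average per image
memory_gb   = 1024 / 1024     # the 1,024 MB we configure later

gb_seconds = invocations * duration_s * memory_gb
compute    = gb_seconds * 0.0000166667        # $/GB-second, on-demand
requests   = invocations * 0.20 / 1_000_000   # $/request

print(f"~${compute + requests:.2f}/month for Lambda")  # ~$0.05; S3 and SNS make up the rest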

Event-Driven

  • Automatic processing on upload
  • No polling or cron jobs needed
  • Real-time processing

Prerequisites

Before we start, make sure you have:

  1. AWS Account with the CLI configured

aws configure

  2. Terraform (v1.0+)

terraform --version

  3. Docker Desktop (running)

docker info

  4. Basic knowledge of:
  • AWS services (S3, Lambda, SNS)
  • Terraform basics
  • Python

Project Structure

Day-18/
├── Assets/
│   ├── architecture-diagram.jpg
│   └── ... (screenshots)
├── lambda/
│   ├── lambda_function.py
│   └── requirements.txt
├── scripts/
│   ├── build_layer_docker.sh
│   ├── deploy.sh
│   └── destroy.sh
├── terraform/
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
│   ├── provider.tf
│   └── terraform.tfvars.example
└── Readme.md

Step 1: The Lambda Function
Let's start with the core: the Lambda function that processes images.

The Image Processor
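
Before the code, here's the shape of the s3:ObjectCreated event the handler reads, trimmed to the only fields it actually uses (bucket and key values are placeholders):

# Abridged S3 event as Lambda delivers it to the handler
sample_event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "my-upload-bucket"},
                "object": {"key": "photos/test-image.jpg"}
            }
        }
    ]
}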

import json
import boto3
import os
from PIL import Image
from io import BytesIO
import uuid
from urllib.parse import unquote_plus

s3_client = boto3.client('s3')
sns_client = boto3.client('sns')

def lambda_handler(event, context):
    """Process uploaded images into multiple variants"""

    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # S3 event keys are URL-encoded (e.g. spaces become '+'), so decode first
        key = unquote_plus(record['s3']['object']['key'])

        # Download image
        response = s3_client.get_object(Bucket=bucket, Key=key)
        image_data = response['Body'].read()

        # Process image
        processed_images = process_image(image_data, key)

        # Upload variants
        processed_bucket = os.environ['PROCESSED_BUCKET']
        for img in processed_images:
            s3_client.put_object(
                Bucket=processed_bucket,
                Key=img['key'],
                Body=img['data'],
                ContentType=img['content_type']
            )

        # Send notification
        send_notification(key, processed_images, processed_bucket)

    return {'statusCode': 200}
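
The handler calls send_notification, which isn't shown above. Here's a minimal sketch of what it can look like in the same file, using the SNS_TOPIC_ARN environment variable that Terraform wires up below (the message format is illustrative):

def send_notification(original_key, processed_images, processed_bucket):
    """Publish a processing summary to SNS; no-op if no topic is configured."""
    topic_arn = os.environ.get('SNS_TOPIC_ARN', '')
    if not topic_arn:
        return

    variant_list = '\n'.join(f"  - {img['key']}" for img in processed_images)
    sns_client.publish(
        TopicArn=topic_arn,
        Subject=f"Image processed: {original_key}"[:100],  # SNS subject limit
        Message=(
            f"Processed {original_key} into {len(processed_images)} variants "
            f"in s3://{processed_bucket}/:\n{variant_list}"
        )
    )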

Creating Image Variants

def process_image(image_data, original_key):
    """Create 5 variants of the image"""
    processed_images = []
    image = Image.open(BytesIO(image_data))

    # Convert RGBA to RGB for JPEG compatibility
    if image.mode in ('RGBA', 'LA', 'P'):
        background = Image.new('RGB', image.size, (255, 255, 255))
        if image.mode == 'P':
            image = image.convert('RGBA')
        background.paste(image, mask=image.split()[-1])
        image = background

    # Auto-resize large images
    if image.size[0] > 4096 or image.size[1] > 4096:
        ratio = min(4096 / image.size[0], 4096 / image.size[1])
        new_size = (int(image.size[0] * ratio), int(image.size[1] * ratio))
        image = image.resize(new_size, Image.Resampling.LANCZOS)

    base_name = os.path.splitext(original_key)[0]
    unique_id = str(uuid.uuid4())[:8]

    # Create variants
    variants = [
        {'format': 'JPEG', 'quality': 85, 'suffix': 'compressed'},
        {'format': 'JPEG', 'quality': 60, 'suffix': 'low'},
        {'format': 'WEBP', 'quality': 85, 'suffix': 'webp'},
        {'format': 'PNG', 'quality': None, 'suffix': 'png'}
    ]

    for variant in variants:
        output = BytesIO()
        if variant['quality']:
            image.save(output, format=variant['format'], 
                      quality=variant['quality'], optimize=True)
        else:
            image.save(output, format=variant['format'], optimize=True)

        output.seek(0)
        extension = variant['format'].lower()
        if extension == 'jpeg':
            extension = 'jpg'

        processed_images.append({
            'key': f"{base_name}_{variant['suffix']}_{unique_id}.{extension}",
            'data': output.getvalue(),
            'content_type': f"image/{variant['format'].lower()}"
        })

    # Create thumbnail
    thumbnail = image.copy()
    thumbnail.thumbnail((300, 300), Image.Resampling.LANCZOS)
    thumb_output = BytesIO()
    thumbnail.save(thumb_output, format='JPEG', quality=80, optimize=True)
    thumb_output.seek(0)

    processed_images.append({
        'key': f"{base_name}_thumbnail_{unique_id}.jpg",
        'data': thumb_output.getvalue(),
        'content_type': 'image/jpeg'
    })

    return processed_images
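
Before deploying, it's worth a local smoke test of process_image (a sketch: it assumes Pillow is installed locally, a test-image.jpg sits next to the script, and lambda_function.py is importable; the boto3 clients at the top of the module need a configured AWS CLI to initialize):

from pathlib import Path

from lambda_function import process_image

data = Path('test-image.jpg').read_bytes()
for img in process_image(data, 'test-image.jpg'):
    print(f"{img['key']}  {len(img['data']) / 1024:.1f} KB  {img['content_type']}")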

Step 2: Building the Lambda Layer

The Docker Challenge
Problem: AWS Lambda runs on Linux, but you might be developing on Windows or Mac. The Pillow library has C dependencies that must be compiled for the target OS.

Solution: Use Docker to create a Linux environment and build the layer there.

The Build Script

#!/bin/bash
set -e

echo "🚀 Building Lambda Layer with Pillow using Docker..."

SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_DIR="$( cd "$SCRIPT_DIR/.." && pwd )"
TERRAFORM_DIR="$PROJECT_DIR/terraform"

# Check Docker is running
if ! docker info > /dev/null 2>&1; then
    echo "❌ Docker is not running. Please start Docker first."
    exit 1
fi

# Get Windows-compatible path
if command -v cygpath &> /dev/null; then
    DOCKER_MOUNT_PATH=$(cygpath -w "$TERRAFORM_DIR")
elif [[ -n "$WINDIR" ]]; then
    DOCKER_MOUNT_PATH=$(cd "$TERRAFORM_DIR" && pwd -W 2>/dev/null || pwd)
else
    DOCKER_MOUNT_PATH="$TERRAFORM_DIR"
fi

# Build layer in Linux container
docker run --rm \
  --platform linux/amd64 \
  -v "$DOCKER_MOUNT_PATH":/output \
  python:3.12-slim \
  bash -c "
    pip install --quiet Pillow==10.4.0 -t /tmp/python/lib/python3.12/site-packages/ && \
    cd /tmp && \
    apt-get update -qq && apt-get install -y -qq zip > /dev/null 2>&1 && \
    zip -q -r pillow_layer.zip python/ && \
    cp pillow_layer.zip /output/ && \
    echo '✅ Layer built successfully!'
  "

echo "📍 Location: $TERRAFORM_DIR/pillow_layer.zip"
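
Before handing the zip to Terraform, a quick sanity check of the artifact helps (a sketch, run from the project root; the path assumes the structure above):

import zipfile

with zipfile.ZipFile('terraform/pillow_layer.zip') as z:
    names = z.namelist()
    # Lambda only finds layer packages that unpack under python/
    print('layout ok:', all(n.startswith('python/') for n in names))
    total = sum(i.file_size for i in z.infolist()) / 1e6
    print(f'uncompressed size: {total:.1f} MB')  # layers are capped at 250 MB unzipped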

Step 3: Terraform Infrastructure

Main Infrastructure (main.tf)

# S3 Upload Bucket
resource "aws_s3_bucket" "upload_bucket" {
  bucket        = local.upload_bucket_name
  force_destroy = true  # Allows easy cleanup
}

resource "aws_s3_bucket_versioning" "upload_bucket" {
  bucket = aws_s3_bucket.upload_bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "upload_bucket" {
  bucket = aws_s3_bucket.upload_bucket.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

# S3 Processed Bucket
resource "aws_s3_bucket" "processed_bucket" {
  bucket        = local.processed_bucket_name
  force_destroy = true
}

# Package the Lambda source (referenced by the function below but not shown
# in the original excerpt; the source path is assumed from the project structure)
data "archive_file" "lambda_zip" {
  type        = "zip"
  source_file = "${path.module}/../lambda/lambda_function.py"
  output_path = "${path.module}/lambda_function.zip"
}

# Lambda Function
resource "aws_lambda_function" "image_processor" {
  filename         = data.archive_file.lambda_zip.output_path
  function_name    = local.lambda_function_name
  role             = aws_iam_role.lambda_role.arn
  handler          = "lambda_function.lambda_handler"
  runtime          = "python3.12"
  timeout          = var.lambda_timeout
  memory_size      = var.lambda_memory_size
  layers           = [aws_lambda_layer_version.pillow_layer.arn]

  environment {
    variables = {
      PROCESSED_BUCKET = aws_s3_bucket.processed_bucket.id
      SNS_TOPIC_ARN    = var.notification_email != "" ? aws_sns_topic.processing_notifications[0].arn : ""
    }
  }
}

# Lambda Layer
resource "aws_lambda_layer_version" "pillow_layer" {
  filename            = "${path.module}/pillow_layer.zip"
  layer_name          = "${var.project_name}-pillow-layer"
  compatible_runtimes = ["python3.12"]
  description         = "Pillow library for image processing"
}

# S3 Event Trigger
resource "aws_s3_bucket_notification" "upload_bucket_notification" {
  bucket = aws_s3_bucket.upload_bucket.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.image_processor.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.allow_s3]
}

# Allow S3 to invoke the Lambda (the permission referenced by depends_on above)
resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.image_processor.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.upload_bucket.arn
}

# SNS Topic
resource "aws_sns_topic" "processing_notifications" {
  count        = var.notification_email != "" ? 1 : 0
  name         = "${var.project_name}-${var.environment}-notifications"
  display_name = "Image Processing Notifications"
}

resource "aws_sns_topic_subscription" "email_subscription" {
  count     = var.notification_email != "" ? 1 : 0
  topic_arn = aws_sns_topic.processing_notifications[0].arn
  protocol  = "email"
  endpoint  = var.notification_email
}

IAM Permissions

resource "aws_iam_role" "lambda_role" {
  name = "${local.lambda_function_name}-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy" "lambda_policy" {
  name = "${local.lambda_function_name}-policy"
  role = aws_iam_role.lambda_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ]
        Resource = "arn:aws:logs:${var.aws_region}:*:*"
      },
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:GetObjectVersion"
        ]
        Resource = "${aws_s3_bucket.upload_bucket.arn}/*"
      },
      {
        Effect = "Allow"
        Action = [
          "s3:PutObject",
          "s3:PutObjectAcl"
        ]
        Resource = "${aws_s3_bucket.processed_bucket.arn}/*"
      },
      {
        Effect = "Allow"
        Action = ["sns:Publish"]
        Resource = var.notification_email != "" ? aws_sns_topic.processing_notifications[0].arn : "*"
      }
    ]
  })
}

Step 4: Deployment

Configuration
Create terraform.tfvars:

aws_region         = "us-east-1"
environment        = "dev"
project_name       = "serverless-image-processor"
lambda_timeout     = 60
lambda_memory_size = 1024
notification_email = "your-email@example.com"

Deploy with Scripts

# 1. Build Lambda Layer
cd scripts
./build_layer_docker.sh

# 2. Deploy Infrastructure
./deploy.sh

Manual Deployment

# 1. Build layer
cd scripts
./build_layer_docker.sh

# 2. Initialize Terraform
cd ../terraform
terraform init

# 3. Plan
terraform plan

# 4. Apply
terraform apply

Testing the Pipeline

1. Confirm SNS Subscription
Check your email for the AWS SNS confirmation and click "Confirm subscription".

2. Upload a Test Image

# Get bucket name
terraform output upload_bucket_name

# Upload image
aws s3 cp test-image.jpg s3://YOUR-UPLOAD-BUCKET/
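
If you prefer Python to the CLI, the same upload via boto3 (a sketch; swap in the bucket name from the terraform output above):

import boto3

s3 = boto3.client('s3')
# YOUR-UPLOAD-BUCKET comes from `terraform output upload_bucket_name`
s3.upload_file('test-image.jpg', 'YOUR-UPLOAD-BUCKET', 'test-image.jpg')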

3. Check Processed Images

# List processed variants
aws s3 ls s3://YOUR-PROCESSED-BUCKET/ --recursive

Expected output:

test-image_compressed_a1b2c3d4.jpg
test-image_low_a1b2c3d4.jpg
test-image_webp_a1b2c3d4.webp
test-image_png_a1b2c3d4.png
test-image_thumbnail_a1b2c3d4.jpg

Lessons Learned

1. Docker is Essential for Lambda Layers
Initially, I tried installing Pillow directly on Windows. The layer worked locally but failed on Lambda with:

Unable to import module 'lambda_function': No module named '_imaging'

Solution: Always use Docker to build layers for Lambda, regardless of your development OS.

2. Force Destroy is Your Friend (in Dev)
Without force_destroy = true on S3 buckets, terraform destroy fails if buckets contain objects.

resource "aws_s3_bucket" "upload_bucket" {
  bucket        = local.upload_bucket_name
  force_destroy = true  # Enables easy cleanup
}

Warning: Never use this in production!

3. Image Format Conversion is Tricky
JPEG doesn't support transparency. Converting RGBA images directly to JPEG results in black backgrounds.

Solution: Create a white background and paste the image with alpha channel as mask:

if image.mode in ('RGBA', 'LA', 'P'):
    if image.mode == 'P':
        image = image.convert('RGBA')  # palette images need an alpha channel first
    background = Image.new('RGB', image.size, (255, 255, 255))
    background.paste(image, mask=image.split()[-1])
    image = background

4. SNS Requires Email Confirmation
SNS subscriptions aren't active until the user confirms via email. Make sure to mention this in documentation!
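
If notifications aren't arriving, you can check the subscription state with boto3 (a sketch; an unconfirmed subscription reports 'PendingConfirmation' instead of an ARN):

import boto3

sns = boto3.client('sns')
resp = sns.list_subscriptions_by_topic(TopicArn='YOUR-TOPIC-ARN')
for sub in resp['Subscriptions']:
    print(sub['Endpoint'], '->', sub['SubscriptionArn'])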

5. Unique Filenames Prevent Conflicts
Using UUIDs in filenames prevents overwriting when processing multiple images with the same name:

unique_id = str(uuid.uuid4())[:8]
output_key = f"{base_name}_{variant['suffix']}_{unique_id}.{extension}"

Conclusion

We've built a production-ready serverless image processing pipeline that:

  • Automatically processes images on upload
  • Creates 5 optimized variants
  • Sends email notifications
  • Costs less than $0.15/month for 1,000 images
  • Scales automatically
  • Requires zero server management

Resources:

GitHub Repository
AWS Lambda
Pillow Docs
Terraform AWS Provider

Connect With Me

If you enjoyed this post or want to follow my #30DaysOfAWSTerraformChallenge journey, feel free to connect with me here:

💼 LinkedIn: Amit Kushwaha

🐙 GitHub: Amit Kushwaha

📝 Hashnode: Amit Kushwaha

🐦 Twitter/X: Amit Kushwaha

Found this helpful? Drop a ❤️ and follow for more AWS and Terraform tutorials!

Questions? Drop them in the comments below! 👇
