How I Built a Serverless AI Image Analyzer in 40 Minutes (Free Tier)
Stop memorizing. Start building. Here is how I connected S3, Lambda, and Rekognition to build a real-world project.
The Problem with "Tutorial Hell"
I’ve spent weeks reading about S3 Buckets, IAM Roles, and Lambda triggers. But reading isn't knowing. Building is knowing.
Tonight, I challenged myself: Can I build a fully serverless AI image analyzer in under 60 minutes? It took me around 40 minutes.
Here is the step-by-step guide on how you can build this too during your lunch break.
The Architecture
We are building an event-driven workflow. No servers to manage.
Trigger: You upload an image to an S3 Bucket.
Compute: This event triggers an AWS Lambda function (Python).
Intelligence: The code sends the image to Amazon Rekognition (AI).
Output: The AI tells us what is in the image via CloudWatch Logs.
Step 1: The Input (S3)
First, we need a place to drop our files.
Go to S3 in the AWS Console.
Click Create bucket.
Name it something unique (e.g., image-analysis-lab-yourname).
Leave all settings as default (Block Public Access should be ON).
Click Create bucket.
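Prefer code over clicking? Here is a minimal boto3 sketch of the same step; the bucket name is a placeholder, and outside us-east-1 you would also need to pass a LocationConstraint.

import boto3

s3 = boto3.client("s3")

# Create the bucket (in us-east-1 no LocationConstraint is needed; other regions require one)
s3.create_bucket(Bucket="image-analysis-lab-yourname")

# New buckets block public access by default; this just makes that explicit
s3.put_public_access_block(
    Bucket="image-analysis-lab-yourname",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)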
Step 2: The Security (IAM)
This is where many beginners get stuck. Our Lambda function needs an ID badge (Role) to access the photos and talk to the AI.
1. Go to IAM -> Roles -> Create role.
2. Select AWS Service and choose Lambda.
3. Add these three permissions:
AmazonS3ReadOnlyAccess (To see the file)
AmazonRekognitionReadOnlyAccess (To ask the AI)
AWSLambdaBasicExecutionRole (To write logs)
4. Name the role "Lambda-AI-Role" and create it.
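If you like to script things, the same role setup looks roughly like this in boto3. The trust policy mirrors what the console creates for you, and the three ARNs are the AWS-managed policies listed above.

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the Lambda service assume this role (the console sets this up for you)
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="Lambda-AI-Role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the three AWS-managed policies from the list above
for arn in [
    "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    "arn:aws:iam::aws:policy/AmazonRekognitionReadOnlyAccess",
    "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
]:
    iam.attach_role_policy(RoleName="Lambda-AI-Role", PolicyArn=arn)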
Step 3: The Logic (Lambda)
Now, the brain of the operation.
1. Go to Lambda -> Create function.
2. Select Python 3.12 as the runtime.
3. Under Permissions, choose "Use an existing role" and select your "Lambda-AI-Role."
4. Create the function.
5. Once created, scroll up to the Function Overview and click + Add trigger.
6. Select S3, choose your bucket, and save. Now your code wakes up whenever a file lands in the bucket.
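Before we get to the code, this is roughly the shape of the S3 event your handler receives, trimmed down to the fields we actually read (the real payload carries more metadata):

# Rough shape of the event passed to the handler when a file lands in the bucket
event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "image-analysis-lab-yourname"},
                # Note: the object key arrives URL-encoded,
                # so "Test Image 1.jpg" shows up as "Test+Image+1.jpg"
                "object": {"key": "Test+Image+1.jpg"},
            }
        }
    ]
}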
Step 4: The Code (And the Bug I Found)
This was my "Aha!" moment.
My first attempt failed with an "InvalidS3ObjectException."
I uploaded a file named "Test Image 1.jpg". The issue: the S3 event notification URL-encodes the object key, so spaces become "+" (the key arrived as "Test+Image+1.jpg").
My code was asking Rekognition for a file with "+" in the name, which doesn't exist in the bucket.
The Fix: I had to import "urllib.parse" to decode the filename back to normal text.
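A quick sanity check of that decode in a Python shell:

import urllib.parse

# "+" (and %XX escapes) in the event key are turned back into the real object name
print(urllib.parse.unquote_plus("Test+Image+1.jpg"))  # -> Test Image 1.jpg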
Here is the full, working code:
import json
import boto3
import urllib.parse

def lambda_handler(event, context):
    s3_client = boto3.client('s3')
    rekognition_client = boto3.client('rekognition')

    # Get the bucket name and filename from the event
    bucket_name = event['Records'][0]['s3']['bucket']['name']

    # DECODE THE FILENAME (The crucial fix!)
    raw_file_name = event['Records'][0]['s3']['object']['key']
    file_name = urllib.parse.unquote_plus(raw_file_name)

    print(f"Analyzing: {file_name} from {bucket_name}")

    try:
        # Call Amazon Rekognition
        response = rekognition_client.detect_labels(
            Image={'S3Object': {'Bucket': bucket_name, 'Name': file_name}},
            MaxLabels=10,
            MinConfidence=75
        )

        # Print results to logs
        print("--- RESULTS ---")
        label_names = []
        for label in response['Labels']:
            print(f"Found: {label['Name']} ({label['Confidence']:.2f}%)")
            label_names.append(label['Name'])

        return {
            'statusCode': 200,
            'body': json.dumps(f"Found: {', '.join(label_names)}")
        }

    except Exception as e:
        print(f"ERROR: {str(e)}")
        raise e
Step 5: The Result
I uploaded an image of my desk setup. Checking the CloudWatch Logs, I got this:
Found: Computer (100.00%)
Found: Laptop (100.00%)
Found: Hardware (90.87%)
It works. Zero servers provisioned. Total cost: $0.00 (Free Tier).
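Want to re-run the pipeline from code instead of the console? A small boto3 sketch (file and bucket names are placeholders; the labels will again appear in CloudWatch Logs):

import boto3

s3 = boto3.client("s3")

# Uploading any image re-triggers the Lambda; check CloudWatch Logs for the detected labels
s3.upload_file("desk-setup.jpg", "image-analysis-lab-yourname", "desk-setup.jpg")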
Conclusion
Don't just collect certifications. Collect projects. The "magic" of AI is accessible to anyone willing to connect the dots and debug the errors.
Go build this!
Top comments (2)
I've been meaning to get into working with AWS for ages, but it's always seemed like a steep learning curve.
Your explanation really helped break it down in a way that feels approachable though, so thanks for sharing your knowledge in a way that's easy to follow.
Your post feels like a great starting point for me, Ali.
This first comment made my day, Aryan!
The AWS learning curve can look like a "wall", but once you start breaking it into small bricks (like Lambda and S3), it becomes a staircase.
If you decide to give this project a try and get stuck anywhere, feel free to drop a question here. Building is the best way to learn.
Good luck getting started!