I’ve built and reviewed several serverless systems where AWS Lambda and Amazon S3 form the backbone: file ingestion pipelines, media processing platforms, and event-driven APIs. Over time, I noticed a recurring challenge: teams either treat Lambda + S3 as too simple ("just trigger a function on upload") or find it too abstract once the diagrams become overwhelming.
In this article, I’ll walk you through how I design a Lambda + S3 architecture that is rich in capability yet easy to understand, using real-world patterns and recent AWS capabilities. I’ll also show how I draw this architecture in Lucidchart, so I can explain it clearly to any audience.
What I’m trying to solve
When I design serverless systems, I usually want three things:
- Simple entry points for users and clients
- Asynchronous, resilient processing behind the scenes
- Strong cost and operational control as the system scales
Instead of thinking in terms of individual AWS services, I divide the architecture into layers. This mental model also maps very well to a Lucidchart diagram.
Edge / Client layer
This is where requests originate:
- Web or mobile clients
- CLI tools or third‑party webhooks
Most of the time, clients never talk to Lambda directly; I prefer to keep a clean boundary between clients and the compute layer.
API & ingress layer
Here I typically use:
- API Gateway for REST or HTTP APIs
- Lambda Function URLs for very focused, internal endpoints
- CloudFront + WAF when the API is public and needs protection
This layer is responsible only for authentication, validation, and routing—not heavy logic.
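To make that "no heavy logic" rule concrete, here is a minimal sketch of what a handler at this layer might look like, assuming an API Gateway HTTP API (payload v2) with a JWT authorizer in front of it. The handler name, the `filename` field, and the response shapes are illustrative assumptions, not a prescription:

```python
import json

def handler(event, context):
    """Ingress Lambda behind API Gateway: authenticate, validate,
    route -- nothing heavier happens at this layer."""
    # With a JWT authorizer configured, API Gateway has already verified
    # the token; the claims arrive on the request context.
    claims = (
        event.get("requestContext", {})
        .get("authorizer", {})
        .get("jwt", {})
        .get("claims", {})
    )
    if not claims.get("sub"):
        return {"statusCode": 401, "body": json.dumps({"error": "unauthenticated"})}

    body = json.loads(event.get("body") or "{}")
    if not body.get("filename"):
        return {"statusCode": 400, "body": json.dumps({"error": "filename is required"})}

    # Routing only: acknowledge and let the compute layer do the real work.
    return {"statusCode": 202, "body": json.dumps({"accepted": body["filename"]})}
```

Everything after the 202 happens asynchronously, which is exactly the boundary I want this layer to enforce.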
Compute layer (Lambda)
This is where Lambda shines.
I usually split responsibilities across multiple functions:
- Request Lambdas – fast, synchronous (auth, presigned URLs)
- Processor Lambdas – async workers (image resize, metadata extraction)
- Indexer Lambdas – background tasks (search indexing, embeddings)
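As an illustration of the Processor role, here is a sketch of a Lambda wired to S3 `ObjectCreated` notifications on the ingest bucket. The handler name and the metadata fields it collects are my own assumptions; the actual resize or extraction work is left as a comment:

```python
import urllib.parse

def processor_handler(event, context):
    """Processor Lambda: invoked asynchronously by S3 ObjectCreated
    notifications on the ingest bucket."""
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        # S3 URL-encodes object keys in event payloads
        # (e.g. spaces arrive as '+'), so decode before use.
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        size = s3["object"].get("size", 0)
        # Heavy work (image resize, metadata extraction) would run here,
        # writing derived artifacts to the processed bucket.
        results.append({"bucket": bucket, "key": key, "size": size})
    return results
```

Because these functions only ever see events, they can fail and retry independently of the request path.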
Storage & data plane (S3‑centric)
This is the heart of the system.
I usually work with multiple buckets, each with a clear purpose:
- ingest-bucket – raw uploads
- processed-bucket – derived artifacts
- archive-bucket – long‑term retention
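The archive bucket's retention policy can be expressed as an S3 lifecycle configuration. Here is a sketch of how I'd build one; the rule ID and the day counts are illustrative defaults I would tune per workload, not recommendations:

```python
def archive_lifecycle(days_to_deep_archive=90, days_to_expire=3650):
    """Lifecycle configuration for the archive bucket: transition objects
    to Glacier Deep Archive, then expire them. Day counts are illustrative."""
    return {
        "Rules": [
            {
                "ID": "archive-retention",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [
                    {"Days": days_to_deep_archive, "StorageClass": "DEEP_ARCHIVE"},
                ],
                "Expiration": {"Days": days_to_expire},
            }
        ]
    }

# Applying it requires AWS credentials, e.g.:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="archive-bucket", LifecycleConfiguration=archive_lifecycle())
```

Keeping the policy as code means the retention rules live in version control next to the rest of the infrastructure.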
Key S3 features I actively use:
- Direct uploads with presigned URLs (clients bypass Lambda)
- S3 Intelligent‑Tiering to control costs automatically
- S3 Object Lambda when I want on‑the‑fly transformations
- S3 Batch Operations + Lambda for large‑scale reprocessing jobs
Below is the infra diagram for the above structure:
Final thoughts
Lambda and S3 are often introduced as simple services, but together they can power extremely sophisticated systems. The key is to:
- Keep synchronous paths small
- Push everything else to events
- Let S3 do what it does best: scale and store cheaply
