When building event-driven systems in AWS, you’ll eventually hit a common problem — payload size limits.
Services like SNS, SQS, and EventBridge (each capped at 256 KB per message or event) can become bottlenecks or points of failure if you send large JSON or binary payloads.
Fortunately, established architectural patterns help you manage large payloads gracefully. Below are five key patterns — with AWS + CDK examples — along with references you can use to dig deeper.
Patterns Overview
Here are the key patterns for managing large payloads in event-driven architectures:
- Claim Check Pattern — Store the payload externally and send only a reference
- Payload Compression — Compress before sending
- Data Partitioning / Chunking — Split the payload into smaller messages
- Enrichment Pattern — Send IDs; consumers fetch data as needed
- Event Stream Filtering / Projection — Emit only the truly needed fields
1) Claim Check Pattern
Concept
Store the full payload outside the event (e.g. in Amazon S3) and publish a lightweight event containing only a “claim check” (e.g. object key, pre-signed URL, or pointer).
Process
- Producer persists the payload in S3.
- Producer publishes an event containing the S3 reference (or pointer).
- Consumer reads that reference, fetches the payload from S3, and continues processing.
Benefits
- Keeps event messages small
- Reduces the load on the event bus
- Decouples event routing from heavy data transfer
CDK + TypeScript Example
import * as cdk from "aws-cdk-lib";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as sns from "aws-cdk-lib/aws-sns";
import * as subs from "aws-cdk-lib/aws-sns-subscriptions";
import * as lambda from "aws-cdk-lib/aws-lambda";

export class ClaimCheckStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    const bucket = new s3.Bucket(this, "PayloadBucket");
    const topic = new sns.Topic(this, "ClaimCheckTopic");

    const consumer = new lambda.Function(this, "ConsumerHandler", {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("lambda"),
      environment: { BUCKET: bucket.bucketName },
    });

    topic.addSubscription(new subs.LambdaSubscription(consumer));
    bucket.grantRead(consumer);
  }
}
Event Format Example
{
  "type": "UserReportGenerated",
  "payloadLocation": "s3://my-bucket/reports/report-1234.json"
}
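For completeness, here is a minimal producer/consumer sketch that matches this event shape. It assumes the bucket and topic from the stack above are exposed via BUCKET and TOPIC_ARN environment variables (illustrative names) and uses the AWS SDK v3 clients:

import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const s3 = new S3Client({});
const sns = new SNSClient({});

// Producer: persist the payload in S3, then publish only the claim check
export const produce = async (report: unknown) => {
  const key = `reports/report-${Date.now()}.json`;
  await s3.send(new PutObjectCommand({
    Bucket: process.env.BUCKET!, // assumed env var
    Key: key,
    Body: JSON.stringify(report),
  }));
  await sns.send(new PublishCommand({
    TopicArn: process.env.TOPIC_ARN!, // assumed env var
    Message: JSON.stringify({
      type: "UserReportGenerated",
      payloadLocation: `s3://${process.env.BUCKET}/${key}`,
    }),
  }));
};

// Consumer: resolve the claim check and fetch the full payload from S3
export const handler = async (event: { Records: { Sns: { Message: string } }[] }) => {
  const { payloadLocation } = JSON.parse(event.Records[0].Sns.Message);
  const [bucket, ...keyParts] = payloadLocation.replace("s3://", "").split("/");
  const object = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: keyParts.join("/") }));
  const payload = JSON.parse(await object.Body!.transformToString());
  console.log("Processing payload:", payload);
};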
Real-World References
- How to publish large events with Amazon EventBridge (boyney.io)
- AWS Serverless Land - Claim Check Pattern
- EDA Visuals by David Boyne
2) Payload Compression
Concept
Compress the payload (e.g. with Gzip, Snappy, LZ4) before embedding it in the event; the consumer decompresses it on receipt.
Process
- Producer serializes and compresses the payload, encodes (e.g. Base64), then sends.
- Consumer decodes and decompresses before processing.
Benefits
- Reduces network usage and message size
- Keeps everything in a single event
- Trade-off: adds CPU overhead and latency on both producer and consumer
Example (Node.js / Lambda)
import * as zlib from "zlib";

export const handler = async (event: any) => {
  // On producer side:
  const compressed = zlib.gzipSync(JSON.stringify(event.data)).toString("base64");

  // On consumer side:
  const decompressed = JSON.parse(
    zlib.gunzipSync(Buffer.from(compressed, "base64")).toString()
  );

  console.log("Received data:", decompressed);
};
Use compression when your payload is moderately large (e.g. 100 KB–250 KB) and still under the service limits.
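In practice, the producer side might publish the compressed payload to SNS along the lines of this sketch (the TOPIC_ARN environment variable and the message attribute are illustrative assumptions):

import * as zlib from "zlib";
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({});

export const publishCompressed = async (data: unknown) => {
  // Gzip + Base64 keeps the message text-safe and, for compressible data, well under 256 KB
  const message = zlib.gzipSync(JSON.stringify(data)).toString("base64");
  await sns.send(new PublishCommand({
    TopicArn: process.env.TOPIC_ARN!, // assumed env var
    Message: message,
    MessageAttributes: {
      encoding: { DataType: "String", StringValue: "gzip+base64" },
    },
  }));
};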
3) Data Partitioning / Chunking
Concept
Divide a massive payload into manageable chunks, each sent as a separate event, with metadata to allow reassembly.
Process
- Producer splits the payload into N parts.
- Each event includes: correlationId, partNumber, totalParts, dataChunk.
- Consumer receives all parts (possibly out of order), then reconstructs the payload.
Benefits
- Supports parallel and incremental processing
- Avoids single large event breaches
- Trade-off: requires coordination and state tracking to reassemble the parts
Event Payload Example
{
  "correlationId": "file-xyz",
  "partNumber": 4,
  "totalParts": 10,
  "dataChunk": "base64encodedSegment"
}
AWS Tips
- Use SQS, Kinesis, or EventBridge Pipes as the transport for chunks
- After full reassembly, persist result to S3 or DynamoDB
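A minimal producer-side sketch of chunking over SQS, assuming a QUEUE_URL environment variable and a conservative part size (both illustrative):

import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});
const PART_SIZE = 200 * 1024; // characters per chunk, safely under the 256 KB message limit

// Split a large payload into Base64 chunks and send each one as its own message
export const sendInChunks = async (correlationId: string, payload: Buffer) => {
  const encoded = payload.toString("base64");
  const totalParts = Math.ceil(encoded.length / PART_SIZE);
  for (let i = 0; i < totalParts; i++) {
    await sqs.send(new SendMessageCommand({
      QueueUrl: process.env.QUEUE_URL!, // assumed env var
      MessageBody: JSON.stringify({
        correlationId,
        partNumber: i + 1,
        totalParts,
        dataChunk: encoded.slice(i * PART_SIZE, (i + 1) * PART_SIZE),
      }),
    }));
  }
};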
4) Enrichment Pattern (Contextual Data Fetching)
Concept
Emit minimal identifiers or context in the event. Consumers decide whether and when to fetch full data via APIs, databases, or external systems.
Process
- Producer emits event containing just IDs or essential fields.
- Consumer calls the service or DB to retrieve full details only when it actually needs them (lazy fetching).
Benefits
- Event bus stays lightweight
- Consumers fetch only what they need
- Reduces coupling between producers and consumers
Example Event
{
  "type": "OrderCreated",
  "orderId": "ORD-12345",
  "customerId": "CUST-98765"
}
Consumer Logic Example
// Pseudocode: look up the full order using the ID carried in the event
const orderData = await ordersTable.get({ id: event.orderId });
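A runnable version of that lookup with the AWS SDK v3 DynamoDB Document client might look like this sketch (the ORDERS_TABLE environment variable and key schema are illustrative):

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (event: { orderId: string; customerId: string }) => {
  // Enrich lazily: only consumers that actually need the full order pay for the lookup
  const { Item: order } = await ddb.send(new GetCommand({
    TableName: process.env.ORDERS_TABLE!, // assumed env var
    Key: { id: event.orderId },
  }));
  console.log("Enriched order:", order);
};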
5) Event Stream Filtering / Projection
Concept
Design event schemas and routing rules so that consumers receive only the subset of data they need, not a monolithic “everything” message.
Process
- Producers emit minimal, well-defined event schemas.
- Routers or event buses use filters or projections to trim fields.
- Consumers get only relevant fields.
Benefits
- Reduces data volume over time
- Encourages clear schema contracts
- Easier evolvability and consumer isolation
AWS (EventBridge) Example
import * as events from "aws-cdk-lib/aws-events";
import * as targets from "aws-cdk-lib/aws-events-targets";

new events.Rule(this, "FilteredRule", {
  eventPattern: {
    source: ["orders.service"],
    detailType: ["OrderCreated"],
    detail: {
      status: ["PAID"],
    },
  },
  targets: [new targets.LambdaFunction(handler)],
});
For the full filter syntax, see the EventBridge Event Patterns documentation.
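To go beyond filtering and actually trim the payload each target receives, EventBridge input transformers can project just the needed fields. A sketch building on the rule above (field names are illustrative):

import * as events from "aws-cdk-lib/aws-events";
import * as targets from "aws-cdk-lib/aws-events-targets";

// The target receives only the projected fields, not the whole detail object
new events.Rule(this, "ProjectedRule", {
  eventPattern: {
    source: ["orders.service"],
    detailType: ["OrderCreated"],
  },
  targets: [
    new targets.LambdaFunction(handler, {
      event: events.RuleTargetInput.fromObject({
        orderId: events.EventField.fromPath("$.detail.orderId"),
        status: events.EventField.fromPath("$.detail.status"),
      }),
    }),
  ],
});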
Choosing the Right Pattern
| Scenario | Recommended Pattern | Why |
|---|---|---|
| Very large binary/data files | Claim Check | Offload storage to S3, send reference |
| Large but compressible message | Compression | Reduce size, maintain single event |
| Streaming / chunked content | Chunking / Partitioning | Process incrementally |
| Many consumers, but only some need full data | Enrichment | Fetch data on demand |
| Many consumers with different needs | Filtering / Projection | Route minimal fields to each consumer |
What if the payload is not huge?
When the payload is small (well under 256 KB), you can keep it inline and select a simple event pattern based on fan-out and routing needs.
| Use case | Recommended Pattern | Why |
|---|---|---|
| 1→many fan-out | SNS → (Lambda/SQS) | Simple pub/sub |
| Cross-domain routing | EventBridge Bus | Filtering, schema registry |
| Point-to-point, retries | SQS → Lambda | Durable queue, DLQ |
| Ordered stream | Kinesis | Sharded, ordered ingestion |
Inline Event Example
{
  "detail-type": "OrderCreated",
  "source": "orders.service",
  "detail": { "orderId": "ORD-12345", "amount": 129.99 }
}
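Emitting that inline event from a producer is a one-step call; a sketch with the AWS SDK v3 EventBridge client (bus name is illustrative):

import { EventBridgeClient, PutEventsCommand } from "@aws-sdk/client-eventbridge";

const eventBridge = new EventBridgeClient({});

// Small payloads go straight onto the bus; no claim check or compression needed
export const emitOrderCreated = async (orderId: string, amount: number) => {
  await eventBridge.send(new PutEventsCommand({
    Entries: [{
      EventBusName: "default", // illustrative; use your custom bus
      Source: "orders.service",
      DetailType: "OrderCreated",
      Detail: JSON.stringify({ orderId, amount }),
    }],
  }));
};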
TL;DR
If the payload is not large:
- Keep it inline.
- Use SNS for broadcast, SQS for queues, EventBridge for routing, Kinesis for ordered streams.
- Always include idempotency keys, DLQs, and minimal schemas.
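For the DLQ part, a minimal CDK sketch for an SQS → Lambda consumer, assuming it sits inside a stack constructor with a `consumer` function already defined (names are illustrative):

import * as sqs from "aws-cdk-lib/aws-sqs";
import * as lambdaEventSources from "aws-cdk-lib/aws-lambda-event-sources";

// Messages that fail processing three times are moved to the dead-letter queue
const dlq = new sqs.Queue(this, "OrdersDlq");
const queue = new sqs.Queue(this, "OrdersQueue", {
  deadLetterQueue: { queue: dlq, maxReceiveCount: 3 },
});
consumer.addEventSource(new lambdaEventSources.SqsEventSource(queue));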