Forwarding every webhook event directly to a downstream API is a recipe for throttling, duplicate processing, and out-of-order writes. This post walks through a simple pattern — three Logic Apps and one table — that buffers events and processes only the latest state per entity.
The Problem
A source system fires events on every create or update. The same entity can be updated dozens of times in minutes. You don't need to process every intermediate state — only the final one.
If an entity is updated 20 times in 30 minutes, only the most recent state needs to be processed — and the downstream API only gets called once.
The Pattern
Source System Webhook
│
▼
rcv-events (HTTP trigger)
│ upsert each event → EventBuffer table
▼
Azure Table Storage: EventBuffer
│ PartitionKey: "relation-events" RowKey: entityId Status: "Pending"
▼
prc-events (Timer: every 5 min)
│ query Pending rows older than X min → dispatch each
▼
prc-process-single-event
│ mark Processing → fetch fresh from source → call downstream
│ delete on success / reset to Pending on failure
▼
Downstream API
Step 1 — Receive
rcv-events accepts a batch of events via HTTP and upserts each one into the buffer table. No queue, no broker — the HTTP trigger is the ingress.
Each row looks like this:
{
  "PartitionKey": "relation-events",
  "RowKey": "<entityId>",
  "Event": "updated",
  "EntityType": "Record",
  "Status": "Pending",
  "ReceivedAt": "2026-04-20T14:30:00Z"
}
RowKey = entityId is the key insight. No matter how many events arrive for the same entity, there is always exactly one row. The tenth update overwrites the ninth. Deduplication is a schema decision, not code.
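The upsert semantics can be sketched in a few lines. This is a minimal stand-in that models the EventBuffer table as a plain dict keyed by RowKey — the names `buffer` and `upsert_event` are illustrative, not part of the actual Logic App:

```python
from datetime import datetime, timezone

# In-memory stand-in for the EventBuffer table: one dict entry per RowKey.
# Azure Table Storage's upsert gives the same last-write-wins behavior.
buffer = {}

def upsert_event(entity_id: str, event: str, entity_type: str = "Record") -> None:
    """Insert or overwrite the single row for this entity (RowKey = entityId)."""
    buffer[entity_id] = {
        "PartitionKey": "relation-events",
        "RowKey": entity_id,
        "Event": event,
        "EntityType": entity_type,
        "Status": "Pending",
        "ReceivedAt": datetime.now(timezone.utc).isoformat(),
    }

# Ten updates for the same entity collapse into a single pending row.
for _ in range(10):
    upsert_event("rec-42", "updated")
print(len(buffer))  # prints 1
```

However many events rcv-events receives for `rec-42`, the table only ever holds the row written last.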
Step 2 — Wait
prc-events runs on a timer (every 5 minutes) and queries rows where Status eq 'Pending' and Timestamp <= utcNow() - X minutes. (Timestamp is the system property Azure Table Storage refreshes on every upsert, so each new event for an entity restarts its debounce window.) The time window is your debounce threshold — nothing gets processed until the burst settles.
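The debounce filter itself is simple. Below is a sketch of the selection logic, assuming the last-write time is available as a `Timestamp` field on each row (in Azure Table Storage this is the system property refreshed on every upsert); the helper name `due_rows` and the 10-minute window are illustrative:

```python
from datetime import datetime, timedelta, timezone

DEBOUNCE = timedelta(minutes=10)  # "X minutes": tune to how long bursts last

def due_rows(rows, now):
    """Equivalent of: Status eq 'Pending' and Timestamp <= now - DEBOUNCE."""
    cutoff = now - DEBOUNCE
    return [r for r in rows
            if r["Status"] == "Pending"
            and datetime.fromisoformat(r["Timestamp"]) <= cutoff]

now = datetime(2026, 4, 20, 15, 0, tzinfo=timezone.utc)
rows = [
    {"RowKey": "rec-1", "Status": "Pending",
     "Timestamp": "2026-04-20T14:30:00+00:00"},  # quiet for 30 min: due
    {"RowKey": "rec-2", "Status": "Pending",
     "Timestamp": "2026-04-20T14:55:00+00:00"},  # still bursting: wait
    {"RowKey": "rec-3", "Status": "Processing",
     "Timestamp": "2026-04-20T14:00:00+00:00"},  # already claimed: skip
]
print([r["RowKey"] for r in due_rows(rows, now)])  # prints ['rec-1']
```

An entity that keeps receiving updates keeps sliding out of the window; it is only picked up once it has been quiet for the full debounce period.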
Step 3 — Process
For each pending row, prc-process-single-event:
- Marks the row Processing — prevents double-processing if the timer fires again mid-run
- Fetches the current state from the source system — never trusts the buffered payload, which may already be stale
- Calls the downstream API with fresh data
- Deletes the row on success / resets to Pending on failure
This gives at-least-once delivery with automatic retry — no custom infrastructure needed.
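The four steps above can be sketched as one function. The `Table` class below is an in-memory stand-in, and `fetch_current_state` / `call_downstream` are hypothetical callables representing the source-system fetch and the downstream call:

```python
class Table:
    """In-memory stand-in for the EventBuffer table."""
    def __init__(self):
        self.rows = {}
    def update(self, row):
        self.rows[row["RowKey"]] = row
    def delete(self, row_key):
        self.rows.pop(row_key, None)

def process_single_event(row, fetch_current_state, call_downstream, table):
    """Sketch of prc-process-single-event (helper names are illustrative)."""
    # 1. Claim the row so a second timer run skips it.
    row["Status"] = "Processing"
    table.update(row)
    try:
        # 2. Never trust the buffered payload: re-fetch the latest state.
        state = fetch_current_state(row["RowKey"])
        # 3. Push the fresh state downstream.
        call_downstream(state)
        # 4. Success: the row has served its purpose.
        table.delete(row["RowKey"])
    except Exception:
        # Failure: release the claim; the next timer run retries it.
        row["Status"] = "Pending"
        table.update(row)

table = Table()

# Happy path: fetch and downstream call succeed, so the row is deleted.
table.update({"RowKey": "rec-42", "Status": "Pending"})
process_single_event(table.rows["rec-42"],
                     fetch_current_state=lambda rid: {"id": rid},
                     call_downstream=lambda state: None,
                     table=table)
print("rec-42" in table.rows)  # prints False

# Failure path: downstream raises, so the row is reset to Pending for retry.
table.update({"RowKey": "rec-7", "Status": "Pending"})
def flaky(state):
    raise RuntimeError("downstream 429")
process_single_event(table.rows["rec-7"],
                     fetch_current_state=lambda rid: {"id": rid},
                     call_downstream=flaky,
                     table=table)
print(table.rows["rec-7"]["Status"])  # prints Pending
```

The retry loop falls out of the status reset: a failed row simply becomes Pending again and is picked up by the next timer run, no retry policy required.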
Status Lifecycle
Pending → Processing → [deleted]
│
└──(on failure)──→ Pending
Three states, one field. Fully visible in Azure Storage Explorer during an incident.
Why It Works
- Deduplication for free — one row per entity, always the latest
- No ordering concerns — you fetch fresh data at processing time, so intermediate states are irrelevant
- Respects downstream rate limits — instead of hammering a downstream API with every intermediate update, you send one call per entity per time window. If an entity changes 20 times in 30 minutes, the downstream system sees exactly one request. This makes the pattern a natural fit when integrating with third-party APIs that enforce rate limits or throttle bursts
- Operationally transparent — query the table, see exactly what's pending or stuck
- No broker needed at low-to-moderate scale — if your HTTP trigger can handle the inbound burst and your timer cadence keeps up with the queue depth, you don't need Service Bus
Consider adding Service Bus only if you need strict ordering, dead-lettering, or multiple consumers on the same stream.
No Service Bus. No custom retry logic. No ordering guarantees needed. Just a table, a timer, and one row per entity.


