Every major webhook provider — Stripe, Shopify, GitHub, Twilio — delivers webhooks with at-least-once semantics. That means duplicates aren't a bug; they're a guarantee. Your system will receive the same event two, three, or more times. If you process each delivery independently, you'll charge customers twice, send duplicate emails, or create duplicate records.
Most developers know they need idempotency. The problem is that the most common idempotency pattern has a race condition that fails under real production load. This post covers the wrong way, the right way, and the battle-tested patterns for exactly-once webhook processing.
Why Webhook Duplicates Are Inevitable
Duplicates happen for multiple reasons, and you can't prevent any of them:
- Provider retries: Your server returned 200 but the response was lost in transit (network blip, load balancer timeout). The provider sees no response and retries.
- At-least-once queues: If the provider uses a message queue internally (most do), the queue may deliver the same message twice. This is a fundamental property of distributed message queues.
- Webhook replay: Someone clicks "resend" in the Stripe dashboard, or an automated recovery system replays events.
- Multiple delivery paths: Some systems have both real-time delivery and a recovery/catch-up mechanism that can send the same event through different paths.
You cannot make duplicates stop happening. You can only make your system handle them correctly.
The Pattern Everyone Uses (And Why It Breaks)
Here's the "obvious" idempotency pattern — check if you've seen the event, skip if you have:
```typescript
// THE WRONG WAY: check-then-act (has a race condition)
app.post('/webhook', async (req, res) => {
  const eventId = req.body.id; // e.g., evt_abc123

  // Step 1: Check if already processed
  const existing = await db.processedEvents.findUnique({
    where: { eventId }
  });
  if (existing) {
    return res.json({ duplicate: true }); // Already handled
  }

  // Step 2: Process the event
  await processEvent(req.body);

  // Step 3: Mark as processed
  await db.processedEvents.create({
    data: { eventId, processedAt: new Date() }
  });

  res.json({ received: true });
});
```
This looks correct. It checks for duplicates, processes the event, records that it was processed. What could go wrong?
The Race Condition
Imagine two deliveries of the same event arrive 50ms apart (this happens regularly under load):
```text
T=0ms   Request A: SELECT * FROM processed_events WHERE event_id = 'evt_abc123'
T=5ms   Request A: Result: no rows → not a duplicate
T=10ms  Request B: SELECT * FROM processed_events WHERE event_id = 'evt_abc123'
T=15ms  Request B: Result: no rows → not a duplicate (!!!)
T=20ms  Request A: processEvent(event) → charges customer
T=25ms  Request B: processEvent(event) → charges customer AGAIN
T=50ms  Request A: INSERT INTO processed_events (event_id, ...)
T=55ms  Request B: INSERT INTO processed_events → duplicate key error (too late)
```
Both requests pass the duplicate check because neither has written to the database yet when the other checks. The customer gets charged twice. The second INSERT might fail on a unique constraint, but by then the damage is done.
This is the classic check-then-act race condition (also called TOCTOU — Time Of Check to Time Of Use). It's the same bug that causes double-spend in payment systems, double-voting in election software, and overselling in e-commerce. In webhook processing, it's everywhere.
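You can watch this race happen without a database. The sketch below simulates check-then-act with an in-memory Set standing in for the processed_events table; the await between the check and the write is the vulnerability window:

```typescript
// Simulated check-then-act race. The Set plays the role of processed_events;
// the await between check and write is where a concurrent duplicate slips in.
const processedEvents = new Set<string>();
let chargeCount = 0;

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function naiveHandler(eventId: string): Promise<void> {
  const alreadyProcessed = processedEvents.has(eventId); // check
  await sleep(10);                                       // simulated query latency
  if (alreadyProcessed) return;
  chargeCount += 1;                                      // act: "charge the customer"
  processedEvents.add(eventId);                          // mark processed (too late)
}

// Two deliveries of the same event arrive concurrently.
Promise.all([naiveHandler('evt_abc123'), naiveHandler('evt_abc123')]).then(() => {
  console.log(`charges: ${chargeCount}`); // both passed the check -> charges: 2
});
```

Both handlers read an empty Set before either one writes to it, so the customer is charged twice, exactly as in the timeline above.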
The Correct Patterns
Pattern 1: Database Constraint as the Lock (Recommended)
Instead of checking then acting, use the database's unique constraint as an atomic lock. Insert first, process only if the insert succeeds:
```typescript
// THE RIGHT WAY: insert-first with unique constraint
app.post('/webhook', async (req, res) => {
  const eventId = req.body.id;

  // Attempt to claim this event atomically
  try {
    await db.processedEvents.create({
      data: {
        eventId,
        status: 'processing',
        receivedAt: new Date()
      }
    });
  } catch (err) {
    if (isDuplicateKeyError(err)) {
      // Another request already claimed this event
      return res.json({ duplicate: true });
    }
    throw err; // Some other database error
  }

  // If we get here, we "own" this event — no race condition
  try {
    await processEvent(req.body);
    await db.processedEvents.update({
      where: { eventId },
      data: { status: 'processed', processedAt: new Date() }
    });
  } catch (err) {
    await db.processedEvents.update({
      where: { eventId },
      data: { status: 'failed', error: err.message }
    });
    // Don't rethrow — we still return 200 so the provider doesn't retry
    // The failed event needs manual investigation or a retry job
  }

  res.json({ received: true });
});

function isDuplicateKeyError(err: any): boolean {
  if (err.code === 'P2002') return true;                    // Prisma
  if (err.code === '23505') return true;                    // PostgreSQL
  if (err.errno === 1062) return true;                      // MySQL
  if (err.code === 'SQLITE_CONSTRAINT_UNIQUE') return true; // SQLite
  return false;
}
```
The key insight: the INSERT with a unique constraint on eventId is atomic at the database level. Two concurrent requests can't both succeed. One will insert, the other will get a duplicate key error. No race condition possible.
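If you'd rather not drive control flow with exceptions, Postgres can report the outcome directly via `INSERT ... ON CONFLICT (event_id) DO NOTHING` and its row count. A sketch of that variant, parameterized over a query executor so the claim logic can be exercised without a live database (the table name and the `pg` usage in the comment are illustrative assumptions):

```typescript
// Claim an event via INSERT ... ON CONFLICT DO NOTHING: rowCount tells us
// atomically whether this request won the race, with no exception handling.
type InsertResult = { rowCount: number };
type InsertClaim = (eventId: string) => Promise<InsertResult>;

// Returns true if this request claimed the event, false if it was a duplicate.
async function claimEvent(insert: InsertClaim, eventId: string): Promise<boolean> {
  const result = await insert(eventId);
  return result.rowCount === 1; // 0 rows inserted => another request got there first
}

// Against a real pg Pool it might look like (hypothetical table):
// const insert: InsertClaim = (id) => pool.query(
//   `INSERT INTO processed_events (event_id, status)
//    VALUES ($1, 'processing')
//    ON CONFLICT (event_id) DO NOTHING`,
//   [id]
// );
```

Same atomicity guarantee as catching the duplicate-key error, just expressed as data instead of control flow.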
Pattern 2: SELECT ... FOR UPDATE (Pessimistic Locking)
If you need more flexibility (e.g., the idempotency key isn't the primary key), use row-level locking:
```typescript
// Pessimistic locking with a transaction
app.post('/webhook', async (req, res) => {
  const eventId = req.body.id;

  const duplicate = await db.$transaction(async (tx) => {
    // Lock the row if it exists (FOR UPDATE blocks concurrent readers)
    const existing = await tx.$queryRaw`
      SELECT * FROM processed_events
      WHERE event_id = ${eventId}
      FOR UPDATE
    `;
    if (existing.length > 0) {
      return true; // Already processed, skip
    }

    await tx.processedEvents.create({
      data: { eventId, status: 'processing' }
    });
    await processEvent(req.body);
    await tx.processedEvents.update({
      where: { eventId },
      data: { status: 'processed' }
    });
    return false;
  });

  res.json(duplicate ? { duplicate: true } : { received: true });
});
```
Tradeoff: This holds a database lock for the entire duration of processing. Fine for fast operations, but problematic if processEvent() takes seconds or calls external APIs, because the lock blocks every other request for the same event. Note also that FOR UPDATE can only lock rows that already exist: two concurrent first deliveries both see zero rows, so you still need a unique constraint on event_id to stop both from inserting.
Pattern 3: Idempotency Key in a Separate Store (Redis/KV)
For high-throughput systems where database transactions are expensive, use an atomic set-if-not-exists operation in a fast key-value store:
```typescript
// Redis-based idempotency with an atomic SET ... NX
app.post('/webhook', async (req, res) => {
  const eventId = req.body.id;
  const lockKey = `webhook:lock:${eventId}`;

  // NX: set only if the key doesn't exist (atomic)
  // EX 86400: expire after 24 hours (cleanup)
  const acquired = await redis.set(lockKey, 'processing', 'EX', 86400, 'NX');
  if (!acquired) {
    // Another process already claimed this event
    return res.json({ duplicate: true });
  }

  try {
    await processEvent(req.body);
    await redis.set(lockKey, 'processed', 'EX', 86400);
  } catch (err) {
    // Release the lock so a retry can attempt processing
    await redis.del(lockKey);
    throw err;
  }

  res.json({ received: true });
});
```
Tradeoff: Redis is fast but not as durable as a database. If Redis restarts between the lock acquisition and processing completion, you might process the event twice. For most webhook use cases, this is acceptable — Redis rarely restarts, and the 24-hour TTL keeps the keyspace clean.
Which Pattern Should You Use?
| Pattern | Best For | Tradeoff |
|---|---|---|
| Unique constraint (Pattern 1) | Most applications | Requires database write before processing |
| SELECT FOR UPDATE (Pattern 2) | Complex idempotency keys | Holds lock during processing |
| Redis SETNX (Pattern 3) | High-throughput systems | Less durable than database |
For 90% of webhook handlers, Pattern 1 (unique constraint) is the right choice. It's simple, correct, and uses infrastructure you already have.
Edge Cases That Still Bite You
Processing Succeeds But Status Update Fails
You process the event (charge the customer), but the database update to mark it as "processed" fails (network error, database full). On retry, you see the event in "processing" state. Is it safe to reprocess? You need a way to check the actual side effect (was the charge created?) rather than just the status field.
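One way to make that retry decision safe is to key the side effect itself by the event ID and look for the artifact, not the status flag. A minimal sketch with an in-memory charge store standing in for your charges table or payment provider:

```typescript
type Charge = { eventId: string; amountCents: number };

// Stand-in for your charges table / payment-provider records.
const charges: Charge[] = [];

// Idempotent side effect: before charging, check whether a charge for this
// event already exists. Reprocessing is then safe regardless of what the
// status field says.
function chargeOnce(eventId: string, amountCents: number): Charge {
  const existing = charges.find((c) => c.eventId === eventId);
  if (existing) return existing; // the side effect already happened
  const charge = { eventId, amountCents };
  charges.push(charge);
  return charge;
}

chargeOnce('evt_abc123', 500);
chargeOnce('evt_abc123', 500); // retry after a failed status update
console.log(charges.length);   // still one charge
```

In production the lookup would be a database query or a payment-provider search keyed by the event ID, but the shape of the fix is the same: verify the side effect, then act.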
Non-Deterministic Processing
If processEvent() creates resources with random IDs, processing the same event twice creates two different resources. True idempotency means the same input always produces the same output. Use deterministic IDs derived from the event ID where possible.
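One way to do that: hash the event ID (plus the resource type) into the resource's ID, so reprocessing maps to the same resource instead of a new one. A sketch:

```typescript
import { createHash } from 'node:crypto';

// Derive a resource ID deterministically from the event ID: processing the
// same event twice then creates (or upserts) the same resource, not two.
function resourceIdFor(resourceType: string, eventId: string): string {
  return createHash('sha256')
    .update(`${resourceType}:${eventId}`)
    .digest('hex')
    .slice(0, 24);
}

// Same event, same resource type -> same ID, every time.
console.log(resourceIdFor('invoice', 'evt_abc123'));
```

Pair this with an upsert (insert-or-ignore) on the derived ID and the create step becomes idempotent on its own.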
Multi-Step Processing
If processing involves multiple steps (update DB, send email, call API), partial failure means some steps completed and some didn't. On retry, you need to skip completed steps. This is the saga pattern, and it's significantly more complex than single-step idempotency.
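A full saga framework is out of scope here, but the core idea can be sketched as a step checkpoint: record which steps completed for an event and skip them on retry. The Map below stands in for a persisted checkpoint table:

```typescript
// Per-event record of completed steps; a real system would persist this.
const completedSteps = new Map<string, Set<string>>();

// Run a step only if it hasn't already completed for this event.
async function runStep(
  eventId: string,
  stepName: string,
  fn: () => Promise<void>
): Promise<void> {
  const done = completedSteps.get(eventId) ?? new Set<string>();
  if (done.has(stepName)) return; // completed on a previous attempt
  await fn();
  done.add(stepName); // checkpoint only after the step succeeds
  completedSteps.set(eventId, done);
}
```

If `send-email` fails after `update-db` succeeded, a retry skips `update-db` and re-runs only the failed step. Each step still needs to be individually idempotent, since a crash can land between the step and its checkpoint.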
How EventDock Handles Deduplication
EventDock deduplicates at the infrastructure level before events reach your handler. Each event gets a unique ID, and the delivery pipeline uses KV-based atomic dedup to ensure your endpoint receives each event exactly once — even if the webhook provider delivers it multiple times.
This means your webhook handler doesn't need any of the patterns above. You write simple processing logic, and EventDock guarantees you won't see the same event twice. If you want defense-in-depth, you can still add application-level idempotency using Pattern 1 — but you won't need it for correctness.
Frequently Asked Questions
Why do webhooks get delivered more than once?
At-least-once delivery is a fundamental property of distributed systems. If the provider doesn't receive your 200 response (network blip, load balancer timeout, process crash), it retries. Some providers also use internal queues that may deliver the same message twice. You cannot prevent duplicates — only handle them.
What is the check-then-act race condition?
It's when you check if an event was processed (SELECT), then process it, then mark it done (INSERT). Two concurrent requests can both pass the check before either writes, causing double processing. Fix it by inserting first with a unique constraint — the database enforces atomicity.
What is an idempotency key for webhooks?
A unique identifier for a webhook event used to detect duplicates. Stripe provides event.id, Shopify provides X-Shopify-Webhook-Id, GitHub provides X-GitHub-Delivery. Store these with a unique database constraint.
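For example, a small extractor (assuming an Express-style request object, where header names arrive lowercased; the helper names here are illustrative):

```typescript
// Each provider carries the idempotency key in a different place.
type WebhookRequest = {
  headers: Record<string, string | undefined>;
  body: { id?: string };
};

type Provider = 'stripe' | 'shopify' | 'github';

function required(value: string | undefined, name: string): string {
  if (!value) throw new Error(`missing idempotency key for ${name}`);
  return value;
}

function idempotencyKeyFor(provider: Provider, req: WebhookRequest): string {
  if (provider === 'stripe') return required(req.body.id, 'stripe'); // event ID in the payload
  if (provider === 'shopify') return required(req.headers['x-shopify-webhook-id'], 'shopify');
  return required(req.headers['x-github-delivery'], 'github');
}
```

Whatever the source, the key ends up in the same place: a column with a unique constraint.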
How do I make my webhook handler idempotent?
Use the insert-first pattern: INSERT the idempotency key with a unique constraint. If the insert succeeds, process the event. If it fails with a duplicate key error, skip it. This is the only pattern that's both correct and race-condition-free without requiring explicit locks.
---
Skip the dedup complexity
EventDock deduplicates webhook events at the infrastructure level. Your handler receives each event exactly once — no idempotency keys, no race conditions, no duplicate processing.
5,000 events/month free. No credit card required.
[Start Free](https://dashboard.eventdock.app/login)