Distributed systems lie.
Requests get retried. Webhooks arrive twice. Clients time out and try again.
What should be a single operation suddenly runs multiple times — and now you’ve double-charged a customer or processed the same event five times.
Idempotency is the fix.
Doing it correctly is the hard part.
This post shows how to implement idempotent APIs in Node.js using Redis, and how the idempotency-redis package helps handle retries, payments, and webhooks safely.
## What idempotency means for APIs
An API operation is idempotent if:

> Multiple calls with the same idempotency key produce the same result — and side effects happen only once.
In practice:
- One execution per idempotency key
- Concurrent or retried requests replay the same result
- Failures can be replayed too
This matters for:
- 💳 Payments
- 🔁 Automatic retries
- 🔔 Webhooks
- 🧵 Concurrent requests
## Why naive solutions fail
Common approaches break down quickly:
- In-memory locks → don’t work across instances
- Database uniqueness → hard to replay results
- Redis `SETNX` → no result or error replay
- Returning `409 Conflict` → pushes complexity to clients
What you actually need is coordination + caching + replay, shared across all nodes.
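To make that concrete, here is a minimal in-memory sketch of the three ingredients (coordination, caching, replay). It is illustrative only: everything lives in one process, whereas a real implementation needs shared storage such as Redis, plus TTLs and error replay.

```javascript
// Illustrative sketch only: single-process maps standing in
// for the shared state a real implementation keeps in Redis.
const inFlight = new Map(); // key -> pending Promise (coordination)
const cached = new Map();   // key -> finished result (caching)

async function runOnce(key, action) {
  if (cached.has(key)) return cached.get(key);     // replay a finished result
  if (inFlight.has(key)) return inFlight.get(key); // join an in-flight execution
  const promise = action().then((value) => {
    cached.set(key, value); // cache for future replays
    inFlight.delete(key);
    return value;
  });
  inFlight.set(key, promise);
  return promise;
}
```

Five concurrent calls with the same key run `action` once and hand everyone the same result; the real package does this across processes, and also handles error caching and expiry, which this sketch omits.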
## Using idempotency-redis
idempotency-redis provides idempotent execution backed by Redis:
- One request executes the action
- Others wait and replay the cached result
- Errors are cached and replayed by default
- Works across multiple Node.js instances
### Basic example

```javascript
import Redis from 'ioredis';
import { IdempotentExecutor } from 'idempotency-redis';

const redis = new Redis();
const executor = new IdempotentExecutor(redis);

await executor.run('payment-123', async () => {
  return chargeCustomer();
});
```
Call this five times concurrently with the same key — the function runs once.
## Real-world use cases

### Payments
Payment providers and clients retry aggressively.
Your API must never double-charge.
```javascript
await executor.run(`payment:${paymentId}`, async () => {
  const charge = await stripe.charges.create(...);
  await saveToDB(charge);
  return charge;
});
```
If the response is lost, retries replay the cached result — no second charge.
### Webhooks
Webhook providers explicitly say “events may be delivered more than once.”
```javascript
await executor.run(`webhook:${event.id}`, async () => {
  await processWebhook(event);
});
```
Duplicate delivery? Same result. One execution.
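When the provider sends a stable `event.id`, that is the ideal key. When it does not, one hedged fallback (a hypothetical helper, not part of idempotency-redis) is to derive the key from the raw payload, assuming redeliveries are byte-for-byte identical:

```javascript
import { createHash } from 'node:crypto';

// Hypothetical fallback: hash the raw webhook body to get a stable key.
// Only valid if the provider resends the exact same bytes on redelivery.
function webhookKey(rawBody) {
  const digest = createHash('sha256').update(rawBody).digest('hex');
  return `webhook:${digest}`;
}
```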
## Retries without fear
With idempotency in place, you can safely:
- Enable HTTP retries
- Retry background jobs
- Handle slow or flaky dependencies
No duplicate work. No race conditions.
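As a sketch of what this buys you on the client side, here is a hypothetical retry helper (not part of idempotency-redis). Because the server side is idempotent, every attempt can reuse the same idempotency key without risking duplicate work:

```javascript
// Hypothetical retry helper with exponential backoff. Safe against an
// idempotent endpoint: repeated attempts replay rather than re-execute.
async function withRetries(action, { attempts = 3, baseDelayMs = 100 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await action();
    } catch (err) {
      lastError = err;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```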
## Error handling and control
By default, errors are cached and replayed — preventing infinite retries.
You can opt out selectively:
```javascript
await executor.run(key, action, {
  shouldIgnoreError: (err) => err.retryable === true
});
```
## When to use this
Use idempotency-redis if you:
- Build APIs that mutate state
- Accept retries or webhooks
- Run multiple Node.js instances
- Care about correctness under failure
## Learn more
- 📦 npm: https://www.npmjs.com/package/idempotency-redis
- 🐙 GitHub: https://github.com/foreverest/idempotency-redis
If you’ve ever debugged a “why did this run twice?” incident — idempotency isn’t optional. It’s infrastructure.
## Top comments (2)
Solid approach. I've been bitten by the duplicate webhook problem more times than I'd like to admit — Stripe in particular loves to retry aggressively when your server takes more than a few seconds to respond.
One thing I'm curious about: how do you handle the TTL for cached results? In my experience you want it long enough to cover retry windows but not so long that Redis memory balloons. We ended up doing something like 24h for payment keys and 1h for general API calls.
Also worth noting for anyone reading — the `shouldIgnoreError` callback is clutch. Not all errors should be cached. Transient stuff like network timeouts should absolutely be retried, but validation errors you probably want to cache so the client gets a consistent response.

---

Great question on TTL, and you’re absolutely right about the tradeoff. We had an open GitHub issue for this exact problem, and your comment pushed me to prioritize it. I’ve now released v1.5.1, which adds configurable TTL for cached results.
To keep this release simple, TTL is currently a single global executor-level setting. I’m considering a few next steps: per-call TTL overrides, separate TTLs for successful vs failed results, and dynamic TTL via callbacks.
Thanks again for the thoughtful feedback.