In distributed systems, you cannot assume that every request reaches its destination or that every response makes it back to the sender. When a transaction, such as a $500 payment, is executed twice because of a network timeout or a retry policy, that is more than a minor bug: it is a failure of the system architecture.
Idempotency is not optional. It is a fundamental requirement for maintaining data integrity in any environment where network partitions, client retries, and system failures are inevitable.
In this article, I'll show why idempotency matters and how to implement it.
What is Idempotency?
An operation is idempotent if it can be applied multiple times without changing the result beyond the initial application. In the context of an API, this means that repeating the same call must not produce any side effects in the business layer beyond those of the first call.
| Method | Idempotent | Description |
|---|---|---|
| GET | Yes | Should not change state; multiple reads return the same resource. |
| PUT | Yes | Replaces the resource; repeating the replacement results in the same state. |
| DELETE | Yes | Deleting a resource twice results in the same outcome (it is gone). |
| POST | No | Typically creates a new resource. Without intervention, repeated POSTs result in duplicates. |
Why Systems Fail Without Idempotency
The typical failure mode is simple: the client sends a request, the server processes it, but the response is lost to a timeout or a dropped connection. The client cannot distinguish "never processed" from "processed, but the reply was lost", so it retries and the operation runs a second time.
Idempotency Keys
The standard pattern for enforcing exactly-once processing of a transaction is the use of idempotency keys.
Implementation Workflow
1. Key Generation: the client generates a unique identifier (e.g., a UUID) for the operation.
2. Request Header: the key is sent in a dedicated header, e.g. Idempotency-Key (sometimes seen as X-Idempotency-Key).
3. Server-Side Validation: if the key already exists in the "processed" store, the server returns the cached response immediately. If the key is new, the server acquires a lock, processes the request, and stores the result before committing.
Client Side
// Assumes the 'uuid' package: npm install uuid
import { v4 as uuidv4 } from 'uuid';

// Generate the UUID once, on the first attempt
const idempotencyKey = uuidv4(); // e.g. "abc-123-def-456"

// Send the request with the key in the header
fetch('/api/orders', {
  method: 'POST',
  headers: {
    'Idempotency-Key': idempotencyKey,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ productId: 42, quantity: 1 })
});

// On retry (timeout/error), reuse THE SAME key.
// The key does not change until the operation succeeds or the retry limit is reached.
Server Side in Go
package main

import (
	"encoding/json"
	"net/http"
	"time"
)

// HandleCreateOrder creates an order once per Idempotency-Key and replays the
// stored response on retries. The helpers checkKeyAlreadyProcessed,
// storeProcessedKey and createOrder are assumed to be defined elsewhere.
func HandleCreateOrder(w http.ResponseWriter, r *http.Request) {
	idempotencyKey := r.Header.Get("Idempotency-Key")

	// Reject requests without a key
	if idempotencyKey == "" {
		http.Error(w, "Idempotency-Key required", http.StatusBadRequest)
		return
	}

	// Check if this key has already been processed
	cachedResponse, found := checkKeyAlreadyProcessed(idempotencyKey)
	if found {
		// Return the cached response without re-executing the business logic
		w.WriteHeader(http.StatusCreated)
		w.Write(cachedResponse)
		return
	}

	// Process the order
	order, err := createOrder(r.Body)
	if err != nil {
		http.Error(w, "Failed to create order", http.StatusInternalServerError)
		return
	}

	// Store the key and the serialized response for future retries
	responseBody, _ := json.Marshal(order)
	storeProcessedKey(idempotencyKey, responseBody, 24*time.Hour)

	w.WriteHeader(http.StatusCreated)
	w.Write(responseBody)
}
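The storage helpers are left abstract above. As a point of reference only, here is a minimal in-memory sketch of what they could look like; a production system would back them with Redis or a database, as discussed in the next section.

```go
package main

import (
	"sync"
	"time"
)

type cachedEntry struct {
	body      []byte
	expiresAt time.Time
}

var (
	idempotencyMu    sync.Mutex
	idempotencyCache = map[string]cachedEntry{}
)

// checkKeyAlreadyProcessed returns the stored response for a key,
// if it is present and not expired.
func checkKeyAlreadyProcessed(key string) ([]byte, bool) {
	idempotencyMu.Lock()
	defer idempotencyMu.Unlock()
	entry, ok := idempotencyCache[key]
	if !ok || time.Now().After(entry.expiresAt) {
		return nil, false
	}
	return entry.body, true
}

// storeProcessedKey records the response body for a key with a TTL.
func storeProcessedKey(key string, body []byte, ttl time.Duration) {
	idempotencyMu.Lock()
	defer idempotencyMu.Unlock()
	idempotencyCache[key] = cachedEntry{body: body, expiresAt: time.Now().Add(ttl)}
}
```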
With this in place, repeated requests that carry the same key produce no additional side effects; the client simply receives the original response again.
Strategic Implementation Best Practices
Persistence Layer Choice
- Redis: ideal for performance. Set a time-to-live (TTL) of 24–48 hours to prevent unbounded memory growth.
- Relational database: ideal for strict consistency. Store the key within the same transaction as the business logic to ensure atomicity (see the sketch below).
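To make the relational option concrete, here is a minimal sketch using Go's database/sql; the table and column names (idempotency_keys, orders) and the Postgres-style placeholders are assumptions for illustration.

```go
package main

import "database/sql"

// createOrderTx writes the idempotency key in the same transaction as the
// order, so either both are committed or neither is.
func createOrderTx(db *sql.DB, key string, productID, quantity int) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once the transaction has been committed

	// A UNIQUE constraint on idempotency_keys.idempotency_key makes a
	// duplicate request fail here instead of creating a second order.
	if _, err := tx.Exec(
		"INSERT INTO idempotency_keys (idempotency_key) VALUES ($1)", key); err != nil {
		return err
	}

	if _, err := tx.Exec(
		"INSERT INTO orders (product_id, quantity) VALUES ($1, $2)",
		productID, quantity); err != nil {
		return err
	}

	return tx.Commit()
}
```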
Deterministic Responses
An idempotent retry should return the original status code. If the first request returned a 201 Created, the subsequent retry with the same key should also return 201 Created, not a 200 OK or 409 Conflict. This ensures the client-side logic remains simple and consistent.
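One way to achieve this (not shown in the handler above) is to persist the original status code alongside the body and replay both; the StoredResponse struct below is a hypothetical extension of the storage layer.

```go
package main

import "net/http"

// StoredResponse is a hypothetical record persisted under the idempotency key.
type StoredResponse struct {
	StatusCode int    // e.g. 201 from the first attempt
	Body       []byte // the exact payload returned the first time
}

// replay writes the original response again for a retried request.
func replay(w http.ResponseWriter, resp StoredResponse) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(resp.StatusCode) // the original status, never a 200 OK or 409 Conflict
	w.Write(resp.Body)
}
```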
Scope of the Key
Keys should be scoped to the user or account. This prevents key collisions, where two different users might generate the same UUID (unlikely, but possible), so separate the keys per tenant, as in the sketch below.
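A minimal way to do this, assuming a userID is available from the authentication layer, is to namespace the key before it is stored or looked up:

```go
package main

// scopedKey namespaces the idempotency key per user/tenant so that
// identical UUIDs from different accounts can never collide.
func scopedKey(userID, idempotencyKey string) string {
	return userID + ":" + idempotencyKey // e.g. "user-42:abc-123-def-456"
}
```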
The Read-Modify-Write Problem
A common mistake in implementing idempotency is failing to account for concurrency. In high-traffic systems, two identical requests might reach your API at the exact same millisecond.
// ❌ CRITICAL BUG: Race Condition
const record = await db.idempotency.find(key);
if (!record) {
  // Both Request A and Request B can reach this line simultaneously
  await service.processTransaction();
}
The fix is atomicity at the storage level: rely on database constraints or atomic "set-if-not-exists" operations:
- SQL: Use a UNIQUE constraint on the idempotency key column and handle the violation error.
- Redis: Use the SET NX command to ensure only one worker can claim and process the key (see the sketch after this list).
- NoSQL: Use conditional updates.
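As an illustration of the Redis approach, here is a minimal sketch assuming the go-redis client (github.com/redis/go-redis/v9); the key prefix, placeholder value, and TTL are arbitrary choices.

```go
package main

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

// tryClaimKey atomically claims an idempotency key using SET NX.
// Only the first caller gets true; concurrent duplicates get false and should
// wait for (or return) the cached response instead of reprocessing.
func tryClaimKey(ctx context.Context, rdb *redis.Client, key string) (bool, error) {
	// "processing" is a placeholder; the real response is stored once it is known.
	return rdb.SetNX(ctx, "idempotency:"+key, "processing", 24*time.Hour).Result()
}
```

If tryClaimKey returns false, the handler falls back to the cached-response path shown earlier instead of touching the business logic again.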
Conclusion
Idempotency is a cornerstone of resilient, fault-tolerant distributed systems. By moving the responsibility for deduplication out of ad-hoc business logic and into a structured architectural pattern, we eliminate entire classes of bugs related to double-spending and data corruption.
