SELECT * FROM accounts;
Start by checking current balances so there is a clear baseline before repeating any operations. This helps track how much the values change after duplicate executions.
BEGIN;
UPDATE accounts
SET balance = balance - 100
WHERE name = 'Alice';
UPDATE accounts
SET balance = balance + 100
WHERE name = 'Bob';
COMMIT;
Run a normal transfer once to simulate a valid request. This establishes the expected behavior of a single, legitimate execution.
SELECT * FROM accounts;
Verify that balances updated correctly after the first transfer. This confirms the system is working as intended for a single execution.
BEGIN;
UPDATE accounts
SET balance = balance - 100
WHERE name = 'Alice';
UPDATE accounts
SET balance = balance + 100
WHERE name = 'Bob';
COMMIT;
Run the exact same transaction again to simulate a duplicate request. This mimics real-world retries caused by network issues or user actions.
SELECT * FROM accounts;
Observe that the balances change again, meaning the same transfer was applied twice. This shows the database does not automatically detect duplicates.
BEGIN;
UPDATE accounts
SET balance = balance - 100
WHERE name = 'Alice';
UPDATE accounts
SET balance = balance + 100
WHERE name = 'Bob';
COMMIT;
Repeat once more to confirm the behavior. Each execution keeps modifying the balances, which demonstrates there is no built-in protection against repeated transactions.
SELECT * FROM accounts;
Balances continue to change with every execution. This clearly shows that duplicate operations are treated as separate valid transactions.
The database only guarantees the correctness of each individual transaction; it has no notion of whether the same logical action was already performed earlier. Preventing duplicate processing must therefore be handled at a higher level, for example with unique transaction IDs, idempotency keys, or a log of processed requests. Real systems rely on these techniques so that even if the same request is sent multiple times, it is applied only once.
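A duplicate-safe version of the transfer above can be sketched in SQL itself. This is only one possible approach; the table name processed_requests and the key 'transfer-001' are illustrative, not part of the original example. The idea is that the client attaches a unique idempotency key to each logical transfer and the database's uniqueness constraint rejects replays:

```sql
-- One row per logical request; the PRIMARY KEY enforces uniqueness.
CREATE TABLE processed_requests (
    request_id TEXT PRIMARY KEY
);

BEGIN;
-- If this request_id was already recorded, the INSERT violates the
-- primary-key constraint and the whole transaction aborts, so the
-- balance updates below are never applied a second time.
INSERT INTO processed_requests (request_id) VALUES ('transfer-001');
UPDATE accounts
SET balance = balance - 100
WHERE name = 'Alice';
UPDATE accounts
SET balance = balance + 100
WHERE name = 'Bob';
COMMIT;
```

Re-running this block with the same request_id fails at the INSERT, and because all three statements share one transaction, the transfer is applied at most once.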