This was the most eye-opening problem in the entire series so far. Everything before it was single-session work: one query, one transaction, one outcome. This one throws two sessions at the same account at the same moment and asks what happens. The answer depends entirely on isolation levels, and understanding them is what separates someone who writes SQL from someone who actually understands databases.
The Setup
Same table, same data.
CREATE TABLE accounts (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
balance INT NOT NULL CHECK (balance >= 0),
last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO accounts (name, balance)
VALUES ('Alice', 1000), ('Bob', 500);
Alice has 1000. Bob has 500. Now imagine two people trying to send money from Alice's account at the exact same moment.
The Problem We Are Trying to Avoid
Alice has 1000. Session 1 tries to deduct 800. Session 2 tries to deduct 700. If both succeed, Alice ends up at negative 500, which the CHECK constraint will catch. But the scarier version is when one session reads the balance before the other commits, does its own math on that stale number, and both deductions go through based on the same starting balance. That is a lost update, and that is real money disappearing.
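A minimal sketch of that dangerous pattern, using the table from the setup above (the session labels and interleaving are illustrative, not output from a real run):

```sql
-- Anti-pattern: read the balance, compute in the application,
-- then write back an absolute value.

-- Session 1 reads 1000 and computes 1000 - 800 = 200 in app code.
SELECT balance FROM accounts WHERE name = 'Alice';

-- Session 2 also reads 1000 and computes 1000 - 700 = 300.
SELECT balance FROM accounts WHERE name = 'Alice';

-- Both write their precomputed results; the later write silently
-- overwrites the earlier one.
UPDATE accounts SET balance = 200 WHERE name = 'Alice';  -- Session 1
UPDATE accounts SET balance = 300 WHERE name = 'Alice';  -- Session 2

-- 1500 was "deducted" but the balance only dropped by 700, and the
-- CHECK constraint never fired because 300 is a perfectly legal value.
```

Note that the UPDATEs in the experiments below use the relative form `balance = balance - 800` instead, which is exactly why the database can still save us.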
Experiment 1 — Default Isolation (Read Committed)
PostgreSQL's default isolation level is Read Committed. This means a transaction can only see data that has already been committed by other transactions. It cannot see uncommitted changes from another session.
Open Session 1 and start a transaction but do not commit it yet:
-- Session 1
BEGIN;
UPDATE accounts
SET balance = balance - 800,
last_updated = CURRENT_TIMESTAMP
WHERE name = 'Alice';
-- Do not commit yet. Leave this transaction open.
Alice's balance is now 200 inside Session 1 but this change is not committed so the rest of the world cannot see it yet.
Now open Session 2 and try to read Alice's balance:
-- Session 2
SELECT balance FROM accounts WHERE name = 'Alice';
Session 2 still sees 1000. It cannot see the uncommitted deduction from Session 1. That is Read Committed doing its job. No dirty reads.
Now Session 2 tries to deduct 700 from Alice:
-- Session 2
BEGIN;
UPDATE accounts
SET balance = balance - 700,
last_updated = CURRENT_TIMESTAMP
WHERE name = 'Alice';
Session 2 gets blocked. PostgreSQL puts it in a wait state because Session 1 is holding a lock on Alice's row. Session 2 cannot touch that row until Session 1 either commits or rolls back.
Now go back to Session 1 and commit:
-- Session 1
COMMIT;
Alice's balance is now 200. Session 1's lock is released. Session 2 wakes up, re-reads the row with the updated balance of 200, and tries to deduct 700. That would bring Alice to negative 500, which violates the CHECK constraint, so PostgreSQL rejects the update.
ERROR: new row for relation "accounts" violates check constraint "accounts_balance_check"
Session 2 rolls back. Alice stays at 200. The system protected itself.
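We only got away with this at Read Committed because the UPDATE used the atomic `balance = balance - 800` form and the CHECK constraint backstopped it. If the application has to read the balance first, say to show a confirmation screen, the usual fix at this level is an explicit row lock with SELECT ... FOR UPDATE. A sketch, not part of the original experiment:

```sql
BEGIN;

-- Lock Alice's row. Any other session's UPDATE or FOR UPDATE on this
-- row now blocks until we commit or roll back, so the value we read
-- cannot change underneath us.
SELECT balance FROM accounts WHERE name = 'Alice' FOR UPDATE;

-- Application inspects the returned balance, then deducts safely.
UPDATE accounts
SET balance = balance - 800
WHERE name = 'Alice';

COMMIT;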
Experiment 2 — Repeatable Read Isolation
Repeatable Read is stricter. Once a transaction reads a row, it will keep seeing the same value for that row for the entire duration of the transaction, even if another session commits a change to it in the meantime.
-- Session 1
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE name = 'Alice';
-- Sees 1000
-- Session 2
BEGIN;
UPDATE accounts
SET balance = balance - 200
WHERE name = 'Alice';
COMMIT;
-- Alice is now 800
Now back in Session 1:
-- Session 1
SELECT balance FROM accounts WHERE name = 'Alice';
-- Still sees 1000, not 800
Session 1 is living in a snapshot of the data taken at its first query. Even though Session 2 committed a real change, Session 1 does not see it. This prevents non-repeatable reads, which is when the same query returns different results within the same transaction.
Now if Session 1 tries to update Alice based on what it thinks the balance is:
-- Session 1
UPDATE accounts
SET balance = balance - 800
WHERE name = 'Alice';
PostgreSQL detects that another transaction already modified this row after Session 1's snapshot was taken. It aborts the statement with a serialization error; the application must roll back and retry the whole transaction.
ERROR: could not serialize access due to concurrent update
Experiment 3 — Serializable Isolation
Serializable is the strictest level. PostgreSQL acts as if all transactions ran one after another in a sequence even if they are actually running at the same time. If it detects that the concurrent execution could produce a result that no sequential order would produce, it aborts one of the transactions.
-- Session 1
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM accounts WHERE name = 'Alice';
-- Sees 1000
UPDATE accounts
SET balance = balance - 800
WHERE name = 'Alice';
-- Session 2
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM accounts WHERE name = 'Alice';
-- Also sees 1000
UPDATE accounts
SET balance = balance - 700
WHERE name = 'Alice';
When both sessions try to commit, PostgreSQL figures out that there is no valid sequential order where both of these transactions can succeed without conflicting. It picks one to commit and forces the other to fail with a serialization error. The losing session has to catch that error and retry the whole transaction from the beginning.
This is the gold standard for correctness but it comes with a cost. Retries add complexity and under very high load they can slow things down.
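The retry itself is plain application logic. Here is a sketch in Python of a generic retry wrapper; `SerializationError` and the `transfer` function are illustrative stand-ins, since with a real driver such as psycopg you would catch its serialization-failure exception and re-run the transaction body:

```python
# Generic retry loop for serialization failures. SerializationError and
# transfer() are stand-ins for a real driver's exception and a real
# transaction body; the structure is what matters.
class SerializationError(Exception):
    pass

def run_with_retry(txn, max_attempts=3):
    """Run txn(); if it raises SerializationError, retry from scratch."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except SerializationError:
            if attempt == max_attempts:
                raise  # out of attempts, surface the failure

# Simulated transaction: fails twice with a serialization error,
# then succeeds on the third attempt.
attempts = {"n": 0}

def transfer():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationError("could not serialize access")
    return "committed"
```

The key point is that the whole transaction restarts from the top, including its reads, because the snapshot it was working from is what caused the conflict.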
Summary of Isolation Levels
Read Committed is the default and prevents dirty reads. It is good for most situations, but it can still lose updates if your application reads a value, thinks about it, and then writes based on that stale value.
Repeatable Read locks your view of the data for the whole transaction. What you read at the start is what you keep seeing. Good for reports or anything where consistency within one transaction matters.
Serializable is the strictest. It guarantees correctness even in the most complex concurrent scenarios but requires your application to handle retries when transactions conflict.
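For reference, the level can be chosen per transaction or as a session default. These are standard PostgreSQL forms:

```sql
-- Per transaction, at BEGIN:
BEGIN ISOLATION LEVEL SERIALIZABLE;
-- ... statements ...
COMMIT;

-- Or inside an open transaction, before its first query:
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- As a session-wide default:
SET default_transaction_isolation = 'serializable';
```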
What This Changed for Me
Before this problem I thought transactions were just about wrapping updates together. After this I understand that transactions also have to deal with time, specifically the fact that other transactions are running at the same time and changing the same data.
Isolation levels are how you tell the database how paranoid to be about that. For a payment system the answer is usually very paranoid. Serializable, or at minimum Repeatable Read with proper retry logic, is what keeps balances correct when thousands of transfers are happening every second.
The CHECK constraint stops impossible balances. Transactions stop partial updates. Isolation levels stop concurrent sessions from stepping on each other. All three working together is what makes a payment system actually trustworthy.