Philip McClarence
Diagnosing and Resolving Lock Contention in PostgreSQL

Why One Slow Query Can Block Your Entire PostgreSQL Database

You're running a healthy production PostgreSQL database. Response times are stable, connections are flowing, everything is fine. Then someone starts a migration, a reporting query opens a long transaction, or a developer forgets to COMMIT — and within seconds, your entire application is timing out.

This is lock contention, and it's one of the most disruptive problems in PostgreSQL because it escalates faster than almost any other issue.

How PostgreSQL Locking Works

Every row-level write operation acquires locks at two levels. An UPDATE takes a RowExclusiveLock on the table and an exclusive lock on each modified row. A SELECT FOR UPDATE takes a weaker RowShareLock on the table, but the same exclusive lock on each selected row. These locks are invisible when transactions finish quickly: held for milliseconds, then released.
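You can see these locks for yourself: open a transaction, run an UPDATE, and query pg_locks for your own backend (the orders table here is a hypothetical example):

BEGIN;
UPDATE orders SET status = 'shipped' WHERE id = 1;

-- Expect a RowExclusiveLock on the table (and its indexes), plus an
-- ExclusiveLock on this transaction's own xid. Row locks themselves are
-- stored on the tuples, so they only surface in pg_locks once someone waits.
SELECT locktype, relation::regclass AS relation, mode, granted
FROM pg_locks
WHERE pid = pg_backend_pid();

ROLLBACK;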

The problem starts when a transaction holds locks longer than it should. A long-running query, an idle-in-transaction session, or an ALTER TABLE waiting for its AccessExclusiveLock — any of these can create a chain reaction.

The Lock Queue Pile-Up

Here's the scenario that catches most teams off guard:

  1. Transaction A starts an ALTER TABLE to add a column. It needs AccessExclusiveLock, but Transaction B is doing a slow read. So A queues.
  2. Transaction C comes in with a simple SELECT. Normally instant — but there's a queued AccessExclusiveLock ahead of it. C has to wait too.
  3. Transactions D, E, F, G all pile up behind the queued DDL.
  4. Within 10 seconds, 50 transactions are queued. Your application starts returning timeouts.

The root blocker is Transaction B (the slow read), but the cascade was caused by Transaction A's DDL queueing an exclusive lock.
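You can watch a queue like this form in pg_locks: the holder shows granted = true, and everything stacked behind it shows granted = false (the orders table name is illustrative, and query_start is only an approximation of queue order):

-- All locks on one table: the holder first, then the waiters.
SELECT l.pid, l.mode, l.granted, a.state, left(a.query, 60) AS query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'orders'::regclass
ORDER BY l.granted DESC, a.query_start;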

Detecting Lock Contention

The essential diagnostic pairs pg_stat_activity with pg_blocking_pids() (PostgreSQL 9.6+), which resolves each waiting backend to the sessions actually blocking it. Joining pg_locks by hand is error-prone: row-lock waits appear as transactionid locks with a NULL relation, so a join on relation silently misses them.

SELECT
    blocked.pid AS blocked_pid,
    blocked.query AS blocked_query,
    blocked.wait_event_type,
    blocked.wait_event,
    blocking.pid AS blocking_pid,
    blocking.query AS blocking_query,
    now() - blocked.query_start AS blocked_duration
FROM pg_stat_activity AS blocked
JOIN pg_stat_activity AS blocking
    ON blocking.pid = ANY (pg_blocking_pids(blocked.pid))
WHERE blocked.wait_event_type = 'Lock'
ORDER BY blocked_duration DESC;

Look for blocked_duration over a few seconds. Check whether the blocker is actively running a query or idle in transaction — the fix is different for each case.
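To find the idle-in-transaction case directly, filter pg_stat_activity on state (the 60-second threshold is an arbitrary example):

-- Sessions holding a transaction open while doing nothing;
-- query shows the last statement they ran before going idle.
SELECT pid, usename, now() - xact_start AS xact_age, left(query, 60) AS last_query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND now() - xact_start > interval '60 seconds'
ORDER BY xact_age DESC;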

For deadlocks, check pg_stat_database (the counter is cumulative since the last statistics reset, so watch its rate of change rather than the absolute number):

SELECT datname, deadlocks FROM pg_stat_database WHERE datname = current_database();

Immediate Resolution

When you find a blocking session:

-- Try cancel first (gentle)
SELECT pg_cancel_backend(12345);

-- If idle in transaction, cancel won't work — terminate
SELECT pg_terminate_backend(12345);

-- Verify
SELECT count(*) FROM pg_locks WHERE NOT granted;

Preventing Lock Contention

The real fix is preventing the pile-up in the first place:

Set lock_timeout. This is the single most important setting. It makes a statement give up with an error once the timeout elapses while waiting for a lock, instead of queuing indefinitely and stalling everything behind it:

ALTER DATABASE mydb SET lock_timeout = '5s';

Catch idle transactions. A forgotten BEGIN without COMMIT holds locks indefinitely:

ALTER DATABASE mydb SET idle_in_transaction_session_timeout = '30s';

Use NOWAIT for explicit row locking. The statement errors out immediately if the row is already locked, instead of waiting:

SELECT * FROM inventory WHERE product_id = 123 FOR UPDATE NOWAIT;
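A related option for queue-style workloads is FOR UPDATE SKIP LOCKED (PostgreSQL 9.5+), which skips rows other sessions hold instead of erroring (the jobs table is a hypothetical example):

-- Claim the next unclaimed job without blocking on rows other workers hold.
SELECT id, payload
FROM jobs
WHERE status = 'pending'
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;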

Run DDL safely. Always set lock_timeout in migration scripts. Always use CREATE INDEX CONCURRENTLY. Never run ALTER TABLE during peak traffic without a timeout.
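A migration preamble along those lines might look like this (the timeout values and table name are illustrative, not recommendations):

-- Fail fast instead of queuing behind long readers; rerun the migration on error.
SET lock_timeout = '2s';
SET statement_timeout = '30s';

ALTER TABLE orders ADD COLUMN discount numeric;

-- CONCURRENTLY takes ShareUpdateExclusiveLock, so reads and writes continue.
-- It cannot run inside a transaction block, and a timeout mid-build can
-- leave an INVALID index behind, so lift statement_timeout first.
RESET statement_timeout;
CREATE INDEX CONCURRENTLY idx_orders_discount ON orders (discount);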

Fix deadlocks at the code level. Ensure all transactions that touch the same rows do so in the same order. If Transaction A updates orders then payments, Transaction B must do the same — not payments then orders.
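One way to enforce that ordering is to lock every row you will touch up front, sorted by primary key (the accounts schema is hypothetical):

BEGIN;
-- Both transactions lock ids 7 and 42 in ascending order, so neither
-- can hold one row while waiting on the other.
SELECT id FROM accounts WHERE id IN (7, 42) ORDER BY id FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 7;
UPDATE accounts SET balance = balance + 100 WHERE id = 42;
COMMIT;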

Monitoring Matters

The challenge with lock contention is timing: by the time you notice it, the queue is already deep. Continuous monitoring of lock waits — watching for sustained Lock:tuple, Lock:relation, or Lock:transactionid wait events — turns this from a firefighting exercise into a preventable condition.
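A minimal polling query for that kind of monitoring, suitable for a metrics collector to run every few seconds:

-- Backends currently stuck on each kind of lock wait.
SELECT wait_event, count(*) AS waiters
FROM pg_stat_activity
WHERE wait_event_type = 'Lock'
GROUP BY wait_event
ORDER BY waiters DESC;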

The full article with monitoring screenshots and lock chain visualization: Read on mydba.dev
