In a concurrent database context, locks are used to prevent race conditions between data accesses. To handle these situations we need to decide how we'll react to a potential conflict: in an optimistic or a pessimistic way. This article is a quick summary of when to use each approach and how to apply them in PostgreSQL.
Pessimistic Locking
We tend to use pessimistic locks when data changes frequently and conflicts are known to be common. The idea of the strategy is to take a lock at the database level that blocks other processes from accessing the locked rows. The blocking behavior adds latency, so this tradeoff needs to be considered at scale. Overall, this approach is used when the data being updated is critical, the action can't be taken atomically in the database layer, and blocking is acceptable.
To give a quick example of how to apply this pattern, here is a small Go sample using PostgreSQL.
package main

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"log/slog"
	"time"
)

var ErrInsufficientFunds = errors.New("insufficient funds for transfer")

type User struct {
	ID       int64
	PixKey   string
	Bio      string
	IsActive bool
	Version  int
	Credits  float64
}

type UserService struct {
	DB *sql.DB
}

func NewUserService(db *sql.DB) *UserService {
	return &UserService{DB: db}
}

// TransferPixCredits shows pessimistic locking.
// We lock the rows, do our work, and then commit the changes.
// Concurrent queries on the rows will wait for the lock to be released.
// Note: production code should acquire the row locks in a consistent
// order (e.g. by id) to avoid deadlocks between opposite transfers.
func (s *UserService) TransferPixCredits(ctx context.Context, fromPixKey string, toPixKey string, amount float64) error {
	tx, err := s.DB.BeginTx(ctx, nil)
	if err != nil {
		return fmt.Errorf("failed to begin transaction: %w", err)
	}
	defer tx.Rollback()

	ctxDb, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	slog.Info("Acquiring locks for transfer", "from", fromPixKey, "to", toPixKey)

	// Because we use FOR UPDATE in our SELECT query, all affected rows
	// stay locked until the transaction is finished
	const getQuery = "SELECT id, pix_key, is_active, version, credits FROM users WHERE pix_key = $1 FOR UPDATE;"

	userFrom := tx.QueryRowContext(ctxDb, getQuery, fromPixKey)
	var sender User
	if err := userFrom.Scan(&sender.ID, &sender.PixKey, &sender.IsActive, &sender.Version, &sender.Credits); err != nil {
		if errors.Is(err, sql.ErrNoRows) {
			return fmt.Errorf("sender PIX key not found: %w", err)
		}
		return fmt.Errorf("error scanning sender row: %w", err)
	}

	userTo := tx.QueryRowContext(ctxDb, getQuery, toPixKey)
	var receiver User
	if err := userTo.Scan(&receiver.ID, &receiver.PixKey, &receiver.IsActive, &receiver.Version, &receiver.Credits); err != nil {
		if errors.Is(err, sql.ErrNoRows) {
			return fmt.Errorf("receiver PIX key not found: %w", err)
		}
		return fmt.Errorf("error scanning receiver row: %w", err)
	}

	slog.Info("Locks acquired. Performing transfer")

	if !sender.IsActive {
		return errors.New("sender account is inactive")
	}
	if !receiver.IsActive {
		return errors.New("receiver account is inactive")
	}
	if sender.Credits < amount {
		return ErrInsufficientFunds
	}

	newSenderCredits := sender.Credits - amount
	newReceiverCredits := receiver.Credits + amount

	const updateQuery = "UPDATE users SET credits = $1 WHERE id = $2;"
	if _, err := tx.ExecContext(ctxDb, updateQuery, newSenderCredits, sender.ID); err != nil {
		return fmt.Errorf("failed to update sender credits: %w", err)
	}
	if _, err := tx.ExecContext(ctxDb, updateQuery, newReceiverCredits, receiver.ID); err != nil {
		return fmt.Errorf("failed to update receiver credits: %w", err)
	}

	const logQuery = "INSERT INTO credit_transfers (from_user_id, to_user_id, amount) VALUES ($1, $2, $3);"
	if _, err := tx.ExecContext(ctxDb, logQuery, sender.ID, receiver.ID, amount); err != nil {
		return fmt.Errorf("failed to log transfer: %w", err)
	}

	slog.Info("Committing transaction and releasing locks")
	return tx.Commit()
}
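If the blocking latency mentioned above is unacceptable, PostgreSQL also lets the locking SELECT fail fast or skip contended rows instead of waiting. A minimal sketch of the two query variants, against the same hypothetical users table as above:

```sql
-- Fail immediately with a lock_not_available error (SQLSTATE 55P03)
-- instead of waiting for the other transaction to finish:
SELECT id, credits FROM users WHERE pix_key = $1 FOR UPDATE NOWAIT;

-- Or silently skip rows that are already locked by someone else
-- (useful for job-queue style workloads):
SELECT id, credits FROM users WHERE pix_key = $1 FOR UPDATE SKIP LOCKED;
```

With NOWAIT the application can surface "try again later" to the caller immediately rather than holding a connection while it waits.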
Optimistic Locking
On the other hand, optimistic locks are used in environments where data conflicts are rare. This method does not hold any lock on the selected rows; instead, the control is implemented in the codebase when updating records. We can detect whether an update happened before we committed our change by checking a version or a last-updated-at column, for example. To sum up, this method should only be chosen if conflict rates are low, since we roll back the entire operation whenever our read data is stale, adding undesired latency.
package main

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"log/slog"
)

var ErrConflict = errors.New("optimistic lock conflict: data has been modified by another process")

type User struct {
	ID         int64
	PixKey     string
	Bio        string
	IsActive   bool
	Version    int
	Credits    float64
	DailyLimit float64
}

type UserService struct {
	DB *sql.DB
}

func NewUserService(db *sql.DB) *UserService {
	return &UserService{DB: db}
}

// UpdateDailyLimit uses optimistic locking to prevent lost updates on critical settings.
// If two admins try to change limits simultaneously, the second one will fail safely
// rather than overwriting the first one's changes
func (s *UserService) UpdateDailyLimit(ctx context.Context, pixKey string, newLimit float64) error {
	slog.Info("Reading user limit (no lock).")
	const getQuery = "SELECT id, pix_key, bio, is_active, version, credits, daily_limit FROM users WHERE pix_key = $1;"
	row := s.DB.QueryRowContext(ctx, getQuery, pixKey)
	var user User
	if err := row.Scan(&user.ID, &user.PixKey, &user.Bio, &user.IsActive, &user.Version, &user.Credits, &user.DailyLimit); err != nil {
		if errors.Is(err, sql.ErrNoRows) {
			return fmt.Errorf("no user related to the PIX key: %w", err)
		}
		return fmt.Errorf("error while scanning postgres row: %w", err)
	}

	slog.Info("Validating business rules against current state", "current_limit", user.DailyLimit)

	tx, err := s.DB.BeginTx(ctx, nil)
	if err != nil {
		return fmt.Errorf("failed to begin transaction: %w", err)
	}
	defer tx.Rollback()

	slog.Info("Attempting update", "user", user.ID, "version", user.Version)
	const updateQuery = `
		UPDATE users
		SET daily_limit = $1, version = version + 1
		WHERE id = $2 AND version = $3;`
	result, err := tx.ExecContext(ctx, updateQuery, newLimit, user.ID, user.Version)
	if err != nil {
		return fmt.Errorf("failed to attempt update: %w", err)
	}

	rowsAffected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("failed to check rows affected: %w", err)
	}
	if rowsAffected == 0 {
		// This is the critical check:
		// it means someone else changed the row
		// after we read it but before we committed our change
		slog.Warn("CONFLICT -> Data changed by another process before we committed")
		return ErrConflict
	}

	slog.Info("Update successful")
	return tx.Commit()
}
Conclusion
To conclude, dealing with database race conditions goes way beyond the examples discussed in this article, since they were meant to be simple. The tradeoffs between choosing an optimistic or a pessimistic locking strategy always need to be considered. If a lock is indeed needed, the key to choosing the right one is identifying how often data changes happen in the application. Hope this small article was useful, and leave a comment about locks down below!
Top comments (1)
Nice overview. One extra detail for readers: in PostgreSQL the isolation level also matters a lot here. For example, REPEATABLE READ can prevent some anomalies but still allows write skew, whereas SERIALIZABLE adds predicate locking under the hood and can behave like a safer “automatic optimistic locking” (via serialization failures). It's useful to test these patterns under both isolation levels with concurrent load to see how they really behave.