DEV Community

Athreya aka Maneshwar

Concurrency Control and Database Recovery in SQLite

Hello, I'm Maneshwar. I'm currently building FreeDevTools online, *one place for all dev tools, cheat codes, and TLDRs*: a free, open-source hub where developers can quickly find and use tools without the hassle of searching all over the internet.

In the previous post, I explored how SQLite works with multiple databases through a single library connection and how transactions provide atomicity and consistency across those databases.

That discussion naturally leads to a deeper question: what happens when multiple transactions run at the same time, and how does SQLite recover from failures?

Today’s learning focused on concurrency control and database recovery, two foundational mechanisms that allow SQLite to remain simple, safe, and predictable—even under crashes or concurrent access.

Concurrency Control in SQLite

SQLite stores an entire database in a single file in the native file system.

This design choice heavily influences how concurrency is implemented.

Instead of row-level or table-level locking, SQLite uses a database-level locking framework, built on top of the file-locking primitives provided by the operating system.

Database-Level Locking Model

SQLite enforces the following rules:

  • Multiple transactions may read concurrently
  • Only one write transaction may exist at a time
  • Readers block writers
  • Writers block both readers and other writers

This means that if any transaction is reading any part of the database, no transaction can write to any part of it; and if any transaction is writing, no other transaction can read or write.

Though restrictive compared to server databases, this approach keeps SQLite simple, deterministic, and free from deadlocks.
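The writer-blocks-writer rule is easy to observe from code. Below is a minimal sketch using Python's built-in sqlite3 module; the database path, table name, and `timeout=0` setting (fail immediately instead of waiting for the lock) are illustrative choices, not anything SQLite requires:

```python
import os
import sqlite3
import tempfile

# Throwaway database file for the demonstration.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer1 = sqlite3.connect(path, timeout=0)  # timeout=0: fail fast, don't wait
writer2 = sqlite3.connect(path, timeout=0)

writer1.execute("CREATE TABLE t (x INTEGER)")
writer1.commit()

# The first connection starts a write transaction and holds its lock.
writer1.execute("BEGIN IMMEDIATE")
writer1.execute("INSERT INTO t VALUES (1)")

# The second connection cannot start its own write transaction.
blocked = False
try:
    writer2.execute("BEGIN IMMEDIATE")
except sqlite3.OperationalError as e:
    blocked = True
    print("second writer blocked:", e)  # "database is locked"

writer1.commit()                        # releases the lock...
writer2.execute("BEGIN IMMEDIATE")      # ...and now the second writer proceeds
writer2.commit()
```

Because locks live at the level of the whole database file, the second writer is refused even though the two connections never touch the same row.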

Strict Two-Phase Locking

SQLite follows strict two-phase locking (2PL):

  • Locks are acquired during execution
  • Locks are released only at transaction termination (commit or rollback)

This guarantees serializable execution of concurrent transactions, the strongest isolation level. Even though concurrency is limited, correctness is never compromised.

Database Recovery Using Journals

Concurrency alone is not enough. A database must also survive crashes, power failures, or aborted transactions. SQLite achieves this using a rollback journal.

Rollback Journal Basics

For every database file, SQLite uses a corresponding journal file:

  • Database: MyDB
  • Journal: MyDB-journal


Key properties:

  • Created only for write transactions
  • Stored in the same directory as the database file
  • Deleted (or finalized) when the transaction completes

There is no rollback journal for read-only transactions.
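The journal's lifetime can be checked directly from the file system. Here is a small sketch, again with a throwaway database path and the default (rollback-journal) mode assumed:

```python
import os
import sqlite3
import tempfile

# Throwaway database file; SQLite will place the journal next to it.
db = os.path.join(tempfile.mkdtemp(), "MyDB")
conn = sqlite3.connect(db)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")   # first page modification creates MyDB-journal
during = os.path.exists(db + "-journal")

conn.commit()                              # commit point: the journal is deleted
after = os.path.exists(db + "-journal")

print(during, after)                       # journal exists during the write, not after
```

The journal appears only once a page is actually modified, and disappears at commit, which is exactly the "transient" behavior described below.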

What the Journal Stores

SQLite uses physical (value-based) logging:

  • Every log record stores the entire database page
  • Even if only one byte is modified

This is inefficient in terms of space and I/O, but intentional. The design keeps:

  • Recovery logic extremely simple
  • SQLite’s codebase small and maintainable

Recovery is just a matter of restoring old page images.

Journal Finalization and Retention

By default, SQLite treats the journal file as transient:

  • Created at the start of a write transaction
  • Deleted when the transaction commits

Importantly:

  • Journal deletion is the commit point

SQLite also supports alternative behaviors:

  • Truncating the journal to zero length
  • Invalidating the journal
  • Retaining it at a fixed size

These options are valuable on platforms where file creation and deletion are expensive.
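These behaviors map onto SQLite's `journal_mode` pragma: DELETE (remove the journal at commit, the default), TRUNCATE (keep the file but cut it to zero length), and PERSIST (keep it, invalidating its header). A quick sketch against a throwaway database; SQLite echoes back the mode that is now in effect:

```python
import os
import sqlite3
import tempfile

# Fresh database file so each mode change is permitted (no open transaction).
db = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(db)

modes = []
for mode in ("DELETE", "TRUNCATE", "PERSIST"):
    # PRAGMA journal_mode returns the mode actually in effect, lowercased.
    row = conn.execute(f"PRAGMA journal_mode={mode}").fetchone()
    modes.append(row[0])

print(modes)   # ['delete', 'truncate', 'persist']
```

On platforms where creating and deleting files is costly, switching from DELETE to TRUNCATE or PERSIST avoids that overhead while keeping the same recovery guarantees.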

WAL: Write-Ahead Logging

SQLite 3.7.0 introduced WAL (Write-Ahead Logging):

  • Uses -wal files instead of -journal
  • Journal files persist across commits
  • Improves concurrency between readers and writers

This marks a significant evolution in SQLite’s concurrency and recovery design and is explored in more depth in later posts.
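As a small taste, the sketch below switches a throwaway database into WAL mode and checks that the -wal file is still there after a commit, unlike a default rollback journal:

```python
import os
import sqlite3
import tempfile

# Throwaway database file for the demonstration.
db = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(db)

# Switch to WAL; SQLite echoes back the mode now in effect.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]

conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# The WAL file persists across the commit (until a checkpoint or close).
wal_exists = os.path.exists(db + "-wal")
print(mode, wal_exists)
```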


Statement Subtransactions

Inside a user-defined transaction, SQLite introduces another layer: statement subtransactions.

Key points:

  • Every non-SELECT statement runs in its own subtransaction
  • Only one subtransaction can exist at a time
  • Multiple SELECT statements may execute concurrently

If a statement fails:

  • Only the subtransaction is rolled back
  • The outer transaction continues to exist

Each subtransaction uses a temporary statement journal for recovery, independent of the main rollback journal.

This allows fine-grained error handling without aborting the entire transaction.
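This behavior is visible from ordinary SQL. In the sketch below (an in-memory database, with a hypothetical table `t`), a duplicate-key INSERT fails mid-transaction, yet the earlier insert and the transaction itself survive:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER PRIMARY KEY)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")       # succeeds
try:
    conn.execute("INSERT INTO t VALUES (1)")   # duplicate key: this statement fails
except sqlite3.IntegrityError as e:
    print("statement rolled back:", e)

conn.execute("INSERT INTO t VALUES (2)")       # the transaction is still open and usable
conn.commit()

rows = [r[0] for r in conn.execute("SELECT x FROM t ORDER BY x")]
print(rows)   # only the failed statement was undone
```

Only the failing statement's subtransaction was rolled back; the outer transaction committed with both surviving rows.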

Transactions Across Multiple Databases

When a transaction spans multiple attached databases:

  • SQLite creates a separate rollback journal for each database
  • A master journal file is also used

The master journal:

  • Records the names of individual rollback journals
  • Coordinates recovery across databases

This ensures atomicity even when multiple database files are involved—either all databases commit, or all roll back.
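A minimal sketch of such a cross-database transaction, using ATTACH with two throwaway database files (the paths and table names are illustrative). Behind the scenes, SQLite gives each file its own rollback journal and coordinates the commit through the master journal:

```python
import os
import sqlite3
import tempfile

# Two separate database files for the demonstration.
d = tempfile.mkdtemp()
main_db = os.path.join(d, "main.db")
other_db = os.path.join(d, "other.db")

conn = sqlite3.connect(main_db)
conn.execute(f"ATTACH DATABASE '{other_db}' AS other")
conn.execute("CREATE TABLE main.a (x INTEGER)")
conn.execute("CREATE TABLE other.b (x INTEGER)")
conn.commit()

conn.execute("BEGIN")
conn.execute("INSERT INTO main.a VALUES (1)")   # write to the first file
conn.execute("INSERT INTO other.b VALUES (2)")  # write to the second file
conn.commit()                                   # both files commit atomically

counts = (conn.execute("SELECT COUNT(*) FROM main.a").fetchone()[0],
          conn.execute("SELECT COUNT(*) FROM other.b").fetchone()[0])
print(counts)   # (1, 1)
```

If the process crashed mid-commit, the master journal would tell recovery which per-database journals belong together, so neither file could be left committed without the other.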

Closing Thoughts

Today’s learning shows how SQLite prioritizes correctness over concurrency:

  • Database-level locking simplifies coordination
  • Journals guarantee safe rollback and crash recovery
  • Subtransactions add resilience at the statement level
  • WAL offers a modern alternative for better concurrency

Despite its lightweight nature, SQLite implements these mechanisms with remarkable clarity and reliability.

Next, I’ll explore the SQLite catalog and inherent limitations, which will round out a practical understanding of how SQLite works under the hood.

My experiments and hands-on executions related to SQLite will live here: lovestaco/sqlite

References:

Sibsankar Haldar, *SQLite Database System: Design and Implementation*.

FreeDevTools

👉 Check out: FreeDevTools

Any feedback or contributors are welcome!

It’s online, open-source, and ready for anyone to use.

⭐ Star it on GitHub: freedevtools
