Here's a fact that might surprise you: SQLite is the most deployed database engine in the world. Not PostgreSQL. Not MySQL. Not MongoDB. SQLite. It's in every iPhone, every Android device, every Mac, every Windows 10+ machine, every Firefox and Chrome browser, every Airbus A350, and even the Mars rovers. There are literally trillions of SQLite databases in active use right now.
And yet, for years, the developer community treated it as a toy. "SQLite isn't for production." "It's fine for prototypes." "You'll need a real database eventually." This sentiment was so deeply ingrained that suggesting SQLite for anything beyond local development would get you laughed out of a code review.
That changed. In 2025-2026, something remarkable happened. A constellation of technologies — Turso, Cloudflare D1, LibSQL, Litestream, and the embedded database movement — converged to solve SQLite's real limitations while preserving its absurd simplicity. The result? SQLite is now a legitimate production database for a class of applications that was previously unthinkable.
This isn't hype. This is architecture. Let's break down exactly what's happening, why it matters, and when you should (and shouldn't) reach for SQLite in 2026.
Why SQLite Was "Not for Production"
Before we get excited, let's be honest about why everyone dismissed SQLite for server-side work. These criticisms weren't wrong — they were accurate assessments of an earlier era.
The Classic Limitations
1. Single-writer concurrency. SQLite uses a file-level lock. One writer at a time. In WAL (Write-Ahead Logging) mode, you get concurrent reads with a single writer, but that's it. If your app needs high write throughput from multiple connections, traditional SQLite chokes.
2. No built-in replication. PostgreSQL has streaming replication. MySQL has binlog replication. SQLite has... copying the file. For any application that needs high availability, failover, or geographic distribution, SQLite was a non-starter.
3. Single-machine storage. The database is a file on disk. No clustering, no sharding, no distributed storage. Your data lives and dies with that one machine.
4. No network access. Unlike PostgreSQL or MySQL, which run as network services that accept connections from anywhere, SQLite is an embedded library. Your application talks to it directly — which means no shared access from multiple services.
These are real limitations. They haven't gone away. But the landscape around them has shifted so dramatically that they now matter far less than they used to — or have been directly solved.
What Changed: The Four Pillars
Pillar 1: Hardware Got Fast Enough
The most underrated change is hardware. Modern NVMe SSDs deliver ~500K-1M random IOPS with sub-100μs latency. When SQLite runs on NVMe, its single-writer model can handle thousands of writes per second — easily enough for the vast majority of web applications.
Consider the numbers:
Traditional HDD:
Random IOPS: ~100-200
Write latency: ~5-10ms
SQLite throughput: ~100-200 writes/sec
NVMe SSD (2026):
Random IOPS: ~500K-1M
Write latency: ~10-50μs
SQLite throughput: ~10K-50K writes/sec
Most web apps don't need more than a few hundred writes per second. An e-commerce site processing 100 orders per minute? That's under 2 writes per second. A SaaS app with 10K daily active users? You'd be hard-pressed to exceed 100 writes per second. SQLite on NVMe handles this trivially.
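The back-of-envelope math is easy to sanity-check yourself. A minimal sketch (the traffic figures are the article's examples; the statements-per-event count is an assumption you'd tune for your own schema):

```typescript
// Rough estimate of sustained write load from business-level metrics.
// Assumes each "event" (order, signup, edit) maps to a handful of
// INSERT/UPDATE statements -- adjust writesPerEvent for your schema.
function estimatedWritesPerSecond(eventsPerMinute: number, writesPerEvent: number): number {
  return (eventsPerMinute * writesPerEvent) / 60;
}

// 100 orders/minute, ~1 write each: well under 2 writes/sec
const orderLoad = estimatedWritesPerSecond(100, 1);

// Even 10x the traffic at 5 statements per order stays far below
// the ~10K writes/sec SQLite sustains on NVMe in WAL mode.
const heavyLoad = estimatedWritesPerSecond(1000, 5);
```

Run this against your own traffic numbers before assuming you need a "bigger" database.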
Pillar 2: WAL Mode and BEGIN CONCURRENT
SQLite's WAL (Write-Ahead Logging) mode, which has existed since 2010, allows concurrent readers with a single writer. Most developers haven't actually benchmarked this — they just heard "single writer" and moved on.
In practice, WAL mode delivers excellent throughput:
-- Enable WAL mode (do this once)
PRAGMA journal_mode=WAL;
-- Recommended SQLite production settings
PRAGMA busy_timeout = 5000; -- Wait up to 5s for locks
PRAGMA synchronous = NORMAL; -- Durable enough, much faster
PRAGMA cache_size = -64000; -- 64MB cache
PRAGMA foreign_keys = ON; -- Enforce referential integrity
PRAGMA temp_store = MEMORY; -- Temp tables in RAM
The BEGIN CONCURRENT extension (available in SQLite forks like LibSQL) goes further, allowing multiple writers to proceed simultaneously as long as they don't modify the same pages. This is a game-changer for write-heavy workloads:
BEGIN CONCURRENT;
INSERT INTO orders (user_id, total) VALUES (42, 99.99);
COMMIT;
-- Multiple connections can execute this simultaneously
-- Conflicts only happen if they touch the same data pages
Pillar 3: LibSQL — The Fork That Changed Everything
LibSQL is an open-source fork of SQLite that preserves full compatibility while adding the features everyone wished SQLite had. Created by the Turso team, LibSQL treats SQLite's limitations not as laws of physics but as engineering problems to solve.
Key additions:
- Native replication. LibSQL supports embedded replicas that sync from a primary server. You get local read performance with replicated durability.
- BEGIN CONCURRENT for multi-writer support.
- Server mode. LibSQL can run as a network-accessible server (via HTTP and WebSockets), solving the "no network access" limitation.
- ALTER TABLE improvements. Operations that SQLite makes painful (like adding columns with constraints) are smoothed over.
- WASM UDFs. User-defined functions written in WebAssembly, extending SQLite's capabilities without C extensions.
- Vector search built in, for AI/embedding use cases.
The best part? LibSQL maintains a merge-forward strategy from upstream SQLite, so you get all SQLite improvements plus the extras.
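To make the vector search point concrete, here is roughly what the API looks like. This is a sketch based on Turso's LibSQL documentation — treat the column type and function names (`F32_BLOB`, `vector32`, `vector_distance_cos`) as assumptions and check the current docs before relying on them:

```sql
-- Hypothetical sketch of LibSQL's built-in vector support.
-- F32_BLOB(n) stores an n-dimensional float32 embedding.
CREATE TABLE movies (
  title TEXT,
  embedding F32_BLOB(4)
);

INSERT INTO movies VALUES ('Napoleon', vector32('[0.1, 0.2, 0.3, 0.4]'));

-- Nearest neighbors by cosine distance
SELECT title
FROM movies
ORDER BY vector_distance_cos(embedding, vector32('[0.1, 0.2, 0.3, 0.4]'))
LIMIT 5;
```

The point is that embeddings live in the same database as the rest of your data — no separate vector store to operate.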
Pillar 4: The Platform Layer
The real revolution isn't LibSQL alone — it's the platforms built on top of it.
Turso is a managed LibSQL service that provides:
- Edge replicas in 30+ locations worldwide
- Embedded replicas that sync to your application (sub-millisecond reads)
- Per-tenant databases (one database per user/org — no multi-tenancy hacks)
- Point-in-time recovery and branching
Cloudflare D1 is Cloudflare's serverless SQLite offering:
- Runs on Cloudflare's edge network (300+ cities)
- Native integration with Workers (zero network hop)
- Automatic read replication to edge nodes
- SQL API with batch operations
LiteFS (by Fly.io) is worth mentioning for historical context:
- FUSE-based SQLite replication across Fly.io machines
- Automatic primary election and failover
- Transparent to the application (just use SQLite normally)
- ⚠️ Note: LiteFS Cloud (the managed backup service) was sunset in October 2024, and Fly.io has deprioritized active development. LiteFS remains stable and usable in production, but it's in a pre-1.0 beta state with no guaranteed roadmap. For new projects, Turso or D1 are safer bets for managed replication.
Litestream provides:
- Continuous streaming backup of SQLite to S3/GCS/Azure
- Point-in-time recovery from object storage
- Lightweight, battle-tested, and actively maintained — a solid choice for self-managed SQLite backup regardless of hosting platform
The Architecture: How Production SQLite Actually Works
Let's get concrete. Here's how you'd architect a real application with SQLite in 2026.
Pattern 1: Turso with Embedded Replicas
This is the most popular pattern for web applications. Your application embeds a local SQLite replica that syncs from a Turso primary.
import { createClient } from '@libsql/client';
// Create a client with an embedded replica
const db = createClient({
url: 'file:local.db', // Local replica
syncUrl: 'libsql://my-app-db.turso.io', // Remote primary
authToken: process.env.TURSO_AUTH_TOKEN,
syncInterval: 60, // Sync every 60s
});
// Reads hit the local replica — sub-millisecond
const users = await db.execute('SELECT * FROM users WHERE active = 1');
// Writes go to the remote primary, then sync back
await db.execute({
sql: 'INSERT INTO users (email, name) VALUES (?, ?)',
args: ['alice@example.com', 'Alice'],
});
// Force sync when you need it
await db.sync();
The read path is extraordinary: your application reads from a local SQLite file. No network hop. No connection pool. No cold start. Sub-millisecond latency for every read. Writes go to the primary over the network, but that's fine — most apps are read-heavy.
Pattern 2: Database-per-Tenant with Turso
This is where SQLite's architecture shines brightest. Instead of one database with a tenant_id column on every table, you give each tenant their own database:
import { createClient } from '@libsql/client';
function getClientForTenant(tenantId: string) {
return createClient({
url: `libsql://${tenantId}-my-app.turso.io`,
authToken: process.env.TURSO_AUTH_TOKEN,
});
}
// Tenant A's operations only touch Tenant A's database
const tenantA = getClientForTenant('tenant-a');
await tenantA.execute('SELECT * FROM invoices');
// Tenant B is completely isolated
const tenantB = getClientForTenant('tenant-b');
await tenantB.execute('SELECT * FROM invoices');
Benefits of this pattern:
- True isolation. No risk of cross-tenant data leaks. No WHERE tenant_id = ? on every query.
- Independent scaling. Heavy tenants get their own resources.
- Simpler queries. Every query is tenant-scoped by default.
- Easy compliance. GDPR deletion? Delete the database file.
- Independent backups/restores. Restore one tenant without affecting others.
Turso supports up to 100 monthly active databases on the free tier (with 5GB storage), scaling to thousands of databases on paid plans — more than enough for the database-per-tenant pattern. The per-database overhead is negligible because SQLite databases are just files.
Pattern 3: Cloudflare D1 with Workers
For applications already on Cloudflare, D1 provides zero-latency database access from Workers:
// wrangler.toml
// [[d1_databases]]
// binding = "DB"
// database_name = "my-app-db"
// database_id = "xxxx-xxxx-xxxx"
export default {
async fetch(request: Request, env: Env): Promise<Response> {
// Zero network hop — D1 runs on the same machine as your Worker
const { results } = await env.DB.prepare(
'SELECT * FROM products WHERE category = ?'
).bind('electronics').all();
// Batch multiple operations
const batchResults = await env.DB.batch([
env.DB.prepare('UPDATE inventory SET count = count - 1 WHERE id = ?').bind(productId),
env.DB.prepare('INSERT INTO orders (product_id, user_id) VALUES (?, ?)').bind(productId, userId),
]);
return Response.json(results);
}
};
Pattern 4: Local-First with Sync
The most exciting pattern is local-first applications where the database lives on the client device and syncs to a server:
import { createClient } from '@libsql/client/web';
// In the browser or mobile app
const localDb = createClient({
url: 'file:local-user.db',
syncUrl: 'libsql://user-data.turso.io',
authToken: userToken,
});
// Works offline — reads and writes to local DB
await localDb.execute(
'INSERT INTO notes (title, body) VALUES (?, ?)',
['Meeting notes', 'Discussed Q3 roadmap...']
);
// When online, sync to server
await localDb.sync();
This pattern enables:
- Offline-first applications that work without internet
- Instant UI updates (no loading spinners for reads)
- Conflict resolution handled by LibSQL's replication protocol
- Cross-device sync through the server-side primary
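LibSQL's replication protocol handles conflict resolution at the sync layer, but the application-level shape of local-first writes is worth seeing on its own. A hand-rolled sketch of the offline queue idea — this is illustrative plumbing, not LibSQL's actual protocol, and `applyWrite`/`flush` are hypothetical names:

```typescript
// Minimal offline-first write queue: writes apply locally at once
// (instant UI), and are replayed against the server when back online.
type Write = { sql: string; args: unknown[] };

class OfflineQueue {
  private pending: Write[] = [];

  // Apply to the local database immediately, remember for later sync
  applyWrite(local: (w: Write) => void, w: Write): void {
    local(w);
    this.pending.push(w);
  }

  // Replay queued writes against the remote primary, oldest first
  async flush(remote: (w: Write) => Promise<void>): Promise<number> {
    let sent = 0;
    while (this.pending.length > 0) {
      await remote(this.pending[0]);
      this.pending.shift(); // only drop after the remote write succeeds
      sent++;
    }
    return sent;
  }
}
```

With `@libsql/client`, the `local` callback would write to the `file:` replica and `flush` would be replaced by `db.sync()`; the queue above just makes the ordering and retry semantics visible.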
The Numbers: Real-World Benchmarks
Let's put concrete numbers on this. These benchmarks compare SQLite-based solutions against traditional approaches for common web workloads.
Read Performance
Simple SELECT by primary key (p50 latency):
SQLite (local file): ~0.01ms
SQLite (Turso embedded replica): ~0.02ms
Cloudflare D1 (from Worker): ~0.5ms
PostgreSQL (same region): ~1-3ms
PostgreSQL (managed, e.g. Neon): ~3-10ms
PlanetScale MySQL (regional): ~3-8ms
For read-heavy workloads, local SQLite is 100-1000x faster than any network database. It's not a fair comparison — you're comparing in-process calls to network round trips — but that's exactly the point.
Write Performance
Single INSERT (p50 latency):
SQLite WAL mode (NVMe): ~0.05ms
SQLite (Turso, write to primary): ~15-50ms (network)
Cloudflare D1 (write): ~5-30ms
PostgreSQL (same region): ~1-5ms
For writes, network-based SQLite solutions have higher latency because writes must reach the primary. This is the trade-off: reads are blazing fast, writes go over the network. For read-heavy apps (most web apps), this is an excellent trade-off.
Concurrent Write Throughput
Sustained writes/second (single database):
SQLite WAL (local, NVMe): ~10K-50K
SQLite + BEGIN CONCURRENT: ~5K-20K (multi-writer)
Turso (single primary): ~1K-5K (network bound)
Cloudflare D1: ~500-2K
PostgreSQL: ~10K-50K
MySQL: ~10K-50K
This is where traditional databases still win for write-heavy workloads. If your app needs high sustained write throughput from many concurrent processes — think a real-time bidding system or a high-frequency logging pipeline — PostgreSQL or MySQL is still the better choice.
When to Use SQLite in Production (And When Not To)
Use SQLite When:
1. Your app is read-heavy. Most web applications are 90%+ reads. Blogs, content sites, e-commerce catalogs, dashboards, documentation — all read-heavy. SQLite with embedded replicas is perfect for these.
2. You want geographic distribution without complexity. With Turso's edge replicas or Cloudflare D1, your data is physically closer to your users. No managing replication topologies — it just works.
3. You need per-tenant isolation. SaaS applications with strict data isolation requirements benefit enormously from the database-per-tenant pattern. No more tenant_id on every table and every query.
4. You're building local-first or offline-capable apps. If your app needs to work offline and sync later, SQLite is the natural foundation. CRDTs and other conflict resolution mechanisms layer cleanly on top.
5. You want minimal operational overhead. No database server to manage. No connection pooling to tune. No vacuum or analyze to schedule (well, occasionally). SQLite is zero-ops for many use cases.
6. Your project is a side project or small-to-medium app. This is the sweet spot. The vast majority of web applications never scale beyond what a single SQLite database can handle. Why pay for a managed PostgreSQL instance when your app serves 1K daily users?
Don't Use SQLite When:
1. You need high sustained write throughput from many sources. If your app genuinely needs thousands of concurrent write transactions per second from different processes, use PostgreSQL or MySQL. Financial trading platforms, real-time analytics ingest, and IoT sensor data pipelines are examples.
2. You need complex multi-table transactions with strict SERIALIZABLE isolation across distributed systems. SQLite doesn't have the distributed transaction capabilities of PostgreSQL or CockroachDB.
3. You need row-level security or advanced access control. PostgreSQL's Row-Level Security (RLS) is battle-tested for multi-tenant scenarios where you want fine-grained access control within a single database. SQLite doesn't have this.
4. Your team already has strong PostgreSQL/MySQL expertise and infrastructure. Don't switch to SQLite just because it's trendy. If your existing stack works well, the migration cost probably isn't worth it.
5. You need features like LISTEN/NOTIFY, logical replication, or stored procedures. PostgreSQL's ecosystem of extensions and advanced features has no equivalent in SQLite.
The Stack: Putting It All Together
Here's a production-ready stack using SQLite in 2026:
The Minimal Stack
App: Next.js / Remix / SvelteKit / Astro
DB: Turso (LibSQL) with embedded replica
ORM: Drizzle ORM (first-class LibSQL/Turso support)
Hosting: Vercel / Cloudflare / Fly.io
Drizzle ORM has excellent LibSQL/Turso integration:
// drizzle.config.ts
import { defineConfig } from 'drizzle-kit';
export default defineConfig({
schema: './src/db/schema.ts',
dialect: 'turso',
dbCredentials: {
url: process.env.TURSO_DATABASE_URL!,
authToken: process.env.TURSO_AUTH_TOKEN!,
},
});
// src/db/schema.ts
import { sqliteTable, text, integer } from 'drizzle-orm/sqlite-core';
import { sql } from 'drizzle-orm';
export const users = sqliteTable('users', {
id: integer('id').primaryKey({ autoIncrement: true }),
email: text('email').notNull().unique(),
name: text('name').notNull(),
createdAt: text('created_at').notNull().default(sql`(datetime('now'))`),
});
export const posts = sqliteTable('posts', {
id: integer('id').primaryKey({ autoIncrement: true }),
title: text('title').notNull(),
content: text('content'),
authorId: integer('author_id').references(() => users.id).notNull(),
published: integer('published', { mode: 'boolean' }).default(false),
});
// src/db/index.ts
import { drizzle } from 'drizzle-orm/libsql';
import { createClient } from '@libsql/client';
import { eq } from 'drizzle-orm';
import * as schema from './schema';
const client = createClient({
url: process.env.TURSO_DATABASE_URL!,
authToken: process.env.TURSO_AUTH_TOKEN!,
});
export const db = drizzle(client, { schema });
// Queries work exactly like any other Drizzle setup
const publishedPosts = await db
.select()
.from(schema.posts)
.where(eq(schema.posts.published, true))
.leftJoin(schema.users, eq(schema.posts.authorId, schema.users.id));
The Cloudflare Stack
App: Cloudflare Workers / Pages
DB: Cloudflare D1
ORM: Drizzle ORM
Hosting: Cloudflare
// For D1 with Drizzle
import { drizzle } from 'drizzle-orm/d1';
import { eq } from 'drizzle-orm';
// usersTable: your sqliteTable definition (assumed to have an `active` column)
export default {
async fetch(request: Request, env: Env) {
const db = drizzle(env.DB);
const users = await db
.select()
.from(usersTable)
.where(eq(usersTable.active, true));
return Response.json(users);
}
};
SQLite Tips for Production
If you're going to use SQLite in production, these practices will save you from common pitfalls:
1. Always Enable WAL Mode
PRAGMA journal_mode=WAL;
This is non-negotiable. WAL mode gives you concurrent reads while writing. Without it, readers block writers and vice versa.
2. Set Appropriate PRAGMAs
-- Production PRAGMA configuration
PRAGMA journal_mode = WAL;
PRAGMA busy_timeout = 5000;
PRAGMA synchronous = NORMAL;
PRAGMA cache_size = -64000;
PRAGMA foreign_keys = ON;
PRAGMA temp_store = MEMORY;
PRAGMA mmap_size = 268435456; -- 256MB memory-mapped I/O
PRAGMA page_size = 4096; -- Match filesystem block size
3. Use Connection Pooling Wisely
// For server-side SQLite, use a single connection for writes
// and multiple for reads
import Database from 'better-sqlite3';
const writeDb = new Database('app.db');
writeDb.pragma('journal_mode = WAL');
writeDb.pragma('synchronous = NORMAL');
// For read-heavy loads, you can have multiple read connections
const readDb = new Database('app.db', { readonly: true });
4. Handle SQLITE_BUSY Gracefully
// With better-sqlite3, the busy_timeout PRAGMA handles this
// With @libsql/client, retries are built in
// For custom handling:
async function executeWithRetry(db, sql, args, maxRetries = 3) {
for (let i = 0; i < maxRetries; i++) {
try {
return await db.execute({ sql, args });
} catch (err) {
if (err.code === 'SQLITE_BUSY' && i < maxRetries - 1) {
await new Promise(r => setTimeout(r, Math.pow(2, i) * 100));
continue;
}
throw err;
}
}
}
5. Backup Strategy
# With Litestream — continuous backup to S3
litestream replicate /data/app.db s3://my-bucket/app.db
# Point-in-time restore
litestream restore -o /data/restored.db s3://my-bucket/app.db
# With Turso — built-in point-in-time recovery
turso db create my-app --from-db my-app-backup --timestamp "2026-02-26T10:00:00Z"
The Migration Path: From PostgreSQL to SQLite
If you're considering migrating an existing app to SQLite, here's a realistic roadmap:
Step 1: Audit Your Schema
SQLite has some schema differences from PostgreSQL:
PostgreSQL → SQLite equivalent:
- SERIAL / BIGSERIAL → INTEGER PRIMARY KEY AUTOINCREMENT
- UUID DEFAULT gen_random_uuid() → TEXT (generate the UUID in app code)
- TIMESTAMP WITH TIME ZONE → TEXT (ISO 8601 strings)
- JSONB → TEXT (JSON stored as text)
- ENUM types → TEXT with CHECK constraints
- ARRAY types → no equivalent (use JSON or junction tables)
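Two of those mappings push work into application code. A minimal TypeScript sketch of what replaces the `gen_random_uuid()` and timestamp defaults:

```typescript
import { randomUUID } from 'node:crypto';

// PostgreSQL: id UUID DEFAULT gen_random_uuid()
// SQLite:     generate the UUID in app code before the INSERT
const id = randomUUID();

// PostgreSQL: created_at TIMESTAMP WITH TIME ZONE DEFAULT now()
// SQLite:     store an ISO 8601 string -- it sorts correctly as TEXT
const createdAt = new Date().toISOString();
```

Both values are then passed as ordinary bind parameters, so the schema itself needs no special types.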
Step 2: Identify Incompatible Patterns
Some PostgreSQL features have no direct SQLite equivalent:
- Row-Level Security — implement in application layer
- LISTEN/NOTIFY — use polling or external pub/sub
- Full-text search — SQLite has FTS5 (different syntax, comparably capable for most use cases)
- Stored procedures — move logic to application code
- Materialized views — use regular views or tables with triggers
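For the row-level security gap, the usual replacement is a thin query wrapper that injects the tenant predicate so no call site can forget it. A hedged sketch — `scopedWhere` is a hypothetical helper, not a library API, and real code would go through your ORM rather than string manipulation:

```typescript
// App-layer stand-in for Postgres RLS: every read is forced through
// a wrapper that appends the tenant predicate as a bound parameter.
function scopedWhere(baseSql: string, tenantId: string): { sql: string; args: string[] } {
  const hasWhere = /\bwhere\b/i.test(baseSql);
  const sql = `${baseSql} ${hasWhere ? 'AND' : 'WHERE'} tenant_id = ?`;
  return { sql, args: [tenantId] };
}
```

Note that with the database-per-tenant pattern described earlier, this wrapper becomes unnecessary — which is exactly why that pattern is attractive.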
Step 3: Migrate Data
# Export from PostgreSQL
pg_dump --data-only --inserts mydb > data.sql
# Adjust SQL syntax for SQLite (word-boundary match so TRUE/FALSE
# inside string literals aren't clobbered; booleans become 1/0)
sed -i 's/\bTRUE\b/1/g; s/\bFALSE\b/0/g' data.sql
# Import to SQLite
sqlite3 new.db < data.sql
For production migrations, consider running both databases in parallel during a transition period.
The Bigger Picture: Why This Matters
The SQLite renaissance isn't just about SQLite. It represents a broader shift in how we think about databases:
1. Locality is king. The fastest database query is one that doesn't cross a network boundary. Embedded replicas and edge databases embody this principle. As edge computing grows, bringing data closer to users eliminates the biggest source of latency.
2. Simplicity compounds. Every database connection pool configuration, every ORM connection string, every managed database instance is operational overhead. SQLite eliminates most of it. Over time, this simplicity compounds — fewer moving parts mean fewer things that can break.
3. The "right tool for the job" is getting redefined. For years, the "right tool" for any serious project was PostgreSQL. That remains true for many applications. But for a growing class of use cases — content sites, personal projects, small SaaS, local-first apps, edge functions — SQLite with modern tooling is now genuinely the right tool.
4. Single-tenancy is making a comeback. The multi-tenant PostgreSQL pattern (one database, tenant_id everywhere) creates complexity in querying, migration, backup, and compliance. Per-tenant SQLite databases offer a cleaner architecture for many SaaS applications.
Conclusion
The SQLite renaissance is real, and it's not going away. The combination of faster hardware, LibSQL's innovations, and platforms like Turso and Cloudflare D1 has solved enough of SQLite's historical limitations to make it a legitimate production choice.
But let's be clear about what this is and isn't:
- It IS a viable production database for read-heavy web applications, SaaS with per-tenant isolation, edge deployments, local-first apps, and small-to-medium projects.
- It ISN'T a replacement for PostgreSQL in every scenario. High write throughput, complex distributed transactions, row-level security, and advanced extension ecosystems remain PostgreSQL's domain.
The best advice? Start your next side project with SQLite. Not as a placeholder for "the real database" — as the database. Use Turso or D1 for hosting. Use Drizzle for your ORM. Build something. You'll be surprised at how far it takes you.
And if you outgrow it — which statistically, most projects never will — migrating to PostgreSQL is a well-understood path. But you might find that SQLite is all you ever needed.
The most deployed database in the world is finally getting the server-side respect it deserves. And honestly? It's about time.
🛠️ Developer Toolkit: This post first appeared on the Pockit Blog.