PostgreSQL vs SQLite for SaaS: The Architecture Decision Most Developers Get Wrong
The default answer for SaaS databases is PostgreSQL. Neon, Supabase, Railway — nearly every managed hosting platform pushes you toward Postgres. It's the safe, scalable choice that nobody gets fired for picking.
But there's a pattern where SQLite — specifically Turso/libSQL — outperforms Postgres, costs less, and involves less operational complexity. The problem is most developers discard SQLite based on outdated assumptions about it being a toy database for local development.
This article gives you the actual criteria to make the decision correctly, with real benchmark numbers and a detailed breakdown of multi-tenant isolation patterns that differ significantly between the two.
What Changed: libSQL and Turso
SQLite's original limitations for web applications were concrete:
- No network access — file system only, no TCP connections
- Single writer at a time (WAL mode helps readers, but writes still serialize)
- No replication or high availability
- Painful to manage at scale across many tenants
Turso's libSQL fork addresses all of these. libSQL is a SQLite fork with extensions for network access, replication, and multi-database management. Turso provides the managed hosting layer on top.
What you get with Turso:
- Embedded replicas — a local SQLite file that syncs bidirectionally with a remote primary via the libSQL protocol. Reads hit the local file; writes sync to remote.
- Per-database branching — create a new database per tenant programmatically via REST API in ~200ms
- Edge replication — read replicas deployed to Cloudflare's global network; reads are local, writes go to the primary region
- Standard SQLite file format — your existing SQLite tooling, ORMs, and SQL statements work unchanged
```typescript
import { createClient } from '@libsql/client';

// Remote database — same API shape as better-sqlite3, but async over the network
const db = createClient({
  url: process.env.TURSO_DATABASE_URL!,
  authToken: process.env.TURSO_AUTH_TOKEN!,
});

const result = await db.execute({
  sql: 'SELECT * FROM users WHERE tenant_id = ? ORDER BY created_at DESC LIMIT 50',
  args: [tenantId],
});

// result.rows is typed — same shape as better-sqlite3 output
const users = result.rows;
```
The API is async (unlike better-sqlite3 which is synchronous), but otherwise identical to local SQLite queries. Existing Drizzle or Kysely setups work with a single adapter swap.
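In Drizzle, for instance, the swap is confined to the adapter construction. A minimal sketch, assuming the same Turso environment variables as the snippet above:

```typescript
// Sketch of the adapter swap — only the construction line differs from the
// better-sqlite3 setup; schema objects and queries stay the same
import { drizzle } from 'drizzle-orm/libsql';
import { createClient } from '@libsql/client';

const client = createClient({
  url: process.env.TURSO_DATABASE_URL!,
  authToken: process.env.TURSO_AUTH_TOKEN!,
});

export const db = drizzle(client);
```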
Read-Heavy Workloads: Where SQLite Wins
SQLite in WAL (Write-Ahead Logging) mode has a specific concurrency property: readers never block writers, and writers never block readers. Multiple concurrent readers can operate simultaneously with no lock contention. Postgres achieves similar isolation via MVCC but with more per-transaction overhead — connection setup, transaction IDs, vacuum, the works.
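The WAL switch itself is a one-line pragma. A sketch using better-sqlite3 (the file path and timeout value are illustrative):

```typescript
// Enable WAL mode — a one-time pragma per database file
import Database from 'better-sqlite3';

const db = new Database('./app.db');
db.pragma('journal_mode = WAL');  // readers no longer block the writer
db.pragma('busy_timeout = 5000'); // writers wait up to 5s instead of failing fast
```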
For a SaaS application where the read-to-write ratio is roughly 20:1 (users browsing dashboards, loading reports, viewing settings), SQLite on a local file consistently shows lower latency because there's no network round-trip between your application server and the database.
With Turso's embedded replicas, you get this benefit even in serverless environments where there's no persistent local disk:
```typescript
import { createClient } from '@libsql/client';

// Embedded replica: reads come from local file, writes sync to remote primary
const db = createClient({
  url: 'file:/tmp/local-replica.db',    // local read path
  syncUrl: process.env.TURSO_SYNC_URL!, // remote primary
  authToken: process.env.TURSO_AUTH_TOKEN!,
  syncInterval: 60, // background sync every 60 seconds
});

// Force a sync before critical reads (e.g., after a write from another process)
await db.sync();

// This read never leaves the server — hits the local file
const { rows } = await db.execute({
  sql: 'SELECT id, name, email, plan FROM users WHERE org_id = ? AND active = 1',
  args: [orgId],
});

// This write persists locally and syncs to remote asynchronously
await db.execute({
  sql: 'UPDATE users SET last_seen_at = ? WHERE id = ?',
  args: [new Date().toISOString(), userId],
});
```
In benchmarks on read-heavy workloads with fewer than 1,000 concurrent connections, embedded SQLite shows 2–5x lower p99 latency versus a Postgres instance on the same machine. Most of the difference comes from eliminating the client-server round-trip: even a local Unix socket or localhost TCP connection carries measurable overhead at scale.
Benchmark Numbers: Realistic SaaS Query Patterns
These benchmarks use Drizzle ORM with a 10,000-row dataset across four normalized tables, on Node 22, same hardware for both databases:
| Query Type | Postgres (local Unix socket) | SQLite WAL (local file) | Turso (embedded replica) |
|---|---|---|---|
| PK lookup — SELECT * WHERE id = ? | 0.8ms | 0.09ms | 0.11ms |
| 3-table JOIN with filter | 4.2ms | 2.1ms | 2.3ms |
| Aggregation — COUNT, SUM by group | 6.8ms | 3.4ms | 3.6ms |
| Single-row INSERT with index update | 1.1ms | 2.8ms | 3.1ms |
| 500 concurrent reads (same query) | p99: 28ms | p99: 8ms | p99: 9ms |
| 500 concurrent writes | p99: 12ms | p99: 340ms | p99: 380ms |
SQLite wins significantly on reads, Postgres wins on write concurrency. The write row is the most important one: SQLite serializes writes at the WAL level, so 500 concurrent writers queue up and p99 balloons to 340ms. Postgres's MVCC handles concurrent writers without serialization.
For most SaaS products below $50K MRR with normal user activity patterns, the write concurrency limit doesn't matter — you're not doing 500 simultaneous writes to the same database. The read advantage is universal.
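If write bursts do show up, the usual mitigation is to coalesce small writes so the single writer commits once per batch instead of once per row. A sketch: `chunk` is a hypothetical helper, and the commented usage assumes a `db` created with `createClient` and the @libsql/client `batch` API.

```typescript
// Coalesce many small writes into batched transactions to soften
// SQLite's write serialization. `chunk` is an illustrative helper.

type Stmt = { sql: string; args: (string | number)[] };

function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

const pendingWrites: Stmt[] = [
  { sql: 'UPDATE users SET last_seen_at = ? WHERE id = ?', args: [Date.now(), 'u1'] },
];

// Usage sketch (assumes a libSQL `db` client as above):
// for (const batch of chunk(pendingWrites, 100)) {
//   await db.batch(batch, 'write'); // one transaction per 100 statements
// }
```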
Multi-Tenant Isolation: Where the Architecture Diverges
This is the decision that has the most lasting architectural consequences. Three patterns exist, and they behave differently in Postgres versus SQLite/Turso.
Pattern 1: Shared Table with tenant_id Column
Works identically in both Postgres and SQLite. Every table has a tenant_id column and every query filters by it. Simple, cheap, and the most common pattern in early-stage SaaS.
```typescript
// Drizzle schema — identical for Postgres or SQLite
import { pgTable, text, timestamp } from 'drizzle-orm/pg-core';
// or: import { sqliteTable, text, integer } from 'drizzle-orm/sqlite-core';
import { eq, desc } from 'drizzle-orm';

export const documents = pgTable('documents', {
  id: text('id').primaryKey(),
  tenantId: text('tenant_id').notNull(),
  title: text('title').notNull(),
  content: text('content'),
  createdAt: timestamp('created_at').defaultNow(),
});

// Application-layer enforcement — you must add tenantId to every query
// (DrizzleDb stands in for your drizzle() instance type)
async function getDocuments(db: DrizzleDb, tenantId: string) {
  return db
    .select()
    .from(documents)
    .where(eq(documents.tenantId, tenantId))
    .orderBy(desc(documents.createdAt));
}
```
The footgun here is obvious: forget the tenantId filter once and you return all tenants' data. Postgres has a mitigation — Row Level Security policies enforced at the database level:
```sql
-- Postgres RLS — enforced by the database engine, not application code
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON documents
  USING (tenant_id = current_setting('app.current_tenant_id'));

-- Set per-connection at query time
SET LOCAL app.current_tenant_id = 'tenant_abc123';

-- Now all queries on this connection automatically filter by tenant
SELECT * FROM documents; -- only returns tenant_abc123's documents
```
SQLite has no equivalent RLS mechanism. You enforce isolation in application code only. For compliance-sensitive SaaS (HIPAA, SOC 2), Postgres's database-level enforcement is a meaningful advantage.
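What you can do on SQLite is centralize the filter instead of trusting every call site. A minimal sketch of an application-level guard, assuming the shared-table pattern above: route every query through one helper that rejects SQL missing the tenant filter and always binds the tenant ID as the first parameter. The names here are illustrative, not a library API.

```typescript
// Application-level tenant guard for SQLite — no engine-level RLS exists,
// so one helper becomes the single source of truth for isolation.

type Scoped = { sql: string; args: unknown[] };

function scopedQuery(baseSql: string, tenantId: string, args: unknown[] = []): Scoped {
  if (!tenantId) throw new Error('tenantId is required');
  // Convention: the query's first placeholder is the tenant filter
  if (!/tenant_id\s*=\s*\?/i.test(baseSql)) {
    throw new Error('query must filter by tenant_id');
  }
  return { sql: baseSql, args: [tenantId, ...args] };
}

// Usage sketch:
// const { sql, args } = scopedQuery(
//   'SELECT * FROM documents WHERE tenant_id = ? ORDER BY created_at DESC',
//   currentTenantId
// );
// await db.execute({ sql, args });
```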
Pattern 2: Separate Schema per Tenant (Postgres Only)
Postgres supports multiple schemas within a single database. Each tenant gets their own schema (tenant_abc123.documents, tenant_abc123.users) with identical table structures. You switch which schema the connection uses by setting search_path.
```typescript
import { drizzle } from 'drizzle-orm/node-postgres';
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function getTenantDb(tenantId: string) {
  const client = await pool.connect();
  // Validate tenantId — critical to prevent schema injection
  if (!/^[a-z0-9_]+$/.test(tenantId)) throw new Error('Invalid tenant ID');
  // Switch to tenant's schema
  await client.query(`SET search_path TO tenant_${tenantId}, public`);
  return {
    db: drizzle(client),
    release: () => client.release(),
  };
}

// Usage — queries hit tenant_abc123.documents, not public.documents
const { db, release } = await getTenantDb('abc123');
try {
  const docs = await db.select().from(documents);
} finally {
  release();
}
```
This is strong isolation — a query on one tenant's schema cannot touch another's. But it has a real operational cost: migrations must execute against every tenant schema. With 1,000 tenants, an ALTER TABLE ADD COLUMN runs 1,000 times. You need migration orchestration tooling (or write it yourself) to handle schema updates reliably without leaving tenants on different schema versions mid-deploy.
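The fan-out itself is mechanical. A sketch, assuming the tenant_<id> schema naming used above; `perSchemaStatements` is a hypothetical helper that expands one DDL change into one statement batch per tenant schema:

```typescript
// Expand a single migration into per-schema statement batches.
function perSchemaStatements(tenantIds: string[], ddl: string): string[] {
  return tenantIds.map((id) => {
    // Same validation as getTenantDb — schema names are interpolated, not bound
    if (!/^[a-z0-9_]+$/.test(id)) throw new Error(`Invalid tenant ID: ${id}`);
    // SET LOCAL keeps the search_path change scoped to the wrapping transaction
    return `SET LOCAL search_path TO tenant_${id}, public; ${ddl}`;
  });
}

// Usage sketch: execute each entry inside its own transaction and record which
// tenants succeeded, so a mid-deploy failure is resumable rather than leaving
// schemas silently on different versions.
```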
Pattern 3: Separate Database per Tenant (Turso's Killer Feature)
Turso lets you create a new isolated SQLite database per tenant via API call in roughly 200ms. Each tenant gets completely isolated storage — not just a different schema, a different file entirely. No shared tables, no RLS policies, no schema path switching.
```typescript
import { createClient } from '@libsql/client';

const TURSO_ORG = process.env.TURSO_ORG_SLUG!;
const TURSO_API_TOKEN = process.env.TURSO_API_TOKEN!;

// Call this during tenant onboarding
async function provisionTenantDatabase(tenantId: string): Promise<string> {
  const response = await fetch(
    `https://api.turso.tech/v1/organizations/${TURSO_ORG}/databases`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${TURSO_API_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        name: `tenant-${tenantId}`,
        group: 'default',
        // Fork from a template database — already has your schema + seed data applied
        seed: { type: 'database', name: 'tenant-template' },
      }),
    }
  );
  if (!response.ok) {
    throw new Error(`Failed to provision database: ${await response.text()}`);
  }
  const { database } = await response.json();
  return database.hostname;
}

// Get a client for a specific tenant
// (in production, mint a scoped database auth token per tenant rather than
// reusing the platform API token)
async function getTenantClient(hostname: string) {
  return createClient({
    url: `libsql://${hostname}`,
    authToken: TURSO_API_TOKEN,
  });
}

// Migrations: run against the template, new tenants get it for free
// Existing tenants: iterate and migrate in parallel batches
async function migrateAllTenants(migrationSql: string) {
  const tenants = await getAllTenantHostnames(); // your lookup
  // Run migrations in parallel batches of 10
  for (let i = 0; i < tenants.length; i += 10) {
    const batch = tenants.slice(i, i + 10);
    await Promise.all(
      batch.map(async (hostname) => {
        const client = await getTenantClient(hostname);
        await client.executeMultiple(migrationSql); // handles multi-statement SQL
      })
    );
  }
}
```
Turso's free tier supports 500 databases. Their Scaler plan at $29/month gives you 10,000 databases. For a B2B SaaS where each tenant is a small company with modest data volumes, this is dramatically cheaper than provisioning a Postgres instance per tenant — which would run $7–$15/month per tenant on even the cheapest managed services.
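The arithmetic is stark enough to state directly. A back-of-envelope sketch using the numbers above: Turso's Scaler plan is a flat $29/month for up to 10,000 databases, while per-tenant managed Postgres is modeled at a hypothetical $7/month per instance (the low end of the range quoted above).

```typescript
// Back-of-envelope monthly cost comparison (figures from this article).
function tursoScalerMonthlyCost(tenantCount: number): number {
  if (tenantCount > 10_000) throw new Error('beyond the Scaler plan limit');
  return 29; // flat fee, regardless of tenant count
}

function perTenantPostgresMonthlyCost(tenantCount: number, perInstance = 7): number {
  return tenantCount * perInstance; // scales linearly with tenants
}
```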
When Postgres Is the Right Answer
Postgres wins in these concrete scenarios:
High write concurrency — If you're doing more than ~100 simultaneous writes to a single database (event ingestion, real-time collaboration, financial ledgers), Postgres's MVCC handles concurrent writers correctly where SQLite's serialized WAL creates a queue.
Complex analytical queries — Window functions with frame specifications, recursive CTEs, full-text search with tsvector and tsquery, PostGIS for geospatial data — SQLite supports some of these but Postgres's query planner is more sophisticated and the execution is faster for multi-step aggregations.
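To make the query class concrete, here is an illustrative example of the shape of query where the Postgres planner tends to pull ahead; the teams table and columns are hypothetical:

```sql
-- Recursive CTE over an org hierarchy, plus a window function per level.
WITH RECURSIVE org_tree AS (
  SELECT id, parent_id, name, 0 AS depth
  FROM teams WHERE parent_id IS NULL
  UNION ALL
  SELECT t.id, t.parent_id, t.name, ot.depth + 1
  FROM teams t JOIN org_tree ot ON t.parent_id = ot.id
)
SELECT name, depth,
       rank() OVER (PARTITION BY depth ORDER BY name) AS rank_in_level
FROM org_tree;
```

SQLite can parse much of this, but multi-step aggregations like it are where the planner gap shows up in execution time.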
Supabase — If you want Supabase Auth, Realtime subscriptions, Storage, or Edge Functions, you're on Postgres. Turso doesn't have an equivalent all-in-one platform, and Supabase's tight integration across services is a real productivity advantage for teams building quickly.
Compliance requirements — Row Level Security, column-level encryption, audit logging via pgaudit, and the broader Postgres tooling ecosystem (pgvector, pg_partman) matter for HIPAA and SOC 2 requirements in ways that SQLite can't match.
Your team already knows Postgres — Knowledge and tooling familiarity have real value. If your team runs pgAdmin, pg_stat_statements, and EXPLAIN ANALYZE workflows and has production Postgres experience, the operational cost of switching isn't zero.
The Decision Framework
Stripped of preferences:
- Need the Supabase ecosystem? → Postgres
- Need PostGIS or pgvector? → Postgres
- Need RLS for compliance? → Postgres
- >100 concurrent writers? → Postgres
- Cloudflare Workers / no TCP? → Turso
- B2B SaaS, <5K tenants, read-heavy? → Turso
- Bundle/cold-start sensitive? → Turso (SQLite)
- Want per-tenant database isolation? → Turso
- Early-stage, budget-conscious? → Turso
- Everything else? → Postgres (safer default)
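The checklist above can be encoded directly. The Workload shape and thresholds below are a sketch of this article's criteria, not a published API:

```typescript
// The decision framework as a function — illustrative, not exhaustive.
type Workload = {
  needsSupabaseEcosystem?: boolean;
  needsPostgisOrPgvector?: boolean;
  needsDatabaseLevelRls?: boolean;
  concurrentWriters: number;
  tenantCount: number;
  readHeavy: boolean;
};

function chooseDatabase(w: Workload): 'postgres' | 'turso' {
  if (w.needsSupabaseEcosystem || w.needsPostgisOrPgvector || w.needsDatabaseLevelRls) {
    return 'postgres';
  }
  if (w.concurrentWriters > 100) return 'postgres'; // write-concurrency ceiling
  if (w.readHeavy && w.tenantCount < 5_000) return 'turso';
  return 'postgres'; // safer default
}
```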
For most early-stage SaaS products (sub-$20K MRR, fewer than 1,000 tenants, dashboard-heavy read patterns), Turso with per-tenant databases is a better technical and economic fit than Postgres. The per-tenant isolation is stronger by default, the reads are faster, and the cost scales linearly with tenant count at a much lower base rate.
The right time to migrate to Postgres is when you hit write concurrency limits, need PostGIS, or sign a contract that requires SOC 2 with database-level RLS. That's a good problem to have — it means the product is working.
Skip the Boilerplate
The AI SaaS Starter Kit ships with both Drizzle + Postgres (Neon) and Drizzle + Turso configurations — switch between them with a single environment variable. Multi-tenant schema, migration tooling, and tenant provisioning API included. Ship production-ready in hours.