I run 4 production SaaS products on SQLite. Not PostgreSQL. Not MySQL. SQLite.
Before you close this tab — hear me out.
The Case for SQLite in Production
SQLite handles:
- Thousands of reads per second without breaking a sweat
- Write-ahead logging (WAL mode) for concurrent reads during writes
- Zero configuration, zero maintenance, zero connection pooling
- Atomic transactions with ACID compliance
- Backup = copy one file
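One caveat on that last point: copying the file while a write is in flight can capture a torn snapshot. The stdlib `sqlite3` module exposes SQLite's online backup API, which takes a consistent copy even mid-write — a minimal sketch (the paths and function name are mine):

```python
import sqlite3

def backup_db(src_path: str, dest_path: str) -> None:
    """Snapshot the database safely, even while the app is writing."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        src.backup(dest)  # SQLite's online backup API
    finally:
        dest.close()
        src.close()
```

Cron that nightly and your backup strategy is still "one file."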
My 4 products (DocuMint, CronPing, FlagBit, WebhookVault) each use SQLite. Combined, they handle signups, API key management, usage tracking, webhook captures, feature flag evaluations, and cron monitor pings.
Total database ops: ~500/day.
SQLite's comfortable range: ~100,000/day.
I'm at 0.5% capacity. PostgreSQL would be over-engineering by a factor of 200.
The Setup
Each product runs in a Docker container with a mounted volume:
```yaml
# docker-compose.yml
services:
  app:
    build: .
    volumes:
      - ./data:/app/data
```
```python
# database.py
import sqlite3
import os

def get_db():
    db = sqlite3.connect(
        os.path.join('/app/data', 'app.db'),
        check_same_thread=False
    )
    db.execute('PRAGMA journal_mode=WAL')   # Key for concurrency
    db.execute('PRAGMA busy_timeout=5000')  # Wait 5s if locked
    db.row_factory = sqlite3.Row
    return db
```
That's the entire database layer. No ORM, no connection pool, no migration framework.
Migration Strategy
I use a dead-simple migration approach: check if columns/tables exist, create them if they don't.
```python
def init_db():
    db = get_db()
    db.executescript('''
        CREATE TABLE IF NOT EXISTS users (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            email TEXT UNIQUE NOT NULL,
            api_key_hash TEXT NOT NULL,
            plan TEXT DEFAULT 'free',
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );

        CREATE TABLE IF NOT EXISTS usage (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            user_id INTEGER REFERENCES users(id),
            action TEXT NOT NULL,
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
    ''')

    # Add columns that might not exist yet. Catch only OperationalError —
    # a bare except would silently swallow real bugs too.
    try:
        db.execute('ALTER TABLE users ADD COLUMN stripe_customer_id TEXT')
    except sqlite3.OperationalError:
        pass  # Column already exists
```
No Alembic. No Django migrations. No version tracking tables. It just works.
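If catching the error feels too implicit, the same idempotent column-add can be done by inspecting the schema first with `PRAGMA table_info` — a sketch (the helper name is mine):

```python
import sqlite3

def add_column_if_missing(db, table: str, column: str, decl: str) -> None:
    """Idempotent ALTER TABLE: only add the column if it isn't there yet."""
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)
    existing = {row[1] for row in db.execute(f'PRAGMA table_info({table})')}
    if column not in existing:
        db.execute(f'ALTER TABLE {table} ADD COLUMN {column} {decl}')
```

Same effect as the try/except, but it reads as intent rather than error handling.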
When SQLite ISN'T Enough
Let's be honest about SQLite's limitations:
Concurrent writes from multiple processes
SQLite allows only one writer at a time — every write takes a database-wide lock. If you have multiple worker processes writing simultaneously, you'll hit SQLITE_BUSY errors. WAL mode lets reads proceed during a write, but it still doesn't allow concurrent writers.
When to switch: If you need background workers + web server writing to the same DB simultaneously with high frequency.
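For the occasional contention that does slip past `busy_timeout`, a small retry wrapper is usually enough — a sketch (the attempt count and backoff numbers are arbitrary):

```python
import sqlite3
import time

def execute_with_retry(db, sql, params=(), attempts=5):
    """Retry a statement that hits SQLITE_BUSY, with linear backoff."""
    for attempt in range(attempts):
        try:
            return db.execute(sql, params)
        except sqlite3.OperationalError as e:
            # Only retry lock contention; re-raise everything else
            if 'locked' not in str(e) and 'busy' not in str(e):
                raise
            time.sleep(0.05 * (attempt + 1))
    raise sqlite3.OperationalError(f'still locked after {attempts} attempts')
```

If you find yourself tuning this, that's the signal you've outgrown single-writer territory.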
Multi-server deployments
SQLite is a file. Files live on one disk. If you need horizontal scaling across multiple servers, SQLite can't follow.
When to switch: If you need more than one application server.
Complex relational queries
SQLite supports JOINs, subqueries, and window functions. But PostgreSQL's query planner is significantly more sophisticated for complex analytics queries.
When to switch: If your queries involve 4+ JOINs or you need materialized views.
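To be fair, the simple cases work fine in SQLite. Here's a window function over a usage-style table, for instance (the table and data are illustrative, not my production schema):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE usage (user_id INTEGER, action TEXT)')
db.executemany('INSERT INTO usage VALUES (?, ?)',
               [(1, 'pdf'), (1, 'pdf'), (2, 'ping')])

# Per-user totals alongside each row — no GROUP BY collapse needed
rows = db.execute('''
    SELECT user_id, action,
           COUNT(*) OVER (PARTITION BY user_id) AS user_total
    FROM usage
''').fetchall()
```

Window functions need SQLite 3.25+ (2018), which any current Python ships with.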
Full-text search at scale
SQLite has FTS5, which is decent. But PostgreSQL's full-text search with GIN indexes is a different league.
When to switch: If full-text search is a core feature, not an afterthought.
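For what it's worth, "decent" still gets you surprisingly far — a minimal FTS5 sketch (assuming your SQLite build ships the FTS5 extension, which stock CPython builds generally do; the data is made up):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE VIRTUAL TABLE docs USING fts5(title, body)')
db.executemany('INSERT INTO docs VALUES (?, ?)', [
    ('Invoice API', 'Generate PDF invoices over HTTP'),
    ('Cron syntax', 'Five-field cron expressions explained'),
])

# MATCH does tokenized full-text search; bm25() ranks by relevance
hits = db.execute(
    'SELECT title FROM docs WHERE docs MATCH ? ORDER BY bm25(docs)',
    ('invoices',)
).fetchall()
```

What you don't get out of the box is stemming-by-default, trigram similarity, or the index flexibility PostgreSQL gives you.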
The Migration Path
When you DO need to migrate, the path is straightforward:
```bash
# Export SQLite to SQL
sqlite3 app.db .dump > dump.sql

# Adjust syntax for PostgreSQL (the main difference: id columns).
# One combined substitution — running separate seds for AUTOINCREMENT
# and INTEGER PRIMARY KEY would compound into invalid syntax.
sed -i 's/INTEGER PRIMARY KEY AUTOINCREMENT/SERIAL PRIMARY KEY/g' dump.sql

# Import into PostgreSQL
psql -d mydb -f dump.sql
```
Plus updating your application code from sqlite3 to asyncpg or psycopg2. If you kept your queries simple (no SQLite-specific extensions), this is a weekend project.
Security Note: Hash Your API Keys
One thing I do regardless of database choice — API keys are SHA-256 hashed before storage:
```python
import hashlib

def hash_key(api_key: str) -> str:
    return hashlib.sha256(api_key.encode()).hexdigest()

# On signup: show the key once, store only the hash
# On auth: hash incoming key and compare with stored hash
```
This way, if someone dumps the database, they get hashes — not usable API keys.
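End to end, that's generate → show once → store the hash → compare on auth. A sketch of the full lifecycle (the function names are mine; `secrets` is the stdlib way to get a high-entropy key, and `compare_digest` avoids timing leaks):

```python
import hashlib
import hmac
import secrets

def hash_key(api_key: str) -> str:
    return hashlib.sha256(api_key.encode()).hexdigest()

def new_api_key() -> tuple[str, str]:
    """Returns (plaintext key to show once, hash to store)."""
    key = secrets.token_urlsafe(32)  # ~256 bits of entropy
    return key, hash_key(key)

def check_key(incoming: str, stored_hash: str) -> bool:
    # Constant-time compare, so auth timing doesn't leak hash prefixes
    return hmac.compare_digest(hash_key(incoming), stored_hash)
```

Plain SHA-256 without salting or stretching is fine here precisely because the keys are random and high-entropy — that reasoning does not extend to passwords.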
The Decision Framework
Use SQLite when:
- ✅ Single-server deployment
- ✅ < 10,000 writes/day
- ✅ Single-process writes (or low write contention)
- ✅ You value simplicity over features
- ✅ Your data fits comfortably in memory
Use PostgreSQL when:
- ✅ Multiple servers or workers writing concurrently
- ✅ Complex relational queries are core to the product
- ✅ You need pub/sub, LISTEN/NOTIFY, or advanced extensions
- ✅ Full-text search is a primary feature
- ✅ You're past product-market fit and scaling
Try It Yourself
All four of my SQLite-powered products have free tiers:
```bash
# Generate a PDF invoice
curl -X POST https://documint.anethoth.com/api/v1/demo-invoice \
  -H 'Content-Type: application/json' \
  -d '{"company": "Test", "items": [{"description": "Item", "quantity": 1, "unit_price": 100}]}' \
  --output test.pdf

# Parse a cron expression
curl 'https://cronping.anethoth.com/api/v1/cron/describe?expr=0+9+*+*+1-5'

# Echo back your request headers
curl https://webhookvault.anethoth.com/api/v1/echo
```
All backed by SQLite. All serving production traffic. All running on a single $48/month VPS.
Are you running SQLite in production? I'd love to hear war stories — both good and bad.