A year ago, I launched PropFirm Key — a comparison site that lets users browse, filter, and compare prop trading firms. I expected modest traffic. I got 50,000+ daily visitors within three months.
The database behind all of it? SQLite. A single .db file sitting on disk. And it has been one of the best architectural decisions I have made.
## Why SQLite in Production?
- Zero operational overhead. No separate process, no connection strings, no credentials to rotate.
- Deployment is a file copy. Your entire database travels with your application.
- Reads are absurdly fast. No network round-trip. Queries hit memory-mapped pages directly.
- ACID compliance. Full transactions, WAL mode for concurrent readers, crash recovery built in.
## When SQLite is the Right Choice
- Read-heavy workload. My site has a 98/2 read-write ratio.
- Single server deployment. One file, one server.
- Dataset fits in memory. My database is 47 MB — the OS page cache keeps it in RAM.
- You value simplicity. No database container to manage.
If you need multi-server writes, real-time replication, or terabytes of data — use PostgreSQL.
## Setting Up SQLite with Drizzle ORM in Next.js
```bash
npm install drizzle-orm better-sqlite3
npm install -D drizzle-kit @types/better-sqlite3
```
```ts
// src/lib/db/index.ts
import { drizzle } from "drizzle-orm/better-sqlite3";
import Database from "better-sqlite3";
import * as schema from "./schema";

const sqlite = new Database("data/app.db");

// Critical PRAGMA settings — run once at connection time
sqlite.pragma("journal_mode = WAL");
sqlite.pragma("busy_timeout = 5000");
sqlite.pragma("synchronous = NORMAL");
sqlite.pragma("cache_size = -64000"); // 64MB cache
sqlite.pragma("foreign_keys = ON");
sqlite.pragma("temp_store = MEMORY");
sqlite.pragma("mmap_size = 268435456"); // 256MB memory-map

export const db = drizzle(sqlite, { schema });
```
The better-sqlite3 driver is synchronous — no connection pool to manage, no promise overhead, and queries that take microseconds do not benefit from being async.
## Schema Design Patterns
### JSON Columns for Flexible Data
Comparison sites have a common problem: every provider has different attributes. One has 5 pricing tiers, another has 12. Rigid column-per-attribute schemas don't scale.
```ts
import { sqliteTable, integer, text, real } from "drizzle-orm/sqlite-core";

export const firms = sqliteTable("firms", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  slug: text("slug").notNull().unique(),
  name: text("name").notNull(),
  challenges: text("challenges", { mode: "json" })
    .$type<Challenge[]>()
    .notNull()
    .default([]),
  payoutRules: text("payout_rules", { mode: "json" })
    .$type<PayoutRule[]>()
    .notNull()
    .default([]),
  rating: real("rating").notNull().default(0),
  isActive: integer("is_active", { mode: "boolean" }).notNull().default(true),
});
```
Drizzle's { mode: "json" } handles serialization transparently.
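For intuition, the mode amounts to a TEXT column with JSON.stringify on the way in and JSON.parse on the way out. The sketch below shows that equivalence; the `Challenge` shape here is illustrative, not the site's actual type:

```typescript
// Illustrative shape for one evaluation tier; the real Challenge type in
// the article's codebase may carry more fields.
interface Challenge {
  accountSize: number;
  price: number;
  profitSplit: number;
}

// { mode: "json" } is roughly equivalent to these two functions applied
// at the column boundary: serialize to TEXT on write, parse on read.
export function serializeChallenges(cs: Challenge[]): string {
  return JSON.stringify(cs);
}

export function parseChallenges(raw: string): Challenge[] {
  return JSON.parse(raw) as Challenge[];
}
```

The practical consequence: you can still query inside the column with SQLite's built-in JSON functions (json_extract, json_array_length) even though the application only ever sees typed arrays.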
### Full-Text Search Without External Services
SQLite's FTS5 extension is remarkably capable:
```sql
CREATE VIRTUAL TABLE firms_fts USING fts5(
  name, description, features,
  content='firms', content_rowid='id'
);
```
```ts
import { sql } from "drizzle-orm";
import { db } from "./index";

export function searchFirms(query: string, limit = 20) {
  return db.all(sql`
    SELECT f.*,
           highlight(firms_fts, 0, '<mark>', '</mark>') AS name_hl,
           rank
    FROM firms_fts
    JOIN firms f ON f.id = firms_fts.rowid
    WHERE firms_fts MATCH ${query}
    ORDER BY rank
    LIMIT ${limit}
  `);
}
```
This returns results with highlighted matches in under a millisecond across thousands of rows. No Elasticsearch needed.
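One caveat the snippet glosses over: FTS5 treats the MATCH argument as query syntax, so raw user input containing an unbalanced quote or a bare operator can raise a syntax error. A defensive approach (my own sketch, not code from the site) is to quote each term before it reaches MATCH:

```typescript
// FTS5 interprets MATCH input as its query language, so characters like "
// or tokens like NEAR can throw "fts5: syntax error". This escapes embedded
// quotes and wraps every term in double quotes, with a trailing * for
// prefix matching, so user input is always treated as literal tokens.
export function toFtsQuery(userInput: string): string {
  return userInput
    .trim()
    .split(/\s+/)
    .filter((term) => term.length > 0)
    .map((term) => `"${term.replace(/"/g, '""')}"*`)
    .join(" ");
}
```

You would then call something like `searchFirms(toFtsQuery(rawInput))` so a stray quote degrades to an empty or literal search instead of a 500.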
## Performance: The Full PRAGMA Checklist
```ts
sqlite.pragma("journal_mode = WAL");    // Concurrent reads + writes
sqlite.pragma("busy_timeout = 5000");   // Wait instead of SQLITE_BUSY
sqlite.pragma("synchronous = NORMAL");  // Safe with WAL, 2x faster than FULL
sqlite.pragma("cache_size = -64000");   // 64MB page cache
sqlite.pragma("foreign_keys = ON");     // Enforce FK constraints (off by default!)
sqlite.pragma("temp_store = MEMORY");   // Temp tables in RAM
sqlite.pragma("mmap_size = 268435456"); // Memory-map 256MB for faster reads
```
The mmap_size PRAGMA tells SQLite to memory-map the database file, letting the OS handle page caching. For databases that fit in RAM, this eliminates syscall overhead on reads.
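One unit gotcha in that checklist: a positive cache_size is a page count, while a negative value is a size in kibibytes (so -64000 is roughly, not exactly, 64 MB). A tiny helper of my own making avoids mixing the units up:

```typescript
// PRAGMA cache_size semantics: positive = number of pages, negative = size
// in KiB. This converts a size in MiB to the negative-KiB form so you never
// accidentally request "64 pages" when you meant 64 megabytes.
export function cacheSizePragma(mebibytes: number): string {
  const kib = Math.round(mebibytes * 1024);
  return `cache_size = -${kib}`;
}
```

Usage would be `sqlite.pragma(cacheSizePragma(64))`, which requests an exact 64 MiB cache.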
## Preventing Multiple Instances in Next.js
```ts
import Database from "better-sqlite3";

const globalForDb = globalThis as unknown as {
  sqlite: Database.Database | undefined;
};

// createDatabase() is the `new Database(...)` + PRAGMA setup shown earlier
export const sqlite = globalForDb.sqlite ?? createDatabase();

if (process.env.NODE_ENV !== "production") {
  globalForDb.sqlite = sqlite;
}
```
This prevents hot module reloading from creating multiple database connections.
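The same trick works for any expensive singleton (caches, clients, watchers), so it can be worth generalizing. This helper is my own sketch of the pattern, not code from the site:

```typescript
// Stash any expensive singleton on globalThis so Next.js hot module
// reloading reuses it in development; in production, just create it.
// `devSingleton` is an illustrative name, not a Next.js API.
export function devSingleton<T>(key: string, create: () => T): T {
  const g = globalThis as unknown as Record<string, T | undefined>;
  if (process.env.NODE_ENV === "production") return create();
  if (g[key] === undefined) g[key] = create();
  return g[key] as T;
}
```

With it, the database block above collapses to `export const sqlite = devSingleton("sqlite", createDatabase);`.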
## Handling Concurrency
Reads: No problem. SQLite in WAL mode handles concurrent reads effortlessly. I have served 200 concurrent requests without contention.
Writes: SQLite allows only one writer at a time. The strategy:
```ts
// The prepared statement is not shown in the original; this one is illustrative
const updateStmt = sqlite.prepare(
  "UPDATE firms SET rating = @rating WHERE id = @id"
);

export function updateFirmsBatch(updates: FirmUpdate[]) {
  sqlite.transaction(() => {
    for (const update of updates) {
      updateStmt.run({ id: update.id, ...update });
    }
  })();
}
```
Single transaction = single lock acquisition. Instead of acquiring the write lock 500 times, you acquire it once.
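The flip side: one giant transaction holds the write lock for its whole duration, which stalls other writers. If a batch grows into the tens of thousands, splitting it keeps each transaction short. A generic chunking helper (mine, with an arbitrary default of 500, not a number from the article):

```typescript
// Split a large array into fixed-size chunks so each chunk can be written
// in its own short transaction instead of one long lock-holding one.
export function chunk<T>(items: T[], size = 500): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

Usage: `for (const batch of chunk(updates)) updateFirmsBatch(batch);` — each batch still gets the single-lock benefit, but readers are never blocked for long.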
## Backup Strategies
Never copy the database file directly while the application is running. In WAL mode the data is spread across up to three files (.db, -wal, -shm), and a mid-write copy can produce a corrupted backup. Use the .backup() API instead:
```ts
function backup() {
  const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
  const backupPath = `data/backups/app-${timestamp}.db`;

  const source = new Database("data/app.db", { readonly: true });
  source.backup(backupPath).then(() => {
    console.log(`Backup created: ${backupPath}`);
    source.close();
    cleanOldBackups("data/backups", 7); // retention helper, defined elsewhere
  });
}
```
I run this every 6 hours plus before every deployment.
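The retention helper is not shown above; one possible policy, written as a pure function over filenames so the decision logic is testable (a real cleanOldBackups would wrap this with fs.readdir and fs.unlink):

```typescript
// Given backup filenames that embed the ISO-derived timestamp used above,
// return the files to delete, keeping the newest `keep` backups.
// Lexicographic sort is enough because the timestamps sort chronologically.
export function backupsToDelete(filenames: string[], keep: number): string[] {
  const sorted = [...filenames].sort().reverse(); // newest first
  return sorted.slice(keep);
}
```

Keeping the policy pure like this also makes it trivial to dry-run: log the doomed list before ever deleting anything.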
## Docker Volume Management
Without a persistent volume, every docker compose up --build nukes your database:
```yaml
services:
  app:
    build: .
    ports:
      - "3001:3000"
    volumes:
      - app-data:/app/data

volumes:
  app-data:
    driver: local
```
To persist Next.js static chunks across builds, use an entrypoint script:
```sh
#!/bin/sh
# Merge new build assets into persistent volume
cp -rn /tmp/next-static-build/* /app/.next/static/ 2>/dev/null || true

# Clean up chunks older than 7 days
find /app/.next/static -name "*.js" -mtime +7 -delete 2>/dev/null || true

exec node server.js
```
This prevents stale chunk 404 errors when users have cached HTML pointing to old JavaScript hashes.
## Monitoring
```ts
// src/app/api/health/route.ts
import { NextResponse } from "next/server";
import { sqlite } from "@/lib/db";

export function GET() {
  const integrity = sqlite.pragma("integrity_check");
  const walMode = sqlite.pragma("journal_mode");
  const dbSize =
    sqlite.pragma("page_count")[0].page_count *
    sqlite.pragma("page_size")[0].page_size;

  return NextResponse.json({
    status: "ok",
    database: {
      integrity: integrity[0].integrity_check,
      journalMode: walMode[0].journal_mode,
      sizeMB: (dbSize / 1024 / 1024).toFixed(2),
    },
  });
}
```
### Slow Query Detection
```ts
const SLOW_QUERY_MS = 50;

export function withQueryLogging<T>(label: string, fn: () => T): T {
  const start = performance.now();
  const result = fn();
  const duration = performance.now() - start;

  if (duration > SLOW_QUERY_MS) {
    console.warn(`[SLOW QUERY] ${label}: ${duration.toFixed(1)}ms`);
  }
  return result;
}
```
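That wrapper is synchronous because better-sqlite3 is. Route handlers that await other work around the query may want an async variant; this one is my sketch, with the same threshold semantics and a finally block so timings are logged even when the wrapped function throws:

```typescript
// Async counterpart to the synchronous wrapper: same slow-query threshold,
// but it times a promise-returning function and logs in `finally` so even
// failed calls get a duration recorded.
const SLOW_MS = 50;

export async function withQueryLoggingAsync<T>(
  label: string,
  fn: () => Promise<T>,
): Promise<T> {
  const start = performance.now();
  try {
    return await fn();
  } finally {
    const duration = performance.now() - start;
    if (duration > SLOW_MS) {
      console.warn(`[SLOW QUERY] ${label}: ${duration.toFixed(1)}ms`);
    }
  }
}
```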
## When to Migrate to PostgreSQL
Migrate when:
- You need multiple application servers writing to the same database
- Your dataset exceeds available RAM and queries hit disk
- You need LISTEN/NOTIFY, row-level security, or PostGIS
- Write contention becomes a bottleneck
Do not migrate because:
- Someone on Twitter said SQLite is not a "real" database
- You want to pad your resume with PostgreSQL experience
- You assume you will need it someday
The actual migration with Drizzle: change `sqliteTable` to `pgTable`, update column types, switch the dialect in `drizzle.config.ts`. The queries — `select`, `where`, `join`, `orderBy` — remain identical.
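For concreteness, the dialect switch in drizzle.config.ts looks roughly like this. Field names follow recent drizzle-kit versions and the paths are illustrative; check your installed version's docs:

```typescript
// drizzle.config.ts — the Postgres version; comments mark what the SQLite
// config had. Paths and env var names here are assumptions, not the site's.
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  schema: "./src/lib/db/schema.ts",
  out: "./drizzle",
  dialect: "postgresql", // was: "sqlite"
  dbCredentials: {
    url: process.env.DATABASE_URL!, // was: a file path like "data/app.db"
  },
});
```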
## Real-World Schema: Modeling Evaluation Programs
The comparison site I built tracks firms that offer funded trading accounts. Each firm has multiple evaluation tiers with different rules. Here is how the schema handles this variance using the patterns above:
```ts
const firmProfile = {
  slug: 'takeprofittrader',
  name: 'TakeProfitTrader',
  headquarters: 'Windermere, Florida',
  founded: 2021,
  market: 'futures',
  evaluationSteps: 1,
  trustpilotRating: 4.4,
  totalReviews: 8323,
  platforms: [
    'NinjaTrader', 'Tradovate', 'TradingView', 'Quantower',
    'MotiveWave', 'ATAS', 'MultiCharts', 'Sierra Chart', 'Bookmap'
    // 13 platforms total
  ],
  payoutFrequency: 'on-demand',
  payoutMethods: ['ACH', 'PayPal', 'Wise'],
  challenges: [
    { accountSize: 25000, price: 150, profitTarget: 1500,
      maxLoss: 1500, maxLossType: 'trailing', dailyLossLimit: null,
      minTradingDays: 5, profitSplit: 80, activationFee: 130 },
    { accountSize: 50000, price: 170, profitTarget: 3000,
      maxLoss: 2000, maxLossType: 'trailing', dailyLossLimit: null,
      minTradingDays: 5, profitSplit: 80, activationFee: 130 },
    { accountSize: 75000, price: 245, profitTarget: 4500,
      maxLoss: 2500, maxLossType: 'trailing', dailyLossLimit: null,
      minTradingDays: 5, profitSplit: 80, activationFee: 130 },
    { accountSize: 100000, price: 330, profitTarget: 6000,
      maxLoss: 3000, maxLossType: 'trailing', dailyLossLimit: null,
      minTradingDays: 5, profitSplit: 80, activationFee: 130 },
    { accountSize: 150000, price: 360, profitTarget: 9000,
      maxLoss: 4500, maxLossType: 'trailing', dailyLossLimit: null,
      minTradingDays: 5, profitSplit: 80, activationFee: 130 },
  ],
  tradingRules: {
    scalping: true, newsTrading: true, eaBots: false,
    hft: false, weekendHolding: false,
    consistencyRule: 'Best day < 50% of total profit'
  }
};
```
Key design decisions visible in this data:

- `dailyLossLimit: null` — this firm has no daily loss limit, only a trailing max loss. The nullable field captures that meaningfully.
- JSON arrays for platforms — 13 platforms stored as a JSON column, queried with `json_array_length()`.
- Nested challenges array — each tier is a row in the `challenges` table with a foreign key to the firm.
The full comparison across all firms is available at PropFirm Key.
## Final Thoughts
SQLite in production is not a hack. It is an engineering decision that trades write scalability for operational simplicity and raw read performance. For read-heavy, single-server applications, it is a genuinely excellent choice.
Start simple. Measure. Migrate when the data tells you to, not when hype tells you to.