# Why I Stopped Using PostgreSQL for Side Projects (And You Should Too)

SQLite is faster, simpler, and zero-maintenance. Here's my production setup that handles 10K+ queries/minute on a $5 VPS.
## The Hot Take
For side projects, small SaaS, and solo developer tools:
SQLite > PostgreSQL > MySQL
Fighting words? Let me show you the numbers.
## The Setup That Changed Everything
My previous stack for every project:

- Node.js → PostgreSQL (via pg library) → hosted on Railway/Neon
- Monthly cost: $5-20
- Cold starts: 2-5 seconds
- Setup time: 30 minutes

My current stack:

- Node.js → SQLite (via better-sqlite3) → local file
- Monthly cost: $0
- Cold starts: none (it's a file)
- Setup time: 2 minutes
## The Code Difference

### PostgreSQL (Before)

```javascript
// database.js
const { Pool } = require('pg');

// Needs connection pooling, retry logic, migration tools
const pool = new Pool({
  host: process.env.DB_HOST,
  port: 5432,
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
});

// Needs try/catch everywhere
async function getUser(id) {
  try {
    const result = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
    return result.rows[0];
  } catch (err) {
    console.error('DB error:', err);
    throw err;
  }
}
```
### SQLite (After)

```javascript
// database.js
const Database = require('better-sqlite3');
const path = require('path');

const db = new Database(path.join(__dirname, 'data.db'));

// Enable WAL mode for concurrent reads
db.pragma('journal_mode = WAL');
db.pragma('busy_timeout = 5000');

// Synchronous API: errors throw as regular exceptions, no async plumbing
function getUser(id) {
  return db.prepare('SELECT * FROM users WHERE id = ?').get(id);
}

// Create prepared statements once and reuse them on hot paths
const insertUser = db.prepare('INSERT INTO users (name, email) VALUES (?, ?)');

function addUser(name, email) {
  return insertUser.run(name, email);
}
```
That's it. No connection pool. No async/await. No environment variables for DB config.
## Performance Numbers

**Benchmark: 10,000 queries**
| Operation | SQLite (WAL) | PostgreSQL (local) | PostgreSQL (remote) |
|---|---|---|---|
| Simple SELECT | 2.1ms total | 18ms total | 340ms total |
| INSERT | 8ms total | 45ms total | 520ms total |
| UPDATE | 5ms total | 32ms total | 480ms total |
| Transaction (100 inserts) | 0.8ms | 4ms | 85ms |
SQLite is 5-160x faster for simple operations, especially when the database is remote.
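The post doesn't include the harness behind these numbers, so here's a minimal sketch of the kind of timing loop that produces "total ms for N runs" figures. The `bench` helper is my own illustration, not the original benchmark code; point `fn` at any prepared-statement call.

```javascript
// Minimal timing harness (illustrative; not the original benchmark code).
// Runs fn `iterations` times and reports total wall-clock milliseconds.
function bench(label, fn, iterations = 10000) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn(i);
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(1)}ms total for ${iterations} runs`);
  return ms;
}

// e.g. with a better-sqlite3 prepared statement (getUserStmt is hypothetical):
// bench('Simple SELECT', (i) => getUserStmt.get(i % 1000));
const elapsed = bench('noop baseline', () => {});
```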
## Real-World: My PR Monitoring Tool

Operations per 5-minute cycle:

- INSERT: 208 PR status records
- SELECT: ~50 queries (filtering, aggregating)
- UPDATE: ~20 status changes

After 30 days:

- Database size: 45MB
- Queries per minute: ~55
- CPU usage: < 1%

This runs on the same $5 VPS as everything else. No separate database server needed.
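For scale, those cycle stats work out to well under one write per second, which a quick back-of-envelope check makes concrete:

```javascript
// Back-of-envelope write rate from the cycle stats above
// (208 INSERTs + ~20 UPDATEs every 5 minutes).
const writesPerCycle = 208 + 20;
const cycleSeconds = 5 * 60;
const writesPerSecond = writesPerCycle / cycleSeconds;
console.log(writesPerSecond.toFixed(2)); // "0.76" -- far below SQLite's limits
```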
## The WAL Mode Secret

SQLite's default journal mode (a rollback journal) makes writers and readers block each other. WAL (Write-Ahead Logging) fixes this:

```javascript
db.pragma('journal_mode = WAL');
```

What WAL gives you:

- Readers don't block writers
- Writers don't block readers
- Better crash recovery
- Slightly better write performance
Important settings for production:

```javascript
const db = new Database('data.db');

// Essential for concurrent access
db.pragma('journal_mode = WAL');
db.pragma('busy_timeout = 5000');   // Wait up to 5s if the DB is locked
db.pragma('synchronous = NORMAL');  // Safe with WAL, faster than FULL
db.pragma('cache_size = -64000');   // 64MB page cache (negative = size in KiB)
db.pragma('foreign_keys = ON');     // SQLite doesn't enforce FKs by default
```
## Migrations (Yes, SQLite Has Them)

```javascript
// migrations.js
function runMigrations(db) {
  // Create the migrations table if it doesn't exist
  db.exec(`
    CREATE TABLE IF NOT EXISTS _migrations (
      id INTEGER PRIMARY KEY AUTOINCREMENT,
      name TEXT NOT NULL,
      applied_at TEXT DEFAULT (datetime('now'))
    )
  `);

  const applied = db.prepare(
    'SELECT name FROM _migrations ORDER BY id'
  ).all().map(r => r.name);

  const migrations = [
    {
      name: '001_create_users',
      sql: `
        CREATE TABLE IF NOT EXISTS users (
          id INTEGER PRIMARY KEY AUTOINCREMENT,
          name TEXT NOT NULL,
          email TEXT UNIQUE NOT NULL,
          created_at TEXT DEFAULT (datetime('now'))
        )
      `
    },
    {
      name: '002_create_posts',
      sql: `
        CREATE TABLE IF NOT EXISTS posts (
          id INTEGER PRIMARY KEY AUTOINCREMENT,
          user_id INTEGER NOT NULL,
          title TEXT NOT NULL,
          body TEXT,
          published INTEGER DEFAULT 0,
          created_at TEXT DEFAULT (datetime('now')),
          FOREIGN KEY (user_id) REFERENCES users(id)
        )
      `
    },
    {
      name: '003_add_index',
      sql: `
        CREATE INDEX IF NOT EXISTS idx_posts_user ON posts(user_id);
        CREATE INDEX IF NOT EXISTS idx_posts_published ON posts(published);
      `
    }
  ];

  // Each migration's name is its identity; anything not yet recorded gets applied
  for (const migration of migrations) {
    if (!applied.includes(migration.name)) {
      console.log(`Running migration: ${migration.name}`);
      // Apply the SQL and record it atomically
      db.transaction(() => {
        db.exec(migration.sql);
        db.prepare('INSERT INTO _migrations (name) VALUES (?)').run(migration.name);
      })();
    }
  }
}
```
## Backups (Trivially Easy)

```bash
# Single-command backup
cp data.db backups/data-$(date +%Y%m%d-%H%M).db

# Or use SQLite's built-in backup (safer for a live WAL database,
# since a plain cp can miss writes still sitting in the -wal file)
sqlite3 data.db ".backup backups/hot-backup.db"
```

No pg_dump, no database export tools, no complex backup-job scheduling.
I add this to my daily cron:

```bash
#!/bin/bash
# /etc/cron.daily/sqlite-backup
DB_PATH="/app/data.db"
BACKUP_DIR="/app/backups"

mkdir -p "$BACKUP_DIR"
cp "$DB_PATH" "$BACKUP_DIR/data-$(date +%Y%m%d).db"

# Keep only the last 7 days
find "$BACKUP_DIR" -name "data-*.db" -mtime +7 -delete
```
## When NOT to Use SQLite
SQLite isn't perfect. Here's when to use something else:
| Scenario | Use Instead | Why |
|---|---|---|
| Multiple servers need write access | PostgreSQL | SQLite is single-writer |
| Need advanced features (JSONB, arrays, rich types) | PostgreSQL | SQLite has a limited type system |
| Heavy concurrent writes (>100/sec) | PostgreSQL | SQLite will lock |
| Need horizontal scaling | PostgreSQL + read replicas | SQLite is single-node |
| Enterprise compliance requirements | PostgreSQL | Better tooling ecosystem |
The threshold: If your project has ONE server and fewer than 100 writes/second, SQLite is probably fine.
## The "But What About..." Section

### "What about scaling?"
If your side project scales to the point where SQLite can't handle it, congratulations! You have a successful product. Migrate to PostgreSQL then. Don't pre-optimize for scale that might never come.
### "What about transactions?"

SQLite has full ACID transactions:
```javascript
const insertPost = db.prepare('INSERT INTO posts (user_id, title, body) VALUES (?, ?, ?)');
const updateUserCount = db.prepare('UPDATE users SET post_count = post_count + 1 WHERE id = ?');

const createPostWithCount = db.transaction((userId, title, body) => {
  const result = insertPost.run(userId, title, body);
  updateUserCount.run(userId);
  return result;
});

// Usage: commits on success, rolls back if anything throws
createPostWithCount(1, 'Hello', 'World');
```
### "What about full-text search?"

SQLite has FTS5 built in:

```javascript
db.exec(`
  CREATE VIRTUAL TABLE posts_fts USING fts5(title, body, content=posts);

  CREATE TRIGGER posts_ai AFTER INSERT ON posts BEGIN
    INSERT INTO posts_fts(rowid, title, body) VALUES (new.id, new.title, new.body);
  END;
`);

// Search (note: the MATCH clause uses the table's alias, fts)
const results = db.prepare(`
  SELECT p.* FROM posts p
  JOIN posts_fts fts ON p.id = fts.rowid
  WHERE fts MATCH ?
  ORDER BY fts.rank
  LIMIT 20
`).all('node javascript tutorial');
```
## My SQLite-Powered Projects
| Project | DB Size | Queries/min | Uptime |
|---|---|---|---|
| PR Monitor | 45MB | 55 | 6 months |
| Text Formatter | 0.5MB | 2 | 3 months |
| Stock Signals | 120MB | 30 | 4 months |
| Blog (older SQLite-backed version, pre-Hugo) | 8MB | 10 | 2 months |
Zero database-related incidents across all projects.
## The One-Liner Migration Guide
If you're currently using an ORM like Prisma, Drizzle, or Sequelize, most support SQLite as a driver:

```javascript
// Prisma:  datasource db { provider = "sqlite"  url = "file:./data.db" }
// Drizzle: drizzle-kit push --config drizzle.config.ts
// Knex:    knex({ client: 'sqlite3', connection: { filename: './data.db' } })
```
The SQL is 95% compatible. Minor differences:

- `AUTO_INCREMENT` → `AUTOINCREMENT`
- `NOW()` → `datetime('now')`
- `SERIAL` → `INTEGER PRIMARY KEY AUTOINCREMENT`
- `BOOLEAN` → `INTEGER` (0/1)
- `$1` params → `?`
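As a toy illustration of the placeholder difference, here's a hypothetical helper that rewrites `$n`-style parameters to positional `?` ones. Real migrations should lean on an ORM or manual review rather than string rewriting.

```javascript
// Toy converter: Postgres $n placeholders -> SQLite ? placeholders.
// Hypothetical helper for illustration only; not robust against $n inside
// string literals or a parameter reused in multiple places.
function pgToSqlitePlaceholders(sql, params) {
  const order = [];
  const converted = sql.replace(/\$(\d+)/g, (_, n) => {
    order.push(Number(n) - 1);   // remember which param each ? maps to
    return '?';
  });
  return { sql: converted, params: order.map(i => params[i]) };
}

const out = pgToSqlitePlaceholders(
  'SELECT * FROM users WHERE id = $1 AND active = $2',
  [42, 1]
);
// out.sql    -> 'SELECT * FROM users WHERE id = ? AND active = ?'
// out.params -> [42, 1]
```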
## Final Take
SQLite isn't a toy database. It's used in:
- Every smartphone (Android and iOS)
- Every web browser
- Most desktop applications
- Aircraft systems
- Medical devices
If it's good enough for an airplane, it's good enough for your side project.
Stop overthinking your database. Ship the thing.
Have opinions about SQLite vs PostgreSQL? Let's debate in the comments.
Follow @armorbreak for more opinionated takes on developer tools.