Hear Me Out
Every side project tutorial starts with "set up PostgreSQL" or "configure MongoDB." But for 90% of side projects, you don't need a database server at all.
Here's what I use instead — and why my projects actually ship.
Option 1: JSON Files (For <10K Records)
import json
from pathlib import Path
DB_FILE = Path("data.json")
def load():
    return json.loads(DB_FILE.read_text()) if DB_FILE.exists() else []

def save(data):
    DB_FILE.write_text(json.dumps(data, indent=2))
# Usage
users = load()
users.append({"name": "Alice", "email": "alice@example.com"})
save(users)
When it works: Personal tools, config storage, small datasets, prototypes.
When it doesn't: Concurrent writes, >10K records, need for queries.
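One caveat worth coding around even at this scale: if your process dies mid-write, data.json can end up half-written. A small hedge (the function name is mine, not a standard): write to a temp file first, then swap it in with os.replace, which is atomic when both paths are on the same filesystem.

```python
import json
import os
from pathlib import Path

DB_FILE = Path("data.json")

def load():
    return json.loads(DB_FILE.read_text()) if DB_FILE.exists() else []

def save_atomic(data):
    # Write to a temp file, then swap it into place. os.replace is
    # atomic on the same filesystem, so a crash mid-write can never
    # leave data.json half-written.
    tmp = DB_FILE.with_name(DB_FILE.name + ".tmp")
    tmp.write_text(json.dumps(data, indent=2))
    os.replace(tmp, DB_FILE)
```

Same two-function API as before, just crash-safe.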
Option 2: SQLite (For Everything Else)
import sqlite3
db = sqlite3.connect("app.db")
db.execute("""CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    email TEXT UNIQUE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)""")
# Insert
db.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("Alice", "alice@example.com"))
db.commit()
# Query
for row in db.execute("SELECT * FROM users WHERE name LIKE ?", ("%Ali%",)):
    print(row)
SQLite handles:
- Millions of records without breaking a sweat
- Concurrent reads (WAL mode)
- Full SQL — JOINs, aggregations, indexes
- Zero configuration — it's just a file
- Built into Python — no pip install needed
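The WAL point above is opt-in, so it's worth showing the setup. A sketch of the connection helper I'd reach for (the helper name is mine); the pragmas themselves are standard SQLite:

```python
import sqlite3

def connect(path="app.db"):
    db = sqlite3.connect(path)
    # WAL lets readers keep reading while a write is in progress
    # (no effect on :memory: databases, which have no journal file)
    db.execute("PRAGMA journal_mode=WAL")
    # Foreign key enforcement is off by default in SQLite
    db.execute("PRAGMA foreign_keys=ON")
    return db

db = connect(":memory:")  # pass a real file path in your app
db.execute("""CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    email TEXT UNIQUE
)""")
# Indexes are one line each -- this is the "full SQL" part
db.execute("CREATE INDEX idx_users_name ON users(name)")
```

Two pragmas at connect time and you've covered most of what a "real" database setup buys you.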
Option 3: DuckDB (For Analytics)
import duckdb
# Query CSV files directly — no import needed
result = duckdb.sql("""
    SELECT category, COUNT(*), AVG(price)
    FROM 'products.csv'
    GROUP BY category
    ORDER BY 2 DESC
""")
print(result)
When you need to analyze data but don't need a persistent database. Full DuckDB tutorial →
Option 4: SQLite + Vector Search (For AI Projects)
import sqlite3
import sqlite_vec # pip install sqlite-vec
db = sqlite3.connect("knowledge.db")
db.enable_load_extension(True)
sqlite_vec.load(db)
# Now you have vector similarity search in SQLite
# No Pinecone, no Weaviate, no Docker
Build RAG apps and semantic search with zero infrastructure. Full tutorial →
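If you'd rather skip the extension entirely, brute-force similarity search in plain Python goes surprisingly far for small collections. This sketch is my own naive approach, not the sqlite-vec API: embeddings stored as JSON text, scored with cosine similarity.

```python
import json
import math
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, text TEXT, embedding TEXT)")

def add_doc(text, vec):
    # Store the embedding as JSON text in an ordinary column
    db.execute("INSERT INTO docs (text, embedding) VALUES (?, ?)",
               (text, json.dumps(vec)))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vec, k=3):
    # Exact brute-force scan -- fine for a few thousand docs
    rows = db.execute("SELECT text, embedding FROM docs").fetchall()
    scored = [(cosine(query_vec, json.loads(emb)), text) for text, emb in rows]
    return sorted(scored, reverse=True)[:k]
```

When the scan gets slow, that's your cue to load sqlite-vec; the schema barely changes.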
When You Actually Need PostgreSQL
- Multiple servers writing to the same database
- Complex permissions and row-level security
- PostGIS for geospatial queries
- Full-text search at scale (though SQLite FTS5 is surprisingly good)
- Your team already uses it and has the infrastructure
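About that FTS5 aside: it's compiled into the sqlite3 module that ships with CPython on most platforms, so full-text search is a few lines away. A minimal sketch (table and column names are mine):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# FTS5 virtual table: every column is indexed for full-text search
db.execute("CREATE VIRTUAL TABLE notes USING fts5(title, body)")
db.executemany("INSERT INTO notes VALUES (?, ?)", [
    ("SQLite tips", "Enable WAL mode for concurrent reads"),
    ("Postgres setup", "Docker compose and connection strings"),
])
# MATCH does the full-text query; bm25() is the built-in relevance ranking
rows = db.execute(
    "SELECT title FROM notes WHERE notes MATCH ? ORDER BY bm25(notes)",
    ("wal",),
).fetchall()
print(rows)  # [('SQLite tips',)]
```

If your build lacks FTS5, the CREATE VIRTUAL TABLE line raises an OperationalError, so it fails loudly rather than silently.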
The Side Project Graveyard
I've seen dozens of side projects die at the "set up the database" step:
- Choose between PostgreSQL, MySQL, MongoDB
- Install Docker (or sign up for a managed service)
- Configure connection strings, migrations, ORMs
- Realize you need a migration tool
- Spend 3 hours on Docker networking
- Give up and move to the next idea
With SQLite:
import sqlite3
db = sqlite3.connect("app.db")
Ship it.
But What About Deployment?
SQLite works perfectly on:
- VPS (DigitalOcean, Hetzner, Fly.io) — the .db file lives next to your app
- Serverless — Cloudflare D1, Turso (distributed SQLite)
- Desktop apps — Electron, Tauri
- Mobile — Every phone already has SQLite
The only place it doesn't work well: horizontal scaling with write-heavy workloads. But if your side project has that problem, congratulations — you have a real business, not a side project.
My Stack for Side Projects
Frontend: HTML + vanilla JS (or React if needed)
Backend: Python (FastAPI) or Node (Hono)
Database: SQLite (always)
Hosting: Fly.io ($0-5/mo) or VPS ($5/mo)
Total monthly cost: $0-5
No Docker. No Kubernetes. No managed database. No $50/month "hobby" tier.
What do you use for side project databases? Am I wrong about PostgreSQL being overkill for most projects? Fight me in the comments.
I write about practical dev tools and keeping things simple. Follow for more.
Need custom dev tools, scrapers, or API integrations? I build automation for dev teams. Email spinov001@gmail.com — or explore awesome-web-scraping.
More from me: 10 Dev Tools I Use Daily | 77 Scrapers on a Schedule | 150+ Free APIs