Running a database shouldn't cost you $50/month for a side project. Cloudflare D1 gives you a serverless SQLite database at the edge — with a generous free tier.
## What Is Cloudflare D1?
D1 is Cloudflare's serverless SQL database built on SQLite. It runs at the edge, close to your users, and integrates natively with Cloudflare Workers.
## Free Tier
- 5 million rows read per day
- 100,000 rows written per day
- 5 GB storage included
- Up to 10 databases on the free plan (50,000 on paid plans)
That's enough for most side projects, MVPs, and even small production apps.
## Quick Start
```bash
# Create a database
npx wrangler d1 create my-database

# Apply the schema
npx wrangler d1 execute my-database --file=./schema.sql
```
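Before a Worker can query the database, it needs a binding in `wrangler.toml`. A minimal example — the `database_id` below is a placeholder; `wrangler d1 create` prints the real one:

```toml
[[d1_databases]]
binding = "DB"                  # exposed to the Worker as env.DB
database_name = "my-database"
database_id = "<your-database-id>"
```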
```sql
-- schema.sql
CREATE TABLE users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  email TEXT UNIQUE NOT NULL,
  name TEXT NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
```
```javascript
// Worker code
export default {
  async fetch(request, env) {
    const { results } = await env.DB.prepare(
      "SELECT * FROM users WHERE email = ?"
    ).bind("user@example.com").all();
    return Response.json(results);
  },
};
```
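The read path has a write counterpart. Here's a minimal sketch — the `createUser` helper name is mine, not part of the official API — using `.run()`, which executes a statement without returning rows and reports metadata such as the last inserted row id:

```javascript
// Hypothetical helper: insert a user with D1's .run(), which
// resolves to { success, meta } rather than result rows.
async function createUser(env, email, name) {
  const { success, meta } = await env.DB
    .prepare("INSERT INTO users (email, name) VALUES (?, ?)")
    .bind(email, name)
    .run();
  return { success, lastRowId: meta.last_row_id };
}
```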
## Why D1 Stands Out
| Feature | D1 | Traditional DBs |
|---|---|---|
| Setup time | 30 seconds | 15-30 minutes |
| Edge latency | <10ms | 50-200ms |
| Scaling | Automatic | Manual |
| Cost (hobby) | Free | $5-50/month |
| Backups | Automatic | Manual setup |
## Real-World Use Case
An indie developer built a URL shortener on D1. The Worker serving it runs across Cloudflare's 300+ edge locations, so users get fast responses regardless of where they are. Total cost: $0/month for 50K daily clicks.
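The lookup in a shortener like that might look as follows; the `links(slug, target)` table and the `resolveSlug` name are illustrative assumptions, not the developer's actual code:

```javascript
// Hypothetical redirect handler for a D1-backed URL shortener.
// .first() returns the first matching row, or null if none.
async function resolveSlug(env, slug) {
  const row = await env.DB
    .prepare("SELECT target FROM links WHERE slug = ?")
    .bind(slug)
    .first();
  if (!row) return new Response("Not found", { status: 404 });
  return Response.redirect(row.target, 301);
}
```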
## Key Features
- Time Travel — query your database at any point in the last 30 days
- Read replication — reads can be served from locations closer to your users
- REST API — query from anywhere, not just Workers
- Import/Export — standard SQLite format
- Batch operations — multiple queries in a single round trip
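Batching deserves a sketch: `env.DB.batch()` takes an array of prepared statements and runs them in a single round trip, returning one result per statement. The signup flow and `audit_log` table below are made-up examples:

```javascript
// Hypothetical signup flow: two inserts, one round trip.
async function signUp(env, email, name) {
  const results = await env.DB.batch([
    env.DB.prepare("INSERT INTO users (email, name) VALUES (?, ?)").bind(email, name),
    env.DB.prepare("INSERT INTO audit_log (action) VALUES (?)").bind("signup"),
  ]);
  // one result object per statement, in order
  return results.every((r) => r.success);
}
```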
## When to Use D1
- Side projects and MVPs (free tier is generous)
- Read-heavy workloads (edge caching)
- Global applications (low latency everywhere)
- Cloudflare Workers ecosystem apps
## Get Started
Building a scraping pipeline that needs fast storage? Check out my web scraping actors on Apify — extract data and store it anywhere. Questions? Email spinov001@gmail.com