Cloudflare D1: Serverless SQLite at the Edge — Production Patterns
Cloudflare D1 is SQLite running inside Workers. You get sub-5ms query latency at the edge, zero ops, and $0 for most hobby and early-stage SaaS workloads. Here's how to use it in production.
Why D1 Over Planetscale or Neon
| | D1 | PlanetScale | Neon |
|---|---|---|---|
| Latency | <5ms (edge) | 10-50ms | 10-50ms |
| Free tier | 5M rows/day reads | 5B rows/mo | 0.5 GB storage |
| Pricing at scale | $0.001/M reads | $39+/mo | $19+/mo |
| Engine | SQLite | MySQL | PostgreSQL |
| Vendor | Cloudflare | PlanetScale | Neon |
D1 wins on latency for edge-deployed apps. The SQLite limitation is real but manageable.
Setup
```shell
npm create cloudflare@latest my-app -- --framework=next
cd my-app
wrangler d1 create my-db
```
Add to wrangler.toml:
```toml
[[d1_databases]]
binding = "DB"
database_name = "my-db"
database_id = "your-database-id-here"
```
Schema and Migrations
D1 uses SQL migrations, not an ORM migration system:
```shell
# Create migration
wrangler d1 migrations create my-db add-users-table
```
```sql
-- migrations/0001_add_users_table.sql
CREATE TABLE users (
  id TEXT PRIMARY KEY DEFAULT (lower(hex(randomblob(16)))),
  email TEXT NOT NULL UNIQUE,
  plan TEXT NOT NULL DEFAULT 'free',
  created_at INTEGER NOT NULL DEFAULT (unixepoch())
);

CREATE INDEX idx_users_email ON users(email);
```
```shell
# Apply locally
wrangler d1 migrations apply my-db --local

# Apply to production
wrangler d1 migrations apply my-db --remote
```
Querying in Workers
```typescript
// worker.ts
interface Env {
  DB: D1Database;
}

// Row shape matching the users table (the query below types its results with it)
interface User {
  id: string;
  email: string;
  plan: string;
  created_at: number;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { results } = await env.DB.prepare(
      'SELECT * FROM users WHERE plan = ? LIMIT ?'
    )
      .bind('pro', 100)
      .all<User>();
    return Response.json(results);
  },
};
```
With Drizzle ORM (Recommended)
```shell
npm install drizzle-orm
npm install -D drizzle-kit
```
```typescript
// db/schema.ts
import { sqliteTable, text, integer } from 'drizzle-orm/sqlite-core';
import { sql } from 'drizzle-orm';

export const users = sqliteTable('users', {
  id: text('id').primaryKey().$defaultFn(() => crypto.randomUUID()),
  email: text('email').notNull().unique(),
  plan: text('plan', { enum: ['free', 'pro', 'enterprise'] }).default('free').notNull(),
  createdAt: integer('created_at', { mode: 'timestamp' })
    .default(sql`(unixepoch())`)
    .notNull(),
});
```
```typescript
// db/index.ts
import { drizzle } from 'drizzle-orm/d1';
import * as schema from './schema';

interface Env {
  DB: D1Database;
}

export function getDb(env: Env) {
  return drizzle(env.DB, { schema });
}
```
```typescript
// Usage in worker
import { eq } from 'drizzle-orm';

const db = getDb(env);

// Type-safe queries
const proUsers = await db
  .select()
  .from(schema.users)
  .where(eq(schema.users.plan, 'pro'))
  .limit(50);
```
D1 in Next.js (Pages on Cloudflare)
```typescript
// app/api/users/route.ts
import { getRequestContext } from '@cloudflare/next-on-pages';
import { getDb } from '@/db';
import * as schema from '@/db/schema';

export const runtime = 'edge';

export async function GET() {
  const { env } = getRequestContext();
  const db = getDb(env);
  const users = await db.select().from(schema.users);
  return Response.json(users);
}
```
Production Patterns
Batch Writes (Avoid N+1)
```typescript
// ❌ N+1 writes — one network round-trip per row
for (const user of users) {
  await env.DB.prepare('INSERT INTO users (id, email) VALUES (?, ?)')
    .bind(user.id, user.email)
    .run();
}

// ✅ Batch in one round-trip
const stmts = users.map(user =>
  env.DB.prepare('INSERT INTO users (id, email) VALUES (?, ?)')
    .bind(user.id, user.email)
);
await env.DB.batch(stmts);
```
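Very large imports can exceed D1's per-batch limits (the exact caps are on Cloudflare's limits page), so it's common to split the statements into groups. A minimal sketch; `chunk` is a hypothetical helper, not a D1 API:

```typescript
// Hypothetical helper: split an array into groups of at most `size` items.
export function chunk<T>(items: T[], size: number): T[][] {
  const groups: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    groups.push(items.slice(i, i + size));
  }
  return groups;
}

// In a Worker, batch each group separately:
// for (const group of chunk(stmts, 50)) {
//   await env.DB.batch(group);
// }
```

Each group is still one round-trip, so even a 50-statement chunk size keeps imports far faster than row-by-row writes.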
Transactions
```typescript
// D1 runs a batch inside an implicit transaction
const results = await env.DB.batch([
  env.DB.prepare('UPDATE accounts SET balance = balance - ? WHERE id = ?')
    .bind(100, fromId),
  env.DB.prepare('UPDATE accounts SET balance = balance + ? WHERE id = ?')
    .bind(100, toId),
]);
// Both statements run atomically
```
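Because the batch is atomic, a failure in either statement rolls back both and the returned promise rejects, so wrap transfers accordingly. A minimal sketch, assuming that behavior; `moveFunds` is a hypothetical helper, and the two interfaces are stand-ins for the real D1 binding types:

```typescript
// Minimal stand-ins for D1's statement/database types (for illustration only).
interface Stmt {
  bind(...args: unknown[]): Stmt;
}
interface Db {
  prepare(sql: string): Stmt;
  batch(stmts: Stmt[]): Promise<unknown[]>;
}

// Hypothetical helper: returns false when the batch was rolled back.
export async function moveFunds(db: Db, fromId: string, toId: string, amount: number): Promise<boolean> {
  try {
    await db.batch([
      db.prepare('UPDATE accounts SET balance = balance - ? WHERE id = ?').bind(amount, fromId),
      db.prepare('UPDATE accounts SET balance = balance + ? WHERE id = ?').bind(amount, toId),
    ]);
    return true; // both UPDATEs committed
  } catch {
    return false; // neither UPDATE was applied
  }
}
```

In a Worker you would pass `env.DB` (which satisfies this interface shape) as the `db` argument.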
Read Replicas (Global Distribution)
With read replication enabled (it is opt-in, not automatic), D1 serves reads from the nearest Cloudflare PoP. Writes go to the primary (currently one region). For read-heavy workloads this is effectively global:

- User in Tokyo → reads from Tokyo PoP (<5ms)
- User in London → reads from London PoP (<5ms)
- All writes → primary region (~50ms per write, then async replication)
D1 Limits to Know
- Max DB size: 500MB (free), 10GB (Workers Paid)
- Max row size: 1MB
- No full-text search: Use Cloudflare Vectorize or an external search service
- No stored procedures: Application-layer logic only
- SQLite-specific type conventions: `TEXT` for UUIDs, `INTEGER` for timestamps (Unix epoch)
When Not to Use D1
- Multi-tenant apps where each tenant needs an isolated DB (use Workers KV or separate D1 per tenant instead)
- Apps with heavy write throughput (>1k writes/sec) — D1's write path still routes to a single region
- Complex JOIN-heavy analytics — better to use a separate OLAP store
For typical SaaS with more reads than writes, D1 is a compelling choice that eliminates database ops entirely.
Deploy a Full-Stack Edge App Today
The Workflow Automator MCP includes Cloudflare Workers deployment patterns, D1 schema templates, and Drizzle integration — get a production edge app running without the setup overhead.
Running D1 in production? What workload and what limits have you hit? Comment below.