Your API server is in us-east-1. Your users are in Tokyo, Berlin, and São Paulo. Every request crosses the ocean.
Cloudflare Workers runs your code in 300+ cities worldwide — milliseconds from every user. And the free tier is generous.
## Free Tier
- 100,000 requests/day
- 10ms CPU time per request
- Workers KV: 100K reads/day, 1K writes/day
- R2 Storage: 10GB stored, 10M Class A ops/month
- D1 Database: 5M rows read/day, 100K writes/day
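As a rough sanity check on the request quota, here is a back-of-the-envelope sketch (the numbers are copied from the list above; the workload figure is a hypothetical input):

```ts
// Rough sanity check: can the free tier absorb a given traffic level?
// Quota copied from the free-tier list above.
const FREE_TIER_REQUESTS_PER_DAY = 100_000;

// Hypothetical workload: average requests/second sustained over a full day.
function fitsFreeTier(avgRequestsPerSecond: number): boolean {
  const perDay = avgRequestsPerSecond * 60 * 60 * 24;
  return perDay <= FREE_TIER_REQUESTS_PER_DAY;
}

// Break-even is ~1.15 req/s sustained (100,000 / 86,400 seconds).
console.log(fitsFreeTier(1)); // true  (86,400/day)
console.log(fitsFreeTier(2)); // false (172,800/day)
```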
## Hello World (30 Seconds)
```sh
npm create cloudflare@latest my-worker
cd my-worker
```
```ts
// src/index.ts
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/api/hello") {
      return Response.json({ message: "Hello from the edge!", city: request.cf?.city });
    }
    return new Response("Not found", { status: 404 });
  },
};
```
```sh
npx wrangler deploy
# Live in 300+ cities in ~5 seconds
```
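The handler's routing logic can be exercised locally before deploying. A standalone sketch of the same path dispatch, using the standard Fetch `Request`/`Response` types available in Node 18+ (no Workers runtime, so the Cloudflare-only `request.cf` geo object is omitted):

```ts
// Same routing as the worker above, written as a plain function so it
// can be unit-tested without wrangler or the Workers runtime.
async function handle(request: Request): Promise<Response> {
  const url = new URL(request.url);
  if (url.pathname === "/api/hello") {
    // request.cf only exists on Cloudflare's edge; omitted locally.
    return Response.json({ message: "Hello from the edge!" });
  }
  return new Response("Not found", { status: 404 });
}

// Exercise both branches.
const ok = await handle(new Request("https://example.com/api/hello"));
const miss = await handle(new Request("https://example.com/nope"));
console.log(ok.status, miss.status); // 200 404
```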
## D1 — SQLite at the Edge
```ts
export default {
  async fetch(request: Request, env: Env) {
    // Create table. Use prepare().run() for multi-line SQL —
    // D1's exec() splits statements on newlines.
    await env.DB.prepare(`
      CREATE TABLE IF NOT EXISTS users (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL,
        email TEXT UNIQUE
      )
    `).run();

    // Insert with bound parameters
    await env.DB.prepare("INSERT INTO users (name, email) VALUES (?, ?)")
      .bind("Alice", "alice@example.com")
      .run();

    // Query
    const { results } = await env.DB.prepare("SELECT * FROM users").all();
    return Response.json(results);
  },
};
```
A managed SQLite database queryable from every Worker location, with optional read replication across Cloudflare's network.
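To illustrate the `prepare` → `bind` → `run`/`all` call shape outside the Workers runtime, here is a minimal in-memory stub with the same chainable interface. This is not D1 itself, just a sketch of the pattern; in a real Worker, `env.DB` is the binding configured in `wrangler.toml`:

```ts
// In-memory stub mimicking D1's chainable prepare/bind/run/all shape.
// Only understands the INSERT/SELECT pair from the example above.
type Row = Record<string, unknown>;

class StubD1 {
  private rows: Row[] = [];
  prepare(sql: string) {
    const self = this;
    let params: unknown[] = [];
    return {
      bind(...values: unknown[]) { params = values; return this; },
      async run() {
        if (sql.startsWith("INSERT")) {
          self.rows.push({ name: params[0], email: params[1] });
        }
        return { success: true };
      },
      async all() { return { results: self.rows }; },
    };
  }
}

const db = new StubD1();
await db.prepare("INSERT INTO users (name, email) VALUES (?, ?)")
  .bind("Alice", "alice@example.com")
  .run();
const { results } = await db.prepare("SELECT * FROM users").all();
console.log(results); // [{ name: "Alice", email: "alice@example.com" }]
```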
## KV — Global Key-Value Store
```ts
// Write (eventually consistent globally)
await env.KV.put("user:123", JSON.stringify({ name: "Alice" }), {
  expirationTtl: 3600, // 1 hour
});

// Read (from nearest edge location)
const user = await env.KV.get("user:123", "json");
```
## R2 — S3-Compatible Storage (No Egress Fees)
```ts
// Upload
await env.R2.put("images/photo.jpg", imageBuffer, {
  httpMetadata: { contentType: "image/jpeg" },
});

// Download — get() returns null if the key doesn't exist
const object = await env.R2.get("images/photo.jpg");
if (object === null) {
  return new Response("Not found", { status: 404 });
}
return new Response(object.body, {
  headers: { "Content-Type": object.httpMetadata?.contentType ?? "application/octet-stream" },
});
```
R2 is S3-compatible but with zero egress fees.
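R2 stores whatever `contentType` you hand it, so it's useful to derive one from the object key at upload time. A hypothetical helper (the extension table is an assumption, not part of the R2 API):

```ts
// Hypothetical helper: pick httpMetadata.contentType from the object
// key before calling env.R2.put(). The mapping is illustrative.
const CONTENT_TYPES: Record<string, string> = {
  jpg: "image/jpeg",
  jpeg: "image/jpeg",
  png: "image/png",
  webp: "image/webp",
  json: "application/json",
};

function contentTypeFor(key: string): string {
  const ext = key.split(".").pop()?.toLowerCase() ?? "";
  return CONTENT_TYPES[ext] ?? "application/octet-stream";
}

console.log(contentTypeFor("images/photo.jpg")); // image/jpeg
console.log(contentTypeFor("data/report"));      // application/octet-stream
```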
## Queues (Async Processing)
```ts
// Producer
await env.MY_QUEUE.send({ userId: "123", action: "sendEmail" });

// Consumer
export default {
  async queue(batch: MessageBatch<any>, env: Env) {
    for (const message of batch.messages) {
      await processMessage(message.body);
      message.ack();
    }
  },
};
```
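The consumer's per-message decision is worth spelling out: `ack()` removes a message from the queue, while `retry()` asks for re-delivery. A sketch of that logic using plain objects in place of the runtime's `MessageBatch`:

```ts
// Sketch of the consumer's ack/retry decision. Msg stands in for the
// Workers runtime's queue message type.
type Msg<T> = { body: T; ack: () => void; retry: () => void };

async function consume<T>(messages: Msg<T>[], process: (body: T) => Promise<void>) {
  for (const message of messages) {
    try {
      await process(message.body);
      message.ack();   // success: remove from the queue
    } catch {
      message.retry(); // failure: let the queue re-deliver later
    }
  }
}

// Example: one message succeeds, one throws.
const log: string[] = [];
const mk = (id: string): Msg<string> => ({
  body: id,
  ack: () => log.push(`ack:${id}`),
  retry: () => log.push(`retry:${id}`),
});
await consume([mk("a"), mk("b")], async (id) => {
  if (id === "b") throw new Error("transient failure");
});
console.log(log); // ["ack:a", "retry:b"]
```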
## Cron Triggers
```ts
export default {
  async scheduled(event: ScheduledEvent, env: Env) {
    // Runs on schedule (e.g., every hour)
    await cleanupExpiredSessions(env);
  },
};
```
```toml
# wrangler.toml
[triggers]
crons = ["0 * * * *"] # Every hour
```
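`cleanupExpiredSessions` above is left undefined; a hypothetical implementation might look like this, with a `Map` standing in for wherever sessions actually live (KV or D1 in practice):

```ts
// Hypothetical body for the cleanupExpiredSessions helper called from
// scheduled(). A Map stands in for the real session store.
type Session = { userId: string; expiresAt: number };

function cleanupExpiredSessions(sessions: Map<string, Session>, now: number): number {
  let removed = 0;
  for (const [id, session] of sessions) {
    if (session.expiresAt <= now) {
      sessions.delete(id); // safe: Map iteration tolerates deleting the current entry
      removed++;
    }
  }
  return removed;
}

const sessions = new Map<string, Session>([
  ["s1", { userId: "u1", expiresAt: 100 }],
  ["s2", { userId: "u2", expiresAt: 5_000 }],
]);
console.log(cleanupExpiredSessions(sessions, 1_000)); // 1
console.log(sessions.size);                           // 1
```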
## Workers vs Lambda vs Vercel Edge
| Feature | Lambda | Vercel Edge | Workers |
|---|---|---|---|
| Cold start | 100-500ms | ~0ms | ~0ms |
| Locations | ~30 regions | ~30 cities | 300+ cities |
| Free tier | 1M req/mo | 100K req/mo | 100K req/day |
| Storage | S3 | Blob | R2 (no egress) |
| Database | DynamoDB | Postgres | D1 (SQLite) |
| Max execution | 15 min | 30s | 10ms CPU (free) / 30s CPU (paid) |
Need edge-deployed APIs or data processing? I build web tools and scraping solutions. Email spinov001@gmail.com or explore my Apify tools.