I recently launched freedivingbase, a directory of freediving destinations and schools. The whole stack runs on Cloudflare's free tier, with single-digit-millisecond response times globally. Here's what the build looks like.
## The stack
- Astro SSR (`output: 'server'`) for the framework
- Cloudflare Workers as the runtime
- Cloudflare D1 for the database (SQLite at the edge)
- Cloudflare R2 for image storage, with Image Resizing for responsive delivery
- `caches.default` for edge caching of public GETs
- Arctic for Google OAuth (admin dashboard)
## Why this stack over Vercel/Supabase/etc.
I wanted three things: cheap, fast everywhere, and minimal infra to think about. Cloudflare's free tier covers Workers (100k requests/day), D1 (5M reads/day), R2 (10GB storage), and Image Resizing, all without leaving the same dashboard. For a content site that's mostly reads, that's more than enough.
The other big draw is that there's no concept of "cold starts" the way there is on Lambda: Workers run as V8 isolates, not containers. SSR responses come back in around 50ms even on the free tier.
## D1: normalized schema at the edge
D1 is just SQLite with a network layer, but treating it like a real relational database actually matters. The schema is fully normalized: countries, destinations, schools, certifications, etc. all live in separate tables with foreign keys. No JSON columns, no document-store patterns. The whole app uses about 12 tables.
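To make the relational shape concrete, here's roughly how three of those tables map to TypeScript row types; the names are my guesses, not the actual schema:

```ts
// Hypothetical row shapes: foreign keys are plain integer ids,
// with no JSON blobs or embedded documents anywhere.
interface Country {
  id: number;
  name: string;
  slug: string;
}

interface Destination {
  id: number;
  country_id: number; // FK -> countries.id
  name: string;
  slug: string;
}

interface School {
  id: number;
  destination_id: number; // FK -> destinations.id
  name: string;
  slug: string;
}
```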
Queries from a Worker:
```ts
// env.DB is the D1 binding declared in wrangler.toml
export async function getDestinationBySlug(env: Env, slug: string) {
  const result = await env.DB
    .prepare('SELECT * FROM destinations WHERE slug = ? LIMIT 1')
    .bind(slug)
    .first();
  return result;
}
```
D1 also supports prepared statements and batched queries, which I use heavily for the destination detail pages (one batch fetches the destination plus all its schools, conditions, and certifications).
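As an illustration, here's roughly what one of those batches looks like. The table and column names are hypothetical, but `batch()` itself is D1's API: it runs the statements in a single round trip and returns one result set per statement, in order.

```ts
// Sketch of a batched detail-page fetch (hypothetical schema names)
export async function getDestinationPage(env: Env, slug: string) {
  const [dest, schools, conditions] = await env.DB.batch([
    env.DB.prepare('SELECT * FROM destinations WHERE slug = ?').bind(slug),
    env.DB
      .prepare('SELECT s.* FROM schools s JOIN destinations d ON s.destination_id = d.id WHERE d.slug = ?')
      .bind(slug),
    env.DB
      .prepare('SELECT c.* FROM conditions c JOIN destinations d ON c.destination_id = d.id WHERE d.slug = ?')
      .bind(slug),
  ]);
  return {
    destination: dest.results[0],
    schools: schools.results,
    conditions: conditions.results,
  };
}
```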
## Edge caching pattern
The trick that makes this site feel instant is using `caches.default` directly inside Astro middleware. Public GET requests check the cache first; only cache misses hit D1.
```ts
import { defineMiddleware } from 'astro:middleware';

// locals.runtime (env, ctx, caches) is provided by @astrojs/cloudflare
export const onRequest = defineMiddleware(async (context, next) => {
  if (context.request.method !== 'GET') return next();
  const cache = caches.default;
  const cached = await cache.match(context.request);
  if (cached) return cached;
  const response = await next();
  context.locals.runtime.ctx.waitUntil(cache.put(context.request, response.clone()));
  return response;
});
```
The interesting part is invalidation. When an admin edits a school or destination, I purge the affected URLs:
```ts
// Purge the detail page plus the listing pages that embed it
await Promise.all([
  cache.delete(`https://freedivingbase.com/schools/${slug}`),
  cache.delete(`https://freedivingbase.com/schools/`),
  cache.delete(`https://freedivingbase.com/`),
]);
```
This avoids the classic stale-cache problem without needing a separate Redis or KV layer.
## Images
Original WebP files live in R2. Cloudflare Image Resizing handles every variant on the fly via `/cdn-cgi/image/` URLs:

```
/cdn-cgi/image/width=640,quality=75,format=auto/<r2-url>
```
`format=auto` negotiates AVIF for browsers that support it, WebP otherwise. `srcset` widths are `[400, 640]` for cards and `[640, 1024, 1440, 1920]` for hero images. No build step, no image pipeline.
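A tiny helper can generate those `srcset` strings. This is a sketch; the function names and sample URL are mine, not from the site:

```ts
// Build a /cdn-cgi/image/ variant URL for one display width
function resized(src: string, width: number): string {
  return `/cdn-cgi/image/width=${width},quality=75,format=auto/${src}`;
}

// Join the variants into a srcset string the browser picks from
function srcset(src: string, widths: number[]): string {
  return widths.map((w) => `${resized(src, w)} ${w}w`).join(', ');
}

const original = 'https://images.example.com/blue-hole.webp'; // placeholder R2 URL
const cardSrcset = srcset(original, [400, 640]);
const heroSrcset = srcset(original, [640, 1024, 1440, 1920]);
```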
## Auth
The admin dashboard uses Google OAuth via Arctic, which is by far the cleanest OAuth library I've used in TypeScript. Sessions are HTTP-only cookies; admin role is stored on the user record in D1. About 80 lines total for login + logout + session middleware.
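For flavor, here's a minimal sketch of the login kickoff, assuming Arctic v2's Google provider; the redirect URI, scopes, and variable names are my own stand-ins:

```ts
import { Google, generateState, generateCodeVerifier } from 'arctic';

// In practice these come from Worker secrets
declare const clientId: string;
declare const clientSecret: string;

const google = new Google(clientId, clientSecret, 'https://freedivingbase.com/auth/callback');

export function startLogin(): Response {
  const state = generateState();
  const codeVerifier = generateCodeVerifier();
  const url = google.createAuthorizationURL(state, codeVerifier, ['openid', 'email']);
  // state + verifier belong in HTTP-only cookies so the callback can validate them
  return new Response(null, {
    status: 302,
    headers: { Location: url.toString() },
  });
}
```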
## What surprised me
A few things, in no particular order:
- Astro middleware on Workers is genuinely composable. You can layer auth, caching, and logging without it feeling like Express.
- D1's local-dev story has gotten really good. `wrangler dev` runs against a local SQLite file that mirrors the schema, and `getPlatformProxy()` lets vitest tests hit a real D1 instance (see the sketch after this list).
- Image Resizing's `/cdn-cgi/image/` syntax is undersold in Cloudflare's docs. It's basically the killer feature for content sites.
- Bundle size on Workers is real (1MB compressed limit). I had to switch out a fuzzy-search library to stay under it.
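The vitest side of that looks roughly like this; a sketch assuming wrangler's `getPlatformProxy()` and an `Env` type that carries the `DB` binding:

```ts
import { getPlatformProxy } from 'wrangler';

// Proxies the bindings from wrangler.toml into Node, so tests
// hit the same local SQLite file as `wrangler dev`
const { env, dispose } = await getPlatformProxy<Env>();

const row = await env.DB
  .prepare('SELECT COUNT(*) AS n FROM destinations')
  .first<{ n: number }>();
console.log(`destinations: ${row?.n}`);

await dispose(); // tear the proxy down when the run finishes
```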
## Wrapping up
If you're building a content-heavy side project and don't want to think about infrastructure, this stack genuinely delivers. The whole site is at freedivingbase.com if you want to poke around.