盛永裕介

Why I'm betting static SSG beats dynamic AI rendering for directory SEO

The bet is specific: three directory sites, all Astro 5 SSG, all statically generated at build time, content refreshed via a nightly GitHub Actions cron. No server-side rendering. No edge functions. No dynamic personalization. Every page is a flat HTML file on a CDN.

My hypothesis is that this will outperform dynamically-rendered alternatives for the content types I'm targeting — AI model directories, open-source software comparisons, indie game recommendations. Six-month deadline: I'll know by November 2026 whether I was right.

What I'm actually betting on

Static pages served from a CDN are faster than dynamically-rendered pages at the p95 latency level, and Core Web Vitals scoring reflects this. TTFB from Vercel's edge with a pre-rendered HTML file is single-digit milliseconds. Server-rendered pages, even fast ones, add the overhead of a function invocation, database query, and render cycle.

For programmatic SEO — where I'm generating thousands of pages from a data pipeline — the question isn't "is SSG faster?" It is. The question is: does the freshness tradeoff matter enough to justify dynamic rendering's added complexity?

My pipeline runs nightly. A Turso libSQL database stores model records. The nightly refresh-content job fetches new models from HuggingFace, writes updated JSON files, and Astro rebuilds from those files. The gap between a new model appearing on HuggingFace and appearing on my site is about 24 hours, not milliseconds.

That's the bet: 24-hour staleness is acceptable for this content type, and the CDN speed advantage is worth more than real-time freshness.
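The transform at the heart of that nightly job can be sketched in TypeScript. All names here are illustrative, not the real pipeline code — the actual HuggingFace `/api/models` response carries more fields than this, and the real job ends by writing the entries to JSON files that Astro reads at build time:

```typescript
// Hypothetical shape of a HuggingFace /api/models record — only the
// fields this sketch uses (the real response has many more).
interface HFModel {
  id: string;
  likes: number;
  pipeline_tag?: string;
}

// The JSON record the static build consumes (illustrative schema).
interface ModelEntry {
  slug: string;      // URL-safe id used for /models/<slug>
  name: string;
  likes: number;
  task: string;
  fetchedAt: string; // ISO timestamp of the refresh run
}

// Pure transform: HuggingFace records in, build-ready entries out.
// Keeping this pure makes the nightly job trivially testable.
function toEntries(models: HFModel[], now: Date): ModelEntry[] {
  return models.map((m) => ({
    slug: m.id.toLowerCase().replace(/[^a-z0-9]+/g, "-"),
    name: m.id,
    likes: m.likes,
    task: m.pipeline_tag ?? "unknown",
    fetchedAt: now.toISOString(),
  }));
}
```

In the real workflow, a fetch of the models endpoint and an `fs.writeFile` per entry would wrap this, followed by a commit that triggers the Vercel rebuild.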

The strongest counterargument

Dynamic rendering advocates would point to search query context. If a user in Tokyo searches "best AI image models" and a user in Berlin searches the same, a dynamic server could theoretically tailor results — regional trending, language-specific comparisons, real-time availability. A static page can't do this without client-side hydration.

This counterargument has real weight. Google's Helpful Content guidance explicitly rewards content that serves the user's specific situation. A dynamically-tailored page could be more helpful.

Here's why I don't think it wins for my use case: the content I'm targeting is niche enough that query personalization adds marginal value. Someone searching "open-source alternative to Notion" isn't getting a meaningfully better experience from a localized result. The value is in comprehensive, well-organized information — not in knowing they're in Tokyo versus Berlin.

The counterargument is legitimately stronger for e-commerce (inventory changes, live pricing) or news (freshness is the whole product). For AI model directories, I don't believe it wins.

What the architecture actually looks like

The refresh pipeline keeps API call counts near zero for routine runs. In .github/workflows/refresh-content.yml, ANTHROPIC_API_KEY is deliberately absent:

# ANTHROPIC_API_KEY intentionally NOT set — pipeline uses built-in
# fallback templates for new entries (zero API cost). The weekly
# polish routine upgrades fallback entries to high-quality content
# using the Claude Code subscription.

New entries get template-based content immediately — so the page exists and is indexable — then get upgraded to Claude Haiku-generated content in a separate polish cycle. The sequencing:

  1. A new model appears on HuggingFace
  2. The nightly job fetches it, writes a template entry, triggers an Astro rebuild on Vercel
  3. Within 24 hours, /models/new-model-id exists with indexable content
  4. Within the next polish run, Claude Haiku replaces the template with substantive copy

This sequencing is only possible because I'm generating static pages. A dynamic renderer would need to either call the API on each request (expensive and slow) or maintain the same pre-generation pipeline anyway — at which point you've built a static-page pipeline and added server rendering on top of it for unclear benefit.
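The template step in that sequence can be sketched with hypothetical names (`templateEntry` and the `needsPolish` flag are mine, not the repo's; the real templates presumably produce richer copy):

```typescript
// Illustrative entry shape for the template-then-polish flow.
interface Entry {
  slug: string;
  name: string;
  task: string;
  description: string;
  needsPolish: boolean; // picked up by the later Claude polish pass
}

// Deterministic template copy: zero API cost, immediately indexable.
function templateEntry(name: string, task: string): Entry {
  const slug = name.toLowerCase().replace(/[^a-z0-9]+/g, "-");
  return {
    slug,
    name,
    task,
    description:
      `${name} is a ${task} model. This entry was generated ` +
      `automatically and will be expanded with a detailed write-up.`,
    needsPolish: true,
  };
}
```

The polish run would then query for `needsPolish: true` entries, replace `description` with generated copy, and clear the flag — so a page is never blocked on an API call.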

Why Turso libSQL and not Postgres

The data storage choice reinforces the SSG bet. Turso libSQL is SQLite at the edge — same file format locally and in production. The getClient() singleton in packages/shared/src/db/index.ts checks for TURSO_DATABASE_URL; if it's absent, it falls back to file:./data/local.db.
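A simplified sketch of that fallback, assuming the real getClient() wraps this URL choice around the libSQL client's createClient call (`resolveDbUrl` is my name for the extracted logic, not the repo's):

```typescript
// Resolve the database URL the same way in every environment:
// Turso in production, a local SQLite file everywhere else.
// Both sides speak the same libSQL file format.
function resolveDbUrl(env: Record<string, string | undefined>): string {
  return env.TURSO_DATABASE_URL ?? "file:./data/local.db";
}
```

The real singleton would pass this URL (plus an auth token when talking to Turso) to `createClient` from `@libsql/client` and cache the result.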

My CI environment and my laptop run the same code path. No Postgres container in CI. No drift between SQLite-in-dev and Postgres-in-prod. The tradeoff: Turso doesn't support stored procedures, and its SQLite lineage means a single writer at a time, so high-concurrency writes are off the table.

For a pipeline that writes serially — the refresh workflow sets max-parallel: 1 explicitly — concurrent write limits don't matter. For a static site where reads happen entirely at build time, they matter even less. This is the same philosophy as SSG vs. dynamic rendering: pick the simpler system if its tradeoffs don't hurt you. They don't hurt me yet. I'm noting "yet" because I genuinely don't know what the pain point is — maybe it's 500 concurrent users hitting the dashboard, maybe it's a schema migration I haven't had to do. I'll find out.
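For reference, the setting looks roughly like this in a workflow file. The matrix over three sites is my guess at the shape, not the actual refresh-content.yml:

```yaml
jobs:
  refresh:
    strategy:
      max-parallel: 1   # one site refreshes at a time; writes stay serial
      matrix:
        site: [site-a, site-b, site-c]   # hypothetical site list
    runs-on: ubuntu-latest
```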

The timeline and what counts as winning

By November 1, 2026 — six months from the April 23 launch — I expect to have:

  • At least 50 pages ranked in Google's top 100 for their target queries
  • Evidence that the 24-hour refresh cycle caused a rankings problem, or evidence that it didn't
  • Actual Turso + Vercel + Anthropic cost data, not estimates

I haven't published any traffic or ranking numbers yet — 12 days post-launch, nothing meaningful has been indexed. I'll post real data at the 30-day mark. What I'm committing to now is the criterion: static SSG "wins" if there's no meaningful rankings disadvantage from staleness or lack of personalization, at lower infrastructure cost and complexity than a dynamic alternative.

Winning doesn't require topping rankings. It requires not losing specifically because of the architectural choice.

What would change my mind

Three scenarios would push me toward dynamic rendering:

A direct competitor — same content category, comparable domain authority — builds a dynamic stack and outranks me by 20+ positions across a meaningful sample of queries within six months. That would be evidence the freshness tradeoff costs more than I believe. I'd need to see it in Search Console data, not inferred from domain comparisons.

Google explicitly signals, in documentation or confirmed ranking evidence, that directory-category content is rewarded for real-time freshness. I don't see this for "best AI model for X" queries, but I could be wrong.

My content pipeline requires more-than-daily refresh to be useful. Right now, "AI model directory updated daily" feels defensible. If HuggingFace release velocity accelerates to the point where day-old data is materially misleading — comparisons breaking because a model changed its license, pricing, or availability — I'd need to reconsider.

None of these feel likely in the short term. But I'm writing them down now, before results come in, because you can't fairly evaluate a bet if you move the goalposts after the outcome.

Part of an ongoing 6-month experiment running three AI-curated directory sites. The technical claims here are real; this article was AI-assisted.
