suren_
Shipping translations to the edge before your PR merges

If your i18n workflow looks like "edit JSON → push → wait → translator returns file → merge → deploy → translations live," you've already lost a day.

I built i18now because that loop felt absurd in 2026. Here's what changed when we flipped it.

The old loop

  1. Dev edits en.json
  2. PR opens, copy gets exported to a TMS (translation management system)
  3. Translators (or LLM jobs) work async
  4. Files come back, get reviewed, merged
  5. Deploy → translations are live

Time-to-live: hours to days. Worse, the source of truth lives in two places (repo + TMS), so drift is inevitable.

The new loop

  1. Dev edits a key in i18now (or pushes from code via SDK)
  2. AI generates translations across locales using your chosen model (BYOK — GPT, Claude, Gemini, etc.)
  3. You review/approve in-context
  4. Translations publish to a CDN edge cache
  5. App fetches them at runtime — no rebuild

Time-to-live: seconds.
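The runtime half of this loop is just a fetch against an edge URL. Here's a minimal sketch, assuming a hypothetical bundle layout (the real i18now endpoint and SDK surface may differ):

```typescript
// Hypothetical edge endpoint layout -- the real i18now API may differ.
const EDGE_BASE = "https://cdn.example.com/i18n";

// Build the bundle URL for a project/locale, optionally pinned to a version.
function bundleUrl(project: string, locale: string, version = "latest"): string {
  return `${EDGE_BASE}/${project}/${version}/${locale}.json`;
}

// Fetch translations at runtime -- no rebuild needed when copy changes.
async function loadTranslations(
  project: string,
  locale: string
): Promise<Record<string, string>> {
  const res = await fetch(bundleUrl(project, locale));
  if (!res.ok) throw new Error(`Failed to load ${locale} bundle: ${res.status}`);
  return res.json();
}
```

Because the app resolves the bundle at runtime, approving a translation only has to invalidate the edge cache, not trigger a build.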

How it works under the hood

  • Edge delivery: Cloudflare CDN serves locale bundles globally with sub-50ms TTFB
  • BYOK model routing: Pick the model per language or per project — Claude for nuance, Gemini for cost, GPT for fallback
  • SDKs: Drop-in for Next.js, Nuxt, React, Vue
  • Runtime fetch + ISR (incremental static regeneration) fallback: Translations update without redeploy, but you can pin versions for stability
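The version-pinning trade-off in that last bullet boils down to a caching decision. A sketch under assumed semantics (pinned bundles are immutable; "latest" should always revalidate at the edge; URL shape is hypothetical):

```typescript
// Sketch: choose fetch caching based on whether a bundle version is pinned.
// Assumption: pinned bundles never change, so they can be cached forever;
// "latest" must revalidate so copy edits show up without a redeploy.
function cacheOptions(version: string) {
  return version === "latest"
    ? { cache: "no-store" as const }    // always check the edge for fresh copy
    : { cache: "force-cache" as const }; // immutable pinned bundle
}

async function loadBundle(locale: string, version = "latest") {
  const url = `https://cdn.example.com/i18n/my-project/${version}/${locale}.json`;
  const res = await fetch(url, cacheOptions(version));
  return res.json();
}
```

In a Next.js App Router setup you'd get the ISR-style behavior by swapping the cache option for a `revalidate` interval on the fetch.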

What this unlocks

  • Marketing tweaks copy at 11pm? Live in seconds, no eng on-call
  • New language for a launch? Generate, review, ship in minutes
  • A/B testing copy? Just version the keys
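"Version the keys" can be as simple as publishing variant keys and bucketing users deterministically at render time. A hypothetical sketch (the `@a`/`@b` key suffix convention is mine, not i18now's):

```typescript
// Hypothetical variant keys published alongside the normal bundle.
const messages: Record<string, string> = {
  "cta.buy@a": "Buy now",
  "cta.buy@b": "Get yours today",
};

// Deterministic bucket: the same user always sees the same variant.
function variantFor(userId: string): "a" | "b" {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 2 === 0 ? "a" : "b";
}

// Resolve a key to the user's variant, falling back to the key itself.
function t(key: string, userId: string): string {
  return messages[`${key}@${variantFor(userId)}`] ?? key;
}
```

Since both variants live in the same edge bundle, flipping or ending the test is just another publish, with no deploy.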

What I'm still figuring out

  • Approval workflows for regulated industries (legal, medical)
  • Translation memory + glossary handoff between AI and human reviewers
  • Pricing for very high-volume edge fetches

If you've solved any of these in your stack, I'd love to hear how. And if the loop above resonates, give i18now a try — free tier is real, no card required.
