If your i18n workflow looks like "edit JSON → push → wait → translator returns file → merge → deploy → translations live," you've already lost a day.
I built i18now because that loop felt absurd in 2026. Here's what changed when we flipped it.
## The old loop
- Dev edits `en.json`
- PR opens, copy gets exported to a TMS
- Translators (or LLM jobs) work async
- Files come back, get reviewed, merged
- Deploy → translations are live
Time-to-live: hours to days. Worse, the source of truth lives in two places (repo + TMS), so drift is inevitable.
## The new loop
- Dev edits a key in i18now (or pushes from code via SDK)
- AI generates translations across locales using your chosen model (BYOK — GPT, Claude, Gemini, etc.)
- You review/approve in-context
- Translations publish to a CDN edge cache
- App fetches them at runtime — no rebuild
Time-to-live: seconds.
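The runtime half of that loop can be sketched roughly like this: the app resolves a key from a CDN-fetched bundle and falls back to a copy shipped at build time, so a cache miss or network failure never breaks the UI. The URL shape, bundle format, and function names here are my assumptions for illustration, not i18now's actual API.

```typescript
// Minimal sketch of runtime translation resolution with a build-time fallback.
// Assumed bundle format: flat key → string map per locale.
type Bundle = Record<string, string>;

// Defaults bundled at build time — the safety net if the CDN is unreachable.
const bundledDefaults: Bundle = { "checkout.title": "Checkout" };

// Assumed CDN path layout; pin `version` instead of "latest" for stability.
function bundleUrl(project: string, locale: string, version = "latest"): string {
  return `https://cdn.example.com/${project}/${locale}/${version}.json`;
}

// Prefer the freshly fetched bundle, then the bundled default, then the key itself.
function t(key: string, remote: Bundle | null): string {
  return remote?.[key] ?? bundledDefaults[key] ?? key;
}
```

In practice the remote bundle would come from `fetch(bundleUrl(...))` on app start; the point is that resolution degrades gracefully at every step.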
## How it works under the hood
- Edge delivery: Cloudflare CDN serves locale bundles globally with sub-50ms TTFB
- BYOK model routing: Pick the model per language or per project — Claude for nuance, Gemini for cost, GPT for fallback
- SDKs: Drop-in for Next.js, Nuxt, React, Vue
- Runtime fetch + ISR fallback: Translations update without redeploy, but you can pin versions for stability
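Per-locale model routing under BYOK can be as simple as a lookup table with a project-wide fallback. The model names and the routing table below are illustrative assumptions, not i18now's real configuration schema.

```typescript
// Sketch of BYOK model routing: each locale maps to a preferred model,
// with one fallback model for everything else.
type Model = "claude" | "gemini" | "gpt";

const routing: Partial<Record<string, Model>> = {
  ja: "claude", // nuance-heavy locale (hypothetical choice)
  hi: "gemini", // cost-sensitive, high-volume (hypothetical choice)
};

function pickModel(locale: string, fallback: Model = "gpt"): Model {
  return routing[locale] ?? fallback;
}
```

A table like this is easy to review in a PR, which matters once per-language model choices become a cost and quality lever.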
## What this unlocks
- Marketing tweaks copy at 11pm? Live in seconds, no eng on-call
- New language for a launch? Generate, review, ship in minutes
- A/B testing copy? Just version the keys
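"Just version the keys" for A/B tests might look like this sketch: each variant is a key suffix, and a deterministic hash of the user id picks the bucket. The key-naming scheme (`@a`, `@b`) and the bucketing function are assumptions for illustration.

```typescript
// Sketch of A/B-testing copy via versioned keys: two variants of one key,
// selected by a stable hash of the user id so each user always sees the same copy.
const bundle: Record<string, string> = {
  "cta.buy@a": "Buy now",
  "cta.buy@b": "Get yours today",
};

// Deterministic bucket: sum of char codes mod variant count (toy hash).
function variantFor(userId: string, variants = ["a", "b"]): string {
  let h = 0;
  for (const ch of userId) h = (h + ch.charCodeAt(0)) % variants.length;
  return variants[h];
}

function tVariant(key: string, userId: string): string {
  return bundle[`${key}@${variantFor(userId)}`] ?? key;
}
```

Because variants are ordinary keys, they flow through the same generate/review/publish pipeline as any other copy change.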
## What I'm still figuring out
- Approval workflows for regulated industries (legal, medical)
- Translation memory + glossary handoff between AI and human reviewers
- Pricing for very high-volume edge fetches
If you've solved any of these in your stack, I'd love to hear how. And if the loop above resonates, give i18now a try — free tier is real, no card required.