Last month I shipped a SaaS app in 6 languages. The translation part took me 2 days. Not 2 weeks. Two days.
If you've ever set up internationalization in a production app, you know that sounds suspicious. The typical i18n story goes like this: you spend a day configuring the library, another day wiring up translation files, then you lose the next two weeks chasing down missing keys, syncing JSON files back and forth with translators, and praying that your build doesn't break because someone added a comma in the wrong place.
I want to share what actually worked for me, and the specific decisions that saved me from the usual i18n death spiral.
## The problem nobody warns you about
Every i18n tutorial starts the same way: install a library, create a locales/ folder, add some JSON files. Done!
Except that's only 10% of the work.
The other 90% is:
- Key management - Who decides the key names? What happens when you rename a component and forget to update the key? What about unused keys from features you deleted 3 months ago?
- Translation sync - Your translator works in a spreadsheet or a separate SaaS tool. You work in Git. Someone has to bridge that gap manually, every single sprint.
- Deployment coupling - You fix a typo in the French translation. Now you need a full redeploy to push that one-character change to production.
- Type safety - You call t("header.buttn_text") with a typo, and you don't find out until a user reports a blank button in production.
These are the problems that make i18n painful at scale. The library choice matters far less than your workflow around it.
## My setup (and why I chose it)
I'll walk through the actual stack I used, with code. The app is a Next.js 14 project with TypeScript.
### Step 1: Pick a translation format
I went with ICU MessageFormat. Not because it's trendy, but because it handles pluralization without ugly hacks:
```
// Bad: separate keys for every plural form
"items_zero": "No items",
"items_one": "1 item",
"items_other": "{count} items"

// Good: one ICU key handles all cases
"items": "{count, plural, =0 {No items} one {1 item} other {{count} items}}"
```
ICU is the format used by react-intl (FormatJS) and @better-i18n/use-intl. It's also what most professional translators already know.
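To see what ICU's plural machinery is doing, here's a small sketch that resolves a plural message by hand with the standard Intl.PluralRules API. The forms table mirrors the "items" key above; `selectPlural` is an illustrative helper name, not part of any library mentioned here.

```typescript
// Message forms keyed the way an ICU plural argument is: exact matches
// like "=0" first, then CLDR plural categories ("one", "other", ...).
const forms: Record<string, string> = {
  '=0': 'No items',
  one: '1 item',
  other: '{count} items',
};

// Hypothetical helper: pick the right form for a count, the way an ICU
// formatter would. Exact matches win before plural categories apply.
function selectPlural(count: number, locale = 'en'): string {
  const exact = forms[`=${count}`];
  if (exact !== undefined) return exact;
  const category = new Intl.PluralRules(locale).select(count); // e.g. 'one'
  const pattern = forms[category] ?? forms.other;
  return pattern.replace('{count}', String(count));
}
```

The exact-match step is why `=0 {No items}` beats the `other` branch for zero, even in locales where zero falls into the "other" category.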
### Step 2: Set up the provider
Here's the minimal setup:
```tsx
// app/[locale]/layout.tsx
import { BetterI18nProvider } from '@better-i18n/use-intl';
import { getMessages } from '@better-i18n/use-intl/server';

export default async function LocaleLayout({
  children,
  params: { locale },
}: {
  children: React.ReactNode;
  params: { locale: string };
}) {
  const messages = await getMessages({
    project: 'my-app',
    locale,
  });

  return (
    <BetterI18nProvider locale={locale} messages={messages}>
      {children}
    </BetterI18nProvider>
  );
}
```
The getMessages call fetches translations from a CDN at build/request time. No local JSON files to maintain.
### Step 3: Use translations in components
```tsx
import { useTranslations } from '@better-i18n/use-intl';

function PricingCard({ plan }) {
  const t = useTranslations('pricing');

  return (
    <div>
      <h3>{t('plan.title', { name: plan.name })}</h3>
      <p>{t('plan.description')}</p>
      <span>{t('plan.price', { amount: plan.price })}</span>
      <button>{t('cta')}</button>
    </div>
  );
}
```
Nothing revolutionary here. Every i18n library has a t() function. The difference is what happens next.
### Step 4: Handle routing
For multilingual SEO, you need locale-prefixed URLs: /en/pricing, /de/pricing, /fr/pricing.
With Next.js App Router:
```ts
// middleware.ts
import { createI18nMiddleware } from '@better-i18n/next';

export default createI18nMiddleware({
  locales: ['en', 'de', 'fr', 'es', 'pt', 'ja'],
  defaultLocale: 'en',
});

export const config = {
  matcher: ['/((?!api|_next|.*\\..*).*)'],
};
```
This middleware detects the user's preferred language from the Accept-Language header, checks if the URL already has a locale prefix, and redirects accordingly.
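For intuition, here's roughly what that Accept-Language negotiation looks like as a standalone function. This is a sketch of the general technique, not the middleware's actual internals; `negotiateLocale` is a made-up name.

```typescript
// Pick the best supported locale from an Accept-Language header.
// Illustrative only; real middleware also handles cookies and URL prefixes.
function negotiateLocale(
  acceptLanguage: string,
  supported: string[],
  defaultLocale: string,
): string {
  // Parse "de-DE,de;q=0.9,en;q=0.8" into tags sorted by quality value.
  const tags = acceptLanguage
    .split(',')
    .map((part) => {
      const [tag, q] = part.trim().split(';q=');
      return { tag: tag.toLowerCase(), q: q ? parseFloat(q) : 1 };
    })
    .sort((a, b) => b.q - a.q);

  for (const { tag } of tags) {
    // Exact match first, then the language-only prefix ("de-DE" -> "de").
    if (supported.includes(tag)) return tag;
    const base = tag.split('-')[0];
    if (supported.includes(base)) return base;
  }
  return defaultLocale;
}
```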
Important for SEO: use 301 redirects (not 302) for locale redirects. A 302 tells Google "this is temporary," so it may keep the redirecting URL indexed instead of consolidating signals on the localized page.
### Step 5: Add hreflang tags
This is where most tutorials stop, and where most sites get penalized. Every localized page needs <link rel="alternate" hreflang="x"> tags pointing to all its language variants:
```tsx
// app/[locale]/layout.tsx
export function generateMetadata({ params: { locale } }) {
  const locales = ['en', 'de', 'fr', 'es', 'pt', 'ja'];
  const path = /* current path without locale */;

  return {
    alternates: {
      canonical: `https://myapp.com/${locale}/${path}`,
      languages: Object.fromEntries(
        locales.map(l => [l, `https://myapp.com/${l}/${path}`])
      ),
    },
  };
}
```
Rules that took me way too long to figure out:
- Every hreflang URL must return HTTP 200. If /en/about redirects to /about, Google ignores the hreflang and you get "conflicting hreflang" errors in Search Console.
- Include an x-default hreflang pointing to your primary language. This tells Google what to show users whose language you don't support.
- Hreflang tags must be bidirectional. If /en/about says "my French version is /fr/about", then /fr/about must also say "my English version is /en/about".
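The bidirectionality rule is mechanical enough to check in CI. Here's a hypothetical helper (not from any library) that takes each page's hreflang map and flags missing return links:

```typescript
// Each key is a page URL; its value maps locale -> alternate URL,
// mirroring the hreflang tags that page emits.
type HreflangMap = Record<string, Record<string, string>>;

// Report every "A links to B, but B does not link back to A" violation.
function findMissingReturnLinks(pages: HreflangMap): string[] {
  const errors: string[] = [];
  for (const [url, alternates] of Object.entries(pages)) {
    for (const target of Object.values(alternates)) {
      if (target === url) continue; // self-reference is fine
      const back = pages[target];
      if (!back || !Object.values(back).includes(url)) {
        errors.push(`${url} -> ${target} has no return link`);
      }
    }
  }
  return errors;
}
```

Running something like this against a crawl of your sitemap catches the "conflicting hreflang" class of errors before Search Console does.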
### Step 6: Structured data per locale
If you have Organization, Product, or Article schema markup, the name and description fields should match the page language:
```ts
const structuredData = {
  '@context': 'https://schema.org',
  '@type': 'SoftwareApplication',
  name: t('schema.appName'),
  description: t('schema.appDescription'),
  applicationCategory: 'DeveloperApplication',
  operatingSystem: 'Web',
  offers: {
    '@type': 'Offer',
    price: 0,
    priceCurrency: 'USD',
  },
};
```
Google specifically looks for language consistency between the page content and structured data. A page in German with English schema markup sends mixed signals.
## The workflow that actually scales
Here's where the library comparison stops mattering and the workflow starts mattering.
### The old way (what I did before)
1. Developer adds t("newFeature.title") in code
2. Developer manually adds "newFeature.title": "" to en.json
3. Developer commits, pushes, creates PR
4. PM exports en.json, uploads to translation tool
5. Translator translates in the tool
6. PM downloads de.json, fr.json, es.json...
7. PM creates a PR with the new JSON files
8. Merge conflicts. Always merge conflicts.
9. Repeat steps 4-8 for every sprint
This workflow has a person-shaped bottleneck in the middle. The PM becomes a full-time JSON file courier.
### The Git-native way
What I do now:
1. Developer adds t("newFeature.title") in code
2. CLI scans the codebase automatically, discovers new keys
3. AI translates with glossary-aware context
4. Translations land as a GitHub PR for review
5. Approved translations deploy to CDN instantly
Steps 2-5 are automated. No JSON files to commit. No merge conflicts. No PM spending their Tuesday copying cells from Google Sheets.
The key insight: translations should flow through the same system as your code (Git), but deploy independently (CDN).
When a translator fixes a typo in the Spanish copy, that fix goes live in seconds through the CDN. No build. No deploy. No release cycle.
### Automatic key discovery
This is the feature I didn't know I needed until I had it. The CLI scans your codebase for t() calls and builds a manifest of every translation key:
```bash
npx @better-i18n/cli scan
```

Output:

```
Discovered 847 translation keys
- 12 new keys (not yet translated)
- 3 unused keys (safe to remove)
- 832 up to date
```
Those 3 unused keys? They're from a feature I removed last month. Without the scanner, they'd sit in the JSON files forever, confusing every translator who encounters them.
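For intuition, here's a naive sketch of what key discovery does: find t('…') calls with a regex, then diff them against the keys you already have translations for. A real scanner parses the AST to handle namespaces and dynamic keys; `scanKeys` and `diffKeys` are illustrative names, not the CLI's API.

```typescript
// Naive regex-based key discovery over a source string.
function scanKeys(source: string): Set<string> {
  const keys = new Set<string>();
  for (const match of source.matchAll(/\bt\(\s*['"]([^'"]+)['"]/g)) {
    keys.add(match[1]);
  }
  return keys;
}

// Diff discovered keys against the set of already-translated keys.
function diffKeys(inCode: Set<string>, translated: Set<string>) {
  return {
    missing: [...inCode].filter((k) => !translated.has(k)), // new, untranslated
    unused: [...translated].filter((k) => !inCode.has(k)),  // safe to remove
  };
}
```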
## Framework-specific gotchas

### Next.js App Router
Server Components can't use useTranslations() (it's a hook). You need the server import:
```tsx
// Server Component
import { getTranslations } from '@better-i18n/use-intl/server';

export default async function Page({ params: { locale } }) {
  const t = await getTranslations({ locale, namespace: 'home' });
  return <h1>{t('title')}</h1>;
}
```

```tsx
// Client Component
'use client';
import { useTranslations } from '@better-i18n/use-intl';

export function CTAButton() {
  const t = useTranslations('home');
  return <button>{t('cta')}</button>;
}
```
### React (Vite / CRA)
No server components, so everything uses the hook:
```tsx
import { BetterI18nProvider, useTranslations } from '@better-i18n/use-intl';

function App() {
  return (
    <BetterI18nProvider
      project="my-app"
      locale={detectedLocale}
      messages={messages}
    >
      <Router />
    </BetterI18nProvider>
  );
}
```
### Vue 3
```vue
<script setup>
import { useI18n } from '@better-i18n/vue';

const { t } = useI18n();
</script>

<template>
  <h1>{{ t('home.title') }}</h1>
  <p>{{ t('home.description') }}</p>
</template>
```
### React Native (Expo)
Mobile has an extra requirement: offline support. Users open your app on a plane and you can't fetch translations from a CDN.
```tsx
import { BetterI18nExpoProvider } from '@better-i18n/expo';
import fallback from './locales/en.json';

export default function App() {
  return (
    <BetterI18nExpoProvider
      projectId="my-app"
      defaultLocale="en"
      fallbackMessages={fallback}
    >
      <Navigation />
    </BetterI18nExpoProvider>
  );
}
```
The Expo provider caches translations locally and uses the bundled fallback when offline. When connectivity returns, it pulls fresh translations from the CDN in the background.
## The SEO checklist I wish I had on day one
After getting 1,775 hreflang errors in one Semrush crawl (true story), I built myself a checklist. Here it is:
- [ ] Every locale URL returns HTTP 200 (not a redirect)
- [ ] Canonical tags use the locale-prefixed URL (/en/about, not /about)
- [ ] Hreflang tags are bidirectional across all locales
- [ ] x-default hreflang points to your primary language
- [ ] Sitemap includes all locale variants
- [ ] Structured data fields are translated (not English-only)
- [ ] <html lang="xx"> matches the page locale
- [ ] <title> tags are under 70 characters (watch out for German: it runs roughly 30% longer than English)
- [ ] Redirects use 301, not 302/307
- [ ] OG tags are localized (og:title, og:description, og:locale)
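Several of these items are easy to enforce in CI. As one example, a tiny sketch (hypothetical helper, made-up locale data) that flags localized titles over the 70-character budget:

```typescript
// Return the locales whose <title> text exceeds the character budget.
// Useful as a CI guard: German and French titles routinely blow past
// limits that the English original fits comfortably.
function oversizedTitles(
  titles: Record<string, string>,
  max = 70,
): string[] {
  return Object.entries(titles)
    .filter(([, title]) => title.length > max)
    .map(([locale]) => locale);
}
```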
## AI-powered translation with guard rails
I'll be honest: I use AI translation for the first pass. It saves days of work. But raw machine translation shipped straight to production is a bad idea.
What works is a review workflow:
- AI translates with access to your glossary (so it knows "workspace" is always "Arbeitsbereich" in German, not "Arbeitsplatz")
- Native speaker reviews the AI output, approves or edits
- Approved translations deploy to CDN
The glossary part is critical. Without it, AI translates the same term differently across pages. Your "Dashboard" becomes "Armaturenbrett" on one page and "Instrumententafel" on another.
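A glossary check is also straightforward to sketch: if the English source uses a glossary term, the translation must contain the approved target term. Real tools do this with tokenization and morphology; `checkGlossary` and the German examples below are purely illustrative.

```typescript
// Glossary maps a source term to its single approved translation.
type Glossary = Record<string, string>;

// Flag translations that should contain an approved glossary term but don't.
function checkGlossary(
  source: string,
  translation: string,
  glossary: Glossary,
): string[] {
  const violations: string[] = [];
  for (const [term, required] of Object.entries(glossary)) {
    if (
      source.toLowerCase().includes(term.toLowerCase()) &&
      !translation.includes(required)
    ) {
      violations.push(`expected "${required}" for "${term}"`);
    }
  }
  return violations;
}
```

Run this over every AI-translated string before it reaches a human reviewer, and the reviewer only has to judge phrasing, not hunt for terminology drift.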
## What I'd do differently
If I started over:
**Set up i18n on day one, not day 100.** Retrofitting t() calls into 200 components is brutal. Starting with i18n means every new component uses translation keys from the start.

**Use ICU format from the beginning.** I started with simple key-value JSON and had to migrate when I needed pluralization. ICU handles plurals, dates, numbers, and gender from day one.

**Automate the key discovery early.** The CLI scanner caught 47 unused keys in my codebase. That's 47 strings translators were spending time on for no reason.

**Don't skip the SEO fundamentals.** Hreflang, canonical tags, and locale routing are not optional if you want organic traffic from non-English markets. I learned this by losing 3 months of German organic traffic to indexing issues.
## Wrapping up
The library you pick for t() calls matters less than you think. What matters is:
- Can your translators work without needing a developer to shuffle files around?
- Can you fix a translation without triggering a full deploy?
- Do you know which keys are unused and which are missing?
- Are your localized pages actually indexed correctly?
If the answer to any of those is "no", you have a workflow problem, not a library problem.
I'm using Better i18n for my projects. It handles the Git sync, CDN delivery, AI translation, and key discovery parts so I can focus on building features instead of managing JSON files. The free tier covers small projects; the Pro plan at $19/month works for production apps with multiple locales.
Whatever tool you pick, get the workflow right first. The code is the easy part.