Last project I had to internationalize an app with around 200 keys across 5 languages. I figured it would take a morning. It took most of a day, and the interesting part wasn't the translating. It was everything else.
The actual translations were fine. DeepL, Google Translate, whatever. Ten minutes. What killed me was the workflow around it:
- Creating 5 copies of every JSON file and keeping them in sync.
- Discovering that {name} got translated to {nombre} in the Spanish file and broke interpolation.
- Finding a typo (settigns.title) that rendered as blank in production, because there's no runtime error for a missing key.
- Realizing a week later that I forgot three keys in Japanese entirely.
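A check for that kind of placeholder mangling is small enough to run in CI. Here's a hypothetical sketch (these names are illustrative, not i1n's API) that compares the {var} placeholders in each translated string against the source:

```typescript
// Sketch: detect interpolation variables that got translated along with
// the text. All names here are illustrative, not i1n's actual API.
type Messages = Record<string, string>;

function placeholders(s: string): Set<string> {
  // Collect {name}-style interpolation variables.
  return new Set(s.match(/\{[^}]+\}/g) ?? []);
}

function findMangledVariables(source: Messages, target: Messages): string[] {
  const bad: string[] = [];
  for (const [key, srcText] of Object.entries(source)) {
    const tgtText = target[key];
    if (tgtText === undefined) continue; // missing keys are a separate check
    const src = placeholders(srcText);
    const tgt = placeholders(tgtText);
    const matches = src.size === tgt.size && [...src].every((p) => tgt.has(p));
    if (!matches) bad.push(key);
  }
  return bad;
}

const en = { greeting: "Hello, {name}!" };
const es = { greeting: "¡Hola, {nombre}!" }; // {name} was translated
findMangledVariables(en, es); // ["greeting"]
```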
The thing that surprised me was the ratio. The translation itself was maybe 10% of the time. The other 90% was file management, variable protection, and chasing silent failures.
What silent failures actually look like
This is the part nobody talks about. Your app doesn't crash when a translation key is wrong. It just renders nothing. Or it renders common.header.subtitl as literal text. Or it works in English and Spanish but breaks in Japanese because you have {count} items and the variable got mangled.
You don't catch these in development because you're developing in English. They show up in production, reported by users who don't speak English, often weeks later.
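The silence comes from how most lookup functions fall back. A minimal sketch of the typical pattern (hypothetical, not any specific library's implementation):

```typescript
// Why missing keys fail silently: a typical lookup falls back to the key
// itself (or an empty string) instead of throwing. Hypothetical sketch.
const messages: Record<string, string> = {
  "common.header.subtitle": "Welcome back",
};

function t(key: string): string {
  // No runtime error: a typo'd key just renders as literal text.
  return messages[key] ?? key;
}

console.log(t("common.header.subtitle")); // "Welcome back"
console.log(t("common.header.subtitl"));  // the typo'd key itself, shipped to users
```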
How I stopped doing it manually
I built i1n to automate the parts that kept breaking. The whole workflow now:
i1n push --translate es,fr,de,ja,pt
One command. Variables stay untouched in every language. And it generates a .d.ts file, so if I typo a key, TypeScript catches it before I even run the app.
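The generated types work roughly like this. This is an illustrative sketch of the principle, not i1n's actual .d.ts output:

```typescript
// Sketch: a union of valid keys makes typos a compile-time error.
// i1n's generated .d.ts may differ; names here are illustrative.
const messages = {
  "settings.title": "Settings",
  "settings.description": "Manage your preferences",
  "common.save": "Save",
} as const;

type MessageKey = keyof typeof messages;

function t(key: MessageKey): string {
  return messages[key];
}

t("settings.title");    // compiles
// t("settigns.title"); // compile error: not assignable to MessageKey
```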
But the real shift was adding an MCP server. If you use Cursor or Claude Code, you can tell your agent "internationalize this component" and it extracts the strings, translates, and rewrites the code. I was skeptical about this one. For repetitive extraction work, it's genuinely good.
The 85% problem
AI translation isn't perfect, though. It gets context wrong on short strings ("Save" as a verb vs. "Save" as a noun), goes too literal on idioms, and sometimes loses the tone completely. I've seen it translate "Drop us a line" as the equivalent of "drop a rope" in Japanese.
That's why I ended up building a dashboard too. The AI handles the first pass, you review what it generated, fix what needs fixing, and mark those as approved so they don't get overwritten next time. If your source text changes later, the system flags which translations went stale.
85% automation, 15% human review. That split turned a full day of work into maybe 20 minutes.
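Staleness detection can be as simple as storing a hash of the source text at approval time. A hypothetical sketch of the idea (not i1n's actual storage format):

```typescript
// Sketch: flag stale translations by hashing the source text when a
// reviewer approves them. Hypothetical structure, not i1n's format.
import { createHash } from "node:crypto";

interface ApprovedEntry {
  translation: string;
  sourceHash: string; // hash of the English source at approval time
}

const hash = (s: string) => createHash("sha256").update(s).digest("hex");

function approve(sourceText: string, translation: string): ApprovedEntry {
  return { translation, sourceHash: hash(sourceText) };
}

function isStale(entry: ApprovedEntry, currentSourceText: string): boolean {
  // If the source text changed after approval, the translation needs review.
  return entry.sourceHash !== hash(currentSourceText);
}

const entry = approve("Drop us a line", "お問い合わせください");
isStale(entry, "Drop us a line"); // false: still in sync
isStale(entry, "Contact us");     // true: source changed, flag for review
```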
What I'd tell someone setting up i18n today
Don't start with the translation. Start with the workflow. Figure out how you're going to keep files in sync, how you'll catch missing keys before production, and how you'll handle variables across languages. The translation is the easy part.
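Catching missing keys before production is the easiest of those to automate. A hypothetical CI check, independent of any particular i18n library:

```typescript
// Sketch: catch out-of-sync locale files before they reach production.
// A hypothetical CI check, independent of any particular i18n library.
type Messages = Record<string, string>;

function missingKeys(source: Messages, target: Messages): string[] {
  // Keys present in the source locale but absent from the target locale.
  return Object.keys(source).filter((key) => !(key in target));
}

const en: Messages = { "nav.home": "Home", "nav.about": "About", "cta.signup": "Sign up" };
const ja: Messages = { "nav.home": "ホーム" };

missingKeys(en, ja); // ["nav.about", "cta.signup"]
```

Run it for every locale against the source file and fail the build on a non-empty result, and the "forgot three keys in Japanese" failure mode disappears.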
The CLI and dashboard are open source (MIT) if you want to try it: i1n.ai
How are you handling translations? I keep running into teams that either set up a full TMS they barely use, or just skip internationalization entirely because the workflow is too painful.