When implementing i18n in admin/dashboard applications, a common challenge is figuring out how to load translation resources. Early on, when the project is small, teams often pack everything into one bundle—or at least a few large bundles—just to get it working. But as the system grows, the initial load slows down and translation files become increasingly difficult to maintain.
In this post, I’ll explain three ideas in a straightforward way:
- Route-level lazy loading
- Component-level lazy loading
- A loaded cache + inflight deduplication
Drawbacks of a single translation bundle
The biggest advantage of shipping translations as one bundle is simplicity: you don’t need to think about namespaces, loading timing, or caching. However, admin systems typically need runtime language switching so that users can change languages without interrupting their workflow. With a single large bundle, a few practical issues tend to show up:
- Initial load time gets dragged down by translations — the more features you add, the larger the translation bundle becomes.
- Maintenance cost increases — different team members update strings independently, and conflicts can happen without anyone noticing until later.
- The loading strategy becomes crude — you end up loading everything, and when something goes wrong, the “solution” becomes reloading the page.
For websites that don’t require much interaction (like marketing sites), a reload might be acceptable. For admin apps, dynamic loading matters much more.
What is route-level lazy loading?
The route-level approach is intuitive: load the translation resources for a feature area only when the user enters that route/feature. For example, load the Inventory translation bundle when entering Inventory, and load the CRM bundle when entering CRM.
The benefit is a clear mental model. It’s also easy to load translations before the page renders, which helps prevent situations where keys appear first and translations show up later.
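To make the pattern concrete, here is a minimal, framework-agnostic TypeScript sketch, assuming a hypothetical route table and a `fetchNamespace`-style function; none of these names come from a specific library. The idea is that a route guard or resolver awaits the namespaces for the target route before the view renders.

```typescript
// Hypothetical mapping from feature routes to the translation namespaces
// they need. In a real app this would live next to the route config.
const routeNamespaces: Record<string, string[]> = {
  '/inventory': ['inventory'],
  '/crm': ['crm'],
};

type FetchNamespace = (lang: string, ns: string) => Promise<Record<string, string>>;

// Called from a route guard/resolver so translations are in place
// before the page renders (which avoids showing raw keys).
async function preloadForRoute(
  path: string,
  lang: string,
  fetchNamespace: FetchNamespace,
): Promise<void> {
  const namespaces = routeNamespaces[path] ?? [];
  await Promise.all(namespaces.map((ns) => fetchNamespace(lang, ns)));
}
```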
But the downsides are also common. Admin apps are rarely “closed worlds” where each route is fully self-contained. There are many shared components—tables, toolbars, dialogs, menus—that are reused across routes. If you rely only on route-level loading, you usually end up with one of these outcomes:
- The route bundles keep getting bigger (since the route is already loading a bundle, people just throw shared translations into it), or
- Some components suddenly miss translations in certain places (especially dynamic dialogs/overlays).
What is component-level lazy loading?
The core idea of component-level lazy loading is: a component declares the namespace it needs, and that namespace is loaded the first time the component is used. This works well for admin apps because they tend to have many shared components, and many of them are reused across routes—or even created dynamically.
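A minimal sketch of that idea, with hypothetical names (`NamespaceLoader`, `ConfirmDialog`); the real wiring depends on your framework, but the shape stays the same: the component knows its namespace and asks a shared loader for it when an instance is created.

```typescript
// Hypothetical loader interface; in Angular this would typically be an
// injected service, but the pattern itself is framework-agnostic.
interface NamespaceLoader {
  load(lang: string, ns: string): Promise<void>;
  isReady(lang: string, ns: string): boolean;
}

// A shared dialog component declares the namespace it depends on and
// triggers the load the first time an instance is created.
class ConfirmDialog {
  static readonly namespace = 'confirm-dialog';

  constructor(
    private readonly loader: NamespaceLoader,
    private readonly lang: string,
  ) {
    // Fire-and-forget: the loader is expected to cache and deduplicate,
    // so mounting many instances does not multiply requests.
    void this.loader.load(this.lang, ConfirmDialog.namespace);
  }

  get ready(): boolean {
    return this.loader.isReady(this.lang, ConfirmDialog.namespace);
  }
}
```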
When done well, component-level loading brings several benefits:
- More precise loading — you only load the translations you actually use.
- Translations travel with the component — refactors are less likely to miss or break translation ownership.
- Great for overlays/dialogs — components don’t depend on routing to “bring in” their translations.
But there’s one very consistent pitfall: duplicate loads. The same component can appear multiple times on a page, or multiple components can mount at the same time and require the same namespace. Without caching, you’ll see a lot of identical requests.
That’s why component-level lazy loading almost always needs a solid caching design.
How should the cache be designed?
The most important step is defining the cache key clearly. The simplest and safest option is:
cache key = language + namespace
For example:
- en:inventory
- zh-Hant:inventory
This prevents languages from overwriting each other, and switching back doesn’t require re-downloading.
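A minimal sketch of such a loaded cache (the names are illustrative, not a particular library’s API):

```typescript
// Completed translation tables, keyed by `${language}:${namespace}` so
// "en:inventory" and "zh-Hant:inventory" are stored independently.
const loaded = new Map<string, Record<string, string>>();

function cacheKey(lang: string, ns: string): string {
  return `${lang}:${ns}`;
}

function getLoaded(lang: string, ns: string): Record<string, string> | undefined {
  return loaded.get(cacheKey(lang, ns));
}
```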
However, a loaded cache alone isn’t enough in practice. Most duplicate requests aren’t caused by loading a namespace again after a previous load has finished; they happen because multiple callers request the same namespace at the same time. For example, if 10 components mount simultaneously and you only write results into the cache after completion, each of those 10 components can trigger its own request.
So you typically add another layer: inflight deduplication (caching in-progress loads). The idea is:
- Store completed results in the loaded cache;
- Store the in-progress promise in an inflight store;
- If a later caller sees an existing inflight promise, it simply awaits the same promise instead of starting a new request.
This inflight dedupe prevents the “lazy-load is enabled but the app still hammers the API” problem.
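Here is a sketch that combines the two layers. `fetchNamespace` stands in for whatever actually downloads a translation file; the rest is the loaded cache plus the inflight map.

```typescript
type Translations = Record<string, string>;
type FetchNamespace = (lang: string, ns: string) => Promise<Translations>;

// Completed results live in `loaded`; in-progress requests live in
// `inflight`. Both are keyed by `${lang}:${ns}`.
const loaded = new Map<string, Translations>();
const inflight = new Map<string, Promise<Translations>>();

function loadNamespace(
  lang: string,
  ns: string,
  fetchNamespace: FetchNamespace,
): Promise<Translations> {
  const key = `${lang}:${ns}`;

  // 1. Already finished: return the cached result.
  const done = loaded.get(key);
  if (done) return Promise.resolve(done);

  // 2. Already in flight: await the same promise instead of starting
  //    a new request. This is the inflight deduplication.
  const pending = inflight.get(key);
  if (pending) return pending;

  // 3. First caller: start the request and record it as inflight.
  const request = fetchNamespace(lang, ns)
    .then((result) => {
      loaded.set(key, result);
      return result;
    })
    .finally(() => {
      inflight.delete(key);
    });

  inflight.set(key, request);
  return request;
}
```

One detail worth noting: removing the inflight entry in `finally` means a failed request can be retried by a later caller, instead of every future caller awaiting the same rejected promise.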
Flashing keys
Another detail with translations is whether the UI briefly flashes raw keys when entering a page.
Lazy loading inevitably introduces a timing issue: the UI can render before translations arrive, so users briefly see raw keys. This makes the product feel broken or unprofessional.
There are multiple ways to handle this. Personally, I prefer to avoid rendering that part of the UI until translations are ready, or show a blank placeholder first. For route-level loading, you can preload before entering the page. For component-level loading, you can decide render timing based on a ready state.
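As a sketch of the component-level variant, assuming the loader above returns a promise: gate rendering on that promise so a skeleton (or nothing) is shown instead of raw keys. `renderSkeleton` and `renderContent` are placeholders for whatever your view layer does.

```typescript
// Gate rendering on the translation load so raw keys never reach the screen.
async function renderWhenReady(
  translationsReady: Promise<Record<string, string>>,
  renderSkeleton: () => void,
  renderContent: (t: Record<string, string>) => void,
): Promise<void> {
  renderSkeleton(); // blank or skeleton state first, no raw keys visible
  const translations = await translationsReady;
  renderContent(translations); // render the real UI only once translations exist
}
```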
Why I built ngx-atomic-i18n
When I built ngx-atomic-i18n, it was mainly to address these admin-specific needs:
- Runtime language switching
- Namespace modularization + lazy loading
- Loaded cache + inflight dedupe to avoid duplicate loading
- A “ready” mechanism to prevent flashing keys
It’s not meant to cover every i18n scenario. It focuses on admin/dashboard apps where runtime switching is required and the UI composition is complex. If you’re interested, you can try it here: https://www.npmjs.com/package/ngx-atomic-i18n
I’m also happy to discuss any feedback or ideas.
Summary
Route-level lazy loading can be sufficient when a project is small, but as shared components and dynamic UI grow, it often becomes too coarse. Component-level lazy loading fits admin apps better, but it forces you to take caching seriously—especially inflight deduplication.