- How to audit bundles and set measurable performance goals
- Route-level splitting patterns that actually reduce TTI
- Splitting third-party libraries and shared chunks without duplication
- Runtime loading: preloading, prefetching, and caching strategies
- Audit-to-deploy protocol: a one-day checklist
Shipping a single, monolithic JavaScript payload is a deliberate UX tax: it amplifies parse/compile time, blocks hydration, and hands low-end devices a CPU bill they can't afford. Aggressive, measurable code splitting — at the route, component, and library level — plus pragmatic runtime loading and cache controls is how you trade bytes for meaningful milliseconds.
Your users perceive slowness as the combination of long time-to-interactive and delayed visual feedback. Symptoms you already recognize: first paint happens but interactions lag, navigation stutters when a route's JS parses, Lighthouse flags high TBT and LCP that spike on mobile, and bundle analyzers show duplicate packages and giant vendor chunks. Those are not abstract metrics — they cause bounce, lower retention, and raise support tickets on lower-end devices.
## How to audit bundles and set measurable performance goals
Start with evidence: collect RUM metrics and run synthetic tests. Use Lighthouse for controlled, repeatable runs and a Real User Monitoring (RUM) library to capture the 75th-percentile experience across real devices and networks. The Core Web Vitals — LCP, CLS, INP — give you thresholds to measure against. Treat those metrics as your product-level SLAs.
Practical tools you should run today:
- Static bundle visualization: `webpack-bundle-analyzer` to inspect chunk composition and `source-map-explorer` to see what’s inside each file.
- Lighthouse lab runs: run in CI and capture trends.
- RUM: capture LCP/INP in production so you don’t optimize for a lab-only case.
Example quick commands:
```shell
# analyze generated bundles (create stats.json from your build or point at built files)
npx webpack-bundle-analyzer build/stats.json

# inspect what's inside a built JS file (create source maps in build)
npx source-map-explorer build/static/js/*.js
```
Set concrete, enforceable budgets and automate checks in CI. A pragmatic starting budget (adjust by app complexity): aim to keep the initial JS payload in the low hundreds of kilobytes (gzipped) for mobile-first experiences and reduce the number of bytes parsed on first load. Add a `size-limit` or `bundlesize` gate to your pipeline so regressions fail the build.
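As a concrete starting point, a `size-limit` section in `package.json` might look like this — the paths and the 170 KB/250 KB numbers are illustrative assumptions, not prescriptions:

```json
{
  "scripts": {
    "size": "size-limit"
  },
  "size-limit": [
    { "path": "build/static/js/main.*.js", "limit": "170 KB" },
    { "path": "build/static/js/npm.*.js", "limit": "250 KB" }
  ]
}
```

Run `npm run size` in CI so any budget breach fails the pipeline before it reaches users.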
Important: Metrics matter more than beliefs. Use RUM for final validation and always measure the 75th percentile on real devices — not just desktop dev boxes.
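Since the 75th percentile is the reference point, here is a tiny aggregation sketch (plain JavaScript; the function name is my own) showing how a RUM endpoint might summarize collected samples instead of averaging them:

```javascript
// Sketch: report the 75th-percentile of collected metric samples
// (e.g. LCP values in ms) rather than the mean, which hides tail latency.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  // nearest-rank method: smallest value with at least p% of samples at or below it
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// percentile([1200, 1800, 2400, 4000], 75) reports 2400, while the mean is 2350
```

The gap between mean and p75 widens exactly on the low-end devices this article is worried about.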
## Route-level splitting patterns that actually reduce TTI
Splitting by route is the highest-leverage move in most SPAs: hold back the code for routes the user hasn't reached yet and only hydrate what’s visible. Use `React.lazy` + `Suspense` for straightforward client-side splits. `React.lazy` is simple, but remember it’s client-only — server-side rendering (SSR) needs an SSR-aware loader (for example `@loadable/component`) if you need server-rendered splits.
Minimal route lazy-loading pattern:
```jsx
import React, { Suspense } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

const Dashboard = React.lazy(() => import(/* webpackChunkName: "route-dashboard" */ './routes/Dashboard'));
const Settings = React.lazy(() => import(/* webpackChunkName: "route-settings" */ './routes/Settings'));

export default function App() {
  return (
    <BrowserRouter>
      <Suspense fallback={<div className="spinner">Loading…</div>}>
        <Routes>
          <Route path="/" element={<Dashboard />} />
          <Route path="/settings" element={<Settings />} />
        </Routes>
      </Suspense>
    </BrowserRouter>
  );
}
```
Use chunk naming (webpackChunkName) to make network traces readable and to group logical route bundles.
Prefetching strategies that actually pay off:
- Use `/* webpackPrefetch: true */` for likely-next-route chunks so the browser downloads them at idle time.
- Trigger a targeted `import()` on link hover or touchstart to pre-warm the network if the user intent is strong. Example: call `import('./Settings')` from the link's `onMouseEnter` or `onTouchStart` handlers.
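One way to wire that hover/touch trigger without firing the import repeatedly is a tiny once-only wrapper — a sketch, with `createPrefetcher` and the route path being my own illustrative names:

```javascript
// Sketch: pre-warm a dynamic import on strong user intent (hover/touchstart),
// ensuring the loader runs at most once no matter how often the event fires.
function createPrefetcher(loader) {
  let promise = null;
  return () => {
    if (promise === null) promise = loader();
    return promise;
  };
}

// Usage sketch in a link component (assumes a ./Settings route module):
// const prefetchSettings = createPrefetcher(() => import('./Settings'));
// <Link to="/settings" onMouseEnter={prefetchSettings} onTouchStart={prefetchSettings} />
```

Caching the promise also means the subsequent real navigation reuses the in-flight request instead of starting a second one.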
Avoid these common mistakes:
- Blindly lazy-loading every single component. Tiny components add Suspense boundaries and hydration overhead; they don’t always reduce main-thread work.
- Relying exclusively on `React.lazy` for SSR apps — it won’t hydrate server-rendered HTML without an SSR-capable loader.
Use a simple decision rule: if a route’s client bundle exceeds your initial parse budget or contains heavy libraries (charts, maps), route-level splitting will most likely improve TTI.
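That rule is simple enough to encode in a build script. A sketch — the 170 KB default and the field names are assumptions to tune against your own budget:

```javascript
// Sketch of the decision rule above: split a route when its client bundle
// exceeds the initial parse budget or it pulls in heavy libraries.
function shouldSplitRoute(route, parseBudgetKb = 170) {
  return route.gzippedKb > parseBudgetKb || route.heavyLibs.length > 0;
}
```

Feed it per-route sizes from your bundle analyzer output to get a shortlist of split candidates.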
## Splitting third-party libraries and shared chunks without duplication
A single vendor blob often becomes the largest chunk. Split vendors smartly to get caching benefits and avoid repeated downloads across routes. `optimization.splitChunks` in webpack gives you full control; create a vendor cache group and consider package-level chunking for very large libraries.
Example splitChunks snippet:
```javascript
// webpack.config.js (excerpt)
module.exports = {
  optimization: {
    runtimeChunk: 'single',
    splitChunks: {
      chunks: 'all',
      maxInitialRequests: 10,
      minSize: 20000,
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name(module) {
            // match[1] is the package name, e.g. "react-dom" or "@scope/pkg"
            const match = module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/);
            return match ? `npm.${match[1].replace('@', '')}` : 'vendor';
          },
          priority: 20,
        },
        common: {
          minChunks: 2,
          name: 'common',
          priority: 10,
          reuseExistingChunk: true,
        },
      },
    },
  },
};
```
`runtimeChunk: 'single'` isolates the webpack runtime into its own small file, so long-lived vendor and app chunks keep stable hashes and aren't invalidated by minor app changes.
Tree shaking and ESM:
- Tree shaking only works well when modules are published as ES modules. CommonJS packages make tree shaking ineffective; prefer ESM builds or smaller helpers that expose only what you need. Verify a dependency’s `module` field in `package.json`.
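For reference, the fields a bundler consults typically look like this in a library's `package.json` — a sketch, with the file paths purely illustrative:

```json
{
  "name": "some-lib",
  "main": "dist/index.cjs.js",
  "module": "dist/index.esm.js",
  "sideEffects": false
}
```

`module` points the bundler at the ESM build, and `"sideEffects": false` tells webpack it may safely drop unused exports during tree shaking.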
Track duplication with webpack-bundle-analyzer and source-map-explorer. Look for multiple versions of the same package — that’s the usual cause of duplicated bytes. Use package manager resolutions or dedupe strategies to converge versions where possible.
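Converging duplicate versions depends on your package manager; the shape is roughly this, with the package name and version purely illustrative:

```json
{
  "resolutions": {
    "tslib": "^2.6.0"
  },
  "overrides": {
    "tslib": "^2.6.0"
  }
}
```

`resolutions` is Yarn's field and `overrides` is npm's — use whichever matches your setup, then re-run the bundle analyzer to confirm the duplicate bytes are gone.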
A contrarian point: splitting every dependency into its own tiny chunk sounds clean but increases request overhead. Optimize for reduced main-thread parse/compile and hydration cost, not just number-of-bytes. On HTTP/1 connections, fewer well-sized chunks sometimes outperform a swarm of tiny requests.
## Runtime loading: preloading, prefetching, and caching strategies
Understand the difference: preload fetches a resource with high priority because it’s needed for the current navigation; prefetch is low priority and intended for future navigations. Use rel="preload" for an LCP-critical script or font and rel="prefetch" (or webpackPrefetch) for next-route bundles.
Use magic comments for fine-grained control:
```javascript
import(/* webpackPrefetch: true */ './Settings');      // low priority, for the next navigation
import(/* webpackPreload: true */ './criticalWidget'); // high priority, for the current navigation
```

Note that webpack only honors magic comments placed inside the `import()` parentheses.
Preload example for an LCP image:
```html
<link rel="preload" as="image" href="/images/hero.avif">
```
Preload a script when you know it’s critical to render above-the-fold UI, but remember that rel="preload" does not execute the script — you must also insert the corresponding script tag or use module loader semantics.
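To make that concrete, a script preload pairs with the tag that actually executes the file — the path here is illustrative:

```html
<!-- Fetch early, at high priority; this alone does NOT execute the script -->
<link rel="preload" as="script" href="/static/js/critical-widget.js">

<!-- The matching script tag is what actually runs it -->
<script src="/static/js/critical-widget.js" defer></script>
```

Without the second tag you pay the download cost and get nothing back.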
Caching policies and service workers:
- Serve hashed assets (`app.a1b2c3.js`) with a long `Cache-Control: public, max-age=31536000, immutable`. Non-hashed HTML should remain short-lived.
- Use a service worker (Workbox) to precache stable chunks and to apply runtime caching for resources like images and API responses. Precache the main route bundles you know will be used frequently; let the SW serve them from cache to avoid network round trips on subsequent loads.
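The hashed-vs-non-hashed split can be expressed as a small header-selection sketch (plain JavaScript; the hash pattern is an assumption about your build's file naming):

```javascript
// Sketch: pick a Cache-Control header based on whether the filename
// carries a content hash (e.g. app.a1b2c3.js from a hashed build).
const HASHED_ASSET = /\.[0-9a-f]{6,}\.(?:js|css)$/i;

function cacheControlFor(filename) {
  return HASHED_ASSET.test(filename)
    ? 'public, max-age=31536000, immutable' // content-addressed: safe to cache for a year
    : 'no-cache';                           // HTML and other mutable files: always revalidate
}
```

Wire this into whatever serves your static directory so the policy can't drift from the build's naming scheme.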
Example Workbox precache snippet:
```javascript
import { precacheAndRoute } from 'workbox-precaching';

precacheAndRoute(self.__WB_MANIFEST || []);
```
Combine stale-while-revalidate for non-critical assets with CacheFirst for vendor chunks you want to keep quickly available.
Measure the impact of prefetching: track effective bytes fetched and percent of prefetch hits in RUM. Prefetching can waste bytes if user behavior doesn't match your assumptions.
## Audit-to-deploy protocol: a one-day checklist
This protocol turns analysis into enforceable outcomes. Treat it as a runbook you can execute in a single workday.
- Morning — Baseline collection (1–2 hours)
  - Run Lighthouse on a representative CI profile; capture LCP, TBT, INP.
  - Pull 24–72 hours of RUM data for LCP/INP distributions.
- Midday — Static analysis (1–2 hours)
  - Run `npx webpack-bundle-analyzer` and `npx source-map-explorer` to locate the top 5 byte consumers.
  - Identify large vendors, duplicate packages, and heavy route bundles.
- Afternoon — Tactical splits and quick wins (2–3 hours)
  - Convert the heaviest route or component to `React.lazy` + `Suspense` (or an SSR-aware loader if server-rendered).
  - Extract any very large library (charting, maps) to a separate vendor chunk and add `runtimeChunk: 'single'`.
  - Add `/* webpackPrefetch: true */` to the likely-next-route imports where appropriate.
- Late afternoon — Validation and automation (1–2 hours)
  - Re-run Lighthouse and collect a revised RUM snapshot to validate changes.
  - Add or update CI checks: `size-limit` or `bundlesize` and a build step that fails on budget breaches.
  - Commit the webpack `splitChunks` config and add a short doc block in the repo explaining the chunking rationale.
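If you use Lighthouse CI for the automation step, an assertion config along these lines can gate the build — the specific budgets shown are illustrative assumptions, not recommendations:

```json
{
  "ci": {
    "collect": {
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "total-byte-weight": ["error", { "maxNumericValue": 300000 }],
        "largest-contentful-paint": ["warn", { "maxNumericValue": 2500 }]
      }
    }
  }
}
```

Saved as `lighthouserc.json`, this makes `lhci autorun` fail the pipeline on byte-weight regressions while only warning on lab LCP, which is noisier.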
Checklist table (quick reference):
| Action | Tool / Pattern | Expected gain |
|---|---|---|
| Find top bytes | `webpack-bundle-analyzer` / `source-map-explorer` | Targets for splitting |
| Split heavy route | `React.lazy` + `Suspense` | Reduces initial parse/hydration |
| Extract vendor | `splitChunks` cacheGroups | Long-term caching, smaller initial payload |
| Prefetch next route | `webpackPrefetch` or `import()` on hover | Faster perceived navigation |
| Enforce in CI | `size-limit`, Lighthouse CI | Prevent regressions |
Sources of truth for validation: use both synthetic (Lighthouse CI) and RUM metrics — a lab improvement with no RUM win means you likely missed a real-world case.
A final operational tip: add a comment header above non-trivial splitChunks rules explaining why a cache group exists. The next engineer should be able to understand the tradeoff in 60 seconds.
Sources:
- Core Web Vitals - Definitions and thresholds for LCP, CLS, and INP used to set performance SLAs.
- React — Code Splitting - `React.lazy`, `Suspense`, and guidance on client vs server loading.
- MDN — `import()` - The standard dynamic import syntax and runtime semantics.
- webpack — Code Splitting - `splitChunks`, `runtimeChunk`, and bundling strategies.
- webpack — Tree Shaking - How ESM enables dead-code elimination and what prevents it.
- Resource Hints - When to use preload vs prefetch and how to apply resource hints.
- Workbox - Patterns and APIs for precaching and runtime caching via Service Workers.
- webpack-bundle-analyzer (GitHub) - Visualize bundle composition and spot duplicate modules.
- source-map-explorer (GitHub) - Explore what's inside a compiled JS file using source maps.
- Performance Budgets - How to set and automate size and timing budgets for builds.
- Lighthouse (Chrome DevTools) - Synthetic testing for performance regressions and diagnostics.
- MDN — HTTP Caching - Best practices for cache headers and immutable assets.
Start shaving the first critical milliseconds by measuring where parsing, compiling, and hydration happen — then stop shipping what you don't need on first load.