Let's talk about something that happens to every developer eventually. You build an application. You add features. You install helpful libraries. Before you know it, your main JavaScript file is massive. The first time a user visits your site, their browser has to download this huge file before anything can happen. On a slow connection, they're just staring at a blank screen. It feels bad.
This is the problem code splitting solves. Instead of giving the user one enormous file containing everything your app might ever do, you give them only what they need to start. Then, you load other pieces—like a product page, a charting component, or an admin panel—only when the user is about to need them. It turns a monolithic application into a collection of smart, on-demand pieces.
Think of it like a bookstore. A monolithic app is like handing a customer every single book in the store as soon as they walk in. Code splitting is like having a well-organized store where they can browse the front displays, and you only fetch specific books from the storeroom when they ask for them.
The heart of modern code splitting is the dynamic import. It looks like a function call: import(). This is your tool for telling the browser, "Don't load this now; load it later when I say so." Your bundler (like Webpack or Vite) sees this syntax and automatically creates a separate file, or "chunk," for that module.
Here’s how simple it can be. Instead of importing a component at the top of your file, you import it when you need to render it.
import React, { useState } from 'react';

// This would load the component immediately with the main bundle:
// import HeavyChart from './components/HeavyChart';

// This tells the bundler to split it into its own chunk,
// loaded only when the component first renders:
const HeavyChart = React.lazy(() => import('./components/HeavyChart'));

function Dashboard() {
  const [showChart, setShowChart] = useState(false);

  return (
    <div>
      <button onClick={() => setShowChart(true)}>
        Show Performance Chart
      </button>
      {showChart && (
        <React.Suspense fallback={<div>Loading chart...</div>}>
          <HeavyChart />
        </React.Suspense>
      )}
    </div>
  );
}
In this example, the HeavyChart component—and all the large charting libraries it might use—won't be downloaded until the user clicks that button. The React.Suspense component provides a placeholder (the fallback) to show while the new code is loading.
The most logical place to split your code is at the route level. There's no reason to download the code for the "Admin Dashboard" for a user who is just browsing the public product catalog. Frameworks like React Router make this straightforward.
import React, { Suspense } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

// These pages are split into separate chunks
const HomePage = React.lazy(() => import('./pages/HomePage'));
const ProductPage = React.lazy(() => import('./pages/ProductPage'));
const AdminDashboard = React.lazy(() => import('./pages/AdminDashboard'));

function App() {
  return (
    // GlobalLoader is whatever loading indicator component you provide
    <Suspense fallback={<GlobalLoader />}>
      <BrowserRouter>
        <Routes>
          <Route path="/" element={<HomePage />} />
          <Route path="/products" element={<ProductPage />} />
          <Route path="/admin" element={<AdminDashboard />} />
        </Routes>
      </BrowserRouter>
    </Suspense>
  );
}
With this setup, visiting /admin triggers the browser to fetch the AdminDashboard chunk. A user who never visits that page never pays the cost of downloading it.
But we can be smarter than just waiting for a click. We can anticipate what the user will do next. A common pattern is to start loading a chunk when the user hovers over a link, not when they click it. This slight head start can make the next page feel instantaneous.
import { Link } from 'react-router-dom';

function PrefetchLink({ to, children }) {
  const prefetchChunk = () => {
    // Kick off the chunk download early; this assumes route names map
    // to file names under ./pages/ so the bundler can resolve the pattern
    import(`./pages/${to}`);
  };

  return (
    <Link
      to={to}
      onMouseEnter={prefetchChunk} // Start loading on hover
      onFocus={prefetchChunk}      // Also for keyboard navigation
    >
      {children}
    </Link>
  );
}
It's important to be respectful of the user's network, though. We shouldn't prefetch large chunks on a slow or metered connection.
function shouldPrefetch() {
  // Check the Network Information API
  if ('connection' in navigator) {
    const conn = navigator.connection;
    // Avoid prefetching if data saver is on or the connection is slow
    if (conn.saveData || conn.effectiveType.includes('2g')) {
      return false;
    }
  }
  return true;
}
So far, we've split our own code. But what about the big libraries from node_modules? This is where your bundler's configuration becomes powerful. You can tell it to group all your third-party dependencies into separate, stable chunks.
Here's a practical Webpack configuration. It groups React libraries together, UI libraries together, and ensures code used in multiple places is extracted into a shared chunk.
// webpack.config.js
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        reactBundle: {
          test: /[\\/]node_modules[\\/](react|react-dom|react-router)[\\/]/,
          name: 'react-vendor',
          priority: 40,
        },
        utilityBundle: {
          test: /[\\/]node_modules[\\/](lodash|date-fns|axios)[\\/]/,
          name: 'utility-vendor',
          priority: 30,
        },
        sharedComponents: {
          test: /[\\/]src[\\/]components[\\/]shared[\\/]/,
          name: 'shared-ui',
          minChunks: 2, // Only create this chunk if used in 2+ places
          priority: 10,
        },
      },
    },
  },
};
Why do this? Caching. The react-vendor chunk file name will only change if you upgrade React. A user who visits your site next week will already have it cached, so they skip that download entirely and only get your new application code.
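For that caching story to work, the output filenames need to include a content hash, so a chunk's name changes only when its contents change. A minimal sketch of that piece of the config (the `[contenthash]` placeholder is standard Webpack syntax; filenames shown are illustrative):

```javascript
// webpack.config.js (sketch): content-hashed filenames for long-term caching.
// An unchanged react-vendor chunk keeps the same hashed filename across
// deploys, so returning visitors load it from cache instead of the network.
module.exports = {
  output: {
    filename: '[name].[contenthash].js',       // e.g. main.3f7a91c2.js
    chunkFilename: '[name].[contenthash].js',  // e.g. react-vendor.b04d11ee.js
  },
};
```

Pair this with long `Cache-Control` headers on your static assets; the hash in the name makes it safe to cache them essentially forever.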
Vite, with its Rollup foundation, offers a slightly different but very effective approach. Its configuration feels more declarative.
// vite.config.js
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks(id) {
          // Group React-related dependencies
          // (this prefix also matches react-dom, react-router, etc.)
          if (id.includes('node_modules/react')) {
            return 'react';
          }
          // Group state management libraries
          if (id.includes('node_modules/zustand') || id.includes('node_modules/redux')) {
            return 'state';
          }
          // Create a chunk for large UI component libraries
          if (id.includes('node_modules/@mui') || id.includes('node_modules/@chakra-ui')) {
            return 'ui-lib';
          }
        },
      },
    },
  },
});
Once you start splitting, you need to see what you've made. Blindly creating chunks can sometimes make things worse. Tools like webpack-bundle-analyzer give you a visual map of your bundles.
# Install the analyzer
npm install --save-dev webpack-bundle-analyzer
Then, update your Webpack config to generate a report.
// webpack.config.js
const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin;

module.exports = {
  plugins: [
    new BundleAnalyzerPlugin({
      analyzerMode: 'static', // Generates an HTML file
      reportFilename: 'report.html',
      openAnalyzer: false, // Don't open it automatically
    }),
  ],
};
Running your build will create an interactive report.html file. You'll see a colorful treemap. Large blocks are your biggest dependencies. This visualization helps you answer critical questions: Is my vendor chunk too big? Should I dynamically import that massive utility library that's only used in one feature?
For example, you might see that a PDF generation library is taking up 2MB in your main bundle, but it's only used on a single "Generate Report" page. That's a prime candidate for dynamic import.
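A sketch of that refactor. The names here (`createReportHandler`, `generatePdf`, `./lib/pdf-generator`) are hypothetical, and the loader is passed in as a parameter so the dynamic import can be stubbed for demonstration:

```javascript
// Sketch: defer the heavy PDF code until a report is actually requested.
// The loader function is injected, so the same handler works with a real
// dynamic import in the app and a stub module in this example.
function createReportHandler(loadPdfModule) {
  return async function handleGenerateReport(data) {
    // The heavy chunk is fetched on first call, not at page load
    const { generatePdf } = await loadPdfModule();
    return generatePdf(data);
  };
}

// In the real app, the loader would be the dynamic import itself:
// const handler = createReportHandler(() => import('./lib/pdf-generator'));

// Stub module used here for demonstration:
const handler = createReportHandler(async () => ({
  generatePdf: (data) => `PDF(${data.title})`,
}));

handler({ title: 'Q3 Report' }).then((pdf) => console.log(pdf)); // logs "PDF(Q3 Report)"
```

The main bundle now contains only the small handler; the multi-megabyte dependency moves into a chunk that most users never download.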
All of this becomes more interesting when you introduce Server-Side Rendering (SSR). The goal is to send a fully rendered HTML page from the server, but the client still needs the exact same JavaScript chunks to "hydrate" that page and make it interactive. You have to carefully coordinate.
Libraries like @loadable/component are built for this. They help the server collect which chunks are needed for a specific page render and then tell the client to load them.
// On the Server (Node.js)
import React from 'react';
import { ChunkExtractor } from '@loadable/server';
import { renderToString } from 'react-dom/server';
import path from 'path';
import App from './App'; // your root component

function handleRequest(req, res) {
  // Point to the stats file generated by the client build
  const statsFile = path.resolve('./build/loadable-stats.json');
  const extractor = new ChunkExtractor({ statsFile });

  const jsx = extractor.collectChunks(<App location={req.url} />);
  const html = renderToString(jsx);

  // Get the script tags for the exact chunks this page needs
  const scriptTags = extractor.getScriptTags();

  res.send(`
    <html>
      <body>
        <div id="root">${html}</div>
        ${scriptTags}
      </body>
    </html>
  `);
}

// On the Client
import { loadableReady } from '@loadable/component';
import { hydrateRoot } from 'react-dom/client';

loadableReady(() => {
  hydrateRoot(document.getElementById('root'), <App />);
});
The server's extractor figures out that rendering the /products page needs the ProductPage chunk. It injects a <script> tag for it. The client's loadableReady waits for those specific scripts to load before hydrating, preventing errors.
Finally, how do you know if your splitting strategy is actually helping? You need to measure. The Performance API in browsers is your friend. You can track how long it takes for chunks to load in real user conditions.
// A simple utility to track chunk performance
const chunkMetrics = {};
export function trackChunkLoad(chunkName) {
const start = performance.now();
return function chunkLoaded() {
const duration = performance.now() - start;
if (!chunkMetrics[chunkName]) {
chunkMetrics[chunkName] = [];
}
chunkMetrics[chunkName].push(duration);
// Optionally send to your analytics
console.log(`Chunk "${chunkName}" loaded in ${duration.toFixed(2)}ms`);
};
}
// Using it with a dynamic import
async function loadFeature() {
const markLoad = trackChunkLoad('AnalyticsFeature');
const module = await import('./features/analytics');
markLoad(); // Record the time
return module;
}
Over time, you can collect this data to see which chunks are slow on average, indicating they might be too large and need to be split further.
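Once you have a batch of those timings, a small aggregation makes the slow chunks obvious. A minimal sketch, assuming metrics in the shape collected above (`summarizeChunkMetrics` and `percentile` are illustrative helpers, not a library API):

```javascript
// Summarize per-chunk load times: sample count, mean, and 95th percentile.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[index];
}

function summarizeChunkMetrics(chunkMetrics) {
  return Object.entries(chunkMetrics).map(([name, samples]) => ({
    name,
    count: samples.length,
    averageMs: samples.reduce((sum, ms) => sum + ms, 0) / samples.length,
    p95Ms: percentile(samples, 95),
  }));
}

// Example data: the vendor chunk is consistently slower than the home chunk,
// which flags it as a candidate for further splitting.
const report = summarizeChunkMetrics({
  home: [120, 95, 110],
  vendor: [480, 510, 495],
});
console.log(report);
```

In production you would feed this from your analytics backend rather than an in-memory object, but the analysis is the same: high averages or high tail latencies point at chunks that are too large.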
The philosophy behind code splitting is a shift in mindset. It moves us from "build everything, ship everything" to "build everything, ship intelligently." It acknowledges that not all users will take the same path through an application. By loading code precisely and proactively, we respect the user's time, their data plan, and their device's capabilities. The result is an application that feels light, fast, and responsive—no matter how complex it becomes under the hood. It’s not just an optimization; it's a fundamental part of building considerate, professional web experiences.