Our DockForge application was loading in 3.2 seconds. Here's the full playbook we used to get it down to 0.8s — including the profiling workflow we ran before writing a single line of optimization code.
⚡ Step 0: Real-World Profiling Workflow
The biggest mistake teams make is optimizing by instinct. Before touching anything, build a baseline with real data.
Run Lighthouse in CI, not just DevTools
DevTools Lighthouse runs on your machine, so scores drift with your hardware, extensions, and whatever else is running. For consistent results, use the CLI:
```bash
npm install -g lighthouse

lighthouse https://your-app.com --output=json --output-path=./report.json \
  --chrome-flags="--headless" --throttling-method=simulate
```
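With `--output=json` you can also gate CI on the result. A small sketch assuming the standard Lighthouse report schema, where `categories.performance.score` is a 0–1 fraction (the budget of 90 is an assumption to tune against your own baseline):

```javascript
// Extract the 0–100 performance score from a parsed Lighthouse JSON report.
function perfScore(report) {
  return Math.round(report.categories.performance.score * 100);
}

// Throw (failing the CI step) when the score drops below a budget.
function assertBudget(report, budget = 90) {
  const score = perfScore(report);
  if (score < budget) {
    throw new Error(`Lighthouse performance ${score} is below budget ${budget}`);
  }
  return score;
}
```

Feed it the `report.json` produced above from a small Node script run as the next CI step.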
Or integrate it into CI so every PR shows a score diff:
```yaml
# .github/workflows/lighthouse.yml
- name: Run Lighthouse
  uses: treosh/lighthouse-ci-action@v10
  with:
    urls: |
      https://staging.your-app.com
    budgetPath: ./budget.json
    uploadArtifacts: true
```
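The workflow above points at a `budget.json`. Lighthouse performance budgets have a fixed shape; a minimal example (the numbers are placeholders to tune against your own baseline; timings are in milliseconds, resource sizes in KB):

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 3000 },
      { "metric": "first-contentful-paint", "budget": 1500 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 200 },
      { "resourceType": "total", "budget": 500 }
    ]
  }
]
```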
Use Chrome DevTools Performance tab properly
- Open DevTools → Performance tab
- Click the gear icon → set CPU throttle to 4x slowdown (simulates a mid-range Android)
- Record a full page load
- Look for Long Tasks (red bars) — anything over 50ms blocks the main thread
Track Core Web Vitals in production
Lighthouse is synthetic. Real users are different. In the App Router, report vitals from a small client component rendered in your root layout:

```tsx
// app/web-vitals.tsx
'use client';

import { useReportWebVitals } from 'next/web-vitals';

export function WebVitals() {
  useReportWebVitals((metric) => {
    // Send to your analytics (Vercel, Datadog, etc.)
    console.log(metric); // { name: 'LCP', value: 2400, ... }
  });

  return null;
}
```

(The standalone `reportWebVitals` export only works in the Pages Router's `pages/_app.js`.)
Or use the web-vitals package for more control:
```ts
import { onCLS, onFCP, onINP, onLCP, onTTFB } from 'web-vitals';

function sendToAnalytics({ name, value, id }) {
  // keepalive lets the request survive page unloads; navigator.sendBeacon also works
  fetch('/api/vitals', {
    method: 'POST',
    body: JSON.stringify({ name, value, id }),
    keepalive: true,
  });
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
onFCP(sendToAnalytics);
onTTFB(sendToAnalytics);
```

(Recent versions of web-vitals report INP instead of FID, since FID was retired as a Core Web Vital in 2024.)
Only optimize what you've measured. Then measure again after.
📦 Bundle Analysis: Find What's Killing Your Load Time
Before optimizing code, you need to see what's actually in your bundle.
Install the analyzer
```bash
npm install --save-dev @next/bundle-analyzer
```

```js
// next.config.js
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({
  // your existing config
});
```

```bash
ANALYZE=true npm run build
```
This opens an interactive treemap in your browser. Look for:
- Suspiciously large dependencies — moment.js (67KB gzipped), lodash (24KB), etc.
- Duplicate packages — two versions of React, two versions of a utility lib
- Server-only code leaking into client bundles — DB clients, secrets, heavy Node libs
Common offenders and fixes
Moment.js → date-fns
```bash
# moment.js: ~67KB gzipped
# date-fns (tree-shakeable): ~3KB per function used
npm uninstall moment
npm install date-fns
```

```js
// Before
import moment from 'moment';
moment(date).format('YYYY-MM-DD');

// After
import { format } from 'date-fns';
format(date, 'yyyy-MM-dd');
```
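If you only need one or two fixed formats, the platform's built-in Date APIs avoid the dependency entirely. A sketch (UTC-based; date-fns remains the better fit for locale-aware or timezone-sensitive formatting):

```javascript
// 'YYYY-MM-DD' without any library: toISOString always emits UTC,
// so slicing off the time portion yields a stable date string.
function formatISODate(date) {
  return date.toISOString().slice(0, 10);
}
```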
Lodash → lodash-es
```js
// Before: imports entire lodash (24KB)
import _ from 'lodash';
_.groupBy(items, 'category');

// After: tree-shakeable, only imports groupBy
import groupBy from 'lodash-es/groupBy';
groupBy(items, 'category');
```
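And when groupBy is the only lodash function left in the bundle, a hand-rolled version removes the dependency entirely. A sketch covering the property-name usage above (lodash's version also accepts an iteratee function, and newer runtimes ship a native `Object.groupBy`):

```javascript
// Group an array of objects by the value of one property.
// Equivalent to _.groupBy(items, 'category') for the string-key case.
function groupBy(items, key) {
  return items.reduce((acc, item) => {
    (acc[item[key]] ??= []).push(item);
    return acc;
  }, {});
}
```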
Enable optimized package imports for large icon/UI libraries:
```js
// next.config.js
module.exports = {
  experimental: {
    optimizePackageImports: ['lucide-react', '@mui/material', 'antd'],
  },
};
```
🖼️ 1. Image Optimization
The single highest-impact change in most Next.js apps.
Before:

```html
<img src="/logo.png" alt="Logo" />
```

After:

```tsx
import Image from 'next/image';

<Image
  src="/logo.png"
  alt="Logo"
  width={200}
  height={50}
  priority // preloads LCP images
  placeholder="blur"
  blurDataURL="data:image/jpeg;base64,..."
/>
```
Why priority matters
Only add `priority` to images above the fold — your hero image, logo, or anything visible on load. It injects a `<link rel="preload">` tag so the browser fetches the image before parsing the rest of the page. Using it on everything defeats the purpose.
Generate blur placeholders at build time
Instead of hardcoding a blurDataURL, generate it dynamically:
```js
// lib/imageUtils.js
import { getPlaiceholder } from 'plaiceholder';

export async function getBlurData(src) {
  const { base64 } = await getPlaiceholder(src);
  return base64;
}

// In your page (server component)
const blurDataURL = await getBlurData('/hero.jpg');
```

(Note: plaiceholder v3 expects a Buffer rather than a path string, so read the file first if you're on a newer version.)
Remote images need explicit domains
```js
// next.config.js
module.exports = {
  images: {
    remotePatterns: [
      {
        protocol: 'https',
        hostname: 'cdn.your-app.com',
        pathname: '/images/**',
      },
    ],
    formats: ['image/avif', 'image/webp'], // AVIF is typically 20–40% smaller than WebP
  },
};
```
Result: 60% smaller images with automatic WebP/AVIF conversion, near-zero CLS from blur placeholders.
⚡ 2. Code Splitting
Next.js handles route-level splitting automatically. The wins come from component-level splitting within a page.
Dynamic imports for heavy components
```tsx
import dynamic from 'next/dynamic';

// Heavy chart library — don't load until needed
const HeavyChart = dynamic(() => import('@/components/HeavyChart'), {
  loading: () => <Skeleton className="h-64 w-full" />,
  ssr: false, // charts often use window/document — skip SSR
});

// Modal — no point loading it until the user triggers it
const EditModal = dynamic(() => import('@/components/EditModal'));
```
Conditional loading based on user interaction
```tsx
import dynamic from 'next/dynamic';
import { useState } from 'react';

// Defined at module scope so the dynamic wrapper isn't recreated on every render
const LazyMap = dynamic(() => import('@/components/Map'), { ssr: false });

export function MapSection() {
  const [showMap, setShowMap] = useState(false);

  return (
    <>
      <button onClick={() => setShowMap(true)}>Show Map</button>
      {showMap && <LazyMap />}
    </>
  );
}
```
The map bundle (often 200KB+) never loads unless the user actually clicks the button.
Third-party scripts
Don't let analytics or chat widgets block your render:
```tsx
import Script from 'next/script';

// afterInteractive: loads after page is interactive (good for analytics)
<Script src="https://analytics.example.com/script.js" strategy="afterInteractive" />

// lazyOnload: lowest priority, loads during idle time (good for chat widgets)
<Script src="https://cdn.chat-widget.com/widget.js" strategy="lazyOnload" />
```
🔤 3. Font Optimization
Fonts are a silent LCP killer — the browser can't render text until the font file downloads.
```tsx
// app/layout.tsx
import { Inter, JetBrains_Mono } from 'next/font/google';

const inter = Inter({
  subsets: ['latin'],
  display: 'swap', // show fallback font while loading, swap when ready
  preload: true,
  variable: '--font-inter', // expose as CSS variable for flexibility
});

const mono = JetBrains_Mono({
  subsets: ['latin'],
  display: 'swap',
  variable: '--font-mono',
});

export default function RootLayout({ children }) {
  return (
    <html className={`${inter.variable} ${mono.variable}`}>
      <body className="font-sans">{children}</body>
    </html>
  );
}
```

```js
// tailwind.config.js
theme: {
  extend: {
    fontFamily: {
      sans: ['var(--font-inter)'],
      mono: ['var(--font-mono)'],
    },
  },
}
```
next/font downloads and self-hosts fonts at build time: no external requests at runtime, and automatically tuned fallback font metrics that all but eliminate font-swap layout shift.
For local/brand fonts:
```ts
import localFont from 'next/font/local';

const brandFont = localFont({
  src: [
    { path: './fonts/Brand-Regular.woff2', weight: '400' },
    { path: './fonts/Brand-Bold.woff2', weight: '700' },
  ],
  variable: '--font-brand',
});
```
🗄️ 4. Caching Strategy
Caching is where you get the most leverage with the least code.
Static Generation for public pages
```tsx
// app/blog/[slug]/page.tsx
export async function generateStaticParams() {
  const posts = await getPosts();
  return posts.map((post) => ({ slug: post.slug }));
}

// Rendered at build time → served from CDN edge
export default async function BlogPost({ params }) {
  const post = await getPost(params.slug);
  return <Article post={post} />;
}
```
ISR for content that changes occasionally
```ts
// Rebuild this page in the background at most once per hour
export const revalidate = 3600;
```

```ts
// On-demand revalidation from a CMS webhook
// app/api/revalidate/route.ts
import { revalidatePath } from 'next/cache';

export async function POST(req) {
  const { path, secret } = await req.json();

  if (secret !== process.env.REVALIDATION_SECRET) {
    return Response.json({ error: 'Invalid secret' }, { status: 401 });
  }

  revalidatePath(path);
  return Response.json({ revalidated: true });
}
```
API Route caching
```ts
export async function GET() {
  const data = await fetchExpensiveData();

  return Response.json(data, {
    headers: {
      // CDN caches for 1 hour, serves stale while revalidating for 24h
      'Cache-Control': 'public, s-maxage=3600, stale-while-revalidate=86400',
    },
  });
}
```
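To make that header concrete, here is how a CDN treats a cached copy of this response as it ages, sketched as a tiny classifier (the thresholds mirror the `s-maxage=3600` / `stale-while-revalidate=86400` values above):

```javascript
// Given a cached response's age in seconds, how does the CDN treat it?
function cacheState(ageSeconds, sMaxAge = 3600, swr = 86400) {
  if (ageSeconds < sMaxAge) return 'fresh';       // served straight from cache
  if (ageSeconds < sMaxAge + swr) return 'stale'; // served instantly, revalidated in background
  return 'expired';                               // CDN must refetch before responding
}
```

The practical upshot: users only ever wait for the origin when a copy is older than a full day.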
React cache() for deduplication
If multiple components on the same page request the same data, cache() ensures the fetch runs only once per request:
```ts
import { cache } from 'react';

export const getUser = cache(async (id: string) => {
  return db.user.findUnique({ where: { id } });
});

// Now safe to call from Header, Sidebar, and Page — one DB query total
```
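As a mental model, `cache()` behaves like memoization scoped to a single server request. A simplified sketch of the idea (not React's actual implementation, which keys on all arguments and clears its store when the request ends):

```javascript
// Per-request memoization: the first call stores the promise, so concurrent
// duplicate calls share one in-flight fetch instead of firing their own.
function requestCache(fn) {
  const results = new Map(); // in React, this store lives for a single request
  return (key) => {
    if (!results.has(key)) results.set(key, fn(key));
    return results.get(key);
  };
}
```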
🗃️ 5. Database Query Optimization
Eliminate N+1 queries
```ts
// Before: 1 query for users + N queries for posts = N+1 total
const users = await db.user.findMany();
for (const user of users) {
  user.posts = await db.post.findMany({ where: { userId: user.id } });
}

// After: one findMany — Prisma resolves the relation in a constant number of queries
const users = await db.user.findMany({
  include: { posts: true },
});
```
Select only what you need
```ts
// Before: fetches the entire user row (including password hash, tokens, etc.)
const user = await db.user.findUnique({ where: { id } });

// After: fetch only what the component renders
const user = await db.user.findUnique({
  where: { id },
  select: { name: true, email: true, avatarUrl: true },
});
```
Add indexes for your most common queries
```prisma
// schema.prisma
model Post {
  id        String   @id
  authorId  String
  createdAt DateTime @default(now())

  @@index([authorId])              // speeds up posts-by-user queries
  @@index([createdAt(sort: Desc)]) // speeds up "latest posts" queries
}
```
Use connection pooling in serverless environments
Each serverless instance (Vercel, Lambda) holds its own DB connection, and dev hot-reloading can spawn a fresh PrismaClient on every file change. Start with a shared singleton so connections are reused:

```ts
// lib/db.ts
import { PrismaClient } from '@prisma/client';

const globalForPrisma = globalThis as unknown as { prisma: PrismaClient };

export const db =
  globalForPrisma.prisma ??
  new PrismaClient({
    log: process.env.NODE_ENV === 'development' ? ['query'] : [],
  });

if (process.env.NODE_ENV !== 'production') globalForPrisma.prisma = db;
```
For production at scale, use Prisma Accelerate or PgBouncer at the infrastructure level.
🔁 6. Server Components vs Client Components
This is the App Router concept that trips up most teams migrating from Pages Router.
The rule: everything is a Server Component by default. Only add "use client" when you need browser APIs, event handlers, or React state.
```tsx
// ✅ Server Component — runs on server, zero JS sent to client
// app/dashboard/page.tsx
async function Dashboard() {
  const data = await db.metrics.findMany(); // direct DB access, no API layer needed
  return <MetricsGrid data={data} />;
}
```

```tsx
// ✅ Client Component — only where interactivity is needed
// components/MetricsGrid.tsx
'use client';
import { useState } from 'react';

export function MetricsGrid({ data }) {
  const [filter, setFilter] = useState('all');
  // ...
}
```
A common mistake is placing "use client" high in the tree, which forces everything below it to ship as client-side JS. Push it as far down the component tree as possible — ideally only on the interactive leaf nodes.
🧹 7. Middleware Optimization
Middleware runs on every request. Keep it lean.
```ts
// middleware.ts
import { NextResponse } from 'next/server';

export function middleware(request) {
  // ❌ Don't do expensive work here
  // const data = await fetch('https://api.example.com/check'); // blocks every request

  // ✅ Fast checks only — JWT validation, redirects, header injection
  const token = request.cookies.get('token');
  if (!token) {
    return NextResponse.redirect(new URL('/login', request.url));
  }

  return NextResponse.next();
}

// ✅ Scope middleware to only the routes that need it
export const config = {
  matcher: ['/dashboard/:path*', '/api/protected/:path*'],
};
```
Without a matcher, middleware runs on every request including static assets — a silent but consistent performance drain.
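As an example of what a "fast check" can include: decoding the JWT payload to reject expired tokens costs no network round trip. A sketch only; it does not verify the signature, so pair it with real verification (e.g. the jose library) for anything security-sensitive, and note that on the Edge runtime you may need `atob` in place of `Buffer`:

```javascript
// Reject expired JWTs by decoding the (unverified) payload segment.
// Treats a missing or malformed exp claim as expired.
function isExpired(token, now = Date.now()) {
  const payload = token.split('.')[1];
  const { exp } = JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'));
  return typeof exp !== 'number' || exp * 1000 < now;
}
```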
📊 Performance Results
| Metric | Before | After | Improvement |
|---|---|---|---|
| FCP | 1.8s | 0.4s | 78% ✅ |
| LCP | 3.2s | 0.8s | 75% ✅ |
| TTI | 4.1s | 1.2s | 71% ✅ |
| Bundle Size | 450 KB | 180 KB | 60% ✅ |
| DB Queries / Request | ~200 | 1–3 | 98% ✅ |
| Lighthouse Score | 72 | 98 | +26 ✅ |
🔑 Key Takeaways
- Profile first — Lighthouse CI + real-user Web Vitals before touching code
- Analyze your bundle — `@next/bundle-analyzer` finds hidden weight fast
- Images — `next/image` with `priority` on LCP images, AVIF format preferred
- Fonts — `next/font` eliminates layout shift and external font requests
- Code splitting — lazy load anything not needed on first paint; defer third-party scripts
- Caching — static generation → ISR → `stale-while-revalidate`, in that order of preference
- Database — eliminate N+1s, select only needed fields, use connection pooling
- Server Components — default to server, push `"use client"` as low as possible
- Middleware — always scope with `matcher`, never do async work inside it
Have questions about your specific setup? Drop them in the comments — happy to dig in.