Juan Torchia

Originally published at juanchi.dev

From 3 Seconds to 300ms: How I Optimized a Next.js App in Production

There's a specific moment in a developer's life when you realize you broke something. Not with an error. With silence. With slowness. With that spinner going round and round while the user wonders if your app is still alive or already dead.

It happened to me in production. A Next.js app we had proudly launched was hitting between 2.8 and 3.4 seconds on First Contentful Paint. On mobile, worse. LCP was hovering around 4 seconds. Google Lighthouse was looking at me with pure contempt and I had no excuses — it was my code, my decisions, my problem.

This is the story of how I diagnosed the disaster, what I changed, and how I got to 300ms FCP in production. No bullshit, no "just use a CDN", just the dirty work nobody shows you in tutorials.

The Diagnosis: First, Figure Out What's Actually on Fire

Before touching a single line of code, you need to know what's slow. I made the classic mistake: assuming. "It's probably the bundle," I thought. Spoiler: it wasn't only the bundle.

Tools I used:

  • Lighthouse in incognito mode (no extensions contaminating the results)
  • Chrome DevTools → Network tab with throttling set to "Fast 3G"
  • Vercel Analytics for real user data
  • next build with ANALYZE=true to inspect the bundle

For the bundle analyzer, I installed this:

npm install @next/bundle-analyzer

And in next.config.js:

const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
})

/** @type {import('next').NextConfig} */
const nextConfig = {
  // your config
}

module.exports = withBundleAnalyzer(nextConfig)

Then you run:

ANALYZE=true npm run build

And that's when I saw the horror. I had moment.js imported in full — 67kb gzipped — to format two dates across the entire app. I had a charting library loading in the main bundle when it only appeared on one dashboard page. I had components fetching data on the client that could perfectly well have been Server Components.

The real diagnosis surfaced three big problems:

  1. Bloated client bundle with unnecessary dependencies
  2. Client-side fetch waterfall (fetch after fetch, chained)
  3. Unoptimized images with no declared dimensions (killer layout shift)

Problem 1: The Bundle Was a Mess

Bye Bye moment.js

I replaced moment.js with date-fns using specific imports:

// ❌ Before — pulling in all of moment
import moment from 'moment'
const date = moment(timestamp).format('DD/MM/YYYY')

// ✅ After — only what I actually need
import { format } from 'date-fns'
import { es } from 'date-fns/locale'
const date = format(new Date(timestamp), 'dd/MM/yyyy', { locale: es })

Result: -67kb gzipped from the main bundle. Yes, it was that ridiculous.

Dynamic Imports for What Isn't Visible at Load

The dashboard chart shouldn't be in the bundle for the home page. Dynamic import with next/dynamic:

import dynamic from 'next/dynamic'

// ❌ Before
import { RevenueChart } from '@/components/RevenueChart'

// ✅ After
const RevenueChart = dynamic(
  () => import('@/components/RevenueChart'),
  {
    loading: () => <ChartSkeleton />,
    ssr: false // this component uses window, can't SSR
  }
)

This pulled ~45kb out of the initial bundle and the user sees a skeleton while it loads — way better UX than staring at nothing.

Problem 2: The Client-Side Fetch Waterfall

This was the biggest problem. I had a user profile page doing this:

// ❌ The horror — each fetch waits for the previous one
const ProfilePage = () => {
  const [user, setUser] = useState(null)
  const [posts, setPosts] = useState([])
  const [stats, setStats] = useState(null)

  useEffect(() => {
    fetch('/api/user')
      .then(r => r.json())
      .then(user => {
        setUser(user)
        // Waits for user before fetching posts
        return fetch(`/api/posts?userId=${user.id}`)
          .then(r => r.json())
          .then(posts => {
            setPosts(posts)
            // Waits for posts before fetching stats (user is still in scope here)
            return fetch(`/api/stats?userId=${user.id}`)
          })
      })
      .then(r => r.json())
      .then(setStats)
  }, [])
}

Three chained requests. Waiting for request 1 to fire request 2. Waiting for request 2 to fire request 3. On a normal connection that's 800ms of pure overhead.
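To see how much the chaining alone costs, here's a minimal, self-contained sketch (plain Node, with setTimeout standing in for network latency) comparing three chained awaits against Promise.all:

```javascript
// Simulated request: resolves after `ms` milliseconds, like a network call
const fakeFetch = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms))

async function sequential() {
  const start = Date.now()
  // Each await blocks the next one — a waterfall
  const user = await fakeFetch(100, 'user')
  const posts = await fakeFetch(100, 'posts')
  const stats = await fakeFetch(100, 'stats')
  return { results: [user, posts, stats], elapsed: Date.now() - start }
}

async function parallel() {
  const start = Date.now()
  // All three start immediately; total time ≈ the slowest single request
  const results = await Promise.all([
    fakeFetch(100, 'user'),
    fakeFetch(100, 'posts'),
    fakeFetch(100, 'stats'),
  ])
  return { results, elapsed: Date.now() - start }
}

async function main() {
  const seq = await sequential()   // ~300ms: 100 + 100 + 100
  const par = await parallel()     // ~100ms: max(100, 100, 100)
  console.log(`sequential: ${seq.elapsed}ms, parallel: ${par.elapsed}ms`)
  return { seq, par }
}

main()
```

With real latencies in the hundreds of milliseconds per hop, the gap is exactly the overhead described above.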

The solution in two steps:

Step 1: Parallelize What Can Be Parallelized

If you have the userId from the start (from the session, for example), you don't need to wait for the user object to arrive before requesting their posts:

// ✅ Parallel when possible
const ProfilePage = ({ userId }: { userId: string }) => {
  const [user, setUser] = useState(null)
  const [posts, setPosts] = useState([])
  const [stats, setStats] = useState(null)

  useEffect(() => {
    Promise.all([
      fetch(`/api/user/${userId}`).then(r => r.json()),
      fetch(`/api/posts?userId=${userId}`).then(r => r.json()),
      fetch(`/api/stats?userId=${userId}`).then(r => r.json()),
    ]).then(([user, posts, stats]) => {
      setUser(user)
      setPosts(posts)
      setStats(stats)
    })
  }, [userId])
}
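One caveat with Promise.all: if any single request fails, the whole thing rejects and you render nothing. When some of the data is nice-to-have rather than critical, Promise.allSettled lets the page survive a partial failure. A sketch — the "posts and stats are optional" policy and the fallback values are my own choices, not from the original code:

```javascript
// Fire all requests at once, but tolerate individual failures
async function loadProfileData(fetchUser, fetchPosts, fetchStats) {
  const [user, posts, stats] = await Promise.allSettled([
    fetchUser(),
    fetchPosts(),
    fetchStats(),
  ])

  // The user object is critical — propagate its failure
  if (user.status === 'rejected') throw user.reason

  return {
    user: user.value,
    // Posts and stats degrade gracefully to empty fallbacks
    posts: posts.status === 'fulfilled' ? posts.value : [],
    stats: stats.status === 'fulfilled' ? stats.value : null,
  }
}
```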

Step 2: Move It to the Server with Server Components (The Real Fix)

But the actual solution was to stop fetching on the client altogether. With Next.js 13+ App Router, this becomes:

// app/profile/[userId]/page.tsx
// ✅ Server Component — everything on the server, in parallel
import { getUserData, getUserPosts, getUserStats } from '@/lib/api'

export default async function ProfilePage({ 
  params 
}: { 
  params: { userId: string } 
}) {
  // Parallel on the server — no waterfall, no client round trip
  const [user, posts, stats] = await Promise.all([
    getUserData(params.userId),
    getUserPosts(params.userId),
    getUserStats(params.userId),
  ])

  return (
    <div>
      <UserHeader user={user} />
      <StatsBar stats={stats} />
      <PostsList posts={posts} />
    </div>
  )
}

This completely eliminated the client → server round trip for initial data fetching. The HTML arrives in the browser already carrying the data inside. The time those three fetches took stopped counting against the user.
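The getUserData / getUserPosts / getUserStats helpers aren't shown above. A plausible sketch of one of them, assuming a plain REST backend — the base URL, response shape, and revalidation window are placeholders of mine, not the original code — using the caching options Next.js adds on top of fetch:

```javascript
// lib/api.js — hypothetical helper; API_BASE is an assumed placeholder
const API_BASE = 'https://api.example.com'

async function getUserData(userId) {
  const res = await fetch(`${API_BASE}/user/${userId}`, {
    // Next.js extends fetch with this option: cache the response
    // and revalidate it at most every 60 seconds
    next: { revalidate: 60 },
  })
  if (!res.ok) throw new Error(`Failed to load user ${userId}: ${res.status}`)
  return res.json()
}

// getUserPosts and getUserStats would follow the same pattern
```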

Problem 3: Images Were Killing Me

I had images using native <img> tags instead of next/image. No declared width or height. No intelligent lazy loading. My Cumulative Layout Shift was 0.34 — Google hates you if you go above 0.1.

// ❌ Layout shift guaranteed
<img src={user.avatar} alt={user.name} />

// ✅ Next.js Image with everything configured
import Image from 'next/image'

<Image
  src={user.avatar}
  alt={user.name}
  width={64}
  height={64}
  className="rounded-full"
  priority={false} // true only for above-the-fold images
/>

For hero images (above the fold), I used priority={true} so Next.js preloads them. For everything else, automatic lazy loading.

I also configured allowed domains in next.config.js:

module.exports = {
  images: {
    remotePatterns: [
      {
        protocol: 'https',
        hostname: 'storage.googleapis.com',
        pathname: '/my-bucket/**',
      },
    ],
    formats: ['image/avif', 'image/webp'],
  },
}

Next.js automatically converts to WebP/AVIF based on what the browser supports. My 800kb images dropped to 120kb in WebP.
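If your layouts use unusual breakpoints, you can also tune the widths Next.js generates for the srcset. These values are illustrative examples of mine, not the defaults:

```javascript
// next.config.js — optional tuning of generated srcset widths (example values)
module.exports = {
  images: {
    // Widths used when an image spans a large fraction of the viewport
    deviceSizes: [640, 768, 1024, 1280, 1920],
    // Widths used for fixed-size images (avatars, thumbnails)
    imageSizes: [32, 64, 96, 128, 256],
  },
}
```

Fewer, better-targeted widths mean fewer variants generated and cached on the image optimizer.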

The Final Touch: Aggressive Caching

I was caching practically nothing. App Router routes are statically cached by default, but I was accidentally breaking it:

// ❌ This disables static cache
export const dynamic = 'force-dynamic'

// ✅ Revalidate every 60 seconds — fresh but cached
export const revalidate = 60

For fetch calls inside Server Components, I used the cache options:

// Cache with time-based revalidation
const data = await fetch('https://api.example.com/data', {
  next: { revalidate: 3600 } // 1 hour
})

// Static cache (never changes until next deploy)
const config = await fetch('https://api.example.com/config', {
  cache: 'force-cache'
})

// No cache (real-time data)
const liveData = await fetch('https://api.example.com/live', {
  cache: 'no-store'
})

The Real Results

One week after deploying all the changes, Vercel Analytics numbers:

Metric        Before   After    Improvement
FCP (p75)     3.1s     310ms    -90%
LCP (p75)     4.2s     820ms    -80%
CLS           0.34     0.02     -94%
Bundle size   487kb    198kb    -59%
TTFB          890ms    180ms    -80%

Lighthouse score went from 42 to 91. On mobile, from 31 to 84.

What had the biggest impact, in order:

  1. Server Components eliminating the client waterfall (40% of the improvement)
  2. Bundle splitting and killing heavy dependencies (30%)
  3. Image optimization (20%)
  4. Caching (10%)

What I Learned — and What I'd Do Differently

The fundamental mistake was not measuring from the start. I developed for months assuming things were "fine" and only saw the disaster in production with real users. Now I have Lighthouse wired into CI/CD — the build fails if the score drops below 80.
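Wiring that gate into CI with Lighthouse CI can look roughly like this — the URL and the 0.8 threshold mirror the 80-score rule, but the exact config is my sketch, not the author's pipeline:

```javascript
// lighthouserc.js — a minimal Lighthouse CI config sketch (assumed values)
module.exports = {
  ci: {
    collect: {
      // URL(s) to audit; in a real pipeline this points at a preview deploy
      url: ['http://localhost:3000/'],
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        // Fail the build if the performance category drops below 0.8 (score 80)
        'categories:performance': ['error', { minScore: 0.8 }],
      },
    },
  },
}
```

Run it with `lhci autorun` in the CI job after the build step.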

I also learned that performance optimization isn't a sprint, it's a mindset. Every dependency you add has a cost. Every client-side fetch has a cost. Every image without dimensions has a cost. You pay that cost later, with frustrated users and SEO in the gutter.

Performance optimization in Next.js isn't magic — it's honest diagnosis, conservative decisions around dependencies, and actually using the tools you already have. Server Components exist for this. The Image component exists for this. The bundle analyzer exists for this.

Use them before Lighthouse starts screaming at you.
