Sachin Maurya
Building a Multi-Layer Caching Strategy in Next.js App Router: From Static to Real-Time

Caching is one of those make-or-break aspects of modern web development. Get it right, and your app is fast, resilient, and scalable. Get it wrong, and you’re left with stale data, poor performance, or worse — broken user experiences.

In my previous role, I led the migration of our corporate platform to the Next.js App Router. One of our core challenges was designing a caching strategy that could handle:

  • Fully static marketing pages
  • Occasionally updated blog content
  • Personalized user dashboards
  • Real-time activity feeds
  • Mixed-content pages (static + dynamic)

In this post, I’ll walk through how we built a multi-layer caching architecture that kept our Lighthouse scores consistently above 95 while ensuring data freshness where it mattered most.


Understanding Next.js Caching Layers

Before jumping into implementation, it’s important to understand the caching layers available in the App Router:

  1. Full Route Cache – Entire route responses (HTML) cached at build or request time
  2. Data Cache – Results of fetch requests persisted across requests
  3. Client Cache – Caching done in the browser (via React Query, Zustand, etc.)
  4. CDN/Edge Cache – Caching at the network edge (Vercel, Cloudflare, etc.)

Each layer serves a different purpose, and using them together is key.
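Concretely, the Data Cache (layer 2) is controlled per `fetch` call. A minimal sketch of the three modes we leaned on throughout this migration (the option names are Next.js's own; the one-hour window is just an example):

```javascript
// Three Data Cache modes, expressed as fetch options in the App Router.
const staticOpts = { cache: 'force-cache' }    // cached until explicitly invalidated
const isrOpts = { next: { revalidate: 3600 } } // re-fetched at most once per hour
const dynamicOpts = { cache: 'no-store' }      // fetched fresh on every request

// Usage inside a Server Component:
// const res = await fetch('https://api.example.com/posts', isrOpts)
```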


Layer 1: Fully Static Pages with force-static

We had several pages that never changed: /about, /contact, /legal. For these, we used:

// app/about/page.js
export const dynamic = 'force-static'

export default function AboutPage() {
  // Rendered once at build time, then served
  // from the CDN indefinitely
  return <main>{/* static about content */}</main>
}

Why force-static?

It tells Next.js to skip all dynamic logic, treat the page as completely static, and cache it at the edge. This is ideal for true static content.

Impact:

These pages loaded instantly on repeat visits and required zero backend computation.


Layer 2: Incrementally Static Pages with ISR

Our blog and documentation pages were mostly static but could be updated via CMS. We used Incremental Static Regeneration (ISR) with on-demand revalidation.

Step 1 – Basic ISR

// app/blog/[slug]/page.js
export const revalidate = 3600 // Revalidate every hour

async function getPost(slug) {
  const res = await fetch(`https://api.example.com/posts/${slug}`, {
    next: { tags: ['posts'] }
  })
  return res.json()
}

export default async function BlogPost({ params }) {
  const post = await getPost(params.slug)
  return <PostLayout post={post} />
}

Step 2 – On-Demand Revalidation

When a post was updated in the CMS, we triggered:

// app/api/revalidate/route.js
import { revalidateTag } from 'next/cache'

export async function POST(request) {
  const { tag } = await request.json()
  if (!tag) {
    return Response.json({ error: 'Missing tag' }, { status: 400 })
  }
  // In production, also verify a shared secret before revalidating
  revalidateTag(tag) // e.g., 'posts'
  return Response.json({ revalidated: true })
}

This meant:

  • Blog posts were cached for performance
  • But could be updated in seconds when needed
  • No full rebuilds required
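On the CMS side, the webhook only needs to POST the right tag to that endpoint. A sketch of the request we sent (the endpoint path matches the route handler above; the helper name is ours, split out as a pure function for testability):

```javascript
// Builds the POST request a CMS webhook sends to /api/revalidate.
function buildRevalidateRequest(tag) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ tag }),
  }
}

// Usage from the webhook handler:
// await fetch('https://www.example.com/api/revalidate', buildRevalidateRequest('posts'))
```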

Layer 3: Personalized Dashboards – No Server Cache

For authenticated user dashboards (/dashboard, /settings), we disabled server caching entirely.

// app/dashboard/page.js
export const dynamic = 'force-dynamic'

export default async function DashboardPage() {
  // Never cached on the server: fresh data on every request.
  // getCurrentUser is a stand-in for your auth/session helper.
  const user = await getCurrentUser()
  return <DashboardShell user={user} />
}

Instead, we moved caching to the client layer:

  • React Query for server-state caching, background updates, and request deduplication
  • Zustand for client-side UI state
  • Optimistic updates for a snappy UX

Example with React Query:

// app/dashboard/activity.js
'use client'

import { useQuery } from '@tanstack/react-query'

function fetchActivity() {
  return fetch('/api/activity').then(res => res.json())
}

export default function ActivityFeed() {
  const { data, isLoading } = useQuery({
    queryKey: ['activity'],
    queryFn: fetchActivity,
    staleTime: 1000 * 60 * 5, // Consider data fresh for 5 minutes
  })

  if (isLoading) return <ActivitySkeleton />

  return (
    <ul>
      {data.map(item => (
        <li key={item.id}>{item.message}</li>
      ))}
    </ul>
  )
}

Why client cache here?

User data is personalized and sensitive. Server caching could leak data between users. Client caching is safe and still provides performance benefits.
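For the optimistic updates mentioned above, the core step inside React Query's `onMutate` callback is just a cache transform plus a snapshot for rollback. A simplified sketch of that logic (the helper name is ours):

```javascript
// Applies an edit to the cached list immediately and keeps a snapshot,
// so the mutation's onError callback can roll back if the server rejects it.
function applyOptimisticUpdate(cachedItems, updatedItem) {
  const snapshot = cachedItems
  const next = cachedItems.map(item =>
    item.id === updatedItem.id ? { ...item, ...updatedItem } : item
  )
  return { next, snapshot }
}
```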


Layer 4: Real-Time Components with Streaming

For real-time components like live notifications or stats, we used Streaming with Suspense.

Component Structure

// app/dashboard/live-stats.js
'use client'

import { useEffect, useState } from 'react'

export default function LiveStats() {
  const [stats, setStats] = useState(null)

  useEffect(() => {
    const ws = new WebSocket('wss://api.example.com/live')
    ws.onmessage = (event) => {
      setStats(JSON.parse(event.data))
    }
    return () => ws.close()
  }, [])

  return stats ? <StatCard stats={stats} /> : <StatSkeleton />
}

Wrapping in Suspense

// app/dashboard/page.js
import { Suspense } from 'react'
import LiveStats from './live-stats'

export default function Dashboard() {
  return (
    <div>
      <h1>Dashboard</h1>
      <Suspense fallback={<StatSkeleton />}>
        <LiveStats />
      </Suspense>
    </div>
  )
}

This let the rest of the page render immediately, while the real-time component displayed its skeleton until the WebSocket connection was established on the client.


The Hybrid Page Problem

Our homepage was tricky:

  • Static hero section
  • Dynamic user greeting (if logged in)
  • Real-time trending section
  • Cached blog previews

Our Solution: Isolate Caching Boundaries

// app/page.js
import { Suspense, cache } from 'react'
import { getHeroData, getTrendingPosts } from '@/app/actions'
import Hero from './hero'
import UserGreeting from './user-greeting'
import TrendingSection from './trending'
import BlogPreview from './blog-preview'

// Deduplicate static data fetches within a single render pass
const getCachedHeroData = cache(getHeroData)
const getCachedTrending = cache(getTrendingPosts)

export default async function HomePage() {
  const [heroData, trending] = await Promise.all([
    getCachedHeroData(),
    getCachedTrending(),
  ])

  return (
    <main>
      {/* Static hero */}
      <Hero data={heroData} />

      {/* Dynamic user greeting */}
      <Suspense fallback={<GreetingSkeleton />}>
        <UserGreeting />
      </Suspense>

      {/* Real-time trending */}
      <Suspense fallback={<TrendingSkeleton />}>
        <TrendingSection />
      </Suspense>

      {/* Cached blog previews */}
      <BlogPreview posts={trending} />
    </main>
  )
}

By splitting sections into different caching boundaries, we optimized each part independently.
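React's `cache()` wrapper used above deduplicates data fetches within a single render pass. A toy illustration of the behavior it provides (not React's actual implementation):

```javascript
// Memoizes a function by its arguments: within one "request", repeated
// calls with the same arguments invoke the underlying function only once.
function simpleCache(fn) {
  const store = new Map()
  return (...args) => {
    const key = JSON.stringify(args)
    if (!store.has(key)) store.set(key, fn(...args))
    return store.get(key)
  }
}
```

This is why wrapping `getHeroData` is safe: any component in the tree can call the cached version without triggering duplicate backend requests.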


Monitoring & Debugging

A caching strategy is useless without monitoring.

1. Lighthouse + Web Vitals

We ran Lighthouse in CI on every PR and set performance budgets.
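A minimal sketch of what such a budget can look like in Lighthouse CI's `lighthouserc.js` format (the URL is a placeholder; the 0.95 threshold mirrors the scores we targeted):

```javascript
// lighthouserc.js sketch: fail the CI run if performance drops below 95.
// (Exported as `module.exports = config` in the actual config file.)
const config = {
  ci: {
    collect: { url: ['http://localhost:3000/'] },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.95 }],
      },
    },
  },
}
```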

2. Cache Headers via Middleware

We added middleware to set explicit cache headers on static assets (hit/miss ratios themselves came from our CDN logs):

// middleware.js
import { NextResponse } from 'next/server'

export function middleware(request) {
  const response = NextResponse.next()

  // Add cache headers for static assets
  if (request.nextUrl.pathname.startsWith('/_next/static')) {
    response.headers.set('Cache-Control', 'public, max-age=31536000, immutable')
  }

  return response
}

3. Vercel Analytics

We used Vercel’s built-in analytics to monitor cache hit ratios and edge performance.


Results & Metrics

After implementing this layered approach:

  • Lighthouse Performance: 96+ (consistent across pages)
  • Cache Hit Ratio (CDN): 92%
  • Reduced Backend Load: ~40% fewer queries for cached content
  • Time to Interactive: Improved by ~35%
  • Real-Time Updates: Reflected within 2-3 seconds

Lessons Learned

  1. Cache based on content type, not convenience – Each type of content has different freshness requirements.
  2. Use revalidateTag over revalidatePath – More precise, less expensive.
  3. Client caching is not obsolete – It’s essential for personalized data.
  4. Monitor, monitor, monitor – Caching behavior can change with traffic patterns and updates.
  5. Document your strategy – Team onboarding and debugging become much easier.
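To make lesson 2 concrete: a tag invalidates every cached entry that was fetched with it, wherever it is rendered, while a path invalidates one route at a time. A toy model of the difference (the data is illustrative):

```javascript
// Each cached entry records the route that produced it and the fetch tags it used.
const cacheEntries = [
  { path: '/blog/a', tags: ['posts'] },
  { path: '/blog/b', tags: ['posts'] },
  { path: '/about', tags: [] },
]

// revalidateTag('posts'): one call invalidates every entry tagged 'posts'
function invalidatedByTag(entries, tag) {
  return entries.filter(e => e.tags.includes(tag)).map(e => e.path)
}

// revalidatePath('/blog/a'): invalidates a single route
function invalidatedByPath(entries, path) {
  return entries.filter(e => e.path === path).map(e => e.path)
}
```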

Final Thoughts

Caching in the App Router is powerful but requires intentional design. There’s no one-size-fits-all solution. By building a multi-layer caching strategy, you can optimize for both performance and freshness.

If you’re working on a Next.js app and thinking about caching, start by:

  1. Auditing your content types – What’s static? What’s dynamic?
  2. Mapping content to cache layers – Use the right tool for each job.
  3. Implementing incrementally – Start with static pages, then add ISR, then client caching.
  4. Measuring impact – Use Lighthouse and analytics to validate improvements.

Caching isn’t just a performance optimization—it’s a core part of your application architecture. Design it thoughtfully, and your users (and your infrastructure) will thank you.


Interested in more deep dives on Next.js performance?

Check out my other posts on React re-renders, bundle optimization, and WCAG compliance.
