
Abdul Ahad Abeer

Posted on • Originally published at abeer.hashnode.dev

Cache Components and Partial Pre-Rendering (PPR) in Next.js 16

Before diving into all the nitty-gritty of cache components and partial pre-rendering, I just want to let you know that these two concepts go hand in hand, which is why you can't skip PPR when you are learning cache components, and vice versa.

The reason cache components and partial pre-rendering are such a big deal is that they greatly improve the user experience and quality of a web app. To put it more concretely: Cache Components let you mix static, cached, and dynamic content in a single route, giving you the speed of static sites with the flexibility of dynamic rendering.

Server-rendered applications typically force a choice between static pages (fast but stale) and dynamic pages (fresh but slow). Moving this work to the client trades server load for larger bundles and slower initial rendering. Cache Components eliminates these tradeoffs by prerendering routes into a static HTML shell that's immediately sent to the browser, with dynamic content updating the UI as it becomes ready. Don’t worry if you haven’t been able to grasp all the concepts yet. Let’s dive into more details.

Cache Components

Caching is a technique that stores frequently accessed data temporarily. Instead of fetching the same information repeatedly from the server, your website retrieves it from a faster storage location. This reduces loading times and improves performance.

Cache Components let you save parts of your website so that pages load faster for your visitors. They introduce a new suite of features that allow more explicit and flexible caching. At the core is the “use cache” directive, which enables developers to cache pages, components, and functions. This directive works with the compiler to automatically generate cache keys for each usage, streamlining the caching process and improving efficiency.

In earlier versions of the App Router, caching happened automatically without you having to do anything. With Cache Components, you now choose when and where to use caching; it doesn't happen by default. Instead, any dynamic code in your pages, layouts, or API routes runs every time someone visits your site, unless you opt in. This gives developers more control.

How to enable cacheComponents

As it’s explained above that it gives more control to developer and it's an opt-in feature. By setting cacheComponents flags to true in your Next config file, you enable the feature.

// next.config.ts
import type { NextConfig } from 'next'

const nextConfig: NextConfig = {
  cacheComponents: true,
}

export default nextConfig

When cacheComponents is enabled, the following cache functions and configurations are available:

  • The use cache directive

  • The cacheLife function in conjunction with use cache

  • The cacheTag function

use cache Directive

This can be applied at the function, page, or component level — either at the top of the file or directly inline above a specific component or function.

// File level
'use cache'

export default async function Page() {
  // ...
}

// Component level
export async function DemoComponent() {
  'use cache'
  return <></>
}

// Function level
export async function getDemoData() {
  'use cache'
  const res = await fetch('/api/data')
  return res.json()
}

By default, when you use 'use cache', it follows a "standard profile" — a set of built-in rules for how long to keep that saved data and when to update it. These rules are called "revalidation settings." Revalidation just means "checking and refreshing the data to make sure it's not too old."

The Default Revalidation Settings (Standard Profile):

  • Stale: 5 minutes (on the client side)

    When you fetch data from the server, the browser caches the data right away and shows it to the user. For the next 5 minutes, every time the user requests that data, it comes from the cache, not the server. Once the 5-minute window ends, Next.js marks the cached data as "stale" (a bit outdated, like yesterday's news), but it doesn't erase the data right away.

    If the user requests the data again after the 5-minute window, the browser shows the old data (which is cached, not erased) and quietly triggers a fetch in the background. If the new data is different, the page updates smoothly without the user noticing a delay; the old data on the screen gets swapped for the new data without refreshing or reloading the page. This is great for quick loads while still staying mostly up to date.

  • Revalidate: 15 minutes (on the server side)

    It's similar to the client-side 5-minute stale window, but with a few key differences in timing, sharing, and what happens with many users. This is the big shared fridge (the server's Data Cache or Full Route Cache): one copy for everyone. The 15-minute timer applies to this shared cache.

    The first time someone makes a request, the server does the real work (fetches from the DB, calculates, etc.), saves the fresh result in its shared cache, and sends it to the user. For the next 15 minutes, the server serves the cached data to all users. After the 15-minute period ends, the server marks the shared cache as "stale" (old, needs checking), but no background fetch starts on its own.

    The first request after the 15-minute period triggers the recalculation, but the server doesn't wait for it to finish; it sends the old data so the response still feels fast. Meanwhile, the recalculation runs in the background. If 10 (or however many) users make requests before it finishes, all of them get the old data. Once the background work completes, the server overwrites the shared cache with the fresh data. From that moment on, every subsequent request (even from that same second batch) gets the new data instantly, with no loading and no staleness.

  • Expire: No time-based expiration (It's server-side only)

    What happens when you don’t specify expire? The server-side cache entry lives indefinitely (forever) as long as the app runs. It only gets updated (replaced) when someone requests after the revalidate time (15 min default) → stale-while-revalidate kicks in, background refresh happens, new data overwrites old. If literally no one ever visits that cached item again? It stays in the server's cache forever (or until server restart, deployment, or manual invalidation like revalidateTag). No auto-delete.

    What happens with expire specified? If no requests arrive for longer than the expire time (from the last real access/refresh), the server deletes the cache entry completely ("vanishes" it). Next request after that? No old data available → server does a full fresh calculation (synchronous—might show loading/fallback if slow). Once done, new data is cached, and timers reset (revalidate and expire start counting from now).

    Expire is basically a "if forgotten for too long, erase it" rule to prevent very stale or unused data from cluttering server memory.

  • When used at file level, all function exports must be async functions.

    If you put 'use cache' right at the top of a whole file (not just inside one function), it applies to every function you export from that file (like sharing tools with other parts of your app). But there's a catch: All those functions must be "async" (they use async and await to handle slow tasks like fetching from a database). Why? Because caching only makes sense for things that take time—quick, simple stuff doesn't need saving. If a function isn't async, Next.js will complain with an error.
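    For example, here is a minimal sketch of a file-level 'use cache' module where every export is async, which is what the directive requires (the db helper and its query methods are illustrative assumptions):

    // lib/catalog.ts
    'use cache'

    import { db } from '@/lib/db'   // illustrative database client

    export async function getCategories() {
      return db.categories.findMany()
    }

    export async function getFeaturedProducts() {
      return db.products.findMany({ where: { featured: true } })
    }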

cacheLife Function

The cacheLife function lets you decide how long you want to keep saved results for a function or component. You use it together with the “use cache” directive, and you put it inside the function or component you want to control. Without 'use cache', cacheLife has no effect—it's ignored.

Think of cacheLife as the remote control for the three timers we talked about earlier: stale, revalidate, and expire. By default, 'use cache' uses a built-in "standard" profile with stale=5 min (client), revalidate=15 min (server), and no expire. cacheLife lets you override those defaults with your own numbers or simple names.

Then, in your code:

// Inside an async function or component
import { cacheLife } from 'next/cache';

async function getBlogPosts() {
  'use cache'         // This turns caching on for this function
  cacheLife('days');  // Simple way: use a preset name
  // or, for full control:
  // cacheLife({ stale: 3600, revalidate: 7200, expire: 86400 });  // Custom seconds

  const res = await fetch('/api/posts');  // or DB query, etc.
  return res.json();
}

Custom Object (full control)

cacheLife({
  stale: 3600,      // seconds until client sees it as "old" (1 hour)
  revalidate: 7200, // seconds until server starts background refresh (2 hours)
  expire: 86400     // seconds until cache is deleted if no visits (1 day)
});
  • All values in seconds.

  • You can mix: short stale for fast client updates, longer revalidate for server savings.
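    For instance, a short sketch of that mix (the endpoint and numbers are illustrative): clients re-check after a minute, while the server only refreshes in the background every hour.

    'use cache'
    import { cacheLife } from 'next/cache'

    async function getDashboardFeed() {
      // stale: 1 min (client), revalidate: 1 hour (server), expire: 1 day (server)
      cacheLife({ stale: 60, revalidate: 3600, expire: 86400 })

      const res = await fetch('/api/feed')
      return res.json()
    }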

Next.js has built-in cache profiles to help you manage how your website stores and updates data. The built-in profiles ("seconds", "minutes", "hours", "days", "weeks", and "max") are pre-defined shortcut sets of the same three timers mentioned earlier:

  • stale → client-side "how long before this feels old"

  • revalidate → server-side "when to start background refresh"

  • expire → server-side "when to delete if untouched for too long"

They just save you from writing seconds manually. Next.js picks reasonable values based on the name of the preset, so you can say "days" instead of calculating 60*60*24.

Choose a profile based on how often your data changes:

  • seconds — For data that changes constantly (like live scores)

  • minutes — For data that updates often (like news feeds)

  • hours — For data that updates a few times a day (like product stock)

  • days — For data that updates daily (like blog posts)

  • weeks — For data that updates weekly (like newsletters)

  • max — For data that rarely changes (like legal info)


Quick Examples

'use cache'
import { cacheLife } from 'next/cache'

async function getWeather() {
  cacheLife('minutes')   // → stale 5 min (client), revalidate 1 min (server), expire 1 hour (server)
  return fetchWeatherAPI()   // placeholder for your actual weather API call
}

Must-Know Facts

  • If you use cacheLife, put it inside the function whose results you want to cache, even if you added the use cache directive at the top of your file.

  • You should only run cacheLife once each time your function runs. If your function has different paths (like if/else), you can call cacheLife in those branches but make sure only one gets used each time.

    For example, the following is correct because it makes exactly one cacheLife call per function invocation:

    'use cache'
    import { cacheLife } from 'next/cache';
    
    async function getUserData(userId: string, isPremium: boolean) {
      if (isPremium) {
        cacheLife('hours');           // Only this branch runs → one call
        // fetch premium features, longer cache ok
        return fetchPremiumStuff(userId);
      } else {
        cacheLife('minutes');         // Only this branch runs → one call
        // free users get shorter cache
        return fetchBasicStuff(userId);
      }
    }
    

    → Perfect. No matter which if branch executes, exactly one cacheLife runs.

    → Next.js gets a clear single rule for that execution.

    Not Allowed / Problematic (multiple calls in same run):

    'use cache'
    import { cacheLife } from 'next/cache';
    
    async function getUserData(userId: string) {
      cacheLife('days');               // First call
    
      const data = await db.user.find(userId);
    
      if (data.isSpecial) {
        cacheLife('minutes');          // Second call — now two calls happened!
        // ...
      }
    
      return data;
    }
    

    → Bad. If isSpecial is true → two cacheLife calls in one function run.

    → Next.js might ignore the second call or apply only the first, leading to unpredictable cache behavior (e.g. the wrong lifetime applied).
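    A sketch of one way to restructure this (assuming, as in the examples above, that cacheLife may be called after the data has been fetched inside the same cached function): branch once, so exactly one call runs per invocation.

    'use cache'
    import { cacheLife } from 'next/cache';

    async function getUserData(userId: string) {
      const data = await db.user.find(userId);   // same illustrative db helper as above

      // Branch once so that exactly one cacheLife call runs per invocation
      if (data.isSpecial) {
        cacheLife('minutes');
      } else {
        cacheLife('days');
      }

      return data;
    }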

cacheTag Function

The cacheTag function lets you label saved data so you can easily clear or refresh certain parts of your cache whenever you want. By adding tags, you can update or remove specific cached items without touching the rest.

It's the on-demand invalidation tool — the opposite of time-based timers like cacheLife. While cacheLife says "wait X minutes then refresh automatically", cacheTag says "only refresh when I tell you to, and only for this specific thing".

You use it inside code that's already marked with 'use cache' (or in fetch with next.tags), just like cacheLife. It's especially useful for data that changes because of user actions (e.g., adding a product to cart, editing a blog post, uploading a photo) — you can instantly tell Next.js "this is outdated now".

// app/products/page.tsx or lib/getProducts.ts
'use cache'

import { cacheTag } from 'next/cache'

async function getProducts() {
  cacheTag('products')          // ← Attach the label "products" to this cache entry
  // You can add multiple: cacheTag('products', 'featured', 'category-electronics')

  const products = await db.products.findMany()
  return products
}

Now the cache entry for getProducts() is tagged "products".

To invalidate (refresh) it later — usually in a Server Action after a mutation:

// app/actions.ts
'use server'

import { revalidateTag } from 'next/cache'

export async function addProduct(formData: FormData) {
  // Save new product to DB...
  await db.products.create({ ... })

  revalidateTag('products')     // ← "Hey Next.js, anything tagged 'products' is now invalid!"
}
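On the front end, a Server Action like this can be passed straight to a form. A minimal sketch (the route and field names are illustrative assumptions):

// app/products/new/page.tsx
import { addProduct } from '@/app/actions'

export default function NewProductPage() {
  return (
    <form action={addProduct}>
      <input name="title" placeholder="Product title" />
      <button type="submit">Add product</button>
    </form>
  )
}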

What happens after revalidateTag('products')?

  • The cache entry tagged "products" gets marked as invalid/stale.

  • Next time someone visits a page that uses getProducts():

    • It serves the old (stale) data instantly (fast UX).
    • In the background, it re-runs the function → fetches fresh products → updates the cache.
    • Future requests get the new data.
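revalidateTag is not limited to Server Actions; it also works inside Route Handlers, which is handy when an external system (say, a CMS webhook) should trigger the refresh. A minimal sketch, assuming a hypothetical /api/revalidate endpoint (a real one would also verify a secret):

// app/api/revalidate/route.ts
import { revalidateTag } from 'next/cache'
import { NextResponse } from 'next/server'

export async function POST(request: Request) {
  const { tag } = await request.json()   // e.g. { "tag": "products" }
  revalidateTag(tag)
  return NextResponse.json({ revalidated: true, tag })
}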

Key Points & Behaviors

  • Multiple tags possible:

    cacheTag('products', `product-${newProductId}`, 'admin-dashboard')
    

    → You can invalidate any one of them later (e.g., revalidateTag('product-123') only affects that single product).

  • Tags are per-cache-entry:

    • If two different functions both tag "products", invalidating it affects both.
  • Works with fetch too (alternative to cacheTag):

    'use cache'
    const res = await fetch('/api/products', {
      next: { tags: ['products'] }   // Same as cacheTag('products')
    })
    

Partial Pre-Rendering

For years, web developers have been forced to make a difficult choice: either Static Site Generation (SSG) for blazing-fast initial loads at the cost of stale data, or Server-Side Rendering (SSR) for real-time data at the cost of a slower Time to First Byte (TTFB). There was rarely a middle ground. Next.js 16 effectively ends this debate with the stabilization of Partial Prerendering (PPR).

The Binary Choice Problem

Before Next.js 16, the rendering decision was often an "all-or-nothing" waterfall. If a single component on your page required dynamic data at request time—for example, a personalized "Welcome, [User]" message in the header or a real-time stock ticker—the entire route had to switch to dynamic rendering.

This meant that even your static footer, logo, navigation links, and marketing copy—elements that never change between users—had to be regenerated on the server for every single request. This unnecessary CPU overhead increased latency for the user and server costs for you.
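To make the problem concrete, here is a hedged sketch of the kind of page that used to trigger this behavior (the component and cookie names are illustrative). A single request-time read in the header forced the whole route, footer and all, to render on every request:

// app/page.tsx
import { cookies } from 'next/headers'

async function WelcomeBanner() {
  const cookieStore = await cookies()                     // request-time API → dynamic
  const name = cookieStore.get('name')?.value ?? 'guest'
  return <p>Welcome, {name}</p>
}

export default function HomePage() {
  return (
    <main>
      <nav>{/* static links */}</nav>
      <WelcomeBanner />   {/* before PPR, this made the entire page dynamic */}
      <footer>{/* static marketing copy */}</footer>
    </main>
  )
}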

The Concept of Partial Prerendering

Imagine your page has parts that don't change (like a header or layout) and parts that do, based on a request to the server (like user data). Next.js will first send a static shell with the parts it can cache, so the page shows up quickly.

Any dynamic sections you wrap in React <Suspense> load after the shell: those placeholders are initially blank or show a fallback UI, then stream in with real data when ready. In other words, “the server sends a static shell containing cached content, ensuring a fast initial load” and “dynamic sections wrapped in Suspense stream in parallel as they become ready”. Those dynamic sections are often called ‘dynamic holes’. This means users see the basic page immediately, and then the dynamic data fills in smoothly.

Suspense Boundary (React)

Suspense is the key mechanism behind PPR. React <Suspense> helps Next.js know where the dynamic gaps are. You must wrap any code that fetches data at request time (APIs, database calls, cookies(), etc.) in a <Suspense> boundary. If you don't, Next.js will block the whole page until that data loads, which feels slow. When wrapped, the fallback UI shows instantly, and the real data replaces it later. The docs say wrapping a component in <Suspense> doesn't make it dynamic by itself (your data fetching does), but Suspense “acts as a boundary that enables streaming”.

Enabling the Feature

Before using Partial Prerendering, you need to enable the new Cache Components architecture in your configuration. In Next.js 16, this single flag unlocks both PPR and the new caching directives.

// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  cacheComponents: true, // Enables Partial Prerendering & Cache Components
};

export default nextConfig;

How This Looks in Practice

An async Server Component fetches data at request time and is placed inside a Suspense boundary, while the rest of the layout remains fully static.

import { Suspense } from 'react';
import SlowData from '@/components/SlowData';
import Skeleton from '@/components/Skeleton';

export default function PPRPage() {
  return (
    <div>
      <h1>Partial Prerendering Demo</h1>
      <p>This header loads instantly as part of the static shell</p>

      {/* Suspense boundary separates static from dynamic */}
      <Suspense fallback={<Skeleton />}>
        <SlowData />
      </Suspense>
    </div>
  );
}

When the page loads, the prerendered shell is available immediately. The header and descriptive text appear without waiting for any backend work, since they do not rely on request-time data. The dynamic section enters a pending state, and a lightweight skeleton placeholder occupies the space where the final content will appear.
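For completeness, here is a hedged sketch of what the SlowData component might look like (the endpoint and response shape are illustrative assumptions):

// components/SlowData.tsx
export default async function SlowData() {
  // Request-time work like this is what keeps the component out of the static shell
  const res = await fetch('https://api.example.com/slow-data')
  const data: { message: string } = await res.json()

  return <p>{data.message}</p>
}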

Last Words

As we wrap up this blog post, I hope you now understand how cache components, their directives and partial pre-rendering work, where to use them in real-life projects, and how to choose the right tool for your needs.
