The React Compiler (formerly "React Forget") shipped stable in React 19. The promise? Automatic memoization — no more hand-crafting useMemo, useCallback, and React.memo wrappers. Just write plain React, and the compiler handles the rest.
But does it actually hold up? I ran a battery of benchmarks across real-world-ish scenarios to find out.
## What the React Compiler Actually Does
Before we dive into numbers, a quick clarification. The React Compiler is a build-time Babel/SWC transform that analyzes your component tree and automatically inserts memoization where it's safe to do so. It rewrites your components to cache computed values and prevent unnecessary re-renders — all at compile time.
useMemo and useCallback, by contrast, are runtime hooks that you manually place to signal: "cache this between renders."
They're solving the same problem, but at different layers of the stack.
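Conceptually, the compiled output caches by input identity. Here is a rough plain-JavaScript sketch of that cache-slot idea (a hypothetical helper for illustration, not the compiler's actual output, which uses React's internal memo-cache hook):

```javascript
// Simplified sketch of the cache-slot idea in plain JS. Slots hold the
// last inputs and the cached result; the work reruns only when an
// input's reference changes.
function computeFiltered(cache, products, filterText) {
  if (!Object.is(cache[0], products) || !Object.is(cache[1], filterText)) {
    cache[0] = products;
    cache[1] = filterText;
    cache[2] = products.filter(p => p.name.includes(filterText));
  }
  return cache[2];
}
```

Same inputs, same array reference back; change either input and the filter reruns. That reference stability is what keeps child components from re-rendering.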
## The Test Setup

All benchmarks were run with:

- React 19.1 (stable compiler enabled via `babel-plugin-react-compiler`)
- Vite 6 dev build (production builds for final results)
- Node 22, MacBook Pro M3
- React DevTools Profiler + `performance.now()` for timing
- 5 runs each, median reported
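For reference, the compiler in this setup is wired up through the React plugin's Babel hook in Vite. A config along these lines should match (a sketch of the standard setup, not copied from the benchmark repo):

```javascript
// vite.config.js — enable the React Compiler via the plugin's Babel hook
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [
    react({
      babel: {
        // Runs the compiler transform over every component at build time
        plugins: ['babel-plugin-react-compiler'],
      },
    }),
  ],
});
```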
Three scenarios were tested:
- Heavy list rendering — 5,000 items, filtered + sorted
- Deeply nested form — 12-level component tree with shared context
- Real-time data dashboard — updates every 100ms, chart + table + stats panel
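The timing methodology above (`performance.now()`, 5 runs, median) can be sketched as a small harness; `runScenario` here is a placeholder for whatever workload is being measured:

```javascript
// Median-of-N timing harness matching the methodology above.
// `runScenario` is a stand-in for the measured work.
function medianOfRuns(runScenario, runs = 5) {
  const times = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now(); // global in Node 16+ and browsers
    runScenario();
    times.push(performance.now() - start);
  }
  times.sort((a, b) => a - b);
  return times[Math.floor(times.length / 2)];
}
```

Medians rather than means keep a single GC pause or JIT warm-up run from skewing the numbers.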
## Benchmark 1: Heavy List Rendering
This is the classic useMemo use case — filtering and sorting a large array that shouldn't recompute unless the source data or filter criteria change.
### The Manual Version

```jsx
function ProductList({ products, filterText, sortBy }) {
  const filtered = useMemo(() => {
    return products
      .filter(p => p.name.includes(filterText))
      .sort((a, b) => (a[sortBy] > b[sortBy] ? 1 : -1));
  }, [products, filterText, sortBy]);

  return filtered.map(p => <ProductRow key={p.id} product={p} />);
}
```
### The Compiler Version

```jsx
// No useMemo — compiler handles it
function ProductList({ products, filterText, sortBy }) {
  const filtered = products
    .filter(p => p.name.includes(filterText))
    .sort((a, b) => (a[sortBy] > b[sortBy] ? 1 : -1));

  return filtered.map(p => <ProductRow key={p.id} product={p} />);
}
```
### Results

| Scenario | No Memo | useMemo | React Compiler |
|---|---|---|---|
| Initial render | 41ms | 43ms | 42ms |
| Re-render (no change) | 38ms | 2ms | 3ms |
| Re-render (filter change) | 40ms | 41ms | 40ms |
| Re-render (unrelated state) | 37ms | 2ms | 2ms |
Takeaway: The compiler matched manual useMemo almost exactly. Both deliver ~19× speedup on unnecessary re-renders vs. the unmemoized baseline. The compiler correctly detected that filtered only needs recomputation when products, filterText, or sortBy change.
## Benchmark 2: Deeply Nested Form
This is where things get more interesting. A 12-level deep component tree with a shared FormContext. Each field conditionally shows a validation error. Typing in one field shouldn't re-render siblings.
This is also where useMemo gets painful to manage manually — you need useCallback on handlers, useMemo on derived values, and React.memo on subtrees. It's easy to miss one.
### The Manual Version (abbreviated)

```jsx
// Abbreviated — errors and dispatch come from the shared FormContext
const FormSection = React.memo(({ sectionId, fields }) => {
  const sectionErrors = useMemo(
    () => errors.filter(e => e.sectionId === sectionId),
    [errors, sectionId]
  );

  const handleChange = useCallback((fieldId, value) => {
    dispatch({ type: 'SET_FIELD', sectionId, fieldId, value });
  }, [sectionId, dispatch]);

  return fields.map(field => (
    <FormField
      key={field.id}
      field={field}
      error={sectionErrors.find(e => e.fieldId === field.id)}
      onChange={handleChange}
    />
  ));
});
```
### Results

| Metric | Unmemoized | Manual useMemo | React Compiler |
|---|---|---|---|
| Typing latency (p50) | 28ms | 6ms | 7ms |
| Typing latency (p99) | 61ms | 11ms | 12ms |
| Components re-rendered per keystroke | 47 | 3 | 4 |
| Lines of memoization boilerplate | 0 | 31 | 0 |
Takeaway: Compiler performance is nearly identical to manual memoization. The one extra component re-render was from a context consumer the compiler conservatively chose not to memoize — a correct-but-cautious call. The real win here is 31 fewer lines of boilerplate that you can also mess up.
## Benchmark 3: Real-Time Dashboard
A dashboard polling every 100ms — chart updates with new data points, a live table, and a stats strip. The tricky part: the chart and table share a data source, but the stats panel derives independent aggregates.
This scenario stresses the compiler's ability to detect partial dependency changes — something humans often get wrong.
```jsx
// Compiler version — no manual memoization
function Dashboard({ rawData }) {
  const chartData = transformForChart(rawData);
  const tableData = transformForTable(rawData);
  const stats = computeStats(rawData); // expensive aggregate

  return (
    <>
      <StatsStrip stats={stats} />
      <LineChart data={chartData} />
      <DataTable rows={tableData} />
    </>
  );
}
```
### Results (per 100ms update cycle)

| Metric | Unmemoized | Manual useMemo | React Compiler |
|---|---|---|---|
| CPU time per update | 22ms | 5ms | 6ms |
| Dropped frames (over 30s) | 14 | 0 | 1 |
| JS heap growth (MB/min) | 2.3 | 0.4 | 0.5 |
| Render budget exceeded (>16ms) | 31% | 0% | 2% |
Takeaway: The compiler gets very close to hand-tuned results. The 2% budget exceedance vs. 0% for manual came from a single edge case: on the first render after a large data spike, the compiler's cache invalidation triggered a cascade that a human would have caught by splitting useMemo more granularly.
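The granular split a human would make here can be sketched in plain JS (a hypothetical `makeMemo` helper standing in for individual `useMemo` calls): each aggregate caches on only the slice it reads, so a spike in one slice doesn't invalidate the others.

```javascript
// Hypothetical single-argument memo helper: recompute only when the
// input reference changes, mirroring a hand-placed useMemo.
function makeMemo(fn) {
  let called = false;
  let lastArg;
  let lastResult;
  return arg => {
    if (!called || !Object.is(arg, lastArg)) {
      lastArg = arg;
      lastResult = fn(arg);
      called = true;
    }
    return lastResult;
  };
}

// Each aggregate keyed on its own slice, not on all of rawData.
const sumPoints = makeMemo(points => points.reduce((s, p) => s + p, 0));
const maxPoint = makeMemo(points => Math.max(...points));
```

If an update only touches the data feeding `sumPoints`, `maxPoint` serves its cached value untouched, which is exactly the invalidation cascade the compiler's coarser cache missed in this edge case.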
## Where the Compiler Wins
1. It's always-on. Every computed value gets memoized if it's safe. Humans forget to wrap things — especially in new components added 6 months after the initial "optimized" pass.
2. It handles prop stability automatically. Inline object literals and arrow functions passed as props (`onClick={() => foo()}`) are classic performance traps. The compiler stabilizes these references at the call site.
3. No dependency array bugs. Stale closures from wrong useMemo deps are a common production issue. The compiler's static analysis doesn't have this class of bug.
4. Less surface area. Fewer hooks = fewer tests needed to validate memoization behavior = faster code review.
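Point 3 is worth making concrete. Simulating `useMemo`'s dependency comparison in a few lines of plain JS (a toy `createMemo`, not React's implementation) shows the bug class the compiler eliminates:

```javascript
// Toy simulation of useMemo's dependency check. If a value is read
// inside the callback but left out of the deps array, the cached
// result silently goes stale.
function createMemo() {
  let lastDeps = null;
  let lastValue;
  return (compute, deps) => {
    const changed =
      lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((d, i) => !Object.is(d, lastDeps[i]));
    if (changed) {
      lastValue = compute();
      lastDeps = deps;
    }
    return lastValue;
  };
}

const memo = createMemo();
const items = ['ant', 'bee'];
let filterText = 'a';

// Bug: filterText is read inside the callback but missing from deps.
const first = memo(() => items.filter(s => s.includes(filterText)), [items]);
filterText = 'b';
const second = memo(() => items.filter(s => s.includes(filterText)), [items]);
// second is stale: still the result computed for 'a'.
```

The compiler derives dependencies from the code itself, so there is no array for a human to get out of sync.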
## Where useMemo Still Wins (or Ties)
1. Surgical optimization. If you're chasing 1-2ms of render budget on a specific hot path, manual useMemo lets you be more precise. You can split one expensive computation into multiple cached chunks in ways the compiler's heuristics don't always replicate.
2. Async + external cache integration. If your memoized value depends on something from a library like Zustand, Jotai, or a React Query result, the compiler may be overly conservative. Manual hooks give you full control.
3. Debugging. When a component re-renders unexpectedly, you can audit useMemo deps with DevTools. The compiler's implicit memoization is harder to introspect — you'd need to look at the compiled output.
4. Legacy codebases. The compiler targets React 19; running it against React 17/18 requires the extra `react-compiler-runtime` package, and custom Babel pipelines can complicate setup. If adopting it isn't an option, useMemo is what you've got.
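On the debugging point: the reference-equality check that `React.memo` performs on props can be replicated in a few lines to pinpoint which prop broke memoization (a hypothetical helper for dev logging, not a React API):

```javascript
// Diff two props objects by reference equality, the same shallow
// comparison React.memo uses, to spot the prop that changed identity.
function changedProps(prev, next) {
  const keys = new Set([...Object.keys(prev), ...Object.keys(next)]);
  return [...keys].filter(k => !Object.is(prev[k], next[k]));
}
```

Logging `changedProps(prevProps, nextProps)` from a render typically exposes the culprit immediately, e.g. an inline arrow function getting a fresh identity every render.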
## The "Can I Just Enable the Compiler and Delete My useMemos?" Question
Mostly yes — with caveats.
In my tests, enabling the compiler on a codebase that already had manual memoization produced no regressions and a slight improvement (the compiler optimized a few spots I'd missed). You can leave your existing useMemo calls in place — the compiler respects them.
But I wouldn't do a mass-delete of your useMemos without profiling before and after. There are always edge cases.
## Verdict
| Criterion | Winner |
|---|---|
| Performance (typical app) | 🤝 Tie |
| Performance (extreme hot paths) | useMemo |
| Developer experience | React Compiler |
| Debuggability | useMemo |
| Safety (no dep array bugs) | React Compiler |
| Legacy compatibility | useMemo |
The React Compiler is not magic — it's a well-engineered static analysis tool that handles 95% of real-world memoization correctly. For the other 5%, you still need useMemo.
My recommendation: enable the compiler, stop writing useMemo for new code, and reach for it manually only when profiling shows you need more control.
The era of spending 30% of a code review discussing whether a useCallback dependency array is correct should be behind us.
## Further Reading
All benchmark source code available in the companion repo — link in comments.