Know WHY, Let AI Handle the HOW 🤖
In Part 1, we learned about priority-based rendering. In Part 2, we explored Fiber architecture. But here's the final piece of the puzzle: How does React know WHEN to pause?
What if I told you React gives itself a strict 5ms budget per frame, and understanding this timing mechanism is the key to building silky-smooth user interfaces?
🤔 The 60 FPS Problem
Your screen refreshes 60 times per second. That gives you 16.67ms per frame to do everything:
One Frame (16.67ms):
├─ JavaScript execution (React rendering)
├─ Style calculations
├─ Layout
├─ Paint
└─ Composite

If ANY of this takes > 16.67ms:
❌ Frame gets dropped
❌ UI feels janky
❌ User notices lag
The Challenge: How do you render expensive components without dropping frames?
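Before looking at React's answer, it helps to see dropped frames directly. Here's a small sketch of a frame-time monitor; `FRAME_BUDGET_MS`, `framesDropped`, and `watchFrames` are my illustrative names (not browser or React APIs), and the `requestAnimationFrame` wiring assumes a browser environment:

```javascript
const FRAME_BUDGET_MS = 1000 / 60; // ~16.67ms per frame at 60fps

// Pure helper: given the gap between two frames, how many frames
// were skipped? A normal gap (~16.7ms) rounds to 1 slot = 0 dropped.
function framesDropped(frameGapMs) {
  return Math.max(0, Math.round(frameGapMs / FRAME_BUDGET_MS) - 1);
}

// Browser wiring: report whenever the gap between rAF callbacks
// is longer than one frame slot.
function watchFrames(onDrop) {
  let last = performance.now();
  function tick(now) {
    const gap = now - last;
    if (framesDropped(gap) > 0) onDrop(gap); // we missed a paint
    last = now;
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```

In a real app you'd feed `onDrop` into your analytics; here it's just the hook point.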
🧠 Think Like a Video Game Engine for a Moment
Modern games run at 60fps by:
- Doing critical work (player movement, collisions)
- Checking the clock: "Do I have time left?"
- If yes, do nice-to-have work (background animations)
- If no, pause and continue next frame
React does the exact same thing!
function gameLoop() {
  const frameDeadline = performance.now() + 16.67;

  // Critical: player movement
  updatePlayerPosition();

  // Only do nice-to-have work if at least 5ms remain
  if (performance.now() < frameDeadline - 5) {
    renderDistantTrees(); // background details
  }
  // Out of time? Just stop — unfinished work waits for the next frame.

  requestAnimationFrame(gameLoop); // schedule the next frame
}
📏 React's Frame Budget: The 5ms Rule
React follows a simple rule: Work in ~5ms chunks, then check if we should yield.
// Simplified React work loop
function workLoopConcurrent() {
  // React's frame budget strategy
  const deadline = performance.now() + 5; // 5ms time slice

  while (workInProgress !== null) {
    // Do one unit of work
    workInProgress = performUnitOfWork(workInProgress);

    // Time to check if we should pause?
    if (performance.now() >= deadline) {
      break; // used our 5ms, yield to the browser
    }
  }

  if (workInProgress !== null) {
    // More work to do, schedule a continuation
    scheduleCallback(workLoopConcurrent);
  } else {
    // Done! Commit to the DOM
    commitRoot();
  }
}
Why 5ms?
- 16.67ms per frame
- minus ~5ms for React work
- leaves ~11.67ms for the browser (layout, paint, user input)
- Keeps the UI at 60fps ✅
🎯 The shouldYield Check
This is where the magic happens:
function shouldYield() {
  const currentTime = performance.now();

  // Have we used our time slice?
  // (`deadline` is set when the slice starts, in the work loop above)
  if (currentTime >= deadline) {
    return true; // pause!
  }

  // Is there urgent work waiting?
  if (hasUrgentWork()) {
    return true; // pause and handle the urgent work!
  }

  // Keep going
  return false;
}

// Used in the render loop:
while (workInProgress && !shouldYield()) {
  workInProgress = performUnitOfWork(workInProgress);
}
💡 Real Example: Typing in a Search Box
Let's see EXACTLY what happens, with millisecond precision. (One caveat: React yields between fiber units, not inside a single synchronous call, so picture the filter work below as split into small resumable chunks.)
function SearchPage() {
  const [query, setQuery] = useState('');
  const deferredQuery = useDeferredValue(query);

  const results = useMemo(() => {
    console.log('Filtering for:', deferredQuery);
    // Let's say this takes 50ms total
    return expensiveFilter(bigDataset, deferredQuery);
  }, [deferredQuery]);

  return (
    <div>
      <input
        value={query}
        onChange={e => setQuery(e.target.value)}
      />
      <ResultsList items={results} />
    </div>
  );
}
Frame-by-frame breakdown when you type "r":
Frame 1 (0-16ms):
├─ 0ms: User types "r" (keypress event)
├─ 1ms: query = "r" (HIGH PRIORITY state update)
├─ 2ms: React starts render phase
│   └─ <input> fiber (SyncLane priority)
├─ 3ms: Commit phase: update DOM
├─ 4ms: Input shows "r" on screen ✅
│   User sees immediate feedback!
├─ 5ms: deferredQuery = "" (still the old value)
├─ 6ms: Start LOW PRIORITY render
│   ├─ ResultsList fiber (TransitionLane)
│   └─ Start expensiveFilter("")
├─ 7ms: Filter chunk 1/10 complete
├─ 8ms: Filter chunk 2/10 complete
├─ 9ms: Filter chunk 3/10 complete
├─ 10ms: Filter chunk 4/10 complete
├─ 11ms: shouldYield() = true (used 5ms slice)
└─ 12ms: PAUSE! Save progress, yield to browser

Browser uses the remaining 4ms for:
- Handling any input
- Painting the input change
- Smooth scrolling
Frame 2 (16-32ms):
├─ 16ms: Resume LOW PRIORITY render
├─ 17ms: Filter chunk 5/10 complete
├─ 18ms: Filter chunk 6/10 complete
├─ 19ms: Filter chunk 7/10 complete
├─ 20ms: Filter chunk 8/10 complete
├─ 21ms: shouldYield() = true
└─ 22ms: PAUSE again
Frame 3 (32-48ms):
├─ 32ms: Resume LOW PRIORITY render
├─ 33ms: Filter chunk 9/10 complete
├─ 34ms: Filter chunk 10/10 complete ✅
├─ 35ms: Commit phase: update DOM
└─ 36ms: Results appear on screen!
The key: Input felt instant (4ms), while expensive work happened in background across 3 frames!
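How could the filter itself be split into those chunks? A single synchronous 50ms function can't be paused, so the work has to expose its own pause points. One way is a generator; this is my illustrative sketch (`chunkedFilter` and `runToCompletion` are not React APIs):

```javascript
// A resumable filter: a scheduler can pause at every `yield`.
function* chunkedFilter(items, predicate, chunkSize = 1000) {
  const out = [];
  for (let i = 0; i < items.length; i++) {
    if (predicate(items[i])) out.push(items[i]);
    if ((i + 1) % chunkSize === 0) yield; // pause point between chunks
  }
  return out;
}

// Drive the generator to completion in one go. A time-slicing
// scheduler would instead yield to the browser at each pause point.
function runToCompletion(gen) {
  let step = gen.next();
  while (!step.done) step = gen.next();
  return step.value;
}
```

A scheduler could call `gen.next()` once per slice, check `shouldYield()` between chunks, and throw the generator away entirely if the query changes, exactly the abandon-and-restart behavior shown in the next section.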
⚡ Interruption in Action
Now let's see what happens when you keep typing:
Frame 1 (0-16ms):
├─ 0ms: Type "r"
├─ 1ms: query = "r"
├─ 4ms: Input shows "r" ✅
├─ 6ms: Start filtering "" → "r" (LOW PRIORITY)
├─ 11ms: shouldYield() = true, PAUSE
└─ 12ms: Browser gets control back

Frame 2 (16-32ms):
├─ 16ms: Resume filtering for "r"
├─ 20ms: 25% done with filter...
├─ 21ms: shouldYield() checks for urgent work
│
├─ 22ms: ⚡ User types "e" (HIGH PRIORITY!)
│   shouldYield() = true (urgent work detected!)
│
├─ 23ms: ABANDON current render
│   Throw away the partial "r" filter work
│
├─ 24ms: query = "re" (HIGH PRIORITY)
├─ 25ms: Input shows "re" ✅
├─ 26ms: deferredQuery updates to "r"
│   (but immediately cancelled by "re")
│
├─ 27ms: Start NEW filtering "r" → "re"
└─ 28ms: shouldYield() = true, PAUSE

// The old "r" filter NEVER completes or shows!
// React intelligently skipped that intermediate state
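The rule driving this timeline is simple: always drain urgent work before touching background work. Here's a toy two-queue model of that rule (`TinyScheduler` is my name, not a React internal; React's lane system is far more elaborate and also discards in-progress background renders, which this sketch doesn't model):

```javascript
// Toy priority scheduler: urgent tasks always run before background tasks.
class TinyScheduler {
  constructor() {
    this.urgent = [];
    this.background = [];
  }

  hasUrgentWork() {
    return this.urgent.length > 0;
  }

  schedule(task, priority = 'background') {
    (priority === 'urgent' ? this.urgent : this.background).push(task);
  }

  // Run all queued tasks, urgent-first, collecting their return values.
  flush() {
    const log = [];
    while (this.urgent.length || this.background.length) {
      const task = this.urgent.length
        ? this.urgent.shift()     // urgent work preempts...
        : this.background.shift(); // ...background work
      log.push(task());
    }
    return log;
  }
}
```

Even if the background task was scheduled first, an urgent task queued later jumps ahead, which is exactly why the "re" keystroke beats the half-finished "r" filter above.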
📡 The Scheduler API
React's production scheduler is built on MessageChannel; an experimental build can use the browser's native Scheduler API where it exists:

// Modern browsers (Chrome, Edge) expose a native scheduler
scheduler.postTask(() => {
  workLoopConcurrent();
}, { priority: 'background' });

// What React actually ships: MessageChannel for time slicing
const channel = new MessageChannel();
let scheduledCallback = null;

channel.port1.onmessage = () => {
  const callback = scheduledCallback;
  scheduledCallback = null;
  if (callback) callback();
};

function scheduleCallback(callback) {
  scheduledCallback = callback;
  channel.port2.postMessage(null);
}
Why MessageChannel?
- setTimeout(fn, 0) gets clamped to a ~4ms minimum delay once timeouts nest (too slow!)
- requestAnimationFrame only runs right before paint (wrong timing)
- A MessageChannel message runs immediately after the current task (perfect!)
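You can use the same trick in your own code. A small `yieldToMain` helper (my name, not a React export) that prefers MessageChannel and falls back to setTimeout:

```javascript
// Yield control back to the event loop the way React's scheduler does.
// MessageChannel avoids the ~4ms clamp that nested setTimeout calls get.
function yieldToMain() {
  return new Promise(resolve => {
    if (typeof MessageChannel !== 'undefined') {
      const { port1, port2 } = new MessageChannel();
      port1.onmessage = () => {
        port1.close(); // release the channel so it can be GC'd
        port2.close();
        resolve();
      };
      port2.postMessage(null);
    } else {
      setTimeout(resolve, 0); // fallback for older environments
    }
  });
}
```

Typical use: `await yieldToMain()` between chunks of a long computation, so the browser can paint and handle input before the next chunk.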
🎯 Complete Real-World Example: Dashboard
function Dashboard() {
  const [metric, setMetric] = useState('revenue');
  const [isPending, startTransition] = useTransition();

  const switchMetric = (newMetric) => {
    startTransition(() => {
      setMetric(newMetric);
    });
  };

  return (
    <div>
      <Tabs selected={metric} onChange={switchMetric} />
      {isPending && <LoadingBar />}
      <ExpensiveChart metric={metric} />
    </div>
  );
}

function ExpensiveChart({ metric }) {
  const chartData = useMemo(() => {
    // Say this takes ~80ms to compute
    const data = [];
    for (let i = 0; i < 10000; i++) {
      data.push({
        x: i,
        y: complexCalculation(metric, i)
      });
    }
    return data;
  }, [metric]);

  return <ChartLibrary data={chartData} />;
}
Frame timeline when switching from "Revenue" to "Profit":
Frame 1 (0-16ms):
├─ 0ms: User clicks "Profit" tab
├─ 1ms: metric = "profit" (TRANSITION priority)
├─ 2ms: Tab switches to "Profit" ✅
├─ 3ms: isPending = true
├─ 4ms: LoadingBar appears ✅
│   User sees immediate feedback!
├─ 5ms: Start chart re-render (LOW PRIORITY)
│   Calculate data point 0
├─ 6ms: Calculate data point 1
├─ 7ms: Calculate data point 2
│   ... (calculating in a loop)
├─ 10ms: Calculate data point 500
├─ 11ms: shouldYield() = true
└─ 12ms: PAUSE (used 5ms slice)
    Progress saved: at data point 500
Frame 2 (16-32ms):
├─ 16ms: Resume chart calculation
├─ 17ms: Calculate data point 501
├─ 18ms: Calculate data point 502
│   ... (calculating in a loop)
├─ 21ms: Calculate data point 1000
├─ 22ms: shouldYield() = true
└─ 23ms: PAUSE again
    Progress saved: at data point 1000
// This continues across ~16 frames (80ms / 5ms per frame)
Frame 16 (240-256ms):
├─ 240ms: Resume chart calculation
├─ 241ms: Calculate data point 9998
├─ 242ms: Calculate data point 9999
├─ 243ms: All calculations complete! ✅
├─ 244ms: Commit phase: update DOM
├─ 245ms: New chart renders
├─ 246ms: isPending = false
└─ 247ms: LoadingBar disappears
Total time: 247ms
But the UI stayed responsive the entire time! 🎉
🍳 The Perfect Analogy: Restaurant Kitchen
Think of React's time slicing like a restaurant kitchen:
Without Time Slicing (Old React):
- Chef starts making a complex dish
- New urgent order comes in (appetizer)
- Chef: "Sorry, I have to finish this entrΓ©e first"
- Customer waits 20 minutes for a simple appetizer 😡
With Time Slicing (Concurrent React):
- Chef starts making complex entrΓ©e (5 min work)
- After 30 seconds, checks: "Any urgent orders?"
- Urgent appetizer comes in!
- Chef: "Let me pause the entrΓ©e"
- Makes appetizer immediately (2 min)
- Returns to entrΓ©e
- Both customers happy! 😊
🧪 Suspense with Time Slicing
Time slicing makes Suspense for data fetching smooth:
function ProfilePage() {
  const [userId, setUserId] = useState(1);
  const [isPending, startTransition] = useTransition();

  const switchUser = (newId) => {
    startTransition(() => {
      setUserId(newId);
    });
  };

  return (
    <div>
      <button onClick={() => switchUser(userId + 1)}>
        Switch User
      </button>
      <Suspense fallback={<Skeleton />}>
        <ProfileDetails userId={userId} />
      </Suspense>
    </div>
  );
}

function ProfileDetails({ userId }) {
  const user = use(fetchUser(userId)); // suspends until the data arrives

  // Heavy computation after the data loads
  const stats = useMemo(() => {
    return calculateComplexStats(user);
  }, [user]);

  return <ProfileView user={user} stats={stats} />;
}
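One assumption hiding in this example: `use(fetchUser(userId))` only works if `fetchUser` returns the SAME promise on every re-render; a fresh promise per render would suspend forever. A minimal promise-cache sketch (`userCache` and `loadUser` are illustrative names; a production cache would also handle errors and eviction):

```javascript
const userCache = new Map();

// Return a cached promise per userId so repeated renders
// see the same in-flight (or resolved) request.
function fetchUser(userId) {
  if (!userCache.has(userId)) {
    userCache.set(userId, loadUser(userId)); // kick off the request once
  }
  return userCache.get(userId);
}

// Stand-in for a real network call (an assumption for this sketch)
function loadUser(userId) {
  return Promise.resolve({ id: userId, name: `User ${userId}` });
}
```

Libraries like React Query or a framework's data layer normally provide this caching for you; the sketch just shows why it's required.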
Timeline when switching users:
Frame 1 (0-16ms):
├─ 0ms: Click "Switch User"
├─ 1ms: userId = 2 (TRANSITION priority)
├─ 2ms: Start rendering ProfileDetails
├─ 3ms: Suspend! (waiting for data)
├─ 4ms: Old profile STAYS VISIBLE (smooth!)
└─ 5ms: Inline spinner shows

... Network request in flight ...

Frame 50 (800-816ms):
├─ 800ms: Data arrives! fetchUser(2) resolves
├─ 801ms: Resume ProfileDetails render
├─ 802ms: Start calculateComplexStats (expensive!)
├─ 807ms: shouldYield() = true
└─ 808ms: PAUSE calculation

Frame 51 (816-832ms):
├─ 816ms: Resume calculateComplexStats
├─ 821ms: Calculation complete!
├─ 822ms: Commit phase
└─ 823ms: New profile smoothly appears ✅

No jarring skeleton screen!
The old content stayed visible during the load!
🎯 Performance Monitoring
You can actually see time slicing in action:
function ExpensiveComponent({ data }) {
  // Log when rendering starts/ends
  console.log('Render start:', performance.now());

  const result = useMemo(() => {
    const start = performance.now();
    const computed = expensiveComputation(data);
    const end = performance.now();
    console.log(`Computation took: ${end - start}ms`);
    return computed;
  }, [data]);

  console.log('Render end:', performance.now());
  return <div>{result}</div>;
}

// Console output:
// Render start: 0.5ms
// Render end: 1.2ms (fiber created)
// (pause - browser handles other work)
// Computation took: 50ms (spread across 10 frames!)
// (pause - browser handles other work)
// Render start: 52ms (commit phase)
// Render end: 53ms
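To catch work that escapes time slicing, the browser's Long Tasks API reports any task that blocks the main thread for 50ms or more (the spec's threshold). A small sketch (`observeLongTasks` is my wrapper name; the observer wiring is browser-only):

```javascript
// The Long Tasks API flags any main-thread task over 50ms.
const LONG_TASK_THRESHOLD_MS = 50;

// Pure helper: would a task of this duration be reported?
function isLongTask(durationMs) {
  return durationMs >= LONG_TASK_THRESHOLD_MS;
}

// Browser-only wiring (assumes Long Tasks support, e.g. Chromium):
function observeLongTasks(onLongTask) {
  if (typeof PerformanceObserver === 'undefined') return null;
  const observer = new PerformanceObserver(list => {
    for (const entry of list.getEntries()) {
      onLongTask(entry.duration); // e.g. log it or send to analytics
    }
  });
  observer.observe({ entryTypes: ['longtask'] });
  return observer;
}
```

If time slicing is working, React's 5ms chunks should never show up here; a long-task entry means something ran synchronously past the budget.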
🧠 The Mental Model Shift
Stop Thinking:
- "React renders everything at once"
- "Long computations always block the UI"
- "I need to manually split work with setTimeout"
Start Thinking:
- "React renders in 5ms chunks"
- "Long work is automatically split across frames"
- "Browser gets control back between chunks"
- "UI stays responsive even during heavy work"
🎁 The Takeaway
Many developers learn the HOW: "Use useDeferredValue and it makes things faster."
When you understand the WHY: "React works in 5ms time slices, yielding control back to the browser after each slice to maintain 60fps, and can pause/resume work at any fiber node," you gain insights that help you:
- Understand why some operations feel instant
- Know when concurrent features actually help
- Debug performance with precise timing knowledge
- Build UIs that feel professional and responsive