The bug wasn’t performance. It was silence.
I ran into this while building a Next.js dashboard.
There was a button that triggered a report generation flow. Some client-side processing, then an API call.
Lighthouse looked great. API response was around 80ms.
But session recordings told a different story.
Users were clicking the button 3 to 5 times in a row.
No errors in logs. No failed requests. Everything technically worked.
So I watched one of the recordings more carefully.
The moment the user clicked, nothing changed on screen.
No loading state.
No disabled button.
No visual feedback at all.
The UI looked exactly the same before and after the click.
That was the issue.
80ms response, zero perceived response
The backend was fast.
The interface just didn’t acknowledge the interaction.
From the user’s point of view, the click didn’t go through.
So they clicked again.
And again.
Not because they were impatient. Because the UI stayed silent.
Where most of us focus (and where it falls short)
We spend a lot of time optimizing load performance.
FCP, LCP, bundle size, Lighthouse scores. All worth caring about.
But most of the time a user spends on your app happens after the page has loaded.
That’s where trust is decided. In the interactions.
The 200ms line
There’s a threshold that shows up consistently in real usage.
If the UI responds within roughly 100 to 200ms, it feels instant.
Between 200 and 500ms, the delay becomes noticeable.
Beyond that, people start questioning whether their action worked.
A slow app is frustrating. An app that feels unresponsive gets abandoned.
What INP actually measures
INP (Interaction to Next Paint) focuses on one thing:
How long it takes for the user to see a response after they interact.
Not when your API returns.
Not when your function finishes.
When something visibly changes on the screen.
That’s what closes the loop.
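To make the aggregation concrete, here is a toy sketch (function name hypothetical) of how INP picks a value from per-interaction latencies. Per the published definition, it is roughly the worst interaction on the page, ignoring one outlier for every 50 interactions on busy pages. Real INP is measured by the browser through the Event Timing API, typically via the web-vitals library; this is just the selection logic:

```javascript
// Toy illustration of INP's aggregation, NOT a real measurement.
// Given per-interaction latencies in ms, return roughly the worst one,
// skipping one outlier per 50 interactions (per the metric's definition).
function approximateINP(latenciesMs) {
  if (latenciesMs.length === 0) return null;
  const sorted = [...latenciesMs].sort((a, b) => b - a); // worst first
  const outliersToSkip = Math.floor(latenciesMs.length / 50);
  return sorted[Math.min(outliersToSkip, sorted.length - 1)];
}
```

For a page with only a handful of interactions, this is simply the slowest one, which is why a single unacknowledged click can dominate the metric.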
The fix was not “make it faster”
The fix was: acknowledge the interaction immediately.
Instead of doing all the work first:
button.addEventListener("click", async () => {
  await processData();
  await syncWithServer();
  setLoading(false);
});
Flip it:
button.addEventListener("click", async () => {
  setLoading(true);        // immediate visual feedback
  await scheduler.yield(); // let the browser paint
  await processData();
  await scheduler.yield();
  await syncWithServer();
  setLoading(false);
});
If scheduler.yield() is not available yet in your environment, a simple fallback works:
const yieldToMain = () => new Promise((r) => setTimeout(r, 0));
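Combining both into one helper with feature detection might look like this (helper name assumed, same as the fallback above):

```javascript
// Prefer the Scheduler API's scheduler.yield() where it exists (currently
// Chromium-based browsers); otherwise fall back to a macrotask via
// setTimeout so the browser gets a chance to paint between chunks of work.
const yieldToMain = () => {
  if (typeof scheduler !== "undefined" && typeof scheduler.yield === "function") {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
};
```

The click handler above can then call `await yieldToMain()` instead of `await scheduler.yield()` and work in every browser.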
Same total work. Same total time.
Completely different experience.
Users stopped clicking multiple times.
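Feedback is the real fix, but you can also make the handler tolerate the duplicate clicks that still slip through. A small wrapper (name and approach are mine, not from the original flow) that ignores re-entrant calls while one is in flight:

```javascript
// Hypothetical helper: drop clicks that arrive while the handler is running.
// Pairs with, but does not replace, immediate visual feedback.
function singleFlight(handler) {
  let inFlight = false;
  return async (...args) => {
    if (inFlight) return;   // ignore duplicate invocations
    inFlight = true;
    try {
      return await handler(...args);
    } finally {
      inFlight = false;     // accept the next click once this one settles
    }
  };
}
```

Usage would be something like `button.addEventListener("click", singleFlight(generateReport));`, assuming a `generateReport` handler.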
The pattern you already use
You’ve seen this in every modern app.
You tap like. It updates instantly. The server confirms later.
That’s optimistic UI.
Here is the same idea in a simple form:
async function handleLike(postId) {
  setLiked(true);
  setCount((c) => c + 1);
  try {
    await api.like(postId);
  } catch {
    setLiked(false);
    setCount((c) => c - 1);
  }
}
In React and Next.js setups, this maps cleanly to useOptimistic and Server Actions.
The idea stays the same:
Show the result immediately. Reconcile in the background.
Why this matters more than Lighthouse 100
Lighthouse runs in controlled conditions.
Your users don’t.
They’re on slower devices, switching tabs, running background apps, dealing with network jitter.
You can ship a perfect score and still have interactions that feel off.
INP exposes that gap.
If you want to go deeper
This bug was the entry point for me.
I ended up breaking this down properly across INP, optimistic UI, task yielding, and how this fits into modern Next.js setups like Server Actions and streaming.
I wrote it up in detail here:
https://shubhra.dev/tutorials/performance-first-ui-mastery-guide-2026
That one goes deeper into how to actually implement this in real apps. This post is just the moment where the problem becomes obvious.
A quick way to test this
While working through this, I built a small quiz to check if I actually understood the idea or was just agreeing with it.
Here’s one of the questions:
A user clicks a button.
The server responds in 80ms.
They click four more times. What actually failed?
Most people still go to backend performance first.
If that was your instinct too, you’ll probably find the rest useful:
https://shubhra.dev/quiz/performance-first-ui
It’s short. Focused on real interaction problems. No trick questions.
One practical takeaway
Next time you write a click handler, check the order:
- Does something change on screen immediately?
- Or does all the work happen before the user sees anything?
That one decision is often the difference between a UI that feels fast and one that feels broken.
If you’ve run into this in your own apps, I’m curious what caused it and how you approached it.