<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: nosyos</title>
    <description>The latest articles on DEV Community by nosyos (@nosyos).</description>
    <link>https://dev.to/nosyos</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3846783%2Fd4b4c90d-028b-4858-b1bd-b5039dab28b5.jpg</url>
      <title>DEV Community: nosyos</title>
      <link>https://dev.to/nosyos</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nosyos"/>
    <language>en</language>
    <item>
      <title>INP Is Not Just a Faster FID</title>
      <dc:creator>nosyos</dc:creator>
      <pubDate>Thu, 07 May 2026 14:28:00 +0000</pubDate>
      <link>https://dev.to/nosyos/inp-is-not-just-a-faster-fid-2cg5</link>
      <guid>https://dev.to/nosyos/inp-is-not-just-a-faster-fid-2cg5</guid>
      <description>&lt;p&gt;I once spent an afternoon cutting a click handler from 80ms down to 18ms. Clean, fast, properly debounced. The INP score didn't move.&lt;/p&gt;

&lt;p&gt;The handler wasn't the problem. The browser couldn't even start the handler for 190ms after the click — a long task was running at exactly the wrong moment. All that optimization was irrelevant.&lt;/p&gt;

&lt;p&gt;INP splits an interaction into three phases. Most React developers know one of them.&lt;/p&gt;




&lt;h2&gt;
  
  
  The three-phase model
&lt;/h2&gt;

&lt;p&gt;Every INP interaction goes through:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Input delay&lt;/strong&gt; — the time from when the user interacts to when the browser can start processing the event. This is blocked by whatever is already running on the main thread.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Processing time&lt;/strong&gt; — the time your event handlers actually execute.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Presentation delay&lt;/strong&gt; — the time from when handlers finish to when the browser renders the visual update. This is where React's reconciliation and paint happen.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The sum of these three phases is what INP measures for each interaction. The worst interaction in the session becomes the score (for pages with many interactions, Chrome discards one outlier per 50 interactions before picking it).&lt;/p&gt;

&lt;p&gt;Optimizing processing time — the only phase most developers think about — is the right move when processing time is actually the bottleneck. It often isn't.&lt;/p&gt;




&lt;h2&gt;
  
  
  Input delay is the one that surprises people
&lt;/h2&gt;

&lt;p&gt;Input delay is not caused by your handler. It's caused by whatever was running on the main thread the moment the user clicked.&lt;/p&gt;

&lt;p&gt;A React app rendering a large list after a search query completes. A &lt;code&gt;useEffect&lt;/code&gt; running a synchronous calculation on new data. A timer callback that scheduled itself to run every few seconds and happens to fire during an interaction. Any of these can generate 100–300ms of input delay that has nothing to do with the click handler the user triggered.&lt;/p&gt;

&lt;p&gt;Attribution data from the field will tell you which phase of a slow interaction is taking the most time; the &lt;code&gt;web-vitals&lt;/code&gt; library's attribution build exposes this breakdown, and so do the raw Event Timing entries below. If input delay is more than 50ms, handler optimization is the wrong direction.&lt;/p&gt;

&lt;p&gt;To see the breakdown in your own data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PerformanceObserver&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getEntries&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;entryType&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;event&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;duration&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;continue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;inputDelay&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;processingStart&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;startTime&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;processingTime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;processingEnd&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;processingStart&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;presentationDelay&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;duration&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;processingEnd&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;startTime&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;observe&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;event&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;durationThreshold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run this for a week on production. Look at the breakdown. A high &lt;code&gt;inputDelay&lt;/code&gt; on a specific page tells you there's a long task running during that page's normal usage cycle — and that's the thing to fix, not the handler.&lt;/p&gt;




&lt;h2&gt;
  
  
  Breaking up the tasks that cause input delay
&lt;/h2&gt;

&lt;p&gt;The fix for input delay is reducing the size of long tasks so the browser has gaps to process input.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;scheduler.yield()&lt;/code&gt; is the cleanest way to do this. It pauses execution and lets the browser handle any pending input before continuing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;processLargeDataset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Item&lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Result&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;

  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;expensiveTransform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]));&lt;/span&gt;

    &lt;span class="c1"&gt;// Every 50 items, yield to let the browser breathe&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;scheduler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;yield&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without the yield, a 1,000-item dataset processed in one shot becomes a multi-hundred-millisecond long task that blocks input. With it, the browser gets a chance to handle clicks between chunks. INP stays low even while the work is ongoing.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;scheduler.yield()&lt;/code&gt; is available in Chrome and Edge. For broader support, a promise wrapped around &lt;code&gt;setTimeout(0)&lt;/code&gt; works as a fallback, though the continuation goes to the back of the task queue instead of being prioritized, so other queued work may run first.&lt;/p&gt;
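
&lt;p&gt;A feature-detecting helper keeps call sites clean. A minimal sketch, assuming only standard APIs; &lt;code&gt;yieldToMain&lt;/code&gt; is a name introduced here, not a platform API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;async function yieldToMain(): Promise&amp;lt;void&amp;gt; {
  // Prefer scheduler.yield() where it exists (Chrome, Edge); its
  // continuation is prioritized ahead of other queued tasks.
  const s = (globalThis as any).scheduler;
  if (s?.yield) {
    return s.yield();
  }

  // Fallback: a macrotask via setTimeout. The continuation goes to the
  // back of the task queue, so other queued work may run first.
  return new Promise((resolve) =&amp;gt; setTimeout(resolve, 0));
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;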




&lt;h2&gt;
  
  
  Presentation delay and the React render cost
&lt;/h2&gt;

&lt;p&gt;Presentation delay is the third phase — from when handlers finish to when the screen actually updates. This is React's territory.&lt;/p&gt;

&lt;p&gt;A click handler that calls &lt;code&gt;setState&lt;/code&gt; and returns immediately still has to wait for React to reconcile and the browser to paint before the interaction is complete from INP's perspective. If reconciliation is expensive, presentation delay climbs.&lt;/p&gt;

&lt;p&gt;This is where &lt;code&gt;useTransition&lt;/code&gt; belongs — not as a general "make things faster" tool, but specifically to defer reconciliation work that doesn't need to block the visual acknowledgment of an interaction. The handler returns quickly, the browser paints a loading state or an immediate visual change, and then React reconciles the heavier update separately.&lt;/p&gt;




&lt;h2&gt;
  
  
  Reading LoAF attribution in production
&lt;/h2&gt;

&lt;p&gt;The Long Animation Frames API — covered briefly in the previous article — becomes genuinely useful when you start reading its &lt;code&gt;scripts&lt;/code&gt; attribution in a production context.&lt;/p&gt;

&lt;p&gt;Each LoAF entry includes a list of scripts that contributed to the slow frame, with source locations and durations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PerformanceObserver&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;frame&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getEntries&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;duration&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;continue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;script&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scripts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;sendMetric&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;frameDuration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;scriptSource&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;script&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sourceURL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;scriptFunction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;script&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sourceFunctionName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;scriptDuration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;script&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;invokerType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;script&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;invokerType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// 'event-listener', 'user-callback', etc.&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;observe&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;long-animation-frame&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;invokerType&lt;/code&gt; tells you whether the script was triggered by an event listener, a setTimeout, a Promise callback, or something else. Filtering by &lt;code&gt;invokerType: 'event-listener'&lt;/code&gt; on a specific page shows you exactly which handlers are contributing to slow frames during interactions — with function names and source URLs.&lt;/p&gt;

&lt;p&gt;This replaces a lot of the guesswork involved in reproducing INP issues locally. Slow interactions often don't reproduce in DevTools because the lab environment doesn't have the same background tasks, cache state, and concurrent timers as production.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where to look first
&lt;/h2&gt;

&lt;p&gt;Check &lt;code&gt;inputDelay&lt;/code&gt; in the event timing breakdown before touching handler code. If input delay is the dominant phase, look for long tasks running during the page's usage cycle — not just during load.&lt;/p&gt;

&lt;p&gt;If presentation delay is high on a specific interaction, that component's reconciliation cost is the target. Start with React DevTools Profiler on that specific action before generalizing.&lt;/p&gt;

&lt;p&gt;Processing time being the bottleneck is the most straightforward case — and also the least common in practice.&lt;/p&gt;

</description>
      <category>react</category>
      <category>performance</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>FID Is Dead. What INP Means for Your React App.</title>
      <dc:creator>nosyos</dc:creator>
      <pubDate>Tue, 05 May 2026 14:02:00 +0000</pubDate>
      <link>https://dev.to/nosyos/fid-is-dead-what-inp-means-for-your-react-app-4ka6</link>
      <guid>https://dev.to/nosyos/fid-is-dead-what-inp-means-for-your-react-app-4ka6</guid>
      <description>&lt;p&gt;In March 2024, Google replaced First Input Delay with Interaction to Next Paint as an official Core Web Vital. FID is gone. INP is what matters now — and many React apps that were passing before are failing under the new standard without anyone realizing it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What was wrong with FID
&lt;/h2&gt;

&lt;p&gt;FID measured how long the browser took to respond to the very first user interaction on a page. Click a button, and FID measured the delay before the browser started processing that click. Just the first one. Just the start of processing, not the time until anything actually happened on screen.&lt;/p&gt;

&lt;p&gt;In practice, FID was easy to pass and bad at catching real responsiveness problems. A page could have an excellent FID score while every subsequent click — after the page was fully loaded and the user was actually using it — took 600ms to respond. FID had nothing to say about that.&lt;/p&gt;




&lt;h2&gt;
  
  
  What INP actually measures
&lt;/h2&gt;

&lt;p&gt;INP measures the full interaction latency for all interactions throughout the entire page session, not just the first. It captures the delay from when a user interacts (click, tap, keyboard input) to when the browser finishes rendering the visual response.&lt;/p&gt;

&lt;p&gt;The thresholds: Good is under 200ms. Needs Improvement is 200–500ms. Poor is over 500ms.&lt;/p&gt;

&lt;p&gt;The shift matters because it exposes a category of problems FID never touched. Long-running JavaScript that doesn't affect first-load responsiveness but blocks the main thread during normal usage. React state updates that trigger expensive re-renders mid-session. Event handlers that do too much synchronous work before returning control to the browser.&lt;/p&gt;

&lt;p&gt;A React app with heavy component trees, lots of context consumers, and synchronous state updates can have a perfectly acceptable LCP and a terrible INP. Under FID, nobody would have noticed.&lt;/p&gt;
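
&lt;p&gt;If you just want the session-level number from real users with minimal code, the &lt;code&gt;web-vitals&lt;/code&gt; library reports it directly. A sketch, with &lt;code&gt;sendMetric&lt;/code&gt; standing in for your beacon:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { onINP } from 'web-vitals';

// Reports the page's INP value. By default the callback fires when the
// page is hidden; pass reportAllChanges: true to get every change.
onINP((metric) =&amp;gt; {
  sendMetric({ metric: 'INP', value: metric.value, page: location.pathname });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;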




&lt;h2&gt;
  
  
  Where React apps tend to fail INP
&lt;/h2&gt;

&lt;p&gt;The most common causes in React specifically:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Synchronous state updates that cascade.&lt;/strong&gt; A click handler updates state, which triggers a re-render of a large subtree, which blocks the main thread while React reconciles. If that reconciliation takes 300ms, the user sees a 300ms delay before anything on screen changes. The React Compiler helps here by reducing unnecessary re-renders, but it doesn't reduce the cost of renders that need to happen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unoptimized event handlers.&lt;/strong&gt; An &lt;code&gt;onClick&lt;/code&gt; that does validation, makes a synchronous API call via a cached store, updates multiple pieces of state, and then re-renders — all before returning — is an INP problem waiting to be found.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;useTransition&lt;/code&gt; is the right tool for expensive updates that don't need to block the interaction response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;isPending&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;startTransition&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useTransition&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;handleClick&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// This part runs immediately — updates the UI to acknowledge the interaction&lt;/span&gt;
  &lt;span class="nf"&gt;setButtonState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;loading&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// This part is deferred — React schedules it without blocking&lt;/span&gt;
  &lt;span class="nf"&gt;startTransition&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;setFilteredResults&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;computeExpensiveFilter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The interaction response — acknowledging that something happened — is immediate. The expensive computation happens without blocking the input.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Third-party scripts running during interactions.&lt;/strong&gt; An analytics script that fires on every click event and does synchronous work is adding to your INP whether you wrote it or not.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Long Animation Frames API
&lt;/h2&gt;

&lt;p&gt;Alongside INP, browsers shipped the Long Animation Frames API (LoAF) as a more precise replacement for Long Tasks.&lt;/p&gt;

&lt;p&gt;Long Tasks measured any main thread task over 50ms. LoAF measures animation frames that take over 50ms to render, with richer attribution — it tells you which scripts, which event handlers, and which rendering work contributed to the slow frame.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PerformanceObserver&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getEntries&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// entry.scripts shows which scripts contributed to the slow frame&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Slow frame:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ms&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Scripts:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scripts&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;observe&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;long-animation-frame&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;scripts&lt;/code&gt; array in each entry is the part that changes the debugging workflow. Instead of knowing "something ran too long on the main thread," you know exactly which function in which file was responsible. This is significantly faster to diagnose than working backward from a Long Tasks timeline in the Performance panel.&lt;/p&gt;




&lt;h2&gt;
  
  
  Speculation Rules: prefetching gets declarative
&lt;/h2&gt;

&lt;p&gt;The Speculation Rules API lets you declare prefetch and prerender rules directly in HTML, without JavaScript:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"speculationrules"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;prerender&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;where&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;href_matches&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/product/*&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;eagerness&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;moderate&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;prerender&lt;/code&gt; goes further than &lt;code&gt;prefetch&lt;/code&gt; — it fully renders the page in a hidden tab, so navigation is instant. &lt;code&gt;eagerness: "moderate"&lt;/code&gt; triggers prerendering when the user holds the pointer over a matching link for 200ms (or on pointerdown if that happens sooner), not immediately on page load.&lt;/p&gt;

&lt;p&gt;For Next.js apps, the router already handles prefetching via &lt;code&gt;&amp;lt;Link&amp;gt;&lt;/code&gt;, but Speculation Rules gives you declarative control for non-Next.js apps or edge cases the router doesn't cover.&lt;/p&gt;




&lt;h2&gt;
  
  
  Measuring INP from real users
&lt;/h2&gt;

&lt;p&gt;INP can't be meaningfully measured with synthetic tests. Lighthouse doesn't measure it in a way that reflects real interaction patterns — it simulates a few interactions, not the full session. The only honest INP measurement is from actual users doing actual things.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PerformanceObserver&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getEntries&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;entryType&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;event&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;duration&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;sendMetric&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;INP_candidate&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;Element&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nx"&gt;tagName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;observe&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;event&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;durationThreshold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This captures interaction events over 200ms — the candidates that might become your INP score. Logging the target element tells you which UI components are the source of slow interactions, which is where the optimization work starts.&lt;/p&gt;

&lt;p&gt;If you're already monitoring LCP in production, adding INP measurement to the same pipeline is straightforward. I built &lt;a href="https://rpalert.dev" rel="noopener noreferrer"&gt;RPAlert&lt;/a&gt; to handle this for React apps — it tracks Web Vitals including INP from real browsers and alerts via Slack or Discord when thresholds are crossed. Given that INP is now an official ranking signal and most React apps haven't tuned for it, catching regressions as you work toward improvement is worth having in place.&lt;/p&gt;




&lt;h2&gt;
  
  
  What to prioritize now
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Measure your INP first.&lt;/strong&gt; Add the &lt;code&gt;PerformanceObserver&lt;/code&gt; above to production and collect a week of data. Look at which interactions are the worst offenders and which pages they're on. The distribution will tell you where to focus.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit your event handlers.&lt;/strong&gt; Find click and input handlers that do significant synchronous work and identify which ones can be wrapped in &lt;code&gt;useTransition&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Switch from Long Tasks to LoAF&lt;/strong&gt; in your monitoring if you're already collecting Long Task data. The attribution is better and the LoAF data supersedes Long Tasks for debugging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check your INP score in Chrome UX Report&lt;/strong&gt; (via PageSpeed Insights or Search Console) for a baseline. If you're in the "Needs Improvement" or "Poor" range, it's affecting your search ranking today.&lt;/p&gt;
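
&lt;p&gt;The same field data is available programmatically through the CrUX API, if you want the baseline in a script. A sketch, assuming &lt;code&gt;CRUX_API_KEY&lt;/code&gt; is a key created in the Google Cloud console:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;const CRUX_API_KEY = 'your-api-key'; // placeholder

const res = await fetch(
  `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      origin: 'https://yourapp.com',
      metrics: ['interaction_to_next_paint'],
    }),
  },
);

// p75 INP for the origin over the trailing 28-day collection window
const data = await res.json();
console.log(data.record.metrics.interaction_to_next_paint.percentiles.p75);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;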




&lt;p&gt;The FID-to-INP transition isn't just a metric rename. It's a change in what "responsive" means for a passing grade. A React app that passed Core Web Vitals before March 2024 might be failing now without any code having changed. Worth checking.&lt;/p&gt;

</description>
      <category>react</category>
      <category>performance</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Performance Improvements Don't Last. Here's Why</title>
      <dc:creator>nosyos</dc:creator>
      <pubDate>Thu, 30 Apr 2026 14:02:00 +0000</pubDate>
      <link>https://dev.to/nosyos/performance-improvements-dont-last-heres-why-4oc</link>
      <guid>https://dev.to/nosyos/performance-improvements-dont-last-heres-why-4oc</guid>
      <description>&lt;p&gt;A team spends a sprint optimizing LCP. Numbers improve. Six months later the app is slower than before the work started. Nobody made a single decision to make it slower. It just accumulated.&lt;/p&gt;

&lt;p&gt;This is the normal trajectory without structural changes. Individual optimizations decay. Culture doesn't.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why the gains disappear
&lt;/h2&gt;

&lt;p&gt;Performance degrades through ordinary work. A developer adds a new dependency. A designer hands off a 1.4MB hero image and nobody checks the size. Marketing adds a tag via the tag manager. A component gets a &lt;code&gt;useEffect&lt;/code&gt; with an unstable dependency that re-runs the effect on every render. Each change is small, reviewed individually, and ships fine.&lt;/p&gt;

&lt;p&gt;The problem is that performance review doesn't happen at the same granularity as code review. Code gets scrutinized line by line. Performance gets checked periodically, if at all, by whoever remembers to run Lighthouse.&lt;/p&gt;

&lt;p&gt;The improvements you made six months ago are gone not because someone undid them, but because the process that created the degradation in the first place was never changed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Performance budgets only work with enforcement
&lt;/h2&gt;

&lt;p&gt;A performance budget is a threshold: bundle size under 200KB, LCP under 2.5s, no new Long Tasks over 150ms. Most teams that try budgets define them and then don't enforce them, which means they don't exist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A budget without automated enforcement is a suggestion.&lt;/strong&gt; The only budget that changes behavior is one that fails a CI check and blocks a merge.&lt;/p&gt;

&lt;p&gt;Bundlesize and Lighthouse CI both integrate into GitHub Actions and can fail a PR when thresholds are crossed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .github/workflows/performance.yml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Lighthouse CI&lt;/span&gt;
  &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;treosh/lighthouse-ci-action@v10&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;urls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;https://staging.yourapp.com&lt;/span&gt;
    &lt;span class="na"&gt;budgetPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./budget.json&lt;/span&gt;
    &lt;span class="na"&gt;uploadArtifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;budget.json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"timings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"metric"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"largest-contentful-paint"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"budget"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2500&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"resourceSizes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"resourceType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"script"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"budget"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When a PR exceeds the budget, the check fails. The developer sees it before merge, not after deployment. This is the only version of a performance budget that actually works.&lt;/p&gt;

&lt;p&gt;Start with thresholds loose enough that you're not blocking everything immediately. Tighten them incrementally as the baseline improves. The goal at the start is to prevent regression, not to hit an ideal number.&lt;/p&gt;




&lt;h2&gt;
  
  
  Code review needs a performance lens
&lt;/h2&gt;

&lt;p&gt;Most teams review code for correctness, readability, and security. Performance is rarely on the checklist, which means expensive patterns ship unnoticed.&lt;/p&gt;

&lt;p&gt;A few things worth adding to your review process:&lt;/p&gt;

&lt;p&gt;New dependencies should trigger a bundle size check. &lt;code&gt;bundlephobia.com&lt;/code&gt; takes 30 seconds and shows exactly what a package adds to your bundle before you commit to it. A 40KB dependency you use for two functions is worth questioning.&lt;/p&gt;

&lt;p&gt;Components that render lists should be reviewed for scale. A list component that works with 50 items in staging might create Long Tasks at 500 items in production. If it's not using virtualization and the data could grow, flag it.&lt;/p&gt;
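
&lt;p&gt;If the data could genuinely grow, windowing keeps the render cost flat. A sketch with &lt;code&gt;react-window&lt;/code&gt;, assuming a flat item array and a fixed row height:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;import { FixedSizeList } from 'react-window';

interface Item { id: string; name: string }

// Only the rows currently in view are mounted, so reconciliation cost
// stays flat whether the list holds 50 items or 5,000.
function ResultsList({ items }: { items: Item[] }) {
  return (
    &amp;lt;FixedSizeList height={400} width="100%" itemCount={items.length} itemSize={36}&amp;gt;
      {({ index, style }) =&amp;gt; &amp;lt;div style={style}&amp;gt;{items[index].name}&amp;lt;/div&amp;gt;}
    &amp;lt;/FixedSizeList&amp;gt;
  );
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;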

&lt;p&gt;Images added to the codebase should have explicit dimensions and a format check. If someone commits a PNG larger than 100KB for a UI element, that's worth a comment.&lt;/p&gt;

&lt;p&gt;None of this requires a formal checklist. It requires one or two engineers who know to look for it and normalize asking the question in review.&lt;/p&gt;




&lt;h2&gt;
  
  
  Make performance data visible to everyone
&lt;/h2&gt;

&lt;p&gt;Performance stays one engineer's problem when only one engineer can see the data. The moment your LCP numbers appear somewhere the whole team looks — a Slack channel, a dashboard on a shared screen, a weekly metric in the team standup — it becomes a shared concern.&lt;/p&gt;

&lt;p&gt;The practical version: route your performance alerts to a channel where the whole team is present. When a deploy causes an LCP regression and a Slack message appears in &lt;code&gt;#engineering&lt;/code&gt;, everyone sees it. The developer who shipped the change sees it. The product manager sees it. It becomes a team metric rather than an infrastructure metric.&lt;/p&gt;
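
&lt;p&gt;The do-it-yourself version is small. A sketch that posts a regression to a Slack incoming webhook; the webhook URL and the numbers are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// SLACK_WEBHOOK_URL is a placeholder for your channel's incoming webhook
async function alertRegression(metric: string, before: number, after: number) {
  await fetch(process.env.SLACK_WEBHOOK_URL!, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      text: `:warning: ${metric} p75 regressed after deploy: ${before}ms -&amp;gt; ${after}ms`,
    }),
  });
}

await alertRegression('LCP', 2100, 3400);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;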

&lt;p&gt;I built &lt;a href="https://rpalert.dev" rel="noopener noreferrer"&gt;RPAlert&lt;/a&gt; partly for this reason — the Slack and Discord integration means performance regressions surface in the same place where the team already communicates. It's a small thing that changes who feels responsible for the numbers.&lt;/p&gt;

&lt;p&gt;The same logic applies to your analytics dashboard. If LCP trends are buried in a monitoring tool that only two people have logins for, performance will remain two people's concern.&lt;/p&gt;




&lt;h2&gt;
  
  
  Designers and PMs are part of this
&lt;/h2&gt;

&lt;p&gt;Performance problems that originate outside the engineering team can't be fixed purely by engineers. A design system that specifies large, uncompressed images as the standard will produce large, uncompressed images at every launch. A product process that doesn't include performance review before shipping a feature will produce features that haven't been evaluated for performance impact.&lt;/p&gt;

&lt;p&gt;The lowest-effort version of this: add performance to your definition of done. Before a feature ships, someone confirms the LCP element on affected pages hasn't gotten worse. Not a full audit — a single check. If it fails, it's a bug, same as a broken form submission.&lt;/p&gt;

&lt;p&gt;With designers specifically: the conversation is usually easier than expected once the data exists. Showing a designer that their 2MB hero image is causing a 1.2s LCP increase for mobile users is more persuasive than a general request to "optimize images." Specific numbers change specific behavior.&lt;/p&gt;




&lt;h2&gt;
  
  
  The learning investment pays off asymmetrically
&lt;/h2&gt;

&lt;p&gt;A single lunch-and-learn session where you walk the team through opening Chrome DevTools, running a Lighthouse audit, and interpreting the results changes how people work for months. Developers who've never looked at a performance waterfall start noticing things during their own testing. Designers start asking about image format before handing off assets.&lt;/p&gt;

&lt;p&gt;The return on two hours of internal education is disproportionate to the investment. It doesn't require bringing in an outside expert or building a curriculum. It requires one person who knows the tools well enough to show them to the rest of the team.&lt;/p&gt;




&lt;p&gt;Culture isn't a process you implement. It's the accumulated effect of small structural changes: enforcement that makes the budget real, review habits that catch expensive patterns early, visibility that makes performance everyone's number. The teams with consistently fast apps didn't get there through heroic optimization sprints. They just made it harder for the app to get slower without someone noticing.&lt;/p&gt;

</description>
      <category>react</category>
      <category>performance</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Slow Pages Cost Money. Here's How to Prove It.</title>
      <dc:creator>nosyos</dc:creator>
      <pubDate>Tue, 28 Apr 2026 14:04:00 +0000</pubDate>
      <link>https://dev.to/nosyos/slow-pages-cost-money-heres-how-to-prove-it-4fbm</link>
      <guid>https://dev.to/nosyos/slow-pages-cost-money-heres-how-to-prove-it-4fbm</guid>
      <description>&lt;p&gt;Performance work stalls not because engineers don't care, but because the business case is vague. "The app feels faster" doesn't unlock budget. "We reduced LCP by 800ms and checkout conversion went up 12%" does.&lt;/p&gt;

&lt;p&gt;The teams that get sustained investment in performance are the ones who learned to speak in numbers that matter to stakeholders. Here's how to build that argument.&lt;/p&gt;




&lt;h2&gt;
  
  
  The numbers that already exist
&lt;/h2&gt;

&lt;p&gt;You don't need to run your own study. The data is well-established at this point:&lt;/p&gt;

&lt;p&gt;Google's research on Core Web Vitals found that sites meeting the "Good" threshold for LCP see 24% fewer abandoned page loads compared to sites in the "Poor" range. Deloitte found that a 0.1s improvement in mobile site speed correlates with an 8% increase in conversions for retail sites. Vodafone saw an 8% increase in sales after a 31% improvement in LCP.&lt;/p&gt;

&lt;p&gt;These aren't cherry-picked outliers. The pattern holds across industries: slower pages lose users at predictable rates, and the loss scales with how slow the experience is.&lt;/p&gt;

&lt;p&gt;The most direct way to frame it for a non-technical stakeholder: every 100ms of additional load time costs some percentage of your conversions. The exact number varies by industry, audience, and baseline speed, but the direction is never ambiguous.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to calculate your own cost
&lt;/h2&gt;

&lt;p&gt;Generic industry statistics are useful for initial buy-in. Your own data is what closes the argument.&lt;/p&gt;

&lt;p&gt;Start with what you have. Most teams have analytics that show page load time alongside conversion or retention metrics. Segment your users by LCP performance bucket — "Good" under 2.5s, "Needs Improvement" 2.5–4s, "Poor" over 4s — and compare conversion rates across those groups.&lt;/p&gt;

&lt;p&gt;If your checkout conversion rate for users with LCP under 2.5s is 4.2% and for users with LCP over 4s it's 2.8%, the math becomes concrete. If 20% of your traffic is in the "Poor" bucket and your checkout flow sees 100,000 sessions per month, that's 20,000 slow sessions; closing the 1.4-point gap is worth roughly 280 additional conversions per month at whatever your average order value is.&lt;/p&gt;

&lt;p&gt;This isn't a controlled experiment — there are confounding variables, slower devices correlate with other demographic factors, and so on. But it's directionally correct and it's your data, which is far more persuasive to a leadership team than a Deloitte study about retail sites.&lt;/p&gt;
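
&lt;p&gt;To get that segmentation into your analytics in the first place, tag each pageview with its LCP bucket. A sketch, with &lt;code&gt;analytics.track&lt;/code&gt; standing in for whatever your pipeline uses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;new PerformanceObserver((list) =&amp;gt; {
  const lcp = list.getEntries().at(-1)?.startTime;
  if (lcp === undefined) return;

  // Core Web Vitals buckets: Good &amp;lt; 2.5s, Needs Improvement &amp;lt; 4s, Poor otherwise
  const bucket = lcp &amp;lt; 2500 ? 'good' : lcp &amp;lt; 4000 ? 'needs-improvement' : 'poor';

  // Attach the bucket so conversions can be segmented by it later
  analytics.track('lcp_bucket', { bucket, page: location.pathname });
}).observe({ type: 'largest-contentful-paint', buffered: true });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;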




&lt;h2&gt;
  
  
  The cost of a regression is easier to calculate than the cost of being slow
&lt;/h2&gt;

&lt;p&gt;Here's the argument that often lands faster: quantify what a performance regression costs, then show what it costs to not catch one quickly.&lt;/p&gt;

&lt;p&gt;A deploy that increases LCP by 1.5s across your checkout flow and sits undetected for 4 hours: take your hourly transaction volume, apply the conversion rate delta you measured above, and multiply. For a moderately busy e-commerce site, a 4-hour regression can mean tens of thousands of dollars in lost revenue. And the inputs are real rather than hypothetical, because you have the actual transaction data from that window.&lt;/p&gt;
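
&lt;p&gt;The arithmetic is simple enough to sanity-check in a few lines. A sketch with made-up inputs, not benchmarks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Back-of-envelope cost of a conversion regression: sessions per hour
// entering the flow, the measured conversion-rate delta, average order
// value, and how long the regression went undetected.
function regressionCost(
  hourlySessions: number,
  conversionDelta: number, // e.g. 0.014 for a 1.4-point drop
  averageOrderValue: number,
  hoursUndetected: number,
): number {
  return hourlySessions * conversionDelta * averageOrderValue * hoursUndetected;
}

// 5,000 sessions/hour, 1.4-point drop, $75 AOV, 4 hours undetected ≈ $21,000
regressionCost(5000, 0.014, 75, 4);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;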

&lt;p&gt;The ROI argument for monitoring then writes itself. If catching a regression in 10 minutes instead of 4 hours saves $30,000, the cost of whatever tooling enables that is trivially justified.&lt;/p&gt;




&lt;h2&gt;
  
  
  Measuring before and after is non-negotiable
&lt;/h2&gt;

&lt;p&gt;The most common failure mode in performance projects: teams do the work, don't have the data to prove it made a difference, and can't justify the next round of investment.&lt;/p&gt;

&lt;p&gt;You need real-user measurements before you start any optimization, during the work, and continuously afterward. Not Lighthouse scores — those measure synthetic conditions on a controlled machine. Field data from actual users, segmented by page and device type.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;PerformanceObserver&lt;/code&gt; gives you this without a third-party dependency:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PerformanceObserver&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;lcp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getEntries&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;at&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nx"&gt;startTime&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lcp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;sendMetric&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;LCP&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;lcp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;deviceMemory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;navigator&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;deviceMemory&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;observe&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;largest-contentful-paint&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sending &lt;code&gt;deviceMemory&lt;/code&gt; alongside the metric lets you segment by device class — low-memory devices are a good proxy for slower hardware. The performance gap between your p50 and p75 users is often where the business impact lives.&lt;/p&gt;

&lt;p&gt;Once you have this instrumented, connect it to your analytics. LCP by page, by device, over time. When you ship an optimization, you'll see the distribution shift in the data. That shift is your ROI evidence.&lt;/p&gt;

&lt;p&gt;For the alerting side — catching regressions before they become hours-long revenue events — I built &lt;a href="https://rpalert.dev" rel="noopener noreferrer"&gt;RPAlert&lt;/a&gt; to handle the threshold monitoring and Slack/Discord notification layer. The "cost of a 4-hour regression" calculation I described above is exactly the argument for having that kind of alerting in place: the monitoring cost is fixed and small, the regression cost is variable and potentially large.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to frame this for stakeholders
&lt;/h2&gt;

&lt;p&gt;Engineers tend to present performance work as a technical improvement. Stakeholders hear "we made some things faster." The same work framed as "we identified that 18% of our users are experiencing load times that reduce checkout conversion by 1.4 percentage points, and we have a plan to move them into the Good tier" lands differently.&lt;/p&gt;

&lt;p&gt;A few framings that work:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Revenue at risk:&lt;/strong&gt; "X% of sessions have LCP over 4s. Based on our conversion data, this segment converts at Y% vs Z% for fast sessions. At our current volume, that's approximately $N/month in lost revenue."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Regression cost:&lt;/strong&gt; "Our last deploy regression ran for 4 hours before we caught it. Based on transaction volume during that window, the estimated revenue impact was $N. We're proposing monitoring that would have caught it in under 10 minutes."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Competitive framing:&lt;/strong&gt; Run WebPageTest on your main competitors. If you're 1.2s slower on mobile than your closest competitor, that's a meaningful talking point in a room where people think about market share.&lt;/p&gt;




&lt;h2&gt;
  
  
  KPIs worth tracking continuously
&lt;/h2&gt;

&lt;p&gt;LCP p75 by page — the 75th percentile is what Google uses for Core Web Vitals thresholds, and it's the right target because it represents your slower users, not the median.&lt;/p&gt;

&lt;p&gt;Regression frequency and MTTR (mean time to resolution) — how often you have regressions and how quickly you fix them. This makes the monitoring ROI argument over time.&lt;/p&gt;

&lt;p&gt;Conversion rate by performance bucket — LCP Good vs. Needs Improvement vs. Poor, segmented from your analytics. This is the number that connects engineering work to business outcomes.&lt;/p&gt;
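
&lt;p&gt;To make that last KPI concrete, here's a minimal bucketing sketch using the standard LCP thresholds. &lt;code&gt;sendMetric&lt;/code&gt; stands in for whatever reporting function you already use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Standard LCP thresholds: Good up to 2.5s, Needs Improvement up to 4s, Poor beyond
function lcpBucket(ms: number): 'good' | 'needs-improvement' | 'poor' {
  if (ms &amp;lt;= 2500) return 'good';
  if (ms &amp;lt;= 4000) return 'needs-improvement';
  return 'poor';
}

new PerformanceObserver((list) =&amp;gt; {
  const lcp = list.getEntries().at(-1)?.startTime;
  // the bucket travels with the metric, so analytics can group conversions by it
  if (lcp) sendMetric({ metric: 'LCP', value: lcp, bucket: lcpBucket(lcp), page: location.pathname });
}).observe({ type: 'largest-contentful-paint', buffered: true });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;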

&lt;p&gt;None of these require expensive tooling to start. They require making the measurement a consistent practice, which is the harder organizational problem.&lt;/p&gt;




&lt;p&gt;The teams that make performance a sustained priority aren't the ones with the most engineering time or the biggest budgets. They're the ones who connected their performance metrics to the numbers the business already cares about. That connection starts with measuring the right things from the right place — real users, in production, continuously.&lt;/p&gt;

</description>
      <category>react</category>
      <category>performance</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Testing on Fast Wi-Fi Is Not a Performance Test</title>
      <dc:creator>nosyos</dc:creator>
      <pubDate>Thu, 23 Apr 2026 14:06:00 +0000</pubDate>
      <link>https://dev.to/nosyos/testing-on-fast-wi-fi-is-not-a-performance-test-5gol</link>
      <guid>https://dev.to/nosyos/testing-on-fast-wi-fi-is-not-a-performance-test-5gol</guid>
      <description>
&lt;p&gt;Most performance testing happens on a MacBook Pro, over a fast home or office connection, with a warm browser cache. Then you deploy and wonder why the numbers are different in production.&lt;/p&gt;

&lt;p&gt;The gap isn't mysterious. You were never testing what your users experience.&lt;/p&gt;




&lt;h2&gt;
  
  
  What your local setup hides from you
&lt;/h2&gt;

&lt;p&gt;Three things consistently make local testing optimistic:&lt;/p&gt;

&lt;p&gt;Your CPU is fast. A React component tree that reconciles in 30ms on your development machine can take 150ms on a mid-range Android phone from two years ago. JavaScript execution time scales with CPU speed, not network speed. Throttling your network doesn't help here.&lt;/p&gt;

&lt;p&gt;Your cache is warm. You've loaded the page dozens of times during development. The browser has cached your fonts, your CSS, your JS bundles. A first-time visitor has none of that. Cold cache loads can be 2–3x slower than what you see after the third reload.&lt;/p&gt;

&lt;p&gt;Your connection is fast and stable. Office and home Wi-Fi is typically 50–200Mbps with low latency. A user on 4G in a building with poor signal might be getting 5Mbps with 200ms latency. The same 300KB JavaScript bundle takes 0.4s on your connection and 3.2s on theirs.&lt;/p&gt;
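
&lt;p&gt;The slow case is worse than raw throughput alone suggests. Here's a rough floor on transfer time, treating the link numbers as assumptions; TLS setup and TCP slow start add several round trips on top, which is how 300KB stretches past three seconds in practice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Floor on transfer time: one round trip plus size over bandwidth.
// Real loads add TLS setup and TCP slow start, so treat these as minimums.
const bundleBytes = 300 * 1024;

function transferFloorMs(mbps: number, rttMs: number): number {
  return rttMs + ((bundleBytes * 8) / (mbps * 1_000_000)) * 1000;
}

console.log(transferFloorMs(100, 20).toFixed(0)); // ~45ms floor on office Wi-Fi
console.log(transferFloorMs(5, 200).toFixed(0));  // ~692ms floor on weak 4G, before slow start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;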




&lt;h2&gt;
  
  
  Chrome DevTools throttling: useful, not sufficient
&lt;/h2&gt;

&lt;p&gt;DevTools lets you simulate slower network conditions and CPU performance from the Network and Performance panels. This is genuinely useful for catching obvious regressions. It's not a substitute for real-device testing.&lt;/p&gt;

&lt;p&gt;For network throttling, the built-in presets are a reasonable starting point:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Fast 3G": ~1.5Mbps download, 40ms latency — approximates a decent mobile connection&lt;/li&gt;
&lt;li&gt;"Slow 3G": ~400Kbps download, 200ms latency — approximates a poor signal or congested network&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To add CPU throttling alongside network throttling, open the Performance panel and set the CPU throttle multiplier before recording. 4x slowdown is a reasonable approximation of a mid-range phone; 6x for lower-end devices.&lt;/p&gt;

&lt;p&gt;The limitation: CPU throttling in DevTools is a multiplier applied to your existing hardware. A 6x slowdown on a fast Mac still doesn't fully reproduce the memory pressure, thermal constraints, or GPU pipeline behavior of a real low-end device. It's a direction, not a destination.&lt;/p&gt;




&lt;h2&gt;
  
  
  WebPageTest gives you closer to the real thing
&lt;/h2&gt;

&lt;p&gt;WebPageTest runs tests on actual devices and actual network connections, not simulations on your hardware. The free tier at webpagetest.org lets you test from real locations against real mobile device profiles.&lt;/p&gt;

&lt;p&gt;A few settings that matter:&lt;/p&gt;

&lt;p&gt;Set the test location to somewhere geographically relevant to your users. Latency scales with distance. Testing from a US East Coast location when half your users are in Southeast Asia will give you unrealistically fast numbers.&lt;/p&gt;

&lt;p&gt;Use a mobile device profile. The "Motorola G (gen 4)" or similar mid-range Android preset is a reasonable proxy for the median visitor to most consumer apps.&lt;/p&gt;

&lt;p&gt;Enable "First View" only initially — it's the cold cache scenario, which is what new users experience and what you most need to optimize for.&lt;/p&gt;

&lt;p&gt;The waterfall view is where the value is. Look at what loads, in what order, what blocks what. Third-party scripts that seem fast locally often appear as long blocking requests here. Fonts that you never notice on your warm-cache machine show up as early blocking resources. It's the closest thing to watching a real user load your page.&lt;/p&gt;




&lt;h2&gt;
  
  
  Lighthouse: right tool, often wrong setup
&lt;/h2&gt;

&lt;p&gt;Lighthouse is easy to run, well-documented, and measures the right things. It's also commonly run in a way that undermines its usefulness.&lt;/p&gt;

&lt;p&gt;Running Lighthouse on localhost — against your dev server, with hot module reloading active — gives you numbers that have nothing to do with production. Run it against your production or staging URL.&lt;/p&gt;

&lt;p&gt;Running it on a fast connection without throttling gives you numbers your slowest users will never see. The default Lighthouse settings in Chrome DevTools apply simulated throttling automatically; if you're running it via the CLI, check your throttling configuration.&lt;/p&gt;

&lt;p&gt;Running it once and treating the number as stable is also a mistake. Lighthouse results vary by 10–15% between runs on the same page due to background processes and timing variations. Run it three times and take the median.&lt;/p&gt;
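
&lt;p&gt;If you're scripting this, the median-of-three pattern is a few lines of shell. The URL is a placeholder, and &lt;code&gt;jq&lt;/code&gt; pulls the LCP value out of the JSON report:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# run three times, keep the middle LCP value (milliseconds)
for i in 1 2 3; do
  lighthouse https://example.com \
    --only-categories=performance \
    --output=json --quiet \
    --chrome-flags="--headless" \
    | jq '.audits["largest-contentful-paint"].numericValue'
done | sort -n | sed -n '2p'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;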




&lt;h2&gt;
  
  
  The ceiling on simulation
&lt;/h2&gt;

&lt;p&gt;Every simulation tool has the same ceiling: it's running on your infrastructure, with your hardware, and making assumptions about user conditions that may not match reality.&lt;/p&gt;

&lt;p&gt;The only way to know what your actual users experience is to measure it from their browsers. &lt;code&gt;PerformanceObserver&lt;/code&gt; gives you real field data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PerformanceObserver&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;lcp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getEntries&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;at&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nx"&gt;startTime&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lcp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;sendMetric&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;LCP&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;lcp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;observe&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;largest-contentful-paint&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The distribution of real-user LCP is almost always wider than what your local tests suggest. The p75 is what matters for Core Web Vitals — the 75th percentile user's experience, not the median. That user might be on a slow connection in a weak signal area, and your simulation never represented them.&lt;/p&gt;
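
&lt;p&gt;Computing that number from collected samples is a few lines. A sketch, where &lt;code&gt;lcpSamples&lt;/code&gt; stands in for the millisecond values your observer reported:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Illustrative samples; in practice these come from your metrics store
const lcpSamples = [1200, 1700, 1900, 2100, 2300, 2600, 3400];

function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) =&amp;gt; a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

console.log(percentile(lcpSamples, 75)); // 2600, the number the thresholds judge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;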

&lt;p&gt;Once you have real-user data, you also get deploy-time regression detection. If a change you shipped moves the p75 LCP from 1.9s to 3.1s, you want to know within minutes. I built &lt;a href="https://rpalert.dev" rel="noopener noreferrer"&gt;RPAlert&lt;/a&gt; specifically for this — it monitors LCP and Long Tasks from real browsers and sends a Slack or Discord alert when thresholds are crossed. The simulation tools tell you what might happen; real-user monitoring tells you what did.&lt;/p&gt;




&lt;p&gt;Use DevTools throttling to catch things before they ship. Use WebPageTest to get a more honest picture of production conditions. Use real-user measurement to know what's actually happening. They're not substitutes for each other — they answer different questions at different points in the development cycle.&lt;/p&gt;

</description>
      <category>react</category>
      <category>performance</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Where to Start with React Performance</title>
      <dc:creator>nosyos</dc:creator>
      <pubDate>Wed, 22 Apr 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/nosyos/where-to-start-with-react-performance-1g83</link>
      <guid>https://dev.to/nosyos/where-to-start-with-react-performance-1g83</guid>
      <description>&lt;p&gt;You've probably already tried something. Added &lt;code&gt;useMemo&lt;/code&gt; in a few places. Ran Lighthouse. Checked the bundle size. Maybe split a route or two.&lt;/p&gt;

&lt;p&gt;And the app still feels slow.&lt;/p&gt;

&lt;p&gt;The issue is usually not the optimization — it's that the mental model came later, or never. Performance work done without a clear picture of what you're measuring is mostly guesswork. Some of it sticks. A lot doesn't.&lt;/p&gt;

&lt;p&gt;The articles below are ordered so that each one gives you something concrete before you move to the next. They're not meant to be read in a weekend. Work through one, apply it to your actual app, then come back.&lt;/p&gt;




&lt;h2&gt;
  
  
  Understand what the browser measures before you touch anything
&lt;/h2&gt;

&lt;p&gt;Before profiling, before optimizing, read &lt;a href="https://dev.to/nosyos/core-web-vitals-explained-what-they-are-how-to-measure-them-and-why-they-matter-for-react-apps-3f2p"&gt;Core Web Vitals Explained&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;LCP, INP, CLS — these aren't SEO checkboxes. They're the closest thing we have to a standardized measurement of how fast your app feels to a real user. The article walks through what each metric actually captures, how to read them in a React app, and which thresholds matter in practice. If you've skimmed the MDN page and moved on, this will fill in the gaps that MDN skips.&lt;/p&gt;




&lt;h2&gt;
  
  
  Find your LCP element before you do anything else
&lt;/h2&gt;

&lt;p&gt;Lighthouse will show green image audits while your LCP sits at 4.2 seconds. I've seen it. The culprit was a CSS &lt;code&gt;background-image&lt;/code&gt; used for the hero. &lt;code&gt;next/image&lt;/code&gt; doesn't touch those. Nobody had checked which element was actually being measured.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/nosyos/most-lcp-fixes-come-down-to-one-image-2i09"&gt;Most LCP Fixes Come Down to One Image&lt;/a&gt; is about exactly this: the diagnosis step most developers skip. You cannot fix LCP reliably until you know what element the browser is treating as "largest." That's the only point of this article, and it's worth the ten minutes.&lt;/p&gt;




&lt;h2&gt;
  
  
  Two sources of drag that don't show up in your component tree
&lt;/h2&gt;

&lt;p&gt;You fix the hero. LCP improves. The page still feels sluggish during interaction. This is usually long tasks — JavaScript work that blocks the main thread long enough for users to notice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/nosyos/long-tasks-are-quietly-killing-your-react-apps-performance-3487"&gt;Long Tasks Are Quietly Killing Your React App's Performance&lt;/a&gt; explains what they are, where to find them in DevTools, and why React apps are particularly prone to generating them. Read this before you reach for any scheduler-level fixes.&lt;/p&gt;

&lt;p&gt;Then read &lt;a href="https://dev.to/nosyos/the-scripts-you-didnt-write-are-slowing-down-your-app"&gt;The Scripts You Didn't Write Are Slowing Down Your App&lt;/a&gt;. Analytics tags, chat widgets, tag managers firing pixels for campaigns that ended months ago — these all compete for main thread time. The engineering team usually has no idea how many are running or what they cost. This article gives you the tools to find out.&lt;/p&gt;




&lt;h2&gt;
  
  
  Your development environment is not your production environment
&lt;/h2&gt;

&lt;p&gt;This one is short. Read &lt;a href="https://dev.to/nosyos/why-your-app-feels-fast-in-staging-and-slow-in-production-27e6"&gt;Why Your App Feels Fast in Staging and Slow in Production&lt;/a&gt;, then look at how you've been profiling. CPU throttling, real network conditions, cold cache behavior — the article is a checklist you can run against your current setup immediately.&lt;/p&gt;




&lt;h2&gt;
  
  
  Don't assume the React Compiler handles everything
&lt;/h2&gt;

&lt;p&gt;If you're on React 19 or thinking about the compiler, &lt;a href="https://dev.to/nosyos/memoization-in-the-react-compiler-era-what-actually-changes-3e6b"&gt;What the React Compiler Quietly Skips&lt;/a&gt; covers what it does and doesn't automate. Side effects, context, components with non-deterministic output — these are still your problem. The article is specific enough to be useful without being alarmist about it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Stop letting fixed problems come back
&lt;/h2&gt;

&lt;p&gt;Performance regressions are quiet. A dependency updates, a feature ships, and the LCP you worked to fix climbs back above three seconds. Nobody notices until a user says something.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/nosyos/detecting-performance-regressions-right-after-you-deploy-403f"&gt;Catching React Performance Regressions Before Your Users Do&lt;/a&gt; is about wiring performance checks into CI so regressions surface before merge. It's the step most teams skip because it feels like overhead — until they've been burned once.&lt;/p&gt;

&lt;p&gt;CI catches regressions in test conditions. But production is different. &lt;a href="https://dev.to/nosyos/monitoring-past-performance-vs-alerting-real-time-issues-what-react-teams-are-missing-hdc"&gt;Monitoring Past Performance vs. Alerting Real-Time Issues&lt;/a&gt; draws the line between historical analytics and real-time alerting. Most teams have one and assume it covers both. It doesn't.&lt;/p&gt;

&lt;p&gt;If you'd rather not build the alerting layer yourself, &lt;a href="https://rpalert.dev" rel="noopener noreferrer"&gt;RPAlert&lt;/a&gt; detects LCP degradation in production and sends a notification to Slack or Discord within 60 seconds. There's a free tier, and setup is one &lt;code&gt;npm install&lt;/code&gt; and a component wrapper. It's not a replacement for understanding your metrics — but once you understand them, you'll want to know the moment they break.&lt;/p&gt;




&lt;h2&gt;
  
  
  Read in this order
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://dev.to/nosyos/core-web-vitals-explained-what-they-are-how-to-measure-them-and-why-they-matter-for-react-apps-3f2p"&gt;Core Web Vitals Explained&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/nosyos/most-lcp-fixes-come-down-to-one-image-2i09"&gt;Most LCP Fixes Come Down to One Image&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/nosyos/long-tasks-are-quietly-killing-your-react-apps-performance-3487"&gt;Long Tasks Are Quietly Killing Your React App's Performance&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/nosyos/the-scripts-you-didnt-write-are-slowing-down-your-app"&gt;The Scripts You Didn't Write Are Slowing Down Your App&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/nosyos/why-your-app-feels-fast-in-staging-and-slow-in-production-27e6"&gt;Why Your App Feels Fast in Staging and Slow in Production&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/nosyos/memoization-in-the-react-compiler-era-what-actually-changes-3e6b"&gt;What the React Compiler Quietly Skips&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/nosyos/detecting-performance-regressions-right-after-you-deploy-403f"&gt;Catching React Performance Regressions Before Your Users Do&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/nosyos/monitoring-past-performance-vs-alerting-real-time-issues-what-react-teams-are-missing-hdc"&gt;Monitoring Past Performance vs. Alerting Real-Time Issues&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Eight articles. Try each concept against your own app before moving to the next. Doing it that way, this sequence will teach you more about production React performance than most tutorials manage in twice the length.&lt;/p&gt;

</description>
      <category>react</category>
      <category>performance</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>The Scripts You Didn't Write Are Slowing Down Your App</title>
      <dc:creator>nosyos</dc:creator>
      <pubDate>Tue, 21 Apr 2026 14:24:00 +0000</pubDate>
      <link>https://dev.to/nosyos/the-scripts-you-didnt-write-are-slowing-down-your-app-4lnp</link>
      <guid>https://dev.to/nosyos/the-scripts-you-didnt-write-are-slowing-down-your-app-4lnp</guid>
      <description>&lt;p&gt;I once audited a page where nearly 40% of the main thread blocking time came from a tag manager firing scripts that the engineering team didn't know were still active. Analytics from a vendor they'd switched away from. A heatmap tool from a trial nobody cancelled. A pixel for an ad campaign that ended months ago.&lt;/p&gt;

&lt;p&gt;Nobody wrote those scripts. They accumulated.&lt;/p&gt;




&lt;h2&gt;
  
  
  What third-party scripts actually cost you
&lt;/h2&gt;

&lt;p&gt;The performance impact happens in two places: network and main thread.&lt;/p&gt;

&lt;p&gt;On the network side, each script is an additional HTTP request, often to a slow external domain with no SLA on response time. A single chat widget might make 4–6 requests before it's ready. On a slow connection, this shows up in your waterfall as a long chain of blocking or near-blocking resources.&lt;/p&gt;

&lt;p&gt;On the main thread, third-party scripts run JavaScript. That JavaScript competes with your React app for CPU time. A script that takes 80ms to parse and execute on a fast development machine might take 350ms on a mid-range Android phone. Every millisecond it holds the main thread is a millisecond your app can't respond to user input or complete a render.&lt;/p&gt;

&lt;p&gt;The combination of late network requests and CPU-heavy execution is why third-party scripts are so effective at inflating LCP. The browser is waiting on resources it didn't know it needed, while the main thread is occupied with someone else's code.&lt;/p&gt;




&lt;h2&gt;
  
  
  Find out what's actually running
&lt;/h2&gt;

&lt;p&gt;Before you optimize anything, run your production URL through WebPageTest with a mobile throttling preset and look at the waterfall. Sort by domain. You'll see every request, grouped by origin.&lt;/p&gt;

&lt;p&gt;The question to ask for each third-party domain: does the engineering team know this is here, and what breaks if it doesn't load?&lt;/p&gt;

&lt;p&gt;Chrome's Coverage tab gives you the JavaScript utilization angle — how much of each loaded script is actually executed on the page. A script that's 90% unused is paying full network and parse cost for very little value.&lt;/p&gt;

&lt;p&gt;The surprises are usually in the tag manager. If your site uses Google Tag Manager or a similar tool, open it and look at what's configured. Marketing and analytics teams often have direct access and add tags without engineering review. The list is rarely what anyone expects.&lt;/p&gt;




&lt;h2&gt;
  
  
  Stop loading scripts at the worst possible time
&lt;/h2&gt;

&lt;p&gt;Most third-party scripts don't need to be ready before your app is interactive. Analytics doesn't need to fire before the user can click anything. Chat widgets don't need to be loaded before the hero image is painted.&lt;/p&gt;

&lt;p&gt;The default behavior — scripts in &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt; without &lt;code&gt;async&lt;/code&gt; or &lt;code&gt;defer&lt;/code&gt; — blocks HTML parsing entirely. This is almost never what you want for third-party code.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;async&lt;/code&gt; loads the script in parallel with parsing, but executes it as soon as it downloads, which can still interrupt parsing at a bad moment. &lt;code&gt;defer&lt;/code&gt; loads in parallel and waits until parsing is complete before executing. For most analytics and tracking scripts, &lt;code&gt;defer&lt;/code&gt; is the right default.&lt;/p&gt;
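
&lt;p&gt;In markup, that's one attribute. The URL here is a placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&amp;lt;!-- blocks HTML parsing: avoid for third-party code --&amp;gt;
&amp;lt;script src="https://cdn.example.com/analytics.js"&amp;gt;&amp;lt;/script&amp;gt;

&amp;lt;!-- downloads in parallel, executes after parsing completes --&amp;gt;
&amp;lt;script defer src="https://cdn.example.com/analytics.js"&amp;gt;&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;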

&lt;p&gt;For scripts that are truly non-critical — chat widgets, feedback tools, anything that doesn't affect the initial render — load them after hydration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;script&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;script&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;script&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;src&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://third-party-widget.com/widget.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;script&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;appendChild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;script&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pushes execution entirely past React's initial render and hydration cycle. The widget loads when it loads. Your LCP doesn't wait for it.&lt;/p&gt;

&lt;p&gt;In Next.js, the &lt;code&gt;Script&lt;/code&gt; component handles this with the &lt;code&gt;strategy&lt;/code&gt; prop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Script&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;next/script&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// afterInteractive: loads after hydration, good for analytics&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Script&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://analytics.example.com/script.js"&lt;/span&gt; &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"afterInteractive"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;

&lt;span class="c1"&gt;// lazyOnload: loads during browser idle time, good for chat widgets&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Script&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"https://widget.example.com/chat.js"&lt;/span&gt; &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"lazyOnload"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;beforeInteractive&lt;/code&gt; exists for scripts that genuinely need to be ready before the page is usable. For third-party code, that's almost never true.&lt;/p&gt;




&lt;h2&gt;
  
  
  Tag managers are the hard part
&lt;/h2&gt;

&lt;p&gt;A tag manager with unrestricted access is effectively a way for non-engineers to inject arbitrary JavaScript into production. The scripts themselves might be fine individually. The problem is the total: 8 tags that each take 50ms to execute is 400ms of main thread time that engineering had no visibility into.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit the tag manager on a regular schedule.&lt;/strong&gt; Not annually — quarterly at minimum. For each tag: who owns it, what it does, and what happens if it's removed. Treat it like a dependency review. Tags accumulate the same way npm packages do, and they're harder to spot because they're not in the codebase.&lt;/p&gt;

&lt;p&gt;Two practical rules that help: require engineering sign-off before any new tag is added, and set a network budget threshold that triggers a review if total third-party bytes cross it. Neither is bureaucratic overhead — they're the minimum to prevent the page you ship from drifting away from the page you tested.&lt;/p&gt;
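
&lt;p&gt;A sketch of the budget check itself, using the Resource Timing API. The 500KB budget is an assumption to tune, and note the caveat in the comments about cross-origin timing headers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Sum transfer sizes for resources served from other origins.
// Caveat: cross-origin entries report transferSize as 0 unless the
// response carries a Timing-Allow-Origin header.
const thirdPartyBytes = performance
  .getEntriesByType('resource')
  .filter((e) =&amp;gt; !e.name.startsWith(location.origin))
  .reduce((sum, e) =&amp;gt; sum + (e as PerformanceResourceTiming).transferSize, 0);

const BUDGET_BYTES = 500 * 1024; // illustrative threshold
if (thirdPartyBytes &amp;gt; BUDGET_BYTES) {
  sendMetric({ metric: 'ThirdPartyBytes', value: thirdPartyBytes, page: location.pathname });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;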




&lt;h2&gt;
  
  
  The problem doesn't stay solved
&lt;/h2&gt;

&lt;p&gt;You optimize the loading strategy, audit the tag manager, remove the stale scripts. A month later, marketing adds a new analytics tool. Another month, a new A/B testing SDK. Each addition seems small in isolation.&lt;/p&gt;

&lt;p&gt;Measuring this from real users catches it before it accumulates. Adding the &lt;code&gt;PerformanceObserver&lt;/code&gt; for Long Tasks gives you a signal when a new script is hitting the main thread harder than expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PerformanceObserver&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getEntries&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;duration&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;sendMetric&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;LongTask&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;observe&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;longtask&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want that signal to reach you automatically when a new script causes a regression — without manually checking dashboards — I built &lt;a href="https://rpalert.dev" rel="noopener noreferrer"&gt;RPAlert&lt;/a&gt; to handle this. It monitors LCP and Long Tasks from real browsers and sends a Slack or Discord alert when thresholds are crossed. It's caught more than a few cases where a new marketing tag quietly pushed LCP past the threshold right after it was deployed.&lt;/p&gt;




&lt;p&gt;The engineering team usually gets blamed when the app is slow. The scripts that actually caused it were added by someone else, through a tool that didn't require a code review. Getting visibility into that layer is half the work.&lt;/p&gt;

</description>
      <category>react</category>
      <category>performance</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Most LCP Fixes Come Down to One Image</title>
      <dc:creator>nosyos</dc:creator>
      <pubDate>Thu, 16 Apr 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/nosyos/most-lcp-fixes-come-down-to-one-image-2i09</link>
      <guid>https://dev.to/nosyos/most-lcp-fixes-come-down-to-one-image-2i09</guid>
      <description>&lt;p&gt;Originally published at &lt;a href="https://rpalert.dev/blog/posts/most-lcp-fixes-come-down-to-one-image-2i09/" rel="noopener noreferrer"&gt;rpalert.dev/blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Next.js app with &lt;code&gt;next/image&lt;/code&gt; on every image component. Lighthouse image audit: no issues. LCP: 4.2 seconds. The hero was a CSS &lt;code&gt;background-image&lt;/code&gt;. &lt;code&gt;next/image&lt;/code&gt; doesn't touch those. Nobody had checked what the LCP element actually was.&lt;/p&gt;




&lt;h2&gt;
  
  
  Find your LCP element before you do anything else
&lt;/h2&gt;

&lt;p&gt;This is the step most people skip. They add &lt;code&gt;next/image&lt;/code&gt;, run Lighthouse, see green checkmarks on the image audit, and wonder why LCP is still slow.&lt;/p&gt;

&lt;p&gt;Open Chrome DevTools, run Lighthouse, and look at what it marks as the LCP element. If it's a &lt;code&gt;background-image&lt;/code&gt; set via CSS, the browser can't preload it the same way it handles a real &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; tag, and it won't get early fetch priority. Move it to an &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; element. This one change has fixed more LCP problems than anything else I've seen.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;code&gt;fetchpriority="high"&lt;/code&gt; is doing more work than most developers realize
&lt;/h2&gt;

&lt;p&gt;The browser assigns fetch priority based on what it finds during the initial HTML parse. Images discovered late — inside components that render after hydration, or below the fold at first scan — get normal or low priority. By the time the browser decides to fetch them, the LCP window is already closing.&lt;/p&gt;

&lt;p&gt;For your LCP image, you want the fetch to start as early as possible.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;img&lt;/span&gt;
  &lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"/hero.webp"&lt;/span&gt;
  &lt;span class="na"&gt;fetchpriority=&lt;/span&gt;&lt;span class="s"&gt;"high"&lt;/span&gt;
  &lt;span class="na"&gt;width=&lt;/span&gt;&lt;span class="s"&gt;{1200}&lt;/span&gt;
  &lt;span class="na"&gt;height=&lt;/span&gt;&lt;span class="s"&gt;{600}&lt;/span&gt;
  &lt;span class="na"&gt;alt=&lt;/span&gt;&lt;span class="s"&gt;"..."&lt;/span&gt;
&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In Next.js, the &lt;code&gt;priority&lt;/code&gt; prop on &lt;code&gt;next/image&lt;/code&gt; sets this automatically and also injects a preload link into &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Image&lt;/span&gt;
  &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"/hero.webp"&lt;/span&gt;
  &lt;span class="na"&gt;priority&lt;/span&gt;
  &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;1200&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;600&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"..."&lt;/span&gt;
&lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don't use &lt;code&gt;priority&lt;/code&gt; on more than one or two images per page. Telling the browser everything is urgent means nothing is.&lt;/p&gt;




&lt;h2&gt;
  
  
  next/image defaults will silently hurt your LCP
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;next/image&lt;/code&gt; lazy-loads by default. That means if your LCP image is rendered via &lt;code&gt;next/image&lt;/code&gt; without &lt;code&gt;priority&lt;/code&gt;, the browser is intentionally delaying its fetch until the image is about to enter the viewport.&lt;/p&gt;

&lt;p&gt;I've seen this cause regressions on otherwise well-optimized pages. The format is correct, the dimensions are explicit, Lighthouse scores are green — but LCP is 300ms slower than it should be because someone forgot &lt;code&gt;priority&lt;/code&gt;. It doesn't throw a warning. It just quietly loads late.&lt;/p&gt;

&lt;p&gt;For any image that could be the LCP element on any viewport — hero images, above-the-fold product shots, article cover images — set &lt;code&gt;priority&lt;/code&gt;. Default to it rather than remembering to add it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Explicit dimensions are non-negotiable
&lt;/h2&gt;

&lt;p&gt;A browser that doesn't know an image's dimensions reserves no space for it. When the image loads, content shifts. That's a CLS problem, not just a performance one — it makes the page feel broken to users even if the load time is acceptable.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;next/image&lt;/code&gt; will warn you when dimensions are missing. Every other &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; in your codebase that doesn't go through &lt;code&gt;next/image&lt;/code&gt; should have explicit &lt;code&gt;width&lt;/code&gt; and &lt;code&gt;height&lt;/code&gt; set too. It takes ten seconds and prevents a class of layout bugs entirely.&lt;/p&gt;




&lt;h2&gt;
  
  
  Format: stop overthinking it
&lt;/h2&gt;

&lt;p&gt;WebP is 25–35% smaller than JPEG at equivalent quality. AVIF is another 20–30% on top of that. &lt;code&gt;next/image&lt;/code&gt; serves WebP automatically; AVIF is opt-in through the &lt;code&gt;images.formats&lt;/code&gt; option in &lt;code&gt;next.config.js&lt;/code&gt;, with WebP as the fallback for browsers that don't support it.&lt;/p&gt;
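
&lt;p&gt;The opt-in is a one-line config change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// next.config.js
module.exports = {
  images: {
    // try AVIF first, fall back to WebP for browsers without support
    formats: ['image/avif', 'image/webp'],
  },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;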

&lt;p&gt;The format switch matters, but once you're serving WebP, the gains from AVIF are marginal compared to getting &lt;code&gt;fetchpriority&lt;/code&gt; right on the LCP element. Fix the priority first.&lt;/p&gt;




&lt;h2&gt;
  
  
  Optimizing once isn't enough
&lt;/h2&gt;

&lt;p&gt;Lighthouse confirms the fix on your machine. It doesn't tell you whether it holds under real conditions — actual devices, varied networks, CDN behavior on cold loads.&lt;/p&gt;

&lt;p&gt;Measuring from real users is the only way to know:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PerformanceObserver&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;lcp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getEntries&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;at&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nx"&gt;startTime&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lcp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;sendMetric&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;LCP&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;lcp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;observe&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;largest-contentful-paint&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The harder problem is that optimizations regress. A new developer adds a hero image without &lt;code&gt;priority&lt;/code&gt;. Someone replaces an &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; with a CSS background. The LCP you fixed at 1.6s quietly climbs back to 3.2s after the next deploy, and nobody notices until a user mentions it. If you want to catch that within minutes rather than days, I built &lt;a href="https://rpalert.dev" rel="noopener noreferrer"&gt;RPAlert&lt;/a&gt; for exactly this — it handles the LCP monitoring and alerting layer for React apps, collecting field data from real browsers and posting to Slack or Discord when thresholds are crossed. Worth setting up after you've done the optimization work, so the gains actually stick.&lt;/p&gt;




&lt;p&gt;The fix is almost always the same: find the LCP element, make it a real &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; tag, set &lt;code&gt;fetchpriority="high"&lt;/code&gt;, give it explicit dimensions. Everything else is secondary.&lt;/p&gt;

</description>
      <category>react</category>
      <category>performance</category>
      <category>webdev</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>Catching React Performance Regressions Before Your Users Do</title>
      <dc:creator>nosyos</dc:creator>
      <pubDate>Tue, 14 Apr 2026 14:04:00 +0000</pubDate>
      <link>https://dev.to/nosyos/detecting-performance-regressions-right-after-you-deploy-403f</link>
      <guid>https://dev.to/nosyos/detecting-performance-regressions-right-after-you-deploy-403f</guid>
      <description>&lt;p&gt;Originally published at &lt;a href="https://rpalert.dev/blog/posts/detecting-performance-regressions-right-after-you-deploy-403f/" rel="noopener noreferrer"&gt;rpalert.dev/blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Three hours after a deploy, someone posts a screenshot in Slack. One-star review. App "takes forever to load." You check Lighthouse — fine. You check Sentry — no errors. The regression started the moment you deployed. Nobody knew until a user complained.&lt;/p&gt;

&lt;p&gt;This is the normal state of affairs for most teams, and it's not hard to fix.&lt;/p&gt;




&lt;h2&gt;
  
  
  The first 30 minutes are the cheapest
&lt;/h2&gt;

&lt;p&gt;Performance regressions don't announce themselves. They show up in production under conditions you can't fully replicate: real devices slower than your dev machine, networks that drop in and out, CDN cache misses on fresh deploys.&lt;/p&gt;

&lt;p&gt;The first 10–30 minutes after a deploy are when regressions are cheapest to fix. You can just roll back. By the time a support ticket arrives, you're already hours into the impact window and the fix is a proper investigation, not a revert.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why your existing tools miss this
&lt;/h2&gt;

&lt;p&gt;Lighthouse CI runs against staging with synthetic conditions. It won't catch regressions that only appear under real network speeds or with production data volumes. An LCP that went from 1.8s to 3.2s doesn't throw an exception — Sentry has nothing to report. APM tools tell you about backend latency, not what's happening in the browser.&lt;/p&gt;

&lt;p&gt;The shared blind spot: real users on real devices. None of these tools will fire when your LCP degrades after a deploy.&lt;/p&gt;




&lt;h2&gt;
  
  
  LCP is what to watch
&lt;/h2&gt;

&lt;p&gt;For deploy regressions specifically, LCP is the right metric. It's the best proxy for perceived load speed, and it's where most regressions surface first. Long Tasks are the clearest signal of render bloat. FCP is a useful early warning.&lt;/p&gt;

&lt;p&gt;The browser has a native API for all of this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;lcpObserver&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PerformanceObserver&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;entries&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getEntries&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;lcp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;entries&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;entries&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;startTime&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lcp&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;2500&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// don't batch threshold crossings — send immediately&lt;/span&gt;
    &lt;span class="nf"&gt;sendMetric&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;LCP&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;lcp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="nx"&gt;lcpObserver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;observe&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;largest-contentful-paint&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This runs in every user's browser. The question is what you do with the data. A minimal pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;React app → PerformanceObserver → batch POST every 30s (immediate on threshold cross)
→ your API → threshold check → Discord/Slack alert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The batching distinction matters. Routine measurements can queue up — there's no reason to POST on every LCP reading. But when something crosses a threshold you care about, you want it sent immediately, not held for the next batch window.&lt;/p&gt;
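
&lt;p&gt;A sketch of that batching layer, where the endpoint and metric shape are assumptions rather than a prescribed API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;type Metric = { metric: string; value: number; page: string };

const queue: Metric[] = [];
const ENDPOINT = '/api/metrics'; // placeholder collection endpoint

function flush() {
  if (queue.length === 0) return;
  const body = JSON.stringify(queue.splice(0, queue.length));
  // sendBeacon survives tab closes; keepalive fetch is the fallback
  const sent = navigator.sendBeacon?.(ENDPOINT, body);
  if (!sent) fetch(ENDPOINT, { method: 'POST', body, keepalive: true });
}

export function sendMetric(m: Metric, urgent = false) {
  queue.push(m);
  if (urgent) flush(); // threshold crossings skip the 30s window
}

setInterval(flush, 30_000);
document.addEventListener('visibilitychange', () =&amp;gt; {
  if (document.visibilityState === 'hidden') flush();
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;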

&lt;p&gt;For the alert destination: email gets buried. If your team is in Discord or Slack, that's where it should go. Someone needs to see it within five minutes of the regression starting.&lt;/p&gt;
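
&lt;p&gt;On the receiving side, the threshold-to-webhook step is small. A sketch against a Discord webhook; the env var name is a placeholder, and Slack's incoming webhooks take a &lt;code&gt;text&lt;/code&gt; field instead of &lt;code&gt;content&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Runs in your API after each metric arrives
async function alertIfRegression(m: { metric: string; value: number; page: string }) {
  if (m.metric !== 'LCP' || m.value &amp;lt;= 2500) return;
  await fetch(process.env.DISCORD_WEBHOOK_URL!, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      content: `LCP ${Math.round(m.value)}ms on ${m.page} crossed 2.5s`,
    }),
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;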




&lt;h2&gt;
  
  
  What the alert loop actually looks like
&lt;/h2&gt;

&lt;p&gt;You deploy at 2pm. At 2:03, a Discord message arrives: LCP exceeded 2.5s on &lt;code&gt;/checkout&lt;/code&gt;, three minutes after the last deploy. You open the diff, find a new below-the-fold image component missing &lt;code&gt;loading="lazy"&lt;/code&gt; and eagerly competing with the LCP image for bandwidth, fix it, deploy the hotfix by 2:15.&lt;/p&gt;

&lt;p&gt;Fifteen minutes of degraded performance.&lt;/p&gt;

&lt;p&gt;Without the alert: the first signal is a support ticket at 4:30pm. You dig through Sentry — nothing, because no exceptions were thrown. You run Lighthouse locally — looks fine, warm cache. You eventually find the image issue around 6pm. Four hours of impact instead of fifteen minutes.&lt;/p&gt;

&lt;p&gt;The alert doesn't prevent the regression. It collapses the time between "regression exists" and "someone is fixing it."&lt;/p&gt;




&lt;h2&gt;
  
  
  Building vs. not building this
&lt;/h2&gt;

&lt;p&gt;The pipeline above isn't complicated to build. It's also not that complicated to maintain — until the edge cases around batching logic, threshold tuning, and webhook routing start to accumulate.&lt;/p&gt;

&lt;p&gt;If you'd rather skip building it, I built &lt;a href="https://rpalert.dev" rel="noopener noreferrer"&gt;RPAlert&lt;/a&gt; for exactly this reason — it handles the PerformanceObserver setup, threshold logic, and Discord/Slack routing. Install the SDK and wrap your root layout:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;rpalert-sdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;RPAlertProvider&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;rpalert-sdk/react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;RootLayout&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;children&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;children&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ReactNode&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;html&lt;/span&gt; &lt;span class="na"&gt;lang&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"en"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;body&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;RPAlertProvider&lt;/span&gt; &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"YOUR_API_KEY"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;children&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;RPAlertProvider&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;body&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;html&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;LCP, FCP, CLS, Long Tasks — all measured from that point. Alert fires when thresholds are crossed. There's a free tier if you want to verify the pipeline end to end before committing.&lt;/p&gt;

&lt;p&gt;One thing worth being clear about: RPAlert isn't a Sentry replacement. Sentry tells you why something broke. RPAlert tells you when to go look at Sentry. Different jobs, and they work well together.&lt;/p&gt;




&lt;p&gt;The goal isn't zero regressions — that's not realistic in any active codebase. The goal is making sure you find out before your users do.&lt;/p&gt;

</description>
      <category>react</category>
      <category>performance</category>
      <category>javascript</category>
      <category>webdev</category>
    </item>
    <item>
      <title>What the React Compiler Quietly Skips</title>
      <dc:creator>nosyos</dc:creator>
      <pubDate>Thu, 09 Apr 2026 14:09:00 +0000</pubDate>
      <link>https://dev.to/nosyos/memoization-in-the-react-compiler-era-what-actually-changes-3e6b</link>
      <guid>https://dev.to/nosyos/memoization-in-the-react-compiler-era-what-actually-changes-3e6b</guid>
      <description>&lt;p&gt;Originally published at &lt;a href="https://rpalert.dev/blog/posts/memoization-in-the-react-compiler-era-what-actually-changes-3e6b/" rel="noopener noreferrer"&gt;rpalert.dev/blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;React Compiler 1.0 went stable in October 2025. Half the tutorials I saw declared &lt;code&gt;useMemo&lt;/code&gt; dead. It's not — and on most existing codebases, the compiler will silently skip the components you most want it to optimize.&lt;/p&gt;




&lt;h2&gt;
  
  
  The compiler handles one thing
&lt;/h2&gt;

&lt;p&gt;Re-render performance. It's a build-time plugin that analyzes your components and inserts memoization automatically, without you writing it.&lt;/p&gt;

&lt;p&gt;The genuinely useful part: it can memoize values in code paths after an early return, which manual &lt;code&gt;useMemo&lt;/code&gt; can't do.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Component&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;isAdmin&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;isAdmin&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;processed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;expensiveTransformation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// compiler memoizes this&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Chart&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;processed&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What it doesn't touch: first render cost, Long Tasks from large list renders, expensive one-time computations on mount. None of that changes.&lt;/p&gt;




&lt;h2&gt;
  
  
  Silent bailouts
&lt;/h2&gt;

&lt;p&gt;When the compiler encounters code it can't safely analyze — mutating props, reading mutable refs during render, class instances with internal state — it skips the component entirely. No warning. No error. It just leaves that component unoptimized and moves on.&lt;/p&gt;

&lt;p&gt;This is the part that catches people off guard. You enable the compiler expecting your most expensive component to benefit, and nothing changes. The compiler bailed on it without telling you.&lt;/p&gt;

&lt;p&gt;The diagnostic is in React DevTools. Successfully compiled components show a "memo ✨" badge. Check your heaviest components first. If the badge isn't there, that's your answer.&lt;/p&gt;
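
&lt;p&gt;If you'd rather surface bailouts at build time than hunt for missing badges, the Babel plugin has a &lt;code&gt;panicThreshold&lt;/code&gt; option that turns skips into hard errors. Useful for a one-off audit, far too strict to ship. A sketch, assuming the option names in the plugin docs haven't changed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// babel.config.js: temporary audit config, not something to ship
module.exports = {
  plugins: [
    // 'all_errors' makes every bailout fail the build instead of skipping silently
    ['babel-plugin-react-compiler', { panicThreshold: 'all_errors' }],
  ],
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;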




&lt;h2&gt;
  
  
  Most existing codebases have violations
&lt;/h2&gt;

&lt;p&gt;The compiler works well on clean, pure function components with immutable data. Greenfield Next.js apps tend to fit. Existing apps often don't.&lt;/p&gt;

&lt;p&gt;Patterns that cause silent skips:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Direct mutation during render&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;BadComponent&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;items&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newItem&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// skipped&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;List&lt;/span&gt; &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Mutable ref read during render&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;AlsoProblematic&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;inputRef&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;inputRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// skipped&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Class instance methods&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;WithClassInstance&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;label&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getFormattedLabel&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="c1"&gt;// compiler can't track internal state&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;None of these are bugs. Your app won't break. But the compiler won't help them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before enabling the compiler on an existing codebase, run the ESLint plugin first.&lt;/strong&gt; &lt;code&gt;eslint-plugin-react-hooks&lt;/code&gt; with &lt;code&gt;recommended-latest&lt;/code&gt; includes compiler rules. The violation count is a rough proxy for actual benefit. High violation count means the compiler will spend most of its time bailing out.&lt;/p&gt;
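
&lt;p&gt;A minimal flat-config sketch, assuming the plugin's exported config name matches its current docs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// eslint.config.js
import reactHooks from 'eslint-plugin-react-hooks';

export default [
  // 'recommended-latest' ships the compiler rules alongside the classic hooks rules
  reactHooks.configs['recommended-latest'],
];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;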




&lt;h2&gt;
  
  
  useMemo isn't dead
&lt;/h2&gt;

&lt;p&gt;There's still one category where manual memoization is the right call: &lt;code&gt;useEffect&lt;/code&gt; dependencies that need guaranteed reference stability.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Component&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;userId&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useMemo&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;X-User-Id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userId&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}),&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;

  &lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;fetchData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The compiler's own docs call &lt;code&gt;useMemo&lt;/code&gt; and &lt;code&gt;useCallback&lt;/code&gt; valid escape hatches for this. The mental shift is from reaching for them by default to reaching for them when you have a specific reason. That's a real improvement — just not elimination.&lt;/p&gt;

&lt;p&gt;For existing code with lots of manual memoization, don't rush to remove it. The docs explicitly recommend leaving it in place for now. Removing it can change the compiler's output in ways that don't surface until something re-renders unexpectedly.&lt;/p&gt;




&lt;h2&gt;
  
  
  Roll it out on a subset first
&lt;/h2&gt;

&lt;p&gt;Next.js 15+ supports annotation mode, which only compiles files that opt in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// next.config.js&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;nextConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;experimental&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;reactCompiler&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;compilationMode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;annotation&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;use memo&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// top of each file you want compiled&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;MyComponent&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// compiler applies here&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One more piece of housekeeping: pin the exact compiler version with &lt;code&gt;--save-exact&lt;/code&gt;. The React team has said memoization behavior may change in minor versions. Auto-upgrading and then debugging unexpected re-render changes is not a good use of a morning.&lt;/p&gt;
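
&lt;p&gt;On npm that looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm install --save-dev --save-exact babel-plugin-react-compiler
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;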




&lt;h2&gt;
  
  
  What to write going forward
&lt;/h2&gt;

&lt;p&gt;New components: write them without manual memoization. Pure functions, no mutations during render, and the compiler handles it.&lt;/p&gt;

&lt;p&gt;Existing components: run ESLint first, check the DevTools badges after enabling, and don't touch working &lt;code&gt;useMemo&lt;/code&gt;/&lt;code&gt;useCallback&lt;/code&gt; calls until you have a concrete reason.&lt;/p&gt;

&lt;p&gt;For components doing genuinely heavy work — large list renders, expensive data transformations — the compiler helps with unnecessary re-renders, but the underlying cost is still there. Those still need virtualization, &lt;code&gt;useTransition&lt;/code&gt; for non-urgent updates, or Web Workers for off-thread computation.&lt;/p&gt;

&lt;p&gt;The compiler is a real improvement, particularly for deeply nested trees where unnecessary re-renders compound. It raises the floor for everyone. It just doesn't replace thinking about where the expensive work actually is.&lt;/p&gt;

&lt;p&gt;Have you enabled React Compiler yet? What's been your experience so far?&lt;/p&gt;

</description>
      <category>react</category>
      <category>javascript</category>
      <category>performance</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Why Your App Feels Fast in Staging and Slow in Production</title>
      <dc:creator>nosyos</dc:creator>
      <pubDate>Tue, 07 Apr 2026 14:05:00 +0000</pubDate>
      <link>https://dev.to/nosyos/why-your-app-feels-fast-in-staging-and-slow-in-production-27e6</link>
      <guid>https://dev.to/nosyos/why-your-app-feels-fast-in-staging-and-slow-in-production-27e6</guid>
      <description>&lt;p&gt;Originally published at &lt;a href="https://rpalert.dev/blog/posts/why-your-app-feels-fast-in-staging-and-slow-in-production-27e6" rel="noopener noreferrer"&gt;rpalert.dev/blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Lighthouse score of 95 on staging doesn't mean your users will see that. It means your machine, on your network, with your warm cache, hit that number once.&lt;/p&gt;

&lt;p&gt;The gap between staging and production isn't random bad luck. It has predictable causes that almost every team hits in the same order.&lt;/p&gt;




&lt;h2&gt;
  
  
  You're not testing on anything like a real device
&lt;/h2&gt;

&lt;p&gt;The biggest one. Most web developers work on hardware that's two to three times faster than the median device visiting their app. React component trees that reconcile in 40ms on a MacBook Pro take 180ms on a mid-range Android phone from 2022. That's not a small difference — it crosses the line between "feels fast" and "feels like something is wrong."&lt;/p&gt;

&lt;p&gt;CPU throttling in DevTools gets you closer. It's not the same. A simulated 6x slowdown doesn't capture memory pressure, thermal behavior, or how the GPU pipeline behaves on constrained hardware. &lt;strong&gt;Test on a physical mid-range Android device at least once per feature that touches render-heavy components.&lt;/strong&gt; This is the most reliable signal you have. Everything else is an approximation.&lt;/p&gt;

&lt;p&gt;BrowserStack works if you don't have a device. It's slower to iterate but it's still a real device.&lt;/p&gt;
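
&lt;p&gt;For a repeatable approximation in CI between device checks, Puppeteer can apply the same kind of CPU throttling DevTools uses. A sketch; the staging URL is a placeholder, and it remains an approximation for the same reasons DevTools is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// ci-throttle.mjs: approximate a mid-range device in CI (still not a real device)
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();

await page.emulateCPUThrottling(6); // same mechanism as the DevTools 6x preset
await page.goto('https://staging.example.com/dashboard'); // placeholder URL

// ...run timing assertions here, e.g. via page.metrics() or a trace...

await browser.close();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;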




&lt;h2&gt;
  
  
  Cold cache is a different product
&lt;/h2&gt;

&lt;p&gt;When you're iterating on staging, you've hit that URL dozens of times. The browser cache is warm. The CDN has every asset hot. Your service worker is running.&lt;/p&gt;

&lt;p&gt;Your users don't have any of that on their first visit. The first visit is what determines whether they stay or leave, and it's exactly the scenario you never test.&lt;/p&gt;

&lt;p&gt;Cold cache isn't just slower — the loading sequence is different. Resources that appear instant in your workflow take seconds the first time. Font requests that seem instant on a warm cache block text rendering while they load on a cold one. Preconnect hints that feel redundant do real work on a cold visit.&lt;/p&gt;

&lt;p&gt;Run your staging tests in an incognito window with cache disabled. It's not a substitute for real-user data but it surfaces the worst offenders immediately.&lt;/p&gt;




&lt;h2&gt;
  
  
  Staging data doesn't tell you how your components scale
&lt;/h2&gt;

&lt;p&gt;Staging databases are seeded for developer convenience: enough data to see the UI, not enough to stress it. A list component that renders 50 items smoothly in staging might be rendering 5,000 in production, and nobody noticed because the test data never revealed it.&lt;/p&gt;

&lt;p&gt;React re-renders scale with data. A component tree that's fine at 50 records creates Long Tasks at 500. You don't need to copy production data — synthetic data at realistic scale is enough. But it has to be realistic scale.&lt;/p&gt;
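
&lt;p&gt;Generating that scale is usually a few lines in whatever seeding script you already have. The field values below are placeholders; the count is the point:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Seed at production-like scale: 5,000 rows, not 50
const rows = Array.from({ length: 5000 }, (_, i) =&amp;gt; ({
  id: i,
  name: `Record ${i}`,
  createdAt: new Date(Date.now() - i * 86_400_000).toISOString(),
}));

await db.items.insertMany(rows); // placeholder: adapt to your own seed tooling
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;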




&lt;h2&gt;
  
  
  Third-party scripts you forgot about
&lt;/h2&gt;

&lt;p&gt;Analytics, chat widgets, A/B testing tools, tag managers. In staging they're often disabled, sandboxed, or absent entirely. In production they load fully, compete for main thread time, and contribute to LCP delays in ways that never show up in local testing.&lt;/p&gt;

&lt;p&gt;Run your production URL through WebPageTest with mobile throttling enabled and look at the waterfall. You'll see scripts you forgot were there. For each one, the question is simple: what breaks if this doesn't load? If the answer is "nothing visible to users," question whether it belongs.&lt;/p&gt;
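
&lt;p&gt;For scripts that survive that question but don't need to load early, deferring them until the main thread is idle keeps them off the critical path. A sketch; the widget URL is a placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Load a non-critical widget when the browser is idle instead of during startup
const loadWidget = () =&amp;gt; {
  const s = document.createElement('script');
  s.src = 'https://widget.example.com/loader.js'; // placeholder URL
  s.async = true;
  document.head.appendChild(s);
};

if ('requestIdleCallback' in window) {
  requestIdleCallback(loadWidget);
} else {
  setTimeout(loadWidget, 2000); // crude fallback for browsers without the API
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;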




&lt;h2&gt;
  
  
  Measure from real browsers, not synthetic tests
&lt;/h2&gt;

&lt;p&gt;This is where most teams underinvest.&lt;/p&gt;

&lt;p&gt;Lighthouse runs in a controlled environment on a single configuration. It's useful for catching regressions in a CI pipeline. It doesn't tell you what your actual users experience.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;PerformanceObserver&lt;/code&gt; runs in your users' browsers and gives you the real distribution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PerformanceObserver&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;entries&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getEntries&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;lcp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;entries&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;entries&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;startTime&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/metrics&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;metric&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;LCP&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;lcp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;observe&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;largest-contentful-paint&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add this to production. Not staging. The data you want is from real users on real devices on real networks. Once you have it, performance stops being a feeling and becomes something you can track across deploys.&lt;/p&gt;
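
&lt;p&gt;One refinement before shipping it: LCP can emit several candidate entries per page view, so the snippet above may post more than once, and the final value is only settled once the page is hidden. A version that keeps the latest candidate and reports a single time, using &lt;code&gt;sendBeacon&lt;/code&gt; so the request survives the tab closing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;let lcp = 0;

new PerformanceObserver((list) =&amp;gt; {
  const entries = list.getEntries();
  // each entry is a new, larger candidate; keep the latest
  lcp = entries[entries.length - 1].startTime;
}).observe({ type: 'largest-contentful-paint', buffered: true });

document.addEventListener('visibilitychange', () =&amp;gt; {
  if (document.visibilityState === 'hidden' &amp;amp;&amp;amp; lcp &amp;gt; 0) {
    // sendBeacon is fire-and-forget and survives the tab being closed
    navigator.sendBeacon(
      '/api/metrics',
      JSON.stringify({ metric: 'LCP', value: lcp, page: location.pathname }),
    );
    lcp = 0; // avoid double-reporting if the page becomes visible again
  }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;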

&lt;p&gt;The deploy window is where this matters most. A CSS change that pushes your LCP element below the fold, or a new image that wasn't optimized, can move your p75 LCP from 1.8s to 3.5s overnight. If you're only checking periodically, you'll find out from a user complaint. If you're watching the real-user data, you'll know within an hour of deploying.&lt;/p&gt;

&lt;p&gt;The PerformanceObserver approach above works if you have somewhere to send the data and something watching the thresholds. If you'd rather not build and maintain that alerting layer yourself, &lt;a href="https://rpalert.dev" rel="noopener noreferrer"&gt;RPAlert&lt;/a&gt; does exactly this for React apps — install the SDK, wrap your component, set your LCP threshold, and it posts to Slack or Discord within 60 seconds of a regression. There's a free tier if you want to try it on a single app first.&lt;/p&gt;




&lt;h2&gt;
  
  
  Three things worth doing this week
&lt;/h2&gt;

&lt;p&gt;Check your LCP element on your main pages. If it's an image, verify it's a modern format like WebP, has &lt;code&gt;fetchpriority="high"&lt;/code&gt;, and has explicit dimensions. Twenty minutes, fixes the most common issue.&lt;/p&gt;
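
&lt;p&gt;For reference, what that target state looks like for a hero image (filename and dimensions are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&amp;lt;!-- modern format, explicit dimensions (no layout shift), high fetch priority --&amp;gt;
&amp;lt;img src="/hero.webp" alt="Dashboard preview"
     width="1200" height="630" fetchpriority="high"&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;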

&lt;p&gt;Add the &lt;code&gt;PerformanceObserver&lt;/code&gt; snippet to production and log to your existing analytics. Just having the data changes what gets prioritized in your next sprint.&lt;/p&gt;

&lt;p&gt;Run your production URL through WebPageTest once with a mobile throttling preset. Look at what loads, in what order, and what you've forgotten about.&lt;/p&gt;




&lt;p&gt;The compounding problem with performance is that each change seems fine in isolation, in the environment where it was built. Production is the only place where all of it adds up at once. Measuring there isn't an advanced optimization — it's the baseline for knowing what's actually happening.&lt;/p&gt;

&lt;p&gt;What's the biggest performance gap you've seen between staging and prod?&lt;/p&gt;

</description>
      <category>react</category>
      <category>performance</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Long Tasks Are Quietly Killing Your React App's Performance</title>
      <dc:creator>nosyos</dc:creator>
      <pubDate>Thu, 02 Apr 2026 14:02:00 +0000</pubDate>
      <link>https://dev.to/nosyos/long-tasks-are-quietly-killing-your-react-apps-performance-3487</link>
      <guid>https://dev.to/nosyos/long-tasks-are-quietly-killing-your-react-apps-performance-3487</guid>
      <description>&lt;p&gt;Originally published at &lt;a href="https://rpalert.dev/blog/posts/long-tasks-are-quietly-killing-your-react-apps-performance-3487/" rel="noopener noreferrer"&gt;rpalert.dev/blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's something that doesn't get talked about enough: your React app can have great LCP and FCP scores, pass all your Lighthouse checks, and still feel sluggish to use.&lt;/p&gt;

&lt;p&gt;The culprit is usually Long Tasks.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's a Long Task?
&lt;/h2&gt;

&lt;p&gt;The browser's main thread handles everything — parsing HTML, running JavaScript, responding to user input, painting pixels. It can only do one thing at a time.&lt;/p&gt;

&lt;p&gt;A Long Task is any task that occupies the main thread for more than &lt;strong&gt;50 milliseconds&lt;/strong&gt; without a break. While a Long Task is running, the browser can't respond to anything else. Click a button during a Long Task? Nothing happens — until the task finishes.&lt;/p&gt;

&lt;p&gt;50ms might sound short, but it comes from the 100ms rule of thumb for perceived responsiveness: if no task runs longer than 50ms, the browser always has enough headroom to start handling an input within 100ms of it arriving. A task that runs well past 100ms is long enough that the interaction feels broken to the user.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why React Makes This Easy to Get Wrong
&lt;/h2&gt;

&lt;p&gt;React renders synchronously by default (outside of concurrent features). When you trigger a state update, React processes the entire component tree update in one go. If that update is expensive — lots of components, heavy computations, large lists — it becomes a Long Task.&lt;/p&gt;

&lt;p&gt;The tricky part: this doesn't show up in unit tests. It doesn't throw an error. It doesn't affect your Lighthouse score in a way that's obvious. It just makes your app feel slow.&lt;/p&gt;

&lt;p&gt;Some common patterns that create Long Tasks in React apps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rendering large lists without virtualization&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// If `items` has 500+ entries, this creates a Long Task on every render&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;ItemList&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;items&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ExpensiveItem&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;item&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expensive computations in render&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Dashboard&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;rawData&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// This runs on every render, blocking the main thread&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;processed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;rawData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reduce&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;acc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;row&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;heavyTransformation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;acc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;row&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;{});&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Chart&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;processed&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;State updates that cascade through large component trees&lt;/strong&gt;&lt;br&gt;
A single &lt;code&gt;setState&lt;/code&gt; at the top of a deeply nested tree can trigger hundreds of re-renders in one synchronous block.&lt;/p&gt;
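
&lt;p&gt;The usual way to contain that cascade is a memo boundary. A sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;// When the parent re-renders with the same `items` reference,
// React.memo skips this entire subtree
const HeavySubtree = React.memo(function HeavySubtree({ items }) {
  return &amp;lt;ItemList items={items} /&amp;gt;;
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;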


&lt;h2&gt;
  
  
  How to Detect Long Tasks
&lt;/h2&gt;

&lt;p&gt;The browser exposes this through &lt;code&gt;PerformanceObserver&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;observer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PerformanceObserver&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getEntries&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Long Task detected:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="c1"&gt;// how long it ran (ms)&lt;/span&gt;
      &lt;span class="na"&gt;startTime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;startTime&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;     &lt;span class="c1"&gt;// when it started&lt;/span&gt;
      &lt;span class="na"&gt;attribution&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;attribution&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// which script caused it (limited support)&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;observer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;observe&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;longtask&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;buffered&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run this in your production app for a day and look at the output. If you're seeing regular Long Tasks over 100ms — especially clustering around user interactions or page loads — you have a real problem.&lt;/p&gt;

&lt;p&gt;One thing worth knowing: &lt;code&gt;entry.attribution&lt;/code&gt; gives you some information about what caused the task, but browser support varies and the data is often vague. It'll tell you it was a script, but not always which function.&lt;/p&gt;

&lt;p&gt;For more precise attribution, the Chrome DevTools Performance panel is your best friend. Record a session, look for the red triangles at the top of the flame chart — those are Long Tasks. Click into them and you'll see exactly which functions ran.&lt;/p&gt;




&lt;h2&gt;
  
  
  Fixing Long Tasks
&lt;/h2&gt;

&lt;p&gt;There's no single fix. The approach depends on what's causing the task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For expensive renders: useMemo&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Dashboard&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;rawData&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Only recalculates when rawData changes&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;processed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useMemo&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;rawData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reduce&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;acc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;row&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;heavyTransformation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;acc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;row&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;{});&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;rawData&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Chart&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;processed&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;useMemo&lt;/code&gt; doesn't prevent Long Tasks on the first render, but it prevents them from happening again unnecessarily.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For large lists: virtualization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Libraries like &lt;code&gt;react-window&lt;/code&gt; or &lt;code&gt;@tanstack/react-virtual&lt;/code&gt; only render the rows visible in the viewport. If you have more than a couple hundred items in a list, this is almost always worth doing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;FixedSizeList&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react-window&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;ItemList&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;items&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;FixedSizeList&lt;/span&gt; &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;600&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;itemCount&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;itemSize&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"100%"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ExpensiveItem&lt;/span&gt; &lt;span class="na"&gt;item&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;FixedSizeList&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For non-urgent updates: useTransition (React 18+)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;SearchPage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setQuery&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setResults&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;([]);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;isPending&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;startTransition&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useTransition&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;handleSearch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;setQuery&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// urgent — update input immediately&lt;/span&gt;

    &lt;span class="nf"&gt;startTransition&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;setResults&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;searchItems&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt; &lt;span class="c1"&gt;// non-urgent — can be interrupted&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;onChange&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;handleSearch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;isPending&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Spinner&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ResultsList&lt;/span&gt; &lt;span class="na"&gt;results&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;useTransition&lt;/code&gt; tells React that the update inside &lt;code&gt;startTransition&lt;/code&gt; is low priority. React can interrupt it if something more urgent comes in (like another keystroke). One caveat: the callback you pass runs synchronously; it's the re-render triggered by &lt;code&gt;setResults&lt;/code&gt; that becomes interruptible, so this helps most when the cost is in rendering the results rather than computing them. That makes it particularly effective for search-as-you-type patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For truly heavy work: move it off the main thread&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you're doing something computationally expensive that can't be memoized — parsing a large dataset, running a sorting algorithm on thousands of items — consider a Web Worker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// worker.ts&lt;/span&gt;
&lt;span class="nb"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onmessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;heavyComputation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nb"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;postMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// component&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;worker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./worker.ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="nx"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onmessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;setResult&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// runs on main thread, but the computation didn't&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="nx"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;postMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;largeDataset&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Web Workers don't have access to the DOM, so this only works for pure computation. But when it applies, it's the cleanest solution — zero Long Tasks, because the work literally doesn't happen on the main thread.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Connection to INP
&lt;/h2&gt;

&lt;p&gt;If you've looked at your Core Web Vitals recently, you might have noticed INP (Interaction to Next Paint) — the metric that replaced FID in 2024. It measures how long it takes the page to respond to user interactions.&lt;/p&gt;

&lt;p&gt;Long Tasks are the primary cause of bad INP. When a user clicks and there's a Long Task in progress, the browser queues the input event and processes it after the task finishes. If that task runs for 200ms, your INP for that interaction is 200ms+ — in the "needs improvement" range.&lt;/p&gt;

&lt;p&gt;Fixing Long Tasks improves INP directly.&lt;/p&gt;
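
&lt;p&gt;You can also watch the slow interactions themselves via the Event Timing API; &lt;code&gt;durationThreshold&lt;/code&gt; filters to interactions at or past the "needs improvement" line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Log interactions whose input-to-next-paint duration is 200ms or worse
new PerformanceObserver((list) =&amp;gt; {
  for (const entry of list.getEntries()) {
    console.log('Slow interaction:', entry.name, Math.round(entry.duration), 'ms');
  }
}).observe({ type: 'event', durationThreshold: 200, buffered: true });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;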




&lt;h2&gt;
  
  
  Monitoring This in Production
&lt;/h2&gt;

&lt;p&gt;DevTools is great for debugging a specific session, but it won't tell you how often Long Tasks are happening for real users across different devices.&lt;/p&gt;

&lt;p&gt;The PerformanceObserver code above works in production. A few things worth tracking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Count of Long Tasks per page load&lt;/strong&gt; — is this happening on every visit or just edge cases?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration&lt;/strong&gt; — are they 60ms or 400ms? The severity matters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;When they happen&lt;/strong&gt; — during initial load, or triggered by user interactions?&lt;/li&gt;
&lt;/ul&gt;
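
&lt;p&gt;A sketch that tracks all three and posts one summary per page view when the page is hidden; the endpoint is a placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;let count = 0;
let totalMs = 0;
let worstMs = 0;

new PerformanceObserver((list) =&amp;gt; {
  for (const entry of list.getEntries()) {
    count += 1;
    totalMs += entry.duration;
    worstMs = Math.max(worstMs, entry.duration);
  }
}).observe({ type: 'longtask', buffered: true });

document.addEventListener('visibilitychange', () =&amp;gt; {
  if (document.visibilityState === 'hidden' &amp;amp;&amp;amp; count &amp;gt; 0) {
    navigator.sendBeacon('/api/metrics', JSON.stringify({
      metric: 'longtasks', count, totalMs, worstMs, page: location.pathname,
    }));
    count = 0; totalMs = 0; worstMs = 0; // reset in case the page is shown again
  }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;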

&lt;p&gt;If Long Tasks spike after a deploy, that's a signal something in the new code is blocking the main thread. Having an alert set up for unusual Long Task counts is worth it — it's the kind of regression that's easy to introduce and hard to notice until users start complaining.&lt;/p&gt;

&lt;p&gt;This is actually what pushed me to build &lt;a href="https://rpalert.dev" rel="noopener noreferrer"&gt;RPAlert&lt;/a&gt; — I kept finding out about Long Task spikes and LCP regressions from users instead of catching them myself. It handles the PerformanceObserver setup and sends a Discord alert when thresholds are crossed, so you don't have to build the plumbing yourself.&lt;/p&gt;




&lt;p&gt;That's the gist of it. Long Tasks aren't glamorous, but they're one of the most direct causes of "this app feels slow" complaints — and they're largely invisible without instrumentation. Worth adding to your monitoring stack.&lt;/p&gt;

&lt;p&gt;Have you ever caught a Long Task spike in production? Would love to hear how you found it.&lt;/p&gt;

</description>
      <category>react</category>
      <category>performance</category>
      <category>javascript</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
