<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hamza Mairaj</title>
    <description>The latest articles on DEV Community by Hamza Mairaj (@hamzamairaj).</description>
    <link>https://dev.to/hamzamairaj</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1240493%2F0ecc6a4c-91aa-429a-bf41-153d26fdc0e2.jpg</url>
      <title>DEV Community: Hamza Mairaj</title>
      <link>https://dev.to/hamzamairaj</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hamzamairaj"/>
    <language>en</language>
    <item>
      <title>Your PageSpeed Score Is Someone's Actual Experience</title>
      <dc:creator>Hamza Mairaj</dc:creator>
      <pubDate>Sun, 26 Apr 2026 16:01:31 +0000</pubDate>
      <link>https://dev.to/hamzamairaj/your-pagespeed-score-is-someones-actual-experience-184p</link>
      <guid>https://dev.to/hamzamairaj/your-pagespeed-score-is-someones-actual-experience-184p</guid>
      <description>&lt;p&gt;You've just shipped a site. It loads instantly on your machine, looks great, passes your internal checks. Then you run it through PageSpeed Insights and the mobile score comes back at 41.&lt;/p&gt;

&lt;p&gt;The instinct is to dismiss it. "Google throttles the test." "It's not a realistic environment." "My actual users aren't seeing this."&lt;/p&gt;

&lt;p&gt;Some of them are.&lt;/p&gt;




&lt;h2&gt;What the Test Is Actually Simulating&lt;/h2&gt;

&lt;p&gt;PageSpeed Insights uses Lighthouse under the hood, and Lighthouse doesn't test on your infrastructure or your device. It simulates a specific user profile:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Device&lt;/strong&gt;: Moto G Power — a mid-range Android phone, not a flagship. This represents the median globally shipped smartphone, not what developers or early adopters carry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network&lt;/strong&gt;: 1.6 Mbps download, 150ms round-trip time. That's slow 4G — the kind of connection you get in a moving vehicle, in a rural area, or in a dense urban environment with tower congestion.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CPU&lt;/strong&gt;: 4x slowdown multiplier applied on top of the simulated device. This accounts for thermal throttling — when a phone has been running for a while, it slows its CPU to manage heat. It also accounts for background processes competing for cycles on a device with 3GB of RAM instead of 16GB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That 4x CPU multiplier is the one most developers don't know about. It's why JavaScript-heavy sites that feel fast on a MacBook fall apart in the test. The same 200KB of JS that your machine parses in 40ms takes 160ms on a throttled mid-range phone. Multiply that across all your scripts and you've already spent your entire performance budget before anything renders.&lt;/p&gt;
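
&lt;p&gt;A back-of-envelope model makes those numbers concrete. This is a sketch, not Lighthouse's actual simulation engine: the baseline parse speed and the single-connection transfer are simplifying assumptions of mine, while the 1.6 Mbps, 150ms, and 4x figures are the profile described above.&lt;/p&gt;

```javascript
// Rough model of Lighthouse's simulated throttling. The network and CPU
// figures come from the simulated profile; the baseline parse speed and
// single-connection transfer are illustrative assumptions.
const NETWORK = { downloadMbps: 1.6, rttMs: 150 };
const CPU_SLOWDOWN = 4;

// Time to fetch one asset: one round trip plus transfer at 1.6 Mbps.
function transferMs(sizeKB) {
  const bits = sizeKB * 1024 * 8;
  return NETWORK.rttMs + (bits / (NETWORK.downloadMbps * 1e6)) * 1000;
}

// Time to parse/execute JS, scaled by the 4x CPU multiplier.
function parseMs(sizeKB, baselineMsPer200KB = 40) {
  return (sizeKB / 200) * baselineMsPer200KB * CPU_SLOWDOWN;
}

console.log(Math.round(transferMs(200))); // 1174: over a second just to fetch 200 KB
console.log(parseMs(200));                // 160: the 40 ms parse, times 4
```

&lt;p&gt;Roughly 1.3 seconds for a single 200KB bundle, before styles, images, or any of your other scripts. That's how a page that feels instant on a laptop lands at 41 on mobile.&lt;/p&gt;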

&lt;p&gt;This isn't Google being punitive. It's Google choosing to represent the bottom half of the global mobile market instead of optimizing for the top.&lt;/p&gt;




&lt;h2&gt;Lab Score vs. Field Score: They're Different Things&lt;/h2&gt;

&lt;p&gt;PageSpeed Insights shows two kinds of data, and conflating them is one of the most common sources of confusion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The lab score&lt;/strong&gt; (the big number at the top) is the Lighthouse simulation. It's synthetic — the same test run in the same controlled conditions every time. Useful for diagnosing issues, but it doesn't reflect your actual users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The field data&lt;/strong&gt; comes from the Chrome User Experience Report (CrUX) — real sessions from real Chrome users who visited your site, aggregated over a 28-day rolling window. This is what the Core Web Vitals assessment at the top of PSI is based on.&lt;/p&gt;

&lt;p&gt;A critical detail: CrUX data is reported at the &lt;strong&gt;75th percentile&lt;/strong&gt;. Your score isn't the average experience — it's the threshold at which 75% of your users had that experience or better. In other words, you're being graded on your worst quartile, not your typical visitor.&lt;/p&gt;
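
&lt;p&gt;The mechanics of a p75 metric are easy to sketch. The nearest-rank method below is a simplification (CrUX's real aggregation differs), but the intuition holds; the thresholds are the published Core Web Vitals ones for LCP:&lt;/p&gt;

```javascript
// Sort the sessions, take the value at the 75% rank (nearest-rank
// method; a simplification of how CrUX actually aggregates).
function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[idx];
}

// Core Web Vitals thresholds for LCP: good at 2.5 s or under, poor past 4 s.
function gradeLCP(ms) {
  if (ms > 4000) return "poor";
  if (ms > 2500) return "needs improvement";
  return "good";
}

// Eight sessions: most are fast, but the slowest quartile sets the grade.
const lcpSamples = [1200, 1400, 1500, 1700, 1900, 3100, 3600, 5200];
console.log(p75(lcpSamples));           // 3100
console.log(gradeLCP(p75(lcpSamples))); // "needs improvement"
```

&lt;p&gt;Six of those eight sessions were comfortably green, and the site still fails the assessment — because the grade tracks the worst quartile, not the average.&lt;/p&gt;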

&lt;p&gt;This is why a site can have a lab score of 45 and green field data: if most of your real users are on fast devices in fast-connection geographies, CrUX will reflect that. The simulation still reveals genuine technical problems — they're just not yet affecting your user base.&lt;/p&gt;

&lt;p&gt;The inverse is worse: green lab score, red field data. That means something is degrading in production that the simulation doesn't catch — third-party scripts, A/B testing tools, ad networks loading after the simulated test window closes.&lt;/p&gt;

&lt;p&gt;Both scores matter. Lab tells you what to fix. Field tells you who's already affected.&lt;/p&gt;




&lt;h2&gt;What the Metrics Are Actually Measuring&lt;/h2&gt;

&lt;p&gt;Core Web Vitals aren't arbitrary benchmarks. Each one maps to a specific moment in a user's experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LCP (Largest Contentful Paint)&lt;/strong&gt;&lt;br&gt;
When does the main content of the page appear? Not when the spinner stops, not when the page is technically "done" — when the largest visible element (usually a hero image or above-the-fold text block) finishes rendering.&lt;/p&gt;

&lt;p&gt;LCP fails silently in ways that feel fast on a good connection: a hero image that isn't preloaded, a slow TTFB from an uncached server response, render-blocking CSS that delays paint. The user is staring at a partially loaded page while your analytics log a "fast" load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CLS (Cumulative Layout Shift)&lt;/strong&gt;&lt;br&gt;
How much does the page jump around while loading? CLS measures the total amount of unexpected layout movement — elements shifting position as images load in without declared dimensions, as web fonts swap in, as late-loading ad scripts inject content above existing text.&lt;/p&gt;

&lt;p&gt;This is the one that causes real user errors: a person goes to tap a button, the page shifts at the last millisecond, and they hit something else. That's not a score problem. That's a UX failure.&lt;/p&gt;
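
&lt;p&gt;The scoring itself is mechanical. Per the Layout Instability spec, each shift is scored as impact fraction times distance fraction; the viewport and element sizes below are made up for illustration:&lt;/p&gt;

```javascript
// Score for a single layout shift, per the Layout Instability spec:
// impact fraction (share of the viewport touched by the shifted element,
// union of its before and after positions) times distance fraction (how
// far it moved, relative to the viewport's larger dimension). CLS sums
// these scores over the worst burst of shifts.
function shiftScore(viewport, impactRegionPx, movedPx) {
  const viewportArea = viewport.width * viewport.height;
  const impactFraction = impactRegionPx / viewportArea;
  const distanceFraction = movedPx / Math.max(viewport.width, viewport.height);
  return impactFraction * distanceFraction;
}

// A 360x640 phone viewport; a late ad pushes a 360x200 block down 200px.
// Impact region: union of before/after positions = 360 x 400 px.
const score = shiftScore({ width: 360, height: 640 }, 360 * 400, 200);
console.log(score.toFixed(3)); // "0.195": this one shift alone nearly
                               // doubles the 0.1 "good" budget
```

&lt;p&gt;One injected ad slot, and the entire page's CLS budget is gone.&lt;/p&gt;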

&lt;p&gt;&lt;strong&gt;INP (Interaction to Next Paint)&lt;/strong&gt;&lt;br&gt;
INP replaced FID (First Input Delay) in March 2024, and it measures something more complete. FID captured only the delay between a user's first interaction and the moment the browser started processing it. INP captures the full event lifecycle: input delay + processing time + the time until the browser actually paints the result.&lt;/p&gt;

&lt;p&gt;A page can load quickly and still fail INP badly. If your main thread is blocked by long JavaScript tasks, every tap and click will feel sluggish — the browser can't respond until the current task finishes. This is what "JavaScript bloat" actually means in measurable terms.&lt;/p&gt;
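
&lt;p&gt;The standard fix is to break long tasks into chunks that yield back to the main thread between batches. A minimal sketch, using a setTimeout-based yield — the helper names here are mine, not a library API (newer browsers also offer scheduler.yield() for the same purpose):&lt;/p&gt;

```javascript
// Split one long task into chunks, yielding between them so the browser
// can handle input and paint. setTimeout(0) is the portable yield.
const yieldToMain = () => new Promise((resolve) => setTimeout(resolve, 0));

async function processInChunks(items, handle, chunkSize = 50) {
  let remaining = items;
  while (remaining.length > 0) {
    remaining.slice(0, chunkSize).forEach(handle);
    remaining = remaining.slice(chunkSize);
    await yieldToMain(); // pending input events get a turn here
  }
}

// Usage: 120 items processed in short bursts instead of one long task.
const results = [];
processInChunks([...Array(120).keys()], (n) => results.push(n * 2)).then(() =>
  console.log(results.length) // 120
);
```

&lt;p&gt;Each burst stays short, so a tap that arrives mid-way gets handled at the next yield instead of waiting for the whole loop to finish.&lt;/p&gt;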




&lt;h2&gt;Why This Matters Beyond the Score&lt;/h2&gt;

&lt;p&gt;Core Web Vitals are a confirmed ranking signal in Google Search. The weight is real but moderate — you won't outrank a highly authoritative competitor just by going green. But at equal authority, technical performance is a differentiator.&lt;/p&gt;

&lt;p&gt;The more direct impact is on behavior. Google and Deloitte found that improving load time by 0.1 seconds increased conversions by up to 8% on retail sites. Bounce rate on mobile climbs sharply past the 3-second mark. These aren't PageSpeed numbers — they're revenue and retention numbers.&lt;/p&gt;

&lt;p&gt;When a developer dismisses a low mobile score, they're often making a decision on behalf of users they never actually see: the person in a different country on a prepaid data plan, the user on a 3-year-old phone, the potential customer who left before your page finished loading and never came back.&lt;/p&gt;




&lt;h2&gt;What to Do With This&lt;/h2&gt;

&lt;p&gt;Don't dismiss a low mobile score as "the test is unrealistic." Instead:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Check your field data first.&lt;/strong&gt; If CrUX is red, real users are already affected. If it's green or absent (not enough data), you have time but not infinite time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fix LCP first.&lt;/strong&gt; It has the biggest perceived impact. Preload your hero image, reduce TTFB, eliminate render-blocking resources above the fold.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit your JavaScript for INP.&lt;/strong&gt; Long tasks on the main thread are the primary cause. Look for third-party scripts, large bundles, and synchronous operations running at page load.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Declare image dimensions and reserve space for dynamic content&lt;/strong&gt; to address CLS. It's the most mechanical fix of the three.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The score is a proxy. The real thing it's measuring is whether someone on a mid-range phone in a spotty 4G area can use your site without frustration. That person exists. They're in your analytics right now.&lt;/p&gt;




&lt;p&gt;If you're working on a WordPress site and want LCP, CLS, and INP handled automatically — image optimization, script management, caching — &lt;a href="https://berqwp.com" rel="noopener noreferrer"&gt;BerqWP&lt;/a&gt; does this without requiring manual configuration for each metric.&lt;/p&gt;

</description>
      <category>webperf</category>
      <category>javascript</category>
      <category>corewebvitals</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
