Core Web Vitals in 2026: The Practical Fixes for INP, LCP, and CLS That Actually Work
Your Lighthouse score says 95. Your users say your site feels slow. Here's why — and exactly how to fix each metric with code you can ship today.
Why Your Lighthouse Score Is Lying to You
There's a hard truth most teams learn the painful way: the number in your terminal is not the number Google cares about.
Lighthouse runs in a simulated environment — a powerful desktop with a fast connection. Google's ranking signal comes from CrUX (Chrome User Experience Report), which measures real users on real devices on real networks. It looks at the 75th percentile over a rolling 28-day window. Your Lighthouse run is a lab test. CrUX is the field result.
I've seen teams spend weeks polishing a 98 Lighthouse score while their actual INP — the metric that measures how responsive your site feels — sat firmly in the "poor" bucket for every mobile user. The gap between lab and field can be enormous.
In 2026, the three metrics that matter are:
- INP (Interaction to Next Paint) — does the site respond quickly when tapped?
- LCP (Largest Contentful Paint) — does the main content appear fast?
- CLS (Cumulative Layout Shift) — does the page jump around while loading?
Let's fix each one with real, ship-ready code.
Fix 1: INP — The Architecture Problem Nobody Diagnoses
INP replaced FID in March 2024, and it's the metric most teams are failing. Roughly 43% of sites still don't meet the 200ms "good" threshold. Unlike FID, which only measured input delay, INP measures the full round trip: from click to visual paint. Every interaction on the page, not just the first one.
Here's the classic scenario: a user taps a filter on a product listing page. That tap triggers a setState that runs through a context provider, re-renders 40+ components, recomputes a sort, and then paints the selected state. On a mid-range Android over 4G, that round trip can hit 480ms. The user feels the lag. Chrome reports it. Your CrUX score tanks.
The Fix: Break the Task, Then Yield
The single highest-impact change is introducing yield points with scheduler.yield(). Split the interaction into "visual feedback first, everything else second":
async function handleFilterTap(filter) {
  // Paint the selected state immediately
  setSelectedFilter(filter);
  await yieldToMain();

  // Then do the expensive work
  const filtered = applyFilters(products, filter);
  await yieldToMain();
  setResults(filtered);
}

function yieldToMain() {
  if ('scheduler' in window && 'yield' in window.scheduler) {
    return window.scheduler.yield();
  }
  return new Promise(resolve => setTimeout(resolve, 0));
}
The perceived difference is dramatic. The filter chip highlights instantly. The list updates a tick later. INP at the 75th percentile typically drops by 60-65% with this pattern alone — no changes to the filtering logic required.
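The same yield-first idea generalizes to any long loop, not just a single handler. Here's a minimal sketch of batch processing with a yield between chunks — the `chunkArray` and `processInChunks` helpers and the batch size of 100 are illustrative choices, not from the article, and `yieldToMain` is repeated so the sketch is self-contained (this variant checks a global `scheduler` so it also works outside `window` contexts):

```javascript
// Split a big array into fixed-size batches so each batch's work
// fits comfortably under the 50ms long-task budget.
function chunkArray(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Feature-detect scheduler.yield(), falling back to a macrotask.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && typeof scheduler.yield === 'function') {
    return scheduler.yield();
  }
  return new Promise(resolve => setTimeout(resolve, 0));
}

// Process batches with a yield between each, so taps and scrolls
// get handled between chunks instead of waiting for the whole loop.
async function processInChunks(items, processItem, batchSize = 100) {
  const results = [];
  for (const chunk of chunkArray(items, batchSize)) {
    for (const item of chunk) {
      results.push(processItem(item));
    }
    await yieldToMain(); // let the browser paint and handle input
  }
  return results;
}
```

An expensive `applyFilters` over thousands of products could run through `processInChunks(products, matchesFilter)` instead of a single blocking loop.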
Layer 2: Defer Heavy Renders with useDeferredValue
In React, mark the filtered results as a deferred value so the input stays responsive while the heavy list re-renders in the background:
import { useDeferredValue, useMemo } from 'react';

function ProductList({ products, filter }) {
  // Defer the value that changes on interaction — the filter — so the
  // urgent render keeps showing the old list while React prepares the
  // new one in the background.
  const deferredFilter = useDeferredValue(filter);
  const filtered = useMemo(
    () => applyFilters(products, deferredFilter),
    [products, deferredFilter]
  );
  return (
    <ul>
      {filtered.map(product => (
        <ProductCard key={product.id} product={product} />
      ))}
    </ul>
  );
}
Layer 3: Offload Hidden Long Tasks
Open Chrome DevTools → Performance panel → record an interaction → look for any task over 50ms. The most common culprit? Third-party analytics scripts running synchronous operations on every click. Move them to a web worker:
// main.js
const analyticsWorker = new Worker('/analytics-worker.js');

function trackEvent(event) {
  analyticsWorker.postMessage({
    type: 'track',
    payload: { event: event.name, data: event.data }
  });
}

// analytics-worker.js
self.onmessage = function(e) {
  if (e.data.type === 'track') {
    // Heavy serialization and network calls happen here,
    // off the main thread
    const serialized = JSON.stringify(e.data.payload);
    fetch('/api/analytics', {
      method: 'POST',
      body: serialized
    });
  }
};
The pattern: INP rewards event handlers that do almost nothing synchronously. Anything expensive — filtering, sorting, logging, computation — gets yielded, deferred, or offloaded.
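One way to push this further is to make the handler's synchronous cost a single array push: buffer events and post them to the worker in batches. A sketch under assumptions — the `AnalyticsQueue` class, the `trackBatch` message type, and the one-second flush delay are all my naming, not an established API:

```javascript
// Buffer tracking calls so each click handler does almost nothing
// synchronously. The batch is posted later, off the interaction's
// critical path.
class AnalyticsQueue {
  constructor(post, flushDelay = 1000) {
    this.post = post;            // e.g. msg => analyticsWorker.postMessage(msg)
    this.flushDelay = flushDelay;
    this.buffer = [];
    this.timer = null;
  }

  track(event) {
    this.buffer.push(event);     // O(1), cheap enough for an event handler
    if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.flushDelay);
    }
  }

  flush() {
    if (this.buffer.length > 0) {
      this.post({ type: 'trackBatch', payload: this.buffer });
      this.buffer = [];
    }
    clearTimeout(this.timer);
    this.timer = null;
  }
}
```

Usage would look like `const queue = new AnalyticsQueue(msg => analyticsWorker.postMessage(msg));` with `queue.track(...)` inside handlers, plus a `flush()` call on `visibilitychange` so nothing is lost when the tab is backgrounded.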
Fix 2: LCP — It's Almost Always the Hero Image
Every team has the same story: "we optimized the images" and LCP was still 3.2 seconds. Because "the images" isn't the fix — the LCP element is the fix, and it's almost always one specific image above the fold.
Step 1: Identify the LCP Element
Don't guess. Measure it in production with the web-vitals library:
import { onLCP } from 'web-vitals';

onLCP(metric => {
  console.log('LCP element:', metric.entries.at(-1)?.element);
  console.log('LCP value:', metric.value);
  // Send to your analytics pipeline
  sendToAnalytics(metric);
});
Step 2: Preload the LCP Resource
One line in your <head> can cut LCP by 500-700ms:
<link rel="preload" as="image" href="/hero-banner.avif" type="image/avif">
Don't preload everything — just the LCP resource. Preload abuse creates its own problems.
Step 3: Set Fetch Priority
Browsers are conservative about image priority by default. Tell them this one matters:
<img
  src="/hero-banner.avif"
  fetchpriority="high"
  alt="Hero banner"
  width="1200"
  height="600"
/>
Step 4: Modern Formats + Responsive Sizes
Use AVIF with WebP fallback. The file size difference between JPEG and AVIF at equivalent quality is routinely 40-60%. And serve the right size — a responsive srcset is not optional:
<picture>
  <source
    type="image/avif"
    srcset="/hero-400.avif 400w, /hero-800.avif 800w, /hero-1200.avif 1200w"
    sizes="100vw"
  />
  <source
    type="image/webp"
    srcset="/hero-400.webp 400w, /hero-800.webp 800w, /hero-1200.webp 1200w"
    sizes="100vw"
  />
  <img
    src="/hero-1200.jpg"
    fetchpriority="high"
    alt="Hero banner"
    width="1200"
    height="600"
  />
</picture>
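Writing those srcset strings by hand invites typos. If your build step generates the markup, a tiny helper keeps them consistent — the `/hero-{width}.{ext}` naming scheme is an assumption about your asset pipeline, not a requirement:

```javascript
// Build a srcset string like "/hero-400.avif 400w, /hero-800.avif 800w"
// from a base name, a list of widths, and a format extension.
function buildSrcset(baseName, widths, ext) {
  return widths
    .map(w => `${baseName}-${w}.${ext} ${w}w`)
    .join(', ');
}
```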
Step 5: Never Lazy-Load the LCP Image
This mistake is still on production sites in 2026. loading="lazy" on the hero image guarantees an LCP regression: the browser delays the request until it can confirm the image is near the viewport, which means waiting for layout — a built-in delay on the one image that should start loading immediately.
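This is easy to catch in an audit. The check itself is pure logic — a sketch where `findLazyAboveFold` is a hypothetical helper; in the browser you'd build its input from `document.querySelectorAll('img')`, using each element's `loading` attribute and `getBoundingClientRect().top`, with `window.innerHeight` as the viewport height:

```javascript
// Flag images that are lazy-loaded but likely inside the initial
// viewport. Each entry describes one <img>: its loading attribute
// and its top offset relative to the viewport at load time.
function findLazyAboveFold(images, viewportHeight) {
  return images.filter(
    img => img.loading === 'lazy' && img.top < viewportHeight
  );
}
```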
When LCP Is a Font, Not an Image
If the LCP element is text that depends on a custom font, use font-display: swap and preload the font file:
@font-face {
  font-family: 'BrandDisplay';
  src: url('/fonts/brand-display.woff2') format('woff2');
  font-display: swap;
  font-weight: 700;
}

<link rel="preload" href="/fonts/brand-display.woff2" as="font" type="font/woff2" crossorigin>
Accept the brief flash of fallback type. Users barely notice. Your 75th-percentile LCP will.
Fix 3: CLS — Three Rules That Cover Everything
CLS is the cheapest metric to fix and the one that makes your site look most amateur when you don't. When buttons jump under thumbs and ads push content down, people lose trust — even if they can't articulate why.
Rule 1: Explicit Dimensions on Every Image, Video, and Iframe
Even if you're styling with CSS, HTML width and height attributes let the browser reserve space before the asset loads:
<!-- Good: browser reserves space immediately -->
<img src="/product.jpg" width="800" height="600" alt="Product photo" />
<!-- Bad: no space reserved, page jumps when image loads -->
<img src="/product.jpg" alt="Product photo" />
If you prefer CSS, use aspect-ratio — but the HTML attributes are simpler and work everywhere.
Rule 2: Reserve Space for Late-Injected Content
Ad slots, cookie banners, personalization widgets — anything that arrives after first paint needs height reserved upfront:
.ad-slot {
  min-height: 250px; /* Reserve space even before the ad loads */
}

.cookie-banner-container {
  min-height: 64px;
}
If the slot stays empty, leave it empty. The shift is worse than the blank space.
Rule 3: Match Font Metrics with size-adjust
font-display: swap prevents invisible text but can cause a layout shift if your fallback font has different metrics. The size-adjust descriptor on @font-face eliminates that shift:
@font-face {
  font-family: 'BrandDisplay';
  src: url('/fonts/brand-display.woff2') format('woff2');
  font-display: swap;
  font-weight: 700;
}

/* Metric-adjusted fallback: note the distinct family name, so it
   doesn't shadow the web font rule above */
@font-face {
  font-family: 'BrandDisplay Fallback';
  src: local('Arial');
  font-weight: 700;
  size-adjust: 105.2%; /* Tune this to match the web font metrics */
  ascent-override: 98%;
  descent-override: 22%;
  line-gap-override: 0%;
}

body {
  font-family: 'BrandDisplay', 'BrandDisplay Fallback', sans-serif;
}
This technique is massively underused. Most teams haven't heard of it, and it can bring CLS from 0.15 down to 0.02.
The Monitoring Setup That Makes This Stick
You can't fix what you can't see, and DevTools on your laptop is not "seeing." Here's the monitoring stack that actually helps:
1. Ship Real-User Metrics
import { onINP, onLCP, onCLS } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating,
    delta: metric.delta,
    id: metric.id,
    navigationType: metric.navigationType,
    url: window.location.href,
    // Slice by route, device, country
    route: window.location.pathname,
    deviceType: /Mobi|Android/i.test(navigator.userAgent) ? 'mobile' : 'desktop',
  });

  if (navigator.sendBeacon) {
    navigator.sendBeacon('/api/vitals', body);
  } else {
    fetch('/api/vitals', { body, method: 'POST', keepalive: true });
  }
}

onINP(sendToAnalytics);
onLCP(sendToAnalytics);
onCLS(sendToAnalytics);
2. Slice the Data
A single aggregate INP number hides everything. The same site can have great INP for desktop users in Germany and awful INP for mobile users in Brazil. The aggregate looks "mid." Slice by route, device class, and country to find the real fires.
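Once the beacons land, the slicing itself is a small aggregation job. A sketch of computing the 75th percentile per segment, using the nearest-rank method — the helper names and the grouping key are illustrative; the `value` and `deviceType` fields match the beacon payload shown earlier:

```javascript
// 75th percentile of a list of metric values (nearest-rank method).
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

// Group beacons by a segment key and compute p75 per segment.
function p75BySegment(beacons, keyFn) {
  const groups = new Map();
  for (const beacon of beacons) {
    const key = keyFn(beacon);
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(beacon.value);
  }
  const out = {};
  for (const [key, values] of groups) {
    out[key] = p75(values);
  }
  return out;
}
```

Calling `p75BySegment(beacons, b => `${b.deviceType}:${b.route}`)` is usually enough to surface the one route-device combination dragging the whole site into "poor."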
3. Set Alerts at 80% of Thresholds
Don't wait until you're failing. Set alerts at:
- INP at 160ms (80% of 200ms threshold)
- LCP at 2.0s (80% of 2.5s threshold)
- CLS at 0.08 (80% of 0.1 threshold)
A deploy that bumps INP from 150ms to 190ms still reports "good," but three of those in a month and you're failing.
FAQ: Common Gotchas
Q: I fixed INP on my local machine but CrUX still shows poor. Why?
A: CrUX is a 28-day rolling window. Your fix won't fully appear in the data for up to 4 weeks. Check your own real-user monitoring pipeline for immediate validation.
Q: scheduler.yield() isn't available in my target browsers.
A: The setTimeout(resolve, 0) fallback in the yieldToMain() function above handles this. It's slightly less optimal than native scheduler.yield() but still delivers the core benefit of breaking up long tasks.
Q: Do I need to preload every image on the page?
A: No — only the LCP element. Preloading too many resources creates contention and can actually hurt performance. One preload for one LCP resource.
Q: My CLS is caused by a third-party chat widget. How do I fix that?
A: Reserve space for it. Add min-height to the container where the widget renders. If the widget doesn't load, the empty space is better than the shift.
Q: Is it worth optimizing for Core Web Vitals if my site already ranks well?
A: Yes. Google has confirmed Core Web Vitals are a ranking signal. Beyond SEO, better vitals directly correlate with lower bounce rates and higher conversion rates. Performance is a product feature.
The Three Things I'd Tell You on Day One
Stop optimizing for Lighthouse. Use it as a diagnostic, not a scoreboard. The scoreboard lives at CrUX, and the gap between the two can be an order of magnitude.
Fix the metric that's actually failing. Don't work on LCP because you find it satisfying while INP silently tanks. Look at your CrUX data, find the worst metric, and start there.
Treat performance as a product feature. The teams that succeed at Core Web Vitals are the ones where someone owns the metrics, they're reviewed weekly, regressions are treated as bugs, and they're part of the definition of done. The teams that fail are the ones that treat vitals as "something to get to after the next launch."
Your users don't care about your Lighthouse score. They care about whether the button responds when they tap it. Fix that.
If this helped, follow for more on frontend performance, architecture, and the unglamorous parts of shipping software.