If you have been optimising for Core Web Vitals for a few years, you will remember First Input Delay (FID) as the “interactivity” metric. That role now belongs to Interaction to Next Paint (INP). Google promoted INP to a stable Core Web Vital on 12 March 2024 and retired FID from the programme at the same time. INP is not a minor rename—it measures a fuller slice of the experience, and it is the number you should expect in PageSpeed Insights, Search Console, and any serious performance report.
This article explains what INP is, why the change happened, and what to do about it in day-to-day work—including when you are responsible for many client sites rather than a single product. For step-by-step fixes, pair it with our deeper guide on LCP, INP, and CLS; for the wider CWV picture, start with What Are Core Web Vitals?
What INP measures (in plain terms)
INP captures responsiveness: how long it takes from a user’s discrete action until the browser can paint the next frame that reflects that action. Eligible interactions are clicks, taps, and key presses on the page. Hovering and scrolling are out of scope for INP, which keeps the metric focused on deliberate gestures that expect immediate feedback.
Google’s documentation breaks each interaction into the phases developers actually debug: input delay (waiting for the main thread), processing time (your event handlers), and presentation delay (rendering work before the next paint). The interaction’s latency is the sum of those phases, and the slowest one usually dominates. The page’s reported INP is derived from the interactions observed during the visit—for most pages that is effectively the worst interaction; on very chatty pages, the methodology discounts rare outliers so one freak delay does not drown out an otherwise snappy experience.
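Those phases map directly onto the browser's Event Timing API, which is what RUM libraries use under the hood. A minimal sketch of the breakdown (the field names are the standard `PerformanceEventTiming` ones; the 40 ms duration threshold is an arbitrary choice for illustration):

```javascript
// Split a PerformanceEventTiming entry into the three phases above.
function phases(entry) {
  return {
    inputDelay: entry.processingStart - entry.startTime,
    processing: entry.processingEnd - entry.processingStart,
    // duration runs from startTime to the next paint (rounded by the browser)
    presentation: entry.startTime + entry.duration - entry.processingEnd,
  };
}

// Guarded so the snippet also parses and runs outside a browser.
if (typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(entry.name, phases(entry));
    }
  }).observe({ type: 'event', durationThreshold: 40, buffered: true });
}
```

In practice most teams reach for the web-vitals library's `onINP()` helper rather than hand-rolling an observer, but seeing the raw entries makes the three-phase model concrete.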
Field scoring uses the 75th percentile of page loads (split by mobile and desktop), consistent with other Core Web Vitals. The public thresholds are:
| Rating | INP (field) |
|---|---|
| Good | ≤ 200 ms |
| Needs improvement | 200–500 ms |
| Poor | > 500 ms |
Those numbers are not aspirational labels—they are what Google uses when it evaluates real-user experience at scale.
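When scripting reports across many URLs, the thresholds reduce to a few lines. A sketch, assuming you have per-visit INP values in milliseconds (the nearest-rank p75 here is a simplified approximation of how field tools aggregate, not Google's exact method):

```javascript
// Apply the published INP thresholds to a single p75 value.
function rateINP(ms) {
  if (ms <= 200) return 'good';
  if (ms <= 500) return 'needs improvement';
  return 'poor';
}

// Rough field-style aggregation: 75th percentile of observed visits.
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * 0.75))];
}
```

Usage would look like `rateINP(p75(visitINPs))`, mirroring the mobile/desktop split by keeping separate arrays per form factor.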
Where INP sits next to LCP and CLS
Core Web Vitals are still a set of three: LCP for loading, INP for responsiveness, CLS for visual stability. They answer different questions. You can ship a fast first paint and still fail INP if the main thread is busy when someone opens a menu; you can pass INP on a lean marketing page and still fail CLS if images without dimensions push content around. In practice, teams prioritise LCP first because it is easy to explain to stakeholders and often tied to hero assets and CDN configuration. INP rewards the same discipline—JavaScript budget, tag governance, framework choices—but shows up in different URLs and flows, especially after hydration. Treat the three metrics as separate dials, not one score.
Why FID was not enough
FID only looked at the first interaction on a page, and only at input delay: time before the browser started handling the event. That made FID useful for catching catastrophic main-thread blocking during load, but it ignored everything that happens after the page is interactive. Google notes that Chrome usage data shows most of a typical visit happens after load; a slow menu, cart step, or client-rendered route change could leave FID looking fine while users still felt a sluggish product.
INP closes that gap by measuring responsiveness across the full session and including processing and presentation, not just the queue in front of the first handler. That is why INP is a better match for modern sites heavy on JavaScript, third-party widgets, and single-page transitions—exactly the stacks agencies ship for clients every week.
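The gap is visible in the underlying APIs: FID came from the single `first-input` entry, and only its delay phase. A sketch using the standard Event Timing fields (guarded so it also runs outside a browser):

```javascript
// FID was just the delay phase of the first interaction: the gap between
// the event arriving and its handler starting to run.
function fidFromEntry(entry) {
  return entry.processingStart - entry.startTime;
}

if (typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver((list) => {
    const first = list.getEntries()[0];
    console.log('FID would have been:', fidFromEntry(first), 'ms');
  }).observe({ type: 'first-input', buffered: true });
}
```

Handler time and paint time never appeared in that number, which is exactly the blind spot INP removes.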
Why INP matters for SEO and for users
Core Web Vitals are part of Google’s page experience signals. INP is the responsiveness pillar: poor scores indicate real friction—double taps, abandoned forms, rage clicks—while good scores mean the UI keeps up with input. Search is not the only reason to care; conversion and support tickets follow the same physics.
For teams managing portfolios of sites, INP adds a wrinkle: the worst interactions often sit on templates (navigation, checkout, lead forms) or on third-party scripts shared across properties. One slow pattern can drag INP for every page that uses it. That is less visible in a one-off Lighthouse run than in field data or repeated URL-level checks over time—which is why operational monitoring and regression alerts matter alongside manual audits.
How to see INP in practice
PageSpeed Insights pulls field data from the Chrome User Experience Report (CrUX) when your origin or URL has enough traffic; that is the authoritative place to see whether you pass INP at the 75th percentile. Lab tools do not compute INP directly; Lighthouse’s Total Blocking Time (TBT) is a rough proxy for main-thread contention that often tracks with INP problems, but it is not interchangeable. When CrUX data is missing—common on small or new sites—use real user monitoring (RUM) if you have it, or fall back to manual profiling in Chrome DevTools for representative flows.
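When you need this at scale rather than one URL at a time, the same CrUX field data is available programmatically through the PSI v5 API. A sketch (Node 18+ for global `fetch`; running without an API key works for light use but a key is recommended; the extraction assumes the documented response shape, which you should verify against your own responses):

```javascript
// Extract the field INP summary from a PSI v5 response, if CrUX has data.
function extractFieldINP(psiResponse) {
  const metric =
    psiResponse.loadingExperience?.metrics?.INTERACTION_TO_NEXT_PAINT;
  if (!metric) return null; // small or new sites often have no CrUX data
  return { p75: metric.percentile, category: metric.category };
}

// Hypothetical caller; append &key=YOUR_API_KEY for anything beyond light use.
async function fetchFieldINP(pageUrl) {
  const api = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';
  const res = await fetch(`${api}?url=${encodeURIComponent(pageUrl)}`);
  return extractFieldINP(await res.json());
}
```

A `null` return is your cue to fall back to RUM or DevTools profiling, as described above.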
Search Console’s Core Web Vitals report has surfaced INP in place of FID since the March 2024 transition, so keep INP in scope when you triage URL groups and templates.
If you are building a repeatable workflow for clients—baselines, fixes, then proof—our Core Web Vitals monitoring checklist for agencies ties these metrics to review cadence and ownership. For setup across many monitored URLs, How to Set Up Automated PageSpeed Monitoring for Multiple Sites walks through the operational pieces.
What usually hurts INP
These patterns show up repeatedly in audits:
- Long tasks on the main thread — parsing and executing large JavaScript bundles, synchronously updating heavy DOM trees, or blocking styles/layout after an interaction.
- Third-party tags — analytics, chat, consent banners, and A/B snippets competing for the same thread as your UI handlers.
- Large DOMs and expensive selectors — interactions that trigger wide reflows or style recalculations on complex pages.
- Client-heavy rendering — SPAs that wait on data or hydration before showing feedback; users perceive that as “nothing happened” even when the network is fast.
A concrete pattern we see on client sites: the first click after load feels fine (FID would have looked healthy), but the fifth interaction—opening a filtered product grid, submitting a multi-step form, or toggling a sticky nav—hits a long task left behind by a tag or a bundle split. INP catches that; FID did not. Embeds deserve attention too: slow interactions inside iframes still count toward the page-level INP users see, while your own JavaScript cannot inspect cross-origin iframe code—so field data and DevTools frame selection matter when CrUX and RUM disagree.
Google’s own guides on optimising long tasks, input delay, and interaction diagnostics are the right next step once you know which interaction is slow.
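That guidance boils down to one pattern: paint the feedback, then do the work. A sketch of the yielding technique (`openFilterPanel` and `recomputeProductGrid` are hypothetical stand-ins for your own UI code):

```javascript
// Yield control back to the browser so it can paint the next frame.
// scheduler.yield() is the purpose-built API where supported;
// setTimeout(0) is the widely available fallback.
function yieldToMain() {
  if (globalThis.scheduler?.yield) return globalThis.scheduler.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Hypothetical handler: cheap visible feedback first, heavy work after.
async function onFilterClick() {
  openFilterPanel();       // small DOM update, shown in the next paint
  await yieldToMain();     // browser paints here; the interaction ends
  recomputeProductGrid();  // the long task no longer blocks that paint
}
```

Attached as the click listener, this ends the measured interaction at the first paint after the yield, while the grid recompute runs as a separate task.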
INP and Apogee Watcher
Apogee Watcher is built to run scheduled PageSpeed-class tests across many sites and routes, surface lab and field signals where the API provides them, and alert you when scores move. INP is fundamentally a field-first metric: fixing it means reproducing real interactions, trimming main-thread work, and re-checking user journeys—not a single synthetic number in isolation. Use Watcher to watch for regressions when you ship framework upgrades, tag managers, or new client themes; pair those signals with DevTools and CrUX for the interactions CrUX cannot explain line-by-line.
If you are not monitoring yet, start with a baseline on your highest-traffic templates, then expand. Create a free account to add sites and budgets without wiring up PSI by hand for every property.
FAQ
When did INP replace FID as a Core Web Vital?
INP became an official Core Web Vital and replaced FID on 12 March 2024, per Google’s Web Vitals programme.
What is a good INP score?
At the 75th percentile of field data, 200 ms or less is “good”, over 500 ms is “poor”, with a band between for “needs improvement”—see Google’s INP documentation.
Does INP include scrolling or hover?
No. INP observes click, tap, and keyboard interactions. Scrolling and hover are not part of the metric, though some gestures may include a click or tap that is measured.
Is INP the same as Total Blocking Time?
No. TBT is a lab proxy related to main-thread blocking during load. INP is a field metric covering full-session interactions. They often move together when JavaScript is the culprit, but they are not identical.
Why should agencies track INP separately from LCP?
LCP measures loading; INP measures responsiveness after content is on screen. A page can have an acceptable LCP and still fail INP because of client-side code, third parties, or heavy UI after load—common on marketing sites and apps your clients maintain for years.
Further reading (Google): Interaction to Next Paint (INP), INP becomes a Core Web Vital — March 12, Optimize Interaction to Next Paint.
Top comments (1)
The TBT/INP distinction you draw is one of the most important (and most misunderstood) things in web performance right now. Worth adding one more nuance: heavy post-load JavaScript that fires after hydration is completely invisible to TBT, since TBT only counts long tasks during the load phase. This creates a failure mode where a page looks healthy in Lighthouse but users feel lag on their 4th or 5th interaction — exactly what you describe with the filtered product grid example.
One separate optimization path that's worth distinguishing from INP: navigation LCP. When users click a link to a new page, the field LCP for that navigation includes the full TTFB + render time of the destination. The Speculation Rules API's prerender mode brings TTFB for that navigation down to near zero, because the browser has already fetched and rendered the page in a background tab. The result shows up in CrUX as dramatically improved navigation LCP — but it does nothing for INP on the destination page once it's loaded.
The practical implication: prerendering (via `<script type="speculationrules">`) is the right lever for navigation speed; JavaScript thread management is the right lever for INP. They're complementary, not competing. We built Prefetch (apps.shopify.com/prefetch) specifically around the Speculation Rules API for Shopify stores — the split between what prerendering fixes and what it doesn't is something we explain constantly.

The third-party script point is the INP culprit I'd put at the top of the list. One consent banner or chat widget that blocks the main thread during an interaction can fail INP site-wide regardless of how clean your own code is.