A Lighthouse score of 95 feels great. Until you check what your actual users experience and find that 40% of them are getting a Poor LCP.
How? Because Lighthouse runs in a controlled environment. Fixed CPU, fixed network, no browser extensions, cold cache. Your real users are on old Android phones, congested Wi-Fi, with 12 Chrome extensions installed. The test and reality can be very different.
We just shipped Field Data in Ahoj Metrics to close that gap. You can now look up real Chrome user experience data for any domain or URL, right alongside your Lighthouse audits.
What Is Field Data?
The data comes from Google's Chrome User Experience Report (CrUX). It's an aggregated, anonymized dataset of real performance timings collected from Chrome users who have opted in to sharing usage statistics.
When someone visits your site in Chrome, their browser quietly measures how long things take to load, how quickly the page responds to clicks, and how much the layout shifts around. Google aggregates this data across all opted-in Chrome users and makes it available through the CrUX API.
A few important details about how CrUX works:
28-day rolling window. The data represents the last 28 days of real user visits. No single bad day can spike the numbers. No single good day can hide persistent problems.
75th percentile (p75). The reported value isn't the average. It's the experience of someone at the 75th percentile, meaning 75% of your visitors had a better experience than this number, and 25% had a worse one. This is intentional. Google wants you to optimize for the tail, not the middle.
Good / Needs Improvement / Poor distribution. Every page load gets classified against Google's thresholds. You can see what percentage of your users fall into each bucket. A site might have 80% Good, 12% Needs Improvement, and 8% Poor for LCP. That distribution tells you more than any single number.
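Both the p75 and the distribution come straight out of the CrUX API's `queryRecord` response: each metric carries a `percentiles` object and a `histogram` of density buckets. Here's a sketch in Ruby that pulls both out of a trimmed, made-up response (the JSON shape follows the real API; the numbers are illustrative):

```ruby
require "json"

# A trimmed, hypothetical CrUX API response for illustration.
# In the real queryRecord response, metrics nest under record.metrics;
# p75 lives in "percentiles" and the Good/NI/Poor split in "histogram".
# The final histogram bucket has no "end", matching the real API.
sample = <<~JSON
  {
    "record": {
      "key": { "origin": "https://example.com" },
      "metrics": {
        "largest_contentful_paint": {
          "histogram": [
            { "start": 0,    "end": 2500, "density": 0.80 },
            { "start": 2500, "end": 4000, "density": 0.12 },
            { "start": 4000,              "density": 0.08 }
          ],
          "percentiles": { "p75": 2100 }
        }
      }
    }
  }
JSON

lcp  = JSON.parse(sample).dig("record", "metrics", "largest_contentful_paint")
p75  = lcp.dig("percentiles", "p75")                       # milliseconds
good, needs_improvement, poor = lcp["histogram"].map { |b| b["density"] }

puts "LCP p75: #{p75}ms"
puts "Good: #{(good * 100).round}% / NI: #{(needs_improvement * 100).round}% / Poor: #{(poor * 100).round}%"
```

The three histogram buckets always correspond to Good, Needs Improvement, and Poor, with densities summing to roughly 1.0.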
Lab Data vs Field Data
This is the core concept. Both are useful. Neither is complete on its own.
Lab data (Lighthouse) tests your site in a controlled environment. Same CPU, same network throttling, same browser config, every time. It's reproducible. It's great for finding issues, comparing before/after a deployment, and running automated tests in CI/CD. But it's synthetic. It doesn't represent any real user.
Field data (CrUX) measures what your actual visitors experience. Real devices, real networks, real browser configurations. It's messy and variable, but it's the truth. It's also what Google uses for Core Web Vitals in Search ranking.
Here's where it gets interesting: these two numbers can disagree significantly.
A site might score 68 on Lighthouse (worrying) but show 85% Good LCP in CrUX (fine in practice). Why? Maybe most of your users are on fast connections with warm caches, so the real experience is better than what the lab predicts.
Or the reverse: a Lighthouse score of 92 (looks great) but only 55% Good LCP in CrUX (a real problem). Maybe your audience skews toward mobile users in regions with slower connectivity, and the lab test doesn't capture that.
Neither number is "right." Lab data tells you what's wrong. Field data tells you the impact. You need both to make good decisions about where to spend your optimization time.
The Five Metrics
Field Data in Ahoj Metrics shows five metrics:
LCP (Largest Contentful Paint) measures how quickly the main content loads. This is usually the hero image, a large heading, or a video thumbnail. Google considers under 2.5 seconds "Good."
INP (Interaction to Next Paint) measures how responsive the page is to user input. When someone taps a button or clicks a link, how long before something visibly happens? Under 200ms is "Good." INP replaced FID (First Input Delay) as a Core Web Vital in 2024.
CLS (Cumulative Layout Shift) measures how much the layout jumps around while loading. You know when you're about to tap a button and an ad loads above it, pushing everything down? That's layout shift. Under 0.10 is "Good."
FCP (First Contentful Paint) measures how quickly the first piece of content appears. Not the main content (that's LCP), just anything: text, an image, the background color. Under 1.8 seconds is "Good."
TTFB (Time to First Byte) measures how quickly the server responds to the browser's request. Under 800ms is "Good."
LCP, INP, and CLS are Google's three Core Web Vitals. These are the metrics that directly feed into Google's search ranking signals. If you can only focus on three things, focus on these.
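If you want to bucket a p75 value yourself, the classification is a simple threshold check. The "Good" cutoffs below are the ones listed above; the "Poor" boundaries (4s, 500ms, 0.25, 3s, 1.8s) are Google's published Core Web Vitals thresholds. The helper itself is just a sketch, not anything from the Ahoj Metrics codebase:

```ruby
# Good / Needs Improvement / Poor cutoffs per metric.
# Units: milliseconds for time-based metrics; CLS is unitless.
THRESHOLDS = {
  lcp:  [2500, 4000],   # Good <= 2.5s, Poor > 4s
  inp:  [200,  500],
  cls:  [0.10, 0.25],
  fcp:  [1800, 3000],
  ttfb: [800,  1800]
}.freeze

def classify(metric, p75)
  good, poor = THRESHOLDS.fetch(metric)
  return "Good"              if p75 <= good
  return "Needs Improvement" if p75 <= poor
  "Poor"
end

puts classify(:lcp, 2100)   # "Good"
puts classify(:inp, 350)    # "Needs Improvement"
puts classify(:cls, 0.31)   # "Poor"
```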
How to Use It
Go to Field Data in the Ahoj Metrics sidebar. Enter any domain (like https://stripe.com) or a specific URL. Hit Look Up Field Data.
You'll see the p75 value and the Good/Needs Improvement/Poor distribution for all five metrics. Instant results, no audit credits used.
A few things to know:
It works for any public site. You can look up your competitors, your clients, or any site you're curious about. The data is public.
Not every URL has data. CrUX needs a meaningful amount of Chrome traffic to generate a record. If you look up an internal tool, a brand new site, or a low-traffic page, Google won't have data for it. You'll see a clear message when that happens. Origin-level lookups (the whole domain) are more likely to have data than individual URLs.
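Under the hood, a miss isn't an empty record: the CrUX API responds with HTTP 404 and an error body when it has no data for a URL. A minimal sketch of handling that case (the error JSON is illustrative; exact message text varies):

```ruby
require "json"

# Hypothetical error body for a CrUX lookup miss. The real API returns
# HTTP 404 with a similar "error" object when no record exists.
miss = <<~JSON
  { "error": { "code": 404, "status": "NOT_FOUND", "message": "chrome ux report data not found" } }
JSON

# Returns [record, nil] on success, or [nil, user-facing message] on a miss.
def extract_record(body)
  parsed = JSON.parse(body)
  return [nil, "No field data available for this URL"] if parsed["error"]
  [parsed["record"], nil]
end

record, message = extract_record(miss)
puts message if record.nil?
```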
It's available to all users. Free tier, paid plans, everyone. Field Data lookups don't count against your audit quota. The CrUX API is free from Google, and we saw no reason to gate it.
How This Changes Your Workflow
Before, an Ahoj Metrics workflow looked like this:
- Run Lighthouse audit from multiple regions
- See scores and recommendations
- Fix issues
- Run another audit to verify
Now it looks like this:
- Check Field Data for a baseline of what real users experience
- Run Lighthouse audit from multiple regions to find specific issues
- Fix issues
- Run another audit to verify the fix
- Wait for field data to update (28-day rolling window) to confirm the real-world impact
Field data gives you the "why" behind your optimization work. You're not fixing things because a synthetic test says so. You're fixing things because 30% of your real users are getting a Poor LCP.
Why Not Just Use PageSpeed Insights?
Google's PageSpeed Insights already shows CrUX data. It's free and it works. So why look at it in Ahoj Metrics?
Context. In PSI, field data lives on Google's website, separate from everything else. You look up a URL, see the numbers, close the tab. In Ahoj Metrics, field data lives next to your Lighthouse audits, your monitors, and your historical data. You can see how your lab scores compare to real-world experience for the same site, in the same tool, without switching between tabs.
PSI also doesn't save history, doesn't compare across sites, and doesn't integrate into a monitoring workflow. It's a snapshot tool. Ahoj Metrics is trying to be the place where all your performance data lives together.
Technical Details
For anyone curious about the implementation:
We built a thin Ruby wrapper around the CrUX API (ahojmetrics/crux-api). Results are cached server-side for 12 hours using Solid Cache (PostgreSQL-backed, same as the rest of our infrastructure). Repeat lookups for the same URL are instant.
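To illustrate the caching behaviour without dragging in Rails, here's a plain-Ruby stand-in. In the actual app this would be something like `Rails.cache.fetch(key, expires_in: 12.hours) { ... }` backed by Solid Cache; the class below is just a sketch of the same expire-after-write idea, with all names invented:

```ruby
# Minimal TTL cache: returns the stored value while fresh, otherwise
# runs the block and stores the result. Not thread-safe; illustration only.
class TtlCache
  def initialize
    @store = {}
  end

  def fetch(key, expires_in:)
    entry = @store[key]
    return entry[:value] if entry && Time.now - entry[:written_at] < expires_in
    value = yield
    @store[key] = { value: value, written_at: Time.now }
    value
  end
end

cache  = TtlCache.new
calls  = 0
lookup = -> { calls += 1; { lcp_p75: 2100 } }   # stands in for the CrUX API call

first  = cache.fetch("crux:https://example.com", expires_in: 12 * 3600, &lookup)
second = cache.fetch("crux:https://example.com", expires_in: 12 * 3600, &lookup)

puts calls   # 1: the second lookup was served from cache
```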
The API response from Google is verbose. Metric names are long (largest_contentful_paint), CLS comes back as a string float, and the structure is nested. Our serializer normalizes everything into a clean JSON shape with short keys (lcp, inp, cls) that the frontend can work with easily.
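A sketch of what that normalization step might look like. The long-to-short key mapping and the CLS string-to-float conversion reflect the real API's quirks; the method and constant names are assumptions, not the actual serializer:

```ruby
# Map CrUX's verbose metric names to the short keys the frontend uses.
# (The exact CrUX key for TTFB may differ between API versions.)
SHORT_KEYS = {
  "largest_contentful_paint"        => :lcp,
  "interaction_to_next_paint"       => :inp,
  "cumulative_layout_shift"         => :cls,
  "first_contentful_paint"          => :fcp,
  "experimental_time_to_first_byte" => :ttfb
}.freeze

def normalize(metrics)
  metrics.each_with_object({}) do |(name, data), out|
    key = SHORT_KEYS[name] or next
    p75 = data.dig("percentiles", "p75")
    # CLS p75 arrives as a string float (e.g. "0.08");
    # the time-based metrics arrive as integer milliseconds.
    out[key] = key == :cls ? p75.to_f : p75
  end
end

raw = {
  "largest_contentful_paint" => { "percentiles" => { "p75" => 2100 } },
  "cumulative_layout_shift"  => { "percentiles" => { "p75" => "0.08" } }
}

puts normalize(raw).inspect   # short keys, CLS coerced to a float
```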
Authentication is the same as every other Ahoj endpoint. Standard JWT/session auth, no separate API key needed.
What's Next
Field Data is a lookup tool today. You search for a URL and see the current CrUX data. We're thinking about:
- Historical field data tracking. Store CrUX snapshots over time so you can see trends, not just the current 28-day window.
- Field data alongside monitors. When your automated Lighthouse monitor runs, also pull the CrUX data for that URL and display them together.
- Field vs lab comparison view. A side-by-side showing your Lighthouse lab metrics and CrUX field metrics for the same URL, highlighting where they agree and where they diverge.
If any of those would be particularly useful to you, I'd love to hear about it.
Try It
Sign in to Ahoj Metrics and go to Field Data in the sidebar. Look up your own site, look up your competitors, look up anything. No credits used, no limits.
If you don't have an account, the free tier gives you 20 Lighthouse audits per month plus unlimited Field Data lookups.
Ahoj Metrics is a performance monitoring tool that runs Lighthouse audits from 18 global regions and now shows real Chrome user experience data via CrUX. Built with Rails 8, Solid Queue, and Fly.io.
Top comments (3)
This is very true. I once had a Lighthouse score above 90, but real users still complained the site felt slow, especially on mobile. Later in CrUX, I saw LCP was Poor for many users. That's when I realized Lighthouse shows potential, but field data shows reality. Since then, I always check both before trusting a performance score.
That's a perfect example of why both matter. The gap between "scores well in a lab" and "feels fast for real users" can be surprisingly wide, especially on mobile.
Curious, when you spotted the LCP issue in CrUX, was it a specific cause? I've seen CDN misconfigurations and large hero images be the usual culprits, but mobile-specific issues like render-blocking third-party scripts are sneaky too.
Totally agree: the only sane workflow is to use lab data to debug, and field data to validate. When they disagree, I’ve found it helps to segment field data by device class and page template (home/category/PDP/cart/checkout), because one bad template can drag the p75. Also worth watching the distribution (Good/NI/Poor) more than a single p75 number. The tails are where complaints come from.