Improving Core Web Vitals doesn't require paid software. The most useful tools in this space are free, and most of them come from Google or are open source. The challenge is knowing which tool to reach for at each stage of the process, since each one measures slightly different things and answers different questions.
This roundup covers the seven free tools worth having in your workflow, what each one is best for, and where each falls short. Used together in the right order, they cover everything from identifying which pages are failing to diagnosing the root cause and verifying the fix.
1. Google Search Console (Core Web Vitals Report)
Google Search Console is the starting point for any Core Web Vitals project. The Core Web Vitals report shows field data from the Chrome User Experience Report (CrUX), grouped by URL pattern and categorized as poor, needs improvement, or good. It's the only place to see how your site is actually performing for real visitors on real devices.
Best for: Identifying which page templates are failing in the field and getting a priority-ranked list of URLs to focus on. The report groups similar URLs (like all blog post pages) together so you can see which template types need the most attention, rather than having to check every URL individually.
Limitation: The data is aggregated over a rolling 28-day window, so a fix can take up to 28 days to be fully reflected in the report. You can't see individual user sessions, and you can't isolate which specific device types or geographies are driving poor scores. It tells you what to fix but not precisely why.
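The poor / needs improvement / good buckets map directly to Google's published thresholds for each metric. A minimal sketch of that bucketing logic (the function name and threshold table are illustrative, not part of any Google API):

```python
# Google's published Core Web Vitals thresholds per metric.
THRESHOLDS = {
    "lcp": (2500, 4000),   # milliseconds
    "cls": (0.1, 0.25),    # unitless layout-shift score
    "inp": (200, 500),     # milliseconds
}

def classify(metric: str, value: float) -> str:
    """Return 'good', 'needs improvement', or 'poor' for a field value."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(classify("lcp", 1800))  # a 1.8 s LCP is "good"
print(classify("inp", 350))   # a 350 ms INP is "needs improvement"
```

Note that Search Console applies these thresholds at the 75th percentile of user experiences, so a page passes only if three quarters of its visits clear the "good" bar.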
2. Chrome DevTools (Performance Panel and Web Vitals Extension)
Chrome DevTools is the most powerful diagnostic tool for Core Web Vitals because it lets you run the page in your own browser, reproduce issues locally, and trace them to specific resources and code paths. No external service required.
The Performance panel records a timeline of everything the browser does during page load, including resource fetches, JavaScript execution, layout and paint events, and user interactions. The Experience row highlights layout shift events. Long tasks are visible in the Main thread track. CPU and network throttling let you simulate slower devices. The Interactions panel, added more recently, records every interaction and shows its breakdown across input delay, processing time, and presentation delay, which is exactly what you need for INP debugging.
Best for: Diagnosing the exact cause of an LCP delay, identifying which elements are shifting for CLS, and profiling long JavaScript tasks or event handlers that cause INP.
Limitation: Lab conditions. Your machine's CPU speed and browser state don't match a typical user's device. Installed extensions and cached resources can affect results in ways that don't reflect the real user experience. Run tests in a clean Incognito window with throttling enabled for the most representative lab results.
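The Performance panel can save a recorded trace as JSON, which makes long-task hunting scriptable. A hedged sketch of scanning a saved trace for tasks over the 50 ms long-task threshold; it assumes main-thread work appears as "RunTask" events with a "dur" field in microseconds, which can vary across Chrome versions:

```python
import json

# The long-task threshold: any main-thread task over 50 ms can delay
# input handling and hurt INP. Trace durations are in microseconds.
LONG_TASK_US = 50_000

def long_tasks(trace_path: str) -> list:
    """Return trace events for main-thread tasks longer than 50 ms.

    Assumes a DevTools Performance trace export with a traceEvents
    array and "RunTask" events carrying "dur" in microseconds.
    """
    with open(trace_path) as f:
        events = json.load(f)["traceEvents"]
    return [
        e for e in events
        if e.get("name") == "RunTask" and e.get("dur", 0) > LONG_TASK_US
    ]
```

This is a rough filter rather than a full analysis; the panel's flame chart remains the better tool for attributing a long task to a specific function.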
3. Lighthouse (via DevTools or CLI)
Lighthouse is Google's open-source automated auditing tool. It runs a battery of tests against a page, covering performance (including lab measurements related to Core Web Vitals), accessibility, SEO, and best practices, and produces a scored report with specific, prioritized recommendations. The Lighthouse tab in Chrome DevTools runs it in a few clicks. The CLI version can run in a CI/CD pipeline as a quality gate on every pull request.
Best for: Getting a structured, actionable report on what to fix, with explanations of each issue and direct links to guidance. Running it in CI catches performance regressions before they reach production. The JSON output format makes it possible to track scores over time programmatically.
Limitation: Lighthouse runs in lab conditions and is particularly sensitive to CPU load on your local machine. Scores from DevTools can vary significantly between runs if other applications are running. For stable, repeatable scores, use the CLI version in a clean environment with a consistent CPU profile.
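Wiring Lighthouse into CI usually means parsing its JSON output and failing the build when a budget is exceeded. A sketch of that quality gate, assuming a report produced by `lighthouse <url> --output=json --output-path=report.json`; the budget values are examples, not recommendations:

```python
# Example performance budgets, keyed by Lighthouse audit id.
BUDGETS = {
    "largest-contentful-paint": 2500,   # milliseconds
    "cumulative-layout-shift": 0.1,     # unitless score
}

def check_report(report: dict) -> list:
    """Return a list of budget failures found in a Lighthouse JSON report."""
    failures = []
    # Category scores are normalized to 0.0 - 1.0 in the JSON output.
    score = report["categories"]["performance"]["score"]
    if score < 0.9:
        failures.append(f"performance score {score} below 0.9")
    for audit_id, budget in BUDGETS.items():
        value = report["audits"][audit_id]["numericValue"]
        if value > budget:
            failures.append(f"{audit_id} = {value} exceeds budget {budget}")
    return failures
```

In a pipeline, a non-empty failure list would exit non-zero and block the merge; tracked over time, the same numbers give you the programmatic score history mentioned above.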
4. WebPageTest
WebPageTest runs page load tests from real browsers on real hardware in locations around the world. Unlike local DevTools tests, a WebPageTest run from a specific city on a specific device profile reflects what a user in that location on that device would actually experience. The waterfall chart shows every resource fetch in sequence, making it easy to see exactly what is happening at each millisecond of page load.
Best for: Reproducing performance issues that only appear on slower networks or lower-end devices, testing from multiple geographies, and getting a detailed waterfall view of resource loading order and timing. The filmstrip view shows what the page looks like at each second of load, which is useful for identifying the LCP element and confirming when it actually renders.
Limitation: The free tier has rate limits and queued test execution, which makes it unsuitable for rapid iterative testing during development. Plan on running a few targeted tests at key milestones rather than using it as a fast feedback loop.
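Because any single lab run is noisy, WebPageTest runs each test multiple times and reports the median run. When aggregating results yourself, the same idea applies; a minimal sketch with made-up metric values:

```python
from statistics import median

def median_metrics(runs: list) -> dict:
    """Median of each metric across repeated test runs of the same page."""
    keys = runs[0].keys()
    return {k: median(r[k] for r in runs) for k in keys}

# Three hypothetical runs of the same URL; one run caught a slow
# network moment (3100 ms LCP) and one caught a layout shift (0.11 CLS).
runs = [
    {"lcp_ms": 2400, "cls": 0.02},
    {"lcp_ms": 3100, "cls": 0.02},
    {"lcp_ms": 2600, "cls": 0.11},
]
print(median_metrics(runs))  # {'lcp_ms': 2600, 'cls': 0.02}
```

The median damps outlier runs without hiding a consistent problem, which is why it's a better baseline than any individual run.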
5. GTmetrix
GTmetrix runs performance tests from multiple locations and presents a combined score alongside a Lighthouse report and a filmstrip view of the page loading in frames. The comparison feature lets you run tests before and after a change and see the difference side by side. The filmstrip view makes it easy to see exactly when the LCP element paints and whether any layout shifts are visible between frames.
Best for: Quickly verifying that a fix improved LCP and getting a visual sense of how the page loads frame by frame. The before/after comparison is particularly useful for communicating performance improvements to stakeholders who want to see a visual difference.
Limitation: The free tier restricts test locations and caps the number of tests per month. For teams running many tests during an optimization sprint, that allowance can run out quickly.
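The before/after comparison GTmetrix shows visually reduces to a simple question: which metrics improved, and did anything regress along the way? An illustrative sketch with made-up metric names and values:

```python
def compare(before: dict, after: dict) -> dict:
    """Signed change per metric; negative means the metric improved."""
    return {k: round(after[k] - before[k], 3) for k in before}

# Hypothetical measurements before and after an LCP fix shipped.
before = {"lcp_ms": 4200, "cls": 0.18, "inp_ms": 240}
after = {"lcp_ms": 2400, "cls": 0.05, "inp_ms": 260}

delta = compare(before, after)
regressions = [k for k, v in delta.items() if v > 0]
print(delta)        # {'lcp_ms': -1800, 'cls': -0.13, 'inp_ms': 20}
print(regressions)  # ['inp_ms'] - INP got slightly worse while LCP improved
```

Checking all three metrics at once is the point: an image-preloading fix that improves LCP can quietly add main-thread work that nudges INP the wrong way.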
6. HTTP Archive (Research and Benchmarking)
HTTP Archive is a nonprofit project that crawls millions of web pages on a regular schedule and archives their performance data. The data is queryable via BigQuery for custom analysis. The project also publishes the annual Web Almanac, with detailed analysis of web performance trends broken down by page type, CMS, and technology stack.
Best for: Benchmarking your site's performance metrics against industry-wide averages, understanding how common specific issues are across the web, and tracking how performance patterns evolve over time. The Web Almanac's annual performance chapter is one of the best publicly available analyses of Core Web Vitals across the web.
Limitation: HTTP Archive data is aggregate and historical. It's a research and benchmarking tool, not a diagnostic tool for your specific site. There's a meaningful learning curve to querying the data effectively via BigQuery.
7. web.dev (Learning Resource and Measure Tool)
web.dev is Google's resource for web performance best practices. It hosts a Measure tool for running a Lighthouse audit from a URL, but more importantly it's where Google publishes detailed technical guidance on Core Web Vitals, including how each metric is calculated, what causes failures, and how to fix specific issues. The articles are written by the Chrome team members who define the metrics, which makes them unusually authoritative.
Best for: Understanding the "why" behind a score, reading Google's official guidance on LCP, CLS, and INP, and finding detailed case studies from sites that have improved their scores. When you find an issue in DevTools or Lighthouse and need to understand it deeply before writing a fix, web.dev is where you go.
Limitation: It's documentation and guidance, not a diagnostic tool for your specific site. Use it alongside the other tools in this list.
How These Fit Together in Practice
A practical Core Web Vitals workflow uses these tools in a specific sequence. Start with Search Console to find which page templates have field data problems and which URLs to prioritize. Run those specific URLs through WebPageTest or GTmetrix to get a baseline lab measurement. Use Chrome DevTools to drill into the specific resources and code causing the issues on those pages. Use Lighthouse to generate a structured list of recommendations. Use web.dev to read the guidance for each issue type before writing fixes. Then run WebPageTest or GTmetrix again after shipping the fix to confirm the score moved.
For a step-by-step guide to using these tools in a real production audit, 137Foundry published a full walkthrough that covers the complete process from field data to fix verification: How to Audit and Fix Core Web Vitals on a Production Website.
The tooling in this space is genuinely excellent and genuinely free. The bottleneck is rarely tool access. It's knowing which issues to prioritize, how to interpret what the tools are reporting, and how to trace a reported problem to the specific code that's causing it. That part comes from running the process on real sites and building familiarity with what each tool is actually measuring.