Late April was mostly about counting tests the way your charts already do, giving the sysadmin leads board a denser index and a proper read-only analysis page, and wiring related tags so thin blog archives still send readers somewhere useful. We also lined up marketing anchors and small UI fixes so the path from a feature page into signup or the waitlist behaves the same everywhere.
Monthly quota now follows scored PageSpeed runs
Your plan limit is measured in PageSpeed tests per month. We only decrement the allowance when a run finishes with a Lighthouse performance score (the 0–100 headline number you see on site tests). HTTP failures, timeouts, and responses with no score no longer eat quota. That matches how we store site tests: a run is only worth charting when there is a score to plot.
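As a minimal sketch of that counting rule (the `PageSpeedRun` shape and `consumesQuota` name are hypothetical, not our storage model):

```typescript
// Hypothetical shape for a finished run; the real model differs.
interface PageSpeedRun {
  httpStatus: number | null;       // null when the request never completed
  performanceScore: number | null; // Lighthouse headline score, 0-100, or null
}

// A run only consumes monthly quota when it carries a chartable score.
function consumesQuota(run: PageSpeedRun): boolean {
  return run.performanceScore !== null;
}

// HTTP failures, timeouts, and scoreless 200s all return false here,
// so none of them decrement the allowance.
```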
Customer dashboard numbers
On normal customer accounts (not sysadmin), the API usage strip at the top of the dashboard still shows three figures: successful runs today, successful runs this calendar month (we abbreviate thousands as “2.4k”), and errors today. The successful count uses the same scored rows as quota; the error count groups failed attempts so you can tell whether the lab or the URL is flaking without guessing from gaps in history. The strip still links to the day-by-day API usage table. The same ideas are spelled out for prospects on the API access and quota visibility feature page.
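For illustration only, a tiny formatter that produces the same “2.4k” abbreviation; the in-app implementation is not shown here and may differ:

```typescript
// Abbreviate thousands the way the usage strip does: 2400 -> "2.4k".
// Handles thousands only; a sketch, not the production formatter.
function abbreviateCount(n: number): string {
  if (n < 1000) return String(n);
  const k = (n / 1000).toFixed(1).replace(/\.0$/, ""); // trim "2.0" to "2"
  return `${k}k`;
}

console.log(abbreviateCount(2400)); // "2.4k"
console.log(abbreviateCount(950));  // "950"
console.log(abbreviateCount(2000)); // "2k"
```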
Pricing FAQ and product behaviour
The pricing page FAQ already described which runs count toward quota. Counters in the app now match that wording: success means a scored result you can act on, not a bare HTTP 200 with empty metrics.
When a scheduled test has no performance score
If automation gets a payload that cannot produce a score, we treat it as a failure for quota purposes (no decrement) and record an explicit failure row instead of one that looks healthy with no number. You spend less time hunting empty rows when PageSpeed Insights is intermittent or the URL blocks the lab.
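A sketch of that normalisation, assuming a hypothetical `normalizeRun` helper and row shape; the point is that a scoreless payload becomes an explicit failure row rather than a healthy-looking row with no number:

```typescript
// Hypothetical stored-row shape: either a scored success or a labelled failure.
type StoredRow =
  | { status: "success"; performanceScore: number }
  | { status: "failed"; reason: string };

function normalizeRun(payload: { performanceScore?: number; error?: string }): StoredRow {
  if (typeof payload.performanceScore === "number") {
    return { status: "success", performanceScore: payload.performanceScore };
  }
  // No score: record why, so gaps in history are explained instead of silent.
  return { status: "failed", reason: payload.error ?? "no performance score in response" };
}
```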
Leads board and full-page analysis
The admin leads management and prospecting area is still in development with a small beta group. It is where agencies will run prospecting against sites that are not yet on a customer organisation: discovery jobs, scheduled lab tests, and the public report link you might send before someone becomes a paying tenant. The marketing page describes the intended workflow; the April updates already change how you work the queue inside the app.
Index table: scan before you open a lead
The index now shows enough signal on the first screen that you can sort and skim without opening every lead. Extra columns are on by default:
- Pages discovered tells you how much of the site the crawler actually mapped (see automated page discovery for how that fits the wider product). A low number next to a large marketing site is often the first hint that auth walls, aggressive bot blocking, or a shallow sitemap will skew later scores.
- Completed tests and pending tests compare what we have already run on mobile and desktop with what discovery says we should expect. Pending stays visible when a device lane has not caught up yet, so you are not guessing from a single “last tested” date in the row title.
- Report shows the date of the latest public report plus a link that opens in a new tab, so you can sanity-check the customer-facing page before you paste it into an email.
- Mobile and desktop render the latest analysis performance score as colour-coded badges: green from 90, amber from 50, red below, and “N/A” when we never got a scored run for that device (see the sketch after this list).
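Those cut-offs match Lighthouse's own score bands. As a sketch (the `scoreBadge` helper is hypothetical, but the thresholds are the ones listed above):

```typescript
type Badge = "green" | "amber" | "red" | "n/a";

// Map a stored performance score to a badge colour.
// null means no scored run exists for that device, so "N/A".
function scoreBadge(score: number | null): Badge {
  if (score === null) return "n/a";
  if (score >= 90) return "green";
  if (score >= 50) return "amber";
  return "red";
}
```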
Platform or CMS and priority remain in the column manager. Turn them on when you need taxonomy or routing metadata; leave them off when you want a wide table that still wraps long headers instead of crushing column widths.
Lead analysis: same report layout as monitored sites
Choosing View on a lead analysis opens a full-page, read-only record headed with the lead name and the URL we tested. Layout and sections match what you already know from monitored site tests: status and device, timestamps, a full-page or viewport screenshot when the stored response includes image data, and the headline 0–100 performance score.
Under that you get the same lab detail we surface for production monitoring: core timings (LCP, INP, CLS, and the rest of the Lighthouse set), optional Chrome UX Report field data when Google returns it, then diagnostics and optimisation opportunities, passed audits, and resource summaries. Nothing on the page is editable, and that is deliberate: when you are reconciling a PDF or a prospect email against what the crawler saw, you want a frozen snapshot of our store, not a second editor that could drift from the run you quoted.
If you already rely on the monitored-site test view for client conversations (the same framing as on the Core Web Vitals monitoring page), you should not have to learn a parallel “lead-only” dialect. Same sections, same ordering, different object in the database.
Blog archives and marketing pages
Thin blog tag pages can show related posts at the bottom when a tag only lists a few articles, so you still have onward reading. Across the public site, we cleaned up internal sign-up links, so you are less likely to land in the wrong place when you move from a post or feature page into the waitlist or account flow.
What to do next
If you are on a paid plan (pricing), run your next batch of tests and watch the API usage strip: successful totals should move with the scores you see on site tests, and error spikes should show up the same day. If you work leads in admin, sort by pages discovered or completed tests before opening a single row. For the public overview of that workflow, see leads management and prospecting.
The next changelog will cover the first half of May.