A performance budget on paper is only a policy. In production it needs two things: thresholds your tests actually enforce, and notifications people will read without muting the sender. This product spotlight walks through how Apogee Watcher connects performance budgets to email alerts so regressions show up in your inbox as structured digests tied to each scheduled run.
If you are new to the vocabulary, our Core Web Vitals monitoring checklist for agencies covers the operational habits around budgets; here we focus on what the product does with those numbers after each PageSpeed Insights-backed test completes.
Why budgets and alerts belong together
Teams adopt budgets for different reasons. Some need a line in the sand before a release train ships. Others need evidence for a client retainer (“we agreed LCP stays inside this band”). Without automated checks, those thresholds become a PDF you filed once. With scheduled lab tests, the same thresholds can answer a simpler question: did this week’s deploy move any URL outside the band we care about?
Alerts are the other half. If every breach generated a separate message per URL and per metric, a single bad deploy on a large site would bury your team in mail before anyone opened a dashboard. Apogee Watcher sends one digest email per site per budget-check run when the alert channel is email. The digest lists up to ten pages with the worst scores first, plus totals for how many pages breached budgets and how violations break down by metric. That design follows the same instinct as a good incident summary: enough detail to triage, not enough to replace your issue tracker.
What “a budget” means inside Watcher
Budgets in Apogee Watcher are site-level and strategy-specific. For each monitored site you configure separate rows for mobile and desktop lab strategies. That matters because retail and content sites often diverge sharply by breakpoint: a template can pass mobile LCP while desktop TBT balloons after a script change.
Each active budget stores thresholds the product compares against stored lab results from scheduled runs. You can set caps and floors across the metrics PageSpeed Insights exposes in our results model, including the following:
- Performance score (minimum acceptable Lighthouse-style score)
- Largest Contentful Paint (LCP) as a time budget
- Interaction to Next Paint (INP) where the API supplies it
- Cumulative Layout Shift (CLS)
- First Contentful Paint (FCP)
- Total Blocking Time (TBT)
- Speed Index
Not every team enables every field. A content marketing site might care most about LCP and CLS; an app-heavy property might weight INP and TBT more heavily. The performance budget thresholds template post includes starter numbers you can copy before you tune per client.
When you add a site, the product creates default budget rows for both strategies so you are not starting from an empty configuration. You still choose which numbers reflect your contract or internal standard, and you can deactivate a strategy’s budget if you only monitor one form factor for a given property.
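To make the shape of a budget concrete, here is an illustrative sketch of what one site-level, strategy-specific budget row could look like. The field names and the `None`-means-unenforced convention are our assumptions for the example, not Apogee Watcher's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Budget:
    """Hypothetical budget row: one per site per lab strategy.

    A threshold of None means that metric is not enforced.
    Score has a floor; timing and shift metrics have caps.
    """
    site_id: str
    strategy: str                                  # "mobile" or "desktop"
    active: bool = True
    min_performance_score: Optional[int] = None    # floor, 0-100
    max_lcp_ms: Optional[int] = None               # cap, milliseconds
    max_inp_ms: Optional[int] = None
    max_cls: Optional[float] = None                # unitless
    max_fcp_ms: Optional[int] = None
    max_tbt_ms: Optional[int] = None
    max_speed_index_ms: Optional[int] = None

# A content marketing site enforcing only LCP and CLS on mobile:
content_budget = Budget(
    site_id="client-blog",
    strategy="mobile",
    max_lcp_ms=2500,
    max_cls=0.1,
)
```

The point of the sketch is the selectivity: everything left at `None` is simply skipped during evaluation, which is how an LCP/CLS-only budget coexists with an INP/TBT-heavy one on another property.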
From a scheduled test to an alert row
The loop is intentionally boring, which is what you want from monitoring infrastructure:
- Scheduled tests run on the cadence allowed by your plan (hourly, daily, weekly, and so on). Each run produces fresh lab metrics per page and strategy.
- Budget evaluation compares those metrics to the active budget for the same site and strategy. When a value is worse than the threshold (for example LCP above your maximum seconds, or performance score below your minimum), the system records an alert with the metric name, the threshold you set, and the value observed.
- Resolution happens automatically when a later test shows the metric back inside the budget. Resolved alerts stop contributing to “open” noise; you keep history for auditing without treating old breaches as current fires.
That pipeline sits on top of the same automated PageSpeed monitoring setup this blog has covered before: organisations, sites, pages, and scheduled tests. Budgets do not replace discovery or URL hygiene; they judge the URLs you already chose to measure.
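The evaluate-and-resolve loop above can be sketched in a few lines. This is a simplified model under our own assumptions (function and key names are hypothetical; the product's internals are not public), but it captures the two behaviours described: a breach records an alert with metric, threshold, and observed value, and a later in-budget reading resolves it:

```python
def evaluate_run(budget, results, open_alerts):
    """Compare one run's lab metrics against a budget.

    budget:      metric -> (threshold, "max" or "min")
    results:     (page_url, metric) -> observed value from this run
    open_alerts: set of (page_url, metric) keys still unresolved
    Returns (new_alerts, resolved_keys).
    """
    new_alerts, resolved = [], []
    for (url, metric), value in results.items():
        rule = budget.get(metric)
        if rule is None:
            continue  # metric not enforced by this budget
        threshold, direction = rule
        breached = value > threshold if direction == "max" else value < threshold
        key = (url, metric)
        if breached and key not in open_alerts:
            new_alerts.append({"page": url, "metric": metric,
                               "threshold": threshold, "observed": value})
        elif not breached and key in open_alerts:
            resolved.append(key)  # back inside budget: auto-resolve
    return new_alerts, resolved
```

Note the asymmetry: caps (LCP, TBT) breach when the value is above the line, while floors (performance score) breach when it is below, which matches how the product describes "worse than the threshold".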
Email digests: what actually arrives
When new violations exist after a run and your budget’s alert channel is set to email, Apogee Watcher sends the budget-violation digest to each active organisation admin. The mail is scoped per site, not per page. Inside one digest you will typically see:
- Summary counts for how many pages had violations and how many individual metric breaches occurred, plus a breakdown by metric type so you can tell whether the deploy mainly hurt LCP or spread pain across several signals.
- Detailed rows for up to ten pages, prioritised so the weakest performance scores surface first, then by URL when scores tie. If more than ten pages failed, the email tells you that you are viewing the first ten of a larger set, while the totals still reflect the full picture.
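The prioritisation just described, worst performance scores first with URL as the tie-break, capped at ten detailed rows while the totals cover the full set, can be sketched like this (an illustrative model, not the product's code):

```python
from collections import Counter

def build_digest(violating_pages, max_rows=10):
    """violating_pages: list of dicts with 'url', 'score' (0-100),
    and 'violations' (list of breached metric names).
    Returns summary counts plus the top rows for the email body."""
    # Lowest score first; URL breaks ties deterministically.
    ordered = sorted(violating_pages, key=lambda p: (p["score"], p["url"]))
    by_metric = Counter(m for p in violating_pages for m in p["violations"])
    return {
        "total_pages": len(violating_pages),
        "total_breaches": sum(by_metric.values()),
        "by_metric": dict(by_metric),        # e.g. {"lcp": 7, "cls": 3}
        "rows": ordered[:max_rows],          # detailed rows in the email
        "truncated": len(violating_pages) > max_rows,
    }
```

When `truncated` is true, the email's "showing first ten of N" note and the untruncated totals come from the same pass, which is why the headline counts stay accurate even when the row list is cut off.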
Digest timing aligns with your scheduled test frequency for that site. The footer of the email states that it was generated from the automated schedule, which keeps expectations aligned: this is not a real-time push from your CDN; it is the post-run account of what the lab saw after the last completed sweep.
Recipients are organisation admins because alert routing is tied to account responsibility. If you need a wider broadcast inside a client team, forward the digest or pull the same numbers into your stand-up doc. For Slack-first teams, the product’s data model reserves other alert channels for when those integrations ship end to end; today’s reliable path for delivery is email.
Cooldowns and why they exist
A page that fails a budget on Monday will often fail again on Tuesday until someone ships a fix. Without guardrails you would receive a fresh digest with the same headline every day. Apogee Watcher applies cooldown logic keyed to page, strategy, and time since the last alert for that combination. The goal is simple: to signal when something newly breaks or re-breaks, not to ping you on every run while the underlying issue is unchanged.
Cooldowns interact with your schedule. A site on a daily cadence still gets timely reminders; a weekly site batches more change into each run. If you tighten budgets after a major refactor, expect a burst of legitimate new violations while the system learns what “normal” looks like under the new line. That is working as intended.
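Conceptually, the cooldown is a gate evaluated per (page, strategy) pair before a breach is allowed to generate fresh mail. The window length and names below are assumptions for illustration, not the product's actual values:

```python
from datetime import datetime, timedelta

COOLDOWN = timedelta(hours=24)  # assumed window, not Watcher's real setting

def should_alert(last_alert_at, now, cooldown=COOLDOWN):
    """Alert if this (page, strategy) pair has never alerted, or its
    last alert is older than the cooldown window; otherwise stay quiet
    even though the page is still failing."""
    return last_alert_at is None or (now - last_alert_at) >= cooldown
```

The effect is exactly the behaviour described above: a page that keeps failing on an unchanged issue goes quiet inside the window, while a fix followed by a re-break starts a fresh alert cycle.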
How this fits next to policy and people
Budgets answer what crossed a line. People still answer who fixes it and how it gets prioritised. Many teams pair Watcher with a lightweight policy doc so on-call knows which breaches page the SEO lead versus the platform team. Our Slack alert policy template for web performance teams is written for Slack-shaped workflows, but the same sections (ownership, severity, quiet hours) translate directly to email-first teams: paste the digest link into the ticket, attach the URL list, and move on.
If you sell performance work to clients, budgets also give you a defensible story: you are not arguing from a one-off Lighthouse screenshot taken on someone’s laptop; you are pointing to thresholds you agreed in writing and time-stamped breaches after scheduled runs. That pairs naturally with the prospecting angle in From monitoring to pipeline, even though this spotlight stays on product mechanics rather than sales motion.
Getting started in a few concrete steps
- Add or select a site and confirm your page list covers the templates you care about. Use automatic page discovery if the inventory has drifted.
- Open budgets for that site and set mobile and desktop thresholds to match your standard or the client contract. Start from the template post if you do not want to guess at seconds and milliseconds.
- Choose email as the alert channel on each active budget row your plan allows, and verify admin membership on the organisation so the right inboxes receive digests.
- Let at least one scheduled run complete after deploy. If nothing breaches, you will not get mail, which is also a useful signal.
When you are ready to stress-test the workflow, temporarily lower a threshold on a staging URL you control, run a test, and confirm the digest lists the expected metric. Roll the threshold back once you have validated routing.
Create a free account to configure budgets, scheduled PageSpeed tests, and email digests without wiring the PageSpeed Insights API yourself.
FAQ
Do I need separate budgets for mobile and desktop?
You should set both if you care about both experiences. Lab scores often diverge because assets, layout, and third-party behaviour differ by viewport. A strategy whose budget is empty or inactive simply skips evaluation for that form factor.
Will I get one email per failing page?
No. Email notifications are digests: one message per site per run (for the email channel), with detailed rows for up to ten pages and summary totals for the full set of violations.
Who receives the digest?
Active organisation admins on the account. Viewer or manager roles do not automatically receive budget mail; adjust membership if someone else should be in that admin list.
What if I only want alerts on production, not staging?
Keep staging on its own site record with stricter or looser budgets, or pause budgets on environments you do not want to page yourself about. The product evaluates whatever URLs you attach to that site.
Does Apogee Watcher replace my status page or incident tool?
No. It tells you that lab metrics crossed thresholds you set after scheduled PageSpeed runs. You still route that signal through your normal engineering and client communication channels.
Are Slack notifications available?
Additional channels are part of the roadmap. Today, rely on email digests for delivery; check current plan details in the app for which channels your tier exposes.