If you only ever managed one production URL, a single tool tab might be enough. The moment you support a small portfolio, “multi site performance monitoring” becomes a different job: you need a shared place to see which properties are green, which need attention, and which tests ran last week without opening five bookmarks and three spreadsheets. This product spotlight is about how Apogee Watcher keeps that work in one dashboard so agency teams are not re-learning a new surface for every account.
Why “one login per client” breaks down
A common starting pattern looks sensible at two or three sites: create a free account in a speed tool for Client A, another for Client B, export PDFs, file them. Past roughly five properties the cracks show. Credentials scatter, renewal dates do not line up, and the person who “owns” the PageSpeed account for Client C left six months ago. You end up with performance data in more places than the code you ship.
The alternative is a single paid stack with API keys, which fixes identity but not structure. A folder of API keys and a shared spreadsheet is still not a portfolio-level view. It does not answer “across all retainers, who is drifting on mobile LCP this month?” in one pass. A dashboard is not a branding nicety. It is the minimum surface to run multi site performance monitoring as an operational habit rather than a series of one-off tasks.
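To make "one pass" concrete, here is a minimal TypeScript sketch of that portfolio-level question, assuming a hypothetical store of run records; the RunRecord shape and the drift threshold are illustrative, not Watcher's actual API:

```typescript
// Hypothetical run record, illustrative only: one lab test result per row.
interface RunRecord {
  site: string;                      // client property, e.g. "client-a.example"
  strategy: "mobile" | "desktop";
  lcpMs: number;                     // lab Largest Contentful Paint, in ms
  ranAt: Date;
}

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// "Across all retainers, who is drifting on mobile LCP this month?"
// Assumes `runs` spans only the current and previous month.
function driftingSites(runs: RunRecord[], monthStart: Date, toleranceMs = 200): string[] {
  const thisMonth = new Map<string, number[]>();
  const lastMonth = new Map<string, number[]>();
  for (const r of runs) {
    if (r.strategy !== "mobile") continue;
    const bucket = r.ranAt >= monthStart ? thisMonth : lastMonth;
    const samples = bucket.get(r.site) ?? [];
    samples.push(r.lcpMs);
    bucket.set(r.site, samples);
  }
  const drifting: string[] = [];
  for (const [site, current] of thisMonth) {
    const previous = lastMonth.get(site);
    if (previous && median(current) - median(previous) > toleranceMs) drifting.push(site);
  }
  return drifting;
}
```

A spreadsheet can hold the same rows; the difference is that a dashboard keeps this kind of question answered continuously instead of waiting for someone to rebuild a pivot table.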
What sits under the umbrella: organisation, sites, pages
Apogee Watcher is built around a simple hierarchy you can explain to a new hire in a sentence.
Organisation (your team’s workspace)
You work inside an organisation tied to your subscription. Team members, billing, and the shared list of sites all live there. The goal is to mirror how agencies already think: one company, many client properties, one place to set rules.
Site (a monitored property)
Each site is a hostname or project you are responsible for, with its own page list, schedule, budgets, and test history. When you add a new client, you add a new site. You are not duplicating a whole new environment.
Page (a URL you measure)
Under each site you maintain the URLs you care about, whether they were added by hand or discovered from sitemaps and crawls. The dashboard ties every scheduled run back to a site, then to a page, so a regression is never just “a number in a void”. You can see which property it belongs to.
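As a rough mental model, the hierarchy reads naturally as nested types. This is a minimal sketch with illustrative field names, not Watcher's actual schema:

```typescript
// Illustrative shapes only; field names are assumptions, not Watcher's schema.
interface Organisation {
  name: string;                 // your agency workspace, tied to the subscription
  members: string[];            // team logins sharing one workspace
  sites: Site[];                // one entry per client property
}

interface Site {
  hostname: string;                                   // a monitored property
  schedule: "daily" | "weekly";                       // per-site cadence
  budgets: { lcpMs?: number; cls?: number; inpMs?: number };
  pages: Page[];
}

interface Page {
  url: string;                                        // a URL you measure
  source: "manual" | "sitemap" | "crawl";             // how it was added
  history: { ranAt: Date; lcpMs: number }[];          // runs tie back here
}
```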
That split matches accountability. The account team asks about a named brand; the dashboard answers at the site level. The engineering team asks which template broke; the answer is at the page level with lab metrics and, where available, field signals from the same test flow.
One PageSpeed relationship, not one key per client
A practical pain for agencies is Google’s own quota and credential story. Requiring every client to create a Cloud project and hand you an API key is slow, sometimes impossible under procurement, and it creates a support burden when keys rotate. Apogee Watcher holds the PageSpeed Insights relationship for you. You configure sites and pages inside the product. You are not pasting a different key into a script for each domain.
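For contrast, this is roughly what the per-client pattern looks like when you script against the PageSpeed Insights v5 endpoint yourself. The endpoint and response shape are Google's real API; the keys map and the labLcp helper are hypothetical, and the key bookkeeping around them is exactly what the shared integration removes:

```typescript
const PSI = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";

// One Google API key per client: one secret per domain to provision,
// rotate, and chase through procurement. (Hypothetical env var names.)
const keys: Record<string, string> = {
  "client-a.example": process.env.CLIENT_A_PSI_KEY ?? "",
  "client-b.example": process.env.CLIENT_B_PSI_KEY ?? "",
};

async function labLcp(domain: string, path = "/"): Promise<number> {
  const params = new URLSearchParams({
    url: `https://${domain}${path}`,
    strategy: "mobile",
    key: keys[domain], // breaks quietly the day this client's key rotates
  });
  const res = await fetch(`${PSI}?${params}`);
  if (!res.ok) throw new Error(`PSI ${res.status} for ${domain}`);
  const body = await res.json();
  // Lab LCP from the Lighthouse audit in the PSI response, in ms.
  return body.lighthouseResult.audits["largest-contentful-paint"].numericValue;
}
```

Multiply that by every domain, a cron host to run it on, and a place to store results, and the appeal of one managed integration is clear.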
We make the same point elsewhere, when comparing manual checks with automation and when walking through set-up for multiple properties, but the product angle is direct: the dashboard is the control plane for that shared integration. The value is not "mystery sauce". It is that your team can standardise on one workflow and one place to see usage, instead of a patchwork of keys and cron hosts.
What you actually see in one place
Site list as portfolio health
From the top level you can scan which sites you are monitoring, open one quickly, and work through onboarding or review without context-switching between unrelated products. The point is to support the weekly habit described in a core web vitals monitoring checklist for agencies: assign owners, set schedules, and review trends on a fixed rhythm. A list that lives next to the tests makes that believable. A set of files on someone’s desktop does not.
Per-site configuration, one product
Each site can carry its own schedule, budgets, and alert settings. A quiet brochure site and a high-traffic retail build do not need identical thresholds. The dashboard keeps those differences in one product surface so your defaults stay firm while exceptions stay visible. For how thresholds connect to email, read the spotlight on performance budgets and email alerts; the question here is which screen you use when you are juggling many customers.
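As an illustration of firm defaults with visible exceptions, a configuration in this spirit might look like the sketch below; the shapes are assumptions for the example, not Watcher's settings format:

```typescript
// Shared defaults; the budget numbers follow Google's published "good"
// thresholds (LCP <= 2.5 s, CLS <= 0.1).
const defaults = {
  schedule: "weekly" as const,
  budgets: { lcpMs: 2500, cls: 0.1 },
  alertTo: ["perf-team@agency.example"],
};

const sites = {
  // Quiet brochure site: the defaults are fine.
  "brochure-client.example": { ...defaults },
  // High-traffic retail build: tighter cadence, stricter budgets, extra inbox.
  "retail-client.example": {
    ...defaults,
    schedule: "daily" as const,
    budgets: { lcpMs: 2000, cls: 0.05 },
    alertTo: [...defaults.alertTo, "retail-oncall@agency.example"],
  },
};
```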
History tied to the site
Storing run history under the same site object means you can show change over time without re-importing old CSVs. When you are building a monthly review or explaining movement to a client, you are drawing from a single system of record, not a merge of exports.
How this supports agency workflows, not just engineering
Sales and account teams
When you sell performance monitoring or audits as a line item, the pitch is stronger if you can describe a concrete operating model. “We add your site to our Watcher org, we agree budgets, and you get the same test cadence and reporting as our other retainer clients” is easier to trust than “we have some spreadsheets and we run PageSpeed when someone has time”.
Delivery and DevOps
The same view helps the person on call. If a client emails about a slow checkout, the first move is to open that client’s site, confirm when tests last ran, and compare the failing URL to neighbouring templates. A dashboard built around sites keeps that path short. It lines up with the time-and-cost case we make in automated vs manual PageSpeed testing: you still invest hours in real fixes, but you spend fewer hours chasing where last week’s numbers went.
Prospecting next to monitoring
The product also supports workflows where performance evidence feeds outreach. The detailed funnel story lives in from monitoring to pipeline. Even a lightweight version of that story needs a list of sites and URLs you can act on. One dashboard means your monitoring inventory and your prospecting inventory can stay aligned under the same roof instead of in separate silos you reconcile by hand.
Fit with plans and team size
We are not going to recite a price list here, because plans change, but the design intent is consistent: the Agency tier is for organisations that add many sites under one subscription without treating each new hostname as a new billing project. A Solo or small-team plan is for operators who do not need that breadth. The dashboard layout is the same; the limit is how many sites and which schedule frequencies your tier allows. If you are deciding whether to standardise on one product for the whole portfolio, start with how many sites you need live in the next quarter and how often you want tests to run, then match the tier to that, not the other way around.
Deeper team permissioning (who can edit which site, invite-only access for clients) is on the roadmap; this post describes what is core today: one organisation, many sites, shared visibility for your team.
Getting to value quickly
- Create or join an organisation and add your first site with a name your team will recognise in six months, not a code name that made sense in Slack on day one.
- Attach the URLs you need, using automatic page discovery if the sitemap is trustworthy, or a manual list for launch; a sketch of the discovery step follows this list.
- Set a schedule and budgets that match the client’s risk, not a single global default, then confirm alerts route to the inboxes you actually read.
- Review the site list weekly in the same stand-up where you triage other client health signals. If a site is stale or out of contract, retire it from the list so the portfolio view stays honest.
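For the discovery step above, here is a minimal sketch of sitemap-based page discovery, assuming a plain urlset sitemap at the conventional path; a real implementation also has to handle sitemap index files, robots.txt pointers, and stale entries:

```typescript
// Fetch /sitemap.xml and pull out same-host URLs. Illustrative only:
// a tolerant regex over <loc> entries is enough for a sketch.
async function discoverPages(hostname: string): Promise<string[]> {
  const res = await fetch(`https://${hostname}/sitemap.xml`);
  if (!res.ok) return []; // no usable sitemap: fall back to a manual list
  const xml = await res.text();
  const urls = [...xml.matchAll(/<loc>\s*([^<\s]+)\s*<\/loc>/g)].map(m => m[1]);
  // Keep only same-host URLs so third-party entries do not sneak in.
  return urls.filter(u => new URL(u).hostname === hostname);
}
```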
If you are new to the metrics themselves, the overview what are Core Web Vitals in 2026 still gives the fastest grounding before you tune per-site priorities.
What this is not
A unified dashboard is not a replacement for deep debugging. When you need a custom slow connection profile, a filmstrip, or a trace, you will still open a diagnostic tool built for that job. A unified dashboard is also not a replacement for RUM if you have already instrumented real users. Watcher provides synthetic tests, plus CrUX field data where the API supplies it; your analytics stack is still the right place for product-specific funnels. The dashboard's job is to keep synthetic multi site performance monitoring consistent and at hand so those deeper sessions happen on real problems, not on surprises you could have seen from a schedule.
Summary
Multi site performance monitoring only works as a process when your list of properties, your tests, and your thresholds live in one system your team can trust. Apogee Watcher models that as organisations, sites, and pages, with a single PageSpeed integration and a portfolio-friendly surface. If you are outgrowing ad hoc tabs and want one place to run that loop, the product is built around that need.
Start with a free account to add a handful of sites, point discovery at a sitemap, and see the dashboard with your own data.
FAQ
Is every client site a separate “account”?
No. You add each client as a site under your organisation. Your team uses one product login; clients do not get their own Watcher login unless you choose to share reporting another way. The model is one agency workspace, many monitored properties.
Do we need a Google API key for every domain?
No. Watcher uses the product’s own PageSpeed integration. You do not manage a separate key per client for standard scheduled testing. If your plan or an advanced integration ever required your own key, the admin area would call that out explicitly. Today the path is: add a site, add pages, run tests.
How is this different from a spreadsheet of URLs?
A spreadsheet is static. The dashboard is tied to schedules, stored results, budgets, and history per site. It answers “what did we know last week?” without someone rebuilding a pivot table. You can still export when you need a side deck, but the source of truth lives in the app.
Can we use this if we are not an agency?
Yes. The same “many sites, one org” model fits in-house teams with several brands, regional properties, or microsites. The article uses agency language because that is our primary fit, not because the product refuses other shapes.
What about performance budgets and alerts across many sites?
Budgets and alert channels are configured per site so a strict retail client and a light brochure site are not forced into the same numbers. The behaviour is covered in the performance budgets and email alerts spotlight. This post is about the container those settings live in: one dashboard, many sites, clear ownership per property.
Will you add more org-level and client-facing controls later?
We are working on a richer team and access story, including how agencies eventually expose views to their own customers. The current value is a clean internal portfolio view with room to grow. Watch release notes and changelogs for when those features land.
Where can I read more on automated monitoring in general?
For the business case, start with why agencies need automated performance monitoring in 2026. For setup detail, the how-to for multiple sites walks the same object model from a procedural angle.