Apogee Watcher

Posted on • Originally published at apogeewatcher.com
How to Set Up Automated PageSpeed Monitoring for Multiple Sites

The Problem: Manual Testing Doesn't Scale

You manage 10 client sites. Each has 5 key pages. You want to test both mobile and desktop. That's 100 tests — every time you want a snapshot of your portfolio's performance.

Running those tests manually in PageSpeed Insights takes about 2 minutes each. That's over 3 hours of tab-switching, waiting, and copy-pasting numbers into a spreadsheet. And by the time you finish the last site, the first site's results are already hours old.

This is why automated monitoring exists. Set it up once, and every test runs itself.

What Automated PageSpeed Monitoring Should Do

Before choosing a tool or approach, define what you need:

Must-Have Features

  • Scheduled testing — Tests run automatically at configured intervals (daily, hourly, etc.)
  • Multiple sites — Manage all your sites from one place
  • Mobile and desktop — Test both strategies for every page
  • Historical data — Store results over time for trend analysis
  • Alerting — Get notified when performance degrades below a threshold
  • No manual intervention — Once configured, it runs without you

Nice-to-Have Features

  • Automated page discovery — The tool finds pages to monitor from your sitemap
  • Performance budgets — Set thresholds per site and get alerted on violations
  • Client-ready reports — Generate PDFs or summaries for clients
  • Team access — Multiple team members with role-based permissions
  • API access — Pull data programmatically for custom dashboards
  • White-label branding — Reports and dashboards with your agency branding

Approach 1: DIY with Open-Source Tools

If you have the technical chops and want full control, you can build automated monitoring with open-source tools.

Using Lighthouse CI

Lighthouse CI runs Lighthouse tests in a CI/CD pipeline and stores results over time.

Setup:

  1. Install Lighthouse CI:
npm install -g @lhci/cli

  2. Create a lighthouserc.json configuration:
{
  "ci": {
    "collect": {
      "url": [
        "https://client1.com/",
        "https://client1.com/pricing",
        "https://client2.com/",
        "https://client2.com/products"
      ],
      "numberOfRuns": 3,
      "settings": {
        "preset": "desktop"
      }
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", {"minScore": 0.9}],
        "largest-contentful-paint": ["warn", {"maxNumericValue": 2500}],
        "cumulative-layout-shift": ["warn", {"maxNumericValue": 0.1}]
      }
    },
    "upload": {
      "target": "lhci",
      "serverBaseUrl": "https://your-lhci-server.com"
    }
  }
}

  3. Set up an LHCI server for storing and viewing results:
npm install -g @lhci/server
lhci server --storage.storageMethod=sql --storage.sqlDatabasePath=./lhci.db

  4. Schedule runs with cron:
# Run daily at 7 AM
0 7 * * * cd /path/to/config && lhci autorun


Pros: Free, fully customisable, integrates with CI/CD pipelines. Cons: Requires server setup and maintenance, no built-in alerting, no multi-tenant management, no client-facing reports, limited mobile testing (emulated only).

Using Google PageSpeed Insights API Directly

For teams comfortable with scripting, you can call the PSI API directly:

# Basic API call
curl "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://example.com&strategy=mobile&key=YOUR_API_KEY"


Build a monitoring script:

import requests
from datetime import datetime

SITES = [
    {"name": "Client 1", "urls": ["https://client1.com/", "https://client1.com/pricing"]},
    {"name": "Client 2", "urls": ["https://client2.com/", "https://client2.com/products"]},
]

API_KEY = "your-api-key"
STRATEGIES = ["mobile", "desktop"]

def run_test(url, strategy):
    endpoint = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
    params = {"url": url, "strategy": strategy, "key": API_KEY}
    # PSI tests can take 30+ seconds; set a generous timeout and fail loudly on errors
    response = requests.get(endpoint, params=params, timeout=60)
    response.raise_for_status()
    data = response.json()
    # API response structure: lighthouseResult contains categories and audits
    # See https://developers.google.com/speed/docs/insights/v5/reference/pagespeedapi/runpagespeed
    lh = data.get("lighthouseResult", {})
    cats = lh.get("categories", {})
    audits = lh.get("audits", {})
    return {
        "url": url,
        "strategy": strategy,
        "score": (cats.get("performance", {}).get("score") or 0) * 100,
        "lcp": audits.get("largest-contentful-paint", {}).get("numericValue", 0),
        "cls": audits.get("cumulative-layout-shift", {}).get("numericValue", 0),
        "tbt": audits.get("total-blocking-time", {}).get("numericValue", 0),
        "tested_at": datetime.now().isoformat()
    }

# Run tests
for site in SITES:
    for url in site["urls"]:
        for strategy in STRATEGIES:
            result = run_test(url, strategy)
            print(f"{site['name']} | {url} | {strategy} | Score: {result['score']:.0f}")
            # Store result in database, CSV, or send to Slack

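The script's final comment leaves storage open. As one option, here is a minimal sketch that appends each result to a CSV file — the filename and helper name are my own choices, not part of the article:

```python
import csv
import os

RESULTS_FILE = "psi_results.csv"  # hypothetical filename
FIELDS = ["url", "strategy", "score", "lcp", "cls", "tbt", "tested_at"]

def store_result(result, path=RESULTS_FILE):
    """Append one test result to a CSV file, writing a header row on first use."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(result)
```

Calling `store_result(result)` inside the test loop gives you a growing history file you can chart or import into a spreadsheet.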

Pros: Maximum flexibility, use any language, store data anywhere. Cons: You build and maintain everything — alerting, storage, reporting, UI, error handling, quota management, scheduling. Significant development and maintenance effort.

API quota note: Google's PageSpeed Insights API allows 25,000 requests per day on the free tier. Each URL + strategy (mobile/desktop) combination counts as one request, so 25 clients × 5 pages × 2 strategies at one run per day is 250 requests/day (7,500/month) — well within quota. Scale up carefully; request a quota increase if you need more.
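Before scaling up, the arithmetic above is worth sanity-checking in a few lines (the 25,000/day figure is the free-tier quota quoted above; the function name is mine):

```python
DAILY_QUOTA = 25_000  # PSI API free-tier daily request quota

def daily_requests(clients, pages_per_client, strategies=2, runs_per_day=1):
    """Requests consumed per day: each URL + strategy test is one API request."""
    return clients * pages_per_client * strategies * runs_per_day

usage = daily_requests(clients=25, pages_per_client=5)
print(usage, usage * 30)     # 250 7500
print(usage <= DAILY_QUOTA)  # True
```

Running the same portfolio hourly (`runs_per_day=24`) jumps to 6,000 requests/day — still within quota, but a reminder that test frequency, not client count, is usually what eats the budget.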

Approach 2: Purpose-Built Monitoring Platforms

For most teams, a purpose-built platform is the better choice. You get all the features without the maintenance burden.

What to Look for in a Platform

  • Multi-site management — One dashboard for all clients
  • Automated page discovery — Don't manually enter every URL
  • Scheduled testing — Tests run without manual intervention
  • Performance budgets — Set thresholds and get alerted on violations
  • Alerting (email, Slack) — Know immediately when something breaks
  • Historical data — Track trends over weeks and months
  • Reporting — Generate client-ready reports
  • Team access — Multiple users with appropriate permissions
  • API access — Integrate with your existing tools

Platform Comparison Factors

When evaluating platforms, consider:

  • Pricing model — Per site? Per test? Flat rate? How does cost scale with your growth?
  • Test accuracy — Does it use Google's actual PageSpeed Insights API or its own analysis?
  • Mobile testing — Real device testing or emulated? (Emulated is standard; real device testing is rare and expensive)
  • Data retention — How long are historical results stored?
  • Alert flexibility — Can you set custom thresholds? Multiple channels? Cooldowns?
  • Report customisation — Can you brand reports with your agency logo?

Approach 3: Hybrid (Recommended for Growing Agencies)

The most effective setup combines a monitoring platform with CI/CD integration:

In Production: Monitoring Platform

Use a platform like Apogee Watcher for ongoing production monitoring:

  • Daily automated tests on all client sites
  • Performance budgets per site with alert notifications
  • Historical trend tracking
  • Monthly client reports

In Development: Lighthouse CI

Use Lighthouse CI in your deployment pipeline:

  • Run performance tests on every PR or deployment
  • Block deploys that regress performance below budget
  • Catch issues before they reach production

The Workflow

Development:
  PR opened → Lighthouse CI runs → Budget check passes → PR approved

Production:
  Deploy → Monitoring platform detects change → Tests run →
  Results compared against budgets →
  If regression: Alert fired → Team investigates → Fix deployed

Ongoing:
  Daily tests run automatically → Results stored →
  Monthly report generated → Sent to client


This hybrid approach catches issues at two points: before they ship (CI) and after they ship (production monitoring). If something slips through CI (which can happen — lab tests don't catch everything), production monitoring is your safety net.

Step-by-Step Setup Guide

Regardless of which approach you choose, follow these steps:

Step 1: Inventory Your Sites

Create a spreadsheet of all sites you need to monitor:

Client,Domain,Key Pages,Strategy,Priority
Client A,clienta.com,"/, /pricing, /contact",Both,High
Client B,clientb.com,"/, /products, /checkout",Both,High
Client C,clientc.com,"/, /blog, /about",Both,Medium

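If you keep the inventory as a CSV like the one above, it can feed a monitoring script directly. A sketch — the column names match the spreadsheet; the function and the assumption of https URLs are mine:

```python
import csv
import io

def load_sites(csv_text):
    """Turn the inventory CSV into {name, urls} entries for a test loop."""
    sites = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        domain = row["Domain"]
        # "Key Pages" is a quoted, comma-separated list of paths
        paths = [p.strip() for p in row["Key Pages"].split(",")]
        sites.append({
            "name": row["Client"],
            "urls": [f"https://{domain}{path}" for path in paths],
        })
    return sites

inventory = '''Client,Domain,Key Pages,Strategy,Priority
Client A,clienta.com,"/, /pricing, /contact",Both,High
Client B,clientb.com,"/, /products, /checkout",Both,High'''

sites = load_sites(inventory)
print(sites[0]["urls"])
# ['https://clienta.com/', 'https://clienta.com/pricing', 'https://clienta.com/contact']
```

This keeps the spreadsheet as the single source of truth: update the CSV, and the next run picks up the new pages.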

Step 2: Define Performance Budgets

Set budgets for each site type. For an introduction to Core Web Vitals (LCP, INP, CLS), see our practical guide. Use the templates from our Performance Budget Thresholds Template and The Complete Guide to Performance Budgets as a starting point:

  • E-commerce: LCP ≤ 2.0s, INP ≤ 150ms, CLS ≤ 0.05
  • Content sites: LCP ≤ 2.5s, INP ≤ 200ms, CLS ≤ 0.10
  • Landing pages: LCP ≤ 2.0s, INP ≤ 150ms, CLS ≤ 0.05
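These budgets can live in a small config and be checked against each test result. A sketch using the thresholds above — the dictionary layout and function name are mine; timings are in milliseconds:

```python
# Budget thresholds per site type, from the list above (ms for timings)
BUDGETS = {
    "ecommerce":    {"lcp": 2000, "inp": 150, "cls": 0.05},
    "content":      {"lcp": 2500, "inp": 200, "cls": 0.10},
    "landing_page": {"lcp": 2000, "inp": 150, "cls": 0.05},
}

def violations(result, site_type):
    """Return the metrics in `result` that exceed the site type's budget."""
    budget = BUDGETS[site_type]
    return {m: result[m] for m, limit in budget.items()
            if m in result and result[m] > limit}

print(violations({"lcp": 2300, "cls": 0.02}, "ecommerce"))  # {'lcp': 2300}
print(violations({"lcp": 2300, "cls": 0.02}, "content"))    # {}
```

Note how the same measurement passes a content-site budget but fails an e-commerce one — the whole point of per-site-type thresholds.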

Step 3: Configure Your Monitoring Tool

In your chosen platform:

  1. Add each site with its domain
  2. Run page discovery to find all monitorable pages (or add key pages manually)
  3. Set performance budgets per site and strategy (mobile/desktop)
  4. Configure alerts — Choose channels (email, Slack) and recipients
  5. Set the test schedule — Daily is standard; twice daily for high-priority sites

Step 4: Establish Your Review Routine

Automated monitoring doesn't mean zero human attention. Establish a rhythm:

  • Daily: Glance at alerts. If none fired, you're good.
  • Weekly: Review the dashboard for trends. Are any sites gradually degrading?
  • Monthly: Generate client reports. Review budget compliance. Adjust budgets if needed.

Step 5: Integrate with Your Workflow

Connect monitoring to your team's existing workflow:

  • Slack integration: Performance alerts appear alongside other team notifications
  • Ticketing: Create tickets automatically when alerts fire (via webhooks)
  • Reporting: Schedule client reports to auto-generate and send
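Webhook payload formats vary by platform, so purely as an illustration: a handler that turns a hypothetical performance-alert payload into ticket fields. Every field name here is invented, not a documented schema:

```python
def alert_to_ticket(payload):
    """Map a (hypothetical) performance-alert webhook payload to ticket fields."""
    title = (f"[Perf] {payload['site']} {payload['metric'].upper()} over budget "
             f"({payload['value']} > {payload['budget']})")
    body = (f"Page: {payload['url']}\n"
            f"Strategy: {payload['strategy']}\n"
            f"Metric: {payload['metric']} = {payload['value']} "
            f"(budget: {payload['budget']})\n"
            f"Fired at: {payload['fired_at']}")
    return {"title": title, "body": body}

ticket = alert_to_ticket({
    "site": "Client A", "url": "https://clienta.com/pricing",
    "strategy": "mobile", "metric": "lcp",
    "value": 3100, "budget": 2500, "fired_at": "2025-01-10T07:05:00Z",
})
print(ticket["title"])  # [Perf] Client A LCP over budget (3100 > 2500)
```

The payoff of a structured title like this: whoever triages the ticket knows the site, the metric, and the severity without opening it.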

Step 6: Onboard Your Team

Everyone who might respond to alerts needs to know:

  • What the metrics mean
  • What the budgets are
  • How to investigate a regression
  • How to run a manual test when needed
  • Where to find historical data

Common Pitfalls

1. Monitoring Too Many Pages

Start with key pages (homepage, landing pages, conversion pages). Add more over time. Monitoring 500 pages on day one creates noise without value.

2. Setting Alerts Too Aggressively

Performance scores fluctuate naturally. A score of 89 on one test and 91 on the next doesn't mean anything changed. Use cooldowns (e.g. 8–24 hours between alerts for the same page) and only alert on sustained or significant changes. Consider alerting only when a metric is over budget for 2+ consecutive tests.
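The "2+ consecutive tests" rule is easy to express in code. A sketch of the debounce logic — the structure is mine, not any specific platform's:

```python
from collections import defaultdict

CONSECUTIVE_REQUIRED = 2  # alert only after this many breaches in a row

# (url, metric) -> count of consecutive over-budget tests
_breach_streak = defaultdict(int)

def should_alert(url, metric, value, budget):
    """True only once the metric has been over budget for N consecutive tests."""
    key = (url, metric)
    if value > budget:
        _breach_streak[key] += 1
    else:
        _breach_streak[key] = 0  # a passing test resets the streak
    return _breach_streak[key] >= CONSECUTIVE_REQUIRED

print(should_alert("https://a.com/", "lcp", 2600, 2500))  # False: first breach
print(should_alert("https://a.com/", "lcp", 2700, 2500))  # True: second in a row
```

A single noisy run never fires an alert; two in a row does. Raising `CONSECUTIVE_REQUIRED` trades alert latency for fewer false positives.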

3. Not Monitoring Mobile

Desktop scores are usually better than mobile. If you only monitor desktop, you're seeing an optimistic picture. Always monitor both strategies.

4. Forgetting to Review

Automated tests are only useful if someone looks at the results. Automated alerts are only useful if someone responds. Build the review habit into your weekly routine.

5. Not Adjusting Over Time

As sites evolve (new features, new content, redesigns), your monitoring setup needs to evolve too. Add new pages, adjust budgets, update alert recipients.


What You Get When It's in Place

Once automated monitoring is running, you get:

  • One place for all sites — No more tab-switching or separate spreadsheets; every client's results live in one dashboard.
  • History and trends — Stored results over time so you can see whether a site is improving or degrading.
  • Alerts that fire when it matters — Not every tiny fluctuation; thresholds and cooldowns mean you're notified when metrics cross the line.
  • No maintenance burden — No cron jobs to debug, no API keys to rotate, no scripts that break after a platform update. The tool runs the tests; you review and act.

That's the benefit of a purpose-built platform versus DIY: you spend time on performance decisions, not on keeping the monitoring system alive.


The End Result

When automated monitoring is set up correctly, your daily effort looks like this:

  1. Check Slack — Any alerts overnight? If not, everything's fine.
  2. If alert fired — Click the link in the alert, review the data, investigate, fix or create a ticket.
  3. Done. — Total time: 5–15 minutes.

Compare that to 3+ hours of manual PageSpeed testing. Automated monitoring doesn't just save time — it fundamentally changes your relationship with performance from reactive to proactive.

What you can achieve: You can have scheduled monitoring across all your sites, with alerts and history, without running scripts or maintaining pipelines yourself. One dashboard, clear budgets, and client-ready reporting when you need it — so performance stays visible without becoming a second job.


FAQ

What's the difference between Lighthouse CI and a monitoring platform?
Lighthouse CI runs in your CI/CD pipeline — it catches regressions before they ship. A monitoring platform (like Apogee Watcher) runs tests on production on a schedule — it catches issues after they ship. Use both: CI for pre-deploy gates, monitoring for ongoing production visibility.

How many pages should I monitor per site?
Start with 5–10 key pages: homepage, top landing pages, conversion pages. Add more over time. Monitoring 500 pages on day one creates noise without value. Quality of coverage beats quantity.

Does automated monitoring require a sitemap?
Many tools can discover pages from a sitemap automatically. If the client has no sitemap, add key URLs manually. Most platforms support both sitemap discovery and manual URL entry.

What's the cost of building vs buying automated monitoring?
Building from scratch (API calls, storage, alerting, UI, multi-tenant management) takes months, and ongoing maintenance adds up. Purpose-built platforms cost $50–200/month and eliminate that burden. For most teams, buying is the better ROI.


Apogee Watcher automates PageSpeed monitoring for agencies managing multiple sites. Automated discovery, daily testing, performance budgets, alerts, and client-ready reports — all from one dashboard. Join the waitlist for early access.
