Stop guessing your site speed — monitor it systematically. One script, any number of pages, run it weekly.
## The problem with Lighthouse
Lighthouse is great for debugging one page. But when you're managing a SaaS landing page, 20 blog posts, and an e-commerce catalog, "run Lighthouse manually" isn't a workflow — it's wishful thinking.
You need:
- All your pages checked on a schedule
- A flag when any page exceeds your latency threshold
- Zero manual steps
Here's how to build that in about 30 lines of Node.js.
## What the analyze endpoint returns
We'll use the SnapAPI analyze endpoint, which headlessly loads a page and returns structured performance + content data:
```json
{
  "url": "https://example.com",
  "load_time_ms": 1842,
  "word_count": 1204,
  "technologies": ["React", "Vercel", "Google Analytics"],
  "buttons": ["Get started", "Sign in", "View demo"],
  "forms": [{ "id": "signup", "fields": 3 }],
  "headings": ["Why Example beats the competition", "Pricing"],
  "links": ["https://example.com/pricing", "https://example.com/about"],
  "cta": "Get started"
}
```
The key fields for performance monitoring:
| Field | What it tells you |
|---|---|
| `load_time_ms` | Full page load to network idle (real browser, not simulated) |
| `word_count` | Ballpark for content sprawl — bloated pages load slower |
| `technologies` | Detect heavy JS frameworks, tracking pixels, etc. |
| `buttons` / `forms` | CTA presence — catch regression when a deploy breaks your hero CTA |
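As a small example of what you can do with `technologies`: flag any page whose detected stack includes a framework from a watchlist. The watchlist below is an assumption; tune it to whatever counts as heavy for your sites:

```js
// Hypothetical watchlist of frameworks we consider heavy.
const HEAVY_FRAMEWORKS = ["React", "Angular", "Ember"];

// Return the detected technologies that appear on the watchlist.
function heavyTech(technologies) {
  return technologies.filter(t => HEAVY_FRAMEWORKS.includes(t));
}

// With the sample response above:
// heavyTech(["React", "Vercel", "Google Analytics"]) → ["React"]
```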
## The 30-line script
```js
// monitor-pages.js
const SNAPAPI_KEY = process.env.SNAPAPI_KEY;
const THRESHOLD_MS = 3000; // flag anything over 3 seconds

const PAGES = [
  "https://yoursite.com",
  "https://yoursite.com/pricing",
  "https://yoursite.com/blog",
  "https://yoursite.com/features",
  // add as many as you like
];

async function analyzePage(url) {
  const res = await fetch(`https://snapapi.tech/analyze?url=${encodeURIComponent(url)}&apiKey=${SNAPAPI_KEY}`);
  if (!res.ok) throw new Error(`${url} → ${res.status}`);
  return res.json();
}

async function runMonitor() {
  const results = await Promise.all(PAGES.map(analyzePage));
  const slow = results.filter(r => r.load_time_ms > THRESHOLD_MS);

  console.log(`\n✅ Checked ${results.length} pages`);
  console.log(`⚠️ Slow pages (>${THRESHOLD_MS}ms): ${slow.length}`);

  if (slow.length > 0) {
    console.log("\nSlow pages:");
    slow.forEach(r => console.log(`  ${r.load_time_ms}ms — ${r.url}`));
    process.exit(1); // non-zero exit for CI/GitHub Actions
  }
}

runMonitor().catch(err => { console.error(err); process.exit(1); });
```
Run it:
```sh
SNAPAPI_KEY=your_key node monitor-pages.js
```
Output:
```
✅ Checked 4 pages
⚠️ Slow pages (>3000ms): 1

Slow pages:
  4231ms — https://yoursite.com/blog
```
## Adding a Slack alert
Replace the `console.log` block with a Slack webhook call when slow pages are found:
```js
async function notifySlack(slowPages) {
  const lines = slowPages.map(r => `• *${r.load_time_ms}ms* — ${r.url}`).join("\n");
  await fetch(process.env.SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      text: `🐌 *Page speed alert:* ${slowPages.length} slow page(s)\n${lines}`
    })
  });
}
```
Then call it inside `runMonitor`:
```js
if (slow.length > 0) {
  await notifySlack(slow);
  process.exit(1);
}
```
## Automating it with GitHub Actions
Create `.github/workflows/page-speed.yml`:
```yaml
name: Page Speed Monitor

on:
  schedule:
    - cron: '0 9 * * 1'  # Every Monday at 9am UTC
  workflow_dispatch:

jobs:
  monitor:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '20'
      - name: Run page speed monitor
        env:
          SNAPAPI_KEY: ${{ secrets.SNAPAPI_KEY }}
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
        run: node monitor-pages.js
```
Add `SNAPAPI_KEY` and `SLACK_WEBHOOK_URL` to your repository secrets. Now every Monday morning, you'll get a Slack ping if any page exceeds your threshold — or silence if everything's fine.
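If you use the GitHub CLI, you can set both secrets from the terminal instead of the repository settings page (the values below are placeholders):

```sh
# Store both secrets in the current repository.
gh secret set SNAPAPI_KEY --body "your_snapapi_key"
gh secret set SLACK_WEBHOOK_URL --body "your_slack_webhook_url"
```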
## Real-world use cases
### SaaS landing pages
Your homepage, pricing page, and signup flow are conversion-critical. A 4-second load on /pricing is quietly killing conversions. Set the threshold at 2500ms and get alerted the moment a deploy introduces a regression.
### E-commerce category pages
Category pages are high-traffic, often image-heavy, and tend to balloon over time as products are added. Running weekly analysis catches the creep before it compounds.
### Blog post audits
Blog posts get updated, embedded widgets get added, images get re-uploaded without compression. A weekly scan across your 50 most-trafficked posts will surface the ones that have gone soft.
### Agency client monitoring
If you manage multiple client sites, add all their URLs to a single list and send each client a weekly report. The `technologies` field also lets you confirm their CDN, analytics, and tag manager are still loaded correctly after deploys.
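One way to fan that out: group the analyze results by hostname before reporting, so each client gets its own batch. A minimal sketch (using the hostname as the client key is an assumption; map domains to client names however you track them):

```js
// Group analyze results by hostname so each client's pages can be
// reported separately (e.g. one Slack message or email per client).
function groupByClient(results) {
  const byClient = {};
  for (const r of results) {
    const host = new URL(r.url).hostname;
    (byClient[host] ||= []).push(r);
  }
  return byClient;
}
```

Each group can then be passed to `notifySlack` (or an email sender) on its own.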
## Compared to running Lighthouse manually
| | Lighthouse (manual) | This script |
|---|---|---|
| Pages per run | 1 | Unlimited |
| Automation | None | GitHub Actions / cron |
| Slack alerts | No | Yes |
| Tech stack detection | No | Yes |
| CTA regression detection | No | Yes |
| Setup time | 0 (built into Chrome) | ~15 minutes |
| Weekly cost | $0 | ~$0.04 (for 20 pages) |
Lighthouse is still useful for deep dives — waterfall charts, render-blocking resource diagnosis, etc. But for monitoring, the script above wins on every axis except initial familiarity.
## Extending it: full content audit
The same endpoint surfaces `word_count`, `headings`, `buttons`, and `forms`. You can extend the monitor to also catch:
- **Missing CTA:** `if (!r.cta) flag(r.url, "no CTA detected")`
- **Thin content:** `if (r.word_count < 300) flag(r.url, "thin page")`
- **Broken forms:** `if (r.forms.length === 0 && url.includes("/signup")) flag(...)`
One API call, one page load — you get performance, content, and structure all at once.
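Those checks can be folded into a single audit pass per result. A sketch (the issue strings returned are illustrative, not part of the API):

```js
// Run the content checks above against one analyze result and
// return an array of human-readable issues (empty array = healthy page).
function auditPage(r) {
  const issues = [];
  if (!r.cta) issues.push("no CTA detected");
  if (r.word_count < 300) issues.push("thin page");
  if (r.forms.length === 0 && r.url.includes("/signup")) issues.push("signup page has no form");
  return issues;
}
```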
## Get started
SnapAPI has a free tier — 100 calls/month, no credit card. The analyze endpoint is included. If you're monitoring more than a few pages weekly, the paid plan starts at $9/month for 1,000 calls.
Full docs: snapapi.tech/docs#analyze
The script above is self-contained — drop in your `PAGES` array, add your key, and you're monitoring in under 15 minutes.
---

## Top comments (1)
The "one page at a time in Chrome DevTools" workflow does not scale at all — I run a static site with 90k+ pages deployed via Cloudflare CDN and monitoring is a real operational problem, so the GitHub Actions automation angle here is genuinely useful.
One distinction worth flagging: `load_time_ms` (network idle) is a proxy metric, not the actual Core Web Vitals Google uses for ranking (LCP, CLS, INP). They correlate but diverge in interesting ways — a page can have a fast `load_time_ms` but a terrible LCP if the largest image loads late, or a great LCP but bad CLS from layout shifts during hydration. If the goal is ranking signals, Lighthouse CI on a URL sample gives you the actual CWV numbers. For alerting on load regressions and CTA presence, this approach is cleaner and cheaper.
The technologies field for verifying CDN presence after deploys is a nice use case — we do rclone syncs to DigitalOcean Spaces with Cloudflare in front, and checking that Cloudflare is still correctly intercepting after a config change is exactly the kind of regression that could silently slip through.
For anyone scaling the `PAGES` array past a few hundred URLs: `Promise.all` will hit rate limits fast. Worth dropping in a concurrency limiter (`p-limit` or a simple batching loop with delay) before scaling up.
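A sketch of that batching approach (batch size and delay are illustrative defaults; `analyzeFn` stands in for the script's `analyzePage`):

```js
// Process URLs in small batches with a pause between batches,
// instead of firing every request at once with Promise.all.
async function analyzeInBatches(urls, analyzeFn, batchSize = 5, delayMs = 1000) {
  const results = [];
  for (let i = 0; i < urls.length; i += batchSize) {
    const batch = urls.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(analyzeFn))));
    // Pause before the next batch (skip the delay after the last one).
    if (i + batchSize < urls.length) {
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
  return results;
}
```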