DEV Community

Boehner

Monitor Page Load Times Across Any URL Fleet Without Running a Browser

Every performance monitoring tool I've tried has the same problem: it runs inside your infrastructure. Lighthouse needs Node. WebPageTest needs a server. Puppeteer needs Chrome running somewhere. For a quick external check of "is my site actually fast for real users?", they're all overkill.

I wanted something dead simple: give it a list of URLs, get back load times, detect regressions.

Here's what I built — and how you can replicate it in about 70 lines.

The approach

Instead of running a headless browser, I use a screenshot API that already has a Puppeteer cluster running in the cloud. The API's load_time_ms field captures real browser load time (navigation start to load event). It isn't an official Core Web Vitals metric, but it tracks the same regressions you'd care about for CWV.

The endpoint I'm using is SnapAPI's /v1/analyze:

curl "https://snapapi.tech/v1/analyze?url=https://example.com" \
  -H "X-API-Key: YOUR_KEY"

Response includes:

{
  "url": "https://example.com",
  "title": "Example Domain",
  "load_time_ms": 83,
  "page_type": "other",
  "technologies": [],
  "word_count": 67
}

83ms is genuinely fast (it's a static HTML page). A typical Next.js app with a cold start might return 1,200–2,400ms. More on that threshold in a minute.

Building the monitor

Here's the full script. It reads a list of URLs, measures load time for each, compares to a stored baseline, and alerts if anything regresses beyond a threshold:

// perf-monitor.js
const fs   = require("fs");
const path = require("path");

const API_KEY     = process.env.SNAPAPI_KEY;
const BASELINE    = path.resolve("./baseline.json");
const THRESHOLD   = 0.20; // alert if 20% slower than baseline
const REPORT_FILE = path.resolve("./perf-report.json");

const URLS = [
  "https://yourdomain.com",
  "https://yourdomain.com/pricing",
  "https://yourdomain.com/docs",
  "https://yourdomain.com/blog",
];

async function measure(url) {
  const res = await fetch(
    `https://snapapi.tech/v1/analyze?url=${encodeURIComponent(url)}`,
    { headers: { "X-API-Key": API_KEY } }
  );
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
  const data = await res.json();
  return { url, load_time_ms: data.load_time_ms, measured_at: new Date().toISOString() };
}

async function run() {
  // Load or initialise baseline
  const baseline = fs.existsSync(BASELINE)
    ? JSON.parse(fs.readFileSync(BASELINE, "utf8"))
    : {};

  const results = [];
  const alerts  = [];

  for (const url of URLS) {
    try {
      const { load_time_ms, measured_at } = await measure(url);
      const base = baseline[url];
      const regression = base
        ? (load_time_ms - base) / base
        : null;

      const entry = { url, load_time_ms, measured_at,
        regression_pct: regression !== null ? Math.round(regression * 100) : null };
      results.push(entry);

      if (regression !== null && regression > THRESHOLD) {
        alerts.push(`🔴 REGRESSION: ${url} — ${load_time_ms}ms (was ${base}ms, +${Math.round(regression * 100)}%)`);
      }

      // Update baseline with running average
      baseline[url] = base
        ? Math.round((base * 0.8) + (load_time_ms * 0.2)) // exponential moving avg
        : load_time_ms;

    } catch (err) {
      console.error(`Failed: ${url} — ${err.message}`);
    }
  }

  // Write outputs
  fs.writeFileSync(BASELINE, JSON.stringify(baseline, null, 2));
  fs.writeFileSync(REPORT_FILE, JSON.stringify({ run_at: new Date().toISOString(), results, alerts }, null, 2));

  if (alerts.length > 0) {
    console.log("\nPERFORMANCE ALERTS:");
    alerts.forEach(a => console.log(a));
    process.exit(1); // Non-zero exit for CI integration
  } else {
    console.log(`✅ All ${results.length} pages within threshold. Fastest: ${Math.min(...results.map(r => r.load_time_ms))}ms`);
  }
}

run();

Run it:

SNAPAPI_KEY=your_key node perf-monitor.js

Example output:

✅ All 4 pages within threshold. Fastest: 83ms

Or if something regressed after a deploy:

PERFORMANCE ALERTS:
🔴 REGRESSION: https://yourdomain.com/blog — 2840ms (was 1100ms, +158%)
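A note on the baseline update inside the script: it's an exponential moving average (80% old baseline, 20% new measurement), so a single slow run nudges the baseline instead of replacing it. A minimal sketch of how that rule behaves over repeated measurements:

```javascript
// Sketch of the script's baseline update rule: 80% old value, 20% new measurement.
function updateBaseline(base, measurement) {
  return base === undefined
    ? measurement                                  // first run seeds the baseline
    : Math.round(base * 0.8 + measurement * 0.2);  // later runs blend in slowly
}

let base;                           // no baseline yet
base = updateBaseline(base, 1000);  // seeded at 1000
base = updateBaseline(base, 3000);  // one slow outlier
console.log(base);                  // 1400: moved, but only by 20% of the gap
base = updateBaseline(base, 1000);  // back to normal
console.log(base);                  // 1320: drifting back down
```

That means the baseline self-heals after a transient spike, but a sustained slowdown still shifts it — which is why the alert fires on the current measurement, not the updated baseline.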

Integrating into CI/CD

The script exits non-zero on regression, which is all you need for CI gates.

GitHub Actions:

- name: Performance regression check
  env:
    SNAPAPI_KEY: ${{ secrets.SNAPAPI_KEY }}
  run: node perf-monitor.js

Add it after your deploy step. If load time spikes more than 20% from baseline, the job fails and blocks the merge.
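Fleshed out into a full job, that might look like this (workflow name, trigger, and Node version are illustrative — adjust to your pipeline):

```yaml
# .github/workflows/perf-check.yml (illustrative)
name: perf-check
on:
  push:
    branches: [main]   # or trigger after your deploy job completes
jobs:
  perf:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Performance regression check
        env:
          SNAPAPI_KEY: ${{ secrets.SNAPAPI_KEY }}
        run: node perf-monitor.js
```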

As a scheduled cron job:

# .github/workflows/perf-monitor.yml
on:
  schedule:
    - cron: '0 */6 * * *'  # every 6 hours

This gives you external monitoring — checking from GitHub's infrastructure, not your own server. Catches CDN issues, third-party script slowdowns, database query regressions.

What the load_time_ms field actually measures

Worth being explicit: load_time_ms is the time from navigation start (before DNS) to the browser's load event (all synchronous resources fetched). It's equivalent to PerformanceTiming.loadEventEnd - PerformanceTiming.navigationStart.

This is different from:

  • TTFB (time to first byte) — much earlier, misses render blocking
  • LCP (Largest Contentful Paint) — later, measures visual completeness
  • TTI (Time to Interactive) — later still, measures JS execution

For regression detection, load_time_ms is a good proxy. A spike here almost always maps to something real: a new heavy dependency, a blocking third-party script, a missing CDN cache.

Rough thresholds for context

| Score | load_time_ms |
| --- | --- |
| Excellent | < 500ms |
| Good | 500 – 1,500ms |
| Needs improvement | 1,500 – 3,000ms |
| Poor | > 3,000ms |

These aren't official CWV buckets (those use LCP), but they match real-world user perception closely enough for regression detection.
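If you want the report to carry a label as well as a number, those buckets are easy to encode (the names mirror the table above; the cutoffs are the same rough ones, not official CWV thresholds):

```javascript
// Map a load time to the rough buckets from the table above.
function scoreLoadTime(ms) {
  if (ms < 500)  return "excellent";
  if (ms < 1500) return "good";
  if (ms < 3000) return "needs improvement";
  return "poor";
}

console.log(scoreLoadTime(83));    // "excellent"
console.log(scoreLoadTime(2840));  // "needs improvement"
```

Adding `score: scoreLoadTime(load_time_ms)` to each report entry makes the JSON readable at a glance without changing the alerting logic.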

Extending it

Multi-region monitoring: The API runs from a fixed server location. For geographic distribution, run the same script from runners in different regions; GitHub-hosted runners don't let you pick a region, so use self-hosted runners (or another CI provider with region selection) placed where your users are.

Slack/webhook alerts: Replace the console.log alert section with:

if (alerts.length > 0) {
  await fetch(process.env.SLACK_WEBHOOK, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: alerts.join("\n") })
  });
}

Competitor benchmarking: Same script, different URL list. Track how your load time compares to three competitors, measured from the same location, at the same time:

const URLS = [
  "https://yoursite.com/pricing",
  "https://competitorA.com/pricing",
  "https://competitorB.com/pricing",
];
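Since every entry in perf-report.json has the same shape, ranking yourself against the competition is a one-liner over the results array. A sketch, with made-up sample numbers:

```javascript
// Rank report entries fastest-first (entry shape matches the script's output).
const results = [
  { url: "https://yoursite.com/pricing",    load_time_ms: 1240 },  // sample values
  { url: "https://competitorA.com/pricing", load_time_ms: 890 },
  { url: "https://competitorB.com/pricing", load_time_ms: 2100 },
];

const ranked = [...results].sort((a, b) => a.load_time_ms - b.load_time_ms);
ranked.forEach((r, i) => console.log(`${i + 1}. ${r.url} (${r.load_time_ms}ms)`));
```

Because all three pages are measured from the same location in the same run, the comparison is apples-to-apples in a way that mixing numbers from different tools never is.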

The full picture

The script above is ~70 lines, has no dependencies beyond Node built-ins (fetch is global since Node 18), and catches real regressions. It's external (testing what real users hit, not your internal network), schedulable, and CI-compatible.

The only thing you need is a SnapAPI key — the free tier covers 100 requests/month, which is enough for most monitoring setups.


SnapAPI is a web intelligence API that wraps headless Chromium — screenshot, PDF, structured page analysis, batch processing. Tip: if you build something with it, I'd like to see it.
