Wilson Xu
How to Build a Web Performance Monitoring CLI with Node.js and Lighthouse

Web performance directly impacts user experience, SEO rankings, and conversion rates. Yet most developers only check performance manually through Chrome DevTools or PageSpeed Insights — reactive, inconsistent, and easy to forget.

What if you could monitor any website's performance metrics from your terminal, automatically, on every deploy?

In this tutorial, we'll build a CLI tool that runs Lighthouse audits programmatically, tracks Core Web Vitals over time, and alerts you when performance degrades. You'll learn how to use Lighthouse's Node.js API, store historical metrics, and build a practical developer tool from scratch.

What We're Building

Our CLI — perfwatch — will:

  1. Run Lighthouse audits on any URL from the terminal
  2. Extract Core Web Vitals and companion lab metrics (LCP, CLS, FCP, TBT, Speed Index, TTFB)
  3. Store results in a local JSON history file
  4. Compare current results against baselines and previous runs
  5. Output a clean, color-coded terminal report
  6. Exit with non-zero codes when thresholds are breached (CI-friendly)

Prerequisites

  • Node.js 18+ installed
  • Basic familiarity with npm and CLI tools
  • Chrome or Chromium browser installed (Lighthouse uses it under the hood)

Project Setup

mkdir perfwatch && cd perfwatch
npm init -y

Install our dependencies:

npm install lighthouse chrome-launcher commander chalk cli-table3

Here's what each package does:

  • lighthouse — Google's automated auditing tool, available as a Node.js module
  • chrome-launcher — Launches a headless Chrome instance for Lighthouse to use
  • commander — Parses CLI arguments and subcommands
  • chalk — Adds color to terminal output
  • cli-table3 — Renders formatted tables in the terminal

Update package.json to make our tool executable:

{
  "name": "perfwatch",
  "version": "1.0.0",
  "bin": {
    "perfwatch": "./bin/perfwatch.js"
  },
  "type": "module"
}

Step 1: Launch Chrome and Run Lighthouse Programmatically

Create lib/audit.js:

import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

export async function runAudit(url, options = {}) {
  const chrome = await chromeLauncher.launch({
    chromeFlags: ['--headless', '--no-sandbox', '--disable-gpu']
  });

  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      output: 'json',
      onlyCategories: ['performance'],
      formFactor: options.mobile ? 'mobile' : 'desktop',
      screenEmulation: options.mobile
        ? { mobile: true, width: 375, height: 812 }
        : { mobile: false, width: 1350, height: 940 },
      // commander's --no-throttle flag arrives here as options.throttle === false
      throttling: options.throttle === false
        ? { cpuSlowdownMultiplier: 1, throughputKbps: 0 }
        : undefined,
    });

    const { audits, categories } = result.lhr;

    return {
      url,
      timestamp: new Date().toISOString(),
      score: Math.round(categories.performance.score * 100),
      metrics: {
        fcp: audits['first-contentful-paint']?.numericValue,
        lcp: audits['largest-contentful-paint']?.numericValue,
        cls: audits['cumulative-layout-shift']?.numericValue,
        tbt: audits['total-blocking-time']?.numericValue,
        si: audits['speed-index']?.numericValue,
        ttfb: audits['server-response-time']?.numericValue,
      }
    };
  } finally {
    await chrome.kill();
  }
}

The key insight here is that Lighthouse is not just the panel in Chrome DevTools — it's a full Node.js module. By launching Chrome programmatically via chrome-launcher, we get complete control over the auditing process. The formFactor option lets us simulate both mobile and desktop experiences, which matters because Google uses mobile-first indexing.
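
The `result.lhr` object is a plain JSON report: every audit lives under `audits` keyed by its id, with the lab value in `numericValue`. A minimal sketch of the extraction logic against a mocked report (the shape mirrors the fields `runAudit` reads; the sample numbers are invented):

```javascript
// Hypothetical sample shaped like the fields runAudit reads from result.lhr.
const sampleLhr = {
  categories: { performance: { score: 0.93 } },
  audits: {
    'largest-contentful-paint': { numericValue: 2100 },
    'cumulative-layout-shift': { numericValue: 0.04 },
  },
};

// The same extraction pattern as runAudit, applied to the mock.
function summarize(lhr) {
  return {
    score: Math.round(lhr.categories.performance.score * 100),
    lcp: lhr.audits['largest-contentful-paint']?.numericValue,
    cls: lhr.audits['cumulative-layout-shift']?.numericValue,
  };
}

console.log(summarize(sampleLhr)); // { score: 93, lcp: 2100, cls: 0.04 }
```

Because the report is plain data, this extraction layer is trivially unit-testable without ever launching Chrome.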

Step 2: Store and Compare Historical Results

Performance data is only useful when you can see trends. Create lib/history.js:

import { readFile, writeFile, mkdir } from 'node:fs/promises';
import { join, dirname } from 'node:path';
import { homedir } from 'node:os';

const HISTORY_DIR = join(homedir(), '.perfwatch');
const HISTORY_FILE = join(HISTORY_DIR, 'history.json');

export async function loadHistory() {
  try {
    const data = await readFile(HISTORY_FILE, 'utf-8');
    return JSON.parse(data);
  } catch {
    return {};
  }
}

export async function saveResult(result) {
  await mkdir(HISTORY_DIR, { recursive: true });
  const history = await loadHistory();
  const key = new URL(result.url).hostname;

  if (!history[key]) {
    history[key] = [];
  }

  history[key].push(result);

  // Keep last 100 results per domain
  if (history[key].length > 100) {
    history[key] = history[key].slice(-100);
  }

  await writeFile(HISTORY_FILE, JSON.stringify(history, null, 2));
  return history[key];
}

export function calculateTrend(results) {
  if (results.length < 2) return null;

  const recent = results.slice(-5);
  const previous = results.slice(-10, -5);

  if (previous.length === 0) return null;

  const avgRecent = recent.reduce((sum, r) => sum + r.score, 0) / recent.length;
  const avgPrevious = previous.reduce((sum, r) => sum + r.score, 0) / previous.length;

  return {
    direction: avgRecent > avgPrevious ? 'improving' : avgRecent < avgPrevious ? 'declining' : 'stable',
    delta: Math.round(avgRecent - avgPrevious),
    avgRecent: Math.round(avgRecent),
    avgPrevious: Math.round(avgPrevious),
  };
}

We store results per hostname so you can track multiple sites. The calculateTrend function compares your last 5 runs against the 5 before that — enough data to spot meaningful changes without being noisy.
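
To see how the two windows behave, here is calculateTrend run standalone against a fabricated score series (the ten scores are invented for illustration):

```javascript
// calculateTrend from lib/history.js, inlined for a standalone demo.
function calculateTrend(results) {
  if (results.length < 2) return null;

  const recent = results.slice(-5);        // last 5 runs
  const previous = results.slice(-10, -5); // the 5 before that

  if (previous.length === 0) return null;

  const avgRecent = recent.reduce((sum, r) => sum + r.score, 0) / recent.length;
  const avgPrevious = previous.reduce((sum, r) => sum + r.score, 0) / previous.length;

  return {
    direction: avgRecent > avgPrevious ? 'improving' : avgRecent < avgPrevious ? 'declining' : 'stable',
    delta: Math.round(avgRecent - avgPrevious),
  };
}

// Ten fabricated runs: scores climb from the low 70s into the 80s.
const runs = [70, 72, 71, 74, 73, 80, 82, 81, 84, 83].map((score) => ({ score }));

console.log(calculateTrend(runs)); // { direction: 'improving', delta: 10 }
```

The older window averages 72, the recent one 82, so the tool reports a +10 improving trend. Averaging over five runs also smooths out Lighthouse's inherent run-to-run variance.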

Step 3: Format the Terminal Report

Create lib/report.js:

import chalk from 'chalk';
import Table from 'cli-table3';

function scoreColor(score) {
  if (score >= 90) return chalk.green;
  if (score >= 50) return chalk.yellow;
  return chalk.red;
}

function metricStatus(name, value) {
  const thresholds = {
    fcp: { good: 1800, poor: 3000 },
    lcp: { good: 2500, poor: 4000 },
    cls: { good: 0.1, poor: 0.25 },
    tbt: { good: 200, poor: 600 },
    si: { good: 3400, poor: 5800 },
    ttfb: { good: 800, poor: 1800 },
  };

  const t = thresholds[name];
  if (!t || value === undefined) return chalk.gray('N/A');

  const formatted = name === 'cls'
    ? value.toFixed(3)
    : `${Math.round(value)}ms`;

  if (value <= t.good) return chalk.green(formatted);
  if (value <= t.poor) return chalk.yellow(formatted);
  return chalk.red(formatted);
}

export function printReport(result, trend) {
  const color = scoreColor(result.score);

  console.log();
  console.log(chalk.bold(`  Performance Report: ${result.url}`));
  console.log(chalk.gray(`  ${result.timestamp}`));
  console.log();
  console.log(`  Overall Score: ${color.bold(result.score + '/100')}`);

  if (trend) {
    const arrow = trend.direction === 'improving' ? '↑' : trend.direction === 'declining' ? '↓' : '→';
    const trendColor = trend.direction === 'improving' ? chalk.green : trend.direction === 'declining' ? chalk.red : chalk.gray;
    console.log(`  Trend: ${trendColor(`${arrow} ${trend.direction} (${trend.delta > 0 ? '+' : ''}${trend.delta})`)}`);
  }

  console.log();

  const table = new Table({
    head: ['Metric', 'Value', 'Rating'],
    style: { head: ['cyan'] }
  });

  const metricNames = {
    fcp: 'First Contentful Paint',
    lcp: 'Largest Contentful Paint',
    cls: 'Cumulative Layout Shift',
    tbt: 'Total Blocking Time',
    si: 'Speed Index',
    ttfb: 'Time to First Byte',
  };

  for (const [key, label] of Object.entries(metricNames)) {
    const value = result.metrics[key];
    table.push([label, metricStatus(key, value), getRating(key, value)]);
  }

  console.log(table.toString());
  console.log();
}

function getRating(name, value) {
  const thresholds = {
    fcp: { good: 1800, poor: 3000 },
    lcp: { good: 2500, poor: 4000 },
    cls: { good: 0.1, poor: 0.25 },
    tbt: { good: 200, poor: 600 },
    si: { good: 3400, poor: 5800 },
    ttfb: { good: 800, poor: 1800 },
  };

  const t = thresholds[name];
  if (!t || value === undefined) return chalk.gray('N/A');

  if (value <= t.good) return chalk.green('Good');
  if (value <= t.poor) return chalk.yellow('Needs Work');
  return chalk.red('Poor');
}

The thresholds here match Google's official Core Web Vitals benchmarks. When a metric is green, it meets the "good" threshold. Yellow means "needs improvement." Red means it's actively hurting your users' experience.
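
The same cut-points reduce to a tiny pure function, which is handy for unit-testing the rating logic without running Chrome (the LCP numbers below are copied from the thresholds table above):

```javascript
// Rate a lab value against "good"/"poor" cut-points, Core Web Vitals style.
function rate(value, { good, poor }) {
  if (value <= good) return 'good';
  if (value <= poor) return 'needs-improvement';
  return 'poor';
}

const LCP = { good: 2500, poor: 4000 }; // milliseconds

console.log(rate(1800, LCP)); // 'good'
console.log(rate(3200, LCP)); // 'needs-improvement'
console.log(rate(5000, LCP)); // 'poor'
```
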

Step 4: Wire Up the CLI

Create bin/perfwatch.js:

#!/usr/bin/env node

import { program } from 'commander';
import chalk from 'chalk';
import { runAudit } from '../lib/audit.js';
import { saveResult, loadHistory, calculateTrend } from '../lib/history.js';
import { printReport } from '../lib/report.js';

program
  .name('perfwatch')
  .description('Monitor web performance from your terminal')
  .version('1.0.0');

program
  .command('audit <url>')
  .description('Run a Lighthouse performance audit')
  .option('-m, --mobile', 'Simulate mobile device', false)
  .option('--no-throttle', 'Disable network/CPU throttling')
  .option('--threshold <score>', 'Minimum acceptable score (exit 1 if below)', parseInt)
  .option('--json', 'Output raw JSON instead of formatted report')
  .action(async (url, options) => {
    if (!url.startsWith('http')) {
      url = `https://${url}`;
    }

    console.log(chalk.gray(`  Running Lighthouse audit on ${url}...`));

    try {
      const result = await runAudit(url, options);
      const history = await saveResult(result);
      const trend = calculateTrend(history);

      if (options.json) {
        console.log(JSON.stringify({ ...result, trend }, null, 2));
      } else {
        printReport(result, trend);
      }

      if (options.threshold && result.score < options.threshold) {
        console.log(chalk.red(`  Score ${result.score} is below threshold ${options.threshold}`));
        process.exit(1);
      }
    } catch (error) {
      console.error(chalk.red(`  Audit failed: ${error.message}`));
      process.exit(1);
    }
  });

program
  .command('history <domain>')
  .description('Show audit history for a domain')
  .option('-n, --last <count>', 'Number of recent results', parseInt, 10)
  .action(async (domain, options) => {
    const history = await loadHistory();
    const results = history[domain];

    if (!results || results.length === 0) {
      console.log(chalk.yellow(`  No history found for ${domain}`));
      return;
    }

    const recent = results.slice(-options.last);
    console.log(chalk.bold(`\n  Performance History: ${domain}`));
    console.log(chalk.gray(`  Last ${recent.length} audits\n`));

    for (const r of recent) {
      const color = r.score >= 90 ? chalk.green : r.score >= 50 ? chalk.yellow : chalk.red;
      const date = new Date(r.timestamp).toLocaleDateString();
      const lcp = r.metrics.lcp !== undefined ? `${Math.round(r.metrics.lcp)}ms` : 'N/A';
      console.log(`  ${chalk.gray(date)}  ${color(r.score.toString().padStart(3))}/100  LCP: ${lcp}  CLS: ${r.metrics.cls?.toFixed(3) || 'N/A'}`);
    }

    const trend = calculateTrend(results);
    if (trend) {
      console.log(`\n  Trend: ${trend.direction} (${trend.delta > 0 ? '+' : ''}${trend.delta} points)`);
    }
    console.log();
  });

program
  .command('compare <url1> <url2>')
  .description('Compare performance of two URLs')
  .option('-m, --mobile', 'Simulate mobile device', false)
  .action(async (url1, url2, options) => {
    // Normalize bare domains up front so later display logic sees full URLs.
    url1 = url1.startsWith('http') ? url1 : `https://${url1}`;
    url2 = url2.startsWith('http') ? url2 : `https://${url2}`;

    console.log(chalk.gray('  Running audits on both URLs...'));

    const [result1, result2] = await Promise.all([
      runAudit(url1, options),
      runAudit(url2, options),
    ]);

    console.log(chalk.bold('\n  Performance Comparison\n'));

    const metrics = ['fcp', 'lcp', 'cls', 'tbt', 'si', 'ttfb'];
    const names = {
      fcp: 'First Contentful Paint',
      lcp: 'Largest Contentful Paint',
      cls: 'Cumulative Layout Shift',
      tbt: 'Total Blocking Time',
      si: 'Speed Index',
      ttfb: 'Time to First Byte',
    };

    console.log(`  ${chalk.cyan(url1)}: ${result1.score}/100`);
    console.log(`  ${chalk.cyan(url2)}: ${result2.score}/100`);
    console.log();

    for (const m of metrics) {
      const v1 = result1.metrics[m];
      const v2 = result2.metrics[m];
      if (v1 === undefined || v2 === undefined) continue;

      const winner = v1 < v2 ? url1 : v2 < v1 ? url2 : 'tie';
      const diff = Math.abs(v1 - v2);
      const format = m === 'cls' ? (v) => v.toFixed(3) : (v) => `${Math.round(v)}ms`;
      // Strip the scheme so the label works whether or not the user typed one.
      const winnerLabel = winner.replace(/^https?:\/\//, '').split('/')[0];

      console.log(`  ${names[m]}: ${format(v1)} vs ${format(v2)} ${winner !== 'tie' ? chalk.green(`(${winnerLabel} wins by ${format(diff)})`) : ''}`);
    }
    console.log();
  });

program.parse();
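
Both audit and compare prepend https:// to bare domains. Pulling that rule into a small helper (a hypothetical `normalizeUrl`, not part of the code above) keeps it in one place:

```javascript
// Prepend https:// unless the argument already carries an http(s) scheme.
function normalizeUrl(input) {
  return /^https?:\/\//.test(input) ? input : `https://${input}`;
}

console.log(normalizeUrl('example.com'));        // https://example.com
console.log(normalizeUrl('http://example.com')); // http://example.com
```
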

Step 5: Add CI/CD Integration

The --threshold flag makes perfwatch CI-ready. Here's how to use it in GitHub Actions:

# .github/workflows/perf.yml
name: Performance Check
on: [push]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install -g perfwatch
      - run: perfwatch audit https://your-site.com --threshold 85

If your site's performance score drops below 85, the CI job fails — catching regressions before they reach production.
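
Push-triggered audits catch regressions you ship yourself; a scheduled run also catches drift from third-party scripts and infrastructure changes. One possible addition to the same workflow's trigger block (the cron expression here is just an example):

```yaml
# Run the audit every morning at 06:00 UTC, in addition to every push.
on:
  push:
  schedule:
    - cron: '0 6 * * *'
```
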

Testing It Out

Make the CLI executable and test locally:

chmod +x bin/perfwatch.js
node bin/perfwatch.js audit google.com

You should see a color-coded report showing Google's Core Web Vitals with trend tracking.

Run it again to see the trend indicator:

node bin/perfwatch.js audit google.com
node bin/perfwatch.js history google.com

Publishing to npm

npm publish --access public

Now anyone can install it globally:

npm install -g perfwatch
perfwatch audit example.com --mobile --threshold 80

What We Learned

Building this tool taught us several things:

  1. Lighthouse is a Node.js module first — the Chrome extension is just one interface. The programmatic API gives you much more control.
  2. Core Web Vitals have specific thresholds — Google publishes exact numbers for "good," "needs improvement," and "poor" for each metric.
  3. Performance tracking needs history — a single snapshot is nearly useless. Trends tell the real story.
  4. CLI tools can replace dashboards — not every monitoring solution needs a web UI. A terminal command that runs in CI is often more practical.

Next Steps

You could extend perfwatch with:

  • Multiple URL batch auditing — audit your entire sitemap in one command
  • Slack/Discord notifications — alert your team when performance degrades
  • Budget files — define per-metric thresholds in a .perfwatchrc config
  • Competitive benchmarking — track how your site compares to competitors over time

The code for this project is available on GitHub and npm.


Wilson Xu is a developer tool builder with 7+ published npm packages and a focus on CLI-based developer experience. Find more of his tools at github.com/chengyixu.
