Google Lighthouse is great for one-off audits. But if you fix your LCP on Tuesday and ship a bundle regression on Friday, nobody notices until Search Console emails you three weeks later.
Lighthouse tells you how fast your site is today. kanmi tells you when it gets slower.
```bash
npm install -g @knfrmd/kanmi
kanmi audit https://your-site.com
```
```
Kanmi Performance Audit
https://your-site.com
mobile · Slow 4G, 4x CPU · LH 12.0.0 · 14.2s
─────────────────────────────────────────────
Performance      72
Accessibility    95
Best Practices  100
SEO              91

Lab Metrics
─────────────────────────────────────────────
LCP  4,200ms  NEEDS IMPROVEMENT
TBT    380ms  NEEDS IMPROVEMENT
CLS    0.003  GOOD
FCP  1,800ms  GOOD

Top Issues
─────────────────────────────────────────────
* Reduce unused JavaScript (~1.2s savings)
* Eliminate render-blocking resources (~800ms savings)
```
One command. Scores, lab metrics, top issues.
## What it does

- `kanmi audit <url>` - run Lighthouse, print results
- `kanmi monitor <url>` - run 3 audits (median), save to local history, detect regressions
- `kanmi ci` - enforce thresholds in GitHub Actions, exit 1 on failure
## Why not Lighthouse CI?

Lighthouse CI is powerful but assumes:

- A server (LHCI server or temporary storage)
- A `.lighthouserc.js` project config
- CI-specific setup before you see any value locally
kanmi works with one command. No server. No database. Just JSON files in `~/.kanmi/history/` and automatic comparison against your own baseline.
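A history entry might look roughly like this (a hypothetical shape for illustration; the field names are mine, not kanmi's actual schema):

```json
{
  "url": "example.com/",
  "timestamp": "2026-02-28T10:15:00Z",
  "runs": 3,
  "scores": { "performance": 0.92 },
  "metrics": { "lcp_ms": 2600, "tbt_ms": 380, "cls": 0.003 }
}
```

Flat JSON on disk means the baseline is greppable, diffable, and trivially deletable.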
## What about WebPageTest, Calibre, SpeedCurve?
Excellent tools for continuous monitoring and real-user dashboards. But they're hosted services with accounts, billing, and long-term data storage.
kanmi is designed for a different workflow: "did this deployment make performance worse?" - answered locally in under a minute, with no account required.
| Tool | Purpose |
|---|---|
| Lighthouse | One-off audits |
| Lighthouse CI | CI pipelines with server |
| WebPageTest / Calibre / SpeedCurve | Hosted monitoring + RUM |
| kanmi | Local regression detection |
## Monitoring catches what audits miss
```bash
kanmi monitor https://shop.example.com
```
This runs 3 Lighthouse audits (takes the median to reduce Lighthouse noise), saves the result, and compares against your last 5 runs:
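The median step is simple but worth doing: Lighthouse scores are noisy run to run, and the median of three samples ignores one outlier entirely. A sketch (the helper name is mine; kanmi's internals may differ):

```typescript
// Take the median of an odd number of samples to damp Lighthouse variance.
// Illustrative helper -- not kanmi's actual code.
function medianOf(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}

// Three LCP samples from repeated audits of the same page:
medianOf([4100, 4600, 3900]); // -> 4100 (the 4600 outlier is ignored)
```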
```
Kanmi Monitor
Run #8 | 2026-02-28 | 3 runs (median)
─────────────────────────────────────────────
Performance  92  unchanged

Regressions
CRIT  LCP: 2,100ms -> 2,600ms (+500ms)

Regression Policy
─────────────────────────────────────────────
Baseline  median of last 5 runs
Rule      flag if delta >= max(abs, baseline × rel%)
Fail      exit 1 on critical/high severity
```
Regression detection uses dual thresholds - absolute and relative. LCP flags at +150ms or +10%, whichever is larger. This prevents false positives on slow sites (where 150ms is noise) and catches small regressions on fast sites (where 10% matters).
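The rule fits in a few lines. This sketch uses the LCP thresholds from above (+150ms or +10%); treat the function name and signature as illustrative, not kanmi's API:

```typescript
// Dual-threshold regression check: flag only when the delta clears
// BOTH a noise floor (absolute) and a meaningful fraction (relative),
// i.e. delta >= max(absMs, baseline * relPct).
function isRegression(
  baseline: number,
  current: number,
  absMs: number = 150,
  relPct: number = 0.1,
): boolean {
  const delta = current - baseline;
  return delta >= Math.max(absMs, baseline * relPct);
}

isRegression(2100, 2600); // +500ms vs max(150, 210)  -> true
isRegression(8000, 8300); // +300ms vs max(150, 800)  -> false (slow-site noise)
isRegression(900, 1010);  // +110ms vs max(150, 90)   -> false (under the floor)
```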
It won't evaluate regressions until you have 5 baseline runs. No false alarms on thin data.
## CI gate
```bash
kanmi ci --urls https://example.com --performance 90
```
Exits 1 if thresholds fail or critical regressions are detected. Add --post-comment to annotate PRs.
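In GitHub Actions that becomes a short job. This is a hypothetical workflow sketch (the job name, trigger, and action versions are mine; only the `kanmi ci` invocation comes from above):

```yaml
# Hypothetical workflow -- adapt the trigger and versions to your repo.
name: perf-gate
on: [pull_request]
jobs:
  kanmi:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install -g @knfrmd/kanmi
      # A non-zero exit here fails the job and blocks the merge.
      - run: kanmi ci --urls https://example.com --performance 90
```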
## Engineering notes
Three decisions worth sharing:
**Score scale contract.** Lighthouse scores are 0-1 internally. Every wrapper I've seen converts to 0-100 somewhere and introduces bugs. kanmi keeps data as 0-1 everywhere - JSON output, storage, history. The 0-100 conversion happens once, at the terminal display layer. This is enforced by a JSON schema that CI validates on every push.
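The contract is easier to see in code than to describe. A minimal sketch (types and names are illustrative, not kanmi's actual source):

```typescript
// Data layer: scores stay 0-1, exactly as Lighthouse emits them.
interface AuditResult {
  performance: number; // always 0-1 in JSON output, storage, history
}

// Display layer: the ONE place where 0-1 becomes 0-100.
function displayScore(score01: number): number {
  return Math.round(score01 * 100);
}

const result: AuditResult = { performance: 0.72 };
displayScore(result.performance); // -> 72, only at the terminal
```

Because conversion lives in a single function, a stray `* 100` elsewhere shows up as a schema violation (a stored score above 1) rather than a silent doubling.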
**URL normalization.** If you monitor `https://www.example.com/?utm_source=email` Monday and `https://example.com/` Tuesday, those need to be the same baseline. kanmi strips protocols, `www.`, tracking params (14 of them), hash fragments, and default ports before storing. 40 unit tests cover this.
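The gist, sketched with Node's `URL` class (the tracking-param list here is a sample, not kanmi's full set of 14, and the function is illustrative):

```typescript
import { URL } from "node:url";

// Sample of common tracking params; kanmi strips 14 of them.
const TRACKING = ["utm_source", "utm_medium", "utm_campaign", "fbclid", "gclid"];

function normalizeUrl(raw: string): string {
  const u = new URL(raw);
  TRACKING.forEach((p) => u.searchParams.delete(p));
  const host = u.hostname.replace(/^www\./, "");
  const query = u.searchParams.toString();
  // Protocol and hash are simply not re-emitted; the URL class already
  // drops default ports (80/443) when parsing.
  return host + u.pathname + (query ? `?${query}` : "");
}

normalizeUrl("https://www.example.com/?utm_source=email"); // -> "example.com/"
normalizeUrl("https://example.com/");                      // -> "example.com/"
```

Both spellings normalize to the same key, so Monday's and Tuesday's runs land in one baseline.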
**"Stranger mode" CI.** A CI job packs the CLI into a tarball, installs it in an empty /tmp directory, and validates that the binary runs and the JSON output matches the schema. If it breaks for a fresh install, CI catches it before npm publish.
## Try it
```bash
npm install -g @knfrmd/kanmi
kanmi audit https://your-site.com
kanmi monitor https://your-site.com
```
Run `monitor` five times to build a baseline, and regression detection kicks in automatically from there.