You push a CSS change. Everything looks fine in your browser. You merge to main. Then a user reports the checkout button disappeared on mobile.
Visual regression testing catches these bugs automatically by comparing screenshots of your pages before and after changes. Here's how to build one in ~60 lines of JavaScript.
What We're Building
A CLI tool that:
- Takes screenshots of your web pages
- Compares them against baseline images
- Reports pixel-level differences
- Integrates with CI/CD pipelines
No Puppeteer. No browser installation. Just API calls.
Prerequisites
- Node.js 18+
- A free API key from Agent Gateway (200 free credits)
Step 1: The Screenshot Function
Instead of spinning up a headless browser, we'll use a screenshot API that handles rendering, viewport sizing, and format conversion:
```js
import { writeFileSync, readFileSync, existsSync, mkdirSync } from 'fs';
import { createHash } from 'crypto';
import { join } from 'path';

const API_KEY = process.env.GATEWAY_API_KEY || 'your-api-key';
const API_BASE = 'https://api.frostbyte.cc';
const BASELINE_DIR = './visual-baselines';
const CURRENT_DIR = './visual-current';

async function takeScreenshot(url, viewport = '1280x720') {
  const [width, height] = viewport.split('x').map(Number);
  const res = await fetch(
    `${API_BASE}/screenshot?url=${encodeURIComponent(url)}&width=${width}&height=${height}&fullPage=true`,
    { headers: { 'x-api-key': API_KEY } }
  );
  if (!res.ok) throw new Error(`Screenshot failed: ${res.status}`);
  return Buffer.from(await res.arrayBuffer());
}
```
This handles:
- Full-page screenshots (not just the viewport)
- Custom viewport sizes for responsive testing
- JavaScript rendering (SPAs, dynamic content)
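Before spending credits, it can help to sanity-check the request the function will send. Here's a small standalone sketch — `buildScreenshotUrl` is my own helper, not part of the script above — that mirrors `takeScreenshot`'s URL construction:

```js
// Hypothetical helper: builds the same request URL takeScreenshot uses,
// so you can inspect parameters without making a real API call.
const API_BASE = 'https://api.frostbyte.cc'; // base URL from the article

function buildScreenshotUrl(url, viewport = '1280x720') {
  const [width, height] = viewport.split('x').map(Number);
  if (!Number.isInteger(width) || !Number.isInteger(height)) {
    throw new Error(`Bad viewport: ${viewport}`);
  }
  return `${API_BASE}/screenshot?url=${encodeURIComponent(url)}&width=${width}&height=${height}&fullPage=true`;
}

console.log(buildScreenshotUrl('https://yoursite.com/pricing', '375x812'));
```

Note that the page URL must be percent-encoded — pass it raw and any query string on the target page will corrupt your request.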
Step 2: Pixel Comparison
We'll compare images by hashing the encoded image bytes — any pixel change alters the encoded file, so the hashes diverge. A simple but effective approach:
```js
function imageHash(buffer) {
  return createHash('sha256').update(buffer).digest('hex');
}

function compareImages(baseline, current) {
  const baselineHash = imageHash(baseline);
  const currentHash = imageHash(current);
  return {
    match: baselineHash === currentHash,
    baselineHash,
    currentHash,
    baselineSize: baseline.length,
    currentSize: current.length,
    sizeDiff: Math.abs(current.length - baseline.length),
  };
}
```
Note: hash comparison flags any pixel change. For fuzzy matching (ignoring anti-aliasing differences, for example), you'd use a library like pixelmatch — but exact hash comparison catches real regressions with zero dependencies.
Step 3: The Test Runner
```js
// Pages to test — add your routes here
const TEST_PAGES = [
  { name: 'homepage', url: 'https://yoursite.com', viewport: '1280x720' },
  { name: 'homepage-mobile', url: 'https://yoursite.com', viewport: '375x812' },
  { name: 'pricing', url: 'https://yoursite.com/pricing', viewport: '1280x720' },
  { name: 'docs', url: 'https://yoursite.com/docs', viewport: '1280x720' },
];

async function runTests(mode = 'compare') {
  // Ensure directories exist
  [BASELINE_DIR, CURRENT_DIR].forEach(dir => {
    if (!existsSync(dir)) mkdirSync(dir, { recursive: true });
  });

  const results = [];
  for (const page of TEST_PAGES) {
    const filename = `${page.name}-${page.viewport.replace('x', '_')}.png`;
    console.log(`📸 Capturing ${page.name} (${page.viewport})...`);

    try {
      const screenshot = await takeScreenshot(page.url, page.viewport);

      if (mode === 'baseline') {
        // Save as new baseline
        writeFileSync(join(BASELINE_DIR, filename), screenshot);
        results.push({ page: page.name, status: 'baseline_saved', viewport: page.viewport });
        console.log(`   ✅ Baseline saved`);
      } else {
        // Compare against baseline
        writeFileSync(join(CURRENT_DIR, filename), screenshot);
        const baselinePath = join(BASELINE_DIR, filename);

        if (!existsSync(baselinePath)) {
          results.push({ page: page.name, status: 'no_baseline', viewport: page.viewport });
          console.log(`   ⚠️ No baseline found — run with --baseline first`);
          continue;
        }

        const baseline = readFileSync(baselinePath);
        const comparison = compareImages(baseline, screenshot);

        if (comparison.match) {
          results.push({ page: page.name, status: 'pass', viewport: page.viewport });
          console.log(`   ✅ No changes detected`);
        } else {
          results.push({
            page: page.name,
            status: 'fail',
            viewport: page.viewport,
            sizeDiff: comparison.sizeDiff,
          });
          console.log(`   ❌ Visual difference detected! (size diff: ${comparison.sizeDiff} bytes)`);
        }
      }
    } catch (err) {
      results.push({ page: page.name, status: 'error', error: err.message });
      console.log(`   💥 Error: ${err.message}`);
    }
  }
  return results;
}
```
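Screenshot APIs occasionally return transient 5xx errors, and one flaky request shouldn't fail a whole CI run. A retry wrapper — my own addition, not part of the original script — can guard the `takeScreenshot` call:

```js
// Hypothetical helper (not in the original script): retry an async call
// with exponential backoff before giving up.
async function withRetry(fn, attempts = 3, baseDelayMs = 500) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) {
        // Wait 500ms, 1000ms, 2000ms, ... between attempts
        await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastErr;
}
```

Inside the loop you'd then write `const screenshot = await withRetry(() => takeScreenshot(page.url, page.viewport));` — the existing `catch` block still handles the case where all attempts fail.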
Step 4: CLI Interface
```js
const args = process.argv.slice(2);
const mode = args.includes('--baseline') ? 'baseline' : 'compare';

console.log(`\n🔍 Visual Regression Test — ${mode.toUpperCase()} mode\n`);

const results = await runTests(mode);

// Summary
const passed = results.filter(r => r.status === 'pass').length;
const failed = results.filter(r => r.status === 'fail').length;
const errors = results.filter(r => r.status === 'error').length;

console.log(`\n${'─'.repeat(50)}`);
console.log(`Results: ${passed} passed, ${failed} failed, ${errors} errors`);

if (failed > 0) {
  console.log(`\nFailed pages:`);
  results.filter(r => r.status === 'fail').forEach(r => {
    console.log(`  • ${r.page} (${r.viewport})`);
  });
  process.exit(1); // Non-zero exit for CI
}
```
Usage
```sh
# First run: save baselines
GATEWAY_API_KEY=your-key node visual-test.mjs --baseline

# After code changes: compare
GATEWAY_API_KEY=your-key node visual-test.mjs
```
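If you'd rather not retype the command, npm scripts can wrap the two modes (the script names here are my own choice, not from the tool):

```json
{
  "scripts": {
    "test:visual": "node visual-test.mjs",
    "test:visual:baseline": "node visual-test.mjs --baseline"
  }
}
```

Then `GATEWAY_API_KEY=your-key npm run test:visual` runs a comparison.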
Output:
```
🔍 Visual Regression Test — COMPARE mode

📸 Capturing homepage (1280x720)...
   ✅ No changes detected
📸 Capturing homepage-mobile (375x812)...
   ❌ Visual difference detected! (size diff: 4821 bytes)
📸 Capturing pricing (1280x720)...
   ✅ No changes detected
📸 Capturing docs (1280x720)...
   ✅ No changes detected

──────────────────────────────────────────────────
Results: 3 passed, 1 failed, 0 errors

Failed pages:
  • homepage-mobile (375x812)
```
Step 5: GitHub Actions Integration
Add this to `.github/workflows/visual-test.yml`:
```yaml
name: Visual Regression Test
on: [pull_request]
jobs:
  visual-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Run visual tests
        env:
          GATEWAY_API_KEY: ${{ secrets.GATEWAY_API_KEY }}
        run: node visual-test.mjs
      - name: Upload screenshots on failure
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: visual-diffs
          path: visual-current/
```
Now every PR automatically checks for visual regressions. Failed screenshots are uploaded as artifacts for review.
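Baselines go stale whenever you ship an intentional redesign. One way to refresh them from CI — a sketch of my own, not from the original setup — is a manually triggered workflow that re-captures baselines and uploads them as an artifact you can review and commit:

```yaml
name: Refresh Visual Baselines
on: workflow_dispatch
jobs:
  refresh:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Capture new baselines
        env:
          GATEWAY_API_KEY: ${{ secrets.GATEWAY_API_KEY }}
        run: node visual-test.mjs --baseline
      - uses: actions/upload-artifact@v4
        with:
          name: visual-baselines
          path: visual-baselines/
```

Keeping the refresh behind a manual trigger means a baseline only changes when a human decides the new look is intentional.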
Multi-Viewport Testing
Test responsive breakpoints by adding more viewport sizes:
```js
const RESPONSIVE_VIEWPORTS = [
  { name: 'desktop', size: '1920x1080' },
  { name: 'laptop', size: '1280x720' },
  { name: 'tablet', size: '768x1024' },
  { name: 'mobile', size: '375x812' },
  { name: 'mobile-small', size: '320x568' },
];

// Generate test matrix: '/' -> 'home', other paths drop the leading slash
const TEST_PAGES = ['/', '/pricing', '/docs'].flatMap(path =>
  RESPONSIVE_VIEWPORTS.map(vp => ({
    name: `${path === '/' ? 'home' : path.slice(1).replace(/\//g, '-')}-${vp.name}`,
    url: `https://yoursite.com${path}`,
    viewport: vp.size,
  }))
);
```
This gives you 15 screenshots (3 pages x 5 viewports) per test run — comprehensive coverage with zero browser dependencies.
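Name generation is worth checking in isolation, since the page name becomes the baseline filename. A standalone sketch of one possible scheme (the `pageName` helper and the `'home'` convention for the root path are my assumptions):

```js
// One possible naming scheme: '/' -> 'home', other paths drop the
// leading slash and join nested segments with '-'.
function pageName(path, viewportName) {
  const base = path === '/' ? 'home' : path.slice(1).replace(/\//g, '-');
  return `${base}-${viewportName}`;
}

console.log(pageName('/', 'desktop'));        // home-desktop
console.log(pageName('/pricing', 'mobile'));  // pricing-mobile
console.log(pageName('/docs/api', 'tablet')); // docs-api-tablet
```

Whatever scheme you pick, keep it stable — renaming a page orphans its baseline and the next run reports `no_baseline` instead of a pass or fail.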
Cost
Each screenshot = 1 API credit. With 200 free credits:
- 4 pages x 4 viewports = 16 credits per run
- That's 12 full test runs on the free tier
For CI, you'd want a paid plan for higher volume — but for local testing and small projects, the free tier covers it.
Why Not Puppeteer?
| | Screenshot API | Puppeteer |
|---|---|---|
| Setup | `npm init` | Install Chrome + puppeteer |
| CI/CD | No browser needed | Need Chrome in Docker |
| Rendering | Consistent across runs | Varies by Chrome version |
| Maintenance | Zero | Chrome updates break things |
| Speed | ~2s per screenshot | ~5-10s per screenshot |
The tradeoff: you need an API key and internet access. For air-gapped environments, Puppeteer still wins.
What's Next
- Add Slack/Discord notifications on failure
- Store baselines in git (they're small PNGs)
- Add a diff viewer that overlays baseline vs current
- Combine with scraper API to also check for broken links
Get your free API key at api-catalog-three.vercel.app — 200 credits, no credit card.
The screenshot API also supports custom wait times, CSS injection, and element-level captures. Full docs here.