Ever needed to know when a website updates its content? Maybe you're tracking competitor pricing, monitoring a job board, or watching for policy changes.
Most solutions require a headless browser like Puppeteer and 200+ lines of boilerplate. Here's how to do it in 30 lines with a free API.
## The Approach
- Scrape a URL to clean markdown
- Hash the content
- Compare with the previous hash
- Alert if changed
No browser dependencies. No Selenium. Just HTTP requests.
## The Code
```javascript
import crypto from 'crypto';

const API = 'https://agent-gateway-kappa.vercel.app/v1/agent-scraper';
const WATCH_URL = 'https://example.com'; // Replace with your target
const CHECK_INTERVAL = 60 * 60 * 1000; // 1 hour

let previousHash = null;

async function checkForChanges() {
  const res = await fetch(
    `${API}/api/scrape?url=${encodeURIComponent(WATCH_URL)}&format=markdown`
  );
  const data = await res.json();

  if (!data.content) {
    console.error('Scrape failed:', data.error);
    return;
  }

  const hash = crypto.createHash('sha256')
    .update(data.content)
    .digest('hex');

  if (previousHash && hash !== previousHash) {
    console.log(`[${new Date().toISOString()}] CHANGE DETECTED on ${WATCH_URL}`);
    console.log(`Old hash: ${previousHash}`);
    console.log(`New hash: ${hash}`);
    // Send notification here (email, Slack webhook, Discord, etc.)
  } else {
    console.log(`[${new Date().toISOString()}] No changes`);
  }

  previousHash = hash;
}

// Initial check
checkForChanges();

// Schedule recurring checks
setInterval(checkForChanges, CHECK_INTERVAL);
```
Run it:

```bash
node monitor.js
```

Output:

```text
[2026-03-05T14:30:00.000Z] No changes
[2026-03-05T15:30:00.000Z] CHANGE DETECTED on https://example.com
Old hash: a3f2b8c...
New hash: 7e1d4f0...
```
## Making It Useful: Monitor Multiple URLs
```javascript
import crypto from 'crypto';

const API = 'https://agent-gateway-kappa.vercel.app/v1/agent-scraper';

const targets = [
  { url: 'https://news.ycombinator.com', label: 'Hacker News' },
  { url: 'https://example.com/pricing', label: 'Competitor Pricing' },
  { url: 'https://status.github.com', label: 'GitHub Status' },
];

const hashes = new Map();

async function scrape(url) {
  const res = await fetch(
    `${API}/api/scrape?url=${encodeURIComponent(url)}&format=markdown`
  );
  return res.json();
}

async function checkAll() {
  for (const { url, label } of targets) {
    const data = await scrape(url);
    if (!data.content) continue;

    const hash = crypto.createHash('sha256')
      .update(data.content)
      .digest('hex');

    const prev = hashes.get(url);
    if (prev && prev !== hash) {
      console.log(`CHANGED: ${label} (${url})`);
      // Add your notification logic here
    }
    hashes.set(url, hash);
  }
  console.log(`[${new Date().toISOString()}] Checked ${targets.length} sites`);
}

checkAll();
setInterval(checkAll, 30 * 60 * 1000); // Every 30 min
```
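The loop above checks sites one at a time, so a slow target delays every check behind it. With more URLs, the checks can run concurrently. A minimal sketch using `Promise.allSettled`, refactored so the scrape function and hash map are passed in (`checkAllConcurrent` and `sha256` are hypothetical helper names, not part of the API):

```javascript
import crypto from 'crypto';

function sha256(text) {
  return crypto.createHash('sha256').update(text).digest('hex');
}

// Check every target concurrently; returns the labels of targets
// whose content hash changed since the previous run.
async function checkAllConcurrent(targets, scrape, hashes) {
  const results = await Promise.allSettled(
    targets.map(async ({ url, label }) => {
      const data = await scrape(url);
      if (!data.content) return null;

      const hash = sha256(data.content);
      const changed = hashes.has(url) && hashes.get(url) !== hash;
      hashes.set(url, hash);
      return changed ? label : null;
    })
  );

  // allSettled means one failed scrape doesn't abort the whole batch
  return results
    .filter(r => r.status === 'fulfilled' && r.value)
    .map(r => r.value);
}
```

Because `Promise.allSettled` never rejects, one site timing out won't prevent the others from being checked in the same pass.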
## Track What Changed (Diff)
Want to see the actual differences, not just "something changed"? Add a simple diff:
```javascript
const API = 'https://agent-gateway-kappa.vercel.app/v1/agent-scraper';
const WATCH_URL = 'https://news.ycombinator.com';

let previousContent = null;

async function check() {
  const res = await fetch(
    `${API}/api/scrape?url=${encodeURIComponent(WATCH_URL)}&format=markdown`
  );
  const { content } = await res.json();
  if (!content) return;

  if (previousContent && content !== previousContent) {
    const oldLines = new Set(previousContent.split('\n'));
    const newLines = content.split('\n');
    const added = newLines.filter(line => !oldLines.has(line) && line.trim());

    if (added.length > 0) {
      console.log(`\n--- New content on ${WATCH_URL} ---`);
      added.slice(0, 10).forEach(line => console.log(`+ ${line}`));
      console.log('---\n');
    }
  }
  previousContent = content;
}

check();
setInterval(check, 15 * 60 * 1000);
```
This outputs only the new lines — useful for news sites, changelogs, or job boards.
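One caveat: comparing (or hashing) the raw page means any dynamic fragment, such as a timestamp, a view counter, or a rotating ad, registers as a change. A sketch of a normalization pass to run before comparing; `normalize` is a hypothetical helper, and the regex patterns are illustrative examples to tune for your target site:

```javascript
// Strip volatile content before hashing/diffing so that timestamps,
// "N minutes ago" labels, and similar noise don't trigger false positives.
// These patterns are examples — adjust them to the site you monitor.
function normalize(content) {
  return content
    .split('\n')
    // Drop lines containing ISO-style timestamps (e.g. 2026-03-05T14:30)
    .filter(line => !/\d{4}-\d{2}-\d{2}T\d{2}:\d{2}/.test(line))
    // Drop relative-time lines like "5 minutes ago"
    .filter(line => !/\b\d+\s+(seconds?|minutes?|hours?)\s+ago\b/i.test(line))
    .map(line => line.trim())
    .join('\n');
}
```

Then hash or diff `normalize(content)` instead of the raw scrape output.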
## Add Notifications
Send yourself a Slack message when a change is detected:
```javascript
async function notifySlack(message) {
  await fetch(process.env.SLACK_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: message }),
  });
}

// In your check function, replace the console.log with:
await notifySlack(`Website changed: ${url}\nNew lines:\n${added.join('\n')}`);
```
Or use Discord, email via SendGrid, or any webhook-based notification.
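For Discord, the call is nearly identical. A sketch assuming the webhook URL lives in a `DISCORD_WEBHOOK_URL` environment variable; Discord caps a message's `content` field at 2,000 characters, so truncate before sending (`formatDiscordMessage` is a hypothetical helper):

```javascript
// Discord message content is limited to 2,000 characters;
// truncate long diffs rather than letting the webhook reject them.
function formatDiscordMessage(message, limit = 2000) {
  return message.length <= limit ? message : message.slice(0, limit - 1) + '…';
}

async function notifyDiscord(message) {
  await fetch(process.env.DISCORD_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // Discord webhooks use a `content` field, not Slack's `text`
    body: JSON.stringify({ content: formatDiscordMessage(message) }),
  });
}
```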
## How the Scraping API Works
The API handles all the browser rendering for you:
```bash
# GET request — simplest usage
curl "https://agent-gateway-kappa.vercel.app/v1/agent-scraper/api/scrape?url=https://example.com&format=markdown"
```
Response:
```json
{
  "url": "https://example.com/",
  "format": "markdown",
  "content": "# Example Domain\n\nThis domain is for use in...",
  "meta": {
    "title": "Example Domain",
    "description": "",
    "ogTitle": "",
    "ogImage": ""
  },
  "contentLength": 167,
  "scrapedAt": "2026-03-05T14:30:00.000Z"
}
```
You can also use POST /api/scrape for more options:
```bash
curl -X POST "https://agent-gateway-kappa.vercel.app/v1/agent-scraper/api/scrape" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "format": "markdown",
    "selector": "main",
    "removeSelectors": ["nav", "footer", ".ads"],
    "extractLinks": true,
    "extractImages": true
  }'
```
Options:

- `format`: `markdown` (default), `html`, or `text`
- `selector`: CSS selector to extract a specific section
- `removeSelectors`: array of CSS selectors to remove (ads, navs, etc.)
- `extractLinks`: include all links found on the page
- `extractImages`: include all image URLs
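The same POST call from Node, for when the monitor needs `selector` or `removeSelectors` to cut out navigation and ads before hashing. A sketch; `buildScrapeBody` and `scrapeWithOptions` are hypothetical helper names, and the option values are illustrative:

```javascript
const API = 'https://agent-gateway-kappa.vercel.app/v1/agent-scraper';

// Merge caller options onto a markdown-format default body
function buildScrapeBody(url, options = {}) {
  return { url, format: 'markdown', ...options };
}

async function scrapeWithOptions(url, options) {
  const res = await fetch(`${API}/api/scrape`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildScrapeBody(url, options)),
  });
  return res.json();
}
```

Scoping the scrape to `main` and stripping `nav`/`footer` is itself a cheap way to reduce false-positive change alerts.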
## Get a Free API Key
The scraper works without authentication for light usage. For higher limits, grab a free API key (200 credits):
```bash
curl -X POST https://agent-gateway-kappa.vercel.app/api/keys/create
```
Then pass it as a header:
```bash
curl "https://agent-gateway-kappa.vercel.app/v1/agent-scraper/api/scrape?url=https://example.com" \
  -H "x-api-key: YOUR_KEY"
```
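To use the key from the Node monitor, add the same header to the fetch call. A sketch assuming the key lives in an `API_KEY` environment variable; `buildScrapeRequest` and `scrapeAuthed` are hypothetical helper names:

```javascript
const API = 'https://agent-gateway-kappa.vercel.app/v1/agent-scraper';

// Build the fetch arguments; the x-api-key header is only
// attached when a key is actually provided.
function buildScrapeRequest(url, apiKey) {
  return {
    input: `${API}/api/scrape?url=${encodeURIComponent(url)}&format=markdown`,
    init: apiKey ? { headers: { 'x-api-key': apiKey } } : {},
  };
}

async function scrapeAuthed(url) {
  const { input, init } = buildScrapeRequest(url, process.env.API_KEY);
  const res = await fetch(input, init);
  return res.json();
}
```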
The same API gateway gives you access to 40+ other developer tools — IP geolocation, DNS lookups, crypto prices, screenshots, code execution, and more. Full docs at agent-gateway-kappa.vercel.app.
What would you monitor? I'd love to hear use cases in the comments.