How I Built an AI Competitor Intelligence Tool in a Weekend (and Started Charging $29/Month)
I'm a solo developer. I built a competitor monitoring tool called BusinessPulse in about two weekends. It's currently at $29/month, and I want to walk through exactly how I built it — tech stack, architecture decisions, and the mistakes I made along the way.
This isn't a "how to get rich" post. It's a technical walkthrough of building a real product that does something actually useful: it reads your competitors' websites every week and tells you — in plain English — what changed.
The Problem Worth Solving
Competitive monitoring is genuinely painful. If you're a small SaaS or indie founder, you're probably doing one of these:
- Manually checking competitor sites — every Monday, opening 5–10 tabs, trying to remember what was different last week
- Paying $200–$500/month for enterprise tools like Crayon or Klue that are overkill for a 3-person team
- Ignoring it entirely — and occasionally getting blindsided by a competitor dropping their price or launching a feature you didn't know about
None of these are good options. The enterprise tools cost more than many indie SaaS products make per month. And manual monitoring doesn't scale — you miss things, you forget, you get busy.
The gap I was targeting: a lightweight, AI-powered tool for small teams and indie founders at a price that doesn't require a board meeting to approve.
Architecture Overview
The finished system is a simple four-stage pipeline:
Scheduled Job → SnapAPI (screenshot + analyze) → Claude (summarize) → Email Brief
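In glue-code terms, one weekly run per competitor looks roughly like this. Every helper name here is an illustrative stand-in (the real calls are covered step by step below), which also makes the orchestration trivially testable with stubs:

```javascript
// Hypothetical glue code: analyze/loadPrevious/saveCurrent/diff/summarize/send
// are illustrative stand-ins for the SnapAPI, Postgres, Claude, and Resend calls.
async function runWeeklyCheck(competitor, deps) {
  const current = await deps.analyze(competitor.url);      // SnapAPI /v1/analyze
  const previous = await deps.loadPrevious(competitor.id); // last week's snapshot
  await deps.saveCurrent(competitor.id, current);          // Postgres upsert
  const changes = deps.diff(previous, current);            // semantic diff string
  const brief = await deps.summarize(changes);             // Claude interpretation
  await deps.send(competitor.ownerEmail, brief);           // email brief
  return brief;
}
```

Injecting the dependencies like this means the weekly job can be unit-tested without hitting a single external API.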
Step 1: Screenshot + Analyze
I use SnapAPI to capture screenshots and analyze competitor pages. For each URL being monitored, I make two calls:
```javascript
// Take a screenshot for visual diff
const screenshot = await fetch('https://snapapi.tech/v1/screenshot', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${SNAPAPI_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ url: competitorUrl, format: 'webp', full_page: false })
});

// Analyze the page for structured data
const analysis = await fetch('https://snapapi.tech/v1/analyze', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${SNAPAPI_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ url: competitorUrl })
});
```
The /v1/analyze endpoint returns structured data: page type, CTA text, navigation items, detected technologies, word count, and buttons/forms. This is the key — instead of diffing raw HTML (which changes constantly for irrelevant reasons), I'm diffing semantic signals.
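To make "semantic signals" concrete, here's roughly the shape of what I store per snapshot. The field names here are illustrative, not SnapAPI's exact response schema:

```javascript
// Illustrative snapshot shape: field names are my own, based on the data
// the analyze endpoint surfaces (page type, CTA, nav, tech, word count, forms).
const exampleAnalysis = {
  page_type: 'pricing',
  title: 'Acme Corp — Pricing',
  cta: 'Start free trial',
  nav_items: ['Product', 'Pricing', 'Docs', 'Login'],
  technologies: ['Next.js', 'Stripe', 'Intercom'],
  word_count: 842,
  forms: 1,
  buttons: 6,
};
```

A handful of stable, meaningful fields like this is what makes week-over-week comparison tractable.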
Step 2: Build a Diff String
I store the previous week's analysis in Postgres and compare:
```javascript
function buildDiffString(previous, current) {
  const changes = [];
  if (previous.cta !== current.cta) {
    changes.push(`CTA changed: "${previous.cta}" → "${current.cta}"`);
  }
  if (previous.title !== current.title) {
    changes.push(`Page title changed: "${previous.title}" → "${current.title}"`);
  }
  const wordCountDelta = current.word_count - previous.word_count;
  if (Math.abs(wordCountDelta) > 100) {
    changes.push(`Word count ${wordCountDelta > 0 ? 'increased' : 'decreased'} by ${Math.abs(wordCountDelta)} words`);
  }
  const prevTech = new Set(previous.technologies || []);
  const currTech = new Set(current.technologies || []);
  const added = [...currTech].filter(t => !prevTech.has(t));
  const removed = [...prevTech].filter(t => !currTech.has(t));
  if (added.length) changes.push(`New technology detected: ${added.join(', ')}`);
  if (removed.length) changes.push(`Technology removed: ${removed.join(', ')}`);
  return changes.join('\n') || 'No significant changes detected';
}
```
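One signal this function skips: navigation items, which the analyze endpoint also returns. A sketch of how I'd fold them in — this isn't shipped yet, just the shape of it:

```javascript
// Sketch: diff nav items between two snapshots. Not in the shipped product;
// it would feed additional lines into the same diff string as above.
function diffNav(previousNav = [], currentNav = []) {
  const prev = new Set(previousNav);
  const curr = new Set(currentNav);
  const added = currentNav.filter(item => !prev.has(item));   // new links
  const removed = previousNav.filter(item => !curr.has(item)); // dropped links
  const changes = [];
  if (added.length) changes.push(`Nav items added: ${added.join(', ')}`);
  if (removed.length) changes.push(`Nav items removed: ${removed.join(', ')}`);
  return changes;
}
```

A new "Enterprise" or "Changelog" nav link is often the earliest public signal of a strategy shift, so this is high on the list.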
Step 3: Claude Summarizes
This is where the product gets interesting. Instead of emailing a raw diff (confusing and noisy), I pass the changes to Claude:
```javascript
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from env
const summary = await anthropic.messages.create({
  model: 'claude-opus-4-5',
  max_tokens: 300,
  messages: [{
    role: 'user',
    content: `You are a competitive intelligence analyst.
A competitor's website changed this week. Here are the detected changes:
${diffString}
Write a 3-5 sentence plain-English summary explaining:
1. What changed
2. What it likely signals (pricing strategy, A/B testing, new feature launch, etc.)
3. What action the reader might consider
Be specific and analytical. Avoid vague statements. If changes are minor, say so directly.`
  }]
});
const briefText = summary.content[0].text; // the plain-English brief for the email
```
The output looks like this:
Acme Corp dropped their Pro plan price by $10 this week and changed their primary CTA from "Start free trial" to "Get started free" — the removal of the word "trial" is a classic A/B testing move to reduce friction. They also added HubSpot to their tech stack, suggesting they're investing in outbound sales. Worth watching: if this A/B wins, expect them to push this variant to all traffic within 30 days. Consider whether your own trial messaging could be tested similarly.
That's what people will pay for. Not raw data — interpretation.
The Security Architecture
One thing I got right from the start: API keys never touch the browser.
The demo page calls a server-side proxy that handles all API communication:
```javascript
// server-proxy.js — the client never sees API keys
app.use(express.json()); // parse JSON bodies so req.body is populated

app.post('/api/screenshot', async (req, res) => {
  const response = await fetch('https://snapapi.tech/v1/screenshot', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.SNAPAPI_KEY}`, // env only
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(req.body)
  });
  const data = await response.json();
  res.json(data);
});

app.post('/api/summarize', async (req, res) => {
  const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
  // ... Claude call here, assigning the response to `message`
  res.json({ summary: message.content[0].text });
});
```
If you're building anything like this: treat your API proxy as the security boundary. Your frontend is untrusted territory.
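Part of treating the proxy as the boundary is deciding what the client is even allowed to ask for. A hedged sketch (not my exact code) of an allowlist check, so the proxy can't be used as an open relay to burn your API quota or probe arbitrary hosts:

```javascript
// Sketch: only proxy screenshot/analyze requests for URLs the user actually
// monitors. The allowlist would come from the user's saved competitors.
function isAllowedTarget(rawUrl, monitoredUrls) {
  let parsed;
  try {
    parsed = new URL(rawUrl);
  } catch {
    return false; // not a URL at all
  }
  if (parsed.protocol !== 'https:') return false; // reject http:, file:, etc.
  // Compare by hostname so deep links on a monitored site still pass.
  const allowedHosts = new Set(monitoredUrls.map(u => new URL(u).hostname));
  return allowedHosts.has(parsed.hostname);
}
```

The proxy handler would call this before forwarding anything, and return a 403 on failure.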
What I Got Wrong
1. Building the demo before the value prop was clear
I spent a full day on a slick demo before I'd validated that the AI summary was actually the thing people wanted. Turns out the screenshot comparison is table stakes — the Claude interpretation is what makes people go "oh, I need this."
2. Semantic diffing is harder than it looks
My first version just diffed raw HTML. It generated alerts for every ad rotation, cookie banner change, and lazy-loaded image. Useless noise. Switching to semantic field diffing (CTA text, title, tech stack, word count) cut false positive alerts by ~90%.
3. Pricing anchoring matters more than the price itself
I launched at $9/month. Nobody bought it. The same product at $29/month with an "enterprise monitoring tool for 1/10th the price" positioning converted better. This is well-documented but I had to live it to believe it.
The Current State
BusinessPulse is live at snapapi.tech/businesspulse. The AI demo is at snapapi.tech/businesspulse/demo-v2 — it runs real competitor analysis in the browser using the server proxy architecture described above.
The whole product runs on:
- SnapAPI — screenshots and page analysis
- Anthropic Claude — competitive interpretation
- Postgres — storing weekly snapshots
- Resend — Monday morning email briefs
- Render — hosting (auto-deploys from GitHub)
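For the Resend piece: its POST /emails endpoint takes a JSON payload with from, to, subject, and html. Here's an illustrative helper that assembles the Monday brief (the sender address and HTML layout are placeholders, not the real ones):

```javascript
// Illustrative: build the payload Resend's POST /emails endpoint expects.
// Each brief is { competitor, summary } — the Claude output from Step 3.
function buildBriefEmail(recipient, briefs) {
  const body = briefs
    .map(b => `<h3>${b.competitor}</h3><p>${b.summary}</p>`)
    .join('\n');
  return {
    from: 'briefs@businesspulse.example', // placeholder sender domain
    to: [recipient],
    subject: `Your Monday competitor brief: ${briefs.length} update${briefs.length === 1 ? '' : 's'}`,
    html: body,
  };
}
```

Keeping payload assembly in a pure function like this makes the email content testable without actually sending anything.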
Total infrastructure cost: ~$40/month at current scale. That's break-even at two customers, which is a fine place for a $29/month product to start.
What's Next
The thing I'm most interested in building next: time-series trend visualization. Right now you get a weekly brief. What I want is a timeline view — "Acme Corp has changed their CTA 4 times in the last 90 days, each time reducing friction language" — that kind of longitudinal pattern is more valuable than any single week's alert.
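The building block for that is simple once the weekly snapshots are sitting in Postgres. A sketch (the snapshot shape here is illustrative) of counting CTA changes over a window:

```javascript
// Sketch of the timeline idea: count how often the CTA changed across
// stored weekly snapshots. Shape { captured_at, cta } is illustrative.
function ctaChangeCount(snapshots) {
  const ordered = [...snapshots].sort(
    (a, b) => new Date(a.captured_at) - new Date(b.captured_at)
  );
  let changes = 0;
  for (let i = 1; i < ordered.length; i++) {
    if (ordered[i].cta !== ordered[i - 1].cta) changes++;
  }
  return changes;
}
```

Run the same counter per field (CTA, title, tech stack) over a 90-day window and the longitudinal patterns fall out for free.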
If you're building something in this space or have thoughts on the architecture, I'd love to hear it. The AI diff-then-summarize pattern applies well beyond competitor monitoring.
Built with SnapAPI for screenshots and page analysis. If you need browser automation without managing Puppeteer infrastructure, check it out — the batch endpoint is particularly useful for monitoring workflows like this.