## What We Built
WatchContact is a two-part product:
- AI Chat Analyzer — A free tool where users paste a conversation (or upload a screenshot) and get an AI-powered analysis: intent, tone, risk level, and suggested replies.
- Blog — Articles on messaging psychology, WhatsApp behavior, texting etiquette, and relationship signals.
The goal was to ship something useful quickly: no auth, no signup, just paste and analyze. Here's how we built it.
## Architecture Overview

```
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│   Next.js 14    │────▶│   Express API    │────▶│     MongoDB     │
│   (Frontend)    │     │    (Backend)     │     │   (Analytics)   │
└────────┬────────┘     └────────┬─────────┘     └─────────────────┘
         │                       │
         │                       ├────────────────▶ OpenAI API
         │                       ├────────────────▶ Tesseract.js (OCR)
         │                       └────────────────▶ Cloudflare R2
         │
         └────────────────────────────────────────▶ Markdown blog (static)
```
- Frontend: Next.js 14, React, Tailwind CSS
- Backend: Express, Mongoose, MongoDB
- AI: OpenAI GPT for conversation analysis
- OCR: Tesseract.js for screenshot text extraction
- Storage: Cloudflare R2 for screenshot files (S3-compatible)
- Auth: None — IP-based rate limiting instead
## 1. The AI Chat Analyzer Flow

### Two Input Paths
Text input: User pastes or types a conversation → sent directly to the analysis API.
Screenshot input: User pastes an image (Ctrl+V) or uploads a file → backend runs OCR → extracted text is analyzed. The screenshot can optionally be stored in R2 for reference.
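The routing decision is simple enough to express as a pure function. This is a sketch of the idea, not our actual code; `chooseEndpoint` and its input shape are hypothetical names:

```js
// Decide which API path an input should take (hypothetical helper).
// `input` is { text?: string, imageFile?: File } from the single input area.
function chooseEndpoint(input) {
  if (input.imageFile) return '/api/analysis/screenshot-extract'; // OCR first
  if (input.text && input.text.trim()) return '/api/analysis/text';
  return null; // nothing to analyze yet
}
```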
### Backend API Endpoints
| Endpoint | Purpose |
|---|---|
| `POST /api/analysis/text` | Analyze pasted text |
| `POST /api/analysis/screenshot-extract` | OCR + optional R2 upload |
| `POST /api/analysis/screenshot-final` | Analyze OCR text (reuses rate limit from extract) |
| `GET /api/analysis/limit-status` | Return remaining analyses for the day |
### Rate Limiting: IP-Based, No Auth
We wanted to avoid signup but still control abuse. Solution: 3 analyses per IP per day.
```js
// rateLimit.service.js - simplified
const DAILY_LIMIT = 3;

async function checkAndIncrement(ipAddress) {
  if (isLocalhost(ipAddress)) return { allowed: true }; // dev bypass

  const dateKey = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
  let record = await IpUsage.findOne({ ipAddress, dateKey });
  if (!record) record = await IpUsage.create({ ipAddress, dateKey, analysisCount: 0 });

  if (record.analysisCount >= DAILY_LIMIT) return { allowed: false };

  record.analysisCount += 1;
  await record.save();
  return { allowed: true };
}
```
- `IpUsage` model: `{ ipAddress, dateKey, analysisCount }`
- Localhost is exempt for development
- Frontend calls `limit-status` to show "X of 3 analyses remaining today"
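On the frontend, the `limit-status` response becomes that banner text with a tiny formatter. A sketch, assuming the response carries `used` and `limit` fields (field names are assumptions):

```js
// Format the remaining-analyses banner from an assumed
// limit-status response shape: { used: number, limit: number }.
function formatLimitMessage({ used, limit }) {
  const remaining = Math.max(0, limit - used); // never show a negative count
  return `${remaining} of ${limit} analyses remaining today`;
}
```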
### OpenAI Analysis
We use a structured JSON prompt so the model returns a consistent shape:
```js
// buildPrompt.js - system prompt excerpt: the shape we ask the model to return
{
  "overallSignal": "Interested" | "Hesitant" | "Mixed Signals" | "Low Interest" | "Neutral",
  "intentSummary": "Short paragraph...",
  "toneAnalysis": ["observation 1", "observation 2", ...],
  "riskLevel": "Low" | "Medium" | "High",
  "riskExplanation": "Brief explanation",
  "suggestedReplies": {
    "safe": "A cautious reply",
    "confident": "A more direct reply",
    "warm": "A warm, casual reply"
  },
  "disclaimer": "This is a behavioral interpretation, not certainty."
}
```
We validate the response, parse JSON, and reject if required keys are missing. Token usage and estimated cost are stored in MongoDB for monitoring (not shown to users).
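The validation step boils down to a required-key and enum check against the shape above. A minimal sketch; our real validator may do more (e.g. length caps), and `validateAnalysis` is a hypothetical name:

```js
// Validate parsed model output against the prompt's contract (sketch).
const REQUIRED_KEYS = [
  'overallSignal', 'intentSummary', 'toneAnalysis',
  'riskLevel', 'riskExplanation', 'suggestedReplies', 'disclaimer',
];
const RISK_LEVELS = ['Low', 'Medium', 'High'];

function validateAnalysis(obj) {
  if (!obj || typeof obj !== 'object') return false;
  if (!REQUIRED_KEYS.every((k) => k in obj)) return false;       // reject on missing keys
  if (!RISK_LEVELS.includes(obj.riskLevel)) return false;        // enum must match exactly
  // Every suggested-reply tone must be present and be a string
  return ['safe', 'confident', 'warm'].every(
    (t) => typeof obj.suggestedReplies[t] === 'string'
  );
}
```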
### OCR with Tesseract.js
For screenshots, we use Tesseract.js on the backend:
```js
// ocr.service.js
const { createWorker } = require('tesseract.js');

async function extractText(imagePath) {
  const worker = await createWorker('eng');
  try {
    const result = await worker.recognize(imagePath);
    return { text: (result.data?.text || '').trim(), confidence: result.data?.confidence };
  } finally {
    await worker.terminate();
  }
}
```
Multer handles the upload; we pass the temp file path to Tesseract. If no text is found, we return a friendly error ("No text found in image. Try a clearer screenshot.").
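That empty-result branch is a one-liner worth isolating, since both screenshot endpoints use it. A sketch (the helper name is hypothetical; the error copy is what we actually show users):

```js
// Turn a raw OCR result into either usable text or a user-facing error (sketch).
function handleOcrResult({ text, confidence }) {
  if (!text || !text.trim()) {
    return { ok: false, error: 'No text found in image. Try a clearer screenshot.' };
  }
  return { ok: true, text: text.trim(), confidence };
}
```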
### Screenshot Storage: Cloudflare R2
Screenshots are uploaded to Cloudflare R2 (S3-compatible) under a chat-analyzer/ prefix. We use the AWS SDK with R2's custom endpoint:
```js
// r2.service.js
const { S3Client } = require('@aws-sdk/client-s3');

const client = new S3Client({
  region: 'auto',
  endpoint: `https://${accountId}.r2.cloudflarestorage.com`,
  credentials: { accessKeyId, secretAccessKey },
  forcePathStyle: true,
});
```
If R2 env vars are not set, we fall back to local disk. This keeps local dev simple.
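The fallback decision is just an environment check plus key namespacing. A sketch, assuming variable names like `R2_ACCOUNT_ID` (our actual names may differ):

```js
// Pick a storage backend from environment config (sketch; env var names assumed).
function chooseStorage(env) {
  const hasR2 = env.R2_ACCOUNT_ID && env.R2_ACCESS_KEY_ID && env.R2_SECRET_ACCESS_KEY;
  return hasR2 ? 'r2' : 'local-disk';
}

// Keys live under the chat-analyzer/ prefix mentioned above; a timestamp
// avoids collisions between uploads with the same filename.
function buildObjectKey(filename) {
  return `chat-analyzer/${Date.now()}-${filename}`;
}
```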
## 2. Frontend UX Decisions

### Single Input Area
We avoided separate "text" and "image" modes. One input area handles both:
- Paste (Ctrl+V): If clipboard has an image → treat as screenshot; otherwise → text
- Upload: File input for PNG/JPEG/WebP
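The paste branch comes down to inspecting clipboard item MIME types (in the browser this reads `event.clipboardData.items`). A sketch of the classification logic, extracted as a pure function:

```js
// Formats accepted by the file input, reused for paste classification.
const IMAGE_TYPES = ['image/png', 'image/jpeg', 'image/webp'];

// Classify a paste given the clipboard items' MIME types (sketch).
function classifyPaste(itemTypes) {
  return itemTypes.some((t) => IMAGE_TYPES.includes(t)) ? 'screenshot' : 'text';
}
```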
### ChatGPT-Style Image Preview
When the user pastes or uploads an image, we show a preview inside the input box. If an image is present, we hide the textarea and placeholder to reduce clutter.
### Loading Overlay

During analysis, we overlay the input box with a semi-transparent layer and a spinner. No layout shift: the box stays in place.
### Example Analysis Card
New users don't know what to expect. We added a sticky "Example Analysis" card on the right (desktop) showing sample output: signal, intent, suggested reply. On mobile it appears below the form.
## 3. The Blog
The blog is a standard Next.js + Markdown setup:
- Content: Markdown files in `content/posts/` with frontmatter (title, description, date, tags, category)
- Rendering: `react-markdown` + `remark-gfm` for GitHub Flavored Markdown
- Categories: Messaging Psychology, WhatsApp Behavior, Texting Etiquette, Relationship Signals, Communication Boundaries
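Most setups (ours included) lean on a library like `gray-matter` to split frontmatter from the body, but the core idea fits in a few lines. A minimal sketch, not our exact loader:

```js
// Minimal frontmatter parser (sketch); gray-matter does this robustly in production.
// Splits a "---\nkey: value\n---\nbody" document into { data, content }.
function parseFrontmatter(raw) {
  const match = /^---\n([\s\S]*?)\n---\n?([\s\S]*)$/.exec(raw);
  if (!match) return { data: {}, content: raw }; // no frontmatter block

  const data = {};
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':');
    if (idx > -1) data[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { data, content: match[2] };
}
```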
The homepage is the Chat Analyzer; the blog lives at /blog/ with categories, featured articles, and latest posts.
## 4. SEO and Structured Data
We added:
- Page metadata: Title and description for each route, including "AI Chat Analyzer" and blog keywords
- SoftwareApplication schema: For the Chat Analyzer tool, so search engines understand it's a web application
- Blog schema: For article listings
- RSS feed: Generated at build time for blog posts (title, link, description only; no full content, to limit scraping)
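Generating that feed at build time is a small string-templating job. A sketch of one `<item>` (the real feed also sets channel-level metadata; function names here are hypothetical):

```js
// Escape the five XML-special characters so titles like "A & B" stay valid.
function escapeXml(s) {
  return s.replace(/[<>&'"]/g, (c) => ({
    '<': '&lt;', '>': '&gt;', '&': '&amp;', "'": '&apos;', '"': '&quot;',
  }[c]));
}

// Render one RSS <item> with title, link, and description only (no full content).
function rssItem({ title, link, description }) {
  return `<item><title>${escapeXml(title)}</title>` +
    `<link>${escapeXml(link)}</link>` +
    `<description>${escapeXml(description)}</description></item>`;
}
```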
## 5. Deployment Notes
- Frontend: Next.js on Vercel or Cloudflare Pages (we use `wrangler deploy`)
- Backend: Node.js on any host (Railway, Render, Fly.io, etc.)
- Database: MongoDB Atlas
- Storage: Cloudflare R2 for screenshots
Environment variables: `OPENAI_API_KEY`, `MONGODB_URI`, R2 credentials (optional), and `NEXT_PUBLIC_API_URL` for the frontend to call the API.
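A minimal `.env` might look like this (values are placeholders; the R2 variable names are assumptions, and that whole block can be omitted, in which case screenshots fall back to local disk):

```
OPENAI_API_KEY=sk-...
MONGODB_URI=mongodb+srv://...

# Optional: R2 screenshot storage (variable names assumed)
R2_ACCOUNT_ID=...
R2_ACCESS_KEY_ID=...
R2_SECRET_ACCESS_KEY=...

# Frontend (Next.js build)
NEXT_PUBLIC_API_URL=https://api.example.com
```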
## Lessons Learned
- IP rate limiting works for a free tool without auth. 3/day is enough to prevent abuse while allowing real use.
- Tesseract.js is slow on first run (worker init). Consider warming it up or using a queue for production.
- Structured JSON prompts with validation make the AI output reliable and easy to render.
- R2 as S3-compatible storage keeps things simple; the AWS SDK works with minimal config.
- Combining tool + blog helps SEO and gives users a reason to return beyond a one-off analysis.
## Try It
WatchContact — Paste a conversation or screenshot, get intent, tone, risk level, and suggested replies. No signup, 3 free analyses per day.
Built with Next.js, Express, OpenAI, Tesseract.js, MongoDB, and Cloudflare R2.