The Problem
I got tired of pasting text into AI detection tools and getting a single percentage back. "83% AI-generated" — but which sentences? Without knowing what to fix, the number is useless.
So I built a tool that shows you exactly which sentences trigger AI detection, and optionally rewrites them.
What It Does
HumanizeAI — free, no signup, no login.
AI Detection — paste any text, get a sentence-by-sentence breakdown. Each sentence is highlighted: red = likely AI, green = likely human. You see a circular gauge with the overall score and individual detector ratings from 4 different detection algorithms.
Text Humanization — click "Humanize" and it rewrites the text to sound more natural. You can compare before/after side by side.
Technical Implementation
Stack
- Next.js 16 (App Router)
- Tailwind CSS v4 — the new CSS-first config, no more tailwind.config.js
- AI text model via API — for text humanization
- Custom detection engine — multi-detector scoring with weighted analysis
- Cloudflare Pages — edge deployment via @opennextjs/cloudflare
Sentence-Level Highlighting
The detection endpoint (/api/detect) returns per-sentence results:
interface DetectionResult {
  sentence: string;
  aiProbability: number; // 0-100
  detectors: {
    name: string;
    score: number;
    label: string; // "AI" | "Human" | "Mixed"
  }[];
}
Each sentence gets a color based on its AI probability:
- Red (>70%): likely AI-generated
- Yellow (40-70%): uncertain
- Green (<40%): likely human-written
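The threshold mapping above is simple enough to sketch directly. This helper (name is mine, not from the source) turns a sentence's `aiProbability` into its highlight color:

```typescript
// Maps a sentence's AI probability (0-100) to a highlight color, following
// the thresholds described above. Helper name is illustrative.
type Highlight = "red" | "yellow" | "green";

function highlightFor(aiProbability: number): Highlight {
  if (aiProbability > 70) return "red"; // likely AI-generated
  if (aiProbability >= 40) return "yellow"; // uncertain
  return "green"; // likely human-written
}
```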
Gauge Dashboard
Built with pure SVG — no chart library needed:
// Circular progress ring using SVG circle + stroke-dasharray
const circumference = 2 * Math.PI * radius;
const offset = circumference - (score / 100) * circumference;
The gauge color shifts dynamically: red → yellow → green based on the overall AI score.
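Putting the two ideas together — the dasharray math and the score-driven color — a sketch of the gauge's derived props might look like this (function name, radius default, and hex values are my assumptions, not the app's actual code):

```typescript
// Sketch: derive the SVG ring's stroke-dashoffset and color from a 0-100
// score. A full ring has offset 0; an empty ring has offset = circumference.
// Higher AI score = worse, so high scores render red.
function gaugeProps(score: number, radius = 45) {
  const circumference = 2 * Math.PI * radius;
  const offset = circumference - (score / 100) * circumference;
  const color = score > 70 ? "#ef4444" : score >= 40 ? "#eab308" : "#22c55e";
  return { circumference, offset, color };
}
```

The returned values plug straight into a `<circle>` element's `stroke-dasharray`, `stroke-dashoffset`, and `stroke` attributes.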
Multi-Detector Scoring
Instead of relying on one detection method, the tool runs 4 independent detectors and aggregates the results. Each detector analyzes different patterns (perplexity, burstiness, vocabulary distribution, sentence structure). The final score is a weighted average displayed as individual progress bars.
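A weighted average over independent detectors can be sketched like this — the detector names and weights below are illustrative, not the tool's actual values:

```typescript
// Hypothetical aggregation sketch: each detector contributes a 0-100 score
// with a weight; the overall score is the weighted mean.
interface DetectorScore {
  name: string;
  score: number; // 0-100
  weight: number;
}

function aggregate(detectors: DetectorScore[]): number {
  const totalWeight = detectors.reduce((sum, d) => sum + d.weight, 0);
  if (totalWeight === 0) return 0;
  const weighted = detectors.reduce((sum, d) => sum + d.score * d.weight, 0);
  return weighted / totalWeight;
}
```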
Rate Limiting
Requests are rate limited per IP with daily caps to prevent abuse. It's a simple in-memory implementation — no Redis needed at this scale.
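An in-memory daily cap like the one described can be as small as a `Map` keyed by IP. This is a minimal sketch under my own assumptions (cap value, helper names are not from the source) — and since the map lives in process memory, it resets on every deploy, exactly as noted in the limitations:

```typescript
// Minimal sketch of a per-IP daily cap kept in memory. State is lost on
// restart/deploy. DAILY_CAP and function names are illustrative.
const DAILY_CAP = 50;
const hits = new Map<string, { count: number; day: string }>();

function allowRequest(ip: string, now = new Date()): boolean {
  const day = now.toISOString().slice(0, 10); // UTC day bucket, e.g. "2024-05-01"
  const entry = hits.get(ip);
  if (!entry || entry.day !== day) {
    hits.set(ip, { count: 1, day }); // first request of the day
    return true;
  }
  if (entry.count >= DAILY_CAP) return false; // cap reached
  entry.count += 1;
  return true;
}
```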
Deployment on Cloudflare Pages
Deployed via @opennextjs/cloudflare adapter. The entire app runs on the edge — API routes and all. No separate backend server.
npm install @opennextjs/cloudflare
npx opennextjs-cloudflare build
npx wrangler pages deploy
Connected to GitHub for auto-deploys on push.
Design Decisions
| Decision | Why |
|---|---|
| No signup/login | Friction kills conversion for simple tools |
| Sentence-level, not paragraph | Users need to know what to fix |
| Multi-detector | Single detectors have high false positive rates |
| SVG gauge, no library | Keeps bundle small (~0 dependency overhead) |
| In-memory rate limit | Simple, free, works at low traffic |
| Cloudflare Pages | Free tier is generous, edge = fast globally |
Limitations (Honest Take)
- AI humanization has a ceiling. Good prompting can reduce AI scores significantly, but eliminating them entirely is unreliable. The detection feature is the real value — humanization is a helper.
- Rate limiting resets on deploy. Fine for now, not for scale.
- No file upload or PDF export yet. On the backlog.
Try It
HumanizeAI — paste some text, see which sentences look AI-generated, optionally humanize. Free, no account needed.
Feedback welcome — what features would make this actually useful for your workflow?