I'm a developer. I sign freelance contracts. At some point I realized I was doing both badly.
Not the development part — the contract part. I was signing agreements I only half-understood, trusting that the other party had written something reasonable.
So I built GetRevealr to fix that.
The core problem with legal text
Legal language is designed to be precise, not readable. The same clause that protects one party can silently harm the other. The ambiguity isn't accidental — it's structural.
Teaching an AI to surface that ambiguity was harder than I expected. The challenge isn't identifying obviously bad clauses. It's flagging the ones that look neutral but create asymmetric risk — an IP assignment that extends to side projects, a termination clause that only favors one side, an auto-renewal opt-out window so short it's nearly impossible to meet.
What the system does
You upload a contract — PDF, Word, or image. The AI reads every clause, assigns a risk score from 0 to 100, and returns plain-English explanations with specific recommended actions for each flagged item.
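To make that concrete, here's a minimal sketch of what a per-clause finding might look like. This is an illustration, not GetRevealr's actual data model — the field names, the 50-point cutoff, and the `ClauseFinding` type are all my assumptions:

```python
from dataclasses import dataclass

@dataclass
class ClauseFinding:
    """One analyzed clause (illustrative structure, not the real schema)."""
    clause_text: str
    risk_score: int         # 0 (benign) to 100 (high risk)
    explanation: str        # plain-English summary of the concern
    recommended_action: str # concrete next step for the reader

def is_flagged(finding: ClauseFinding, threshold: int = 50) -> bool:
    """Hypothetical cutoff: only surface clauses above the threshold."""
    return finding.risk_score >= threshold

finding = ClauseFinding(
    clause_text="Contractor assigns all inventions conceived during the term.",
    risk_score=78,
    explanation="The IP assignment may extend to off-hours side projects.",
    recommended_action="Ask to limit assignment to work done for the client.",
)
print(is_flagged(finding))  # → True
```

The point of a numeric score plus a recommended action is that the reader gets both a sortable signal and something they can actually say in a negotiation.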
The hard part was calibrating tone. Legal analysis tends to be either too technical or too alarmist. I wanted something that felt like a knowledgeable friend reading the contract with you — not a legal disclaimer machine.
What I'd do differently
Start with one contract type, not all of them. I built for leases, employment, NDAs, and freelance contracts simultaneously. Each has different risk patterns and different user contexts. Narrowing the scope early would have made the first version much cleaner.
Try it
If you have a contract you've been meaning to read properly, upload it at Getrevealr.com.
Free preview, $19 for the full report.
And if you've built something similar or have thoughts on working with legal text — I'd genuinely like to hear it in the comments.
Top comments (1)
The insight about starting with one contract type instead of all of them is something I wish more builders talked about. I've run into the same pattern building AI tools for financial data analysis — every domain has its own edge cases that look simple from the outside but blow up when you try to generalize too early. Stock analysis, ETF data, and sector overviews all seem like the same task until you realize each has completely different risk patterns and user expectations.
Your point about calibrating tone is interesting too. I've found similar challenges when generating content at scale with LLMs — the default output tends to be either too generic or too aggressive with warnings. Getting that "knowledgeable friend" feel requires a lot of prompt iteration and usually a scoring/feedback loop to keep quality consistent across different input types.
One thing I'd be curious about: how are you handling the PDF parsing pipeline? OCR quality on scanned contracts can vary wildly, and I've found that the extraction step is often where the most silent failures happen — the LLM confidently analyzes whatever text it gets, even if the OCR mangled a critical clause.
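One cheap guard against that failure mode is to score the extracted text before handing it to the LLM. The sketch below is a crude heuristic of my own, not anything from GetRevealr's pipeline: it measures the fraction of tokens that look like real words, on the assumption that badly mangled OCR produces digit-and-symbol-riddled tokens:

```python
import re

def ocr_quality_score(text: str) -> float:
    """Fraction of tokens that look word-like (letters only, optional
    trailing punctuation). A low score suggests the OCR mangled the text
    and it shouldn't be analyzed blindly. Heuristic, not a real metric."""
    tokens = text.split()
    if not tokens:
        return 0.0
    wordlike = [t for t in tokens
                if re.fullmatch(r"[A-Za-z][A-Za-z'\-]*[.,;:)]?", t)]
    return len(wordlike) / len(tokens)

clean = "Either party may terminate this agreement with thirty days notice."
mangled = "E1ther p@rty rnay terrn1nate th1s agr33ment w1th th1rty d@ys n0tice."
print(round(ocr_quality_score(clean), 2))    # → 1.0
print(round(ocr_quality_score(mangled), 2))  # → 0.1
```

A score below some threshold could trigger a re-scan prompt to the user instead of a confident analysis of garbage — cheaper than any model-side fix for the same failure.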