I kept watching friends burn 6 months building things nobody wanted. Every time I asked "who's your competitor?" I got the same answer: "nobody's doing exactly this."
Someone always is. They just didn't look.
So I built Test Your Idea. You type a startup idea, I scan 40+ live data sources, and you get a scored report in about 60 seconds. Every claim links back to a real source.
I'm one person. No co-founder. No funding. Here's what 2,500+ scans taught me.
The one rule I refused to break
Every number in every report had to link to something real.
If I say competitor X raised $4M, you can click and check. If I say a Reddit thread shows demand, there's a link. If the data doesn't exist, the report says so.
This sounds basic. But it's the opposite of what most "AI tools" do. They generate confident paragraphs with zero sources. I didn't want that. I wanted something I'd trust myself.
This constraint made everything harder to build. It also made the product worth paying for.
What happens when you scan an idea
You type something like "AI meal prep planner." Then:
First, research. It fires parallel requests to 40+ sources. Search engines, startup databases, Reddit, Hacker News, IndieHackers, app stores, trend data. Pure data collection. No AI opinion yet.
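The "fire parallel requests and keep whatever comes back" step might look something like the sketch below. The source names and fetcher signatures are my own illustration, not the actual implementation; the point is that one slow or failing source never blocks the scan.

```typescript
// Hypothetical sketch of the parallel research step. Each fetcher hits
// one data source; Promise.allSettled lets failures degrade gracefully.
type SourceResult = { source: string; data: unknown } | null;

async function researchIdea(
  idea: string,
  fetchers: Record<string, (query: string) => Promise<unknown>>
): Promise<SourceResult[]> {
  const settled = await Promise.allSettled(
    Object.entries(fetchers).map(async ([source, fetch]) => ({
      source,
      data: await fetch(idea),
    }))
  );
  // Keep successes, map failures to null; the report can later say
  // "no data found" for those sources instead of crashing.
  return settled.map((r) => (r.status === "fulfilled" ? r.value : null));
}
```

With 40+ sources this pattern means total latency is roughly the slowest source, not the sum of all of them.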
Then, analyze. The raw data goes through multiple AI passes. Each section of the report gets its own data context. Competitors only see competitor data. Market sizing only sees market data. This was a deliberate choice. Smaller, focused analysis beats one giant prompt trying to do everything.
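A minimal sketch of that "each section only sees its own data" idea, with hypothetical section names and prompts: the research output is sliced before any AI call, so the competitor pass literally cannot see market-sizing data.

```typescript
// Illustrative only: route each report section its own data slice.
type Research = {
  competitors: unknown[];
  market: unknown[];
  demand: unknown[];
};

type SectionJob = { section: string; context: unknown[]; prompt: string };

function buildSectionJobs(research: Research): SectionJob[] {
  return [
    {
      section: "competitors",
      context: research.competitors,
      prompt: "Assess each competitor, citing the source links provided.",
    },
    {
      section: "market",
      context: research.market,
      prompt: "Estimate market size from the cited data only.",
    },
    {
      section: "demand",
      context: research.demand,
      prompt: "Summarize demand signals; if none exist, say so.",
    },
  ];
}
```

Each job can then be sent as its own small, focused AI call instead of one giant prompt.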
Then, score. A weighted score from 0 to 100 across 8 dimensions: market size, competition, demand signals, differentiation, revenue clarity, execution feasibility, timing, team fit. I open-sourced a simplified version of this as an npm package if you want to play with the math.
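The scoring math can be sketched as a weighted average. The weights below are illustrative placeholders, not the actual values inside the startup-viability-score package; normalizing by total weight means a dimension with no data is skipped rather than counted as zero.

```typescript
// Illustrative weights across the 8 dimensions (must be tuned in practice).
const WEIGHTS: Record<string, number> = {
  marketSize: 0.15,
  competition: 0.15,
  demandSignals: 0.15,
  differentiation: 0.12,
  revenueClarity: 0.12,
  executionFeasibility: 0.12,
  timing: 0.1,
  teamFit: 0.09,
};

// Each dimension is rated 0-100; the result is the weighted average,
// renormalized so missing dimensions don't drag the score down.
function viabilityScore(ratings: Record<string, number>): number {
  let sum = 0;
  let totalWeight = 0;
  for (const [dim, weight] of Object.entries(WEIGHTS)) {
    if (dim in ratings) {
      sum += ratings[dim] * weight;
      totalWeight += weight;
    }
  }
  return totalWeight === 0 ? 0 : Math.round(sum / totalWeight);
}
```

Renormalizing is a design choice: "the data doesn't exist" should read as uncertainty in the report, not as a failing grade.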
Free scans finish in about 60 seconds, start to finish. Deep scans take about 6 minutes.
What broke (constantly)
The glamorous AI failures never happened. No rogue analysis. No wildly wrong conclusions.
The boring stuff broke constantly. Malformed JSON. Truncated responses. Timeouts. Rate limits. I spent more time making the parsing resilient than writing prompts. Nobody talks about this part because it's not interesting. But it's the difference between a demo that works when you're watching and a product that works when you're asleep.
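The unglamorous parsing work might look like this sketch (my own illustration, not the production code): pull a JSON object out of a response that may be wrapped in markdown fences, prefixed with prose, or cut off mid-string, and return null instead of throwing so the caller can retry.

```typescript
// Defensive JSON extraction from an LLM response.
function extractJson(raw: string): unknown | null {
  // Strip ```json ... ``` fences if the model added them.
  const unfenced = raw.replace(/```(?:json)?/g, "").trim();
  // Take the outermost object: first "{" to last "}".
  const start = unfenced.indexOf("{");
  const end = unfenced.lastIndexOf("}");
  if (start === -1 || end <= start) return null;
  try {
    return JSON.parse(unfenced.slice(start, end + 1));
  } catch {
    return null; // malformed or truncated; caller re-requests
  }
}
```

Wrapping this in a bounded retry loop (with backoff for rate limits) covers most of the "boring" failure modes in one place.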
Three things I didn't expect
Zero competitors is a bad sign. Ideas where I found zero competitors scored lower (average: 47) than ideas with 3-5 competitors (average: 58). No competitors usually means no market. I wrote about this pattern in detail on my blog.
Boring ideas score best. The highest-scoring category across 2,500+ scans? B2B workflow tools. Not AI wrappers. Not consumer apps. Compliance tools. Niche invoicing. Spreadsheet replacements. Boring problems with clear buyers.
People pay for sources, not analysis. The #1 thing paying customers mention: "I can click the links and verify." Not the AI. Not the score. The fact that they can check my work. That surprised me. I thought I was selling analysis. I'm selling trust.
Why I charge $29 once instead of a subscription
Validation is a one-time event. You don't validate the same idea every month. Charging a subscription felt wrong. I didn't want people paying me when they're not using the product.
I made that choice deliberately. $29 for the full 13-section report. The first 3 sections are free. If those are useful, people pay. If not, they got free research and I got feedback.
49 founders have paid so far. Mostly organic. Minimal ad spend.
Try it yourself
Scan any idea free. Judge the sources yourself. That's the whole pitch.
If you're a dev who wants to tinker with the scoring logic: npm install startup-viability-score or try the interactive demo I put on Hugging Face.
I'm @VincentBuilds. Building this solo, sharing what I learn.