I have been scanning SaaS products across ChatGPT, Claude, Perplexity, and Gemini for a few months now, asking the actual buying questions people type in, like "what is the best tool for X" and "compare X vs Y."
The results are usually not great.
Most products are either completely invisible in AI buying conversations or described using their competitors' features. And the founders have no idea, because there is no "view source" on a ChatGPT answer.
Some patterns from scanning about 40 products:
A product with 31K GitHub stars. ChatGPT has literally never heard of it.
A company that raised $16M. Zero mentions across all four models for their core buying queries.
Multiple products described word for word using a competitor's feature set. The model fills the gap with whatever it already knows, which is usually the biggest player in the category.
The thing most people miss is that every model fails differently. Perplexity is retrieval-based, so it picks up new content fast. ChatGPT leans on training data that is months old. A fix that works on one model does nothing on another. Treating them as one thing is why most efforts feel random.
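If you want a quick feel for this without any tool, here is a minimal sketch of the same cross-model check, assuming the OpenAI and Anthropic Python SDKs with API keys in your environment; the query and model names are just placeholders, not anything specific to my scanner:

```python
# Send one buying query to two models and compare the answers side by side.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
from openai import OpenAI
import anthropic

query = "What is the best tool for X? Compare the top options."

# ChatGPT-style answer (trained knowledge, months old)
openai_answer = OpenAI().chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": query}],
).choices[0].message.content

# Claude answer for the identical query
claude_answer = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{"role": "user", "content": query}],
).content[0].text

for name, answer in [("ChatGPT", openai_answer), ("Claude", claude_answer)]:
    print(f"--- {name} ---\n{answer}\n")
```

Run it for a handful of your real buying queries and you will usually see the divergence immediately: one model names you, another names only the category leader.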
I just opened this up so anyone can run one full scan for free. You sign up, enter your product URL, and see exactly what each model says about you. Which conversations you are missing from. Which competitors get recommended instead. What AI seems to be getting wrong.
Takes about 5 minutes: bersyn.com
If you run a scan I would genuinely love to hear what surprised you. The patterns are fascinating and I am still learning from every new product I see.