Phil Rentier Digital

Posted on • Originally published at rentierdigital.xyz

I Asked ChatGPT, Claude, and Perplexity to Recommend My SaaS. Only One Knew I Existed 😭

That’s when I realized: I’d spent months optimizing for Google while my potential users were asking robots.

TL;DR: AI search traffic converts at 14% vs Google's 2.8%, because users trust AI recommendations like personal advice. Yet most SaaS founders are invisible to ChatGPT, Claude, and Perplexity while obsessing over traditional SEO. Microsoft just launched AI Performance in Bing Webmaster Tools (the first official AI citation tracker), and the brutal truth is that third-party mentions beat your own website 6.5x for AI recommendations — your perfect landing page means nothing if nobody's talking about you on Reddit or dev blogs.

The blind spot in every SaaS acquisition playbook

If you’ve launched a SaaS in the last two years, you’ve probably done some version of this checklist: SEO keywords with Ubersuggest, content marketing, Product Hunt launch, community lurking on Reddit and Discord, cold DMs on LinkedIn, maybe some programmatic SEO if you’re fancy, directory listings on G2 and Capterra.

Sound familiar? I did literally all of these. Some worked. Most took months to show any signal. And the whole time, there was a distribution channel growing 527% year-over-year that I was completely ignoring.

AI search. Not “AI-powered SEO.” Not “ChatGPT plugins.” The actual moment when a potential customer types “what’s the best tool for [your category]” into ChatGPT or Claude — and your product either shows up, or it doesn’t.

The brutal part? AI search traffic converts at roughly 14% compared to Google’s 2.8%. Every visitor from an AI recommendation is worth about 5x more — because they already trust the answer. The AI said “use this.” They’re not comparison-shopping. They’re pulling out their credit card. Like getting a personal recommendation from a friend, except the friend has read every website on the internet and has zero taste in music.

Bing just made AI visibility officially measurable (and Google is scrambling)

On February 10, 2026 — literally last week — Microsoft launched AI Performance inside Bing Webmaster Tools. Almost nobody in the indie hacker world is talking about it yet.

First official tool from any major search platform that lets you see how often your content gets cited in AI-generated answers. Not clicks. Not rankings. Citations. As in: “your page was used as a source when Copilot answered a question.”

Five metrics, all free, sitting in your Bing Webmaster Tools dashboard right now:

  • Total Citations — how many times your site appeared as a source in AI answers
  • Average Cited Pages — how many of your URLs get referenced daily
  • Grounding Queries — the actual phrases the AI used to retrieve your content (this is gold — it’s not what the user typed, it’s what the AI searched for internally to build its answer)
  • Page-level Activity — which specific URLs get cited most
  • Timeline Trends — are your citations growing or dying?

Search Console, but for LLMs. And Google doesn’t have anything like it. Google Search Console lumps AI Overview traffic in with regular organic — you can’t separate them. No citation counts, no grounding queries, nada.

Bing just set a standard that Google will have to match. Same playbook as IndexNow — Microsoft innovates, community adopts, even Google-first sites end up using it because why wouldn’t you. The URL is bing.com/webmasters/aiperformance. Setup takes 5 minutes. You'll have data today.

One early insight from SEOs poking at the beta: pages with clear structure — proper H2s, H3s, lists, data-backed claims — are massively overrepresented in citations. The AI doesn’t want your marketing fluff. It wants answers it can cite with confidence. Makes sense if you think about it. You wouldn’t cite a billboard in a research paper.
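If you want a rough self-check of that structure signal on your own pages, here's a minimal sketch using Python's built-in `html.parser`. Counting H2s, H3s, and list items is my own crude heuristic, not anything Bing has published:

```python
from html.parser import HTMLParser

class StructureCheck(HTMLParser):
    """Counts the structural elements AI answers seem to favor citing."""
    def __init__(self):
        super().__init__()
        self.counts = {"h2": 0, "h3": 0, "li": 0}

    def handle_starttag(self, tag, attrs):
        # HTMLParser hands us lowercased tag names
        if tag in self.counts:
            self.counts[tag] += 1

def audit_structure(html: str) -> dict:
    parser = StructureCheck()
    parser.feed(html)
    return parser.counts

page = "<h2>Pricing</h2><ul><li>Free</li><li>Pro</li></ul><h3>FAQ</h3>"
print(audit_structure(page))  # {'h2': 1, 'h3': 1, 'li': 2}
```

If a key page comes back with zeros across the board, it's probably a wall of marketing prose with nothing quotable in it.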

How AI models actually decide what to recommend

This is where being a dev who uses Claude Code daily actually matters. I don’t just use AI — I’ve shipped production apps with it. I’ve seen how it hallucinates your Convex schema at 4 PM on a Tuesday and somehow nails a complex auth flow ten minutes later. These models are chaos wrapped in a probability distribution.

They don’t have a “ranking algorithm” like Google. No crawl score, no domain authority, no PageRank. They synthesize answers from patterns in their training data, plus (for Perplexity and ChatGPT with search) whatever they pull from the live web.

So the question isn’t “how do I rank higher.” It’s “how do I become part of the pattern.”

Three things matter, and one of them will annoy you:

  • Third-party mentions beat your own website 6.5x. Yeah. Your beautiful landing page? The one you spent three weekends perfecting the hero section gradient on? The AI mostly ignores it. What it looks for is other people talking about you. Blog reviews. Reddit threads. GitHub discussions. You’re 6.5 times more likely to get cited through someone else’s content than your own domain. All those hours on your /features page and the AI is out there reading some random dude’s dev.to post instead.
  • Specificity wins over authority. 90% of pages ChatGPT cites rank at position 21 or lower on Google. Read that again. You don't need page one. You need content that gives specific, detailed, useful answers to exact questions. A random blog post from a dev who used your tool and wrote a genuine walkthrough? Gold. Your SEO-optimized “Top 10 Tools For…” listicle? The AI has seen a thousand of those and it’s bored.
  • Freshness matters more than you’d expect. ChatGPT recommended one of my competitors that shut down 8 months ago. Just casually suggested a dead product like a GPS that keeps routing you to a Blockbuster. Models learn from snapshots. If your last meaningful content update was 6 months ago, you’re slowly becoming a ghost. Perplexity is better here because it searches live, but ChatGPT and Claude rely on training data that might still think it’s 2024.

My $0.30 visibility audit

No fancy monitoring platform. Just me, three browser tabs, and 15 prompts a potential customer might type. Things like “best AI tool for [my category]”, “alternatives to [bigger competitor]”, “I need [core feature], what should I use?”

The results:

  • ChatGPT mentioned me in 0 out of 15 prompts. Zero. It happily recommended 4 competitors I’d never heard of — including the dead one. Cool cool cool.
  • Claude — 2 out of 15. Both times buried at the end of a list, with a description so generic it could’ve been about literally any product in my category. Like being invited to a party but having to stand in the garage.
  • Perplexity — 1 out of 15. Got my pricing wrong (listed the old plan I deprecated in November) and linked to a blog post from 2024 instead of my actual product page.

6.7% visibility rate. For a product that ranks on page one of Google for its main keyword 🫠

Then I automated it:

```python
import anthropic
import openai

prompts = [
    "What's the best tool for [your category]?",
    "Compare [competitor] vs alternatives for [use case]",
    "I need [core feature]. What should I use?",
    # add your category-specific prompts
]

def test_visibility(prompt):
    results = {}

    # Test Claude
    claude = anthropic.Anthropic()
    response = claude.messages.create(
        model="claude-sonnet-4-5-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    results["claude"] = response.content[0].text

    # Test ChatGPT
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    results["chatgpt"] = response.choices[0].message.content

    return results

for prompt in prompts:
    data = test_visibility(prompt)
    for model, response in data.items():
        # keep the brand string lowercase — comparing an uppercase
        # brand against response.lower() would never match
        mentioned = "your_brand" in response.lower()
        print(f"{model}: {'✅' if mentioned else '❌'} - {prompt[:50]}...")
```

$0.30 per run. Weekly cadence. Between this script (ChatGPT + Claude) and Bing’s free dashboard (Copilot + Bing AI), you get full coverage without paying for any third-party tool.
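For the weekly cadence it helps to persist each run instead of eyeballing terminal output. A minimal sketch of a CSV logger: the fake dict below stands in for real `test_visibility` output, and `your_brand` is a placeholder for your actual product name.

```python
import csv
from datetime import date

BRAND = "your_brand"  # placeholder: your product name, lowercased

def log_run(results, path="visibility_log.csv"):
    """Append one row per (prompt, model) pair so runs can be diffed week over week."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for prompt, responses in results.items():
            for model, text in responses.items():
                hit = int(BRAND in text.lower())
                writer.writerow([date.today().isoformat(), model, prompt, hit])

# Fake responses standing in for real API output
fake = {
    "best tool for X?": {
        "claude": "You could try your_brand or CompetitorCo.",
        "chatgpt": "CompetitorCo is a solid pick.",
    },
}
log_run(fake, "demo_log.csv")
```

Six weeks of rows in that file is exactly the before/after data you need to know whether any of this is working.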

What I changed (results after 6 weeks)

This part isn’t theory. I tracked everything. Before/after, weekly diffs, the whole thing.

Got other people to write about me. Not outreach spam — genuine participation. I answered questions on Reddit where my tool was actually relevant (and kept my mouth shut when it wasn't, which was harder than expected). I reached out to devs who wrote “here’s my stack” posts and offered free access for honest coverage. Built integration guides with Zapier, n8n, Supabase that naturally reference my product.

Result: ChatGPT went from 0/15 to 4/15 mentions in two months. The n8n community template alone got me two new third-party mentions I didn’t even ask for.

Rewrote my key pages as “answer-shaped” content. AI models want the answer in the first 50–70 words, then depth. Not “What is [category]? Let me explain the rich and fascinating history…” — the AI will tune out faster than your users during a product demo. Instead: “The best tool for [use case] depends on X, Y, and Z. For teams under 10, [this]. For enterprise, [that].”

Result: Perplexity started citing my actual product page instead of that random 2024 blog post.

Shipped integrations, not just features. Every integration is a new node in the AI’s understanding of your product. When my tool appeared in Zapier’s directory and n8n’s community templates, third-party mentions spiked. Anyway I could write a whole article just about the integration strategy but the short version is: be where other tools already are and the AI will connect the dots.

Fed the live-search models fresh data. Comparison pages with schema markup. Docs updated weekly. Pricing page with clean machine-readable structure. That deprecated pricing Perplexity was showing? Fixed within 2 weeks of updating with proper structured data. Two weeks. Not six months of “domain authority building.”
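On the “clean machine-readable structure” point: the lowest-effort version is a JSON-LD block on your pricing page. A sketch of generating one in Python, where the product name, price, and plan description are all placeholder values and the schema.org types are one reasonable choice, not a prescription:

```python
import json

# Placeholder values — swap in your real product details
pricing_jsonld = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "YourProduct",
    "applicationCategory": "BusinessApplication",
    "offers": {
        "@type": "Offer",
        "price": "29.00",
        "priceCurrency": "USD",
        "description": "Pro plan, billed monthly",
    },
}

# The tag you'd drop into your pricing page's <head>
script_tag = (
    '<script type="application/ld+json">'
    + json.dumps(pricing_jsonld)
    + "</script>"
)
print(script_tag)
```

The point is that a live-search engine scraping your page gets the current price as unambiguous data, not prose it has to parse.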

The uncomfortable numbers

For the skeptics (and I respect the skepticism — the AI hype cycle has trained us all to be suspicious of anyone claiming a new channel matters):

60% of Google searches now end without a click. Headed toward 70%. Organic CTR for queries with AI Overviews dropped 61% year-over-year. But brands that DO get cited in AI answers get 35% more organic clicks than those that don't.

ChatGPT pulls over 5 billion monthly visits — fourth most-visited site on earth. 30% of Perplexity users are in senior leadership roles. These are decision-makers with budget, not people googling “is a hot dog a sandwich.”

And AI platforms still account for less than 1% of global internet traffic. The channel is already 5x more valuable per visitor, the competition is basically nonexistent, and Microsoft just gave us measurement tools. If this were a video game, this would be the part where you find an unlooted chest sitting in plain sight in a room everyone walked past 💀

Your 30-minute audit (do this today)

Right now — Open ChatGPT, Claude, and Perplexity. Type 5 prompts a customer would use to find a tool like yours. Screenshot everything. Count your mentions. Then go to bing.com/webmasters/aiperformance and check your citation count. That's your double baseline.

This week — Run the same prompts for your top 3 competitors. If they show up and you don't, look at what content exists about them that doesn't exist about you. The gap is usually third-party coverage, not your own site.

This month — Pick the highest-value prompt where you’re absent. Create one piece of content specifically designed to answer that prompt. Not a product blog post — a genuinely helpful resource that happens to reference your tool. Ship it. Wait two weeks. Test again.

Ongoing — Script weekly. Bing dashboard monthly. AI visibility doesn’t move like Google rankings. You’ll see nothing, nothing, nothing, then sudden inclusion. Step function, not a slope.

Your next customer might never see a Google result. They might just ask Claude. And when they do, your SaaS better have an answer ready.


If this was useful, follow me — I write about building SaaS with AI tools, shipping with Claude Code, and the kind of automation that makes your 9-to-5 colleagues nervous. Next up: how I automated my entire content pipeline with n8n and Claude (without losing my soul in the process).
