Fay
I Audited 5 SaaS Brands for AI Visibility. The Results Were Surprising.

A few months ago I started wondering something. Everyone talks about SEO rankings, backlink profiles, and domain authority. But does any of that actually matter for whether AI systems recommend your brand?

I decided to test it the direct way: manually check five SaaS brands across multiple AI engines and see what came back.

The brands I picked covered a range: one large public company, two well-funded scaleups, and two solid but lesser-known mid-market players. I'm keeping the names out of this because the point isn't to embarrass anyone; it's to show the pattern.

What I Actually Did

For each brand I ran a set of category-level prompts across ChatGPT (GPT-4o), Perplexity, and Google Gemini. Not "tell me about [Brand]" -- that just gives you a summary. I asked the kinds of questions real buyers ask:

  • "What's the best [tool category] for [use case]?"
  • "Compare the top options for [problem this tool solves]"
  • "I'm a [target customer type], what tool should I use for [task]?"

Five prompt variations per brand, across three engines and five brands: 75 checks in total. I logged whether the brand appeared at all, where it appeared in the response (mentioned first, mentioned alongside competitors, mentioned briefly, or not mentioned), and how the AI described it.
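For what it's worth, here is roughly how the grid and the log were structured. This is a minimal sketch in Python, not my exact script; the brand names, meta fields, and position labels are placeholders I'm inventing for illustration, and a spreadsheet works just as well.

```python
# Placeholder grid: 5 brands x 5 prompt variations x 3 engines = 75 checks.
BRANDS = ["BrandA", "BrandB", "BrandC", "BrandD", "BrandE"]
ENGINES = ["chatgpt-gpt-4o", "perplexity", "gemini"]

def prompts_for(meta):
    """Category-level prompt variations for one brand (three of five shown)."""
    return [
        f"What's the best {meta['category']} for {meta['use_case']}?",
        f"Compare the top options for {meta['problem']}",
        f"I'm a {meta['customer']}, what tool should I use for {meta['task']}?",
    ]

def log_row(brand, engine, prompt, response_text):
    """One row per check. 'position' is judged by hand after reading the
    answer: 'first' | 'alongside' | 'brief' | 'absent'."""
    return {
        "brand": brand,
        "engine": engine,
        "prompt": prompt,
        "mentioned": brand.lower() in response_text.lower(),
        "position": None,
        "description": "",  # verbatim snippet of how the AI described the brand
    }
```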

The Pattern That Showed Up

Here's what I found. I expected something like this, but not this stark.

The large public company appeared in roughly 90% of relevant queries. Not always first, but almost always present. When it did appear, the descriptions were detailed and accurate. Multiple competitors mentioned alongside it.

The two well-funded scaleups (both with significant PR, investor press, and active content programs) appeared in about 60-70% of queries. When they appeared, the descriptions were fairly accurate. One had a strong Perplexity showing but was weak on Gemini. The other was the reverse.

The first mid-market player was a quality product with a good Google ranking for its main keywords. In AI engines? Present in roughly 25% of queries. When it did appear, the description was often vague or slightly outdated, referring to features the product had updated 18 months ago.

The second mid-market player was the most surprising. Strong product, real customers, decent content on their own site. But in 75 AI checks, it appeared a total of 4 times. Twice in Perplexity (with one citation back to a third-party review site), once in ChatGPT in a long list without detail, and once in Gemini in passing.

Four mentions out of 75 checks.

Why This Happens

After digging into this, the pattern makes sense, even if it's frustrating.

AI engines learn from what is available and verifiable. Large companies with years of press coverage, analyst reports, G2 reviews, Reddit threads, comparison posts, and Wikipedia entries have rich, consistent third-party documentation. The AI has a lot to work with.

Mid-market SaaS companies often have great products but thin external footprints. Their content lives on their own site. Third-party coverage is sparse. There are few places where the AI can cross-reference and feel "confident" enough to recommend them.

The SEO playbook says: rank your own pages. The generative engine optimization (GEO) reality is different. What makes an AI recommend you is what other people say about you, not what you say about yourself. Owned content matters less than third-party content that is findable, credible, and consistent.

This is the fundamental gap between SEO and GEO.

What the AI Descriptions Revealed

Beyond presence/absence, the quality of AI descriptions was telling.

When the large public company was mentioned, descriptions included specific use cases, named integrations, and accurate pricing tiers. The AI clearly had multiple quality sources to synthesize from.

For the mid-market brands that did appear, descriptions were often incomplete or generic: "[Brand] is a tool for X" with nothing that would actually help a buyer decide. The AI did not have enough information to say anything useful, so it barely said anything.

This matters more than whether you show up. A vague mention in ChatGPT is not going to drive a decision. A specific, accurate description that positions you correctly -- that drives clicks and trust.

The Engines Are Not the Same

One thing I did not expect: how much variance there was across engines for the same brand.

The scaleup with strong Perplexity visibility was nearly invisible on Gemini. The reason appears to be citation sources. Perplexity heavily weights certain tech review sites and comparison platforms. Gemini seems to weight Google's own index differently, favoring structured content and Google Business signals.

This means "checking your AI visibility" is not one check. You need to look across engines separately, because your blind spots will be different on each one.
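If you log rows like the ones sketched earlier, the per-engine breakdown is one small aggregation. A minimal sketch, assuming the log_row fields from above:

```python
from collections import defaultdict

def visibility_by_engine(rows):
    """Percent of checks in which each (brand, engine) pair had a mention."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in rows:
        key = (r["brand"], r["engine"])
        totals[key] += 1
        hits[key] += int(r["mentioned"])
    return {k: round(100 * hits[k] / totals[k], 1) for k in totals}

# e.g. {('BrandB', 'perplexity'): 80.0, ('BrandB', 'gemini'): 20.0, ...}
```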

What Fixes AI Visibility

Based on this and follow-up research:

Third-party mentions are the highest-leverage fix. Getting included in one solid comparison post on a credible site did more for Perplexity visibility than several months of on-site content for one of the brands I tracked.

Comparison content works. "X vs Y" pages and category listicles get cited in AI answers constantly. If your brand is not on the right comparison pages, you are not getting recommended.

Clean, factual on-site content still matters. But it functions as a foundation, not a driver. The AI needs to understand what you do clearly. That starts with your own pages but requires external validation to build confidence.

Recency signals matter. One brand had good historical coverage but nothing recent. AI engines, especially those with web access, weight recent content. Brands that have gone quiet on external channels can fade from visibility even when they were previously well-represented.

Why This Is Urgent

AI search is not a future consideration. According to multiple industry analyses, a meaningful portion of product research is already happening in AI chat interfaces rather than traditional search. The Gartner prediction cited widely in marketing circles puts the decline in brands' organic search traffic at 50% or more by 2028 as AI interfaces absorb more queries.

The brands building AI visibility now are building an asset. The ones waiting are falling behind in a way that will be hard to reverse, because the brands getting mentioned are getting cited by the same sources that trained the models.

How to Actually Check Where You Stand

If you want to run the kind of check I did manually, it takes a few hours and produces inconsistent results because AI answers are probabilistic.
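If you'd rather script part of it, here is a minimal sketch of one engine's leg using the official OpenAI Python SDK (assumes OPENAI_API_KEY is set). The substring match for mentions is crude (it misses paraphrases and partial names), and Perplexity and Gemini need their own clients; treat this as a starting point, not the audit.

```python
from openai import OpenAI

client = OpenAI()

def check_once(prompt: str, brand: str, model: str = "gpt-4o") -> bool:
    """Ask one category-level question; return True if the brand shows up."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    answer = resp.choices[0].message.content or ""
    return brand.lower() in answer.lower()

def mention_rate(prompt: str, brand: str, runs: int = 3) -> float:
    """Answers are probabilistic, so repeat each prompt and average."""
    return sum(check_once(prompt, brand) for _ in range(runs)) / runs
```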

There are now tools that automate this. The one I've been using for quick audits is XanLens. It checks 7 engines (ChatGPT, Gemini, Perplexity, Claude, Grok, DeepSeek, Meta AI) automatically, gives you a score per engine, and tells you specifically what you are missing. At $0.99 per audit it is genuinely the fastest way to get a baseline without committing to a $200+/month monitoring subscription before you even know what you are dealing with.

The pattern I found manually is real. If you are a mid-market SaaS brand and you have not checked your AI visibility, there is a reasonable chance you are in the same situation as the brand that showed up 4 times out of 75. The only way to know is to check.
