Edvisage Global

I Audited a Business's AI Visibility Across Four Platforms. The Results Were Worse Than Expected.

Most businesses have spent years optimizing for Google. Title tags, meta descriptions, backlinks, structured data. The whole playbook.

Nobody told them they also need to optimize for ChatGPT.

Or Claude. Or Gemini. Or Perplexity.

I recently completed an AI visibility audit for a client — a legitimate, established consulting practice with a real website, real services, and real clients. Here's what I found when I asked four major AI platforms about their business.


What an AI Visibility Audit Actually Is

An AI visibility audit tests how AI language models understand, represent, and recommend a business when users ask questions that relate to that business's services. It uses two tiers of queries:

Tier 1 — Category queries: How a potential client would search for the type of service without knowing the business name. Things like "best AI readiness consultants" or "who can help my company prepare for AI."

Tier 2 — Brand queries: Direct searches for the business name and website URL.

I ran both tiers across four platforms: ChatGPT (GPT-4o), Claude (Sonnet), Gemini, and Perplexity. Eight queries total. Sixteen data points. Screenshots of everything.
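The two-tier design is easy to organize as a simple cross of platforms and prompts. The sketch below is one way to lay out the run plan, with placeholder prompts standing in for the client-specific queries — the platform labels match the audit, but the exact wording and counts here are illustrative, not the actual client matrix.

```python
from itertools import product

# Platforms tested in the audit; prompts are illustrative placeholders.
PLATFORMS = ["ChatGPT (GPT-4o)", "Claude (Sonnet)", "Gemini", "Perplexity"]

QUERIES = {
    "tier1_category": [
        "best AI readiness consultants",
        "who can help my company prepare for AI",
    ],
    "tier2_brand": [
        "<brand name>",   # placeholder for the business's name
        "<website URL>",  # placeholder for the business's domain
    ],
}

def build_audit_plan(platforms, queries):
    """Cross every platform with every prompt; each cell is one data point."""
    plan = []
    for platform, (tier, prompts) in product(platforms, queries.items()):
        for prompt in prompts:
            plan.append({"platform": platform, "tier": tier, "prompt": prompt})
    return plan

plan = build_audit_plan(PLATFORMS, QUERIES)
```

Each cell gets run manually in a fresh session and screenshotted — the point of the matrix is just to make sure no platform/tier combination is skipped.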


The Platforms I Tested and Why Each One Matters

ChatGPT has the largest user base of any AI assistant. If someone is using AI to research vendors, there's a reasonable chance they're starting here.

Claude has strong enterprise adoption and is increasingly used for research and decision support in professional contexts.

Gemini is Google's AI. It is deeply integrated into Google Search and Google Workspace. Anyone using Google products has a short path to Gemini.

Perplexity is different from the others — it's an AI-native search engine that crawls the live web continuously. It reflects current web content faster than any other platform.

Note: I excluded Microsoft Copilot from this audit due to geographic routing issues in my testing environment. It will be included in the follow-up After Report.


Tier 1 Findings: Complete Invisibility in Category Searches

This is the commercially critical result.

Across all four platforms and both category queries, the client's business did not appear once.

Not buried at the bottom. Not mentioned in passing. Not even alluded to.

Every platform returned the same set of large firms: McKinsey, BCG, Accenture, Deloitte, IBM, and a handful of boutique names with significant web presence. The client's practice — which offers genuinely differentiated, vendor-neutral consulting at accessible price points — was completely absent.

This is the gap that matters most. When a potential client sits down and asks an AI assistant who to hire, this business does not exist in the answer. That is a lost business opportunity at the discovery stage, before any human conversation has begun.


Tier 2 Findings: Brand Confusion Across Three of Four Platforms

This is where things got more interesting — and more instructive.

When I searched the business's brand name directly, three of the four platforms did not recognize it as a consulting business at all.

Platform A recognized the stylized nature of the brand name and hinted that it might be an acronym or branding choice, but had no knowledge of what the business does, who runs it, or what services it offers.

Platform B had no knowledge of the brand and asked for clarification. It offered several possible interpretations — all of them generic. It didn't know this business existed.

Platform C confidently returned a detailed, helpful response about an entirely different type of business in an unrelated industry. It wasn't confused or uncertain. It was wrong and certain. This is the most dangerous result in the audit — a potential client gets a confident, detailed, completely incorrect answer and has no way to know it.

Perplexity got it right. It correctly described the business, its purpose, and its service offering. This is because Perplexity crawls the live web and the client's site content was readable. This is the most actionable finding: the content exists and is accurate. The problem is that the other platforms can't read it.


The Website URL Test

The second Tier 2 query — searching the website URL directly — produced a revealing pattern.

ChatGPT could not find a working or well-known website for the domain. It described it as potentially misspelled or inactive and suggested unrelated companies in the same general naming space. For a business with a live, functional website, this is a significant discoverability problem.

Claude identified that the domain redirects to a different primary domain but could not read the destination page content. It knew the redirect existed but couldn't see through it.

Gemini listed unrelated businesses first, then correctly identified the client's business in third position. The correct information exists in Gemini's index but is buried behind noise.

Perplexity again performed best — correctly and fully describing the business from the website URL alone.


The Redirect Problem

One structural finding that emerged from the audit deserves its own mention.

The client uses two domains — one redirects to the other. This fragments the digital identity across two URLs. AI platforms generally index the destination domain, not the redirect source, which means marketing materials pointing to one domain may be building brand recognition that AI platforms attribute to the other.

Both ChatGPT and Claude identified the redirect but could not read the destination page — suggesting the redirect itself may be reducing content accessibility for AI crawlers.

This is the kind of structural issue that doesn't show up in traditional SEO audits but matters significantly for AI visibility.
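You can reason about this mechanically. Given an observed redirect chain, the destination host is the one crawlers will attribute content to, and a temporary (302/307) hop is more likely to fragment indexing than a permanent (301/308) one. A minimal analyzer, with placeholder domains standing in for the client's two URLs:

```python
from urllib.parse import urlparse

def analyze_redirect(chain):
    """Given an observed redirect chain [(url, http_status), ...],
    report which domain crawlers will index and whether the brand's
    identity is split across hosts. Permanent redirects (301/308)
    tell crawlers to consolidate on the destination; temporary ones
    (302/307) leave attribution ambiguous."""
    source_host = urlparse(chain[0][0]).netloc
    dest_host = urlparse(chain[-1][0]).netloc
    hop_statuses = [status for _, status in chain[:-1]]
    return {
        "indexed_domain": dest_host,
        "split_identity": source_host != dest_host,
        "permanent": all(s in (301, 308) for s in hop_statuses),
    }

# Hypothetical chain mirroring the client's setup (domains are placeholders):
report = analyze_redirect([
    ("https://brand-short.example", 302),
    ("https://primary-domain.example/", 200),
])
```

A `split_identity` of `True` combined with `permanent` of `False` is the worst case: marketing points at one host while crawlers can't confidently consolidate signals onto the other.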


What Needs to Change

Four distinct issues emerged from the findings:

1. Complete category invisibility. No AI platform recommends this business when someone searches for what it does. This is the most commercially damaging gap.

2. Brand name confusion. Three of four platforms associate the brand with an entirely different industry. A potential client searching the brand name on most platforms gets confidently wrong information.

3. Website unreadability. Most platforms know the site exists but cannot read its content or describe what the business does.

4. Split digital identity. Two domains are dividing the brand's AI footprint, making it harder for any platform to build a complete and accurate picture.

Each of these has a different root cause and a different fix. Knowing which platforms are affected by which issues — and in what order to address them — is what determines whether the implementation actually works.


The Implementation

After completing the audit, I delivered a structured implementation package addressing each of the four root causes.

I won't walk through exactly what's in it here — the specific combination of technical files, content changes, and structural decisions is where the real work happens and where getting things wrong in the wrong order can delay results by weeks.

What I can say is that none of it requires a developer, none of it requires paid tools, and the changes range from immediate (days) to longer-term (weeks to months) depending on which platform you're targeting and what type of query you want to show up in.


What the After Report Will Show

I'll conduct the After Report audit 45 to 60 days after the client confirms the implementation is live.

The timeline for improvement varies significantly by platform. Perplexity reflects changes fastest because it crawls the live web continuously. The other platforms update on their own cycles — some faster, some slower — and the type of query matters too. URL queries improve before brand name queries. Brand name queries improve before category queries. Understanding that sequence is part of setting accurate expectations.


Why This Matters Beyond This One Client

This client's situation is not unusual. It is the default state for most businesses that were built before generative AI became a primary research tool.

The behavior has shifted. People are using AI assistants to research vendors, evaluate options, and make initial shortlists before they ever visit a website or talk to a human. If a business is invisible or misidentified in that moment, the sales conversation never starts.

This is not an SEO problem in the traditional sense. Google rankings don't translate directly into AI platform recommendations. A business can rank first on Google and not appear in a single AI-generated recommendation. The optimization required is different — it requires a specific type of structured, machine-readable content that tells AI systems exactly what a business is and when to recommend it.

The playbook is still early. Most businesses haven't heard of it. That window won't stay open.


Run a Basic Version on Your Own Business

You don't need to hire anyone to get a first read on where you stand.

Open Perplexity and run two queries: your business category and your website URL. Perplexity gives you the most current picture of what AI platforms can actually read from your site.

If the results are wrong, incomplete, or missing entirely — that's your baseline. The gap between what Perplexity returns today and what you want it to return is roughly the scope of the work.
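One more zero-cost check before you run any queries: look at your own robots.txt, because a blanket `Disallow` silently hides a site from AI crawlers even when Google can read it fine. The sketch below uses the published crawler tokens (GPTBot for OpenAI, ClaudeBot for Anthropic, PerplexityBot, Google-Extended) against a sample robots.txt; swap in your own file's contents.

```python
from urllib.robotparser import RobotFileParser

# Published user-agent tokens for the major AI crawlers.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def crawler_access(robots_txt: str, path: str = "/"):
    """Return {crawler: allowed?} for the given robots.txt content."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {bot: rp.can_fetch(bot, path) for bot in AI_CRAWLERS}

# Example: a robots.txt that allows Googlebot but blocks everyone else --
# a common setup that keeps a site out of every AI platform's index.
sample = """\
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
"""
access = crawler_access(sample)
```

If any of these come back `False` on your real robots.txt, no amount of content work will fix visibility on that platform until the crawler is allowed in.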


Update — April 28, 2026: The threat classifier I built for this audit is now live as a public API on RapidAPI. If you want to integrate real-time prompt injection and attack detection directly into your agent pipeline, free tier available here: Vigil Threat Classifier


I run AI consulting and content through Edvisage Global. If you want a full audit across all four platforms with a structured implementation package, start here: www.edvisageglobal.com/services#ai-readiness
