# How to Check if AI Engines Know Your Brand (and Fix It)
Published: March 2026
When someone asks ChatGPT, Gemini, or Perplexity about your product category, does your brand appear in the answer?
For most brands, the honest answer is: you don't know. And that's the problem.
This guide walks through how to audit your brand's visibility in AI search engines and what to actually fix — using XanLens as an example.
## What is GEO Visibility?
Generative Engine Optimization (GEO) is the practice of ensuring AI engines can find, understand, and recommend your brand in their generated answers. Unlike traditional SEO, which focuses on Google rankings, GEO measures visibility in AI-generated responses from ChatGPT, Gemini, Perplexity, Grok, and others.
A brand can rank #1 on Google and score 0 in AI search. The signals are different: AI engines weight entity clarity, first-paragraph content, structured data, and co-occurrence in training data — not backlinks.
## Step 1: Run a GEO Audit
The fastest way to get a baseline is XanLens (xanlens.com). It runs 100+ queries across Gemini, ChatGPT, Perplexity, Grok, and DeepSeek, then gives you a 0-100 score across three dimensions:
- Knowledge score: How accurately AI engines describe what your brand does
- Discoverability score: Whether AI recommends you in category queries ("best GEO tools 2026")
- Citation score: Whether AI cites real third-party sources about your brand
```shell
curl -X POST https://xanlens.com/api/v1/audit/run \
  -H "Content-Type: application/json" \
  -d '{"website": "https://yourbrand.com"}'
```
Cost: $0.99 via USDC on Base. No account required.
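If you want to script the audit rather than call curl by hand, the request above is easy to wrap in Python. A caveat: the request body matches the curl example, but the response field names in `summarize()` (a `"scores"` key with `knowledge`/`discoverability`/`citation` fields) are assumptions about the response shape, not the published API schema, and the USDC payment step is omitted entirely.

```python
import json

# Endpoint from the curl example above.
AUDIT_URL = "https://xanlens.com/api/v1/audit/run"

def build_audit_payload(website: str) -> bytes:
    """Build the POST body shown in the curl example."""
    return json.dumps({"website": website}).encode("utf-8")

def summarize(result: dict) -> str:
    """Flatten the three dimension scores into one line.

    The "scores" key and its field names are assumptions about the
    response shape; check the real API docs before relying on them.
    """
    scores = result["scores"]
    return (f"knowledge={scores['knowledge']} "
            f"discoverability={scores['discoverability']} "
            f"citation={scores['citation']}")
```

Sending the payload is a standard HTTP POST with the `Content-Type: application/json` header from the curl call; handling the $0.99 payment is left out of this sketch.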
## Step 2: Interpret Your Score
| Score | Meaning |
|---|---|
| 0-20 | AI engines don't know you exist |
| 21-40 | Weak knowledge, invisible in discovery |
| 41-60 | Known but not recommended |
| 61-80 | Visible, gaps in specific engines |
| 81-100 | Strong presence across most engines |
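If you track scores over time, the bands in the table above are straightforward to encode. This tiny helper is our own illustration, mirroring the table exactly:

```python
def score_tier(score: int) -> str:
    """Map a 0-100 audit score to the band labels from the table above."""
    bands = [
        (20, "AI engines don't know you exist"),
        (40, "Weak knowledge, invisible in discovery"),
        (60, "Known but not recommended"),
        (80, "Visible, gaps in specific engines"),
        (100, "Strong presence across most engines"),
    ]
    for upper, label in bands:
        if score <= upper:
            return label
    raise ValueError("score must be between 0 and 100")
```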
When we audited XanLens itself, we scored 20/100. Gemini thought we were a "life sciences data analysis platform." Grok described us as "an AI image analysis and computer vision tool." We built the product that measures this — and were failing it.
## Step 3: Fix Your First 200 Words
Perplexity reads your homepage's first paragraph verbatim when citing you. Most brands open with taglines like "The future of [X]," which tell AI nothing.
Rewrite sentence 1 as: [Brand] is a [category] that [does what].
Example:
"XanLens is a Generative Engine Optimization (GEO) audit tool that measures brand visibility across 7 AI search engines and generates automated content fixes."
This format is directly extractable by RAG systems.
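As a rough sanity check on your own copy, you can test whether a first sentence follows the "[Brand] is a [category] that [does what]" shape. This regex heuristic is our own invention, not something any engine publishes:

```python
import re

# Loose pattern for "[Brand] is a [category] that [does what]".
# A heuristic only: it checks shape, not whether the category and
# description are actually accurate.
PATTERN = re.compile(r"^[A-Z][\w()\-. ]* is an? .+ that .+")

def is_extractable(sentence: str) -> bool:
    """True if the sentence matches the recommended opening shape."""
    return bool(PATTERN.match(sentence.strip()))
```

Running it on the two openings discussed above, the XanLens sentence passes and a "The future of [X]" tagline fails.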
## Step 4: Add Schema Markup
Schema markup increases AI citation likelihood by 28-40% (based on SE Ranking's study of 2.3M pages). Add at minimum:
- `Organization` schema with `sameAs` links to your social profiles
- `SoftwareApplication` or the appropriate category schema
- `FAQPage` schema for common questions about your product
```json
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "XanLens",
  "description": "GEO audit tool that measures brand visibility across 7 AI engines",
  "applicationCategory": "BusinessApplication",
  "offers": {
    "@type": "Offer",
    "price": "0.99",
    "priceCurrency": "USD"
  }
}
```
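Before shipping JSON-LD, it's worth round-tripping it through a JSON parser so a stray comma can't silently break extraction. This is a generic sketch (not a XanLens feature) that validates a schema dict and emits the `<script>` tag for your page's `<head>`:

```python
import json

# The SoftwareApplication markup from this step, as a Python dict.
schema = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "XanLens",
    "description": "GEO audit tool that measures brand visibility across 7 AI engines",
    "applicationCategory": "BusinessApplication",
    "offers": {"@type": "Offer", "price": "0.99", "priceCurrency": "USD"},
}

def to_jsonld_tag(data: dict) -> str:
    """Serialize a schema dict into an embeddable JSON-LD script tag.

    json.dumps raises if the dict isn't serializable, so anything this
    returns is guaranteed to be valid JSON.
    """
    return ('<script type="application/ld+json">\n'
            + json.dumps(data, indent=2)
            + "\n</script>")
```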
## Step 5: Create /llms.txt
Add a plain text file at yourbrand.com/llms.txt. GPTBot and ClaudeBot read this file on every crawl.
Structure:
```text
# Brand Name
> One-sentence definition of what the brand does.

## Key Entities
- Brand: what it is
- Category: how the industry names it
- Key product: what it does specifically

## Factual Claims
1. [Verifiable claim with number]
2. [Verifiable claim with number]

## Pages
- Homepage: https://yourbrand.com
- Docs: https://yourbrand.com/docs
```
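A small generator keeps /llms.txt in sync with one source of truth instead of a hand-edited file. The input field names below are our own invention; only the output follows the template structure in this step:

```python
def render_llms_txt(brand: dict) -> str:
    """Render the /llms.txt template from a dict of brand facts."""
    lines = [f"# {brand['name']}", f"> {brand['definition']}", "", "## Key Entities"]
    for entity, desc in brand["entities"].items():
        lines.append(f"- {entity}: {desc}")
    lines += ["", "## Factual Claims"]
    for i, claim in enumerate(brand["claims"], 1):
        lines.append(f"{i}. {claim}")
    lines += ["", "## Pages"]
    for label, url in brand["pages"].items():
        lines.append(f"- {label}: {url}")
    return "\n".join(lines) + "\n"
```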
## Step 6: Build Off-Site Presence
AI engines don't primarily cite your own website. 85% of AI citations come from third-party sources (SE Ranking, 2.3M pages studied). The platforms that move the needle:
| Platform | Primary Engine Impact |
|---|---|
| Reddit | ChatGPT (4x citation rate boost) |
| Dev.to / Hashnode | Perplexity |
| YouTube | Gemini AI Overviews |
| Wikipedia | All engines (authority signal) |
| AI directories | Discovery queries |
| Crunchbase | Entity validation |
## Step 7: Re-Audit in 7 Days
This is the part most brands skip. Without re-auditing, you don't know if your fixes worked. AI engines update on different schedules — some changes take days, others take weeks.
The loop: audit → fix → re-audit → repeat.
XanLens costs $0.99 per audit, which makes weekly re-auditing practical. We're running ours publicly and posting the results.
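The loop is easy to automate: store each week's scores and diff them to see which dimensions moved. The dimension keys below mirror the three scores from Step 1, and the numbers are purely illustrative:

```python
def score_delta(before: dict, after: dict) -> dict:
    """Per-dimension change between two audits of the same site."""
    return {k: after[k] - before[k] for k in before}

# Illustrative numbers only, keyed by the three Step 1 dimensions.
week1 = {"knowledge": 20, "discoverability": 10, "citation": 15}
week2 = {"knowledge": 45, "discoverability": 22, "citation": 18}
```

A positive delta means a fix landed; a flat one means the engine either hasn't re-crawled yet or the fix missed.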
## Resources
- XanLens GEO audit: https://xanlens.com
- GEO academic paper (Princeton/Georgia Tech/IIT Delhi, KDD 2024): https://arxiv.org/abs/2311.09735
- SE Ranking AI citation study (2.3M pages): https://seranking.com