We built an AI search optimization tool. We used it on our own site. We scored 88/100 on AI readiness — structured data, FAQ schema, fresh content, cited sources, proper heading hierarchy, the works.
Then we asked 5 AI engines 7 questions whose answers should logically include our tool. Zero mentions across 35 checks.
Not low visibility. Not partial visibility. Zero.
Here's what we learned about the gap between being ready for AI search and actually getting cited by AI search.
The setup: what "88/100" actually measures
Our AI readiness score evaluates 8 technical dimensions:
| Dimension | What it checks |
|---|---|
| Structured Data Richness | JSON-LD schemas (FAQPage, Article, Organization, etc.) |
| Heading Clarity | H1-H3 hierarchy, semantic heading structure |
| FAQ Quality | Visible FAQ content matching schema markup |
| Entity Identity | Brand entity data, consistent NAP, knowledge panel signals |
| Content Depth | Word count, semantic coverage, topical comprehensiveness |
| Citation Formatting | Inline citations, sources sections, attribution |
| Topical Authority | Internal linking, content clusters, expertise signals |
| AI Crawler Access | robots.txt rules for GPTBot, ClaudeBot, PerplexityBot, etc. |
We scored well on all of them. Our robots.txt allows all major AI crawlers. We have FAQPage schema on 40+ pages. Every blog post has a Sources & Further Reading section with academic and industry citations. Content is freshly updated (every post modified within the last 30 days).
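For context, FAQPage markup follows the standard schema.org JSON-LD pattern. This snippet is a generic illustration, not our actual markup:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is AI search optimization?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Structuring content so AI engines can extract, attribute, and cite it accurately."
      }
    }
  ]
}
```

The key requirement is that the marked-up Q&A text also appears visibly on the page; that's what the FAQ Quality dimension in the table above checks.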
By every technical measure, we're doing the right things.
The test: 0 out of 35
We ran our own AI Visibility Check — the same feature we sell to customers — against ourselves. We queried 5 AI engines (ChatGPT, Perplexity, Claude, Gemini, Google AI Overview) with 7 prompts:
- "best AI search optimization tool"
- "how to rank in ChatGPT"
- "GEO tools for SaaS"
- "track brand mentions in AI search"
- "tools to optimize for Perplexity"
- "best AEO tools 2026"
- "AI visibility monitoring platform"
Result: 0/7 prompts, 0/35 engine checks. Not one AI engine mentioned us for any query.
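As a minimal sketch (the data structure is illustrative, not our production pipeline), the tally behind that 0/35 looks like this:

```python
# Illustrative visibility tally: one boolean per (engine, prompt) check.
# All values are False here, matching our hypothetical-but-real result.
ENGINES = ["ChatGPT", "Perplexity", "Claude", "Gemini", "Google AI Overview"]
PROMPTS = [
    "best AI search optimization tool",
    "how to rank in ChatGPT",
    "GEO tools for SaaS",
    "track brand mentions in AI search",
    "tools to optimize for Perplexity",
    "best AEO tools 2026",
    "AI visibility monitoring platform",
]

# For each (engine, prompt) pair, record whether our brand appeared.
results = {(e, p): False for e in ENGINES for p in PROMPTS}

hits = sum(results.values())
total = len(results)
print(f"Visibility: {hits}/{total} engine checks")  # Visibility: 0/35 engine checks
```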
Who did show up?
This is the interesting part. The AI engines didn't return nothing — they returned other tools and companies:
- ChatGPT recommended agencies (Zupo, iPullRank, First Page Sage) and platforms (Yext)
- Perplexity cited monitoring tools (Goodie AI, Profound, Gauge, AthenaHQ)
- Claude referenced content sources (DirectAgents, BrightEdge)
- Google AI Overview and Gemini gave generic strategies without naming specific tools
The winners weren't necessarily better products. They were better-known products — with more third-party mentions, more backlinks, more citations in listicle articles, and stronger presence on community platforms.
The paradox, explained
Technical AI readiness and AI visibility are different things that use different signals.
Think of it like a job interview. AI readiness is your resume — well-formatted, clear structure, proper credentials. AI visibility is your reputation — who knows you, who's talked about you, whether the interviewer has already heard your name before you walked in.
The research backs this up:
Brand mentions account for ~35% of citation weight. An SE Ranking study of 129,000 domains found that brand web mentions are the single strongest predictor of whether an AI engine cites you. Not your schema. Not your heading structure. Whether other sites on the internet talk about you.
Referring domains are the strongest link-based predictor. The same study found that the number of unique websites linking to you is the strongest link-based signal. AI models learn from web data; links are how the web vouches for authority.
Reddit presence gives a 3.9x citation multiplier. Sites discussed on Reddit get cited by AI engines at nearly 4x the rate of sites that aren't. AI models heavily weight community discourse as a trust signal.
28% of the most-cited domains have zero Google visibility (Chatoptic, 2025). These aren't SEO winners; they're authority winners. AI citation and Google ranking are almost statistically independent (correlation: 0.034).
What technical readiness actually buys you
This doesn't mean technical optimization is worthless. It's necessary — just not sufficient.
Here's the mental model: AI readiness is table stakes. Authority signals are the game.
What the 88/100 score does buy:
- Extraction accuracy. A Nature Communications study (Feb 2024) found LLMs extract information more accurately from structured fields than from prose. When an AI engine does cite you, schema markup ensures it gets your brand name, pricing, and features right.
- Freshness signals. Content updated within 30 days gets 3.2x more AI citations (Digital Bloom, 2025). Our dateModified timestamps are current — when authority catches up, freshness won't be the bottleneck.
- Crawl access. If your robots.txt blocks GPTBot or ClaudeBot, nothing else matters. The door has to be open before anyone can walk through it.
Technical readiness is the floor, not the ceiling.
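That crawl-access check can be scripted with Python's standard-library robots.txt parser. A minimal sketch, assuming a hardcoded robots.txt and an example URL; in practice you would fetch your own site's `/robots.txt`:

```python
from urllib.robotparser import RobotFileParser

# Major AI crawlers to verify (list is illustrative, not exhaustive).
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

# A permissive robots.txt like ours, hardcoded for the example.
robots_txt = """\
User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

for bot in AI_BOTS:
    allowed = rp.can_fetch(bot, "https://example.com/blog/post")
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'}")
```

If any bot prints `BLOCKED`, fix that before worrying about anything else in the checklist.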
What we're doing about it
Knowing the gap is step one. Here's how we're closing it:
Earning authority through original research. We published a study analyzing 240 website scans with data no one else has. Primary research earns citations from journalists and bloggers — which builds the third-party mention graph that AI models trust.
Community presence. Genuine engagement on Dev.to, Indie Hackers, and (eventually) Reddit. Not "check out our tool" spam — substantive contributions that establish expertise. The 3.9x Reddit multiplier doesn't work if your Reddit history is all self-promotion.
Content that other sites want to link to. Our blog post calling out the unsourced "44% AI citation lift" stat that every SEO blog repeats has become one of the few honest treatments of that claim online. That kind of content earns backlinks organically because it fills a gap no one else is filling.
Getting into listicles. AI engines heavily weight "best X tools" articles when answering tool-recommendation queries. If you're not in those articles, you're invisible to those queries.
Patience. AI models retrain on web data periodically, not continuously. Even if you do everything right today, it may take weeks or months for the authority signals to propagate through the training pipeline. This isn't like SEO where you can see index updates within days.
The takeaway for your site
If you're optimizing for AI search visibility, here's the uncomfortable truth:
The technical checklist is the easy part. Schema markup, heading structure, FAQ content, fresh dates, citation formatting — these are all things you can control directly and implement in a weekend.
The hard part is earning authority. Third-party mentions, backlinks from reputable domains, community presence, being included in comparison articles — these take months of consistent work and can't be shortcut.
Don't skip the technical work. But don't mistake a high readiness score for actual visibility. We made that mistake, and 0/35 was the wake-up call.
We're building Foglift — a free website audit that measures both AI readiness and AI visibility. The readiness score is the floor. The visibility data is the ceiling. We use it on ourselves every week to track whether the gap is closing.
Sources:
- SE Ranking, "AI Search Ranking Factors," 129,000 domain study, 2025
- Chatoptic, "AI Citation vs. Google Rank Correlation," 2025
- Nature Communications, "LLM Information Extraction from Structured Data," Feb 2024
- Digital Bloom, "Content Freshness and AI Citations," 2025
- Seer Interactive, "AI Overview Citation Sources," 863K keyword analysis, Feb 2026
- Aggarwal et al., "Generative Engine Optimization," KDD 2024