Your site probably ranks fine on Google. But how does it look when ChatGPT or Perplexity reads it?
Different question. Different answer. Google ranks pages. LLMs extract and cite passages, and the signals they care about (schema richness, FAQ coverage, heading clarity, entity disambiguation) aren't on the average SEO checklist.
We built a CLI to audit that. This post is a 30-minute walkthrough: install it, scan your site, scan some competitors, and wire a quality gate into CI. All the data below is from scans I ran while writing this post.
What "AI search visibility" actually measures
When you audit a page for traditional SEO, you're optimizing for one thing: will Google rank this URL for a query. The inputs are Core Web Vitals, backlinks, keyword targeting, crawlability.
AI search is a different extraction problem. An LLM-powered answer engine (ChatGPT, Perplexity, Google AI Overviews, Claude, Gemini) wants to pull a specific passage out of your page and cite it, often alongside 3-5 other sources. The signals that matter there are different:
- Structured data richness (JSON-LD, not microdata)
- FAQ coverage with FAQPage schema
- Heading clarity — can the model segment your page into answerable chunks
- Entity identity — can it disambiguate your brand from noise
- Content depth and authority — citations, data, original research
- Citation formatting — do you make it easy to quote you
- Topical authority — is your site a source the model has seen cited elsewhere
- AI crawler access — is GPTBot, ClaudeBot, PerplexityBot actually allowed in your robots.txt
The usual name for optimizing these signals is AEO (Answer Engine Optimization) or GEO (Generative Engine Optimization). The CLI we'll use scores both, plus traditional SEO on the side.
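On that last signal, crawler access, the fix is a short robots.txt block. The user-agent tokens below are the ones these vendors publish for their crawlers; confirm against each vendor's current documentation before shipping:

```txt
# Explicitly allow the major AI / answer-engine crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /
```

An `Allow: /` under a named user-agent overrides any broader `Disallow` rules aimed at generic bots, which is how sites most often block these crawlers by accident.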
Install
npm install -g foglift-scan
No account needed for the basic scan command. Everything in the first half of this post runs without authentication.
foglift --version
Scan your own site
Point it at a URL:
foglift scan https://foglift.io
You get a color-graded scorecard in your terminal:
foglift scan results for foglift.io
Overall ██████████ 96/100 A
SEO ██████████ 100/100 A
GEO ██████████ 100/100 A
AEO █████████░ 88/100 B
Perf █████████░ 89/100 B
Security ██████████ 100/100 A
A11y ██████████ 100/100 A
Top Issues:
⚠ 14 external scripts (Performance)
⚠ 1 render-blocking script (Performance)
The seven rows are the axes. SEO is the classic bundle. GEO is "is this site structurally readable by a generative engine." AEO is "is this specific page extractable by an answer engine." Perf, Security, A11y are what they sound like.
The B on Perf here is honest: we still ship 14 external scripts on the homepage and one of them is render-blocking. That's an active todo, not a vanity number.
Add --json for anything scriptable
If you want to pipe results to jq, log them to a dashboard, or post them to a Slack channel, use --json:
foglift scan https://foglift.io --json | jq '.scores'
{
  "overall": 96,
  "seo": 100,
  "geo": 100,
  "aeo": 88,
  "performance": 89,
  "security": 100,
  "accessibility": 100
}
The full JSON payload includes every issue, its category, severity, and a one-line description. That's the entire audit as structured data, which is the point.
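If you'd rather post-process in a script than in jq, the payload is easy to consume from Python. A sketch, assuming the field names shown in this post's output (verify against your own `--json` dump):

```python
import json

# Sample payload in the shape shown above (field names assumed from
# the scan output in this post; check them against your own dump).
payload = json.loads("""
{
  "scores": {"overall": 96, "seo": 100, "geo": 100, "aeo": 88,
             "performance": 89, "security": 100, "accessibility": 100},
  "topIssues": [
    {"category": "Performance", "title": "14 external scripts",
     "severity": "warning"}
  ]
}
""")

# Flag every axis under a floor, not just the overall number.
FLOOR = 90
weak_axes = {axis: score for axis, score in payload["scores"].items()
             if score < FLOOR}
print(weak_axes)                            # prints {'aeo': 88, 'performance': 89}
print(len(payload["topIssues"]), "top issue(s)")
```

Checking per-axis floors instead of only `overall` catches the common failure mode where a strong SEO score masks a weak AEO one.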
The spicy part: scan the SEO giants
Here's where it gets interesting. I ran the same command against the three biggest names in the SEO tooling space:
foglift scan https://ahrefs.com --json | jq '.scores'
foglift scan https://moz.com --json | jq '.scores'
foglift scan https://semrush.com --json | jq '.scores'
The numbers:
| Site | Overall | SEO | GEO | AEO |
|---|---|---|---|---|
| ahrefs.com | 81 | 100 | 90 | 54 |
| moz.com | 80 | 100 | 90 | 65 |
| semrush.com | 78 | 100 | 90 | 52 |
These are the companies that literally sell the tools everyone uses to rank on Google. Their SEO scores are a flawless 100. Their AEO scores are 52 to 65.
Two things to pull out of that.
First, SEO 100 and AEO 54 on the same page is not a contradiction. It's the whole thesis: the signals that win Google are not the signals that get you extracted into a ChatGPT answer. A site can be a textbook SEO execution and still be opaque to an LLM that's trying to pull a citation.
Second, if Ahrefs and Semrush haven't retrofitted their marketing site for this yet, the gap is probably everywhere. In the 240-scan audit we ran earlier this year, the median AEO score was 46, and only 10% of sites scored above 80. The tooling giants aren't outliers on the low side. They're roughly average.
Interpreting your top issues
The topIssues block in the JSON is where your actual todo list lives. A typical output for a site that scores in the 50s on AEO looks like:
{
  "topIssues": [
    { "category": "GEO", "title": "No FAQ section", "description": "Add FAQPage schema for AI extraction.", "severity": "warning" },
    { "category": "AEO", "title": "Missing Article schema", "description": "LLMs cite more reliably when Article JSON-LD is present." },
    { "category": "AEO", "title": "Low heading density", "description": "Break long sections with H2/H3 to improve extractability." }
  ]
}
Each issue is a concrete edit. "No FAQ section" means add FAQPage JSON-LD on a page where users actually ask questions. "Missing Article schema" means wrap blog posts in Article JSON-LD with author, datePublished, dateModified. "Low heading density" means the model can't segment your page into citable answers, so break it into named sections.
The issues are deliberately concrete because the fixes are concrete. There's no "improve your E-E-A-T" mush in there.
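To make the first fix concrete: a minimal FAQPage block is one script tag of JSON-LD, following the schema.org FAQPage shape. The question and answer below are placeholders; use real ones from your page:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What does an AEO score measure?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "How easily an answer engine can extract and cite a passage from the page: schema coverage, heading structure, and question coverage."
      }
    }
  ]
}
</script>
```

Each extra question is another entry in the `mainEntity` array. Keep answers in the 40-80 word range so they're quotable as-is.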
Scan competitors as a batch
If you're auditing a landscape and not just one URL, batch mode runs up to 10 scans in one call:
foglift scan batch \
https://foglift.io \
https://ahrefs.com \
https://moz.com \
https://semrush.com \
--json > competitors.json
(Batch mode is the one thing in this post that requires an API key. It's free to generate one.)
Pipe it through jq to get a sortable table:
jq -r '.[] | [.url, .scores.aeo, .scores.overall] | @tsv' competitors.json
Now you have a leaderboard. If you're on the wrong end of the leaderboard, you have a reason to care about the top issues.
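If the jq one-liner gets unwieldy, sorting the batch results in Python is a few lines. Again, the array-of-objects shape and field names are assumed from the output in this post:

```python
import json

# Batch results in the shape this post's scans produce: an array of
# per-URL objects (field names assumed; check your own competitors.json).
competitors = json.loads("""
[
  {"url": "https://foglift.io",  "scores": {"overall": 96, "aeo": 88}},
  {"url": "https://ahrefs.com",  "scores": {"overall": 81, "aeo": 54}},
  {"url": "https://moz.com",     "scores": {"overall": 80, "aeo": 65}},
  {"url": "https://semrush.com", "scores": {"overall": 78, "aeo": 52}}
]
""")

# Leaderboard: best AEO first, since that's the gap we're measuring.
board = sorted(competitors, key=lambda c: c["scores"]["aeo"], reverse=True)
for c in board:
    print(f'{c["url"]:<24} aeo={c["scores"]["aeo"]:>3} overall={c["scores"]["overall"]:>3}')
```

Swap the sort key to `c["scores"]["overall"]` if you care about the composite ranking instead.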
Wire it into CI (the real payoff)
The scorecard is interesting once. What makes it useful is making regressions visible.
foglift scan has a --threshold=N flag that exits 1 if the overall score drops below N. That's all you need for a CI gate:
# .github/workflows/ai-audit.yml
name: AI Search Audit
on:
  pull_request:
    branches: [main]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Foglift CLI
        run: npm install -g foglift-scan
      - name: Audit production URL
        run: foglift scan https://example.com --threshold=85
If a PR ships changes that drop the production score below 85, the job fails. Same pattern as Lighthouse CI, same pattern as eslint --max-warnings 0, same pattern as any other quality gate you're already running.
For local dev, I run this as part of a pre-release check:
foglift scan https://staging.example.com --threshold=80 \
&& echo "AI audit passed" \
|| { echo "AI audit failed — check issues before shipping"; exit 1; }
What to do with the results
Three realistic next steps, in order of leverage:
- Add FAQPage schema to your 10 most-trafficked content pages. This usually moves AEO the most for the least effort. Write actual questions, answer them in 40-80 words each, wrap in JSON-LD.
- Make sure GPTBot, ClaudeBot, PerplexityBot, and Google-Extended are allowed in robots.txt. A surprising number of sites accidentally block them, then wonder why they're not cited.
- Add Article schema to every blog post with author, datePublished, and dateModified. Freshness signals matter more for answer engines than they do for Google, because LLMs are trying to avoid citing stale answers.
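For the Article fix, one JSON-LD block per post covers it. This follows the schema.org Article shape; the headline, name, and dates below are placeholders to replace with your post's real values:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Auditing your site for AI search",
  "author": { "@type": "Person", "name": "Jane Author" },
  "datePublished": "2026-04-18",
  "dateModified": "2026-04-18"
}
</script>
```

The important discipline is keeping `dateModified` truthful and current; that's the freshness signal answer engines read.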
After you ship those, rerun the scan. The delta is the point: you want a line on a graph that trends up.
Wrap
The CLI is open, the scans are free, and the gate is a single flag. If you work on a site and the AEO number comes back under 70, you now have a ranked list of things to fix and a way to stop it from regressing.
If you want to go deeper, foglift scan ai-check runs your URL against a set of target prompts across ChatGPT, Perplexity, Claude, and Gemini and shows you which ones actually cite you today. That's the ground-truth measurement — the scorecard above is the leading indicator, the ai-check is the lagging one. Both useful, neither redundant.
The 30 minutes breaks down like this: install, scan your site, scan two competitors, read the top issues, wire the threshold into CI. That's a full first loop. The second loop is shipping one of the fixes and watching the number move.
CLI source and docs at foglift.io/developers. All scores in this post were captured on 2026-04-18 and will drift.