67% of users now get their first answer from an AI assistant rather than clicking a search result. Your site might rank #1 on Google and still be invisible to ChatGPT, Perplexity, and Google AI Overview.
The problem: SEO optimizes for ranking algorithms. AI engines don't rank -- they cite. If your content can't be extracted and quoted, it doesn't exist.
This is AEO (Answer Engine Optimization), and we built an open-source tool for it.
## One Command

```bash
npx aeoptimize scan your-site.com
```

```
AEO Readability Report

Score: 66/100   AI Readability: Good

Structure        ██████████████░░░░░░ 18/25
Citability       █████████████░░░░░░░ 16/25
Schema           ███████░░░░░░░░░░░░░  7/20
AI Metadata      █████████████████░░░ 13/15
Content Density  ████████████████░░░░ 12/15

Warnings:
  ! Found 2 H1 headings. Use exactly one H1 per page.
  ! JSON-LD missing fields: @type, name, description

Top Suggestions:
  -> Add FAQ section with question-format headings
  -> Add AI-relevant schema types (FAQPage, Article, HowTo)
```
No API keys. No signup. Runs offline.
## What aeoptimize Measures
17 rules across 5 dimensions, scored 0-100:
| Dimension | Max | What AI engines care about |
|---|---|---|
| Structure | 25 | Clear headings, short paragraphs (<150 words), FAQ sections |
| Citability | 25 | Self-contained statements an AI can quote without context |
| Schema | 20 | JSON-LD that LLMs use as source of truth (FAQPage, Article) |
| AI Metadata | 15 | llms.txt file, robots.txt AI crawler rules |
| Content Density | 15 | Signal-to-noise ratio, vocabulary diversity |
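For context on the AI Metadata checks: `llms.txt` is a markdown file served from the site root that gives LLMs a curated map of your content. A minimal example, following the llms.txt proposal (paths and titles here are illustrative, not tied to any real site):

```
# Example Site

> One-sentence summary of what this site covers.

## Docs

- [Getting started](https://example.com/docs/start): installation and first steps
- [API reference](https://example.com/docs/api): endpoints and parameters
```

The robots.txt check looks for explicit AI-crawler rules, for example:

```
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /
```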
The scoring is deterministic -- same input, same score. No LLM involved in the base scan, so you can run it in CI/CD without worrying about cost or rate limits.
## Benchmarking Real Websites
| Website | Score | Why |
|---|---|---|
| nextjs.org/docs | 75 | Good structure, has llms.txt, missing some schema |
| shiheintelligent.com | 66 | Has schema but incomplete, 2 H1s, no FAQ |
| stripe.com/docs | 59 | Great content but zero JSON-LD (-20 points) |
| anthropic.com | 52 | Landing page with little extractable content |
Even well-built sites lose 20+ points from missing JSON-LD alone. That's the difference between being cited and being ignored.
## Case Study: Fixing a 66-Score Site
shiheintelligent.com scored 66/100. The three biggest issues:
- **Two H1 headings** -- AI engines treat the H1 as the page topic; two H1s create ambiguity.
- **Incomplete JSON-LD** -- schema existed but was missing `@type`, `name`, and `description`, so AI crawlers can't categorize the content.
- **No FAQ section** -- the site answers common questions in paragraph form, but without FAQ schema, AI engines can't extract them as Q&A pairs.
Fixing these three issues alone could push the score above 80.
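For reference, a FAQPage JSON-LD block that supplies the missing `@type`, `name`, and `description` fields could look like this (embedded in a `<script type="application/ld+json">` tag; the values are illustrative, not the site's actual content):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "name": "Example Product -- FAQ",
  "description": "Answers to common questions about the product.",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What does the product do?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A self-contained answer an AI engine can quote directly."
      }
    }
  ]
}
```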
## More Than a CLI
aeoptimize works in multiple modes:
**Framework plugins** -- auto-generate llms.txt and JSON-LD on build:

```ts
// vite.config.ts
import { defineConfig } from 'vite';
import { aeoPlugin } from 'aeoptimize/vite';

export default defineConfig({ plugins: [aeoPlugin()] });
```

```js
// next.config.mjs
import { withAeo } from 'aeoptimize/next';

export default withAeo({});
```
**Pre-commit hook** -- block commits that drop the AEO score below a threshold:

```bash
npx aeoptimize hook install                 # min score: 60
npx aeoptimize hook install --min-score 80
```
**GitHub Action** -- check every PR:

```yaml
- uses: dexuwang627-cloud/aeoptimize/action@main
  with:
    path: dist
    min-score: 60
```
**Claude Code skills** -- interactive, AI-powered optimization:

```bash
claude plugin marketplace add dexuwang627-cloud/aeoptimize
# Then: /aeo-scan, /aeo-generate, /aeo-transform
```
The /aeo-transform skill uses your Claude subscription to restructure content: split paragraphs, extract FAQ schemas, remove keyword stuffing, fix dangling references. Zero extra cost.
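As an illustration of that kind of restructuring (a hypothetical before/after, not the skill's actual output), an answer buried in a paragraph becomes an extractable Q&A unit:

```markdown
<!-- Before: answer buried in prose -->
Our platform supports single sign-on through SAML and OIDC, which many
enterprise customers ask about when evaluating us.

<!-- After: question-format heading + self-contained answer -->
## Does the platform support single sign-on?

Yes. The platform supports SSO via both SAML 2.0 and OIDC.
```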
## Multi-AI Scoring
For higher confidence, aeoptimize can dispatch scoring to multiple AI engines simultaneously:
```bash
npx aeoptimize scan your-site.com --multi-ai
```
It detects the `gemini` and `copilot` CLIs on your system, sends each the page content for an independent evaluation, then merges the results with the rule engine using weighted consensus. The report shows per-AI insights:
```
Score: 72/100 (Rule Engine: 61 | AI Consensus: 83)

AI Insights:
  Claude: "FAQ section lacks schema markup"
  Gemini: "Missing llms.txt reduces discoverability"
```
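The merge step can be sketched as a weighted average. This is a hypothetical illustration: `mergeScores` and the 50/50 weighting are assumptions, not the tool's actual implementation.

```typescript
// Sketch of weighted-consensus merging (assumed, not aeoptimize's real code):
// blend the deterministic rule-engine score with the mean of the AI scores.
function mergeScores(
  ruleScore: number,
  aiScores: number[],
  ruleWeight = 0.5, // assumed weighting; the real tool may differ
): number {
  if (aiScores.length === 0) return ruleScore; // rules-only fallback
  const aiConsensus =
    aiScores.reduce((sum, s) => sum + s, 0) / aiScores.length;
  return Math.round(ruleWeight * ruleScore + (1 - ruleWeight) * aiConsensus);
}

// Matches the sample report: rule engine 61, AI consensus 83 -> 72.
console.log(mergeScores(61, [84, 82])); // 72
```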
## Built by Three AIs
The project itself was built as a multi-AI collaboration:
- Claude -- architecture, core engine (17 rules, scanner, generator), security audit
- Gemini -- Vite plugin
- Copilot -- Next.js plugin
The project ships with 50 tests and went through 4 rounds of automated code review (code quality, security, architecture, and a final pass). The biggest competitor in this space has 4,800+ stars but zero tests.
## The SEO vs AEO Shift
| | SEO | AEO |
|---|---|---|
| Goal | Rank higher | Get cited |
| Audience | Search crawler | Language model |
| Key metric | Click position | Citation accuracy |
| Content style | Keyword-rich | Self-contained, structured |
| Structured data | Nice to have | Essential |
This shift is already happening. Google AI Overview appears in 43% of queries. Perplexity processes 200M+ queries/month. Content that AI can't parse is content that doesn't exist.
## Try It
```bash
npx aeoptimize scan your-site.com
```
GitHub: https://github.com/dexuwang627-cloud/aeoptimize
npm: https://www.npmjs.com/package/aeoptimize
Star the repo if it helped. Open issues if it didn't.