Last updated: March 24, 2026 · 8 min read
You spent hours writing a post. It ranks on Google. People read it. But ask ChatGPT or Perplexity about your topic — and it cites someone else.
This isn't a content quality problem. It's a structure problem. And you can diagnose and fix it in 10 minutes.
Here's the exact audit process I run on every post before publishing — and on any old post I want to start getting cited by AI search engines.
## Why your Google ranking doesn't protect you in AI search
Google's ranking algorithm rewards backlinks, keyword placement, and domain authority.
AI answer engines — ChatGPT (GPT-4o), Perplexity, Google Gemini, and Claude 3.5 Sonnet — reward something completely different: extractable answers. They scan your content for the clearest, most direct response to a query. If they can't find it fast, they skip your page entirely, regardless of your PageRank.
This creates a gap. High-ranking pages with poor GEO structure get bypassed. Newer, shorter pages with clean answer-first formatting get cited instead.
The good news: the fix is structural, not creative. You don't need to rewrite your content. You need to restructure it.
## The 10-minute AI visibility audit
### Step 1 — Run the automated scan (2 minutes)
The fastest starting point is GoForTool's AI SEO Analyzer. Paste your URL, run the scan.
You get back:
- A GEO score from 0–100
- Answer position flag (where your first direct answer appears)
- Entity density rating
- Schema markup gaps
- AI crawler access check (is GPTBot blocked?)
- A prioritised fix list
What the scores mean in practice:
| GEO Score | AI Search Status |
|---|---|
| 80–100 | Actively cited by Perplexity and ChatGPT Browse |
| 60–79 | Occasionally cited, inconsistent |
| 40–59 | Rarely cited, significant gaps exist |
| 0–39 | Effectively invisible to AI search |
Run this first. It tells you which of the following steps matter most for your specific page.
### Step 2 — Fix your answer position (3 minutes)
Open your post. Read your first 150 words. Ask: does this directly answer the core question implied by my title?
If your post is titled "How to optimise images for web performance" — do the first 150 words tell someone exactly how to do that? Or do they set context, tell a story, and build up to the answer?
LLMs use position-weighted extraction. The answer in your first 500 tokens scores significantly higher than the same answer buried in paragraph six.
The fix:
```markdown
❌ Before (context-first):
"Image optimisation is one of those topics that keeps coming up in
performance audits. As websites have grown more visual, the need to
balance quality and file size has become increasingly important..."

✅ After (answer-first):
"Image optimisation for web performance requires three steps: compress
images to WebP format, set explicit width/height attributes to prevent
layout shift, and use lazy loading for below-the-fold images. Together
these reduce page load time by 40–60% on image-heavy pages."
```
The second version is a citation. The first is a preamble.
### Step 3 — Add named entities (2 minutes)
LLMs understand the world through named entities. Vague language is noise. Specific names are signal.
Do a quick find-and-replace scan for these vague phrases and substitute real names:
| Vague (low GEO) | Specific (high GEO) |
|---|---|
| "popular image tool" | "Squoosh, ImageOptim, or Sharp" |
| "a leading browser" | "Chrome 120, Firefox 121, Safari 17" |
| "modern AI assistants" | "ChatGPT-4o, Perplexity AI, Claude 3.5 Sonnet" |
| "performance metrics" | "Core Web Vitals: LCP, INP, CLS" |
| "a recent study" | "Google's 2024 Web Almanac" |
You don't need to change your argument. Just name things specifically.
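The find-and-replace scan is easy to automate. A small sketch, assuming the vague phrases from the table above as the watch list (extend it with your own recurring offenders):

```python
# Watch list of vague, low-GEO phrases to flag for manual replacement.
VAGUE_PHRASES = [
    "popular image tool",
    "a leading browser",
    "modern AI assistants",
    "performance metrics",
    "a recent study",
]

def find_vague_phrases(text: str, phrases=VAGUE_PHRASES) -> list[str]:
    """Return every watch-list phrase found in the draft."""
    lowered = text.lower()
    return [p for p in phrases if p.lower() in lowered]

draft = "A recent study shows modern AI assistants prefer specific names."
print(find_vague_phrases(draft))  # ['modern AI assistants', 'a recent study']
```

Anything the scan flags gets swapped for a real name before you publish.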
### Step 4 — Add FAQPage schema (2 minutes)
This is the single highest-ROI change you can make for GEO. FAQPage schema gives Google's Gemini pre-packaged citation units — ready-made Q&A pairs it can pull directly into an AI Overview.
A post with 3 FAQ schema pairs has 3 independent chances to appear in AI-generated answers.
Create a `faq-schema.json` file or embed the markup directly in your page's `<head>`:
```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is the best image format for web performance?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "WebP is the best image format for web performance in 2025. It delivers 25–35% smaller file sizes than JPEG at equivalent quality, with broad support across Chrome, Firefox, Safari 14+, and Edge. Use AVIF for even better compression where browser support allows."
      }
    },
    {
      "@type": "Question",
      "name": "How do I check if my images are hurting my Core Web Vitals?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Run your URL through Google PageSpeed Insights or Chrome DevTools Lighthouse audit. Look for the 'Properly size images' and 'Serve images in next-gen formats' opportunities. Any image flagged there is directly impacting your LCP (Largest Contentful Paint) score."
      }
    },
    {
      "@type": "Question",
      "name": "Does image optimisation affect SEO?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. Image optimisation affects SEO through two direct mechanisms: faster page load speeds improve Core Web Vitals scores (a confirmed Google ranking factor since 2021), and descriptive alt text provides keyword context for Google Image Search and accessibility crawlers."
      }
    }
  ]
}
```
Note: The Q&A content above is an example using image optimisation as a demo topic. Replace with Q&As relevant to your actual post topic.
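If you maintain many posts, you can generate the script tag instead of hand-writing it. A minimal Python sketch (the `faq_schema` helper is my own illustration, not a GoForTool or schema.org utility):

```python
import json

def faq_schema(pairs: list[tuple[str, str]]) -> str:
    """Render (question, answer) pairs as a FAQPage JSON-LD <script> tag."""
    schema = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return ('<script type="application/ld+json">\n'
            + json.dumps(schema, indent=2)
            + "\n</script>")

tag = faq_schema([
    ("What is the best image format for web performance?",
     "WebP: it delivers 25-35% smaller files than JPEG at equivalent quality."),
])
print(tag)
```

Paste the printed tag into your page's `<head>`, then validate it with Google's Rich Results Test before relying on it.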
### Step 5 — Check your robots.txt (1 minute)
This one surprises people. A lot of sites accidentally block AI crawlers, either through legacy rules or blanket `Disallow: /` entries for unknown bots.
Check your robots.txt at `yourdomain.com/robots.txt`. Make sure none of these crawlers hits a `Disallow` rule; the safest approach is explicit `Allow` entries:
```
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: Applebot-Extended
Allow: /
```
If any of these bots hits a `Disallow` rule, you are completely invisible to that AI platform — no matter how well-optimised your content is.
GoForTool's AI SEO Analyzer checks this automatically and flags any blocked bots as a critical issue.
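You can also verify this yourself with Python's standard-library robots.txt parser. A sketch, assuming the five crawler names listed above (fetch your live file and pass its text in):

```python
from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "PerplexityBot", "ClaudeBot",
           "Google-Extended", "Applebot-Extended"]

def blocked_bots(robots_txt: str, url: str = "https://example.com/") -> list[str]:
    """Return the AI crawlers that this robots.txt blocks from fetching url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    parser.modified()  # mark the file as read so can_fetch() gives real answers
    return [bot for bot in AI_BOTS if not parser.can_fetch(bot, url)]

sample = """User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(blocked_bots(sample))  # ['GPTBot']
```

An empty list means all five AI crawlers can reach the page; any name in the output is a platform you are invisible to.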
## Before/after: real audit numbers
Here's what this audit process looks like on a real post. I ran a 1,400-word tutorial on CSS Grid through the GoForTool AI SEO Analyzer before and after applying the steps above:
| Metric | Before | After |
|---|---|---|
| GEO Score | 31/100 | 78/100 |
| Answer position | Word 340 | Word 45 |
| Entity count | 4 | 19 |
| Schema types | 0 | 3 (Article, FAQPage, HowTo) |
| AI bots blocked | 2 (GPTBot, ClaudeBot) | 0 |
| Perplexity citations | 0 | 4 (within 9 days) |
The content didn't change. The structure did.
## The pre-publish checklist
Add this to your publishing workflow. Run it on every new post before you hit publish:
- [ ] First 150 words contain a direct answer to the title's implied question
- [ ] All tools, platforms, and products referenced by their specific name (no "popular tools")
- [ ] FAQPage schema with minimum 3 Q&A pairs
- [ ] `datePublished` and `dateModified` set in Article schema
- [ ] `robots.txt` allows GPTBot, PerplexityBot, ClaudeBot, Google-Extended
- [ ] Author bio links to at least one external profile (GitHub, LinkedIn, Twitter)
- [ ] At least one comparison table or numbered list in the body
## Run your audit now
The fastest way to apply everything in this post is to let the tool do the diagnosis for you.
👉 GoForTool AI SEO Analyzer — run your free audit
Paste your URL. You get a full GEO score, every gap flagged, and a prioritised fix list — in 90 seconds. No signup required to run the scan.
If your score comes back under 50, start with Step 2 (answer position). That single change moves the needle more than anything else.
Got your score? Drop it in the comments with your post topic — I'll tell you the single highest-impact fix for your specific number.