Three thousand visitors a month. A handful of newsletter signups from it. Decent comments. By most measures, it's performing well.
Then a few weeks ago I was testing something. I opened Perplexity and typed in the exact phrase my article targets. Five sources came back. Mine was not one of them. I tried ChatGPT with Browse enabled. Same thing. Gemini. Same.
I sat with that for a minute. A post I've spent more time on than anything else I've written is essentially invisible to the tools that a growing chunk of developers now use as their first stop for answers.
So I started digging.
What I assumed the problem was (I was wrong)
My first instinct was that the content was too long. AI tools prefer short, dense answers, right? So maybe 2,800 words was working against me.
Wrong. Some of the most-cited technical content I found when researching this runs 3,000+ words. Length is not the variable.
Then I thought maybe it was the topic. Too niche. Not enough people asking about it.
Also wrong. The keyword gets decent search volume, and I confirmed that other pages on the same topic were being cited regularly.
The actual problem was something I had not even thought to look at.
The thing I had never checked
I ran my article URL through GoForTool's AI SEO Analyzer mostly out of curiosity. It does a full on-page audit specifically for AI search visibility, not traditional SEO.
The score came back: 27 out of 100.
I expected maybe 50, 60 tops given how much I'd thought about the writing. 27 was a genuine surprise. The fix list had five items and the top three explained everything.
Problem one: My answer appeared at word 380.
The tool flags something called answer position -- how many words appear before your first direct response to the implied question in your title. Mine was at 380. The benchmark for regularly cited content is under 120.
When I read back through my opening, I could see it immediately. I open with a story about when I first ran into the problem the article addresses. Good for human readers building context. Terrible for an AI trying to extract a clean answer quickly. LLMs weight earlier content more heavily. If your answer is buried, they will find someone else's answer that isn't.
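You can sanity-check this on your own drafts with a few lines of code. A minimal sketch -- the phrase matching is naive (exact substring only), and the `article_text` and answer phrase here are made-up placeholders, not output from any tool:

```python
def words_before_answer(article_text: str, answer_phrase: str) -> int:
    """Return how many words appear before the first occurrence of
    answer_phrase, or -1 if the phrase is not found at all."""
    idx = article_text.find(answer_phrase)
    if idx == -1:
        return -1
    return len(article_text[:idx].split())

# Example: a direct answer buried behind a long opening anecdote
intro = "Back in 2022 I hit this exact problem on a client project. " * 12
body = intro + "Optimising Vite build performance requires four changes."
print(words_before_answer(body, "Optimising Vite build performance"))  # 144
```

Anything well above 120 is a signal to move the answer up, per the benchmark the tool uses.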
Problem two: I was using vague language throughout.
I had phrases like "modern JavaScript frameworks," "popular bundlers," "leading cloud providers." None of those are entities an AI can pin to anything specific. GPT-4o, Perplexity, and Gemini all understand named things. They understand "Vite 5.0," "esbuild," "Cloudflare Workers." They do not understand "popular bundlers." That phrase could mean anything.
Problem three: PerplexityBot was blocked on my site.
This one stung. A robots.txt rule I had added over a year ago was blocking PerplexityBot from crawling my content. I had not thought about it since. Every article I have written in the last 14 months has been invisible to Perplexity not because of anything wrong with the writing but because of a single line in a config file.
The fixes and how long they took
Fix one: Rewrote the opening paragraph. I moved the story to a later section and put a direct, specific answer up front. Something like: "Optimising Vite build performance for large React apps requires four configuration changes: chunk splitting with manualChunks, enabling build.minify with esbuild, configuring rollupOptions for tree shaking, and using vite-plugin-inspect to identify bottlenecks." That is now the first thing anyone reads. 20 minutes.
Fix two: Named everything specifically. Went through the article and replaced every vague reference. "Popular bundlers" became "Vite 5, Rollup 4, and esbuild." "Modern browsers" became "Chrome 120, Firefox 121, and Safari 17." This took about 15 minutes and made the article substantially more useful for human readers too, not just AI.
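This kind of sweep is easy to automate as a first pass. A rough sketch -- the wordlist below is my own illustration, not an official list from any tool, so extend it with whatever vague phrases you tend to reach for:

```python
import re

# Illustrative wordlist of vagueness signals -- extend to taste
VAGUE = ["modern", "popular", "leading", "various"]

def find_vague_phrases(text: str) -> list[str]:
    """Return each vague phrase found, with surrounding context
    so you can judge whether a specific name should replace it."""
    hits = []
    for phrase in VAGUE:
        for m in re.finditer(re.escape(phrase), text, re.IGNORECASE):
            start = max(0, m.start() - 20)
            hits.append(text[start:m.end() + 25].strip())
    return hits

draft = "We tested popular bundlers in modern browsers."
for hit in find_vague_phrases(draft):
    print(hit)
```

It will flag false positives, but scanning a flagged list is faster than rereading 2,800 words looking for them by eye.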
Fix three: Fixed robots.txt. Added explicit Allow rules for GPTBot, PerplexityBot, ClaudeBot, and Google-Extended. Five minutes.
Fix four: Added FAQPage schema. Three question-answer pairs embedded in a JSON-LD script tag in the page head. This one I did through GoForTool because it generates the schema for you based on your content -- I did not have to write JSON by hand. 10 minutes.
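If you would rather generate the JSON-LD yourself, it is only a few lines with the standard library. A sketch -- the question and answer below are made-up placeholders:

```python
import json

def faq_schema(pairs: list[tuple[str, str]]) -> str:
    """Build FAQPage JSON-LD from (question, answer) pairs, ready to
    embed in a <script type="application/ld+json"> tag in the head."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }, indent=2)

print(faq_schema([
    ("How do I speed up Vite builds?",
     "Split chunks with manualChunks and minify with esbuild."),
]))
```

Whichever way you generate it, validate the output before shipping -- schema.org is strict about the `Question`/`acceptedAnswer` nesting.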
Total time: about 50 minutes across an evening.
What happened after
I gave it two weeks before checking anything.
The article now shows up in Perplexity responses for two related queries. Not my exact target phrase yet, but two adjacent ones. ChatGPT Browse cited it once in a test I ran last week -- showed up as a source in an answer about Vite configuration.
My GEO score went from 27 to 71 after the changes. GoForTool lets you re-scan after edits so you can confirm things landed before moving on.
The Google traffic has not changed. Same 3,000 visitors, same position. But now the content is doing a second job it wasn't doing before.
The robots.txt thing is probably affecting you too
I want to highlight this specifically because I think it is the most commonly missed issue and the fastest to fix.
A lot of developers added security-focused robots.txt rules over the past few years in response to scrapers and bots. Totally reasonable. But many of those rules accidentally catch the user agents AI crawlers use.
Check yours right now. Go to yourdomain.com/robots.txt. Look for any Disallow rules that apply to wildcards or to specific bots you may not have evaluated recently. Then add explicit Allow entries for the crawlers you actually want:
```txt
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /
```
Seriously, do this before anything else. If you are blocked, nothing else you do matters.
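You can also check programmatically whether a given crawler is allowed, using Python's standard `urllib.robotparser`. A sketch -- the rules and URLs below are placeholders for your own site:

```python
import urllib.robotparser

def crawler_allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Parse a robots.txt body and report whether user_agent may fetch url."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)

rules = """\
User-agent: GPTBot
Allow: /

User-agent: *
Disallow: /private/
"""
print(crawler_allowed(rules, "GPTBot", "https://example.com/post"))        # True
print(crawler_allowed(rules, "SomeBot", "https://example.com/private/x"))  # False
```

Handy to drop into a CI check so a future robots.txt edit can't silently block the crawlers you care about again.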
Running this audit on your own content
The manual version of what I described takes 15 to 20 minutes per URL: read through your opening for answer position, count your named entities, inspect your schema, check robots.txt. Doable, but slow if you have a lot of content.
The automated version through GoForTool's AI SEO Analyzer takes about 90 seconds per URL and catches things I was missing manually. It also generates the schema for you rather than making you write it from scratch.
I'm now running it on every article before publishing. The pre-publish check has become part of my workflow the same way running a linter before committing has become automatic -- you just do it.
What does your GEO score come back as? Drop it in the comments with a rough description of your content type. Happy to suggest what to fix first based on the number.