I have been writing technical content for about three years. In that time I developed a fairly confident set of beliefs about what makes content rank well. Keyword research, internal linking, page speed, backlink acquisition, content length. I followed the playbook. My traffic grew. I assumed I understood what I was doing.
Then, about six weeks ago, I started seriously looking at whether my content was being cited by ChatGPT, Perplexity AI, and Google Gemini. Not just vaguely appearing somewhere, but actually being used as a source when someone asked a question my posts directly answer.
The results were uncomfortable. And almost everything I had been confident about turned out to be either wrong or only partially true when applied to how AI search actually works.
Here are the five beliefs that took the biggest hits.
Belief 1: "If I rank on Google, AI tools will find my content"
This was my first and most fundamental assumption. It made intuitive sense: Google crawls your content, AI tools train on or retrieve from web data, and if your content is good enough to rank, it must be getting picked up.
The reality is messier. Perplexity AI runs its own crawler, called PerplexityBot, independently of Google's index. ChatGPT Browse uses Bing's index, not Google's. Google's own Gemini model pulls from Google's AI Overviews system, which uses a separate retrieval layer on top of the regular index. These are distinct systems with distinct access requirements.
More importantly, even when AI tools can reach your content, whether they cite it depends on how the content is structured, not on whether it already ranks. I have posts sitting at position 2 on Google that get zero AI citations. I found competitor posts with weaker Google authority getting cited constantly because their content is structured for fast answer extraction.
Google ranking and AI search visibility are related but not the same thing. I had been treating them as identical.
Belief 2: "Longer, more comprehensive content always wins"
The long-form comprehensive guide has been the backbone of content strategy for years. Cover everything, go deeper than anyone else, build the definitive resource. That approach has served me reasonably well on Google.
AI search does not reward comprehensiveness in the same way. What it rewards is answer clarity and answer position. A 3,000-word guide that spends the first 400 words on background and context before getting to the actual answer will consistently lose AI citations to a 900-word post that answers directly in the first paragraph.
I had a post last year that I was particularly proud of. Exhaustive research, multiple expert perspectives, thorough examples. It ran nearly 4,500 words. My GEO score on it, when I eventually checked with the GoForTool AI SEO Analyzer, was 22 out of 100. The main issue was answer position - my first real answer to the implied question appeared at word 490. The post was comprehensive and effectively invisible to AI search.
Length is not the problem. Delay is the problem.
Belief 3: "Writing naturally and avoiding keyword stuffing is enough"
I was actually proud of this one. I had moved away from aggressive keyword optimization toward what I thought of as "writing for humans first." Readable sentences, natural language, no awkward repetition of target phrases.
It turns out there is a middle layer between keyword stuffing and natural writing that I was completely missing: entity specificity.
AI systems build their understanding of content through named entities - verifiable, specific things that appear in training data. When I write "a popular JavaScript bundler," that phrase means nothing to an LLM. When I write "Vite 5.2" or "esbuild 0.19," those are entities the AI can map to its knowledge graph, confirm as real, and use to understand what my content is about.
My natural writing was full of vague references that felt fine to me as a human reader but scored near zero for entity density. I went through my top ten posts after learning this and found phrases like "major cloud providers," "modern browsers," "leading frameworks" throughout. Every one of those is a missed citation opportunity.
The fix is not keyword stuffing. It is just being specific about things you were already referencing vaguely.
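One low-effort way to catch this before publishing is a plain phrase check. This is only a sketch: the phrase list below is cobbled together from the examples in this post, and a real list would grow out of auditing your own drafts.

```python
# A minimal pre-publish check for vague, entity-free phrases.
# VAGUE_PHRASES is illustrative, not exhaustive - extend it with
# whatever filler wording shows up in your own writing.
VAGUE_PHRASES = [
    "a popular",
    "major cloud providers",
    "modern browsers",
    "leading frameworks",
]

def flag_vague_phrases(text: str) -> list[str]:
    """Return every listed vague phrase that appears in the draft."""
    lowered = text.lower()
    return [phrase for phrase in VAGUE_PHRASES if phrase in lowered]

draft = "We bundled the app with a popular JavaScript bundler."
print(flag_vague_phrases(draft))  # -> ['a popular']
```

Each flagged phrase is a prompt to name the actual thing: "a popular JavaScript bundler" becomes "Vite 5.2," and the sentence suddenly carries an entity an LLM can verify.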
Belief 4: "Schema markup is for e-commerce and news sites"
I knew schema markup existed. I had implemented basic Article schema on my posts at some point because a tutorial said to. I thought of it as a nice-to-have for rich snippet eligibility and mostly irrelevant for the kind of technical writing I do.
This was probably my most expensive misconception.
FAQPage schema turns out to be the highest single-impact technical change you can make for AI search visibility. Here is why: each question-answer pair you encode in FAQPage schema becomes a pre-packaged extraction unit that Google's Gemini model can pull directly into an AI Overview response. It does not have to infer your answer from prose. It has a structured, verified answer sitting in machine-readable JSON.
A post with three FAQPage pairs gets three independent citation opportunities, each potentially triggering for a different related query. A post with no FAQPage schema requires the AI to do substantially more interpretive work to extract a citable answer, and it will often choose a different source instead.
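For the avoidance of doubt, here is what a minimal FAQPage block looks like, typically embedded in the page inside a `<script type="application/ld+json">` tag. The question and answer are illustrative placeholders, not taken from any real post.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Does ranking on Google guarantee AI search citations?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "No. ChatGPT Browse uses Bing's index and Perplexity runs its own crawler, so a page can rank well on Google and still never be retrieved or cited by those systems."
      }
    }
  ]
}
```

Each additional entry in the `mainEntity` array is another self-contained question-answer pair, which is exactly the independent citation opportunity described above.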
I now add FAQPage schema to every post I publish. GoForTool generates it for me based on the page content so I am not writing JSON from scratch each time. The time cost is about ten minutes. The citation impact has been significant enough that I consider it mandatory.
Belief 5: "My site's domain authority protects me from newer competitors"
Three years of consistent publishing, some decent backlinks, a growing audience. I had assumed this created a kind of moat. Newer sites and shorter posts would struggle to outrank me because they lacked the authority signals I had built up.
AI search does not care about domain authority in the traditional sense. It cares about how well a specific piece of content is structured for extraction. A two-month-old blog with a well-structured post that leads with a direct answer, names specific entities, and has proper FAQPage schema will get cited over a three-year-old site whose well-ranking post opens with four paragraphs of context.
I found this out the hard way when auditing which pages were getting cited for queries in my topic area. Several were from sites I had never heard of, published recently, with zero of the traditional authority signals I had spent years accumulating. Their GEO scores were high. Mine were not. That was the whole story.
What I have actually changed
I am not throwing out everything I know about traditional SEO. That still matters and the signals still compound. But I added a few specific things to my workflow that address these gaps.
Every post now opens with a direct answer to the question implied by its title, within the first hundred words. Not context, not a story, not a "have you ever wondered" opener. An answer.
I check entity density before publishing. Anything vague gets a specific name.
FAQPage schema goes on every post. I use GoForTool's AI SEO Analyzer as a pre-publish check - it runs the full audit, flags whatever I missed, and generates the schema so I do not have to write it by hand.
My robots.txt now has explicit Allow entries for GPTBot, PerplexityBot, ClaudeBot, and Google-Extended. I had two of those blocked without realizing it.
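For reference, the explicit entries look like this (each `User-agent` value is the crawler's documented token; Google-Extended is a control token that Google's existing crawlers honor rather than a separate bot):

```text
# Explicitly permit the major AI crawlers.
# Note: pages are crawlable by default unless a Disallow rule blocks
# them, so these Allow lines mainly guard against broad Disallow rules
# elsewhere in the file.
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /
```

Worth auditing even if you never wrote these rules yourself: some hosting platforms and security plugins ship Disallow entries for AI crawlers by default, which is how I ended up blocking two of them without knowing.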
The posts I have applied this to consistently started getting Perplexity citations within two to three weeks. The posts I have not touched yet are still sitting with GEO scores in the 20s and 30s and zero AI citations. The correlation is hard to ignore.
Which of these beliefs did you share? Or have you hit a different wall with AI search that I have not covered here? Drop it in the comments - genuinely curious how this maps to other people's experience.