Dmitry (Dee) Kargaev

Originally published at blog.deeflect.com

SEO Is Dead? No. But the Game Changed.

I asked ChatGPT who I am. It had nothing. No idea I existed.

I've been deep in the AI space for a while now. I spent 5 years as lead product designer at a fintech platform serving 70+ financial institutions. I shipped 31 open-source Rust CLI tools. Published 13 blog posts. Built in public for months. And the models I use every day had zero record of me.

That bothered me more than it probably should have. But it also made sense once I started pulling the thread. Because "SEO is dead" isn't quite right - but something real is shifting, and I wasn't ready for it.

This is what I found.

The two games nobody told me I was playing

There's Google-findable. And there's AI-citable. I had one. I completely lacked the other.

Traditional SEO is about ranking. You write content, build links, optimize your pages, and Google surfaces you when someone searches. The pipeline is: search query → ranked results → human clicks through. That's the game most people know how to play.

Generative Engine Optimization - GEO - is about citation. AI generates an answer. Your content, your name, your entity gets referenced in that answer. The pipeline skips the click entirely. There's no blue link. The model just knows you exist, or it doesn't.

I had spent zero time thinking about the second one. Which meant I was completely invisible to the systems I use to do my actual work. The irony was genuinely annoying.

That gap is what sent me down a multi-week research rabbit hole that ended with an open-source platform list, a scoring system, and a website full of free tools. But I'm getting ahead of myself.

What's actually happening to search right now - and why "SEO is dead" keeps trending

The data isn't speculative anymore. BrightEdge tracked AI Overviews appearing in 11% of all queries in 2025. CTR is down 30%. People are asking AI assistants instead of searching, and when they do search, they're increasingly getting an AI-generated summary instead of ten blue links.

This isn't a prediction. It's already the baseline. The shift isn't coming - it happened while everyone was arguing about whether it would.

SparkToro's 2025 data puts this in concrete terms: top established brands appear in 55-77% of relevant AI responses. Unknown entities? 70x more volatile. You either have consistent presence or you have noise. There's not much middle ground.

The question stopped being "how do I rank?" and became "how do I get cited?"

What made this hit differently for me was trying to find myself across the major AI systems. Not just ChatGPT. I asked Claude, asked Perplexity, asked Google's AI Overview. The results were inconsistent in a way that told me something real: these systems aren't pulling from the same data, aren't weighting the same signals, and aren't resolving entities the same way. Being findable on one doesn't mean you're findable on all. That fragmentation is the part nobody's really mapped yet - including most of the GEO content I've seen so far.

SEO isn't dead - let's be honest about that

The title is provocative on purpose. Here's the actual nuance, because I think people deserve a straight answer rather than a hot take.

seoClarity's 2025 research found that 99.5% of AI Overview sources come from Google's top 10 organic results. Read that again. The AI is pulling from Google's rankings. Which means if you don't rank, you don't get cited. SEO is still the prerequisite. GEO builds on top of it.

So no, you shouldn't burn your SEO playbook. You should add a chapter.

What changed is the goal. Ranking #1 used to be the finish line. Now ranking #1 gets you in the candidate pool for AI citation. That's still worth doing. But it's not sufficient anymore. You can rank well and still be invisible to AI systems if you're missing the signals that models use to identify credible, citable entities.

Think about what that means practically. You could have a page ranking on the first page of Google for a competitive keyword - real traffic, real impressions - and still not get cited in an AI-generated answer for the same query. Because the ranking signals and the citation signals overlap but aren't identical. You need both. And most SEO workflows are only optimizing for one.

That's the shift. Not death. Evolution with a second layer that most people haven't started thinking about yet - including me, until very recently.

What GEO actually is and why the research convinced me

Generative Engine Optimization is the practice of optimizing your online presence to appear in AI-generated answers, not just search rankings.

The foundational paper here is Aggarwal et al., published at KDD '24 - you can read it on arXiv. They tested different optimization strategies across a range of queries and found that the right approach could drive up to a 40% increase in AI visibility. The top strategies weren't what I expected: citations and statistics outperformed most other approaches. Authoritative sourcing matters enormously to how models evaluate content.

Structured data, clear entity signals, and demonstrable expertise all feed into whether a model considers you citable. This isn't link juice. It's closer to reputation infrastructure - the stuff that makes a model "trust" that you're a real entity with real credentials.

What clicked for me is that this maps to how the models actually work. They don't crawl the web in real time. They learned from a corpus. And in that corpus, some entities are clearly defined, well-referenced, consistently mentioned. Others are noise. I was noise.

The content structure piece most people miss

The Aggarwal research gets into something most GEO content glosses over: it's not just what you publish, it's how you structure it.

Content that AI systems cite tends to have specific characteristics. It makes direct, falsifiable claims. It cites external sources. It includes statistics with attribution. It answers questions in a way that can be cleanly excerpted. This last point matters more than I initially thought - AI systems aren't summarizing your whole article, they're pulling specific passages. If your content isn't written in citable chunks, it's harder for a model to quote you cleanly even when it wants to.

This is actually a content design problem as much as an SEO problem. It's related to something I've written about before in the AI UX space - most people building for AI systems haven't thought about the machine as a reader with specific needs. The machine reads differently than a human does. It's looking for density, structure, and attributable claims.

Writing for AI citation means writing in a way that makes a model's job easier. Short, precise statements. Named sources. Numbers with context. The exact opposite of the fluffy "this is interesting to explore" writing style that pads word counts but says nothing.
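To make "citable chunks" less abstract, here's a toy heuristic in plain Python. This is my own sketch, not anything from the Aggarwal paper: it just counts the surface signals the research associates with citable content (numbers, attribution phrases, excerptable sentence lengths), and the thresholds are arbitrary.

```python
import re

def citability_signals(text: str) -> dict[str, int]:
    """Toy scorer: count surface signals associated with citable content."""
    stats = re.findall(r"\b\d+(?:\.\d+)?%?", text)  # numbers and percentages
    attributions = re.findall(
        r"\b(?:according to|per|found that|reports?)\b", text, flags=re.IGNORECASE
    )  # phrases that attribute a claim to a named source
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    long_sentences = [s for s in sentences if len(s.split()) > 30]
    return {
        "statistics": len(stats),
        "attributions": len(attributions),
        "sentences": len(sentences),
        "hard_to_excerpt": len(long_sentences),  # long runs are harder to quote cleanly
    }

fluffy = "This is an interesting space to explore and there is much to consider."
dense = ("According to seoClarity, 99.5% of AI Overview sources come from "
         "Google's top 10 results.")
print(citability_signals(fluffy))  # no numbers, no attribution
print(citability_signals(dense))   # two statistics, one attribution phrase
```

A real audit would be smarter than regexes, but even this crude version separates padding from excerptable claims.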

The data point that actually rewired how I think about this

I went deep on the Ahrefs research and one number kept stopping me.

Brand mention correlation to AI citation: 0.664. Backlink correlation: 0.218.

That's not a small gap. Brand mentions are three times more predictive of AI visibility than backlinks. Three times. The entire SEO industry is built around backlinks as the gold standard signal - and for Google rankings, that's still mostly true. But for AI citation, what matters is whether your name, your brand, your entity is being talked about across the web.

The Semrush data added another layer: nofollow links perform nearly as well as dofollow links for AI visibility. In traditional SEO, a nofollow link is worth significantly less. For GEO purposes, the signal isn't the PageRank transfer - it's the mention. The presence. The fact that you're being referenced.

This is a real reorientation. The question isn't just "who links to me?" It's "who talks about me, mentions me, references me across contexts?" Those are different things. The second one is what I had been ignoring completely.
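The linked-versus-unlinked distinction is easy to see in code. Here's a small sketch, using only Python's standard-library HTML parser, that counts how often a brand shows up as anchor text versus plain prose; the sample page is invented. For traditional SEO only the first kind carries link equity, but for mention-based signals both count.

```python
from html.parser import HTMLParser

class MentionCounter(HTMLParser):
    """Count linked vs. plain-text occurrences of a brand name in a page."""
    def __init__(self, brand: str):
        super().__init__()
        self.brand = brand.lower()
        self.in_link = 0      # depth of currently open <a> tags
        self.linked = 0       # brand occurrences inside anchor text
        self.unlinked = 0     # brand occurrences in plain prose

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link += 1

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.in_link -= 1

    def handle_data(self, data):
        hits = data.lower().count(self.brand)
        if self.in_link:
            self.linked += hits
        else:
            self.unlinked += hits

page = """<p>I read about Deeflect on a forum.</p>
<p>See <a href="https://deeflect.com" rel="nofollow">Deeflect</a> for tools.</p>
<p>Deeflect keeps coming up in GEO threads.</p>"""

counter = MentionCounter("deeflect")
counter.feed(page)
print(counter.linked, counter.unlinked)  # 1 linked mention, 2 unlinked
```

A backlink audit would only see the one `<a>` tag (nofollow, at that). A mention audit sees all three.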

It connects to something I wrote about leaving fintech to build AI systems - the whole reason I went independent was to build things that matter. Building an AI presence that's actually citable is part of that.

What the platform data actually shows

When I started auditing where brand mentions were coming from across the 168 platforms I ended up cataloging, some patterns were obvious in hindsight.

Platforms that block AI crawlers still contribute to traditional SEO - backlinks, referral traffic, domain authority signals. But they contribute zero to AI citation. If a major AI crawler can't index the content where you're being mentioned, that mention is invisible to the model. Doesn't matter how high-authority the platform is. Doesn't matter how many people read it. The AI never saw it.

Several platforms that have solid SEO reputations block AI crawlers entirely. A few you'd never expect have wide-open access. The robots.txt data across 168 platforms genuinely changed my prioritization for where to spend time building presence.
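You can run this check yourself in a few lines of Python with the standard-library robots.txt parser. The crawler names are the ones I verified against; the sample robots.txt below is made up for illustration, not taken from any real platform.

```python
from urllib.robotparser import RobotFileParser

# User-agent tokens for the major AI crawlers checked in the audit
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "Google-Extended"]

def ai_crawler_access(robots_txt: str, path: str = "/") -> dict[str, bool]:
    """Return {crawler: allowed} for a robots.txt body, no network I/O needed."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, path) for bot in AI_CRAWLERS}

# Hypothetical platform: blocks OpenAI and Anthropic crawlers, allows the rest
sample = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""
print(ai_crawler_access(sample))
# {'GPTBot': False, 'ClaudeBot': False, 'Google-Extended': True}
```

In practice you'd fetch `https://platform.example/robots.txt` first and feed the body in; the parsing logic is the same.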

High-DA platform with AI crawlers blocked: useful for search rankings, useless for GEO. Medium-DA platform with full AI crawler access: directly contributes to citation potential. Those aren't the same trade-off at all. Treating them the same - which most SEO frameworks do - is leaving real GEO value on the table.

Why "SEO is dead" gets the diagnosis wrong but identifies a real symptom

I keep seeing the "SEO is dead" framing in newsletters, on Twitter, in founder group chats. It's not accurate but it's pointing at something real: the workflows that worked in 2020 are producing worse results in 2025. That's true. The feeling that something fundamental has changed is correct. The conclusion that SEO itself is dead is wrong.

What's happening is that SEO was always a proxy for something else - demonstrating that your content is trustworthy and relevant. Google built ranking signals as a proxy for that. Now AI systems are building citation signals as a different proxy for the same underlying thing. The underlying thing didn't change. The measurement changed.

If you had real expertise and real content depth, most GEO strategies will work for you because you actually have the substance those signals are trying to measure. If you were gaming SEO with thin content and link schemes, GEO is going to be harder because the signals it uses are less gameable. Brand mentions across real communities are harder to manufacture than backlinks. Genuine citations in credible content are harder to fake than directory submissions.

That's probably a good thing. The prompt engineering is dead conversation is related - these systems keep evolving in ways that reward actual depth over tactical gaming. GEO continues that trend.

The mistake is treating this as a binary. SEO or GEO. Old playbook or new playbook. It's additive. Everything that made content good for search still applies. Now there's additional surface area to optimize - entity signals, structured data, AI crawler access, brand mention distribution - that wasn't relevant before.

What I built to solve this

Once I understood the problem I started doing the research manually - checking which platforms actually allow AI crawlers, which ones block them in robots.txt, which ones have high GEO value versus medium versus low. Doing it by hand was a pain in the ass. So I built a system.

awesome-geo is the output: a curated, verified list of 168 platforms with full crawler access data. 142 of them are AI-discoverable. I scored them: 74 high GEO value, 78 medium, 16 low. Every platform has been manually verified against robots.txt for the major AI crawlers - GPTBot, ClaudeBot (Anthropic's crawler), and Google-Extended.

I also built geo.deeflect.com - free tools that came out of doing this manually and wishing they existed: AI Visibility Checker, JSON-LD Generator, llms.txt Generator, Meta Tags Generator, robots.txt Generator.

I built this because I needed it and figured others would too. It's open source. Use it.

The reason I verified robots.txt across all 168 platforms is that it matters more than most people realize. A platform could have high domain authority and great SEO value - but if it blocks AI crawlers, it contributes zero to your GEO presence. Several well-known platforms do exactly that. Knowing which ones are actually AI-accessible changes your prioritization completely.

The other thing that came out of building this: the verification process itself is time-consuming in a way that scales badly. You can't just check robots.txt once. Platforms update their policies. A platform that allowed GPTBot in 2023 might have added restrictions in 2024. The landscape is moving, which means any static list becomes stale. The tools at geo.deeflect.com are built to stay current rather than being a snapshot.
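Detecting that drift is mechanical once you keep snapshots. Here's a minimal sketch (my own, with invented snapshot contents) that diffs two robots.txt versions and reports any crawler whose access flipped:

```python
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "Google-Extended"]

def access_map(robots_txt: str) -> dict[str, bool]:
    """Parse a robots.txt body and return per-crawler access for '/'."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, "/") for bot in AI_CRAWLERS}

def policy_changes(old: str, new: str) -> dict[str, tuple[bool, bool]]:
    """Return {crawler: (was_allowed, now_allowed)} for crawlers that flipped."""
    before, after = access_map(old), access_map(new)
    return {b: (before[b], after[b]) for b in AI_CRAWLERS if before[b] != after[b]}

# Hypothetical: a platform that was wide open, then blocked GPTBot
snapshot_2023 = "User-agent: *\nAllow: /\n"
snapshot_2024 = "User-agent: GPTBot\nDisallow: /\n\nUser-agent: *\nAllow: /\n"
print(policy_changes(snapshot_2023, snapshot_2024))
# {'GPTBot': (True, False)}
```

Run it on a schedule against stored snapshots and the stale-list problem becomes an alert instead of a surprise.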

This is part of the same instinct that drove building multiple projects solo - when I hit friction repeatedly, I build the thing that removes it, then make it available. The GEO research tooling is that, applied to discoverability infrastructure.

What to do right now if you care about any of this

You don't need to overhaul everything. Start here:

  • Check where you're mentioned. Brand mentions are the highest-correlation signal. Are you being referenced across contexts beyond your own site?
  • Add JSON-LD structured data. This is how you communicate entity information to AI systems. If you haven't done it, geo.deeflect.com has a free generator.
  • Create an llms.txt file. Similar to robots.txt but for LLMs - it gives AI systems structured information about who you are and what you do. Again, free generator on the site.
  • Verify your platforms allow AI crawlers. Check the robots.txt on any platform you're counting on for AI visibility. You might be surprised what you find.
  • Think about entity consistency. Your name, credentials, and core claims should be stated consistently across platforms. Inconsistency makes entity resolution harder for models.
  • Use citations and statistics in your content. The KDD '24 research is clear: this is a top GEO signal. Reference real sources. Include real numbers. This post does that on purpose.
  • Audit your content structure. Are your key claims written in excerptable chunks? Can a model pull a clean sentence or paragraph that stands on its own? If not, restructure. This is different from readability optimization - it's citation optimization.
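To show what the JSON-LD and llms.txt items look like in practice, here's a minimal sketch. The person and URLs are hypothetical, the Person fields are a small subset of what schema.org supports, and the llms.txt layout follows the llmstxt.org convention (H1 title, blockquote summary, link list):

```python
import json

def person_jsonld(name: str, url: str, same_as: list[str], job_title: str) -> str:
    """Emit a minimal schema.org Person block ready to drop into <head>."""
    data = {
        "@context": "https://schema.org",
        "@type": "Person",
        "name": name,
        "url": url,
        "jobTitle": job_title,
        "sameAs": same_as,  # other profiles that help models resolve the entity
    }
    return f'<script type="application/ld+json">{json.dumps(data, indent=2)}</script>'

def llms_txt(site: str, summary: str, links: dict[str, str]) -> str:
    """Emit an llms.txt body: H1 title, blockquote summary, then key links."""
    lines = [f"# {site}", "", f"> {summary}", "", "## Key pages", ""]
    lines += [f"- [{title}]({url})" for title, url in links.items()]
    return "\n".join(lines)

print(person_jsonld(
    "Jane Example",                         # hypothetical person
    "https://example.com",
    ["https://github.com/janeexample"],
    "Product Designer",
))
print(llms_txt("Example Site", "A demo site.", {"Blog": "https://example.com/blog"}))
```

The generators on the site cover more schema types; the point here is just how little markup it takes to give models a clean, consistent entity record.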

The prompt engineering is dead argument I've seen floating around is related to this - the game keeps shifting toward higher-level signals, away from tactical optimization. GEO is the same shift applied to discoverability.

Where this goes from here

I'm just getting started on this.

The research took me deep enough that I have a lot more to share - how different AI systems handle citations differently, what the actual citation mechanics look like across ChatGPT versus Claude versus Perplexity, how to structure content specifically for AI summarization, what the verification data across 168 platforms actually reveals about the crawling landscape.

The fragmentation across AI systems is the next thing I want to dig into properly. Right now, most GEO content treats "AI visibility" as a monolithic thing. It's not. Being cited by Perplexity requires different signals than being cited in a Google AI Overview. ChatGPT's training data cutoff means recent content won't affect your visibility there until the next model version. Claude uses different weighting. These aren't the same problem. Treating them the same is leaving real optimization opportunities untouched.

This article is the intro. I'm building out a full GEO research series here - the tools are live, the data is real, and I'm going to keep digging.

If you've been heads-down on traditional SEO and haven't thought about AI visibility yet, now's the time to start. Not because SEO is dead. Because the finish line moved - and most people haven't noticed yet.

SEO got a co-pilot. Learn to fly both.


Tools and research: geo.deeflect.com - awesome-geo on GitHub
