DEV Community

ReddGrow

GEO after ChatGPT shopping and Gemini citation whiplash


Most GEO advice is already stale.

That sounds harsh, but the product surface keeps moving faster than the playbooks. This week made that obvious again. OpenAI says 700 million people use ChatGPT each week, and that 1 million-plus Shopify merchants are being pulled into its buying flow. That is not a small UX tweak. It means ChatGPT is turning into a discovery layer fed by merchant data, product feeds, and whatever source context it trusts enough to show in the answer.

At the same time, Google's own AI guidance, echoed in the discussion on r/SEO, is basically telling everyone to calm down and go back to the boring stuff: crawlable pages, clear text, useful content, clean technical SEO. And honestly, Google has a point.

But here's the thing. If you stop there, you miss what is actually changing.

Seer measured a 23 percentage point drop in Gemini citation usage across a dataset of 82,000 responses. Same web. Same publishers. Different product behavior. That should make every SEO team a little uncomfortable, because it means AI visibility can shift when the interface shifts, when retrieval logic shifts, or when a model decides it needs fewer citations.
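Drift like that is easy to quantify once you snapshot answers over time. Here is a minimal sketch, assuming each answer is logged with a list of cited domains (the field names and sample data are illustrative, not Seer's actual schema):

```python
# Minimal sketch: measure citation drift between two answer snapshots.
# Counts how often answers carry at least one citation, then reports
# the percentage-point change between snapshots.

def citation_rate(answers):
    """Share of answers that include at least one citation."""
    cited = sum(1 for a in answers if a.get("citations"))
    return cited / len(answers) if answers else 0.0

def drift_pp(before, after):
    """Percentage-point change in citation rate between two snapshots."""
    return round((citation_rate(after) - citation_rate(before)) * 100, 1)

# Illustrative snapshots of the same prompts at two points in time.
january = [{"citations": ["reddit.com"]}, {"citations": ["example.com"]},
           {"citations": []}, {"citations": ["reddit.com"]}]
march = [{"citations": []}, {"citations": []},
         {"citations": ["reddit.com"]}, {"citations": []}]

print(drift_pp(january, march))  # → -50.0 (negative = the model cites less)
```

Run that weekly against real snapshots and a Gemini-style swing stops being a surprise and becomes a line on a chart.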

My take is simple: GEO is no longer just a content game. It is a source-shape game.

If ChatGPT is getting more commerce-aware, your feed quality matters. If Gemini can suddenly cite less, your monitoring matters. If Google is telling the market that generative visibility still comes from solid SEO basics, your pages still need to be fast, readable, and easy to parse. And if Reddit keeps showing up as a trusted source, your brand needs real presence in the discussions AI systems already look at.
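On the feed-quality point, even a crude completeness check beats finding out from the answer surface that half your catalog is thin. A hedged sketch, where the required fields are my own illustrative picks, not any official merchant spec:

```python
# Sanity-check product feed items before an AI commerce surface ingests them.
# The REQUIRED set is an illustrative assumption, not an official schema.
REQUIRED = {"title", "price", "availability", "url", "description"}

def feed_gaps(items):
    """Return {item_index: sorted missing fields} so thin entries stand out."""
    return {i: sorted(REQUIRED - item.keys())
            for i, item in enumerate(items)
            if REQUIRED - item.keys()}

items = [
    {"title": "Widget", "price": "9.99", "availability": "in stock",
     "url": "https://example.com/w", "description": "A widget."},
    {"title": "Gadget", "price": "19.99"},  # thin entry
]

print(feed_gaps(items))  # only the thin second item is flagged
```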

That last part gets hand-waved way too often.

In the same Seer work, Reddit held a 44% citation rate while other site categories lost ground. That lines up with what practitioners are saying in the schema skepticism thread on r/SEO: schema alone is not the magic trick, and a lot of teams are over-crediting checklists while under-measuring topic relevance, authority, and discussion footprint.

So what should a technical GEO workflow look like now?

Not mystical. Measurable.

Start by logging citations like a product team, not like a one-off content audit. If you are checking AI answers manually in a spreadsheet, you are already behind.

-- Recent AI answer citations: is your domain showing up at all,
-- and is Reddit out-citing you on the same prompts?
SELECT prompt,
       answer_model,
       cited_domain,
       cited_url,
       retrieved_at
FROM ai_answer_citations
WHERE cited_domain IN ('reddit.com', 'yourdomain.com')
ORDER BY retrieved_at DESC;

That tiny table already tells you more than most GEO decks. You can see whether your domain is getting cited at all, whether Reddit is beating you on the same topic, and whether one model is pulling different evidence than another.
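The analysis step can stay just as small. A rough Python sketch, assuming each logged row mirrors the columns in the query above:

```python
# Count citations per (model, domain) from logged rows to spot where
# Reddit is beating your own domain. Row shape mirrors the SQL table
# above but is still an assumption, not a fixed schema.
from collections import Counter

rows = [
    {"answer_model": "chatgpt", "cited_domain": "reddit.com"},
    {"answer_model": "chatgpt", "cited_domain": "yourdomain.com"},
    {"answer_model": "gemini", "cited_domain": "reddit.com"},
    {"answer_model": "gemini", "cited_domain": "reddit.com"},
]

counts = Counter((r["answer_model"], r["cited_domain"]) for r in rows)
for (model, domain), n in sorted(counts.items()):
    print(f"{model:10s} {domain:18s} {n}")
```

Even this toy version surfaces the two questions that matter: who gets cited, and whether different models pull different evidence.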

Then map your content to the kinds of evidence these systems seem to like.

Google's current posture says useful pages still win. Fine. Write pages that answer one thing cleanly, use obvious headings, and expose facts in plain text. But for ChatGPT and Gemini, that is only part of the puzzle. You also want pages that are easy to quote, easy to compare, and easy to connect to an active conversation elsewhere on the web.

That is where Reddit becomes strategic instead of cosmetic.

The r/AI_SearchOptimization discussion around Google's new guidance is telling. People are reading it as proof that hacks are fading and disciplined SEO is back. I mostly agree. But I would push it one step further: disciplined SEO plus discussion adjacency is the actual GEO stack.

A page without distribution is a lonely asset. A Reddit thread without a useful page behind it is a dead end. When those two pieces line up, AI systems have more paths to find the same idea, more context around it, and more reasons to treat it as worth citing.

This is why ReddGrow keeps leaning into Reddit for AI search visibility. Not because Reddit is magic. Because AI answer engines already trust it, and because helpful discussion leaves a trace that static brand pages often do not.

If I were building a GEO pipeline today, I would keep it brutally plain:

{
  "track": ["answers", "citations", "domains", "reddit_threads"],
  "publish": ["clear pages", "comparison content", "sourceable claims"],
  "amplify": ["helpful Reddit comments in threads AI already cites"],
  "review": ["citation drift", "model differences", "feed coverage"]
}

Nothing there is glamorous. Good.
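To make the "review" step above concrete, here is one hedged way to wire it up: flag when a model's citation share of your domain drops past a threshold week over week. The data and the 10-point threshold are illustrative assumptions.

```python
# Hypothetical "review" step: alert on week-over-week citation drift
# for a single domain. Threshold and sample data are illustrative.

def share(citations, domain):
    """Percent of logged citations pointing at `domain`."""
    return 100 * citations.count(domain) / len(citations) if citations else 0

def review(last_week, this_week, domain, threshold_pp=10):
    """Return 'investigate' if the domain's share fell past the threshold."""
    delta = share(this_week, domain) - share(last_week, domain)
    return "investigate" if delta <= -threshold_pp else "ok"

last_week = ["yourdomain.com", "reddit.com", "yourdomain.com", "other.com"]
this_week = ["reddit.com", "reddit.com", "other.com", "yourdomain.com"]

print(review(last_week, this_week, "yourdomain.com"))  # "investigate"
```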

That is probably the clearest signal in the market right now. OpenAI is widening the inputs to discovery. Google is telling brands to stop chasing myths. Gemini is proving that citation behavior can move fast. Reddit is still holding attention inside the citation layer. Put those together and the lesson is not "SEO is back" or "GEO changes everything." Both takes are too clean.

The real lesson is messier. Classic SEO is the floor. Source formatting is the packaging. Reddit is the trust bridge. And the teams that win AI visibility will be the ones that measure all of it like a living system instead of publishing a few pages and hoping the bots figure it out.

That hope phase is over.

If your brand is invisible in AI answers today, it is usually not because you missed one trick. It is because your sources are weak, your monitoring is weaker, and the web keeps teaching the models to trust someone else.
