In November I pulled our team's project boards into a spreadsheet and counted hours. Not because I love spreadsheets; because we'd been telling clients we were "moving to GEO" and I had no idea if that was true or just the thing we said in calls. The honest answer turned out to be approximately 30%. Three out of every ten hours that had been categorized as SEO work six months earlier were now categorized as something else, mostly things with names like "answer audit," "entity disambiguation," or "citation tracking."
The shift didn't feel like a strategy. It felt like a slow drift, the way a glacier moves. Most weeks nobody made a decision; we just kept doing the next sensible thing, and six months later the work looked different.
This is what survived the shift, what didn't, and what I'd warn anyone against doing if they're starting the same migration with their own team.
What stuck: brief templates and source-of-truth pages
The boring stuff stuck. Our brief template grew an "AI answer target" section, which forces the writer to draft the one-sentence claim an AI engine would have to extract to count us as a useful source. That's a small change with a big consequence: writers stopped burying the lede in throat-clearing intros, because the AI-answer-target line is sitting right there in the brief and the editor will ask why the article doesn't actually say that thing.
We also doubled down on what we used to call "source-of-truth" pages: a single canonical page per claim, owned by the client, with the underlying data or methodology in plain sight. These didn't move SEO rankings much but they moved citation tier in our testing, especially in Perplexity. Our hypothesis is that engines that re-query in real time reward pages where the claim and the supporting structure are both extractable from one URL.
What didn't stick: most of the keyword research
Keyword research workflows shrank. Not to zero, but close. The thing that replaced them was prompt research, which sounds similar and isn't. Keywords are about what people type into a search bar. Prompts are about what they ask a conversational agent, which tends to be longer, more contextualized, and dramatically less normalized across users.
We tried, for about three weeks, to scrape prompt data from a leaked public dataset and use it the way we used keyword volume. It didn't work. The distribution is too long-tail, and the synonyms are too varied. We now treat prompt research as a qualitative exercise with structured interviews and customer transcripts, not a quantitative exercise with a tool dashboard.
A thing we were wrong about
For the first quarter of the shift, I thought meta descriptions still mattered for AI engines. They don't, at least not in our testing. Or rather: they matter exactly as much as the rest of the page does, no more. We spent maybe 40 hours optimizing meta descriptions for AI snippet pull and watched the citation tier needle not move. I was the one who pushed that experiment. It was a waste. The team was polite about it. I should have killed it after week two.
The 30% number is composite
I want to flag the 30% honestly. It's a portfolio average across a 12-client book of work, weighted by hours logged, comparing May 2025 to October 2025. Some clients shifted closer to 55%, mostly the ones with B2B SaaS positioning where AI engines were already a primary discovery channel. One client shifted maybe 8%, because their audience still lives on Google's blue links and our testing didn't justify a bigger reallocation.
The aggregate number is real but the variance is enormous. If you're a head of marketing reading this and your team is "moving to GEO," I'd want to see the per-channel data before I trusted any single percentage. The temptation to round up the shift number is strong, because a bigger number tells a tidier migration story. The honest data is messier.
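For anyone reconstructing a number like this from their own time logs, the hours-weighted average is a one-liner. This sketch uses invented client names, hours, and shift percentages, not our actual book of work:

```python
# Hypothetical illustration of an hours-weighted GEO-shift average.
# All figures below are made up for the example.
clients = [
    {"name": "client_a", "hours": 120, "geo_share": 0.55},
    {"name": "client_b", "hours": 80,  "geo_share": 0.08},
    {"name": "client_c", "hours": 100, "geo_share": 0.30},
]

total_hours = sum(c["hours"] for c in clients)
# Weight each client's shift by the hours logged against them.
weighted_shift = sum(c["hours"] * c["geo_share"] for c in clients) / total_hours
print(f"portfolio GEO share: {weighted_shift:.0%}")
```

The point of weighting by hours rather than averaging the percentages is that a small client with a dramatic shift can't distort the portfolio number.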
The hidden cost of the shift: client communication
The workflow change was, in some ways, the easier part. The harder part was changing how we communicated progress to clients who had been buying SEO from us for two or three years and had grown comfortable with monthly rank reports and traffic charts.
Citation tier data is harder to skim than a position chart. A client glancing at a dashboard wants to know, in three seconds, whether things are getting better or worse. The A/B/C/D/E framework requires explanation the first three times you show it to anyone. Some clients adopted it quickly. A few resisted, not because the framework was wrong but because they had bosses who wanted to see rank movement and didn't want to learn a new vocabulary.
We added a translation layer: every monthly report now includes both the GEO-native metrics (tier rates, citation counts, engine breakdown) and a "legacy view" with traditional SEO indicators where they still apply. That doubled the time per report for a while. We're still figuring out how to reduce that overhead without losing the audience for either view.
What we'd tell our six-months-ago selves
Run the citation baseline first, before changing the workflow. We didn't, and that means our pre-shift data is reconstructed from screenshots and memory, which is the same as saying we don't really know what the lift was. The agency I work with now requires a 40-prompt baseline before any GEO engagement, partly because of this regret. It costs a couple of weeks. It's worth it.
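If you want the baseline to be more than screenshots and memory, it helps to log every prompt run in a fixed schema from day one. This is a minimal sketch of what such a log could look like; the field names, tier labels, and example prompt are illustrative assumptions, not a real tool's schema or our actual prompt set:

```python
# Minimal sketch of a citation-baseline log for a fixed prompt set.
# Schema and example values are hypothetical.
import csv
from dataclasses import dataclass, asdict

@dataclass
class BaselineRow:
    date: str     # run date, ISO format
    engine: str   # e.g. "perplexity", "chatgpt"
    prompt: str   # the exact prompt issued to the engine
    cited: bool   # did the answer cite the client at all?
    tier: str     # A-E citation tier, assigned by a human reviewer

def write_baseline(rows, path):
    """Persist baseline rows to CSV so pre-shift data survives the shift."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(BaselineRow.__dataclass_fields__))
        writer.writeheader()
        writer.writerows(asdict(r) for r in rows)

rows = [BaselineRow("2025-05-01", "perplexity",
                    "best project tracker for small agencies", True, "B")]
write_baseline(rows, "baseline_demo.csv")
```

Running the same 40 prompts against the same log schema every month is what makes "what was the lift?" answerable later.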
The other thing I'd tell us: don't rename the team. We called ourselves the "GEO squad" for about a month and it created a weird internal politics where the "SEO squad" felt sidelined. It's the same work. It's the same people. The rename was an own goal.
A third thing: keep the technical SEO inventory. We let some technical SEO maintenance slip during the shift, partly because everyone wanted to work on the new shiny thing and partly because the wins felt smaller. Then we did an audit at month seven and found two clients had accumulated crawl errors, broken canonicals, and a small pile of redirect loops that had to be cleaned up before the GEO work could even be measured cleanly. The lesson: GEO is not a replacement for technical SEO hygiene. It runs on top of it. Stop maintaining the foundation and you'll start to lose the building.
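The redirect loops in particular are cheap to catch mechanically if you keep the inventory. A rough sketch, assuming you can export the site's redirect map as a source-to-target dictionary (the URLs here are placeholders):

```python
# Sketch of a redirect-loop check over an exported redirect map.
# Paths below are placeholders, not a real client's URLs.
def find_redirect_loops(redirects):
    """Return the URLs whose redirect chain eventually enters a loop."""
    loops = set()
    for start in redirects:
        seen = {start}
        current = start
        while current in redirects:
            current = redirects[current]
            if current in seen:      # chain revisited a URL: loop
                loops.add(start)
                break
            seen.add(current)
    return loops

redirect_map = {
    "/old-pricing": "/pricing",
    "/pricing": "/plans",
    "/plans": "/pricing",  # loop: /pricing -> /plans -> /pricing
}
print(find_redirect_loops(redirect_map))
```

Note that "/old-pricing" is flagged too: it isn't part of the cycle itself, but its chain dead-ends into one, which is exactly the kind of thing that pollutes measurement later.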
Headcount and skill mix
The team didn't change in headcount over the six months, but the skill mix shifted. We moved hours into prompt research, citation tracking, and entity work, and pulled them out of link-building outreach and keyword expansion. We did not lay anyone off. The people who'd been doing keyword expansion picked up entity disambiguation work because the cognitive habits transferred well (both jobs involve systematic inventory and consistency-checking). The link builders learned digital PR and source-of-truth content production. None of these transitions were painless, but none of them required new hires.
If you're managing a team through this kind of shift, the question isn't "do we need new skills." It's "what do our existing team's craft skills translate into, and how do we let them learn the new tools without making them feel like beginners." We were imperfect at this. We're better now than we were in March.
If you're partway through this shift yourself, what's the one workflow you've already cut that you're quietly relieved to be done with? Mine was the monthly position-tracking report. I don't miss it.