DEV Community

Om Prakash


I lost 22% of my organic traffic in 6 months and Google never warned me. Here's what was actually happening.

I noticed the drop in January. Not a dramatic cliff, just a slow bleed. Month over month, the numbers kept sliding. By the time I sat down to actually investigate, we had lost 22% of our organic traffic over six months, and nothing in Google Search Console flagged it as a problem worth investigating.

No manual actions. No algorithmic penalties. Core Web Vitals were green. Backlink profile was stable. I kept refreshing the same dashboards, looking for a technical explanation that wasn't there.

It took me embarrassingly long to consider the real answer: people weren't finding my content through Google anymore. A growing percentage of them were just asking ChatGPT.

Here's what made this particularly frustrating. The queries weren't disappearing. The intent was still there. People were still searching for answers to exactly the questions my content was built to answer. But AI tools were intercepting those searches before they ever became clicks.

ChatGPT hit 100 million users faster than any app in history, according to The Guardian. That number, when I first read it, felt abstract. It stopped feeling abstract when I looked at my traffic chart. The timing aligned almost perfectly with when the erosion started.

And it isn't just ChatGPT. Google's own AI Overviews, which now reach billions of users monthly according to Alphabet's Q1 2025 earnings report, had become a direct competitor to every piece of content I'd spent years producing. Gartner's projection that organic search traffic will decline 50% by 2028 no longer reads like a warning about the future. For some of us, it's already the present.

The traffic I was losing wasn't evenly distributed. When I dug into which pages had dropped the most, it was informational content, the kind designed to answer specific questions. "What is X?" "How does Y work?" "What's the difference between A and B?" These were exactly the types of queries where AI models could deliver a complete answer without the user ever needing to click through.
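Digging into which pages dropped the most is easy to reproduce. Here's a minimal sketch that compares two Search Console "Pages" exports and ranks pages by relative click loss. The CSV format (a `Page` column and a `Clicks` column) and the file contents are hypothetical; adapt the column names to whatever your export actually contains.

```python
import csv
from io import StringIO

def top_decliners(before_csv: str, after_csv: str, n: int = 10):
    """Rank pages by relative click loss between two export periods.

    Assumes each export has 'Page' and 'Clicks' columns (hypothetical format).
    """
    def load(raw: str) -> dict:
        return {row["Page"]: int(row["Clicks"]) for row in csv.DictReader(StringIO(raw))}

    before, after = load(before_csv), load(after_csv)
    deltas = []
    for page, old_clicks in before.items():
        new_clicks = after.get(page, 0)  # a page missing from the new export lost everything
        if old_clicks:
            deltas.append((page, old_clicks, new_clicks, (new_clicks - old_clicks) / old_clicks))
    # Most negative relative change first
    deltas.sort(key=lambda d: d[3])
    return deltas[:n]

# Toy data: an informational page bleeding clicks vs. a transactional page holding steady
before = "Page,Clicks\n/what-is-x,1000\n/pricing,500\n"
after = "Page,Clicks\n/what-is-x,600\n/pricing,480\n"
for page, old, new, pct in top_decliners(before, after):
    print(f"{page}: {old} -> {new} ({pct:.0%})")
```

Sorting by relative rather than absolute loss surfaces the informational pages whose traffic eroded even when their raw click counts were never the biggest on the site.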

The bottom-of-funnel transactional content held up better. People still click when they're ready to buy. But the top and middle of the funnel, all that awareness and consideration content I'd invested years in building, was being answered in a sidebar before users ever reached me.

40% of B2B buyers now use AI assistants to research solutions before they even talk to sales, according to multiple industry surveys. That's not a marginal use case anymore. For any company that depends on organic traffic as a discovery channel, that's a structural shift in how buyers find you. And it happened quietly, without any search algorithm update to point to, without any penalty to appeal.

The change I made wasn't a full content overhaul. I didn't have the bandwidth for that, and I was skeptical of anyone selling a complete solution. What I actually did was go back through my highest-traffic pages and look at how they were structured.

Most of them were written the way good SEO content used to be written: covering topics broadly, hedging claims, presenting multiple perspectives. That approach worked when the goal was to rank in ten blue links. It doesn't work as well when an AI model is deciding whether to cite you.

I came across a paper from KDD 2024, published on arXiv as 2311.09735, that studied what made content more likely to be cited by AI systems. The finding was specific: content that included concrete statistics and used definitive language was cited 30 to 40% more often than content that hedged or spoke in generalities. Not slightly more. 30 to 40%.

That changed how I thought about the problem. It wasn't that my content was bad. It was that it was written for a world where the reader was doing the evaluation. In that world, nuance and balance are virtues. When an AI model is doing the evaluation, specificity and confidence signal credibility.

So I went through those pages and did something simple: I replaced vague claims with cited statistics. I rewrote hedged sentences into declarative ones. "Many businesses find that..." became "Companies using this approach report a 40% reduction in..." Not manufactured precision, but real precision where I actually had the data to back it up.

Within about six weeks, I started seeing some of those pages show up in AI-generated responses. Not consistently, not on every query, but enough to track.

What I can't claim is that I have this fully figured out. The landscape is changing faster than any one person's content strategy can keep up with. The platforms are different. ChatGPT cites differently than Perplexity, which cites differently than Google AI Overviews. What gets you cited in one doesn't guarantee you show up in another.

The one thing I'm reasonably confident about: the content I was producing before was optimized for something that no longer fully exists. The question I should have been asking, and wasn't, is whether my content was structured in a way that AI systems would find trustworthy enough to reference.

That shift from "will a human reader find this useful?" to "will an AI model find this citable?" isn't as extreme as it sounds, but it does require different instincts. Specificity over comprehensiveness. Concrete numbers over hedged estimates. Clear answers over balanced exploration.

The practical takeaway, if there is one: go look at your top informational pages and count how many specific, cited statistics they contain. Then look at the sentences that start with "Many," "Some," "It can be," or "Often." Those are the sentences that AI models tend to skip over when deciding what to quote. Rewriting even a handful of them, grounding them in real numbers with real sources, is the lowest-effort change I found that actually moved the needle.
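That audit can be automated. Below is a rough sketch that counts hedged sentence openers and concrete statistics in a page's text. The hedge-opener list comes straight from the paragraph above; the regex for "a specific statistic" is my own crude assumption (a number followed by %, "percent", "million", or "billion") and will miss plenty, so treat the output as a starting point for manual review, not a score.

```python
import re

# Sentence openers the article calls out as ones AI models tend to skip
HEDGE_OPENERS = ("Many", "Some", "It can be", "Often")

# Crude approximation of a "specific statistic": a number plus % or a scale word
STAT_PATTERN = re.compile(r"\b\d+(?:\.\d+)?\s*(?:%|percent|million|billion)", re.IGNORECASE)

def audit_page(text: str) -> dict:
    """Count hedged sentence openers and concrete statistics in page text."""
    # Naive sentence split on terminal punctuation; good enough for a quick audit
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    hedged = [s for s in sentences if s.lstrip().startswith(HEDGE_OPENERS)]
    return {
        "sentences": len(sentences),
        "hedged_sentences": len(hedged),
        "cited_stats": len(STAT_PATTERN.findall(text)),
        "hedged_examples": hedged[:5],  # first few, for rewriting by hand
    }

sample = (
    "Many businesses find that onboarding is hard. "
    "Companies using this approach report a 40% reduction in churn. "
    "Often the fix is simpler than it looks."
)
report = audit_page(sample)
print(report["hedged_sentences"], "hedged /", report["cited_stats"], "stats")
```

Run it across your top informational pages and sort by the ratio of hedged sentences to cited statistics; the pages at the top of that list are the cheapest wins to rewrite first.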

I started using GEOmind to track where my content was actually showing up across AI platforms, which helped me prioritize which pages to fix first rather than doing this entirely by intuition. It didn't solve the problem by itself, but it made the problem visible. And for something this new, that turned out to be most of the battle.
