Robert Kirkpatrick

Everyone Switched to AI Content. Most of Them Got Nothing.

Seventy-four percent of new webpages now contain AI-generated content.

Most of it gets zero traffic.

Not penalized. Not flagged. Not removed. Just invisible. Zero clicks. Zero reads. The content exists, technically. Nobody finds it.

I have been building AI writing systems since before it was fashionable, and I have watched smart people make the same mistake over and over. They automate the output. They ignore the outcome.

Eighty-seven percent of businesses are using AI for SEO content now. The volume is staggering. The results are concentrated in a small percentage of those operations.

This is the gap nobody talks about. Not because it is a secret. Because it requires admitting something uncomfortable.

The Content Flood Is Real. The Results Are Not.

Before 2023, putting out one good article a week was competitive. Now you are competing with operations that pump out twenty articles a day, all of it AI-generated, all of it technically correct, all of it reading like it was written by a very polite robot who did the assigned reading.

Here is what the data says: sixty-five percent of marketing professionals say their biggest concern about AI content is quality and authenticity. They are right to be worried.

But they are misidentifying the problem.

The issue is not that Google penalizes AI content. Google has stated publicly that it does not penalize AI-assisted content by default. The issue is that readers do.

You have about eight seconds. That is roughly how long a visitor stays on a page before deciding whether to keep reading or hit the back button. And human readers have developed a remarkable sensitivity to content that was clearly generated without a specific person in mind.

It is not about spelling. It is not about grammar. It is about whether the writing feels like it was written for anyone in particular.

Most AI content is not written for anyone. It is written for the keyword.

What AI Content Usually Gets Wrong

I have read hundreds of AI-generated articles in the past year. I can spot them inside three sentences. Not because of technical tells, though those exist. Because of a structural emptiness.

The article knows things. It does not know you.

Generic AI content tends to hit the same patterns. It opens with a broad statement about how important the topic is. It lists five to seven points. Each point has two to three sentences of explanation. The conclusion circles back to why this matters. The whole thing reads like a term paper written the night before it was due.

That structure is not wrong. It is just completely forgettable.

The articles that get read, shared, and ranked share something different. They have a point of view. They make a specific claim and defend it. They include something that could only have been observed by someone who was actually in the room.

In the absence of that specificity, readers bounce. And when readers bounce, rankings follow.

The Authenticity Problem Is Measurable Now

Here is where it gets interesting.

AI search platforms are now the fastest-growing source of referral traffic on the web. AI platforms sent more than one billion referral visits in a single month in 2025, up three hundred fifty-seven percent year over year. ChatGPT users click out to external sites at twice the rate of Google users.

But here is what the research shows about which content gets cited.

Content depth matters. Readability matters. Q-and-A formats perform best. Dense paragraphs perform worst. And forty-four percent of all AI Overview citations come from the first thirty percent of a piece, meaning the opening matters more than anything else.

The content that gets picked up by AI citation engines, and the content that converts when readers arrive from those engines, shares the same DNA. It is specific. It is direct. It makes a claim in the first paragraph. It answers the question without burying the answer.

That is not a coincidence. That is what human writing at its best looks like.

Why Most People Are Generating Content Wrong

The mistake is using AI to write more of the same.

You take a keyword. You tell the AI to write a fifteen-hundred-word article about that keyword. The AI does. You publish it. You do this forty times. You wonder why nothing is ranking.

The content is not bad. It is generic. And generic is the one thing you cannot afford to be in a market where everyone has access to the same tools.

The people getting results are using AI differently. They are not asking the machine to think for them. They are using it to execute ideas they already have. The strategy, the angle, the specific claim, the personal observation, those come from them. The AI builds the structure around a direction that a human already chose.

That is a different process. It produces a different result.

When I write something using this approach, the article has a spine. It is arguing something. It is not just covering a topic. The difference is obvious to anyone who reads both versions.

The Specific Problem With Voice

There is a technical version of this problem and a practical version.

The technical version is that AI writing has patterns. Not just em dashes and phrases like "it is worth noting" and "in today's rapidly evolving landscape." Those are easy to remove. The deeper patterns are structural: the tendency to hedge every claim, to present multiple perspectives on everything, to avoid taking a position.
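The phrase-level tells are the easy part to catch mechanically. As a minimal sketch, here is how a first-pass scan for them might look; the phrase list is illustrative only (drawn from the examples in this article), not a real catalog of AI writing patterns, and `flag_signature_phrases` is a hypothetical helper, not part of any tool named here:

```python
import re

# Illustrative phrase list only -- real tells number in the dozens
# and shift as models change.
SIGNATURE_PHRASES = [
    "it is worth noting",
    "in today's rapidly evolving landscape",
]

def flag_signature_phrases(text: str) -> list[tuple[str, int]]:
    """Return (phrase, character offset) pairs, in document order."""
    lowered = text.lower()
    hits = []
    for phrase in SIGNATURE_PHRASES:
        for match in re.finditer(re.escape(phrase), lowered):
            hits.append((phrase, match.start()))
    return sorted(hits, key=lambda hit: hit[1])

sample = ("It is worth noting that, in today's rapidly evolving "
          "landscape, tools matter.")
for phrase, offset in flag_signature_phrases(sample):
    print(f"{offset:4d}  {phrase}")
```

The catch, as the next paragraph argues, is that a scan like this only removes surface tells. The structural patterns, hedged claims and positionless both-sides framing, cannot be regexed away; a human has to put an opinion back in.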

Real writing takes positions. It says this and not that. It is written by someone with an opinion.

The practical version is simpler. If your content could have been written by anyone, it reads like it was written by no one.

I built a tool specifically for this problem. It is called Human Voice, and it exists because I kept running into the same issue with my own output. The AI would give me something technically correct and completely without personality. The tool runs the content through a set of voice-matching criteria, looking for the places where the writing has gone flat and flagging them for revision.

There is a second pass that matters as much as the first. AI content carries what I call signature patterns: things the model does consistently that trained readers recognize immediately. Removing them is not about fooling anyone. It is about making sure the content can compete on its own merits instead of being dismissed before anyone finishes the opening paragraph.

That second pass is what AI Signature Scrub handles. Every article I publish goes through both.

What Actually Works Right Now

The research on AI citations gives you a clear picture of what to aim for.

First-person observations. Specific data. Clear claims in the opening paragraph. Q-and-A structure where the question is one that real people actually ask. Depth without padding. Sentences that go somewhere.

None of this is complicated. It is just not what you get when you hand a keyword to a model and wait.

The filter I use before publishing anything:

Would a specific person find this useful? Not a demographic. A person. Someone I could picture reading it on their phone at lunch, nodding because it says something they have been thinking but could not articulate.

If the answer is no, the article is not ready.

The Numbers That Tell the Story

Fifty-seven percent of SEO professionals say competition has increased significantly because of AI. That is not surprising.

What is surprising is that only sixteen percent of companies are tracking AI search performance. Everyone is generating content. Almost nobody is measuring what happens to it.

The ones who are measuring are finding something clear: the content that converts from AI referrals is converting at twenty-three times the rate of traditional search traffic. The readers who come from AI citations are already convinced they need what you have. They are not browsing. They are looking for confirmation.

That is an extraordinary opportunity. And it belongs entirely to people who write content worth citing.

What You Should Do Differently

Stop generating. Start arguing.

Pick one specific thing you believe about your topic that most people in your space would push back on. Write that article. Make the case. Use your actual experience as evidence. Let the AI help you build the structure, but make sure the position is yours.

That article will outperform ten generic ones. Every time.

If you want to see what this looks like in practice, Human Voice and AI Signature Scrub are both available through TotalValue Group. They are the tools I use on every article before it goes live.

One is about getting the voice right. The other is about removing the patterns that signal to readers, before they have even finished the introduction, that nobody was actually home when this was written.

The content flood is not slowing down. The gap between content that works and content that does not is widening. The tools exist to end up on the right side of it.


Originally published on Medium
