
TClaw Ventures

How to Tell if Something Was Written by AI (The Reader's Guide)

If you've read enough AI-generated content, you start to recognize it the way you recognize a chain restaurant from the highway. Something about the shape of it feels familiar before you can name why.

Here are the patterns worth knowing. Not the vague ones about "sounding robotic" — the specific ones you can actually spot.

1. It opens by restating the obvious

AI tends to start with a sentence that justifies the article's existence. "In today's rapidly evolving digital landscape, content creation has become more important than ever." That's not an opener. It's throat-clearing.

Humans who have something to say usually start by saying it. Watch for openers that define the topic rather than engage with it. A cover letter that starts "As a highly motivated professional with extensive experience in the field" is doing the same thing. So is a report that opens "This document will examine the key factors that contribute to..."

2. The transition words are load-bearing

Furthermore. Moreover. Additionally. It's worth noting that.

These aren't wrong words, but AI leans on them to signal structure when the ideas themselves don't flow naturally. You'll see "Furthermore" at the start of paragraphs that aren't actually furthering anything — they're just adding items to a list dressed up as prose. Real writers use transitions to show relationship between ideas. AI uses them to show that paragraphs exist.

3. Every paragraph ends with a summary of itself

Read the last sentence of a few paragraphs. If they each re-explain what the paragraph just said — "This demonstrates the importance of X in today's Y environment" — you're probably looking at AI output. It's a habit borrowed from academic writing, where you reinforce your point at the end. But humans don't do it every single time, like clockwork.

4. The vocabulary is weirdly specific

Not complex vocabulary. Specific vocabulary. There's a cluster of words that show up constantly in AI-generated text: delve, tapestry, nuanced, robust, leverage, foster, navigate (used metaphorically, as in "navigate the complexities"), multifaceted, and the phrase "it's important to note."

If a marketing email tells you their product helps you "navigate the nuanced landscape of modern productivity," no human copywriter wrote that. These words aren't wrong on their own, but the density of them in a single document is a tell.
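The density idea can be made concrete with a quick counter. This is a rough sketch, not a validated detector: the word list comes from the paragraph above, and the per-100-words metric is an arbitrary illustration.

```python
import re

# Words that cluster in AI-generated text (the list from this article).
TELL_WORDS = {
    "delve", "tapestry", "nuanced", "robust", "leverage",
    "foster", "navigate", "multifaceted",
}
TELL_PHRASES = ["it's important to note", "it is important to note"]

def tell_density(text: str) -> float:
    """Return tell-word hits per 100 words of text."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in TELL_WORDS)
    hits += sum(text.lower().count(p) for p in TELL_PHRASES)
    return 100.0 * hits / len(words)

sample = ("Our robust platform helps you navigate the nuanced "
          "landscape of modern productivity.")
print(f"{tell_density(sample):.1f} hits per 100 words")  # 25.0
```

Any single hit means nothing; it's a density well above normal prose, like the 25-per-100 in the sample sentence, that's worth a second look.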

5. It won't take a side

AI is trained to be balanced. This produces text that acknowledges every counterpoint, presents both sides, and lands nowhere. A human writer, even a careful and professional one, usually has an opinion, and it shows.

If you ask for a recommendation and the response explains three options with equal enthusiasm and no actual recommendation, that's a pattern. A report that says "There are advantages and disadvantages to each approach, and the best choice will depend on your specific situation" has said nothing. People who know their subject matter have preferences.

6. The specifics are missing

AI generates plausible-sounding claims without the friction of actually knowing things. "Studies have shown that employees who feel valued are more productive." Which studies? What did they measure?

Real subject matter experts cite specific things, use odd numbers, reference failures as often as successes, and occasionally say "this is contested." AI fills the space where specifics would go with assertions that sound reasonable. If you can't find a single concrete detail — a date, a name, a number that isn't round — that's worth flagging.

7. The structure is too clean

Human writing meanders a little. It circles back. It has a paragraph that runs longer than it needs to because the writer cared about that part. AI-generated content has consistent paragraph lengths, consistent section transitions, and a structure that looks like a template. Introduction, three to five points, conclusion. Every time.

That regularity is efficient. It also reads like a form, not a document someone actually wrote.
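The regularity tell can also be checked numerically, by measuring how much paragraph lengths vary. This is an illustrative sketch, not an established metric; the interpretation of "low variation" as template-like is an assumption.

```python
import statistics

def paragraph_length_cv(text: str) -> float:
    """Coefficient of variation (stdev / mean) of paragraph word counts.
    Lower values mean more uniform paragraph lengths."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    counts = [len(p.split()) for p in paragraphs]
    if len(counts) < 2 or sum(counts) == 0:
        return 0.0
    return statistics.pstdev(counts) / statistics.mean(counts)

# Five paragraphs of identical length vs. five of wildly varying length.
uniform = "\n\n".join(["word " * 40] * 5)
varied = "\n\n".join("word " * n for n in (8, 55, 23, 90, 12))
print(paragraph_length_cv(uniform))  # 0.0
print(paragraph_length_cv(varied))
```

Human writing tends to score high on this measure, because one paragraph always runs long; a score near zero across a whole document is the numerical version of "reads like a form."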


If you're on the other side of this

These patterns are useful to know whether you're reading or writing. If you produce content and you'd rather it not trigger every flag above, tclaw.dev runs a humanizer pass on your text — $1 per document or $8/month if you're doing this regularly. It's built to catch exactly the patterns listed here and remove them, not just swap out synonyms.

The test is in the reading. Now you know what to look for.
