It Finally Happened: My Writing Was Tagged as AI Content
Part 1: The Incident
Let me paint you a picture. Stack Overflow—that digital colosseum where developers go to solve problems and occasionally lose faith in humanity—has some gold-standard unwritten rules:
- There is only one right answer (yours better be it).
- If I have more "reputation" than you, I am literally, objectively, measurably better than you.
- Answers are not meant for learning, only problem-solving. No depth. No flourish. Just the fix.
I violated the third rule. The one I'm actually (fairly, given Stack Overflow's Darwinian culture) okay with.
The gentleman who flagged me was surprisingly kind. He offered genuinely good advice: "Provide references." He's right—LLMs rarely embed citations unless explicitly prompted. But here's the twist: I wrote that answer entirely myself.
See, I'm not neurotypical. My brain naturally produces what people now call "AI fluff"—the verbose, structured, slightly pompous prose that characterizes GPT outputs. It's just how I think. I've spent years developing strategies for Smart Brevity™, especially as a professor and ML engineer where time is sacred. That's precisely why I include a TL;DR with copy-pastable solutions upfront, then proceed to explain in depth. (Too many years teaching...)
Part 2: The Social Experiment
After being flagged for sounding like an AI, I decided to run a little experiment.
Next question I answered? I wrote it completely myself, then fed it to an LLM with one simple instruction:
"Rewrite this text as if you were [LLM]."
In other words: my content, restated in the model's own voice.
A single tear rolled down my cheek as I read the output. The flow was superb. The transitions were buttery smooth. I pasted it verbatim—didn't even remove the telltale characteristics I (as a Machine Learning specialist) know exist in AI-generated text.
The result? Not flagged. Better yet: the same moderator who flagged my original answer upvoted this one.
This moment crystallized something for me. We're missing a massive opportunity here. People are so focused on detecting AI that they've forgotten to ask: If the answer solves the problem, does it matter who—or what—wrote it?
But that's a story for another day.
Part 3: The Linguistic Fingerprints
What I can share today are the patterns—those statistically significant trends that betray AI authorship. None of these is a smoking gun on its own; they're parts of a larger textual fingerprint. (Actual detection involves analyzing diacritics, BOMs, and ventures into Kolmogorov complexity, the algorithmic theory of randomness—my favorite subject, coming in a dedicated series.)
For now, here's what to watch for:
The Rule of Three
Remember when I listed three Stack Overflow rules at the beginning? That wasn't accidental. Our brains naturally gravitate toward triplets—two feels incomplete, four feels excessive (and we're lazy). We rarely reach five.
LLMs have this obsession turned up to eleven. Three examples. Three benefits. Three challenges. Always three.
LinkedIn Talk: The Art of Saying Nothing Beautifully
You know these sentences: packed with words, empty of substance.
"The outstanding performance of the data shows an increasing demand for more sophisticated solutions among users, not only because they value safety and freedom, but also because such are intrinsic components embedded in human nature and philosophy."
What did that actually say? Not much. But it sounded profound.
Watch for the "not just X, but Y" construction: "It's not just about speed, it's about efficiency." AI loves this false contrast.
The Vocabulary Hall of Fame
Certain words appear with suspicious frequency in AI text:
The Classics: Additionally (especially starting sentences), crucial, delve (pre-2025), emphasizing, enhance, fostering, garner, highlight (as verb), intricate/intricacies, landscape (abstract), pivotal, showcase, tapestry (abstract), testament, underscore (as verb), vibrant.
The Significance Merchants: "stands/serves as," "is a testament/reminder," "plays a vital/crucial/pivotal role," "underscores its importance," "reflects broader trends," "marks a turning point," "evolving landscape."
The Humble Helpers: "I hope this helps," "Of course!," "Certainly!," "You're absolutely right!," "Would you like...," "Let me know if you need more detail."
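None of these words is damning alone, but their density is measurable. Here's a toy sketch of a marker-frequency counter—the lists below are illustrative samples drawn from the categories above, not a validated detector:

```python
import re

# Illustrative marker lists (a small sample, not exhaustive; no single hit is proof)
MARKER_WORDS = [
    "additionally", "crucial", "delve", "fostering", "intricate",
    "landscape", "pivotal", "showcase", "tapestry", "testament",
    "underscore", "vibrant",
]
MARKER_PHRASES = [
    "stands as a testament", "plays a pivotal role",
    "evolving landscape", "i hope this helps",
]

def marker_hits(text: str) -> dict:
    """Count case-insensitive occurrences of marker words (with suffixes) and phrases."""
    lowered = text.lower()
    hits = {}
    for word in MARKER_WORDS:
        # \b + \w* catches inflections: "underscores", "delved", "showcasing"
        count = len(re.findall(r"\b" + re.escape(word) + r"\w*", lowered))
        if count:
            hits[word] = count
    for phrase in MARKER_PHRASES:
        count = lowered.count(phrase)
        if count:
            hits[phrase] = count
    return hits

sample = ("This library stands as a testament to the evolving landscape "
          "of tooling, playing a pivotal role in fostering vibrant communities.")
print(marker_hits(sample))
```

A high hit count in a short answer is exactly the kind of signal human moderators pattern-match on—whether or not a human wrote it.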
Structural Tells
AI writing has a typography signature:
- Smart quotes: “this” or ‘this’ (not "this" or 'this')
- Em dashes—always, ALWAYS em dashes—used liberally
- Ellipsis as Unicode: … (not ...)
- Emojis and bullet points everywhere 🎯
- Title Case Headers
- Overuse of boldface for emphasis
- Two-column tables when prose would suffice
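The typographic tells are the easiest to check mechanically, since they're specific Unicode code points that chat UIs emit but most humans never type into a plain-text box. A minimal sketch:

```python
# Toy typography check: count Unicode punctuation that chat UIs and word
# processors insert automatically, but plain-text typists rarely produce.
TELLS = {
    "em dash": "\u2014",
    "left smart double quote": "\u201c",
    "right smart double quote": "\u201d",
    "left smart single quote": "\u2018",
    "right smart single quote": "\u2019",
    "unicode ellipsis": "\u2026",
}

def typography_tells(text: str) -> dict:
    """Return counts of each typographic tell found in the text."""
    return {name: text.count(ch) for name, ch in TELLS.items() if ch in text}

tells = typography_tells("It\u2019s simple\u2014really\u2026 \u201cTrust me.\u201d")
print(tells)
```

Of course, anyone drafting in Word or Google Docs inherits the same punctuation—which is precisely why these tells generate false positives on careful human writers.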
The Copula Conspiracy
LLMs avoid basic "is/are" constructions like vampires avoid garlic.
Instead: "serves as," "stands as," "marks," "represents," "boasts," "features," "offers."
Why write "The library is popular" when you can write "The library boasts a vibrant community"?
Elegant Variation Gone Wild
LLMs apply a repetition penalty at sampling time (a separate knob from temperature) that discourages reusing tokens they've already emitted. Mention a protagonist once by name, then watch the synonyms parade: "the key player," "our eponymous character," "the central figure."
It's the literary equivalent of using a thesaurus on every noun.
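Mechanically, the common formulation of the repetition penalty is simple: before sampling, divide the logit of any already-generated token if it's positive, multiply it if it's negative, so the token becomes less likely either way. A sketch with made-up logit values:

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.3):
    """Penalize tokens already generated: divide positive logits by the
    penalty, multiply negative ones, making repeats less likely either way.
    (This mirrors the common repetition-penalty formulation; values are toy.)"""
    out = list(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty
        else:
            out[tok] *= penalty
    return out

logits = [2.0, 0.5, -1.0]          # toy vocabulary of three tokens
penalized = apply_repetition_penalty(logits, generated_ids=[0, 2])
# token 0 is pulled down toward zero; token 2 is pushed further negative
print(penalized)
```

That pull away from repetition is why a character named once becomes "the central figure" three sentences later: the model is steered off the name it already used.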
The Balanced Parallelism
"Not only X, but also Y." "It's not just about A, it's about B." "Despite challenges, however..."
These parallel constructions make AI sound thoughtful and balanced. They're everywhere.
Knowledge-Cutoff Disclaimers
"As of my last training update..." "While specific details are limited..." "Based on available information..." "Up to [date]..."
Humans rarely qualify their knowledge this way unless we're academics covering our asses.
Part 4: The Philosophical Epilogue
Here's what keeps me up at night: I write like an AI because my brain works differently. I structure thoughts in neat hierarchies. I use formal vocabulary. I love em dashes and parallel constructions.
Am I less human because my prose matches a statistical model trained on billions of human texts?
Or perhaps—and here's the uncomfortable truth—LLMs write like "AI" because they learned from people like me. The verbose professors. The technical writers. The documentation specialists who've been producing "AI-sounding" content since before transformers were invented.
We built these models on human language. They reflect patterns that already existed. The call is coming from inside the house.
Part 5: Conclusion
The Stack Overflow incident taught me something valuable: We're developing detection mechanisms for a problem we've barely defined. "AI-generated text" isn't a monolith—it's a spectrum that overlaps considerably with neurodivergent communication styles, academic writing, technical documentation, and yes, anyone who simply gives a damn about structure and clarity.
Maybe instead of playing an endless game of cat-and-mouse with detection algorithms, we should ask better questions:
- Is the information accurate?
- Does it solve the problem?
- Is it helpful to the reader?
Because here's the kicker: The moderator who flagged my human-written answer as AI and upvoted my AI-rewritten answer wasn't evaluating truth or utility. He was evaluating vibes.
And vibes, my friends, are a terrible basis for gatekeeping knowledge.
So next time you're tempted to flag something as "AI-generated," pause. Ask yourself: Are you detecting artificial intelligence, or just intelligence that's been artificially forced into a box labeled "normal"?
The answer might surprise you.
P.S. This entire article was written by a human. Probably. Does it matter?

