r/programming banned LLM content: I banned my own posts and found the impossible criterion
Why do we assume "AI-generated vs human" is the line that matters? r/programming has been banning LLM content for weeks now, and the entire conversation circles around authorship — who wrote it, how it was generated, what tool was in the middle. Nobody's asking the other question: how much original thought does the post actually have, even if a human typed every single character?
That question made me uncomfortable enough to do something uncomfortable: I grabbed my last 20 posts and ran them through my own imaginary moderation filter. The results shut me up pretty good.
The LLM content ban in programming communities: what's actually happening
r/programming announced explicit restrictions on LLM-generated or LLM-assisted content. The public justification is reasonable: the community was filling up with generic posts that had no real perspective behind them, passing themselves off as technical analysis without any lived experience underneath. The kind of post that explains how a garbage collector works without the author ever having debugged a memory leak at 2am.
The problem — and here's the part I think it's honest to admit — is that the moderation criterion isn't technically "generated by AI." It's something more subjective than that. Moderators talk about "original value," "genuine perspective," "first-hand experience." Terms that sound good in a policy document but are a swamp in practice.
Because I write with Claude. Not to have Claude write for me, but as an interlocutor — a first reader who forces me to clarify things I don't yet know how to articulate. Does that disqualify me? Or does it matter whether what's left after the process actually has something to say?
I decided not to resolve that question in the abstract. I resolved it with my own data.
The filter I built and the 20 posts that survived (or didn't)
I defined four criteria. I didn't invent them out of thin air — I distilled them from reading discussion threads on r/programming, r/MachineLearning, and several posts from moderators explaining their decisions. Each post could score up to 25 points per criterion — 100 maximum.
```python
# moderation_criteria_llm.py
# My attempt to operationalize "original value"

criteria = {
    "verifiable_experience": {
        "description": "Is there a specific measurement, log, error, or decision?",
        "weight": 25,
        # If the post says "based on my tests" but shows nothing,
        # it scores 0. If there's a real number, a stacktrace, a date: it counts.
    },
    "own_position": {
        "description": "Does the author take a stance someone could actually argue against?",
        "weight": 25,
        # Posts that explain something neutrally with no opinion: 0.
        # "X is better than Y because I measured it and Z was the result": counts.
    },
    "context_specificity": {
        "description": "Could this post exist without the author's particular experience?",
        "weight": 25,
        # A generic Docker tutorial: 0.
        # "I nuked production with rm -rf in my first week of hosting": counts.
    },
    "irreproducibility": {
        "description": "Could an LLM generate this without the author's original input?",
        "weight": 25,
        # Article about what a mutex is: 0.
        # Post about how a specific Railway error broke my deploy
        # exactly when I pushed at midnight: counts.
    },
}
```
I applied this to my last 20 posts. Honest results:
- 12 posts: 70 points or more. They survive. They have real measurements, a clear stance, specific context.
- 5 posts: 50-69 points. Gray zone. They have something personal but the central argument could have been written by anyone with access to the documentation.
- 3 posts: below 50. They don't pass. Posts I wrote myself, with my own hands, in first person — but at their core they're documentation summaries with an anecdote glued on top as decoration.
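For what it's worth, the tallying itself was trivial. Here's a minimal sketch of how the buckets above fall out of the rubric; the helper names and the per-criterion numbers in the example are illustrative, not my actual notes. Each criterion got a judgment-call score between 0 and its 25-point weight.

```python
# Hypothetical tallying helpers. Names and example numbers are illustrative,
# not my real scoring notes.

CRITERIA_WEIGHTS = {
    "verifiable_experience": 25,
    "own_position": 25,
    "context_specificity": 25,
    "irreproducibility": 25,
}

def total_score(scores: dict) -> int:
    """Sum per-criterion scores, clamping each to its 25-point weight."""
    return sum(min(scores.get(name, 0), weight)
               for name, weight in CRITERIA_WEIGHTS.items())

def bucket(score: int) -> str:
    """Map a 0-100 total to the three outcomes above."""
    if score >= 70:
        return "survives"
    if score >= 50:
        return "gray zone"
    return "fails"

# Example: strong measurements, but a safe stance and generic framing.
post = {
    "verifiable_experience": 20,
    "own_position": 5,
    "context_specificity": 10,
    "irreproducibility": 9,
}
print(total_score(post), bucket(total_score(post)))  # 44 fails
```

The only real decision in there is the thresholds; 70 and 50 were gut calls, and moving them a few points either way would reshuffle the gray zone but not the three posts at the bottom.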
That third group hit me like a bucket of cold water.
The three "human" posts that wouldn't survive the ban
I'm not going to name them directly because some are still active, but I can describe the pattern.
The first was about an infra tool. I had used it, yes. But the post described features from the official documentation more than my actual experience with it. The anecdote was decorative — it could have appeared in any section and the post wouldn't have changed. Score: 38/100.
The second was an opinion about the future of a protocol. It had a position, but the position was safe. It didn't say anything that could cost me anything. It was the kind of take that everyone in the ecosystem would be willing to sign off on. Score: 44/100.
The third — and this is the one that bothered me the most — was a technical reference post. Useful, well-written, with examples. But there was nothing in it that required me specifically to have written it. Zero. It was fully replaceable by an LLM with access to the same docs. Score: 31/100.
My thesis, after this exercise: r/programming is targeting the right symptom with the wrong diagnosis. The problem isn't that an LLM was somewhere in the middle of the process — the problem is content without original thought, and that can be produced perfectly well by a human writing on autopilot.
The impossible criterion: what happens when the filter is actually applied
What makes this exercise uncomfortable is that it collapses the convenient distinction between "AI-generated" and "written by a human." When I sat down to review what actually changes when Anthropic moves Claude between plans or what real gaps MCP has when you run it in production, there was original thought because there was real friction. I had fought with those things. I had something to lose by being honest about them.
When I wrote about how OpenAI sells relevance by prompt, what mattered was the discomfort of having simulated the mechanism with my own logs. Pull that discomfort out, and the post becomes just another generic explanation.
The problem r/programming is trying to solve is real. Technical communities are filling up with content that looks like analysis but is text generated from generated inputs, with nobody who ever touched the thing in production, nobody with skin in the game. But the criterion "generated by LLM = bad" is too blunt. It's like banning everyone who uses autocomplete because someone abused autocomplete.
What strikes me as an honest — and verifiable — criterion is: is there something in this post that required this specific person to write it? Not "did a person write it?" — that's different. If the answer is no, the post doesn't have enough original thought, regardless of who produced it.
When I reviewed my commits looking for what was mine and what was the model's, the exercise was worth something because I had real commits. When I wrote about Anthropic's position shift on Claude CLI, I had the log of my workflow before and after. Without that, it was just another press note.
Common mistakes when thinking about this ban (and what it actually protects you from)
Mistake 1: Thinking that "you wrote it" is enough.
No. I wrote three posts that wouldn't pass my own filter. The act of typing doesn't add value — the specific experience that informed that typing does.
Mistake 2: Thinking the ban solves the underlying problem.
Communities that ban LLM content without defining what positive criterion they're looking for will end up with less content, not better content. Mediocre human content is still mediocre.
Mistake 3: Assuming that if you use AI in the process, the result is contaminated.
I've been using Claude as an interlocutor for over a year. The posts that pass my filter pass because they started from real friction, not because I wrote them alone. The process doesn't invalidate the result if the result has something to say.
Mistake 4: Believing that moderators apply the criterion consistently.
They don't. It's impossible at scale. What they'll end up banning is content that looks generated — content that has the smell of LLM, that texture of exhaustiveness without experience. And that's a moving target.
FAQ: LLM ban in programming communities
Is r/programming banning everything that uses AI, or just what looks AI-generated?
The official policy talks about content "generated or significantly assisted by LLMs." In practice, moderators apply qualitative judgment: if the post has no verifiable original perspective, it can get pulled even if a human wrote it. If it has verifiable original perspective, it'll probably survive even if an LLM was in the process.
How do I know if my post would pass the r/programming filter?
Ask the most honest version of the question: does anything in the post require you, specifically, as its author? Whether you typed it yourself is a different question. If the answer is no, the post doesn't have enough original thought, no matter who produced it.
Will other technical communities follow the same path?
Probably, some of them. Hacker News already has an informal moderation culture that penalizes generic content. Stack Overflow has rules about LLM answers. The movement exists. The question is whether they'll articulate more precise criteria or just use the ban as a blunt instrument.
Does this hurt developers who work with AI day to day?
Only if they produce generic content about their work with AI. If you write about a tool you actually use with real friction, real logs, and your own stance, the ban doesn't touch you — or shouldn't. The problem is that "shouldn't" and "in practice" are different things when moderation is human and subjective.
Is there a way to write with LLMs and have the content be authentic?
Yes, but it requires the original friction to be real. If you start the process from a concrete experience — an error that cost you time, an architecture decision that didn't pan out, a measurement that surprised you — and use the LLM to articulate it better, the result can have original value. If you start from "write me a post about X," it doesn't.
Is it worth publishing on r/programming or communities with that kind of ban?
Depends what you're looking for. If you want distribution for generic content, no. If you have something concrete to say from real experience, the ban works in your favor — there's less noise to compete against. The filter is tough, but the audience that survives that filter is the one worth having.
What I was left with after banning my own posts
There's a part of this exercise that still sits heavy with me: the three posts that didn't pass were written on days when I was working a lot and writing on autopilot. Not because an LLM was generating things for me — but because I myself was running like an LLM: processing known inputs and producing predictable output.
I studied Computer Science at UBA while working full time. I'd show up to exams in my work clothes, straight from the office. I passed Calculus II on my fourth attempt. What I remember from that period isn't the course content; it's the texture of being at my cognitive limit, constantly. You couldn't write on autopilot there even if you wanted to. There were no resources left for that.
The best posts I've written in the last few months came from that state: when something broke my infra, when a measurement didn't make sense, when a policy change forced me to recalculate something I'd assumed was settled. The worst ones came from when I had time and produced anyway.
My final position: r/programming is doing the right thing for the wrong reasons. The "no LLM" criterion is operationally convenient but conceptually weak. The criterion that actually matters — original thought with verifiable experience behind it — is harder to moderate but it's the only one that distinguishes content worth reading from content that isn't. And that criterion applies equally to humans and machines.
If you're going to write about tech, write about something that cost you something. If it didn't cost you anything, you don't have anything to say yet — and no ban on any subreddit is going to fix that.
Been through something similar reviewing your own content? For more on how I think about authorship in the age of agents, see the git blame analysis of my own commits.
This article was originally published on juanchi.dev