Hacker News recently banned AI-generated comments, with 3,800 developers voting in favor. The rule is short. No lengthy policy document. No AI detection software. Just one sentence added to the site guidelines, and the largest developer forum on the internet had made its call.
"Don't post generated comments or AI-edited comments. HN is for conversation between humans."
Here's Why This Matters
HN receives millions of monthly visitors: developers, founders, and scientists. But the posted links were never the site's most important part. The comments were.
This year, many users started noticing something: comments that seemed polished, yet empty. Paragraphs with perfect structure but no original thought.
Writing where every sentence begins with "It's worth noting" or "To be fair."
Users had been reporting it for months. A few weeks before the guidelines changed, someone posted a thread titled "HN is drowning in AI comments." Frustration was boiling over.
What The Community Actually Said
The HN thread reached over 1,400 comments. That's enormous even by HN standards.
One commenter put it perfectly:
"There used to be a sort of gentleman's agreement that I could spare the time to read and judge your comment because you went through the effort of writing it."
This is the social contract. I read because you think.
AI shatters this relationship. You feed your rough opinion into ChatGPT, paste the output, and hit submit. The reader still invests the same time reading, but the author invested almost no time thinking.
HN moderator dang made a comment that stuck with me: people who use LLMs only for grammar fixes underestimate how much more the models actually do. It's not just about correcting mistakes. It's about stripping out your style. 🤔
The Non-Native Speaker Problem
Things get complicated here. For example, one user mentioned using LLMs to "anglicize" their comments, because their direct style of communication tends to get downvoted. Another said they use LLMs to phrase their queries in better English, rather than to create content.
These are legitimate, practical use cases. And the guideline makes no exception for them.
Here's my take: a comment with a few grammatical errors that comes from a real place is worth more than a flawless paragraph that feels like it could have been written by anybody.
Imperfect writing is a heuristic. It signals that a human being sat down and thought this through. That counts for something.
This Isn't Just A HN Thing
Reddit has struggled with AI spam bots for more than a year. LinkedIn comments are so thick with AI slop that people scroll past them by default. Stack Overflow had to ban AI-generated answers all the way back in 2022.
Every big developer community is hitting the same wall. The tools that let anyone write fluently also let anyone flood the zone.
HN's response is interesting because it's not technical. There's no AI detection model running over each comment. It's a social norm: they're betting that community expectations and moderation can hold the line.
How well that will scale is an open question.
What This Means For Us
If you write online, this is worth sitting with. It applies to HN, but really it applies everywhere.
→ Your voice, your quirks, and your rough edges are really the only things separating your posts from what a model would write
→ The grammar mistakes, the weird analogies, the opinions that don't hedge — that's the good stuff
→ The irony is thick here: we built tools that can write like you, and now the most venerated developer community is saying it would rather read you, warts and all 🙂
Do you use AI to edit your comments or posts? Where do you draw the line between using it to edit and using it to write? 👇