"The L in LLM stands for Lying" just hit the top of Hacker News.
382 upvotes. 228 comments. And honestly? The post nails something most of us won't say out loud.
The original piece by Steven Wittens makes this argument: LLMs produce forgeries. Not "hallucinations" — forgeries. When Claude writes code for you, it's forging output that looks like YOUR work. When ChatGPT writes your documentation, it's forging text that looks like YOU wrote it.
And that framing hit me harder than I expected.
Because I use LLMs every single day.
Here's my actual workflow right now: I write a rough outline → Claude fills in the gaps → I edit the output → ship it. For code, for docs, for Slack messages sometimes. I'd guess 40-60% of what I produce in a day has LLM fingerprints on it.
Is that forging my own output? Technically... yeah?
The cheese analogy in the original post is wild. French Brie de Meaux carries a protected designation of origin: the name can't legally be used on cheese made outside its region, because otherwise cheap imitations would flood the market and kill the real thing. Wittens argues the same thing is happening with human-produced work. Cheap AI imitations flooding every surface — code reviews, blog posts, documentation, even interview take-home assignments.
The part that made me uncomfortable
Wittens says this:
"It's perfectly okay not to use AI. It doesn't make you a troglodyte."
And I realized — I genuinely can't imagine my workflow without it anymore. Not because I'm lazy (okay, partially), but because the expectations have shifted. Ship faster. Write more. Review more PRs. The pace assumes you're using AI. Opting out isn't really opting out. It's falling behind.
That's the trap, right? You didn't choose to depend on it. The industry chose for you.
Where I actually agree
The "forgery" framing is uncomfortable but it's useful. It forces you to ask: what am I actually contributing here?
If I prompt Claude to write a React component and I ship it unchanged — what did I add? My name on the commit? That's literally forgery by Wittens' definition.
But if I write the logic, use Claude to handle the boilerplate, then review and modify — that's closer to using a power tool. A carpenter using a nail gun isn't forging carpentry.
The line between "tool" and "forgery" comes down to how much of YOUR judgment went into the final output. And that line is blurrier than any of us want to admit.
The real question
Wittens ends by arguing that society needs to draw lines — like it does with food authenticity laws. We don't let people sell fake eggs made from chemicals. Should we let people sell fake expertise made from prompts?
I don't have a clean answer. But I think every developer using LLMs should read this post and sit with the discomfort for a minute.
Link to the original: The L in "LLM" Stands for Lying
What's your take? Is using LLMs in your daily workflow "forging" your output? Or is it just the next power tool?