Around 2016, which in internet terms is millennia ago, when AI was still mostly about self-driving cars and badly configured chatbots, I used to ask co-workers a simple question: if this thing screws up, who takes the blame? Or, even worse, if we start using it in the wrong ways, who's responsible? Who do we question, exactly?
Nobody had a good answer, and the question still hangs in the air. Almost ten years later, the Linux kernel has made a move that answers it, at least for code written with AI.
One page, just ONE page
Last week, the Linux kernel published its official policy on AI-generated code. The document is less than a page long. It was approved by Linus Torvalds himself, the person who has had the final word on everything Linux decides for over thirty years. Part of the tech press covered it as either a surrender to the AI hype or a bureaucratic crackdown on AI-assisted tools. The problem is that both readings miss the point entirely.
What the Linux team did was answer a question that has been paralyzing open source communities for the past two years and making contribution queues unmanageable for many of the bigger projects on GitHub.
What the document says, in short:
Licensing: any contribution made with AI assistance follows the same rules that always applied to the kernel. Nothing changes.
Attribution: if you used an AI tool, you tag it. Model name, version, done. It's a single line that makes it clear what you have used:
```
Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]
```
Signature: AI agents must not add Signed-off-by tags. Period. Only a human can certify the Developer Certificate of Origin. Only a human can say "this code is mine, I have the right to submit it, and I'm responsible for it."
And that's the part where things start to get interesting.
What the Kernel team refused to do
The Linux maintainers could have gone the route some open source projects took: ban AI-generated contributions outright, treating any code touched by a model as legally contaminated. Some projects did exactly that. The fear of copyright liability was real, after all, and the easiest and quickest response was to shut the door.
They could also have gone the other way: accept anything that compiles and passes tests, no questions asked. Plenty of companies operate this way already. If the CI pipeline is green, ship it.
Linux found a third way. What they refused, above all, was the fantasy that you can police what developers do on their own machines. You can't verify that someone didn't use Copilot to autocomplete a function. You can't know whether they asked Claude to refactor a module. You can't know if they typed every character themselves or dictated it to a parrot that happens to know C. I mean, my cat sometimes steps on the keyboard and likes to play with my mouse, so it's safe to assume he might as well have written part of my code after a few attempts.
The most important thing is this: it doesn't matter how the code was written, how stressed the developer was while their cat messed with the keyboard, or whether it happened while drafting a DevTO post (Sebastian says "Hi").
What matters is: the moment someone submits a piece of code, they put their name on it. That's the only point where control is both possible and necessary.
The word developers use for this is accountability. It means two things that can't be separated: whoever produces something must be identifiable, and whoever is identified must answer for what they produced. Remove either half, and you have a big bag of shit with a capital S.
The accountability gap that must be discussed
I've been building things with AI for a while now, and I haven't exactly been quiet about it. I've even won a few prizes for things I built with AI.
My WordPress plugin, WP-AutoInsight, connects to multiple AI APIs and generates content. But everything it produces lands in draft mode. The same applies to the n8n automations and other tools I've been building for clients: a human has to review it, approve it, and hit publish. A real person must be accountable.
Not every client actually reviews the drafts. But the option has to exist.
Because when AI gets it wrong, and believe me, it does get it wrong, a person made of flesh and bone will have to explain the problem to another person made of flesh and bone. That's the part that hasn't changed and won't change for a long time, regardless of how good the models get.
The Linux policy understands this. It doesn't try to evaluate the quality of AI output. It doesn't try to detect whether a human or a machine wrote that code. It simply says: whoever hits submit, that's the person who answers for it. The tool is irrelevant; what really matters is the signature.
Here's where it stops being a software story.
If you follow me and read my posts, you might know that this isn't just about coding.
The same question Linux answered in one page is the same question that courts, hospitals, newsrooms, and law firms have been struggling with for years. Who is responsible when a legal brief cites a case that doesn't exist? Who answers for a medical report generated by a model that hallucinated a diagnosis? Who owns the mistake when an AI-written news article gets the facts wrong? Who must answer for CSAM generated using AI?
Every industry that produces content with consequences is hitting the same wall. And most of them are arriving at the same answer: attach a face and a name to the content generated. Make someone sign for it. Make that signature mean something, legally.
The Linux kernel just "got lucky" and reached a good answer in fewer words than most companies will ever manage to deliver. The same way it has been "getting lucky" for decades: by setting the standard before anyone else does.
Where the AI debate fails
The lazy reading of the Linux policy is that Torvalds opened the door to vibe-coding. The precise reading is different: Linux didn't regulate AI; it regulated authorship.
And that distinction matters because the entire public debate about AI in development is stuck in the wrong frame. One side wants to ban AI tools because they're dangerous. The other side wants to remove all friction because AI makes everything faster. Both sides are arguing about the tool.
The Linux kernel doesn't care about the tool. It cares about who signs. And who signs, answers.
Now let's go back to the question from the start of this post: "Who is responsible when this goes wrong?" It applies to every piece of software, every automated decision, every AI-generated output that reaches production. That's the question Linux decided to answer.
The fact that the Linux kernel answered it in fewer words than most README files should tell us something about how we discuss these things today. The answer was never complicated; we just made it seem too complex to answer.
OK, and what does this mean for us?
If you're writing code with AI assistance right now, and, let's be honest, statistically, you are, here's what you need to know about the Linux policy, in practice:
Use whatever tools you want. Nobody is coming to audit your IDE.
But when you submit that code, when you open that pull request, when you push to production: that's your code now. You reviewed it. You understood it. You certified that it does what it's supposed to do. If it breaks something, your name is on it.
"But what if I haven't reviewed it?" - you might ask. Well, it's your name there. Take the blame. That's part of the job. It always was.
The only thing the Linux policy made explicit is something that should have been obvious from the start: AI can write code, AI can write content, AI can make media. But AI can't be responsible for it. That part is still on us.
And if your workflow doesn't have a point where a human reviews, understands, and approves what the AI produced before it goes live, you don't have a workflow.
You have a liability.
