DEV Community

Muhammad Zubair Bin Akbar

AI-Generated Code Joins the Linux Kernel, But Humans Stay in Charge

The discussion around AI-generated code has been building for a while. Now the Linux kernel community has finally made its stance clear.

With Linux 7.0, a new guideline explains how AI tools can be used when contributing to the kernel. It is not a green light for everything, but it is definitely not a ban either.

What Actually Changed?

The kernel now includes a document called AI Coding Assistants in its contribution process.

The idea behind it is straightforward. AI can help, but a human is always responsible.

In practice, this means AI-assisted code is allowed, but fully machine-generated submissions are not accepted. Every contribution must still follow licensing rules, and someone must stand behind the code.

So yes, AI can write parts of the code, but it cannot take responsibility for it.

A New Way to Credit AI

One interesting addition is a new tag called Assisted-by.

If you used an AI tool while preparing a patch, you are expected to mention it. This keeps things transparent without making the process complicated.

Earlier, some maintainers felt this could just be mentioned in the changelog, but the community has now agreed on using a proper tag.
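In practice, the tag sits alongside the other trailers at the end of a commit message. The patch subject, description, and tool name below are made up for illustration:

```
mm: fix off-by-one in page range check

Clamp the end index so the last page in the range is not
skipped during the walk.

Assisted-by: ExampleAI Code Assistant (Example Corp)
Signed-off-by: Jane Developer <jane@example.org>
```

The Assisted-by line credits the tool; the Signed-off-by line remains the human sign-off that carries responsibility.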

Responsibility Still Matters

The Developer Certificate of Origin is still as important as ever.

When you submit code, you are confirming that it follows the rules and that you take full responsibility for it. That does not change just because AI was involved.

Even if most of the code came from a tool, the final responsibility still belongs to the person submitting it.
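To make that division of responsibility concrete, here is a small sketch, not real kernel tooling, that checks whether a commit message carries both trailers. The trailer names match the policy; everything else (the helper function and the sample message) is invented for illustration:

```python
# Hypothetical helper (not actual kernel tooling): report which
# accountability trailers appear in a patch's commit message.

def check_trailers(message: str) -> dict:
    """Return which trailers are present in a commit message."""
    lines = [line.strip() for line in message.strip().splitlines()]
    return {
        # The DCO sign-off: the human taking responsibility for the patch.
        "signed_off": any(l.startswith("Signed-off-by:") for l in lines),
        # The transparency tag: credits any AI tool that helped.
        "assisted": any(l.startswith("Assisted-by:") for l in lines),
    }

msg = """mm: fix off-by-one in page range check

Assisted-by: ExampleAI (Example Corp)
Signed-off-by: Jane Developer <jane@example.org>
"""
print(check_trailers(msg))  # {'signed_off': True, 'assisted': True}
```

A patch with an Assisted-by line but no Signed-off-by line would still fail such a check: the AI credit never substitutes for the human sign-off.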

This Is Already Happening

This is not just a theoretical policy. It is already being used in real workflows.

Maintainers have been using AI tools to detect bugs and analyze code. These tools can highlight potential issues, but humans still review everything before any fix is accepted.

That balance is exactly what this policy is trying to achieve. AI helps with discovery, while humans handle validation and decisions.

Different Projects, Different Choices

Not every open source project is taking the same approach.

Some projects have completely blocked AI-generated contributions due to concerns around licensing, quality, and ethics. Others treat such code with extra caution and require special approvals.

Linux is taking a more practical path: AI usage is allowed, but responsibility sits squarely with the developers who use it.

Is This the Right Direction?

This approach feels realistic.

AI tools are already part of everyday development, and ignoring them is not really an option. Instead of resisting, the Linux community is focusing on transparency and accountability.

The real challenge is not the tools themselves. It is whether developers take the responsibility seriously.

In the end, the quality of the kernel will always depend on the people reviewing and maintaining it, not the tools they use.
