Linux Kernel Finalizes Official AI Code Contribution Policy

by Priyanka Patel

The Linux kernel is perhaps the most critical piece of shared infrastructure in the modern world, powering everything from the smallest Android smartphones to the largest supercomputers. Because the stakes for stability and security are so high, the project has long operated on a culture of extreme scrutiny and absolute accountability. Now, that culture is facing its most significant evolution yet: the integration of generative AI.

After months of rigorous debate among maintainers and the community, Linus Torvalds and the project leadership have established the first official Linux kernel AI code policy. The guidelines aim to embrace the productivity gains of modern AI development tools while ensuring that the kernel’s strict quality standards are not compromised by “AI slop” or hidden vulnerabilities.

As a former software engineer, I’ve seen how AI can accelerate the “boilerplate” phase of coding. But in the context of a kernel, where a single misplaced pointer can crash a million servers, the distance between “productive” and “perilous” is razor-thin. Torvalds’ approach is pragmatically cautious: AI is welcomed as a tool, but it is never granted the status of a contributor.

The Three Pillars of AI Accountability

The new policy is built on a foundation of transparency and human liability. The core objective is to ensure that no piece of code enters the kernel without a human being willing to stake their reputation—and legal standing—on its correctness.

First, the policy draws a hard line regarding legal certification. AI agents are strictly prohibited from using the “Signed-off-by” tag. Under the Developer Certificate of Origin (DCO), this tag serves as a legal confirmation that the contributor has the right to submit the code under the project’s license. Since an AI cannot enter into a legal contract or take legal responsibility, only a human can sign off on a patch. Even if a patch is 100% AI-generated, the human submitter assumes sole responsibility for it.

Second, the project is introducing a mandatory “Assisted-by” tag. Any contribution that utilizes AI tools must explicitly disclose the models and agents used. A typical entry might look like “Assisted-by: Claude:claude-3-opus coccinelle sparse,” providing a clear audit trail for maintainers.

Third, the human submitter is held fully accountable for the output. This includes reviewing the code for bugs, ensuring license compliance, and answering for any security flaws. The project has a long memory regarding bad actors; the 2021 attempt by University of Minnesota researchers to sneak malicious code into the kernel remains a reminder that dishonesty in the community can lead to a permanent ban from the project and other major open-source ecosystems.

The integration of AI into the Linux kernel represents a balance between rapid innovation and the necessity of absolute stability.

From Controversy to Consensus

The road to this policy was paved with friction. The catalyst was an incident involving Sasha Levin, a prominent NVIDIA engineer and kernel developer, who submitted patches for Linux 6.15 that were entirely AI-generated, including the changelogs and tests. While Levin had reviewed and tested the code, the fact that AI had written it was not disclosed to the reviewers. This lack of transparency sparked a backlash among other developers who felt the “human” element of the peer-review process had been bypassed.

This friction eventually led to a push for formal rules. During discussions on the Linux Kernel Mailing List (LKML) and at the North America Open Source Summit, the community debated whether to employ a “Generated-by” tag or a “Co-developed-by” tag. “Assisted-by” was chosen because it more accurately describes the AI’s role: it is a sophisticated tool, not a collaborator.

Comparison of Kernel Contribution Tags
Tag              Who/What Uses It   Purpose
Signed-off-by    Humans only        Legal certification of origin (DCO).
Assisted-by      Human + AI tool    Disclosure of AI assistance for transparency.
Co-developed-by  Multiple humans    Credit for shared human intellectual effort.

The “Tool” Philosophy

The decision to avoid “Generated-by” reflects a deeper philosophy held by Linus Torvalds. He has been vocal about his desire to keep the kernel’s documentation grounded in engineering rather than ideology. Torvalds stated on the LKML that he did not want the project’s documentation to turn into an “AI manifesto,” noting that while some claim AI will revolutionize software engineering and others claim it will end the world, the kernel project remains neutral.

This pragmatic view is shared by Greg Kroah-Hartman, a maintainer of the stable kernel, who noted a recent shift in AI capabilities. According to Kroah-Hartman, there was a tipping point where AI tools stopped producing mere “hallucinations” and began generating genuinely valuable security reports and refactoring suggestions.

However, the project is not relying on “AI detectors” to catch undisclosed AI code. Maintainers are instead doubling down on traditional code review—using deep technical expertise and pattern recognition to spot anomalies. The fear is not “AI slop”—which is usually easy to spot—but rather “convincing” code that looks correct, compiles perfectly, and fits the style, yet contains a subtle, long-term maintenance burden or a sophisticated bug.

For those tempted to bypass the new rules, the deterrent remains the same as it has always been: the risk of facing Torvalds’ legendary disapproval. While he may have mellowed over the years, the cost of losing trust within the kernel community is a price few developers are willing to pay.

The next phase for the community will be the practical application of these tags across the upcoming release cycles, as maintainers calibrate how much extra scrutiny “Assisted-by” patches require compared to purely human-written code.

Do you think mandatory AI disclosure improves code quality, or is it an unnecessary administrative burden? Share your thoughts in the comments.
