Linux kernel establishes AI code policy with strict disclosure rules
The Linux kernel project has formally approved AI-assisted code contributions, ending months of debate by requiring developers to tag AI-generated work and accept full legal responsibility for any bugs or security flaws.
The new policy introduces an "Assisted-by" tag for code written with AI tools like GitHub Copilot. Crucially, AI agents cannot use the "Signed-off-by" declaration, which legally certifies a developer's right to submit code. This shift anchors accountability directly to the human submitting the patch.
Linus Torvalds framed the decision pragmatically: AI is a tool, no different from a text editor or compiler. Bad actors won't follow documentation anyway, so the kernel should focus on holding developers responsible rather than policing their local machines.
Why this matters for legal teams
The Developer Certificate of Origin (DCO) requires developers to certify they have legal rights to their code. Large language models train on open-source repositories with restrictive licenses like the GPL, creating murky copyright provenance. Red Hat warned last year that undisclosed AI submissions could inadvertently violate open-source licenses and undermine the entire DCO framework.
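The DCO is enforced mechanically rather than by signed paperwork: each commit must carry a `Signed-off-by` trailer, which git adds for the submitter and which reviewers can audit afterward. A sketch of both sides (the commit message is illustrative):

```shell
# Submitter side: -s appends "Signed-off-by: Name <email>" from the
# configured git identity, certifying the DCO for this commit.
git commit -s -m "illustrative: tidy example helper"

# Reviewer side: list recent commits with their sign-off values,
# making missing certifications easy to spot.
git log -10 --format='%h %(trailers:key=Signed-off-by,valueonly)'
```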
The Linux kernel's policy addresses this by making disclosure mandatory and placing liability on the human submitter. If an AI-generated patch infringes a license or introduces a vulnerability, the developer who clicked submit bears the legal and professional consequences.
Community backlash forced the issue
Several high-profile incidents accelerated the need for clear rules. NVIDIA engineer Sasha Levin submitted a kernel patch, changelog included, that was written entirely by an LLM, without disclosing it. The code functioned but contained a performance regression that reviewers missed, partly because they didn't know it was AI-generated.
The gaming community faced similar friction. GZDoom, a 20-year-old Doom source port, lost most of its contributors after its lead developer used undisclosed AI code. The majority forked the project into UZDoom rather than accept the lack of transparency.
Other projects have been overwhelmed by sheer volume. The creator of cURL shut down the project's bug bounty program after receiving hundreds of hallucinated submissions. Node.js and OCaml maintainers have fielded massive AI-generated pull requests that sparked existential debates about project governance.
The real issue was disclosure, not AI itself
Community frustration centered on dishonesty, not the use of AI tools. Developers objected to contributors claiming credit for code they didn't write and failing to flag AI assistance for reviewers.
The Linux kernel's approach strips emotion from the debate by enforcing transparency and anchoring consequences. If the code is good, it ships. If it's broken, the developer answers for it. In open-source, that's a powerful deterrent.
For legal professionals, the policy demonstrates how disclosure and liability rules can manage emerging technology risks. Rather than outright bans, which other projects like NetBSD and Gentoo have tried, the Linux kernel created a framework that acknowledges developer reality while protecting the project legally.