Microsoft's default Copilot attribution puts a metadata flag in your git history. That flag is forgeable, copyright-ambiguous, and tells you nothing about who actually authorized the change. The attribution problem needs a different abstraction.
VS Code 1.118 shipped with a default: when Copilot contributes to a commit, the commit automatically gets a Co-Authored-By: GitHub Copilot <copilot@github.com> trailer appended. Opt-out, not opt-in.
682 points on Hacker News. 317 comments. The loudest objection in the thread: "I can add that line manually. What does it prove?"
That question deserves an actual answer.
What "Co-Authored-By" Actually Is
Co-Authored-By is a git commit trailer — a key-value pair appended after the commit message body. GitHub renders it, but git itself treats it as freeform text. There is no cryptographic binding between the trailer and the commit content. No signature. No verification. No way to tell whether Copilot actually assisted with this specific set of changes, or whether someone typed the line manually because it seemed like the right thing to do, or because they're testing something, or because they want to disclaim ownership.
The trailer says "Copilot was here." Git has no mechanism to verify that claim.
This is attribution as metadata, and it has the same failure modes as every other metadata-based governance system. Metadata is easy to add, easy to omit, and impossible to verify without an external source of truth.
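A minimal Python sketch makes the point concrete. The parser below is a simplified stand-in for how git and GitHub read trailers, and the commit message is hand-written — no Copilot involved — yet it produces a trailer indistinguishable from the "real" one:

```python
# A hand-typed trailer: ordinary text in the commit message body.
FORGED = """fix parser edge case

Co-Authored-By: GitHub Copilot <copilot@github.com>
"""

def trailers(message: str) -> dict[str, str]:
    """Naive trailer parse: read 'Key: value' lines from the last block."""
    block = message.strip().split("\n\n")[-1]
    out = {}
    for line in block.splitlines():
        key, _, value = line.partition(": ")
        if key and value:
            out[key] = value
    return out

# This is all a renderer sees -- there is no signature to check.
print(trailers(FORGED))
```

A commit Copilot genuinely assisted yields exactly the same parse result, which is the whole problem.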
The Copyright Complication
The legal question the metadata creates is worse than the one it was supposed to answer.
The U.S. Copyright Office has repeatedly held that works without human authorship can't be copyrighted. The Thaler v. Perlmutter ruling upheld that position in 2023, and the Office has continued to issue guidance in the same direction since. When "GitHub Copilot" appears as a co-author, it clouds the copyright status of the code — not because the law says AI owns anything, but because the attribution creates ambiguity about whether the human author retains full ownership, whether the "work for hire" analysis changes, and whether downstream users have clear title.
Some organizations have already set internal policies limiting AI-generated code per file to preserve legal protectability. The Microsoft default doesn't help those organizations — it actively creates a paper trail that may complicate their compliance position.
The irony: an attribution system designed to increase transparency is increasing legal uncertainty for exactly the organizations with the most rigorous IP posture.
We've Seen This Before
Three days ago, we wrote about the HERMES.md incident — a case where Anthropic inferred agent usage by scanning git history for file names. The user got a surprise $200 charge because a string existed in a repository they'd stopped using.
The lesson there was: inference from artifacts is a bad governance primitive. File names in commit history are unverifiable, adversarially gameable, and systematically error-prone.
The Co-Authored-By default is the mirror image of the same error. Instead of inferring AI involvement from forensic artifacts, you're asserting AI involvement via forgeable metadata.
Both approaches — archaeology and assertion — fail for the same reason: they substitute proxies for ground truth. The proxy (a file name, a trailer string) isn't bound to the thing it's supposed to represent (actual AI involvement, actual human authorization).
What "Verification" Actually Requires
The question behind the Co-Authored-By controversy isn't "did AI help?" — that's unanswerable with metadata. The question is: who authorized this change, and under what scope?
That's a much more tractable problem, and it has a well-defined solution: a verifiable receipt.
A receipt has three properties that a metadata flag doesn't:
Binding. The receipt is cryptographically linked to the specific commit hash. You can't attach the receipt from one commit to a different commit. The attribution follows the actual content.
Provenance. The receipt records who issued the agent credential, which human authorized the agent's operation, and what scope that authorization covered. "Copilot assisted this commit" is a statement about a tool. "Agent X, acting under token Y issued by [authority], authorized by human Z with scope W" is a statement about a delegation chain.
Verifiability. The receipt is logged to an append-only transparency ledger (a SCITT log). Third parties — auditors, enterprise security teams, compliance systems — can query the log to verify the receipt independently. They don't have to trust the assertion in the commit; they can verify it against a record that neither the developer nor the AI provider controls.
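The binding property is the easiest to show in code. Here is a minimal sketch, with an in-memory list standing in for a SCITT log and an HMAC standing in for a real signature scheme; every name below is illustrative, not a real AgentLair API:

```python
# Sketch only: hmac as a stand-in for real signatures, a list as a
# stand-in for an append-only transparency log.
import hashlib
import hmac
import json

LOG = []                       # mock append-only log
ISSUER_KEY = b"issuer-secret"  # stand-in for the issuer's signing key

def issue_receipt(commit_hash, agent, authorized_by, scope):
    entry = {"commit": commit_hash, "agent": agent,
             "authorized_by": authorized_by, "scope": scope}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["sig"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    LOG.append(entry)          # verifiers query the log, not the commit
    return entry

def verify(receipt, commit_hash):
    payload = json.dumps({k: v for k, v in receipt.items() if k != "sig"},
                         sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        receipt["sig"],
        hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest())
    return good_sig and receipt["commit"] == commit_hash and receipt in LOG

r = issue_receipt("e4a8c2", "copilot-session-1",
                  "did:web:example.com:users:alice", "repo:demo")
print(verify(r, "e4a8c2"))    # bound to this commit
print(verify(r, "deadbeef"))  # cannot be moved to another commit
```

The second check is what a trailer string can never give you: the receipt fails verification against any commit other than the one it was issued for.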
What This Looks Like in Practice
An agent identity token carries the agent's identity, the human authorization chain, and the scope of the session. When an agent commits code, the commit can carry a verifiable receipt:
AgentLair-Receipt: https://agentlair.dev/receipts/sha256:e4a8c2...
AgentLair-Session: aat-2026-05-03-...
AgentLair-Authorized-By: did:web:agentlair.dev:users:hawk_aa
The receipt URI resolves to the SCITT log entry: a signed, immutable record of the agent's credential, the human who authorized the session, the session scope, and the commit hash that receipt covers.
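A verifier's side of that flow can be sketched as follows. The log fetch is mocked with a dictionary, the trailer names are the hypothetical AgentLair ones from the example above, and the hashes are made up for illustration:

```python
# Sketch: resolve receipt trailers against a (mocked) transparency-log entry.
import re

COMMIT_MESSAGE = """rotate session keys

AgentLair-Receipt: https://agentlair.dev/receipts/sha256:e4a8c2
AgentLair-Session: aat-2026-05-03-demo
AgentLair-Authorized-By: did:web:agentlair.dev:users:hawk_aa
"""

MOCK_LOG = {  # what resolving the receipt URI would return
    "https://agentlair.dev/receipts/sha256:e4a8c2": {
        "commit": "abc123",
        "session": "aat-2026-05-03-demo",
        "authorized_by": "did:web:agentlair.dev:users:hawk_aa",
    },
}

def check(message, commit_hash):
    t = dict(re.findall(r"^(AgentLair-[\w-]+): (.+)$", message, re.M))
    entry = MOCK_LOG.get(t.get("AgentLair-Receipt"))
    if entry is None:
        return False  # no log entry: the trailer is a bare claim
    return (entry["commit"] == commit_hash
            and entry["session"] == t["AgentLair-Session"]
            and entry["authorized_by"] == t["AgentLair-Authorized-By"])

print(check(COMMIT_MESSAGE, "abc123"))  # log entry covers this commit
print(check(COMMIT_MESSAGE, "ffffff"))  # receipt covers a different commit
```

Note where the trust lives: the commit message only carries pointers; every fact that matters is checked against the log entry, which the committer doesn't control.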
The Right Abstraction
Co-Authored-By: GitHub Copilot isn't wrong in its intent — AI involvement should be visible in the record. But metadata is the wrong layer for this. Metadata belongs to whoever controls the commit message. Receipts belong to an independent log that the developer can't retroactively modify and that verifiers can independently query.
The difference matters most when it matters most: when an enterprise security team needs to audit AI exposure across 50,000 commits, when a compliance system needs to verify that human authorization existed for every production change, when a copyright dispute requires evidence of who actually made the call to ship a piece of code.
Metadata gives you a trailer string. Receipts give you a chain of custody.
One of those is a note on the door. The other is a security log.
Originally published at agentlair.dev/blog/co-authored-by-is-not-enough