I know it's been a minute (months!), but I said I wouldn't forget about the follow-up for this one and I have not! It's been nagging at me to finish, but I wanted to have something to show for it first. I got that done quickly enough; however, Release Please was not nearly as straightforward to learn as I anticipated, considering I set up dual packages in a mono-repo as my "simple" setup. Yes, I curse myself, but holidays were also involved in the shenanigans!
I'm going to try not to bore you with the nitty-gritty (we're all smart devs around here), but I have to finish my original thought or this outstanding post will nag at me forever. I'm not counting out a third though, because I do still have more ideas. Me and attribution deserve a break from each other after this though, so we'll save that for future-me to consider.
Brief but Inspirational Backstory
If you missed the original version, I'll sum it up quickly: it's always bugged me that AI joined the party and never had to make itself permanently known. Not responsibility (that's on us as devs regardless), but as a co-author. So I came up with an idea to fix that, and over the break I quickly threw together a working version of enforcement for both CommitLint and GitLint in Python.
I call this a RAI footer, short for Responsible AI attribution. Not policy. Not governance. Just a mechanical signal in your commit history that makes AI assistance explicit and pairs it with a human signoff. The goal isn't to moralize commits. It's to remove ambiguity about authorship once AI enters the workflow.
The README already explains what this is and how to use it properly, if you're curious.
Doesn't Live Alone
I have to point out that this RAI footer is just a baby step and it does absolutely nothing to fix potential problems that may crop up from code that's committed without proper human-in-the-loop (HITL) oversight.
To make this system work as designed, I leaned on Git's built-in functionality. The --signoff flag grew out of a legal convention often used in open source: the Developer Certificate of Origin (DCO). It adds a literal footer to the commit stating that you, as a verified author, are also accepting a layer of legal accountability. You didn't just push code. You agreed to the project's rules.
Which is a whole lot of legal framing for a system that has zero interest in what lawyers are doing.
My version instead says: I'm signing my name. This is correct. That's it.
In practice, your commit looks something like this:
git commit -s -m "chore: Add trouble in spades

Assisted-by: GitHub Copilot <copilot@github.com>"
And the result you end up with when you look back at git log:
Author: Ashley Childress <6563688+anchildress1@users.noreply.github.com>
Date: Sat Jan 17 00:32:38 2026 -0500

chore: Add trouble in spades

Assisted-by: GitHub Copilot <copilot@github.com>
Signed-off-by: Ashley Childress <6563688+anchildress1@users.noreply.github.com>
In this context, it's really less a legal signature and more Scout's Honor. That one -s flag is your HITL review, stating you've personally signed off on this commit in its entirety, which includes the AI attribution footer.
When both footers are paired in a single commit, it says beyond any doubt that you (aka the verified author) were aware of the changes and approved the code AI did or did not contribute.
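If you want to see what enforcing that pairing can look like on the GitLint side, here's a minimal sketch of a user-defined rule. To be clear, this isn't lifted from my repo: the class name, the "UC1" id, and the footer-matching logic are illustrative assumptions, but the CommitRule/RuleViolation shape is gitlint's documented extension point for custom rules.

# rai_rules.py - a minimal sketch, not the exact rule from my repo.
# Class name, rule id, and matching logic are illustrative assumptions.
from gitlint.rules import CommitRule, RuleViolation


class RaiFooterRequiresSignoff(CommitRule):
    """Require any Assisted-by footer to be paired with a Signed-off-by footer."""

    name = "rai-footer-requires-signoff"
    id = "UC1"  # user-defined commit rules use the "UC" prefix

    def validate(self, commit):
        # commit.message.body is every line of the message after the title
        body = commit.message.body
        has_assist = any(line.startswith("Assisted-by:") for line in body)
        has_signoff = any(line.startswith("Signed-off-by:") for line in body)

        if has_assist and not has_signoff:
            msg = "Assisted-by footer found without a matching Signed-off-by footer"
            return [RuleViolation(self.id, msg, line_nr=1)]

Point gitlint at the file with --extra-path (or the matching .gitlint setting) and the hook fails the commit whenever the pairing is broken.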
This gets you through the configuration, but on its own it's not very impressive. Also, there's really not anything new or earth-shattering I've done here: I simply repurposed the existing system.
One Last Tiny Helper
Generally speaking, anything that pops up on my "continuous work" radar and can be automated is eventually automated. This is no different.
I've commented somewhere before that I used to be the world's absolute worst committer. Truth? I still am. Copilot, however, generally does a much better job.
I have a generate-commit-message prompt built with this entire flow in mind: it looks at staged files by default (with a fallback), creates a conventional commit message, and assigns its own attribution based on the chat history. It's far from perfect, but it does a decent job of getting you in the ballpark!
Help a girl out and leave a star if you find anything useful!
The Boring Future
This is really all there is to the enforcement part, at least to carry us over until the AI estimators do a better job at guessing when AI helped or not!
The future and arguably more important piece of this is reporting those stats back for your repo. It's not good enough to state that AI helped anymore. We need to start proving that it's not all terrible code!
From here, I envision a centralized, reusable workflow that pulls a diff of all commits, roughly guesstimates the percentage of code that AI contributed, and updates a static Shields.io badge (for now) on the README.
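To make that a little more concrete, here's a back-of-the-napkin sketch of the counting step. This is not the reusable workflow (that doesn't exist yet), and it only tallies commits carrying the footer instead of weighing actual diff lines; the trailer name, badge label, and output file are all assumptions you'd tune for your own repo.

# rai_badge.py - a rough sketch of the guesstimate idea, not a finished workflow.
import subprocess
from pathlib import Path


def count_commits(*extra_args: str) -> int:
    """Count commits reachable from HEAD, optionally filtered by rev-list options."""
    out = subprocess.run(
        ["git", "rev-list", "--count", *extra_args, "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(out.strip())


def main() -> None:
    total = count_commits()
    # --grep matches lines in the commit message, so this picks up the RAI footer.
    assisted = count_commits("--grep=^Assisted-by:")
    pct = round(100 * assisted / total) if total else 0

    # Shields.io static badge format: https://img.shields.io/badge/<label>-<message>-<color>
    # Underscores render as spaces; %25 is a URL-encoded percent sign.
    badge_url = f"https://img.shields.io/badge/AI_assisted_commits-{pct}%25-blue"
    Path("rai-badge-url.txt").write_text(badge_url + "\n")
    print(badge_url)


if __name__ == "__main__":
    main()

A scheduled workflow step could run something like that and drop the URL into the README; the real version would diff the attributed commits and estimate line counts rather than just counting them.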
You could get really fancy with this if you wanted to, but honestly there's other tooling that will catch up before that becomes necessary! This is just meant to hold us over until that tech emerges. Fingers crossed that point is sooner rather than later.
The Part Where I Sign My Name
This post was written by me. ChatGPT assisted with editing and clarity. It didn't choose the ideas, make the decisions, or approve the work. If something here is wrong, confusing, or incomplete, that responsibility belongs to the human who signed it. AI can assist. Authorship still requires someone willing to put their name on the result.
