I survived the hackathon, caught up on sleep (mostly), and the AI debate? Still going strong, at least in my head. If my last posts seemed like a soapbox, let's try a new approach. This isn't just for the devs or the writers; it's for anyone online trying to build, break, or create something honest.
Background (because the story matters)
Here's what gets me: people still treat all AI content the same, whether it's auto-generated fluff or a post like this, with actual thought, stubbornness, and a few creative detours baked in. I use AI as a tool, but I'm the one steering; it's got my fingerprints and my voice all over it because I wrote intentional AI instructions.
At least, unless GPT-5 has decided to rewrite the rules again. Then it takes a bit of wrangling first.
The sad part? Both the creative writing and the fluff get the same knee-jerk reaction. I'm not worried about myself; I know how to handle criticism and don't mind being upfront. But not everyone's ready to jump into the ring, and a lot of good AI-assisted work gets buried because creators just don't want to deal with the drama that comes with disclosure.
Hang out in the writers+AI corners of the internet for five minutes and you'll hear: "Just don't disclose; why invite the hassle?" That's not me. I'd rather own it, even if it means the occasional argument.
Integrity first, sparring match second, and my matches usually come with a grin and a little happy dance.
So let's walk through what we actually know about AI, what we're still sorting out, and how we might just learn to disagree without burning the place down before it's sorted.
1. Who's Really at the Table?
Platforms, publishers, workplaces, classrooms, and every Discord mod with a badge get to set their own boundaries. But the thing that always gets me isn't whether they do it, it's how. When "boundaries" become a one-size-fits-all firewall, that's where I have a problem.
For example, Ko-fi's Discord rules are direct:
"All forms of AI-generated content (eg. art/music/writing/ChatGPT), including links to such content, and discussion thereof is not allowed in this server."
So, of course, I checked. "Does that mean my stuff is banned?" Turns out, nope. As long as I skip the preview images, we're golden. Honest, straightforward, no drama. Awesome.
Medium, though? (If you missed the post, catch up here.) They talked about gray areas... then built a giant penalty box for every AI-assisted creator, regardless of intent or craft.
For me, that's about as thoughtful as banning all musicians because someone played Wonderwall one too many times at open mic night.
I can't rewrite the rulebook, but I can refuse to act like these blanket rules don't erase good, thoughtful people. Those of us who are trying to follow guidelines that don't really exist, and perhaps set a few new ones in the process, don't deserve to have our work lumped together with the slop.
2. AI Content ≠ Equal
There's AI content, and then there's AI content. Some of it is shallow, spammy filler cranked out for clicks with zero thought or care. For the rest of us, it's a tool wielded well: organized, rewritten, and given a real voice.
Bad actors weren't invented along with AI; the existing ones just found a different shortcut.
There are tools out there, ZeroGPT and friends, that claim they'll catch every AI post. But here's the thing: I've actually tested this. I picked three or four posts at random, ran them through different detectors, and my highest score was 18%.
It's not because I'm hiding anything or using some secret hack. It's the process.
I dictate most posts on the fly. Then I hand it off to the AI to organize, to reword, sometimes to rewrite completely, but always under my set of rules. And it never, ever ends as a copy-paste job. I'm editing the whole time. There's always a human (me) in the loop, every single time.
3. Will AI Improve Productivity?
Sometimes. Sometimes not.
There's always a promise: AI will make you ten times faster, smarter, better, insert-your-buzzword-here. And maybe it's true... sometimes. Documentation? Absolutely. I can roll out a draft in seconds: clean, organized, done. Drafting proposals? Don't even get me started; I'm pretty sure the principals are getting sick of how fast I can toss together a pitch.
If they aren't yet, give it time, because I have more.
But sometimes AI just saves you from the jobs nobody wants. Like digging through a decade's worth of legacy code for a spike, because it's finally time to rebuild that app and nobody remembers what it's actually doing or why it was even there to begin with.
I know I don't want to do that. You don't want to do that. Nobody wants to do that. AI doesn't care, and it's pretty good at it!
Honestly, sometimes it sees connections I might miss. But that doesn't mean you can skip the whole process and trust whatever it finds. You still have to check. Maybe it saves you three days in the depths of the code mines, but the human review isn't optional.
Still, not every job should go to the bots, either. That gnarly production bug, that support ticket, the customer call: they all need a human. AI can be a superpower, but it's not meant to replace the parts of your work that need actual judgment, empathy, or the magic of figuring it out together.
4. AI Is Not Bad (When You Use It Like a Pro)
AI isn't some villain lurking in your workflow. It's a force multiplier. Used right, it makes your voice sharper and your edits faster; used wrong, it just adds to the noise.
That's why every single commit I make defines exactly how much AI was involved, and my posts are going to start wearing an "AI-Edited" badge. Not because someone told me to. Not as a disclaimer. Because somebody has to be willing to say there's a difference between generated and assisted.
This is one version (and yes, Leonardo made them):
And if you want to use the badge yourself, or hand it off to a friend? Don't copy this little screenshot; the full one (plus a couple of others) is hanging out in my repo. Help yourself!
5. AI Code Is AI Content (Writers, You Too!)
Here's my rule: disclosure, plain and simple:
- Docs and posts: Add a simple footer like "This was generated with the help of AI tool."
- Code or technical writing: Commit with one of three different footers in the commit message:
  - `Generated-with: AI tool` means AI did most or all of the work
  - `Co-authored-by: AI tool` means the content is 50/50
  - `Assisted-with: AI tool` means AI helped some, but not close to half
I started out using an email address in the commits too, one that I thought I was making up, until some random app popped up as a contributor in my repo. Not cool...
This isn't about checking boxes. It's about giving credit, setting an example, and actually being transparent with yourself and the future people who end up needing it.
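For code, one of those footers just rides along at the bottom of an ordinary commit message. The change description and tool name below are only placeholders:

```
fix(parser): handle empty input without crashing

Assisted-with: GitHub Copilot
```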
Besides, putting one more stamp on a long list of responsible AI use cases puts a dent in the endless cycle of AI panic and the-world-is-ending doom-speak.
6. And What About AI Images or Music?
Same rules, different paint. Some artists pour weeks or months into training models on their own art. (I still haven't managed to train mine, and it's been over a month!) Others take the shortcut: punch in a sentence or two, let the AI "enhance" it, and call it done.
Are they copying someone's style? I dunno, maybe? Should they? I honestly don't know...
The same applies here as with writing: artists absolutely have the right to protect their work. But what does that look like, practically? The truth is, we don't really know yet. The laws are behind while the tech is still racing ahead. We'll catch up. Maybe not soon enough, but eventually, we will.
I just hope that when we get there, there's at least one person in the room who actually understands what's happening, and what it looks like behind the scenes. We absolutely need better laws, but we do not need people throwing broad rules at some conjured image of "AI training." Whatever it ends up being, honesty and fact need to come first.
7. Is AI "Stealing"? (No, but...)
This is where I dig in my heels. No, using AI-generated content is not stealing, unless you're actively pretending someone else's work is your own or ignoring copyright on purpose. However, "publicly available" isn't the same as "public domain," and nobody should lose credit for their work.
Should AI companies pay for certain data? Probably! Should writers and artists get a say? Of course. But "all AI is theft" is just as oversimplified as "all creators are saints."
Guess what? Real life and the world around us are messy, AI included. We need smarter laws, better tools, and way less finger-pointing.
UPDATED
This is brilliant! HTTP 402 may come back from the forgotten realms of the internet. I've seen other sites, like Credtent.org, offer similar setups as well. Sounds like a solution I can live with... what about you?
8. I Can't Stay Quiet (and Neither Should You)
I can't just sit back, watch the insanity, and not throw in my two cents. We're all still figuring this thing out. Some jumped in headfirst; others are barely dipping a toe. But we won't get anywhere by shutting down the conversation or tuning each other out. If there's a better way, we're gonna have to find it together.
So, did I miss anything? Add your take below: what's a rule, reality, or tip about AI you wish more people got right? Comment, DM, or write your own story. I'll keep this list updated.
And please, when it comes up again, don't leave yourself (or anyone else) out of the conversation.
This post was AI-edited, human-approved, and finished before the next AI ban drops.
Nuance is mandatory, drama is optional, and the sarcasm is included free of charge.
This Post's ZeroGPT Score
More out of curiosity than anything else...
Top comments (8)
Really appreciated your take on calling out nuance over blanket bans; it's refreshing. I've seen the same wave of reaction and "because AI" gatekeeping in developer spaces too. You're spot-on that intentional, human-in-the-loop work deserves its own lane, not to be lumped with churned-out fluff.
I've built my own orchestration stacks using Claude where I'm vetting each output, structuring context, and maintaining tight control over the patterns the model follows; integrity, not hype, powers sustainable systems. Your honesty about the process, and the badge idea, is exactly the kind of leadership this conversation needs. Nice work.
If you were designing that "AI-Edited" flow for a codebase, how would you architect the committer experience to make context and accountability just as clear?
Thank you! The nonsense going around when it comes to AI amazes me most days! Plus, it takes some serious skill to make these models behave in any sort of predictable way. So I'm glad I'm not alone in the deep end of things!
As devs, the biggest piece I see missing right now is training. My intro to AI was quite literally an automated security email saying I'd been added to the Copilot group. Thankfully we've made progress since then, but not everyone has. That's really where most of this idea starts. It's not enough to have access to the tools; devs also need to know how to wield them for the tools to be beneficial at all. And a little understanding goes a long way!
More on point, the only way I've been able to reliably require RAI sign-off in code is through conventional commits. Those are perfect because the setup is already well defined, documented, and widely accepted. The stretch to squeeze RAI in isn't a huge leap because the base format is already there.
I touched on it briefly in this post, but realistically it's based on the honor system. Ideally there's a way to enforce that AI footer along with the signature line, and together that equals your RAI. Although true enforcement also means there would have to be a fourth Human-content footer, where it's simply blank now.
From the top, there are four generic levels of AI involvement in any commit, plus the extra zero one:
In each case, follow up with the name of the tool you used, like "GitHub Copilot". The trick to really making this work (besides a linter to enforce it) is following the principles of conventional commits, not just the format: small, frequent changes that explain the why and the how more than the what. Secondly, it's not enough to just state a truth. By also requiring an "official" signature line, you're adding a tangible step (for lack of a better word): as a user, I'm signing my name to this piece of work, stating I've personally reviewed and signed off on it. Required HITL, by default.
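A minimal sketch of what that enforcement could look like as a commit-msg hook. The footer names come from this thread; the function names, the exact checks, and the use of `Signed-off-by` as the signature line are my own assumptions, not an established tool:

```python
import sys

# The three RAI footers from this thread; a commit with none of them
# is treated as the blank, human-only level mentioned above.
RAI_FOOTERS = ("Generated-with", "Co-authored-by", "Assisted-with")

def check_rai(message: str) -> tuple[bool, str]:
    """Return (ok, reason) for one commit message under these honor-system rules."""
    lines = message.splitlines()
    footers = [ln for ln in lines
               if ":" in ln and ln.split(":", 1)[0].strip() in RAI_FOOTERS]
    signed = any(ln.startswith("Signed-off-by:") for ln in lines)
    if len(footers) > 1:
        return False, "pick exactly one AI footer"
    if footers and not footers[0].split(":", 1)[1].strip():
        return False, "the AI footer needs a tool name, like 'GitHub Copilot'"
    if footers and not signed:
        return False, "AI-touched commits need a Signed-off-by line (the HITL sign-off)"
    return True, "ok"

if __name__ == "__main__" and len(sys.argv) > 1:
    # git invokes a commit-msg hook with the path to the message file
    with open(sys.argv[1], encoding="utf-8") as fh:
        ok, reason = check_rai(fh.read())
    if not ok:
        print(f"commit rejected: {reason}", file=sys.stderr)
        sys.exit(1)
```

Dropping something like this into `.git/hooks/commit-msg` (or wiring it through a commitlint plugin) is what turns the honor system into a required step.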
For me, that's phase 1 = accountability. Phase 2 is visibility, which could be as simple or as complex as you want reporting to be. A straightforward pipeline that runs with each deployment (and on push, if you squash) could be enough: grab all the commits in the release and their diffs, calculate an overall estimated percentage of AI code, populate a custom Shields badge, and stick it at the top of the README. Voilà, there's your visibility.
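That phase-2 calculation could be as small as the sketch below. The per-footer weights are my own guesses, not anything established, and the badge is just a static Shields URL a pipeline would write into the README:

```python
# Rough per-footer weights: how much of a commit's changed lines we
# attribute to AI. These numbers are guesses; tune them to taste.
WEIGHTS = {"Generated-with": 1.0, "Co-authored-by": 0.5, "Assisted-with": 0.25}

def ai_percentage(commits: list[dict]) -> float:
    """commits: [{'footer': <footer name or None>, 'lines_changed': int}, ...]"""
    total = sum(c["lines_changed"] for c in commits)
    if total == 0:
        return 0.0
    ai_lines = sum(c["lines_changed"] * WEIGHTS.get(c["footer"], 0.0)
                   for c in commits)
    return round(100 * ai_lines / total, 1)

def badge_url(pct: float) -> str:
    # Static Shields badge: label "AI code", message like "30.0%"
    return f"https://img.shields.io/badge/AI%20code-{pct}%25-blue"

# Example release: one fully generated commit, one human-only commit.
release = [
    {"footer": "Generated-with", "lines_changed": 120},
    {"footer": None, "lines_changed": 280},
]
print(badge_url(ai_percentage(release)))  # 120 of 400 lines -> 30.0%
```

Lines changed is a blunt proxy for "how much code," but it keeps the report cheap enough to run on every push.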
This really turned into more of an explanation than intended when I started, but I hope it makes sense. I've got the buy-in to make it happen on my team, at least. I'm just missing the central sort of tooling I need to enforce it at scale. So I've been thinking through that while trying to free up some time for another project. I'm open to ideas if you come up with anything or have any input at all!
Yeah, this makes a lot of sense, and I really like how you've mapped the gradations of AI involvement onto something as established as conventional commits. That's the kind of pragmatic move that makes adoption possible, because you're not asking devs to reinvent their workflow, just extend it.
I've wrestled with the same thing in my own orchestration work with Claude. How do you keep a transparent paper trail without turning the whole process into bureaucratic overhead? The honor system is always where it starts, but I think your "signature + footer" idea is the right way to add friction in the places that matter. You're forcing HITL by design, rather than hoping it happens.
What I'd love to see next is how tooling could help take some of that enforcement pain off the dev's shoulders. Imagine a lightweight pre-commit hook that flags missing AI footers, or, even better, a linter-like pass that nudges you to pick the right category before merging. That way the accountability stays real, but the burden isn't all on memory and goodwill.
Visibility as a Shields badge on the README is brilliant; it's almost like nutritional labeling for codebases. I think if more teams saw those percentages week over week, it would normalize talking about AI contributions instead of hiding them. That alone would be a big cultural win.
Really appreciate the thought you've put into this. If you do end up sketching out tooling for enforcement at scale, I'd be keen to see how you approach it, because the balance between clarity and dev friction is the hardest part.
So true! I've got commitlint set up globally, so some sort of plugin for those footers is def next on my list, if I can ever clear the current one to make it that far! I'm sure I'll post a write-up about it whenever I can make it work, though.
That sounds like a solid next step. Commitlint already gives you the right foundation, so layering AI footers on top of that feels like the natural evolution.
I've been down the same road with ScrumBuddy. Getting the orchestration right is one thing, but making it stick without burdening devs is the real trick. A plugin that just slots into what people are already using is exactly how you avoid revolt.
Definitely drop that write-up when you get there. I think a lot of teams are looking for that balance between accountability and friction, and seeing your approach in practice would be huge.
I will! In the meantime, I'm spreading the idea around as much as I can and practically begging people to steal it and use it everywhere.
Consider it stolen and I'll give it a go ;) thanks for sharing!!
Great job!