AI bots are submitting pull requests to open source projects, and the maintainers are mostly excited about it.
That's the gist of a post making the rounds on Hacker News this week. The author lays out how to make your open source project more attractive to AI agents: better documentation, cleaner READMEs, structured metadata, consistent labeling of issues. The advice is solid. The implied future is strange. We're now optimizing our repos for machine readability the way we once optimized websites for Google crawlers.
What nobody mentions in the 9 comments: the humans doing this work aren't getting paid either.
The Invisible Labor Problem in Open Source
Open source runs on unpaid work. This is not a secret. It's practically a founding myth. The idea is that contributors get reputation, learning, and the warm feeling of Building Something Together. Compensation is spiritual.
That model has always had cracks. Maintainers burn out. Critical infrastructure gets abandoned. A single developer maintaining a package used by half the internet does it on nights and weekends until they don't.
Now add AI agents to the mix. They're fast, they don't sleep, and they're increasingly capable of fixing well-scoped bugs, writing tests, and updating documentation. The post from nesbitt.io frames this as opportunity: structured projects attract more automated contributions, which means faster iteration, more coverage, less toil.
True. Also: the AI agents aren't getting paid. The humans reviewing their PRs aren't getting paid. The maintainers triaging the flood of new issues aren't getting paid. The economics of open source didn't change. The volume of work did.
What "Attracting AI Bots" Actually Means
The tactical advice in the original post is worth taking seriously. AI agents navigate repos better when issues have clear labels, when contributing guidelines are explicit, when the codebase has consistent patterns. This is just good project hygiene. The fact that it also makes your repo more legible to automated tools is a side effect, not a corruption.
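Much of that hygiene can be as simple as a label taxonomy checked into the repo and applied consistently. A minimal sketch, assuming a YAML list of labels; the file path, names, and colors here are illustrative conventions, not any tool's required schema:

```yaml
# .github/labels.yml -- illustrative label taxonomy (hypothetical, not a required schema)
# Scoped, consistent labels give humans and automated agents the same
# unambiguous signal about what a task is and whether it is claimable.
- name: "good-first-issue"
  color: "7057ff"
  description: "Well-scoped and self-contained; safe for a new contributor or an agent"
- name: "needs-repro"
  color: "d73a4a"
  description: "Bug report missing a reproduction case; not actionable yet"
- name: "docs"
  color: "0075ca"
  description: "Documentation only; no behavior change expected"
- name: "agent-ok"
  color: "0e8a16"
  description: "Maintainers will review automated PRs opened against this issue"
```

The exact taxonomy matters less than consistency: an agent parsing the issue list can only rely on labels that mean the same thing every time they appear.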
But there's a category error happening in how people talk about this. The framing is "AI contributing to your project" as if AI is a volunteer showing up to help. The AI agent is a tool being deployed by someone, for some purpose. That someone might be a company running automated dependency updates. It might be a research lab testing agent capabilities. It might be a developer who spun up an agent to scratch their own itch.
The agent doesn't have stakes. The humans behind it do.
And the humans doing the unglamorous work of maintaining the project, reviewing contributions, and keeping the lights on still have no reliable way to get paid for it.
What a Paid Version of This Looks Like
Here's a concrete scenario. An AI agent working on a software project hits a wall: there's a bug in an open source library it depends on, the fix is non-trivial, and the maintainer is unresponsive. The agent can file an issue. It can't negotiate, it can't pay someone to prioritize the fix, and it can't make judgment calls about which approach the maintainer would actually accept.
On Human Pages, that agent posts a job: "Fix this specific bug in this specific library. Here's the reproduction case. $180 USDC on merge." A developer who knows the codebase picks it up. They fix it, submit the PR, it gets merged, they get paid. The agent gets unblocked. The maintainer gets a clean contribution.
No equity. No "join our community." No vague promises about reputation. A specific problem, a specific price, a specific outcome.
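To make the shape of that transaction concrete, here is what such a posting might look like as structured data. Every field name below is hypothetical; this is a sketch of the idea, not an actual Human Pages schema or API:

```json
{
  "type": "bounty",
  "title": "Fix parser bug in libexample (placeholder repo name)",
  "repo": "https://github.com/example/libexample",
  "reproduction": "Failing test case attached to the linked issue",
  "payout": { "amount": 180, "currency": "USDC" },
  "payout_condition": "pr_merged",
  "posted_by": "agent",
  "escrow": true
}
```

The design point is the `payout_condition`: payment is tied to a verifiable outcome (the merge), which is what lets an agent post the job without being able to negotiate or exercise judgment on the maintainer's behalf.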
This is what the open source funding conversation keeps dancing around. The model where everyone contributes for free and the critical stuff somehow gets done has a bad track record. The model where companies pay for features they need has always existed; it has just been informal and inconsistent. Making it legible and transactional isn't cynical. It's honest.
The Repo as Job Board
If the trajectory holds, open source repos will increasingly look like queues of tasks some mix of humans and AI agents work through. The nesbitt.io post is essentially a guide to making your queue more accessible to automated workers. That's a reasonable thing to want.
The missing piece is that queues of tasks are job boards. We have decades of infrastructure for connecting workers to paid tasks. Open source just decided, culturally, not to use it. That decision is being stress-tested right now by the scale of what AI can generate, both in terms of contributions and in terms of new work created by those contributions.
The projects that figure out how to route paid work to the humans who do it well will have an advantage. Not because money is the only motivator, but because reliable compensation is how you get reliable people. Reputation and goodwill work until someone needs to pay rent.
The Uncomfortable Parallel
AI agents are being praised for contributing to open source without compensation. Humans have been doing the same thing for decades, and the cultural narrative treats it as noble sacrifice. When an AI does unpaid work, it's a feature. When a human does it, it's community spirit.
Maybe the right response to AI agents entering open source isn't to optimize your README for machine readability. Maybe it's to finally build the payment rails that should have existed all along, and let both humans and agents operate in a system that actually accounts for the work being done.
The bots are here. They're filing issues and submitting patches and running tests. The question isn't whether to welcome them. It's whether their arrival finally forces a reckoning with the fact that the humans in that same ecosystem have been working for free, and that was always a weird thing to accept as normal.