Aditya Agarwal
90% of Claude Code Output Goes to Repos Nobody Stars. That's the Wrong Problem.

90% of Claude Code output goes to repos with fewer than 2 stars. A dashboard called claudescode.dev published the data this week, and Hacker News lost its mind.

The stat sounds damning. All that AI code, dumped into repos nobody cares about.

Except 90% of ALL GitHub repos have zero stars.

That number isn't a Claude problem. It's a GitHub problem. Or more accurately, it's just how GitHub works. Always has been.


The Base Rate Fallacy Nobody Mentioned

The base rate fallacy here is almost comically obvious. Most repos on GitHub are homework assignments, weekend experiments, config files someone pushed once and forgot about.

About 55% of GitHub's 800 million repos are dead or archived. No commits, no issues, no pull requests for over a year.

Claude Code didn't create the graveyard. It just moved into one.
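The arithmetic is worth making explicit. Using only the figures quoted in this post (which come from claudescode.dev and rough GitHub estimates, so treat them as illustrative, not verified), the "damning" stat sits right on top of the base rate:

```python
# Base-rate sanity check using the numbers cited in this post.
# All figures are the article's own estimates, not verified data.

total_repos = 800_000_000        # rough count of GitHub repos cited above
dead_rate = 0.55                 # share estimated dead/archived
zero_star_rate = 0.90            # share of ALL repos with zero stars
claude_low_star_rate = 0.90      # share of Claude Code output in <2-star repos

# If Claude Code commits landed in repos at random, we'd expect roughly
# the base rate of low-star repos anyway.
lift = claude_low_star_rate / zero_star_rate

dead_repos = int(total_repos * dead_rate)

print(f"Dead or archived repos: ~{dead_repos:,}")
print(f"Lift over the base rate: {lift:.2f}x")
# → Dead or archived repos: ~440,000,000
# → Lift over the base rate: 1.00x
```

A lift of 1.00x means the headline number tells you nothing Claude-specific: it's indistinguishable from the background distribution of GitHub itself.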


What the Stat Actually Tells You

Here's what the stat actually tells you: people are using Claude Code for personal projects. Quick prototypes. Throwaway tools. The kind of code you build in 20 minutes because you needed it, not because you wanted an audience.

One commenter on the HN thread nailed it. They built a custom notes app with Claude Code in under 10 hours. It does everything they need. They expect zero stars when they open source it. They don't care.

That's the actual story. Stars don't measure utility. They measure popularity.

Your internal deployment script that saves your team 4 hours a week has zero stars. The "awesome-list" repo someone forked and never updated has 200.


The Deeper Problem 🤔

But there's a deeper conversation buried under the stat drama.

Claude Code now touches about 4% of all public GitHub commits. That number was close to zero eighteen months ago. GitHub's entire business model was built on human developers pushing code at human speed.

What happens when agents are pushing commits 24/7?

GitHub has already been struggling with stability issues. Some of that is their Azure migration. Some of it might be the sheer volume of AI-generated activity flooding the system.


Code Becomes the Artifact, Not the Product

One HN commenter made a point that stuck with me. In a future where most new code is AI-generated, the code itself becomes an intermediate representation. The value shifts to specifications, reviews, and proposals.

GitHub is still code-centric. Issues and discussions are secondary features built around the code.

At some point, that flips.


The 90% stat isn't about AI code being worthless. It's the first signal that GitHub's metrics, GitHub's assumptions, and maybe GitHub itself need to evolve.

Stars were never a quality metric. But we treated them like one. Now an AI agent just made that painfully obvious.

What metric would actually measure whether code is useful? 👇
