Your Open Source Repo Has 10K GitHub Stars. Half of Them Are Fake.

At Gerus-lab, we evaluate open source tools constantly — for our clients, for our own stack, and when vetting partners. We've built systems for Web3 projects, AI platforms, and SaaS products where choosing the wrong library can cost months of refactoring. So we pay close attention to how the community signals trust.

For years, GitHub stars were a quick shorthand. A repo with 8K stars felt safer than one with 400. Turns out, we were partly reading fiction.

The Inflation Nobody Talks About

A research paper analyzing GitHub from 2019 to 2024 found approximately 6 million fake stars across 18,000+ repositories. Until 2022, fake stars were mostly a scam tool — phishing utilities, crypto bots, and malware clones used stars as social proof to get unsuspecting developers to clone and run something nasty.

Then something shifted.

Post-2022, the biggest growth category for fake stars became AI/LLM projects — overtaking even blockchain startups at the peak of their hype. The incentive structure is obvious: VCs measure traction by star velocity. The ROSS Index, which investors actively monitor, had a top-ranked project with ~47% suspected fake stars.

One in six fast-growing open source projects shows signs of manipulation. Let that sink in.

How Cheap Is a Fake Star?

Ridiculously cheap. The going rate is $0.03 to $0.85 per star. Fiverr gigs, Telegram channels, star-swap exchanges — all openly indexed by Google.

The median star count for open source startups at seed ($3–5M raise) is 2,000–3,000 stars. At Series A ($5–20M), roughly 5,000. So "believable" seed-stage traction costs somewhere between $100 and $2,500.
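
The arithmetic, as a quick sanity check (the endpoints in the paragraph above are rounded):

```python
# Cost of faking "believable" seed-stage traction, using the rates above.
price_low, price_high = 0.03, 0.85    # USD per fake star
stars_low, stars_high = 2_000, 3_000  # median seed-stage star count

print(f"cheapest: ${stars_low * price_low:,.0f}")          # ~$60
print(f"most expensive: ${stars_high * price_high:,.0f}")  # ~$2,550
```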

There's even a premium tier: aged GitHub accounts with 5-year commit history and the Arctic Code Vault Contributor badge sell for ~$5,000 each. These are rented out as "premium star givers" for clients who need extra authenticity.

And it's not just stars. npm download counts are inflated by AWS Lambda loops (one developer famously drove his package to 1M weekly downloads with zero real users). VS Code extension installs are botted. The corruption goes deep.

The Twist: You Can Be a Victim Without Doing Anything

Here's the dark part we found most alarming: a competitor or bad actor can fake-star YOUR repo. Suddenly your previously clean project looks like it bought stars, and you're stuck trying to prove you didn't.

GitHub has no defense mechanism for this. The attack is free, plausibly deniable, and can damage your reputation with exactly the investors or enterprise buyers you're trying to impress.
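
What you can do is monitor: the stargazers endpoint returns per-star timestamps if you request the application/vnd.github.star+json media type, so you can watch your own repo for bursts. A minimal sketch, where the repo name, token, and spike threshold are placeholders, and the listing is capped by GitHub at the first 40,000 stars:

```python
import requests
from collections import Counter

def star_timeline(owner: str, repo: str, token: str) -> Counter:
    """Bucket stars by day using the starred_at timestamps GitHub
    exposes under the star+json media type."""
    headers = {
        "Accept": "application/vnd.github.star+json",
        "Authorization": f"Bearer {token}",
    }
    url = f"https://api.github.com/repos/{owner}/{repo}/stargazers"
    per_day: Counter = Counter()
    page = 1
    while True:
        resp = requests.get(url, headers=headers,
                            params={"per_page": 100, "page": page})
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        for star in batch:
            per_day[star["starred_at"][:10]] += 1  # YYYY-MM-DD
        page += 1  # NB: GitHub caps this listing at the first 40,000 stars
    return per_day

# Flag days far above the median daily rate; 10x is a guess, tune to taste.
days = star_timeline("your-org", "your-repo", "ghp_your_token")
if days:
    median = sorted(days.values())[len(days) // 2]
    for day, count in sorted(days.items()):
        if count > max(10 * median, 20):
            print(f"{day}: {count} stars  <-- investigate this spike")
```

A burst of hundreds of stars in a day with no corresponding launch or article is at least evidence you can point to, whichever side of the accusation you're on.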

What Actually Signals Real Traction

Bessemer Venture Partners — consistently one of the top OSS-focused VC funds — stopped relying on stars years ago. Their signal: unique active contributors per month — anyone who opened an issue, submitted a PR, or committed code.

Their benchmark: 250+ unique contributors per month, a bar that fewer than 5% of the top 10,000 repos clear. It's a far harder metric to fake.
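
You can approximate this metric yourself. A rough sketch, assuming "active contributor" means anyone who opened an issue or PR, or authored a commit, in the last 30 days; owner, repo, and token are placeholders, and pagination is omitted, so large repos will be undercounted:

```python
import requests
from datetime import datetime, timedelta, timezone

def monthly_contributors(owner: str, repo: str, token: str) -> set:
    """Approximate 'unique active contributors per month': anyone who
    opened an issue or PR, or authored a commit, in the last 30 days."""
    headers = {"Authorization": f"Bearer {token}",
               "Accept": "application/vnd.github+json"}
    since = datetime.now(timezone.utc) - timedelta(days=30)
    people: set[str] = set()

    # Issues and PRs share the issue-search endpoint.
    q = f"repo:{owner}/{repo} created:>={since.date().isoformat()}"
    resp = requests.get("https://api.github.com/search/issues",
                        headers=headers, params={"q": q, "per_page": 100})
    resp.raise_for_status()
    for item in resp.json()["items"]:
        people.add(item["user"]["login"])

    # Commit authors on the default branch.
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}/commits",
                        headers=headers,
                        params={"since": since.strftime("%Y-%m-%dT%H:%M:%SZ"),
                                "per_page": 100})
    resp.raise_for_status()
    for commit in resp.json():
        if commit.get("author"):  # author is null for unmapped emails
            people.add(commit["author"]["login"])
    return people

contributors = monthly_contributors("your-org", "your-repo", "ghp_your_token")
print(f"{len(contributors)} unique active contributors this month")
```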

For quick external checks, two ratios work surprisingly well:

Fork-to-Star Ratio

Healthy projects see forks at 15–25% of star count. A repo with 1,000 stars and 200 forks = normal. One with 1,000 stars and 18 forks? Red flag. People who actually use code fork it.

Watcher-to-Star Ratio

The paper cites a case of a repo with 157,000 stars and 168 watchers. Watchers represent people who genuinely care about the project's activity. One watcher per 1,000 stars is a statistical impossibility in an organic community.
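
Both ratios fall out of a single call to the repo endpoint. One gotcha if you script this: in GitHub's REST API, watchers_count is a legacy alias for the star count; the real watcher number is subscribers_count. A minimal sketch, with thresholds mirroring the rough heuristics above:

```python
import requests

def repo_health(owner: str, repo: str) -> None:
    """Compute fork-to-star and watcher-to-star ratios from repo metadata."""
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}")
    resp.raise_for_status()
    data = resp.json()

    stars = data["stargazers_count"]
    forks = data["forks_count"]
    # NB: "watchers_count" mirrors the star count;
    # "subscribers_count" is the actual watcher number.
    watchers = data["subscribers_count"]

    fork_ratio = forks / stars if stars else 0.0
    watch_ratio = watchers / stars if stars else 0.0

    print(f"{owner}/{repo}: {stars} stars, {forks} forks, {watchers} watchers")
    print(f"  fork-to-star:    {fork_ratio:.1%}  (healthy: ~15-25%)")
    print(f"  watcher-to-star: {watch_ratio:.1%}")
    if stars > 500 and fork_ratio < 0.10:
        print("  -> fork ratio below 10%: inspect more closely")
    if stars > 500 and watchers < stars / 500:  # rough cutoff, not from the paper
        print("  -> watchers anomalously low relative to stars")
```

Unauthenticated requests work fine for spot checks on public repos; you only need a token once you're screening dependencies in bulk.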

How We Changed Our Evaluation Process at Gerus-lab

After digging into this, we updated our internal vetting process for third-party OSS dependencies and potential collaborators:

  1. Stars are a floor, not a ceiling. Below 500 real stars, most projects are genuinely niche. But above that, we stop treating the count as a quality signal.

  2. We check contributor velocity over 90 days. Using GitHub's contributor-stats API (the data behind the contributor graph), we look for sustained contribution patterns across multiple contributors, not just one-person commit storms. See the sketch after this list.

  3. We look at the issue resolution rate. A project with 12K stars and 800 unresolved issues tells a different story than one with 200 stars and 95% of issues closed.

  4. We watch the forks-to-stars ratio. Anything below 10% without a clear explanation triggers deeper inspection.

  5. We look at who's watching. If watchers are anomalously low, we treat stars as marketing, not signal.
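
Items 2 and 3 are both scriptable against the public API. A rough sketch, assuming a public repo; auth and pagination are omitted, and the retry loop exists because GitHub computes contributor stats lazily and returns 202 until they're ready:

```python
import time
import requests

API = "https://api.github.com"
HEADERS = {"Accept": "application/vnd.github+json"}  # add a token for real use

def contributors_last_90_days(owner: str, repo: str) -> int:
    """Count contributors with commits in the last ~13 weeks, using the
    endpoint behind GitHub's contributor graph."""
    url = f"{API}/repos/{owner}/{repo}/stats/contributors"
    for _ in range(5):
        resp = requests.get(url, headers=HEADERS)
        if resp.status_code != 202:  # 202 = stats still being computed
            break
        time.sleep(2)
    if resp.status_code == 202:
        raise RuntimeError("contributor stats not ready; retry in a minute")
    resp.raise_for_status()
    cutoff = time.time() - 90 * 24 * 3600
    return sum(
        1 for contributor in resp.json()
        if any(week["c"] > 0 and week["w"] >= cutoff
               for week in contributor["weeks"])
    )

def issue_close_rate(owner: str, repo: str) -> float:
    """Share of issues ever closed, from the search API's total_count."""
    def count(filters: str) -> int:
        q = f"repo:{owner}/{repo} type:issue {filters}".strip()
        resp = requests.get(f"{API}/search/issues",
                            headers=HEADERS, params={"q": q, "per_page": 1})
        resp.raise_for_status()
        return resp.json()["total_count"]
    total = count("")
    return count("state:closed") / total if total else 1.0

owner, repo = "your-org", "your-repo"  # placeholders
print(contributors_last_90_days(owner, repo), "active committers, last 90 days")
print(f"{issue_close_rate(owner, repo):.0%} of issues closed")
```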

We document this as part of our technical due diligence process when onboarding new stack components for client projects — especially for fintech, Web3, and healthcare where dependency security is critical.

The Regulatory Wild Card

Since October 2024, the FTC's Consumer Review Rule in the US explicitly prohibits buying and selling fake social metrics. The fine: ~$53,000 per violation. So far enforcement has targeted Amazon and Google Maps reviews, but GitHub stars formally fit the same definition — a social metric influencing commercial decisions.

There are no OSS precedents yet. But the legal framework exists. If you're a VC or enterprise buyer who made a financial decision partly based on manipulated star counts, you may have standing.

What This Means for Builders

We work with a lot of early-stage teams — Web3 founders, AI startup CTOs, SaaS product teams — who worry about this from the other side: "We built something genuinely good. How do we signal that authentically?"

Our honest advice:

  • Write about what you built. Articles, demos, case studies. These create real backlinks, real readers, and real users who might star your project with intention.
  • Build in public. Weekly changelogs, milestone posts, screencasts. Watchers compound over time.
  • Don't measure success by stars. Measure by your Discord/Slack activity, your GitHub Discussions engagement, your fork-to-star ratio.
  • Use referral analytics. Know whether your star count spikes came from Hacker News, Product Hunt, or an article: organic spikes have a source, fake ones don't. (A sketch using GitHub's traffic API follows this list.)
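
On that last point, GitHub itself gives maintainers 14 days of referrer data via the traffic API. A minimal sketch, assuming you have push access to the repo; names and token are placeholders:

```python
import requests

def referrer_report(owner: str, repo: str, token: str) -> None:
    """Top referrers for the last 14 days (requires push access to the
    repo), so star spikes can be matched to a real traffic source."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/traffic/popular/referrers",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
    )
    resp.raise_for_status()
    for ref in resp.json():
        print(f"{ref['referrer']:<25} {ref['count']:>6} views "
              f"({ref['uniques']} uniques)")

referrer_report("your-org", "your-repo", "ghp_your_token")
```

If a 500-star day shows no matching spike from Hacker News, Reddit, or anywhere else in this list, you have your answer.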

The open source credibility market is noisy. The projects that win long-term are the ones building genuine trust — with real users, real contributors, and real community activity. Stars can be bought. A healthy contributor ecosystem can't.


At Gerus-lab, we build production systems for startups that can't afford to pick the wrong tools. From AI integrations to Web3 infrastructure to custom SaaS platforms, we do the unglamorous work of vetting what actually belongs in a production codebase.

If you're building something and want a second opinion on your stack — or want to talk about how we approach dependency security — reach out.

What's your go-to signal for evaluating an open source project? Drop it in the comments — genuinely curious what the community has evolved to.
