
Alex Spinov

I Analyzed Traffic to My 291 GitHub Repos — Only 1 Gets Visits

I have 291 public repos on GitHub. I checked the traffic stats for all of them.

One repo gets 158 views in 14 days. The other 290? Nearly zero.
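For anyone who wants to run the same audit: GitHub exposes per-repo traffic through its REST API (`GET /repos/{owner}/{repo}/traffic/views`, which needs a token with push access and covers the last 14 days). A minimal sketch — the `rank_repos` helper and the token plumbing are my own illustration, not code from the post:

```python
import json
import urllib.request

API = "https://api.github.com"


def fetch_views(owner: str, repo: str, token: str) -> dict:
    """GET /repos/{owner}/{repo}/traffic/views -- 14-day view counts.

    Requires a personal access token with push access to the repo.
    """
    req = urllib.request.Request(
        f"{API}/repos/{owner}/{repo}/traffic/views",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # Payload: {"count": total views, "uniques": visitors, "views": [...]}
        return json.load(resp)


def rank_repos(stats: dict) -> list:
    """Sort {repo_name: {"count": views, "uniques": visitors}} by views, descending."""
    return sorted(stats.items(), key=lambda kv: kv[1]["count"], reverse=True)
```

Loop `fetch_views` over every repo from `GET /users/{user}/repos`, feed the results to `rank_repos`, and the skew below falls out immediately.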

The Numbers

| Repo | Views (14d) | Unique Visitors |
| --- | --- | --- |
| awesome-web-scraping-2026 | 158 | 116 |
| awesome-free-apis-2026 | 1 | 1 |
| spinov001-art.github.io | 1 | 1 |
| Everything else (288 repos) | 0 | 0 |

Yes. 288 out of 291 repos get literally zero traffic, and two more scrape by on a single view each.

Where the Traffic Comes From

For the ONE repo that gets visits:

| Source | Views | Uniques |
| --- | --- | --- |
| Hacker News | 28 | 21 |
| GitHub internal | 10 | 6 |
| Kagi (search) | 9 | 8 |
| Google | 4 | 2 |
| Bing | 1 | 1 |
| Habr (Russian tech) | 1 | 1 |

Hacker News is the #1 traffic source. Not Google. Not GitHub search. HN.

Kagi (the paid search engine) sends more traffic than Google. Interesting.
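The referrer breakdown above comes from a sibling endpoint, `GET /repos/{owner}/{repo}/traffic/popular/referrers`, which returns the top referrers of the last 14 days as a list of `{"referrer", "count", "uniques"}` objects. A hedged sketch — the table-printing helper is my own, not GitHub's:

```python
import json
import urllib.request


def fetch_referrers(owner: str, repo: str, token: str) -> list:
    """GET /repos/{owner}/{repo}/traffic/popular/referrers -- top referrers, 14 days."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/traffic/popular/referrers",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def format_referrers(referrers: list) -> str:
    """Render the referrer payload as a plain-text table, sorted by views descending."""
    rows = sorted(referrers, key=lambda r: r["count"], reverse=True)
    lines = [f"{'Source':<24}{'Views':>6}{'Uniques':>9}"]
    for r in rows:
        lines.append(f"{r['referrer']:<24}{r['count']:>6}{r['uniques']:>9}")
    return "\n".join(lines)
```

Note the counts only cover the top referrers, so they won't sum to the repo's total views.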

Why That One Repo?

awesome-web-scraping-2026 is a curated list of 80+ web scraping tools — Scrapy, Playwright, Puppeteer, proxies, free APIs.

It works because:

  1. Genuine value — it includes real third-party tools, not just my projects
  2. Search-friendly — "web scraping tools 2026" is a real search query
  3. Awesome-list format — people know and trust this format
  4. Regularly updated — fresh content signals to search engines

What I Learned

1. More Repos ≠ More Traffic

I thought "291 repos = 291 chances to be found." Wrong.

291 repos with thin READMEs = 291 dead pages that Google ignores.

2. Curation Beats Creation

My most-visited repo is a LIST of OTHER people's tools. Not my own code.

People search for "best X tools" — not for your specific project.

3. One Great Repo > 291 Average Ones

If I had spent all my time making awesome-web-scraping-2026 the definitive resource (500+ tools, comparison tables, benchmarks) — it would probably have 10x the traffic.

Instead, I spread effort across 291 repos that nobody visits.

4. HN > Google for New Repos

Google takes months to index and rank new content. HN gives you traffic in hours.

If you're launching something new, post it on HN first.

What I'm Doing Now

  1. Doubling down on the winner. awesome-web-scraping-2026 gets all my attention.
  2. Adding real value. Comparison tables, benchmarks, honest reviews — not just links.
  3. Archiving the losers. 290 repos that get no traffic are noise, not signal.
  4. One repo, done well. Better than 291 repos, done poorly.
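Step 3 can also be scripted: archiving is just `PATCH /repos/{owner}/{repo}` with `{"archived": true}` (or `gh repo archive` from the CLI). A sketch under those assumptions — the `zero_traffic` helper and its stats shape are my own illustration; do a dry run before pointing this at real repos:

```python
import json
import urllib.request


def zero_traffic(stats: dict) -> list:
    """Names of repos with zero views, given {name: {"count": views, ...}} stats."""
    return sorted(name for name, s in stats.items() if s["count"] == 0)


def archive_repo(owner: str, repo: str, token: str) -> None:
    """PATCH /repos/{owner}/{repo} with {"archived": true}.

    Archived repos become read-only; they can be unarchived later, but
    review the candidate list before running this in a loop.
    """
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}",
        data=json.dumps({"archived": True}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="PATCH",
    )
    urllib.request.urlopen(req).close()
```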

Do you have repos that get zero traffic? How do you decide what to focus on? Let me know in the comments.


The one repo that works: awesome-web-scraping-2026

More from me: 10 Dev Tools I Use Daily | 77 Scrapers on a Schedule | 150+ Free APIs
