Matthias | StudioMeyer

Posted on • Originally published at studiomeyer.io

I audited 25 of my open-source repos. Stars lied.

A friend asked me yesterday how the open-source side of the studio is doing. I checked GitHub. The top repo had five stars. Most had zero. I almost wrote back "yeah, slow start, nothing to see yet."

Then I actually ran the numbers. 3,681 npm installs last month across 15 packages. 254 PyPI installs on a six-day-old library. 12 forks. 30 to 40 unique visitors per two-week window on each of the top five repos. Issues opened by real users: zero, which means either nothing is broken or nobody is loud yet, and the install counts say it is the second one.

So I sat down and audited all 25 public repos in one session. Here is what I found, what I fixed, and why GitHub stars are basically the wrong number to look at when you are five weeks into shipping.

The setup

Five weeks ago I started pushing the StudioMeyer MCP work to public repos. Memory, CRM, GEO, Crew, and a growing pile of foundation pillars under the MCP Factory umbrella. Test harnesses for the Model Context Protocol spec, security middleware in TypeScript and Python, a Rust sidecar against marketplace poisoning, n8n templates, a few tooling repos. Twenty-five public repos in the studiomeyer-io org by the time I ran today's audit. Mostly TypeScript, one Rust crate, one Python package, two n8n template collections.

The audit question was plain: are people actually using this stuff?

Method

I pulled four data sources in parallel and joined them per repo:

  1. GitHub API for stars, forks, watchers, open issues, open pull requests, last push, license, archived state.
  2. npm registry + npm-stat for last-week and last-month download counts and current published version, per package.
  3. crates.io API for the one Rust crate, with the recent 90-day download count and per-version splits.
  4. PyPI + pypistats for the one Python package, with the last-month and last-day numbers.

Then for each repo I checked the last three GitHub Actions runs, listed the open Dependabot security alerts, looked at the GitHub Traffic counts (views and clones, last 14 days), and pulled all open and closed issues plus PRs.
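The join can be sketched in a couple of helper functions. This is an illustrative version, not the exact script I ran, and it assumes an authenticated gh CLI plus jq on the path:

```shell
# Sketch of the audit helpers. Assumes an authenticated gh CLI and jq;
# the org and package names in the usage lines are the real ones from the audit.

# npm registry endpoint for a package's last-month download count.
npm_downloads_url() {
  echo "https://api.npmjs.org/downloads/point/last-month/$1"
}

# Per-repo GitHub stats as tab-separated rows:
# name, stars, forks, open issues, last push.
repo_stats() {
  gh api "orgs/$1/repos?per_page=100" \
    --jq '.[] | [.name, .stargazers_count, .forks_count, .open_issues_count, .pushed_at] | @tsv'
}

# Usage (network and gh auth required):
#   repo_stats studiomeyer-io
#   curl -s "$(npm_downloads_url mcp-academy)" | jq '.downloads'
```

The crates.io and PyPI calls follow the same pattern: one curl per package, one jq expression per metric, joined on repo name at the end.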

The whole thing took about thirty minutes. I am keeping the recipe in my memory system so I can run it again every quarter without thinking.

What stars said vs. what downloads said

Top of the list by stars:

| Repo | Stars | Forks |
| --- | --- | --- |
| local-memory-mcp | 5 | 3 |
| ai-shield | 2 | 2 |
| darwin-agents | 2 | 0 |
| studiomeyer-memory | 2 | 2 |
| n8n-templates | 2 | 1 |
| n8n-nodes-studiomeyer-memory | 2 | 1 |
| mcp-video | 1 | 0 |
| studiomeyer-crm | 1 | 1 |
| studiomeyer-geo | 1 | 1 |
| studiomeyer-crew | 1 | 0 |

If you stopped here, you would conclude the work has not landed: an average of just over one star per repo, and several flagship MCP packages with zero stars and zero forks.

Top of the same list by npm downloads in the last 30 days:

| Package | Last week | Last month |
| --- | --- | --- |
| mcp-academy | 18 | 535 |
| n8n-nodes-studiomeyer-memory | 186 | 491 |
| mcp-personal-suite | 11 | 368 |
| mcp-tenant-pair | 181 | 331 |
| mcp-hook-conformance | 152 | 285 |
| mcp-tenant-pair-demo | 160 | 281 |
| mcp-tenant-pair-cli | 141 | 268 |
| mcp-attest-demo | 11 | 260 |
| mcp-protocol-conformance | 11 | 232 |
| mcp-server-attestation | 13 | 148 |
| mcp-studiomeyer-agents | 144 | 144 |
| mcp-attest-cli | 12 | 123 |
| mcp-spec-migrator | 103 | 103 |
| mcp-stdio-shellguard | 101 | 101 |
| mcp-video | 2 | 11 |

That is 3,681 installs across 15 packages in 30 days, on top of 254 PyPI installs on the Python port of ai-shield (six days old at audit time), and 25 cargo installs on the Rust mcp-armor crate (also six days old).

The packages I shipped most recently, mcp-studiomeyer-agents and mcp-stdio-shellguard, picked up around 100 to 150 installs in their first week with no Reddit post, no HN submission, no email blast. They went out, registered on the MCP Registry index, got picked up by npm search, and people just installed them.

Stars and downloads are not the same metric. Stars need someone to log in, click, and get nothing back. Downloads need someone to read about a tool and run npm install. The second one is much closer to actual usage.

Issues, PRs, traffic

Closed issues across all 25 repos: four. ai-shield had two, mcp-video had one, local-memory-mcp had one. Open issues: zero, except for one cosmetic ticket on mcp-academy from a while ago. That tells me either the libraries are stable enough that nothing is breaking for users, or nobody is loud about bugs yet. Probably both, weighted toward the first because the test suites are large and the dependency surface is small for most of these packages.

Pull requests over the period: 31 merged. Most are Dependabot. A few are real fixes. ai-shield got two real PRs, mcp-personal-suite got nine. The Dependabot stream is doing actual work in the background, keeping lockfiles current.

GitHub Traffic for the last 14 days, just unique visitors so the numbers are honest:

| Repo | Unique views (14d) |
| --- | --- |
| ai-shield | 37 |
| darwin-agents | 38 |
| studiomeyer-geo | 39 |
| n8n-templates | 30 |
| studiomeyer-memory | 23 |
| agent-fleet | 22 |
| studiomeyer-marketplace | 20 |

Thirty unique visitors on a repo over two weeks is not viral, but it is not dead either. Multiply by the number of repos and the org page is getting real attention.

Then the actual fixing

The audit surfaced one repo with real work and a few cosmetic issues.

mcp-academy had eight open Dependabot security alerts: two high severity around fast-uri, four medium around hono (CSS injection, cache leakage, and a bodyLimit bypass), one low around hono JWT validation, and one for ip-address. I checked the lockfile via the GitHub contents API and decoded the base64. Every flagged transitive dependency was already on its patched version; the Dependabot scan just had not propagated yet. I dismissed all eight alerts with reason fix_started and a comment showing the lockfile state. There was also one open Dependabot PR bumping fast-uri from 3.1.0 to 3.1.2. I merged it. Master HEAD is now 74bf554 with zero open alerts.
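The dismissal itself is one PATCH per alert through the Dependabot alerts API. A sketch, with the helper names and the alert number in the usage line as illustrative placeholders:

```shell
# One PATCH per alert through the GitHub Dependabot alerts API.
# Assumes an authenticated gh CLI with access to security alerts.

# Build the API path for a single Dependabot alert; $1 is "owner/name", $2 the alert number.
alert_path() {
  echo "repos/$1/dependabot/alerts/$2"
}

# Dismiss an alert as fix_started with an explanatory comment.
dismiss_alert() {
  gh api -X PATCH "$(alert_path "$1" "$2")" \
    -f state=dismissed \
    -f dismissed_reason=fix_started \
    -f dismissed_comment="$3"
}

# Usage (network and auth required; alert number 7 is a placeholder):
#   dismiss_alert studiomeyer-io/mcp-academy 7 "lockfile already on fast-uri 3.1.2"
```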

mcp-personal-suite had a failing CI step on npm audit --audit-level=high. Same root cause as academy: transitive vulnerable dependencies. The package.json had no overrides for hono or fast-uri, so the lockfile was stuck on hono 4.12.14 and fast-uri 3.1.0. I cloned it locally, added overrides: { "hono": ">=4.12.18", "fast-uri": ">=3.1.2" } to package.json, ran npm install to regenerate the lockfile, then ran npm audit fix which also bumped axios 1.15.1 to 1.16.0, ip-address 10.1.0 to 10.2.0, express-rate-limit 8.3.2 to 8.5.1, and uuid 11.1.0 to 11.1.1. Result: zero vulnerabilities, all 419 tests pass, build clean. Pushed as e93ace4. CI went green within 90 seconds.
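For anyone hitting the same wall: overrides is a top-level package.json key that npm applies across the whole transitive tree on the next install. Roughly what I added:

```json
{
  "overrides": {
    "hono": ">=4.12.18",
    "fast-uri": ">=3.1.2"
  }
}
```

Delete the lockfile entry or run npm install afterwards so the lockfile regenerates against the forced versions; the override alone does nothing until the tree is resolved again.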

Five connector repos had recurring failed CI runs that were never real failures. The studiomeyer-memory, studiomeyer-crm, studiomeyer-geo, studiomeyer-crew, and studiomeyer-marketplace repos are docs-only mirrors. They have a README and a license file. No package.json, no .github/workflows/ directory. But Dependabot still tries to update GitHub Actions versions on a daily scan, and every attempt fails because there is nothing to update. The fix is one file per repo: .github/dependabot.yml with version: 2 and updates: []. That tells Dependabot explicitly that this repo has nothing for it to scan. Five commits, one per repo. The cached failed runs from before will stay in the history but no new ones will land.
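The silencing file, for anyone copying the pattern, is exactly the shape described above (the comment line is mine):

```yaml
# .github/dependabot.yml: nothing here for Dependabot to scan
version: 2
updates: []
```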

One more repo, mcp-studiomeyer-agents, had the same Dependabot noise but does have a real package.json. It is a stdio MCP server published to npm, with no CI workflow because the package itself is the deliverable. I scoped its dependabot.yml to npm only, with no github_actions block.

Total time for all the fixing, in one session: about an hour, including the audit. Most of it was waiting for the npm install to finish on personal-suite.

What this taught me about KPIs early in OSS

The default narrative when stars are low is that the work is invisible. That is wrong. Stars are a visibility lag indicator. They show up after a Reddit post goes well, after a Hacker News Show HN climbs, after a Twitter thread gets quoted by someone bigger. They do not show up because someone installs your package and uses it for a week.

Five things actually move during the early weeks:

  1. Downloads on the package registry, weekly and monthly. npm filters obvious bot mirrors out of public stats, so the numbers are closer to honest than they look.
  2. Forks, because somebody who forks usually wants to actually run the code or change something.
  3. GitHub Traffic uniques over 14-day windows. Bots do not consistently produce uniques across rolling windows.
  4. Closed issues, closed PRs, the absolute number, because it tells you whether anybody who hit a real bug bothered to file something.
  5. Dependabot health, because as your dependency tree grows, vulnerable packages will eat your CI if you do not stay on top of it.

If I had only been watching stars I would have written off the entire MCP Factory effort. mcp-protocol-conformance has zero stars and is on its way to clearing 250 monthly installs. mcp-stdio-shellguard hit 101 installs in its first six days with the same star count.

The stars will come. They come from a viral post, from a referenced position in a comparison article, from one influencer dropping a link. None of those things happen because the CI is green. They happen because the code does something useful and someone outside the org notices.

What I would tell my past self

Run the audit early. Run it monthly. Keep the recipe out of your head and in a script or a memory system that survives between sessions. The hour I spent today turned a vague "we should ship more stars" anxiety into a concrete list of one real bug fix and five repos that needed silencing. None of those would have been visible from the GitHub front page.

Also: GitHub does not give you that audit by default. You have to write it yourself. The good news is that the data is all there, in a few free APIs, and parsing it takes about thirty lines of bash.

Next pieces of work, in priority order: a Reddit r/mcp post for mcp-armor, since zero stars on a real Rust security crate that people are already installing is a fair candidate for the "oh, that exists?" reaction, and a Hacker News Show HN for mcp-stdio-shellguard once the next CVE wave hits. Both are visibility moves, not engineering moves.

Engineering side of the org keeps shipping. The audit just made it less invisible to me.


If you want the recipe I used, the bash and the Python parsing, the gh API patterns, the npm-stat fallback, ping me. I will write it up as a separate post if more than three people ask. Otherwise the version in my notes is enough.
