## The problem with "just check the README"
When your AI agent recommends an npm package, it's reasoning over documentation, descriptions, and repository metadata. All of that can be written by anyone in 10 minutes.
What the README can't tell you:
- Whether this project was actually being maintained last month
- Whether it has one committer or thirty
- Whether releases are versioned and consistent
Behavioral commitment is different. A maintainer who's shipped 40 releases over 6 years and merged 200 PRs last month is demonstrating real investment — not claims.
## What I shipped today
I added `lookup_github_repo` to the Proof of Commitment MCP server. It's a behavioral trust score (0–100) for any public GitHub repository.
Zero install. Works now:

```json
{
  "mcpServers": {
    "proof-of-commitment": {
      "type": "streamable-http",
      "url": "https://poc-backend.amdal-dev.workers.dev/mcp"
    }
  }
}
```
Then ask Claude or Cursor:

- "What's the commitment score for vercel/next.js?"
- "Is facebook/react actively maintained?"
- "Vet this dependency for me: langchain-ai/langchain"
## What the score measures
Five behavioral dimensions, scored objectively from the GitHub API:
| Dimension | Max pts | What it measures |
|---|---|---|
| Longevity | 30 | How long the project has existed |
| Recent activity | 25 | Commits in the last 30 days |
| Community | 20 | Number of contributors |
| Release cadence | 15 | Whether it ships versioned, stable releases |
| Social proof | 10 | Stars (proxy for skin-in-the-game) |
Archived repos or repos with no push in 2+ years are penalized 50%.
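The five dimensions compose into a simple additive score with the archival penalty applied last. A sketch of the shape of the calculation, where the per-dimension caps match the table above but the scaling thresholds are my assumptions, not the server's exact tuning:

```python
from datetime import datetime, timezone

def commitment_score(repo: dict) -> int:
    """Sketch of the five-dimension score (0-100). Caps per dimension
    follow the table; the thresholds (6 years, 100 commits, 30
    contributors, 10k stars) are assumed for illustration."""
    now = datetime.now(timezone.utc)
    age_years = (now - repo["created_at"]).days / 365.25
    longevity = min(30.0, age_years * 5)                     # caps at 6 years
    activity  = min(25.0, repo["commits_last_30d"] * 0.25)   # caps at 100 commits
    community = min(20.0, repo["contributors"] / 30 * 20)    # caps at 30 contributors
    cadence   = 15.0 if repo["has_stable_releases"] else 0.0
    social    = min(10.0, repo["stars"] / 10_000 * 10)       # caps at 10k stars
    score = longevity + activity + community + cadence + social
    # Archived repos, or repos with no push in 2+ years, lose 50%.
    if repo["archived"] or (now - repo["pushed_at"]).days > 730:
        score *= 0.5
    return round(score)

# Toy input shaped like vercel/next.js (field names are mine, not the API's).
nextjs = {
    "created_at": datetime(2016, 10, 5, tzinfo=timezone.utc),
    "pushed_at": datetime.now(timezone.utc),
    "commits_last_30d": 100,
    "contributors": 30,
    "has_stable_releases": True,
    "stars": 138_621,
    "archived": False,
}
```

Note the toy input maxes every dimension; the real server gave next.js partial release-cadence credit, as shown below.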
Example output — vercel/next.js:

```text
Repository: vercel/next.js
Description: The React Framework
Age: 9 years
Stars: 138,621 | Forks: 30,760
Contributors: 30+
Activity: 100 commits in the last 30 days
Latest release: v16.2.2
Primary language: JavaScript
License: MIT

Commitment Score: 90/100
  Longevity: 30/30
  Recent activity: 25/25
  Community: 20/20
  Release cadence: 5/15
  Social proof: 10/10
```
(The release cadence score is low because of the "releases" vs. "tags" naming convention on GitHub; something I'll tune.)
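As a quick sanity check, the per-dimension breakdown reported above sums to the headline score:

```python
# Per-dimension points from the vercel/next.js example output.
breakdown = {
    "longevity": 30,
    "recent_activity": 25,
    "community": 20,
    "release_cadence": 5,   # partial credit, per the note above
    "social_proof": 10,
}
total = sum(breakdown.values())  # 90, matching "Commitment Score: 90/100"
```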
## Why this matters for AI agents
As AI agents increasingly make recommendations — "use this library," "trust this provider," "evaluate this vendor" — the data they reason over needs to be harder to fake than content.
I'm building the Commit protocol: a trust layer that surfaces behavioral commitments instead of stated claims. The GitHub tool is the first data source that works immediately, without requiring any users.
The full server now has four tools:

- `query_commitment` — verified human visitor data (grows as users install the extension)
- `lookup_business` — Norwegian business registry (longevity, financials, employee count)
- `lookup_business_by_org` — direct org number lookup
- `lookup_github_repo` — behavioral repo trust (works right now, globally)
## Try it
Add the MCP server to your config and ask:

- "Vet langchain-ai/langchain as a dependency"
- "Compare the commitment scores of fastapi/fastapi vs tiangolo/sqlmodel"
- "Is this AI framework abandoned? run-llama/llama_index"
Source: github.com/piiiico/proof-of-commitment
Landing: getcommit.dev
The thesis: content is free to fake. Commitment is not. A repo with 9 years of consistent commits is a stronger trust signal than any marketing page.