Remember before Lighthouse? Web performance was a black box. You knew your site felt slow, but you didn't have a standardized way to measure it, benchmark it, or explain it to stakeholders.
Lighthouse changed that. One URL, one score, actionable breakdown. Suddenly performance was a conversation everyone could have, not just the senior engineer who knew their way around Chrome DevTools.
## Code repos have the same problem today
Most developers can tell you whether a repo 'feels' well-maintained. But there's no standardized score. No quick way to benchmark. No shared language between the developer who maintains it and the manager who funds it.
The signals exist — CI pipelines, test coverage, dependency health, branch protection, type safety, dead code, security — but nobody aggregates them into a single, comparable number.
## Why this matters now
Two trends are colliding:

1. **AI coding tools are producing repos faster than ever.** Claude Code, Cursor, Windsurf — developers are shipping in hours what used to take weeks. But the AI focuses on working code, not operational readiness.
2. **Open-source dependency chains are deeper than ever.** When you pick a starter template or library, you're inheriting its infrastructure patterns. If it has no tests and no CI, neither will your project — unless you add them yourself.
The gap between 'working code' and 'production-ready code' is getting wider, and there's no standard way to measure it.
## What a Lighthouse for repos looks like
We built RepoFortify to be that standard. Paste a public GitHub URL, get a score out of 100 across 9 signals:
- CI pipeline (15%)
- Test coverage (25%)
- Dependency health (10%)
- Branch protection (10%)
- Type safety (10%)
- Dead code (10%)
- Exposed routes (5%)
- Documentation (10%)
- Security headers (5%)
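To make the scoring model concrete, here's a minimal sketch of how a weighted composite like this could be computed. The weights mirror the list above; the per-signal results (0 = fails, 1 = passes fully) are illustrative placeholders, not RepoFortify's actual API or detection logic.

```typescript
// Hypothetical sketch of a weighted repo score.
// Signal names and weights come from the breakdown above;
// the `result` values (0–1) are made-up example data.

type Signal = { name: string; weight: number; result: number };

const signals: Signal[] = [
  { name: "CI pipeline",       weight: 0.15, result: 1.0 },
  { name: "Test coverage",     weight: 0.25, result: 0.6 },
  { name: "Dependency health", weight: 0.10, result: 0.8 },
  { name: "Branch protection", weight: 0.10, result: 0.0 },
  { name: "Type safety",       weight: 0.10, result: 1.0 },
  { name: "Dead code",         weight: 0.10, result: 0.9 },
  { name: "Exposed routes",    weight: 0.05, result: 1.0 },
  { name: "Documentation",     weight: 0.10, result: 0.5 },
  { name: "Security headers",  weight: 0.05, result: 1.0 },
];

// Score out of 100: weighted sum of per-signal results.
// Weights sum to 1.0, so a repo passing everything scores 100.
function score(signals: Signal[]): number {
  const total = signals.reduce((sum, s) => sum + s.weight * s.result, 0);
  return Math.round(total * 100);
}

console.log(score(signals)); // → 72 for this example repo
```

Because test coverage carries the largest weight (25%), a repo with green CI but thin tests still lands well below the top band — which matches the 'working code' vs. 'production-ready code' gap described earlier.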
No signup, no paywall for public repos. We also ship an MCP package (`npx @repofortify/mcp`) so AI coding tools can run scans inline.