TL;DR — Your AI coding agent's quality is capped by the quality of its context. Most devs have stale, generic, or missing CLAUDE.md / .cursorrules files. Caliber fixes this in one command — scans your repo, generates tailored configs, recommends MCPs, and gives you a 0-100 setup score.
---
## The problem nobody talks about
Everyone's debating which AI coding tool is best — Claude Code vs Cursor vs Codex. Meanwhile, the real reason most developers aren't getting great results is upstream of all of them:
Their project context is wrong.
Your `CLAUDE.md` was written in 20 minutes when you first set up the project. Your `.cursorrules` was copy-pasted from a Reddit thread. Neither has been touched since.
And your AI agent is making decisions based on that stale, inaccurate information every single session.
## What bad context actually looks like
Here are the most common failure modes I've seen:
🚨 Stale architecture — Your `CLAUDE.md` says you use REST APIs, but you migrated to GraphQL 3 months ago. The agent keeps generating REST patterns.
🚨 Contradictory rules — Old rules say "use CommonJS", newer ones say "use ESM". The agent picks one arbitrarily.
🚨 No MCP coverage — You're running PostgreSQL and there's a great Postgres MCP that would let your agent query your schema directly. You've never heard of it.
🚨 Config drift — You refactor every week. Your AI config was updated once, on day one.
🚨 Team inconsistency — One dev has MCPs set up, another doesn't. Rules differ across machines. There's no source of truth.
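The contradictory-rules failure is the easiest one to reproduce. Here's an invented but typical `.cursorrules` excerpt of the kind that accumulates over a year:

```text
# .cursorrules (invented excerpt for illustration)

# (rule from project setup)
Always use CommonJS require(); do not use import/export.

# (rule added after the ESM migration)
All new code must use ESM import and export syntax.
```

Nothing in the file says which rule wins, so the agent resolves the conflict differently from session to session.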
## Meet Caliber
Caliber is a CLI tool that solves this with one command. It scans your project and:
- ✅ Generates a tailored `CLAUDE.md` with your actual stack, architecture, and commands
- ✅ Creates `.cursorrules` / `.cursor/rules/*.mdc` matching your dependencies
- ✅ Recommends MCPs you should install based on what you're running
- ✅ Deletes stale rules that contradict your current code
- ✅ Scores your setup 0–100 across 6 dimensions
And it can run continuously — so configs stay fresh as your code evolves.
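"Tailored" is the key word: the value isn't that a `CLAUDE.md` exists, it's that it reflects the repo it sits in. As an illustration (an invented excerpt, not verbatim Caliber output), a useful one for a typical Next.js + TypeScript app reads something like this:

```markdown
# CLAUDE.md (illustrative excerpt; real output varies per repo)

## Stack
Next.js 14 (App Router), TypeScript strict mode, Tailwind CSS, PostgreSQL

## Commands
- `npm run dev`: local dev server
- `npm run test`: unit tests
- `npm run lint`: eslint + typecheck

## Conventions
- Server components by default; add "use client" only where needed
- All data access goes through the GraphQL layer, not ad-hoc REST calls
```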
## See it in action
Here's what `caliber init` looks like on a real Next.js + TypeScript project:
```bash
$ caliber init

Scanning project structure...
Detected: TypeScript, React, Next.js, Tailwind CSS
Detected: 847 files, 12 dependencies with AI relevance

Config files:
  + create  CLAUDE.md                   project context
  + create  .cursorrules                cursor rules
  ~ modify  .cursor/rules/testing.mdc   outdated patterns
  - delete  .claude/rules/old-api.md    stale, contradicts code

Skills:
  + create  .claude/skills/deploy.md    deploy flow
  + create  .cursor/skills/review.md    code review

MCP Recommendations:
  • @modelcontextprotocol/server-postgres   (detected: pg in package.json)
  • @modelcontextprotocol/server-github     (detected: .git, gh-cli)
  • @upstash/context7-mcp                   (detected: React 18+)
```
Notice it's not just adding files — it also deletes stale ones and modifies outdated patterns.
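The MCP recommendations are the part most people are missing entirely. If you act on the Postgres one, wiring it up is a few lines of config. The snippet below assumes the standard `mcpServers` shape that Claude Code reads from a project-scoped `.mcp.json`; the connection string is a placeholder:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/your_database"
      ]
    }
  }
}
```

That's the point of the recommendation: the agent can query your real schema instead of guessing at table names.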
## The setup score: caliber score
This is my favorite feature. Run `caliber score` on any project and get a deterministic 0–100 grade — no LLM needed, works offline.
```bash
$ caliber score

Score: 87/100   Grade: A

Existence   23/25  • Config files present, cross-platform parity
Quality     22/25  • Commands documented, no bloat, no vague rules
Coverage    18/20  • Dependencies & services reflected in configs
Accuracy    13/15  • Documented commands & paths actually exist
Freshness    8/10  • Config recency, no leaked secrets
Bonus         3/5  • Hooks, AGENTS.md, OpenSkills format
```
This gives you an objective baseline before onboarding a new dev, switching AI tools, or auditing your setup.
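The "deterministic, no LLM" part is what makes the number useful as a baseline: the same repo always scores the same. As a rough sketch (not Caliber's actual scoring code; the files and weights are invented for illustration), an existence-style check is just rule-based lookups summed into a weighted total:

```typescript
// Illustrative sketch only, not Caliber's scoring code.
// A deterministic "existence" check: look for config files, sum weights.
import { existsSync } from "node:fs";
import { join } from "node:path";

// Hypothetical files and weights, chosen to sum to the 25-point bucket above.
const existenceChecks = [
  { path: "CLAUDE.md", points: 10 },
  { path: ".cursorrules", points: 5 },
  { path: ".cursor/rules", points: 5 },
  { path: ".mcp.json", points: 5 },
];

function existenceScore(repoRoot: string): number {
  return existenceChecks.reduce(
    (total, check) =>
      existsSync(join(repoRoot, check.path)) ? total + check.points : total,
    0,
  );
}

console.log(`Existence: ${existenceScore(process.cwd())}/25`);
```

Because nothing calls a model, the check is fast, free, and repeatable anywhere, including machines with no network access.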
## The full CLI — 4 commands, that's it
| Command | What it does |
|---|---|
| `caliber init` | Scan repo, generate/update all config files |
| `caliber score` | Rate your setup 0–100 (offline, no LLM) |
| `caliber recommend` | Discover MCPs and skills for your stack |
| `caliber config` | Set provider, API key, model |
Works with: Claude Code, Cursor, OpenAI Codex
No API key needed with Claude Code or Cursor — uses your existing subscription.
## Why this matters for teams
This isn't just a solo-dev tool. The consistency problem is worse at the team level:
> "One dev has MCPs configured, another doesn't. Cursor rules differ across machines. Nobody knows which CLAUDE.md is the canonical one."
With Caliber, you commit your configs to git like any other file. Every developer who runs `caliber init` gets the same baseline — and the same AI agent experience. New team members are set up in 30 seconds.
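Committing configs also means you can surface drift where the team already looks: CI. A minimal GitHub Actions sketch (my own example, not something Caliber ships) that prints the score on every pull request:

```yaml
# Hypothetical CI step: surface the Caliber score on every PR.
# caliber score is offline and needs no API key, so no secrets are involved.
name: caliber-score
on: [pull_request]
jobs:
  score:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install -g @rely-ai/caliber
      - run: caliber score
```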
## Get started in 30 seconds
```bash
npm install -g @rely-ai/caliber
caliber init
```
That's it. No API key needed if you're on Claude Code or Cursor.
🔗 GitHub: https://github.com/rely-ai-org/caliber
📦 npm: https://www.npmjs.com/package/@rely-ai/caliber
💬 Discord: https://discord.gg/XUNaJEsw
MIT licensed. Open source. Your code never leaves your machine.
---
Built at Rely AI. If this saved you time, a ⭐ on GitHub goes a long way — and PRs are very welcome.