A few weeks ago I logged into Claude Code, typed /cost, and got back this:
With your Claude Max subscription, no need to monitor cost.
Two days later I had used 91% of my weekly limit in a single morning of work. I had no idea which session, which project, or which model was responsible. I tried ccusage (which is great) and it gave me totals — but I wanted to ask questions like "which of my projects is eating Opus tokens unnecessarily?" and "what's my actual cache hit rate over the last 30 days?" Those answers weren't there.
So I built kerf-cli — a local-first cost intelligence tool for Claude Code. This post is about why it exists, what it does, and what I learned about Claude Code billing along the way.
The actual problem
Anthropic gives you a lot of data and almost no analytics on top of it. Every Claude Code session is logged to ~/.claude/projects/<encoded-cwd>/<session-id>.jsonl with full token breakdowns: input, output, cache_read, cache_creation, model, timestamp, git branch. The data is rich. The tooling on top of it is thin.
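To make that concrete, here is roughly what one of those records carries once typed out. The interface below is my approximation for illustration only; the real JSONL nests usage inside the message payload and exact field names vary by Claude Code version:

```typescript
// Approximate shape of a Claude Code session record (illustrative;
// not the exact on-disk schema).
interface SessionRecord {
  timestamp: string;   // ISO 8601
  model: string;
  gitBranch?: string;
  usage: {
    input_tokens: number;
    output_tokens: number;
    cache_read_input_tokens: number;
    cache_creation_input_tokens: number;
  };
}

// Total billable tokens for one record, all categories combined.
function totalTokens(r: SessionRecord): number {
  const u = r.usage;
  return (
    u.input_tokens +
    u.output_tokens +
    u.cache_read_input_tokens +
    u.cache_creation_input_tokens
  );
}
```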
Here's what existed when I started:
- Claude Code's /cost command — current session only, and it actively discourages Max subscribers from looking
- Anthropic's web console — org-level dashboards for Teams/Enterprise, nothing for solo developers on Pro/Max
- ccusage — excellent for quick reports, but parses JSONL on every invocation with no persistence
- A handful of other CLIs and menu-bar apps — most are read-only reporters
What I wanted was three things none of them offered:
- A persistent analytical layer I could query with SQL
- Real budget enforcement that blocks Claude Code when I exceed a cap, not just a warning
- Concrete optimization recommendations — not "you spent $47" but "switch these 12 sessions from Opus to Sonnet and you'll save $140/month"
How kerf works
Kerf is a TypeScript CLI built on commander, ink, and better-sqlite3. The architecture is dead simple: it reads Claude Code's existing JSONL session files, ingests them into a local SQLite database, and then every command and the web dashboard query that database directly. You run kerf sync once and it ingests every Claude Code session you've ever had. Subsequent syncs are incremental — only changed files are re-parsed. Then everything else is fast queries against the local DB.
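The incremental part can be sketched as a pure function (my reading of the approach, not kerf's actual code): record each session file's mtime at sync time, and on the next sync re-parse only files whose mtime has changed.

```typescript
// Sketch of an incremental-sync decision: compare each file's
// current mtime against the mtime recorded at the last sync.
// Names and structure here are assumptions for illustration.
type FileStat = { path: string; mtimeMs: number };

function filesToReparse(
  seen: Map<string, number>, // path -> mtime recorded at last sync
  current: FileStat[],
): string[] {
  return current
    .filter((f) => seen.get(f.path) !== f.mtimeMs)
    .map((f) => f.path);
}
```

New files fall out of the same comparison for free: a path missing from the map never equals its current mtime, so it gets parsed.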
The commands that matter
kerf summary
The bread and butter — what did I spend?
$ kerf summary --period week --by-project
For week (Apr 1 → Apr 7):
Total cost: $178.04
Sessions: 25
Tokens: 454.0M
Cache hit: 98%
By project:
projects $117.00 (66%) 5 sessions
subagents $42.50 (24%) 14 sessions
kerf $14.54 (8%) 4 sessions
kerf query
The SQL escape hatch I built mostly for myself:
$ kerf query "SELECT date(timestamp) as day, ROUND(SUM(cost_usd), 2) as cost
FROM messages
WHERE timestamp > date('now', '-7 days')
GROUP BY day ORDER BY day DESC"
day cost
2026-04-07 $46.62
2026-04-06 $11.84
2026-04-04 $33.51
2026-04-03 $28.20
2026-04-02 $14.33
The --examples flag prints a dozen useful queries to copy. The --schema flag prints the database schema. Writes are blocked — only SELECT statements are allowed.
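A minimal version of that write-blocking guard might look like this (an illustration of the idea, not kerf's implementation):

```typescript
// Reject anything that isn't a single SELECT statement. This is a
// deliberately simple string check for illustration; a real guard
// would also inspect the prepared statement at the database layer.
function assertReadOnly(sql: string): void {
  const trimmed = sql.trim().replace(/;+\s*$/, "");
  if (!/^select\b/i.test(trimmed) || trimmed.includes(";")) {
    throw new Error("Only single SELECT statements are allowed");
  }
}
```

String inspection alone has edge cases (a CTE starts with WITH, not SELECT), which is why checking the prepared statement's metadata at the driver level is the more robust complement.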
kerf efficiency
The command that actually saves money. This is the one I use every Monday morning.
Running it for the first time was the moment kerf paid for itself. I had been blindly using Opus for everything because it was the default. The analyzer pointed out that most of my Opus sessions had patterns that would have been fine on Sonnet — file edits, small refactors, dependency bumps. I switched those workflows, cut a meaningful chunk of my monthly Claude bill, and noticed zero quality difference.
kerf budget + kerf init --enforce-budgets
The killer feature. kerf budget set 50 --period weekly sets a cap, then kerf init --enforce-budgets installs a Claude Code PreToolUse hook that runs kerf budget check before every tool call. If you're over budget, the hook returns exit code 2 and Claude Code blocks the action.
This is the difference between finding out you blew your budget after the fact and not being able to blow it at all. Other tools warn. Kerf enforces.
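The decision the hook makes is tiny. Here is a sketch of that logic (function and variable names are hypothetical; the exit-code convention itself is how Claude Code hooks signal a block):

```typescript
// A PreToolUse hook that exits with code 2 tells Claude Code to
// block the pending tool call. Sketch of the budget decision;
// the real hook would read current spend from the SQLite database
// before exiting with this code.
function budgetExitCode(spentUsd: number, capUsd: number): number {
  return spentUsd >= capUsd ? 2 : 0;
}
```

Exit code 0 lets the tool call proceed; anything written to stderr alongside exit code 2 is surfaced back to Claude, which is how the model learns why it was stopped.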
kerf dashboard
The local web UI for visual people. Opens at http://localhost:3847 — SQLite-backed so queries are sub-100ms, three killer-feature cards (budget, efficiency, cache) front and center, sortable session table with drill-down, stacked cost-over-time chart by model. Zero auth, zero cloud, zero data leaving your machine. That's the screenshot at the top of this post.
What I learned about Claude Code billing
A few non-obvious things from spending too much time staring at JSONL files:
1. Cache reads can be 60–80% of your total cost. This was the biggest surprise. Cache reads are billed at 10% of standard input rate, which sounds cheap until you realize you're caching 50K tokens per turn and reading them on every message. Optimizing your CLAUDE.md and reducing cache invalidation was the biggest single lever I found.
2. Opus is the default and it almost never needs to be. I ran kerf efficiency on a month of data: 90% of my Opus tokens were on sessions that had no complexity signal (no debugging, no architecture decisions, no large refactors — just file edits and small fixes). Switching them to Sonnet was a 4x cost reduction with no measurable quality drop.
3. Claude Code's JSONL streams partial usage updates. When you parse them, you have to keep the MAX value per field across duplicate message IDs, not the latest value. I learned this the hard way — my v2.1 parser was undercounting input tokens because it kept the "last" entry instead of the "max," which meant the final zero-input chunk overwrote the real numbers from earlier chunks. Fixed before launch, but it's a subtle trap anyone parsing these logs will hit.
4. The 5-hour billing window is real but Anthropic doesn't expose it clearly. Max subscribers are billed against a rolling 5-hour window, not a daily quota. If you don't track this, you can get surprised when the window rolls over mid-session.
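The cache math in point 1 is worth working through once. Using example Opus-class rates (assumed here for illustration: $15 per million input tokens, with cache reads billed at 10% of that):

```typescript
// Back-of-envelope for why cache reads dominate. Rates below are
// example numbers, not a pricing reference.
const INPUT_PER_MTOK = 15.0;
const CACHE_READ_PER_MTOK = INPUT_PER_MTOK * 0.1; // $1.50 per MTok

// 50K cached tokens re-read on every one of 200 turns in a day:
const cachedTokens = 50_000 * 200; // 10M tokens
const cacheCost = (cachedTokens / 1e6) * CACHE_READ_PER_MTOK; // $15.00

// The same 10M tokens at the full input rate would cost $150;
// cheap per token, but it adds up when every turn re-reads it.
```

That is the shape of the trap: the per-token discount is real, but the volume multiplier from re-reading the cache every turn is what ends up on the bill.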
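The dedup rule from point 3 can be sketched in a few lines: when the same message id shows up across multiple partial chunks, keep the per-field maximum rather than the last value seen. (Types and names below are mine, for illustration.)

```typescript
// Merge partial usage chunks by message id, keeping the MAX of
// each token field. Keeping the "last" value instead is the bug
// described above: a final zero-input chunk would overwrite the
// real counts from earlier chunks.
type Usage = { input: number; output: number; cacheRead: number };

function mergeUsage(chunks: Array<{ id: string; usage: Usage }>) {
  const byId = new Map<string, Usage>();
  for (const { id, usage } of chunks) {
    const prev = byId.get(id);
    byId.set(
      id,
      prev
        ? {
            input: Math.max(prev.input, usage.input),
            output: Math.max(prev.output, usage.output),
            cacheRead: Math.max(prev.cacheRead, usage.cacheRead),
          }
        : { ...usage },
    );
  }
  return byId;
}
```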
Technical decisions I'd defend
- SQLite over a JSON file. JSON is fine for ccusage's "read once, compute, discard" model. For an analytics layer you want sub-100ms queries, joins, aggregations, and a schema migration story. SQLite via better-sqlite3 is the right tool.
- Local-only over cloud-first. The moment you add cloud sync, you need auth, storage, privacy controls, GDPR compliance, a business model. None of that serves the primary use case of "show me where my money went." Kerf is local-first on purpose. A hosted team tier is on the roadmap but will always be optional.
- Ink for the terminal UI. React components in the terminal feel weird at first but the composability is worth it.
- Hooks as the enforcement mechanism. Claude Code ships a native hook system. Using hooks means kerf doesn't have to intercept or proxy Claude Code's traffic — it just responds to events Claude Code already emits.
What's next
v1 is focused on doing one thing well: Claude Code observability for individual developers. The roadmap from here:
- v2.x: Cursor and Codex support
- v2.x: Slack/Discord alerts on budget thresholds
- v2.x: GitHub Actions integration for cost gates on PRs
- v3.x (paid team tier): cloud sync, team aggregation, SSO
But none of that ships before I'm sure v1 is rock solid. The CLI will always be free and MIT licensed.
Try it
npm install -g kerf-cli
kerf sync
kerf summary
GitHub: github.com/dhanushkumarsivaji/kerf-cli
Show HN discussion: news.ycombinator.com/item?id=47683060
If you've hit billing surprises with Claude Code, I'd love to hear about them in the comments. The more weird patterns I see, the better the analyzer gets.
