Patric
I built a token cost tracker for Claude Code - and it changed the way I think about AI development

It started with a number I couldn’t explain

A few months ago, I looked at my Anthropic billing dashboard and saw a number that didn’t add up. I knew roughly how much I’d worked with Claude Code. The bill said otherwise.
I couldn’t tell which sessions were expensive. I couldn’t tell why. I only had a total figure at the end of the month—and no way to link it to specific work.
That bothered me. Not because of the money—but because of the lack of visibility. And as someone who’s worked in IT for 30 years, I know: What you can’t measure, you can’t improve.
So I built TRACE.

What I thought I was building—and what I actually built

The original idea was simple: log token consumption per session, calculate costs, store in SQLite. An afternoon project.
What I actually built—over weeks of iteration with Claude Code itself as a development partner—is a local MCP server that:

  • Tracks token costs per project and session in real time
  • Detects session health (green / yellow / red) based on configurable thresholds
  • Sends native macOS notifications when expensive limits are exceeded – no dashboard required
  • Automatically keeps AI_CONTEXT.md up to date via Git hooks
  • Generates enriched handoff prompts when starting a new thread
  • Displays everything in a web dashboard with a 7-day history, provider badges, and live multi-session tracking
  • Runs directly in the VS Code Simple Browser Panel – no external browser required
  • Optionally starts automatically at Mac login via LaunchAgent

TRACE is MIT-licensed: free to use, forkable, no restrictions, with solid test coverage ensuring it works in real-world scenarios. The gap between an "afternoon project" and a production-ready open-source tool says something about how AI-powered development really works when you seriously commit to it.

What Nobody Talks About: Context Rot

Here’s what I’ve learned—and what I’ve never heard in any podcast, video, or presentation:
Token costs do not scale linearly with the work. They scale with the session length.
Every turn in a Claude Code session appends to the conversation history. By turn 50, every new message carries the weight of the previous 49 turns as input tokens. By turn 100, you’re burning tens of thousands of tokens per message—just to maintain context, even if the actual task is small.
Anthropic calls this “context rot” in its documentation: As the number of tokens grows, accuracy and recall degrade. The model doesn’t lose tokens—it loses attention. It has to spread its focus across an ever-growing history.
The practical consequence: A 300-turn session doesn’t just cost more than 30 sessions of 10 turns each. It costs significantly more—and the quality of the later turns is measurably worse.
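The arithmetic behind this is easy to check with a toy model. The sketch below assumes a flat 500 new tokens per turn (a made-up figure for illustration) and counts only the input-token side: every turn resends the entire history.

```python
# Toy model (not TRACE's code): why cost tracks session length, not work done.
# Assumption: each turn adds ~500 tokens, and every turn resends the full
# history so far as input tokens.
def cumulative_input_tokens(turns: int, tokens_per_turn: int = 500) -> int:
    """Total input tokens billed across a session of `turns` turns."""
    total = 0
    history = 0
    for _ in range(turns):
        history += tokens_per_turn   # the new turn joins the history
        total += history             # the whole history is sent as input
    return total

# One 300-turn session vs. thirty 10-turn sessions (same amount of "work"):
one_long = cumulative_input_tokens(300)      # 22,575,000 input tokens
many_short = 30 * cumulative_input_tokens(10)  # 825,000 input tokens
```

Under these assumptions the single long session bills roughly 27x the input tokens of the split sessions, because the history term grows quadratically with turn count.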

Push Instead of Pull: Why Built-in Commands Aren’t Enough

Claude Code has built-in visibility tools. /cost shows the current session usage. /context visualizes context window utilization. /stats provides usage statistics.
These are pull mechanisms. You have to remember to use them. You have to be curious enough to check. You have to already suspect that something is wrong.
TRACE is a push mechanism. It comes to you.
When a session exceeds 80,000 tokens, a notification—a subtle Tink sound and a macOS alert—appears before it gets really expensive. At 150,000 tokens, a more distinct Funk signals that it’s time for a new thread. You don’t have to remember to check. TRACE checks for you.
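The push logic boils down to a small state machine plus a system notification. This is a minimal sketch of the idea, not TRACE's actual source; the thresholds are the defaults described above, and the notification goes out via `osascript`, which ships with macOS (the "Tink" and "Funk" sound names are standard macOS alert sounds).

```python
import subprocess

# Assumed default thresholds (configurable in TRACE).
WARNING, CRITICAL = 80_000, 150_000

def health(session_tokens: int) -> str:
    """Classify a session as green / yellow / red."""
    if session_tokens >= CRITICAL:
        return "red"
    if session_tokens >= WARNING:
        return "yellow"
    return "green"

def notify_macos(title: str, message: str, sound: str) -> None:
    # Native macOS notification via AppleScript.
    script = (f'display notification "{message}" '
              f'with title "{title}" sound name "{sound}"')
    subprocess.run(["osascript", "-e", script], check=False)

def check(session_tokens: int) -> None:
    state = health(session_tokens)
    if state == "yellow":
        notify_macos("TRACE", f"Session at {session_tokens:,} tokens", "Tink")
    elif state == "red":
        notify_macos("TRACE", "Time for a new thread", "Funk")
```

The point is that `check()` runs on every tracked update, so the user never has to remember to ask.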
The dashboard also runs directly in the VS Code Simple Browser—for those who want to keep everything in one interface.

Feature                      /cost   /context   TRACE
Current session usage          ✓        –         ✓
Visual context window          –        ✓         ✓
Cache tokens separately        –        –         ✓
Historical sessions            –        –         ✓
Cost per project               –        –         ✓
Monthly budget & alerts        –        –         ✓
Session health indicator       –        –         ✓
Push notifications             –        –         ✓
Handoff prompt                 –        –         ✓
AI_CONTEXT.md auto-update      –        –         ✓
VS Code Simple Browser         –        –         ✓
Autostart on Login             –        –         ✓

There is no universal token limit

A question that quickly arises during use: at what point is a session “too long”? The answer is: it depends.
80,000 tokens as a warning threshold is a good starting point—but a developer working on a small script will reach that after just a few turns, while someone refactoring a complex backend is still far from it.
TRACE therefore makes the thresholds configurable—directly in the dashboard or in the configuration file. Three recommendations for guidance:

Usage Type                      Warning    Critical
Economical – cost-conscious      50,000     100,000
Standard – recommended           80,000     150,000
Intensive – large projects      120,000     200,000

These numbers are not set in stone. They are a starting point that should be adjusted once you understand your own usage patterns.
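In code, the three presets are just a lookup table. The sketch below mirrors the table above; the preset names and the `thresholds()` helper are illustrative and TRACE's real configuration format may differ.

```python
# Hypothetical presets mirroring the recommendations above
# (TRACE's actual config file may look different).
PRESETS = {
    "economical": {"warning": 50_000,  "critical": 100_000},
    "standard":   {"warning": 80_000,  "critical": 150_000},
    "intensive":  {"warning": 120_000, "critical": 200_000},
}

def thresholds(profile: str = "standard") -> tuple[int, int]:
    """Return (warning, critical) token thresholds for a usage profile."""
    p = PRESETS[profile]
    return p["warning"], p["critical"]
```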

Note: TRACE tracks Claude Code sessions—i.e., work done in the terminal. For claude.ai web/desktop chats, a separate Anthropic Usage API is required, which is only available for Team and Enterprise accounts.

What a real day of usage looks like

I recently installed TRACE for a colleague. After one day of use, the logs told a clear story.
A single session showed over 37 million cache read tokens and nearly $20 in costs. The session had accumulated hundreds of turns—including many invisible tool-use turns that Claude Code generates internally for file reads, Bash commands, and code analysis. Each of these counts as a turn in the transcript, even if the user typed only 30 prompts.
The insight isn’t that something went wrong. The insight is: Without visibility, you don’t even ask the question in the first place.
TRACE makes the invisible visible. That’s all it does—but it turns out to be a lot.

The /resume Trap

One more thing worth knowing: Claude Code’s /resume command is more expensive than it looks.
When resuming a session, Claude Code sends the entire conversation history as input tokens—including invisible “Thinking Block Signatures” from Extended Thinking turns. These are base64-encoded, unreadable, and cannot be truncated. But they are sent to the API with every resume and billed accordingly.
Anthropic’s own GitHub issues document cases where resuming a 24-hour session cost ~156,000 input tokens—before the user had even typed a single character.
Anthropic’s own documentation is clear: Do not rely on session resumption. Save results as state and pass them to a fresh session.
TRACE’s new_session() tool does exactly that—it generates a compressed handoff prompt from AI_CONTEXT.md, CLAUDE.md, current Git changes, and the open task in the backlog. A new thread gets everything it needs in a few hundred tokens instead of hundreds of thousands.
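Conceptually, the handoff is just file assembly with aggressive truncation. The sketch below is an illustration of that idea, not TRACE's actual implementation: the file names come from the article, but `build_handoff()` and its format are hypothetical.

```python
from pathlib import Path

# Illustrative sketch of a handoff prompt builder (not TRACE's source).
# AI_CONTEXT.md and CLAUDE.md are the context files named in the article.
def build_handoff(project: Path, open_task: str, git_diffstat: str) -> str:
    parts = []
    for name in ("AI_CONTEXT.md", "CLAUDE.md"):
        f = project / name
        if f.exists():
            # Truncate hard: the goal is a few hundred tokens,
            # not hundreds of thousands.
            parts.append(f"## {name}\n{f.read_text()[:2000]}")
    parts.append(f"## Current Git changes\n{git_diffstat}")
    parts.append(f"## Open task\n{open_task}")
    return "\n\n".join(parts)
```

A fresh session fed this prompt starts with the project's state as text, instead of replaying the old transcript turn by turn.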

The Bugs That Taught Me the Most

The $0.0000 Sessions. TRACE logged sessions but showed zero cost. The model string in the transcripts was claude-sonnet-4-5-20250929. The config key was claude-sonnet-4-5. Exact matching failed. A line of prefix matching fixed it—but finding it really required digging into the logs.
The health indicator that disappeared after a refresh. Session Health turned red at 150,000 tokens—and then disappeared when the browser was refreshed because the state was stored in a JavaScript variable. The fix was to move the health state to a last_health.json file on the server side. Obvious in hindsight. Not obvious until a real user encountered it.
The AI_CONTEXT.md file that was out of date. The doc synthesizer only updated on feat: and fix: commits. A day with chore: and docs: commits left the context file four days out of date. Removing the commit type filter entirely fixed it.
Every bug taught me something about the gap between "works in theory" and "works when a real person uses it all day."

Where TRACE stands today

  • Solid test coverage, all tests passing
  • Tracks input, output, cache creation, and cache read tokens separately with correct pricing
  • Session Health: green under 80,000 tokens, yellow up to 150,000, red above that – all configurable
  • Native notifications on macOS, Windows, and Linux
  • Web dashboard with 7-day history, dark/light/auto theme, provider badges, live multi-session tracking
  • Runs in the VS Code Simple Browser – no external browser required
  • Optional autostart via macOS LaunchAgent
  • Enhanced Handoff prompts with current phase, open tasks, files to read

What I would tell someone just starting out

The most useful thing I built wasn’t the dashboard. It was the discipline of treating AI_CONTEXT.md as a first-class artifact. Every project gets one. Every commit potentially updates it. Every new session starts by reading it.
An AI assistant with full context is a different tool than one that starts from scratch. TRACE exists to make the former the standard—not the exception.
The second most useful thing: Measure before you optimize. I had strong intuitions about which sessions were expensive. The data contradicted most of them. The expensive sessions weren’t the ones with the difficult problems. They were the ones that ran for a long time.
And the third: Don’t wait for permission to build something useful.

TRACE is open source under the MIT license: github.com/MyPatric69/trace
If you find it useful: a star helps. If you find a bug: an issue helps more. If you build something with it: I’d love to hear about it.
