Ranajoy
Where Did My Claude Code Money Go? I Built a Tool to Find Out

I still remember the message. A developer on my team - sharp, careful - pinged me: "My Claude Code bill spiked $200 this week. Same workflow. Something's off."

I had no answer. The built-in usage view showed session totals. The web billing page showed monthly aggregates. But neither could answer the questions that mattered: which specific turn ate the money, and how do I improve the way I use Claude Code?

That gap - between knowing what you spent and seeing exactly why - is what this tool closes.


What It Is

Claude Code Cost Explorer is a local dashboard that reads the session files Claude Code already stores on your machine and shows you cost down to the individual turn. No API keys. No databases. No instrumentation. Just pip install, run one command, and your browser opens.

  • Day view: Cost trends, session count, API calls - “Where did that $100 day come from?”
  • Session view: Per‑project cost, token totals, timestamps - “Did the auth-refactor session really cost $12?”
  • Turn view: Full prompt, thinking blocks, tool call inputs/outputs, per‑turn cost, model used - “Wait, that one turn cost $4.20?”

It doesn’t predict. It doesn’t optimize. It just surfaces exactly what happened - with cost attached to every breath the agent took.
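Under the hood, the idea is straightforward: each Claude Code session lives in a local JSONL file of events, and summing token usage per assistant message yields per-turn numbers. Here is a minimal sketch of that parsing step; the field names (`message.usage`, `input_tokens`, `output_tokens`) reflect my reading of the session format and are assumptions, not necessarily what the tool actually parses.

```python
import json
from pathlib import Path

def turn_token_totals(session_path: Path) -> list[dict]:
    """Read one session JSONL file and return per-turn token usage.

    Assumes assistant records carry a `message.usage` block with
    `input_tokens` / `output_tokens` keys (illustrative schema).
    """
    turns = []
    for line in session_path.read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        usage = record.get("message", {}).get("usage")
        if usage:
            # One entry per billable assistant turn
            turns.append({
                "model": record["message"].get("model", "unknown"),
                "input_tokens": usage.get("input_tokens", 0),
                "output_tokens": usage.get("output_tokens", 0),
            })
    return turns
```

Everything downstream - day, session, and turn views - is aggregation over records like these.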


Why Nothing Else Fit

I searched. Many tools give summaries, live burn‑rate counters, or heavy dashboards. But I couldn't find a single one that would show me why a session cost $12 - down to the turn that triggered 3 Read calls, a broad Glob, and a long thinking chain. So I built it.


What You Actually See

Claude Code already stores its session data locally. This tool reads those files - no uploads, no network calls - and renders:

  • User prompts - exactly what you asked
  • Agent workflow - thinking blocks, tool calls (Read, Write, Glob, Bash), inputs
  • Tool results - raw outputs Claude received
  • Per‑turn cost - calculated from local pricing tables
  • Model tags - which Claude model handled that turn, and what it cost
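The per-turn cost line is just arithmetic over a pricing table. A hedged sketch of that calculation - the prices and usage keys below are illustrative placeholders, not the tool's actual tables:

```python
# Hypothetical per-million-token prices; the real tool ships its own tables.
PRICING = {
    "claude-sonnet-4": {
        "input": 3.00, "output": 15.00,
        "cache_read": 0.30, "cache_write": 3.75,
    },
}

def turn_cost(model: str, usage: dict) -> float:
    """Estimate one turn's cost in dollars from its token usage."""
    p = PRICING[model]
    dollars = (
        usage.get("input_tokens", 0) * p["input"]
        + usage.get("output_tokens", 0) * p["output"]
        + usage.get("cache_read_input_tokens", 0) * p["cache_read"]
        + usage.get("cache_creation_input_tokens", 0) * p["cache_write"]
    )
    return dollars / 1_000_000  # prices are per million tokens
```

Attach that number to every turn and the "$4.20 turn" jumps out of the list on its own.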

The real value kicks in when you start noticing patterns.

I once spotted that a simple “fix the build” prompt caused Claude to recursively Glob through node_modules - not because it needed to, but because the prompt wasn't precise enough. An imprecise request that quietly cost $3.70 in a single turn. I refined the prompt; the next identical task cost under $0.40.

Another time, I noticed Claude repeatedly pulling AWS CloudWatch log groups and metrics across multiple debugging sessions. Same access pattern, same API calls, same tokens burned. That repetition was a clear signal: this is a reusable workflow. I ended up building a skill to package that exact diagnostic routine - subsequent sessions that would have cost $8-10 each now run in a fraction of the time and cost.

Without turn-level visibility, those patterns stay invisible. With it, you move from cost surprise to cost control - and from one‑shot prompts to smart, reusable tooling.


Install & Run

```shell
pip install claude-code-cost-explorer
ccx
# Automatically opens http://127.0.0.1:5000
```

Requirements: Python 3.10+, Claude Code installed and used at least once.

The architecture is intentionally minimal - standard library, simple templates, tests. No frontend build step. No database migrations. The smarts are in the log parsing and cost attribution, not in the plumbing.
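To illustrate that philosophy: a local dashboard like this can be served with nothing beyond the standard library. This is my own sketch of the shape, not the project's actual server code - the handler and `run` function here are hypothetical.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class DashboardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The real tool would render parsed session data here;
        # this stub returns a static page to show the structure.
        body = b"<html><body><h1>Cost Explorer</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

def run(host="127.0.0.1", port=5000):
    """Serve the dashboard until interrupted."""
    HTTPServer((host, port), DashboardHandler).serve_forever()
```

No framework, no build step - which is why a `pip install` is the whole setup story.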


Honest Limitations

  • Not exact billing. Local cost estimates; typically within 1-2% of your actual Anthropic Claude Code bill.
  • Not a CLI utility. For lightning‑fast terminal checks, other lightweight tools shine. This is for when you need to look inside a session.
  • Not teamware. No multi‑user dashboards, no RBAC.

Why I Built This

I’m an AIOps/MLOps engineer and solution architect. I spend my days obsessing over system design, architecture optimizations, and making complex pipelines observable and cost‑efficient. I instrument production APIs, ML models, and cloud infrastructure. But my daily coding agent - burning $200-400 a month, firing thousands of API calls - operated with less visibility than a $5 cloud function. That felt broken.

This started as a weekend project to answer a single question: “Where did that $100 day come from?” My colleagues started using it. It worked. So I decided to take the next step - I wanted to understand how open‑source software is actually built, packaged, and shared with the world. This is my first attempt, small in scale, but an attempt nonetheless.

Now it's out there. If it helps you move from cost anxiety to cost clarity - and maybe even inspires you to build a skill from a pattern you discover - that's the whole win.

File issues. Fork it. The code stays small so it's approachable.

github.com/ranajoy-dutta/claude-code-cost-explorer
