Vishaal LS

I Analyzed 38 Claude Code Sessions. Only 0.6% of Tokens Were Actual Code Output.

I kept hitting Claude Code's usage limits. No idea why.

So I parsed the local session files and counted tokens. 38 sessions. 42.9 million tokens.

Only 0.6% were Claude actually writing code.

The other 99.4%? Re-reading my conversation history before every single response.
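If you want to reproduce the counting yourself, here's a minimal sketch. It assumes Claude Code keeps sessions as JSONL files under `~/.claude/projects/` and that assistant entries carry a `message.usage` dict with Anthropic-API-style field names (`input_tokens`, `cache_read_input_tokens`, `output_tokens`); tokburn's actual parser may differ:

```python
import json
from pathlib import Path

def tally_usage(lines):
    """Sum input/output tokens across the lines of one JSONL session file."""
    totals = {"input": 0, "output": 0}
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue
        message = entry.get("message")
        if not isinstance(message, dict):
            continue  # user entries and metadata rows carry no usage
        usage = message.get("usage") or {}
        # Re-reads show up as fresh input plus cache reads;
        # actual writing is output_tokens.
        totals["input"] += usage.get("input_tokens", 0)
        totals["input"] += usage.get("cache_read_input_tokens", 0)
        totals["output"] += usage.get("output_tokens", 0)
    return totals

if __name__ == "__main__":
    sessions = Path.home() / ".claude" / "projects"  # assumed location
    grand = {"input": 0, "output": 0}
    if sessions.is_dir():
        for path in sessions.rglob("*.jsonl"):
            t = tally_usage(path.read_text().splitlines())
            grand["input"] += t["input"]
            grand["output"] += t["output"]
    total = grand["input"] + grand["output"]
    if total:
        print(f"Claude wrote {grand['output'] / total:.1%} of {total:,} tokens")
```

That last percentage is the 0.6% number.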


Not as scary as it sounds

Input tokens (Claude reading) cost $3 per million on Sonnet.
Output tokens (Claude writing) cost $15 per million.

So that tiny 0.6% of writing carries 5x the per-token cost. The re-reading is cheap on its own.
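To put numbers on that, a minimal cost sketch. The $3/$15 rates are from above; the cache-read rate and the token split are my own assumptions (flagged in the comments), not tokburn's actual accounting:

```python
# Rates from the post: $3/M input, $15/M output (Sonnet).
# The cache-read rate is my addition: Anthropic bills cached reads at a
# fraction of the input rate (around a tenth), which is why a mountain
# of re-read tokens doesn't translate into a mountain of dollars.
RATES = {"input": 3.00, "cache_read": 0.30, "output": 15.00}  # $ per million

def api_cost(tokens):
    """tokens: dict of token counts keyed like RATES."""
    return sum(tokens.get(kind, 0) / 1e6 * rate for kind, rate in RATES.items())

# Illustrative session: a sliver of fresh input, a huge pile of cached
# re-reads, a tiny amount of actual writing (numbers made up).
session = {"input": 500_000, "cache_read": 42_000_000, "output": 260_000}
print(f"${api_cost(session):.2f}")
```

Note how the cache reads dominate the token count but not the bill.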

The problem is compounding.

Every message you send, Claude re-reads your entire history. Message 1 reads nothing. Message 50 re-reads messages 1 through 49. By message 100, it's re-reading everything.
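That pattern makes total input grow quadratically with message count. A toy model, assuming a flat average number of tokens added per message (real sessions vary wildly):

```python
def total_input_tokens(n_messages, tokens_per_message=500):
    """Message k re-reads the ~(k - 1) * m tokens before it, so the
    total over n messages is m * n * (n - 1) / 2: quadratic, not linear."""
    return tokens_per_message * n_messages * (n_messages - 1) // 2

for n in (10, 50, 100):
    print(f"{n:>3} messages -> {total_input_tokens(n):>9,} input tokens")
```

Doubling a session from 50 to 100 messages roughly quadruples the re-read total, which is why long sessions blow up.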

My worst session hit $6.30 in equivalent API cost; the median was $0.41. The difference? I let the worst one run 5+ hours without /clear.

[Image: TokBurn dashboard showing $26.37 in equivalent API cost across 38 sessions]


Lazy prompts are secretly expensive

A prompt like "do it" costs nearly the same as a detailed paragraph. Your message is tiny compared to the history being re-read alongside it.

But detailed prompts get results in fewer rounds. Fewer rounds = less compounding. "Add input validation to the login function in auth.ts" beats "fix the auth stuff" because it finishes in one shot instead of three.
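A back-of-the-envelope comparison, with a 30k-token history and message sizes I made up for illustration (rates are the $3/$15 Sonnet figures from above):

```python
IN_RATE, OUT_RATE = 3.00, 15.00  # $ per million tokens

def round_cost(history, prompt, reply):
    """Cost of one exchange: Claude reads history + prompt, writes reply."""
    return (history + prompt) / 1e6 * IN_RATE + reply / 1e6 * OUT_RATE

history = 30_000
# One detailed prompt that lands in a single round:
one_shot = round_cost(history, prompt=40, reply=800)
# Three vague rounds; the history grows after every exchange:
vague = 0.0
h = history
for _ in range(3):
    vague += round_cost(h, prompt=10, reply=800)
    h += 810  # prompt + reply join the history for the next round
print(f"one shot: ${one_shot:.4f}  three rounds: ${vague:.4f}")
```

In this toy model the vague version costs a bit more than 3x the one-shot, and the gap widens as the history gets longer.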


What actually helped

Use /clear between unrelated tasks. Your test-writing agent doesn't need your debugging context.

Keep sessions under 60 minutes. After that, context compaction kicks in and you lose earlier decisions.

Be specific. Fewer rounds = less compounding = lower cost.


I built a tool for this

I wanted to keep tracking this over time, so I packaged it up.

uvx tokburn serve

One command. Local dashboard. Nothing installed permanently. Nothing leaves your machine.

Or, for a permanent install: pip install tokburn && tokburn serve

Shows equivalent API cost per session, daily trends, waste detection, and the "Claude Wrote" percentage.

Someone on LinkedIn ran it on 1,765 sessions: $5,209 equivalent API cost. Max plan paying for itself many times over.

GitHub: github.com/lsvishaal/tokburn

If you try it, drop your numbers in the comments. Genuinely curious about your stats.


First open-source project. First DEV.to post. Python + FastAPI. MIT licensed.
