
gentic news

Posted on • Originally published at gentic.news

Codeburn: The TUI That Shows Exactly Where Your Claude Code Tokens Are Going

A new open-source TUI, Codeburn, analyzes Claude Code session transcripts to show token spend by task type, helping developers optimize their usage and costs.

The Problem: Blind Spending

Developers who use Claude Code heavily (the tool's creator reports spending $200+ daily) have had zero visibility into what actually consumes their tokens. Tools like ccusage show aggregate cost, but they don't answer the critical questions: Is debugging the budget killer? Which project is the most expensive? Is most of the spend just conversation?

The Solution: Codeburn

Codeburn is a new, open-source Terminal User Interface (TUI) that solves this. It reads the session transcripts Claude Code already stores locally (in ~/.claude/projects/) and classifies every interaction into one of 13 deterministic categories based on its tool-usage pattern, with no additional LLM calls required.
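To make "deterministic classification by tool usage" concrete, here is a minimal sketch of how such a rule-based classifier might look. The category names and matching rules are illustrative only; Codeburn's actual 13-category taxonomy and logic may differ.

```python
# Illustrative sketch of deterministic, tool-use-based turn classification.
# Category names and rules are hypothetical, not Codeburn's actual taxonomy.
def classify_turn(tools_used: list[str]) -> str:
    """Map the tools invoked in one assistant turn to a task category."""
    if not tools_used:
        return "conversation"      # no tool use at all
    tools = set(tools_used)
    if tools & {"Edit", "Write"}:
        return "coding"            # file edits or new files written
    if "Bash" in tools and tools & {"Read", "Grep"}:
        return "debugging"         # running commands while inspecting code
    if tools & {"Read", "Grep", "Glob"}:
        return "exploration"       # reading and searching only
    return "other"

print(classify_turn([]))               # conversation
print(classify_turn(["Read", "Edit"]))  # coding
print(classify_turn(["Bash", "Grep"]))  # debugging
```

Because the rules are pure pattern matching over the tool names already recorded in each transcript entry, the whole history can be classified locally and instantly, with no model calls.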

What It Shows You

  • Cost by Task Type: Coding (edits, writes), Debugging, Exploration, Brainstorming, and—crucially—Conversation (turns with no tool use).
  • Cost by Project, Model, Tool, and MCP Server: Pinpoint exactly which part of your workflow is driving costs.
  • Daily Activity Chart: Visual timeline with gradient bars showing activity peaks.
  • Interactive Views: Use arrow keys to switch between Today, Week, and Month summaries.
  • Swiftbar Widget (macOS): A menu bar widget for at-a-glance daily spend.
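Once each turn has been parsed and costed, breakdowns like the ones above reduce to simple group-by sums over one dimension at a time. A minimal sketch, with hypothetical field names (not Codeburn's internal schema):

```python
from collections import defaultdict

# Hypothetical parsed-turn records; field names are illustrative only.
turns = [
    {"project": "api", "model": "opus",   "category": "coding",       "cost": 0.25},
    {"project": "api", "model": "sonnet", "category": "conversation", "cost": 0.125},
    {"project": "web", "model": "opus",   "category": "debugging",    "cost": 0.5},
]

def spend_by(turns: list[dict], key: str) -> dict:
    """Sum cost grouped by one dimension (project, model, category, ...)."""
    totals = defaultdict(float)
    for turn in turns:
        totals[turn[key]] += turn["cost"]
    return dict(totals)

print(spend_by(turns, "project"))   # {'api': 0.375, 'web': 0.5}
print(spend_by(turns, "model"))    # {'opus': 0.75, 'sonnet': 0.125}
```

The same function serves every view (project, model, tool, MCP server), which is why adding a new breakdown dimension to this kind of tool is cheap.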

The Eye-Opening Insight

For its creator, the data was revealing: 56% of total spend was on "Conversation"—turns where Claude responded without using any tools. The actual act of coding (file edits and writes) accounted for only 21%. This insight is a direct lever for optimizing prompts and workflow to reduce waste.

How To Use It Right Now

Installation and use are straightforward:

npx codeburn

That's it. Codeburn works with any existing Claude Code installation, requires no configuration, and parses your local session history on the fly.
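"Parsing local session history" amounts to walking the JSONL transcript files under ~/.claude/projects/ and summing the per-message token counts. The sketch below assumes a transcript layout where each line is a JSON object carrying a message.usage block with input_tokens and output_tokens; the exact field names may vary between Claude Code versions.

```python
import json
from pathlib import Path

# Minimal sketch of on-the-fly transcript parsing. Assumes one JSONL file
# per session under <projects_dir>/<project>/, with per-message token usage
# recorded as message.usage.{input_tokens,output_tokens}.
def tally_tokens(projects_dir: Path) -> dict[str, int]:
    totals = {"input": 0, "output": 0}
    for path in projects_dir.glob("*/*.jsonl"):
        for line in path.read_text().splitlines():
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed or partial lines
            usage = entry.get("message", {}).get("usage", {})
            totals["input"] += usage.get("input_tokens", 0)
            totals["output"] += usage.get("output_tokens", 0)
    return totals
```

Since everything is read from disk, no network access or API key is needed; the analysis is purely a post-hoc accounting pass over data Claude Code already wrote.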

Actionable Takeaways from the Data

  1. Audit Your "Conversation" Spend: High conversation costs often mean you're not giving Claude enough context or clear instructions upfront, leading to back-and-forth clarification. Be more specific in your initial prompt.
  2. Compare Project Costs: Identify whether one legacy codebase or experimental project is disproportionately expensive. This can justify refactoring or a change of approach.
  3. Evaluate MCP Servers: See if a particular MCP server (like a database connector or API tool) is token-heavy. You might need to optimize its prompts or seek an alternative.
  4. Model Selection Validation: Confirm whether a more capable (and more expensive) model like Claude Opus 4.6 is justified for a given task type, or whether Sonnet would suffice.

This tool transforms Claude Code from a black-box expense into an instrumented, optimizable development environment.


