<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vishaal LS</title>
    <description>The latest articles on DEV Community by Vishaal LS (@lsvishaal).</description>
    <link>https://dev.to/lsvishaal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3838205%2F4141e931-c5e3-4a3e-8d78-779424584a38.png</url>
      <title>DEV Community: Vishaal LS</title>
      <link>https://dev.to/lsvishaal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lsvishaal"/>
    <language>en</language>
    <item>
      <title>I Analyzed 38 Claude Code Sessions. Only 0.6% of Tokens Were Actual Code Output.</title>
      <dc:creator>Vishaal LS</dc:creator>
      <pubDate>Sun, 22 Mar 2026 10:39:47 +0000</pubDate>
      <link>https://dev.to/lsvishaal/i-analyzed-38-claude-code-sessions-only-06-of-tokens-were-actual-code-output-56li</link>
      <guid>https://dev.to/lsvishaal/i-analyzed-38-claude-code-sessions-only-06-of-tokens-were-actual-code-output-56li</guid>
      <description>&lt;p&gt;I kept hitting Claude Code's usage limits. No idea why.&lt;/p&gt;

&lt;p&gt;So I parsed the local session files and counted tokens. 38 sessions. 42.9 million tokens.&lt;/p&gt;
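
&lt;p&gt;The counting itself is simple. Here's a minimal sketch, assuming the session files are JSONL with a &lt;code&gt;message.usage&lt;/code&gt; object per line — the field names below are what I saw in my local files, not a documented schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json
from pathlib import Path

def count_tokens(session_dir):
    """Sum token usage across every JSONL session file under session_dir."""
    totals = {"input": 0, "output": 0}
    for path in Path(session_dir).glob("**/*.jsonl"):
        for line in path.read_text().splitlines():
            try:
                usage = json.loads(line).get("message", {}).get("usage", {})
            except (json.JSONDecodeError, AttributeError):
                continue  # skip non-JSON or non-object lines
            totals["input"] += usage.get("input_tokens", 0) + usage.get("cache_read_input_tokens", 0)
            totals["output"] += usage.get("output_tokens", 0)
    return totals
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Point it at your session directory and you get raw input/output totals per machine.&lt;/p&gt;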

&lt;p&gt;Only &lt;strong&gt;0.6%&lt;/strong&gt; were Claude actually writing code.&lt;/p&gt;

&lt;p&gt;The other 99.4%? Re-reading my conversation history before every single response.&lt;/p&gt;




&lt;h2&gt;Not as scary as it sounds&lt;/h2&gt;

&lt;p&gt;Input tokens (Claude reading) cost &lt;strong&gt;$3 per million&lt;/strong&gt; on Sonnet.&lt;br&gt;
Output tokens (Claude writing) cost &lt;strong&gt;$15 per million&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So that tiny 0.6% of writing carries 5x the per-token cost. The re-reading is cheap on its own.&lt;/p&gt;
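
&lt;p&gt;To put numbers on that, here's the per-session arithmetic at those rates — the million-token session below is an illustrative figure, not one of my actual sessions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sonnet rates: $3 per million input tokens, $15 per million output tokens
INPUT_RATE, OUTPUT_RATE = 3.0, 15.0

def session_cost(total_tokens, output_share):
    """API-equivalent cost of a session, given the fraction of tokens that are output."""
    output_tokens = total_tokens * output_share
    input_tokens = total_tokens - output_tokens
    return input_tokens / 1e6 * INPUT_RATE + output_tokens / 1e6 * OUTPUT_RATE

# At my 0.6% output share, a million-token session is almost all reading:
# ~$2.98 of input vs ~$0.09 of output.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;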

&lt;p&gt;The problem is compounding.&lt;/p&gt;

&lt;p&gt;Every message you send, Claude re-reads your &lt;em&gt;entire&lt;/em&gt; history. Message 1 reads nothing. Message 50 re-reads messages 1 through 49. By message 100, it's re-reading everything.&lt;/p&gt;
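
&lt;p&gt;That growth is quadratic. A toy model makes the shape obvious — the 500-token message size is an arbitrary assumption, only the curve matters:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def cumulative_reads(messages, tokens_per_message=500):
    """Total tokens re-read over a session: message n re-reads messages 1..n-1."""
    return sum(n * tokens_per_message for n in range(messages))

# Doubling the session length roughly quadruples the re-reading:
# cumulative_reads(50) is 612,500 tokens; cumulative_reads(100) is 2,475,000.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;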

&lt;p&gt;My worst session hit &lt;strong&gt;$6.30&lt;/strong&gt; equivalent API cost. The median was &lt;strong&gt;$0.41&lt;/strong&gt;. The difference? I let it run 5+ hours without &lt;code&gt;/clear&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02ftx58fzhus0pfan192.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02ftx58fzhus0pfan192.png" alt="TokBurn dashboard showing $26.37 equivalent API cost across 38 sessions" width="800" height="595"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;Lazy prompts are secretly expensive&lt;/h2&gt;

&lt;p&gt;A prompt like "do it" costs nearly the same as a detailed paragraph. Your message is tiny compared to the history being re-read alongside it.&lt;/p&gt;

&lt;p&gt;But detailed prompts get results in fewer rounds. Fewer rounds = less compounding. "Add input validation to the login function in auth.ts" beats "fix the auth stuff" because it finishes in one shot instead of three.&lt;/p&gt;


&lt;h2&gt;What actually helped&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use &lt;code&gt;/clear&lt;/code&gt; between unrelated tasks.&lt;/strong&gt; Your test-writing agent doesn't need your debugging context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep sessions under 60 minutes.&lt;/strong&gt; After that, context compaction kicks in and you lose earlier decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Be specific.&lt;/strong&gt; Fewer rounds = less compounding = lower cost.&lt;/p&gt;


&lt;h2&gt;I built a tool for this&lt;/h2&gt;

&lt;p&gt;I wanted to keep tracking this over time, so I packaged it up.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvx tokburn serve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One command. Local dashboard. Nothing installed permanently. Nothing leaves your machine.&lt;/p&gt;

&lt;p&gt;Or install it permanently: &lt;code&gt;pip install tokburn &amp;amp;&amp;amp; tokburn serve&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Shows equivalent API cost per session, daily trends, waste detection, and the "Claude Wrote" percentage.&lt;/p&gt;

&lt;p&gt;Someone on LinkedIn ran it on &lt;strong&gt;1,765 sessions&lt;/strong&gt;: &lt;strong&gt;$5,209&lt;/strong&gt; equivalent API cost. Max plan paying for itself many times over.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/lsvishaal/tokburn" rel="noopener noreferrer"&gt;github.com/lsvishaal/tokburn&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you try it, drop your numbers in the comments. Genuinely curious about your stats.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;First open-source project. First DEV.to post. Python + FastAPI. &lt;a href="https://github.com/lsvishaal/tokburn" rel="noopener noreferrer"&gt;MIT licensed.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>programming</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
