<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sergio Ramos Vicente</title>
    <description>The latest articles on DEV Community by Sergio Ramos Vicente (@sergioramosv).</description>
    <link>https://dev.to/sergioramosv</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3862441%2F4095b39e-a525-4be1-bff2-26abcb1fa47a.jpg</url>
      <title>DEV Community: Sergio Ramos Vicente</title>
      <link>https://dev.to/sergioramosv</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sergioramosv"/>
    <language>en</language>
    <item>
      <title>I was burning through AI tokens without realizing it. Here's how I fixed it.</title>
      <dc:creator>Sergio Ramos Vicente</dc:creator>
      <pubDate>Sun, 05 Apr 2026 16:17:31 +0000</pubDate>
      <link>https://dev.to/sergioramosv/i-was-burning-through-ai-tokens-without-realizing-it-heres-how-i-fixed-it-bn</link>
      <guid>https://dev.to/sergioramosv/i-was-burning-through-ai-tokens-without-realizing-it-heres-how-i-fixed-it-bn</guid>
      <description>&lt;p&gt;I've been using Claude Code and Codex daily for months. They're some of the best programming tools I've tried. But there's something nobody tells you when you start: &lt;strong&gt;context runs out fast, and the cost grows exponentially&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;The real problem isn't the message you're sending&lt;/h2&gt;

&lt;p&gt;When you're 50 messages into a session and you send message 51, your CLI doesn't just send that message. It sends &lt;strong&gt;all 51&lt;/strong&gt;. The entire conversation, from the beginning, with every single request.&lt;/p&gt;

&lt;p&gt;On top of that, Claude Code's system prompt is 13,000 characters — also sent with every message. Every command result the AI has run, every file it read, every search it performed — all of it is in the history, resent again and again.&lt;/p&gt;

&lt;p&gt;In a real session, message 51 can end up sending 85,000 characters to the API. For a single message.&lt;/p&gt;
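&lt;p&gt;A quick back-of-the-envelope model makes the growth visible. The numbers below are illustrative assumptions (a 13,000-char system prompt and an assumed average message size), not measurements from either tool:&lt;/p&gt;

```python
# Rough model: every turn resends the system prompt plus the entire history.
SYSTEM_PROMPT = 13_000   # chars, resent with every request
AVG_MESSAGE = 1_400      # assumed average chars per history entry

def payload_chars(turn):
    """Chars sent to the API on a single turn (1-indexed)."""
    return SYSTEM_PROMPT + turn * AVG_MESSAGE

def session_chars(turns):
    """Cumulative chars sent across the whole session: grows quadratically."""
    return sum(payload_chars(t) for t in range(1, turns + 1))

print(payload_chars(51))    # 84400 chars for the single request on turn 51
print(session_chars(51))    # 2519400 chars sent over the whole session
```

&lt;p&gt;Per-request size grows linearly with the turn number, so the session total grows quadratically; that is why late messages feel so expensive.&lt;/p&gt;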

&lt;h2&gt;Why existing tools don't fix this&lt;/h2&gt;

&lt;p&gt;There's a very popular tool for this problem: &lt;strong&gt;RTK (Rust Token Killer)&lt;/strong&gt;, with over 16,000 GitHub stars. It does exactly what it promises: it works as a shell wrapper that intercepts the stdout of each command before it enters the context. When the AI runs &lt;code&gt;git diff&lt;/code&gt;, RTK filters the output before the result is stored in the history.&lt;/p&gt;

&lt;p&gt;The problem isn't RTK — it's the scope of that approach.&lt;/p&gt;

&lt;p&gt;Once a command result has entered the history, RTK can't touch it anymore. And on message 51, those 50 previous messages — with all their results, logs, file reads — are resent in full to the API. RTK has no visibility into the accumulated history.&lt;/p&gt;

&lt;p&gt;In numbers: in a 50-turn session with 150,000 total tokens, RTK saves approximately &lt;strong&gt;1.6%&lt;/strong&gt;. It can only act on the current turn.&lt;/p&gt;

&lt;h2&gt;What I built&lt;/h2&gt;

&lt;p&gt;Squeezr is a local HTTP proxy that intercepts each request before it reaches the API. It operates at a different level than RTK: not on the stdout of a single command, but on the &lt;strong&gt;complete HTTP request&lt;/strong&gt; — it sees and compresses the entire conversation on every send.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The system prompt&lt;/strong&gt; is compressed once and cached. From 13,000 chars down to ~650. On the next request, and the one after, it comes straight from cache — no recompression.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Command and tool results&lt;/strong&gt; are filtered before they accumulate in the history. When the AI runs &lt;code&gt;npm test&lt;/code&gt; and gets 200 lines back, Squeezr extracts only the failing tests. When it reads a file, it keeps what's relevant. When it searches, it compacts the results. Git commands, Docker, kubectl, compilers, linters — each has its own specific pattern. And unlike RTK, Squeezr also compresses &lt;strong&gt;file reads and search results&lt;/strong&gt;, not just bash output.&lt;/p&gt;
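&lt;p&gt;As a sketch of the idea (not Squeezr's actual pattern code), a deterministic filter for test output can be as simple as keeping only failure and summary lines:&lt;/p&gt;

```python
def filter_test_output(output):
    """Keep only failure and summary lines from a test run;
    passing-test noise never enters the model's context."""
    keep_markers = ("FAIL", "Error", "failed", "Tests:")
    kept = [line for line in output.splitlines()
            if any(marker in line for marker in keep_markers)]
    return "\n".join(kept)

raw = "\n".join([
    "PASS src/math.test.ts (12 tests)",
    "PASS src/utils.test.ts (30 tests)",
    "FAIL src/api.test.ts",
    "  Error: expected 200, got 500",
    "Tests: 1 failed, 42 passed, 43 total",
])
print(filter_test_output(raw))   # only the FAIL, Error, and summary lines
```

&lt;p&gt;Five lines of test output shrink to three here; on a real 200-line run the ratio is far better, and the model still sees everything it needs to fix the failure.&lt;/p&gt;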

&lt;p&gt;&lt;strong&gt;The full history&lt;/strong&gt; is compressed with every request. Older messages are summarized automatically. Message 51 doesn't carry 51 full messages to the API; it carries 48 compressed ones plus the last 3 intact.&lt;/p&gt;
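&lt;p&gt;The shape of that history compression is roughly this — a minimal sketch with a naive truncation standing in for the real pattern/AI summarization step:&lt;/p&gt;

```python
def compress_history(messages, keep_recent=3, summarize=None):
    """Summarize every message except the most recent ones."""
    if summarize is None:
        # Naive stand-in for the real summarization step.
        summarize = lambda msg: msg[:60]
    if len(messages) > keep_recent:
        older = [summarize(m) for m in messages[:-keep_recent]]
        return older + messages[-keep_recent:]
    return list(messages)

history = ["message %d: " % i + "x" * 500 for i in range(1, 52)]
compressed = compress_history(history)
print(len(compressed))       # still 51 entries
print(len(compressed[0]))    # 60 chars instead of 500+
print(compressed[-1] == history[-1])   # last message kept intact
```

&lt;p&gt;The entry count never changes, so the model keeps the full shape of the conversation; only the payload per old entry shrinks.&lt;/p&gt;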

&lt;p&gt;The result on that same 85,000 char example: &lt;strong&gt;25,000 chars&lt;/strong&gt;. 71% less, on every message. In long sessions, cumulative savings reach &lt;strong&gt;89%&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;No quality loss&lt;/h2&gt;

&lt;p&gt;Compression is lossless. All original content is stored locally. If the AI needs more detail from something that was compressed, it calls &lt;code&gt;squeezr_expand()&lt;/code&gt; and gets the full original back instantly — no cost, no API call.&lt;/p&gt;
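&lt;p&gt;The mechanism behind that is a local store that keeps every original and hands the model a reference it can expand later. A minimal sketch (the names here are illustrative, not Squeezr's internals):&lt;/p&gt;

```python
class LosslessStore:
    """Keep full originals locally; serve compressed text plus a
    reference the model can use to get the original back at no cost."""

    def __init__(self):
        self._originals = {}
        self._counter = 0

    def compress(self, text, summarize):
        self._counter += 1
        ref = "blk-%d" % self._counter
        self._originals[ref] = text   # full copy stays local
        return summarize(text) + " [expand: %s]" % ref

    def expand(self, ref):
        """No API call, no cost: just a local lookup."""
        return self._originals[ref]

store = LosslessStore()
long_output = "line\n" * 200
short = store.compress(long_output, lambda t: t[:40])
print(store.expand("blk-1") == long_output)   # True: nothing was lost
```

&lt;p&gt;"Lossless" here means recoverable: the request going to the API is smaller, but the original bytes are always one local lookup away.&lt;/p&gt;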

&lt;p&gt;The AI gets the same information. Without the filler.&lt;/p&gt;
&lt;h2&gt;AI compression uses the cheapest model you already have — no extra cost&lt;/h2&gt;

&lt;p&gt;When a block is too long for deterministic patterns, Squeezr uses an AI model to summarize it — always the cheapest one from the provider you're already using: Haiku if you're on Claude, GPT-4o-mini if you're on Codex, Flash if you're on Gemini. And if you work with local models through Ollama or LM Studio, it uses local models too. No extra API keys, no additional cost.&lt;/p&gt;
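&lt;p&gt;Conceptually the routing is just a lookup keyed by the provider you are already authenticated with. A sketch (the model identifiers are placeholders, not the exact strings Squeezr uses):&lt;/p&gt;

```python
# Placeholder model names: real identifiers depend on each provider's API.
CHEAPEST_MODEL = {
    "anthropic": "claude-haiku",
    "openai": "gpt-4o-mini",
    "google": "gemini-flash",
}

def pick_summarizer(provider, local_model=None):
    """Reuse the cheapest model of the provider already in use;
    fall back to a local model (Ollama / LM Studio) when given one."""
    if provider in CHEAPEST_MODEL:
        return CHEAPEST_MODEL[provider]
    return local_model or "no-summarizer-available"

print(pick_summarizer("anthropic"))          # claude-haiku
print(pick_summarizer("ollama", "llama3"))   # llama3
```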
&lt;h2&gt;What changed in practice&lt;/h2&gt;

&lt;p&gt;Sessions last much longer. The AI stays on track because the context isn't filled with noise. And token spending dropped considerably. Running &lt;code&gt;squeezr gain&lt;/code&gt; reports:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Squeezr — Token Savings
-----------------------------------
  Requests processed:      33
  Saved chars:          6,987,655
  Total tokens saved:   1,912,840
  Tool saving:            94.67%
  Context reduction:       78%
-----------------------------------
  By Tool                                 
  Read (161x): -83.8%                    
  WebFetch (25x): -60%                   
  Grep (15x): -66.4%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;squeezr discover&lt;/code&gt; shows you exactly which patterns are saving the most in your specific workflow. For me, vitest results and git diffs are the biggest wins.&lt;/p&gt;

&lt;h2&gt;How to try it&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; squeezr-ai
  squeezr setup
  squeezr start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Works today with Claude Code, Codex, Aider, and Gemini CLI. Cursor support is coming soon.&lt;/p&gt;

&lt;p&gt;MIT. &lt;a href="https://squeezr.es" rel="noopener noreferrer"&gt;https://squeezr.es&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you try it, &lt;code&gt;squeezr gain&lt;/code&gt; will tell you exactly how much you're saving.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tooling</category>
      <category>opensource</category>
      <category>claude</category>
    </item>
  </channel>
</rss>
