<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: tje8x</title>
    <description>The latest articles on DEV Community by tje8x (@tje8x).</description>
    <link>https://dev.to/tje8x</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3875944%2F94b04a2a-a297-44af-a4ee-6936e9ec0489.png</url>
      <title>DEV Community: tje8x</title>
      <link>https://dev.to/tje8x</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tje8x"/>
    <language>en</language>
    <item>
      <title>Anthropic just forced OpenClaw users onto per-token billing. Here is how to actually see where your money goes</title>
      <dc:creator>tje8x</dc:creator>
      <pubDate>Tue, 14 Apr 2026 02:58:41 +0000</pubDate>
      <link>https://dev.to/tje8x/anthropic-just-forced-openclaw-users-onto-per-token-billing-here-is-how-to-actually-see-where-your-1l07</link>
      <guid>https://dev.to/tje8x/anthropic-just-forced-openclaw-users-onto-per-token-billing-here-is-how-to-actually-see-where-your-1l07</guid>
      <description>&lt;p&gt;Last week, Anthropic cut off subscription access for OpenClaw and all third-party agent tools. If you were running OpenClaw on a Claude Max plan, you already know the math changed overnight.&lt;/p&gt;

&lt;p&gt;The immediate advice everywhere was the same: switch to Haiku, use cheaper models, trim your system prompts. But none of that tells you where your money is actually going across your full stack, or how much you could realistically save by optimizing tokens.&lt;/p&gt;

&lt;p&gt;I wanted actual numbers. So I built a tool that connects to billing APIs, pulls historical spend, and runs optimization analyses across all of it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The full picture is bigger than LLM inference tokens&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most people only think about their OpenAI or Anthropic bill. But if you are running anything in production, you are also paying for cloud infra, monitoring, communication APIs, search services, creative generation, and probably Stripe fees on revenue.&lt;/p&gt;

&lt;p&gt;I started with a handful of common providers (based on my own usage: GPT, Claude, AWS, Vercel, Gemini, SendGrid, Tavily, even cryptocurrencies) and tracked everything going back a few months, categorizing broadly into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LLM inference&lt;/li&gt;
&lt;li&gt;Cloud infra&lt;/li&gt;
&lt;li&gt;Creative/media generation&lt;/li&gt;
&lt;li&gt;Communication&lt;/li&gt;
&lt;li&gt;Monitoring&lt;/li&gt;
&lt;li&gt;Search/data&lt;/li&gt;
&lt;li&gt;Advertising&lt;/li&gt;
&lt;li&gt;Cash/Treasury&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The spend breakdown itself was useful but the optimization analyses were the real finding.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Six optimization categories, hundreds in savings&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The tool runs six modules on billing data. Every number comes from your actual usage patterns and live model pricing. Nothing is hardcoded or assumed.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Model Routing:&lt;/strong&gt; 80% of my GPT-4o requests had under 500 input tokens - classification, extraction, short lookups. The tool identifies these from the request size distribution, then prices the same workload on cheaper models using live rates from OpenRouter. Switching those tasks to GPT-4o mini saves hundreds per month.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Caching Opportunity:&lt;/strong&gt; My API calls had an 85% consistency score - most input tokens were the same content repeated on every call (system prompts, tool definitions, context preamble) - making them strong candidates for prompt caching.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batch Processing:&lt;/strong&gt; 70% of my daily API volume was consistent day over day - automated pipelines, not interactive queries. Moving those to OpenAI's Batch API cuts their cost by 50%.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unit Economics:&lt;/strong&gt; With Stripe connected, the tool builds a P&amp;amp;L per customer. My AI inference cost per customer is just shy of a dollar per month, with total operating costs close to $2.50. Margins looked fine today, but AI costs were growing much faster than revenue - the tool flagged this and recommended optimization strategies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Price Comparison:&lt;/strong&gt; My exact Anthropic workload - 2M input, 500k output tokens - priced on every comparable model at current rates. Not a theoretical comparison table. My actual volume at today's prices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spend Forecast:&lt;/strong&gt; Growth rates calculated from my billing history and banking uploads. Forecasting is never exact (past growth is not a true predictor of future trends), but it was helpful for identifying critical failure points in spend.&lt;/li&gt;
&lt;/ol&gt;
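&lt;p&gt;To make the price-comparison math concrete, here's a minimal sketch of the calculation. The per-million-token rates are illustrative placeholders, not live prices - the actual tool pulls current rates from OpenRouter at run time.&lt;/p&gt;

```python
# Sketch of the price-comparison step: price a fixed monthly workload
# (2M input tokens, 500k output tokens) across several models.
# RATES are illustrative placeholders, not live pricing.

RATES = {  # USD per 1M tokens: (input, output)
    "claude-sonnet": (3.00, 15.00),
    "gpt-4o": (2.50, 10.00),
    "gpt-4o-mini": (0.15, 0.60),
}

def workload_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Monthly cost in USD for a given token volume at per-1M-token rates."""
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate

def compare(input_tokens=2_000_000, output_tokens=500_000, batch_discount=0.5):
    # batch_discount models the 50% Batch API discount on automatable volume
    for model, (in_rate, out_rate) in RATES.items():
        cost = workload_cost(input_tokens, output_tokens, in_rate, out_rate)
        print(f"{model:14} ${cost:7.2f}/mo  (batched: ${cost * batch_discount:.2f})")

compare()
```

&lt;p&gt;The same helper covers the batch analysis: re-price the automatable share of the workload at the discounted rate and take the difference.&lt;/p&gt;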

&lt;h2&gt;
  
  
  &lt;strong&gt;The tool&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It's called ANVX. Open source, data stays local.&lt;/p&gt;

&lt;p&gt;It's available as an MCP server for Claude Desktop and ChatGPT.&lt;br&gt;
GitHub: &lt;a href="https://github.com/tje8x/anvx" rel="noopener noreferrer"&gt;https://github.com/tje8x/anvx&lt;/a&gt;&lt;/p&gt;
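&lt;p&gt;For Claude Desktop, that means adding an entry under &lt;code&gt;mcpServers&lt;/code&gt; in &lt;code&gt;claude_desktop_config.json&lt;/code&gt;. The command and package name below are placeholders - check the repo README for the actual invocation:&lt;/p&gt;

```json
{
  "mcpServers": {
    "anvx": {
      "command": "npx",
      "args": ["-y", "anvx-mcp"]
    }
  }
}
```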

&lt;p&gt;Or if you're running a swarm of agents on OpenClaw, install it as an OpenClaw skill:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;clawhub &lt;span class="nb"&gt;install &lt;/span&gt;anvx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It pulls your billing history the moment you connect a provider - no waiting period. Upload bank or card statements to fill in the gaps. The real-time financial model builds over time and the forecasts get more accurate as more data comes in.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why this matters right now&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The cutoff forced cost visibility on everyone at the same time. For many builders, this is the first time they're paying per-token and the first time they have any reason to look at what they're spending.&lt;/p&gt;

&lt;p&gt;Generic advice captures one optimization category at a time. Model routing might save a few hundred a month, but caching saves another hundred, batch processing saves tens of dollars, and the unit economics analysis might show your cost trajectory is unsustainable regardless. Without looking at actual data, we're all just guessing.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What's next&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The intelligence layer is live. Next: a card and wallet layer that lets you act on the recommendations - program spend, route payments, execute model switches, consolidate all your tokens.&lt;/p&gt;

&lt;p&gt;Early access: &lt;a href="https://anvx.io" rel="noopener noreferrer"&gt;https://anvx.io&lt;/a&gt;&lt;/p&gt;

</description>
      <category>openclaw</category>
      <category>ai</category>
      <category>claude</category>
      <category>devtools</category>
    </item>
  </channel>
</rss>
