<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Diallo West</title>
    <description>The latest articles on DEV Community by Diallo West (@diallo_west_9848dddc9ba5a).</description>
    <link>https://dev.to/diallo_west_9848dddc9ba5a</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3918978%2Fe7894fd0-fec6-4652-a8d1-e1e5f597f179.jpg</url>
      <title>DEV Community: Diallo West</title>
      <link>https://dev.to/diallo_west_9848dddc9ba5a</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/diallo_west_9848dddc9ba5a"/>
    <language>en</language>
    <item>
      <title>How I Built an API That Cuts LLM Token Costs by 11-22%</title>
      <dc:creator>Diallo West</dc:creator>
      <pubDate>Fri, 08 May 2026 02:06:21 +0000</pubDate>
      <link>https://dev.to/diallo_west_9848dddc9ba5a/how-i-built-an-api-that-cuts-llm-token-costs-by-11-22-1l10</link>
      <guid>https://dev.to/diallo_west_9848dddc9ba5a/how-i-built-an-api-that-cuts-llm-token-costs-by-11-22-1l10</guid>
      <description>&lt;p&gt;I've been building AI-powered tools for the past year, and one thing kept bugging me: I was wasting money on tokens.&lt;/p&gt;

&lt;p&gt;Not because my prompts were bad — but because they were &lt;em&gt;verbose&lt;/em&gt;. Every prompt I wrote had filler words, redundant phrases, and unnecessary politeness that inflated my token counts without improving the output.&lt;/p&gt;

&lt;p&gt;So I built &lt;a href="https://fortress-optimizer.com" rel="noopener noreferrer"&gt;Fortress Token Optimizer&lt;/a&gt; — an API that compresses prompts before they reach the LLM. Same meaning, fewer tokens, lower cost.&lt;/p&gt;

&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;Look at a typical prompt:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Could you please help me analyze this sales data and provide detailed
insights and recommendations for improvement?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;18 tokens.&lt;/strong&gt; But the LLM doesn't need "Could you please help me" — that's 4 tokens of politeness that don't change the output.&lt;/p&gt;

&lt;p&gt;After optimization:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Analyze this sales data and provide detailed insights and
recommendations for improvement?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;14 tokens. 22% saved.&lt;/strong&gt; The model produces the same quality response.&lt;/p&gt;
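
&lt;p&gt;The percentages quoted throughout this post are just the relative token reduction, rounded to the nearest whole percent:&lt;/p&gt;

```python
def savings_pct(before_tokens: int, after_tokens: int) -> int:
    """Tokens saved, as a whole-number percentage of the original count."""
    return round(100 * (before_tokens - after_tokens) / before_tokens)

print(savings_pct(18, 14))
# → 22
```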

&lt;h2&gt;Real Benchmarks&lt;/h2&gt;

&lt;p&gt;I tested across 5 prompt styles (casual chatty, technical, learning, business analysis, and project planning):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Prompt Type&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;th&gt;Savings&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Casual chatty (cover letter request)&lt;/td&gt;
&lt;td&gt;75 tokens&lt;/td&gt;
&lt;td&gt;58 tokens&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;23%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Technical (debugging help)&lt;/td&gt;
&lt;td&gt;100 tokens&lt;/td&gt;
&lt;td&gt;92 tokens&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;8%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning request (ML resources)&lt;/td&gt;
&lt;td&gt;90 tokens&lt;/td&gt;
&lt;td&gt;81 tokens&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Business analysis&lt;/td&gt;
&lt;td&gt;77 tokens&lt;/td&gt;
&lt;td&gt;74 tokens&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Project planning&lt;/td&gt;
&lt;td&gt;69 tokens&lt;/td&gt;
&lt;td&gt;61 tokens&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;12%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Average&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;82 tokens&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;73 tokens&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;11%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The pattern: &lt;strong&gt;the chattier the prompt, the more savings.&lt;/strong&gt; Casual prompts with filler like "basically", "I was wondering if", "um", "please help me" see 15-23% savings. Technical prompts that are already dense save less.&lt;/p&gt;

&lt;h2&gt;How It Works&lt;/h2&gt;

&lt;p&gt;Four optimization passes, server-side:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Phrase compression&lt;/strong&gt; — removes filler ("Could you please help me" → removed)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deduplication&lt;/strong&gt; — "analyze the data and provide analysis" → "analyze the data"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Meta-removal&lt;/strong&gt; — strips instructions-about-instructions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sentence optimization&lt;/strong&gt; — tightens phrasing without changing meaning&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It's not just regex find-and-replace. The optimizer understands prompt structure — it won't strip a code block or remove meaningful qualifiers.&lt;/p&gt;
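
&lt;p&gt;To make pass 1 and the code-block guard concrete, here's a toy sketch in Python. It's a naive stand-in, not the actual server-side implementation; the filler list and the fence-splitting approach are illustrative assumptions only:&lt;/p&gt;

```python
import re

# Illustrative sample of filler phrases; the real service's list is unknown.
FILLER_PHRASES = [
    r"could you please help me\s+",
    r"could you please\s+",
    r"i was wondering if\s+",
    r"\bbasically,?\s+",
]

def compress(prompt: str) -> str:
    """Strip filler outside fenced code blocks, then fix capitalization."""
    fence = "`" * 3
    # Split on fenced code blocks; odd-indexed segments are code and
    # pass through verbatim.
    segments = re.split("({0}.*?{0})".format(fence), prompt, flags=re.DOTALL)
    out = []
    for i, seg in enumerate(segments):
        if i % 2 == 1:
            out.append(seg)  # inside a code fence: leave untouched
            continue
        for pat in FILLER_PHRASES:
            seg = re.sub(pat, "", seg, flags=re.IGNORECASE)
        out.append(seg)
    text = "".join(out).strip()
    # Re-capitalize if filler removal exposed a lowercase start.
    return text[:1].upper() + text[1:]

print(compress("Could you please help me analyze this sales data"))
# → "Analyze this sales data"
```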

&lt;h2&gt;Usage&lt;/h2&gt;

&lt;p&gt;A few lines in Python or JavaScript:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;pip&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;fortress&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;optimizer&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fortress_optimizer&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FortressClient&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;FortressClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;fk_your_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;optimize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Could you please help me analyze this data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;optimization&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;optimized_prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="c1"&gt;# → "Analyze this data"
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;tokens&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;savings_percentage&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;% saved&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# → "22% saved"
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;npm&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;fortress&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;optimizer&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;FortressClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fortress-optimizer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;FortressClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;FORTRESS_API_KEY&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;optimize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Your prompt here&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also available as a &lt;a href="https://marketplace.visualstudio.com/items?itemName=fortress-optimizer.fortress-token-optimizer" rel="noopener noreferrer"&gt;VS Code extension&lt;/a&gt; that runs in the background.&lt;/p&gt;

&lt;h2&gt;What Does This Save At Scale?&lt;/h2&gt;

&lt;p&gt;At 500 prompts/day with balanced optimization (~11% savings):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Monthly Savings&lt;/th&gt;
&lt;th&gt;Annual Savings&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GPT-4 ($0.03/1K)&lt;/td&gt;
&lt;td&gt;$4.05&lt;/td&gt;
&lt;td&gt;$48.60&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Opus ($0.015/1K)&lt;/td&gt;
&lt;td&gt;$2.03&lt;/td&gt;
&lt;td&gt;$24.30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPT-4o ($0.005/1K)&lt;/td&gt;
&lt;td&gt;$0.68&lt;/td&gt;
&lt;td&gt;$8.10&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For a team of 10 engineers at 500 prompts/day each, that's &lt;strong&gt;$486/year on GPT-4&lt;/strong&gt; — and it compounds as models get more expensive or usage grows.&lt;/p&gt;

&lt;p&gt;The savings are modest for individual developers, but they add up for teams running batch processing, RAG pipelines, or high-volume applications.&lt;/p&gt;
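
&lt;p&gt;The table's figures come from simple arithmetic. A back-of-the-envelope sketch, assuming the 82-token benchmark average (small rounding differences from the table are expected):&lt;/p&gt;

```python
def monthly_savings_usd(prompts_per_day, avg_prompt_tokens,
                        savings_rate, price_per_1k_usd):
    """Estimated monthly cost reduction from shaving savings_rate off input tokens."""
    tokens_saved = prompts_per_day * 30 * avg_prompt_tokens * savings_rate
    return round(tokens_saved / 1000 * price_per_1k_usd, 2)

# 500 prompts/day, 82-token average, 11% savings, GPT-4 input pricing
print(monthly_savings_usd(500, 82, 0.11, 0.03))
```

Multiply by 12 and by team size to get the annual figures quoted above.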

&lt;h2&gt;Three Optimization Levels&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Level&lt;/th&gt;
&lt;th&gt;Savings&lt;/th&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Conservative&lt;/td&gt;
&lt;td&gt;~5%&lt;/td&gt;
&lt;td&gt;Production prompts, minimal changes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Balanced&lt;/td&gt;
&lt;td&gt;~11-15%&lt;/td&gt;
&lt;td&gt;General use (default)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Aggressive&lt;/td&gt;
&lt;td&gt;~15-22%&lt;/td&gt;
&lt;td&gt;Batch processing, cost-sensitive&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;Free to Try&lt;/h2&gt;

&lt;p&gt;50,000 tokens/month free, no credit card. &lt;a href="https://fortress-optimizer.com" rel="noopener noreferrer"&gt;Get a key&lt;/a&gt; and try it on your existing prompts.&lt;/p&gt;

&lt;p&gt;I'd love feedback — especially if you're running high-volume LLM workloads where token costs are a real line item.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://fortress-optimizer.com" rel="noopener noreferrer"&gt;Website&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/fortress-optimizer" rel="noopener noreferrer"&gt;npm package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pypi.org/project/fortress-optimizer/" rel="noopener noreferrer"&gt;Python package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=fortress-optimizer.fortress-token-optimizer" rel="noopener noreferrer"&gt;VS Code extension&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>python</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
