<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: André Bergan</title>
    <description>The latest articles on DEV Community by André Bergan (@andber6).</description>
    <link>https://dev.to/andber6</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3865755%2Fafc4f1cc-b893-4ea9-a2b3-0dfd539cf758.jpeg</url>
      <title>DEV Community: André Bergan</title>
      <link>https://dev.to/andber6</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/andber6"/>
    <language>en</language>
    <item>
      <title>ML-based LLM request classifier for cost-optimized routing (~2ms inference)</title>
      <dc:creator>André Bergan</dc:creator>
      <pubDate>Tue, 07 Apr 2026 11:45:29 +0000</pubDate>
      <link>https://dev.to/andber6/ml-based-llm-request-classifier-for-cost-optimized-routing-2ms-inference-24c4</link>
      <guid>https://dev.to/andber6/ml-based-llm-request-classifier-for-cost-optimized-routing-2ms-inference-24c4</guid>
      <description>&lt;p&gt;I built a request classifier that decides which LLM tier a prompt needs before it's sent to a provider. The goal is cost optimization: route simple requests to cheap models, keep complex ones on premium.      &lt;/p&gt;

&lt;h2&gt;
  Architecture
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Feature extraction:&lt;/strong&gt; token count, estimated complexity, conversation depth, presence of code/math/reasoning markers, language detection
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model:&lt;/strong&gt; MLP trained on ~50K labeled samples (rule-based scorer as teacher), exported to ONNX for fast inference
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inference:&lt;/strong&gt; &amp;lt;2ms per classification, runs inline with the request
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Three output tiers:&lt;/strong&gt; economy (e.g. Gemini Flash), standard (e.g. GPT-4o-mini), premium (e.g. GPT-4o/Claude Sonnet)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semantic cache:&lt;/strong&gt; Qdrant-based layer that catches near-duplicate prompts (cosine similarity threshold 0.95)
&lt;/li&gt;
&lt;/ul&gt;
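The routing flow above can be sketched end to end. Everything in this sketch is illustrative: the feature names, regex markers, weights, and thresholds are assumptions, and the hand-weighted scorer stands in for the actual MLP described in the post.

```python
import re

# Hypothetical feature extractor -- names and heuristics are illustrative,
# not the exact features used by kestrel.
def extract_features(prompt, history_depth=0):
    tokens = prompt.split()  # crude token proxy; a real pipeline would use a tokenizer
    has_code = bool(re.search(r"```|def |class |import ", prompt))
    has_math = bool(re.search(r"\\frac|\bequation\b|\bintegral\b", prompt.lower()))
    has_reasoning = bool(re.search(r"\b(why|explain|step by step|compare)\b", prompt.lower()))
    return {
        "token_count": len(tokens),
        "depth": history_depth,
        "code": has_code,
        "math": has_math,
        "reasoning": has_reasoning,
    }

# Stand-in scorer: the post routes via a trained MLP; a weighted sum with
# made-up weights keeps the sketch self-contained.
def route(features):
    score = (
        0.002 * features["token_count"]
        + 0.1 * features["depth"]
        + 1.0 * features["code"]
        + 1.5 * features["math"]
        + 1.0 * features["reasoning"]
    )
    if score >= 2.0:
        return "premium"   # e.g. GPT-4o / Claude Sonnet
    if score >= 1.0:
        return "standard"  # e.g. GPT-4o-mini
    return "economy"       # e.g. Gemini Flash
```

The point of keeping the features this cheap is that the whole decision stays well under the stated 2ms budget, so it can run inline on every request.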

&lt;h2&gt;
  Training pipeline
&lt;/h2&gt;

&lt;p&gt;The rule-based scorer acts as a teacher: it generates tier labels, and the MLP student is distilled from them. Retraining is triggered by outcome signals from downstream quality checks.&lt;/p&gt;
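A minimal sketch of the distillation step, assuming a toy rule-based teacher over synthetic feature vectors; the real scorer rules, the ~50K labeled samples, and the ONNX export step (e.g. via skl2onnx) are not shown here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def teacher(x):
    # Toy stand-in for the rule-based scorer: the feature sum picks a tier.
    s = x.sum()
    if s > 3.0:
        return 2  # premium
    if s > 2.0:
        return 1  # standard
    return 0      # economy

# Synthetic feature vectors (5 hand-engineered features per request);
# the teacher generates the labels the student is distilled from.
X = rng.random((2000, 5))
y = np.array([teacher(x) for x in X])

student = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
student.fit(X, y)
```

Distilling into a small MLP rather than serving the rule engine directly is what makes the decision boundary smooth and the inference path a single fast forward pass.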

&lt;h2&gt;
  Try it
&lt;/h2&gt;

&lt;p&gt;The routing engine is open source: &lt;a href="https://github.com/andber6/kestrel" rel="noopener noreferrer"&gt;https://github.com/andber6/kestrel&lt;/a&gt;&lt;br&gt;
Hosted version with billing/caching: &lt;a href="https://usekestrel.io" rel="noopener noreferrer"&gt;https://usekestrel.io&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Question for the community
&lt;/h2&gt;

&lt;p&gt;Has anyone experimented with similar prompt classification approaches? The hardest part has been defining what makes a prompt "need" a premium model. Currently using hand-engineered features but I'm wondering if anyone has had success with learned embeddings for this kind of routing decision.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>ai</category>
      <category>opensource</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
