<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chris Paul</title>
    <description>The latest articles on DEV Community by Chris Paul (@chrisp04).</description>
    <link>https://dev.to/chrisp04</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2942811%2F5b0e0f2f-b5b9-4e68-9a17-70a611da3cea.jpg</url>
      <title>DEV Community: Chris Paul</title>
      <link>https://dev.to/chrisp04</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chrisp04"/>
    <language>en</language>
    <item>
      <title>Why routing LLM calls is harder than it looks (lessons from building ai-gateway)</title>
      <dc:creator>Chris Paul</dc:creator>
      <pubDate>Sat, 18 Apr 2026 06:16:04 +0000</pubDate>
      <link>https://dev.to/chrisp04/why-routing-llm-calls-is-harder-than-it-looks-lessons-from-building-ai-gateway-4hcg</link>
      <guid>https://dev.to/chrisp04/why-routing-llm-calls-is-harder-than-it-looks-lessons-from-building-ai-gateway-4hcg</guid>
      <description>&lt;p&gt;Most apps I’ve worked on treat LLMs in a very simple way:&lt;/p&gt;

&lt;p&gt;You pick a model → send every request to it → hope for the best.&lt;/p&gt;

&lt;p&gt;At first, that works.&lt;/p&gt;

&lt;p&gt;But over time I kept running into the same problems:&lt;/p&gt;

&lt;p&gt;simple queries hitting expensive models&lt;br&gt;
provider outages breaking entire flows&lt;br&gt;
no control over cost vs quality tradeoffs&lt;/p&gt;

&lt;p&gt;So I started building a small LLM routing layer that sits in front of model calls and decides which model should handle each request.&lt;/p&gt;

&lt;p&gt;This turned out to be way more interesting (and harder) than I expected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The core idea&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of this:&lt;/p&gt;

&lt;p&gt;app → single LLM → response&lt;/p&gt;

&lt;p&gt;I wanted:&lt;/p&gt;

&lt;p&gt;app → router → (cheap model / reasoning model / fallback) → response&lt;/p&gt;

&lt;p&gt;The router decides based on the prompt (rough sketch after this list):&lt;/p&gt;

&lt;p&gt;simple → cheaper / faster model&lt;br&gt;
complex → reasoning model&lt;br&gt;
failure → fallback provider&lt;/p&gt;
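
&lt;p&gt;In code, those three rules come down to something small. A minimal sketch (classifyPrompt, callModel, and the model constants are placeholders, not the actual ai-gateway API; classifyPrompt is sketched further down):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;async function route(prompt) {
  // classifyPrompt() returns a category such as 'simple' or 'reasoning'.
  const category = await classifyPrompt(prompt);
  const primary = category === 'simple' ? CHEAP_MODEL : REASONING_MODEL;

  try {
    return await callModel(primary, prompt);
  } catch (err) {
    // Provider failure: retry on the fallback provider instead of
    // surfacing the error to the caller.
    return await callModel(FALLBACK_MODEL, prompt);
  }
}
&lt;/code&gt;&lt;/pre&gt;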

&lt;p&gt;&lt;strong&gt;What I built&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The system is a self-hostable gateway with:&lt;/p&gt;

&lt;p&gt;multi-provider support (Groq, with Gemini as fallback)&lt;br&gt;
intent-based routing (embedding similarity)&lt;br&gt;
semantic caching to avoid repeated calls&lt;br&gt;
health-aware failover across providers&lt;br&gt;
multi-tenant API keys + quotas&lt;/p&gt;

&lt;p&gt;For embeddings, I experimented with running a local BGE model via Transformers.js instead of using external APIs.&lt;/p&gt;
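
&lt;p&gt;For reference, the local path through Transformers.js looks roughly like this (bge-small here is just an example variant):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { pipeline } from '@xenova/transformers';

// The first call downloads and initializes the ONNX model; this is the
// cold start discussed below.
const embedder = await pipeline('feature-extraction', 'Xenova/bge-small-en-v1.5');

async function embed(text) {
  // Mean pooling + normalization yields a unit vector, so cosine
  // similarity later is just a dot product.
  const output = await embedder(text, { pooling: 'mean', normalize: true });
  return Array.from(output.data);
}
&lt;/code&gt;&lt;/pre&gt;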

&lt;p&gt;&lt;strong&gt;The hardest problem: routing decisions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is where things get tricky.&lt;/p&gt;

&lt;p&gt;At first I used embedding similarity to classify prompts into categories like:&lt;/p&gt;

&lt;p&gt;simple question&lt;br&gt;
summarization&lt;br&gt;
code / reasoning&lt;/p&gt;

&lt;p&gt;It works well for clear cases.&lt;/p&gt;
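
&lt;p&gt;Concretely, the classifier is nearest-prototype matching: embed a handful of example prompts per category once, then pick the closest category. A minimal sketch (the example prompts and category set are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Prototype embeddings, computed once at startup with embed() from above.
const PROTOTYPES = {
  simple: await embed('What is the capital of France?'),
  summarization: await embed('Summarize this article in three sentences.'),
  reasoning: await embed('Find the bug in this function and explain the fix.'),
};

// Vectors are already normalized, so the dot product is the cosine similarity.
function cosine(a, b) {
  return a.reduce((sum, x, i) =&gt; sum + x * b[i], 0);
}

async function classifyPrompt(prompt) {
  const v = await embed(prompt);
  let best = { category: 'reasoning', score: -1 };
  for (const [category, proto] of Object.entries(PROTOTYPES)) {
    const score = cosine(v, proto);
    if (score &gt; best.score) best = { category, score };
  }
  return best.category;
}
&lt;/code&gt;&lt;/pre&gt;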

&lt;p&gt;But ambiguous prompts break everything.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;“Explain this system design in simple terms”&lt;/p&gt;

&lt;p&gt;Is that:&lt;/p&gt;

&lt;p&gt;summarization?&lt;br&gt;
reasoning?&lt;br&gt;
both?&lt;/p&gt;

&lt;p&gt;This is where simple heuristics start to fall apart.&lt;/p&gt;
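
&lt;p&gt;One obvious patch (sketched below, not something the gateway ships yet): if the top two category scores are nearly tied, treat the prompt as ambiguous and escalate it. Note that the margin is itself another static threshold, which is exactly the kind of thing that breaks later on. The 0.05 is a made-up starting point:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// scores: an object mapping category name to cosine similarity.
function pickCategory(scores, margin = 0.05) {
  const ranked = Object.entries(scores).sort((a, b) =&gt; b[1] - a[1]);
  const gap = ranked[0][1] - ranked[1][1];
  if (margin &gt; gap) return 'reasoning'; // too close to call: escalate
  return ranked[0][0];
}
&lt;/code&gt;&lt;/pre&gt;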

&lt;p&gt;&lt;strong&gt;Local embeddings: great idea, annoying reality&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Running embeddings locally felt like a big win:&lt;/p&gt;

&lt;p&gt;no external API&lt;br&gt;
no rate limits&lt;br&gt;
more control&lt;/p&gt;

&lt;p&gt;But in practice:&lt;/p&gt;

&lt;p&gt;cold start takes ~2–5 seconds (ONNX init)&lt;br&gt;
memory overhead (~30–50 MB, even for small models)&lt;br&gt;
scaling becomes tricky (each instance loads and warms its own copy)&lt;/p&gt;

&lt;p&gt;Once the model is warm, performance is fine.&lt;/p&gt;

&lt;p&gt;But that first request penalty is very real, especially for user-facing systems.&lt;/p&gt;
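
&lt;p&gt;The standard workaround is to pay that cost at process start instead of on a user request. A sketch, assuming a plain Node server (handleGateway stands in for the real request handler):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import http from 'node:http';

// A throwaway embedding forces the ONNX session to initialize at boot,
// so no user request pays the 2–5 second cold start.
await embed('warm-up');

http.createServer(handleGateway).listen(3000);
&lt;/code&gt;&lt;/pre&gt;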

&lt;p&gt;&lt;strong&gt;What actually worked&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A few things that made a noticeable difference:&lt;/p&gt;

&lt;p&gt;semantic caching → avoids recomputing embeddings and responses (sketched after this list)&lt;br&gt;
fallback logic → makes the system much more reliable&lt;br&gt;
cheap-first routing → try fast/cheap models, escalate if needed&lt;/p&gt;
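
&lt;p&gt;Semantic caching is the least obvious of the three, so here is its shape (in-memory and linear-scan purely for illustration; the 0.92 threshold is arbitrary, and embed(), cosine(), and route() come from the sketches above):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const cache = []; // { vector, response } pairs

async function cachedCall(prompt, threshold = 0.92) {
  const v = await embed(prompt);

  // Linear scan is fine at small scale; a real deployment wants a
  // vector index and cache expiry.
  for (const entry of cache) {
    if (cosine(v, entry.vector) &gt;= threshold) return entry.response;
  }

  const response = await route(prompt);
  cache.push({ vector: v, response });
  return response;
}
&lt;/code&gt;&lt;/pre&gt;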

&lt;p&gt;&lt;strong&gt;What didn’t work (yet)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;purely heuristic routing (not reliable enough)&lt;br&gt;
static thresholds for classification&lt;br&gt;
assuming “simple vs complex” is easy to define&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where I think this goes next&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The obvious direction is moving toward learning-based routing:&lt;/p&gt;

&lt;p&gt;track which responses get escalated&lt;br&gt;
use retries / failures as signals&lt;br&gt;
gradually learn which model performs best per prompt type&lt;/p&gt;

&lt;p&gt;Instead of hardcoding rules, let the system adapt over time.&lt;/p&gt;
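
&lt;p&gt;Even the first step is mostly bookkeeping. A sketch of what I have in mind (none of this exists in the gateway yet):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Outcome counts per (category, model) pair. Escalations and retries
// count as failures for the model that produced the first answer.
const stats = new Map();

function record(category, model, succeeded) {
  const key = `${category}:${model}`;
  const s = stats.get(key) || { ok: 0, failed: 0 };
  if (succeeded) s.ok += 1; else s.failed += 1;
  stats.set(key, s);
}

function successRate(category, model) {
  const s = stats.get(`${category}:${model}`) || { ok: 0, failed: 0 };
  // Laplace smoothing keeps sparse data from producing extreme rates.
  return (s.ok + 1) / (s.ok + s.failed + 2);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The router would then prefer, per category, whichever model has the best rate, occasionally sampling the others so the numbers keep updating.&lt;/p&gt;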

&lt;p&gt;&lt;strong&gt;Biggest takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Building around LLMs isn’t just about prompts.&lt;/p&gt;

&lt;p&gt;It’s about:&lt;/p&gt;

&lt;p&gt;cost control&lt;br&gt;
reliability&lt;br&gt;
system design&lt;/p&gt;

&lt;p&gt;The model is just one part of the system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Curious to hear from others&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you’ve worked on something similar:&lt;/p&gt;

&lt;p&gt;How are you deciding which model to use?&lt;br&gt;
Are you running embeddings locally or using APIs?&lt;br&gt;
Have you tried any learning-based routing approaches?&lt;/p&gt;

&lt;p&gt;Would love to hear how others are tackling this.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>architecture</category>
      <category>backend</category>
    </item>
  </channel>
</rss>
