<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Santiago de Polonia</title>
    <description>The latest articles on DEV Community by Santiago de Polonia (@santiago-pl).</description>
    <link>https://dev.to/santiago-pl</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3840236%2Fc018b726-62de-4903-a063-3c984de50576.png</url>
      <title>DEV Community: Santiago de Polonia</title>
      <link>https://dev.to/santiago-pl</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/santiago-pl"/>
    <language>en</language>
    <item>
      <title>LiteLLM was compromised - that's why I'm building GoModel</title>
      <dc:creator>Santiago de Polonia</dc:creator>
      <pubDate>Tue, 24 Mar 2026 18:39:51 +0000</pubDate>
      <link>https://dev.to/santiago-pl/litellm-was-compromised-thats-why-im-building-gomodel-nmm</link>
      <guid>https://dev.to/santiago-pl/litellm-was-compromised-thats-why-im-building-gomodel-nmm</guid>
      <description>&lt;p&gt;LiteLLM just had a serious supply chain incident.&lt;/p&gt;

&lt;p&gt;According to the &lt;a href="https://github.com/BerriAI/litellm/issues/24518" rel="noopener noreferrer"&gt;public GitHub reports&lt;/a&gt;, malicious PyPI versions of LiteLLM were published, including 1.82.8, with code that could run automatically on Python startup and steal secrets like environment variables, SSH keys, and cloud credentials. The reported payload sent that data to an attacker-controlled domain. A follow-up issue says the PyPI package was compromised through the maintainer's PyPI account, and that the bad releases were not shipped through the official GitHub CI/CD flow.&lt;/p&gt;
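
&lt;p&gt;For context, one common defense against this class of attack (my suggestion, not something the LiteLLM maintainers prescribe) is to verify artifacts against known-good digests before installing. A minimal Python sketch, with placeholder paths and digests:&lt;/p&gt;

```python
# Hedged sketch: refuse to install a package artifact whose bytes do
# not match a digest recorded from a known-good release. The file
# path and digest used here are placeholders, not real LiteLLM values.
import hashlib

def sha256_of(path):
    """Stream the file so large wheels do not load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_hex):
    """Raise loudly on mismatch; a hijacked re-upload fails here."""
    actual = sha256_of(path)
    if actual != expected_hex:
        raise RuntimeError(f"digest mismatch for {path}: got {actual}")
    return True
```

&lt;p&gt;Tools like pip's &lt;code&gt;--require-hashes&lt;/code&gt; mode do this for you; the point is that a version number alone is not an integrity check.&lt;/p&gt;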

&lt;p&gt;This is bigger than one package. It is a reminder that the AI infra layer is now part of your security boundary.&lt;/p&gt;

&lt;p&gt;That is one reason I'm building GoModel: a faster, simpler alternative to LiteLLM, written in Go. My goal is straightforward: less complexity, smaller attack surface, and better performance for teams that want a reliable LLM gateway.&lt;/p&gt;

&lt;p&gt;You can check it out here: &lt;a href="https://github.com/ENTERPILOT/GOModel/" rel="noopener noreferrer"&gt;https://github.com/ENTERPILOT/GOModel/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>performance</category>
      <category>opensource</category>
      <category>security</category>
    </item>
    <item>
      <title>Benchmarking GoModel, a LiteLLM alternative: lessons learned from building a simple benchmark</title>
      <dc:creator>Santiago de Polonia</dc:creator>
      <pubDate>Mon, 23 Mar 2026 15:36:23 +0000</pubDate>
      <link>https://dev.to/santiago-pl/benchmarking-gomodel-vs-litellm-alternative-lessons-learned-from-building-a-simple-benchmark-45m</link>
      <guid>https://dev.to/santiago-pl/benchmarking-gomodel-vs-litellm-alternative-lessons-learned-from-building-a-simple-benchmark-45m</guid>
      <description>&lt;p&gt;When I started working on GoModel, I did not plan to spend much time on benchmarking.&lt;/p&gt;

&lt;p&gt;I assumed benchmarking would be annoying, fragile, and probably much harder than it looked. In my head, it felt like one of those tasks that sounds simple at first, but turns into a mini research project once you actually start.&lt;/p&gt;

&lt;p&gt;What I learned is the opposite: creating a &lt;strong&gt;useful&lt;/strong&gt; benchmark is much easier than most people think.&lt;/p&gt;

&lt;p&gt;And one big reason is that AI makes the whole process much easier than it was a few years ago.&lt;/p&gt;

&lt;p&gt;That was the biggest lesson for me.&lt;/p&gt;

&lt;h2&gt;What is GoModel?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/ENTERPILOT/GOModel" rel="noopener noreferrer"&gt;GoModel&lt;/a&gt; is an open-source AI gateway / LLM proxy written in Go. It sits between your app and model providers like OpenAI, Anthropic, Gemini, Groq, xAI, and Ollama, and exposes a single OpenAI-compatible API.&lt;/p&gt;
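
&lt;p&gt;Concretely, "OpenAI-compatible" means the request body your app already sends can be pointed at the gateway instead of the provider. A small sketch (the local URL, port, and model name are assumptions for illustration, not GoModel's documented defaults):&lt;/p&gt;

```python
# Sketch: build the standard OpenAI-style chat completion request and
# aim it at a local gateway. Only the base URL changes; the payload is
# the same one you would send to the provider directly.
import json
from urllib import request

def build_chat_request(base_url, model, prompt):
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        base_url.rstrip("/") + "/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Same client code, gateway endpoint instead of the provider:
req = build_chat_request("http://localhost:8080", "gpt-4o-mini", "hello")
```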

&lt;p&gt;I built it because I wanted a lightweight, production-friendly gateway that was easy to deploy, easy to reason about, and fully open-source.&lt;/p&gt;

&lt;h2&gt;Why I decided to benchmark it&lt;/h2&gt;

&lt;p&gt;At some point, I kept making the same claim in my head:&lt;/p&gt;

&lt;p&gt;“GoModel feels lighter and faster.”&lt;/p&gt;

&lt;p&gt;That may be true, but “feels” is not evidence.&lt;/p&gt;

&lt;p&gt;I was mostly comparing it against LiteLLM, because LiteLLM is the best-known option in this space and the default reference point for many people looking at LLM gateways.&lt;/p&gt;

&lt;p&gt;So I decided to stop guessing and just measure it.&lt;/p&gt;

&lt;p&gt;That turned out to be one of the most useful things I have done for the project, not only because of the results, but because of what I learned while building the benchmark itself.&lt;/p&gt;

&lt;h2&gt;Lesson 1: benchmarking is easier now because you can just talk to AI&lt;/h2&gt;

&lt;p&gt;A few years ago, even starting a benchmark felt heavy.&lt;/p&gt;

&lt;p&gt;First you had to think through the methodology. Then you had to decide what to measure. Then you had to write the scripts. Then you had to figure out how to run them, collect the numbers, and make sense of the results.&lt;/p&gt;

&lt;p&gt;Now a lot of that work is much easier.&lt;/p&gt;

&lt;p&gt;You can literally start by describing what you want in plain English:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I have two services&lt;/li&gt;
&lt;li&gt;they do the same job&lt;/li&gt;
&lt;li&gt;I want to compare throughput, latency, and memory usage&lt;/li&gt;
&lt;li&gt;I want a simple repeatable benchmark&lt;/li&gt;
&lt;li&gt;I do not need a perfect academic setup&lt;/li&gt;
&lt;li&gt;I just want something fair and useful&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is already enough to get moving.&lt;/p&gt;
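
&lt;p&gt;To make that concrete, here is roughly what a first version can look like. This is a hedged sketch, not the benchmark I actually shipped: it times a stand-in callable, where the real script drives HTTP load against each gateway.&lt;/p&gt;

```python
# Minimal "fair and useful, not academic" harness: run N identical
# calls, report throughput and latency percentiles. Swap the callable
# for a real request function when benchmarking a gateway.
import statistics
import time

def bench(target, runs=200):
    latencies = []
    start = time.perf_counter()
    for _ in range(runs):
        t0 = time.perf_counter()
        target()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    qs = statistics.quantiles(latencies, n=100)  # percentile cut points
    return {
        "throughput_rps": runs / elapsed,
        "p50_s": qs[49],
        "p95_s": qs[94],
    }
```

&lt;p&gt;Running the same harness against both services, with the same payloads and the same run count, is already a fair first comparison.&lt;/p&gt;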

&lt;p&gt;AI is very good at helping with exactly this kind of task. Not because it magically solves benchmarking for you, but because it removes a lot of the friction around getting started.&lt;/p&gt;

&lt;p&gt;It can help you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;define a reasonable benchmark scope&lt;/li&gt;
&lt;li&gt;generate load scripts&lt;/li&gt;
&lt;li&gt;suggest what metrics to collect&lt;/li&gt;
&lt;li&gt;point out obvious mistakes in the setup&lt;/li&gt;
&lt;li&gt;format results&lt;/li&gt;
&lt;li&gt;help you explain the limitations clearly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That part feels very different from how things used to be.&lt;/p&gt;

&lt;p&gt;Before, benchmarking often felt blocked by setup cost.&lt;/p&gt;

&lt;p&gt;Now it is much more like: &lt;strong&gt;&lt;a href="https://steipete.me/posts/just-talk-to-it" rel="noopener noreferrer"&gt;just talk to AI&lt;/a&gt;, get a first version working, then iterate&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That does not mean you should trust every output blindly. You still need to think. You still need to validate the setup. You still need to understand what is actually being measured.&lt;/p&gt;

&lt;p&gt;But the barrier to entry is much lower now.&lt;/p&gt;

&lt;p&gt;And I think that is a big deal.&lt;/p&gt;

&lt;h2&gt;Lesson 2: a benchmark does not need to be perfect to be useful&lt;/h2&gt;

&lt;p&gt;This was the biggest mindset shift.&lt;/p&gt;

&lt;p&gt;I think many developers avoid benchmarking because they imagine they need a huge setup: many machines, a big test matrix, production traffic replay, deep statistical analysis, and charts for every possible scenario.&lt;/p&gt;

&lt;p&gt;In reality, you can learn a lot from a small benchmark if you ask a clear question.&lt;/p&gt;

&lt;p&gt;My question was simple:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If both tools are used as an LLM gateway in front of the same kind of workload, how do they behave in terms of throughput, latency, and memory usage?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is already enough.&lt;/p&gt;

&lt;p&gt;You do not need to model the entire internet. You just need a test that is fair enough to reveal something meaningful.&lt;/p&gt;
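
&lt;p&gt;The memory half of that question is also cheap to sample. For an external gateway you would read the process's RSS from the OS; as a stand-in, here is the in-process version (Unix-only, with an illustrative allocation rather than real gateway traffic):&lt;/p&gt;

```python
# Sketch: sample peak resident memory around a workload. On Linux,
# ru_maxrss is reported in kilobytes (bytes on macOS), so label your
# units when you publish numbers.
import resource

def peak_rss_kb():
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss_kb()
blob = bytearray(10_000_000)  # illustrative allocation standing in for load
after = peak_rss_kb()
```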

&lt;p&gt;AI also helps here because it forces you to phrase the question clearly. If you cannot explain the benchmark clearly to an AI assistant, there is a good chance your scope is still too vague.&lt;/p&gt;

&lt;h2&gt;Lesson 3: benchmarking forces product clarity&lt;/h2&gt;

&lt;p&gt;This part surprised me.&lt;/p&gt;

&lt;p&gt;I expected benchmarking to tell me about performance.&lt;/p&gt;

&lt;p&gt;What it also did was clarify the product itself.&lt;/p&gt;

&lt;p&gt;Once you measure something, you are forced to answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is this product actually optimized for?&lt;/li&gt;
&lt;li&gt;Where should it be better?&lt;/li&gt;
&lt;li&gt;What trade-offs did I make intentionally?&lt;/li&gt;
&lt;li&gt;What should users care about most?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In my case, the benchmark made the positioning much clearer.&lt;/p&gt;

&lt;p&gt;GoModel is not just “an AI gateway.”&lt;/p&gt;

&lt;p&gt;It is a Go-based, open-source, single-binary gateway designed to be lightweight, simple to deploy, and efficient in the hot path of LLM requests.&lt;/p&gt;

&lt;p&gt;Without benchmarking, those are just words.&lt;/p&gt;

&lt;p&gt;With benchmarking, they become testable claims.&lt;/p&gt;

&lt;h2&gt;Lesson 4: benchmarking is also a debugging tool&lt;/h2&gt;

&lt;p&gt;Before doing this, I mostly thought about benchmarks as something you publish.&lt;/p&gt;

&lt;p&gt;That was a mistake.&lt;/p&gt;

&lt;p&gt;A benchmark is also one of the fastest ways to find weak spots in your own system.&lt;/p&gt;

&lt;p&gt;As soon as you push something under repeatable load, you start noticing where memory grows faster than expected, where latency becomes uneven, and where parts of the system become bottlenecks.&lt;/p&gt;
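
&lt;p&gt;One cheap signal for the "latency becomes uneven" case is the ratio of tail latency to the median; a large p99/p50 ratio usually points at queueing, lock contention, or GC pauses. A hedged sketch with synthetic numbers:&lt;/p&gt;

```python
# Sketch: flag uneven latency by comparing the tail to the median.
# The sample lists below are synthetic, not measured GoModel data.
import statistics

def jitter_ratio(latencies):
    qs = statistics.quantiles(latencies, n=100)
    p50, p99 = qs[49], qs[98]
    return p99 / p50

steady = [0.010] * 99 + [0.012]  # tight distribution
spiky = [0.010] * 99 + [0.250]   # one bad outlier per hundred calls
```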

&lt;p&gt;Even if I had never published the results, building the benchmark would still have been worth it.&lt;/p&gt;

&lt;p&gt;It gave me a much more honest picture of the system.&lt;/p&gt;

&lt;p&gt;And again, AI helps here not by replacing the benchmark, but by helping you move faster once you find a problem. You can ask it to review the script, suggest what might be skewing the result, or help you isolate one part of the test.&lt;/p&gt;

&lt;h2&gt;My biggest takeaway&lt;/h2&gt;

&lt;p&gt;The biggest lesson I learned is very simple:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benchmarking is much more accessible today with AI tools.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You do not need a lab.&lt;/p&gt;

&lt;p&gt;You do not need a giant team.&lt;/p&gt;

&lt;p&gt;You do not need a perfect methodology.&lt;/p&gt;

&lt;p&gt;And now, you also do not need to start from a blank page.&lt;/p&gt;

&lt;p&gt;You can just describe what you want to measure, use AI to help generate a first version, and improve it from there.&lt;/p&gt;

&lt;p&gt;You still need to think.&lt;/p&gt;

&lt;p&gt;You still need to validate the setup.&lt;/p&gt;

&lt;p&gt;You still need to be honest about the limits.&lt;/p&gt;

&lt;p&gt;But getting started is much easier than it used to be.&lt;/p&gt;

&lt;h2&gt;Final thought&lt;/h2&gt;

&lt;p&gt;If you are building infrastructure, developer tools, or performance-sensitive software, I think it is worth benchmarking earlier than you expect.&lt;/p&gt;

&lt;p&gt;Not because you need a marketing graph.&lt;/p&gt;

&lt;p&gt;Because benchmarking forces clarity.&lt;/p&gt;

&lt;p&gt;It helps you understand your product better, find bottlenecks faster, and communicate value more concretely.&lt;/p&gt;

&lt;p&gt;And today, with AI, it is easier than ever to start.&lt;/p&gt;

&lt;p&gt;That was true for me with GoModel, and it is probably true for a lot of other projects too.&lt;/p&gt;

&lt;p&gt;If you want to check out the project, GoModel is open-source and available on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ENTERPILOT/GOModel" rel="noopener noreferrer"&gt;GOModel on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also published the full benchmark results here if you want to see the setup and the raw comparison:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://enterpilot.io/blog/gomodel-vs-litellm-benchmark-march-2026/" rel="noopener noreferrer"&gt;GoModel vs LiteLLM benchmark (March 2026)&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>go</category>
      <category>performance</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
