<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kevin Tayong</title>
    <description>The latest articles on DEV Community by Kevin Tayong (@kevintayong).</description>
    <link>https://dev.to/kevintayong</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3713249%2Fefe0a002-6343-44bf-99e9-7e3063792051.jpg</url>
      <title>DEV Community: Kevin Tayong</title>
      <link>https://dev.to/kevintayong</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kevintayong"/>
    <language>en</language>
    <item>
      <title>After a small alpha, we’re letting more people try our LLM key management setup</title>
      <dc:creator>Kevin Tayong</dc:creator>
      <pubDate>Tue, 20 Jan 2026 21:26:41 +0000</pubDate>
      <link>https://dev.to/kevintayong/after-a-small-alpha-were-letting-more-people-try-our-llm-key-management-setup-57h5</link>
      <guid>https://dev.to/kevintayong/after-a-small-alpha-were-letting-more-people-try-our-llm-key-management-setup-57h5</guid>
      <description>&lt;p&gt;Over the last few weeks, we’ve been running a small, gated alpha for an internal setup we built to manage LLM API keys and usage.&lt;/p&gt;

&lt;p&gt;The original problem was pretty simple: as soon as you start using multiple LLM providers, key management and cost visibility get messy fast.&lt;/p&gt;

&lt;p&gt;We wanted something that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Didn’t require hardcoding keys everywhere&lt;/li&gt;
&lt;li&gt;Didn’t log prompts or responses&lt;/li&gt;
&lt;li&gt;Worked with both cloud APIs and local models&lt;/li&gt;
&lt;li&gt;Gave us a clear view of usage and cost over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So we built a setup on top of the any-llm library that does a few things differently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API keys are encrypted client-side before they ever leave the machine, and they’re never stored in plaintext (see the sketch after this list).&lt;/li&gt;
&lt;li&gt;We use a single “virtual key” across providers instead of juggling multiple secrets.&lt;/li&gt;
&lt;li&gt;Usage tracking is metadata-only: token counts, model names, timestamps, and performance metrics like time to first token (a rough sketch follows below).&lt;/li&gt;
&lt;li&gt;No prompt or response data is collected.&lt;/li&gt;
&lt;/ul&gt;
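
&lt;p&gt;To make the client-side encryption point concrete, here’s a minimal sketch of the general approach using the cryptography library’s Fernet recipe. This is an illustration, not our actual implementation: the key handling is simplified and the example provider key is made up.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch: encrypt a provider API key on the client before it
# is stored anywhere. Illustrative only, not our production code.
from cryptography.fernet import Fernet

# A symmetric key generated once on the client and kept only there.
local_key = Fernet.generate_key()
fernet = Fernet(local_key)

# The provider key never exists in plaintext outside this machine.
ciphertext = fernet.encrypt(b"sk-example-provider-key")

# Decryption happens on the same client at request time.
plaintext = fernet.decrypt(ciphertext)
&lt;/code&gt;&lt;/pre&gt;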

&lt;p&gt;Inference stays on the client, which means the same setup works with cloud APIs and local models.&lt;/p&gt;
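
&lt;p&gt;To illustrate what metadata-only tracking can look like, here’s a rough sketch of the kind of per-request record involved. The UsageRecord fields and the record_usage helper are hypothetical names for this example; the point is that only counts and timings are captured, never prompt or response text.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import time
from dataclasses import dataclass

@dataclass
class UsageRecord:
    # Metadata only; no prompt or response text is ever stored.
    model: str
    completion_tokens: int
    time_to_first_token: float
    total_time: float

def record_usage(model, stream):
    """Wrap a streaming completion and capture timing metadata only.

    Assumes the stream yields at least one chunk.
    """
    started = time.monotonic()
    first_token_at = None
    completion_tokens = 0
    for _chunk in stream:
        if first_token_at is None:
            first_token_at = time.monotonic()
        completion_tokens += 1  # one token per chunk, for the sketch
    return UsageRecord(
        model=model,
        completion_tokens=completion_tokens,
        time_to_first_token=first_token_at - started,
        total_time=time.monotonic() - started,
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because the call itself runs on the client, the same wrapper works whether the model string points at a hosted provider or a local runtime; nothing in the record depends on where inference happened.&lt;/p&gt;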

&lt;p&gt;For anyone who wants to see how this is currently put together, the setup lives at any-llm.ai.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>openai</category>
      <category>chatgpt</category>
      <category>api</category>
    </item>
  </channel>
</rss>
