Kevin Tayong

After a small alpha, we’re letting more people try our LLM key management setup

Over the last few weeks, we’ve been running a small, gated alpha for an internal setup we built to manage LLM API keys and usage.

The original problem was pretty simple: as soon as you start using multiple LLM providers, key management and cost visibility get messy fast.

We wanted something that:

  • Didn’t require hardcoding keys everywhere
  • Didn’t log prompts or responses
  • Worked with both cloud APIs and local models
  • Gave us a clear view of usage and cost over time

So we built a setup on top of the any-llm library that does a few things differently.

  • API keys are encrypted client-side before they ever leave the machine. They’re never stored in plaintext (see the sketch after this list).
  • We use a single “virtual key” across providers instead of juggling multiple secrets.
  • Usage tracking is metadata-only: token counts, model names, timestamps, and performance metrics like time to first token.
  • No prompt or response data is collected.
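
To make the client-side encryption and metadata-only tracking a bit more concrete, here’s a rough sketch of the idea. This is illustrative, not our actual implementation: the Fernet-based encryption and the UsageRecord fields are assumptions for the example.

```python
# Illustrative sketch only; not the actual implementation.
# Assumes the `cryptography` package; the field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

from cryptography.fernet import Fernet


def encrypt_provider_key(provider_key: str, local_secret: bytes) -> bytes:
    """Encrypt a provider API key on the client before it is stored or synced anywhere."""
    return Fernet(local_secret).encrypt(provider_key.encode())


@dataclass
class UsageRecord:
    """Metadata-only usage entry: no prompt or response text is ever recorded."""
    model: str
    prompt_tokens: int
    completion_tokens: int
    time_to_first_token_ms: float
    timestamp: datetime


local_secret = Fernet.generate_key()  # generated and kept on this machine
ciphertext = encrypt_provider_key("sk-...", local_secret)

record = UsageRecord(
    model="openai/gpt-4o-mini",
    prompt_tokens=152,
    completion_tokens=348,
    time_to_first_token_ms=412.5,
    timestamp=datetime.now(timezone.utc),
)
```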

Inference stays on the client, which means the same setup works with cloud APIs and local models.
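
Because the calls go out from the client, switching between a hosted provider and something running locally is mostly a different model string. Here’s a minimal sketch of what that looks like with any-llm; the “provider/model” string form and the Ollama example are assumptions from memory of the library’s docs, so check the current README for the exact signature.

```python
# Minimal sketch of client-side inference via any-llm.
# The "provider/model" string form is assumed; check the any-llm docs for
# the exact, current call signature.
from any_llm import completion

messages = [{"role": "user", "content": "Summarize this changelog in one line."}]

# Cloud provider: the request goes straight from this machine to the provider.
cloud_response = completion(model="openai/gpt-4o-mini", messages=messages)

# Local model (e.g. an Ollama instance on the same box): same call shape,
# and no prompt data leaves the machine.
local_response = completion(model="ollama/llama3.2", messages=messages)

print(cloud_response.choices[0].message.content)
print(local_response.choices[0].message.content)
```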

For anyone who wants to see how this is currently put together, the setup lives at any-llm.ai.
