This is a submission for the GitHub Copilot CLI Challenge
What I Built
LLM Registry is a Python package and CLI tool that tracks LLM model metadata - pricing, features, context windows, and API parameters - across different providers.
The problem is simple: if you're working with models from OpenAI, Anthropic, Google, and others, there's no single place to check things like "does this model support vision?" or "what's the input cost per million tokens?" You end up digging through multiple provider docs every time. I got tired of that, so I built this.
It ships with 140+ models across 16 providers out of the box. You can query them via a Python API or a CLI tool called llmr. You can also add your own custom models that get stored locally and override the built-in ones if needed.
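To picture the kind of lookup this enables, here's a minimal sketch of a model registry with local overrides. Everything in it - the `Model` and `Registry` names, the fields, and the example values - is an illustrative assumption, not llm-registry's actual API (see the repo for the real interface).

```python
from dataclasses import dataclass

# Illustrative sketch only - NOT llm-registry's real API.
@dataclass
class Model:
    id: str
    provider: str
    supports_vision: bool = False
    context_window: int = 0
    input_cost_per_million: float = 0.0  # example value, USD per 1M input tokens

class Registry:
    def __init__(self) -> None:
        self._models: dict[str, Model] = {}

    def add(self, model: Model) -> None:
        # Later additions with the same id override earlier ones,
        # mirroring how locally stored custom models can shadow built-ins.
        self._models[model.id] = model

    def get(self, model_id: str) -> Model:
        return self._models[model_id]

registry = Registry()
# Built-in entry (numbers are made up for the example).
registry.add(Model("example-model", "example-provider",
                   supports_vision=True, context_window=128_000,
                   input_cost_per_million=2.5))

m = registry.get("example-model")
print(m.supports_vision)  # True
```

This answers the "does this model support vision?" question from one place instead of a pile of provider docs, which is the whole point of the package.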
It's published on PyPI (pip install llm-registry) and the source is on GitHub (https://github.com/yamanahlawat/llm-registry).
Demo
My Experience with GitHub Copilot CLI
I used Copilot CLI throughout building this project. Since LLM Registry is itself a CLI tool, it made sense to build it with a terminal-native agent rather than relying on an IDE.