LlmTornado is a .NET library I've been maintaining for over two years, now with the help of many amazing contributors. Tornado acts as a gateway similar to LiteLLM, supporting commercial providers (OpenAI, Anthropic, Google, DeepSeek, Cohere, Mistral, Azure, Groq, and more) and self-hosted inference servers (Ollama, LocalAI, and others).
Tornado provides a unified, OpenAI-shaped inference interface with powerful, strongly typed Vendor Extensions that expose the provider-specific APIs each vendor offers, enabling a broader scope of applications than any single provider can support. This approach also builds in resiliency to temporary provider downtime, which is still the norm when relying on public APIs.
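To make the unified interface concrete, here is a minimal sketch adapted from the patterns in the project's readme; the exact namespaces, type names, and model identifiers are my assumptions and should be treated as illustrative rather than authoritative:

```csharp
using LlmTornado;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;
using LlmTornado.Code;

// One TornadoApi instance, several providers behind it.
TornadoApi api = new TornadoApi(new List<ProviderAuthentication>
{
    new ProviderAuthentication(LLmProviders.OpenAi, "OPENAI_API_KEY"),
    new ProviderAuthentication(LLmProviders.Anthropic, "ANTHROPIC_API_KEY")
});

// The request shape stays the same no matter which provider serves the model.
Conversation chat = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.OpenAi.Gpt4.O // swap for any supported model from any provider
});

chat.AppendUserInput("Why is the sky blue?");
string? answer = await chat.GetResponse();
Console.WriteLine(answer);
```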
Tornado also acts as an API harmonizer for the supported providers. For example, suppose a request accidentally passes a temperature parameter to a reasoning model that doesn't accept it. Tornado takes care of that, maximizing the probability that the call succeeds. The same goes for other provider quirks: developer_message vs. system_prompt (in Tornado there is just a System role for messages), Google exposing completely different endpoints for embedding multiple texts at once, and many other annoyances.
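A hedged sketch of that harmonization, continuing from the snippet above. The reasoning-model identifier and the exact sanitization behavior are assumptions based on the description here, not a verified listing of the library's internals:

```csharp
// Same OpenAI-shaped request; per the description above, Tornado sanitizes
// arguments the target model does not accept instead of letting the call fail.
Conversation reasoningChat = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.OpenAi.O3.Mini, // illustrative reasoning-model identifier
    Temperature = 0.7                 // unsupported by reasoning models; harmonized away
});

// One System role covers developer_message / system_prompt differences between vendors.
reasoningChat.AppendSystemMessage("You are a terse assistant.");
reasoningChat.AppendUserInput("Summarize the plot of Hamlet in two sentences.");
string? reasoningAnswer = await reasoningChat.GetResponse();
```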
With fewer abstractions than full frameworks like Semantic Kernel, Tornado aims to reduce the complexity of building applications while making it easy to compose the abstractions your unique project benefits from. Out of the box, Tornado includes conversations, strongly typed RAG primitives, and FSMs.
Tornado is extensively documented and covered by 200+ unit tests, and it powers real-world applications with thousands of daily active users.
👉 Check the awesome demos in the readme: https://github.com/lofcz/LlmTornado