Ever wanted to use GPT-4, Mistral, and Qwen in the same agent, with routing logic that decides which model to pick for each task?
That's what I built with FederatedRouter inside MultiMindSDK, an open-source AI agent framework I'm co-creating.
This tool lets you:
- Run multiple LLMs in a single pipeline
- Add fallback models
- Control latency/cost tradeoffs
- Build modular agents that evolve
Quick Code Example
```python
from multimind.client.federated_router import FederatedRouter

# Instantiate your model clients here (left as placeholders)
gpt4_client = ...
mistral_client = ...
qwen_client = ...

router = FederatedRouter(
    clients={
        "gpt4": gpt4_client,
        "mistral": mistral_client,
        "qwen": qwen_client,
    },
    # Route translation tasks to Qwen, short prompts to Mistral,
    # and everything else to GPT-4.
    routing_fn=lambda prompt: (
        "qwen" if "translate" in prompt.lower()
        else "mistral" if len(prompt) < 50
        else "gpt4"
    ),
)

response = router.generate("Write a tweet about AI in Japanese.")
print(response)
```
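The routing function above only ever picks one primary model. The fallback behavior mentioned earlier can be sketched in plain Python like this. This is a minimal illustration of the idea, not the SDK's actual API; `generate_with_fallback` and the toy clients are hypothetical names:

```python
# Minimal fallback sketch, independent of MultiMindSDK.
# Each "client" is just a callable that takes a prompt string.

def generate_with_fallback(clients, order, prompt):
    """Try each named client in `order` until one succeeds."""
    last_err = None
    for name in order:
        try:
            return name, clients[name](prompt)
        except Exception as err:  # a real impl would catch specific errors
            last_err = err
    raise RuntimeError("all models failed") from last_err

def mistral_client(prompt):
    # Simulate an unavailable model to exercise the fallback path.
    raise TimeoutError("model unavailable")

clients = {
    "mistral": mistral_client,
    "gpt4": lambda prompt: f"[gpt4] {prompt}",
}

# Prefer Mistral, but fall back to GPT-4 when it fails.
model, text = generate_with_fallback(clients, ["mistral", "gpt4"], "Hi")
print(model, text)  # gpt4 [gpt4] Hi
```

The same ordering list doubles as a cost/latency knob: put the cheap, fast model first and the expensive one last.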
Why Devs Love It
- Works with Ollama, OpenRouter, local APIs
- Can be embedded in pipelines or standalone
- Avoids LangChain's complexity
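To illustrate the "embedded in pipelines" point: because a router's generate call is just a callable, it slots into any function-composition pipeline. Here is a generic sketch; the `pipeline` helper and `fake_generate` stand-in are hypothetical, not part of the SDK:

```python
# Generic pipeline sketch: each stage is a plain callable, so a
# router.generate-style function can drop in as one stage.

def pipeline(*steps):
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Hypothetical stand-in for router.generate
fake_generate = lambda prompt: f"response to: {prompt}"

run = pipeline(
    str.strip,                  # pre-process: clean the input
    fake_generate,              # model call (a router would go here)
    lambda text: text.upper(),  # post-process the model output
)

print(run("  hello  "))  # RESPONSE TO: HELLO
```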
Composable + Open Source
Use it with:
- Custom tool agents
- Prompt pipelines
- Vector search (soon)
- Self-evolving DAGs (already supported)
Try MultiMindSDK: pip install multimind-sdk | npm i multimind-sdk
Website: https://multimind.dev
GitHub: https://github.com/multimindlab/multimind-sdk