Nikhil Kumar

💡 Use FederatedRouter to Switch Between GPT-4, Gemini, and Mistral in One AI Agent (Open Source)

Ever wanted to use GPT-4, Mistral, and Qwen in the same agent, with routing logic that decides which to pick for each task?

That's what I built with FederatedRouter inside MultiMindSDK, an open-source AI agent framework I'm co-creating.

This tool lets you:

  • Run multiple LLMs in a single pipeline
  • Add fallback models (see the plain-Python fallback sketch after the quick example)
  • Control latency/cost tradeoffs
  • Build modular agents that evolve

🔥 Quick Code Example

from multimind.client.federated_router import FederatedRouter

# Construct your model clients here (left as placeholders; see the
# MultiMindSDK docs for client setup).
gpt4_client = ...
mistral_client = ...
qwen_client = ...

router = FederatedRouter(
    clients={
        "gpt4": gpt4_client,
        "mistral": mistral_client,
        "qwen": qwen_client
    },
    # Simple heuristic: translation prompts go to Qwen, short prompts to
    # Mistral, everything else to GPT-4.
    routing_fn=lambda prompt:
        "qwen" if "translate" in prompt.lower() else
        "mistral" if len(prompt) < 50 else
        "gpt4"
)

response = router.generate("Write a tweet about AI in Japanese.")
print(response)

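The "fallback models" bit from earlier can be layered on top of the routing with ordinary Python. Here's a minimal sketch of that idea: it only relies on router.generate from the example above, and local_generate is a hypothetical stand-in for whatever cheap local model you'd trust as a last resort. Treat it as an illustration of the pattern, not MultiMindSDK's built-in fallback API (check the docs for that).

def local_generate(prompt):
    # Hypothetical last-resort call; swap in any local model you have handy.
    return f"[local model] {prompt}"

def generate_with_fallback(prompt, generators):
    """Try each generate callable in order; return the first result that succeeds."""
    last_error = None
    for generate in generators:
        try:
            return generate(prompt)
        except Exception as err:  # rate limits, timeouts, provider outages, etc.
            last_error = err
    raise RuntimeError("All models failed") from last_error

# Primary path is the router; local_generate only runs if every routed call fails.
response = generate_with_fallback(
    "Write a tweet about AI in Japanese.",
    [router.generate, local_generate],
)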

🌟 Why Devs Love It

  • Works with Ollama, OpenRouter, local APIs (see the Ollama sketch below)
  • Can be embedded in pipelines or standalone
  • Avoids LangChain’s complexity
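
For the Ollama case, a local model can be wrapped in a thin client and handed to FederatedRouter like any other entry. The OllamaClient below is my own sketch against Ollama's local REST endpoint, not a class shipped by MultiMindSDK, and it assumes the router just needs an object with a generate(prompt) method; check the SDK docs for the exact client interface it expects.

import requests

class OllamaClient:
    """Thin wrapper around a locally running Ollama server."""

    def __init__(self, model="mistral", host="http://localhost:11434"):
        self.model = model
        self.host = host

    def generate(self, prompt: str) -> str:
        # Single-shot generation via Ollama's /api/generate endpoint.
        resp = requests.post(
            f"{self.host}/api/generate",
            json={"model": self.model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

mistral_client = OllamaClient(model="mistral")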

🧩 Composable + Open Source

Use it with:

  • Custom tool agents
  • Prompt pipelines (tiny example after this list)
  • Vector search (soon)
  • Self-evolving DAGs (already supported)
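
As a taste of the prompt-pipelines point, here's plain Python glue that chains two router calls: summarize, then translate. It's not a dedicated pipeline API, just the router composed with ordinary functions, and the word "translate" in the second prompt is what steers routing_fn from the quick example toward the Qwen client.

def summarize_then_translate(text: str) -> str:
    # Step 1: long-ish prompt with no "translate", so routing_fn picks GPT-4.
    summary = router.generate(f"Summarize in one sentence: {text}")
    # Step 2: "translate" in the prompt, so routing_fn picks Qwen.
    return router.generate(f"Please translate this into Japanese: {summary}")

print(summarize_then_translate("MultiMindSDK lets one agent route prompts across several LLMs."))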

📦 Try MultiMindSDK: pip install multimind-sdk | npm i multimind-sdk

🧪 Website: https://multimind.dev
GitHub: https://github.com/multimindlab/multimind-sdk
