DEV Community

Daniel Lenton

We Built a Dynamic Router Improving LLM Quality, Cost and Speed ✨

Are you also overwhelmed by all the LLM models and providers constantly coming onto the scene? To me it sometimes feels like trying to drink from a firehose, especially when trying to find the best fit for my own specific task and prompts. Choosing the wrong model for your task means slower, more expensive, and less capable results, which nobody wants 🫠



The Common Dilemma

The AI landscape is cluttered with options like Llama, Gemini, GPT, and Mistral, leading to a common scenario:



Dynamic Routing with Unify ✨

Before you roll your eyes at yet another buzzword, let me try to explain what we've built in a bit more detail. Basically, with Unify, you don't have to manually test each model against your requirements or juggle multiple accounts and API keys. All models are available with a single API key, and you can easily benchmark your prompts to assess which LLMs and providers are best for your own task.


Unify can also automatically route your prompts to the most suitable LLM based on your preferences for quality, speed, and cost. This means you can focus on what truly matters - building your exceptional AI-driven applications 🔥
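To make the idea concrete, here is a minimal sketch of what preference-weighted routing looks like. This is purely illustrative, not Unify's actual algorithm: the endpoint names, stats, and weights are all made-up numbers.

```python
# Illustrative only: a toy preference-weighted router.
# The endpoint stats below are invented, not real Unify benchmark data.

candidates = {
    "model-a@provider-x": {"quality": 0.90, "cost": 10.0, "latency": 2.0},
    "model-b@provider-y": {"quality": 0.80, "cost": 1.0, "latency": 0.5},
}

def route(candidates, w_quality=1.0, w_cost=0.0, w_speed=0.0):
    """Pick the endpoint with the best weighted quality/cost/speed trade-off."""
    def score(stats):
        return (w_quality * stats["quality"]
                - w_cost * stats["cost"]
                - w_speed * stats["latency"])
    return max(candidates, key=lambda name: score(candidates[name]))

# With quality weighted alone, the stronger model wins;
# once cost carries weight, the cheaper one takes over.
print(route(candidates, w_quality=1.0))              # model-a@provider-x
print(route(candidates, w_quality=1.0, w_cost=0.1))  # model-b@provider-y
```

The point of a managed router is that these scores are kept up to date from live benchmarks across providers, so you only ever state your preferences.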

Feel free to check out a more comprehensive walkthrough.



So, high-level, what does Unify bring to the table?

  • ⚙️ Control: Choose which models and providers you want to route to and then adjust how important quality, cost, and latency are for you. That's it; now the performance of your LLM app is fully in your hands, not the providers!

  • 📈 Self Improvement: As each new model and provider comes onto the scene, sit back and watch your LLM application automatically improve over time. We quickly add support for the latest and greatest, ensuring your custom cost-quality-speed requirements are always fully optimized.

  • 📊 Observability: Don't want to route? No sweat. Quickly compare all models and providers, and see which are truly the best for your own needs, on your own prompts, for your own task.

  • ⚖️ Impartiality: We treat all models and providers equally, as we don't have a horse in the race. You can trust our benchmarks.

  • 🔑 Convenience: The power of all models and providers behind a single endpoint, queryable individually or via the router, all with a single API key. 'pip install unifyai', and away you go!

  • 🧑‍💻 Focus: Don't stress updating the model and provider every few weeks. Just specify your performance needs and get back to building great AI products. We'll handle the rest for you!
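The "on your own prompts" part of the observability point boils down to timing each candidate on the same input and comparing. Here is a self-contained sketch of that loop; the two backends are stand-ins for real LLM calls, which would need network access and API keys.

```python
import time

# Stand-in "LLM" backends so the sketch runs offline.
def fast_backend(prompt):
    time.sleep(0.01)
    return "short answer"

def slow_backend(prompt):
    time.sleep(0.05)
    return "longer, more detailed answer"

def benchmark(backends, prompt):
    """Time each backend on the same prompt; return (latency, name) pairs, fastest first."""
    results = []
    for name, fn in backends.items():
        start = time.perf_counter()
        fn(prompt)
        results.append((time.perf_counter() - start, name))
    return sorted(results)

timings = benchmark({"fast": fast_backend, "slow": slow_backend}, "Hello there")
print(timings[0][1])  # name of the lowest-latency backend
```

In practice you would also score answer quality and cost per token, which is exactly the bookkeeping a hosted benchmark takes off your plate.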




Getting Started is a Breeze:

```shell
pip install unifyai
```

```python
import os

from unify import Unify

unify = Unify(
    # assumes your Unify API key is stored in the UNIFY_KEY environment variable
    api_key=os.environ.get("UNIFY_KEY"),
    endpoint="router@q:1",
)

response = unify.generate(user_prompt="Hello there")
```

It's that simple 👌



Why We Think You'll Like Unify:

  • 🎨 Focus on Development: Spend more time creating and less time worrying about finding the most appropriate LLM.

  • ⚙️ Adaptive and Efficient: Your app will self-improve as you automatically benchmark each new LLM on your own prompts and for your own task, enabling you to quickly integrate the latest and greatest LLMs into your workflow.

  • ⚖️ Quality, Cost and Speed: These are the three pillars for all LLMs. Unify's router ensures you never have to compromise on any of them.


Every signup comes with $50 free credit to get you started!
