Tired of watching your OpenAI API quota melt like ice cream in July?
WE HEAR YOU! And we just shipped a solution.
With our latest update, EvoAgentX now supports locally deployed language models, thanks to upgraded LiteLLM integration.
What does this mean?
- No more sweating over token bills
- Total control over your compute + privacy
- Experiment with powerful models on your own terms
- Plug-and-play local models with the same EvoAgentX magic
Heads up: small models are... well, small.
For better results, we recommend running larger ones with stronger instruction-following.
Code updates here:
- litellm_model.py
- model_configs.py
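To give a feel for what "plug-and-play local models" looks like, here is a minimal sketch of pointing LiteLLM at a locally hosted model. The model name (`ollama/llama3`) and `api_base` are assumptions for an Ollama server on its default port, not EvoAgentX's actual config, so adjust them to whatever you run locally.

```python
# Sketch: building kwargs for litellm.completion() against a local endpoint.
# "ollama/llama3" and the api_base are example values (assumed Ollama defaults);
# swap in whatever local server and model you actually host.

def local_llm_kwargs(
    model: str = "ollama/llama3",
    api_base: str = "http://localhost:11434",
) -> dict:
    """Return completion kwargs targeting a locally deployed model."""
    return {
        "model": model,
        "api_base": api_base,  # no OpenAI key needed for a local server
        "messages": [{"role": "user", "content": "Hello from EvoAgentX!"}],
    }

cfg = local_llm_kwargs()
# With a local server running, you would call:
# import litellm
# response = litellm.completion(**cfg)
```

The point is that only the `model` string and `api_base` change: the rest of the calling code stays identical to the hosted-API path, which is what keeps the swap plug-and-play.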
So go ahead:
Unleash your agents. Host your LLMs. Keep your tokens.
And if you love this direction, please star us on GitHub! Every star helps our open-source mission grow:
https://github.com/EvoAgentX/EvoAgentX