The AI ecosystem is booming.
If you've started building AI-powered apps, you may have missed some of these awesome open-source projects that can help make your LLM queries more relevant and improve the overall quality of your chatbots and AI apps.
Here are 9 projects to take your app to the next level in 2023.
LLMonitor (sponsored)
LLMonitor is an all-in-one open-source toolkit for AI devs ready to take their app to production, with features such as:
- Cost & latency analytics
- User tracking
- Traces for easy debugging
- Inspect & replay AI requests
- Label and export fine-tuning datasets
- Collect feedback from users
- Evaluate prompts against tests
Once you're ready for users to try your app, using an observability solution is essential.
We invite you to give LLMonitor a shot (it's completely free up to 1000 events / day).
→ Give LLMonitor a star on GitHub
Guidance
Guidance is a templating language released by Microsoft that lets you build complex agent flows by interleaving prompting and generation. It looks like this:
```python
experts = guidance('''
{{#system~}}
You are a helpful and terse assistant.
{{~/system}}

{{#user~}}
I want a response to the following question:
{{query}}
Name 3 world-class experts (past or present) who would be great at answering this?
Don't answer the question yet.
{{~/user}}

{{#assistant~}}
{{gen 'expert_names' temperature=0 max_tokens=300}}
{{~/assistant}}

{{#user~}}
Great, now please answer the question as if these experts had collaborated in writing a joint anonymous answer.
{{~/user}}

{{#assistant~}}
{{gen 'answer' temperature=0 max_tokens=500}}
{{~/assistant}}
''', llm=gpt4)
```
This interleaving of generation and prompting gives you precise control over the output structure, which helps produce clear, parsable results.
LiteLLM
LiteLLM lets you call any LLM API using the OpenAI format (Bedrock, Hugging Face, Cohere, TogetherAI, Azure, OpenAI, etc.).
For example, integrating Anthropic will look like this:
```python
from litellm import completion
import os

## set ENV variables
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

response = completion(
    model="claude-2",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
)
```
This way, you can integrate different models into your app without learning and integrating new APIs.
Zep
Zep allows you to summarize, embed, and enhance chat histories and documents asynchronously. It ensures that these operations do not impact the chat experience of your users.
With Zep, chatbot histories are persisted to a database, enabling you to easily scale out as your user base grows.
As a drop-in replacement for popular LangChain components, Zep allows you to get your application into production within minutes, without the need to rewrite your existing code.
LangChain
Who hasn't heard of LangChain by now? LangChain is the most popular AI framework, letting you plug models together into chains, along with vector stores and more, to build powerful AI apps.
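At its core, a "chain" is just composition: a prompt template feeds a model, whose output feeds a parser. Stripped of LangChain's actual API, the idea reduces to something like this pure-Python sketch (the `fake_llm` stands in for a real model call):

```python
def make_chain(*steps):
    """Compose steps left-to-right, piping each output into the next step."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Toy "chain": prompt template -> stand-in LLM -> output parser
template = lambda topic: f"Give one fact about {topic}."
fake_llm = lambda prompt: f"ANSWER: {prompt}"
parser = lambda text: text.removeprefix("ANSWER: ")

chain = make_chain(template, fake_llm, parser)
result = chain("Postgres")
```

LangChain adds a lot on top of this (retries, streaming, vector-store retrieval), but the mental model of piping steps together is the same.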
DeepEval
DeepEval is an open-source evaluation framework for LLM apps that is straightforward to use.
It works much like Pytest, but specializes in testing LLM applications: it assesses outputs with metrics such as factual consistency, accuracy, and answer relevancy, using LLMs along with various other NLP models.
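DeepEval's real metrics rely on LLMs and NLP models, but the Pytest-style workflow — compute a metric, assert it clears a threshold — can be illustrated with a deliberately naive toy metric (this is not DeepEval's API, just the idea):

```python
def relevancy_score(question: str, answer: str) -> float:
    # Toy metric: fraction of question tokens echoed back in the answer.
    # Real frameworks like DeepEval use LLM-based scoring instead.
    q = set(question.lower().split())
    a = set(answer.lower().split())
    return len(q & a) / len(q) if q else 0.0

score = relevancy_score("what is the capital of France?",
                        "the capital of France is Paris")
assert score > 0.5  # the "test" passes when the metric clears a threshold
```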
pgVector
pgVector is a Postgres extension to store your embeddings and perform operations such as similarity search.
If you're using Supabase, pgVector is already available.
You could use pgVector instead of a specialized vector database like Pinecone to simplify your stack.
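Under the hood, a similarity search like pgVector's `ORDER BY embedding <=> query` just ranks rows by a distance function (`<=>` is cosine distance). Here's a minimal pure-Python sketch of that ranking logic over a few in-memory vectors:

```python
import math

def cosine_distance(a, b):
    # pgVector's <=> operator computes cosine distance: 1 - cosine similarity
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Equivalent in spirit to:
#   SELECT id FROM items ORDER BY embedding <=> query LIMIT 2;
docs = {"cat": [1.0, 0.0], "dog": [0.9, 0.1], "car": [0.0, 1.0]}
query = [1.0, 0.05]
nearest = sorted(docs, key=lambda name: cosine_distance(docs[name], query))[:2]
```

In production, pgVector does this inside Postgres with index support, so you get nearest-neighbour search without leaving your existing database.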
promptfoo
With promptfoo, similarly to DeepEval, you can test your prompts and models against predefined test cases.
Evaluate quality and catch regressions by comparing LLM outputs side by side, and score outputs automatically by defining test cases.
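promptfoo is driven by a YAML config. A minimal `promptfooconfig.yaml` looks roughly like this (the provider ID and assertion types follow promptfoo's documented format, but treat the specifics as a sketch):

```yaml
prompts:
  - "Summarize in one sentence: {{text}}"
providers:
  - openai:gpt-3.5-turbo
tests:
  - vars:
      text: "LiteLLM lets you call many LLM APIs with one interface."
    assert:
      - type: contains
        value: "LiteLLM"
```

Running `promptfoo eval` then executes every prompt/provider/test combination and reports which assertions passed.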
Model Fusion
Model Fusion is a TypeScript library designed for building AI applications, chatbots, and agents.
It offers support for a wide range of models, including text generation, image generation, text-to-speech, speech-to-text, and embedding models.
Features:
- Multimodal: combine different modalities such as text, images, and speech.
- Streaming: Model Fusion supports streaming for many generation models, including text streaming, structure streaming, and full duplex speech streaming.
- Utility: Model Fusion provides a set of utility functions for tools and tool usage, vector indices, and guard functions.
- Type inference and validation: Model Fusion leverages TypeScript to infer types and validate model responses.
Useful if you prefer TypeScript to Python.
Model Fusion is quite new, but it's a very promising project.
Thank you for reading!
Any project we missed? Please tell us in the comments :)
A star on our GitHub project would mean the world. Click on the cat to make him happy!