Introduction
If you've ever built an AI chatbot for a website, you know that integrating a large language model (LLM) often means wiring up API calls, managing async flow, and writing custom backend logic - all before your bot can even say "Hello World".
Over a year ago, I wrote about how to integrate LLMs with React ChatBotify, a manual approach that worked but required a fair bit of glue code and configuration. While React ChatBotify made it easy to build a chatbot UI, LLM integration still demanded work that could quickly grow in complexity.
That's exactly the pain point the LLM Connector Plugin is designed to solve - by providing out-of-the-box LLM integrations.
With the LLM Connector Plugin, you can eliminate boilerplate, abstract away complexity, and get your chatbot talking to an LLM in minutes. In this post, I'll walk you through what the plugin does, how to install it, and how it makes building smart, conversational UIs with React ChatBotify not only simpler, but faster.
What is the LLM Connector Plugin?
The LLM Connector Plugin is an abstraction layer that streamlines LLM integrations within React ChatBotify. It enables developers to connect React ChatBotify to Large Language Model providers such as OpenAI and Google Gemini with ease. It even ships with an integration for in-browser models, achievable with an extremely simple setup:
import ChatBot from "react-chatbotify";
import LlmConnector, { WebLlmProvider } from "@rcb-plugins/llm-connector";

const MyComponent = () => {
  const flow = {
    start: {
      llmConnector: {
        initialMessage: "Ask away!",
        provider: new WebLlmProvider({
          model: 'Qwen2-0.5B-Instruct-q4f16_1-MLC',
        }),
      }
    }
  }
  return (
    <ChatBot flow={flow} plugins={[LlmConnector()]}/>
  );
};
As can be seen from the snippet above, the plugin provides a simple, declarative interface that lets you focus on your bot's behavior and flow. This is in contrast to the past, where you'd have to manually handle API calls and message formatting yourself.
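To give a feel for this declarative style, here's a sketch of the llmConnector block with a few more options filled in. Treat the option names outputType and historySize as assumptions drawn from my reading of the plugin's documentation, and verify them against the plugin's README:

const flow = {
  start: {
    llmConnector: {
      provider: new WebLlmProvider({
        model: 'Qwen2-0.5B-Instruct-q4f16_1-MLC',
      }),
      initialMessage: "Ask away!",
      // assumed option names - check the plugin README before relying on them
      outputType: "character", // how replies are revealed (e.g. per character)
      historySize: 5,          // how many past messages are sent as context
    }
  }
}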
Despite how simple it all looks, under the hood the plugin does a lot of heavy lifting, such as handling streaming of responses, syncing audio, managing typing indicators, and more! Okay, so we've seen a short snippet, but what did it contain, and how exactly do we use the plugin? Let's find out!
Installation and Setup
The LLM Connector Plugin is available on NPM and can be installed via the following command:
npm install @rcb-plugins/llm-connector
Take note that the plugin is only compatible with React ChatBotify versions later than v2.0.0-beta.34!
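If you're unsure which version of React ChatBotify you're on, a quick check settles it:

npm ls react-chatbotify   # should report a version later than v2.0.0-beta.34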
After installing the plugin, you can import and initialize it in your project as follows:
import ChatBot from "react-chatbotify";
import LlmConnector from "@rcb-plugins/llm-connector";

const MyComponent = () => {
  return (
    <ChatBot plugins={[LlmConnector()]}/>
  );
};
We'll next create a block dedicated to handling LLM conversations and add the llmConnector attribute to it:
import ChatBot from "react-chatbotify";
import LlmConnector from "@rcb-plugins/llm-connector";

const MyComponent = () => {
  const flow = {
    start: {
      llmConnector: {}
    }
  }
  return (
    <ChatBot flow={flow} plugins={[LlmConnector()]}/>
  );
};
Hmmm, nothing's happening. But don't fret, we're almost there! We're now just missing an LLM provider to have your chatbot start talking. We'll look at how that's done through a minimal example next!
A Minimal Example
In this minimal example, we'll import and use the WebLlmProvider, which ships with the plugin by default. Note that the plugin comes with 3 built-in providers (OpenAI, Gemini and WebLlm), which cover the vast majority of common use cases. Let's go ahead and import the WebLlmProvider and initialize it within the provider property inside llmConnector:
import ChatBot from "react-chatbotify";
import LlmConnector, { WebLlmProvider } from "@rcb-plugins/llm-connector";

const MyComponent = () => {
  const flow = {
    start: {
      llmConnector: {
        provider: new WebLlmProvider({
          model: 'Qwen2-0.5B-Instruct-q4f16_1-MLC',
        }),
      }
    }
  }
  return (
    <ChatBot flow={flow} plugins={[LlmConnector()]}/>
  );
};
Notice that when we initialized the WebLlmProvider, we also passed in a minimal set of configurations, which included the model. In this case, we tried it with Qwen2-0.5B-Instruct-q4f16_1-MLC, but feel free to test it with other models as well (bear in mind the size of the model if you're running it in your browser)!
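For instance, swapping in a different model is a one-line change. The model ID below is taken from WebLLM's prebuilt model list; if you pick another, verify it against the list for the WebLLM version you're using, since larger models mean larger downloads into the browser:

provider: new WebLlmProvider({
  model: 'Llama-3.2-1B-Instruct-q4f16_1-MLC',
}),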
It is important to note that configurations for providers can actually vary greatly. For the configuration guides of the default providers, you may look here.
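As a taste of how different the configurations can be, here's a rough sketch of what connecting to OpenAI might look like instead. The provider name OpenaiProvider and its option fields are assumptions from my reading of the configuration guides, so double-check them there:

import LlmConnector, { OpenaiProvider } from "@rcb-plugins/llm-connector";

// a sketch only - field names are assumptions, see the configuration guides
const flow = {
  start: {
    llmConnector: {
      provider: new OpenaiProvider({
        mode: 'direct',       // assumed: calls OpenAI directly from the browser
        model: 'gpt-4o-mini',
        apiKey: import.meta.env.VITE_OPENAI_API_KEY, // assumes Vite; never hardcode keys
      }),
    }
  }
}

For anything beyond local experiments, route the request through a backend proxy rather than shipping an API key to the browser.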
The React ChatBotify documentation website also comes with several live examples demonstrating the default providers at work. You are strongly encouraged to check them out!
Creating Your Own Provider
While the plugin offers default providers to cater for the vast majority of common use cases, it's understandable that advanced users may wish to customize their LLM solutions. With that in mind, the plugin is designed to allow users to easily provide their own custom providers!
Developers looking to create custom providers can do so by simply importing and implementing the Provider interface. The only method enforced by the interface is sendMessages, which returns an AsyncGenerator<string> for the LLM Connector Plugin to consume. A minimal example of a custom provider is shown below:
import { Message } from "react-chatbotify";
import { Provider } from "@rcb-plugins/llm-connector";

class MyCustomProvider implements Provider {
  /**
   * Streams or batch-calls an LLM and yields each chunk (or the full text).
   *
   * @param messages messages to include in the request
   */
  public async *sendMessages(messages: Message[]): AsyncGenerator<string> {
    // obviously we should do something with the messages (e.g. call a proxy),
    // but this is just an example
    yield "Hello World!";
  }
}
If you're looking to create your own provider, consider referencing the implementations for the default providers.
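Once your custom provider is implemented, wiring it in looks exactly like using a default provider:

const flow = {
  start: {
    llmConnector: {
      provider: new MyCustomProvider(),
    }
  }
}

return (
  <ChatBot flow={flow} plugins={[LlmConnector()]}/>
);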
Conclusion
I hope this article has demonstrated how much simpler and faster it is now to integrate LLMs with React ChatBotify.
In the next few articles, we'll dive into more detailed integrations with each of the default providers, including how we can end an LLM conversation. If you're keen to dive deeper, do keep a lookout!
Finally, if you have any feedback, suggestions, or thoughts about what's shared, feel free to leave a comment or reach out on Discord. Thank you for reading and see you around! 😊