In the previous article, our focus was on delving into foundational elements of LangChain4j such as ChatLanguageModel, ChatMessage, ChatMemory, and others. Working with components at this level offers great flexibility and complete control, but it comes with the added burden of writing extensive boilerplate code. LLM-driven applications typically require a multitude of interconnected components rather than a single one: prompt templates, chat memory, LLMs, output parsers, and RAG components such as embedding models and embedding stores. Coordinating the numerous interactions between all these components quickly becomes an intricate and laborious task.
LangChain4j aims to simplify the development process by allowing developers to concentrate on the core business logic without getting bogged down in intricate implementation details. To achieve this, LangChain4j provides two essential high-level abstractions: AiServices and Chains. These concepts are designed to streamline the workflow, enabling developers to leverage the power of AI effectively while minimizing the complexity of low-level operations.
AiServices and Chains
AiServices
LangChain4j introduces a novel approach called AiServices, tailored specifically for the Java ecosystem. The primary objective of AI Services is to abstract away the complexities of interacting with Large Language Models (LLMs) and other components, providing a simple and intuitive API.
This approach draws inspiration from popular frameworks like Spring Data JPA and Retrofit: developers declaratively define an interface representing the desired API, and LangChain4j automatically generates an object (a proxy) that implements it. AiServices can be perceived as a component within the service layer of the application, designed to provide AI-powered services, hence the name.
AiServices streamlines common operations such as formatting inputs for LLMs and parsing their outputs. Furthermore, it supports advanced features like chat memory, tools (function calling), and Retrieval-Augmented Generation (RAG).
AiServices can be leveraged to build stateful interactive applications that facilitate back-and-forth interactions, as well as to automate processes where each call to the LLM is isolated and self-contained. This versatility empowers developers to harness the power of AI in a seamless and efficient manner, without the need to delve into low-level implementation details.
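As a quick preview, the declarative pattern can be sketched in just a few lines. The interface name and prompt below are illustrative, and the sketch assumes a ChatLanguageModel named model has already been built:

```java
// Illustrative sketch of the declarative AiServices style.
// Assumes `model` is an already-built ChatLanguageModel
// (e.g. via OllamaChatModel.builder(), as used later in this article).
interface Assistant {
    String answer(String question);
}

// LangChain4j generates a proxy object implementing the interface:
Assistant assistant = AiServices.create(Assistant.class, model);
String reply = assistant.answer("Summarize LangChain4j in one sentence.");
```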
Chains
LangChain4j also offers another composable approach called Chains. The concept of Chains originates from Python's LangChain. Chains combine multiple low-level components and orchestrate the interactions between them, enabling the creation of more complex and customized workflows.
However, one potential drawback of Chains is their inherent rigidity, which can pose challenges when customization is required. Currently, LangChain4j implements only two types of Chains: ConversationalChain and ConversationalRetrievalChain.
ConversationalChain facilitates back-and-forth conversations with an LLM, maintaining context and memory across multiple interactions. ConversationalRetrievalChain extends this functionality by incorporating a retrieval component (ContentRetriever), allowing the LLM to access and leverage external data sources during the conversation.
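Although this article focuses on AiServices, a rough sketch of ConversationalChain usage may help for comparison. It assumes the same local Ollama setup used in the examples below; treat it as an illustration rather than a definitive recipe:

```java
// Sketch: ConversationalChain wires a model and chat memory together
// and exposes a single execute(String) method. Assumes a local Ollama
// server at http://localhost:11434 serving the "mistral" model.
ChatLanguageModel model = OllamaChatModel.builder()
        .baseUrl("http://localhost:11434")
        .modelName("mistral")
        .build();

ConversationalChain chain = ConversationalChain.builder()
        .chatLanguageModel(model)
        .chatMemory(MessageWindowChatMemory.withMaxMessages(10))
        .build();

String answer = chain.execute("Hello, my name is Kevin");
System.out.println(answer); // follow-up calls share the same memory
```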
LangChain4j recommends using AiServices instead of Chains, as it is more flexible, declarative, and provides a simpler API.
In this article, we will see various examples of using AiServices. First, let's look at a basic AiServices example.
//DEPS dev.langchain4j:langchain4j:0.29.1
//DEPS dev.langchain4j:langchain4j-ollama:0.29.1
import java.io.Console;
import java.time.Duration;
import java.util.Set;
import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.service.AiServices;
class AiServicesBasic {

    private static final String MODEL = "mistral";
    private static final String BASE_URL = "http://localhost:11434";
    private static final Duration timeout = Duration.ofSeconds(120);

    interface ChatMinion {
        String chat(String message);
    }

    public static void main(String[] args) {
        ChatLanguageModel model = OllamaChatModel.builder()
                .baseUrl(BASE_URL)
                .modelName(MODEL)
                .temperature(0.2)
                .timeout(timeout)
                .build();
        ChatMemory memory = MessageWindowChatMemory.withMaxMessages(10);
        ChatMinion minion = AiServices.builder(ChatMinion.class)
                .chatLanguageModel(model)
                .chatMemory(memory)
                .build();
        Console console = System.console();
        String question = console.readLine("\n\nPlease enter your question: ");
        Set<String> set = Set.of("exit", "quit");
        while (!set.contains(question.toLowerCase())) {
            String response = minion.chat(question);
            System.out.println(response);
            question = console.readLine("\n\nPlease enter your question: ");
        }
    }
}
We define a simple interface:
interface ChatMinion {
    String chat(String message);
}
Next come the low-level components, ChatLanguageModel and ChatMemory:
ChatLanguageModel model = OllamaChatModel.builder()
        .baseUrl(BASE_URL)
        .modelName(MODEL)
        .temperature(0.2)
        .timeout(timeout)
        .build();
ChatMemory memory = MessageWindowChatMemory.withMaxMessages(10);
Finally, we create the AiServices object using the low-level components we created above:
ChatMinion minion = AiServices.builder(ChatMinion.class)
        .chatLanguageModel(model)
        .chatMemory(memory)
        .build();
LangChain4j's AiServices utility creates proxy objects that implement the custom interfaces we define (ChatMinion in this case). AiServices must be provided the Class of the interface along with the low-level components to be integrated (ChatLanguageModel and ChatMemory). It then generates a proxy object that implements the interface using reflection.
This proxy object handles the necessary conversions for inputs and outputs, abstracting away the complexities of working with low-level components. Here, the ChatMinion interface has a chat method that accepts a String as input. However, the underlying ChatLanguageModel component expects a ChatMessage object as input. In this scenario, AiServices automatically converts the String input into a UserMessage before invoking the ChatLanguageModel.
Similarly, when the ChatLanguageModel returns an AiMessage, AiServices converts it into a String before returning the result from the chat method. This seamless conversion allows working with familiar data types in the application code, while AiServices handles the underlying transformations and interactions with the low-level components transparently.
String response = minion.chat(question);
To execute the statement above, AiServices does the heavy lifting just described.
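To make that concrete, the proxy's work for a single minion.chat(question) call can be approximated with the low-level components directly. This is a simplified sketch of the idea, not the actual generated code:

```java
// Approximation of what the AiServices proxy does per call
// (reuses the `model` and `memory` objects built above):
UserMessage userMessage = UserMessage.from(question);  // String -> ChatMessage
memory.add(userMessage);                               // record in chat memory
Response<AiMessage> response = model.generate(memory.messages());
memory.add(response.content());                        // remember the AI reply
String text = response.content().text();               // AiMessage -> String
```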
The next code example is the streaming version of the AiServices, where the responses are streamed token by token, unlike the previous one.
One additional point to note is that the LLM responds in a sarcastic tone! That is achieved using the @SystemMessage annotation; AiServices takes care of passing the system message to the LLM.
interface ChatMinion {
    @SystemMessage("Answer in a sarcastic tone.")
    TokenStream chat(String message);
}
Here's the full code.
//DEPS dev.langchain4j:langchain4j:0.29.1
//DEPS dev.langchain4j:langchain4j-ollama:0.29.1
import java.io.Console;
import java.time.Duration;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.chat.StreamingChatLanguageModel;
import dev.langchain4j.model.ollama.OllamaStreamingChatModel;
import dev.langchain4j.model.output.Response;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.TokenStream;
class AiServicesStream {

    private static final String MODEL = "mistral";
    private static final String BASE_URL = "http://localhost:11434";
    private static final Duration timeout = Duration.ofSeconds(120);
    private static String question;

    interface ChatMinion {
        @SystemMessage("Answer in a sarcastic tone.")
        TokenStream chat(String message);
    }

    public static void main(String[] args) {
        StreamingChatLanguageModel model = OllamaStreamingChatModel.builder()
                .baseUrl(BASE_URL)
                .modelName(MODEL)
                .timeout(timeout)
                .temperature(0.0)
                .build();
        ChatMemory memory = MessageWindowChatMemory.withMaxMessages(10);
        ChatMinion minion = AiServices.builder(ChatMinion.class)
                .streamingChatLanguageModel(model)
                .chatMemory(memory)
                .build();
        Console console = System.console();
        question = console.readLine("\n\nPlease enter your question: ");
        Set<String> set = Set.of("exit", "quit");
        while (!set.contains(question.toLowerCase())) {
            CompletableFuture<Response<AiMessage>> future = new CompletableFuture<>();
            TokenStream stream = minion.chat(question);
            stream.onNext(System.out::print)
                    .onComplete(future::complete)
                    .onError(future::completeExceptionally)
                    .start();
            future.join();
            question = console.readLine("\n\nPlease enter your question: ");
        }
        System.exit(0);
    }
}
Sample input and output for the code above:
Please enter your question: Hello my name is Kevin
Oh, hello there! I'm just the most interesting person you'll ever meet.
You know, I have a pet unicorn that I ride to work every day.
And I can solve a Rubik's cube in under 5 seconds, no big deal.
So, what's your superpower? Oh wait, you don't have one?
Well, I guess we'll just have to be impressed by my extraordinary abilities then.
Please enter your question: What is my name?
Oh, right! I almost forgot.
Your name is... oh, who am I kidding?
It doesn't matter what your name is.
I bet you're still more boring than a snail's race.
But hey, keep trying to impress me with your mundane existence!
It's always a good laugh.
While the examples so far have focused on text-based interactions with Large Language Models (LLMs), the power of AiServices extends far beyond plain text. AiServices can leverage the capabilities of LLMs to work with various types of structured data, such as Plain Old Java Objects (POJOs), Collection classes, and more.
By leveraging the versatility of LLMs, AiServices can seamlessly convert structured data objects into a format suitable for the LLM, process the data, and then convert the LLM's output back into the desired structured format. This powerful feature enables developers to harness LLMs for a wide range of tasks involving structured data, such as data processing, analysis, transformation, and generation.
For instance, you could define an interface that accepts or returns a POJO representing a complex data structure, and AiServices will handle the conversion between the POJO and the LLM's input/output format transparently. Similarly, you could work with Collection classes like Lists or Maps, allowing the LLM to process and manipulate the data within these collections.
This capability opens up numerous possibilities for integrating LLMs into various domains and applications, extending their utility beyond pure text-based tasks. With AiServices, you can process and manipulate structured data in a seamless and intuitive manner, without being constrained by the limitations of traditional text-based interfaces.
In the following example, let's build a crude profanity filter for text.
//DEPS dev.langchain4j:langchain4j:0.29.1
//DEPS dev.langchain4j:langchain4j-open-ai:0.29.1
import java.util.Map;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.UserMessage;
class AiServicesWordAnalysis {

    enum WordAnalysis {
        OFFENSIVE, BAD, NEUTRAL, GOOD
    }

    interface WordModerator {

        @UserMessage("Analyze the profanity of {{it}}")
        WordAnalysis analyzeWords(String text);

        @UserMessage("Does {{it}} have a profanity?")
        boolean isProfane(String text);

        @UserMessage("Provide alternate better words for the profane words in {{it}}")
        Map<String, String> alternateWords(String text);
    }

    public static void main(String[] args) {
        ChatLanguageModel model = OpenAiChatModel.withApiKey("demo");
        WordModerator moderator = AiServices.create(WordModerator.class, model);

        WordAnalysis analysis = moderator.analyzeWords("He is shit");
        System.out.println("Analysis: " + analysis);

        boolean isProfane = moderator.isProfane("He is a dumbo");
        System.out.println("Is Profane: " + isProfane);

        Map<String, String> replacements = moderator.alternateWords("He is not intelligent but a shit and dumbo");
        System.out.println(replacements);
    }
}
Here we define our interface WordModerator with three methods, along with the enum WordAnalysis, to be supplied to AiServices:
enum WordAnalysis {
    OFFENSIVE, BAD, NEUTRAL, GOOD
}

interface WordModerator {

    @UserMessage("Analyze the profanity of {{it}}")
    WordAnalysis analyzeWords(String text);

    @UserMessage("Does {{it}} have a profanity?")
    boolean isProfane(String text);

    @UserMessage("Provide alternate better words for the profane words in {{it}}")
    Map<String, String> alternateWords(String text);
}
- analyzeWords determines the degree of profanity in the given text, as defined by the enum WordAnalysis: OFFENSIVE, BAD, NEUTRAL, GOOD.
- isProfane returns true or false, indicating the presence of profanity in the given text.
- alternateWords identifies the profane words, for which the LLM provides alternate words to use as replacements.
As mentioned above, AiServices handles the conversion of input/output between the application and the LLM transparently.
The following is what the LLM returns when running the code:
Analysis: OFFENSIVE
Is Profane: true
{shit=fool, dumbo=dimwit}
In the next example, we will see how AiServices deals with POJOs. We will build a primitive resume screener and a name extractor from text. It is quite similar to the previous example.
//DEPS dev.langchain4j:langchain4j:0.29.1
//DEPS dev.langchain4j:langchain4j-ollama:0.29.1
import java.time.Duration;
import java.util.List;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.UserMessage;
class AiServicesCandidateInfo {

    private static final String MODEL = "mistral";
    private static final String BASE_URL = "http://localhost:11434";
    private static final Duration timeout = Duration.ofSeconds(120);

    static class Candidate {
        String firstName;
        String lastName;
        String email;
        String experience;
        String profession;
        String phone;

        @Override
        public String toString() {
            return "Candidate: [firstName=" + firstName + ", lastName=" + lastName + ", email=" + email
                    + ", experience=" + experience + ", profession=" + profession + ", phone=" + phone + "]";
        }
    }

    interface CandidateInfoCollector {

        @UserMessage("Extract information about a person from {{it}}")
        Candidate extractCandidateInfo(String text);

        @UserMessage("Extract all person names from {{it}}")
        List<String> extractPersonNames(String text);
    }

    public static void main(String[] args) {
        ChatLanguageModel model = OllamaChatModel.builder()
                .baseUrl(BASE_URL)
                .modelName(MODEL)
                .timeout(timeout)
                .build();
        CandidateInfoCollector candidateInfoCollector = AiServices.create(CandidateInfoCollector.class, model);

        String text = """
                I am Arjun Kumar, I have been working as a Software Developer for 5 years.
                Email: arjun@myemail.com
                Phone: +919876543210
                """;
        Candidate candidate = candidateInfoCollector.extractCandidateInfo(text);
        System.out.println(candidate);

        String text2 = """
                There was an interview being conducted in a software company.
                Arjun and Ananya planned to attend the interview.
                Next morning they went to the venue.
                There they met their friends Akash, Mithun, Sita, Kausalya and Kumar who were also attending.
                The interviewers were Bob and Steve!
                """;
        List<String> candidates = candidateInfoCollector.extractPersonNames(text2);
        System.out.println(candidates);
    }
}
Here we define our POJO, Candidate, and override its toString method:
class Candidate {
    String firstName;
    String lastName;
    String email;
    String experience;
    String profession;
    String phone;
    // toString() override omitted for brevity
}
The interface CandidateInfoCollector defines two methods annotated with the @UserMessage instruction:
interface CandidateInfoCollector {

    @UserMessage("Extract information about a person from {{it}}")
    Candidate extractCandidateInfo(String text);

    @UserMessage("Extract all person names from {{it}}")
    List<String> extractPersonNames(String text);
}
Running the code extracts the POJO object and the list of names:
Candidate: [firstName=Arjun, lastName=Kumar, email=arjun@myemail.com, experience=5 years, profession=Software Developer, phone=+919876543210]
[Arjun, Ananya, Akash, Mithun, Sita, Kausalya, Kumar, Bob, Steve]
As we can see from the examples above, AiServices enables developers to focus on the business logic by taking away the complexities of interacting with the LLM and handling the different data types transparently.
There is more to explore in LangChain4j, such as Tools (function calling) and Retrieval-Augmented Generation (RAG). We will explore those in upcoming articles.
The code examples are available in the GitHub repo.
Happy Coding!