LangChain Expression Language

LangChain has emerged as a prominent framework, empowering developers to harness the capabilities of advanced language models. However, development can be challenging, particularly for those without a solid technical background. The LangChain Expression Language (LCEL) helps by providing a simple, declarative way to compose core components and much more.

Benefits of LangChain Expression Language

  1. Simplified Chain Composition: Engaging with core components becomes effortless through intuitive pipe operations.
  2. Efficient Language Model Calls: Out-of-the-box support for batch, async, and streaming APIs eliminates the complexity of optimizing language model interactions.
  3. Structured Conversational Flows: Provides a well-defined structure for Conversation Retrieval Chains, VectorStore retrieval, and Memory-based prompts.
  4. Function Calling: Similar to OpenAI’s offerings, LCEL introduces a seamless method of function calling, enhancing code clarity and usability.

Introducing ChefBot: Your Culinary Companion

Meet ChefBot, the culinary guide for our LCEL journey. Given ingredients or preferences, ChefBot crafts personalized recipe recommendations. Get ready to see the LangChain Expression Language in action on ChefBot’s delectable journey!

If you are a visual learner or want to dive deeper, refer to the YouTube video linked at the end of this post.

PromptTemplate, LLMs, and OutputParser

Notice the significant syntactical shift with a declarative approach: Draft a prompt, select the OpenAI model, and effortlessly combine them using the pipe operator (chain = prompt | model).

Watch how ChefBot crafts a recipe from a list of ingredients in the code below. With the String Output Parser, the model’s chat message output is also neatly transformed into a plain string.

from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.schema.output_parser import StrOutputParser

model = ChatOpenAI()
prompt = ChatPromptTemplate.from_template(
    "Given the ingredients: {ingredients}, what is a recipe that I can cook at home?"
)
chain = prompt | model | StrOutputParser()
chain.invoke({"ingredients": "chicken, tomatoes, garlic, olive oil"})
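To see what the String Output Parser buys you, here is a minimal sketch (reusing the prompt and model above) comparing the raw chat-model output with the parsed result:

# Without the parser, the chain returns a chat message object (an AIMessage)
raw_chain = prompt | model
raw = raw_chain.invoke({"ingredients": "chicken, tomatoes, garlic, olive oil"})
print(type(raw).__name__)  # AIMessage

# With StrOutputParser, the same call returns a plain Python string
parsed_chain = prompt | model | StrOutputParser()
parsed = parsed_chain.invoke({"ingredients": "chicken, tomatoes, garlic, olive oil"})
print(type(parsed).__name__)  # str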

Batch, Stream, and Async

Batch: LCEL’s batch method simplifies LLM queries by executing multiple tasks in one go. LangChain makes the calls to the OpenAI model in parallel, optimizing performance.

response = chain.batch([
    {"ingredients": "chicken, tomatoes, garlic"},
    {"ingredients": "chicken, tomatoes, eggs"},
])
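batch also accepts a config dictionary; as a small sketch, you can cap the number of concurrent model calls with max_concurrency (the value 5 here is an arbitrary choice):

# Limit the batch to at most 5 parallel calls to the model
response = chain.batch(
    [{"ingredients": "chicken, tomatoes, garlic"},
     {"ingredients": "chicken, tomatoes, eggs"}],
    config={"max_concurrency": 5},
)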

Stream: Streaming enables real-time data flow, ideal for dynamic chatbots and live applications. ChefBot illustrates its power by progressively streaming the recipe as it is generated, eliminating wait time.

chain = prompt | model
for s in chain.stream({"ingredients": "chicken, tomatoes, garlic, olive oil"}):
    print(s.content, end="")

Async: LangChain Expression Language provides async counterparts for invoke, batch, and stream: ainvoke, abatch, and astream. Awaiting these methods lets tasks run concurrently, boosting responsiveness and application speed.

# ainvoke must be awaited inside an async function (or a notebook with top-level await)
response = await chain.ainvoke({"ingredients": "chicken, tomatoes, garlic, olive oil"})
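Top-level await only works in notebooks and async-aware REPLs. Here is a minimal sketch of running the same calls from a plain Python script with asyncio (astream is the async counterpart of stream):

import asyncio

async def main():
    # Await the async call; other tasks can run while the model responds
    response = await chain.ainvoke({"ingredients": "chicken, tomatoes, garlic, olive oil"})
    print(response.content)

    # astream yields chunks asynchronously, just as stream does synchronously
    async for s in chain.astream({"ingredients": "chicken, tomatoes, eggs"}):
        print(s.content, end="")

asyncio.run(main())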

Function Calling

LangChain Expression Language reaches beyond data piping, supporting function-based operations for tailored tasks on demand. This capability also enhances the workflow with reusability.

In conventional programming, a function is a reusable block of code. In LangChain, functions are described as structured schemas that are sent to platforms like OpenAI for processing.

Let’s implement this concept for ChefBot. We define a reusable function schema, ‘generate_recipe’, which crafts recipes from the ingredients provided. By marking cuisine as a required parameter, we ensure the model’s response always names the recipe’s cuisine, so the important attributes are guaranteed to appear in the outcome.

generate_recipe_fn = [
    {
      "name": "generate_recipe",
      "description": "Generate a recipe based on user preferences",
      "parameters": {
        "type": "object",
        "properties": {
          "cuisine": {
                "type": "string",
                "description": "The cuisine of the recipe"
            }
        },
        "required": ["cuisine"]
      }
    }
  ]

chain = prompt | model.bind(function_call={"name": "generate_recipe"}, functions=generate_recipe_fn)

# Generate the recipe using ChefBot
recipe = chain.invoke({"ingredients": "chicken, tomatoes, garlic, olive oil"})

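The model’s reply comes back as a chat message whose function_call arguments are a JSON string. As a sketch of turning them into a Python dict, LangChain ships a JsonOutputFunctionsParser that can be piped onto the chain:

from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser

# Piping the parser onto the chain extracts the function arguments as a dict
parsing_chain = (
    prompt
    | model.bind(function_call={"name": "generate_recipe"}, functions=generate_recipe_fn)
    | JsonOutputFunctionsParser()
)
args = parsing_chain.invoke({"ingredients": "chicken, tomatoes, garlic, olive oil"})
print(args)  # e.g. {"cuisine": "Italian"}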

VectorStore, Embeddings & Retrievers

Moving into intricate terrain, we delve into vector stores, embeddings, and retrievers.

Vector stores host vector representations of text, while embeddings are those representations: arrays of numbers that capture the meaning of a word or phrase in a high-dimensional space. Using them, retriever chains pull in contextual data to ground responses to user queries.
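To make the idea concrete, here is a minimal sketch of what an embedding looks like (it assumes an OpenAI API key is configured):

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vector = embeddings.embed_query("Garlic Tomato Chicken")
print(len(vector))  # dimensionality of the embedding space, e.g. 1536
print(vector[:3])   # the first few coordinates of the vector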

Next, we implement this concept with ChefBot to showcase its real-world efficacy. ChefBot houses a vast recipe collection stored as knowledge embeddings, which can be queried with a “retrieval-augmented generation” chain. Let’s walk through the example.

from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema.runnable import RunnablePassthrough

# recipe for Garlic Tomato Chicken to be stored as embeddings
recipe_ingredients = '''The ingredients of Garlic Tomato Chicken are:
1. 2 chicken breasts (boneless, skinless)
2. 4 tomatoes (diced)
3. 4 cloves of garlic (minced)
4. 2 tablespoons olive oil'''

# create the retriever from the Chroma vector store:
# pass the recipe text and embed it with OpenAIEmbeddings
vectorstore = Chroma.from_texts([recipe_ingredients], embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
# template to be passed to the prompt
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()
# Notice context and RunnablePassthrough input to the chain
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

response = chain.invoke("How many tomatoes are in the recipe ingredients?")

In real-world applications, you can ingest your own private data as embeddings into the vector store and use the expression language for easy and accurate retrieval.
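As a sketch of that ingestion step, assuming a hypothetical recipes.txt file of your own data and LangChain’s standard loader and splitter utilities:

from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings

# Load a private document and split it into chunks small enough to embed
docs = TextLoader("recipes.txt").load()  # hypothetical file of your own recipes
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# Store the chunk embeddings in Chroma and expose them as a retriever
vectorstore = Chroma.from_documents(chunks, embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()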

Conclusion

As we wrap up this exploration, remember that LangChain Expression Language’s reach extends beyond the topics we’ve covered. Conversational Retrieval Chains, Multi-LLM Chain Fusion, Tools Integration, Memory Enhancement, SQL Querying, and Python REPL Coding are among the multifaceted areas that LCEL seamlessly touches.

To truly grasp the depth of LCEL’s capabilities with the above-mentioned features and to see real-world demonstrations, refer to the YouTube video:

https://www.youtube.com/watch?v=AM77pbogh5s
