Stephen Collins

How to Create LCEL Chains in LangChain

LangChain is a framework designed to enhance the capabilities of large language models (LLMs) by integrating external tools and data sources directly into the workflow. It addresses two main limitations of LLMs: the inability to access real-time data and the lack of tools to interact with the external world. By using LangChain, developers can create applications that leverage the intelligence of LLMs while overcoming these restrictions.

This guide will walk you through creating a simple app that fetches current stock prices and calculates the percentage difference between them using a LangChain chain. I'll also demonstrate the LangChain Expression Language (LCEL), showcasing how to create chains of operations with less syntax than was needed before LCEL was introduced.
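Before diving in, it helps to have a mental model of what LCEL's pipe syntax does: the | operator composes steps into a pipeline, with each step's output feeding the next step's input. As a rough illustration only (this is not LangChain's actual implementation), the pattern boils down to operator overloading:

```python
class Runnable:
    """Toy stand-in for LangChain's Runnable, to show what the | operator does."""

    def __init__(self, func):
        self.func = func

    def __or__(self, other):
        # a | b builds a new Runnable that runs a, then feeds a's output into b
        return Runnable(lambda x: other.func(self.func(x)))

    def invoke(self, x):
        return self.func(x)


double = Runnable(lambda x: x * 2)
increment = Runnable(lambda x: x + 1)

chain = double | increment
print(chain.invoke(5))  # → 11
```

In real LCEL, prompts, models, output parsers, and tools are all runnables, which is why they can be chained together with | as we'll do in Step 3.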

All of the code for this tutorial is available on GitHub.

Project Overview

Our project involves creating a LangChain chain that:

  1. Fetches the current prices of two stocks.
  2. Passes these prices to a Python Read-Eval-Print Loop (REPL) to calculate the percent difference between them.

These tasks require two separate tools, which we pass to our LLM; each tool addresses one of the LLM limitations described above.

Prerequisites

Before starting, ensure you have Python 3.9 or greater installed on your system along with the following packages:

  • requests for making HTTP requests.
  • python-dotenv for loading environment variables.
  • langchain for creating LangChain Chains, agents, and retrieval strategies.
  • langchain_openai for integrating OpenAI models.
  • langchain_core for LangChain core functionalities.

In addition, you need two environment variables: OPENAI_API_KEY (the standard variable that langchain_openai reads for your OpenAI API key) and SERP_API_KEY (read by our stock_search tool).

See the example.env file for an example of how to set up your environment variables.
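For reference, an example.env along these lines should work (the variable names come from the code below and from langchain_openai's standard OPENAI_API_KEY convention; the values are placeholders):

```
OPENAI_API_KEY=your-openai-api-key
SERP_API_KEY=your-serp-api-key
```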

Step-by-Step Implementation

Step 1: Setup and Initialization

First, we install the necessary libraries from the requirements.txt file:

```shell
pip3 install -r requirements.txt
```

Next, import the necessary modules and set up the environment:

```python
import os
import requests
from langchain.output_parsers import JsonOutputKeyToolsParser
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain.globals import set_debug
from dotenv import load_dotenv

# Enable debug mode to see intermediate chain outputs
set_debug(True)

load_dotenv()

OPENAI_CHAT_MODEL = "gpt-3.5-turbo-1106"
```

Here, we enable debug mode for visibility into intermediate chain steps, load our environment variables, and set our OpenAI model choice.

Step 2: Define Tools

We define two tools: repl_tool and stock_search.

  • repl_tool: Executes a Python command and returns the result. This tool lets us perform precise calculations that LLMs can't reliably do on their own.

```python
@tool
def repl_tool(command: str) -> str:
    """Run a Python REPL command. Return the final value set to a variable called "result" """
    exec_globals = {}
    # Run the command in a fresh namespace, then read back the "result" variable
    exec(command, {}, exec_globals)
    return exec_globals['result']
```
  • stock_search: Fetches stock prices using the SERP API. This tool overcomes the LLM's limitation of not accessing real-time data.

```python
@tool
def stock_search(stock_tickers: list) -> list:
    """Search for multiple stocks with their ticker symbols, including the stock exchange, like AAPL:NASDAQ or GOOGL:NASDAQ, returning a list of stock summaries using the SERP API."""
    api_key = os.getenv('SERP_API_KEY')
    if not api_key:
        raise ValueError("SERP API key not found in environment variables.")
    # Fetching logic here...
```
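For completeness, the elided fetching logic might look roughly like the sketch below. Note that the google_finance engine, the search.json endpoint, and the summary response key are my assumptions about SerpApi's interface rather than details from the original source; check SerpApi's documentation for the exact parameters.

```python
import os

import requests

SERPAPI_URL = "https://serpapi.com/search.json"


def build_serpapi_params(ticker: str, api_key: str) -> dict:
    # Hypothetical parameter set for SerpApi's Google Finance engine
    return {"engine": "google_finance", "q": ticker, "api_key": api_key}


def fetch_stock_summaries(stock_tickers: list) -> list:
    api_key = os.getenv("SERP_API_KEY", "")
    summaries = []
    for ticker in stock_tickers:
        response = requests.get(
            SERPAPI_URL, params=build_serpapi_params(ticker, api_key), timeout=10
        )
        response.raise_for_status()
        # "summary" is an assumed key in the JSON response
        summaries.append(response.json().get("summary"))
    return summaries
```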

Step 3: Create LangChain Chain

We construct a chain using LCEL syntax, demonstrating the integration of prompts, models, output parsers, and tools. Here, model is our ChatOpenAI instance with the tools bound to it, and get_stock_prices_prompt and calculate_percent_difference_prompt are ChatPromptTemplates; all three are defined in the full source on GitHub.

```python
chain = (
    get_stock_prices_prompt
    | model
    | JsonOutputKeyToolsParser(key_name="stock_search", first_tool_only=True, return_single=True)
    | stock_search
    | calculate_percent_difference_prompt
    | model
    | JsonOutputKeyToolsParser(key_name="repl_tool", first_tool_only=True, return_single=True)
    | repl_tool
)
```

This chain starts with a prompt asking the model for the two stock prices, parses the resulting tool call and fetches the prices with stock_search, then prompts the model again to produce a calculation that repl_tool executes, returning the percent difference.
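As a sanity check on what we're ultimately asking repl_tool to compute: "percent difference" here means the absolute difference between the two prices relative to their mean. That is one common definition; the exact formula the model generates at runtime may differ. In plain Python:

```python
def percent_difference(price_a: float, price_b: float) -> float:
    # Absolute difference relative to the mean of the two prices, as a percentage
    mean = (price_a + price_b) / 2
    return abs(price_a - price_b) / mean * 100


print(percent_difference(150.0, 50.0))  # → 100.0
```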

Step 4: Invoke the Chain

Finally, we invoke the chain with specific stock tickers and print the result:

```python
result = chain.invoke({"stockA": "AAPL", "stockB": "GOOGL"})
# Returns the percent difference between the two stock prices
print(result)
```

Criticisms of LangChain

This example project required quite a bit of digging into LangChain's source code because of how outdated the documentation is.

Additionally, the documentation's examples are not well explained and often omit critical pieces, so you can't simply copy and paste the snippets to try them out yourself.

If you have to use LangChain, I hope this guide helps you.

However, my take? I would recommend against using LangChain for production-ready applications unless you are prepared to dig through the source code to understand how it all works.

If there's interest in more LangChain tutorials, I'll continue to write about LangChain and provide guides that are more up-to-date and helpful than the official documentation.

Hopefully in the near future the LangChain team makes a serious effort to update their documentation, so LangChain becomes easier to use.

Conclusion

By following this tutorial, you've learned how to use LangChain to fetch real-time data and perform calculations by integrating external tools with LLMs. This example project serves as a foundation for building more complex applications that leverage LangChain and LLMs. Whether you're interested in financial analysis or any other domain, LangChain offers a way to extend the capabilities of your LLM-based applications.
