<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jaydeep Biswas</title>
    <description>The latest articles on DEV Community by Jaydeep Biswas (@jaydeepb21).</description>
    <link>https://dev.to/jaydeepb21</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1246012%2F4a2ee6d9-f77d-494b-87ef-6a0e843b7ed5.jpeg</url>
      <title>DEV Community: Jaydeep Biswas</title>
      <link>https://dev.to/jaydeepb21</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jaydeepb21"/>
    <language>en</language>
    <item>
      <title>Seeking advice on optimizing response time and handling multiple requests on AWS instance with NVIDIA A10G GPU</title>
      <dc:creator>Jaydeep Biswas</dc:creator>
      <pubDate>Thu, 11 Apr 2024 06:27:42 +0000</pubDate>
      <link>https://dev.to/jaydeepb21/seeking-advice-on-optimizing-response-time-and-handling-multiple-requests-on-aws-instance-with-nvidia-a10g-gpu-5fa7</link>
      <guid>https://dev.to/jaydeepb21/seeking-advice-on-optimizing-response-time-and-handling-multiple-requests-on-aws-instance-with-nvidia-a10g-gpu-5fa7</guid>
      <description>&lt;p&gt;Hey everyone,&lt;/p&gt;

&lt;p&gt;I'm currently facing some challenges with optimizing the response time of my AWS instance. Here's the setup: I'm using a &lt;code&gt;g5.xlarge&lt;/code&gt; instance which houses a single NVIDIA A10G GPU with 24GB of VRAM. Recently, I fine-tuned a mistralai/Mistral-7B-Instruct-v0.2 model on my custom data and then merged it with the base model. Additionally, I applied quantization methods to optimize further.&lt;/p&gt;

&lt;p&gt;However, when I send a request to my fine-tuned model, it takes approximately &lt;strong&gt;3 minutes to respond&lt;/strong&gt;, even for requests with a max token limit of 1024. I'm &lt;strong&gt;looking for suggestions on how to reduce this response time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Furthermore, I've &lt;strong&gt;encountered errors when attempting to handle multiple requests simultaneously&lt;/strong&gt;. Specifically, I've received errors like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;"Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)" &lt;/li&gt;
&lt;li&gt;"The SW shall provide an estimated value for the torque
CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with &lt;code&gt;TORCH_USE_CUDA_DSA&lt;/code&gt; to enable device-side assertions."&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Could someone please guide me on how to address these errors and efficiently handle multiple requests simultaneously on my AWS instance?&lt;/p&gt;

&lt;p&gt;Any help or advice would be greatly appreciated. &lt;strong&gt;Thanks in advance!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>llm</category>
      <category>python</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>LangChain's 3rd Module: Agents🦜🕴️</title>
      <dc:creator>Jaydeep Biswas</dc:creator>
      <pubDate>Wed, 10 Jan 2024 18:29:33 +0000</pubDate>
      <link>https://dev.to/jaydeepb21/lanchains-3rd-module-agents-58o6</link>
      <guid>https://dev.to/jaydeepb21/lanchains-3rd-module-agents-58o6</guid>
      <description>&lt;p&gt;Hey there! Throughout our latest blog series, we've delved into a wide array of subjects. Here's an overview of the topics we've explored thus far:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://dev.to/jaydeepb21/unlocking-infinite-possibilities-with-langchain-4543"&gt;Installation and Setup of LangChain&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/jaydeepb21/1st-module-model-io-4b6a"&gt;LangChain's 1st Module: Model I/O&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/jaydeepb21/langchains-2nd-module-retrieval-2ape"&gt;LangChain's 2nd Module: Retrieval&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Exploring LangChain's Agents 🔍🤖
&lt;/h1&gt;

&lt;p&gt;Today, I want to dive into this exciting concept called &lt;strong&gt;"Agents"&lt;/strong&gt; in LangChain. It's pretty mind-blowing!&lt;br&gt;
&lt;strong&gt;LangChain&lt;/strong&gt; introduces an innovative idea called "&lt;strong&gt;Agents&lt;/strong&gt;" that takes the concept of &lt;strong&gt;chains&lt;/strong&gt; to a whole new level. Agents use &lt;strong&gt;language models&lt;/strong&gt; to dynamically figure out sequences of actions to perform, making them highly versatile and adaptable. Unlike regular chains, where actions are hardcoded, agents use language models as reasoning engines to decide which actions to take and in what order.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Agent&lt;/strong&gt; is the main part responsible for decision-making. It harnesses the power of a language model and a prompt to figure out the next steps to achieve a specific objective. The inputs to an agent usually include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tools&lt;/strong&gt;: Descriptions of available tools (more on this later).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User Input&lt;/strong&gt;: The high-level objective or query from the user.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intermediate Steps&lt;/strong&gt;: A history of (action, tool output) pairs executed to reach the current user input.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;The result of an &lt;strong&gt;agent&lt;/strong&gt; can either be the next thing to do (&lt;strong&gt;AgentActions&lt;/strong&gt;) or the ultimate reply to give to the user (&lt;strong&gt;AgentFinish&lt;/strong&gt;). An action includes details about a tool and the input needed for that tool.&lt;/em&gt;&lt;/p&gt;
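The two outcome types can be pictured with simplified stand-in classes. This is a hypothetical sketch for illustration only, not LangChain's actual implementations:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified stand-ins for illustration (not LangChain's classes)
@dataclass
class AgentAction:
    tool: str        # which tool the agent wants to call
    tool_input: str  # the input to pass to that tool

@dataclass
class AgentFinish:
    return_values: dict = field(default_factory=dict)  # final reply for the user

step = AgentAction(tool="search", tool_input="weather in Berlin")
done = AgentFinish(return_values={"output": "It is sunny."})
```

An agent run produces a sequence of AgentAction values and ends with exactly one AgentFinish.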
&lt;h2&gt;
  
  
  Tools 🛠️
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Tools&lt;/strong&gt; are interfaces that an &lt;strong&gt;agent&lt;/strong&gt; can use to interact with the world. They allow agents to perform various tasks like searching the web, running shell commands, or accessing external APIs. In &lt;strong&gt;LangChain&lt;/strong&gt;, tools are crucial for expanding the capabilities of agents and helping them achieve diverse tasks.&lt;/p&gt;

&lt;p&gt;To use tools in LangChain, you can load them using the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.agents import load_tools

tool_names = [...]
tools = load_tools(tool_names)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some tools may need a base &lt;strong&gt;Language Model (LLM)&lt;/strong&gt; for initialization. In such cases, you can pass an LLM like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.agents import load_tools

tool_names = [...]
llm = ...
tools = load_tools(tool_names, llm=llm)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup allows you to access a variety of tools and integrate them into your agent's workflows. The complete list of tools with usage documentation is available &lt;a href="https://python.langchain.com/docs/integrations/tools"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Examples of Tools  📚🔧
&lt;/h3&gt;

&lt;h4&gt;
  
  
  DuckDuckGo
&lt;/h4&gt;

&lt;p&gt;The &lt;strong&gt;DuckDuckGo&lt;/strong&gt; tool lets you perform web searches using its search engine. Here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()
search.run("Manchester United vs Luton Town match summary")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cy81EzWv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iioje77b0qmtm8555k1b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cy81EzWv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iioje77b0qmtm8555k1b.png" alt="Image description" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  DataForSeo
&lt;/h4&gt;

&lt;p&gt;The &lt;strong&gt;DataForSeo&lt;/strong&gt; toolkit allows you to get search engine results using the DataForSeo API. To use it, you need to set up your API credentials:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

os.environ["DATAFORSEO_LOGIN"] = "&amp;lt;your_api_access_username&amp;gt;"
os.environ["DATAFORSEO_PASSWORD"] = "&amp;lt;your_api_access_password&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once credentials are set, you can create a &lt;strong&gt;DataForSeoAPIWrapper&lt;/strong&gt; tool to access the API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.utilities.dataforseo_api_search import DataForSeoAPIWrapper

wrapper = DataForSeoAPIWrapper()

result = wrapper.run("Weather in Los Angeles")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;DataForSeoAPIWrapper&lt;/strong&gt; tool fetches search engine results from various sources.&lt;/p&gt;

&lt;p&gt;You can customize the type of results and fields returned in the JSON response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;json_wrapper = DataForSeoAPIWrapper(
    json_result_types=["organic", "knowledge_graph", "answer_box"],
    json_result_fields=["type", "title", "description", "text"],
    top_count=3,
)

json_result = json_wrapper.results("Bill Gates")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Specify the location and language for your search results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;customized_wrapper = DataForSeoAPIWrapper(
    top_count=10,
    json_result_types=["organic", "local_pack"],
    json_result_fields=["title", "description", "type"],
    params={"location_name": "Germany", "language_code": "en"},
)

customized_result = customized_wrapper.results("coffee near me")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Choose the search engine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;customized_wrapper = DataForSeoAPIWrapper(
    top_count=10,
    json_result_types=["organic", "local_pack"],
    json_result_fields=["title", "description", "type"],
    params={"location_name": "Germany", "language_code": "en", "se_name": "bing"},
)

customized_result = customized_wrapper.results("coffee near me")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the &lt;code&gt;se_name&lt;/code&gt; parameter switches the search engine to Bing.&lt;/p&gt;

&lt;p&gt;Specify the type of search:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;maps_search = DataForSeoAPIWrapper(
    top_count=10,
    json_result_fields=["title", "value", "address", "rating", "type"],
    params={
        "location_coordinate": "52.512,13.36,12z",
        "language_code": "en",
        "se_type": "maps",
    },
)

maps_search_result = maps_search.results("coffee near me")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These examples showcase how you can customize searches based on result types, fields, location, language, search engine, and search type.&lt;/p&gt;

&lt;h4&gt;
  
  
  Shell (bash)
&lt;/h4&gt;

&lt;p&gt;The &lt;strong&gt;Shell toolkit&lt;/strong&gt; gives agents the ability to interact with the shell environment, allowing them to run shell commands. This feature is powerful but should be used carefully, especially in sandboxed environments. Here's how to use the Shell tool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.tools import ShellTool

shell_tool = ShellTool()

result = shell_tool.run({"commands": ["echo 'Hello World!'", "time"]})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the Shell tool runs two shell commands: echoing "Hello World!" and displaying the current time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eeejfpwc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g22f2n1brdpxk9putou9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eeejfpwc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g22f2n1brdpxk9putou9.png" alt="Image description" width="800" height="249"&gt;&lt;/a&gt;&lt;br&gt;
You can provide the Shell tool to an agent for more complex tasks. Here's an example of an agent using the Shell tool to fetch links from a web page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0.1)

shell_tool.description = shell_tool.description + f"args {shell_tool.args}".replace(
    "{", "{{"
).replace("}", "}}")
self_ask_with_search = initialize_agent(
    [shell_tool], llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
self_ask_with_search.run(
    "Download the langchain.com webpage and grep for all urls. Return only a sorted list of them. Be sure to use double quotes."
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--86WAj_ty--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u5i12exsl5rprq7omkrx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--86WAj_ty--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u5i12exsl5rprq7omkrx.png" alt="Image description" width="800" height="442"&gt;&lt;/a&gt;&lt;br&gt;
In this scenario, the agent uses the Shell tool to execute a series of commands to fetch, filter, and sort URLs from a web page.&lt;/p&gt;

&lt;p&gt;The examples provided showcase some of the tools available in &lt;strong&gt;LangChain&lt;/strong&gt;. These tools ultimately expand the capabilities of agents (explored in the next subsection) and empower them to efficiently perform various tasks. Depending on your project's needs, you can choose the tools and toolkits that best suit your requirements and integrate them into your agent's workflows.&lt;/p&gt;
&lt;h3&gt;
  
  
  Return to Agents ↩️🤖
&lt;/h3&gt;

&lt;p&gt;Let's talk about &lt;strong&gt;agents&lt;/strong&gt; now.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;AgentExecutor&lt;/strong&gt; is the engine that runs an agent: it calls the agent, executes the actions the agent chooses, passes the observations back, and repeats this loop until the agent finishes its task. In simpler terms, it might look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;next_action = agent.get_action(...)
while not isinstance(next_action, AgentFinish):
    observation = run(next_action)
    next_action = agent.get_action(..., next_action, observation)
return next_action
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
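To make that loop concrete, here is a runnable toy version with a hypothetical stub agent and a single tool. It only illustrates the control flow; it is not LangChain's actual AgentExecutor:

```python
# Toy illustration of the agent-executor loop with a stub agent and one tool.
# The "agent" here is a hypothetical stand-in, not a real LangChain agent.

def run_tool(action):
    tools = {"get_word_length": lambda w: len(w)}
    return tools[action["tool"]](action["tool_input"])

def stub_agent(user_input, steps):
    # Decide the next step: call the tool once, then finish.
    if not steps:
        return {"type": "action", "tool": "get_word_length", "tool_input": "educa"}
    last_observation = steps[-1][1]
    return {"type": "finish", "output": f"There are {last_observation} letters."}

def execute(user_input):
    steps = []  # intermediate (action, observation) pairs
    while True:
        decision = stub_agent(user_input, steps)
        if decision["type"] == "finish":
            return decision["output"]
        observation = run_tool(decision)
        steps.append((decision, observation))

print(execute("how many letters in the word educa?"))  # There are 5 letters.
```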



&lt;p&gt;The AgentExecutor deals with various complexities, like what happens when the agent picks a tool that doesn't exist, handling tool errors, managing what the agent produces, and providing logs at different levels.&lt;/p&gt;

&lt;p&gt;Although the &lt;strong&gt;AgentExecutor class&lt;/strong&gt; is the main runtime for agents in LangChain, there are other experimental runtimes like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Plan-and-execute Agent&lt;/li&gt;
&lt;li&gt;Baby AGI&lt;/li&gt;
&lt;li&gt;Auto GPT&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To understand the agent framework better, let's build a basic agent from scratch and then explore pre-built agents.&lt;/p&gt;

&lt;p&gt;Before we dive into building the agent, let's review some key terms and schema:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AgentAction&lt;/strong&gt;: This is like a set of instructions for the agent. It includes the &lt;code&gt;tool&lt;/code&gt; to use and &lt;code&gt;tool_input&lt;/code&gt;, the input for that tool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AgentFinish&lt;/strong&gt;: This indicates the agent has finished its task and is ready to give a response to the user.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intermediate Steps&lt;/strong&gt;: These are like records of what the agent did before. They help the agent remember context for future actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, let's create a simple agent using OpenAI Function Calling. We'll start by making a tool that calculates word length. This is useful because language models sometimes make mistakes when counting word lengths due to tokenization.&lt;/p&gt;

&lt;p&gt;First, load the language model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test the model with a word length calculation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;llm.invoke("how many letters in the word educa?")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Define a simple function to calculate word length:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.agents import tool

@tool
def get_word_length(word: str) -&amp;gt; int:
    """Returns the length of a word."""
    return len(word)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We've created a tool named &lt;em&gt;get_word_length&lt;/em&gt; that takes a word as input and returns its length.&lt;br&gt;
Now, create a prompt for the agent. The prompt guides the agent on how to reason and format the output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a very powerful assistant but not great at calculating word lengths.",
        ),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To provide tools to the agent, format them as OpenAI function calls:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.tools.render import format_tool_to_openai_function

llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the agent by defining input mappings and connecting components:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We've created our agent, which understands user input, uses available tools, and formats output.&lt;br&gt;
Interact with the agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;agent.invoke({"input": "how many letters in the word educa?", "intermediate_steps": []})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's write a runtime for the agent. The simplest runtime calls the agent, executes actions, and repeats until the agent finishes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.schema.agent import AgentFinish

user_input = "how many letters in the word educa?"
intermediate_steps = []

while True:
    output = agent.invoke(
        {
            "input": user_input,
            "intermediate_steps": intermediate_steps,
        }
    )
    if isinstance(output, AgentFinish):
        final_result = output.return_values["output"]
        break
    else:
        print(f"TOOL NAME: {output.tool}")
        print(f"TOOL INPUT: {output.tool_input}")
        tool = {"get_word_length": get_word_length}[output.tool]
        observation = tool.run(output.tool_input)
        intermediate_steps.append((output, observation))

print(final_result)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CQBZLUQU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pp8lnnzushzn7v5xlxl4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CQBZLUQU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pp8lnnzushzn7v5xlxl4.png" alt="Image description" width="800" height="570"&gt;&lt;/a&gt;&lt;br&gt;
To simplify this, use the &lt;strong&gt;AgentExecutor class&lt;/strong&gt;. It encapsulates agent execution and offers error handling, early stopping, tracing, and other improvements:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "how many letters in the word educa?"})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AgentExecutor makes it easier to interact with the agent and simplifies the execution process.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;em&gt;Memory in Agents&lt;/em&gt; 🧠🤖
&lt;/h4&gt;

&lt;p&gt;The agent we've made so far doesn't remember past conversations, making it stateless. To enable follow-up questions and continuous conversations, we need to add &lt;strong&gt;memory&lt;/strong&gt; to the agent. Here are the two steps involved:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add a memory variable in the prompt to store chat history.&lt;/li&gt;
&lt;li&gt;Keep track of the chat history during interactions.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's start by adding a memory placeholder in the prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.prompts import MessagesPlaceholder

MEMORY_KEY = "chat_history"
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a very powerful assistant but not great at calculating word lengths.",
        ),
        MessagesPlaceholder(variable_name=MEMORY_KEY),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, create a list to track the chat history:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.schema.messages import HumanMessage, AIMessage

chat_history = []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the agent creation step, include the memory as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
        "chat_history": lambda x: x["chat_history"],
    }
    | prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When running the agent, make sure to update the chat history:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input1 = "how many letters in the word educa?"
result = agent_executor.invoke({"input": input1, "chat_history": chat_history})
chat_history.extend([
    HumanMessage(content=input1),
    AIMessage(content=result["output"]),
])
agent_executor.invoke({"input": "is that a real word?", "chat_history": chat_history})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This lets the agent maintain a conversation history and answer follow-up questions based on past interactions.&lt;/p&gt;

&lt;p&gt;Congratulations! You've successfully created and executed your &lt;strong&gt;first end-to-end agent in LangChain&lt;/strong&gt;. To explore LangChain's capabilities further, you can delve into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The different agent types supported&lt;/li&gt;
&lt;li&gt;Pre-built agents&lt;/li&gt;
&lt;li&gt;Working with tools and tool integrations&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Agent Types 🤖📝
&lt;/h3&gt;

&lt;p&gt;LangChain offers various agent types, each suited for specific use cases. Here are some available agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Zero-shot ReAct:&lt;/strong&gt; Chooses tools based on their descriptions using the ReAct framework. Versatile and requires tool descriptions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured input ReAct:&lt;/strong&gt; Handles multi-input tools, suitable for tasks like web browsing. Uses a tool's argument schema for structured input.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI Functions:&lt;/strong&gt; Designed for models fine-tuned for function calling, compatible with models like gpt-3.5-turbo-0613 and gpt-4-0613.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conversational:&lt;/strong&gt; Tailored for conversational settings, uses ReAct for tool selection, and employs memory to remember previous interactions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-ask with search:&lt;/strong&gt; Relying on a single tool, "Intermediate Answer," it looks up factual answers to questions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ReAct document store:&lt;/strong&gt; Interacts with a document store using the ReAct framework, requiring "Search" and "Lookup" tools.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Explore these agent types to find the one that best suits your needs in LangChain. These agents allow you to bind a set of tools within them to handle actions and generate responses.&lt;/p&gt;
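As a rough intuition for how a zero-shot agent matches tools to a task, here is a toy keyword-overlap sketch. A real agent delegates this reasoning to the language model, and the tool names and descriptions below are made up:

```python
# Toy sketch of the zero-shot ReAct idea: pick the tool whose description
# best overlaps the query. A real agent lets the LLM do this reasoning.

TOOLS = {
    "calculator": "useful for math and arithmetic questions",
    "web_search": "useful for questions about current events and facts",
}

def pick_tool(query):
    words = set(query.lower().split())
    # Score each tool by how many query words appear in its description
    scores = {
        name: len(words.intersection(desc.split()))
        for name, desc in TOOLS.items()
    }
    return max(scores, key=scores.get)

print(pick_tool("solve this arithmetic problem"))  # calculator
```

This is why good tool descriptions matter so much for description-driven agent types: the description is the only signal the agent has when choosing a tool.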

&lt;h2&gt;
  
  
  Prebuilt Agents 🤖🛠️
&lt;/h2&gt;

&lt;p&gt;Let's continue our exploration of agents, focusing on prebuilt agents available in LangChain.&lt;/p&gt;

&lt;h3&gt;
  
  
  LangChain Gmail Toolkit 📧🔧
&lt;/h3&gt;

&lt;p&gt;LangChain provides a convenient toolkit for Gmail, allowing you to connect your LangChain agent to the Gmail API. To get started, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Set Up Credentials:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download the credentials.json file as explained in the Gmail API documentation.&lt;/li&gt;
&lt;li&gt;Install required libraries using the following commands:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; pip install --upgrade google-api-python-client
 pip install --upgrade google-auth-oauthlib
 pip install --upgrade google-auth-httplib2
 pip install beautifulsoup4  # Optional for parsing HTML messages
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create Gmail Toolkit:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initialize the toolkit with default settings:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; from langchain.agents.agent_toolkits import GmailToolkit

 toolkit = GmailToolkit()
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Customize authentication as needed. Behind the scenes, a Google API resource object is created using the following methods:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; from langchain.tools.gmail.utils import build_resource_service, get_gmail_credentials

 credentials = get_gmail_credentials(
     token_file="token.json",
     scopes=["https://mail.google.com/"],
     client_secrets_file="credentials.json",
 )
 api_resource = build_resource_service(credentials=credentials)
 toolkit = GmailToolkit(api_resource=api_resource)
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Use Toolkit Tools:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The toolkit offers various tools such as &lt;code&gt;GmailCreateDraft&lt;/code&gt;, &lt;code&gt;GmailSendMessage&lt;/code&gt;, &lt;code&gt;GmailSearch&lt;/code&gt;, &lt;code&gt;GmailGetMessage&lt;/code&gt;, and &lt;code&gt;GmailGetThread&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_GmailCreateDraft_: Create a draft email with specified message fields.
_GmailSendMessage_: Send email messages.
_GmailSearch_: Search for email messages or threads.
_GmailGetMessage_: Fetch an email by message ID.
_GmailGetThread_: Search for email messages.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Initialize Agent:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initialize the agent with the toolkit and other settings:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; from langchain.llms import OpenAI
 from langchain.agents import initialize_agent, AgentType

 llm = OpenAI(temperature=0)
 agent = initialize_agent(
     tools=toolkit.get_tools(),
     llm=llm,
     agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
 )
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Examples:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a Gmail draft for editing:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; agent.run("Create a Gmail draft for me to edit...")
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Search for the latest email in your drafts:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; agent.run("Could you search in my drafts for the latest email?")
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These examples demonstrate LangChain's Gmail toolkit capabilities, enabling programmatic interactions with Gmail.&lt;/p&gt;

&lt;h3&gt;
  
  
  SQL Database Agent 📊🤖
&lt;/h3&gt;

&lt;p&gt;This agent lets you interact with SQL databases; the example below uses the Chinook sample database. Be cautious, as this agent is still in development. To use it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Initialize Agent:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   from langchain.agents import create_sql_agent
   from langchain.agents.agent_toolkits import SQLDatabaseToolkit
   from langchain.sql_database import SQLDatabase
   from langchain.llms.openai import OpenAI
   from langchain.agents import AgentExecutor
   from langchain.agents.agent_types import AgentType
   from langchain.chat_models import ChatOpenAI

   db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")
   toolkit = SQLDatabaseToolkit(db=db, llm=OpenAI(temperature=0))

   agent_executor = create_sql_agent(
       llm=OpenAI(temperature=0),
       toolkit=toolkit,
       verbose=True,
       agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
   )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_Disclaimer_

- The query chain may generate insert/update/delete queries. Be cautious, and use a custom prompt or create a SQL user without write permissions if needed.
- Be aware that running certain queries, such as "run the biggest query possible," could overload your SQL database, especially if it contains millions of rows.
- Data warehouse-oriented databases often support user-level quotas to limit resource usage.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Examples:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Describe a table:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; agent_executor.run("Describe the playlisttrack table")
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Run a query:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; agent_executor.run("List the total sales per country. Which country's customers spent the most?")
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The agent will execute the query and provide the result, such as the country with the highest total sales.&lt;/p&gt;

&lt;p&gt;To get the total number of tracks in each playlist, you can use the following query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```
agent_executor.run("Show the total number of tracks in each playlist. The Playlist name should be included in the result.")
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent will return the playlist names along with the corresponding total track counts.&lt;/p&gt;
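&lt;p&gt;Behind the scenes, the agent simply writes and runs ordinary SQL. Here's a toy &lt;code&gt;sqlite3&lt;/code&gt; sketch of the kind of query it generates for this request (the two-table schema below is a simplified stand-in for Chinook, not the real schema):&lt;/p&gt;

```python
import sqlite3

# Simplified stand-in for Chinook's Playlist / PlaylistTrack tables
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Playlist (PlaylistId INTEGER PRIMARY KEY, Name TEXT)")
cur.execute("CREATE TABLE PlaylistTrack (PlaylistId INTEGER, TrackId INTEGER)")
cur.executemany("INSERT INTO Playlist VALUES (?, ?)", [(1, "Music"), (2, "Movies")])
cur.executemany("INSERT INTO PlaylistTrack VALUES (?, ?)", [(1, 10), (1, 11), (2, 12)])

# The kind of SQL the agent emits for "total number of tracks in each playlist"
cur.execute("""
    SELECT p.Name, COUNT(pt.TrackId) AS TrackCount
    FROM Playlist p
    LEFT JOIN PlaylistTrack pt ON p.PlaylistId = pt.PlaylistId
    GROUP BY p.Name
    ORDER BY p.Name
""")
print(cur.fetchall())  # [('Movies', 1), ('Music', 2)]
```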

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Caution:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Be cautious about running certain queries that could overload your database.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Pandas DataFrame Agent 🐼📊🤖
&lt;/h3&gt;

&lt;p&gt;This agent interacts with Pandas DataFrames for question-answering purposes. Use with caution to prevent potential harm from generated Python code:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Initialize Agent:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
   from langchain.chat_models import ChatOpenAI
   from langchain.agents.agent_types import AgentType

   from langchain.llms import OpenAI
   import pandas as pd

   df = pd.read_csv("titanic.csv")

   agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Examples:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Count rows in the DataFrame:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; agent.run("how many rows are there?")
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Filter rows based on criteria:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; agent.run("how many people have more than 3 siblings")
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Jira Toolkit 📅🔧
&lt;/h3&gt;

&lt;p&gt;The Jira toolkit allows agents to interact with a Jira instance. Follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Install Libraries and Set Environment Variables:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   %pip install atlassian-python-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   import os
   from langchain.agents import AgentType
   from langchain.agents import initialize_agent
   from langchain.agents.agent_toolkits.jira.toolkit import JiraToolkit
   from langchain.llms import OpenAI
   from langchain.utilities.jira import JiraAPIWrapper

   os.environ["JIRA_API_TOKEN"] = "abc"
   os.environ["JIRA_USERNAME"] = "123"
   os.environ["JIRA_INSTANCE_URL"] = "https://jira.atlassian.com"
   os.environ["OPENAI_API_KEY"] = "xyz"

   llm = OpenAI(temperature=0)
   jira = JiraAPIWrapper()
   toolkit = JiraToolkit.from_jira_api_wrapper(jira)
   agent = initialize_agent(
       toolkit.get_tools(), llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
   )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Examples:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new issue in a project:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; agent.run("make a new issue in project PW to remind me to make more fried rice")
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, you can interact with your Jira instance using natural language instructions and the Jira toolkit.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>langchain</category>
      <category>llm</category>
    </item>
    <item>
      <title>LangChain's 2nd Module: Retrieval🦜🐕</title>
      <dc:creator>Jaydeep Biswas</dc:creator>
      <pubDate>Sun, 07 Jan 2024 18:29:28 +0000</pubDate>
      <link>https://dev.to/jaydeepb21/langchains-2nd-module-retrieval-2ape</link>
      <guid>https://dev.to/jaydeepb21/langchains-2nd-module-retrieval-2ape</guid>
<description>&lt;p&gt;In our recent blog series, we've traversed a diverse range of topics. Here are the topics we've covered so far:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://dev.to/jaydeepb21/unlocking-infinite-possibilities-with-langchain-4543"&gt;Installation and Setup of LangChain&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/jaydeepb21/1st-module-model-io-4b6a"&gt;LangChain's 1st Module: Model I/O&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Retrieval Augmented Generation (RAG)&lt;/strong&gt; is a crucial process in LangChain, especially for applications that need specific user data not present in the model's training set. In simpler terms, it involves fetching external data and blending it seamlessly into the language model's generation process. LangChain offers a robust set of tools and features to make this process easy, accommodating both simple and complex applications.&lt;/p&gt;

&lt;p&gt;Let's break down the components involved in the retrieval process:&lt;/p&gt;

&lt;h3&gt;
  
  
  Document Loaders 📄
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Document loaders&lt;/strong&gt; in LangChain enable the extraction of data from various sources, boasting over 100 loaders that support a diverse range of document types and origins such as private S3 buckets, public websites, and databases. All these loaders ingest data into &lt;strong&gt;Document&lt;/strong&gt; classes.&lt;/p&gt;

&lt;p&gt;You have the flexibility to choose a document loader based on your specific needs from &lt;a href="https://python.langchain.com/docs/integrations/document_loaders"&gt;here&lt;/a&gt;. Here are examples of different loaders:&lt;/p&gt;

&lt;h4&gt;
  
  
  Text File Loader
&lt;/h4&gt;

&lt;p&gt;To load a simple .txt file into a document, you can use the TextLoader:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.document_loaders import TextLoader

loader = TextLoader("./sample.txt")
document = loader.load()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  CSV Loader
&lt;/h4&gt;

&lt;p&gt;For loading data from a CSV file, LangChain provides the CSVLoader. You can even customize parsing by specifying field names:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.document_loaders.csv_loader import CSVLoader

loader = CSVLoader(file_path='./example_data/sample.csv')
documents = loader.load()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;loader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv', csv_args={
    'delimiter': ',',
    'quotechar': '"',
    'fieldnames': ['MLB Team', 'Payroll in millions', 'Wins']
})
documents = loader.load()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  PDF Loaders
&lt;/h4&gt;

&lt;p&gt;LangChain's PDF Loaders offer various methods for parsing and extracting content from PDF files. Different loaders cater to different needs:&lt;/p&gt;

&lt;h5&gt;
  
  
  PyPDFLoader (Basic PDF Parsing)
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.document_loaders import PyPDFLoader

loader = PyPDFLoader("example_data/layout-parser-paper.pdf")
pages = loader.load_and_split()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  MathPixLoader (Mathematical Content and Diagrams)
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.document_loaders import MathpixPDFLoader

loader = MathpixPDFLoader("example_data/math-content.pdf")
data = loader.load()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  PyMuPDFLoader (Fast PDF Parsing with Detailed Metadata)
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.document_loaders import PyMuPDFLoader

loader = PyMuPDFLoader("example_data/layout-parser-paper.pdf")
data = loader.load()

# Optionally pass additional arguments for PyMuPDF's get_text() call
data = loader.load(option="text")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  PDFMiner Loader (Granular Control over Text Extraction)
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.document_loaders import PDFMinerLoader

loader = PDFMinerLoader("example_data/layout-parser-paper.pdf")
data = loader.load()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  AmazonTextractPDFLoader (OCR and Advanced PDF Parsing via AWS Textract)
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.document_loaders import AmazonTextractPDFLoader

# Requires AWS account and configuration
loader = AmazonTextractPDFLoader("example_data/complex-layout.pdf")
documents = loader.load()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  PDFMinerPDFasHTMLLoader (HTML Output for Semantic Parsing)
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.document_loaders import PDFMinerPDFasHTMLLoader

loader = PDFMinerPDFasHTMLLoader("example_data/layout-parser-paper.pdf")
data = loader.load()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  PDFPlumberLoader (Detailed Metadata, One Document per Page)
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.document_loaders import PDFPlumberLoader

loader = PDFPlumberLoader("example_data/layout-parser-paper.pdf")
data = loader.load()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Integrated Loaders&lt;/strong&gt; play a vital role in LangChain by allowing direct data loading from various applications such as Slack, Figma, Google Drive, databases, and more. These loaders empower LLMs to seamlessly incorporate information from diverse sources, expanding the capabilities of language generation applications.&lt;/p&gt;

&lt;p&gt;Let's explore a couple of examples to illustrate how &lt;em&gt;Integrated Loaders&lt;/em&gt; can be employed:&lt;/p&gt;

&lt;h3&gt;
  
  
  Example I - Slack 💬
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Slack&lt;/strong&gt;, a popular instant messaging platform, can be integrated into LLM workflows with ease. Here's a simplified step-by-step guide:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to your Slack Workspace Management page.&lt;/li&gt;
&lt;li&gt;Navigate to {your_slack_domain}.slack.com/services/export.&lt;/li&gt;
&lt;li&gt;Select the desired date range and initiate the export.&lt;/li&gt;
&lt;li&gt;Slack notifies you via email and DM once the export is ready.&lt;/li&gt;
&lt;li&gt;The exported data is in a .zip file located in your Downloads folder or the designated download path.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, you can use the &lt;strong&gt;SlackDirectoryLoader&lt;/strong&gt; from the &lt;code&gt;langchain.document_loaders&lt;/code&gt; package to load this data into LangChain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.document_loaders import SlackDirectoryLoader

SLACK_WORKSPACE_URL = "https://xxx.slack.com"  # Replace with your Slack URL
LOCAL_ZIPFILE = ""  # Path to the Slack zip file

loader = SlackDirectoryLoader(LOCAL_ZIPFILE, SLACK_WORKSPACE_URL)
docs = loader.load()
print(docs)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Example II - Figma 🎨
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Figma&lt;/strong&gt;, a widely-used tool for interface design, offers a REST API for data integration. Here's a simplified guide:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Obtain the Figma file key from the URL format: &lt;a href="https://www.figma.com/file/%7Bfilekey%7D/sampleFilename" rel="noopener noreferrer"&gt;https://www.figma.com/file/{filekey}/sampleFilename&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Node IDs are found in the URL parameter ?node-id={node_id}.&lt;/li&gt;
&lt;li&gt;Generate an access token following instructions at the Figma Help Center.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, you can use the &lt;strong&gt;FigmaFileLoader&lt;/strong&gt; class from &lt;code&gt;langchain.document_loaders.figma&lt;/code&gt; to load Figma data into LangChain. This example demonstrates how to generate HTML/CSS code based on Figma design input:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
from langchain.document_loaders.figma import FigmaFileLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.chat_models import ChatOpenAI
from langchain.indexes import VectorstoreIndexCreator
from langchain.chains import ConversationChain, LLMChain
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts.chat import ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate

figma_loader = FigmaFileLoader(
    os.environ.get("ACCESS_TOKEN"),
    os.environ.get("NODE_IDS"),
    os.environ.get("FILE_KEY"),
)

index = VectorstoreIndexCreator().from_loaders([figma_loader])
figma_doc_retriever = index.vectorstore.as_retriever()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The generate_code function uses the Figma data to create HTML/CSS code.&lt;/li&gt;
&lt;li&gt;It employs a templated conversation with a GPT-based model.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def generate_code(human_input):
    # Templates for the system and human prompts; {context} and {text} are filled in below
    system_prompt_template = "Your coding instructions... Figma design context: {context}"
    human_prompt_template = "Code the {text}. Ensure it's mobile responsive"

    # Creating prompt templates
    system_message_prompt = SystemMessagePromptTemplate.from_template(system_prompt_template)
    human_message_prompt = HumanMessagePromptTemplate.from_template(human_prompt_template)

    # Setting up the AI model
    gpt_4 = ChatOpenAI(temperature=0.02, model_name="gpt-4")

    # Retrieving relevant documents
    relevant_nodes = figma_doc_retriever.get_relevant_documents(human_input)

    # Generating and formatting the prompt
    conversation = [system_message_prompt, human_message_prompt]
    chat_prompt = ChatPromptTemplate.from_messages(conversation)
    response = gpt_4(chat_prompt.format_prompt(context=relevant_nodes, text=human_input).to_messages())

    return response

# Example usage
response = generate_code("page top header")
print(response.content)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the &lt;strong&gt;generate_code&lt;/strong&gt; function utilizes Figma data to create HTML/CSS code through LangChain's capabilities. These Integrated Loaders showcase how LangChain simplifies the integration of external data, enabling powerful applications in various domains.&lt;/p&gt;

&lt;h3&gt;
  
  
  Document Transformers 🔄
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Document Transformers&lt;/strong&gt; in LangChain help shape and modify documents, the building blocks we've created earlier. These tools are crucial for tasks like breaking down long texts, combining information, and filtering content, making them fit neatly into a model's understanding or specific application requirements.&lt;/p&gt;

&lt;p&gt;One handy tool is the &lt;strong&gt;RecursiveCharacterTextSplitter&lt;/strong&gt;, a versatile text splitter using a character list. It allows you to tweak parameters like chunk size, overlap, and starting index. Here's a simple example in Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.text_splitter import RecursiveCharacterTextSplitter

state_of_the_union = "Your long text here..."

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,
    chunk_overlap=20,
    length_function=len,
    add_start_index=True,
)

texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another tool is the &lt;strong&gt;CharacterTextSplitter&lt;/strong&gt;, which divides text based on a chosen character, with controls for chunk size and overlap:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(
    separator="\n\n",
    chunk_size=1000,
    chunk_overlap=200,
    length_function=len,
    is_separator_regex=False,
)

texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if you're dealing with HTML content, use the &lt;strong&gt;HTMLHeaderTextSplitter&lt;/strong&gt;. It cleverly splits HTML content based on header tags while keeping the semantic structure intact:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.text_splitter import HTMLHeaderTextSplitter

html_string = "Your HTML content here..."
headers_to_split_on = [("h1", "Header 1"), ("h2", "Header 2")]

html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
html_header_splits = html_splitter.split_text(html_string)
print(html_header_splits[0])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Things get even more interesting when you combine different tools. For example, combining &lt;strong&gt;HTMLHeaderTextSplitter&lt;/strong&gt; with the &lt;strong&gt;Pipelined Splitter&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.text_splitter import HTMLHeaderTextSplitter, RecursiveCharacterTextSplitter

url = "https://example.com"
headers_to_split_on = [("h1", "Header 1"), ("h2", "Header 2")]
html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
html_header_splits = html_splitter.split_text_from_url(url)

chunk_size = 500
text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size)
splits = text_splitter.split_documents(html_header_splits)
print(splits[0])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;LangChain also offers specialized splitters for different programming languages, such as the &lt;strong&gt;Python Code Splitter&lt;/strong&gt; and the &lt;strong&gt;JavaScript Code Splitter&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.text_splitter import RecursiveCharacterTextSplitter, Language

python_code = """
def hello_world():
    print("Hello, World!")
hello_world()
"""

python_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=50
)
python_docs = python_splitter.create_documents([python_code])
print(python_docs[0])

js_code = """
function helloWorld() {
  console.log("Hello, World!");
}
helloWorld();
"""

js_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.JS, chunk_size=60
)
js_docs = js_splitter.create_documents([js_code])
print(js_docs[0])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For handling text based on token count (useful for models with token limits), there's the &lt;strong&gt;TokenTextSplitter&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.text_splitter import TokenTextSplitter

text_splitter = TokenTextSplitter(chunk_size=10)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, there's the &lt;strong&gt;LongContextReorder&lt;/strong&gt;, which reorders retrieved documents so the most relevant ones sit at the beginning and end of the list, countering the 'lost in the middle' performance drop models show with lengthy contexts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.document_transformers import LongContextReorder

reordering = LongContextReorder()
reordered_docs = reordering.transform_documents(docs)
print(reordered_docs[0])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These tools showcase the incredible ways you can transform documents in LangChain, from simple text splitting to complex reordering and language-specific separation. For more detailed insights and specific use cases, diving into the LangChain documentation and Integrations section is highly recommended. And don't worry, in our examples, the loaders have already done the heavy lifting of creating chunked documents for us!&lt;/p&gt;

&lt;h3&gt;
  
  
  Text Embedding Model 📝➡️🔠
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Text Embedding Models&lt;/strong&gt; in LangChain bring a standardized way of handling various embedding model providers like OpenAI, Cohere, and Hugging Face. These models work by transforming text into vector representations, allowing for powerful operations like semantic search through text similarity in vector space.&lt;/p&gt;

&lt;p&gt;Getting started is usually a breeze, involving the installation of specific packages and setting up API keys. In our case, we've already taken care of this for OpenAI.&lt;/p&gt;
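&lt;p&gt;For reference, that setup is usually just a package install plus an environment variable; a minimal sketch with a placeholder key (use your real key in practice):&lt;/p&gt;

```python
# %pip install langchain openai
import os

# Placeholder value for illustration; load your real key from a secrets manager
os.environ["OPENAI_API_KEY"] = "sk-your-key-here"
```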

&lt;p&gt;In LangChain, the go-to method for embedding multiple texts is &lt;strong&gt;embed_documents&lt;/strong&gt;. Take a look at this example using OpenAI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.embeddings import OpenAIEmbeddings

# Initialize the model
embeddings_model = OpenAIEmbeddings()

# Embed a list of texts
embeddings = embeddings_model.embed_documents(
    ["Hi there!", "Oh, hello!", "What's your name?", "My friends call me World", "Hello World!"]
)
print("Number of documents embedded:", len(embeddings))
print("Dimension of each embedding:", len(embeddings[0]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a single text, like a search query, you can use &lt;strong&gt;embed_query&lt;/strong&gt;. This is handy for comparing a query to a set of document embeddings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.embeddings import OpenAIEmbeddings

# Initialize the model
embeddings_model = OpenAIEmbeddings()

# Embed a single query
embedded_query = embeddings_model.embed_query("What was the name mentioned in the conversation?")
print("First five dimensions of the embedded query:", embedded_query[:5])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Understanding these embeddings is key. Each piece of text becomes a vector, and the dimension depends on the model used – for OpenAI, it's typically a 1536-dimensional vector. These embeddings are then used for retrieving relevant information.&lt;/p&gt;
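&lt;p&gt;To see why vectors enable semantic search, here's a tiny cosine-similarity sketch with made-up 3-dimensional "embeddings" (real OpenAI embeddings have far more dimensions):&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    # 1.0 means identical direction; values near 0 mean unrelated
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.1, 0.9, 0.2]   # toy embedding of a search query
doc_a = [0.1, 0.8, 0.3]   # semantically close document
doc_b = [0.9, 0.1, 0.0]   # unrelated document

print(cosine_similarity(query, doc_a))  # high (close to 1)
print(cosine_similarity(query, doc_b))  # much lower
```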

&lt;p&gt;And here's the cool part: LangChain isn't limited to just OpenAI. It's designed to seamlessly work with various providers. While the setup and usage might differ slightly based on the provider, the core concept of embedding texts into vector space stays the same. For all the nitty-gritty details, including advanced configurations and integrations with different embedding model providers, the LangChain documentation in the &lt;strong&gt;Integrations&lt;/strong&gt; section is a goldmine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Vector Stores 🗄️
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Vector Stores&lt;/strong&gt; in LangChain are the vaults that efficiently store and search those text embeddings. LangChain integrates with over 50 vector stores, offering a standardized interface for a smooth user experience.&lt;/p&gt;

&lt;p&gt;Let's dive into an example where we store our texts and search them using &lt;strong&gt;Chroma&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.vectorstores import Chroma

db = Chroma.from_texts(embedded_texts)
similar_texts = db.similarity_search("search query")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, if we want to use &lt;strong&gt;FAISS&lt;/strong&gt; for creating indexes, here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# pdfpages and airtabledocs are document lists produced by loaders earlier on
pdfstore = FAISS.from_documents(pdfpages,
            embedding=OpenAIEmbeddings())

airtablestore = FAISS.from_documents(airtabledocs,
            embedding=OpenAIEmbeddings())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosd24my12auyvp6rndax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosd24my12auyvp6rndax.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Retrievers 🔍
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Retrievers&lt;/strong&gt; in LangChain are like smart search engines, but way more flexible. They don't just find documents, they understand what you're looking for. Unlike vector stores that focus on storing, retrievers are all about finding information.&lt;/p&gt;

&lt;p&gt;Let's start with the &lt;strong&gt;Chroma retriever&lt;/strong&gt;. Setting it up involves a few steps, like installing Chroma with &lt;code&gt;pip install chromadb&lt;/code&gt;. Then, you load, split, embed, and retrieve documents. Here's a simple example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

full_text = open("state_of_the_union.txt", "r").read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
texts = text_splitter.split_text(full_text)

embeddings = OpenAIEmbeddings()
db = Chroma.from_texts(texts, embeddings)
retriever = db.as_retriever()

retrieved_docs = retriever.invoke("What did the president say about Ketanji Brown Jackson?")
print(retrieved_docs[0].page_content)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, there's the &lt;strong&gt;MultiQueryRetriever&lt;/strong&gt;, which automates prompt tuning by generating multiple variations of a user's query and combining the results from all of them. Check it out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.chat_models import ChatOpenAI
from langchain.retrievers.multi_query import MultiQueryRetriever

question = "What are the approaches to Task Decomposition?"
llm = ChatOpenAI(temperature=0)
retriever_from_llm = MultiQueryRetriever.from_llm(
    retriever=db.as_retriever(), llm=llm
)

unique_docs = retriever_from_llm.get_relevant_documents(query=question)
print("Number of unique documents:", len(unique_docs))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, imagine you want just the relevant parts from a long document. That's where &lt;strong&gt;Contextual Compression Retriever&lt;/strong&gt; steps in. It compresses retrieved documents, keeping only the relevant info. Take a look:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

llm = OpenAI(temperature=0)
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)

compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")
print(compressed_docs[0].page_content)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's talk team players. The &lt;strong&gt;EnsembleRetriever&lt;/strong&gt; brings different algorithms together for a grand performance. In this example, BM25 and FAISS Retrievers join forces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.retrievers import BM25Retriever, EnsembleRetriever
from langchain.vectorstores import FAISS

bm25_retriever = BM25Retriever.from_texts(doc_list).set_k(2)
faiss_vectorstore = FAISS.from_texts(doc_list, OpenAIEmbeddings())
faiss_retriever = faiss_vectorstore.as_retriever(search_kwargs={"k": 2})

ensemble_retriever = EnsembleRetriever(
    retrievers=[bm25_retriever, faiss_retriever], weights=[0.5, 0.5]
)

docs = ensemble_retriever.get_relevant_documents("apples")
print(docs[0].page_content)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
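&lt;p&gt;Under the hood, combining rankings like this is typically done with weighted Reciprocal Rank Fusion; here's a rough sketch of the idea (not LangChain's exact implementation, and the doc IDs are hypothetical):&lt;/p&gt;

```python
def weighted_rrf(rankings, weights, k=60):
    # rankings: one ranked list of doc IDs per retriever (best first);
    # each doc scores weight / (k + rank + 1), summed across retrievers
    scores = {}
    for ranked, weight in zip(rankings, weights):
        for rank, doc_id in enumerate(ranked):
            scores[doc_id] = scores.get(doc_id, 0.0) + weight / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

bm25_ranked = ["doc2", "doc1", "doc3"]   # hypothetical keyword results
faiss_ranked = ["doc1", "doc2", "doc4"]  # hypothetical semantic results
fused = weighted_rrf([bm25_ranked, faiss_ranked], weights=[0.5, 0.5])
print(fused)  # doc1 and doc2, seen by both retrievers, rise to the top
```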



&lt;p&gt;Now, here's something for those who want more from a document. The &lt;strong&gt;MultiVector Retriever&lt;/strong&gt; lets you query with multiple vectors per document. Here's how you can split documents into smaller chunks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.storage import InMemoryStore
from langchain.document_loaders import TextLoader
import uuid

loaders = [TextLoader("file1.txt"), TextLoader("file2.txt")]
docs = [doc for loader in loaders for doc in loader.load()]
text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000)
docs = text_splitter.split_documents(docs)

vectorstore = Chroma(collection_name="full_documents", embedding_function=OpenAIEmbeddings())
store = InMemoryStore()
id_key = "doc_id"
retriever = MultiVectorRetriever(vectorstore=vectorstore, docstore=store, id_key=id_key)

doc_ids = [str(uuid.uuid4()) for _ in docs]
child_text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
sub_docs = []
for doc_id, doc in zip(doc_ids, docs):
    chunks = child_text_splitter.split_documents([doc])
    for chunk in chunks:
        chunk.metadata[id_key] = doc_id
    sub_docs.extend(chunks)

retriever.vectorstore.add_documents(sub_docs)
retriever.docstore.mset(list(zip(doc_ids, docs)))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
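&lt;p&gt;Once that bookkeeping is in place, retrieval is a two-step lookup: search the small chunks, then follow the &lt;code&gt;doc_id&lt;/code&gt; metadata back to the docstore and return the full parent documents. A dependency-free sketch of that second step (the dictionaries stand in for the vector store hits and the InMemoryStore):&lt;br&gt;
&lt;/p&gt;

```python
# Toy stand-ins for the chunk search results and the docstore
sub_doc_hits = [
    {"text": "chunk about pricing", "doc_id": "A"},
    {"text": "another pricing chunk", "doc_id": "A"},
    {"text": "chunk about support", "doc_id": "B"},
]
docstore = {"A": "full pricing document", "B": "full support document"}

def parents_for(hits, store):
    """Map matched chunks back to unique parent docs, preserving order."""
    seen, parents = set(), []
    for hit in hits:
        doc_id = hit["doc_id"]
        if doc_id not in seen:
            seen.add(doc_id)
            parents.append(store[doc_id])
    return parents

print(parents_for(sub_doc_hits, docstore))
# ['full pricing document', 'full support document']
```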



&lt;p&gt;Lastly, for those who want a balance between accuracy and context, there's the &lt;strong&gt;Parent Document Retriever&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.retrievers import ParentDocumentRetriever

loaders = [TextLoader("file1.txt"), TextLoader("file2.txt")]
docs = [doc for loader in loaders for doc in loader.load()]

child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
vectorstore = Chroma(collection_name="full_documents", embedding_function=OpenAIEmbeddings())
store = InMemoryStore()
retriever = ParentDocumentRetriever(vectorstore=vectorstore, docstore=store, child_splitter=child_splitter)

retriever.add_documents(docs, ids=None)

retrieved_docs = retriever.get_relevant_documents("query")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These retrievers make LangChain a powerhouse for retrieving information. Whether you want focused content, multiple perspectives, or a balanced approach, there's a retriever for you. And hey, don't forget the documentation for more explorations!&lt;/p&gt;

&lt;p&gt;A self-querying retriever constructs structured queries from natural language inputs and applies them to its underlying VectorStore. Its implementation is shown in the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.chat_models from ChatOpenAI
from langchain.chains.query_constructor.base from AttributeInfo
from langchain.retrievers.self_query.base from SelfQueryRetriever

metadata_field_info = [AttributeInfo(name="genre", description="...", type="string"), ...]
document_content_description = "Brief summary of a movie"
llm = ChatOpenAI(temperature=0)

retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info)

retrieved_docs = retriever.invoke("query")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
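&lt;p&gt;Under the hood, the retriever asks the LLM to turn the question into a structured query: a search string plus a metadata filter that the VectorStore can execute. As a toy illustration of what applying such a filter looks like, assuming hypothetical &lt;code&gt;genre&lt;/code&gt; and &lt;code&gt;year&lt;/code&gt; fields:&lt;br&gt;
&lt;/p&gt;

```python
# Toy corpus with the hypothetical metadata fields
movies = [
    {"summary": "toys come to life", "genre": "animated", "year": 1995},
    {"summary": "dinosaur theme park", "genre": "science fiction", "year": 1993},
    {"summary": "animated fish adventure", "genre": "animated", "year": 2003},
]

# The kind of structured query an LLM might construct from
# "animated movies released after 2000"
structured_query = {"filter_genre": "animated", "min_year": 2000}

def run_structured_query(docs, query):
    """Apply the metadata-filter half of a self-query to the corpus."""
    matches = []
    for doc in docs:
        if doc["genre"] == query["filter_genre"] and doc["year"] >= query["min_year"]:
            matches.append(doc)
    return matches

hits = run_structured_query(movies, structured_query)
print(hits[0]["summary"])  # animated fish adventure
```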



&lt;p&gt;The &lt;strong&gt;WebResearchRetriever&lt;/strong&gt; performs web research based on a given query -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.retrievers.web_research import WebResearchRetriever

# Initialize components
llm = ChatOpenAI(temperature=0)
search = GoogleSearchAPIWrapper()
vectorstore = Chroma(embedding_function=OpenAIEmbeddings())

# Instantiate WebResearchRetriever
web_research_retriever = WebResearchRetriever.from_llm(vectorstore=vectorstore, llm=llm, search=search)

# Retrieve documents
docs = web_research_retriever.get_relevant_documents("query")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For our examples, we can also use the standard retriever already implemented as part of our vector store object as follows -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzj0caffki4p5t9gfxv9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzj0caffki4p5t9gfxv9.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
We can now query the retrievers. The output of our query will be document objects relevant to the query. These will be ultimately utilized to create relevant responses in further sections.&lt;/p&gt;
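&lt;p&gt;To see the shape of those document objects without calling any APIs, here is a stdlib-only toy retriever that scores documents by word overlap and returns objects with &lt;code&gt;page_content&lt;/code&gt; and &lt;code&gt;metadata&lt;/code&gt; attributes (a sketch of the interface, not LangChain's scoring):&lt;br&gt;
&lt;/p&gt;

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

corpus = [
    Document("LangChain supports many retrievers.", {"source": "intro.txt"}),
    Document("Vector stores index embeddings.", {"source": "vectors.txt"}),
]

def get_relevant_documents(query, docs, k=1):
    """Rank documents by word overlap with the query (toy scoring)."""
    words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(words.intersection(d.page_content.lower().replace(".", "").split())),
        reverse=True,
    )
    return scored[:k]

hits = get_relevant_documents("which retrievers does langchain support", corpus)
print(hits[0].metadata["source"])  # intro.txt
```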

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6kpy3jxee3y9idi727s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6kpy3jxee3y9idi727s.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3aut72hk74aefm7rpnje.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3aut72hk74aefm7rpnje.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next Chapter: &lt;a href="https://dev.to/jaydeepb21/lanchains-3rd-module-agents-58o6"&gt;Lanchain's 3rd Module: Agents&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Top 5 High-Demand Programming Languages in 2024</title>
      <dc:creator>Jaydeep Biswas</dc:creator>
      <pubDate>Fri, 05 Jan 2024 18:29:20 +0000</pubDate>
      <link>https://dev.to/jaydeepb21/top-5-highly-sought-after-programming-languages-in-2024-548l</link>
      <guid>https://dev.to/jaydeepb21/top-5-highly-sought-after-programming-languages-in-2024-548l</guid>
      <description>&lt;p&gt;&lt;strong&gt;Will Python Maintain Its Dominance?&lt;/strong&gt;&lt;br&gt;
Hey fellow coders! As we step into the exciting realm of 2024, it's only natural to wonder which programming languages are holding their ground and, more importantly, where you should focus your up-skilling efforts. With the ever-expanding universe of coding languages, it's crucial to stay ahead of the curve. So, let's dive into the top five programming languages that are set to dominate in 2024 and beyond.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Python: The Unstoppable Force&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Python has been on a relentless climb up the programming language charts, and in 2024, it's showing no signs of slowing down. Praised for its versatility and rapid development capabilities, Python has become a go-to language for various applications. Statista ranks it as the third most-used language in 2023, and the TIOBE Index currently crowns it as number one. Whether you're into AI, automation, or workflow optimization, strong Python skills are a hot commodity in the job market.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcp1gec1auh3bafaeyce.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcp1gec1auh3bafaeyce.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Java: The Time-Tested Titan&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Java, born in 1995, has stood the test of time and remains a stalwart in the programming world. A recent survey of 14 million developer jobs placed Java as the third most in-demand language. Widely used in web development, cloud computing, IoT applications, and large-scale enterprise tools, Java offers a sense of job security that's hard to match.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwpf3wdjsx8zxy9pyu2w.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwpf3wdjsx8zxy9pyu2w.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Kotlin: Powering the Future of App Development&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you're into Android or cross-platform app development, Kotlin should be on your radar. Endorsed by Google as the official language for Android development in 2017, Kotlin has steadily gained popularity. Fintech enthusiasts, take note – SumUp is in search of a senior backend Kotlin engineer to contribute to an innovative in-app point-of-sale solution in the heart of Paris.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbsycu9xn7r911iybij61.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbsycu9xn7r911iybij61.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. JavaScript: The Ever-Adaptable Dynamo&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;JavaScript, with its incredible adaptability, continues to hold a prime spot among the most in-demand languages. Powering over 98% of all websites in some capacity, JavaScript is the force behind the dynamic and interactive content you encounter on your devices daily. From laptops to smartphones to smart TVs, JavaScript is the unsung hero of front-end web development.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhm7t6241oi8ea0ggio5t.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhm7t6241oi8ea0ggio5t.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Rust: Unleashing Performance and Safety&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Enter Rust, a programming language designed for speed, safety, and practicality. Positioned as a systems programming language, Rust runs like a speeding bullet, preventing segfaults and guaranteeing thread safety. Its memory efficiency makes it ideal for embedded systems, making it a rising star in the programming constellation.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Rust Benefits:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fast and efficient for high-performance applications.&lt;/li&gt;
&lt;li&gt;Safe and reliable, perfect for mission-critical software.&lt;/li&gt;
&lt;li&gt;Easy to learn with a supportive developer community.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Rust Cons:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Limited support/documentation as compared to other languages.&lt;/li&gt;
&lt;li&gt;Challenges in integration with existing codebases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpw8r9vsbsvoryily2ei.gif" alt="Image description"&gt;&lt;/p&gt;

&lt;p&gt;As we embark on this coding journey into 2024, these five languages stand out as the champions of the programming arena. Whether you're a Python aficionado, a Java enthusiast, a Kotlin connoisseur, a JavaScript wizard, or a Rust explorer, there's a world of opportunities awaiting your skills. Happy coding, and may your 2024 be filled with endless lines of success! 🚀✨&lt;/p&gt;

</description>
      <category>programming</category>
      <category>python</category>
      <category>javascript</category>
      <category>rust</category>
    </item>
    <item>
      <title>LangChain's 1st Module: Model I/O 🦜🤖</title>
      <dc:creator>Jaydeep Biswas</dc:creator>
      <pubDate>Wed, 03 Jan 2024 18:29:56 +0000</pubDate>
      <link>https://dev.to/jaydeepb21/1st-module-model-io-4b6a</link>
      <guid>https://dev.to/jaydeepb21/1st-module-model-io-4b6a</guid>
      <description>&lt;p&gt;In the last post we have gone through the &lt;a href="https://dev.to/jaydeepb21/unlocking-infinite-possibilities-with-langchain-4543"&gt;Installation and Setup of LangChain&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the realm of LangChain, the pivotal element shaping any application lies in the language model. This module lays the foundation for effective interaction with language models, ensuring a seamless integration process. 🚀&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Components of Model I/O 🧩
&lt;/h2&gt;

&lt;h3&gt;
  
  
  LLMs and Chat Models (related but distinct): 🗣️
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;LLMs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Definition:&lt;/em&gt; Pure text completion models.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Input/Output:&lt;/em&gt; Take a text string as input and return a text string as output.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Chat Models:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Definition:&lt;/em&gt; Models leveraging a language model as a base, differing in input and output formats.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Input/Output:&lt;/em&gt; Accept a list of chat messages as input and return a Chat Message.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prompts: 📝
&lt;/h3&gt;

&lt;p&gt;Templatize, dynamically select, and manage model inputs. This enables the creation of flexible and context-specific prompts guiding the language model's responses.&lt;/p&gt;

&lt;h3&gt;
  
  
  Output Parsers: 📤
&lt;/h3&gt;

&lt;p&gt;These components extract and format information from model outputs. They prove invaluable for converting raw language model output into structured data or specific formats required by the application.&lt;/p&gt;
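&lt;p&gt;The contract behind every output parser is small: publish format instructions for the prompt, then parse the raw string the model returns. A minimal stdlib-only sketch in that spirit (an illustrative toy class, not an actual LangChain parser):&lt;br&gt;
&lt;/p&gt;

```python
class KeyValueOutputParser:
    """Toy output parser: instructs the model to emit key=value lines."""

    def get_format_instructions(self):
        return "Answer with one key=value pair per line."

    def parse(self, text):
        result = {}
        for line in text.strip().splitlines():
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip()
        return result

parser = KeyValueOutputParser()
raw_model_output = "name = Ada Lovelace\nborn = 1815"
print(parser.parse(raw_model_output))
# {'name': 'Ada Lovelace', 'born': '1815'}
```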

&lt;h3&gt;
  
  
  LLMs: 🧠
&lt;/h3&gt;

&lt;p&gt;LangChain's integration with Large Language Models (LLMs), such as OpenAI, Cohere, and Hugging Face, constitutes a fundamental aspect of its functionality. LangChain itself doesn't host LLMs but provides a uniform interface for interacting with various LLMs.&lt;/p&gt;

&lt;p&gt;This section outlines the usage of the OpenAI LLM wrapper in LangChain, applicable to other LLM types. Assuming it's installed, let's initialize the LLM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.llms import OpenAI
llm = OpenAI()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;LLMs adhere to the Runnable interface, the basic building block of the LangChain Expression Language (LCEL). This implies support for invoke, ainvoke, stream, astream, batch, abatch, astream_log calls.&lt;/p&gt;
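&lt;p&gt;To make the synchronous half of that interface concrete, here is a stdlib-only toy runnable exposing &lt;code&gt;invoke&lt;/code&gt;, &lt;code&gt;batch&lt;/code&gt;, and &lt;code&gt;stream&lt;/code&gt; (a sketch of the contract, not LangChain's implementation, which also provides the async variants):&lt;br&gt;
&lt;/p&gt;

```python
class UppercaseRunnable:
    """Toy runnable exposing the sync entry points of the interface."""

    def invoke(self, text):
        # One input, one output
        return text.upper()

    def batch(self, texts):
        # Many inputs, many outputs
        return [self.invoke(t) for t in texts]

    def stream(self, text):
        # Yield the output piece by piece instead of all at once
        for word in text.split():
            yield self.invoke(word) + " "

r = UppercaseRunnable()
print(r.invoke("hello"))                 # HELLO
print(r.batch(["a", "b"]))               # ['A', 'B']
print("".join(r.stream("hello world")))  # HELLO WORLD (trailing space)
```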

&lt;p&gt;LLMs accept strings as inputs or objects coerced to string prompts, including List[BaseMessage] and PromptValue. Now, let's delve into some examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response = llm.invoke("List the seven wonders of the world.")
print(response)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F371rfcn7ourpms0lujkj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F371rfcn7ourpms0lujkj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can alternatively call the stream method to stream the text response.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for chunk in llm.stream("Where were the 2012 Olympics held?"):
    print(chunk, end="", flush=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdsrayzpfzkl7yv8p43va.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdsrayzpfzkl7yv8p43va.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Chat Models: Revolutionizing Conversations in LangChain 💬
&lt;/h1&gt;

&lt;p&gt;In the dynamic realm of LangChain, the integration of chat models emerges as a pivotal force, breathing life into interactive chat applications. These models, a specialized variant of language models, wield the power of internal language models while showcasing a distinctive interface tailored around chat messages as both inputs and outputs. Let's embark on an in-depth exploration of leveraging OpenAI's chat model within the LangChain ecosystem.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.chat_models import ChatOpenAI
chat = ChatOpenAI()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within the language of LangChain, chat models seamlessly interact with various message types, including AIMessage, HumanMessage, SystemMessage, FunctionMessage, and ChatMessage (sporting an arbitrary role parameter). The stalwarts among these are undeniably HumanMessage, AIMessage, and SystemMessage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.schema.messages import HumanMessage, SystemMessage
messages = [
    SystemMessage(content="You are Michael Jordan."),
    HumanMessage(content="Which shoe manufacturer are you associated with?"),
]
response = chat.invoke(messages)
print(response.content)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljxuqr05cs54273nigdq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljxuqr05cs54273nigdq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Unveiling the Power of Prompts 🎭
&lt;/h3&gt;

&lt;p&gt;Prompts, the architects of coherent and relevant language model outputs, assume a central role in the LangChain narrative. From straightforward instructions to intricate few-shot examples, handling prompts within LangChain is a streamlined journey, all thanks to a suite of dedicated classes and functions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Crafting a Dynamic Prompt with &lt;code&gt;PromptTemplate&lt;/code&gt; 🖋️
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.prompts import PromptTemplate

# Simple prompt with placeholders
prompt_template = PromptTemplate.from_template(
    "Tell me a {adjective} joke about {content}."
)

# Filling placeholders to create a prompt
filled_prompt = prompt_template.format(adjective="funny", content="robots")
print(filled_prompt)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For chat models, where prompts evolve into more structured conversations with messages assigned specific roles, LangChain introduces the &lt;code&gt;ChatPromptTemplate&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Shaping an Interactive Chat Prompt 🤔
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.prompts import ChatPromptTemplate

# Defining a chat prompt with various roles
chat_template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful AI bot. Your name is {name}."),
        ("human", "Hello, how are you doing?"),
        ("ai", "I'm doing well, thanks!"),
        ("human", "{user_input}"),
    ]
)

# Formatting the chat prompt
formatted_messages = chat_template.format_messages(name="Bob", user_input="What is your name?")
for message in formatted_messages:
    print(message)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This strategic approach empowers the creation of chatbots that are not only interactive but also dynamic in their responses, adapting to the nuances of the conversation.&lt;/p&gt;

&lt;p&gt;Both &lt;code&gt;PromptTemplate&lt;/code&gt; and &lt;code&gt;ChatPromptTemplate&lt;/code&gt; seamlessly integrate with the LangChain Expression Language (LCEL), positioning themselves as integral components within more extensive and intricate workflows—a topic we'll delve deeper into shortly.&lt;/p&gt;

&lt;p&gt;Custom prompt templates become the artisans' tools, essential for tasks demanding unique formatting or specific instructions. The artistry involves defining input variables and crafting a custom formatting method, providing LangChain the flexibility to cater to a diverse array of application-specific needs.&lt;/p&gt;
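&lt;p&gt;As a concrete sketch of that pattern, here is a stdlib-only custom template that declares its input variables and validates them in its own format method (the class and variable names are made up for illustration):&lt;br&gt;
&lt;/p&gt;

```python
class CustomPromptTemplate:
    """Toy custom template: declared variables plus bespoke formatting."""

    input_variables = ["function_name", "source_code"]

    def format(self, **kwargs):
        missing = [v for v in self.input_variables if v not in kwargs]
        if missing:
            raise ValueError(f"Missing variables: {missing}")
        return (
            "Explain the function {function_name}:\n"
            "{source_code}"
        ).format(**kwargs)

template = CustomPromptTemplate()
prompt = template.format(
    function_name="add",
    source_code="def add(a, b): return a + b",
)
print(prompt)
```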

&lt;p&gt;Discover the power of few-shot prompting in LangChain, a feature that empowers models to learn from examples. This proves indispensable for tasks requiring contextual understanding or recognition of specific patterns. Few-shot prompt templates can be meticulously constructed from a set of examples or with the aid of an Example Selector object—unveil more on this &lt;a href="https://python.langchain.com/docs/modules/model_io/prompts/few_shot_examples" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Embrace the journey of transforming prompts into dialogues, where language models breathe life into interactive narratives within the LangChain universe.&lt;/p&gt;
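&lt;p&gt;Mechanically, a few-shot prompt is careful string assembly: a task instruction, a handful of formatted examples, and the new input. A stdlib-only sketch with made-up antonym examples (LangChain's FewShotPromptTemplate automates this same assembly):&lt;br&gt;
&lt;/p&gt;

```python
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]
example_template = "Word: {word}\nAntonym: {antonym}"

def few_shot_prompt(examples, new_word):
    # Format each example, join them, then append the unanswered input
    shots = "\n\n".join(example_template.format(**e) for e in examples)
    return f"Give the antonym of each word.\n\n{shots}\n\nWord: {new_word}\nAntonym:"

prompt = few_shot_prompt(examples, "big")
print(prompt)
```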

&lt;h1&gt;
  
  
  The Power of Output Parsers in LangChain 🛠️
&lt;/h1&gt;

&lt;p&gt;Output parsers stand as the unsung heroes in the vibrant ecosystem of LangChain, playing a pivotal role in shaping the responses generated by language models. This section is an exploration of the nuanced world of output parsers, accompanied by code examples utilizing LangChain's diverse set, including PydanticOutputParser, SimpleJsonOutputParser, CommaSeparatedListOutputParser, DatetimeOutputParser, and XMLOutputParser.&lt;/p&gt;

&lt;h2&gt;
  
  
  PydanticOutputParser: Crafted Precision ✨
&lt;/h2&gt;

&lt;p&gt;LangChain introduces the PydanticOutputParser, a gem for parsing responses into Pydantic data structures. Let's delve into a step-by-step example to witness its prowess:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initializing the language model
model = OpenAI(model_name="text-davinci-003", temperature=0.0)

# Defining the desired data structure using Pydantic
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    @validator("setup")
    def question_ends

_with_question_mark(cls, field):
        if field[-1] != "?":
            raise ValueError("Badly formed question!")
        return field

# Setting up a PydanticOutputParser
parser = PydanticOutputParser(pydantic_object=Joke)

# Creating a prompt with format instructions
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

# Defining a query to prompt the language model
query = "Tell me a joke."

# Combining prompt, model, and parser for structured output
prompt_and_model = prompt | model
output = prompt_and_model.invoke({"query": query})

# Parsing the output using the parser
parsed_result = parser.invoke(output)

# The result is a structured object
print(parsed_result)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febisppchezevs16uo0e1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febisppchezevs16uo0e1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  SimpleJsonOutputParser: Decoding JSON-Like Elegance 🌐
&lt;/h2&gt;

&lt;p&gt;When dealing with JSON-like outputs, LangChain's SimpleJsonOutputParser takes the stage. Here's a glimpse into its functionality:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Creating a JSON prompt
json_prompt = PromptTemplate.from_template(
    "Return a JSON object with `birthdate` and `birthplace` key that answers the following question: {question}"
)

# Initializing the JSON parser
json_parser = SimpleJsonOutputParser()

# Crafting a chain with the prompt, model, and parser
json_chain = json_prompt | model | json_parser

# Streaming through the results
result_list = list(json_chain.stream({"question": "When and where was Elon Musk born?"}))

# The result is a list of JSON-like dictionaries
print(result_list)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ivn3p1pdwl4bmimwn3x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ivn3p1pdwl4bmimwn3x.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CommaSeparatedListOutputParser: Unraveling Lists with Ease 📜
&lt;/h2&gt;

&lt;p&gt;The CommaSeparatedListOutputParser steps in when extracting comma-separated lists from model responses becomes imperative. Witness its simplicity in action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initializing the parser
output_parser = CommaSeparatedListOutputParser()

# Creating format instructions
format_instructions = output_parser.get_format_instructions()

# Creating a prompt to request a list
prompt = PromptTemplate(
    template="List five {subject}.\n{format_instructions}",
    input_variables=["subject"],
    partial_variables={"format_instructions": format_instructions}
)

# Defining a query to prompt the model
query = "English Premier League Teams"

# Generating the output
output = model(prompt.format(subject=query))

# Parsing the output using the parser
parsed_result = output_parser.parse(output)

# The result is a list of items
print(parsed_result)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsw19qdkzttl7grlyu7w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsw19qdkzttl7grlyu7w.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  DatetimeOutputParser: Unveiling Temporal Insights 🕰️
&lt;/h2&gt;

&lt;p&gt;LangChain's DatetimeOutputParser is tailored for parsing datetime information. Experience its capabilities firsthand:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initializing the DatetimeOutputParser
output_parser = DatetimeOutputParser()

# Creating a prompt with format instructions
template = """
Answer the user's question:
{question}
{format_instructions}
"""

prompt = PromptTemplate.from_template(
    template,
    partial_variables={"format_instructions": output_parser.get_format_instructions()},
)

# Creating a chain with the prompt and language model
chain = LLMChain(prompt=prompt, llm=OpenAI())

# Defining a query to prompt the model
query = "when did Neil Armstrong land on the moon in terms of GMT?"

# Running the chain
output = chain.run(query)

# Parsing the output using the datetime parser
parsed_result = output_parser.parse(output)

# The result is a datetime object
print(parsed_result)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzs4ivpeqqnov56j7jlv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzs4ivpeqqnov56j7jlv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These examples unfold the versatility of LangChain's output parsers, adept at structuring diverse model responses to cater to various applications and formats. Output parsers emerge as indispensable tools, elevating the usability and interpretability of language model outputs within the LangChain ecosystem.&lt;/p&gt;

&lt;p&gt;Next Chapter: &lt;a href="https://dev.to/jaydeepb21/langchains-2nd-module-retrieval-2ape"&gt;LangChain's 2nd Module: Retrieval&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>langchain</category>
      <category>llm</category>
    </item>
    <item>
      <title>Discover Infinite Possibilities with LangChain 🦜🚀</title>
      <dc:creator>Jaydeep Biswas</dc:creator>
      <pubDate>Tue, 02 Jan 2024 18:28:08 +0000</pubDate>
      <link>https://dev.to/jaydeepb21/unlocking-infinite-possibilities-with-langchain-4543</link>
      <guid>https://dev.to/jaydeepb21/unlocking-infinite-possibilities-with-langchain-4543</guid>
      <description>&lt;h2&gt;
  
  
  Introduction 🌐
&lt;/h2&gt;

&lt;p&gt;Embarking on the exploration of &lt;strong&gt;LangChain&lt;/strong&gt; feels like diving into a world of limitless possibilities, where innovation meets the power of language models. At its core, &lt;strong&gt;LangChain&lt;/strong&gt; serves as a revolutionary framework, meticulously crafted to empower developers in the creation of applications that seamlessly tap into the capabilities of language models. This toolkit goes beyond mere coding; it represents a holistic ecosystem, streamlining the process from conceptualization to implementation through its diverse components.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;LangChain&lt;/strong&gt; Libraries 📚
&lt;/h2&gt;

&lt;p&gt;As I delve into the intricacies of &lt;strong&gt;LangChain&lt;/strong&gt;, I encounter its foundational elements – the Python and JavaScript-based &lt;strong&gt;LangChain Libraries&lt;/strong&gt;. These libraries, the backbone of &lt;strong&gt;LangChain&lt;/strong&gt;, provide the necessary interfaces and integrations for various components. They not only offer a runtime for combining these components into coherent chains but also present ready-made implementations for immediate application.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;LangChain Templates&lt;/strong&gt; 🎨
&lt;/h2&gt;

&lt;p&gt;Adding to the richness of the &lt;strong&gt;LangChain&lt;/strong&gt; experience are the purpose-built &lt;strong&gt;LangChain Templates&lt;/strong&gt;. These deployable reference architectures cater to a spectrum of tasks, offering a solid starting point for projects, whether it be a conversational chatbot or a sophisticated analytical tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;LangServe&lt;/strong&gt;: Transforming Projects into Web Services 🌐
&lt;/h2&gt;

&lt;p&gt;Enter &lt;strong&gt;LangServe&lt;/strong&gt;, a versatile library designed for deploying &lt;strong&gt;LangChain&lt;/strong&gt; chains as REST APIs. This tool is the linchpin for transforming &lt;strong&gt;LangChain&lt;/strong&gt; projects into accessible and scalable web services, a crucial step in taking applications to the next level.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;LangSmith&lt;/strong&gt;: A Developer's Playground 🛠️
&lt;/h2&gt;

&lt;p&gt;Complementing this toolkit is &lt;strong&gt;LangSmith&lt;/strong&gt;, a dedicated developer platform that serves as a testing ground for debugging, evaluating, and monitoring chains built on any large language model (LLM) framework. Its seamless integration with &lt;strong&gt;LangChain&lt;/strong&gt; makes it an indispensable companion for developers striving to refine and perfect their applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Synergy in Action 🔄
&lt;/h2&gt;

&lt;p&gt;The synergy among these components empowers me to navigate the development, productionization, and deployment of applications with unparalleled ease. With &lt;strong&gt;LangChain&lt;/strong&gt;, the journey begins by crafting applications using the libraries, drawing inspiration from templates for guidance. &lt;strong&gt;LangSmith&lt;/strong&gt; steps in to aid in inspecting, testing, and monitoring chains, ensuring a continuous enhancement process. Finally, the deployment phase becomes a seamless experience with &lt;strong&gt;LangServe&lt;/strong&gt;, effortlessly transforming any chain into an API.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion 🌟
&lt;/h2&gt;

&lt;p&gt;As I prepare to delve deeper into the intricacies of setting up &lt;strong&gt;LangChain&lt;/strong&gt;, the excitement builds. The prospect of creating intelligent, language model-powered applications is within reach, thanks to the comprehensive and user-friendly ecosystem that &lt;strong&gt;LangChain&lt;/strong&gt; provides.&lt;/p&gt;

&lt;h1&gt;
  
  
  Getting Started with LangChain: Installation and Setup 🚀
&lt;/h1&gt;

&lt;p&gt;Dive into the world of &lt;strong&gt;LangChain&lt;/strong&gt; with a straightforward installation process. Follow this step-by-step guide to set up LangChain seamlessly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation Steps 🛠️
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Installing LangChain
&lt;/h3&gt;

&lt;p&gt;Install &lt;strong&gt;LangChain&lt;/strong&gt; using &lt;code&gt;pip&lt;/code&gt; with the following command (a &lt;code&gt;conda&lt;/code&gt; package is also available via the &lt;code&gt;conda-forge&lt;/code&gt; channel):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install langchain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For those who prefer the latest features and are comfortable with a bit more adventure, you can install LangChain directly from source. Clone the repository and navigate to the &lt;code&gt;libs/langchain&lt;/code&gt; directory, then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -e .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For experimental features, consider installing &lt;strong&gt;langchain-experimental&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install langchain-experimental
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  LangChain CLI 📟
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;LangChain CLI&lt;/strong&gt; is a helpful tool for LangChain templates and LangServe projects. Install it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install langchain-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  LangServe Setup 🌐
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;LangServe&lt;/strong&gt; is essential for deploying LangChain chains as a REST API and gets installed alongside the LangChain CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integrating External Tools (OpenAI Example) 🤝
&lt;/h3&gt;

&lt;p&gt;LangChain often requires integrations with external entities. For example, for &lt;strong&gt;OpenAI's model APIs&lt;/strong&gt;, install the OpenAI Python package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install openai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To access the API, set your OpenAI API key as an environment variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export OPENAI_API_KEY="your_api_key"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, pass the key directly in your Python environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
os.environ['OPENAI_API_KEY'] = 'your_api_key'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  LangChain Modules 🧩
&lt;/h2&gt;

&lt;p&gt;LangChain offers modular components, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Model I/O:&lt;/strong&gt; Facilitates interaction with language models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval:&lt;/strong&gt; Enables access to application-specific data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agents:&lt;/strong&gt; Empower applications to select tools based on high-level directives.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chains:&lt;/strong&gt; Pre-defined, reusable compositions for application development.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory:&lt;/strong&gt; Maintains application state across multiple chain executions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  LangChain Expression Language (LCEL) 💬
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;LangChain Expression Language (LCEL)&lt;/strong&gt; is a declarative way to compose modules. Example LCEL snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import BaseOutputParser

# Example chain
chain = ChatPromptTemplate() | ChatOpenAI() | CustomOutputParser()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
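&lt;p&gt;To build intuition for the snippet above, here is a tiny plain-Python sketch of the composition idea behind the &lt;code&gt;|&lt;/code&gt; operator. This is not LangChain's actual implementation; the &lt;code&gt;Step&lt;/code&gt; class and the stand-in components are purely illustrative.&lt;/p&gt;

```python
# Conceptual sketch only: a minimal pipeline class mimicking how LCEL's
# "|" operator chains components. Not LangChain's real implementation.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Chaining two steps yields a step that runs them in sequence.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Stand-ins for a prompt template, a model, and an output parser.
prompt = Step(lambda topic: f"Tell me a joke about {topic}")
model = Step(lambda text: f"[model response to: {text}]")
parser = Step(lambda text: text.strip())

chain = prompt | model | parser
print(chain.invoke("bears"))
```

&lt;p&gt;Each component transforms its input and hands the result to the next, which is exactly the mental model LCEL encourages.&lt;/p&gt;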



&lt;h2&gt;
  
  
  Next Steps 🚀
&lt;/h2&gt;

&lt;p&gt;Now that the basics are covered, &lt;em&gt;delve deeper into each LangChain module&lt;/em&gt;, learn the LangChain Expression Language, explore common use cases, implement them, deploy end-to-end applications with LangServe, and leverage &lt;strong&gt;LangSmith&lt;/strong&gt; for debugging, testing, and monitoring.&lt;/p&gt;

&lt;p&gt;Unleash the full potential of &lt;strong&gt;LangChain&lt;/strong&gt; as you craft powerful language model applications!&lt;br&gt;
Next Chapter: &lt;a href="https://dev.to/jaydeepb21/1st-module-model-io-4b6a"&gt;LangChain's 1st Module: Model I/O&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>langchain</category>
      <category>llm</category>
    </item>
    <item>
      <title>Understanding LangChain: Unveiling the Power Behind the Platform 🦜🔗</title>
      <dc:creator>Jaydeep Biswas</dc:creator>
      <pubDate>Mon, 01 Jan 2024 18:29:52 +0000</pubDate>
      <link>https://dev.to/jaydeepb21/understanding-langchain-unveiling-the-power-behind-the-platform-4k8d</link>
      <guid>https://dev.to/jaydeepb21/understanding-langchain-unveiling-the-power-behind-the-platform-4k8d</guid>
      <description>&lt;h2&gt;
  
  
  Introduction 🌐
&lt;/h2&gt;

&lt;p&gt;A few months ago, I stumbled upon LangChain while immersed in a Language Model (LLM) project. Since then, I have been captivated by its capabilities, though I often find that it is not as widely comprehended as it deserves to be. This is unsurprising, considering that when I initially explored LangChain, its utility wasn't immediately apparent. However, as I delved deeper and incorporated it into my projects, the true potential of LangChain gradually unfolded.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unraveling LangChain's Power 🔍
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;br&gt;
LangChain, at its core, is a versatile platform that tackles a spectrum of challenges encountered in language-related projects. It goes beyond the surface-level understanding and requires a hands-on approach to fully appreciate its capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Cases&lt;/strong&gt;&lt;br&gt;
LangChain finds its strength in solving intricate problems associated with language processing and understanding. Here's an exploration of the key areas where LangChain excels:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Dynamic Language Modeling&lt;/strong&gt; 🌐&lt;br&gt;
LangChain shines in the realm of Language Model projects. Its ability to adapt dynamically to varying linguistic contexts sets it apart. This adaptability is especially beneficial when dealing with diverse datasets and complex linguistic structures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Problematic Misunderstandings Clarified&lt;/strong&gt; 🤔&lt;br&gt;
When discussing LangChain with others, a common theme emerges: a lack of clear comprehension. LangChain addresses this by acting as a bridge, clarifying misunderstandings and facilitating a more intuitive grasp of its functionalities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Project-Driven Revelation&lt;/strong&gt; 🚀&lt;br&gt;
The true power of LangChain unfolds progressively as you integrate it into your projects. It is not merely a tool; it's an evolving solution that reveals its depth and sophistication as you navigate through different use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Elaborating on LangChain's Role 🛠️
&lt;/h2&gt;

&lt;p&gt;LangChain's role becomes more apparent when you explore its features in-depth:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Adaptive Learning Mechanism&lt;/strong&gt; 🔄&lt;br&gt;
LangChain boasts an adaptive learning mechanism, making it well-suited for projects that involve continuous learning and evolving linguistic patterns. This adaptability ensures that the platform remains effective across diverse language landscapes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Semantic Clarity Engine&lt;/strong&gt; 🧠&lt;br&gt;
The platform acts as a Semantic Clarity Engine, dissecting complex linguistic constructs and providing a clearer understanding. This is particularly valuable in scenarios where the meaning behind language nuances can significantly impact project outcomes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview 🌐
&lt;/h2&gt;

&lt;p&gt;LangChain emerges as a crucial abstraction in the landscape of Large Language Models (LLMs). In the post-ChatGPT era, where LLMs abound, LangChain provides a unified interface for experimenting with and switching between models, offering both performance and cost advantages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prompt Engineering 🤖
&lt;/h2&gt;

&lt;p&gt;Crafting the right question, or "prompt," is pivotal for meaningful results with LLMs. LangChain facilitates prompt engineering through prompt templates, enabling the integration of external data from enterprise databases. The platform also introduces the intriguing concept of chaining LLMs, allowing the output of one question to serve as input for another.&lt;/p&gt;
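&lt;p&gt;The template idea can be pictured with plain Python string formatting. The sketch below is illustrative only, with no LangChain involved; the template text and field names are made up, and in a real project the &lt;code&gt;record&lt;/code&gt; value would come from an enterprise database.&lt;/p&gt;

```python
# Illustration of the prompt-template idea: a reusable skeleton whose
# variables are filled in at run time, possibly with external data.

TEMPLATE = (
    "You are a support assistant for {company}.\n"
    "Customer record: {record}\n"
    "Question: {question}"
)

def build_prompt(company, record, question):
    # A prompt template is essentially parameterized text.
    return TEMPLATE.format(company=company, record=record, question=question)

prompt = build_prompt("Acme", {"plan": "pro"}, "How do I reset my password?")
print(prompt)
```

&lt;p&gt;Chaining then simply means feeding one model's answer into the next template as an input variable.&lt;/p&gt;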

&lt;h2&gt;
  
  
  Retrieval Augmented Generation (RAG) 🔄
&lt;/h2&gt;

&lt;p&gt;LangChain delves into the importance of context in LLMs, introducing Retrieval Augmented Generation (RAG). By augmenting retrieval with context, users can significantly enhance the relevance and accuracy of responses. LangChain provides tools for embedding private data into a vector store, allowing for intelligent context provision to LLMs based on the user's data.&lt;/p&gt;
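&lt;p&gt;A toy, dependency-free sketch of the retrieval step may help. Real RAG pipelines use embeddings and a vector store; here simple word overlap stands in for vector similarity, and the documents are invented for the example.&lt;/p&gt;

```python
import re

# Toy RAG sketch: score stored documents against a query and prepend the
# best match as context. Word overlap stands in for embedding similarity.

DOCS = [
    "LangChain provides a unified interface over many LLMs.",
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
]

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, docs):
    # Pick the document sharing the most words with the query.
    q = tokens(query)
    return max(docs, key=lambda d: len(q.intersection(tokens(d))))

def augmented_prompt(query):
    context = retrieve(query, DOCS)
    return f"Context: {context}\nQuestion: {query}"

print(augmented_prompt("What is your refund policy?"))
```

&lt;p&gt;The LLM then answers the question with the retrieved context in front of it, which is the essence of RAG.&lt;/p&gt;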

&lt;h2&gt;
  
  
  Tools and Agents 🧰
&lt;/h2&gt;

&lt;p&gt;The platform introduces the "ReAct" prompting technique, encouraging LLMs to think step by step for enhanced reasoning. LangChain defines tools as wrappers around APIs and offers various pre-built tools. These tools can be associated with agents, which are then employed to augment LLM capabilities. This approach is particularly useful in overcoming the limitation of frozen-in-time knowledge in LLMs.&lt;/p&gt;
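&lt;p&gt;The tool concept can be sketched in plain Python: a tool is a named wrapper around a function or API that an agent invokes by name. The two tools below are made-up stand-ins, not LangChain's built-in tools.&lt;/p&gt;

```python
# Minimal sketch of the "tools" idea. LangChain's real Tool and agent
# classes are richer; this only shows the shape of the abstraction.

TOOLS = {
    # Stubbed to a fixed date so the example is reproducible.
    "current_date": lambda _: "2024-01-01",
    "word_count": lambda text: str(len(text.split())),
}

def run_tool(name, tool_input=""):
    # An agent chooses `name` based on the LLM's reasoning (e.g. ReAct).
    return TOOLS[name](tool_input)

print(run_tool("word_count", "tools extend frozen model knowledge"))
```

&lt;p&gt;Because tools can reach live data sources, they let an LLM answer questions beyond its frozen training cutoff.&lt;/p&gt;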

&lt;h2&gt;
  
  
  Response Processing and Conversational Interfaces 🗣️
&lt;/h2&gt;

&lt;p&gt;LangChain addresses the challenge of processing LLM output by providing features like Callbacks, Streaming, and Batching. This ensures efficient data processing, whether for building conversational applications or APIs.&lt;/p&gt;
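&lt;p&gt;Stripped to its essence, the streaming idea looks like the plain-Python generator below. This is a conceptual stand-in only: real LangChain streaming is driven by the model's token stream and callback handlers.&lt;/p&gt;

```python
# Sketch of streaming: the caller consumes tokens as they are produced
# instead of waiting for the complete response.

def fake_llm_stream(prompt):
    # Stand-in for a model yielding tokens one at a time.
    for token in ["Lang", "Chain ", "streams ", "tokens."]:
        yield token

chunks = []
for token in fake_llm_stream("demo"):
    chunks.append(token)  # a callback could update a UI here

print("".join(chunks))
```

&lt;p&gt;Batching is the complementary pattern: submit many prompts at once and collect the responses together.&lt;/p&gt;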

&lt;h2&gt;
  
  
  Conclusion 🎉
&lt;/h2&gt;

&lt;p&gt;In essence, LangChain is a versatile Swiss Army Knife for LLMs, offering a comprehensive suite of tools and features. This overview only scratches the surface of its capabilities, encouraging users to explore further and unlock the full potential of language models.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I will continue to write blogs and tutorials on LangChain&lt;/em&gt;. To learn more and engage with the community, here is a link to the documentation: &lt;a href="https://python.langchain.com/en/latest/index.html"&gt;LangChain Documentation&lt;/a&gt; 📘&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>langchain</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
