<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nicolas Oulianov</title>
    <description>The latest articles on DEV Community by Nicolas Oulianov (@oulianov).</description>
    <link>https://dev.to/oulianov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1252747%2F624079a4-d560-4a32-91fd-dfe36552cd77.jpeg</url>
      <title>DEV Community: Nicolas Oulianov</title>
      <link>https://dev.to/oulianov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/oulianov"/>
    <language>en</language>
    <item>
      <title>How to add AI analytics to a Langchain agent?</title>
      <dc:creator>Nicolas Oulianov</dc:creator>
      <pubDate>Tue, 09 Jan 2024 17:34:00 +0000</pubDate>
      <link>https://dev.to/oulianov/how-to-add-ai-analytics-to-a-langchain-agent-137l</link>
      <guid>https://dev.to/oulianov/how-to-add-ai-analytics-to-a-langchain-agent-137l</guid>
      <description>&lt;p&gt;Building an AI agent with Langchain is an exciting endeavor. In a few lines of Python, you have the prototype of an agent. You can chat with your data!&lt;/p&gt;

&lt;p&gt;But like any cutting-edge technology, it comes with its own set of challenges. Hallucinations, incorrect responses, and misunderstandings can plague your Langchain-based agent. &lt;/p&gt;

&lt;p&gt;What’s the first step to overcome these hurdles? &lt;strong&gt;Analytics.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xp2pRxiB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cp3jbkl7ndmzu3o68ewz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xp2pRxiB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cp3jbkl7ndmzu3o68ewz.png" alt="Let's deep dive into the world of analytics. Here's the visual of a nice galaxy." width="720" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Analytics Spectrum
&lt;/h2&gt;

&lt;p&gt;In software, analytics spans from performance metrics to user satisfaction insights.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance Analytics:&lt;/strong&gt; Dive deep into code execution details to catch bugs and optimize performance. &lt;em&gt;Is everything working as intended?&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Product Analytics:&lt;/strong&gt; Assess whether your agent meets the expectations of end-users. &lt;em&gt;Am I making my users happy?&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To create exceptional software, it's crucial to balance both types of analytics.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Analytics Does Your Agent Need?
&lt;/h2&gt;

&lt;p&gt;For Langchain agents, you want to make sure that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The agent code runs smoothly and without bugs.&lt;/li&gt;
&lt;li&gt;The agent answers correctly in the main use cases.&lt;/li&gt;
&lt;li&gt;The agent behaves reasonably in edge cases.&lt;/li&gt;
&lt;li&gt;End users like the agent. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2vfF3ASC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/whkdvfun49ozccbx3k1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2vfF3ASC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/whkdvfun49ozccbx3k1d.png" alt="Those kind of analytics build on each other. Imagine them stacked like a pyramid." width="720" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The ideal approach is to progress systematically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start with basic metrics like execution speed and error rate.&lt;/li&gt;
&lt;li&gt;Develop tailored tests for the agent's behavior in main and edge cases.&lt;/li&gt;
&lt;li&gt;Collect user feedback post-implementation.&lt;/li&gt;
&lt;/ul&gt;
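&lt;p&gt;The first step doesn't require any analytics provider at all. Here is a minimal, framework-agnostic sketch: the name &lt;code&gt;with_basic_metrics&lt;/code&gt; is invented for this article, and the metrics live in an in-memory dict rather than a real store.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import time

def with_basic_metrics(fn, metrics):
    """Wrap a callable (e.g. retrieval_chain.invoke) to record
    execution speed and error rate in the `metrics` dict."""
    def wrapped(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        except Exception:
            # Count failures so we can compute an error rate later
            metrics["errors"] = metrics.get("errors", 0) + 1
            raise
        finally:
            metrics["calls"] = metrics.get("calls", 0) + 1
            metrics.setdefault("latencies", []).append(time.perf_counter() - start)
    return wrapped
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You would then call &lt;code&gt;with_basic_metrics(retrieval_chain.invoke, metrics)&lt;/code&gt; in place of &lt;code&gt;retrieval_chain.invoke&lt;/code&gt; and inspect the dict periodically.&lt;/p&gt;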

&lt;p&gt;User feedback is, by far, the most valuable thing you can get to improve an agent. But it can be hard to collect properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Langchain agent code
&lt;/h2&gt;

&lt;p&gt;Assuming you have a basic Langchain agent for Doc Q&amp;amp;A, let's explore how to integrate analytics. For example, your code may look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.prompts&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatPromptTemplate&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_community.chat_models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_community.embeddings&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAIEmbeddings&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_community.vectorstores&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FAISS&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_core.output_parsers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StrOutputParser&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_core.runnables&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;RunnablePassthrough&lt;/span&gt;

&lt;span class="n"&gt;vectorstore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;FAISS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_texts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Phospho is the LLM analytics platform&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Paris is the capital of Fashion (sorry not sorry London)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The Concorde had a maximum cruising speed of 2,179 km (1,354 miles) per hour.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;OpenAIEmbeddings&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;retriever&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vectorstore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;as_retriever&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;template&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Answer the question based only on the following context:
{context}

Question: {question}
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ChatPromptTemplate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_template&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;template&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;retrieval_chain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;context&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;retriever&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;question&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;RunnablePassthrough&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nc"&gt;StrOutputParser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run the chain, invoke it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;question&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is the speed of the Concorde?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;retrieval_chain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The speed of the Concorde is 2,179km per hour.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s add analytics to this chain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding analytics to a Langchain agent
&lt;/h2&gt;

&lt;p&gt;There are multiple ways to add analytics to a Langchain agent. &lt;/p&gt;

&lt;p&gt;The basic idea is that you log the agent’s inputs and outputs in a database, then compute metrics on top of those logs to get insights. &lt;/p&gt;

&lt;p&gt;Doing this yourself can work for simple cases, but it quickly becomes tedious.&lt;/p&gt;
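&lt;p&gt;To make the do-it-yourself baseline concrete, here is a minimal sketch using SQLite from the Python standard library. The table layout and function names are invented for this example; a real setup would also store metadata such as model version, latency, and user id.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import sqlite3
import time

def init_db(path=":memory:"):
    # Tiny log store: one row per agent interaction
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS logs ("
        "ts REAL, input TEXT, output TEXT, error INTEGER)"
    )
    return conn

def log_interaction(conn, question, answer, error=False):
    conn.execute(
        "INSERT INTO logs VALUES (?, ?, ?, ?)",
        (time.time(), question, answer, int(error)),
    )
    conn.commit()

def error_rate(conn):
    # AVG over a 0/1 column gives the failure ratio
    (rate,) = conn.execute("SELECT AVG(error) FROM logs").fetchone()
    return rate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Even this toy version hints at why it gets tedious: every new metric means more schema, more queries, and more maintenance.&lt;/p&gt;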

&lt;p&gt;There are many analytics providers. They differ in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The amount of setup required&lt;/li&gt;
&lt;li&gt;The analytics level (performance analytics vs. product analytics)&lt;/li&gt;
&lt;li&gt;The broadness of insights&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Integrated analytics (Langsmith)
&lt;/h3&gt;

&lt;p&gt;You may have heard of &lt;a href="https://www.langchain.com/langsmith"&gt;Langsmith&lt;/a&gt;, the analytics platform created by Langchain to help debug your agent, with technical insights about execution speed and step by step decomposition.&lt;/p&gt;

&lt;p&gt;Langsmith is mainly focused on performance analytics and debugging. While this is important, it doesn’t give you the full picture.&lt;/p&gt;

&lt;p&gt;Currently, Langsmith is in closed beta, which unfortunately means you cannot use Langsmith without an invitation.&lt;/p&gt;

&lt;p&gt;Fortunately, alternatives exist. &lt;/p&gt;

&lt;h3&gt;
  
  
  Analytics with callbacks
&lt;/h3&gt;

&lt;p&gt;A common way to add analytics is to pass a &lt;strong&gt;callback&lt;/strong&gt; when invoking the chain or the agent. Callbacks are functions that are called whenever something happens during execution: a chain starts, the model returns an answer, an agent finishes, and so on. &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://python.langchain.com/docs/integrations/callbacks"&gt;Langchain&lt;/a&gt; documentation contains some examples.&lt;/p&gt;
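&lt;p&gt;To see what a callback looks like in isolation, here is a bare-bones handler built on &lt;code&gt;BaseCallbackHandler&lt;/code&gt; from &lt;code&gt;langchain_core&lt;/code&gt; that only measures how long each LLM call takes. It is a sketch for illustration, not something phospho provides.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import time

from langchain_core.callbacks import BaseCallbackHandler

class TimingCallbackHandler(BaseCallbackHandler):
    """Record the duration of each LLM call (illustrative example)."""

    def __init__(self):
        self.durations = []
        self._start = None

    def on_llm_start(self, serialized, prompts, **kwargs):
        # Called right before the model is queried
        self._start = time.monotonic()

    def on_llm_end(self, response, **kwargs):
        # Called when the model returns its answer
        if self._start is not None:
            self.durations.append(time.monotonic() - self._start)
            self._start = None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You would pass an instance of this handler in the &lt;code&gt;callbacks&lt;/code&gt; list of the &lt;code&gt;config&lt;/code&gt; argument, exactly like the phospho handler shown below.&lt;/p&gt;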

&lt;p&gt;Let’s focus on the example of adding analytics with phospho, a product analytics platform for LLM apps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install the phospho module
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--upgrade&lt;/span&gt; phospho
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Create an account on the &lt;a href="http://phospho.ai"&gt;phospho&lt;/a&gt; platform. Add your API key and project id as environment variables.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;PHOSPHO_API_KEY &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"..."&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;PHOSPHO_PROJECT_ID &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Add the callback when invoking your chain or your agent.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;phospho.integrations&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;PhosphoLangchainCallbackHandler&lt;/span&gt;

&lt;span class="n"&gt;question&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is the speed of the Concorde?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;retrieval_chain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
        &lt;span class="c1"&gt;# Add the callback in config
&lt;/span&gt;        &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;callbacks&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="nc"&gt;PhosphoLangchainCallbackHandler&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="c1"&gt;# Note: you can add multiple callbacks 
&lt;/span&gt;        &lt;span class="p"&gt;]},&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it! phospho will log the input, output, and intermediate steps of the chain.&lt;/p&gt;

&lt;p&gt;phospho logging runs asynchronously and the heavy computations happen remotely, so the impact on the agent’s execution speed is minimal.&lt;/p&gt;

&lt;p&gt;The chain input and output are logged to phospho in the “Tasks” tab. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6IG2MwWU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qxzy22284mw1ygf4o7v8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6IG2MwWU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qxzy22284mw1ygf4o7v8.png" alt="Task are displayed into phospho" width="720" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To see more details about a log, click View. The raw task data also includes the intermediate steps of the chain and the retrieved Documents.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to improve the product?
&lt;/h2&gt;

&lt;p&gt;phospho automatically labels the task as a &lt;a href="https://docs.phospho.ai/features/evaluation"&gt;success or a failure&lt;/a&gt; with default criteria. This can be improved with feedback given by you or by your users. &lt;/p&gt;

&lt;p&gt;phospho also &lt;a href="https://docs.phospho.ai/features/events"&gt;detects events&lt;/a&gt;. For example, here phospho detected that this interaction was a “question answering” event. You can set up custom events that trigger webhooks when detected, for example to receive a Slack message when a user discusses a certain topic.&lt;/p&gt;

&lt;p&gt;By monitoring the success rate, comparing different versions, and running tests, you can get a feel for what really matters to your users. &lt;/p&gt;
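&lt;p&gt;As a sketch of what comparing versions can mean in practice, here is a small aggregation over logged tasks. The record fields (&lt;code&gt;version&lt;/code&gt;, &lt;code&gt;success&lt;/code&gt;) are illustrative, not a phospho schema.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from collections import defaultdict

def success_rate_by_version(records):
    """Aggregate logged tasks into a success rate per agent version."""
    totals = defaultdict(lambda: [0, 0])  # version -&gt; [successes, count]
    for r in records:
        totals[r["version"]][0] += int(bool(r["success"]))
        totals[r["version"]][1] += 1
    return {v: s / n for v, (s, n) in totals.items()}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Comparing these rates before and after a prompt or model change tells you whether the change actually helped.&lt;/p&gt;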

&lt;h3&gt;
  
  
  Custom callback
&lt;/h3&gt;

&lt;p&gt;For more advanced integrations with Langchain, customize the phospho callback passed to the chain.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;phospho.integrations&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;PhosphoLangchainCallbackHandler&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MyCustomLangchainCallbackHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;PhosphoLangchainCallbackHandler&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

        &lt;span class="c1"&gt;# The full list of callbacks is available here: 
&lt;/span&gt;        &lt;span class="c1"&gt;# https://python.langchain.com/docs/modules/callbacks/
&lt;/span&gt;    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;on_agent_finish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;finish&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;AgentFinish&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Run on agent end.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

        &lt;span class="c1"&gt;# Do something custom here
&lt;/span&gt;        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;phospho&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;question&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is the speed of the Concorde?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;retrieval_chain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
        &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;callbacks&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="nc"&gt;MyCustomLangchainCallbackHandler&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; 
        &lt;span class="p"&gt;]},&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, improving the performance and user satisfaction of your Langchain agent starts with building analytics into your development process.&lt;/p&gt;

&lt;p&gt;By strategically addressing both performance and product analytics, you can ensure that your agent not only runs smoothly but also delivers a positive experience to end users.&lt;/p&gt;

&lt;p&gt;Explore &lt;a href="https://docs.phospho.ai/"&gt;phospho&lt;/a&gt; to learn more about how to improve your Langchain agent with analytics.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
