<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: tatyana</title>
    <description>The latest articles on DEV Community by tatyana (@tatyana).</description>
    <link>https://dev.to/tatyana</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1045095%2Fcbabf932-7399-47b3-80cb-86b4553dd62f.jpg</url>
      <title>DEV Community: tatyana</title>
      <link>https://dev.to/tatyana</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tatyana"/>
    <language>en</language>
    <item>
      <title>👀Aim+LlamaIndex: Track intermediate prompts, responses, and context chunks through Aim’s sophisticated UI.</title>
      <dc:creator>tatyana</dc:creator>
      <pubDate>Thu, 25 May 2023 13:46:18 +0000</pubDate>
      <link>https://dev.to/tatyana/aimllamaindex-track-intermediate-prompts-responses-and-context-chunks-through-aims-sophisticated-ui-3k6p</link>
      <guid>https://dev.to/tatyana/aimllamaindex-track-intermediate-prompts-responses-and-context-chunks-through-aims-sophisticated-ui-3k6p</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b4_bqssi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5pgca1h97hf9aqzfwiyn.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b4_bqssi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5pgca1h97hf9aqzfwiyn.gif" alt="demo" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this demo, you can see the capabilities of Aim for logging events while running queries within LlamaIndex. We use the AimCallback to store the outputs and show how to explore them using the Aim Text Explorer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aimstack.io/"&gt;aimstack.io&lt;/a&gt; is an awesome tool for tracking metadata for your LLM app (e.g. experiments 🧪, prompts 📝, and more).&lt;/p&gt;

&lt;p&gt;It’s the perfect tool for tracing within LlamaIndex, which manages multiple LLM calls on top of your data.&lt;/p&gt;

&lt;p&gt;Now you can easily track intermediate prompts, responses, and context chunks through Aim’s sophisticated UI.&lt;/p&gt;

&lt;p&gt;Full guide here: &lt;a href="https://gpt-index.readthedocs.io/en/latest/examples/callbacks/AimCallback.html"&gt;https://gpt-index.readthedocs.io/en/latest/examples/callbacks/AimCallback.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aim repo: &lt;a href="https://github.com/aimhubio/aim"&gt;https://github.com/aimhubio/aim&lt;/a&gt;&lt;br&gt;
Llama_index repo: &lt;a href="https://github.com/jerryjliu/llama_index"&gt;https://github.com/jerryjliu/llama_index&lt;/a&gt;&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>opensource</category>
    </item>
    <item>
      <title>LangChain + Aim: Building and Debugging AI Systems Made EASY!</title>
      <dc:creator>tatyana</dc:creator>
      <pubDate>Thu, 13 Apr 2023 20:52:38 +0000</pubDate>
      <link>https://dev.to/tatyana/langchain-aim-building-and-debugging-ai-systems-made-easy-1bk0</link>
      <guid>https://dev.to/tatyana/langchain-aim-building-and-debugging-ai-systems-made-easy-1bk0</guid>
      <description>&lt;h2&gt;
  
  
  The Rise of Complex AI Systems
&lt;/h2&gt;

&lt;p&gt;With the introduction of ChatGPT and large language models (LLMs) such as GPT-3.5-turbo and GPT-4, AI progress has skyrocketed. These models have enabled a wave of AI-based applications, bringing the power of LLMs to real-world use cases.&lt;/p&gt;

&lt;p&gt;But the true power of AI comes when we combine LLMs with other tools, scripts, and sources of computation to create much more powerful AI systems than standalone models.&lt;/p&gt;

&lt;p&gt;As AI systems get increasingly complex, the ability to effectively debug and monitor them becomes crucial. Without comprehensive tracing and debugging, improving, monitoring, and understanding these systems becomes extremely challenging.&lt;/p&gt;

&lt;p&gt;In this article, we will take a look at how to use Aim to easily trace complex AI systems built with LangChain. Specifically, we will go over how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;track all inputs and outputs of chains,&lt;/li&gt;
&lt;li&gt;visualize and explore individual chains,&lt;/li&gt;
&lt;li&gt;compare several chains side-by-side.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  LangChain: Building AI Systems with LLMs
&lt;/h2&gt;

&lt;p&gt;LangChain is a library designed to enable the development of powerful applications by integrating LLMs with other computational resources or knowledge sources. It streamlines the process of creating applications such as question answering systems, chatbots, and intelligent agents.&lt;/p&gt;

&lt;p&gt;It provides a unified interface for managing and optimizing prompts, creating sequences of calls to LLMs or other utilities (chains), interacting with external data sources, making decisions and taking actions. LangChain empowers developers to build sophisticated, cutting-edge applications by making the most of LLMs and easily connecting them with other tools!&lt;/p&gt;

&lt;h2&gt;
  
  
  Aim: Upgraded Debugging Experience for AI Systems
&lt;/h2&gt;

&lt;p&gt;Monitoring and debugging AI systems requires more than just scanning output logs on a terminal.&lt;/p&gt;

&lt;p&gt;Introducing Aim!&lt;/p&gt;

&lt;p&gt;Aim is an open-source AI metadata library that tracks all aspects of your AI system's execution, facilitating in-depth exploration, monitoring, and reproducibility.&lt;/p&gt;

&lt;p&gt;Importantly, Aim helps to query all the tracked metadata programmatically and is equipped with a powerful UI / observability layer for the AI metadata.&lt;/p&gt;

&lt;p&gt;In that way, Aim makes debugging, monitoring, and comparing different executions a breeze.&lt;/p&gt;

&lt;p&gt;Experience the ultimate control with Aim!&lt;/p&gt;

&lt;p&gt;Check out Aim on GitHub: &lt;a href="https://github.com/aimhubio/aim" rel="noopener noreferrer"&gt;github.com/aimhubio/aim&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2m1bfs5w6qwu5s9357ig.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2m1bfs5w6qwu5s9357ig.jpg" alt="Aim - text explorer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Aim + LangChain = 🚀
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyavyuyuqixs2o02xjyo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyavyuyuqixs2o02xjyo.png" alt="Aim+LangChain"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the release of LangChain v0.0.127, it's now possible to trace LangChain agents and chains with Aim using just a few lines of code! All you need to do is configure the Aim callback and run your executions as usual.&lt;/p&gt;

&lt;p&gt;Aim does the rest for you by tracking tools' and LLMs' inputs and outputs, agents' actions, and chain results. It also tracks the CLI command and arguments, system info and resource usage, environment variables, git info, and terminal outputs.&lt;/p&gt;
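&lt;p&gt;To make the idea concrete, here is a minimal, purely illustrative sketch in plain Python of how the callback pattern lets a tracker record a chain's inputs and outputs without changing the chain's code. The class and method names below are invented for illustration and are not the real LangChain or Aim API:&lt;/p&gt;

```python
# Illustrative callback pattern: the handler is notified at the start and end
# of every run, so a tracker can log inputs and outputs as they happen.
class RecordingHandler:
    def __init__(self):
        self.events = []

    def on_chain_start(self, name, inputs):
        self.events.append(("chain_start", name, inputs))

    def on_chain_end(self, name, outputs):
        self.events.append(("chain_end", name, outputs))


class Chain:
    """A toy 'chain' that reports its activity to a callback handler."""

    def __init__(self, name, fn, handler):
        self.name, self.fn, self.handler = name, fn, handler

    def run(self, inputs):
        self.handler.on_chain_start(self.name, inputs)
        outputs = self.fn(inputs)
        self.handler.on_chain_end(self.name, outputs)
        return outputs


handler = RecordingHandler()
chain = Chain("llm-math", lambda q: 25 ** 0.43, handler)
chain.run("What is 25 raised to the 0.43 power?")
print([e[0] for e in handler.events])  # ['chain_start', 'chain_end']
```

&lt;p&gt;The real integration follows the same shape: the handler is passed in once, and every chain, tool, and LLM call reports its events to it.&lt;/p&gt;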

&lt;p&gt;Let's move forward and build an agent with LangChain, configure Aim to trace executions, and take a quick journey around the UI to see how Aim can help with debugging and monitoring.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hands-On Example: Building a Multi-Task AI Agent
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Setting up the agent and the Aim callback
&lt;/h3&gt;

&lt;p&gt;Let’s build an agent equipped with two tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the SerpApi tool to access Google search results,&lt;/li&gt;
&lt;li&gt;the LLM-math tool to perform required mathematical operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this particular example, we'll prompt the agent to discover who Leonardo DiCaprio's girlfriend is and calculate her current age raised to the 0.43 power:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tools = load_tools(["serpapi", "llm-math"], llm=llm, callback_manager=manager)
agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    callback_manager=manager,
    verbose=True,
)
agent.run(
    "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that the chain is set up, let's integrate the Aim callback. It takes just a few lines of code and Aim will capture all the moving pieces during the execution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.callbacks import AimCallbackHandler

aim_callback = AimCallbackHandler(
    repo=".",
    experiment_name="scenario 1: OpenAI LLM",
)

aim_callback.flush_tracker(langchain_asset=agent, reset=False, finish=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Aim is entirely open-source and self-hosted, which means your data remains private and isn't shared with third parties.&lt;/p&gt;

&lt;p&gt;Find the full script and more examples in the official LangChain docs: &lt;a href="https://python.langchain.com/en/latest/ecosystem/aim_tracking.html" rel="noopener noreferrer"&gt;https://python.langchain.com/en/latest/ecosystem/aim_tracking.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Executing the agent and running Aim
&lt;/h2&gt;

&lt;p&gt;Before executing the agent, ensure that Aim is installed by executing the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install aim
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's run multiple executions and launch the Aim UI to visualize and explore the results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;execute the script by running &lt;code&gt;python example.py&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;then start the UI with the &lt;code&gt;aim up&lt;/code&gt; command.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Aim up and running, you can effortlessly dive into the details of each execution, compare results, and gain insights that will help you debug and iterate on your chains.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring executions via Aim
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Home page
&lt;/h3&gt;

&lt;p&gt;On the home page, you'll find an organized view of all your tracked executions, making it easy to keep track of your progress and recent runs. To navigate to a specific execution, simply click on the link, and you'll be taken to a dedicated page with comprehensive information about that particular execution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fthzw0i93ozu9qn4iqa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fthzw0i93ozu9qn4iqa.png" alt="Home page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Deep dive into a single execution
&lt;/h3&gt;

&lt;p&gt;When navigating to an individual execution page, you'll find an overview of system information and execution details. Here you can access:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CLI command and arguments,&lt;/li&gt;
&lt;li&gt;Environment variables,&lt;/li&gt;
&lt;li&gt;Packages,&lt;/li&gt;
&lt;li&gt;Git information,&lt;/li&gt;
&lt;li&gt;System resource usage,&lt;/li&gt;
&lt;li&gt;and other relevant information about an individual execution.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3e0vygzh98zvtlbdj7z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3e0vygzh98zvtlbdj7z.png" alt="Execution page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aim automatically captures terminal outputs during execution. Access these logs in the “Logs” tab to easily keep track of the progress of your AI system and identify issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsn71y1nk3ko6msg9ggu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsn71y1nk3ko6msg9ggu.png" alt="Logs tab"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the "Text" tab, you can explore the inner workings of a chain, including agent actions, tools and LLMs inputs and outputs. This in-depth view allows you to review the metadata collected at every step of execution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fif1ab9scip6ry8eykp8m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fif1ab9scip6ry8eykp8m.png" alt="Text tab"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Aim's Text Explorer, you can effortlessly compare multiple executions, examining their actions, inputs, and outputs side by side. It helps to identify patterns or spot discrepancies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1znrads8yh50ybvjms9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1znrads8yh50ybvjms9.jpg" alt="Text explorer"&gt;&lt;/a&gt;&lt;br&gt;
For instance, in the given example, two executions produced the response, "Camila Morrone is Leo DiCaprio's girlfriend, and her current age raised to the 0.43 power is 3.8507291225496925." However, another execution returned the answer "3.991298452658078". This discrepancy occurred because the first two executions incorrectly identified Camila Morrone's age as 23 instead of 25.&lt;/p&gt;
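&lt;p&gt;A quick back-of-the-envelope check confirms where the discrepancy comes from: both answers are just different ages raised to the 0.43 power.&lt;/p&gt;

```python
# The two answers differ only in the age the search step returned.
print(25 ** 0.43)  # ≈ 3.9913 (correct age)
print(23 ** 0.43)  # ≈ 3.8507 (stale age)
```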

&lt;p&gt;With Text Explorer, you can easily compare and analyze the outcomes of various executions and make decisions to adjust agents and prompts further.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;In conclusion, as AI systems become more complex and powerful, the need for comprehensive tracing and debugging tools becomes increasingly essential. LangChain, when combined with Aim, provides a powerful solution for building and monitoring sophisticated AI applications. By following the practical examples in this blog post, you can effectively monitor and debug your LangChain-based systems!&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn more
&lt;/h2&gt;

&lt;p&gt;Check out the Aim + LangChain integration docs &lt;a href="http://bit.ly/418FIHX" rel="noopener noreferrer"&gt;here.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LangChain repo: &lt;a href="https://github.com/hwchase17/langchain" rel="noopener noreferrer"&gt;https://github.com/hwchase17/langchain&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aim repo: &lt;a href="https://github.com/aimhubio/aim" rel="noopener noreferrer"&gt;https://github.com/aimhubio/aim&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have questions, &lt;a href="https://discord.com/invite/zXq2NfVdtF" rel="noopener noreferrer"&gt;join the Aim community&lt;/a&gt;, share your feedback, open issues for new features and bugs. You’re most welcome! 🙌&lt;/p&gt;

&lt;p&gt;Drop a ⭐️ on GitHub, if you find Aim useful.&lt;/p&gt;

&lt;p&gt;This article was originally published on &lt;a href="https://aimstack.io/blog" rel="noopener noreferrer"&gt;Aim Blog&lt;/a&gt; by Gor Arakelyan. &lt;/p&gt;

</description>
      <category>openai</category>
      <category>chatgpt</category>
      <category>tutorial</category>
      <category>ai</category>
    </item>
    <item>
      <title>Exploring MLflow experiments with a powerful UI</title>
      <dc:creator>tatyana</dc:creator>
      <pubDate>Wed, 15 Mar 2023 12:39:08 +0000</pubDate>
      <link>https://dev.to/tatyana/exploring-mlflow-experiments-with-a-powerful-ui-2hk4</link>
      <guid>https://dev.to/tatyana/exploring-mlflow-experiments-with-a-powerful-ui-2hk4</guid>
      <description>&lt;p&gt;Excited to share with you the release of &lt;strong&gt;aimlflow&lt;/strong&gt;, an integration that helps to seamlessly run a powerful experiment tracking UI on MLflow logs! 🎉&lt;/p&gt;

&lt;p&gt;While &lt;strong&gt;MLflow&lt;/strong&gt; provides a great foundation for managing machine learning projects, it can be challenging to effectively explore and understand the results of tracked experiments. &lt;strong&gt;Aim&lt;/strong&gt; is a tool that addresses this challenge by providing a variety of features for deeply exploring tracked experiments and understanding their results via its UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a67B9q-T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u29nl1xezyekpnk7f8hb.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a67B9q-T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u29nl1xezyekpnk7f8hb.gif" alt="Aim UI" width="880" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With aimlflow, MLflow users can now seamlessly view and explore their MLflow experiments using Aim’s powerful features, leading to deeper understanding and more effective decision-making.&lt;/p&gt;

&lt;p&gt;To explore MLflow logs with Aim, you need to convert MLflow experiments to Aim format. All the metrics, tags, configs, artifacts, and experiment descriptions will be stored and live-synced in a &lt;code&gt;.aim&lt;/code&gt; repo located on the file system.&lt;/p&gt;

&lt;p&gt;This means that you can run your training script and, without modifying a single line of code, view the logs live in Aim's beautiful UI. Isn’t it amazing? 🤩&lt;/p&gt;
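&lt;p&gt;Conceptually, the live sync works like a one-way mirror: new records on the MLflow side keep appearing on the Aim side on the next sync pass. Here is a purely illustrative plain-Python sketch of that idea (the dict-based data layout is invented and is not aimlflow's real implementation):&lt;/p&gt;

```python
# A toy "MLflow store" and "Aim repo", each mapping run id -> metric histories.
mlflow_store = {"run1": {"loss": [0.9, 0.7, 0.5]}}
aim_repo = {}

def sync(src, dst):
    """Mirror every run's metric histories from src into dst."""
    for run_id, metrics in src.items():
        run = dst.setdefault(run_id, {})
        for name, values in metrics.items():
            run[name] = list(values)  # copy the full history so far

sync(mlflow_store, aim_repo)
mlflow_store["run1"]["loss"].append(0.4)  # training logs a new point
sync(mlflow_store, aim_repo)              # the next sync pass picks it up
print(aim_repo["run1"]["loss"])  # [0.9, 0.7, 0.5, 0.4]
```

&lt;p&gt;The real tool does this continuously against the MLflow tracking store, which is why the UI stays current without touching the training code.&lt;/p&gt;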

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jQw-gtkL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lozpiyzifvgd5zzw85k9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jQw-gtkL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lozpiyzifvgd5zzw85k9.png" alt="Aim Metrics Explorer" width="880" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Read the guide demonstrating how MLflow experiments can be explored with Aim on Medium: &lt;a href="https://bit.ly/3YQdNuy"&gt;https://bit.ly/3YQdNuy&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Would love to hear your feedback :))&lt;/p&gt;

&lt;p&gt;If you have any questions, &lt;a href="https://community.aimstack.io/"&gt;join the Aim community&lt;/a&gt;, share your feedback, open issues for new features and bugs. 🙌&lt;/p&gt;

&lt;p&gt;Show some love by dropping a ⭐️ on &lt;a href="https://github.com/aimhubio/aim"&gt;GitHub&lt;/a&gt;, if you think Aim is useful.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>machinelearning</category>
      <category>mlflow</category>
      <category>python</category>
    </item>
  </channel>
</rss>
