<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Prashant Iyer</title>
    <description>The latest articles on DEV Community by Prashant Iyer (@prashantriyer).</description>
    <link>https://dev.to/prashantriyer</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1574487%2F07d2ad88-276d-4af5-8b9b-921ee5cf26c7.jpg</url>
      <title>DEV Community: Prashant Iyer</title>
      <link>https://dev.to/prashantriyer</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/prashantriyer"/>
    <language>en</language>
    <item>
      <title>🤖Dueling AIs: Questioning and Answering with Language Models🚀</title>
      <dc:creator>Prashant Iyer</dc:creator>
      <pubDate>Sun, 28 Jul 2024 20:36:55 +0000</pubDate>
      <link>https://dev.to/llmware/dueling-ais-questioning-and-answering-with-language-models-5f0l</link>
      <guid>https://dev.to/llmware/dueling-ais-questioning-and-answering-with-language-models-5f0l</guid>
      <description>&lt;p&gt;You've probably asked a &lt;em&gt;language model&lt;/em&gt; a question before and had it give you an answer. After all, this is what we most commonly use language models for.&lt;/p&gt;

&lt;p&gt;But have you ever received a question from a language model? While not as common, this application of AI has diverse use cases in areas like education, where you might want a model to give you practice questions for a test, and in sales enablement, where a model can quiz your business's sales team about your products to sharpen their ability to make sales.&lt;/p&gt;

&lt;p&gt;Now, &lt;strong&gt;what if we had a face-off⚔️ between two different models&lt;/strong&gt;: one that asked questions about a topic and another that answered them? All without human intervention?&lt;/p&gt;

&lt;p&gt;In this article, we're going to look at exactly that. We'll provide a sample passage about OpenAI's AI safety team as context to our models. We'll then let our models duel it out! One model will ask questions based on this passage, and another model will respond!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbe8905k96aob2nvvd1sj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbe8905k96aob2nvvd1sj.gif" alt="Duel GIF" width="500" height="208"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Our AI Models🤖&lt;/h2&gt;

&lt;p&gt;Introducing &lt;code&gt;slim-q-gen-tiny-tool&lt;/code&gt;! This will be our question model, capable of generating three different types of questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multiple choice questions&lt;/li&gt;
&lt;li&gt;Boolean (true/false) questions&lt;/li&gt;
&lt;li&gt;General open-ended questions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Facing off against this will be &lt;code&gt;bling-phi-3-gguf&lt;/code&gt;! This will be our answer model, giving appropriate responses to any of the above types of questions.&lt;/p&gt;

&lt;p&gt;One important note is that both of these models are &lt;em&gt;GGUF quantized&lt;/em&gt;: they are smaller, faster versions of their original counterparts. For us, this means we can run them on just a CPU, with no need for resources like GPUs!&lt;/p&gt;




&lt;h2&gt;Step 1: Providing input parameters✏️&lt;/h2&gt;

&lt;p&gt;This is what our function signature for this example looks like.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;ask_and_answer_game&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;source_passage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;q_model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;slim-q-gen-tiny-tool&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;number_of_tries&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;question_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;question&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                        &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;source_passage&lt;/code&gt; is the text input that we will provide our models,&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;q_model&lt;/code&gt; is our questioning model,&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;number_of_tries&lt;/code&gt; is the number of questions we will attempt to generate (more on this later!)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;question_type&lt;/code&gt; can be either &lt;code&gt;"multiple choice"&lt;/code&gt;, &lt;code&gt;"boolean"&lt;/code&gt; or &lt;code&gt;"question"&lt;/code&gt; corresponding to each of the types of questions we saw above,&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;temperature&lt;/code&gt; is a value ranging from 0 to 1 that determines how much variance we will see in our generated questions. Here, the value of 0.5 is relatively high so that we get a good variety of questions with little repetition!&lt;/li&gt;
&lt;/ul&gt;
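
&lt;p&gt;To see how these parameters fit together, here's a minimal sketch of a call to this function once it's defined. The passage text below is placeholder content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Minimal usage sketch -- the passage below is placeholder text
sample_passage = "OpenAI said Tuesday it has established a new committee ..."

# Attempt 5 boolean (true/false) questions about the passage
ask_and_answer_game(sample_passage, number_of_tries=5, question_type="boolean")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;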




&lt;h2&gt;Step 2: Loading in our models🪫🔋&lt;/h2&gt;

&lt;p&gt;With the inputs taken care of, let's now load in both our models.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;q_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ModelCatalog&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;load_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sample&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that we have &lt;code&gt;sample=True&lt;/code&gt; to increase variety in our model output (the questions generated).&lt;/p&gt;

&lt;p&gt;Now, for the answer model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;answer_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ModelCatalog&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;load_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bling-phi-3-gguf&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We won't mess with the &lt;code&gt;sample&lt;/code&gt; or &lt;code&gt;temperature&lt;/code&gt; options here because we want concise, fact-based answers from this model.&lt;/p&gt;




&lt;h2&gt;Step 3: Generating our questions🤔💬&lt;/h2&gt;

&lt;p&gt;We'll try to generate questions &lt;code&gt;number_of_tries&lt;/code&gt; times, which in this case is 10. We'll then update our &lt;code&gt;questions&lt;/code&gt; list with only the unique questions, to avoid repetition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;questions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

&lt;span class="c1"&gt;# Loop number_of_tries times
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;number_of_tries&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;q_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;function_call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;source_passage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;question_type&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="n"&gt;new_q&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_response&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;question&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="c1"&gt;# Check to see that the question generated is unique
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;new_q&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;new_q&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;questions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;questions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;new_q&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An important function here is &lt;code&gt;q_model.function_call()&lt;/code&gt;. This is how the &lt;code&gt;llmware&lt;/code&gt; library lets you prompt language models with &lt;strong&gt;just a single function call&lt;/strong&gt;. Here, we pass in the source text and question type as arguments.&lt;/p&gt;

&lt;p&gt;The function returns &lt;code&gt;response&lt;/code&gt;, a dictionary with a lot of information about the call, but we're only interested in the &lt;code&gt;question&lt;/code&gt; key, which is located inside the &lt;code&gt;llm_response&lt;/code&gt; dictionary.&lt;/p&gt;
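
&lt;p&gt;As a rough illustration (values made up, and most metadata omitted), the slice of the dictionary we care about is shaped like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Illustrative shape only -- the real response carries extra metadata
response = {
    "llm_response": {"question": ["What is the name of OpenAI's new committee?"]},
    # ... other keys describing the call ...
}
new_q = response["llm_response"]["question"]  # a list; new_q[0] is the question text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;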




&lt;h2&gt;Step 4: Responding to our questions📝&lt;/h2&gt;

&lt;p&gt;Now that the questions have been generated, &lt;strong&gt;the duel is on!&lt;/strong&gt; Let's use our answering model to now respond to these questions. We'll loop through our &lt;code&gt;questions&lt;/code&gt; list, pass in the source passage as context to the model and ask each question.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Loop through each question
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;question&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;questions&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Print out the question
&lt;/span&gt;    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;question: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; - &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Validate the question list and run inference
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;answer_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;inference&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;add_context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;test_passage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Print out the answer
&lt;/span&gt;        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;response: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_response&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is important to note that our question model returns each &lt;code&gt;question&lt;/code&gt; as a &lt;code&gt;list&lt;/code&gt;, with the first element (&lt;code&gt;question[0]&lt;/code&gt;) containing the actual string corresponding to the question.&lt;/p&gt;

&lt;p&gt;For each &lt;code&gt;question&lt;/code&gt;, we then need to perform some validation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check to see that the &lt;code&gt;question&lt;/code&gt; is of the correct data type (&lt;code&gt;list&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Check to see that the &lt;code&gt;question&lt;/code&gt; is not empty.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then, the &lt;code&gt;answer_model.inference()&lt;/code&gt; function will ask our model the question, passing in the &lt;code&gt;source_passage&lt;/code&gt; as context.&lt;/p&gt;

&lt;p&gt;Finally, we print out the response.&lt;/p&gt;




&lt;h2&gt;Results!✅&lt;/h2&gt;

&lt;p&gt;Let's quickly look at our sample passage. This passage was taken from a CNBC news story in May 2024 about OpenAI's work with safety and security.&lt;/p&gt;

&lt;p&gt;"OpenAI said Tuesday it has established a new committee to make recommendations to the company’s board about safety and security, weeks after dissolving a team focused on AI safety. In a blog post, OpenAI said the new committee would be led by CEO Sam Altman as well as Bret Taylor, the company’s board chair, and board member Nicole Seligman. The announcement follows the high-profile exit this month of an OpenAI executive focused on safety, Jan Leike. Leike resigned from OpenAI leveling criticisms that the company had under-invested in AI safety work and that tensions with OpenAI’s leadership had reached a breaking point."&lt;/p&gt;

&lt;p&gt;Now, let's see what our output looks like!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0w13ads9o8px7wdrihw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0w13ads9o8px7wdrihw.png" alt="Sample output" width="711" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see all the questions that were asked about the passage, as well as concise, fact-based responses given to them!&lt;/p&gt;

&lt;p&gt;Note that there are only 9 questions here while we provided &lt;code&gt;number_of_tries=10&lt;/code&gt;. This means that one question generated was a duplicate and was ignored.&lt;/p&gt;




&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;And with that, we're done with this example! Recall that we used the &lt;code&gt;llmware&lt;/code&gt; library to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Load in a question model and an answer model&lt;/li&gt;
&lt;li&gt;Generate unique questions about a source passage&lt;/li&gt;
&lt;li&gt;Respond to each question with accuracy.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;And remember that we did all of this on just a CPU!&lt;/strong&gt; 💻&lt;/p&gt;

&lt;p&gt;Check out our YouTube video on this example!&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/380Yr2bc_Qk?start=143"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;If you made it this far, thank you for taking the time to go through this topic with us ❤️! For more content like this, make sure to &lt;a href="https://dev.to/llmware"&gt;visit our dev.to page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The source code for many more examples like this one is on &lt;a href="https://github.com/llmware-ai/llmware" rel="noopener noreferrer"&gt;our GitHub&lt;/a&gt;. Find this example &lt;a href="https://github.com/llmware-ai/llmware/blob/a58c2dc7ea94c1a8eef87bc0fd1cc34fb616c743/examples/SLIM-Agents/using-slim-q-gen.py" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Our repository also contains a &lt;a href="https://github.com/llmware-ai/llmware/blob/main/examples/Notebooks/NoteBook_Examples/using-slim-q-gen-notebook.ipynb" rel="noopener noreferrer"&gt;notebook for this example&lt;/a&gt; that you can run yourself using Google Colab, Jupyter or any other platform that supports .ipynb notebooks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://discord.gg/fCztJQeV7J" rel="noopener noreferrer"&gt;Join our Discord&lt;/a&gt; to interact with a growing community of AI enthusiasts of all levels of experience!&lt;/p&gt;

&lt;p&gt;Please be sure to visit our website &lt;a href="https://llmware.ai/" rel="noopener noreferrer"&gt;llmware.ai&lt;/a&gt; for more information and updates.&lt;/p&gt;

</description>
      <category>python</category>
      <category>beginners</category>
      <category>ai</category>
      <category>rag</category>
    </item>
    <item>
      <title>🔉From Sound to Insights: Using AI🤖 for Audio File Transcription and Analysis!🚀</title>
      <dc:creator>Prashant Iyer</dc:creator>
      <pubDate>Fri, 28 Jun 2024 20:57:16 +0000</pubDate>
      <link>https://dev.to/llmware/from-sound-to-insights-using-ai-for-audio-file-transcription-and-analysis-36ek</link>
      <guid>https://dev.to/llmware/from-sound-to-insights-using-ai-for-audio-file-transcription-and-analysis-36ek</guid>
      <description>&lt;p&gt;If we were given an audio file, is there any way we could identify the time stamps where specific words were said? Is there any way we could extract all the key words mentioned about a topic?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With AI 🤖, we can do all of this and much more!&lt;/strong&gt; The key lies in being able to parse audio into text, allowing us to then harness the natural language processing capabilities of &lt;em&gt;language models&lt;/em&gt; to perform sophisticated analyses and inferences on our data.&lt;/p&gt;

&lt;p&gt;Regardless of who you are, such an approach to audio transcription and analysis will augment how you interact with and extract knowledge from audio files.&lt;/p&gt;

&lt;p&gt;Let's see how we can do this with &lt;code&gt;llmware&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;AI Tools 🤖&lt;/h2&gt;

&lt;p&gt;We'll be using two models for this example.&lt;/p&gt;

&lt;p&gt;The first is Whisper by OpenAI. This is the model that will allow us to parse the audio files, i.e. convert them from audio to text.&lt;/p&gt;

&lt;p&gt;The second is the SLIM (&lt;em&gt;Structured Language Instruction Model&lt;/em&gt;) Extract Tool by LLMWare, which we'll be using to ask questions about our audio. This is a &lt;em&gt;GGUF quantized&lt;/em&gt; version of a much larger model called &lt;em&gt;slim-extract&lt;/em&gt;. All this means is that our model, the SLIM Extract Tool, is a smaller and faster version of the original model. &lt;strong&gt;This allows us to run it locally on a CPU&lt;/strong&gt;, without the need for powerful computational resources like GPUs!&lt;/p&gt;

&lt;p&gt;With that out of the way, let's get started with the example.&lt;/p&gt;




&lt;h2&gt;Step 1: Loading in audio files 🔉🔉&lt;/h2&gt;

&lt;p&gt;If you have audio files that you want to run the example with, then feel free to use those by setting &lt;code&gt;input_folder&lt;/code&gt; appropriately, but if not, the &lt;code&gt;llmware&lt;/code&gt; library provides you with several sets of sample audio files!&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;voice_sample_files&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Setup&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;load_voice_sample_files&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;small_only&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;input_folder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;voice_sample_files&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;greatest_speeches&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here, we're loading in the &lt;code&gt;greatest_speeches&lt;/code&gt; set of audio files.&lt;/p&gt;
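
&lt;p&gt;For reference, these are the imports the rest of this walkthrough assumes. The module paths below follow the usual &lt;code&gt;llmware&lt;/code&gt; package layout, so double-check them against the version you have installed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import os

# Assumed module paths -- verify against your installed llmware version
from llmware.setup import Setup
from llmware.parsers import Parser
from llmware.models import ModelCatalog
from llmware.util import Utilities
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;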




&lt;h2&gt;Step 2: Parsing our audio files 📝&lt;/h2&gt;

&lt;p&gt;Now that we have our audio files, we can go about parsing them into chunks of text. Recall that we'll need the Whisper model to do this. Fortunately, you won't have to interact with the model directly, since the &lt;code&gt;Parser&lt;/code&gt; class from the &lt;code&gt;llmware&lt;/code&gt; library takes care of it for you!&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;parser_output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Parser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunk_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_chunk_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;600&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;parse_voice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_folder&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;write_to_db&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;copy_to_library&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;remove_segment_markers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;chunk_by_segment&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;real_time_progress&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here, the &lt;code&gt;chunk_size&lt;/code&gt; and &lt;code&gt;max_chunk_size&lt;/code&gt; indicate how big each chunk of parsed text will be. We're passing in our folder containing the audio files to the &lt;code&gt;parse_voice()&lt;/code&gt; function of the &lt;code&gt;Parser&lt;/code&gt; class.&lt;/p&gt;

&lt;p&gt;The function does accept many more optional arguments about how we'd like the audio to be parsed, but we can ignore them for this example.&lt;/p&gt;
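
&lt;p&gt;Each entry in &lt;code&gt;parser_output&lt;/code&gt; is a dictionary describing one chunk of parsed text. Judging by the keys used later in this example (&lt;code&gt;text&lt;/code&gt;, &lt;code&gt;file_source&lt;/code&gt;, &lt;code&gt;coords_x&lt;/code&gt;), a quick way to peek at the first chunk might be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Inspect the first parsed chunk -- key names inferred from their use below
first_chunk = parser_output[0]
print("file: ", first_chunk["file_source"])
print("start time (seconds): ", first_chunk["coords_x"])
print("text: ", first_chunk["text"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;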




&lt;h2&gt;Step 3: Text searching 🕵️&lt;/h2&gt;

&lt;p&gt;Let's now run a text search on our parsed audio. We can try searching for the word "president". What this means is that we want to find all the portions of the audio, and the corresponding text, that contain the word "president". We can do this using the &lt;code&gt;fast_search_dicts()&lt;/code&gt; function in the &lt;code&gt;Utilities&lt;/code&gt; class in the &lt;code&gt;llmware&lt;/code&gt; library.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Utilities&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;fast_search_dicts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;president&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;parser_output&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;Step 4: Making an AI call on text chunks 🤖&lt;/h2&gt;

&lt;p&gt;Now that we have a list of text blocks containing the word "president", let's use an AI model to identify which presidents are mentioned in the selected text blocks.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;extract_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ModelCatalog&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;load_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;slim-extract-tool&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sample&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_output&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here, we're using the &lt;code&gt;ModelCatalog&lt;/code&gt; class to load in our SLIM Extract Tool. Let's now iterate over each text block containing "president".&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;final_list&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;extract_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;function_call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;president name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We're making a &lt;code&gt;function_call()&lt;/code&gt; for "president name". &lt;strong&gt;This is how we ask our Tool to identify the president name in the text block.&lt;/strong&gt;&lt;/p&gt;
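
&lt;p&gt;As a rough sketch (values made up, most metadata omitted), a successful call comes back shaped something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Illustrative shape only
response = {
    "llm_response": {"president_name": ["Kennedy"]},
    # ... additional metadata about the call ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;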




&lt;h2&gt;Step 5: Analyzing our output 🔍&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;function_call()&lt;/code&gt; function returns a dictionary containing a lot of data about the call. We specifically want the &lt;code&gt;president_name&lt;/code&gt; key, which sits inside the &lt;code&gt;llm_response&lt;/code&gt; dictionary.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;extracted_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;president_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_response&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_response&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;president_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;extracted_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_response&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;president_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;update: skipping result - no president name found - &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_response&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If the &lt;code&gt;president_name&lt;/code&gt; key holds a non-empty list, we store its first entry (lowercased) in &lt;code&gt;extracted_name&lt;/code&gt;. Otherwise, no name was found, and we print this out.&lt;/p&gt;

&lt;p&gt;Now let's see if the extracted name matches any of the American presidents in this list:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;various_american_presidents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;kennedy&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;carter&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;nixon&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reagan&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;clinton&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;obama&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To do this, we'll check if the &lt;code&gt;extracted_name&lt;/code&gt; contains any of these American presidents. If we have a match, then we'll add it to our &lt;code&gt;final_list&lt;/code&gt; as a dictionary containing some information about the location of the name in the audio as well as the text block it was in.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;president&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;various_american_presidents&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;president&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;extracted_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;final_list&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;president&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;source&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;file_source&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;time_start&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;coords_x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]})&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;Results! ✅&lt;/h2&gt;

&lt;p&gt;Let's now output the &lt;code&gt;final_list&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;final_list&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;final results: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is what one search result in the output looks like after running the code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3g0jpwu6s1vq2mvwtjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3g0jpwu6s1vq2mvwtjc.png" alt="Sample output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, we have a Python dictionary as output containing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;key&lt;/code&gt;: the name of the president identified, which here is "kennedy"&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;source&lt;/code&gt;: the audio file this was found in, which here is "ConcessionStand.wav"&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;time_start&lt;/code&gt;: the time stamp in seconds where the president was mentioned, which here is 339.9 seconds&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;text&lt;/code&gt;: which contains the text chunk the name was found in.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;And we're done! To recap, we were able to parse our audio files into text, run a text search on them for the word "president", and then use our SLIM Extract Tool to identify the specific presidents named in our text chunks! &lt;strong&gt;And remember that we did all this on just a CPU! 💻&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Be sure to check out our YouTube video on this example!&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/5y0ez5ZBpPE?start=804"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;If you made it this far, thank you for taking the time to go through this topic with us ❤️! For more content like this, make sure to &lt;a href="https://dev.to/llmware"&gt;visit our dev.to page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The source code for many more examples like this one is on &lt;a href="https://github.com/llmware-ai/llmware" rel="noopener noreferrer"&gt;our GitHub&lt;/a&gt;. Find this example &lt;a href="https://github.com/llmware-ai/llmware/blob/main/examples/Use_Cases/parsing_great_speeches.py" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Our repository also contains a &lt;a href="https://github.com/llmware-ai/llmware/blob/main/examples/Notebooks/NoteBook_Examples/parsing_great_speeches_notebook.ipynb" rel="noopener noreferrer"&gt;notebook for this example&lt;/a&gt; that you can run yourself using Google Colab, Jupyter or any other platform that supports .ipynb notebooks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://discord.gg/fCztJQeV7J" rel="noopener noreferrer"&gt;Join our Discord&lt;/a&gt; to interact with a growing community of AI enthusiasts of all levels of experience!&lt;/p&gt;

&lt;p&gt;Please be sure to visit our website &lt;a href="https://llmware.ai/" rel="noopener noreferrer"&gt;llmware.ai&lt;/a&gt; for more information and updates.&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>🤖AI-Powered Contract Queries: Use Language Models for Effective Analysis!🔥</title>
      <dc:creator>Prashant Iyer</dc:creator>
      <pubDate>Fri, 28 Jun 2024 20:48:31 +0000</pubDate>
      <link>https://dev.to/llmware/ai-powered-contract-queries-use-language-models-for-effective-analysis-461o</link>
      <guid>https://dev.to/llmware/ai-powered-contract-queries-use-language-models-for-effective-analysis-461o</guid>
      <description>&lt;p&gt;Imagine you were given a large contract and asked a really specific question about it: "What is the notice for termination for convenience?" It would be an ordeal to locate the answer for this in the contract.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But what if we could use AI 🤖&lt;/strong&gt; to analyze the contract and answer this for us?&lt;/p&gt;

&lt;p&gt;What we want here is to perform something known as &lt;em&gt;retrieval-augmented generation&lt;/em&gt; (RAG). This is the process by which we give a &lt;em&gt;language model&lt;/em&gt; some external sources (such as a contract). The external sources are intended to enhance the model's context, giving it a more comprehensive understanding of a topic. The model should then give us more accurate responses to the questions we ask it on the topic.&lt;/p&gt;

&lt;p&gt;Now, a general-purpose model like ChatGPT might be able to answer questions about contracts with RAG, but &lt;strong&gt;what if we instead used a model that's been trained and fine-tuned&lt;/strong&gt; specifically on contract data?&lt;/p&gt;




&lt;h2&gt;Our AI model 🤖&lt;/h2&gt;

&lt;p&gt;For this example, we'll be using LLMWare's &lt;em&gt;dragon-yi-6b-gguf&lt;/em&gt; model. This model is RAG-finetuned for fact-based question-answering on complex business and legal documents.&lt;/p&gt;

&lt;p&gt;This means that it is specialized in giving us short and concise responses to questions involving documents like contracts. This makes it perfect for our example!&lt;/p&gt;

&lt;p&gt;This is also a &lt;em&gt;GGUF quantized&lt;/em&gt; model, meaning that it is a smaller and faster version of the original 6-billion-parameter &lt;em&gt;dragon-yi-6b&lt;/em&gt; model. Fortunately for us, this means that &lt;strong&gt;we can run it on a CPU 💻&lt;/strong&gt; without the need for powerful computational resources like GPUs!&lt;/p&gt;

&lt;p&gt;Now, let's look at an example of using the &lt;code&gt;llmware&lt;/code&gt; library for contract analysis from start to finish!&lt;/p&gt;




&lt;h2&gt;Step 1: Loading in files 📁&lt;/h2&gt;

&lt;p&gt;Let's start off by loading in our contracts to be analyzed. The &lt;code&gt;llmware&lt;/code&gt; library provides sample contracts via the &lt;code&gt;Setup&lt;/code&gt; class, but you can also use your own files in this example by replacing the &lt;code&gt;agreements_path&lt;/code&gt; below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;local_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Setup&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;load_sample_files&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;agreements_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;local_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AgreementsLarge&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here, we load in the &lt;code&gt;AgreementsLarge&lt;/code&gt; set of files.&lt;/p&gt;
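
&lt;p&gt;For reference, these are the imports this example assumes. The module paths below follow the usual &lt;code&gt;llmware&lt;/code&gt; package layout, so double-check them against the version you have installed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import os

# Assumed module paths -- verify against your installed llmware version
from llmware.setup import Setup
from llmware.library import Library
from llmware.retrieval import Query
from llmware.prompts import Prompt, HumanInTheLoop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;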

&lt;p&gt;Next, we'll create a &lt;code&gt;Library&lt;/code&gt; object and add our sample files to this library. An &lt;code&gt;llmware&lt;/code&gt; library breaks documents down into text chunks and stores them in a database so that we can access them easily later.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;msa_lib&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Library&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;create_new_library&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;msa_lib503_635&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;msa_lib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_files&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agreements_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;Step 2: Locating MSA files 🔍&lt;/h2&gt;

&lt;p&gt;Let's say that we want to consider only MSA (master services agreement) files from our sample contracts.&lt;/p&gt;

&lt;p&gt;We can first create a &lt;code&gt;Query&lt;/code&gt; object containing all our files, and then run a &lt;code&gt;text_search_by_page()&lt;/code&gt; to keep only the files that contain "master services agreement" on their first page.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;q&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msa_lib&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;master services agreement&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text_search_by_page&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;page_num&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;results_only&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;msa_docs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;file_source&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;results&lt;/code&gt; from the text search will be a dictionary containing detailed information about the text query. However, we're only interested in the &lt;code&gt;file_source&lt;/code&gt; key representing the file names.&lt;/p&gt;

&lt;p&gt;Great! We now have our MSA files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38f6e5r3ki5xwrhxgve0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38f6e5r3ki5xwrhxgve0.gif" alt="Simpsons GIF"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Step 3: Loading our model 🪫🔋&lt;/h2&gt;

&lt;p&gt;Now, we can load in our model using the &lt;code&gt;Prompt&lt;/code&gt; class in the &lt;code&gt;llmware&lt;/code&gt; library.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;model_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llmware/dragon-yi-6b-gguf&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;prompter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Prompt&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;load_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;Step 4: Analyzing our files using AI 🧠💡&lt;/h2&gt;

&lt;p&gt;Let's now iterate over our MSA files, and for each file, we'll:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;identify the text chunks containing the word "termination",&lt;/li&gt;
&lt;li&gt;add those chunks as a source for our AI call, and&lt;/li&gt;
&lt;li&gt;run the AI call "What is the notice for termination for convenience?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can start by performing a text query for the word "termination".&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msa_docs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;doc_filter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;file_source&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt;
    &lt;span class="n"&gt;termination_provisions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text_query_with_document_filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;termination&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;doc_filter&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We'll then add these &lt;code&gt;termination_provisions&lt;/code&gt; as a source to our model.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;sources&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;prompter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_source_query_results&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;termination_provisions&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And with that done, we can call the LLM and ask it our question.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;prompter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;prompt_with_source&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is the notice for termination for convenience?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;Results! ✅&lt;/h2&gt;

&lt;p&gt;Let's print out our &lt;code&gt;response&lt;/code&gt; and see what the output looks like.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;update: llm response - &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here's what the output of our code looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1z3rck11d71foci7rvu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1z3rck11d71foci7rvu.png" alt="Sample output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What we have is a Python dictionary with several keys, notably:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;llm_response&lt;/code&gt;: giving us the answer to our question, which here is "30 days written notice"&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;evidence&lt;/code&gt;: giving us the text where the model found the answer to the question&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The dictionary also contains detailed information about the metadata of the AI call, but these are not relevant to our example and have been omitted from the output above.&lt;/p&gt;
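
&lt;p&gt;If you only want the answer and its supporting text, you can pull out just those two keys. This assumes each entry in &lt;code&gt;response&lt;/code&gt; carries them, as in the output above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Print just the answer and its supporting evidence for each entry
for resp in response:
    print("answer: ", resp["llm_response"])
    print("evidence: ", resp["evidence"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;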




&lt;h2&gt;Human in the loop! 👤&lt;/h2&gt;

&lt;p&gt;We're not done just yet! If we wanted to generate a CSV report for a human to review the results of our analysis, we can make use of the &lt;code&gt;HumanInTheLoop&lt;/code&gt; class. All we need to do is save the current state of our &lt;code&gt;prompter&lt;/code&gt; and call the &lt;code&gt;export_current_interaction_to_csv()&lt;/code&gt; function.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;prompter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;save_state&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;csv_output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;HumanInTheLoop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompter&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;export_current_interaction_to_csv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;And that brings us to the end of our example! To summarize, we used the &lt;code&gt;llmware&lt;/code&gt; library to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Load in sample files&lt;/li&gt;
&lt;li&gt;Filter only the MSA files&lt;/li&gt;
&lt;li&gt;Use the dragon-yi-6b-gguf model to ask questions about termination provisions.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;And remember that we did all of this on just a CPU! 💻&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Check out our YouTube video on this example!&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Cf-07GBZT68"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;If you made it this far, thank you for taking the time to go through this topic with us ❤️! For more content like this, make sure to &lt;a href="https://dev.to/llmware"&gt;visit our dev.to page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The source code for many more examples like this one is on &lt;a href="https://github.com/llmware-ai/llmware" rel="noopener noreferrer"&gt;our GitHub&lt;/a&gt;. Find this example &lt;a href="https://github.com/llmware-ai/llmware/blob/main/examples/Use_Cases/msa_processing.py" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Our repository also contains a &lt;a href="https://github.com/llmware-ai/llmware/blob/main/examples/Notebooks/NoteBook_Examples/msa_processing_notebook.ipynb" rel="noopener noreferrer"&gt;notebook for this example&lt;/a&gt; that you can run yourself using Google Colab, Jupyter or any other platform that supports .ipynb notebooks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://discord.gg/fCztJQeV7J" rel="noopener noreferrer"&gt;Join our Discord&lt;/a&gt; to interact with a growing community of AI enthusiasts of all levels of experience!&lt;/p&gt;

&lt;p&gt;Please be sure to visit our website &lt;a href="https://llmware.ai/" rel="noopener noreferrer"&gt;llmware.ai&lt;/a&gt; for more information and updates.&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>🤖AI-Powered Data Queries: Ask Questions in Plain English, Get Instant Results!🚀</title>
      <dc:creator>Prashant Iyer</dc:creator>
      <pubDate>Tue, 11 Jun 2024 19:17:30 +0000</pubDate>
      <link>https://dev.to/llmware/ai-powered-data-queries-ask-questions-in-plain-english-get-instant-results-2lfe</link>
      <guid>https://dev.to/llmware/ai-powered-data-queries-ask-questions-in-plain-english-get-instant-results-2lfe</guid>
<description>&lt;p&gt;You might have heard of SQL. It's a widely used programming language for storing and processing information in &lt;em&gt;relational databases&lt;/em&gt;. Simply put, relational databases store data in tables, where each row stores an entity and each column stores an attribute of that entity.&lt;/p&gt;

&lt;p&gt;Let's say we have a table called &lt;code&gt;customers&lt;/code&gt; in a relational database. If I wanted to access the names of all customers (&lt;code&gt;customer_names&lt;/code&gt;) that have an &lt;code&gt;annual_spend&lt;/code&gt; of at least $1000, I would have to formulate an SQL &lt;em&gt;query&lt;/em&gt; like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;customer_names&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;customers&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;annual_spend&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I would then run this query against the database to access my results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But what if AI 🤖 could do all this for us?&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdxfvn2whyy1irwo9s5n.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdxfvn2whyy1irwo9s5n.gif" alt="Confused Robot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LLMWare allows us to do just that, making use of &lt;em&gt;small language models&lt;/em&gt; such as &lt;code&gt;slim-sql-1b-v0&lt;/code&gt;, which is only 1 billion parameters in size.&lt;/p&gt;




&lt;h2&gt;
  
  
  SLIM SQL Tool
&lt;/h2&gt;

&lt;p&gt;We'll be making use of the &lt;em&gt;SLIM (Structured Language Instruction Model) SQL Tool&lt;/em&gt;, which is a &lt;em&gt;GGUF quantized&lt;/em&gt; version of the slim-sql-1b-v0 model. This essentially means that our Tool is of a smaller scale than the original model. To our advantage, it doesn't require much computational power to run, so it can run locally on a CPU without an internet connection or a GPU!&lt;/p&gt;

&lt;p&gt;This Tool is designed for small, fast, local prototyping and is effective for SQL operations that involve a single table. It enables us to ask questions about our data entirely in natural language and still get accurate results! Let's walk through an example from start to finish.&lt;/p&gt;
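
&lt;p&gt;To make that concrete: the goal is to turn a plain-English question like "Which customers spend at least $5000 a year?" into a runnable single-table query such as &lt;code&gt;SELECT customer_name FROM customers WHERE annual_spend &amp;gt;= 5000&lt;/code&gt;. (This question and table are made up for illustration; they aren't part of the example below.)&lt;/p&gt;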




&lt;h2&gt;
  
  
  Step 1: Loading our Model 🪫🔋
&lt;/h2&gt;

&lt;p&gt;We'll start off by loading in our SLIM SQL Tool. Here, we check whether the model has already been downloaded locally; if not, we download it using the &lt;code&gt;ModelCatalog&lt;/code&gt; class.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;sql_tool_repo_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LLMWareConfig&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;get_model_repo_path&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;slim-sql-tool&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exists&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql_tool_repo_path&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nc"&gt;ModelCatalog&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;load_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llmware/slim-sql-tool&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
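
&lt;p&gt;Note that &lt;code&gt;LLMWareConfig().get_model_repo_path()&lt;/code&gt; returns the local directory where &lt;code&gt;llmware&lt;/code&gt; stores downloaded models, so the &lt;code&gt;os.path.exists()&lt;/code&gt; check simply avoids re-downloading a tool that is already on disk.&lt;/p&gt;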




&lt;h2&gt;
  
  
  Step 2: Loading our Data 📊
&lt;/h2&gt;

&lt;p&gt;We then load in the sample &lt;code&gt;customer_table.csv&lt;/code&gt; file containing our data. This sample file is provided by the &lt;code&gt;llmware&lt;/code&gt; library!&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;files&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listdir&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql_tool_repo_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;csv_file&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;customer_table.csv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
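
&lt;p&gt;The sample CSV ships alongside the downloaded tool, which is why we look for it in the same &lt;code&gt;sql_tool_repo_path&lt;/code&gt; from Step 1; the &lt;code&gt;files&lt;/code&gt; listing gives us a quick way to confirm that &lt;code&gt;customer_table.csv&lt;/code&gt; is actually there before we use it.&lt;/p&gt;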

&lt;p&gt;Next, we create a new SQL table called &lt;code&gt;customer1&lt;/code&gt; from our CSV file using the &lt;code&gt;SQLTables&lt;/code&gt; class provided by the &lt;code&gt;llmware&lt;/code&gt; library. This is important because we can run SQL queries on an SQL table, but not on a CSV file! &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;sql_db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SQLTables&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;experimental&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;sql_db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_new_table_from_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql_tool_repo_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;csv_file&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;table_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;customer1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;update: successfully created new db table&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Setting &lt;code&gt;experimental=True&lt;/code&gt; creates the table in a testing database provided by the library.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09iicvn9dylzhmkh1kzv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09iicvn9dylzhmkh1kzv.gif" alt="Robot Dancing"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: Querying with AI 🤖
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;We're finally getting to the good stuff!&lt;/strong&gt; Now that we have our model and data, we can begin to ask our questions and get back some results.&lt;/p&gt;

&lt;p&gt;We'll first load our agent, which is an instance of the &lt;code&gt;LLMfx&lt;/code&gt; class. This class provides us with a way to interact with various models through function calls. Our agent will take a natural language question from the user, pass it to the appropriate model, generate an SQL query, run that query against the database, and then return the results to us. &lt;strong&gt;Essentially, this is where the magic happens!&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LLMfx&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load_tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sql&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sample&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;get_logits&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
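
&lt;p&gt;A quick note on those parameters: &lt;code&gt;sample=False&lt;/code&gt; together with &lt;code&gt;temperature=0.0&lt;/code&gt; should make decoding deterministic, which is what we want when the output has to be an exact SQL string, and &lt;code&gt;get_logits=True&lt;/code&gt; asks for token-level confidence information to be returned alongside the response.&lt;/p&gt;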

&lt;p&gt;Next, we create a list of natural language questions we're going to be asking given our customer data. Let's see if our agent can answer them!&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;query_list&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is the highest annual spend of any customer?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                  &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Which customer has account number 1234953&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                  &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Which customer has the lowest annual spend?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can loop through each of the queries, and let the &lt;code&gt;query_db()&lt;/code&gt; function do all the work for us. All our results will be stored in the agent object.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query_list&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query_db&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;table_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  Results! ✅
&lt;/h2&gt;

&lt;p&gt;Now that we have our results in the agent's &lt;code&gt;research_list&lt;/code&gt;, we can print them out.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;research_list&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;research: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;research_list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For the example we've seen so far, here is what the output looks like for the first question ("What is the highest annual spend of any customer?").&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2pgwnyixblzuy7cjp3u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2pgwnyixblzuy7cjp3u.png" alt="Sample output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The output is a dictionary containing a lot of detailed information about the steps carried out by the agent, but here are some of the more interesting parts of it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;db_response&lt;/code&gt; &lt;strong&gt;gives us what we want, the answer to the question!&lt;/strong&gt; In this case, the response is 93540, meaning that the highest annual spend of any customer was $93540!&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sql_query&lt;/code&gt; shows us the SQL query that was generated from our natural language question by the SLIM SQL Tool. In this case, the generated query was:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;

SELECT MAX(annual_spend) FROM customer1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;And just like that, we've done it!&lt;/strong&gt; All we gave the program was a list of natural language questions and a CSV file with data. Behind the scenes, the &lt;code&gt;llmware&lt;/code&gt; library:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;created a table in a database with our data,&lt;/li&gt;
&lt;li&gt;passed our questions into an AI model to get SQL queries,&lt;/li&gt;
&lt;li&gt;ran the queries against the database, and&lt;/li&gt;
&lt;li&gt;returned the results of the queries!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And if you're still not impressed, &lt;strong&gt;remember that we can run this example locally on just a CPU 💻&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;Check out our YouTube video on this topic to see us explain the source code and analyze the results!&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/z48z5XOXJJg?start=204"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;If you made it this far, thank you for taking the time to go through this topic with us ❤️! For more content like this, make sure to &lt;a href="https://dev.to/llmware"&gt;visit our dev.to page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The source code for many more examples like this one is on &lt;a href="https://github.com/llmware-ai/llmware" rel="noopener noreferrer"&gt;our GitHub&lt;/a&gt;. Find this example &lt;a href="https://github.com/llmware-ai/llmware/blob/main/examples/SLIM-Agents/text2sql-end-to-end-2.py" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Our repository also contains a &lt;a href="https://github.com/llmware-ai/llmware/blob/main/examples/Notebooks/NoteBook_Examples/text2sql-end-to-end-2-notebook.ipynb" rel="noopener noreferrer"&gt;notebook for this example&lt;/a&gt; that you can run yourself using Google Colab, Jupyter or any other platform that supports .ipynb notebooks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://discord.gg/fCztJQeV7J" rel="noopener noreferrer"&gt;Join our Discord&lt;/a&gt; to interact with a growing community of AI enthusiasts of all levels of experience!&lt;/p&gt;

&lt;p&gt;Please be sure to visit our website &lt;a href="https://llmware.ai/" rel="noopener noreferrer"&gt;llmware.ai&lt;/a&gt; for more information and updates.&lt;/p&gt;

</description>
      <category>python</category>
      <category>sql</category>
      <category>ai</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
