<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kane Hooper</title>
    <description>The latest articles on DEV Community by Kane Hooper (@kanehooper).</description>
    <link>https://dev.to/kanehooper</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1006947%2Fec5818b5-a4ba-4daa-bf81-fb5dcacf53bb.jpeg</url>
      <title>DEV Community: Kane Hooper</title>
      <link>https://dev.to/kanehooper</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kanehooper"/>
    <language>en</language>
    <item>
      <title>Running open-source AI models locally with Ruby</title>
      <dc:creator>Kane Hooper</dc:creator>
      <pubDate>Tue, 30 Jan 2024 06:43:22 +0000</pubDate>
      <link>https://dev.to/kanehooper/running-open-source-ai-models-locally-with-ruby-1n16</link>
      <guid>https://dev.to/kanehooper/running-open-source-ai-models-locally-with-ruby-1n16</guid>
      <description>&lt;p&gt;G’day Rubyists. I’m currently working with a client on implementing a custom AI solution using an open source AI model. The reason for this is the client has very sensitive customer information and we don’t want to pass this across to OpenAI or other proprietary models, in order to maintain a very high level of security. &lt;/p&gt;

&lt;p&gt;The solution has been to download and run an open source AI model in an AWS virtual machine, keeping the model completely under our control, with the Rails application making API calls to the AI in a safe environment.&lt;/p&gt;

&lt;p&gt;I wanted to share with you how to download an open source AI model locally, get it running and run Ruby scripts against it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Go Custom?
&lt;/h3&gt;

&lt;p&gt;The reason behind this project is straightforward: data security. When dealing with sensitive client information, the safest route is often to keep things in-house. This approach led us to explore custom AI models, which offer a higher degree of control and privacy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Open source models
&lt;/h3&gt;

&lt;p&gt;Over the last 6 months we have started to see a plethora of open source models hitting the market. While not as powerful as GPT-4, many of these models are showing performance that exceeds GPT-3.5 and they are only going to get better as time goes on.&lt;/p&gt;

&lt;p&gt;There are several successful open source AI models, such as Mistral, Mixtral and Llama. The right model to use depends on your processing power and what you are trying to achieve.&lt;/p&gt;

&lt;p&gt;As we are going to be running this model locally, probably the best option is Mistral. It is about 4GB in size and outperforms GPT-3.5 on many metrics. For its size, Mistral is the best model in my opinion.&lt;/p&gt;

&lt;p&gt;Mixtral outperforms Mistral, but it is a much larger model and requires at least 48GB of RAM to run. &lt;/p&gt;

&lt;h3&gt;
  
  
  Parameters
&lt;/h3&gt;

&lt;p&gt;Large Language Models are generally referred to by their parameter size, so a brief description of what that means is useful. &lt;/p&gt;

&lt;p&gt;The Mistral model, which we will be running locally, is a 7 billion parameter model. Mixtral is a mixture-of-experts model with roughly 47 billion parameters.&lt;/p&gt;

&lt;p&gt;It works this way: all of these LLMs are neural networks. A neural network is a collection of neurons, and each neuron connects to all of the neurons in the following layer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7mo9rmurr9vauwkznrk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7mo9rmurr9vauwkznrk.png" alt="Neural network" width="585" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each connection has a weight, which is usually a percentage. Each neuron also has a bias, which modifies the data as it passes through that node.&lt;/p&gt;

&lt;p&gt;The whole purpose of a neural network is to “learn” a very advanced algorithm, which is effectively a pattern matching algorithm. In the case of LLMs, by being trained on huge amounts of text, the network learns the ability to predict text patterns and so can generate meaningful responses to our prompts.&lt;/p&gt;

&lt;p&gt;In simple terms, the parameters are the number of weights and biases in the model. This gives us an idea of how many neurons are in the neural network. A 7 billion parameter model will have a few dozen layers, with thousands of neurons per layer.&lt;/p&gt;

&lt;p&gt;To put this in context, GPT-3.5 has about 175 billion parameters. It’s actually quite amazing that Mistral, with 7 billion parameters, can outperform GPT-3.5 on many metrics.&lt;/p&gt;
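&lt;p&gt;To make the idea of counting weights and biases concrete, here is a toy Ruby sketch. The layer sizes are purely illustrative, not any real LLM’s architecture.&lt;/p&gt;

```ruby
# Count parameters (weights + biases) in a toy fully connected network.
# layer_sizes is hypothetical and only illustrates the arithmetic.
layer_sizes = [4096, 4096, 4096]

params = layer_sizes.each_cons(2).sum do |n_in, n_out|
  (n_in * n_out) + n_out # weights between layers plus one bias per neuron
end

puts params
```

&lt;p&gt;Real transformer LLMs accumulate their parameters across attention and feed-forward blocks rather than simple dense layers, but the principle is the same: the parameter count is the total number of learned weights and biases.&lt;/p&gt;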

&lt;h3&gt;
  
  
  Software to run models locally
&lt;/h3&gt;

&lt;p&gt;To run open source models locally we need software to host them. While there are several options on the market, the simplest I found, and the one which will run on an Intel Mac, is Ollama.&lt;/p&gt;

&lt;p&gt;Right now Ollama runs on Mac and Linux, with Windows coming in the future. Though you can use WSL on Windows to run a Linux shell.&lt;/p&gt;

&lt;p&gt;Ollama allows you to download and run these open source models. It also opens up the model on a local port giving you the ability to make API calls via your Ruby code. And this is where it gets fun as a Ruby developer. You can write Ruby apps that integrate with your own local models.&lt;/p&gt;

&lt;p&gt;You can also watch this setup process on my YouTube video.&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://www.youtube.com/watch?si=DtpF-mccfOOx9i-e&amp;amp;v=0ZlkH4i_BnM&amp;amp;feature=youtu.be" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://res.cloudinary.com/practicaldev/image/fetch/s--aMbEuPGl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i.ytimg.com/vi/0ZlkH4i_BnM/maxresdefault.jpg" height="450" class="m-0" width="800"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://www.youtube.com/watch?si=DtpF-mccfOOx9i-e&amp;amp;v=0ZlkH4i_BnM&amp;amp;feature=youtu.be" rel="noopener noreferrer" class="c-link"&gt;
          Running open-source AI models locally with Ruby - YouTube
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          reinteractive CEO shares his process for using Ollama.ai on a local computer to set up and run a large language model that captures and learns a unique set o...
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://res.cloudinary.com/practicaldev/image/fetch/s--ML3VIQ9T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.youtube.com/s/desktop/8b6c1f4c/img/favicon.ico" width="16" height="16"&gt;
        youtube.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Setting Up Ollama
&lt;/h3&gt;

&lt;p&gt;Installation of Ollama is straightforward on Mac and Linux systems. Just download the software and it will install the package. Ollama is primarily command-line based, making it easy to install and run models. Just follow the steps and you will be set up in about 5 minutes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download Ollama at &lt;a href="https://ollama.ai/"&gt;https://ollama.ai/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo484uu8obin39xp0rm3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo484uu8obin39xp0rm3v.png" alt="Olama Homepage" width="706" height="663"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing your first model
&lt;/h3&gt;

&lt;p&gt;Once you have Ollama set up and running, you should see the Ollama icon in your task bar. This means it’s running in the background and will run your models.&lt;/p&gt;

&lt;p&gt;The next step is to download the model.&lt;/p&gt;

&lt;p&gt;Open your terminal&lt;br&gt;
Run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ollama run mistral&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The first time you run this it will download Mistral, which will take some time as the model is about 4GB in size.&lt;/p&gt;

&lt;p&gt;Once it has finished downloading it will open the Ollama prompt and you can start communicating with Mistral.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5o65s29gau3njbjxwz7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5o65s29gau3njbjxwz7e.png" alt="Running Mistral in terminal" width="585" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next time you run &lt;code&gt;ollama run mistral&lt;/code&gt; it will just run the model.&lt;/p&gt;
&lt;h3&gt;
  
  
  Customizing Models
&lt;/h3&gt;

&lt;p&gt;With Ollama you can create customizations to the base model. This is a little like creating custom GPTs in OpenAI.&lt;/p&gt;

&lt;p&gt;Full details are provided in the Ollama documentation.&lt;/p&gt;

&lt;p&gt;The steps to create a custom model are fairly simple:&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;Modelfile&lt;/code&gt;&lt;br&gt;
Add the following text to the Modelfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM mistral

# Set the temperature set the randomness or creativity of the response
PARAMETER temperature 0.3

# Set the system message
SYSTEM ”””
You are an excerpt Ruby developer. 
You will be asked questions about the Ruby Programming
language. 
You will provide an explanation along with code examples.
”””
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The system message is what primes the AI model to respond in a given way.&lt;/p&gt;

&lt;p&gt;Create the new model. Run the following command in the terminal:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ollama create &amp;lt;model-name&amp;gt; -f './Modelfile'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In my case, I am calling the model &lt;code&gt;ruby&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;ollama create ruby -f './Modelfile'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will create the new model.&lt;/p&gt;

&lt;p&gt;List your models with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ollama list&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fronyyhn84k2uxegpkxyi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fronyyhn84k2uxegpkxyi.png" alt="ollama list" width="582" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can run the custom model:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ollama run ruby&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Integrating with Ruby
&lt;/h3&gt;

&lt;p&gt;Although there's no dedicated gem for Ollama yet, Ruby developers can interact with the model using basic HTTP request methods. Ollama runs in the background and serves the model on port &lt;code&gt;11434&lt;/code&gt;, so you can access it at &lt;a href="http://localhost:11434"&gt;http://localhost:11434&lt;/a&gt;.&lt;br&gt;
The Ollama API documentation describes the endpoints for the basic commands, such as chat and creating embeddings.&lt;/p&gt;

&lt;p&gt;For our purposes we want to work with the &lt;code&gt;/api/chat&lt;/code&gt; endpoint to send a prompt to the AI model.&lt;/p&gt;

&lt;p&gt;Here is some basic Ruby code for interacting with the model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4o9j2ud5af35ifydkqr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4o9j2ud5af35ifydkqr.png" alt="Ruby code" width="707" height="674"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Ruby code does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The code starts by requiring three libraries: 'net/http', 'uri', and 'json'. These libraries are used for making HTTP requests, parsing URIs, and handling JSON data respectively.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A URI object is created with the address of the API endpoint (&lt;a href="http://localhost:11434/api/chat"&gt;http://localhost:11434/api/chat&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A new HTTP POST request is created using the Net::HTTP::Post.new method with the URI as the argument.&lt;br&gt;
The content type of the request is set to 'application/json'.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The body of the request is set to a JSON string that represents a hash. This hash contains three keys: 'model', 'messages', and 'stream'. The 'model' key is set to 'ruby' which is our model, the 'messages' key is set to an array containing a single hash representing a user message, and the 'stream' key is set to false.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The messages hash follows the standard format for interacting with AI models. It takes a role and the content. The roles can be system, user and assistant. System is the priming message for how the model should respond; we already set that in the Modelfile. The user message is our standard prompt, and the model will respond with the assistant message.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The HTTP request is sent using the Net::HTTP.start method. This method opens a network connection to the specified hostname and port, and then sends the request. The read timeout for the connection is set to 120 seconds because, running on a 2019 Intel Mac, the responses can be a little slow. This isn’t an issue when running on appropriately sized AWS servers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The response from the server is stored in the 'response' variable.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
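&lt;p&gt;Since the script is only shown as a screenshot above, here is a sketch reconstructed from that description (the exact code in the image may differ slightly; it assumes Ollama is running locally with the custom &lt;code&gt;ruby&lt;/code&gt; model created earlier):&lt;/p&gt;

```ruby
require 'net/http'
require 'uri'
require 'json'

# Ollama serves its API on port 11434 by default
uri = URI('http://localhost:11434/api/chat')

request = Net::HTTP::Post.new(uri)
request.content_type = 'application/json'

# 'ruby' is the custom model created with `ollama create ruby`
request.body = {
  model: 'ruby',
  messages: [
    { role: 'user', content: 'How do I iterate over an array in Ruby?' }
  ],
  stream: false
}.to_json

# Generous read timeout: local inference can be slow on older hardware
response = Net::HTTP.start(uri.hostname, uri.port, read_timeout: 120) do |http|
  http.request(request)
end

# With stream: false, the answer arrives as a single JSON object
puts JSON.parse(response.body).dig('message', 'content')
```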

&lt;h3&gt;
  
  
  Practical Use Cases
&lt;/h3&gt;

&lt;p&gt;The real value of running local AI models comes into play for companies dealing with sensitive data. These models are really good at processing unstructured data, like emails or documents, and extracting valuable, structured information. &lt;/p&gt;

&lt;p&gt;For one use case I am training the model on all of the customer information in a CRM. This allows users to ask questions about the customer without needing to go through sometimes hundreds of notes. &lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Where security is not an issue I am more likely to work directly with OpenAI. But for companies that need private models, open source is definitely the way to go.&lt;/p&gt;

&lt;p&gt;If I get around to it, one of these days I’ll write a Ruby wrapper around the Ollama APIs to make it a little easier to interact with. If you would like to work on that project, then definitely reach out.&lt;/p&gt;

&lt;p&gt;Have fun working with open source models.&lt;/p&gt;




&lt;p&gt;Kane Hooper is the CEO of reinteractive, a specialist Ruby on Rails development firm.&lt;/p&gt;

&lt;p&gt;If you need any help with your Rails or AI projects you can contact Kane directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="mailto:kane.hooper@reinteractive.com"&gt;kane.hooper@reinteractive.com&lt;/a&gt;&lt;br&gt;
reinteractive.com&lt;/p&gt;

</description>
      <category>ai</category>
      <category>ruby</category>
      <category>rails</category>
    </item>
    <item>
      <title>Creating an Intelligent Knowledge Base Q&amp;A App with GPT-3 and Ruby</title>
      <dc:creator>Kane Hooper</dc:creator>
      <pubDate>Tue, 21 Feb 2023 05:10:31 +0000</pubDate>
      <link>https://dev.to/kanehooper/creating-an-intelligent-knowledge-base-qa-app-with-gpt-3-and-ruby-15ke</link>
      <guid>https://dev.to/kanehooper/creating-an-intelligent-knowledge-base-qa-app-with-gpt-3-and-ruby-15ke</guid>
      <description>&lt;h2&gt;
  
  
  What is covered in this article
&lt;/h2&gt;

&lt;p&gt;In this article you will create a Q&amp;amp;A application based on knowledge articles from your own website. You will use GPT-3 to make it respond intelligently and code it in Ruby.&lt;/p&gt;

&lt;p&gt;The example in this article is based on an e-commerce company called Sterling Parts that sells car parts. Users will be able to ask the AI questions which are answered by Sterling Parts’ “frequently asked questions”, “about us” and “terms of use” pages.&lt;/p&gt;

&lt;p&gt;In a future article you will learn how to create a Q&amp;amp;A that can answer questions related to the products on the e-commerce page as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical assistance:
&lt;/h2&gt;

&lt;p&gt;Matenia Rossides, Engineering Manager, reinteractive.&lt;br&gt;
Miguel Dagatan, Senior Rails Developer, reinteractive.&lt;br&gt;
Allan Andal, Senior Rails Developer, reinteractive&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;In order to follow along with this article you need to have the following knowledge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Intermediate knowledge of Ruby&lt;/li&gt;
&lt;li&gt;An understanding of how to integrate Ruby with OpenAI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://reinteractive.com/articles/integrating-ruby-with-openai-a-beginners-guide" rel="noopener noreferrer"&gt;Integrating Ruby with Open AI a Beginners Guide&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Github
&lt;/h2&gt;

&lt;p&gt;The code for this article is available on Github:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/reinteractive/gpt3-embeddings" rel="noopener noreferrer"&gt;https://github.com/reinteractive/gpt3-embeddings&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Summary of the process
&lt;/h2&gt;

&lt;p&gt;You will be creating two Ruby scripts that can be used to prepare the knowledge base and ask GPT-3 questions about the knowledge base. It will work similarly to ChatGPT except you will train the model to answer specifically based on the knowledge from your website.&lt;/p&gt;

&lt;p&gt;In order to make this process work you will use arrays known in machine learning as vector embeddings. Embedding is the process of converting a section of text into an array of numbers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6h77ci5ah9uyfa4ipxnw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6h77ci5ah9uyfa4ipxnw.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This vector is a numerical representation of the meaning contained in the text.&lt;/p&gt;

&lt;p&gt;These vectors permit semantic search to find the most relevant information in the knowledge base and then utilise GPT-3 to provide a meaningful answer to the user’s question.&lt;/p&gt;

&lt;p&gt;The word “semantic” refers to the meaning of language. As a Ruby developer you would be familiar with keyword searching, which finds exact or partial matches of words within a database. Semantic search finds matches in a database based on the meaning or intent of a question.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;Question: “How do I cut apples?”&lt;/p&gt;

&lt;p&gt;Knowledge article 1: “In order to peel an orange you first need to get an orange peeler.”&lt;/p&gt;

&lt;p&gt;Knowledge article 2: “If you want to slice up an apple you will first need to get a sharp knife.”&lt;/p&gt;

&lt;p&gt;Semantic search will return knowledge article 2 as the most relevant. It is sufficiently “intelligent” to understand that cut and slice are semantically similar.&lt;/p&gt;

&lt;p&gt;There is some complex mathematics behind this. If you are interested, I am currently writing an article detailing the mathematics, which will be released soon.&lt;/p&gt;

&lt;p&gt;To achieve semantic search, your application needs to convert text into vector embeddings.&lt;/p&gt;

&lt;p&gt;GPT-3 has an embedding endpoint which will convert your text into a vector of 1,536 values. Each value in the vector represents a feature of the text.&lt;/p&gt;

&lt;p&gt;Some examples of different features that might be represented in a vector embedding include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Semantic meaning&lt;/li&gt;
&lt;li&gt;Parts of speech&lt;/li&gt;
&lt;li&gt;Frequency of use&lt;/li&gt;
&lt;li&gt;Associations with other words&lt;/li&gt;
&lt;li&gt;Grammatical structure&lt;/li&gt;
&lt;li&gt;Sentiment&lt;/li&gt;
&lt;li&gt;Text length&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As GPT-3 has token limits, the knowledge base needs to be broken down into chunks no more than about 3000 tokens in total (about 2000 words). Each chunk is converted into a vector and stored in a database to be searched against.&lt;/p&gt;

&lt;p&gt;When a user asks a question, the question is also converted into a vector and, using a specific mathematical process, used to search the database for the knowledge base chunk with the most relevant meaning.&lt;/p&gt;

&lt;p&gt;From there we send the question and knowledge base chunk to GPT-3 to respond with a meaningful answer.&lt;/p&gt;
&lt;h2&gt;
  
  
  Steps to create the Q&amp;amp;A AI
&lt;/h2&gt;

&lt;p&gt;In this knowledge base Q&amp;amp;A there will be two Ruby scripts. The first script, &lt;code&gt;embeddings.rb&lt;/code&gt;, prepares the knowledge base vector embeddings. The second is our AI script, &lt;code&gt;questions.rb&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;These are the steps you will be following to create our Q&amp;amp;A AI.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Prepare the knowledge base data into chunks of approximately 2000 words each and save in a text file.&lt;/li&gt;
&lt;li&gt;Convert each text file into a vector embedding using the OpenAI embeddings endpoint. [embeddings.rb]&lt;/li&gt;
&lt;li&gt;Save the embeddings and original knowledge base text in a database that we can later query. For this simple example a CSV file will act as the database. [embeddings.rb]&lt;/li&gt;
&lt;li&gt;Get the question from the user and convert this into its own vector embedding. [questions.rb]&lt;/li&gt;
&lt;li&gt;Compare the question vector against the database and find the knowledge base text which has the closest semantic meaning to the question. [questions.rb]&lt;/li&gt;
&lt;li&gt;Pass the user question and knowledge base text to the GPT-3 completions endpoint with a tailored prompt. [questions.rb]&lt;/li&gt;
&lt;li&gt;Get the response from GPT-3 and display it to the user. [questions.rb]&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Step One: Preparing the data
&lt;/h2&gt;

&lt;p&gt;Since GPT-3 has a 4096 total token limit for prompt plus response, it is important to convert the knowledge base data into chunks of approximately 2000 words. We need enough space for the question tokens and the response tokens.&lt;/p&gt;

&lt;p&gt;Where possible it is valuable to create chunks that have similar meaning. For the Sterling Parts website the FAQ, About Us and Terms of Use pages are all under 1000–2000 words each, so it makes semantic sense to convert each individual page into its own text file.&lt;/p&gt;

&lt;p&gt;In your Ruby application create a folder called training-data.&lt;/p&gt;



&lt;p&gt;The training-data folder will contain the knowledge base text for your application. These will be stored as standard .txt files. For this example using the Sterling Parts website, the following web pages will be stored as text files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.sterlingparts.com.au/faqs" rel="noopener noreferrer"&gt;https://www.sterlingparts.com.au/faqs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.sterlingparts.com.au/about-us" rel="noopener noreferrer"&gt;https://www.sterlingparts.com.au/about-us&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.sterlingparts.com.au/Terms-and-Conditions" rel="noopener noreferrer"&gt;https://www.sterlingparts.com.au/Terms-and-Conditions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The terms and conditions page is the largest at 2008 words. For your purposes take your content and copy and paste it into text files within the training-data folder.&lt;/p&gt;

&lt;p&gt;It doesn’t matter what the files are called so long as they are .txt files.&lt;/p&gt;
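&lt;p&gt;If you prefer to script the chunking rather than copy and paste by hand, a naive sketch looks like this (word count only approximates token count, so leave some headroom):&lt;/p&gt;

```ruby
# Naively split text into chunks of at most `limit` words each.
def chunk_text(text, limit = 2000)
  text.split(/\s+/).each_slice(limit).map { |words| words.join(' ') }
end

# Small limit here just to show the behaviour
p chunk_text('one two three four five six seven', 3)
# => ["one two three", "four five six", "seven"]
```

&lt;p&gt;In practice you would read each page’s text, call &lt;code&gt;chunk_text&lt;/code&gt; on it, and write each chunk to its own .txt file in the training-data folder.&lt;/p&gt;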


&lt;h2&gt;
  
  
  Step Two: Converting data into vector embeddings
&lt;/h2&gt;

&lt;p&gt;You will need the ruby-openai gem in order to work with the OpenAI API.&lt;/p&gt;

&lt;p&gt;Install the required gems.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gem install ruby-openai dotenv&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Ensure you have your OpenAI API key saved in the .env file.&lt;/p&gt;

&lt;p&gt;Import the &lt;code&gt;openai&lt;/code&gt; library, and create a new instance using your API key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# embeddings.rb

require 'dotenv'
require 'ruby/openai'

Dotenv.load()

openai = OpenAI::Client.new(access_token: ENV['OPENAI_API_KEY'])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Next is the process to extract the data from all of the text files. In Ruby you will loop through the training-data folder, read the data from each text file and store it in an array.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# embeddings.rb

# The array will be used to hold the data from each file
text_array = []

# Loop through all .txt files in the /training-data folder
Dir.glob("training-data/*.txt") do |file|
  # Read the data from each file and push to the array
  # The dump method escapes special characters such as newlines into literal \n sequences
  text = File.read(file).dump()
  text_array &amp;lt;&amp;lt; text
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;With all the training data in the &lt;code&gt;text_array&lt;/code&gt; the next step is to convert each body of text into a vector embedding. To do this you will use the OpenAI embeddings endpoint. The endpoint takes two parameters, the model which is &lt;code&gt;text-embedding-ada-002&lt;/code&gt; and the &lt;code&gt;input&lt;/code&gt; which is the text from each file.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# embeddings.rb

# This array is used to store the embeddings
embedding_array = []

# Loop through each element of the array
text_array.each do |text|
  # Pass the text to the embeddings API which will return a vector and
  # store in the response variable.
  response = openai.embeddings(
    parameters: {
      model: "text-embedding-ada-002",
      input: text
    }
  )

  # Extract the embedding from the response object
  embedding = response['data'][0]['embedding']

  # Create a Ruby hash containing the vector and the original text
  embedding_hash = {embedding: embedding, text: text}
  # Store the hash in an array.
  embedding_array &amp;lt;&amp;lt; embedding_hash
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can print the embedding variable and you will see a vector of 1,536 values. This is your vector embedding.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;embedding_array&lt;/code&gt; holds each vector embedding alongside its original text, ready to be saved to our database for semantic search.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step Three: Storing the embeddings in the CSV (database)
&lt;/h2&gt;

&lt;p&gt;For the purposes of this article a CSV file is being used to substitute for a database. There are specialised vector databases that have very efficient semantic search algorithms. From a Ruby perspective the most useful tool is Redis. It contains a vector search feature. This link provides more information: &lt;a href="https://redis.com/blog/rediscover-redis-for-vector-similarity-search/" rel="noopener noreferrer"&gt;https://redis.com/blog/rediscover-redis-for-vector-similarity-search/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this step, a CSV file is created with two columns, &lt;code&gt;embedding&lt;/code&gt; and &lt;code&gt;text&lt;/code&gt;. This CSV stores the original text from each file along with its vector embedding. You will need to import the csv library for this step.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# embeddings.rb

require 'dotenv'
require 'ruby/openai'
require 'csv'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is the final code for the &lt;code&gt;embeddings.rb&lt;/code&gt; script. It will create a CSV file with the &lt;code&gt;embedding&lt;/code&gt; and &lt;code&gt;text&lt;/code&gt; headers, loop through the embedding_array and save the respective vector embeddings and text into the CSV.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;embeddings.rb

CSV.open("embeddings.csv", "w") do |csv|
  # This sets the headers
  csv &amp;lt;&amp;lt; [:embedding, :text]
  embedding_array.each do |obj|
    # The embedding vector will be stored as a string to avoid comma 
    # sperated issues between the values in the CSV
    csv &amp;lt;&amp;lt; [obj[:embedding], obj[:text]]
  end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The final action for the &lt;code&gt;embeddings.rb&lt;/code&gt; script is to run it and populate the CSV file with the data.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ruby embeddings.rb&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step Four: Get the question from the user
&lt;/h2&gt;

&lt;p&gt;Now it’s time to turn your attention to the next Ruby script, &lt;code&gt;questions.rb&lt;/code&gt;. This is the main file: it queries the user’s question against the CSV of vectors, returns the original text with the closest semantic similarity, and gets GPT-3 to return an intelligent answer.&lt;/p&gt;

&lt;p&gt;Generally speaking you would run this inside a Rails application and get the user’s question from a text field in your view. This example sticks to the CLI to keep things simple.&lt;/p&gt;

&lt;p&gt;Create a new file &lt;code&gt;questions.rb&lt;/code&gt; and import the required libraries.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# questions.rb

require 'dotenv'
require 'ruby/openai'
require 'csv'

Dotenv.load()

openai = OpenAI::Client.new(access_token: ENV['OPENAI_API_KEY'])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You need to get the user’s question, which will be used to query the knowledge base.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# questions.rb

puts "Welcome to the Sterling Parts AI Knowledge Base. How can I help you?"
# chomp strips the trailing newline before the question is embedded
question = gets.chomp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Using the OpenAI embeddings endpoint, the user’s question can be converted into a vector embedding. This permits the use of a mathematical formula to find the text in the &lt;code&gt;embeddings.csv&lt;/code&gt; file with the closest meaning and intent.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# questions.rb

# Convert the question into a vector embedding
response = openai.embeddings(
  parameters: {
    model: "text-embedding-ada-002",
    input: question
  }
)

# Extract the embedding value
question_embedding = response['data'][0]['embedding']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Printing the embedding will show you an array of 1,536 values. This is your vector embedding.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step Five: Search the CSV to find the text with the closest semantic meaning to the question
&lt;/h2&gt;

&lt;p&gt;This is where the mathematics comes in. I am in the process of writing a detailed article on the mathematics of vector embeddings and how to search for closest semantic meaning. For now let’s keep it simple.&lt;/p&gt;

&lt;p&gt;The mathematics used is called cosine similarity. If you recall from high school mathematics, the cosine is used in trigonometry to relate the angles and sides of a triangle. In machine learning, cosine similarity measures how alike two vectors are by the cosine of the angle between them. This is simple to represent in two dimensions: a two-dimensional vector has two values and can be plotted on a graph.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50d10wd0cb1iba5csyob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50d10wd0cb1iba5csyob.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: Statistics for Machine Learning by Pratap Dangeti&lt;/p&gt;

&lt;p&gt;Cosine similarity can be used to determine the relationship between the two vectors. It ranges from -1 to 1, with 1 indicating vectors pointing in the same direction; for text embeddings the scores in practice fall between 0 and 1.&lt;/p&gt;

&lt;p&gt;With vector embeddings, you are dealing with vectors of 1,536 values. It is impossible to visually represent this, but nevertheless the mathematics works the same.&lt;/p&gt;

&lt;p&gt;By using cosine similarity, the computer can determine which text file contains meaning most closely related to the user’s question.&lt;/p&gt;
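&lt;p&gt;To make the idea concrete, here is a minimal sketch of cosine similarity in plain Ruby: the dot product of the two vectors divided by the product of their magnitudes. (The method name here is just illustrative; the gem introduced below does this for you.)&lt;/p&gt;

```ruby
# A minimal sketch of cosine similarity: dot product of two vectors
# divided by the product of their magnitudes.
def cosine_similarity(a, b)
  dot   = a.zip(b).sum { |x, y| x * y }
  mag_a = Math.sqrt(a.sum { |x| x * x })
  mag_b = Math.sqrt(b.sum { |x| x * x })
  dot / (mag_a * mag_b)
end

# Identical directions score 1.0; perpendicular vectors score 0.0.
cosine_similarity([1.0, 0.0], [1.0, 0.0]) # => 1.0
cosine_similarity([1.0, 0.0], [0.0, 1.0]) # => 0.0
```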

&lt;p&gt;I created the &lt;code&gt;cosine-similarity&lt;/code&gt; Ruby gem which performs the mathematics without the need to understand its inner workings.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gem install cosine-similarity&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Import the library at the top of the questions.rb file.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# questions.rb

require 'dotenv'
require 'ruby/openai'
require 'csv'
require 'cosine-similarity'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The next step is to loop through all rows of the CSV file, and compare the question vector to the original text vectors. The &lt;code&gt;cosine_similarity&lt;/code&gt; method will compare the question against each of the original texts and will return a number between 0 and 1.&lt;/p&gt;

&lt;p&gt;You are interested in the similarity with the highest value. This is the text with the closest meaning and intent to the question.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# questions.rb

# Store the similarity scores as the code loops through the CSV
similarity_array = []

# Loop through the CSV and calculate the cosine-similarity between
# the question vector and each text embedding
CSV.foreach("embeddings.csv", headers: true) do |row|
  # Extract the embedding from the column and parse it back into an Array
  text_embedding =  JSON.parse(row['embedding'])

  # Add the similarity score to the array
  similarity_array &amp;lt;&amp;lt; cosine_similarity(question_embedding, text_embedding)
end

# Return the index of the highest similarity score
index_of_max = similarity_array.index(similarity_array.max)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The &lt;code&gt;index_of_max&lt;/code&gt; variable now contains the index of the highest similarity score. This can be used to extract the text from the CSV that will be sent to GPT-3 along with the user’s question.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# questions.rb

# Used to store the original text
original_text = ""

# Loop through the CSV and find the text which matches the highest
# similarity score
CSV.foreach("embeddings.csv", headers: true).with_index do |row, rowno|
  if rowno == index_of_max
    original_text = row['text']
  end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
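&lt;p&gt;As an aside, the two passes over the CSV can be folded into one by tracking the best-scoring row while reading. This is just an alternative sketch, not a change to the article’s approach; &lt;code&gt;closest_text&lt;/code&gt; is a hypothetical helper name, and a simple inline &lt;code&gt;cosine_similarity&lt;/code&gt; is included so the snippet stands alone.&lt;/p&gt;

```ruby
require 'csv'
require 'json'

# Inlined for self-containment; the cosine-similarity gem provides the same.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

# Single pass: keep the text of the highest-scoring row as we go.
def closest_text(csv_path, question_embedding)
  best_text  = nil
  best_score = -1.0
  CSV.foreach(csv_path, headers: true) do |row|
    score = cosine_similarity(question_embedding, JSON.parse(row['embedding']))
    if score > best_score
      best_score = score
      best_text  = row['text']
    end
  end
  best_text
end
```

&lt;p&gt;Either version works; the single-pass variant simply avoids re-reading the file.&lt;/p&gt;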

&lt;h2&gt;
  
  
  Step Six: Pass the user question and knowledge base text to the GPT-3 completions endpoint
&lt;/h2&gt;

&lt;p&gt;Continuing with the &lt;code&gt;questions.rb&lt;/code&gt; file: the script now stores the user’s &lt;code&gt;question&lt;/code&gt; and the &lt;code&gt;original_text&lt;/code&gt; with the closest meaning. It is now possible to pass this information to the GPT-3 completions endpoint to return an intelligent response to the user’s question.&lt;/p&gt;

&lt;p&gt;A tailored prompt is necessary to prime GPT-3 to respond in a manner that aligns with the user’s question and the purpose of the knowledge base. This falls within the scope of prompt design.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;prompt = 
"You are an AI assistant. You work for Sterling Parts which is a car parts 
online store located in Australia. You will be asked questions from a 
customer and will answer in a helpful and friendly manner.

You will be provided company information from Sterling Parts under the 
[Article] section. The customer question will be provided under the 
[Question] section. You will answer the customer's questions based on the 
article.

If the user's question is not answered by the article you will respond with 
'I'm sorry I don't know.'

[Article]
#{original_text}

[Question]
#{question}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The key to a successful prompt is providing sufficient information for GPT to form a pattern of response. The first paragraph primes GPT on how it will respond to the user’s question.&lt;/p&gt;

&lt;p&gt;The second paragraph provides the context for GPT so it can determine what information it will use in its response.&lt;/p&gt;

&lt;p&gt;At the bottom of the prompt the &lt;code&gt;original_text&lt;/code&gt; and &lt;code&gt;question&lt;/code&gt; are injected from your previous steps.&lt;/p&gt;

&lt;p&gt;Note: If you find you are not getting the best response from GPT, it is often necessary to modify and experiment with the prompt. Take advantage of the OpenAI playground to test your prompt until you get it right. &lt;a href="https://platform.openai.com/playground" rel="noopener noreferrer"&gt;https://platform.openai.com/playground&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The prompt is now passed to GPT’s completion endpoint to return an intelligent response.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response = openai.completions(
  parameters: {
    model: "text-davinci-003",
    prompt: prompt,
    temperature: 0.2,
    max_tokens: 500,
  }
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The temperature is quite low in the code. This will ensure GPT returns a very high probability response. If you want GPT to be more creative with its response, increase the temperature towards 0.9.&lt;/p&gt;

&lt;p&gt;This endpoint will generate a response within a few seconds (depending on server load).&lt;/p&gt;
&lt;h2&gt;
  
  
  Step Seven: Get the response from GPT-3 and display it to the user
&lt;/h2&gt;

&lt;p&gt;The final step is to output GPT’s response to the user.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;puts "\nAI response:\n"
puts response['choices'][0]['text'].lstrip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Your script is complete!&lt;/p&gt;
&lt;h2&gt;
  
  
  Debugging the response
&lt;/h2&gt;

&lt;p&gt;There are three potential factors that can cause GPT to return a response that does not meet your requirements.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data preparation&lt;/li&gt;
&lt;li&gt;Prompt design&lt;/li&gt;
&lt;li&gt;Temperature parameter&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Data preparation
&lt;/h3&gt;

&lt;p&gt;As best as possible, attempt to create each file so that it contains similar meaning. If one file cuts off half-way through a sentence and the next file picks up midway, you may have trouble finding the exact text you need to feed GPT.&lt;/p&gt;

&lt;p&gt;You are better off breaking files up into smaller but meaningfully grouped texts to improve your similarity searches.&lt;/p&gt;
&lt;h3&gt;
  
  
  Prompt design
&lt;/h3&gt;

&lt;p&gt;This one takes practice!&lt;/p&gt;

&lt;p&gt;As a general rule, the clearer and more explicit you are in the prompt, the better GPT will be able to generate results tailored to your needs.&lt;/p&gt;

&lt;p&gt;There is no substitute for trial and error on this one. If you get stuck, feel free to reach out to me directly and I will happily give you a hand. (My details are at the end of the article.)&lt;/p&gt;
&lt;h3&gt;
  
  
  Temperature parameter
&lt;/h3&gt;

&lt;p&gt;I have an article on fully understanding the temperature parameter (see below.)&lt;/p&gt;

&lt;p&gt;In simple terms the temperature controls the randomness or ‘creativity’ of the model. A low temperature will provide more expected responses, while higher temperatures provide more creative responses. Depending on your needs, play with the temperature until you get the desired results.&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/kanehooper" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1006947%2Fec5818b5-a4ba-4daa-bf81-fb5dcacf53bb.jpeg" alt="kanehooper"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/kanehooper/mastering-the-gpt-3-temperature-parameter-with-ruby-38k3" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Mastering the GPT-3 Temperature Parameter with Ruby&lt;/h2&gt;
      &lt;h3&gt;Kane Hooper ・ Jan 18 '23&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#ruby&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#openai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#gpt3&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;



&lt;h2&gt;
  
  
  Improving the model
&lt;/h2&gt;

&lt;p&gt;You may have very specific use cases that require additional training of the model to provide the types of responses that are appropriate to your knowledge base and users.&lt;/p&gt;

&lt;p&gt;There are several approaches, which will be covered in later articles, but these boil down to inline-priming and fine-tuning.&lt;/p&gt;

&lt;p&gt;Inline-priming is providing examples of questions and answers within the prompt. This is useful if you need GPT to respond in a very particular format or way. I use this frequently when I need a response in JSON.&lt;/p&gt;

&lt;p&gt;Fine-tuning is another approach altogether and is used to train your own GPT model to perform tasks it hasn’t been trained on before. This is useful, for example, when asking users questions in order to extract particular information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next steps
&lt;/h2&gt;

&lt;p&gt;The next area I am working on is using vector embeddings for product search. That is allowing a user to ask a question and have GPT respond with appropriate products.&lt;/p&gt;

&lt;p&gt;I know the general pattern to achieve this, I just need a spare 20 hours to write the code, test and write the article. So stay tuned.&lt;/p&gt;




&lt;p&gt;Kane Hooper is the CEO of reinteractive, the longest running dedicated Ruby on Rails development firm in the world.&lt;/p&gt;

&lt;p&gt;You can contact Kane directly for any help with your Ruby on Rails application.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://www.reinteractive.com" rel="noopener noreferrer"&gt;www.reinteractive.com&lt;/a&gt;&lt;br&gt;
&lt;a href="mailto:kane.hooper@reinteractive.com"&gt;kane.hooper@reinteractive.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>gpt3</category>
      <category>openai</category>
      <category>rails</category>
    </item>
    <item>
      <title>Mastering GPT-3: The mathematics of logprobs for Ruby Devs</title>
      <dc:creator>Kane Hooper</dc:creator>
      <pubDate>Sat, 04 Feb 2023 10:11:26 +0000</pubDate>
      <link>https://dev.to/kanehooper/mastering-gpt-3-the-mathematics-of-logprobs-for-ruby-devs-8lj</link>
      <guid>https://dev.to/kanehooper/mastering-gpt-3-the-mathematics-of-logprobs-for-ruby-devs-8lj</guid>
      <description>&lt;p&gt;The uninformed may perceive GPT-3’s output as mystical, but in reality, it relies solely on statistical analysis. GPT-3 uses the principles of probability to determine the likelihood of each word’s appearance through its training, and then selects the next word accordingly.&lt;/p&gt;

&lt;p&gt;The GPT-3 playground provides insight into the inner workings of this seemingly magical entity. You can reveal the top five potential words and their associated probabilities by enabling the ‘Show Probabilities’ setting, offering a glimpse into the calculations driving this model’s output.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro5qx7u4pfwipfn71nzz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro5qx7u4pfwipfn71nzz.png" alt="OpenAI Playground" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;“Show Probabilities” displays the probability of the top 5 words, with the selected word highlighted in red. The word selected is determined by the temperature setting, where a setting of 0 will result in the top probability word being selected.&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/kanehooper" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1006947%2Fec5818b5-a4ba-4daa-bf81-fb5dcacf53bb.jpeg" alt="kanehooper"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/kanehooper/mastering-the-gpt-3-temperature-parameter-with-ruby-38k3" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Mastering the GPT-3 Temperature Parameter with Ruby&lt;/h2&gt;
      &lt;h3&gt;Kane Hooper ・ Jan 18 '23&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#ruby&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#openai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#gpt3&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;p&gt;While the playground presents our results as percentages, GPT3 relies on a different mathematical approach to achieve computational efficiency.&lt;/p&gt;

&lt;p&gt;Logarithmic Probabilities, or logprob, uses logarithmic mathematics to convert the percentage into a computationally manageable number. This article delves into the intricacies of logprob and the mathematical principles that drive it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Intended Audience
&lt;/h2&gt;

&lt;p&gt;This writing is aimed at Ruby developers, particularly those fascinated by the realm of Machine Learning, eager to delve deeper into the mathematical intricacies behind GPT3 and attain a fine-grained level of control over the model’s output.&lt;/p&gt;

&lt;p&gt;For those fluent in Ruby, the article offers a unique opportunity to witness the concepts in action through code samples written in the Ruby language.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before delving into the heart of this piece, it is important to possess a foundation in:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The fundamental principles of the Ruby programming language,&lt;/li&gt;
&lt;li&gt;A sound grasp of mathematics including exponentials and logarithms, and&lt;/li&gt;
&lt;li&gt;The ability to interface with the OpenAI API.&lt;/li&gt;
&lt;/ol&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/kanehooper" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1006947%2Fec5818b5-a4ba-4daa-bf81-fb5dcacf53bb.jpeg" alt="kanehooper"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/kanehooper/how-to-integrate-ruby-with-openai-gpt-3-5h96" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to integrate Ruby with OpenAI (GPT-3)&lt;/h2&gt;
      &lt;h3&gt;Kane Hooper ・ Jan 12 '23&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#discuss&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#coding&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#learning&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;p&gt;In this piece, we delve into the heart of logarithms, exploring the mathematical underpinnings that power some of the most intriguing computational systems.&lt;/p&gt;

&lt;p&gt;By leveraging the Math module, we will demonstrate the practical application of these principles in the context of the Ruby programming language. It is worth noting that, as a component of the core language, the Math module is readily available for use and does not need to be imported with require.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview of GPT3 Token Selection
&lt;/h2&gt;

&lt;p&gt;GPT operates on a linguistic unit known as a token, which is comprised of a sequence of characters that can represent a full word or a fragment of a word. These tokens are derived from common character combinations that are observed across languages, and GPT’s vast database stores approximately 50,000 distinct tokens, each assigned a unique identifier, or token ID.&lt;/p&gt;

&lt;p&gt;As an example, consider the word “untangle.” This word can be decomposed into two tokens: “unt” and “angle.” The token “unt” has been assigned the token ID of 1418, while “angle” bears the ID 9248.&lt;/p&gt;

&lt;p&gt;To assist with tokenizing text, OpenAI has devised a tool referred to as the Tokenizer, which transforms text into a series of tokens and returns the corresponding token IDs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjojjnb5vonbqawlo7tqc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjojjnb5vonbqawlo7tqc.png" alt="OpenAI Tokenizer" width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each token is given an ID based on its frequency in the training data provided to GPT-3. The lower the number, the higher the frequency during training.&lt;/p&gt;

&lt;p&gt;OpenAI has estimated that a &lt;a href="https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them" rel="noopener noreferrer"&gt;single token is equivalent to approximately four characters&lt;/a&gt;; however, this estimation is not always consistent. For instance, the word “ tomorrow” — with a space in front of it — is nine characters in total, yet it is still considered a single token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxk7tq7pliqcxp53vx3px.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxk7tq7pliqcxp53vx3px.png" alt="OpenAI Tokenizer" width="800" height="68"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is a fascinating observation that the word “tomorrow”, without a leading space, is composed of two distinct tokens.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ytokq609l7sv1camz4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ytokq609l7sv1camz4m.png" alt="OpenAI Tokenizer" width="800" height="61"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The power of GPT-3 lies in its ability to tokenize a prompt and use probability to determine the most likely next token. Tokens are affected by preceding spaces as well as capitalization.&lt;/p&gt;

&lt;p&gt;To illustrate this, consider the simple example below in the GPT-3 playground with the temperature set to 0. This forces GPT-3 to select the highest probability token each time.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Are humans omnivores?&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;GPT will check its probability tables and find that based on its training data, the word “Yes” has a 99.99% probability of appearing next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fka4m5h04p7r2qgddthix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fka4m5h04p7r2qgddthix.png" alt="OpenAI Playground" width="800" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The period mark at the end of the response was also determined through probabilistic analysis. The model’s training data revealed that, given the response “Yes, humans are omnivores,” a period was the most likely character to follow with a probability of 84%.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4uzatprfrvoqlfm9kd8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4uzatprfrvoqlfm9kd8.png" alt="OpenAI Playground" width="800" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The notion that GPT is anything more than a probability model, something akin to sentience, has been gaining traction on Reddit and other social networks of late. However, such a concept could not be further from the truth. GPT is nothing more than a sophisticated algorithm trained on an unfathomable amount of data — some 500 billion tokens of text, to be precise.&lt;/p&gt;

&lt;p&gt;Through this dataset, GPT has learned to associate certain words and phrases with one another, to the point that it can generate text that mimics human language. Yet, for all its intelligence, GPT is still a long way off from being truly sentient.&lt;/p&gt;

&lt;p&gt;A full list of GPT tokens can be found at the following link: &lt;a href="https://github.com/latitudegames/GPT-3-Encoder/blob/master/vocab.bpe" rel="noopener noreferrer"&gt;https://github.com/latitudegames/GPT-3-Encoder/blob/master/vocab.bpe&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How does GPT-3 use probabilities under the hood
&lt;/h2&gt;

&lt;p&gt;The issue with probabilities is that performing operations on them is far from an efficient computing task, particularly when it comes to multiplication. Probability calculations often require multiplicative operations, which are not computationally efficient. In stark contrast, addition is much more manageable computationally, with some systems seeing a tenfold reduction in processing time.&lt;/p&gt;

&lt;p&gt;Let’s return to our prompt above:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Are humans omnivores?&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;GPT has tokenized this phrase into six tokens, each with its own probability. Each token is assigned a probability P1 to P6.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rujxybysri8vnza9k81.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rujxybysri8vnza9k81.png" alt="OpenAI Tokenizer" width="800" height="49"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When GPT responds to the prompt above, it needs to determine the probability of the next token, which we can call P7. To do this, it needs to calculate the combined probability of all 7 tokens occurring in its training data. This can be done mathematically by multiplying the probabilities together, P1 * P2 * P3 * P4 * P5 * P6 * P7. However, as the response gets longer, the computational cost of this calculation increases exponentially.&lt;/p&gt;

&lt;p&gt;In order to make this process more efficient, the OpenAI team turned to a long-standing technique called logarithmic probability (logprob). Logprob allows computers to perform the same calculation using addition instead of multiplication, resulting in performance improvements. To gain a better understanding of log probability, you must first have a good grasp of logarithms.&lt;/p&gt;
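&lt;p&gt;The identity behind logprobs is easy to verify in Ruby: the log of a product of probabilities equals the sum of their logs, so a chain of multiplications becomes a chain of additions. The probability values below are purely illustrative.&lt;/p&gt;

```ruby
# Token probabilities (illustrative values only)
probs = [0.9, 0.8, 0.95, 0.99]

product = probs.reduce(:*)              # multiply the probabilities
log_sum = probs.sum { |p| Math.log(p) } # add their logprobs instead

# Exponentiating the summed logprobs recovers the product
# (to within floating-point error).
Math.exp(log_sum) # ≈ product
```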

&lt;h2&gt;
  
  
  Understanding Logarithms
&lt;/h2&gt;

&lt;p&gt;Mathematics has long been the language of science, and logarithms stand as one of the most essential components of the discipline. As the inverse of exponentials, logarithms essentially undo the operations of the exponential function, much like subtraction undoes addition.&lt;/p&gt;

&lt;p&gt;Inverse operations are most clearly demonstrated through an example: Start with the number 10, and add 5. The inverse operation in this case is to subtract 5, which undoes the addition and leaves us with 10 once more.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;10 + 5 - 5

# Output: 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the same way, logarithms are the inverse of exponentials.&lt;/p&gt;

&lt;p&gt;Exponentials are mathematical operations which, given a base and an exponent, tell us how many times to multiply that base by itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmajhdd3rnutbr5ig5bjw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmajhdd3rnutbr5ig5bjw.png" alt="5 to the power of 3. Or multiply 5 by itself three times." width="74" height="64"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Represented in Ruby Code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;5**3 

# Equivalent to: 
5 * 5 * 5

#Output: 125
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Exponential operations are useful in solving problems related to exponential growth, such as compounding interest, or exponential decay, such as radioactive half-life.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfdc5qu9yh8hreo0amxa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfdc5qu9yh8hreo0amxa.png" alt="Exponential graph" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The exponential function gives the result of repeatedly multiplying a number by itself, while the logarithm reveals how many such self-multiplications are required to reach that result. The two concepts therefore exist in an intimate reciprocal relationship.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exponential:&lt;/strong&gt; What is the result of multiplying 2 by itself three times?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2**3

#Output: 8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Logarithm:&lt;/strong&gt; How many times do we need to multiply 2 by itself to get 8?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2 * 2 * 2

#Output: 8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The answer is 3. So the logarithm is 3.&lt;/p&gt;

&lt;p&gt;In the following equation, x is the exponent, representing the number of times two is multiplied by itself to result in 8. In other words, x is the power to which two must be raised in order to achieve a product of 8.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq84hfs7hozcqgpfzajgm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq84hfs7hozcqgpfzajgm.png" alt="Exponential Equation" width="162" height="72"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In order to ascertain the value of x, we must apply the logarithmic formula.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi96tgyhuiah73qit622x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi96tgyhuiah73qit622x.png" alt="Logarithmic function" width="226" height="68"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The logarithmic equation can be expressed as follows: the logarithm of 8 with a base of 2. It is an indication of the number of times that 2 must be multiplied by itself to produce 8. The logarithm of 8 with a base of 2 is 3, meaning that 2 must be multiplied by itself 3 times in order to reach 8.&lt;/p&gt;

&lt;p&gt;A good online tool for calculating logarithms is &lt;a href="https://www.symbolab.com/" rel="noopener noreferrer"&gt;https://www.symbolab.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg316x2lpffkjdcar6t1r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg316x2lpffkjdcar6t1r.png" alt="The solution of x is 3" width="312" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Ruby, the Math module is capable of computing logarithms. By utilizing the Math.log method, one can easily calculate the logarithm of any given number. This method is defined as:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Math.log(result, base)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Where &lt;code&gt;result&lt;/code&gt; is the number whose logarithm we want and &lt;code&gt;base&lt;/code&gt; is the base of the logarithm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg7fkk5u8in69n69jpjb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg7fkk5u8in69n69jpjb.png" alt="The parts of the logarithmic formula as they relate to the log method." width="142" height="68"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take the logarithmic function below. We can use the log method to determine the number of times 2 needs to be multiplied by itself to get 8.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjy60ms7hofuue44ke8ao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjy60ms7hofuue44ke8ao.png" alt="The logarithm of 8 with a base of 2." width="148" height="62"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Math.log(8, 2)

# Output: 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
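&lt;p&gt;For completeness, the second argument can be any base, and Ruby also provides the dedicated helpers &lt;code&gt;Math.log2&lt;/code&gt; and &lt;code&gt;Math.log10&lt;/code&gt;. Results are rounded here because floating-point arithmetic can be off in the last decimal place:&lt;/p&gt;

```ruby
# Math.log with an explicit base (rounded for readability).
puts Math.log(8, 2).round(6)   # 3.0
puts Math.log(81, 3).round(6)  # 4.0

# Dedicated helpers for the two most common bases.
puts Math.log2(8)              # 3.0
puts Math.log10(1000)          # 3.0
```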



&lt;p&gt;The logarithmic graph is the mirror image of the exponential graph, a contrast that becomes especially pertinent when considering logprobs: it is this graph that provides the foundation on which logprobs are built.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4p4qfsqxm500vu0b68i9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4p4qfsqxm500vu0b68i9.png" alt="Logarithmic graph" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Natural Logarithms
&lt;/h2&gt;

&lt;p&gt;In mathematics there is a concept of the natural logarithm. It is a specialised form of logarithm that uses the mathematical constant &lt;em&gt;e&lt;/em&gt; as its base.&lt;/p&gt;

&lt;p&gt;This constant e is an omnipresent figure in a multitude of natural phenomena, such as compound interest, population growth, radioactive decay, spread of disease, and the decibel scale. The value of ‘e’ stands at approximately 2.71828. To gain a deeper understanding of its origin, the following video is a great place to start.&lt;/p&gt;
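&lt;p&gt;One way to see where &lt;em&gt;e&lt;/em&gt; comes from is the compound-interest limit: as interest is compounded ever more frequently, (1 + 1/n)**n approaches &lt;em&gt;e&lt;/em&gt;. A quick sketch in Ruby:&lt;/p&gt;

```ruby
# Compounding interest ever more frequently converges on e.
[1, 10, 100, 10_000, 1_000_000].each do |n|
  puts "n = #{n}: #{(1 + 1.0 / n)**n}"
end

puts Math::E  # Ruby's built-in constant: 2.718281828459045
```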

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/pg827uDPFqA"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Here is an example of the natural log expressed mathematically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8q2yoivrlbka9x3omro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8q2yoivrlbka9x3omro.png" alt="Natural logarithm" width="158" height="58"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The formula above asks: to what power must &lt;em&gt;e&lt;/em&gt; be raised in order to get 2?&lt;/p&gt;

&lt;p&gt;The answer is approximately 0.6931.&lt;/p&gt;

&lt;p&gt;There is short-hand in mathematics for representing the natural logarithm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5se6an0jcphdx4hrrw9l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5se6an0jcphdx4hrrw9l.png" alt="Natural logarithm" width="108" height="54"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The two formulas above represent the same expression. ln(2) is shorthand for the power to which &lt;em&gt;e&lt;/em&gt; must be raised in order to get 2.&lt;/p&gt;

&lt;p&gt;In Ruby the natural log is calculated by calling &lt;code&gt;Math.log&lt;/code&gt; with a single argument. If a second argument is given, it is used as the base of the logarithm; otherwise the base defaults to &lt;em&gt;e&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Math.log(2)

# Output: 0.6931
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How logarithms are used in probability calculations
&lt;/h2&gt;

&lt;p&gt;Probabilities are expressed as a percentage, out of the maximum possible outcome of 100%, which signifies complete certainty. Computers, on the other hand, process probability percentages in decimal form. For example, 1 equates to 100%, 0.5 equates to 50%, and so forth.&lt;/p&gt;

&lt;p&gt;Logarithms and probability have an interesting relationship: the graph of a logarithmic function is the key to understanding these two concepts in combination.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo5qc8y2qimyr3kfm6ji.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo5qc8y2qimyr3kfm6ji.png" alt="Natural Logarithm function graphed" width="800" height="752"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Observing the graph above, one’s eye is immediately drawn to the red line, which cuts the X-axis at 1 — representing 100% probability.&lt;/p&gt;

&lt;p&gt;By zooming in on the range of 0 to 1, one is presented with the entire spectrum of probabilities from 0 to 100%. This graph is the key to unlocking the computation efficiencies of probability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyaoz9f6f5gy2oio9hx96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyaoz9f6f5gy2oio9hx96.png" alt="Logarithm graph from 0 to 1." width="754" height="936"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The graph above maps a percentage probability on the x-axis to its logarithmic probability, or logprob, on the y-axis. When the probability is 1, the logprob is 0, and when the probability is 0.5, the logprob is approximately -0.7. From this graph you can read off the logprob of any given percentage.&lt;/p&gt;

&lt;p&gt;Logprobs are zero or negative, with zero corresponding to 100% probability, and as the probability approaches zero the logprob becomes ever more negative. Crucially, logprobs offer a significant computational advantage over working directly with percentages: adding logprobs together yields the same result as multiplying the corresponding probabilities, and addition is the cheaper operation. This dramatically increases the efficiency of GPT.&lt;/p&gt;
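&lt;p&gt;This relationship is easy to verify directly in Ruby. A quick sketch mapping a few probabilities to their logprobs:&lt;/p&gt;

```ruby
# A probability of 1 maps to a logprob of 0; smaller probabilities
# map to increasingly negative logprobs.
[1.0, 0.5, 0.1, 0.01, 0.001].each do |p|
  puts "probability #{p} -> logprob #{Math.log(p).round(4)}"
end
# probability 1.0 -> logprob 0.0
# probability 0.5 -> logprob -0.6931
```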

&lt;h2&gt;
  
  
  Converting probability percentages to logprobs
&lt;/h2&gt;

&lt;p&gt;By leveraging the natural logarithmic formula, one can mathematically convert a percentage to a logprob with ease.&lt;/p&gt;

&lt;p&gt;To ascertain the logprob of 0.3, or 30%, you would apply the following formula:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24wnxl7fetk3qlu1o3g6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24wnxl7fetk3qlu1o3g6.png" alt="Natural logarithm of 0.3" width="146" height="58"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The logprob of 30% is -1.20397.&lt;/p&gt;

&lt;p&gt;In Ruby:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Math.log(0.3)

# Output: -1.20397
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Converting Logprob to Percentage
&lt;/h2&gt;

&lt;p&gt;To convert a logprob back into a percentage, perform the inverse calculation: raise the constant &lt;em&gt;e&lt;/em&gt; to the power of the logprob. This gives us back the probability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7g6wfoqdu0yafjhydgwu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7g6wfoqdu0yafjhydgwu.png" alt="Constant e to the power of the logprob" width="176" height="66"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Ruby we use the exponential method of the Math module. The exponential method uses the constant &lt;em&gt;e&lt;/em&gt; as the base.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Math.exp(-1.20397)

# Output: 0.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Adding Logprobs vs Multiplying Probability Percentages
&lt;/h2&gt;

&lt;p&gt;The power of logprobs lies in their computational efficiency.&lt;/p&gt;

&lt;p&gt;Take the example prompt “What is AI?”. It is composed of 4 tokens.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvv2ojztm6fez99lsftrc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvv2ojztm6fez99lsftrc.png" alt="OpenAI Tokenizer" width="238" height="76"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When this prompt is fed into GPT-3’s playground with a temperature of 0.7 we get the following response.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdetmiuzt6j1u5niyoarm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdetmiuzt6j1u5niyoarm.png" alt="OpenAI playground" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under the hood, GPT needs to determine the probability of the token immediately following the question “What is AI?”. Statistically “AI” has an 85.35% probability of appearing after the prompt “What is AI?”.&lt;/p&gt;

&lt;p&gt;The table below shows the probability percentage for each token as it appears successively after the preceding tokens.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foo6z7032utbce10ctuzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foo6z7032utbce10ctuzg.png" alt="Probability percentage of each token. Orange for token, green for response." width="560" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note:&lt;/em&gt; The ‘What’ at the beginning of the prompt has no probability since it is the origin word of the prompt.&lt;/p&gt;

&lt;p&gt;Mathematically, to calculate the combined probability of the tokens occurring in this sequence, you multiply their probabilities together.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0.1892 * 0.0001 * 0.4550 * 0.9982 * 0.8535

# Output: 0.000007334
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On the other hand by converting these values into logprobs we can arrive at the same answer by adding the logprobs together.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1bk09j1h7j5tj0htrib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1bk09j1h7j5tj0htrib.png" alt="Probabilities converted into logprobs" width="800" height="372"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-1.665 + -9.210 + -0.787 + -0.002 + -0.158

# Output: -11.823

Math.exp(-11.823)

#Output: 0.000007334
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice when the total logprob is converted back into a percentage it is identical to the product of the percentages calculated above.&lt;/p&gt;
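&lt;p&gt;The equivalence rests on the logarithmic identity log(a × b) = log(a) + log(b), and it can be checked in Ruby with the token probabilities from the table above:&lt;/p&gt;

```ruby
probs = [0.1892, 0.0001, 0.4550, 0.9982, 0.8535]

# Route 1: multiply the raw probabilities.
product_of_probs = probs.reduce(:*)

# Route 2: add the logprobs, then convert back with exp.
sum_of_logprobs = probs.sum { |p| Math.log(p) }

puts product_of_probs           # ~7.334e-06
puts Math.exp(sum_of_logprobs)  # the same value, up to floating-point error

# The difference between the two routes is effectively zero.
puts (product_of_probs - Math.exp(sum_of_logprobs)).abs.round(12)  # 0.0
```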

&lt;h2&gt;
  
  
  Benchmarking multiplication vs addition in Ruby
&lt;/h2&gt;

&lt;p&gt;This following performance test runs a comparison between the multiply and addition operations in Ruby.&lt;/p&gt;

&lt;p&gt;I created two functions: 1) repeatedly multiplying a probability and 2) repeatedly adding a logprob, each 500 times to simulate a 500-token response from GPT. Each function is looped 100,000 times to gain a statistically significant benchmark.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def benchmark_prob_multiplication
  start_time = Time.now
  100000.times do
    500.times do
      total =* 0.5
    end
  end
  end_time = Time.now
  puts "Multiply: Time elapsed #{(end_time - start_time)*1000} milliseconds"
end

def benchmark_logprob_addition
  start_time = Time.now
  100000.times do
    500.times do
      total =+ -0.693147
    end
  end
  end_time = Time.now
  puts "Addition: Time elapsed #{(end_time - start_time)*1000} milliseconds"
end

benchmark_prob_multiplication
benchmark_logprob_addition
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output:

#Multiply: Time elapsed 5558.755999999999 milliseconds
#Addition: Time elapsed 2979.3219999999997 milliseconds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can observe, logprob addition is almost twice as fast as probability multiplication. Taking into consideration the considerable amount of traffic handled by ChatGPT alone, this improvement in computational efficiency is of great significance.&lt;/p&gt;
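&lt;p&gt;As an aside, Ruby's standard library ships a &lt;code&gt;Benchmark&lt;/code&gt; module that gives a more conventional way to run the same comparison. This sketch uses 10,000 outer iterations rather than 100,000 to keep the run short; absolute timings will vary by machine:&lt;/p&gt;

```ruby
require 'benchmark'

ITERATIONS = 10_000  # reduced from 100,000 for a quicker run

Benchmark.bm(12) do |x|
  x.report('multiply:') do
    ITERATIONS.times do
      total = 1.0
      500.times { total *= 0.5 }
    end
  end
  x.report('addition:') do
    ITERATIONS.times do
      total = 0.0
      500.times { total += -0.693147 }
    end
  end
end
```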

&lt;p&gt;An interesting note, in the GPT playground, with “show probabilities” turned on, you are able to quickly calculate the total logprob of the entire prompt and response by highlighting the text.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcc6hovaqq6e4i8fslo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcc6hovaqq6e4i8fslo9.png" alt="Total logprob on the GPT prompt and response" width="800" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Accessing token logprob via the API interface
&lt;/h2&gt;

&lt;p&gt;With the GPT API you can access the response token logprobs in the JSON response. By adding the logprobs parameter you can see the logprob of the most likely tokens, as well as the chosen token. The logprobs parameter takes an integer from 1 to 5. If 5 is passed to the API endpoint the response will return the top 5 possible tokens for each token returned. 5 is the maximum value for this parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require 'ruby/openai'
require 'json'

client = OpenAI::Client.new(access_token: '&amp;lt;YOUR_API_KEY&amp;gt;')

prompt = "What is AI?"

response = client.completions(
  parameters: {
    model: "text-davinci-003",
    prompt: prompt,
    temperature: 0.5,
    max_tokens: 10,
    logprobs: 1
  }
)

puts response['choices'][0]['logprobs']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
   "tokens":[
      "\n",
      "\n",
      "AI",
      " (",
      "Art",
      "ificial",
      " Intelligence",
      ")",
      " is",
      " a"
   ],
   "top_logprobs":[
      {
         "\n":-0.001757213
      },
      {
         "\n":-0.0023516808
      },
      {
         "AI":-0.15741464
      },
      {
         " (":-0.27175382
      },
      {
         "Art":-0.18933135
      },
      {
         "ificial":-0.0018770588
      },
      {
         " Intelligence":-0.002153406
      },
      {
         ")":-0.0010475154
      },
      {
         " is":-0.03325031
      },
      {
         " a":-0.37541595
      }
   ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What can you do with logprobs
&lt;/h2&gt;

&lt;p&gt;In the realm of natural language processing, the ability to control the response from GPT is a deft skill. With fine-grained control, repetition within a response can be greatly reduced or even eliminated. As an example, when the prompt “What is AI?” is posed, you may not want the model to repeat the word “AI”. To discourage this, the frequency penalty can be raised, which pushes the probability of already-used tokens towards zero.&lt;/p&gt;
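&lt;p&gt;In the completions API this control is exposed as the &lt;code&gt;frequency_penalty&lt;/code&gt; parameter, a value between -2.0 and 2.0 where positive values penalise tokens that have already appeared. Below is a sketch of the request parameters only; the API key and the call itself are omitted, and the exact values are illustrative:&lt;/p&gt;

```ruby
# Request parameters that discourage repeated tokens (illustrative values).
# Pass this hash to client.completions as in the earlier examples.
parameters = {
  model: "text-davinci-003",
  prompt: "What is AI?",
  temperature: 0.5,
  max_tokens: 50,
  frequency_penalty: 1.0  # positive values lower the logprob of repeated tokens
}

puts parameters[:frequency_penalty]  # 1.0
```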

&lt;p&gt;Furthermore, the logprob can be used to compare various GPT outputs and select the one with the highest probability, or perhaps the lowest probability. This is especially useful when using the n parameter, which requests multiple responses from GPT. Through these methods, fine-grained control can be achieved to ensure an optimized response from GPT.&lt;/p&gt;
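&lt;p&gt;Choosing among multiple candidates can be sketched offline. Given the per-token logprobs for each response (as returned when the logprobs parameter is set), sum them and pick the highest total. The data below is made up for illustration:&lt;/p&gt;

```ruby
# Hypothetical per-token logprobs for two candidate responses.
candidates = [
  ["AI is artificial intelligence.",      [-0.16, -0.27, -0.19, -0.002]],
  ["AI stands for many possible things.", [-1.67, -9.21, -0.79, -0.16]]
]

# The total logprob of a sequence is the sum of its token logprobs;
# the highest total marks the most probable response.
best_text, best_logprobs = candidates.max_by { |_, logprobs| logprobs.sum }

puts best_text                    # AI is artificial intelligence.
puts best_logprobs.sum.round(3)  # -0.622
```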

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Logprobs are a sophisticated concept, but if you take the time to understand them, you will be rewarded with a level of control over GPT’s outputs that is truly impressive. As you deepen your exploration of GPT, you will find that the ability to work with logprobs can open up a whole new world of possibilities for your applications.&lt;/p&gt;




&lt;p&gt;Kane Hooper is the CEO of reinteractive, a specialist Ruby on Rails development company.&lt;/p&gt;

&lt;p&gt;If you need any help with your Rails project you can contact Kane directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="mailto:kane.hooper@reinteractive.com"&gt;kane.hooper@reinteractive.com&lt;/a&gt; &lt;br&gt;
reinteractive.com&lt;/p&gt;

</description>
      <category>devto</category>
      <category>blockchain</category>
      <category>web3</category>
      <category>crypto</category>
    </item>
    <item>
      <title>Transforming Education: Using OpenAI and Ruby to Summarise Wikipedia for Children</title>
      <dc:creator>Kane Hooper</dc:creator>
      <pubDate>Fri, 20 Jan 2023 09:56:56 +0000</pubDate>
      <link>https://dev.to/kanehooper/transforming-education-using-openai-and-ruby-to-summarise-wikipedia-for-children-1fb3</link>
      <guid>https://dev.to/kanehooper/transforming-education-using-openai-and-ruby-to-summarise-wikipedia-for-children-1fb3</guid>
      <description>&lt;p&gt;Personally, I am not a big fan of Wikipedia. When I was studying vector mathematics for my Masters of Machine Learning degree I tried to use Wikipedia to understand some of the key concepts. It was like walking into a buzzsaw. My head certainly felt like it had been split open.&lt;/p&gt;

&lt;p&gt;When the first Encyclopaedia Brittanica was written, it was designed for lay people to look up and understand topics they otherwise wouldn’t be able to access.&lt;/p&gt;

&lt;p&gt;My issue with Wikipedia, especially for technical topics, is that it is often edited by academics and individuals who want to show off their intellectual prowess. It makes reading the articles equivalent in difficulty to reading academic papers. The complexity can often be overwhelming.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hNOty2o5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9odlw9r4n2ko5so0rqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hNOty2o5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9odlw9r4n2ko5so0rqn.png" alt="Obstetrics introduction from Wikipedia" width="880" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That gave me the idea to write a script that would simplify Wikipedia articles.&lt;/p&gt;

&lt;p&gt;That inspired another thought: what if you could summarise all of the knowledge on Wikipedia so that children could understand it? This inspired the concept for WikiKids. We could make the vast array of information on Wikipedia available to children by using OpenAI natural language processing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KV7T5EWa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m1vqh26zyjvwadtmth5m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KV7T5EWa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m1vqh26zyjvwadtmth5m.png" alt="Image of child on computer" width="880" height="880"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, we’ll look at the beginnings of WikiKids with a Ruby code snippet that uses two powerful libraries — OpenAI and wikipedia-client— to retrieve and summarise a Wikipedia page’s introduction.&lt;/p&gt;

&lt;p&gt;The OpenAI library is a powerful natural language processing (NLP) library that allows developers to access the OpenAI platform’s capabilities via a simple API. To process and understand human language, it includes a variety of features such as language modelling, machine learning, and deep learning.&lt;/p&gt;

&lt;p&gt;The wikipedia-client gem is a Ruby library that allows developers to easily access the vast resources of Wikipedia. It provides a simple and easy-to-use API to retrieve information directly from Wikipedia.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;In order to follow along with this code you need an intermediate understanding of Ruby.&lt;/p&gt;

&lt;p&gt;You should also have a basic understanding of the OpenAI API. I have written a beginners guide to the OpenAI API in the article below.&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/kanehooper" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Uk9BB4sw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--wPiXOUO8--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/1006947/ec5818b5-a4ba-4daa-bf81-fb5dcacf53bb.jpeg" alt="kanehooper"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/kanehooper/how-to-integrate-ruby-with-openai-gpt-3-5h96" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to integrate Ruby with OpenAI (GPT-3)&lt;/h2&gt;
      &lt;h3&gt;Kane Hooper ・ Jan 12 ・ 4 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#ruby&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#openai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#gpt3&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  The Code
&lt;/h2&gt;

&lt;p&gt;Goal: To write a Ruby script which returns the introduction section from a Wikipedia page and summarises it in a way a 10-year-old can understand.&lt;/p&gt;

&lt;p&gt;In this example we will use the AI to summarise the introduction on Obstetrics.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Wikipedia Introduction (Obsetrics):

Obstetrics is the field of study concentrated on pregnancy,
childbirth and the postpartum period. As a medical specialty,
obstetrics is combined with gynecology under the discipline
known as obstetrics and gynecology (OB/GYN), which is a
surgical field.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Begin by installing the two gems.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gem install ruby-openai
gem install wikipedia-client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will need an OpenAI account and an API key. At the time of writing you can sign up for a free account at &lt;a href="https://openai.com/api/"&gt;https://openai.com/api/&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require 'ruby/openai'
require 'wikipedia-client'

client = OpenAI::Client.new(access_token: 'YOUR_API_KEY')

wikiPage = 'Obstetrics'

page = Wikipedia.find(wikiPage)
introduction = page.summary

prompt = "Summarize the following Wikipedia introduction text so a 10-year-old can understand it: \n#{introduction}"

response = client.completions(
  parameters: {
    model: "text-davinci-003",
    prompt: prompt,
    temperature: 0.5,
    max_tokens: 2000,
  }
)

puts response['choices'][0]['text'].lstrip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few seconds the AI provides its response.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AI Response:

Obstetrics is a medical field that focuses on the care of
pregnant women, the process of giving birth and what happens
afterwards. It is linked to gynecology, which is the study of
women's health.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compare the AI output to the Wikipedia entry from earlier. I think it has done a nice job simplifying the content.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the code is doing
&lt;/h2&gt;

&lt;p&gt;Using your API key, the Ruby script creates an instance of the OpenAI client. It looks up the Wikipedia page on “Obstetrics” using the Wikipedia-client library. The &lt;code&gt;page.summary&lt;/code&gt; property returns the article’s introduction. This is the section at the beginning of a Wiki page.&lt;/p&gt;

&lt;p&gt;The code then creates a prompt. This is the instruction that will be sent to the OpenAI model. The prompt is the most important part of the code, as it is effectively our instruction to the AI model. Prompt engineering (crafting prompts that return the required results) is an important topic; there is an art to phrasing prompts in a way that causes the AI to provide us with the responses we want.&lt;/p&gt;

&lt;p&gt;The code then sends this prompt to the OpenAI API and asks for a response by using the &lt;code&gt;completions&lt;/code&gt; method of the client object. The parameters passed to the method include the model to use (&lt;code&gt;text-davinci-003&lt;/code&gt;, the most advanced completion model at the time of writing), the &lt;code&gt;temperature&lt;/code&gt; of the response (a number from 0 to 1 representing the ‘creativeness’ of the response, which you can adjust to manage the AI’s creativity), and the &lt;code&gt;max_tokens&lt;/code&gt; to be used in the response. A token is roughly 4 characters.&lt;/p&gt;

&lt;p&gt;Finally, the function writes the AI output to the console, which is a summary of the Wikipedia introduction about Obstetrics that a 10-year-old can understand.&lt;/p&gt;

&lt;h2&gt;
  
  
  WikiKids
&lt;/h2&gt;

&lt;p&gt;This script is just the beginning. I foresee a website that allows kids to visit any Wikipedia page, and the AI model will render the simplified content in real-time.&lt;/p&gt;

&lt;p&gt;Imagine giving kids the ability to ask the AI questions about the content and get meaningful answers.&lt;/p&gt;

&lt;p&gt;I modified the prompt to respond to children’s questions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;prompt = "Provide a very friendly and encouraging response to
          the following question that a 10-year-old can
          understand: #{question}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here are some real questions and answers from the AI model. Each prompt is modified so the AI gives an answer a 10-year-old could understand.&lt;/p&gt;

&lt;p&gt;Question: &lt;em&gt;What should I study at school to become an obstetrics doctor?&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AI Response: 

To become an obstetric doctor, you should study science
subjects like biology and chemistry in high school. Then you
would have to go to college and get a degree in a related
field like pre-medicine or biology. Then you would go to
medical school to learn how to become a doctor and after that, 
you would complete a special training program called a 
residency in obstetrics. It might seem like a lot of work but 
remember, you're not alone, you can do it and you will have 
lot of people to help you along the way.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Question: &lt;em&gt;I don’t understand what a Gynaecologist is?&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AI Response: 

A gynaecologist is a doctor who helps women with their 
reproductive health. It's totally normal not to know what it 
is, there are many different types of doctors and it can be 
hard to keep track of them all. But don't worry, if you're 
interested in becoming a gynaecologist, you have plenty of 
time to learn more about it and see if it's something you're 
interested in pursuing.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Educational Impacts
&lt;/h2&gt;

&lt;p&gt;I teach Finance at Deakin University in Australia. One problem I have always wanted to solve is how to reach and support every student, regardless of their ability to study.&lt;/p&gt;

&lt;p&gt;With AI we could provide educational content that can be modified and adjusted to an individual student’s needs. We are no longer constrained by the need to provide a single educational model, but can instead meet each student or child where they are and give them the means to drive their own learning at a pace that suits them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;I am very interested in creating educational solutions utilising AI. If you would like to help me build WikiKids, please reach out; I would be very interested to speak with you.&lt;/p&gt;




&lt;p&gt;Kane Hooper is the CEO of reinteractive, the longest running dedicated Ruby on Rails development firm in the world.&lt;/p&gt;

&lt;p&gt;You can contact Kane directly for any help with your Ruby on Rails application.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://www.reinteractive.com"&gt;www.reinteractive.com&lt;/a&gt;&lt;br&gt;
&lt;a href="mailto:kane.hooper@reinteractive.com"&gt;kane.hooper@reinteractive.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>rails</category>
      <category>openai</category>
      <category>gpt3</category>
    </item>
    <item>
      <title>Creating an AI email parser using Ruby and OpenAI (GPT-3)</title>
      <dc:creator>Kane Hooper</dc:creator>
      <pubDate>Fri, 20 Jan 2023 09:37:59 +0000</pubDate>
      <link>https://dev.to/kanehooper/creating-an-ai-email-parser-using-ruby-and-openai-gpt-3-1mb4</link>
      <guid>https://dev.to/kanehooper/creating-an-ai-email-parser-using-ruby-and-openai-gpt-3-1mb4</guid>
      <description>&lt;p&gt;You may frequently need to extract crucial information from emails in your role as a developer. This might be done to extract the sender’s identity, a list of the companies referenced in the email, or the email’s overall subject. An email parser is useful since manually extracting this information can be time-consuming and error-prone.&lt;/p&gt;

&lt;p&gt;In this article, we’ll show you how to build an email parser that can easily extract crucial things from emails using Ruby and OpenAI’s GPT-3.&lt;/p&gt;

&lt;p&gt;If you need an introduction to OpenAI, you can get the basics in this article:&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/kanehooper" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Uk9BB4sw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--wPiXOUO8--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/1006947/ec5818b5-a4ba-4daa-bf81-fb5dcacf53bb.jpeg" alt="kanehooper"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/kanehooper/how-to-integrate-ruby-with-openai-gpt-3-5h96" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to integrate Ruby with OpenAI (GPT-3)&lt;/h2&gt;
      &lt;h3&gt;Kane Hooper ・ Jan 12 ・ 4 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#ruby&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#openai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#gpt3&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;In order to follow this article you need to be familiar with Ruby methods, working with API keys and using Ruby gems.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Email Parser
&lt;/h2&gt;

&lt;p&gt;In our example we will create an AI email parser that provides us with the name of the email sender, the company they are from, and the theme of the email. This might be useful for a customer service organisation to help them prioritise which customers to respond to first.&lt;/p&gt;

&lt;p&gt;Of course, some of this information can already be extracted using standard Ruby gems, but the purpose of this example is to show how simple it is to parse an email for the information you need using AI.&lt;/p&gt;

&lt;p&gt;First, let’s install the required gems:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gem install ruby-openai&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, we’ll require the necessary libraries in our Ruby file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;require 'ruby/openai'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let’s define a function that takes an email as a string and returns the information we require from the AI. To generate responses to a prompt asking for the desired information, we will use OpenAI’s GPT-3 model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require "ruby/openai"

def extract_entities(email)
  client = OpenAI::Client.new(access_token: 'YOUR_API_KEY')

  prompt = "Please extract the company names, 
            email sender's name, and theme of the following email:\n\n#{email}"

  response = client.completions(
    parameters: {
      model: "text-davinci-003",
      prompt: prompt,
      temperature: 0.5,
      max_tokens: 1000
    }
  )

  puts response['choices'][0]['text']
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Parameters
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;model:&lt;/em&gt; This is the name of the AI model to utilise. &lt;code&gt;text-davinci-003&lt;/code&gt; is the latest and most advanced model as of this writing.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;prompt:&lt;/em&gt; The prompt is the key variable here. This is the instruction provided to OpenAI. The accuracy of your response will be determined by how well you craft your prompt. This is known as prompt engineering.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;prompt = "Please extract the company names, email sender's name, and theme of the following email:\n\n#{email}"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;temperature:&lt;/em&gt; This tells the model how ‘creative’ to be. 0.1 will provide standard responses and is good when your answer is definite. 0.9 will provide more diverse responses from the model and is good for creative tasks.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;max_tokens:&lt;/em&gt; The maximum size of the response. A token is equivalent to about 4 characters, and the maximum limit is 4096. You can use this to limit the size of your response.&lt;/p&gt;
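&lt;p&gt;Since a token averages about 4 characters, you can sanity-check the size of a prompt before sending it. This is a rough heuristic of my own, not the tokenizer OpenAI actually uses:&lt;/p&gt;

```ruby
# OpenAI's 4096-token limit covers the prompt and the response combined.
MAX_TOKENS_LIMIT = 4096

# Rough estimate: a token averages about 4 characters of English text.
# The real tokenizer (byte-pair encoding) can differ, so leave headroom.
def estimate_tokens(text)
  (text.length / 4.0).ceil
end

# True if the prompt plus the requested response size fits the limit.
def within_token_limit?(prompt, max_response_tokens)
  MAX_TOKENS_LIMIT >= estimate_tokens(prompt) + max_response_tokens
end
```

&lt;p&gt;A 1,000-character email estimates to roughly 250 tokens, leaving plenty of headroom for a 1,000-token response.&lt;/p&gt;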

&lt;h2&gt;
  
  
  Testing our model
&lt;/h2&gt;

&lt;p&gt;Let’s test our method with an example email. Add the following code at the bottom of your file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;email = "Dear Kane,

  I have a complaint about the service. Tom is causing a lot of problems and I don't like what is happening.

  Regards,

  Jenny McNamara
  Marketing Manager
  Big Buys"

extract_entities(email)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Output:
# Company name: "Big Buys",
# Email senders name: "Jenny McNamara",
# Theme: "Complaint about service"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, our email parser was able to successfully extract the company names, email sender’s name, and theme of the email.&lt;/p&gt;
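&lt;p&gt;Because the completion arrives as plain text, it is worth turning those labelled lines into a Ruby hash before using them elsewhere. The sketch below assumes the &lt;code&gt;Label: "value"&lt;/code&gt; shape shown in the output; the model’s formatting can drift between runs, so unmatched lines are simply skipped:&lt;/p&gt;

```ruby
# Parse lines shaped like `Label: "value"` into a hash. Lines that don't
# match the expected shape are ignored rather than raising.
def parse_entities(text)
  text.each_line.with_object({}) do |line, result|
    if (match = line.match(/\A\s*([^:]+):\s*"([^"]*)"/))
      result[match[1].strip] = match[2]
    end
  end
end

output = [
  'Company name: "Big Buys",',
  'Email senders name: "Jenny McNamara",',
  'Theme: "Complaint about service"'
].join("\n")

parse_entities(output)
# => {"Company name"=>"Big Buys", "Email senders name"=>"Jenny McNamara", "Theme"=>"Complaint about service"}
```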

&lt;h2&gt;
  
  
  Prompt Engineering
&lt;/h2&gt;

&lt;p&gt;There is a problem with our output. While it has provided some information about the email, it would be more useful if it gave us a little more detail. This is where prompt engineering comes in. The success of your output is primarily determined by the quality of your prompt.&lt;/p&gt;

&lt;p&gt;Prompt engineering is the most complicated and time-consuming element of working with OpenAI. In a future article I will take a deep dive into prompt engineering in a lot more detail. For now, let’s just test what happens if we ask the AI to provide us a summary of the email, rather than the theme.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;prompt = "Please extract the company names, email sender's name, and summary of the following email:\n\n#{email}"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Output:
# Company name: "Big Buys",
# Email senders name: "Jenny McNamara",
# Summary: "Complaint about service, Tom causing problems"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We now have more context about the complaint. You could continue to test and trial different prompts until the AI provides you with the exact output you are looking for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;p&gt;The example provided above could, with some work, be used by customer service teams to identify important keywords and phrases in customer service complaints and prioritize responses.&lt;/p&gt;

&lt;p&gt;Here are some other use cases for AI email parsing:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Analyzing customer feedback to identify common issues and trends.&lt;/li&gt;
&lt;li&gt;Automatically categorizing and routing emails to the appropriate department or team member.&lt;/li&gt;
&lt;li&gt;Extracting relevant information, such as names and contact details, from emails to update customer records.&lt;/li&gt;
&lt;li&gt;Summarizing the contents of long emails to provide a quick overview for the recipient.&lt;/li&gt;
&lt;li&gt;Generating automated responses to common inquiries to improve customer service efficiency.&lt;/li&gt;
&lt;li&gt;Identifying potential sales leads in incoming emails.&lt;/li&gt;
&lt;li&gt;Translating emails written in foreign languages for improved communication with international customers.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are numerous other applications for an email parser like this. Experiment with the prompt to see what insights you can glean from your emails.&lt;/p&gt;
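&lt;p&gt;As a sketch of use case 2 above, the extracted summary can drive simple keyword routing. The team names and keywords here are hypothetical:&lt;/p&gt;

```ruby
# Hypothetical routing rules: match keywords in the AI-extracted summary
# to a destination team. Unmatched emails fall through to a default queue.
ROUTES = {
  /complaint|refund/i => :customer_support,
  /invoice|payment/i  => :accounts,
  /quote|pricing/i    => :sales
}.freeze

def route_for(summary)
  ROUTES.each { |pattern, team| return team if summary.match?(pattern) }
  :general_inbox
end

route_for("Complaint about service, Tom causing problems")  # => :customer_support
route_for("Request for pricing on a bulk order")            # => :sales
```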

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;I hope you found this tutorial on how to create an email parser with Ruby and OpenAI’s GPT-3 useful. Many tedious tasks can be automated and time saved with a little creativity and programming skills.&lt;/p&gt;

&lt;p&gt;Let’s see what creative ideas you can come up with using AI email parsing.&lt;/p&gt;




&lt;p&gt;Kane Hooper is the CEO of reinteractive, a dedicated Ruby on Rails development firm.&lt;/p&gt;

&lt;p&gt;You can contact Kane directly for any help with your Ruby on Rails application.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://www.reinteractive.com"&gt;www.reinteractive.com&lt;/a&gt;&lt;br&gt;
&lt;a href="mailto:kane.hooper@reinteractive.com"&gt;kane.hooper@reinteractive.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>gpt3</category>
      <category>openai</category>
      <category>rails</category>
    </item>
    <item>
      <title>Mastering the GPT-3 Temperature Parameter with Ruby</title>
      <dc:creator>Kane Hooper</dc:creator>
      <pubDate>Wed, 18 Jan 2023 06:43:27 +0000</pubDate>
      <link>https://dev.to/kanehooper/mastering-the-gpt-3-temperature-parameter-with-ruby-38k3</link>
      <guid>https://dev.to/kanehooper/mastering-the-gpt-3-temperature-parameter-with-ruby-38k3</guid>
      <description>&lt;p&gt;In previous articles I have covered how you can use OpenAI in your Ruby application to take advantage of the GPT-3 AI model. While it is powerful, it can be complicated to use, especially when it comes to getting the specific responses you want. There are two key components to achieving success with GPT-3; the prompt and the temperature parameter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jtuNMhGs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjusllxg98ugpnhacijo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jtuNMhGs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjusllxg98ugpnhacijo.png" alt="Impressionist painting of a temerature guage by OpenAI" width="880" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, we’ll look at how to master the temperature parameter when using OpenAI with Ruby. If you need an introduction to working with Ruby and OpenAI check out this article:&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/kanehooper" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Uk9BB4sw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--wPiXOUO8--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/1006947/ec5818b5-a4ba-4daa-bf81-fb5dcacf53bb.jpeg" alt="kanehooper"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/kanehooper/how-to-integrate-ruby-with-openai-gpt-3-5h96" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to integrate Ruby with OpenAI (GPT-3)&lt;/h2&gt;
      &lt;h3&gt;Kane Hooper ・ Jan 12 ・ 4 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#ruby&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#openai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#gpt3&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  How GPT generates its output
&lt;/h2&gt;

&lt;p&gt;GPT-3 was trained on a massive dataset of over 45TB of text, including books, articles, and webpages. This data was used to train the AI to recognise patterns and respond appropriately. GPT-3 was also trained with reinforcement learning, allowing it to learn from its mistakes and improve its accuracy over time.&lt;/p&gt;

&lt;p&gt;When you provide a prompt to the AI model as a developer, it begins generating output word by word, based on the probabilities of consecutive words learned from its training data.&lt;/p&gt;

&lt;p&gt;The OpenAI playground’s “show probabilities” setting allows developers to see the probabilities of each word in the output. This can help you understand why the AI chose a particular word or phrase. It displays a visual representation of how the temperature parameter is likely to affect the output. Developers can use this knowledge to fine-tune their applications to produce the desired results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Eem1_zl9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1d69rtx1pj53klvz6ybg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Eem1_zl9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1d69rtx1pj53klvz6ybg.png" alt="The show probabilities setting is found in the bottom right of the OpenAI playground." width="556" height="874"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is an example using the OpenAI playground.&lt;/p&gt;

&lt;p&gt;I provided the prompt:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;The capital city of Australia is&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;With the temperature set low I received the following output:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Canberra.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This, by the way, is the correct answer. But using the “show probabilities” tool I was able to see the other candidate words that GPT-3 might draw on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DLRiwXGx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qitivy2qw06ouzknf86f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DLRiwXGx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qitivy2qw06ouzknf86f.png" alt="The potential probabilities of words following “The capital city of Australia is”" width="880" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Based on GPT-3’s training data there is a 0.08% probability of the next word being ‘Sydney.’ Depending on the temperature setting, there is a possibility that GPT will respond with ‘Sydney’ rather than ‘Canberra’; the temperature parameter governs how likely that is.&lt;/p&gt;

&lt;p&gt;GPT-3 is not infallible. Its accuracy is ultimately determined by the probabilities in the data it was trained on. This shows that, during training, 0.08% of the time the model encountered the text “The capital city of Australia is”, the word following was ‘Sydney’. Be very aware of this when working with the AI model. It is only as good as the accuracy of its training data.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Temperature Parameter?
&lt;/h2&gt;

&lt;p&gt;The temperature parameter is the setting in OpenAI that affects the randomness of the output. A lower temperature will result in more predictable output, while a higher temperature will result in more random output. The temperature parameter is set between 0 and 1, with 0 being the most predictable and 1 being the most random.&lt;/p&gt;

&lt;p&gt;With a temperature of 0, GPT-3 will select the highest probable response each time. When the temperature parameter is set to 1, the randomness of the output is increased. This means that the AI model will produce more unpredictable results and is less likely to repeat the same output for a given prompt. As the temperature is increased, the probabilities become more spread out and random. As the temperature is decreased, the probabilities become more concentrated and predictable.&lt;/p&gt;
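&lt;p&gt;You can build an intuition for this by applying a temperature to a set of next-word probabilities yourself. This is a simplified model of what sampling does (the real implementation works on logits, but the effect is the same): temperatures below 1 concentrate the probability mass on the most likely word, while higher temperatures spread it out.&lt;/p&gt;

```ruby
# Simplified illustration of temperature: raise each probability to the
# power 1/t and renormalise. Temperatures below 1 sharpen the distribution
# (more predictable); temperatures above 1 flatten it (more random).
def apply_temperature(probs, t)
  scaled = probs.transform_values { |p| p**(1.0 / t) }
  total  = scaled.values.sum
  scaled.transform_values { |p| p / total }
end

# Probabilities in the spirit of the Canberra/Sydney example above.
next_word = { "Canberra" => 0.90, "Sydney" => 0.0008, "Melbourne" => 0.0005 }

apply_temperature(next_word, 0.2)["Canberra"]  # close to 1.0: almost always Canberra
apply_temperature(next_word, 1.5)["Sydney"]    # noticeably larger than before
```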

&lt;h2&gt;
  
  
  Suggested Temperature Settings
&lt;/h2&gt;

&lt;p&gt;Here are some recommended temperature settings for different use cases. Note, the best way to achieve the desired output with the temperature setting is to test it within the playground and use the “show probabilities” tool to debug your output.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VuBbXsP9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0yggpdiba4x5ahidigk4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VuBbXsP9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0yggpdiba4x5ahidigk4.png" alt="Temperature use cases" width="782" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note the difference between the code generation and refactoring. When using OpenAI for code generation, it is important to use a higher temperature setting, such as 0.8 or higher. This will allow the AI to generate more unpredictable and creative code, which can be particularly useful for generating complex programs.&lt;/p&gt;

&lt;p&gt;For code refactoring use a lower temperature setting, such as 0.2 or 0.3. This will ensure that the AI produces more accurate responses and is less likely to make mistakes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using the Temperature Parameter in Ruby
&lt;/h2&gt;

&lt;p&gt;Here is an example of how to use the temperature parameter in Ruby. This code returns the sentiment of an email as positive, negative or neutral.&lt;/p&gt;

&lt;p&gt;A temperature setting of around 0.5 is recommended for sentiment analysis. This ensures that the AI can correctly interpret the sentiment of the text and deliver the desired results.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require "ruby/openai"

def analyse_email(email)
  client = OpenAI::Client.new(access_token: 'YOUR_API_TOKEN')

  prompt = "Provide a sentiment analysis of the following email. 
            Your response should be positive, neutral or negative. 
            \n\nEmail: #{email}"

  response = client.completions(
    parameters: {
      model: "text-davinci-003",
      prompt: prompt,
      temperature: 0.5,
      max_tokens: 10
    }
  )

  puts response['choices'][0]['text'].lstrip
end

email = "I am happy with the service received by your team. Even though it
         was late, they provided a very high quality result which I am 
         grateful for."

analyse_email(email)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Positive&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The temperature parameter in OpenAI is a critical setting that can be used to control the randomness of the AI’s output. Using the temperature parameter in Ruby is straightforward and requires only a few lines of code. With this knowledge, Ruby developers can take advantage of the temperature parameter when developing AI applications with OpenAI.&lt;/p&gt;




&lt;p&gt;Kane Hooper is the CEO of reinteractive, the longest running dedicated Ruby on Rails development firm in the world.&lt;/p&gt;

&lt;p&gt;You can contact Kane directly for any help with your Ruby on Rails application.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://www.reinteractive.com"&gt;www.reinteractive.com&lt;/a&gt;&lt;br&gt;
&lt;a href="mailto:kane.hooper@reinteractive.com"&gt;kane.hooper@reinteractive.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>openai</category>
      <category>gpt3</category>
      <category>ai</category>
    </item>
    <item>
      <title>How to get out of Regex hell using GPT-3 (OpenAI)</title>
      <dc:creator>Kane Hooper</dc:creator>
      <pubDate>Sun, 15 Jan 2023 10:30:00 +0000</pubDate>
      <link>https://dev.to/kanehooper/how-to-get-out-of-regex-hell-using-gpt-3-openai-59a9</link>
      <guid>https://dev.to/kanehooper/how-to-get-out-of-regex-hell-using-gpt-3-openai-59a9</guid>
      <description>&lt;p&gt;For the past 5-years I have managed approximately 100 senior software developers and almost 1-for-1 they hate dealing with Regex.&lt;/p&gt;

&lt;p&gt;They especially get frustrated when I say, “Why don’t you just validate it with Regex? It’s simple right?”&lt;/p&gt;

&lt;p&gt;Well, Regex pain is a thing of the past. By using GPT-3 you can ask the AI to generate the Regex pattern through the use of natural language. Just tell the AI model what you want and it will provide a Regex pattern to include in your code. You can even go so far as to have the AI generate the function for you.&lt;/p&gt;

&lt;p&gt;And just to show how crazy simple it is, I had the AI output the code in Javascript, Ruby, Python, Rust, Go and C++ (I am not familiar with those last three).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof0wugbz9y1a733s016x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof0wugbz9y1a733s016x.png" alt="Am impressionist painting of a Regex pattern by Dalle-2" width="800" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The key to generating the Regex is the prompt. This is the instruction to the AI. It sometimes requires some fiddling and playing around to get the exact response you are looking for from the AI.&lt;/p&gt;

&lt;p&gt;I did find the Regex pattern provided by the AI changed between languages. This is likely due to the temperature setting, which I had at 0.7; that is quite high and introduces a high degree of randomness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prompt
&lt;/h2&gt;

&lt;p&gt;Here is the prompt I used to instruct the AI:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You are a Ruby developer. Write a function that takes a string and validates if it is an email address using Regex.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You will notice in the examples below I am using the OpenAI playground as opposed to ChatGPT. I find the playground a lot more efficient to work with and it gives you greater control over the parameters. You will need to sign up for an OpenAI account to get access to the playground.&lt;/p&gt;

&lt;h2&gt;
  
  
  Javascript
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fudsfw573nyrcwa46i1i8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fudsfw573nyrcwa46i1i8.png" alt="Javascript code" width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Ruby
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2l02kz048kbz2boxwpc9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2l02kz048kbz2boxwpc9.png" alt="Ruby code" width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Python
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbd17uja3vxcix2o99flp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbd17uja3vxcix2o99flp.png" alt="Python code" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Rust
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fred1zcy69rrfix29im3e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fred1zcy69rrfix29im3e.png" alt="Rust code" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Go
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frafhcqjak8ylkx9fqgsi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frafhcqjak8ylkx9fqgsi.png" alt="Go code" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  C++
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4psg9yd74dw9ptlig51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4psg9yd74dw9ptlig51.png" alt="C++ code" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The good news about GPT providing Regex patterns so quickly is that I would feel more confident using Regex where I might previously have tried other approaches to manipulating code. But one big warning: TEST, TEST, TEST. GPT is not infallible; it makes mistakes. It is not a full substitute for an intelligent and skilled developer.&lt;/p&gt;
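&lt;p&gt;That warning can be made concrete with a tiny test harness. The pattern below is typical of what GPT-3 returns for this prompt (an example, not the exact output of any one run), and the cases include a few that AI-generated patterns commonly get wrong:&lt;/p&gt;

```ruby
# A pattern typical of AI-generated email validators. Treat it as untrusted
# until it passes your own cases.
EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-]+(\.[a-z\d\-]+)*\.[a-z]+\z/i

def valid_email?(string)
  EMAIL_REGEX.match?(string)
end

CASES = {
  "jenny@bigbuys.com"       => true,
  "kane.hooper@example.com" => true,
  "no-at-sign.example.com"  => false,
  "spaces in@example.com"   => false,
  "missing-tld@example"     => false
}.freeze

CASES.each do |input, expected|
  status = valid_email?(input) == expected ? "PASS" : "FAIL"
  puts format("%-28s %s", input, status)
end
```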




&lt;p&gt;Here are some of my other articles on programming and GPT-3:&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/kanehooper" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1006947%2Fec5818b5-a4ba-4daa-bf81-fb5dcacf53bb.jpeg" alt="kanehooper"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/kanehooper/how-to-integrate-ruby-with-openai-gpt-3-5h96" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to integrate Ruby with OpenAI (GPT-3)&lt;/h2&gt;
      &lt;h3&gt;Kane Hooper ・ Jan 12 '23&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#discuss&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#coding&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#learning&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;



&lt;div class="ltag__link"&gt;
  &lt;a href="/kanehooper" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1006947%2Fec5818b5-a4ba-4daa-bf81-fb5dcacf53bb.jpeg" alt="kanehooper"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/kanehooper/how-to-generate-gpt-output-in-json-format-for-ruby-developers-3g66" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to Generate GPT Output in JSON Format for Ruby developers&lt;/h2&gt;
      &lt;h3&gt;Kane Hooper ・ Jan 14 '23&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#welcome&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#community&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#dotnet&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#csharp&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>webdev</category>
      <category>softwaredevelopment</category>
      <category>ui</category>
      <category>discuss</category>
    </item>
    <item>
      <title>How to Generate GPT Output in JSON Format for Ruby developers</title>
      <dc:creator>Kane Hooper</dc:creator>
      <pubDate>Sun, 15 Jan 2023 10:18:00 +0000</pubDate>
      <link>https://dev.to/kanehooper/how-to-generate-gpt-output-in-json-format-for-ruby-developers-4mkf</link>
      <guid>https://dev.to/kanehooper/how-to-generate-gpt-output-in-json-format-for-ruby-developers-4mkf</guid>
      <description>&lt;p&gt;I was playing around with OpenAI (GPT-3) today, building a reasonably complicated email parser for a client. I was running into issues working with the AI’s response. Because OpenAI (GPT-3) is based on a natural language model the response is always a string. While you can prompt the AI model on how you would like it to structure this string, it was going to require me to write processing methods to extract the output I needed.&lt;/p&gt;

&lt;p&gt;I was looking for a way to easily convert the response into JSON format. Then I had a bright idea: why not ask ChatGPT how to get responses in JSON? In this blog post, I’ll show you how to do it quickly and easily with Ruby.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2w8zhqx43yb0rt7dr86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2w8zhqx43yb0rt7dr86.png" alt="Image description" width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are new to OpenAI, I have written an article on how to get started using Ruby.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;An intermediate understanding of Ruby.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we will build
&lt;/h2&gt;

&lt;p&gt;We are going to write a short Ruby script that will extract the sender's first name and a summary of the email. The AI will return the data in JSON so it can easily be manipulated within the code.&lt;/p&gt;

&lt;p&gt;If you are interested in creating an email parser with OpenAI I have written an article about that:&lt;/p&gt;



&lt;h2&gt;
  
  
  Instructions
&lt;/h2&gt;

&lt;p&gt;Here are the instructions to implement our code.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install openai gem&lt;/li&gt;
&lt;li&gt;Configure openai with the API key&lt;/li&gt;
&lt;li&gt;Create the prompt (instructions) to the AI requesting the response in JSON&lt;/li&gt;
&lt;li&gt;Call the createCompletion method and get the JSON&lt;/li&gt;
&lt;li&gt;Convert the JSON into a Ruby object&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Step 1: Install the openai gem
&lt;/h2&gt;

&lt;p&gt;First things first, you’ll need to install the OpenAI Ruby gem. You can do this by running the following command in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gem install ruby-openai&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2: Require the openai gem
&lt;/h2&gt;

&lt;p&gt;You will need an OpenAI API key, which you can obtain from this URL: &lt;a href="https://openai.com/api/" rel="noopener noreferrer"&gt;https://openai.com/api/&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require 'ruby/openai'

openai = OpenAI::Client.new(access_token: 'YOUR_API_KEY')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Step 3: Create the prompt
&lt;/h3&gt;

&lt;p&gt;Start by adding the text for the email we want the AI model to parse.&lt;/p&gt;

&lt;p&gt;Now for the prompt. This is the most important part of working with the OpenAI model. You need to craft the prompt in such a way that it will provide the results you are specifically looking for.&lt;/p&gt;

&lt;p&gt;We’ll ask the AI to extract the first name and summary from the email and return it as JSON.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;email = "I would love to purchase one of your new X-225 
         lawn mowers. Do you still have some in stock? If 
         so I will purchase it today. 
         \n\nRegards,\nJimmy Jackson"

prompt = "Analyse the following email and return the author's
          first name and a summary of the email. Your
          response should be in JSON. \n\n\"#{email}\""
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The AI model has been trained to respond to natural language requests. You will find much better results if you make your requests in a similar manner.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Call the completions method
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;completions&lt;/code&gt; method is how we make our request to the AI model. It takes several parameters. The most important are &lt;code&gt;model&lt;/code&gt;, &lt;code&gt;prompt&lt;/code&gt;, &lt;code&gt;temperature&lt;/code&gt; and &lt;code&gt;max_tokens&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;model: The AI model to use. The most advanced model is the &lt;code&gt;text-davinci-003&lt;/code&gt; model.&lt;/p&gt;

&lt;p&gt;prompt: This is the instruction provided to the AI.&lt;/p&gt;

&lt;p&gt;temperature: This is a number between 0 and 1 that sets the ‘creativity’ of the AI’s response. We don’t want the AI to be particularly creative, so we will set it quite low for this example.&lt;/p&gt;

&lt;p&gt;max_tokens: A token is roughly 4 characters of English text. This parameter caps the length of the response; the prompt plus the response must fit within the model’s context window. We can keep it low for this example.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response = openai.completions(
 parameters: {
   model: "text-davinci-003",
   prompt: prompt,
   temperature: 0.3,
   max_tokens: 500,
 }
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


##Step 5. Convert the JSON into a Ruby object

We will use the built-in JSON library in Ruby to convert JSON into a Ruby object. To do this, we use the JSON.parse() method.



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;jsonResponse = JSON.parse(response['choices'][0]['text'].lstrip)&lt;/p&gt;

&lt;p&gt;puts "Email Author: #{jsonResponse["firstName"]}"&lt;br&gt;
puts "Summary: #{jsonResponse["summary"]}"&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

##The output



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Email Author: Jimmy&lt;br&gt;
Summary: The author is interested in purchasing one of the new X-225 lawn mowers and inquiring about availability.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


##I found an issue

As I was testing my code I found an issue. The AI model was not providing the same parameter names in the JSON response each time.

Sometimes it set the first parameter to “authorsFirstName”, this of course caused to program to terminate.

I han to reengineer the prompt a few times until I found a wording which provided a consistent response each time.



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;prompt = "Analyse the following email and return the author's first name and a summary of the email. Your response should be in JSON format with two parameters 'firstName' and 'summary'. \n\n'#{email}'"&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

Effectively I am telling the AI model how to structure the JSON output. 

Prompt engineering is definitely an art rather than an exacting science. You have to think like the AI model and adjust the prompt to get the results you are looking for. For an application like an email parser you have to be very explicit about how the data is presented. For something more creative such as providing suggestions for a blog title, you want to allow the model to be more deterministic and ‘creative’.

I think there will be a job 5 years from now known as a Prompt Engineer. 

##Summary

And there you have it! With just a few lines of code, you can easily generate OpenAI (GPT) output in JSON format for your Ruby application.

Good luck, and happy coding!

---

Kane Hooper is the CEO of reinteractive, the longest running dedicated Ruby on Rails development firm in the world.
You can contact Kane directly for any help with your Ruby on Rails application.

www.reinteractive.com

kane.hooper@reinteractive.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>azure</category>
      <category>deployment</category>
      <category>cloudcomputing</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Generate GPT Output in JSON Format for Ruby developers</title>
      <dc:creator>Kane Hooper</dc:creator>
      <pubDate>Sat, 14 Jan 2023 10:14:48 +0000</pubDate>
      <link>https://dev.to/kanehooper/how-to-generate-gpt-output-in-json-format-for-ruby-developers-3g66</link>
      <guid>https://dev.to/kanehooper/how-to-generate-gpt-output-in-json-format-for-ruby-developers-3g66</guid>
      <description>&lt;p&gt;I was playing around with OpenAI (GPT-3) today, building a reasonably complicated email parser for a client. I was running into issues working with the AI’s response. Because OpenAI (GPT-3) is based on a natural language model the response is always a string. While you can prompt the AI model on how you would like it to structure this string, it was going to require me to write processing methods to extract the output I needed.&lt;/p&gt;

&lt;p&gt;I was looking for a way to easily convert the response into JSON format. Then I had a bright idea: why not ask ChatGPT how to get responses in JSON? In this blog post, I’ll show you how to do it quickly and easily with Ruby.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2w8zhqx43yb0rt7dr86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2w8zhqx43yb0rt7dr86.png" alt="ChatGPT discussion on JSON format" width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are new to OpenAI, I have written an article on how to get started using Ruby.&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/kanehooper" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1006947%2Fec5818b5-a4ba-4daa-bf81-fb5dcacf53bb.jpeg" alt="kanehooper"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/kanehooper/how-to-integrate-ruby-with-openai-gpt-3-5h96" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to integrate Ruby with OpenAI (GPT-3)&lt;/h2&gt;
      &lt;h3&gt;Kane Hooper ・ Jan 12 '23&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#discuss&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#coding&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#learning&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;An intermediate understanding of Ruby.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we will build
&lt;/h2&gt;

&lt;p&gt;We are going to write a short Ruby script that will extract the sender's first name and a summary of the email. The AI will return the data in JSON so it can easily be manipulated within the code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Instructions
&lt;/h2&gt;

&lt;p&gt;Here are the instructions to implement our code.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install openai gem&lt;/li&gt;
&lt;li&gt;Configure openai with the API key&lt;/li&gt;
&lt;li&gt;Create the prompt (instructions) to the AI requesting the response in JSON&lt;/li&gt;
&lt;li&gt;Call the createCompletion method and get the JSON&lt;/li&gt;
&lt;li&gt;Convert the JSON into a Ruby object&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 1: Install the openai gem
&lt;/h2&gt;

&lt;p&gt;First things first, you’ll need to install the OpenAI Ruby gem. You can do this by running the following command in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gem install ruby-openai&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Require the openai gem
&lt;/h2&gt;

&lt;p&gt;You will need an OpenAI API key, which you can obtain from this URL: &lt;a href="https://openai.com/api/" rel="noopener noreferrer"&gt;https://openai.com/api/&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require 'ruby/openai'

openai = OpenAI::Client.new(access_token: 'YOUR_API_KEY')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
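&lt;p&gt;A quick aside: hard-coding the key works for a first test, but reading it from the environment keeps it out of version control. A minimal sketch of that variation (the &lt;code&gt;OPENAI_API_KEY&lt;/code&gt; variable name is my own convention, not part of the gem):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Read the key from the environment so it never lands in version control.
api_key = ENV.fetch('OPENAI_API_KEY', 'missing-key')

# Then initialise the client exactly as above:
# openai = OpenAI::Client.new(access_token: api_key)
puts(api_key == 'missing-key' ? 'Set OPENAI_API_KEY before running.' : 'Key loaded.')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;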



&lt;h3&gt;
  
  
  Step 3: Create the prompt
&lt;/h3&gt;

&lt;p&gt;Start by adding the text for the email we want the AI model to parse.&lt;/p&gt;

&lt;p&gt;Now for the prompt. This is the most important part of working with the OpenAI model. You need to craft the prompt in such a way that it will provide you the results you are specifically looking for.&lt;/p&gt;

&lt;p&gt;We’ll ask the AI to extract the first name and summary from the email and return it as JSON.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;email = "I would love to purchase one of your new X-225 
         lawn mowers. Do you still have some in stock? If 
         so I will purchase it today. 
         \n\nRegards,\nJimmy Jackson"

prompt = "Analyse the following email and return the author's
          first name and a summary of the email. Your
          response should be in JSON. \n\n\"#{email}\""
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AI model has been trained to respond to natural language requests. You will find much better results if you make your requests in a similar manner.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Call the completions method
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;completions&lt;/code&gt; method is how we make our request to the AI model. It takes several parameters. The most important are the &lt;code&gt;model&lt;/code&gt;, &lt;code&gt;prompt&lt;/code&gt;, &lt;code&gt;temperature&lt;/code&gt; and &lt;code&gt;max_tokens&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;model: The AI model to use. The most advanced model is the &lt;code&gt;text-davinci-003&lt;/code&gt; model.&lt;/p&gt;

&lt;p&gt;prompt: This is the instruction provided to the AI. &lt;/p&gt;

&lt;p&gt;temperature: This is a number between 0 and 1 that sets the ‘creativity’ of the AI’s response. We don’t want the AI to be particularly creative, so we will set it quite low for this example.&lt;/p&gt;

&lt;p&gt;max_tokens: A token is roughly 4 characters of English text. This parameter caps the length of the response; the prompt plus the response must fit within the model’s context window. We can keep it low for this example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response = openai.completions(
 parameters: {
   model: "text-davinci-003",
   prompt: prompt,
   temperature: 0.3,
   max_tokens: 500,
 }
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
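&lt;p&gt;Because billing and rate limits are both token-based, it can help to sanity-check the prompt size before making the call. Here is a rough sketch using the 4-characters-per-token rule of thumb (the &lt;code&gt;estimated_tokens&lt;/code&gt; helper is my own illustration, not part of the gem):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Rough rule of thumb: one token is about 4 characters of English text.
def estimated_tokens(text)
  (text.length / 4.0).ceil
end

prompt_tokens = estimated_tokens("Analyse the following email and return the author's first name.")

# The prompt tokens plus max_tokens must fit inside the model's context window.
puts "Prompt is roughly #{prompt_tokens} tokens."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;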



&lt;h2&gt;
  
  
  Step 5. Convert the JSON into a Ruby object
&lt;/h2&gt;

&lt;p&gt;We will use the built-in JSON library in Ruby to convert JSON into a Ruby object. To do this, we use the &lt;code&gt;JSON.parse()&lt;/code&gt; method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jsonResponse = JSON.parse(response['choices'][0]['text'].lstrip)

puts "Email Author: #{jsonResponse["firstName"]}"
puts "Summary: #{jsonResponse["summary"]}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
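&lt;p&gt;As a side note, &lt;code&gt;JSON.parse&lt;/code&gt; also accepts &lt;code&gt;symbolize_names: true&lt;/code&gt;, which returns symbol keys and reads a little more idiomatically in Ruby. A small sketch, with a hypothetical response string standing in for the API output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require 'json'

# A hypothetical response body, standing in for response['choices'][0]['text'].
raw = '{"firstName": "Jimmy", "summary": "Wants to buy an X-225 mower."}'

# symbolize_names: true converts the JSON keys into Ruby symbols.
data = JSON.parse(raw, symbolize_names: true)

puts "Email Author: #{data[:firstName]}"
puts "Summary: #{data[:summary]}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;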



&lt;h2&gt;
  
  
  The output
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Email Author: Jimmy
Summary: The author is interested in purchasing one of the
         new X-225 lawn mowers and inquiring about
         availability.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  I found an issue
&lt;/h2&gt;

&lt;p&gt;As I was testing my code I found an issue. The AI model was not providing the same parameter names in the JSON response each time.&lt;/p&gt;

&lt;p&gt;Sometimes it set the first parameter to “authorsFirstName”, which of course caused the program to terminate.&lt;/p&gt;

&lt;p&gt;I had to re-engineer the prompt a few times until I found a wording that provided a consistent response each time.&lt;br&gt;
&lt;/p&gt;
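&lt;p&gt;Until the prompt reliably pins down the key names, a defensive parse can keep the script alive when the model improvises. This is a minimal sketch of that idea; the sample string and the fallback key are illustrative, and in the real script the input comes from &lt;code&gt;response['choices'][0]['text']&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require 'json'

# Simulated model output using the unexpected key name.
raw = '{"authorsFirstName": "Jimmy", "summary": "Wants to buy a mower."}'

begin
  parsed = JSON.parse(raw.lstrip)
rescue JSON::ParserError
  parsed = {}
end

# Accept either key name the model has been observed to use.
first_name = parsed['firstName'] || parsed['authorsFirstName'] || 'unknown'

puts "Email Author: #{first_name}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;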

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;prompt = "Analyse the following email and return the author's
          first name and a summary of the email. Your
          response should be in JSON format with two
          parameters 'firstName' and 'summary'. 
          \n\n'#{email}'"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Effectively I am telling the AI model how to structure the JSON output. &lt;/p&gt;

&lt;p&gt;Prompt engineering is definitely an art rather than an exacting science. You have to think like the AI model and adjust the prompt to get the results you are looking for. For an application like an email parser you have to be very explicit about how the data is presented. For something more creative, such as suggesting a blog title, you want to allow the model to be less deterministic and more ‘creative’.&lt;/p&gt;

&lt;p&gt;I think there will be a job 5 years from now known as a Prompt Engineer. &lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;And there you have it! With just a few lines of code, you can easily generate OpenAI (GPT) output in JSON format for your Ruby application.&lt;/p&gt;

&lt;p&gt;Good luck, and happy coding!&lt;/p&gt;




&lt;p&gt;Kane Hooper is the CEO of reinteractive, the longest running dedicated Ruby on Rails development firm in the world.&lt;br&gt;
You can contact Kane directly for any help with your Ruby on Rails application.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://www.reinteractive.com" rel="noopener noreferrer"&gt;www.reinteractive.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="mailto:kane.hooper@reinteractive.com"&gt;kane.hooper@reinteractive.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>welcome</category>
      <category>community</category>
      <category>dotnet</category>
      <category>csharp</category>
    </item>
    <item>
      <title>Integrating JavaScript with OpenAI: A Beginner’s Guide</title>
      <dc:creator>Kane Hooper</dc:creator>
      <pubDate>Sat, 14 Jan 2023 07:59:34 +0000</pubDate>
      <link>https://dev.to/kanehooper/integrating-javascript-with-openai-a-beginners-guide-gp6</link>
      <guid>https://dev.to/kanehooper/integrating-javascript-with-openai-a-beginners-guide-gp6</guid>
      <description>&lt;p&gt;OpenAI’s AI models have been causing a stir online, and it’s no wonder — they’ve got programming and tech enthusiasts swooning with their impressive abilities. From churning out eerily human-like text to crafting breathtaking photographs, these models have proven their worth in creating all sorts of intelligent applications. No wonder they’ve become such a hit in recent years.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdg88iq4ub1aesm1gl03a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdg88iq4ub1aesm1gl03a.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Many people are interested in ChatGPT because of its capacity to hold a variety of realistic, human-like discussions. As a result, many ChatGPT-powered chatbots and other interactive applications have been created, allowing people to connect with and experience firsthand the power of AI.&lt;br&gt;
As a Node developer, you can add AI into your products right now by using the OpenAI API.&lt;/p&gt;

&lt;p&gt;With just a few lines of code, you can start designing intelligent applications that can generate text, answer enquiries, and much more.&lt;br&gt;
This is a short introduction to help you get started. In subsequent posts, I’ll go into more technical detail so you may develop a better understanding of AI. In this blog post, I’ll show you how to get started with Node and OpenAI by providing some examples.&lt;/p&gt;

&lt;p&gt;Before you start coding, it’s a good idea to experiment with ChatGPT to get a flavour of the kinds of queries you can ask and the responses you’ll get. &lt;a href="https://chat.openai.com/chat" rel="noopener noreferrer"&gt;https://chat.openai.com/chat&lt;/a&gt;. There are several techniques to train OpenAI, which we will cover in future postings.&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To follow this article you will need to understand the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Running a Node application&lt;/li&gt;
&lt;li&gt;Working with API Keys&lt;/li&gt;
&lt;li&gt;Using Async/Await&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you are a Ruby developer I have a similar article written with Ruby code examples:&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/kanehooper" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1006947%2Fec5818b5-a4ba-4daa-bf81-fb5dcacf53bb.jpeg" alt="kanehooper"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/kanehooper/how-to-integrate-ruby-with-openai-gpt-3-5h96" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to integrate Ruby with OpenAI (GPT-3)&lt;/h2&gt;
      &lt;h3&gt;Kane Hooper ・ Jan 12 '23&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#discuss&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#coding&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#learning&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  Integrating Node and OpenAI
&lt;/h2&gt;

&lt;p&gt;You must first register with OpenAI and obtain your API Key. The OpenAI API is currently available for free (though limited) use. To sign up, go to &lt;a href="https://openai.com/api/" rel="noopener noreferrer"&gt;https://openai.com/api/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Install the OpenAI package first. In your terminal, type the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm install openai&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once installed, you can use the openai package to access the OpenAI API. We can now ask GPT-3 a question by writing a few lines of JavaScript code. Let’s ask it “What is async/await in JavaScript?”.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { Configuration, OpenAIApi } = require("openai");

// Configure OpenAI with your API key
const configuration = new Configuration({
  apiKey: “Your API KEY”,
});

// Create a new instance of OpenAI
const openai = new OpenAIApi(configuration);

// Use the createCompletion method to ask OpenAI your question
const response = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: "What is async/await in Javascript",
});

// Log the response to the console
console.log(response.data.choices[0].text)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will result in a short explanation of what async/await is.&lt;/p&gt;

&lt;p&gt;When generating text with the OpenAI API, the model variable is used to determine which AI model should be used. The term “davinci” refers to the OpenAI GPT-3 model. Other models exist, but this one is the most accurate.&lt;br&gt;
The prompt variable in the code sample specifies the text to which the OpenAI model will answer. This text could be a short comment, an enquiry, or a lengthy paragraph. The model will then expand on this prompt to generate more content that is related to it.&lt;/p&gt;

&lt;p&gt;For example, in the code below, the prompt is set to “What is async/await in JavaScript?”:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;prompt = "What is async/await in JavaScript?"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The OpenAI model will create a paragraph defining async/await.&lt;/p&gt;

&lt;p&gt;The prompt text you select should be chosen carefully because it will influence the tone and content of the resulting text. A more precise and detailed prompt, for example, will almost certainly produce more accurate and instructional content, whereas a broader or more ambiguous prompt may yield more imaginative, abstract prose.&lt;/p&gt;

&lt;p&gt;Assume you wish to use OpenAI to simplify your explanation. The code below could be used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: “Your API KEY”,
});

const openai = new OpenAIApi(configuration);

const response = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: "Explain async/await in Javascript ot a 10-year-old ",
});

console.log(console.log(response.data.choices[0].text))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AI will attempt to create a response that a 10-year-old will understand.&lt;br&gt;
Using the prompt “Explain async/await in JavaScript to a 10-year-old”, the following is an example of the results you might get:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Async/await is a way to write asynchronous code in a way that looks and feels like synchronous code.&lt;br&gt;
Imagine you have a recipe that tells you to bake a cake. The recipe has a list of ingredients and a set of instructions. You follow the instructions step by step, mixing the ingredients together, pouring the batter into a cake pan, and putting the pan in the oven. You can’t move on to the next step until the current step is finished.&lt;br&gt;
Now imagine that instead of a recipe, you have a list of tasks to do. One task might be to go to the store and buy ingredients for the cake. Another task might be to mix the ingredients together. You can’t mix the ingredients until you have bought them, and you can’t bake the cake until the ingredients are mixed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Async/await helps you write code that works like a list of tasks. You can tell the computer to “wait” until one task is finished before moving on to the next task. This is especially useful when you are working with tasks that take a long time, like downloading a file from the internet or waiting for a user to respond to a question.&lt;/p&gt;

&lt;p&gt;Other parameters can be used to regulate the output’s creativity, which will be addressed in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;OpenAI can provide hours of entertainment. Request that it compose you a poem, summarise the biography of a renowned person, or even produce a blog entry (like this one).&lt;/p&gt;

&lt;p&gt;I hope you found this introduction to utilising Node with OpenAI beneficial. Please let me know if you have any questions or would want to see more examples.&lt;/p&gt;

&lt;p&gt;In the following post, we will go over the API in further detail, as well as explore “prompt” engineering and how to train the AI model to achieve the desired outcomes.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How to integrate Ruby with OpenAI (GPT-3)</title>
      <dc:creator>Kane Hooper</dc:creator>
      <pubDate>Thu, 12 Jan 2023 13:45:11 +0000</pubDate>
      <link>https://dev.to/kanehooper/how-to-integrate-ruby-with-openai-gpt-3-5h96</link>
      <guid>https://dev.to/kanehooper/how-to-integrate-ruby-with-openai-gpt-3-5h96</guid>
      <description>&lt;p&gt;In recent years, OpenAI and its potent AI models have proliferated online, drawing a lot of interest and acclaim from programmers and tech enthusiasts all around the world. The models developed by OpenAI have shown an astounding range of abilities that have been utilised to develop a variety of intelligent applications, from producing text that is similar to human speech to creating beautiful photographs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nnmyfck77qjebro6zax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nnmyfck77qjebro6zax.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Particularly, ChatGPT has attracted the interest of many because of its ability to carry on a range of realistic, human-like conversations. As a result, a large number of ChatGPT-powered chatbots and other interactive applications have been developed, enabling people to engage with and personally experience the power of AI.&lt;/p&gt;

&lt;p&gt;As a Ruby developer, you can immediately incorporate AI into your applications through the use of the OpenAI API.&lt;/p&gt;

&lt;p&gt;Using the power of OpenAI’s models, you can create intelligent applications that can create text, respond to queries, and much more with just a few lines of code.&lt;/p&gt;

&lt;p&gt;This post is a basic introduction to getting started. In future posts, I’ll dive into the technical details so you can gain deeper AI insights. I’ll show you how simple it is to get started by writing just a few lines of Ruby code.&lt;/p&gt;

&lt;p&gt;Before doing any coding it's a good idea to play with ChatGPT directly, to get a sense of the sorts of questions you can ask and the responses you will get. &lt;a href="https://chat.openai.com/chat" rel="noopener noreferrer"&gt;https://chat.openai.com/chat&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F102mtje7pk2x4d85zymb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F102mtje7pk2x4d85zymb.png" alt="Image description" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Ruby and OpenAI
&lt;/h2&gt;

&lt;p&gt;You need to sign up with OpenAI and get your API Key. At the moment there is free (albeit limited) access to the OpenAI API. Go to &lt;a href="https://openai.com/api/" rel="noopener noreferrer"&gt;https://openai.com/api/&lt;/a&gt; to signup.&lt;/p&gt;

&lt;p&gt;Install the OpenAI gem by running the following command in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gem install ruby-openai&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once the gem is installed, you can use it to access the OpenAI API. With just a few lines of Ruby code, we can start asking GPT-3 questions.&lt;/p&gt;

&lt;p&gt;Let’s ask it “What is Ruby metaprogramming?”&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require "ruby/openai"

client = OpenAI::Client.new(access_token: 'YOUR_API_TOKEN')

prompt = 'What is ruby metaprogramming?'

response = client.completions(
    parameters: {
      model: "text-davinci-003",
      prompt: prompt,
      max_tokens: 2000
    })

puts response['choices'][0]['text']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will produce a brief explanation of what metaprogramming is.&lt;/p&gt;
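&lt;p&gt;One practical note: avoid hardcoding your API token in the script. A common approach is to read it from an environment variable instead (the variable name &lt;code&gt;OPENAI_API_KEY&lt;/code&gt; below is just a convention):&lt;/p&gt;

```ruby
# Read the API key from the environment so it never ends up in version control.
# The placeholder fallback is only here so the snippet runs on its own.
api_key = ENV['OPENAI_API_KEY'] || 'YOUR_API_TOKEN'

# The client is then created exactly as before:
# client = OpenAI::Client.new(access_token: api_key)
puts "Loaded an API key #{api_key.length} characters long"
```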

&lt;p&gt;The &lt;code&gt;model&lt;/code&gt; parameter tells the OpenAI API which AI model to use when generating text. The name “davinci” refers to the GPT-3 family of models. There are other, cheaper models, but this one is the most advanced.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;prompt&lt;/code&gt; variable in the code sample holds the text the OpenAI model will respond to. It can be a brief remark, a question, or even a lengthy paragraph. The model then builds on this prompt, generating further text that follows on from it.&lt;/p&gt;

&lt;p&gt;For example, in the code below, the prompt is set to “What is ruby metaprogramming?”:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;prompt = "What is ruby metaprogramming?"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The OpenAI model will create a paragraph defining metaprogramming using this question.&lt;/p&gt;

&lt;p&gt;Choose your prompt text carefully, as it shapes both the tone and the content of the generated text. A more explicit and detailed prompt, for instance, will usually produce more accurate and informative content, whereas a broader or more ambiguous prompt may produce more imaginative and abstract prose.&lt;/p&gt;
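&lt;p&gt;To make it easy to experiment with different prompts, the call can be wrapped in a small helper method. This is a hypothetical convenience wrapper, not part of the gem itself, and it assumes the same client setup as the earlier script:&lt;/p&gt;

```ruby
# Hypothetical helper: send a prompt to the API and return just the
# generated text. Assumes `client` is an OpenAI::Client as created above.
def ask(client, prompt)
  response = client.completions(
    parameters: {
      model: "text-davinci-003",
      prompt: prompt,
      max_tokens: 2000
    }
  )
  response['choices'][0]['text']
end

# Compare a vague prompt with a specific one:
# puts ask(client, 'Tell me about metaprogramming.')
# puts ask(client, 'Explain Ruby metaprogramming with a short code example.')
```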

&lt;p&gt;Let’s say you want to use OpenAI to teach a 10-year-old about metaprogramming. The following code could be used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require "ruby/openai"

client = OpenAI::Client.new(access_token: 'YOUR_API_TOKEN')

prompt = 'Explain metaprogramming to a 10-year-old?'

response = client.completions(
    parameters: {
      model: "text-davinci-003",
      prompt: prompt,
      max_tokens: 2000
    })

puts response['choices'][0]['text']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AI will attempt to create a response that a 10-year-old will understand.&lt;/p&gt;

&lt;p&gt;Using the prompt “Explain metaprogramming to a 10-year-old”, the following is an example of the results you might get:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Metaprogramming is a way for a computer program to write or modify itself while it is running. It’s kind of like having a special set of instructions that can change the way the program works on the fly. For example, imagine you have a computer game where you can collect different power-ups. With metaprogramming, you could write the game so that it can automatically create new power-ups whenever you collect one. So, if you collect a power-up that lets you fly, the game might create a new power-up that lets you breathe underwater. It might sound a little confusing, but metaprogramming can be really useful for making computer programs more flexible and adaptable. And who knows, maybe one day you’ll be the one writing programs that use metaprogramming!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are ways to control the creativity of the output through other settings, which will be covered in the future.&lt;/p&gt;
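&lt;p&gt;To give a taste, one of those settings is &lt;code&gt;temperature&lt;/code&gt;, which in the OpenAI API ranges from 0 to 2: lower values give focused, repeatable answers, while higher values give more varied, creative ones. As a sketch (using a hypothetical helper method, not part of the gem), it slots straight into the same parameters hash used above:&lt;/p&gt;

```ruby
# Hypothetical helper building the same parameters hash as before,
# with temperature added. Lower = more deterministic, higher = more creative.
def completion_params(prompt, temperature: 0.7)
  {
    model: "text-davinci-003",
    prompt: prompt,
    max_tokens: 2000,
    temperature: temperature
  }
end

# A factual question benefits from a low temperature...
factual = completion_params('What year was Ruby first released?', temperature: 0.0)
# ...while a poem can afford a higher one.
poem = completion_params('Write a haiku about Ruby blocks.', temperature: 1.2)

# With a real client: client.completions(parameters: factual)
puts factual[:temperature] # 0.0
puts poem[:temperature]    # 1.2
```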

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;You can have a lot of fun with OpenAI. Ask it to write you a poem, generate Ruby code for a particular problem, or even spot the problems in your code.&lt;/p&gt;

&lt;p&gt;I hope this introduction to using Ruby with OpenAI was useful. If you have any enquiries or would like to see more examples, please let me know.&lt;/p&gt;

&lt;p&gt;In the next article, we will discuss prompt engineering and how to train the AI model to produce the specific results you are looking for.&lt;/p&gt;




&lt;p&gt;Kane Hooper is the CEO of reinteractive, a specialist Ruby on Rails development firm.&lt;/p&gt;

&lt;p&gt;If you need help with your Rails application you can reach out to Kane directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="mailto:kane.hooper@reinteractive.com"&gt;kane.hooper@reinteractive.com&lt;/a&gt;&lt;br&gt;
reinteractive.com&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>tutorial</category>
      <category>coding</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
