<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Asiqur Rahman</title>
    <description>The latest articles on DEV Community by Asiqur Rahman (@asiqurrahman).</description>
    <link>https://dev.to/asiqurrahman</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F947479%2Fed81ed1f-5799-479c-b2ce-9215c5a05111.jpg</url>
      <title>DEV Community: Asiqur Rahman</title>
      <link>https://dev.to/asiqurrahman</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/asiqurrahman"/>
    <language>en</language>
    <item>
      <title>This is a test of styled quotes</title>
      <dc:creator>Asiqur Rahman</dc:creator>
      <pubDate>Sun, 03 Dec 2023 21:59:19 +0000</pubDate>
      <link>https://dev.to/asiqurrahman/this-is-a-test-of-styled-quotes-3d8i</link>
      <guid>https://dev.to/asiqurrahman/this-is-a-test-of-styled-quotes-3d8i</guid>
      <description>&lt;p&gt;this is a tst&lt;br&gt;
&lt;code&gt;wefeffe&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;efwkfmkwemf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>Generate a Blog with OpenAI</title>
      <dc:creator>Asiqur Rahman</dc:creator>
      <pubDate>Wed, 19 Oct 2022 20:06:42 +0000</pubDate>
      <link>https://dev.to/codedex/generate-a-blog-with-openai-5eio</link>
      <guid>https://dev.to/codedex/generate-a-blog-with-openai-5eio</guid>
      <description>&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt; Python fundamentals&lt;br&gt;
&lt;strong&gt;Versions:&lt;/strong&gt; Python 3.10, python-dotenv 0.21.0, openai 0.23.0&lt;br&gt;
&lt;strong&gt;Read Time:&lt;/strong&gt; 60 minutes&lt;/p&gt;
&lt;h2&gt;
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Artificial_intelligence" rel="noopener noreferrer"&gt;Artificial Intelligence (AI)&lt;/a&gt; is becoming the next big technology to harness. From smart fridges to self-driving cars, AI is implemented in almost everything you can think of. So let's get ahead of the pack and learn how we can leverage the power of AI with Python and OpenAI.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll learn how to create a blog generator with &lt;a href="https://openai.com/api/" rel="noopener noreferrer"&gt;GPT-3&lt;/a&gt;, an AI model provided by &lt;a href="https://www.openai.com" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt;. The generator will read a topic to talk about as the input, and GPT-3 will return us a paragraph about that topic as the output. &lt;/p&gt;

&lt;p&gt;So AI will be "writing" stuff for us. Say goodbye to writer's block!&lt;/p&gt;

&lt;p&gt;But wait, hold on! Artificial intelligence?! AI models?! This must be complicated to code. 😵&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fcodedex-io%2Fprojects%2Fmain%2Fprojects%2Fgenerate-a-blog-with-openai%2Fcalculation-math.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fcodedex-io%2Fprojects%2Fmain%2Fprojects%2Fgenerate-a-blog-with-openai%2Fcalculation-math.gif" alt="meme"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nope, it's easier than you think. It takes around 25 lines of Python code!&lt;/p&gt;

&lt;p&gt;The final result will look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fcodedex-io%2Fprojects%2Fmain%2Fprojects%2Fgenerate-a-blog-with-openai%2Fgenerator-demo.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fcodedex-io%2Fprojects%2Fmain%2Fprojects%2Fgenerate-a-blog-with-openai%2Fgenerator-demo.gif" alt="generator demo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Who knows, maybe this entire project was written by the generator we're about to create. 👀&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  What is GPT-3?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/GPT-3" rel="noopener noreferrer"&gt;GPT-3&lt;/a&gt; is an AI model released by OpenAI in 2020. An AI model is a program trained on a bunch of data to perform a specific task. In this case, GPT-3 was trained to speak like a human and predict what comes next given the context of a sentence, with its training dataset being 45 terabytes of text (!) from the internet. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For reference, if you had to keep writing until your paper hits 45 terabytes in size, you would have to write &lt;a href="https://www.techtarget.com/searchstorage/definition/How-many-bytes-for" rel="noopener noreferrer"&gt;22,500,000,000&lt;/a&gt; pages worth of plain text. &lt;/p&gt;
&lt;/blockquote&gt;
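The arithmetic behind that page count is easy to verify; here is a quick sanity check, assuming the linked reference's convention of roughly 2,000 bytes of plain text per page:

```python
# Sanity check of the "22,500,000,000 pages" figure above.
# Assumption: ~2,000 bytes of plain text per page, decimal terabytes.
terabyte = 10 ** 12
dataset_bytes = 45 * terabyte
bytes_per_page = 2000

print(dataset_bytes // bytes_per_page)  # 22500000000 pages
```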

&lt;p&gt;Since GPT-3 was trained on internet data, it knows what the internet knows (not everything of course). This means that if we were to give GPT-3 a sentence, it would be able to predict what comes next in that sentence with high accuracy, based on all the text that was used to train it.&lt;/p&gt;

&lt;p&gt;Now that we know what we'll be working with, let's build the program!&lt;/p&gt;
&lt;h2&gt;
  Setting Up
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;OpenAI Account&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before we do anything, we need an &lt;a href="https://openai.com/api" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt; account. We'll need this account to access an API key that we can use to work with GPT-3.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/API" rel="noopener noreferrer"&gt;API (Application Programming Interface)&lt;/a&gt; is a way for two computers to communicate with each other. Think of it like two friends texting back and forth. An API key is a code we receive to access the API. Think of it like an important password, so don’t share it with others!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Go to &lt;a href="http://www.openai.com" rel="noopener noreferrer"&gt;www.openai.com&lt;/a&gt; and sign up for an OpenAI account.&lt;/p&gt;

&lt;p&gt;After you've created an account, click on your profile picture on the top right, then click "View API keys" to access your API key. You should land on &lt;a href="https://beta.openai.com/account/api-keys" rel="noopener noreferrer"&gt;this page&lt;/a&gt;, which should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fcodedex-io%2Fprojects%2Fmain%2Fprojects%2Fgenerate-a-blog-with-openai%2Fapi-key.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fcodedex-io%2Fprojects%2Fmain%2Fprojects%2Fgenerate-a-blog-with-openai%2Fapi-key.png" alt="API Key"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we know where the API key is located, let's keep it in mind for later.&lt;/p&gt;

&lt;p&gt;With the API key, we get access to GPT-3 and $18 worth of free credit. This means we can use GPT-3 for free until we go over the $18, which is more than enough to complete this project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Python Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For this project, we'll need &lt;a href="https://www.python.org/downloads/" rel="noopener noreferrer"&gt;Python 3&lt;/a&gt; and &lt;a href="https://pip.pypa.io/en/stable/" rel="noopener noreferrer"&gt;pip&lt;/a&gt; (package installer) installed.&lt;/p&gt;

&lt;p&gt;Assuming that we have those two installed, let's open up the code editor of our choice (we recommend &lt;a href="https://code.visualstudio.com" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt;) and create a new file called &lt;strong&gt;blog_generator.py&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: You can name this file anything except for &lt;strong&gt;openai.py&lt;/strong&gt;, since the name will clash with a package we'll be installing.&lt;/p&gt;
&lt;h2&gt;
  Beginning the Project
&lt;/h2&gt;

&lt;p&gt;At the core of this project, all we'll be doing is sending data with instructions to a server owned by OpenAI, then receiving a response back from that server and displaying it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install openai&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We'll be interacting with the GPT-3 model using a Python package called &lt;code&gt;openai&lt;/code&gt;. This package provides methods that connect to the internet and grant us access to the GPT-3 model hosted by OpenAI, the company.&lt;/p&gt;

&lt;p&gt;To install &lt;code&gt;openai&lt;/code&gt;, all we have to do is run the following command in our terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;openai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now use this package by importing it into our &lt;strong&gt;blog_generator.py&lt;/strong&gt; file like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Authorize API Key&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before we can work with GPT-3, we need to set our API key in the &lt;code&gt;openai&lt;/code&gt; module. Remember, the API key is what gives us access to GPT-3; it authorizes us and says we're allowed to use this API.&lt;/p&gt;

&lt;p&gt;We can set our API key by assigning it to an attribute of the &lt;code&gt;openai&lt;/code&gt; module called &lt;code&gt;api_key&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Your_API_Key&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The attribute takes the API key as a string. Remember, your API key is located in your &lt;a href="https://beta.openai.com/account/api-keys" rel="noopener noreferrer"&gt;OpenAI account&lt;/a&gt;.&lt;/p&gt;
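Hardcoding the key works for a quick test, but it's risky if the file ever gets shared or committed. Since python-dotenv appears in the versions list above, a safer pattern is to read the key from the environment; this is only a sketch, and the OPENAI_API_KEY variable name is our assumption:

```python
import os

# Sketch: read the key from an environment variable instead of
# hardcoding it in the source file. Assumes OPENAI_API_KEY is set
# in the shell (or loaded from a .env file) before the script runs.
api_key = os.environ.get('OPENAI_API_KEY', '')

# The openai module would then receive it the same way:
# openai.api_key = api_key
```

With python-dotenv, calling `load_dotenv()` before the lookup would populate the variable from a local `.env` file.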

&lt;p&gt;So far, the code should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;

&lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sk-jAjqdWoqZLGsh7nXf5i8T3BlbkFJ9CYRk&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="c1"&gt;# Fill in your own key
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  The Core Function
&lt;/h2&gt;

&lt;p&gt;Now that we have access to GPT-3, we can get to the meat of the application, which is creating a function that takes in a prompt as user input and returns a paragraph about that prompt. &lt;/p&gt;

&lt;p&gt;That function will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_blog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;paragraph_topic&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Completion&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;text-davinci-002&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Write a paragraph about the following topic. &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;paragraph_topic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;max_tokens&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;temperature&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="n"&gt;retrieve_blog&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;retrieve_blog&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down this function and see what's going on here.&lt;/p&gt;

&lt;p&gt;First, we defined a function called &lt;code&gt;generate_blog()&lt;/code&gt;. There's a single parameter called &lt;code&gt;paragraph_topic&lt;/code&gt;, which will be the topic used to generate the paragraph:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_blog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;paragraph_topic&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="c1"&gt;# The code inside
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And let's go inside the function. Here's the first part:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_blog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;paragraph_topic&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Completion&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;text-davinci-002&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Write a paragraph about the following topic. &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;paragraph_topic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;max_tokens&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;temperature&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the bulk of our function and where we use GPT-3. We created a variable called &lt;code&gt;response&lt;/code&gt; to store the output of the &lt;code&gt;Completion.create()&lt;/code&gt; method call from the &lt;code&gt;openai&lt;/code&gt; module. &lt;/p&gt;

&lt;p&gt;GPT-3 has different endpoints for specific purposes, but for our goal, we'll use the &lt;a href="https://beta.openai.com/docs/api-reference/completions" rel="noopener noreferrer"&gt;completion&lt;/a&gt; endpoint. The completion endpoint will generate text depending on the provided prompt. You can read about the different endpoints in the &lt;a href="https://beta.openai.com/docs/introduction" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now that we have access to the completion endpoint, we need to specify a few things. The first one is:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;model&lt;/code&gt;: The model parameter will take in the model we want to use. GPT-3 has four models that we can use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;text-davinci-002&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;text-curie-001&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;text-babbage-001&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;text-ada-001&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These models perform the same task but at a different power level. More power equals better and more coherent responses, with &lt;code&gt;text-davinci-002&lt;/code&gt; being the most powerful and &lt;code&gt;text-ada-001&lt;/code&gt; being the least. You can think of it like a car vs. a bike. They both perform the same task of taking you from one place to another, but the car will perform better. You can read more about the models in the &lt;a href="https://beta.openai.com/docs/models/gpt-3" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Write a paragraph about the following topic. &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;paragraph_topic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;prompt&lt;/code&gt;: This is where we design the main instructions for GPT-3. This parameter will take in our &lt;code&gt;paragraph_topic&lt;/code&gt; argument, but before that, we can tell GPT-3 what to do with that argument. Currently, we are instructing GPT-3 to &lt;code&gt;Write a paragraph about the following topic&lt;/code&gt;. GPT-3 will try its best to follow this instruction and return us a paragraph. &lt;/p&gt;

&lt;p&gt;GPT-3 is very flexible; if the initial string is changed to &lt;code&gt;Write a blog outline about the following topic&lt;/code&gt;, it will give us an outline instead of a normal paragraph. You can later play around with this by telling the model exactly what it should generate and seeing what interesting responses you get.&lt;br&gt;
&lt;/p&gt;
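To see exactly what string the model receives, we can rebuild the prompt by hand; the topic here is just an example:

```python
# Rebuilding the prompt exactly as the function assembles it,
# using plain string concatenation.
paragraph_topic = 'the history of Python'  # example user input
prompt = 'Write a paragraph about the following topic. ' + paragraph_topic

print(prompt)  # Write a paragraph about the following topic. the history of Python
```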

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;max_tokens&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;max_tokens&lt;/code&gt;: The token number decides how long the response will be. A larger token number will produce a longer response. By setting a specific number, we're saying that the response can't go past this token size. The way tokens are counted towards a response is a bit complex, but you can read this &lt;a href="https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them" rel="noopener noreferrer"&gt;article&lt;/a&gt; by OpenAI that explains how token size is calculated.&lt;/p&gt;

&lt;p&gt;Roughly 75 words is about 100 tokens. A paragraph has 300 words on average. So, 400 tokens is about the length of a normal paragraph. The model &lt;code&gt;text-davinci-002&lt;/code&gt; has a token limit of 4,000.&lt;br&gt;
&lt;/p&gt;
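That rule of thumb is easy to turn into a quick estimate; this is a rough sketch, since real token counts depend on the tokenizer:

```python
# Rough rule of thumb from the paragraph above: 100 tokens is about
# 75 words. Real counts vary with the tokenizer and the text itself.
def estimate_tokens(word_count):
    return round(word_count * 100 / 75)

print(estimate_tokens(300))  # a 300-word paragraph needs roughly 400 tokens
```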

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;temperature&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;temperature&lt;/code&gt;: Temperature determines the randomness of a response. A higher temperature will produce a more creative response, while a lower temperature will produce a more focused and predictable one.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;0&lt;/code&gt;: The same response every time.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;1&lt;/code&gt;: A different response every time, even if it's the same prompt.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are plenty of other fields that we can specify to fine-tune the model even more, which you can read in the &lt;a href="https://beta.openai.com/docs/api-reference/completions/create" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, but for now, these are the four fields we need to concern ourselves with.&lt;/p&gt;

&lt;p&gt;Now that we have our model set up, we can run our function, and the following things will happen:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First, the &lt;code&gt;openai&lt;/code&gt; module will take our API key, along with the fields we specified in the &lt;code&gt;response&lt;/code&gt; variable, and make a request to the completion endpoint.&lt;/li&gt;
&lt;li&gt;OpenAI will then verify that we're allowed to use GPT-3 by verifying our API key.&lt;/li&gt;
&lt;li&gt;After verification, GPT-3 will use the specified fields to produce a response.&lt;/li&gt;
&lt;li&gt;The produced response will be returned in the form of an object and stored in the &lt;code&gt;response&lt;/code&gt; variable.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That returned object will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "\n\nPython is a programming language with many features, such as an intuitive syntax and powerful data structures. It was created in the late 1980s by Guido van Rossum, with the goal of providing a simple yet powerful scripting language. Python has since become one of the most popular programming languages, with a wide range of applications in fields such as web development, scientific computing, and artificial intelligence."
    }
  ],
  "created": 1664302504,
  "id": "cmpl-5v9OiMOjRyoyypRQWAdpyAtjtgVev",
  "model": "text-davinci-002",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 80,
    "prompt_tokens": 19,
    "total_tokens": 99
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’re provided with tons of information about the response, but the only thing we care about is the &lt;code&gt;text&lt;/code&gt; field, which contains the generated text.&lt;/p&gt;

&lt;p&gt;We can access the value in the &lt;code&gt;text&lt;/code&gt; field like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;retrieve_blog&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
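If it helps to see the nesting without calling the API, here is the same lookup performed on a plain dictionary shaped like the JSON above; the real response object also supports this dictionary-style indexing in openai 0.x, though the attribute style shown above is what we use:

```python
# A plain-Python stand-in for the response object shown above,
# trimmed to the field we care about.
response = {
    'choices': [
        {'text': '\n\nPython is a programming language...', 'index': 0}
    ]
}

# Same path as response.choices[0].text, in dictionary form.
retrieve_blog = response['choices'][0]['text']
print(retrieve_blog.strip())  # Python is a programming language...
```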



&lt;p&gt;Finally, we return the &lt;code&gt;retrieve_blog&lt;/code&gt; variable, which holds the paragraph we just dug out of the response object.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;retrieve_blog&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Whoa! Let's take a moment and breathe. That was a lot we just covered. Let's give ourselves a pat on the back, as we're 90% done with the application.&lt;/p&gt;

&lt;p&gt;We can test whether our code works so far by printing the result of the &lt;code&gt;generate_blog()&lt;/code&gt; function we just created, giving it a topic to write about, and seeing what response we get.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;generate_blog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Why NYC is better than your city.&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's the complete code so far:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;

&lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sk-jAjqdWoqZLGsh7nXf5i8T3BlbkFJ9CYRk&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="c1"&gt;# Fill in your own key
&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_blog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;paragraph_topic&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Completion&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;text-davinci-002&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Write a paragraph about the following topic. &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;paragraph_topic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;max_tokens&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;temperature&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="n"&gt;retrieve_blog&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;retrieve_blog&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;generate_blog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Why NYC is better than your city.&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And boom, after 2-3 seconds, it should spit out a paragraph like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fcodedex-io%2Fprojects%2Fmain%2Fprojects%2Fgenerate-a-blog-with-openai%2Foutput-nyc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fcodedex-io%2Fprojects%2Fmain%2Fprojects%2Fgenerate-a-blog-with-openai%2Foutput-nyc.png" alt="Output: NYC"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Try running the code a couple more times; the output should be different every time! 🤯&lt;/p&gt;

&lt;h2&gt;
  Multiple Paragraphs
&lt;/h2&gt;

&lt;p&gt;Right now, if we run our code, we'll only be able to generate one paragraph worth of text. Remember, we're trying to create a blog generator, and a blog has multiple sections, with each paragraph having a different topic.&lt;/p&gt;

&lt;p&gt;Let's add some additional code to generate as many paragraphs as we want, with each paragraph discussing a different topic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;keep_writing&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;

&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;keep_writing&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Write a paragraph? Y for yes, anything else for no. &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nf"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;paragraph_topic&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;What should this paragraph talk about? &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;generate_blog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;paragraph_topic&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
  &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;keep_writing&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, we defined a variable called &lt;code&gt;keep_writing&lt;/code&gt; to use as the boolean condition for the following &lt;code&gt;while&lt;/code&gt; loop.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;while&lt;/code&gt; loop, we created an &lt;code&gt;answer&lt;/code&gt; variable that will take in an input from the user using the built-in &lt;code&gt;input()&lt;/code&gt; function.&lt;/p&gt;

&lt;p&gt;We then created an &lt;code&gt;if&lt;/code&gt; statement that will either continue the loop or stop the loop.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the input from the user is &lt;code&gt;Y&lt;/code&gt;, then we will ask the user what topic they want to generate text about, storing that value in a variable called &lt;code&gt;paragraph_topic&lt;/code&gt;. Then we will execute and print the &lt;code&gt;generate_blog()&lt;/code&gt; function using the &lt;code&gt;paragraph_topic&lt;/code&gt; variable as its argument.&lt;/li&gt;
&lt;li&gt;Else, we will stop the loop by assigning the &lt;code&gt;keep_writing&lt;/code&gt; variable to &lt;code&gt;False&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With that complete, we can now write as many paragraphs as we want by running the program once!&lt;/p&gt;
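One small caveat: as written, the check only accepts an uppercase `Y`, so typing a lowercase `y` ends the loop. An optional tweak (a sketch of our own, not part of the original code) makes the check case-insensitive and whitespace-tolerant:

```python
def wants_paragraph(answer):
    # Treat 'y', 'Y', and padded variants like ' Y ' as yes; anything else as no.
    return answer.strip().upper() == 'Y'

# In the loop, this would replace the bare equality check:
#   if wants_paragraph(answer): ...
```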

&lt;p&gt;&lt;strong&gt;Rate Limit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since we're using a &lt;code&gt;while&lt;/code&gt; loop, we run the risk of being rate limited.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A &lt;a href="https://en.wikipedia.org/wiki/Rate_limiting" rel="noopener noreferrer"&gt;rate limit&lt;/a&gt; is the maximum number of API calls an app or user can make within a given time period.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is normally done to protect the API from abuse or &lt;a href="https://en.wikipedia.org/wiki/Denial-of-service_attack" rel="noopener noreferrer"&gt;DoS&lt;/a&gt; attacks.&lt;/p&gt;

&lt;p&gt;For GPT-3, the rate limit is 20 requests per minute. As long as we don't run the function that fast, we'll be fine. But in the rare case that it does occur, GPT-3 will stop producing responses and make us wait a minute before producing another.&lt;/p&gt;
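If a rate-limit error ever does occur, the usual remedy is to wait and retry. Below is a generic retry-with-backoff sketch of our own; it isn't tied to the `openai` package (with `openai` 0.x, the exception to catch would be `openai.error.RateLimitError`), and `flaky` here is just a stand-in for a rate-limited API call:

```python
import time

def call_with_retries(fn, retries=3, delay=1.0, retry_on=(Exception,)):
    # Call fn(); on a matching exception, sleep and retry, doubling the delay each time.
    for attempt in range(retries):
        try:
            return fn()
        except retry_on:
            if attempt == retries - 1:
                raise  # out of retries: let the error propagate
            time.sleep(delay)
            delay *= 2

# Stand-in for a rate-limited API call: fails twice, then succeeds.
attempts = {'count': 0}
def flaky():
    attempts['count'] += 1
    if attempts['count'] < 3:
        raise RuntimeError('rate limited')
    return 'paragraph text'

result = call_with_retries(flaky, retries=5, delay=0.01, retry_on=(RuntimeError,))
```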

&lt;p&gt;&lt;strong&gt;Credit Limit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By this point, if you have been playing with the API nonstop, there's a chance that you might have exceeded the $18 limit. The following error is thrown when that happens:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openai.error.RateLimitError:  
You exceeded your current quota, please check your plan and billing details.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If that's the case, go to OpenAI's &lt;a href="https://beta.openai.com/account/billing/overview" rel="noopener noreferrer"&gt;Billing overview&lt;/a&gt; page and create a paid account.&lt;/p&gt;

&lt;p&gt;Let's take another breather. We're almost done!&lt;/p&gt;

&lt;h2&gt;Securing Our App&lt;/h2&gt;

&lt;p&gt;Let's think about this for a minute. We created this amazing application and want to share it with the world, right? Well, when we deploy it to the web or share it with our friends, they'll be able to see every piece of code in the program. That's where the issue lies!&lt;/p&gt;

&lt;p&gt;At the beginning of this article, we created an account with OpenAI and were assigned an API key. Remember, this API key is what gives us access to GPT-3. Since GPT-3 is a paid service, the API key is also used to track usage and charge us accordingly. So what happens when someone knows our API key? They'll be able to use the service with our key, and we'll be the one charged, potentially thousands of dollars!&lt;/p&gt;

&lt;p&gt;In order to protect ourselves, we need to hide the API key in our code but still be able to use it. Let's see how we can do that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install python-dotenv&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pypi.org/project/python-dotenv" rel="noopener noreferrer"&gt;&lt;code&gt;python-dotenv&lt;/code&gt;&lt;/a&gt; is a package that allows us to create and use environment variables without having to set them in the operating system manually.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Environment variables are variables whose values are set outside the program, typically in the operating system.&lt;/p&gt;
&lt;/blockquote&gt;
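To see what an environment variable looks like from Python's side, here is a tiny standard-library sketch (`DEMO_API_KEY` is a made-up name for illustration; in practice the variable is set by the shell or OS, not by the program itself):

```python
import os

# Normally the shell sets this before the program runs, e.g.:
#   export DEMO_API_KEY=abc123
os.environ['DEMO_API_KEY'] = 'abc123'  # set here only so the example is self-contained

# Programs read it back; .get() returns None when the variable is unset.
key = os.environ.get('DEMO_API_KEY')
missing = os.environ.get('NO_SUCH_VARIABLE_HERE')
```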

&lt;p&gt;We can install &lt;code&gt;python-dotenv&lt;/code&gt; by running the following command in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;python-dotenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;.env File&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Then in our project's root directory, create a file called &lt;strong&gt;.env&lt;/strong&gt;. This file will hold our environment variable.&lt;/p&gt;

&lt;p&gt;Open up the &lt;strong&gt;.env&lt;/strong&gt; file and create a variable like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;API_KEY=&amp;lt;Your_API_Key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The variable takes our API key without any quotation marks or spaces. Make sure to name the variable exactly &lt;code&gt;API_KEY&lt;/code&gt;, since that's the name our code will look up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Python File&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that we have our environment variable set, let's open up the &lt;strong&gt;blog_generator.py&lt;/strong&gt; file, and paste this code under &lt;code&gt;import openai&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;dotenv_values&lt;/span&gt;

&lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;dotenv_values&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.env&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, we imported a function called &lt;code&gt;dotenv_values&lt;/code&gt; from the &lt;code&gt;dotenv&lt;/code&gt; module.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;dotenv_values()&lt;/code&gt; takes the path to the &lt;strong&gt;.env&lt;/strong&gt; file and returns a dictionary of all the variables defined in it. We then created a &lt;code&gt;config&lt;/code&gt; variable to hold that dictionary.&lt;/p&gt;
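As a rough mental model, `dotenv_values` boils down to parsing `KEY=VALUE` lines into a dict. The sketch below is a deliberate simplification of our own (the real `python-dotenv` also handles comments, quoting, and variable interpolation):

```python
import io

def parse_env(text):
    # Minimal .env parser: one KEY=VALUE per line, blank lines skipped.
    values = {}
    for line in io.StringIO(text):
        line = line.strip()
        if line and '=' in line:
            key, _, value = line.partition('=')
            values[key.strip()] = value.strip()
    return values

# 'sk-example' is obviously a fake key, used only for illustration.
config = parse_env('API_KEY=sk-example\n\nDEBUG=true\n')
```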

&lt;p&gt;Now, all we have to do is replace the exposed API key with the environment variable in the &lt;code&gt;config&lt;/code&gt; dictionary like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;API_KEY&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! Our API key is now safe and hidden from the main code.&lt;/p&gt;
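One caveat: `config['API_KEY']` raises a bare `KeyError` if the **.env** file is missing or the variable is misnamed. An optional defensive sketch (the error message below is our own, not from any library) fails with a clearer explanation instead:

```python
config = {}  # stand-in for dotenv_values('.env') when the file is empty or missing

# .get() returns None instead of raising, so we can report a friendlier error.
api_key = config.get('API_KEY')
if api_key is None:
    message = 'API_KEY not found: create a .env file containing API_KEY=<Your_API_Key>'
else:
    message = 'API_KEY loaded'
```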

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If you want to push your code to &lt;a href="https://www.github.com" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, you don't want to push the &lt;strong&gt;.env&lt;/strong&gt; file as well. In the root directory of your project, create a file called &lt;strong&gt;.gitignore&lt;/strong&gt;, and in the Git ignore file, type in &lt;code&gt;.env&lt;/code&gt;. This will prevent the file from being tracked by Git and ultimately pushed to GitHub.&lt;/p&gt;

&lt;p&gt;With all that said and done, we’re finished! The code should now look like this!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;blog_generator.py&lt;/strong&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Generate a Blog with OpenAI 📝
&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;dotenv_values&lt;/span&gt;

&lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;dotenv_values&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.env&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;API_KEY&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_blog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;paragraph_topic&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Completion&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;text-davinci-002&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Write a paragraph about the following topic. &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;paragraph_topic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;max_tokens&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;temperature&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;retrieve_blog&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;retrieve_blog&lt;/span&gt;

&lt;span class="n"&gt;keep_writing&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;

&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;keep_writing&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Write a paragraph? Y for yes, anything else for no. &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nf"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;paragraph_topic&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;What should this paragraph talk about? &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;generate_blog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;paragraph_topic&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
  &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;keep_writing&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;.env&lt;/strong&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sk-jAjqdWoqZLGsh7nXf5i8T3BlbkFJ9CYRk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Finish Line&lt;/h2&gt;

&lt;p&gt;Congrats, you just created a blog generator with OpenAI and Python! Throughout the project, we learned how to use GPT-3 to generate a paragraph, use a &lt;code&gt;while&lt;/code&gt; loop to create multiple paragraphs, and secure our app with a &lt;strong&gt;.env&lt;/strong&gt; file. 🙌&lt;/p&gt;

&lt;p&gt;AI is expanding rapidly, and the first few to utilize it properly through services like GPT-3 will become the innovators in the field. We hope this project helps you understand it a bit better.&lt;/p&gt;

&lt;p&gt;And lastly, we would love to see what you build with this tutorial! Tag &lt;a href="https://dev.to/codedex_io"&gt;@codedex_io&lt;/a&gt; and &lt;a href="https://twitter.com/openai" rel="noopener noreferrer"&gt;@openai&lt;/a&gt; on Twitter if you make something cool!&lt;/p&gt;

&lt;h2&gt;More Resources&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/codedex-io/projects/blob/main/projects/generate-a-blog-with-openai/blog_generator.py" rel="noopener noreferrer"&gt;Solution on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openai.com" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pypi.org/project/python-dotenv" rel="noopener noreferrer"&gt;python-dotenv&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>python</category>
      <category>tutorial</category>
      <category>ai</category>
      <category>codedex</category>
    </item>
  </channel>
</rss>
