System used
- Windows 11
- Visual Studio Code
- Python
Use ChatGPT in your Python script
ChatGPT and other AI models are likely to shape the future of software development, along with a string of other fields. This is not a bad thing; for us developers it is simply a great tool. You have most likely heard all about it, and perhaps you've used the GUI provided on the OpenAI website. It is also possible to integrate the model into your own script, so let's do so 😄
The hierarchy
root/
├── env
└── project/
└── main.py
First of all I want to create a folder, in which I create and activate a virtual environment, as well as a project folder where the .py files go.
Note: The # initiates a comment, so each command is explained inline.
PS C:\root\py_scripts> mkdir pyterminal #create application folder
PS C:\root\py_scripts> cd pyterminal #switch to said folder
PS C:\root\py_scripts\pyterminal> py -m venv env #create virtual environment
PS C:\root\py_scripts\pyterminal> env\Scripts\Activate.ps1 #activate said environment
PS C:\root\py_scripts\pyterminal> mkdir project #create folder for .py files
When the virtual environment is up and running, I install the openai module, create a main.py file, and pop open Visual Studio Code.
(env) PS C:\root\py_scripts\pyterminal> pip install openai #install openai module
(env) PS C:\root\py_scripts\pyterminal> cd project #switch to project folder
(env) PS C:\root\py_scripts\pyterminal\project> new-item main.py #create .py file
(env) PS C:\root\py_scripts\pyterminal\project> cd.. #switch back
(env) PS C:\root\py_scripts\pyterminal> code . #open visual studio code
The content of main.py is shown down below. Four imports are used:
- os, used to clear the terminal screen
- json, used to parse JSON data
- openai, well... that's why we're here!
- sleep from time, used to add a delay so that the ChatGPT response doesn't disappear immediately
The function configOpenAI() sets the API key. To get your own key you will need to create an OpenAI account and generate one, then substitute it for YOUR_API_KEY. Do not share this key; treat it like a password.
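Hardcoding the key works for a quick test, but a safer pattern is to read it from an environment variable so it never ends up in version control. A minimal sketch (the variable name OPENAI_API_KEY and the helper loadAPIKey are my own conventions, not part of the openai module):

```python
import os

def loadAPIKey():
    # Fetch the key from an environment variable instead of hardcoding it.
    key = os.environ.get("OPENAI_API_KEY")
    if key is None:
        raise RuntimeError("Set the OPENAI_API_KEY environment variable first")
    return key

# In main.py you would then do: openai.api_key = loadAPIKey()
```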
The function promptChatGPT(question) relies primarily on the openai.Completion.create() function, whose arguments are of importance:
- model: the identifier of the model you want to use, in this case GPT-3's text-davinci-003. All models are listed in the OpenAI documentation.
- prompt: the prompt itself, a string (or tokens, as OpenAI puts it).
- max_tokens: the maximum number of tokens in the response; the limit is typically 2048–4096 depending on the model.
- temperature: decides the creativity of the response; 0 is deterministic and higher values give more varied output.
- There are many more arguments left unsaid; read the docs and let a magical future unfold.
The return value of said function is cast to a string, and the JSON is subsequently parsed, as we only want the text of ChatGPT's response.
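To make the parsing step concrete, here is an abbreviated example of the JSON shape the Completion endpoint returned at the time of writing (fields trimmed, values invented), followed by the same lookup the script performs:

```python
import json

# Abbreviated sample of a Completion response body (values are made up).
sample = '''
{
  "choices": [
    {"text": "\\nHello there!", "index": 0, "finish_reason": "stop"}
  ],
  "model": "text-davinci-003"
}
'''

data = json.loads(sample)
text = data['choices'][0]['text']  # same lookup as in main.py
print(text.strip())  # -> Hello there!
```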
Lastly, I started out with C/C++, so naturally we have a main() function. It is a simple continuous while loop in which a menu is presented.
import os
import json
import openai
from time import sleep

def configOpenAI():
    openai.api_key = "YOUR_API_KEY"

def promptChatGPT(question="Test"):
    configOpenAI()
    chatGPT = "text-davinci-003"
    jsonData = openai.Completion.create(model=chatGPT, prompt=question, temperature=0, max_tokens=100)
    data = json.loads(str(jsonData))
    text = data['choices'][0]['text']
    print(text)

def main():
    while True:
        os.system('cls')
        print("\nMenu:")
        print("\n[1] Prompt ChatGPT")
        print("[2] Quit")
        choice = input("Input: ")
        if choice == '1': promptChatGPT(input("Enter a prompt: "))
        elif choice == '2': break
        sleep(5)

if __name__ == '__main__':
    main()
After running the script, the program is up and running!
(env) PS C:\root\py_scripts\pyterminal\project> py main.py
The AI is the limit, happy coding!