Most beginners make a critical mistake when working with the OpenAI API: they assume the AI remembers them.
By default, Large Language Models (LLMs) are stateless: if you say "My name is Shakar," and then ask "What is my name?" in the next request, the API has no idea who you are. Each request stands entirely on its own.
In this tutorial, we are going to fix that. We will build a stateful chatbot in Python that maintains conversation history, handles errors gracefully, and runs locally in your terminal.
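To see what statelessness means in practice, compare the payloads of two back-to-back requests. This is a hypothetical sketch (no network call is made); only the field names mirror the Chat Completions API:

```python
# Two back-to-back Chat Completions request payloads. Each payload is
# everything the model sees -- nothing from the first survives into the second.
request_1 = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "My name is Shakar."}],
}
request_2 = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "What is my name?"}],
}

# The second request carries no trace of the first:
print(any("Shakar" in m["content"] for m in request_2["messages"]))  # False
```

The fix, as we'll see below, is simply to send the earlier messages again yourself on every request.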
📺 Watch the Full Masterclass
If you prefer video, check out the full walkthrough on the channel here: IT Solutions Pro
```python
import os

from dotenv import load_dotenv
from openai import OpenAI

# 1. Load environment variables securely
load_dotenv()

# 2. Initialize the client
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

print("--- AI Chatbot Initialized (Type 'quit' to exit) ---")

# 3. Set up memory (system context)
# The system message sets the behavior of the AI
messages = [
    {"role": "system", "content": "You are a helpful, friendly IT Assistant."},
]

# 4. The main loop
while True:
    try:
        user_input = input("\nYou: ")

        # Exit condition
        if user_input.lower() in ["quit", "exit"]:
            print("Shutting down...")
            break

        # STEP A: Add the user's input to memory
        messages.append({"role": "user", "content": user_input})

        # STEP B: Send the WHOLE history to the API
        response = client.chat.completions.create(
            model="gpt-4o",  # You can use "gpt-3.5-turbo" to save cost
            messages=messages,
            temperature=0.7,
        )

        # STEP C: Extract the answer and add it to memory
        ai_response = response.choices[0].message.content

        # Crucial step: save the AI's own words back to the list
        messages.append({"role": "assistant", "content": ai_response})
        print(f"AI: {ai_response}")

    except KeyboardInterrupt:
        print("\nShutting down...")
        break
    except Exception as e:
        # Graceful recovery: drop the unanswered user message so the
        # history stays consistent, then keep the session alive
        print(f"An error occurred: {e}")
        if messages and messages[-1]["role"] == "user":
            messages.pop()
```
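One caveat: because the whole history is re-sent on every turn, long sessions eventually hit the model's context window and cost more per request. A minimal mitigation is to trim old turns before each call while always keeping the system prompt. Here `trim_history` and `MAX_TURNS` are illustrative names, not part of the OpenAI SDK:

```python
MAX_TURNS = 10  # how many recent user/assistant exchanges to keep (tune to taste)

def trim_history(messages, max_turns=MAX_TURNS):
    """Keep the system message(s), plus only the most recent turns."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    # Each turn is one user message plus one assistant reply
    return system + rest[-max_turns * 2:]
```

Call `messages = trim_history(messages)` just before the `client.chat.completions.create(...)` call in STEP B.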

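For the `load_dotenv()` call to find your key, you need a `.env` file next to the script. A minimal example (the `sk-...` value is a placeholder for your own key from the OpenAI dashboard):

```
# .env -- keep this file out of version control (add it to .gitignore)
OPENAI_API_KEY=sk-...
```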