Isaac Kinyanjui Ngugi
Chat with OpenAI: SME Fast AI Assistant

OVERVIEW

I will show you how to build a project that connects to OpenAI's API.
You'll learn:

  1. How to set up Python and Anaconda.
  2. How to safely use API keys.
  3. How to send your messages to an AI model.
  4. How to read OpenAI's responses.
  5. How to give the AI a "personality".

Task 1: How to set up Python and Anaconda.

Anaconda is a free Python distribution that bundles the tools you need for data science and AI. Download it from the official Anaconda website and run the installer for your operating system.

Task 2: How to safely use API keys.

OpenAI API Key Setup:

  1. Sign up at platform.openai.com.
  2. Copy your API key.
  3. Create a file named .env in your project folder.
  4. Inside .env, add this line:

OPENAI_API_KEY="sk-your-real-api-key"
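If you use git, also add `.env` to your `.gitignore` so the key never ends up in a commit:

```
# .gitignore
.env
```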

Load API Key and Configure Client

# Install needed packages
# !pip install --upgrade openai python-dotenv

from openai import OpenAI
import os
from dotenv import load_dotenv

# Load API key from .env file
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")

# Configure OpenAI client
openai_client = OpenAI(api_key=openai_api_key)
print("OpenAI client ready")
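One gotcha: if the `.env` file is missing or misnamed, `os.getenv` silently returns `None`, and the failure only shows up later at your first API call. A small guard catches it early (`get_api_key` is my own helper name, not part of the OpenAI SDK):

```python
import os

def get_api_key() -> str:
    """Return the OpenAI key from the environment, failing loudly if missing."""
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set - check your .env file")
    return key
```

Call `get_api_key()` right after `load_dotenv()` and pass the result to the `OpenAI(...)` constructor.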

Task 3: How to send your messages to an AI model

You will send a message and get an AI reply.
How it works:

  • model: which AI model to use (start with "gpt-4o-mini").
  • messages: the conversation so far, as a list of dictionaries, each with a role ("user", "assistant", or "system") and content.

Code to Send a Message

my_message = "What is the tallest mountain in the world?"

response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": my_message}]
)

ai_reply = response.choices[0].message.content
print("AI says:\n", ai_reply)
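The messages parameter is plain Python data, so you can assemble it with a small helper instead of writing the dictionaries inline every time (`build_messages` is my own name, not part of the OpenAI SDK):

```python
def build_messages(user_text, history=None, system_prompt=None):
    """Assemble the list of role/content dicts the Chat Completions API expects."""
    messages = []
    if system_prompt:
        # The system message shapes the AI's behavior and goes first
        messages.append({"role": "system", "content": system_prompt})
    if history:
        # Earlier turns of the conversation, oldest first
        messages.extend(history)
    messages.append({"role": "user", "content": user_text})
    return messages
```

With this, the call above becomes `messages=build_messages(my_message)`.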

Task 4: How to read OpenAI's responses.

Each OpenAI response comes wrapped in detailed metadata. Below is an example of the full object:

ChatCompletion(
 id='chatcmpl-CGMnpfRxsx23fmmwFGIn3rR0uLAw7', 
 choices=[
  Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(
   content='The tallest mountain in the world is Mount Everest, which stands at an elevation of 8,848.86 meters (29,031.7 feet) above sea level. It is located in the Himalayas on the border between Nepal and the Tibet Autonomous Region of China.', 
   refusal=None, 
   role='assistant', 
   annotations=[], 
   audio=None, 
   function_call=None, 
   tool_calls=None)
        )
     ], 
 created=1758016937, 
 model='gpt-4o-mini-2024-07-18', 
 object='chat.completion', 
 service_tier='default', 
 system_fingerprint='fp_560af6e559', 
 usage=CompletionUsage(
   completion_tokens=55,
   prompt_tokens=16, 
   total_tokens=71,
   completion_tokens_details=CompletionTokensDetails(
     accepted_prediction_tokens=0, 
     audio_tokens=0, 
     reasoning_tokens=0, 
     rejected_prediction_tokens=0), 
 prompt_tokens_details=PromptTokensDetails(
  audio_tokens=0, 
  cached_tokens=0)
 )
)

Most of the extra metadata is meant for debugging. The parts that actually matter:

  1. choices[0].message.content: the actual text answer from the AI that you show to the user ("The tallest mountain in the world is Mount Everest…").

  2. usage: the token breakdown.
    • prompt_tokens: how many tokens your input used.
    • completion_tokens: how many tokens the AI generated.
    • total_tokens: prompt + completion; this is what you are billed for.

  3. finish_reason (inside choices): why the AI stopped.
    • "stop": finished normally.
    • "length": the reply was cut off by the token limit (you might want to continue it).
    • "tool_calls" / "function_call": the model wants to call a function/tool.

Tokens matter because more tokens = higher cost.
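As a rough sketch of how token counts turn into money: multiply each count by the per-million-token rate for your model. The prices in the example are illustrative placeholders, not quoted rates; always check OpenAI's pricing page.

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  input_price_per_1m, output_price_per_1m):
    """Estimate the cost of one request in dollars.

    Prices are per one million tokens; pass in the current rates
    for your model from OpenAI's pricing page.
    """
    return (prompt_tokens * input_price_per_1m
            + completion_tokens * output_price_per_1m) / 1_000_000

# Example: 16 prompt tokens + 55 completion tokens (illustrative prices only)
print(f"${estimate_cost(16, 55, 0.50, 1.50):.6f}")
```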

Code: Check Token Usage

def ask_llm(message: str, model="gpt-4o-mini"):
    response = openai_client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": message}]
    )

    ai_reply = response.choices[0].message.content
    usage = response.usage

    print("AI Reply:\n", ai_reply[:200], "...")  # print first part
    print("\n--- Token Info ---")
    print("Prompt Tokens:", usage.prompt_tokens)
    print("Completion Tokens:", usage.completion_tokens)
    print("Total Tokens:", usage.total_tokens)

ask_llm("Explain the difference between supervised and unsupervised learning in AI.")

Task 5: How to give the AI a "personality"

You only need to add a system prompt to shape how the AI talks.

Code: AI as a Character

character_personalities = {
    "Tony Stark": "You are Tony Stark. Be witty, sarcastic, and confident. End some replies with: 'Because I'm Tony Stark.'",
    "Sleepy Cat": "You are a very sleepy cat. Always sound drowsy. Mention naps often."
}

chosen_character = "Tony Stark"
system_instructions = character_personalities[chosen_character]

user_message = "Hey, how you doing?"

response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_instructions},
        {"role": "user", "content": user_message}
    ]
)

print(f"{chosen_character} says:\n")
print(response.choices[0].message.content)
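A single request forgets everything once it returns. To keep the personality across several turns, resend the system prompt plus the full history on every call. A minimal sketch of that bookkeeping (`ConversationHistory` is my own helper, not part of the SDK):

```python
class ConversationHistory:
    """Keeps the system prompt first and appends each turn in order."""

    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user(self, text):
        self.messages.append({"role": "user", "content": text})

    def add_assistant(self, text):
        self.messages.append({"role": "assistant", "content": text})

# Usage with the client from earlier (sketch):
# history = ConversationHistory(system_instructions)
# history.add_user("Hey, how you doing?")
# response = openai_client.chat.completions.create(
#     model="gpt-4o-mini", messages=history.messages)
# history.add_assistant(response.choices[0].message.content)
```

Passing `history.messages` on each call is what makes the model "remember" both its character and the earlier turns.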

PRACTICE IDEAS

  1. Change my_message to ask your own question.
  2. Switch models ("gpt-4o" vs "gpt-4o-mini") and compare.
  3. Create your own character (e.g., "Soccer Commentator").

👉 For more AI content and job reposts, connect with me, follow me, and subscribe to my newsletter.
