sreejinsreenivasan

Prompt Engineering for OpenAI Chat Completions

A prompt is, essentially, a piece of text we give the model to steer its response. Its quality and clarity directly influence the accuracy and relevance of the generated responses. With well-crafted prompts, developers can shape the model's understanding, context, and expected output, leading to more precise and useful responses while saving time and effort.

Setting Up

To get started with prompting, you'll need to set up the OpenAI Python library. Install the library using pip:

pip install openai

Once the library is installed, the next step is to import the OpenAI library and set your API key:

from openai import OpenAI

client = OpenAI(api_key="your-api-key")
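As a side note, hard-coding the key is best avoided. The OpenAI client also picks up the key from the OPENAI_API_KEY environment variable automatically when api_key is omitted, so a sketch like this keeps the key out of source code (the placeholder value here is purely illustrative):

```python
import os

# Illustrative placeholder; in practice, export OPENAI_API_KEY in your shell
# instead of setting it in code.
os.environ.setdefault("OPENAI_API_KEY", "your-api-key")

# OpenAI() with no api_key argument reads OPENAI_API_KEY automatically,
# so client = OpenAI() is enough once the variable is set.
api_key = os.environ["OPENAI_API_KEY"]
print(api_key)
```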

Quickstart

Let's dive right in with a simple example. Suppose we want a joke about cats. Our prompt could be:

prompt = "Tell me a joke about cats"

Using the OpenAI library, we can obtain a response from the model based on this prompt:

response = client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=[
    {
      "role": "user",
      "content": prompt
    }
  ],
)
response.choices[0].message.content

Output:

"Why don't cats play poker in the wild? Too many cheetahs!"

That's it! We've successfully obtained a joke about cats using a simple prompt.

Let's enhance our prompt dynamically by incorporating a placeholder for the topic:

topic = "dog"

prompt = "Tell me a joke about {topic}"
print(prompt.format(topic=topic))

Here, the format method replaces the placeholder {topic} with the value of the topic variable, producing the following prompt:

Output:

 "Tell me a joke about dog"
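For templates like this, an f-string is a common alternative that produces the same result; a quick sketch:

```python
topic = "dog"
template = "Tell me a joke about {topic}"

# An f-string interpolates at the point of definition, while str.format
# fills the placeholder later; both yield the same final prompt here.
prompt_fstring = f"Tell me a joke about {topic}"
prompt_format = template.format(topic=topic)
print(prompt_fstring)  # Tell me a joke about dog
```

str.format is handy when the template is defined once and reused with different values, as in the helper below.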

A helper function can be created to simplify the process of obtaining chat completions:

def get_chat_response(prompt):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": prompt
            }
        ],
    )
    return response.choices[0].message.content

Now, we can obtain a response based on this dynamic prompt:

get_chat_response(prompt.format(topic=topic))

Output:

"Why did the dog sit in the shade? Because he didn't want to be a hot dog!"

Hilarious!

Fine-tuning Model Parameters

OpenAI allows control over model parameters such as temperature and top_p, which influence the randomness and creativity of the generated responses. Lower temperature values make the output more focused and deterministic, while higher values make it more varied and creative.

Let's modify our function to include a temperature parameter:


def get_chat_response(prompt, temperature=0):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": prompt
            }
        ],
        temperature=temperature,
    )
    return response.choices[0].message.content

Increasing the temperature may produce different responses:

get_chat_response(prompt.format(topic="cat"), temperature=1)

Output:

"Why did the cat sit on the computer? Because it wanted to keep an eye on the mouse!"

There are many other parameters that can be used to fine-tune the model's behavior, such as max_tokens, top_p, and presence_penalty.

Use max_tokens to control the length of the response. For example, setting max_tokens=20 limits the response to 20 tokens. This is useful for preventing the model from generating overly long responses and for saving resources.
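As a sketch, the helper could accept a cap as well (get_short_response and its client parameter are illustrative names, not part of the OpenAI API; the client object is the one created earlier):

```python
def get_short_response(client, prompt, max_tokens=20):
    # max_tokens caps how many tokens the model may generate; note the
    # reply can be cut off mid-sentence if the cap is reached.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
    )
    return response.choices[0].message.content
```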

Elements of a Good Prompt

A good prompt is clear, concise, and specific, providing enough context for the model to understand the user's intent and generate a relevant response. Here are key elements to consider:

  1. Instruction: Clearly state the task or instruction you want the model to perform.
  2. Context: Provide external information or additional context to steer the model towards better responses.
  3. Input Data: Offer the input or question for which you seek a response.
  4. Output Indicator: Specify the type or format of the desired output.
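These four elements can be assembled mechanically. A rough sketch (build_prompt and its section labels are illustrative conventions, not a standard API):

```python
def build_prompt(instruction, context, input_data, output_indicator):
    # Join the four elements with blank lines so the model sees each
    # part as a distinct section of the prompt.
    parts = [
        instruction,
        f"Context: {context}",
        f"Input: {input_data}",
        f"Output format: {output_indicator}",
    ]
    return "\n\n".join(parts)

prompt = build_prompt(
    "Summarize the text below.",
    "The text is a customer support email.",
    "My order arrived late and the box was damaged.",
    "One sentence, neutral tone.",
)
print(prompt)
```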

Crafting Complex Prompts

These examples are very basic. For more complex and specific responses, we can use richer prompts and more context to guide the model's understanding.

Let's say I want to generate a weekly plan based on specific goals.

prompt = """Efficiently plan the week to achieve all goals while maximizing productivity and allowing time for relaxation.

Goals of the week:

1. Complete development sprint for project X.
2. Lead team meeting to discuss project updates and assign tasks.
3. Research and prototype innovative features for product Y.
4. Attend networking event to expand professional network.
5. Review industry trends and advancements for potential business opportunities.
"""

Let's see what the model comes up with for this prompt:

get_chat_response(prompt, temperature=1)


Output:

To efficiently plan the week and achieve all goals while maximizing productivity and allowing time for relaxation, consider the following schedule:

Monday:
- Start the day by reviewing project X and outlining tasks for the development sprint.
- Lead team meeting in the afternoon to discuss project updates and assign tasks accordingly.
- Dedicate time in the evening to research and prototype innovative features for product Y.

Tuesday:
- Focus on completing tasks for project X development sprint.
- Attend the networking event in the evening to expand professional network.

Wednesday:
- Review industry trends and advancements in the morning for potential business opportunities.
- Continue working on project X development sprint in the afternoon.
- Allocate time in the evening for relaxation and self-care.

Thursday:
- Finalize development sprint for project X and prepare for presentation or review.
- Work on prototyping innovative features for product Y in the afternoon.

Friday:
- Present completed development sprint for project X to the team.
- Follow up with any additional tasks or updates from the team meeting.
- Reflect on the week's accomplishments and plan for the following week.

Throughout the week, ensure to prioritize tasks, allocate time for breaks and relaxation, and maintain open communication with the team. By organizing tasks and allocating time effectively, you can achieve all goals while also maintaining a healthy work-life balance.

That's nice! The model has generated a detailed plan for the week based on the provided goals, but the output still has a conversational, ChatGPT-like style.

The prompt provides a clear instruction, context, and input data. To get a more structured output, we can add an output indicator to the prompt.

Few Shot Prompting

Few-shot learning is a technique that allows models to learn from a small number of examples or instructions. This can be particularly useful when you want to guide the model's understanding with specific examples or context.

Let's modify the prompt to include an output indicator:

prompt = """Efficiently plan the week to achieve all goals while maximizing productivity and allowing time for relaxation.

Goals of the week:
1. Generate an end-to-end plan for automating task creation in Notion using LLMs.
2. Conduct necessary research before diving into automation.
3. Consolidate the plan and generate sprints and tasks to complete the automation.
4. Learn Prompt Engineering for developers using OpenAI.
5. Complete development sprint for project X.
6. Lead team meeting to discuss project updates and assign tasks.
7. Research and prototype innovative features for product Y.
8. Attend networking event to expand professional network.
9. Review industry trends and advancements for potential business opportunities.

Output Format:

9:00 AM - 10:30 AM: Task 1
10:30 AM - 11:00 AM: Break
11:00 AM - 12:30 PM: Task 2
12:30 PM - 1:30 PM: Lunch
1:30 PM - 3:00 PM: Task 3
3:00 PM - 3:30 PM: Break
3:30 PM - 5:00 PM: Task 4

Example Outputs:
Monday, 12/02/2024

- 9:00 AM - 10:30 AM: Brainstorming session for innovative Notion automation ideas
- 10:30 AM - 11:00 AM: Break
- 11:00 AM - 12:30 PM: Research session on latest AI advancements for task automation
- 12:30 PM - 1:30 PM: Lunch
- 1:30 PM - 3:00 PM: Prototyping session for Notion automation features
- 3:00 PM - 3:30 PM: Break
- 3:30 PM - 5:00 PM: Review and refine Notion automation plan
"""

Now, let's see what the model comes up with this time:

**Tuesday, 12/03/2024**

- 9:00 AM - 10:30 AM: Learn Prompt Engineering for developers using OpenAI
- 10:30 AM - 11:00 AM: Break
- 11:00 AM - 12:30 PM: Continue learning Prompt Engineering for developers
- 12:30 PM - 1:30 PM: Lunch
- 1:30 PM - 3:00 PM: Apply Prompt Engineering knowledge to project work
- 3:00 PM - 3:30 PM: Break
- 3:30 PM - 5:00 PM: Attend team meeting to discuss project updates and assign tasks

**Wednesday, 12/04/2024**

- 9:00 AM - 10:30 AM: Research and prototype innovative features for product Y
- 10:30 AM - 11:00 AM: Break
- 11:00 AM - 12:30 PM: Continue research and prototyping for product Y
- 12:30 PM - 1:30 PM: Lunch
- 1:30 PM - 3:00 PM: Networking event to expand professional network
- 3:00 PM - 3:30 PM: Break
- 3:30 PM - 5:00 PM: Review industry trends and advancements for potential business opportunities

**Thursday, 12/05/2024**

- 9:00 AM - 10:30 AM: Generate an end-to-end plan for automating task creation in Notion using LLMs
- 10:30 AM - 11:00 AM: Break
- 11:00 AM - 12:30 PM: Conduct necessary research before diving into automation
- 12:30 PM - 1:30 PM: Lunch
- 1:30 PM - 3:00 PM: Consolidate the plan and generate sprints and tasks to complete the automation
- 3:00 PM - 3:30 PM: Break
- 3:30 PM - 5:00 PM: Continue working on automation plan and tasks

**Friday, 12/06/2024**

- 9:00 AM - 10:30 AM: Complete development sprint for project X
- 10:30 AM - 11:00 AM: Break
- 11:00 AM - 12:30 PM: Continue development sprint for project X
- 12:30 PM - 1:30 PM: Lunch
- 1:30 PM - 3:00 PM: Finish up any pending tasks for project X
- 3:00 PM - 3:30 PM: Break
- 3:30 PM - 5:00 PM: Wrap up any remaining tasks and prepare for next week's goals

Now that's some detailed planning! The model has generated a weekly schedule based on the specific goals provided in the prompt. This demonstrates how prompts can be used to guide the model's understanding and generate specific, relevant outputs.

We successfully used few-shot learning to guide the model's understanding and generate specific outputs based on the provided examples.
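As a variant, few-shot examples can also be supplied as prior chat turns in the messages list rather than embedded in one prompt string; each user/assistant pair acts as one worked example. A sketch (the example content here is made up for illustration):

```python
# The final user turn is the real request; the model tends to answer it
# in the same format as the assistant example before it.
few_shot_messages = [
    {"role": "system", "content": "You plan days in fixed time blocks."},
    {"role": "user", "content": "Plan Monday: write report; review PRs."},
    {"role": "assistant", "content": (
        "9:00 AM - 10:30 AM: Write report\n"
        "10:30 AM - 11:00 AM: Break\n"
        "11:00 AM - 12:30 PM: Review PRs"
    )},
    {"role": "user", "content": "Plan Tuesday: team meeting; prototype feature."},
]
print(len(few_shot_messages))  # 4
```

This list would be passed as messages=few_shot_messages in the client.chat.completions.create call, replacing the single user message used so far.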

Conclusion and What's Next

There are other advanced techniques, such as chain-of-thought (CoT) prompting and Tree of Thoughts (ToT), which can further enhance the capabilities of these models for a wide range of applications, from simple conversations to complex problem-solving tasks.

I encourage you to read the source material at PromptingGuide.ai to delve deeper into prompting techniques.

Stay Connected

Join me on LinkedIn, where I share insights and updates on AI, Automation, Productivity, and more.

Connect with me on LinkedIn

Additionally, if you're interested in learning more about how I'm leveraging AI for simple automations and productivity hacks, subscribe to my newsletter "Growth Journal". Be the first to receive exclusive content and stay up-to-date with the latest trends in AI and automation.

Subscribe to my newsletter

Until next time, happy prompting!
