DEV Community

Crazyfox_4396
Crazyfox_4396

Posted on

prompt engineering foundation notes(1)

#ai

All material comes from:
https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/
This article is for personal study notes only, and all copyrights belong to DeepLearning.AI and the original author.

📖 This note includes:

  1. LLM types:Base vs. Instruction-tuned
  2. Principles 1: clear and specific instructions(with 4 tactics)
  3. Principles 2: give model more time to think before rushing out an answer (with 2 strategies)
  4. Limit and solution on hallucination

Broadly two types of LLM

Base LLMs has been trained to predict the next word likely to follow, whereas the instruction tuned LLMs focus more on giving output based on your instructions.

For example, if you input 'where is the capital in France':

  • base LLMs will try to figure out what would be the next similar questions, such as what is the biggest city in France , or where is the capital city of Britain.
  • However, if we want to actually gain the answer, instruction tuned LLM is suited to give the answer of Paris.

Some of the best practices may be more suited for a base LLM, but for most practical applications today, instruction-tuned LLMs which are easier to use.

Helper funcs

Helper function will make it easier to use prompts and look at the generated outputs.

Setup

import openai 
import os
Enter fullscreen mode Exit fullscreen mode

openai: allows developers to connect to and interact with OpenAI's AI models (such as GPT-4o, GPT-3.5, and DALL-E) through their API

os: imports the built-in os module, providing portable functions to interact with the underlying operating system.

  • Enables file management (create, delete, rename), path manipulation, directory navigation, and environment variable access. - It is essential for automating tasks and managing system-level operations in scripts

In the sample doc, they provide the code for a hidden environment where stores a test api key:

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())

openai.api_key  = os.getenv('OPENAI_API_KEY')
Enter fullscreen mode Exit fullscreen mode

However, to implement other tasks elsewhere, we apply open.ai.key on our own. Using the OpenAI API outside of this classroom:
To install the OpenAI Python library:

!pip install openai
Enter fullscreen mode Exit fullscreen mode

The library needs to be configured with your account's secret key, which is available on the website.

You can either set it as the OPENAI_API_KEY environment variable before using the library:

!export OPENAI_API_KEY='sk-...'
Or, set openai.api_key to its value:

import openai
openai.api_key = "sk-..."
Enter fullscreen mode Exit fullscreen mode

chat completion endpoint

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0, # this is the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]
Enter fullscreen mode Exit fullscreen mode

alternative code if use the OpenAI library version 1.0.0

client = openai.OpenAI()

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0
    )
    return response.choices[0].message.content
Enter fullscreen mode Exit fullscreen mode

Principles 1: Write clear and specific instructions

1. Use delimiters to clearly indicate distinct parts of the input

  • Prevent possible 'prompt injection' created by the content inside -> the AI might mix its instruction with input
  • some common tactics: triple quotes: """ ; triple backticks; triple dashes: --- ; angle brackets: <> ; XML tags: , ;
text = f"""
You should express what you want a model to do by \ 
providing instructions that are as clear and \ 
specific as you can possibly make them. \ 
This will guide the model towards the desired output, \ 
and reduce the chances of receiving irrelevant \ 
or incorrect responses. Don't confuse writing a \ 
clear prompt with writing a short prompt. \ 
In many cases, longer prompts provide more clarity \ 
and context for the model, which can lead to \ 
more detailed and relevant outputs.
"""
prompt = f"""
Summarize the text delimited by triple backticks \ 
into a single sentence.
<{text}>
"""
response = get_completion(prompt)
print(response)
Enter fullscreen mode Exit fullscreen mode

2. Ask for structured output like HTML or JSON;

prompt = f"""
Generate a list of three made-up book titles along \ 
with their authors and genres. 
Provide them in JSON format with the following keys: 
book_id, title, author, genre.
"""
response = get_completion(prompt)
print(response)
Enter fullscreen mode Exit fullscreen mode

Output sample: -> like dictionary or list ;

3.Check ** Assumptions required ** to do the tasks -- whether conditions are satisfied

text_2 = f"""
The sun is shining brightly today, and the birds are \
singing. It's a beautiful day to go for a \ 
walk in the park. The flowers are blooming, and the \ 
trees are swaying gently in the breeze. People \ 
are out and about, enjoying the lovely weather. \ 
Some are having picnics, while others are playing \ 
games or simply relaxing on the grass. It's a \ 
perfect day to spend time outdoors and appreciate the \ 
beauty of nature.
"""
prompt = f"""
You will be provided with text delimited by triple quotes. 
If it contains a sequence of instructions, \ 
re-write those instructions in the following format:

Step 1 - ...
Step 2 - …
…
Step N - …

If the text does not contain a sequence of instructions, \ 
then simply write \"No steps provided.\"

\"\"\"{text_2}\"\"\"
"""
response = get_completion(prompt)
print("Completion for Text 2:")
print(response)
Enter fullscreen mode Exit fullscreen mode

4. Few-shot prompting

  • Give successful examples to completing tasks, and then ask the model to perform the task.
prompt = f"""
Your task is to answer in a consistent style.

<child>: Teach me about patience.

<grandparent>: The river that carves the deepest \ 
valley flows from a modest spring; the \ 
grandest symphony originates from a single note; \ 
the most intricate tapestry begins with a solitary thread.

<child>: Teach me about resilience.
"""
response = get_completion(prompt)
print(response)
Enter fullscreen mode Exit fullscreen mode

Principles 2: Give the model time to “think”

1: Specify the steps required to complete a task

text = f"""
In a charming village, siblings Jack and Jill set out on \ 
a quest to fetch water from a hilltop \ 
well. As they climbed, singing joyfully, misfortune \ 
struck—Jack tripped on a stone and tumbled \ 
down the hill, with Jill following suit. \ 
Though slightly battered, the pair returned home to \ 
comforting embraces. Despite the mishap, \ 
their adventurous spirits remained undimmed, and they \ 
continued exploring with delight.
"""
# example 1
prompt_1 = f"""
Perform the following actions: 
1 - Summarize the following text delimited by triple \
backticks with 1 sentence.
2 - Translate the summary into French.
3 - List each name in the French summary.
4 - Output a json object that contains the following \
keys: french_summary, num_names.

Separate your answers with line breaks.

Text: {text}
"""
response = get_completion(prompt_1)
print("Completion for prompt 1:")
print(response)
Enter fullscreen mode Exit fullscreen mode

And we can also ask for output with specified format:

prompt_2 = f"""
Your task is to perform the following actions: 
1 - Summarize the following text delimited by 
  <> with 1 sentence.
2 - Translate the summary into French.
3 - List each name in the French summary.
4 - Output a json object that contains the 
  following keys: french_summary, num_names.

Use the following format:
Text: <text to summarize>
Summary: <summary>
Translation: <summary translation>
Names: <list of names in summary>
Output JSON: <json with summary and num_names>

Text: <{text}>
"""
response = get_completion(prompt_2)
print("\nCompletion for prompt 2:")
print(response)
Enter fullscreen mode Exit fullscreen mode

2. Instruct the model to work out its own solution before rushing to a conclusion

In this example, our aim is to establish whether the student's solution is correct. However, if AI only takes a brief look at the solution, there is a high probability that it will misclassify it and not identify any mistakes.

prompt = f"""
Determine if the student's solution is correct or not.

Question:
I'm building a solar power installation and I need \
 help working out the financials. 
- Land costs $100 / square foot
- I can buy solar panels for $250 / square foot
- I negotiated a contract for maintenance that will cost \ 
me a flat $100k per year, and an additional $10 / square \
foot
What is the total cost for the first year of operations 
as a function of the number of square feet.

Student's Solution:
Let x be the size of the installation in square feet.
Costs:
1. Land cost: 100x
2. Solar panel cost: 250x
3. Maintenance cost: 100,000 + 100x
Total cost: 100x + 250x + 100,000 + 100x = 450x + 100,000
"""
response = get_completion(prompt)
print(response)
Enter fullscreen mode Exit fullscreen mode

Instead, we can fix this by instructing the model to work out its own solution first.

prompt = f"""
Your task is to determine if the student's solution \
is correct or not.
To solve the problem do the following:
- First, work out your own solution to the problem including the final total. 
- Then compare your solution to the student's solution \ 
and evaluate if the student's solution is correct or not. 
Don't decide if the student's solution is correct until 
you have done the problem yourself.
"""
response = get_completion(prompt)
print(response)
Enter fullscreen mode Exit fullscreen mode

3. Model Limitations: Hallucinations

LLMs might not be well-trained on knowing its boundary. This means it might tries to answer the question about obscure topics and can make things up that sounds plausible but not true, such as confabulation on description
To reduce such hallucinations, we can firstly find relevant information, then answer the question based on the relevant information.

Top comments (0)