Introduction
In our previous post, which kicked off this series on Large Language Models (LLMs), we explored how LLMs differ from traditional models. One key difference is how LLMs work—they are designed to generate text based on the prompts we give them. Think of it like starting a conversation: you give the LLM a prompt, and it responds by generating text that completes it.
The quality of the output from an LLM heavily depends on the prompt you provide. The more detailed and clear your prompt is, the better the response you’ll get from the LLM. That’s why understanding how to craft good prompts is so important when working with these models. In this post, we'll dive deeper into working with the OpenAI SDK and explore some popular techniques for crafting effective prompts.
If you want to master the art of prompting, DeepLearning.AI's "ChatGPT Prompt Engineering for Developers" is the perfect launchpad. This course breaks down the basics of prompt engineering into easy-to-digest lessons, making it accessible even if you're new to AI. You'll explore various prompting techniques that can significantly enhance the effectiveness of your interactions with ChatGPT. It's a must for anyone looking to unlock the full potential of AI.
Basics of Prompting
Let’s start with some basic prompts. Imagine we want to write a LinkedIn post about a potential business deal. Below are two prompts—one basic and one with more context—showing how different prompts can lead to different responses. You’ll see how adding context and a system message helps generate more relevant and effective results.
Basic Prompt -
const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{
    role: "user",
    content: "Write a LinkedIn connection request.",
  }],
});
Advanced Prompt -
const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{
    role: "system",
    content: "You are a professional LinkedIn user.",
  },
  {
    role: "user",
    content: "Write a personalized LinkedIn connection request to a potential business partner. Mention a shared interest in technology and set a collaborative tone.",
  }],
});
In this advanced example, the system message “You are a professional LinkedIn user” helps the LLM generate a more tailored and context-aware connection request. By providing specific details in the prompt, we guide the LLM to produce a message that is not only more engaging but also aligns with the intended purpose. Give the code a try to see the generated response. You can also experiment with the temperature setting to get different results. The OpenAI Playground provides a user-friendly interface for this.
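As a minimal sketch of that experiment, the request below is shown as a plain payload object so you can see where temperature goes; the value 0.9 is only an illustration, not a recommendation:

```javascript
// Sketch: the advanced prompt with a temperature setting added.
// Lower values (near 0) make the output more deterministic; higher
// values (up to 2) make it more varied.
const request = {
  model: "gpt-4o-mini",
  temperature: 0.9, // try 0.2 vs. 1.2 and compare the responses
  messages: [
    { role: "system", content: "You are a professional LinkedIn user." },
    {
      role: "user",
      content:
        "Write a personalized LinkedIn connection request to a potential business partner.",
    },
  ],
};
// const response = await openai.chat.completions.create(request);
```

Running the same request a few times at different temperatures is the quickest way to get a feel for the setting.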
How to Write Good Prompts:
- Your prompt should include the main goal (what you want the LLM to do) and extra info (details that help the LLM understand better).
- Give the AI a role (if needed): Example: You're a Twitter expert who writes popular tweets. Write a tweet about hiking.
- Add extra rules or information: Example: Write a tweet about hiking. Use no more than two emojis. Focus on nature lovers. Mention 2 benefits of hiking often.
- The LLM's output is a starting point; fine-tune and adjust it as needed. Tell it which parts need to be improved.
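The guidelines above can be sketched as a small helper that assembles a role, a main goal, and extra rules into a messages array. The `buildMessages` function is our own invention, not part of the OpenAI SDK:

```javascript
// Sketch (assumed helper): compose a prompt from role + goal + rules.
function buildMessages({ role, goal, rules = [] }) {
  const messages = [];
  if (role) {
    // The role becomes the system message.
    messages.push({ role: "system", content: role });
  }
  // Goal and extra rules are joined into a single user message.
  messages.push({ role: "user", content: [goal, ...rules].join(" ") });
  return messages;
}

const messages = buildMessages({
  role: "You're a Twitter expert who writes popular tweets.",
  goal: "Write a tweet about hiking.",
  rules: ["Use no more than two emojis.", "Focus on nature lovers."],
});
// Pass `messages` to openai.chat.completions.create({ model, messages }).
```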
Add Meaningful Context to your Prompts:
- Prefer short, focused sentences.
- Add important keywords & avoid unnecessary information.
- Define the target audience (for tweets, LinkedIn posts, blog posts)
- Control tone, style & length of the output.
- You can also control the output format (JSON, etc.)
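For the output-format point, here is a sketch of a request that asks for JSON. The keys `post` and `hashtags` are our own invention; `response_format: { type: "json_object" }` is supported on recent OpenAI chat models and requires the word "JSON" to appear somewhere in the messages:

```javascript
// Sketch: controlling the output format, shown as a plain request payload.
const request = {
  model: "gpt-4o-mini",
  response_format: { type: "json_object" },
  messages: [
    {
      role: "system",
      content:
        "You write LinkedIn posts. Reply as JSON with the keys 'post' and 'hashtags'.",
    },
    { role: "user", content: "Write a short post about remote work." },
  ],
};
// const response = await openai.chat.completions.create(request);
// JSON.parse(response.choices[0].message.content) then gives you an object.
```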
Zero Shot Prompting
Zero-shot prompting is a way to use an LLM without giving it special examples. You simply ask the LLM to do a task, and it tries its best using what it already knows. Use zero-shot prompting when:
- You need a quick answer.
- The task is simple.
const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{
    role: "user",
    content: "Tell me how AI helps in hospitals.",
  }],
});
In the above example, the LLM gives a helpful answer just from your question, without needing extra information or examples.
One-Shot Prompting
One-shot prompting is when you give the LLM one example to help it understand what you want. It's like showing someone how to do something once before asking them to try it themselves. Use one-shot prompting when:
- You want the LLM to follow a specific style.
- The task is a bit tricky.
- You need to guide the LLM without giving too many instructions.
const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{
    role: "system",
    content: `You are a professional LinkedIn user.
Your task is to write a LinkedIn post announcing new events like the example below.
Example: 'Greetings from XYZ Corp. Excited to announce our new partnership with ABC Inc. Together, we're driving innovation in the tech industry!'`,
  },
  {
    role: "user",
    content: "Now write a LinkedIn post announcing a new product launch.",
  }],
});
By giving one example, you help the LLM understand the tone and style you want, so it can give you a better answer.
Few-Shot Prompting
Few-shot prompting is when you give the LLM a handful of examples before asking it to do a task. It's like showing someone how to do something a few times before they try it on their own. Use few-shot prompting when:
- You want the LLM to follow a specific pattern
- You need the LLM to sort things into categories
- You're working on tasks where examples really help, like translating languages in a particular style or tone, e.g. translating German text into Shakespearean English.
const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{
    role: "system",
    content: `You are a professional news reader.
Your task is to categorize news articles like the examples below.
Article 1: "New electric car saves energy and drives far." Topic: Technology
Article 2: "Latest clothing styles are changing how people dress on the street." Topic: Fashion`,
  },
  {
    role: "user",
    content: `What's the topic of this article: "Meditation helps people feel less stressed and focus better".`,
  }],
});
By showing a few examples, you help the LLM understand how to sort articles into the right topics.
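The example above packs the demonstrations into the system message; another common pattern is to encode each example as a user/assistant turn pair. The `buildFewShotMessages` helper below is a hypothetical sketch of that pattern:

```javascript
// Sketch (assumed helper): turn labeled examples into alternating
// user/assistant turns, followed by the real query.
function buildFewShotMessages(instruction, examples, query) {
  const messages = [{ role: "system", content: instruction }];
  for (const { input, output } of examples) {
    messages.push({ role: "user", content: input });
    messages.push({ role: "assistant", content: output });
  }
  messages.push({ role: "user", content: query });
  return messages;
}

const messages = buildFewShotMessages(
  "Categorize each news article into a topic.",
  [
    { input: "New electric car saves energy and drives far.", output: "Technology" },
    { input: "Latest clothing styles are changing street fashion.", output: "Fashion" },
  ],
  "Meditation helps people feel less stressed and focus better."
);
// Pass `messages` to openai.chat.completions.create({ model, messages }).
```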
Chain-of-Thought Prompting
Chain-of-Thought prompting is like asking the AI to "show its work." Instead of just giving an answer, the AI explains how it got there, step by step. Use Chain-of-Thought prompting when:
- You have a tricky problem to solve.
- You want to see how the AI thinks and verify its reasoning.
- The task has multiple steps.
const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{
    role: "user",
    content: `Solve the following math problem and show your work: What is 25% of 80?`,
  }],
});
By explaining each step, the AI shows you its thinking process. This helps you understand the answer better and makes sure the AI is on the right track.
Prompts with Output Templates
Output templates are like forms you give the AI to fill out. They help you get information in a neat, organized way. Use output templates when:
- You want information in the same format every time
- You need to compare different things easily
- You're making lists or collecting specific details
const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{
    role: "user",
    content: `Tell me about 5 great summer vacation spots. Use this form:
Place:
When to go:
How warm it gets:
How much sun you'll see:
How often it rains:
`,
  }],
});
By using a template, you get all the information you want in a tidy, easy-to-read format.
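A nice side effect of labeled templates is that the model's reply is easy to parse back into data. The `parseTemplate` helper below is a hypothetical sketch that splits "Label: value" lines into an object:

```javascript
// Sketch (assumed helper): parse "Label: value" lines from a filled-in template.
function parseTemplate(text) {
  const entry = {};
  for (const line of text.split("\n")) {
    const match = line.match(/^([^:]+):\s*(.+)$/);
    if (match) entry[match[1].trim()] = match[2].trim();
  }
  return entry;
}

// Example with a hand-written response; a real run would use
// response.choices[0].message.content instead.
const sample = "Place: Lisbon\nWhen to go: May to September\nHow warm it gets: Around 28°C";
const parsed = parseTemplate(sample);
// parsed.Place === "Lisbon"
```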
Perspective Prompting
Perspective prompting involves framing your prompt from a specific point of view or role. By doing this, you guide the LLM to generate responses that align with a particular perspective, making the output more tailored and relevant. Use perspective prompting when:
- You want answers from a specific point of view
- You need to understand different roles or experiences
- You're creating content for particular groups of people
const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{
    role: "user",
    content: `Create a travel plan for a 2-week yoga retreat in Austria from the perspective of a yoga trainer.`,
  }],
});
Laddering Prompting
Laddering prompting involves breaking down a complex task into smaller, manageable prompts. Instead of tackling everything at once, you handle each part step by step. This approach helps in building detailed and accurate responses, especially for intricate projects. Use laddering prompting when:
- The task is too big or complex for one question
- You need to break down a problem into smaller steps
- You want a well-organized, step-by-step answer
For example, if you're writing a long report, you might first ask about the main topics, then about each topic in detail, and finally how to put it all together.
For another example, imagine you want to create a complete REST API. Instead of trying to do it all in one prompt, you can break it down into smaller steps: set up the project, connect to the database, create schemas, create endpoints, and so on.
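The laddering idea can be sketched as a loop that feeds each answer back into the conversation before asking the next question. The `ladder` helper and the step prompts below are our own invention; it assumes the OpenAI SDK client from the earlier examples:

```javascript
// Sketch (assumed helper): run a list of prompts one at a time, keeping the
// growing conversation so each step builds on the previous answers.
async function ladder(openai, steps) {
  const messages = [];
  const answers = [];
  for (const step of steps) {
    messages.push({ role: "user", content: step });
    const response = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages, // full history so far
    });
    const answer = response.choices[0].message.content;
    messages.push({ role: "assistant", content: answer });
    answers.push(answer);
  }
  return answers;
}

// Example usage for the REST API scenario:
// await ladder(openai, [
//   "Set up an Express project for a REST API.",
//   "Now connect it to a PostgreSQL database.",
//   "Now define the user schema and the CRUD endpoints.",
// ]);
```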
Delimiters in Prompts
Delimiters are used to clearly separate different parts of a prompt, making it easier for the AI to understand and respond accurately. Common delimiters include triple quotation marks (""") and XML tags (<tag>). These help structure the prompt, ensuring that instructions, examples, and other information are distinct and easy to follow.
Example 1: Triple Quotation Marks
Triple quotation marks are useful for enclosing larger blocks of text. They help in clearly demarcating sections within a prompt.
Instruction: Please summarize the text delimited by """.
""" your text .... """
Example 2: XML Tags
XML tags provide a more structured way to label different parts of a prompt. This method is particularly useful for complex prompts with multiple sections.
<instruction>
Please summarize the following text.
</instruction>
<text>
your text ....
</text>
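Both styles are easy to automate when prompts are built in code. The two wrapper functions below are hypothetical helpers, not part of any SDK:

```javascript
// Sketch (assumed helpers): wrap text in delimiters before placing it
// into a prompt, so instructions and content stay clearly separated.
function withTripleQuotes(text) {
  return `"""\n${text}\n"""`;
}

function withXmlTag(tag, text) {
  return `<${tag}>\n${text}\n</${tag}>`;
}

const prompt = `Please summarize the text delimited by """.
${withTripleQuotes("your text ....")}`;
```

Wrapping user-supplied text this way also makes it harder for content inside the delimiters to be mistaken for instructions.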
Enhancing your Prompts
If you're ever unsure about crafting the perfect prompt, why not let the LLM help you out? You can ask the model itself to generate prompts, making the process much easier. Once you've got your prompts, it's crucial to compare the outputs to see which ones deliver the best results. Tools like Anthropic's workbench are great for this—allowing you to generate, test, and compare prompts in one powerful platform. You can even run test cases to fine-tune your prompts. If you're curious about how different LLMs respond to the same prompts, Airtrain.ai lets you compare outputs side by side, helping you choose the best LLM for your needs. And for managing and observing your prompts, Pezzo offers a comprehensive platform to keep everything organized and optimized.
Conclusion
In this article, we explored various prompting techniques and discussed how to tailor prompts for better results with LLMs. To reiterate, the prompt you define directly influences the output you get from the model. Crafting effective prompts is crucial for your day-to-day tasks and applications.
Make sure to include relevant context in your prompts to guide the LLM effectively. Don't hesitate to experiment with different prompts and compare multiple responses to find the best outcome. LLMs are versatile tools used for various tasks, including translation, content creation, and more. Tailor your prompts to fit each specific use case to maximize their effectiveness.
In the next post, we'll dive into using other LLM SDKs. We'll explore how to work with open-source models such as Mistral and Llama, as well as proprietary LLMs offered by Cohere and Anthropic. Until then, keep coding!