Prompt engineering emerged with GPT-2 and GPT-3, gaining major attention after ChatGPT's release in 2022. Though called "engineering," it is often more art than science, relying on iterative prompt refinement.
For AI Practitioner level exams, you should understand how instructions, context, data, and output shape foundation model performance, recognize prompting techniques (few-shot, zero-shot, chain-of-thought), and be aware of security risks like prompt injection, model poisoning, and jailbreaking.
Prompt Anatomy
A prompt can be of almost any length, although each LLM provider enforces its own maximum limit. In practice that limit is usually large enough that you rarely need to worry about it. A prompt can be broken down into four components.
- Instructions: What you need the model to do
- Context: Background information that frames the request for the model
- Input: The data the model needs to process
- Output: The desired output type or format
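To make the anatomy concrete, here is a minimal sketch in Python that assembles the four components into a single prompt string. The function and variable names are purely illustrative and are not part of any provider's API.

```python
# Minimal sketch: assemble the four prompt components into one string.
# The names (instruction, context, input_data, output_format) are
# illustrative only; they are not part of any provider's API.

def build_prompt(instruction: str, context: str, input_data: str, output_format: str) -> str:
    """Combine instruction, context, input data and output format into a prompt."""
    return (
        f"Instruction: {instruction}\n"
        f"Context: {context}\n"
        f"Input: {input_data}\n"
        f"Output format: {output_format}"
    )

prompt = build_prompt(
    instruction="Classify this customer review as Positive, Negative, or Neutral.",
    context="The review is about a restaurant experience.",
    input_data='"The food was great, but the service was very slow."',
    output_format="One word: Positive, Negative, or Neutral.",
)
print(prompt)
```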
Prompt Example: Classification
Instruction: Classify this customer review as Positive, Negative, or Neutral.
Context: The review is about a restaurant experience.
Input Data: "The food was great, but the service was very slow."
Output Format: One word: Positive, Negative, or Neutral.
Using this example, if you type only the following into ChatGPT,
Classify this customer review as Positive, Negative, or Neutral. "The food was great, but the service was very slow."
You get the response "This review is Neutral — it contains both positive (great food) and negative (slow service) aspects."
Whereas if you supply the full prompt with all four components as specified above, you get the single-word response Neutral.
This shows how the structure of the prompt shapes the output.
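If you want to reproduce this comparison programmatically rather than in the ChatGPT UI, here is a hedged sketch using the Amazon Bedrock Converse API via boto3, which fits the AWS exam context. The model ID is an assumption; any text model enabled in your account and region would do. It sends the bare prompt and the fully structured prompt and prints both replies.

```python
# Sketch: compare a bare prompt with a fully structured prompt on Amazon Bedrock.
# The model ID below is an assumption; substitute any model enabled in your
# account and region.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumed; pick any enabled model

bare_prompt = (
    "Classify this customer review as Positive, Negative, or Neutral. "
    '"The food was great, but the service was very slow."'
)

structured_prompt = (
    "Instruction: Classify this customer review as Positive, Negative, or Neutral.\n"
    "Context: The review is about a restaurant experience.\n"
    'Input: "The food was great, but the service was very slow."\n'
    "Output format: One word: Positive, Negative, or Neutral."
)

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 100, "temperature": 0},
    )
    return response["output"]["message"]["content"][0]["text"]

print("Bare prompt       ->", ask(bare_prompt))        # often a full explanatory sentence
print("Structured prompt ->", ask(structured_prompt))  # typically just "Neutral"
```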
Instructions
The instruction is the most important part of a prompt. Without it, the LLM does not know what to do. Here are examples of different types of instructions.
- Summarise: Summarise this article (paste the article text into the prompt)
- Coding: Write a C# method that uploads a file to Amazon S3
- Analyse: Perform sentiment analysis on the review
- Compare: Tell me the differences between Android and iPhone
Context
Context is essentially the background, details, and framing that help the model understand what you want, why you want it, and under what conditions. With context, the model tailors its response to your specific situation, goals, and style, making the output more relevant and accurate. Here are a few examples.
- Task framing: You are a career coach helping a software engineer prepare for interviews.
- Background: The user is a high school student beginning software engineering and wants simple examples with explanations.
- Constraints: Explain in under 100 words, using plain language.
- Reference: Here is an excerpt from the company's style guide. Write the article to match this tone.
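Many chat-style APIs let you pass this kind of context separately from the user message, for example as a system prompt. Here is a hedged sketch, again using the Bedrock Converse API with an assumed model ID, that supplies the career-coach framing as a system prompt so the reply is tailored to that role.

```python
# Sketch: supply context as a system prompt so the model frames its answer
# accordingly. The model ID is an assumption.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumed

response = bedrock.converse(
    modelId=MODEL_ID,
    system=[{"text": "You are a career coach helping a software engineer prepare for interviews."}],
    messages=[{"role": "user", "content": [{"text": "How should I answer 'tell me about yourself'?"}]}],
    inferenceConfig={"maxTokens": 300, "temperature": 0.5},
)
print(response["output"]["message"]["content"][0]["text"])
```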
Input
The input is the text you provide to an LLM in order to guide its output. Think of it as the instructions, context, and data you give the model.
Output
The output is the response generated by the model after it processes the given prompt. It can be text, code, a classification, a summary, an answer, and so on. You can also specify the output format in the prompt, such as a single word, JSON, CSV, or a short text summary.
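As a sketch of defining the output format, the prompt below asks for JSON and the code parses the reply. The Bedrock model ID is again an assumption, and some models may still wrap the JSON in extra text, so real code usually needs more robust parsing.

```python
# Sketch: ask for a specific output format (JSON) and parse the reply.
# The model ID is an assumption; some models may add extra text around the
# JSON, so handle parsing failures gracefully.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumed

prompt = (
    "Instruction: Classify this customer review as Positive, Negative, or Neutral.\n"
    'Input: "The food was great, but the service was very slow."\n'
    'Output format: JSON only, e.g. {"sentiment": "Positive", "reason": "..."}'
)

response = bedrock.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 150, "temperature": 0},
)
reply = response["output"]["message"]["content"][0]["text"]

try:
    result = json.loads(reply)  # works when the model returns pure JSON
    print(result["sentiment"])
except json.JSONDecodeError:
    print("Model did not return valid JSON:", reply)
```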