Dhanush B

Write the Best Prompts for ChatGPT and Other LLMs: Learn Key Techniques & Best Practices in Under 20 Minutes

Prompt engineering is the strategic practice of crafting and refining input prompts for Large Language Models (LLMs) like Gemini, GPT, Claude, and open-source models such as Gemma or LLaMA. This discipline involves designing effective prompts that guide the model to generate accurate, relevant, and useful responses, while balancing creativity and determinism through the careful adjustment of model parameters such as temperature, top-K, top-P, and token limits. Effective prompt engineering leverages advanced methods and frameworks, such as Few-Shot Learning, Chain-of-Thought (CoT), Step-Back prompting, Tree of Thoughts (ToT), Self-Consistency, ReAct (Reason & Act), and Automatic Prompt Engineering, to optimize the output quality and consistency.


Detailed Explanation & Examples:

1. Zero-Shot Prompting (General Prompting)

This method involves giving instructions directly to the model without providing any examples. The model relies entirely on its pre-trained knowledge.

Examples:

  • Prompt: "Classify the sentiment as POSITIVE, NEUTRAL or NEGATIVE. Review: 'The plot was exciting and brilliantly executed.' Sentiment:"
    Response: POSITIVE

  • Prompt: "Summarize in one sentence: AI is transforming various sectors by automating processes, enhancing decision-making, and improving efficiency."
    Response: "AI improves multiple industries through automation, smarter decisions, and greater efficiency."


2. One-shot & Few-shot Prompting

One-shot prompting provides a single illustrative example; few-shot provides several, allowing the model to learn and replicate the desired output pattern effectively.

One-shot Example:

Prompt:
"Translate the following into Spanish:
Example: 'Good morning' → 'Buenos días'
Now translate: 'Good night' →"

Response:
"Buenas noches"

Few-shot Example:

Prompt:
"Classify as FRUIT or VEGETABLE:
Tomato → FRUIT
Cabbage → VEGETABLE
Cucumber → FRUIT
Now classify: Carrot →"

Response:
"VEGETABLE"

3. System Prompting

This technique provides clear system-level instructions specifying how responses should be structured or formatted, useful for ensuring output consistency.

Example:

Prompt:
"Classify movie reviews strictly into POSITIVE, NEUTRAL, or NEGATIVE sentiment labels. Respond only with the label.
Review: 'The film was captivating but slightly too long.'"

Response:
"POSITIVE"

4. Role Prompting

Role prompting instructs the LLM to assume a specific persona or role, which helps generate content that aligns stylistically and contextually with the specified character or profession.

Example:

Prompt:
"Act as a humorous tech reviewer. Briefly describe the latest iPhone."

Response:
"The latest iPhone is like your old iPhone, but now it costs a kidney plus tax and features slightly shinier edges!"

5. Contextual Prompting

By supplying additional context or background information in the prompt, this method ensures responses are tailored precisely to the scenario.

Example:

Prompt:
"Context: You're a teacher preparing a geography quiz.
Write a short question about capital cities."

Response:
"What is the capital city of Canada?"

6. Step-back Prompting

Step-back prompting encourages the model to consider broader or more abstract aspects of a problem before answering the specific query, which improves critical thinking and accuracy.

Example:

Prompt:
"List general scenarios for cybersecurity threats:
- Phishing emails
- Malware infection
- Weak passwords

Now, suggest one preventive measure for 'Malware infection.'"

Response:
"Regularly update antivirus software and perform system scans."

7. Chain-of-Thought (CoT) Prompting

This powerful method instructs the model explicitly to reason step-by-step through complex problems, significantly improving logical consistency and accuracy.

Example:

Prompt:
"A book costs $5. A pen costs half the price of the book. What is the total cost of 2 books and 4 pens? Let's think step-by-step."

Response:
"1. Book = $5
2. Pen = $5 / 2 = $2.50
3. Total = 2 books ($10) + 4 pens ($10) = $20
Final Answer: $20"
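
In code, CoT often amounts to appending the step-by-step cue and keeping the temperature low; if only the number matters, parse it from the "Final Answer" line. A minimal sketch:

```python
cot_prompt = (
    "A book costs $5. A pen costs half the price of the book. "
    "What is the total cost of 2 books and 4 pens? Let's think step-by-step."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": cot_prompt}],
    temperature=0,  # reasoning tasks usually want deterministic output
)
reasoning = response.choices[0].message.content
print(reasoning)  # full trace, ending in something like "Final Answer: $20"
```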

8. Self-consistency Prompting

This involves generating multiple reasoning paths or outputs for a given query and selecting the most consistent or frequent result, thus enhancing reliability.

Example:

Prompt (run multiple times):
"Is the email 'You won a million dollars, claim now!' spam or legitimate? Explain briefly."

Response summary after multiple runs:
- Attempt 1: SPAM (Suspicious offer)
- Attempt 2: SPAM (Unrealistic claim)
- Attempt 3: SPAM (Typical phishing style)

Final Decision: SPAM
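
This is easy to automate: sample several completions at a non-zero temperature and take a majority vote over the verdicts. A sketch; the `n` parameter requests multiple samples in one call:

```python
from collections import Counter

question = (
    "Is the email 'You won a million dollars, claim now!' SPAM or LEGITIMATE? "
    "Answer with one word, then a brief reason."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
    temperature=0.8,  # diversity across reasoning paths is the point here
    n=5,              # five independent samples
)

# The first word of each sample is the verdict; the majority wins
votes = Counter(
    choice.message.content.split()[0].strip(".,:").upper()
    for choice in response.choices
)
print(votes.most_common(1)[0][0])  # expected: SPAM
```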

9. Tree of Thoughts (ToT)

Tree of Thoughts lets the model explore multiple reasoning paths simultaneously by branching into different possibilities, making it well suited to complex decision-making scenarios.

Example:

Prompt:
"Suggest methods to reduce carbon emissions:
Branch 1: Increase renewable energy use.
Branch 2: Enhance public transportation.
Branch 3: Promote energy-efficient appliances.

Evaluate effectiveness and cost to determine the optimal choice."

Response:
"Optimal choice: Increase renewable energy use due to highest impact on emissions reduction and long-term cost-effectiveness."

10. ReAct (Reason & Act)

ReAct integrates reasoning with external tool use (e.g., APIs, web searches), letting the model retrieve real-time or external information dynamically.

Example:

Prompt:
"Find today's weather forecast for London."

Resulting trace (the Observation line comes from the tool, not the model):
Thought: I need the current forecast data.
Action: Use weather API for London.
Observation: Cloudy, 15°C.
Final Answer: Today's forecast for London is cloudy with a temperature of 15°C.
Enter fullscreen mode Exit fullscreen mode
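
A ReAct loop alternates model turns (Thought/Action) with tool calls whose results are fed back as Observations. In the sketch below, `get_weather` is a hypothetical stand-in for a real tool, and the `Action: weather(...)` syntax is an assumption of this example:

```python
def get_weather(city: str) -> str:
    # Hypothetical tool; in practice this would call a real weather API.
    return "Cloudy, 15°C"

messages = [
    {"role": "system", "content": (
        "Answer using this loop: state a Thought, then either "
        "'Action: weather(<city>)' or 'Final Answer: <answer>'."
    )},
    {"role": "user", "content": "Find today's weather forecast for London."},
]

for _ in range(3):  # cap the reasoning/tool cycles
    reply = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, temperature=0,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    if "Final Answer:" in reply:
        print(reply.split("Final Answer:")[-1].strip())
        break
    if "Action: weather(" in reply:
        city = reply.split("Action: weather(")[-1].split(")")[0]
        messages.append({"role": "user", "content": f"Observation: {get_weather(city)}"})
```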

11. Automatic Prompt Engineering (APE)

APE automates the generation and iterative refinement of prompts: the model produces variations, evaluates their effectiveness, and systematically selects the best-performing option.

Example:

Prompt:
"Generate variations for clearly ordering coffee:
1. 'I'd like one cappuccino.'
2. 'Can I have a cappuccino, please?'
Evaluate clarity, politeness, and brevity."

Selected Best: "Can I have a cappuccino, please?"
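
APE can be scripted as generate-then-evaluate: one call proposes candidate phrasings, another (or an automatic metric) judges them, and the winner is kept. A minimal sketch under the same assumptions as the earlier ones:

```python
# Step 1: generate candidate prompts
candidates = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": (
        "Write 3 short, distinct ways to clearly order a cappuccino, one per line."
    )}],
    temperature=0.9,  # encourage varied candidates
).choices[0].message.content.splitlines()

# Step 2: have the model judge the candidates
winner = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": (
        "Pick the single best option for clarity, politeness, and brevity. "
        "Reply with that option verbatim:\n" + "\n".join(candidates)
    )}],
    temperature=0,
).choices[0].message.content
print("Selected best:", winner)
```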

Best Practices (for optimal results):

  1. Use Clear, Simple, Explicit Prompts:
    Clearly state your task, goal, and format explicitly to reduce ambiguity.

  2. Provide Representative Examples:
    Include one-shot or few-shot examples to guide the LLM precisely toward desired outcomes.

  3. Incorporate Contextual Information:
    Provide relevant background details that refine response accuracy.

  4. Leverage Step-by-Step Reasoning (CoT):
    Explicitly instruct the model to "think step-by-step" for better logical reasoning outcomes.

  5. Balance Creativity vs. Determinism (Temperature Settings):
    Adjust temperature settings strategically; lower (0.1-0.3) for factual precision, higher (0.7-1.0) for creative scenarios (see the sketch after this list).

  6. Employ Positive Instructions Over Negative Constraints:
    Prefer stating what should be done rather than restrictions or what to avoid.

  7. Clearly Specify Desired Output Formats:
    Explicitly state if responses should be JSON, XML, bulleted lists, etc., enhancing usability and consistency.

  8. Adjust Sampling Parameters (Top-K, Top-P):
    Experiment with top-K/top-P values to manage output randomness, diversity, and relevance, as shown in the sketch after this list.

  9. Use Role & System Prompting for Specificity:
    Define clear personas or system-level guidelines to ensure stylistically accurate responses.

  10. Iteratively Optimize Prompts (Automatic Prompt Engineering):
    Systematically generate, test, and refine prompts, documenting iterations to optimize performance continually.

  11. Enable External Interaction (ReAct):
    Allow models to utilize external tools or APIs to provide richer, more accurate information.

  12. Document and Analyze Results Thoroughly:
    Maintain structured documentation of prompts, parameters, and outcomes to ensure reproducibility and continual improvement.
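
As a concrete illustration of practices 5 and 8, the sketch below runs the same `client` with two sampling configurations. The values are illustrative; note that the OpenAI API exposes `temperature` and `top_p` but no top-K parameter, which some other providers do offer:

```python
# Low temperature: precise, repeatable answers for factual tasks
factual = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "State the boiling point of water at sea level."}],
    temperature=0.2,
    top_p=0.9,
)

# High temperature: varied, imaginative output for creative tasks
creative = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a two-line poem about water."}],
    temperature=0.9,
    top_p=1.0,
)

print(factual.choices[0].message.content)
print(creative.choices[0].message.content)
```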

By consistently applying these best practices, prompt engineering can effectively harness the full potential of Large Language Models, delivering precise, useful, and contextually appropriate results.
