foxgem

Prompt Engineering Knowledge Cards

The Google Prompt Engineering Whitepaper is excellent, so I created a set of knowledge cards from it with ChatGPT πŸ˜„.

πŸ› οΈ Best Practices for Effective Prompting

| Principle | Key Idea | Example / Tip |
| --- | --- | --- |
| Provide Examples | Use one-shot or few-shot examples to show the model what good output looks like. | βœ… Include 3-5 varied examples in classification prompts. |
| Design with Simplicity | Clear, concise, and structured prompts work better than vague or verbose ones. | ❌ "What should we do in NY?" -> βœ… "List 3 family attractions in Manhattan." |
| Be Specific About Output | Explicitly define output length, format, tone, or constraints. | "Write a 3-paragraph summary in JSON format." |
| Instructions > Constraints | Tell the model what to do, not what not to do. | βœ… "List top consoles and their makers." vs ❌ "Don't mention video game names." |
| Control Token Length | Use model config or prompt phrasing to limit response length. | "Explain in 1 sentence" or set a token limit. |
| Use Variables | Template prompts for reuse by inserting dynamic values. | "Tell me a fact about {city}" |
| Experiment with Input Style | Try different formats: questions, statements, instructions. | πŸ”„ Compare: "What is X?", "Explain X.", "Write a blog about X." |
| Shuffle Classes (Few-Shot) | Mix up response class order to avoid overfitting to the prompt pattern. | βœ… Randomize class label order in few-shot tasks. |
| Adapt to Model Updates | LLMs evolve; regularly test and adjust prompts. | πŸ”„ Re-tune for new Gemini / GPT / Claude versions. |
| Experiment with Output Format | For structured tasks, ask for output in JSON/XML to reduce ambiguity. | "Return the response as valid JSON." |
| Document Prompt Iterations | Keep track of changes and tests for each prompt. | πŸ“ Use a table or versioning system. |
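
To make the "Provide Examples" and "Use Variables" cards concrete, here is a minimal sketch of a reusable few-shot classification prompt in Python. The example reviews, labels, and the `{review}` placeholder are invented for illustration; the result is just a string you would pass to whichever model API you use.

```python
# Few-shot classification prompt assembled from a reusable template.
# The example reviews are made up; swap in ones from your own domain.
FEW_SHOT_TEMPLATE = """Classify the movie review as POSITIVE, NEUTRAL or NEGATIVE.

Review: "Her" is a disturbing masterpiece. I loved it.
Sentiment: POSITIVE

Review: The plot was fine, nothing special either way.
Sentiment: NEUTRAL

Review: I walked out halfway through.
Sentiment: NEGATIVE

Review: {review}
Sentiment:"""


def build_prompt(review: str) -> str:
    """Insert the dynamic value into the template (the 'Use Variables' card)."""
    return FEW_SHOT_TEMPLATE.format(review=review)


print(build_prompt("Great sound design, but the pacing dragged."))
```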

🎯 Core Prompting Techniques

| Technique | Description | Example Summary |
| --- | --- | --- |
| Zero-Shot | Ask the model directly without any example. | 🧠 "Classify this review as positive/neutral/negative." |
| One-Shot | Provide one example to show expected format/output. | πŸ–‹οΈ Input + example -> new input |
| Few-Shot | Provide multiple examples to show a pattern. | πŸŽ“ Use 3-5 varied examples. Helps with parsing, classification, etc. |
| System Prompting | Set high-level task goals and output instructions. | πŸ› οΈ "Return the answer as JSON. Only use uppercase for labels." |
| Role Prompting | Assign a persona or identity to the model. | 🎭 "Act as a travel guide. I'm in Tokyo." |
| Contextual Prompting | Provide relevant background info to guide output. | πŸ“œ "You're writing for a retro games blog." |
| Step-Back Prompting | Ask a general question first, then solve the specific one. | πŸ”„ Extract relevant themes -> use as context -> ask the final question |
| Chain of Thought (CoT) | Ask the model to think step by step. Improves reasoning. | πŸ€” "Let's think step by step." |
| Self-Consistency | Generate multiple CoTs and pick the most common answer. | πŸ—³οΈ Run the same CoT prompt multiple times, use a majority vote |
| Tree of Thoughts (ToT) | Explore multiple reasoning paths in parallel for more complex problems. | 🌳 The LLM explores different paths like a decision tree |
| ReAct (Reason & Act) | Mix reasoning + action. The model decides, acts (e.g. via tool/API), observes, and iterates. | πŸ€– Thought -> Action -> Observation -> Thought |
| Automatic Prompting | Use an LLM to generate prompt variants automatically, then evaluate the best ones. | πŸ’‘ "Generate 10 ways to say 'Order a small Metallica t-shirt.'" |
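
As one concrete example from this table, here is a sketch of Self-Consistency: sample the same chain-of-thought prompt several times at a non-zero temperature and keep the majority answer. The `generate` callable is a stand-in for whatever LLM call you use (not a real library function), and the prompt text and the "Answer:" convention are only illustrative.

```python
import re
from collections import Counter
from typing import Callable

COT_PROMPT = (
    "I have 3 apples, buy 2 bags with 4 apples each, then eat 1 apple. "
    "How many apples do I have now? Let's think step by step, and finish "
    "with a line of the form 'Answer: <number>'."
)


def self_consistent_answer(generate: Callable[[str], str], samples: int = 5) -> str:
    """Run the same CoT prompt several times and majority-vote the final answer."""
    answers = []
    for _ in range(samples):
        reasoning = generate(COT_PROMPT)                 # one sampled chain of thought
        match = re.search(r"Answer:\s*(\S+)", reasoning)
        if match:
            answers.append(match.group(1))
    if not answers:
        raise ValueError("no parsable answers returned")
    return Counter(answers).most_common(1)[0][0]         # the most frequent answer wins
```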

βš™οΈ LLM Output Configuration Essentials

| Config Option | What It Does | Best Use Cases |
| --- | --- | --- |
| Max Token Length | Limits response size by number of tokens. | πŸ“¦ Prevent runaway generations, control cost/speed. |
| Temperature | Controls randomness of token selection (0 = deterministic). | 🎯 0 for precise answers (e.g., math/code), 0.7+ for creativity. |
| Top-K Sampling | Picks the next token from the top K most probable tokens. | 🎨 Higher K = more diverse output. K = 1 = greedy decoding. |
| Top-P Sampling | Picks from the smallest set of tokens with cumulative probability β‰₯ P. | πŸ’‘ Top-P ~0.9-0.95 gives quality + diversity. |
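
For a rough picture of how these knobs act on a next-token distribution, here is a small sampler sketch in Python/NumPy. The five-token vocabulary and logit values are invented, and real decoders differ in detail (for example, in the order the top-K and top-P filters are applied), so treat it as an illustration rather than any model's actual implementation.

```python
import numpy as np

# Toy next-token distribution; real models score tens of thousands of
# tokens, these five are made up for illustration.
VOCAB = ["the", "a", "cat", "dog", "pizza"]
LOGITS = np.array([2.0, 1.5, 0.8, 0.5, -1.0])


def sample(logits, temperature=1.0, top_k=None, top_p=None, rng=None):
    """Pick one token index, mimicking temperature / top-K / top-P filtering."""
    rng = np.random.default_rng() if rng is None else rng
    if temperature == 0:                               # greedy: always the most probable token
        return int(np.argmax(logits))
    probs = np.exp(logits / temperature)               # softmax with temperature
    probs /= probs.sum()
    if top_k is not None:                              # keep only the K most probable tokens
        cutoff = np.sort(probs)[-top_k]
        probs = np.where(probs >= cutoff, probs, 0.0)
        probs /= probs.sum()
    if top_p is not None:                              # smallest set with cumulative probability >= P
        order = np.argsort(probs)[::-1]
        cumulative = np.cumsum(probs[order])
        keep = order[: np.searchsorted(cumulative, top_p) + 1]
        filtered = np.zeros_like(probs)
        filtered[keep] = probs[keep]
        probs = filtered / filtered.sum()
    return int(rng.choice(len(logits), p=probs))


print(VOCAB[sample(LOGITS, temperature=0.9, top_k=3, top_p=0.95)])
```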

πŸ” How These Settings Interact

| If You Set... | Then... |
| --- | --- |
| temperature = 0 | Top-K/Top-P are ignored. The most probable token is always chosen. |
| top-k = 1 | Like greedy decoding. Temperature/Top-P become irrelevant. |
| top-p = 0 | Only the most probable token is considered. |
| high temperature (e.g. > 1) | Makes Top-K/Top-P dominant. Token sampling becomes more random. |
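
A quick way to see the temperature rows of this table is to look at how the softmax sharpens or flattens. This is a self-contained sketch with the same made-up logits as above, not any particular model's decoder.

```python
import numpy as np

logits = np.array([2.0, 1.5, 0.8, 0.5, -1.0])   # toy logits, invented for illustration


def softmax_with_temperature(logits, t):
    scaled = np.exp(logits / t)
    return scaled / scaled.sum()


# Near-zero temperature: almost all mass lands on one token, so any top-K/top-P
# filter keeps only that token anyway (the "temperature = 0" row).
print(np.round(softmax_with_temperature(logits, 0.01), 3))

# High temperature: the distribution flattens, so the top-K/top-P cutoffs decide
# most of the behaviour (the "high temperature" row).
print(np.round(softmax_with_temperature(logits, 2.0), 3))
```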

βœ… Starting Config Cheat Sheet

| Goal | Temp | Top-P | Top-K | Notes |
| --- | --- | --- | --- | --- |
| 🧠 Precise Answer | 0 | Any | Any | For logic/math problems, deterministic output |
| πŸ› οΈ Semi-Creative | 0.2 | 0.95 | 30 | Balanced, informative output |
| 🎨 Highly Creative | 0.9 | 0.99 | 40 | For stories, ideas, writing |
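
The cheat sheet translates naturally into reusable presets. The field names below (`temperature`, `top_p`, `top_k`, `max_output_tokens`) mirror common generation-config options but are not any specific provider's API, and the token limits are placeholder choices, so copy the idea rather than the exact keys.

```python
# Starting presets from the cheat sheet. With temperature 0 the top_p / top_k
# values are irrelevant ("Any" in the table); the values used here and the
# max_output_tokens limits are illustrative defaults, not whitepaper numbers.
PRESETS = {
    "precise":       {"temperature": 0.0, "top_p": 1.0,  "top_k": 1,  "max_output_tokens": 512},
    "semi_creative": {"temperature": 0.2, "top_p": 0.95, "top_k": 30, "max_output_tokens": 1024},
    "creative":      {"temperature": 0.9, "top_p": 0.99, "top_k": 40, "max_output_tokens": 2048},
}


def config_for(goal: str) -> dict:
    """Return a copy of the preset so callers can tweak it per task."""
    return dict(PRESETS[goal])


print(config_for("semi_creative"))
```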
