DEV Community

郑沛沛

Prompt Engineering Patterns Every Developer Should Know

Prompt engineering isn't magic — it's software engineering for natural language. Here are battle-tested patterns that consistently produce better LLM outputs.

Pattern 1: Role + Context + Task + Format

The most reliable prompt structure:

prompt = """
You are a senior Python developer with 10 years of experience.

Context: I'm building a REST API with FastAPI that handles user authentication.

Task: Review this code and identify security vulnerabilities.

Format: List each vulnerability with:
- Location (line/function)
- Severity (high/medium/low)
- Fix recommendation with code example
"""
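The four parts of this template can be assembled by a small helper so they stay consistent across prompts. This is a hypothetical `build_prompt` sketch, not part of any library:

```python
def build_prompt(role: str, context: str, task: str, fmt: str) -> str:
    """Assemble a Role + Context + Task + Format prompt (illustrative helper)."""
    return (
        f"You are {role}.\n\n"
        f"Context: {context}\n\n"
        f"Task: {task}\n\n"
        f"Format: {fmt}"
    )

prompt = build_prompt(
    role="a senior Python developer with 10 years of experience",
    context="I'm building a REST API with FastAPI that handles user authentication.",
    task="Review this code and identify security vulnerabilities.",
    fmt="List each vulnerability with location, severity, and a fix recommendation.",
)
```

Centralizing the structure this way makes it easy to swap the role or format without rewriting the whole prompt string.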

Pattern 2: Few-Shot Examples

Show the model what you want:

prompt = """
Convert natural language to SQL queries.

Example 1:
Input: "Show all users who signed up last month"
Output: SELECT * FROM users WHERE created_at >= DATE_TRUNC('month', CURRENT_DATE - INTERVAL '1 month') AND created_at < DATE_TRUNC('month', CURRENT_DATE);

Example 2:
Input: "Count orders by status"
Output: SELECT status, COUNT(*) as count FROM orders GROUP BY status ORDER BY count DESC;

Now convert:
Input: "Find the top 5 customers by total spending"
Output:
"""
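When the example pairs live in code rather than a hand-written string, a builder keeps the formatting uniform. A minimal sketch (the `few_shot_prompt` helper is hypothetical):

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a few-shot prompt from (input, output) example pairs."""
    parts = [instruction, ""]
    for i, (inp, out) in enumerate(examples, 1):
        parts += [f"Example {i}:", f"Input: {inp}", f"Output: {out}", ""]
    # The trailing "Output:" cues the model to complete the final answer.
    parts += ["Now convert:", f"Input: {query}", "Output:"]
    return "\n".join(parts)

prompt = few_shot_prompt(
    "Convert natural language to SQL queries.",
    [("Count orders by status",
      "SELECT status, COUNT(*) AS count FROM orders GROUP BY status;")],
    "Find the top 5 customers by total spending",
)
```

Storing examples as data also lets you version them and swap in domain-specific sets per task.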

Pattern 3: Chain of Thought

Force step-by-step reasoning:

prompt = """
Analyze this database schema for potential performance issues.

Think step by step:
1. First, identify all the tables and their relationships
2. Check for missing indexes on foreign keys
3. Look for N+1 query risks
4. Identify potential bottlenecks at scale
5. Suggest specific optimizations

Schema:
{schema}
"""

Pattern 4: Output Constraints

Be explicit about format:

import openai, json

# Note: JSON mode (response_format) requires a model that supports it, e.g. gpt-4o.
response = openai.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": """Extract entities from this text. Return ONLY valid JSON.

Text: "John Smith from Acme Corp called about the Q4 report on March 15th"

Schema: {"people": [{"name": "string", "org": "string"}], "dates": ["string"], "topics": ["string"]}"""
    }]
)
data = json.loads(response.choices[0].message.content)
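Even with JSON mode, it pays to validate the parsed output before trusting it downstream. A minimal check against the schema above (the `validate_entities` helper is a sketch, not a library function):

```python
import json

def validate_entities(payload: str) -> dict:
    """Parse model output and verify the expected top-level keys exist."""
    data = json.loads(payload)  # raises json.JSONDecodeError on malformed output
    for key in ("people", "dates", "topics"):
        if key not in data:
            raise ValueError(f"missing key: {key}")
    return data
```

In production you might wrap this in a retry loop, feeding the error message back to the model for a corrected attempt.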

Pattern 5: Self-Consistency Check

prompt = """
Generate a Python function that validates email addresses using regex.

After writing the function, test it mentally against these cases:
- valid@example.com (should pass)
- invalid@.com (should fail)
- user@domain (should fail)
- name+tag@example.co.uk (should pass)

If any test fails, revise the function before giving your final answer.
"""
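The mental test suite above can also be mirrored in real code. One minimal regex that satisfies those four cases (intentionally simplified, not full RFC 5322 validation):

```python
import re

# Local part: letters, digits, and common special characters; domain: at
# least one dot-separated label after the "@".
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$")

def is_valid_email(addr: str) -> bool:
    return EMAIL_RE.match(addr) is not None

assert is_valid_email("valid@example.com")          # passes
assert not is_valid_email("invalid@.com")           # fails: empty domain label
assert not is_valid_email("user@domain")            # fails: no TLD
assert is_valid_email("name+tag@example.co.uk")     # passes
```

Running the same cases you asked the model to check lets you verify its output mechanically rather than trusting its self-report.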

Pattern 6: Structured Decomposition

Break complex tasks into subtasks:

def analyze_codebase(code: str) -> str:
    """Decompose a large review into focused LLM calls.
    Assumes a call_llm(prompt) -> str helper (not shown)."""
    # Step 1: Identify components (one name per line, so we can split the reply)
    listing = call_llm(f"List all classes and functions, one per line:\n{code}")
    components = [line.strip() for line in listing.splitlines() if line.strip()]
    # Step 2: Analyze each component separately
    analyses = [call_llm(f"Analyze for bugs:\n{c}") for c in components]
    # Step 3: Synthesize the findings into a single plan
    return call_llm("Summarize into an action plan:\n" + "\n\n".join(analyses))

Temperature Guide

# Factual tasks: low temperature
response = openai.chat.completions.create(model="gpt-4", temperature=0.1, messages=[...])

# Creative tasks: higher temperature
response = openai.chat.completions.create(model="gpt-4", temperature=0.8, messages=[...])
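One way to keep these settings consistent across a codebase is a small lookup keyed by task type. The values below are illustrative starting points, not official recommendations:

```python
# Illustrative defaults: precise extraction near 0, open-ended generation higher.
TEMPERATURE_BY_TASK = {
    "extraction": 0.0,
    "code_review": 0.1,
    "summarization": 0.3,
    "brainstorming": 0.8,
}

def temperature_for(task: str, default: float = 0.2) -> float:
    """Look up a temperature for a task type, falling back to a safe default."""
    return TEMPERATURE_BY_TASK.get(task, default)
```

Tuning these per task in one place beats scattering magic numbers through every API call.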

Key Takeaways

1. Structure prompts: Role + Context + Task + Format
2. Use few-shot examples for consistent output
3. Chain of thought improves reasoning quality
4. Constrain output format explicitly
5. Decompose complex tasks into steps
6. Low temperature for precision, high for creativity

🚀 Level up your AI workflow! Check out my AI Developer Mega Prompt Pack — 80 battle-tested prompts for developers. $9.99
