DEV Community

Yanchen Chen
The 4-Part Structure That Makes AI Prompts Actually Work (With 5 Real Examples)


Most prompt engineering advice is useless.

"Be specific and detailed." "Give context." "Use examples."

That's like telling someone to "write clearly" — technically correct, practically useless.

After six months of testing hundreds of prompts on real work tasks, I found a consistent structural pattern separating the prompts that work from the ones that produce inconsistent garbage.

Here's the framework, with 5 real examples you can copy right now.


The 4-Part Prompt Structure

Every high-performing prompt has these four elements:

1. Role

Not "you are a helpful assistant." A specific, experienced role with implicit knowledge baked in.

"You are a Senior Software Engineer with 10+ years of experience in production systems" carries implicit assumptions: you care about security, you've seen things break, you write code that other humans have to maintain. That context shapes everything that follows.

2. Task

Specific, scoped, with a clear deliverable.

Bad: "Help me with my code"
Good: "Review the following code for security vulnerabilities, performance issues, and maintainability problems. For each issue, provide the problem, its production impact, and the exact fix."

The task tells the AI what it's trying to produce, not just what subject to address.

3. Constraints

The most underused element. Negative constraints prevent the most common failure modes.

For writing: "Never use: 'I hope this finds you well', passive voice, corporate jargon"
For analysis: "Don't hedge everything with 'it depends' — give me a recommendation"
For code: "Only modify the function I specify, don't refactor surrounding code"

Constraints are how you prevent the AI from doing the annoying thing it always does.

4. Output Format

Explicit structure eliminates the guesswork.

"Return a JSON object with fields: {title, summary, tags[]}" → always consistent
"Give me the results" → wildly different format every time


5 Prompts Built With This Framework

Prompt 1: Code Reviewer (The One That Found My 3-Year-Old Bug)

You are a Senior Software Engineer with 10+ years of experience in production systems.

Review the following code for:
- Logic errors and edge cases
- Security vulnerabilities (injection, auth, data exposure)
- Performance bottlenecks
- Maintainability issues

For each issue:
1. Describe the exact problem
2. Explain why it matters in production
3. Provide the corrected code

Code to review:
[CODE]

Why it works: The role primes the model to think like someone who's been paged at 3am. The four categories focus the review. The three-part output format prevents vague "you should improve this" responses.

Real result: Found a SQL injection vector I'd had in my codebase for 3 years. The model saw it immediately.
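For context, the class of bug it caught looks like this (a hypothetical illustration using Python's sqlite3, not the actual code from my project):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def get_user_vulnerable(name: str):
    # BAD: user input interpolated directly into SQL -- injectable
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def get_user_safe(name: str):
    # GOOD: parameterized query -- the driver escapes the value
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

# A classic payload: the vulnerable path returns every row,
# while the safe path treats it as a literal name and matches nothing.
payload = "' OR '1'='1"
print(get_user_vulnerable(payload))  # [(1,)]
print(get_user_safe(payload))        # []
```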


Prompt 2: Debug Detective (Goes Beyond "Fix This Error")

Act as a principal engineer doing root cause analysis. You don't fix symptoms — you find the underlying cause.

Given this error:
[ERROR MESSAGE AND STACK TRACE]

Context: [Brief description of your codebase]

Provide:
1. ROOT CAUSE (not the error itself, but why it happened)
2. EXACT FIX with code changes
3. RELATED ISSUES (other problems from the same pattern)
4. PREVENTION (how to avoid this class of bug going forward)

Why it works: "Root cause analysis" is a specific mental mode. The constraint "you don't fix symptoms" prevents the default "here's how to handle this error" response. The four outputs force completeness.

Real result: Instead of just handling a KeyError, the model identified that my assumptions about the dict structure were wrong across 5 functions.
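The symptom-vs-root-cause distinction, in a hypothetical version of that bug: the symptom fix swallows the KeyError; the root-cause fix corrects the wrong assumption about the dict's shape.

```python
# Hypothetical API response: I assumed a flat dict, but the real
# structure nests the fields under a "data" key.
response = {"data": {"user": "alice", "id": 42}}

def get_user_symptom_fix(resp: dict) -> str:
    # Symptom fix: catch the KeyError and paper over it.
    try:
        return resp["user"]
    except KeyError:
        return "unknown"

def get_user_root_cause_fix(resp: dict) -> str:
    # Root-cause fix: read from where the field actually lives.
    return resp["data"]["user"]

print(get_user_symptom_fix(response))     # "unknown" -- bug hidden, not fixed
print(get_user_root_cause_fix(response))  # "alice"
```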


Prompt 3: Cold Email Personalizer (11% Reply Rate)

You are a B2B sales expert who writes emails that feel like they came from someone who genuinely researched the prospect — not from a template.

Write a cold email to [NAME] at [COMPANY].

What I know about them: [2-3 specific facts from LinkedIn or their website]

Rules:
- Max 5 sentences total
- First sentence must reference one specific fact about them (not "I saw you're at [COMPANY]")
- One clear ask in the last sentence
- NEVER USE: "I hope this finds you well", "I wanted to reach out", "synergy", "leverage", "circle back"

Why it works: The role creates implicit knowledge (sales experts know what doesn't work). The constraints are the most important part — they prevent every cliché cold email pattern.

Real result: Reply rate on cold outreach went from 2% to 11%.


Prompt 4: Meeting Summarizer (2 Hours → 30 Seconds)

You are an executive assistant known for ruthless clarity.

Transform this transcript into EXACTLY:

## DECISIONS MADE (3 max)
[Firm commitments only — not discussions]

## ACTION ITEMS (5 max)
[Format: [OWNER] will [ACTION] by [DEADLINE]]

## OPEN QUESTIONS (2 max)
[Unresolved issues needing follow-up]

## ONE-LINE SUMMARY
[Most important thing that happened, 20 words max]

Rules: Ruthlessly compress. Max 150 words total. If no deadline was mentioned, write "no deadline set."

Transcript: [PASTE HERE]

Why it works: The exact structure and max limits force actual summarization. "Firm commitments only" prevents fluffy "we discussed" entries from appearing as decisions.


Prompt 5: The Prompt Improver (Meta-Prompt)

You are a prompt engineering expert who has studied thousands of high-performing prompts.

Analyze and improve this prompt:
[PASTE YOUR PROMPT]

Intended use: [WHAT YOU'RE TRYING TO DO]
Model: [WHICH AI YOU'RE USING]

Provide:
1. DIAGNOSIS: 3 specific weaknesses (not "it's vague")
2. IMPROVED VERSION: The complete improved prompt, ready to use
3. WHAT CHANGED: Each significant change with the principle behind it
4. ONE-LINE SUMMARY: The core problem with the original

Why it works: This is recursive — it uses the framework to improve prompts that don't use the framework. "3 specific weaknesses (not 'it's vague')" prevents generic feedback.


The Checklist

Before you send any prompt, check:

  • [ ] Role: Is this a specific, experienced person, not just "assistant"?
  • [ ] Task: Would I know exactly what to produce if someone gave me this?
  • [ ] Constraints: Have I listed the 2-3 most common ways this goes wrong?
  • [ ] Format: Does the output format remove all ambiguity about structure?

If any are missing, the prompt will probably underperform.
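You can even mechanize a rough version of this checklist. The sketch below is a toy heuristic of my own, not a real tool: it flags prompts missing obvious markers of three of the four parts (the task itself is too free-form to detect with keywords).

```python
def lint_prompt(prompt: str) -> list[str]:
    """Crude keyword check for role, constraints, and format; returns warnings."""
    text = prompt.lower()
    markers = {
        "Role": ["you are", "act as"],
        "Constraints": ["never", "don't", "only", "max"],
        "Format": ["json", "format", "return", "##"],
    }
    warnings = []
    for part, keywords in markers.items():
        if not any(k in text for k in keywords):
            warnings.append(f"Missing {part}?")
    return warnings

print(lint_prompt("Help me with my code"))
# ['Missing Role?', 'Missing Constraints?', 'Missing Format?']
```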


Where to Go Deeper

I've packaged 50 prompts built with this framework — covering code review, content writing, data analysis, research synthesis, image generation, automation, business/marketing, and meta-prompting.

All in Markdown format; they work with Claude, GPT-4, Gemini, or any capable model. $9:
👉 https://yanchen5.gumroad.com/l/gmfvxd

Or just use the 5 prompts above — they're the highest-leverage ones from the set.

What patterns have you noticed in the prompts that work for you? Curious what constraints other people find most useful.
