555 AI Agent Prompts That Actually Work (Not Generic ChatGPT Stuff)
Look, I've used every AI prompt library out there. Most of them are garbage—generic templates that sound impressive but don't work when you actually need them to.
Here's what I mean: "Write a professional email" works in ChatGPT. It does NOT work when you're building an agent that needs to handle 50 different customer scenarios, each with different emotional states, urgency levels, and legal requirements.
After two years building AI agents for businesses, I've collected the prompts that actually handle edge cases. The stuff that works when things get weird.
The Problem with Generic Prompts
Generic prompts assume a friendly user who wants to cooperate. Production agents don't have that luxury.
A generic customer service prompt might say:
"You are a helpful customer service representative. Be polite and professional."
This falls apart when:
- Customer is furious and uses profanity
- Customer asks for something legally questionable
- Customer provides contradictory information
- Customer is clearly trying to manipulate the system
Real agents need prompts that handle the messy reality.
1. Emotion Detection That Actually Works
Generic: "Detect the customer's emotional state."
Here's what actually works:
EMOTION_DETECTION_PROMPT = """
Analyze this customer message for emotional state.
Classify into ONE of these categories:
- NEUTRAL: Factual, no emotional markers
- FRUSTRATED: Complaints, criticism, Caps Lock, !!!, ???, repetition
- ANGRY: Swearing, threats, ultimatums, all-caps paragraphs
- ANXIOUS: Worry language, uncertainty markers, "what if", rapid questions
- PLEASANT: Thank you, appreciation, positive indicators, emojis
- CONFUSED: Question marks, "I don't understand", contradictory statements
Response format (valid JSON; intensity is a float from 0.0 to 1.0):
{
"emotion": "FRUSTRATED",
"intensity": 0.8,
"markers": ["repetition", "caps_lock", "!!!", "complaint_pattern"],
"recommended_tone": "empathetic_apology"
}
Customer message:
{message}
"""
The key difference: I'm asking for classification with specific markers, not a free-form description. This is actually parseable by a second agent downstream.
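Because the output is structured, the downstream step can be ordinary parsing code. Here's a minimal sketch of what that consumer might look like; the function name and the fallback default are my own choices, and the valid labels mirror the categories in the prompt above:

```python
import json

VALID_EMOTIONS = {"NEUTRAL", "FRUSTRATED", "ANGRY", "ANXIOUS", "PLEASANT", "CONFUSED"}

def parse_emotion(raw: str) -> dict:
    """Parse the emotion-classification output, falling back to a safe
    neutral default when the model returns malformed JSON."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"emotion": "NEUTRAL", "intensity": 0.0,
                "markers": [], "recommended_tone": "neutral"}
    # Coerce anything off-menu back to a known label.
    if data.get("emotion") not in VALID_EMOTIONS:
        data["emotion"] = "NEUTRAL"
    # Clamp intensity into the 0.0-1.0 range the prompt specifies.
    data["intensity"] = min(max(float(data.get("intensity", 0.0)), 0.0), 1.0)
    return data
```

The point isn't the parsing itself; it's that a free-form description would make this function impossible to write.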
2. Escalation Logic That Doesn't Suck
Generic: "Escalate to human if you can't help."
ESCALATION_PROMPT = """
You are deciding whether to escalate this customer interaction to a human agent.
ESCALATE IF ANY of these conditions are TRUE:
1. Request involves money transfers over $500
2. Customer mentions legal action, lawyer, attorney, lawsuit, sue
3. Customer explicitly requests human: "I want to talk to a person"
4. Same issue has been attempted 3+ times without resolution
5. Request involves account deletion or data export (GDPR)
6. Customer is highly emotional (emotion intensity > 0.85)
7. Your confidence in your own response is below 0.7
DO NOT ESCALATE IF:
- Customer is asking for basic information you can provide
- Customer is satisfied with your current resolution
- Issue is clearly resolved and customer confirms
Context:
- Conversation history: {history_summary}
- Current message: {current_message}
- Previous resolution attempts: {attempt_count}
- Customer emotion: {emotion_state}
Output format:
{
"escalate": true/false,
"priority": "low" / "medium" / "high" / "urgent",
"reason": "specific reason for decision",
"context_for_human": "summarized context to paste to human agent"
}
"""
See the difference? I'm giving specific conditions, not vague guidelines.
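A nice side effect of conditions this specific: several of them don't need a model at all. You can run cheap deterministic checks before the LLM call, so the obvious cases never depend on model judgment. A sketch, with thresholds taken from the prompt above (the function name and keyword lists are illustrative, not from any library):

```python
import re

def should_escalate_deterministically(message: str, attempt_count: int,
                                      emotion_intensity: float,
                                      amount: float = 0.0) -> bool:
    """Hard-rule checks mirroring the prompt's escalation conditions.
    Run these before the LLM call; fall through to the model only
    for the judgment calls."""
    text = message.lower()
    if amount > 500:                                   # condition 1
        return True
    # \b word boundaries so "issue" doesn't match "sue" (condition 2)
    if re.search(r"\b(legal action|lawyer|attorney|lawsuit|sue)\b", text):
        return True
    if re.search(r"\b(talk|speak) to a (person|human)\b", text):  # condition 3
        return True
    if attempt_count >= 3:                             # condition 4
        return True
    if emotion_intensity > 0.85:                       # condition 6
        return True
    return False
```

Conditions 5 and 7 still need the model (or other systems); the rest are free.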
3. Handling Manipulation Attempts
This is the one nobody talks about, but it will destroy your agent if you don't handle it:
MANIPULATION_HANDLING_PROMPT = """
You are an AI customer service agent. Customers may try to manipulate you.
COMMON MANIPULATION PATTERNS to detect:
1. Authority claims: "I'm a lawyer", "This is illegal", "I'm going to sue"
2. False urgency: "This is urgent", "I need this NOW", "My boss is waiting"
3. Social proof fabrication: "Everyone knows this is wrong", "Your competitors do this"
4. Guilt induction: "I can't believe you'd treat customers this way"
5. Conditional threats: "If you don't X, then I'll Y"
HANDLING STRATEGY:
- Acknowledge their concern without agreeing to false premises
- Stick to facts and policy
- Do NOT be swayed by emotional manipulation
- Document manipulation attempts in escalation notes
- You may politely end conversation if customer is abusive
Example exchange:
"Customer: I've been a loyal customer for 10 years and you're treating me like this. I want to speak to your manager NOW."
Correct response:
"I understand you're frustrated, and I appreciate your long-term business. However, I can only process requests that meet our standard criteria. I'm happy to review your case if you can provide [specific information]. Would you like to continue with that?"
Output: Respond to this message maintaining policy while being respectful.
Message: {message}
"""
4. The Multi-Step Task Decomposition Prompt
Generic: "Break down this complex request."
TASK_DECOMPOSITION_PROMPT = """
Break down this complex customer request into actionable steps.
Example decomposition:
Customer: "I ordered a laptop last week, it hasn't arrived, and I want to return it because I found it cheaper elsewhere."
Steps:
1. Look up order status by order number or customer name
2. If not shipped: offer cancellation
3. If shipped but not delivered: provide tracking, estimated delivery
4. If delivered: explain return policy, check if within 30 days
5. If price match requested: verify competitor price, check if eligible
6. Execute appropriate resolution
7. Confirm with customer
Apply this structured approach to:
{user_request}
Output format:
- Step N: [Action] → [Data needed] → [Possible outcomes]
"""
5. The Edge Case Prompt Library
Here are more prompts I use constantly:
When customer provides partial information:
PARTIAL_INFO_PROMPT = """
Customer has provided partial information: {partial_info}
Possible interpretations:
1. {interpretation_1}
2. {interpretation_2}
3. {interpretation_3}
Ask ONE clarifying question that would disambiguate the most critical unknown.
Do not ask multiple questions. Ask the most important one.
"""
When customer is asking for something you can't do:
OUT_OF_SCOPE_PROMPT = """
Customer request: "{request}"
I cannot do: {cannot_do}
Acknowledge their request, explain the limitation, then offer the closest alternative you CAN provide.
Example:
"I understand you want [request]. Unfortunately, [limitation] prevents me from doing that directly. What I CAN do is [alternative]. Would that work for you?"
"""
When you need to say no:
REFUSAL_PROMPT = """
Refuse this request professionally: "{request}"
Rules:
- Acknowledge the request
- State the reason honestly
- Offer alternatives if possible
- Do NOT apologize excessively (it sounds like you did something wrong)
- Do NOT make up fake policies to justify refusal
Example structure:
"I understand you'd like [request]. Unfortunately, [honest reason]. A better option might be [alternative], or you could [backup_option]."
"""
Why These Prompts Work
Generic prompts fail because they give the AI freedom to interpret. Production prompts:
- Define specific output formats — parseable, not prose
- Give explicit conditions — not "be helpful" but "escalate if X"
- Handle the edge cases — not the happy path
- Include documentation — why each decision was made
- Define tone matching — adapt based on context
I've compiled 555 of these battle-tested prompts into a playbook. Each one has been tested in production, not just in a ChatGPT window. They handle the edge cases that generic templates ignore.
Check out the AI Agent Engineering Playbook for the full collection. Includes prompts for customer service, sales, technical support, internal tools, and multi-agent orchestration.
No fluff. No generic templates. Just prompts that work when you need them to.