I've been running an AI agent in production for months. What separates the dev who gets 2x output from AI from the one who gets 10x? It's not the model. It's the prompts.
These aren't cute one-liners. These are battle-tested prompts I use daily — each one solves a specific friction point in my workflow. I'll give you the exact prompt text and explain why it works.
1. The Code Archaeologist
Prompt:
I need to understand this codebase quickly. Don't explain every line — give me:
1. The core data flow (input → processing → output)
2. The 3 most important files and why
3. Any non-obvious design decisions I'd miss without context
4. Where I'd make changes if I needed to add [FEATURE]
Code/repo: [PASTE OR DESCRIBE]
Why it works: Forces the AI to prioritize ruthlessly instead of narrating. You get a map, not a tour.
2. The Bug Interrogator
Prompt:
This code has a bug. Before suggesting fixes, interrogate it:
1. What is this code TRYING to do?
2. What is it ACTUALLY doing?
3. Where does the assumption break down?
4. List 3 possible root causes ranked by likelihood
5. Now give me the fix for the most likely cause
Bug description: [DESCRIBE]
Code: [PASTE]
Error: [PASTE IF ANY]
Why it works: The AI stops jumping to the first plausible fix and actually diagnoses. The structured approach catches 80% of bugs in one shot.
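To make the "trying vs. actually doing" distinction concrete, here's a minimal sketch of the kind of bug this prompt is built to catch. The function names are illustrative, not from any real codebase; the bug is Python's classic mutable-default-argument trap:

```python
# TRYING to do: return a fresh tag list per call.
# ACTUALLY doing: sharing one list across every call, because the
# default [] is created once, at function definition time.
def append_tag(tag, tags=[]):          # buggy: mutable default argument
    tags.append(tag)
    return tags

first = append_tag("a")    # ["a"] at this point
second = append_tag("b")   # ["a", "b"] — state leaked from the first call

# Fix for the most likely root cause: create the list inside the call.
def append_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

Step 3 of the prompt ("where does the assumption break down?") is exactly the gap between those two comments: the caller assumes a new list per call; Python evaluates the default once.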
3. The Refactor Strategist
Prompt:
Analyze this code for refactoring opportunities. Rate each opportunity by:
- Impact (1-10): How much better will the code be?
- Risk (1-10): How likely to introduce bugs?
- Effort (1-10): How long will this take?
Only recommend changes where Impact > Risk + (Effort / 2).
Explain your reasoning for the top 3 recommendations.
Code: [PASTE]
Why it works: Makes trade-offs explicit. You stop doing refactors that feel good but add no real value.
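If you want to sanity-check the AI's ratings yourself, the decision rule is trivial to encode. A minimal sketch (the function name is mine, not part of the prompt):

```python
# The prompt's decision rule: recommend a refactor only when
# Impact > Risk + (Effort / 2), all rated on a 1-10 scale.
def worth_refactoring(impact: int, risk: int, effort: int) -> bool:
    return impact > risk + effort / 2

worth_refactoring(impact=8, risk=3, effort=4)   # True:  8 > 5.0
worth_refactoring(impact=6, risk=4, effort=6)   # False: 6 > 7.0 is false
```

Tune the threshold to your risk tolerance; the point is that the trade-off becomes an explicit, arguable number instead of a gut feeling.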
4. The PR Reviewer
Prompt:
Review this PR diff like a senior engineer who:
- Cares deeply about correctness, not style points
- Has been burned by subtle race conditions and edge cases
- Wants to ship fast but not break production
For each issue found:
1. Severity: Critical / High / Medium / Low
2. The problem in one sentence
3. The fix in code
Skip any issue that's stylistic preference only. Focus on bugs, security holes, performance killers, and missing error handling.
Diff: [PASTE]
Why it works: The persona constraint eliminates nitpicky style comments and surfaces real problems. You get the review you actually need.
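For a sense of the "subtle race conditions" this persona is primed to find, here's a minimal, hypothetical example: a check-then-act sequence on shared state, which is not atomic without a lock:

```python
import threading

class Inventory:
    def __init__(self, stock: int):
        self.stock = stock
        self._lock = threading.Lock()

    def reserve_unsafe(self) -> bool:
        if self.stock > 0:        # two threads can both see stock == 1...
            self.stock -= 1       # ...and both decrement: stock goes to -1
            return True
        return False

    def reserve_safe(self) -> bool:
        with self._lock:          # check and act under one lock: atomic
            if self.stock > 0:
                self.stock -= 1
                return True
            return False

inv = Inventory(stock=20)
results = []
threads = [threading.Thread(target=lambda: results.append(inv.reserve_safe()))
           for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the locked version, exactly 20 reservations succeed and stock
# never goes negative, no matter how the threads interleave.
```

A style-focused reviewer comments on the lambda; the reviewer this prompt asks for flags `reserve_unsafe` as a Critical finding.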
5. The Documentation Writer
Prompt:
Write documentation for this code. Target audience: a competent developer who has never seen this codebase.
Include:
- What this does (one sentence)
- When to use it (and when NOT to)
- Parameters/inputs with types and examples
- Return values and possible errors
- One real-world usage example
- Any gotchas or common mistakes
Code: [PASTE]
Why it works: The "when NOT to use it" constraint forces the AI to understand boundaries, which produces far more honest and useful docs.
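Here's roughly what the output looks like when it works, as a docstring on a small invented helper (the function and its behavior are illustrative, not from a real library):

```python
import re

def slugify(title: str) -> str:
    """Convert a title into a URL-safe slug.

    When to use: generating stable URL fragments from English titles.
    When NOT to use: user-facing display text (it destroys casing and
    punctuation) or non-Latin input (characters are dropped, not
    transliterated).

    Args:
        title: Free-form title, e.g. "Hello, World!".

    Returns:
        Lowercase, hyphen-separated slug, e.g. "hello-world".

    Gotchas:
        Two different titles can collide on the same slug; deduplicate
        before using slugs as unique keys.
    """
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)
```

Notice that the "when NOT to use" and "gotchas" sections carry most of the value; the rest you could get from the signature.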
6. The Test Generator
Prompt:
Generate tests for this function. I need:
1. Happy path (the thing it's supposed to do)
2. Edge cases that would break a naive implementation
3. Error cases (bad input, null values, boundary conditions)
4. One test that would catch a subtle regression if someone refactors this
Use [TEST FRAMEWORK]. Make the test descriptions readable — they should document behavior, not just say "it works."
Function: [PASTE]
Why it works: The "subtle regression" test forces deep thinking about invariants. These are the tests that actually save you at 2am.
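To show the four categories side by side, here's a sketch against a toy `clamp` function, using plain asserts (swap in your framework of choice):

```python
def clamp(value, low, high):
    if low > high:
        raise ValueError("low must be <= high")
    return max(low, min(value, high))

# 1. Happy path: an in-range value passes through unchanged
assert clamp(5, 0, 10) == 5

# 2. Edge cases that break naive implementations: exact boundaries
assert clamp(0, 0, 10) == 0
assert clamp(10, 0, 10) == 10

# 3. Error case: invalid range fails loudly instead of returning nonsense
try:
    clamp(5, 10, 0)
    assert False, "expected ValueError"
except ValueError:
    pass

# 4. Regression guard: documents the invariant low <= result <= high.
# A refactor that swaps the nesting to min(low, max(value, high))
# would pass the tests above but fail this one.
assert clamp(99, 0, 10) == 10
assert clamp(-3, 0, 10) == 0
```

Category 4 is the one the prompt exists for: it pins down an invariant, not just an example.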
7. The Architecture Explainer
Prompt:
Explain this architecture decision to two audiences:
Audience 1 — Junior dev: What is this pattern, why do we use it, and what problem does it solve?
Audience 2 — Business stakeholder: Why did we build it this way, what's the risk of NOT doing it, and what are we trading off?
Architecture/decision: [DESCRIBE]
Why it works: Forcing dual audiences surfaces assumptions. If you can't explain a decision to both audiences, you don't fully understand it yet.
8. The Performance Detective
Prompt:
This code is slow. Diagnose it like a performance engineer:
1. Identify every O(n²) or worse operation
2. Find any unnecessary repeated work
3. Spot any blocking operations that could be async
4. Look for memory allocation patterns that will hurt GC
5. Check for N+1 query patterns
For each issue: what's the theoretical improvement if fixed?
Code: [PASTE]
Context: [HOW BIG IS THE DATA? HOW OFTEN DOES THIS RUN?]
Why it works: The context constraint (data size, frequency) changes everything. An O(n²) loop over 10 items is fine. Over 10,000 items, it's a fire.
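A minimal sketch of the kind of finding item 1 produces (function names are mine): deduplication via list membership is O(n) per lookup, so the whole loop is O(n²); a set makes it O(n):

```python
def dedupe_quadratic(items):
    seen = []
    out = []
    for x in items:
        if x not in seen:       # O(n) scan per item -> O(n^2) total
            seen.append(x)
            out.append(x)
    return out

def dedupe_linear(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:       # O(1) average hash lookup -> O(n) total
            seen.add(x)
            out.append(x)
    return out
```

Both return identical results; the difference only shows up at scale, which is exactly why the prompt demands the data-size context.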
9. The Error Handler
Prompt:
Audit this code's error handling. Find every place where:
1. Errors are silently swallowed
2. Error messages are too vague to debug
3. Recovery is attempted but might make things worse
4. A failure in one place will cascade to another
For each: show me what bad behavior this causes and write the corrected version.
Code: [PASTE]
Why it works: Silent failures and vague errors are responsible for most "impossible to debug" incidents. This prompt finds them proactively.
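Checklist items 1 and 2 in miniature, as a hypothetical event parser (names are illustrative):

```python
import json
import logging

logger = logging.getLogger(__name__)

def parse_event_bad(raw):
    try:
        return json.loads(raw)
    except Exception:
        return None          # silently swallowed: a corrupt payload and a
                             # missing one are now indistinguishable downstream

def parse_event_good(raw):
    try:
        return json.loads(raw)
    except json.JSONDecodeError as e:
        # fail loudly, with enough context to actually debug it
        raise ValueError(f"malformed event payload {raw[:80]!r}: {e}") from e
```

The bad version is the one that produces the "impossible" incident: three layers up, something gets `None`, and nothing in the logs says why.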
10. The Security Auditor
Prompt:
Security audit this code. Check specifically for:
1. Input that reaches a database without sanitization (SQL injection)
2. User-controlled data reaching eval(), exec(), or similar
3. Secrets, tokens, or keys hardcoded or logged
4. Missing auth checks on sensitive operations
5. Race conditions on shared state
For each finding: severity (Critical/High/Medium), exact line, and the attack vector.
Code: [PASTE]
Why it works: Specific attack vector enumeration forces the AI to think like an attacker, not just a code reviewer.
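Finding #1 made concrete, as a self-contained sketch using an in-memory SQLite database (the schema and function names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_unsafe(name):
    # User input concatenated into SQL: the attack vector is
    # name = "' OR '1'='1", which returns every row.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

The unsafe version hands the whole table to the classic `' OR '1'='1` payload; the safe version returns nothing for it. A good audit names the payload, not just the rule.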
11. The Migration Planner
Prompt:
I need to migrate [OLD SYSTEM] to [NEW SYSTEM]. Plan this as a senior engineer who has done painful migrations before.
Give me:
1. What can go wrong (ranked by probability)
2. The safest migration sequence (what order to migrate what)
3. The rollback plan for each step
4. How to validate each step worked before proceeding
5. What monitoring to set up during migration
Current state: [DESCRIBE]
Target state: [DESCRIBE]
Constraints: [TIME, DOWNTIME TOLERANCE, TEAM SIZE]
Why it works: The "painful migrations before" persona activates conservative, production-aware thinking. You get a war plan, not a happy path.
12. The Code Explainer (for Meetings)
Prompt:
I need to explain this technical concept/code to [AUDIENCE: e.g., "product manager", "new team member", "CEO"] in a 5-minute verbal explanation.
Give me:
- The 30-second version (elevator pitch)
- The 3-minute version (main explanation)
- 2 analogies I can use if they look confused
- The 2 questions they're most likely to ask and how to answer them
Topic: [PASTE OR DESCRIBE]
Why it works: Preparing for questions is what separates a confident explainer from someone who gets flustered mid-presentation.
13. The Dependency Auditor
Prompt:
Audit my dependencies for:
1. Packages that are unmaintained (last commit > 1 year, no security patches)
2. Packages where a newer major version exists with breaking changes I need to plan for
3. Packages that do the same thing (consolidation opportunity)
4. Packages that are overkill for how I use them (could replace with 10 lines of code)
package.json / requirements.txt: [PASTE]
Why it works: Dependency debt is slow death. This prompt makes it visible before it's a crisis.
14. The Naming Critic
Prompt:
Critique the naming in this code. For each bad name:
1. What's wrong with it (too vague, misleading, too abbreviated, etc.)
2. What a reader would wrongly assume it does
3. A better name
Be ruthless. Treat bad naming as a form of technical debt.
Code: [PASTE]
Why it works: Good naming is the highest-leverage, lowest-cost improvement in any codebase. Most code reviews skip it. This one doesn't.
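A before/after sketch of the kind of critique this produces (both versions are invented examples with identical behavior):

```python
# Before: every name forces the reader to guess.
def proc(d, f):
    # proc what? Is d a dict? Is f a flag, a file, a function?
    return [x for x in d if x > f]

# After: the same logic, now self-documenting.
def readings_above_threshold(readings, threshold):
    return [r for r in readings if r > threshold]
```

Nothing about the logic changed, but the second version no longer needs a comment, a doc lookup, or a Slack message to the author.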
15. The "What Could Go Wrong" Pre-mortem
Prompt:
I'm about to ship this. Do a pre-mortem:
Assume it's 3 months from now and this has caused a production incident.
1. What went wrong? (List 5 plausible scenarios)
2. For each scenario: how likely is it? how bad would it be?
3. Which failure mode should I add a test/monitor/alert for right now?
4. What's the one thing I should fix before shipping?
Code/feature: [DESCRIBE OR PASTE]
Why it works: Pre-mortems work because they bypass optimism bias. You're not asking "will this fail?" — you're asking "HOW will this fail?" The framing change produces dramatically better answers.
The Full Pack
These 15 prompts cover the most common friction points in a dev's day. But they're just the surface.
I've been building and refining a full library of 50+ AI prompts organized by workflow category — code review, debugging, architecture, documentation, security, and more. Each prompt is tested against real production code, not toy examples.
📦 Get the complete Forge AI Prompt Pack ($9):
👉 https://xadenai.github.io/forge-ai-prompts/
One-time purchase, instant download. 55 prompts across 5 categories (Productivity, Code, Business, Writing, AI Agents), each with the template, a real example, and a full breakdown of why it works. Copy-paste ready, model-agnostic.
If you build something useful with these — I want to hear about it.