Tom Herbin

The Prompt Engineering Playbook for Developers: 10 Prompts That Actually Work

Most developers use AI coding assistants the same way: "fix this bug" or "write a function that does X." And then they wonder why the output is mediocre.

The problem isn't the AI; it's the prompt. After months of using ChatGPT, Claude, and Copilot for 8+ hours a day, I've found that structured prompts consistently produce far better results than vague one-liners.

Here are 10 prompts from my toolkit that actually work. Copy them, customize them, use them today.


1. The System Design Prompt

You are a senior software architect. Design a system for [SYSTEM_DESCRIPTION].

Requirements:
- Expected load: [USERS/RPS]
- Data characteristics: [DATA_VOLUME, READ/WRITE_RATIO]
- Key constraints: [LATENCY, CONSISTENCY, AVAILABILITY]

Provide:
1. High-level architecture diagram (describe in text)
2. Component breakdown with responsibilities
3. Data flow for the top 3 critical paths
4. Database schema for core entities
5. API contracts between services
6. Trade-offs you considered and why you chose this approach

This works because it gives the AI a role, constraints, and a structured output format. Compare this to "design a system for X" — night and day.
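The bracketed placeholders also make these templates easy to fill programmatically before pasting them into a chat. A minimal sketch of such a helper, assuming you keep the templates as plain strings (the `fill` function and its variable names are my own, not part of any toolkit):

```python
import re

def fill(template: str, values: dict[str, str]) -> str:
    """Replace [PLACEHOLDER] markers with supplied values.

    Raises KeyError if a placeholder has no value, so a half-filled
    prompt never reaches the model by accident.
    """
    def sub(match: re.Match) -> str:
        key = match.group(1).lower()
        if key not in values:
            raise KeyError(f"missing value for [{match.group(1)}]")
        return values[key]
    # Placeholders are ALL_CAPS words, optionally containing _ or /.
    return re.sub(r"\[([A-Z_/]+)\]", sub, template)

prompt = fill(
    "You are a senior software architect. Design a system for [SYSTEM_DESCRIPTION].\n"
    "Expected load: [USERS/RPS]",
    {"system_description": "a URL shortener", "users/rps": "5k RPS"},
)
```

Failing loudly on a missing placeholder is deliberate: a template with a stray `[DATA_VOLUME]` left in tends to produce exactly the vague output the template was meant to avoid.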


2. The Debugging Prompt

I have a bug in my [LANGUAGE] application. Here's what I know:

**Expected behavior:** [WHAT_SHOULD_HAPPEN]
**Actual behavior:** [WHAT_HAPPENS_INSTEAD]
**Steps to reproduce:** [STEPS]
**Error message/stack trace:**
[PASTE_ERROR]

**Code:**
[PASTE_RELEVANT_CODE]

Analyze this systematically:
1. What are the most likely root causes? (rank by probability)
2. For each cause, what would you check to confirm/eliminate it?
3. Suggest a fix for the most likely cause
4. How would you prevent this class of bug in the future?

3. The Code Review Prompt

Review this [LANGUAGE] code as a senior engineer. Be specific and actionable.

[PASTE_CODE]

Review for:
1. **Bugs**: Logic errors, edge cases, null/undefined handling
2. **Security**: Injection, auth issues, data exposure
3. **Performance**: Time/space complexity, unnecessary operations
4. **Maintainability**: Naming, structure, SOLID principles
5. **Testing**: What test cases are missing?

Format: For each issue, provide:
- Severity: 🔴 Critical | 🟡 Warning | 🔵 Suggestion
- Line/section reference
- What's wrong
- How to fix it (with code)

4. The Test Generation Prompt

Generate a comprehensive test suite for this [LANGUAGE] [FUNCTION/CLASS]:

[PASTE_CODE]

Include:
1. Happy path tests for all main scenarios
2. Edge cases (empty inputs, nulls, boundaries, overflow)
3. Error cases (invalid inputs, network failures, timeouts)
4. Use [TESTING_FRAMEWORK] syntax
5. Use descriptive test names that explain the scenario
6. Add comments explaining WHY each edge case matters

5. The Refactoring Prompt

Refactor this [LANGUAGE] code to improve [READABILITY/PERFORMANCE/MAINTAINABILITY]:

[PASTE_CODE]

Constraints:
- Maintain the same public API/interface
- Don't change behavior (all existing tests must pass)
- Target: reduce complexity from [CURRENT] to [TARGET]

For each change:
1. Explain what you changed and why
2. Show before/after
3. Rate the risk of the change (low/medium/high)

6. The Documentation Prompt

Write an API reference for this [LANGUAGE] [MODULE/CLASS]:

[PASTE_CODE]

For each public method, include:
- One-line description
- Parameters with types and descriptions
- Return type and description
- Example usage (realistic, not trivial)
- Throws/errors
- Edge cases to be aware of

7. The SQL Query Optimizer

Optimize this SQL query for performance:

[PASTE_QUERY]

Context:
- Database: [POSTGRES/MYSQL/etc]
- Table sizes: [APPROXIMATE_ROW_COUNTS]
- Current execution time: [TIME]
- Available indexes: [LIST_INDEXES]

Provide:
1. Analysis of current query plan bottlenecks
2. Optimized query with explanation
3. Index recommendations
4. If the query can't be optimized further, suggest schema changes
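The more of that context you can gather yourself, the better the answer. For the query-plan part, a quick sketch of how you might capture before/after plans with SQLite's `EXPLAIN QUERY PLAN` (the `orders` table and query are invented for illustration; Postgres and MySQL have their own `EXPLAIN` syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer_id = ?"

# Before indexing: the plan reports a full table scan.
before = conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)).fetchall()

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# After indexing: the plan switches to an index search.
after = conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)).fetchall()
```

Pasting both plans into the prompt's Context section lets the model reason about actual bottlenecks instead of guessing.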

8. The Security Audit Prompt

Perform a security audit on this [LANGUAGE] code:

[PASTE_CODE]

Check for:
1. OWASP Top 10 vulnerabilities
2. Input validation gaps
3. Authentication/authorization flaws
4. Data exposure risks
5. Dependency vulnerabilities

For each finding:
- Severity (Critical/High/Medium/Low)
- CWE reference if applicable
- Proof of concept (how could this be exploited?)
- Remediation with code example

9. The CI/CD Pipeline Prompt

Create a [GITHUB_ACTIONS/GITLAB_CI/etc] pipeline for a [LANGUAGE/FRAMEWORK] project.

Requirements:
- Build and test on every PR
- Deploy to [STAGING/PRODUCTION] on merge to main
- Run [LINTING/TYPE_CHECKING/SECURITY_SCANNING]
- Cache dependencies for faster builds
- Notify on failure via [SLACK/EMAIL]

Include:
1. Complete YAML configuration
2. Required secrets/environment variables
3. Explanation of each stage

10. The Chain Prompt: Feature From Scratch

This is a multi-step prompt chain — each step builds on the previous:

Step 1 - Spec: "Write a technical spec for [FEATURE]. Include user stories, acceptance criteria, and technical approach."

Step 2 - Design: "Based on this spec, design the database schema and API endpoints. Include request/response examples."

Step 3 - Implement: "Implement the API endpoints from the design above using [FRAMEWORK]. Include input validation and error handling."

Step 4 - Test: "Write integration tests for these endpoints using [TEST_FRAMEWORK]. Cover happy paths, edge cases, and error scenarios."

Step 5 - Review: "Review the complete implementation. Check for security issues, performance bottlenecks, and missing edge cases."
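The chain can also be driven programmatically: feed each step's answer back in as context for the next. A minimal sketch, with a stubbed `ask` function standing in for whatever LLM client you use (the function names here are placeholders, not a real API):

```python
def ask(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. an OpenAI or Anthropic client)."""
    return f"<model answer to: {prompt[:40]}...>"

def run_chain(steps: list[str]) -> list[str]:
    """Run each step with all previous answers prepended as context."""
    answers: list[str] = []
    for step in steps:
        context = "\n\n".join(answers)
        prompt = f"{context}\n\n{step}" if context else step
        answers.append(ask(prompt))
    return answers

steps = [
    "Write a technical spec for user avatars. Include user stories, "
    "acceptance criteria, and technical approach.",
    "Based on this spec, design the database schema and API endpoints. "
    "Include request/response examples.",
    "Implement the API endpoints from the design above using Flask. "
    "Include input validation and error handling.",
]
results = run_chain(steps)
```

In a real chain you would watch the context window: past a few steps, summarizing earlier answers beats concatenating them verbatim.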


Why These Work

Every prompt above follows the same pattern:

  1. Role — Tell the AI who it is (senior engineer, architect, etc.)
  2. Context — Give it everything it needs to understand the problem
  3. Structure — Define the exact output format you want
  4. Constraints — Set boundaries so it doesn't go off-track

The difference between a junior and senior developer using AI isn't the AI — it's the prompts.
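If you reuse the pattern often, it is worth encoding as a tiny builder so every prompt you send has all four parts. A minimal sketch of that skeleton (the class and field names are my own):

```python
from dataclasses import dataclass, field

@dataclass
class Prompt:
    """Assembles the Role / Context / Structure / Constraints pattern."""
    role: str
    context: str
    structure: list[str]
    constraints: list[str] = field(default_factory=list)

    def render(self) -> str:
        parts = [f"You are {self.role}.", self.context, "Provide:"]
        parts += [f"{i}. {item}" for i, item in enumerate(self.structure, 1)]
        if self.constraints:
            parts.append("Constraints:")
            parts += [f"- {c}" for c in self.constraints]
        return "\n".join(parts)

text = Prompt(
    role="a senior software architect",
    context="Design a rate limiter for a public API handling 2k RPS.",
    structure=["High-level architecture", "Trade-offs considered"],
    constraints=["P99 latency under 5 ms"],
).render()
```

The point is less the code than the checklist: if any of the four fields is hard to fill in, that is usually the gap in your own understanding of the task.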


Want the Full Toolkit?

These 10 prompts are a sample from my AI Developer's Prompt Toolkit — a collection of 130+ production-grade prompts organized into 11 categories: architecture, code generation, debugging, code review, testing, documentation, refactoring, DevOps, database, security, and bonus chain prompts.

Each prompt has variables to customize and a structure that produces consistent results, and all of them work with any LLM (ChatGPT, Claude, Gemini, Copilot).

$9: less than the value of the time one good prompt saves you.


What are your go-to AI coding prompts? Drop them in the comments — I'm always looking to add more to the toolkit.
