7 Prompt Patterns I Use to Turn ChatGPT Into a Reliable Coding Assistant
Most people use ChatGPT like a search box: one vague question, one vague answer.
For coding work, that breaks quickly. The model needs a repeatable operating frame: role, repo context, constraints, test plan, and a way to challenge its own output.
Below are seven prompt patterns I use when I want an AI assistant to produce work I can actually ship.
1. The repo-context prompt
You are a senior engineer joining this codebase.
Goal: [FEATURE_OR_FIX]
Context:
- Stack: [LANGUAGE/FRAMEWORK]
- Relevant files: [FILES]
- Constraints: [PERFORMANCE/STYLE/API]
First, summarize the existing flow. Then propose the smallest safe change. Do not write code until the plan references exact files.
Why it works: it prevents the model from jumping into generic snippets before it has modeled the current code.
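If you reuse this pattern often, it helps to fill it programmatically instead of retyping it. A minimal sketch in Python, using only stdlib string formatting; the function name, field names, and example values are my own illustrative choices, not part of the pattern itself:

```python
# Fill the repo-context prompt template from structured fields.
# Placeholder names mirror the bracketed fields in the pattern above.

REPO_CONTEXT_TEMPLATE = """\
You are a senior engineer joining this codebase.
Goal: {goal}
Context:
- Stack: {stack}
- Relevant files: {files}
- Constraints: {constraints}
First, summarize the existing flow. Then propose the smallest safe change. \
Do not write code until the plan references exact files."""

def repo_context_prompt(goal, stack, files, constraints):
    # files is a list so callers can pass paths without worrying about commas
    return REPO_CONTEXT_TEMPLATE.format(
        goal=goal,
        stack=stack,
        files=", ".join(files),
        constraints=constraints,
    )

prompt = repo_context_prompt(
    goal="Add rate limiting to the login endpoint",
    stack="Python/FastAPI",
    files=["auth/routes.py", "auth/limiter.py"],
    constraints="No new dependencies; keep p95 latency under 50 ms",
)
```

The structured call site doubles as a checklist: if you can't fill a field, you haven't gathered enough context yet.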
2. The failure-reproduction prompt
Bug: [BUG_DESCRIPTION]
Observed behavior: [WHAT_HAPPENS]
Expected behavior: [WHAT_SHOULD_HAPPEN]
Logs/errors: [PASTE]
Write a minimal reproduction path, list the top 3 likely root causes, then suggest the first diagnostic command or test to run.
This turns the model into a debugging partner instead of a guess generator.
3. The diff-review prompt
Review this diff as if it is going to production today.
Focus only on:
1. correctness bugs
2. security/privacy issues
3. missing tests
4. unnecessary complexity
Return findings by severity. If there are no critical issues, say so clearly.
[PASTE_DIFF]
The key is limiting scope. Broad "review this" prompts tend to produce style noise.
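One practical wrinkle when scripting this pattern: large diffs blow past the model's context budget. A small sketch that wraps a diff in the scoped review prompt and truncates oversized diffs; the character limit and the truncation strategy (keep the most recent hunks) are my own assumptions, not from the pattern:

```python
# Wrap a diff in the scoped review prompt, guarding against huge diffs.

REVIEW_HEADER = """\
Review this diff as if it is going to production today.
Focus only on:
1. correctness bugs
2. security/privacy issues
3. missing tests
4. unnecessary complexity
Return findings by severity. If there are no critical issues, say so clearly.
"""

MAX_DIFF_CHARS = 12_000  # rough budget; tune for your model's context window

def diff_review_prompt(diff_text: str) -> str:
    if len(diff_text) > MAX_DIFF_CHARS:
        # Keep the tail so the most recent hunks survive truncation.
        diff_text = "[diff truncated]\n" + diff_text[-MAX_DIFF_CHARS:]
    return REVIEW_HEADER + "\n" + diff_text

p = diff_review_prompt("--- a/app.py\n+++ b/app.py\n+print('hi')\n")
```

For very large changes, reviewing file by file usually beats truncation.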
4. The test-first prompt
Feature: [FEATURE]
Before writing implementation, propose tests that cover:
- happy path
- edge cases
- failure modes
- regression risk
Then write only the test names and expected assertions.
If the proposed tests look wrong, the implementation will probably be wrong too — catching that before any code exists is the whole point.
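For a concrete sense of the answer shape this pattern asks for — test names plus expected assertions, no implementation — here is what it might look like for a hypothetical `parse_price(text)` feature. The feature and every name below are invented for illustration:

```python
# Illustrative shape of a test-first response: names and assertions only.
# parse_price is a hypothetical feature, not from the article.

proposed_tests = {
    "test_happy_path_plain_number":
        'parse_price("19.99") == Decimal("19.99")',
    "test_edge_case_thousands_separator":
        'parse_price("1,299.00") == Decimal("1299.00")',
    "test_failure_mode_empty_string":
        'parse_price("") raises ValueError',
    "test_regression_negative_prices_rejected":
        'parse_price("-5") raises ValueError',
}

for name, assertion in proposed_tests.items():
    print(f"{name}: {assertion}")
```

Reviewing this list takes a minute; reviewing a wrong implementation takes an afternoon.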
5. The migration-risk prompt
We need to change [OLD_BEHAVIOR] to [NEW_BEHAVIOR].
List all backward-compatibility risks, data migration risks, and rollout risks.
Then propose a phased release plan with a rollback condition.
This is useful for database changes, API changes, and auth/payment logic.
6. The refactor boundary prompt
Refactor goal: [GOAL]
Do not change external behavior.
Preserve public interfaces unless explicitly justified.
First identify safe extraction boundaries. Then provide a step-by-step refactor plan where each step can be tested independently.
This keeps refactors from turning into rewrites.
7. The adversarial-check prompt
Act as a skeptical maintainer. Attack the proposed solution.
Find cases where it fails, assumptions that are weak, and simpler alternatives.
Only after that, give a revised recommendation.
The best AI coding workflow is not "ask once and paste." It is loop-based: generate, challenge, test, revise.
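The generate, challenge, revise loop can be sketched in a few lines. `ask_model` below is a stand-in for whatever chat client you use; here it returns canned replies so the loop is runnable, and the round count is an arbitrary illustrative choice:

```python
# Sketch of the generate -> challenge -> revise loop around a model call.

def ask_model(prompt: str) -> str:
    # Stand-in for a real chat API call; replace with your client.
    # Canned replies keep this sketch self-contained and runnable.
    if "Attack the proposed solution" in prompt:
        return "Weak assumption: input is always UTF-8."
    return "Proposed fix: decode with errors='replace'."

def generate_challenge_revise(task: str, rounds: int = 2) -> str:
    draft = ask_model(f"Task: {task}\nPropose a solution.")
    for _ in range(rounds):
        critique = ask_model(
            "Act as a skeptical maintainer. Attack the proposed solution.\n"
            f"Solution:\n{draft}"
        )
        draft = ask_model(
            f"Task: {task}\nPrevious draft:\n{draft}\n"
            f"Critique:\n{critique}\nRevise the solution."
        )
    return draft

result = generate_challenge_revise("Handle malformed log lines")
```

The "test" step stays outside the loop on purpose: run the model's output against real tests between iterations rather than trusting its self-assessment.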
A reusable structure
For most engineering tasks, I use this skeleton:
Role: [EXPERT_ROLE]
Task: [SPECIFIC_OUTCOME]
Context: [FILES, STACK, BUSINESS RULES]
Constraints: [STYLE, SECURITY, PERFORMANCE, DEADLINES]
Output format: [PLAN / PATCH / TESTS / REVIEW]
Quality bar: include assumptions, risks, and verification steps.
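The skeleton maps naturally onto a small data structure, which makes it easy to version, reuse, and lint for missing fields. A sketch using a stdlib dataclass; the class name, example values, and rendering order are illustrative assumptions:

```python
# Render the reusable prompt skeleton from structured fields.

from dataclasses import dataclass

@dataclass
class PromptSpec:
    role: str
    task: str
    context: str
    constraints: str
    output_format: str  # e.g. PLAN / PATCH / TESTS / REVIEW

    def render(self) -> str:
        return (
            f"Role: {self.role}\n"
            f"Task: {self.task}\n"
            f"Context: {self.context}\n"
            f"Constraints: {self.constraints}\n"
            f"Output format: {self.output_format}\n"
            "Quality bar: include assumptions, risks, and verification steps."
        )

spec = PromptSpec(
    role="Senior backend engineer",
    task="Design a retry policy for the payments worker",
    context="Python/Celery, files: workers/payments.py",
    constraints="Idempotent retries only; no schema changes",
    output_format="PLAN",
)
```

An empty field then fails loudly at construction time instead of silently producing a vague prompt.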
I keep a larger private prompt library for this because rewriting these patterns every day is tedious. I packaged the developer version as Developer's Prompt Bible here:
There are also packs for marketing copy and commercial Midjourney/design workflows:
- AI Marketing Copy Prompt Pack: https://payhip.com/b/6lqVh?utm_source=devto&utm_medium=organic&utm_campaign=promptcraft_launch&utm_content=dev_article_2
- Midjourney Commercial Design Prompt Pack: https://payhip.com/b/XLNPm?utm_source=devto&utm_medium=organic&utm_campaign=promptcraft_launch&utm_content=dev_article_2
The main lesson: prompts are not magic sentences. They are lightweight operating procedures. The more repeatable the procedure, the more useful the model becomes.