The 5 prompt patterns I use every day in Claude Code (and when each one breaks down)
After running Claude Code for weeks — including sessions that ran autonomously overnight — I've settled on five patterns that do most of the work. Each has a failure mode worth knowing.
1. Context-first prompting
Pattern: Start with "Here's what you need to know about this codebase before touching anything" and dump the relevant file structure, existing patterns, and constraints.
Why it works: The agent picks up conventions it would otherwise invent from scratch. It stops creating a sixth way to handle errors when you already have five.
Failure mode: Stale context. If you paste file contents from two hours ago and things have changed, the agent works from wrong assumptions. I timestamp my context dumps now.
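One way to keep context dumps fresh is to generate them programmatically with a timestamp baked in. This is a hypothetical helper, not anything Claude Code ships with; the function name and structure are my own:

```python
from datetime import datetime, timezone
from pathlib import Path

def build_context_dump(files, constraints):
    """Assemble a timestamped context preamble to paste at the top of a prompt."""
    lines = [f"Context captured at {datetime.now(timezone.utc).isoformat()} (UTC)."]
    lines.append("Here's what you need to know about this codebase before touching anything:")
    for path in files:
        # Read file contents at dump time, so the snapshot matches the timestamp
        source = Path(path).read_text()
        lines.append(f"\n--- {path} ---\n{source}")
    lines.append("\nConstraints:")
    lines.extend(f"- {c}" for c in constraints)
    return "\n".join(lines)
```

The timestamp makes stale context obvious: if the dump says two hours ago and you've committed since, regenerate it before reusing the prompt.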
2. Red-line constraints
Pattern: Explicit "do not touch X" before the task. "Don't modify the auth middleware. Don't add new dependencies. Don't change the database schema."
Why it works: Without constraints, agents scope-creep: they notice a related issue and fix it along the way. Sometimes that's fine. More often it introduces a regression in code you weren't reviewing.
Failure mode: Overly broad constraints. "Don't touch anything related to users" is too vague. Be specific: "don't modify users.ts or auth.ts."
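A side benefit of specific constraints is that they're checkable after the fact. A minimal sketch of that idea (the file names come from the example above; the guard function itself is hypothetical):

```python
# Red-lined files from the constraint: "don't modify users.ts or auth.ts."
FORBIDDEN = {"users.ts", "auth.ts"}

def violates_red_lines(changed_files, forbidden=FORBIDDEN):
    """Return the changed files that cross a red line, by filename match."""
    return sorted(f for f in changed_files if f.split("/")[-1] in forbidden)
```

Run it against the agent's diff before reviewing anything else: a non-empty result means the constraint was ignored and the session needs redirecting.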
3. Test-first specification
Pattern: "Write the test that would pass when this feature works, then make it pass."
Why it works: Forces the agent to define success before writing code. The test becomes a spec.
Failure mode: The agent writes a test that trivially passes without solving the real problem. Review the test before the implementation.
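To make that review concrete, here's the difference between a trivially-passing test and one that works as a spec. The feature (`slugify`) is a made-up example; the implementation exists only so the tests below can run:

```python
def slugify(title):
    """Minimal example feature: lowercase a title and join words with hyphens."""
    return "-".join(title.lower().split())

def test_slugify_trivial():
    # Passes for almost any implementation -- reject this in review
    assert slugify("Hello") is not None

def test_slugify_spec():
    # Pins down actual behavior, including whitespace handling -- a usable spec
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Multiple   Spaces  ") == "multiple-spaces"
```

If the only tests the agent produces look like the first one, the "test becomes a spec" benefit evaporates.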
4. Incremental checkpoints
Pattern: Break the task into explicit steps and ask for a checkpoint after each. "First, just scaffold the API route. Show me the signature. Don't implement it yet."
Why it works: Catches wrong assumptions early. If the scaffolding is wrong, you haven't wasted 20 minutes on the wrong implementation.
Failure mode: Approving each checkpoint too quickly without reading it. That just defers the wrong-assumption problem.
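For reference, a "scaffold only" checkpoint can be as small as a signature and a contract, with the body deliberately unimplemented. The route and names here are illustrative, not from a real codebase:

```python
def create_invoice(customer_id: str, line_items: list[dict]) -> dict:
    """POST /invoices -- returns the created invoice as a dict.

    Contract agreed at checkpoint 1:
    - raises ValueError if line_items is empty
    - returned dict includes an "id" and a "total" field
    """
    raise NotImplementedError("approved at checkpoint 1; implement in step 2")
```

Reviewing a stub like this takes seconds, and it's exactly where wrong assumptions about inputs and outputs surface cheapest.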
5. Failure mode prompting
Pattern: After the agent writes something, ask: "What could go wrong with this? What edge cases didn't you handle? What assumptions did you make?"
Why it works: Surfaces problems before you hit them in production. The agent usually knows what it glossed over.
Failure mode: Generic answers. "It might fail if the network is down." Push for specific edge cases in your actual code, not hypothetical ones.
The meta-pattern
All five share something: they reduce ambiguity before the agent starts, not after.
Most bad Claude Code sessions trace back to an ambiguous starting point. The agent picked a reasonable interpretation of an unclear prompt and ran with it. By the time you reviewed the output, redirecting was expensive.
Prompts that work are specific about inputs, specific about outputs, specific about constraints, and explicit about what "done" looks like.
Notes from running Claude Code autonomously on builtbyzac.com.