I thought AI coding tools were bullshit.
Not in a loud, Twitter-argument way. More like a quiet eye-roll every time someone on a podcast said "and then I just asked the AI to refactor it." Yeah, sure you did. I'm sure that went great.
Then sometime around mid-2024, I was three weeks into a freelance project and there was a bug I could not find. The logic looked right. The tests passed. But every few hundred requests, the whole thing would just... silently drop data. I was losing my mind. On a whim, I pasted the module into Claude with a prompt I'd been tinkering with and asked it to look for concurrency issues.
It found a race condition in under ten seconds. A race condition I had been staring past for weeks.
I didn't mass-convert to the Church of AI that day. But I did shut up about it being useless. And over the months since, I've built up a set of prompts that I actually use every day. Not cute demos. Not "look what AI can do" parlor tricks. Just prompts that save me real time on real work.
Here are the ones I keep coming back to.
The one I use before I ship anything
Most developers ask AI to "review my code" and wonder why the feedback is garbage. No constraints, no focus, just "hey look at this." You wouldn't walk up to a senior dev and say "look at this" with zero context. Same thing here.
This is what I actually paste in before I push:
I'm about to ship this code. Give me a quick sanity check:
1. Any obvious bugs?
2. Will this break in production?
3. What's the one thing I probably forgot?
[paste code]
That third question does an unreasonable amount of heavy lifting. Turns out AI is weirdly good at catching the thing you stopped thinking about two hours ago because you got tunnel vision.
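To make that concrete, here's the shape of thing it tends to flag. This is a made-up TypeScript sketch, every name in it is invented, but it's the classic catch: the happy path works, the failure path was never finished.

```typescript
interface Order {
  id: string;
  customerEmail: string;
}

// Stubs so the sketch type-checks; imagine your own persistence and email layers.
declare function insertOrder(order: Order): Promise<void>;
declare function sendConfirmationEmail(to: string): Promise<void>;

// Before: tests pass, the demo works. But the email call is fire-and-forget,
// so when the email service hiccups, the rejection vanishes silently.
async function saveOrder(order: Order): Promise<void> {
  await insertOrder(order);
  sendConfirmationEmail(order.customerEmail); // missing await
}

// After: the thing I "probably forgot" was the failure path.
async function saveOrderFixed(order: Order): Promise<void> {
  await insertOrder(order);
  try {
    await sendConfirmationEmail(order.customerEmail);
  } catch (err) {
    console.warn(`confirmation email failed for order ${order.id}`, err);
  }
}
```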
Debugging (but make it structured)
I learned the hard way that "fix this" is the worst possible prompt. You get back a confident, plausible, completely wrong answer. Then you waste twenty minutes implementing it before realizing the AI hallucinated the root cause.
Help me debug this issue systematically.
Expected behavior: [what should happen]
Actual behavior: [what's happening]
Error message: [paste error]
Code: [paste relevant code]
Walk me through:
1. Most likely causes (ranked by probability)
2. How to verify each hypothesis
3. The fix for the most probable cause
The key is "ranked by probability." Without that, you get a grab bag of five theories with no sense of which one to try first. With it, you get a differential diagnosis. I've had this nail the issue on the first guess more times than I expected.
Security audit
Perform a security audit on this code. Check for:
- SQL injection vulnerabilities
- XSS attack vectors
- Authentication/authorization flaws
- Sensitive data exposure
- Input validation issues
Rate each finding as Critical/High/Medium/Low.
[paste code]
The severity rating is the whole trick. Without it you get a wall of text where a missing CSRF token and a hardcoded database password look equally important.
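To give a feel for what a Critical finding looks like next to its fix, here's a minimal made-up sketch. I'm assuming a Node service on node-postgres; the table and query are invented.

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the PG* env vars

// Critical: user input concatenated straight into the SQL string.
// A value like "1 OR 1=1" changes the meaning of the query.
async function getUserUnsafe(id: string) {
  return pool.query(`SELECT * FROM users WHERE id = ${id}`);
}

// Fix: parameterized query. The value travels separately from the SQL,
// so it can never be interpreted as part of the statement.
async function getUserSafe(id: string) {
  return pool.query("SELECT * FROM users WHERE id = $1", [id]);
}
```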
The error message translator
Short one. When you get some cryptic stack trace from a library you didn't write:
Explain this error in plain English and tell me how to fix it:
[paste full error + stack trace]
Context: I was trying to [what you were doing]
That context line matters more than you'd think. "I was trying to deploy to Kubernetes" and "I was running tests locally" are completely different debugging paths, and without that one line the AI will guess wrong half the time.
Code review that doesn't suck
I have strong feelings about this one. The default AI code review experience is terrible. You paste code, it says "looks good, maybe add some comments." Useless.
This prompt forces it to actually look:
Review this code for:
1. Bugs and potential runtime errors
2. Security vulnerabilities
3. Performance issues
4. Code style and readability
5. Edge cases not handled
Provide specific line references and concrete suggestions.
[paste code]
The numbered list prevents it from fixating on one category. Without it, you'll get ten nitpicks about variable naming and zero actual bug reports. The "specific line references" part is what makes the feedback actionable instead of vague.
Test cases
Look, I know writing tests after the fact sucks. I do it anyway because production incidents at 2am suck more. But brainstorming what to test is the part that takes forever, and this is where AI genuinely shines.
Brainstorm test cases for this feature:
Feature: [describe it]
Inputs: [what it accepts]
Outputs: [what it produces]
Generate cases for:
- Normal usage
- Edge cases
- Invalid inputs
- Concurrency if applicable
Format as a checklist.
I don't use the generated tests as-is. I use the list as a "did I forget anything" check. And yeah, it almost always turns out I forgot something.
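What I actually do with that checklist is dump it straight into todo stubs, so the gaps stay visible in the test report instead of in a doc nobody opens. A rough sketch, assuming Vitest (Jest's test.todo behaves the same), with an invented feature:

```typescript
import { describe, test } from "vitest";

// Each checklist item becomes a .todo stub: it shows up as pending
// in the test run until I actually write it.
describe("applyDiscountCode", () => {
  test.todo("applies a valid percentage code to the cart total");
  test.todo("rejects an expired code with a clear error");
  test.todo("handles an empty cart without dividing by zero");
  test.todo("ignores case and surrounding whitespace in the code");
  test.todo("only one of two concurrent redemptions of a single-use code succeeds");
});
```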
Breaking up long functions
This function is too long. Help me break it into smaller functions:
1. Identify logical sections
2. Suggest descriptive function names
3. Show the refactored code
4. Ensure behavior doesn't change
Step 4 is doing the important work. Skip it and the AI will "improve" your code by quietly changing behavior. I've been bitten by that.
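Here's the kind of before/after I mean, hand-rolled on an invented function. Same inputs, same rounding, same result; the only change is that each section got a name.

```typescript
interface LineItem {
  price: number;
  quantity: number;
}

// Before: one function doing three jobs (sum, discount, tax).
function checkoutTotal(items: LineItem[], couponPercent: number): number {
  let subtotal = 0;
  for (const item of items) {
    subtotal += item.price * item.quantity;
  }
  const discount = subtotal * (couponPercent / 100);
  const taxed = (subtotal - discount) * 1.08;
  return Math.round(taxed * 100) / 100;
}

// After: same math in the same order, just with names on the sections.
function subtotalOf(items: LineItem[]): number {
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}

function applyCoupon(subtotal: number, couponPercent: number): number {
  return subtotal - subtotal * (couponPercent / 100);
}

function addTax(amount: number, rate = 0.08): number {
  return Math.round(amount * (1 + rate) * 100) / 100;
}

function checkoutTotalRefactored(items: LineItem[], couponPercent: number): number {
  return addTax(applyCoupon(subtotalOf(items), couponPercent));
}
```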
Query optimization
Optimize this database query:
Query: [paste SQL]
Table schemas: [paste schemas]
Current execution time: [if known]
Provide:
- Optimized query
- Suggested indexes
- Explanation of improvements
If you don't include the schemas it'll suggest indexes on columns that don't exist. Ask me how I know.
The "works on my machine" prompt
Every developer's favorite sentence. This one has saved me from an embarrassing amount of Docker-related yelling.
This code works locally but fails in production/CI/Docker.
Local environment: [describe]
Production environment: [describe]
Error in production: [paste error]
What environment differences could cause this?
Give me a checklist to verify.
Environment bugs are a pain in the ass because the code looks correct. It IS correct. It's everything around the code that's wrong. The checklist format keeps you from randomly changing config values and hoping something sticks.
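For a sense of the shape these bugs take, here's a made-up TypeScript example (the variable names are mine). The code is fine; the assumption baked into its default is not.

```typescript
// Works on my machine because localhost:6379 really is running Redis there.
// In Docker or CI there's no Redis on localhost, REDIS_URL isn't set,
// and the fallback quietly points the service at nothing.
const redisUrl = process.env.REDIS_URL ?? "redis://localhost:6379";

// Checklist-style fix: fail loudly when the environment is incomplete
// instead of limping along on a default that only exists on one laptop.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

const redisUrlStrict = requireEnv("REDIS_URL");
```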
Documentation (the chore I actually outsource now)
Write documentation for this function:
- Purpose and description
- Parameters with types
- Return value
- Exceptions that can be thrown
- Usage example
- Edge cases to be aware of
Format: [JSDoc, docstring, whatever your language uses]
[paste function]
This is the one prompt where I mostly just take the output and paste it in. Not much creativity needed for JSDoc. I'd rather spend my brain on the actual code.
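For reference, this is roughly the shape of output I'm talking about, written out here by hand on a throwaway function rather than copied from any real session:

```typescript
/**
 * Splits an array into chunks of a fixed size.
 *
 * @param items - The array to split. Not mutated.
 * @param size - Maximum length of each chunk. Must be a positive integer.
 * @returns An array of chunks; the last chunk may be shorter than `size`.
 * @throws {RangeError} If `size` is not a positive integer.
 * @example
 * chunk([1, 2, 3, 4, 5], 2); // => [[1, 2], [3, 4], [5]]
 *
 * Edge cases: an empty input array returns an empty array.
 */
function chunk<T>(items: T[], size: number): T[][] {
  if (!Number.isInteger(size) || size <= 0) {
    throw new RangeError("size must be a positive integer");
  }
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}
```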
The pattern across all of these
You probably noticed: every prompt that works well has structure, context, and constraints. You tell the AI what to look at, how to format the answer, and what not to do.
Most people write prompts like they're texting a friend. "Hey can you look at this code?" That's why the answers are mid.
Fifteen seconds of prompt structure is the difference between a useful answer and a twenty-minute detour.
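If you use these often enough, it's worth keeping them as editor snippets or tiny template functions instead of retyping them. A minimal sketch of what I mean, reusing the debugging prompt from earlier; the function is mine, there's no library behind it.

```typescript
// Structure + context + constraints, captured once so I stop retyping it.
function debugPrompt(input: {
  expected: string;
  actual: string;
  error: string;
  code: string;
}): string {
  return [
    "Help me debug this issue systematically.",
    `Expected behavior: ${input.expected}`,
    `Actual behavior: ${input.actual}`,
    `Error message: ${input.error}`,
    `Code:\n${input.code}`,
    "",
    "Walk me through:",
    "1. Most likely causes (ranked by probability)",
    "2. How to verify each hypothesis",
    "3. The fix for the most probable cause",
  ].join("\n");
}
```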
I've been collecting these for about six months now. These ten are the ones I reach for daily, but I've got around 40 more that cover architecture decisions, performance profiling, caching strategies, threat modeling, that kind of thing. I cleaned them all up and put them on Gumroad if you want the full set. Or don't. These ten will get you pretty far on their own.