ChatGPT for Code Review: 6 Prompts I Run Before Opening a PR
I've been writing code for about ten years, and I still submit PRs with embarrassing bugs in them. Not every time, but often enough that I've had to have the "how did this get through review?" conversation more times than I'd like.
The problem isn't that code review doesn't work. It's that by the time a reviewer looks at your PR, they're juggling several things at once: understanding what changed, why it changed, and whether it's correct. That's real cognitive load. They catch the subtle architecture issues. They miss the obvious security hole you introduced three files in.
ChatGPT doesn't replace that review. What it does is handle the mechanical checks — the stuff a reviewer could catch but often doesn't because they're focused on bigger questions. Run it before you open the PR and the human review can actually do its job.
Here's my pre-PR checklist.
1. Security: The Stuff That Keeps You Awake at 2am
Security vulnerabilities are embarrassing to miss, and they're usually mechanical. SQL injection, unsanitized inputs, auth checks in the wrong layer. The kind of stuff a linter might catch if you had the right rules configured, but you don't.
Prompt: "Review this code for security vulnerabilities. Focus specifically on: SQL injection or NoSQL injection risks, missing authentication or authorization checks, unsanitized user inputs, hardcoded credentials or secrets, and any place where user-controlled data touches a system call or file path. Here's the code: [paste code]"
I learned to add "and any place where user-controlled data touches a system call" after a junior on my team wrote a Python script that used subprocess with a filename from user input. Caught it in ChatGPT review before it hit staging.
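To make that shape concrete, here's a minimal sketch of the bug class (a hypothetical line-counting helper, not the actual script from that incident) next to the safer pattern:

```python
import subprocess


def count_lines_unsafe(filename: str) -> None:
    # Vulnerable: the user-supplied filename is interpolated into a shell
    # string, so input like "notes.txt; rm -rf ~" runs a second command.
    subprocess.run(f"wc -l {filename}", shell=True)


def count_lines_safe(filename: str) -> subprocess.CompletedProcess:
    # Safer: pass argv as a list with no shell, so the whole filename is a
    # single argument and shell metacharacters are never interpreted.
    return subprocess.run(
        ["wc", "-l", filename], capture_output=True, text=True
    )
```

Pasting either version into the prompt above gets the first one flagged immediately; the list-of-args form is what the fix usually looks like.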
2. Performance Anti-Patterns in the Diff
Performance issues are notoriously hard to spot in review because you have to think about execution, not just correctness. N+1 query problems, synchronous operations that should be async, unnecessary data loading — reviewers see the code path but not the query count.
Prompt: "Look at this code and identify performance anti-patterns. Specifically check for: N+1 query patterns if there's any ORM usage, synchronous blocking calls that could be async, repeated computation inside loops that could be lifted out, and any place where we're loading more data than we need. Here's the TypeScript: [paste code]"
Adjust the language to match your stack. I use TypeScript a lot, but the same prompt works for Python or Go with minor tweaks.
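For readers who haven't hit it before, here's a sketch of the N+1 shape in Python, using an invented in-memory FakeDB that counts round trips so the difference is visible (real code would use your ORM's eager-loading or a JOIN instead):

```python
class FakeDB:
    """Stand-in for a database that counts queries (round trips)."""

    def __init__(self, authors: dict[int, str]):
        self.authors = authors
        self.query_count = 0

    def get_author(self, author_id: int) -> str:
        self.query_count += 1  # one round trip per call
        return self.authors[author_id]

    def get_authors(self, author_ids: list[int]) -> dict[int, str]:
        self.query_count += 1  # one batched round trip
        return {a: self.authors[a] for a in set(author_ids)}


def render_n_plus_1(posts: list[dict], db: FakeDB) -> list[str]:
    # Anti-pattern: one author query per post, inside the loop.
    return [f"{p['title']} by {db.get_author(p['author_id'])}" for p in posts]


def render_batched(posts: list[dict], db: FakeDB) -> list[str]:
    # Fix: fetch all authors in one query, then join in memory.
    authors = db.get_authors([p["author_id"] for p in posts])
    return [f"{p['title']} by {authors[p['author_id']]}" for p in posts]
```

With 3 posts the first version issues 3 queries and the second issues 1; with 3,000 posts the gap is what the reviewer never sees in the diff.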
3. Edge Cases and Error Handling Completeness
This is where I personally miss things most often. The happy path is complete and correct. The error handling is present. But is it complete? What happens when the network call returns a 202 instead of a 200? What if that array is empty? What if two requests race?
Prompt: "Review this function for edge case and error handling completeness. Assume adversarial inputs — what's the worst realistic input that could reach this code? Check for: empty collections, null/undefined where it's not expected, race conditions if there's any async code, and error cases that are caught but not handled meaningfully. Here's the code: [paste code]"
The "adversarial inputs" framing is important. It puts ChatGPT in a testing mindset rather than a documentation mindset.
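The 202-instead-of-200 question above is a good example of what this check surfaces. A hedged sketch of the two versions:

```python
def is_ok_naive(status: int) -> bool:
    # Happy-path check: anything that isn't exactly 200 is treated as a
    # failure, so 202 Accepted or 204 No Content gets mishandled.
    return status == 200


def is_success(status: int) -> bool:
    # HTTP defines the whole 2xx range as success.
    return 200 <= status < 300
```

Both pass the test you probably wrote (a plain 200 response); only one survives the adversarial-input question.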
4. "What Will Confuse the Next Engineer?"
This is a question I ask in real code reviews but often forget to ask myself. Code that's perfectly correct can still be a nightmare to maintain. Clever one-liners. Variables named after what they were, not what they are. Functions that do three things and are named for one.
Prompt: "Read this code as if you're a senior engineer doing a review, not the person who wrote it. What will confuse the next engineer who has to modify this in six months? Flag: variable names that don't communicate intent, functions doing more than one thing, any logic that requires a comment to understand but doesn't have one, and anything you'd ask in a code review comment."
This is the most underrated check on the list. I've gotten back feedback on my own code that made me genuinely cringe.
5. Test Coverage Gaps Based on the Actual Code
Most developers write tests for what they expect to happen. ChatGPT reads the implementation and tells you what tests you're missing based on what could happen. These are different things.
Prompt: "Here's a function and its current test suite. Based on the implementation, identify test coverage gaps — cases that the implementation handles (or mishandles) that aren't covered by the existing tests. Don't suggest tests for imaginary edge cases; only flag gaps based on what the code actually does. Implementation: [paste code] Tests: [paste tests]"
The "don't suggest tests for imaginary edge cases" clause matters. Without it you get a list of 20 hypothetical scenarios, most of which your application could never encounter.
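As a toy illustration of the kind of gap this prompt surfaces (a hypothetical clamp helper, not from a real codebase): the implementation below silently swaps misordered bounds, a branch that a happy-path test suite typically never exercises.

```python
def clamp(value: float, low: float, high: float) -> float:
    # Hidden behavior: misordered bounds are silently swapped. A test
    # suite that only ever passes low <= high never touches this branch,
    # which is exactly the coverage gap the prompt above flags.
    if low > high:
        low, high = high, low
    return max(low, min(high, value))
```

Because the clause restricts ChatGPT to "what the code actually does," it points at this swap branch instead of inventing scenarios about NaN inputs your app will never see.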
6. PR Description Generation
This sounds trivial but it isn't. A good PR description tells the reviewer what changed, why, and what to look for. Most PR descriptions are either empty or copy-paste from the ticket. Writing a good one takes five minutes you usually don't have.
Prompt: "Write a PR description for this diff. Include: a one-paragraph summary of what changed and why, any non-obvious decisions I made and the reasoning behind them, what the reviewer should focus on most, and any areas I'm uncertain about. Diff: [paste git diff output]"
I run git diff main and paste the output. It produces a better description than I'd write in a hurry, and more importantly it forces me to read the diff one more time before I submit.
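One caveat if your branch and main have diverged: plain git diff main compares against main's current tip, which can drag in unrelated changes that landed on main after you branched. The three-dot form diffs against the merge-base instead (sketch, assuming your default branch is named main):

```shell
# Diff against the merge-base with main, not main's tip, so commits
# merged into main after you branched don't pollute the output.
git diff main...HEAD > pr.diff
```

Then paste pr.diff into the prompt instead of the raw two-dot output.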
The Pattern
Every prompt above shares a structure: it names what you're looking for, gives a constraint (don't invent hypotheticals, focus on this specific language), and asks for actionable output. Vague prompts get vague answers. These are tuned to get specific, useful ones.
If this workflow is useful and you want a full set of prompts covering debugging, architecture review, documentation, and test generation, I've put together 200 of them for exactly this kind of daily developer workflow.
Get 200 ChatGPT Prompts for Developers — $19 instant download