Recently, during a conversation, I was asked what I look for in a pull request when reviewing code as an experienced software engineer. I mentioned a few criteria, but not all of them came to mind at that moment. It made me think, and I decided to document them, since one of my strengths is spotting issues in others’ code 😜.
Some might wonder whether this is still relevant today, given AI tools like Copilot that review code. While AI does a good job at code review, in my experience it still misses certain nuances and isn’t perfect. Our judgment, based on our knowledge of the product or feature, allows us to determine what to dismiss and what to accept. We assess whether the review is too detailed or too high-level, what is acceptable and what isn’t. We identify what is important versus what is not. Finding the right balance is something AI can’t fully replicate, and this is where our human judgment is invaluable. Additionally, we can give AI instructions to address its gaps.
Here are the key points I look out for during a code review as a frontend developer.
Reusability
Are there repeated pieces of logic in the code?
Could they be extracted to create a reusable component or function? Once extracted, are they properly tested, so they can be reused wherever applicable without having to rewrite or retest them extensively? When there is a future change in logic, you only need to update it in one place, and it will be automatically reflected everywhere. This also helps minimize bugs caused by inconsistent logic when one place is updated and another is not.
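As a hypothetical illustration, the same formatting logic copied into two components can be pulled into one small, tested utility (names here are invented for the example):

```typescript
// Hypothetical utility: currency formatting that previously lived,
// duplicated, inside two separate components.
function formatPrice(amountInCents: number, currency = "USD"): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency,
  }).format(amountInCents / 100);
}

// Both components now call the same function, so a future change
// (e.g. rounding rules) happens in exactly one place.
const cartLabel = formatPrice(199900); // "$1,999.00"
```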
Are there any hard-coded, repeated values?
As with logic, these can be stored as constants in a common file and reused wherever applicable. This helps reduce typos when repeatedly using hard-coded strings in multiple places, which can cause unexpected behavior and bugs. Instead, use constants. Today’s IDEs help with accurately autocompleting imported constants, which further reduces typos. Moreover, any future change in the value can be made in one place, avoiding the need to painstakingly update hard-coded strings everywhere and risk missing a few.
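A sketch of what that constants file might look like (the names and values are made up for illustration):

```typescript
// Hypothetical shared constants module: the strings live in one place,
// and `as const` lets TypeScript treat them as exact literal types.
const ORDER_STATUS = {
  PENDING: "pending",
  SHIPPED: "shipped",
  DELIVERED: "delivered",
} as const;

type OrderStatus = (typeof ORDER_STATUS)[keyof typeof ORDER_STATUS];

// Consumers compare against the constant, never a retyped string,
// so a typo like "deliverd" becomes a compile-time error.
function isFinalStatus(status: OrderStatus): boolean {
  return status === ORDER_STATUS.DELIVERED;
}
```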
Testability
Did previously passing tests or assertions start failing or have to be changed?
If previously passing tests are now failing or require changes, it indicates that the implementation logic has changed. Make sure the change is not caused by an unintended modification, typo, or side effect elsewhere. Instead of fixing the test first, verify that your implementation is correct. Review the git diff carefully to see whether there's a mistake somewhere; it could be as simple as missing braces, a typo, or a small change you thought was harmless but that breaks tested code.
Readability and Understandability
Naming of components, files, variables, and functions:
Are they named according to what they actually do? For example, if something returns a boolean indicating whether to show or hide a button, you might name it isButtonVisible. This name clearly states what it does and removes the need for additional comments. This is helpful for humans reviewing and reading the code, and it also helps AI agents understand the context. It’s a win–win.
Complex logic:
Are there too many if and else statements? Can it be simplified with a switch statement or refactored into smaller functions? This will make the code more readable.
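One common refactor (a hypothetical example; a switch statement works just as well) is replacing an if/else chain that maps a value to a result with a lookup object:

```typescript
// Hypothetical refactor: instead of
//   if (status === "loading") { ... } else if (status === "success") { ... }
// a typed lookup object keeps the mapping flat and exhaustive.
type Status = "loading" | "success" | "error";

const STATUS_LABELS: Record<Status, string> = {
  loading: "Loading…",
  success: "Done",
  error: "Something went wrong",
};

function statusLabel(status: Status): string {
  return STATUS_LABELS[status];
}
```

Because `Record<Status, string>` requires every status to have an entry, adding a new status to the union forces you to handle it.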
Function arguments:
Are there too many arguments being passed to a function? Can they be passed as a single object with multiple key–value pairs instead? Arguments must be passed in the correct order, and when there are many, we are bound to make mistakes. Optional arguments make this even more complicated. To avoid this confusion, you can pass multiple values in an object. From the keys in the object parameter, one can easily see what is what and not worry about the order.
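For instance (the function and fields below are invented for illustration), compare positional arguments with a single options object:

```typescript
// Hypothetical example: with positional arguments like
//   createUser("Ada", "a@example.com", false)
// it is easy to swap values or mis-count optional slots.
interface CreateUserOptions {
  name: string;
  email: string;
  isAdmin?: boolean; // optional values no longer need placeholder args
}

function createUser({ name, email, isAdmin = false }: CreateUserOptions) {
  return { name, email, isAdmin };
}

// The call site is self-describing, and property order doesn't matter.
const user = createUser({ email: "a@example.com", name: "Ada" });
```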
Comments and business logic:
If it’s difficult to understand what is going on when reading the code, that’s an indication it may need some comments explaining the underlying business logic. This will help you and your team understand why it was done in a certain way later on. Important logic should also be covered by unit tests and end-to-end tests.
Security
Is the code written securely? Nowadays, static code analysis catches most insecure patterns, but it still doesn’t cover everything.
Things to look out for:
- Never trust user input. Are all raw user inputs sanitized for unsafe scripts and characters before sending them to the server or rendering them in the browser? This avoids most XSS attacks.
- Are all PII (Personally Identifiable Information) fields masked or encrypted before sending them to third-party analytics or monitoring tools?
- Are there any exposed API tokens or secrets, or .env files accidentally committed? If anything is exposed, rotate and update the tokens.
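As a minimal sketch of the first point, here is a bare-bones HTML-escaping helper. In practice you should rely on your framework's built-in escaping (React escapes by default) or a vetted library such as DOMPurify; this only shows the idea:

```typescript
// Minimal sketch, NOT a complete sanitizer: escapes the characters
// that let raw user input break out into markup or attributes.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// A hostile input is rendered as inert text instead of executing.
const safe = escapeHtml('<img src=x onerror="alert(1)">');
```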
Error handling
Are all error scenarios handled carefully?
Are try/catch blocks used where errors are expected, such as around fetch or network requests?
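A hedged sketch of what that can look like around fetch (the function name and null-on-error convention are choices for this example, not a prescription):

```typescript
// Hypothetical wrapper: handles both failure modes of fetch.
async function fetchJson<T>(url: string): Promise<T | null> {
  try {
    const response = await fetch(url);
    if (!response.ok) {
      // fetch does NOT throw on 4xx/5xx; check the status explicitly.
      console.error(`Request failed with status ${response.status}`);
      return null;
    }
    return (await response.json()) as T;
  } catch (error) {
    // Network-level errors (offline, DNS failure, CORS) land here.
    console.error("Network error:", error);
    return null;
  }
}
```

Callers then handle the `null` case once, instead of every component reinventing its own error path.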
Are we mutating values unintentionally?
For example, suppose you want to display a future date in date-only format in one component and date-time format in another. You might accidentally modify the original date object when formatting it for one of these cases. To avoid mutating original values, try to clone or copy them and then modify them as needed. Or use a getter that only reads and returns the original value without modifying it.
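A hypothetical version of that Date scenario, with the clone-before-modify fix applied:

```typescript
// Hypothetical example: Date setters mutate in place, so formatting
// helpers must clone before modifying the shared value.
function startOfDay(original: Date): Date {
  const copy = new Date(original.getTime()); // clone, don't mutate
  copy.setHours(0, 0, 0, 0);
  return copy;
}

const deadline = new Date("2030-05-01T15:30:00");
const dayOnly = startOfDay(deadline);
// `deadline` still carries 15:30 for the component that shows the time.
```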
Do the error messages shown to users reveal too much information, such as stack traces or server details?
Overly detailed error messages can help attackers gain information about the type of server in use or expose vulnerabilities in your code.
Type safety
Is your code properly typed, especially when you’re manipulating API responses for the UI? If interfaces or types are well defined, there’s less chance of mistakes. For example, you might check whether a key has a particular name before converting its value to uppercase. If you type in the wrong key name, the code will not work as expected. Another issue is when a field is optional; if you don’t know it’s optional, you might skip checking whether it’s defined, causing runtime errors when it’s missing. Having interfaces defined allows your IDE to help you identify the exact key names and avoid mistakes caused by wrong assumptions.
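The optional-field case might look like this (the interface and fields are invented for illustration):

```typescript
// Hypothetical API response type: marking `nickname` optional forces
// every consumer to handle its absence before using it.
interface UserResponse {
  id: number;
  name: string;
  nickname?: string; // may be missing from the payload
}

function displayName(user: UserResponse): string {
  // Without the `?` in the type, `user.nickname.toUpperCase()` would
  // compile fine and then crash at runtime when the field is missing.
  return (user.nickname ?? user.name).toUpperCase();
}
```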
Optimization
I also look for places where lines of code can be optimized. Fewer lines of code often mean fewer opportunities for bugs.
In some cases, you can check if there’s already a utility function with similar logic that can be reused. Look for ways to adapt or slightly extend that utility function so it can support the new implementation.
Do you really need to assign intermediate variables where you could use object mapping or chaining directly? Extra variables can use more memory, often negligible on their own, but small changes like this can add up and improve performance over time.
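A small made-up before/after: instead of building separate intermediate arrays for the filtered orders and their totals, the operations can be chained directly:

```typescript
// Hypothetical data for illustration.
const orders = [
  { total: 40, paid: true },
  { total: 25, paid: false },
  { total: 60, paid: true },
];

// Instead of `const paidOrders = ...` then `const totals = ...`,
// chain filter and reduce into a single readable expression.
const paidTotal = orders
  .filter((order) => order.paid)
  .reduce((sum, order) => sum + order.total, 0); // 100
```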
I am sure there are more things I look for in a PR, but these are the top ones I can think of right now. What's important is that you care about the product, your application's users, yourself, and your teammates. If you care enough, you'll notice most mistakes in the code and try to fix them. That’s what makes us special as human reviewers.