Steven Lemon

Posted on • Originally published at Medium

3 Problems to Stop Looking For in Code Reviews

Reviewing code is one of the most valuable tasks we do each day as software developers. In that one activity, we spot problems and preempt issues before they can grow. We learn new approaches and gain familiarity with features we might have to update or borrow from in the future. We provide guidance and teaching, ensure quality and maintainability, and keep an eye on the future direction of the codebase.

But with code review doing so much for us, we can overload it, giving it too much responsibility and letting it accumulate tasks and checks that belong elsewhere. By treating code review as the final gate work passes through, we risk it also becoming the ambulance at the bottom of the cliff. Code review becomes time-consuming, problems slip through the cracks, and time gets wasted on rework.

The types of issues you could discover

Many of these issues only come to light during code review, but proper planning and tooling could catch them much earlier, especially the 'easy ones'.

Easy issues

Lint - Simple or common mistakes; not meeting best practice or the recommended style for the language or framework.

Coding Standards - Does your team or company have a list of rules or guidelines for writing code? Does this code follow them?

Documentation - If your coding standard requires comments, are they present? Are the comments correct, and do they add value? Has someone copy-pasted a block of code and forgotten to update the comments?

Style consistency - Does this code look like the code around it?

Language conventions - Is the code you are writing appropriate for the current language and framework?

Code mistakenly left in - Unused variables, TODOs that still need doing, and code that has been commented out but not removed.

Formatting

Spelling

Medium issues

Readability and maintainability - Are you going to be able to understand this in 6 months? Can you maintain and update this code if the original author has left the company?

Naming - Are functions/methods/classes well named? Is it clear what they are doing? Can someone understand what that method does without reading the comments?

Coverage - Do unit tests cover the new functionality? Can you think of any significant use cases that still need covering?

Niggles - Something doesn't feel right. It looks like a hack or workaround. It might be fine, but it is worth understanding the thinking behind this piece of code, just in case. At the very least, it might require a comment for the next person who comes along and also thinks it looks a bit odd.

Approach consistency - The same area of code is now using two different approaches to do the same thing.

Performance - Is this going to start having issues once it hits production?

Insufficient logging - If this does look a little risky, is there enough logging in place to tell when it has gone wrong?

Common pitfalls - Lessons you've learnt the hard way. Perhaps a framework, library or shared component isn't as reliable or useful as it first appears. Perhaps there is a customer with a unique setup that requires additional care.

Re-inventing the wheel - This task could have been completed using some pre-existing code. Perhaps it reimplemented some utility code or could have leveraged a shared library.

Edge cases - Will this code blow up or act in an undefined manner in some circumstances?

Complexity - The code is too hard to follow. Perhaps everything is crammed into a couple of giant classes or spread too thinly between too many loosely connected classes.

The old way of doing things - Best practice changes over an application's lifetime. Sometimes, you don't even realize you've done something using an outdated approach until the code review.

Public-facing problems - Spelling mistakes in your API or not following the guidelines for your public API's URL structure.

Inappropriate coupling - This piece of code has added a link to somewhere it shouldn't. Perhaps a generic library component is now tightly coupled to one of its consumers.

Hard issues

Architectural problems - Before you even get into the detail, the high-level approach taken is incorrect.

Smells and patterns - The code is poorly structured or features a familiar code smell. It might be misapplying a pattern or could benefit from using a pattern.

Doesn't meet the requirements - The code is solving the wrong problem or doesn't address what the business requested.

Gold plating - The changeset includes extra code to provide future flexibility that you might never need.

Introducing a new approach - The pull request introduces new patterns, libraries, or tools to the team or project. Is the approach sane and obvious? Is it documented? Do we all need to start doing things this new way? Is the benefit worth having yet another way of doing the same thing?

Bugs

Checks that can be spotted by tooling

These low-hanging fruit seem to make up the bulk of code review comments; they include spelling, style and lint issues. While comparatively minor, they are still worth addressing: a consistent visual appearance makes the code easier to read, and lint issues can disguise minor bugs. Automated tools can also point out locations with high complexity, low test coverage and duplicated blocks of code.

For most of the languages, frameworks and IDEs you are using, there is an absolute wealth of quality tools that could automate the grunt checks for you. Rather than waiting for code review, these tools highlight issues during development, or just before check-in. For example, our team uses Prettier and ESLint for our front-end work, and a combination of ReSharper and Roslynator for our C# backend. We use ReSharper and a Visual Studio Code plugin for spellchecking, and SonarQube's static analysis, complexity analysis and code duplication checks in our CI/CD pipelines. Automating these checks means that feedback arrives during development, avoiding the back and forth of highlighting these issues during code review. As a reviewer, it means you're not getting bogged down in minor issues and can focus on the rest of the code review.
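To give a sense of how little setup this takes, here is a minimal sketch of a shared ESLint flat config that layers Prettier on top of the recommended rules. It is not our actual configuration, and the individual rules shown are only examples; swap in whatever your coding standards call for.

```js
// eslint.config.js: a minimal sketch, not our actual configuration.
// Assumes ESLint 9's flat config format and eslint-config-prettier,
// which switches off the formatting rules that Prettier already handles.
import js from "@eslint/js";
import eslintConfigPrettier from "eslint-config-prettier";

export default [
  js.configs.recommended, // baseline checks for simple and common mistakes
  eslintConfigPrettier,   // let Prettier own formatting; keep ESLint for correctness
  {
    rules: {
      "no-unused-vars": "error", // code mistakenly left in
      "no-warning-comments": ["warn", { terms: ["todo", "fixme"] }], // TODOs that still need doing
    },
  },
];
```

Because a file like this lives in the repository rather than in each developer's IDE settings, the editor, the command line and the CI pipeline all flag the same issues.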

This isn't to say that you should stop looking for lint and style issues entirely. Sometimes scripts aren't run, or you find a new issue that isn't covered by your automation. Occasionally, someone installs a new tool that conflicts with your established style. But rather than just fixing the current instance of the problem, take steps to stop it from happening again. Tweak your style settings, add quality-checking tools to check-in or your CI/CD pipelines, adopt new tools where appropriate, and ensure everyone's tools are running with the same configuration.
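One way to make "run the checks" automatic at check-in is a pre-commit hook. The sketch below assumes lint-staged, typically driven by a husky pre-commit hook that runs npx lint-staged; the globs and commands are only examples.

```js
// lint-staged.config.mjs: a minimal sketch, assuming lint-staged and husky are installed.
// Each glob maps to the commands run against the staged files before a commit is accepted;
// lint-staged appends the matching file names to each command.
export default {
  "*.{js,jsx,ts,tsx}": ["eslint --max-warnings=0", "prettier --check"],
  "*.{json,css,md}": ["prettier --check"],
};
```

A commit that fails these checks is rejected on the developer's machine, long before a reviewer ever sees the change.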

Problems with the overall approach

This category includes problems with the broad strokes of a feature, rather than the details of its implementation. The feature may not fit into your application's intended architecture, may use an outdated approach, or may invent a new approach when it should have leveraged existing code. The developer may have misunderstood the requirements and developed the wrong solution. Perhaps you are seeing something entirely new for the first time in this code review.

For the big problems, code review happens too late in the development process. The work has already been done. To rework or to start again could be very time-consuming. The reviewers have a difficult choice: risk the current release while the work is redone, or let it through and suffer the additional technical debt.

Rather than waiting until code review, you should be trying to identify these problems earlier in the feature's development. Do you have juniors, intermediates or new hires who need a bit more guidance? Perhaps your team needs encouragement to ask questions when they're unsure, rather than guessing and delaying vital questions until the code review stage. If you're in an agile environment, perhaps you need more detail established upfront in sprint planning. Alternatively, you might need less detail in sprint planning, where you might not have the full picture, and instead spend that time on spikes, proofs of concept or design discussions. Two or three developers spending 15 minutes in front of a whiteboard can avoid days wasted going in the wrong direction.

Of course, a balance needs to be struck. Most work doesn't need early checks, and it's valuable to keep moving, delaying some questions until code review. Getting bogged down and paralyzed by the feeling that you need to ask for permission kills your momentum; then again, so does pausing development on your second task while you redo the first.

Pitfalls

Technical debt can cause enough problems on its own, but a dangerous variant is when it leaks out of the codebase and into the developers: when, rather than fixing a problem or trap, someone decides that it can be left to the code reviewers to prevent. These issues are often non-obvious, and developers will either forget or be unaware of the original problem. As a code reviewer, you might even start to build a checklist of frequent problems you need to ensure aren't present.

Issues that are surprising, or that require a good knowledge of the history of the application, are often also the hardest to spot while reviewing. Sometimes they can be severe enough to prompt substantial rework if they haven't been taken into account earlier. Having to search for hidden but essential issues also places undue responsibility on the code reviewer. Particularly if you don't have QA, it becomes the code reviewers' responsibility to ensure all possible use cases are covered. In one former role, no one wanted to review particular areas of the codebase because the chance of missing something critical was simply too high.

As with the issues in the categories above, code reviews are not the right place to address these problems. It's seldom as simple as not having technical debt in the first place, but there are steps you can take to make unexpected issues more visible. A powerful method of achieving this is writing custom lint checks, so those common pitfalls are treated as compilation or lint errors by your IDE or build pipeline. You could have problematic test cases already set up in the development environment's test data to make them more prominent, or you could thoroughly cover the situation with its own set of integration tests.
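As an example of what a custom lint check can look like, here is a sketch of a team-local ESLint rule. The scenario and every name in it (parseDateLoose, parseDateStrict, the plugin itself) are hypothetical; the point is simply that a lesson learnt the hard way can be encoded once as a rule instead of recited in every review.

```js
// eslint-plugin-local.js: a sketch of a team-local ESLint plugin with one custom rule.
// Hypothetical pitfall: a legacy helper, parseDateLoose, silently accepts invalid dates,
// and reviewers keep having to catch new calls to it.
const noLegacyDateParse = {
  meta: {
    type: "problem",
    docs: { description: "disallow the legacy parseDateLoose helper (hypothetical example)" },
    messages: {
      noLegacy: "parseDateLoose silently accepts invalid input; use parseDateStrict instead.",
    },
    schema: [],
  },
  create(context) {
    return {
      // Visit every call expression and report direct calls to the banned helper.
      CallExpression(node) {
        if (node.callee.type === "Identifier" && node.callee.name === "parseDateLoose") {
          context.report({ node, messageId: "noLegacy" });
        }
      },
    };
  },
};

// Expose the rule as a plugin so it can be switched on in eslint.config.js,
// e.g. rules: { "local/no-legacy-date-parse": "error" }.
export default {
  rules: { "no-legacy-date-parse": noLegacyDateParse },
};
```

Once a rule like this is in place, the pitfall shows up as a red squiggle in the IDE and a failed build, rather than as something a reviewer has to remember to look for.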

Conclusion

As a reviewer, code reviews become so much easier when you can trust the big picture, aren't distracted by a horde of tiny issues, and don't have an ever-growing checklist of cases you must not let through. You can spend more time thinking about the future impact of the changeset, its maintainability and readability, and opportunities to teach and learn.

As the reviewee, the right tooling lets you address most of the potential issues before you even create the pull request. Clarifying your approach earlier lets you proceed with confidence that you are doing things correctly. Having edge cases surface sooner means you can spend less time worrying about what you don't know. With more focused reviewers, the feedback you receive is more useful and informative, and less likely to be a deluge of minor issues, surprises or requests for significant changes.
