Jayant Bhawal for Middleware


The Senior Engineer's Guide to Code Reviews

Code reviews.

You know how important they are.

They are one of the pillars of getting reliable code out there.

Yet, it’s one of those things you need to squeeze out some time for in your super busy days.

If you’re not reviewing code, you might as well be shipping landmines to your users, because you never know when one will blow up. 🤷

Obviously, you know that. You’re not here to be told “Hey! You should have code reviews! It’s a vital thing!”

My team already does reviews. Why should I care?

Code review processes handled without care and diligence can have serious consequences.

At one of my previous orgs, code reviews were often not done thoroughly, and hence needed multiple passes. They were also done by reviewers on opposite ends of the earth! 🌏

So addressing any comment took almost a whole day. And because the reviews would usually not be comprehensive, the rework time on a PR would stretch into days for trivial things.

Cycle time breakdown of a team

“You can’t improve what you don’t measure”

Often attributed to Peter Drucker, but I’ve not been able to actually find evidence for that.

But it’s a statement I’ve found to be profound in my experience.

In the past, I’ve made a case to my leadership for much-needed organizational changes that reduce inter-dependencies across time zones and let people collaborate faster.

I understand how difficult it can be to do so, but it’s even harder to get any change in motion without a solid data-backed reason for why it’s needed.

P.S.: That’s part of why Dhruv & I started Middleware. 🚀

GitHub: middlewarehq / middleware
✨ Open-source DORA metrics platform for engineering teams ✨


Open-source engineering management that unlocks developer potential


Join our Open Source Community


Introduction

Middleware is an open-source tool designed to help engineering leaders measure and analyze the effectiveness of their teams using the DORA metrics. The DORA metrics are a set of four key values that provide insights into software delivery performance and operational efficiency.

They are:

  • Deployment Frequency: The frequency of code deployments to production or an operational environment.
  • Lead Time for Changes: The time it takes for a commit to make it into production.
  • Mean Time to Restore: The time it takes to restore service after an incident or failure.
  • Change Failure Rate: The percentage of deployments that result in failures or require remediation.

Okay, I hear ya. What are my options?

What you ideally want are code reviews that are done thoroughly, which is to say that obvious red flags, performance or security defects, and hard-to-read code shouldn’t go unnoticed.

But you also want all of this to happen in a reasonable amount of time.

Well-reviewed code merged in a reasonable amount of time means your team delivers predictably and with high reliability.

If only there were a well-researched, structured way of getting a grip on this. 🤔

Have you heard of… DORA metrics?

Okay, this isn’t another one of those “DORA GOOD!” articles.

These are my experiences of how keeping an eye on the four keys (as explained by the awesome Nathen Harvey) helped me improve the code delivery experience for myself and my team in the past.

DORA Metrics as seen on Middleware Open Source

Exploring Code Reviews with DORA

How Long Reviews Inflate Lead Time

Long review cycles directly impact Lead Time for Changes.

Lead time basically consists of 5 parts:

  1. Time from the first commit to the PR being opened
  2. Time until the PR receives its first review (a comment, a change request, or an approval)
  3. Time spent making changes to the PR, until it’s finally approved
  4. Time between approval and the PR being merged
  5. Time from merge until the PR is eventually deployed

Naturally, any of these parts taking longer will inflate your lead time. But there are 2 parts that are particularly egregious factors for delays here: #2 and #3.
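To make the breakdown concrete, here’s a minimal sketch in TypeScript of how those five sub-parts could be computed from a PR’s event timestamps. The field names are hypothetical, not any particular Git provider’s API.

```typescript
// Hypothetical PR timeline; field names are illustrative only.
interface PrTimeline {
  firstCommitAt: Date;
  openedAt: Date;
  firstReviewAt: Date; // first comment, change request, or approval
  approvedAt: Date;
  mergedAt: Date;
  deployedAt: Date;
}

const hours = (from: Date, to: Date) =>
  (to.getTime() - from.getTime()) / 36e5;

// The five sub-parts of lead time described above.
function leadTimeBreakdown(pr: PrTimeline) {
  return {
    commitToOpen: hours(pr.firstCommitAt, pr.openedAt),      // 1
    firstResponseTime: hours(pr.openedAt, pr.firstReviewAt), // 2
    reworkTime: hours(pr.firstReviewAt, pr.approvedAt),      // 3
    mergeTime: hours(pr.approvedAt, pr.mergedAt),            // 4
    deployTime: hours(pr.mergedAt, pr.deployedAt),           // 5
  };
}
```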

#2. Time till the PR receives its first review (First Response Time)

After the PR is opened, a dev can’t really do much with it. The PR may be totally good to go! It may need solid changes. At this point, only a review will tell. This is also the point where a dev may not be able to pick up more tasks, because a review could technically happen at any time and they would suffer from context switching.

Context switching is one of the biggest productivity killers for devs.

The Misleading Focus on "Time per Review"

This talks about the third sub-part of the Lead Time metric.

#3. Time spent on making changes to the PR (Rework Time)

The real problem here stems not so much from how much time was spent, but from how many times the back-and-forth happened. Let’s call that “rework cycles”.

If there was only one rework cycle before the PR was approved, it could still have taken a long time, but that was actual implementation time, not idle time. This kind of rework can be mitigated by better training, codebase onboarding, context sharing, etc.

But… if you’re going back and forth a lot of times, then each of these cycles has some idle time associated with it, much like first response time.

During this time, the dev can’t pick up new work, because that would inevitably result in rapid context switching.

This is likely to happen when the PR is too large to review in one go, or the reviewer didn’t review thoroughly for other reasons. It is especially exacerbated when the author and reviewer are in far-apart time zones, because each review and each round of rework happens during work hours in their respective time zones, inflating the time before the reworked changes can be checked in to many, many hours.
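As a rough illustration (TypeScript, with a hypothetical event shape), counting rework cycles is just counting how many rounds of “changes requested” the author had to address before approval, which is a very different signal from total time spent:

```typescript
// Hypothetical review event stream for a single PR, oldest first.
type ReviewEvent =
  | { kind: 'changes_requested'; at: Date }
  | { kind: 'commented'; at: Date }
  | { kind: 'approved'; at: Date };

// One rework cycle = one round of "changes requested" the author had to
// address before the PR was finally approved.
function countReworkCycles(events: ReviewEvent[]): number {
  let cycles = 0;
  for (const event of events) {
    if (event.kind === 'changes_requested') cycles++;
    if (event.kind === 'approved') break;
  }
  return cycles;
}
```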

This is a snowball effect

The more PRs get blocked like this, the slower the teams deliver. And often the new work doesn’t stop coming, so that makes it even more challenging for devs to manage and estimate their work accurately.

If this keeps happening constantly, it also deals a blow to the morale of the team.

tl;dr

Focusing solely on reducing "time per review" can backfire.

The goal should be to optimize the review process without sacrificing thoroughness, ensuring each review adds real value.

Subpar Reviews and Change Failure Rates

Teams operate under pressure and tight deadlines all the time, and it’s unreasonable to expect that to magically change. But it’s also unrealistic to think that corners won’t be cut to make sure things still ship on time.

Since we’re talking about code reviews, the corners that most often get cut are:

  1. Large PRs are created that contain all the code for a feature, instead of well-contained, smaller, easier-to-review PRs.
  2. PRs are reviewed by just skimming over them, because the reviewer may not have the mental capacity or time to deal with them properly at the moment.

Both of those things happen from time to time. Devs are humans too. You won’t solve this by just blaming it on them or strong-arming them into reviewing “properly”.

The most important thing is for you to know that it’s happening in the first place. Because then you can do something about it. How would you know about it, you ask?

  1. Your Lead Time will be going down, because reviews are being done faster (often faster than they should be).
  2. Your Change Failure Rate might be going up. Of course, with subpar reviews you’re likely shipping more bugs.
    1. But even if your CFR isn’t going up noticeably, your team might still be shipping low-performance or low-quality code that will bite you later, and will likely show up as higher Lead Time down the line. By then it’ll be too difficult to correlate with the reviews of today.

This is a good time to mention that DORA is a great guide, but it’s not perfect.

Don’t treat it like a definitive rule-book. Don’t measure individuals against it.

Use it holistically for your team, but also be involved to make sure it’s actually helping your team. That’s the goal after all, isn’t it?

Great! How? 👉 Strategies for Faster, More Effective Reviews

Here’s a quick pre-review checklist

  1. Tests: Ensure all relevant tests are written and passing ✅.
    1. This can be done by a CI bot (or GitHub Actions)
  2. Documentation: Update relevant docs, including inline comments and README files.
  3. Clear Commit Messages: Write descriptive commit messages that explain the 'why' behind changes.
    1. This could also be enforced via commit-lint (see the config sketch after the example commit message below)
    2. You could also use aicommit to help write good and detailed commit messages!
    3. My team often uses GH Copilot to create commit messages that actually end up being totally satisfactory to me!

Example commit message:

```
feat: add user authentication

- Implemented OAuth2 for secure login
- Added unit tests for authentication flows
- Updated API documentation with new endpoints
```
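If you go the commit-lint route, a minimal config might look like the sketch below. This is an assumption-laden example: it assumes `@commitlint/config-conventional` is installed and that your commitlint version reads a `commitlint.config.ts` file; the same shape works in plain JS otherwise.

```typescript
// commitlint.config.ts — a minimal sketch, not a complete team policy.
import type { UserConfig } from '@commitlint/types';

const config: UserConfig = {
  extends: ['@commitlint/config-conventional'],
  rules: {
    // Nudge authors to explain the "why" in the body, not just the "what".
    'body-empty': [2, 'never'],
    // Keep subjects short enough to scan in a PR list.
    'header-max-length': [2, 'always', 72],
  },
};

export default config;
```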




Right Reviewer, Right Time

Match reviewers to their expertise and current workload to avoid overload. Complex changes go to senior devs, simpler ones to peers.

But you also need to be aware of how much context a dev has of a specific codebase.

There are a few challenges here:

  • If your devs are highly specialized within specific repos, it’ll be pretty difficult to use their skills on a separate codebase, simply due to the time required to onboard and share context.
  • If your devs are too generalized over all codebases, it might be difficult for them to solve certain issues quickly due to a lack of deep context in any specific codebase.
  • If one of the devs on the team has a lot of context about things, it’s super easy to overburden them. You need to make time to distribute context sooner rather than later, so your work doesn’t get blocked at a time when it’s most critical.

You want to ensure you have a mix of both, and that can be achieved with as few as the 2-3 devs you work with.
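Here’s a toy sketch of that balancing act in TypeScript. The types and scoring weights are hypothetical (this is not how any particular tool assigns reviewers): prefer reviewers with context on the repo, penalize whoever is already buried in open reviews, and route complex changes to senior devs.

```typescript
// Hypothetical reviewer profile; real data would come from your VCS and team records.
interface Reviewer {
  name: string;
  repoContext: Record<string, number>; // 0..1 familiarity per repo
  openReviews: number;                 // current review load
  senior: boolean;
}

// Naive scoring: reward repo context, penalize load, require seniority for complex changes.
function pickReviewer(
  reviewers: Reviewer[],
  repo: string,
  complex: boolean
): Reviewer | undefined {
  const candidates = complex ? reviewers.filter(r => r.senior) : reviewers;
  return candidates
    .map(r => ({ r, score: (r.repoContext[repo] ?? 0) - 0.2 * r.openReviews }))
    .sort((a, b) => b.score - a.score)[0]?.r;
}
```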

Visualizing bottlenecks in a team

Understanding who gets blocked on whom for code reviews is crucial. You don’t want your team to stop delivering entirely because someone had to be on leave.

Tools of the Trade

Use static analysis, code linting, and automated checks to catch simple issues before human review. This lets reviewers focus on more complex feedback.

Example Tools:

  • ESLint: JavaScript linting.
  • Husky: For running pre-commit checks and static analysis.
  • CI/CD Pipelines: Automated testing and build processes.

Super important tip:

It’s easy to lose a LOT of time arguing over spaces and tabs, semicolons or not, trailing newlines.

But all that doesn’t matter.

Decide on and agree to whatever code style the team finalizes, and enforce it as part of the linter rules.

This stuff isn’t worth your time. 👍
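For example, once the team agrees on a style, those decisions can live in the linter config instead of in review comments. Below is a minimal ESLint flat-config sketch; it assumes an ESLint version that supports `eslint.config.ts` (otherwise the same shape works in `eslint.config.js`), and the rules shown are just illustrative style decisions.

```typescript
// eslint.config.ts — a minimal sketch; rules shown are example style decisions.
import js from '@eslint/js';

export default [
  js.configs.recommended,
  {
    rules: {
      // Settle the "semicolons or not" debate once, then stop arguing in reviews.
      semi: ['error', 'always'],
      eqeqeq: 'error',
      // Trailing newlines handled by the linter, not by humans.
      'eol-last': ['error', 'always'],
    },
  },
];
```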

The Art of Feedback

Give actionable, specific comments that focus on improvement, not nitpicking. Avoid vague statements and offer clear guidance.

Share how a file could have been restructured into multiple files, along with why doing that is a good idea.

Share why making that DB call multiple times in a loop might be a bad idea because of reasons I’m sure I don’t need to explain here. 😆

If the nitpicks are largely things that could have been handled by a linter, then use one of those.

People hate reviews that mostly have only nits. But again, poor variable names, typos, etc. can’t just go to prod! 😁

Example:

Ineffective comment:
"Fix this."

Effective comment:
"Consider using a map here to improve lookup efficiency. This will reduce the time complexity from O(n) to O(1)."
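To make that effective comment concrete, here’s roughly what the suggested change could look like in TypeScript. The `User`/`Order` shapes and sample data are made up purely for illustration.

```typescript
interface User { id: string; name: string }
interface Order { userId: string; total: number }

const users: User[] = [{ id: 'u1', name: 'Ada' }, { id: 'u2', name: 'Linus' }];
const orders: Order[] = [{ userId: 'u2', total: 42 }, { userId: 'u1', total: 7 }];

// Before: Array.find inside the loop makes this O(n * m).
for (const order of orders) {
  const user = users.find(u => u.id === order.userId);
  console.log(user?.name, order.total);
}

// After: build a Map once; each lookup is then O(1), so the loop is O(n + m).
const usersById = new Map(users.map(u => [u.id, u] as const));
for (const order of orders) {
  const user = usersById.get(order.userId);
  console.log(user?.name, order.total);
}
```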




Streamlining the Process with Middleware

How Middleware Helps

I’m able to see specifically where my teams get stuck, why, and how I can unblock them.

That’s kind of half of my job, and now I’m able to do this stuff a lot faster than before!

Process overview of a team

Here are a few things I focus on:

  • Review Metrics: Track how long reviews take and identify where delays occur.
  • Process Insights: Gain visibility into the entire review process and find areas for optimization.

I won’t get too much into that because then it’ll sound like a sales pitch! 😂

Beyond Technicalities: The Human Element

Fostering a Culture of Constructive Feedback

Promote a culture where feedback is seen as a growth opportunity. Constructive, respectful communication helps improve code quality and team morale 💬.

Balancing Speed with Thoroughness

Balance speed with thoroughness. Quick reviews shouldn't compromise scrutiny, and thorough reviews shouldn't drag on.

Psychological Safety

Ensure psychological safety for both reviewers and authors. Encourage open discussions and address mistakes without blame, fostering an environment of continuous improvement 🌱.

Remember, people often put their guard up when you’re sharing feedback for improvement. Be considerate, and clear.

Conclusion

Effective code reviews are crucial for maintaining code quality and delivery speed. By aligning with DORA metrics, using the right tools, and fostering a constructive feedback culture, teams can streamline their review processes. Embrace these practices to make your code reviews both efficient and impactful.

Try Middleware to gain deeper insights into your code review processes and identify areas for further improvement. 🚀

Again, these are just guidelines and how we look at code reviews. Do share how code reviews are done in your organization!

Code reviews play a vital role in overall product reliability. There are instances of bad code reviews (not lousy code!) causing negative brand impact. To sum up, better code review processes contribute to a lower failure rate.

Frameworks like DORA are designed to be lightweight, helping engineering teams be productive without too much effort from engineers or even leaders. We at Middleware are on a mission to help engineering teams ship productive code. Do check out Middleware open source, our open-source DORA metrics solution that you can host locally. Consider giving us a star if you like it!

Top comments (20)

Samad Yar Khan

Great read @jayantbh! I have learned a lot from code reviews, and it was solely because of the well-structured comments you left on my PRs, followed by the reasoning for changes. The point where people optimise for review time instead of review cycles was bang on, as it ends up wasting more time in totality.

Martin Baun

Thank you for this!
Feedback is a gift. The sole reason is to improve the quality of the code base. The more errors you point out in your code, the better your code will be.

Jayant Bhawal

Exactly! And it's not limited to errors either. General code quality, abstractions, etc. done well can benefit everyone working on a codebase!

Martin Baun

Yep :)

Aditya Mitra

Nice article! Should code reviews be done at the start of the developer's day or at the end?

Jayant Bhawal

Honestly there's no silver bullet answer to this, and neither should there be.

Someone might like doing this at the start, or at the end. Before, or after lunch. Or at a fixed time every day when they aren't likely to get any meetings.

Someone with more discipline and focus than me might even deal with reviews in between other work, as a break!

Aravind Putrevu

This one is quite extensive, Jayant.

Jayant Bhawal

Thank you Aravind! Glad you liked it!

Ayush Thakur

Super informative article

Shivam Chhuneja

Quite an insightful piece Jayant!

Lui

Nice Article. It describes a lot of problems I've seen many times with reviews in the past. Middleware looks nice, but for me, it's a bit too expensive. But we don't need to discuss this here as opinions about prices definitely will differ here :D

Antonio | CEO at Litlyx.com

Amazing, Amazing, Amazing!!! You guys are metrics-driven evangelists like us at Litlyx.

Share some love on our Github Open-Source Repo for Easy Analytics with 1 line of code!

Jayant Bhawal

That is an awesome project! Glad you liked the post, and thanks for sharing your project

X. Ephun

One observation: the further "upstream" you move the difficult decisions, the more effort (time/energy) you will save "downstream". In this case, being judicious when you hire / bring people onto your team will go a long way towards minimizing wasteful PR/MR cycles.

Jayant Bhawal

I partly agree, because yes, if you optimize for people who can collaborate really well, you'll likely deliver faster.

But despite your best efforts, interviews can be an inherently flawed process. You might find people who are great collaborators, but may not be great at more technical work.

It might be better served by training people and having processes in place that are enforced with care. Because hiring is already very hard. :(

Juan Julián Merelo Guervós

But how does aicommit focus on the why? It literally writes the what

Jayant Bhawal

Oh yeah, definitely don't use AI assistance to do all of your job for you.
Use it as what it is meant to be. Additional help.
