If you are reading this, chances are, you are in some way involved with the creation of software. You are probably well aware that code reviews are considered a best practice.
What you may not know is that 64.3% of Pull Requests are merged without any code changes as a result of the review. In other words, only about a third of reviews yield a change to the proposed code.
🗃 Dataset
We have been collecting open source data for a while, and in this post we will share insights from analyzing 2,340,078 PRs spread across 7,836 organizations. The dataset is specifically derived from repositories that exhibit teamwork. Thankfully, with so many software companies embracing open source, there are plenty of examples of different styles of collaboration.
🔮 Code review outcomes
Generally speaking, a PR under review will follow one of two paths:
- Merged as is (aka LGTM'd) — 1,504,914 PRs in the dataset
- Merged with changes as a result of feedback (comments, suggestions etc.) — 835,164 PRs in the dataset
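The headline percentages fall straight out of these two counts. Here's a quick sketch (the counts come from the dataset above; the script itself is just illustrative):

```python
# PR counts as reported in the dataset
merged_as_is = 1_504_914        # LGTM'd, merged without changes
merged_with_changes = 835_164   # changed as a result of review feedback
total = merged_as_is + merged_with_changes  # 2,340,078 PRs

pct_as_is = merged_as_is / total
pct_changed = merged_with_changes / total
print(f"LGTM'd: {pct_as_is:.1%}, changed after review: {pct_changed:.1%}")
```

Running this prints the 64.3% / 35.7% split quoted earlier.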
⏳ Time to merge PRs
Let's look at how long people wait for their PRs to be merged.
On average, the time between a PR being opened and it being merged is 128 hours (~5 days). If we look only at the PRs that were LGTM'd this number is 74 hours (~3 days) and for the other 35.7% of PRs the average lead time is 225 hours (~9 days).
Here are two histograms with the probability distributions of merge times for LGTM'd and non-LGTM'd PRs.
Of course, the PRs that are LGTM-merged tend to be simpler, smaller, or simply lower risk, so their lead time is much lower. Still, the cumulative wait time for changes that were already good to go is substantial.
In this dataset, the total time waiting for Pull Requests to be merged (without any changes as a result of the review) is 12,718 years. For the PRs that got corrections, the time waiting adds up to 21,437 years.
The astronomical absolute values aside, we can see that a whopping 37% of the time spent waiting was more or less unnecessary.
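These totals can be roughly reconstructed from the PR counts and average lead times given earlier (the result won't match the dataset's exact figures to the year, since averages were rounded, but it lands very close):

```python
HOURS_PER_YEAR = 24 * 365

# Counts and average merge times from the dataset above
lgtm_prs, lgtm_avg_h = 1_504_914, 74        # merged as is
changed_prs, changed_avg_h = 835_164, 225   # merged with changes

lgtm_years = lgtm_prs * lgtm_avg_h / HOURS_PER_YEAR        # ≈ 12,700 years
changed_years = changed_prs * changed_avg_h / HOURS_PER_YEAR  # ≈ 21,500 years

share = lgtm_years / (lgtm_years + changed_years)
print(f"{share:.0%} of total wait time was for PRs merged unchanged")
```

That share is the "37% of waiting was more or less unnecessary" figure.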
Looking at the trend over the past three years, it is evident lead times are going down.
A very similar trend has been observed by GitHub themselves. This is because more and more teams are choosing to ship small and often, adopting techniques like trunk-based development and continuous delivery.
🎭 What is code review anyways
Knowledge sharing aside, code review is meant to act as a quality gate. A good review should encompass multiple aspects of the code (e.g. fit for purpose, free of errors etc.), but, let's be honest, in practice, reviews are transactional.
As an author:
You need your code reviewed so that you can ship it and continue iterating.
As a reviewer:
You want to unblock your teammate and get back to your own code.
Of course, in both cases, we don't want to compromise on quality, but not all code changes carry the same impact and risk. I'd go as far as claiming that code reviews on most teams are largely pattern matching — e.g.: "Have I seen this pattern succeed before?", "Is the author familiar with this subsystem?" etc. When a sufficient number of mental tick boxes are checked, we are ready to declare LGTM.
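That mental checklist could be caricatured as a tiny scoring function. This is purely illustrative — the signal names and the threshold are invented for this sketch, not anything from the dataset or from a real tool:

```python
# Hypothetical "mental tick boxes" a reviewer runs through before an LGTM.
# Every signal name and the threshold of 3 are invented for illustration.
def looks_safe_to_lgtm(pr: dict) -> bool:
    signals = [
        pr["small_diff"],              # few lines changed
        pr["familiar_pattern"],        # reviewer has seen this pattern succeed
        pr["author_knows_subsystem"],  # author is a regular in this area
        pr["tests_pass"],              # CI is green
    ]
    return sum(signals) >= 3  # enough boxes ticked -> LGTM

pr = {"small_diff": True, "familiar_pattern": True,
      "author_knows_subsystem": True, "tests_pass": True}
print(looks_safe_to_lgtm(pr))  # True
```

Real reviewers weigh far fuzzier signals than four booleans, of course, but the transactional shape — accumulate confidence, cross a threshold, approve — is the same.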
🤖 Automation
Programmers love to automate their work — think of all the formatters, linters and static analysis tools in existence. Just for fun, here's a comparison between car automation and code automation.
Codeball is a new and ballsy attempt at taking automation further. It simulates the developer intuition and approves safe pull requests, saving teams time.
The best way of illustrating the impact of such advanced automation is by looking at how it affected the Pull Request merge times for a team using it.
While there are always tricky Pull Requests that require human review, Codeball identifies and approves a large proportion of PRs that would have been LGTM'd by a human. Naturally, this team spends less time dealing with such PRs.
If this sounds too good to be true, it's because a technological innovation is upon us! Go and test it out on your own repository!
Thanks for reading!