(originally published on my blog)
- Should you have PR reviews? Absolutely.
- What should the reviewer be concerned about? Often, it seems like everything and nothing.
In [[Software Engineering at Google]] they describe a two-tier review process: one person reviews the language use and another reviews the feature implementation. In some cases it can be the same person - if I am an engineer on the team shipping the feature, and I am also part of the Python approvers group, my single approval covers both aspects.
The idea here is that the use of the language needs to be consistent and people who approve for language usage all think about the language semantics in the same way. This is in addition to automated tools for linting etc.
On the other hand, I heard from one Google engineer whose PR was held up in review for a month. She was annoyed because she had requested feedback at the design phase, which went mostly unanswered, but then at PR time all of a sudden everyone had very specific opinions.
Simon, an old colleague, taught me a different way to think about PRs:
You're offering context, experience, and advice to help the author get to a great answer. It's not about correctness but about guiding the author of the PR through additional trade-offs they may not have been aware of, while empowering them to make the final call themselves.
This sounds a lot like what Dawna Markova calls "Thinking Partners" in Collaborative Intelligence and building this collaborative intelligence muscle could be a powerful way for teams to work better together.
In a way, this almost-daily practice among engineers could be turned from a source of friction into an almost fungal process that builds relationships, transfers knowledge, and teaches decision making.
To get there, though, we need a few things in place - and this is easier for small teams that are just getting started.
Automate the hard parts that humans suck at
Lint, Prettify, etc.
If you are on GitHub, you have access to GitHub Actions, and standing up a linter, a formatter (like Prettier), and a type checker (for dynamically typed languages) is super simple. It will take you about half a day to set up the action, make it a required check, and protect a branch.
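As a starting point, a minimal workflow might look like the sketch below. The tool choices (ruff and mypy for a Python project) and file paths are illustrative assumptions, not prescriptions - swap in whatever your stack uses:

```yaml
# .github/workflows/lint.yml - a minimal sketch, assuming a Python project
# that uses ruff for linting/formatting and mypy for type checking.
name: lint
on: [pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install ruff mypy
      - run: ruff check .           # lint
      - run: ruff format --check .  # fail on unformatted files
      - run: mypy .                 # type check
```

Once this passes on a PR, mark the `lint` job as a required status check in the branch protection rules so nothing merges without it.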
Add a style guide if you feel strongly about intangibles like when not to use list comprehensions in Python, or when to break apart a function even if it's technically doing one thing. These are situations where judgment matters but the context is shared across features, so it shouldn't be re-hashed with every PR.
After that, especially with small and new codebases, you can actually start merging to main without manual reviews.
And tests and test coverage
Add PR checks for tests and test coverage. I'm not gonna go into what I think good tests look like here. You want tests, you want them to pass, and you want coverage metrics that give you a signal on how much your tests cover.
This helps you bump the automation game up further: initially, fail PR checks on failed tests and just collect metrics on coverage. Eventually, set a low cutoff on coverage and push it up gradually.
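That ratchet can live in the same kind of workflow. This sketch assumes pytest with the pytest-cov plugin and a `src` package; the `--cov-fail-under` number is the dial you turn up over time:

```yaml
# .github/workflows/test.yml - sketch only; assumes pytest + pytest-cov
# and that application code lives under src/.
name: test
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt pytest pytest-cov
      # --cov-fail-under is the coverage ratchet: start low (40 here),
      # then raise it a few points at a time as coverage improves.
      - run: pytest --cov=src --cov-fail-under=40
```

Because the cutoff is a single number in config, bumping it is a one-line PR - which makes "push it up gradually" an easy habit rather than a project.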
When you have all this...what do you need a human review for?
What manual review shouldn't be
In most organizations I have been at, manual review existed because they couldn't prioritize the above, and as a result said: "Here, I've protected the branch; someone needs to approve the PR."
They could not prioritize the time it would have taken (two to three days at most if you are using GitHub) to set up the tools above. Mostly, the early team had no expertise in these flows and was unsure of the consequences, and leadership believed it was more important to ship fast.
It is definitely important to ship fast, but putting a human review in the middle is the worst way to try to ship fast.
A manual review shouldn't be a catch-all for the stuff we couldn't prioritize automating. If you can automate it, put in the effort: code contribution is the lifeblood of your app, and you should always prioritize what makes it flow better.
What should a manual review focus on?
Focus on intangibles. Be a thinking partner. The author attempted to solve a problem a particular way. [[Eben Hewitt]] explains in [[Technology Strategy Patterns]] that we never actually solve a problem, we just choose a problem we would rather have.
In that sense, do the realities of this change reflect that? Would you consider this change readable, maintainable, friendly to other humans building more code on top of it? Is it the right balance of complexity and correctness?
Most of these are exchanges, not change requests: thoughts that can help the original author confirm that they were weighing the right trade-offs, or re-evaluate some of their ideas.
This can be extremely powerful when a reviewer has lots of historical context on a subsystem, has had extensive interactions with users about a feature, or has often spent time in the trenches dealing with support issues. The opposite can be true, too - coming in with no context on the business, a reviewer can ask "why this way?" - the author may have taken some things for granted ("we always did it that way") and as a result reflect on whether that still applies.
PR reviews then become a medium for knowledge sharing and up-skilling the entire org, instead of gatekeeping in hopes of keeping bad code out.
Your view on all of this might be very different (I'd love to hear about it). But even if you don't agree with anything else, make the PR review process a deliberate exercise. PR reviews happen daily and are central to most developers' work, yet few organizations define and develop a concrete strategy around them.