Recently, a friend was discussing with me some of the approaches developers take when reviewing his code. He seemed angry about some of the comments, so I decided to write this post expressing my opinion about the things he mentioned, which I also frequently see and read about in blogs.
So, you are developing a feature: you put your code up for review, the reviewer makes some comments, you implement the suggestions, the reviewers accept your changes (the LOOKS GOOD moment), and you push your code. But... let's talk about the real world.
You develop your feature, and you're very excited. You put your code up for review, and the review starts... The reviewers make some comments and suggestions, some trivial and others neither easy nor clear. You start to implement the suggestions, ask for clarification, and explain why you didn't do certain things. Your updates are good, but looking at your new code, the reviewers find new things they don't like about it. You make the suggested changes again, but new problems are found, and the cycle repeats...
The big problem here is that in each iteration new problems are raised, causing an infinite loop in the review. You might be thinking: "OK, sometimes the code needs to be improved; it's impossible to avoid these things." Although there are a million reasons for this to happen, I think the most common factors are a team that doesn't have good guidelines and patterns for reviewing code, or reviewer comments that are so complicated to understand that the author cannot achieve the goal.
The whole process needs to be clear for both sides: the reviewer needs a clear list of the things required to finish the task, and the author needs to know exactly what he has to do to complete it.
I don't know if you have been through this situation, but usually, if your code has four spaces instead of two, or if you don't have a blank line at the end of the file, you'll receive:
What do you think about this? These checks seem easily replaced by automated tools like Prettier and ESLint. If my team is going through this situation, maybe I should ask myself: do we have good style guidelines? Are we providing an automated solution?
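As an illustration, a minimal Prettier config could settle the two examples above (indentation width and trailing newline) by machine instead of by comment. The exact rule values here are just a sketch; your team's preferences may differ:

```json
{
  "tabWidth": 2,
  "singleQuote": true,
  "semi": true
}
```

With a `.prettierrc` like this in the repository, running `npx prettier --check .` in CI fails the build whenever formatting drifts, and Prettier always emits a final newline when it rewrites a file, so reviewers never have to comment on spacing again. ESLint can cover the rest with lint rules such as `eol-last`.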
Continuing the previous reasoning: if my team doesn't have good style guidelines and a more methodical review process, different reviewers will probably have different opinions. If the team doesn't agree on the priorities or the best practices, how can they reach consensus?
Sometimes you find yourself putting off a review until a time when you can look at it properly and your mind is in the right place. Maybe the review is large, the architecture is complex, or it's a breaking change, right?
Right, but nobody wants to be blocked. If the code is not in production, what value does it have?
Of course, there are many reasons this can happen. But if you are an author, try to split your features into small pieces, and when you open the review, make sure its purpose is really clear. And if you are a reviewer, make sure you set aside a piece of your time to review code.