Some weeks ago we introduced Renovate (a tool to automate dependency updates - we will write about it in another post), and we realized that some of our projects were quite outdated in terms of dependencies.
One of the biggest hassles to update was the linter we are using: XO, a very opinionated but customizable linter based on ESLint.
Jumping from version 0.26 to 0.39, we found ourselves with LOTS of new rules, and therefore the git hook in charge of linting our codebase before committing started failing.
Of course, running `xo --fix` would "magically" solve most of those errors, but it would also cause a massive amount of changes in the repo (basically every single file could be affected). The resulting MR would be very tedious and painful to review (and when that is the case, it is often skimmed and approved carelessly, potentially letting nasty bugs slip in).
Merging such a monster would also have affected all the other branches active during the sprint, causing conflicts and issues when rebasing.
So how do you tackle all the changes required by the new linter rules in a manageable way?
What we did was merge the linter update first, with every new rule disabled.
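In XO, individual rules can be overridden in the `xo` field of `package.json`. A minimal sketch of that first merge might look like this (the rule names below are just examples of rules that could be new after such an upgrade - yours will differ):

```json
{
  "xo": {
    "rules": {
      "unicorn/prefer-ternary": "off",
      "unicorn/prefer-array-some": "off",
      "no-promise-executor-return": "off"
    }
  }
}
```

With this in place, the version bump itself is a tiny, safe MR: the new XO runs, but behaves exactly like the old one.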
Having done that, we can create a new branch for each rule currently disabled: enable the rule, run the auto-fix (or apply the required fixes manually), run the tests, and open a Merge Request.
If everything is OK, the fixes for that rule can safely go to production. If something weird happens, those specific changes can easily be reverted.
Repeat the process for every single rule. Done!
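The loop for each rule might look like this sketch (the branch name and rule are hypothetical, and your test and push commands may differ):

```
git checkout -b lint/enable-prefer-ternary
# remove the "off" override for the rule from package.json
npx xo --fix        # auto-fix violations where possible
npm test            # make sure nothing broke
git commit -am "lint: enable unicorn/prefer-ternary"
# push the branch and open a Merge Request
```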
Yes, this is indeed a boring process, but it allows you to focus on the code changes, and the bugs, caused by one specific rule.
The reviewer can focus on tiny changes - and learn something along the way (there is always some interesting aspect you did not consider, if you take the time to read the docs and understand each new linter rule).
If tests fail, you can easily revert; the same goes if bugs are introduced and spotted by your QA while testing that specific branch.
You can also granularly try out a rule's options and apply the settings that fit your project best - or decide that you want to keep it as a warning rather than an error.
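For example, a rule can be downgraded to a warning, with its options tuned to the codebase (the rule and values below are purely illustrative):

```json
{
  "xo": {
    "rules": {
      "max-params": ["warn", { "max": 4 }]
    }
  }
}
```

This way the team still sees the hint on every lint run, but the hook and the CI pipeline keep passing until you decide to enforce it.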
Furthermore, nobody forces you to fix all of them at once: you can leave some rules disabled and have trainees or newly onboarded members work on them to get familiar with the project, or simply let anyone pick up a specific rule whenever they have a little time at the end of the sprint.