If you've used Claude Code for code reviews, you've probably noticed something: the feedback is smart, but it's vague. "This function is too long." "Consider breaking this up." Great — but why? And what's the actual risk if you don't?
I built brooks-lint to fix this. It's a Claude Code plugin that anchors every finding to a specific chapter in one of six classic software engineering books.
## The Problem with AI Code Reviews
Generic AI reviews are good at spotting syntax issues and surface-level smells. But they struggle to explain consequence — why a particular pattern is dangerous in the long run, not just aesthetically displeasing.
The result: developers nod, maybe fix the obvious stuff, and move on. The deeper structural decay continues.
## What brooks-lint Does Differently
Every finding follows a four-part structure I call the Iron Law:
1. **Symptom** → what the code is doing
2. **Source** → which book + chapter describes this exact pattern
3. **Consequence** → what breaks if you ignore it
4. **Remedy** → concrete fix
Example output:
**Symptom:** `UserService` has 847 lines and handles auth, billing, notifications, and user CRUD.
**Source:** Clean Architecture, Ch. 7 — SRP violation; responsibilities should have separate reasons to change.
**Consequence:** Any change to billing logic risks breaking auth; test isolation becomes impossible.
**Remedy:** Extract `BillingService` and `NotificationService`; keep `UserService` focused on identity only.
No vague advice. Just: here's the book, here's the chapter, here's exactly what goes wrong.
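For illustration, the four-part structure maps naturally onto a small record type. This is a hypothetical sketch; the field names are mine, not the plugin's internal schema:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One Iron Law finding. Field names are illustrative only."""
    symptom: str      # what the code is doing
    source: str       # which book + chapter describes the pattern
    consequence: str  # what breaks if you ignore it
    remedy: str       # concrete fix

    def render(self) -> str:
        # Emit the finding in the four-line format shown above.
        return (
            f"Symptom: {self.symptom}\n"
            f"Source: {self.source}\n"
            f"Consequence: {self.consequence}\n"
            f"Remedy: {self.remedy}"
        )

finding = Finding(
    symptom="UserService has 847 lines and handles four unrelated concerns.",
    source="Clean Architecture, Ch. 7 (SRP)",
    consequence="Any change to billing logic risks breaking auth.",
    remedy="Extract BillingService and NotificationService.",
)
print(finding.render())
```

The point of forcing all four fields is that a finding with an empty `source` or `consequence` simply can't be emitted, which is what keeps the feedback honest.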
## The Six Source Books
The plugin draws from six foundational texts:
| Book | Focus |
|---|---|
| The Mythical Man-Month | Project complexity, conceptual integrity |
| Clean Code + Refactoring | Code-level smells and refactoring patterns |
| Clean Architecture | Dependency rules, component boundaries |
| Code Complete | Implementation craft, construction quality |
| The Pragmatic Programmer | DRY, orthogonality, adaptability |
| Domain-Driven Design | Ubiquitous language, bounded contexts |
## Four Review Modes

```shell
/brooks-lint:brooks-review # PR review — what changed and what it risks
/brooks-lint:brooks-audit  # Architecture audit — structural health of the whole system
/brooks-lint:brooks-debt   # Tech debt — classify and prioritize what to fix
/brooks-lint:brooks-test   # Test quality — coverage gaps, fragile tests, wrong abstractions
```
The test quality mode (brooks-test) draws from xUnit Test Patterns, The Art of Unit Testing, and Software Engineering at Google — it flags things like over-mocking, test logic duplication, and tests that verify implementation instead of behavior.
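To make the last smell concrete, here's a toy example (the `Cart` class and test names are invented for illustration, not taken from the plugin) contrasting an implementation-coupled test with a behavioral one:

```python
from unittest.mock import MagicMock

# Hypothetical class under test, invented for this illustration.
class Cart:
    def __init__(self, pricer):
        self.pricer = pricer
        self.items = []

    def add(self, sku):
        self.items.append(sku)

    def total(self):
        return sum(self.pricer.price(sku) for sku in self.items)

def test_total_implementation_coupled():
    # The smell: asserts *how* total() works (one price() call per item),
    # so an internal refactor like price caching breaks the test
    # even though the observable behavior is unchanged.
    pricer = MagicMock()
    pricer.price.return_value = 5
    cart = Cart(pricer)
    cart.add("a")
    cart.add("b")
    cart.total()
    assert pricer.price.call_count == 2

def test_total_behavioral():
    # Preferred: asserts the observable result, not the mechanism.
    pricer = MagicMock()
    pricer.price.return_value = 5
    cart = Cart(pricer)
    cart.add("a")
    cart.add("b")
    assert cart.total() == 10

test_total_implementation_coupled()
test_total_behavioral()
```

Both tests pass today, but only the second one survives a refactor; that asymmetry is the signal brooks-test looks for.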
## Eval Results
I ran an evaluation comparing reviews with and without the plugin on a set of intentionally seeded codebases:
- 94% of findings included a book citation and structured consequence analysis with the plugin
- 16% of findings had equivalent depth without it
The plugin doesn't make Claude smarter — it makes Claude accountable. When every finding has to cite a chapter, hand-wavy feedback can't survive.
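As a rough illustration of how a citation-rate metric like the one above can be scored — a toy sketch with invented strings, not the actual eval harness:

```python
import re

# Toy findings; real ones would come from review transcripts.
findings = [
    "Symptom: ... Source: Clean Code, Ch. 3 ... Consequence: ... Remedy: ...",
    "This function is too long; consider breaking it up.",
    "Symptom: ... Source: Refactoring, Ch. 6 ... Consequence: ... Remedy: ...",
]

# A finding counts as structured only if it cites a book chapter
# AND states a consequence.
structured = [
    f for f in findings
    if re.search(r"Source: .*Ch\.", f) and "Consequence:" in f
]
rate = len(structured) / len(findings)
print(f"{rate:.0%} of findings are fully structured")  # → 67%
```

The interesting part is the AND: a finding that names a book but skips the consequence (or vice versa) doesn't count, which is what makes the 94%-vs-16% gap meaningful.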
## Install

```shell
# Via plugin marketplace (recommended)
/plugin marketplace add hyhmrright/brooks-lint
/plugin install brooks-lint@brooks-lint-marketplace

# Manual
cp -r skills/brooks-lint/ ~/.claude/skills/brooks-lint
```
## Why "Brooks"?
Fred Brooks wrote The Mythical Man-Month in 1975. Most of what he described — conceptual integrity, the second-system effect, the surgical team model — still shows up every week in production code. The plugin is named after him as a reminder that the hard problems in software aren't new, and the best answers to them are already written down.
GitHub: hyhmrright/brooks-lint
Happy to answer questions about how the skill detection works or how I structured the decay risk taxonomy.
## Update (March 2026): Now on Gemini CLI Too
brooks-lint v0.5.2 now supports Gemini CLI as a first-class platform, in addition to Claude Code.
Gemini CLI install:

```shell
/extensions install https://github.com/hyhmrright/brooks-lint
```

Claude Code install:

```shell
/plugin marketplace add hyhmrright/brooks-lint
/plugin install brooks-lint@brooks-lint-marketplace
```
Both platforms use the same skill files — same decay risk framework, same Iron Law diagnostics, same book citations. The only difference is the entry point configuration. If you use both tools, one install covers both.
The extension is also listed on the official Gemini CLI extension gallery.