I gave an AI code reviewer a PaymentProcessor class — seven services injected, payment + fraud + inventory + notifications all in one 60-line method.
Here's what it flagged:
🔴 Change Propagation — Seven-service constructor signals a God Class in formation
Source: Fowler — Refactoring — Divergent Change; Martin — Clean Architecture — SRP
Consequence: This class will change for at least four independent reasons. Each change is a merge conflict waiting to happen.
Remedy: Introduce `FraudCheckService`, `InventoryDeductionService`, `PaymentNotifier`. `PaymentProcessor` then injects 3 services, not 7.
Not "this is bad." Here's which book explains why, what breaks if you ignore it, and how to fix it.
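The remedy above might look like this after the split (a minimal Python sketch of my own; the three service names come from the finding, but every method signature here is a hypothetical illustration, not the tool's output):

```python
class PaymentProcessor:
    """After the refactor: orchestrates a payment, delegates everything else."""

    def __init__(self, fraud_check, inventory, notifier):
        # The three collaborators named in the remedy. Their interfaces
        # (approve, deduct, payment_succeeded) are assumed for illustration.
        self.fraud_check = fraud_check   # FraudCheckService
        self.inventory = inventory       # InventoryDeductionService
        self.notifier = notifier         # PaymentNotifier

    def process(self, order):
        if not self.fraud_check.approve(order):
            raise ValueError("fraud check rejected order")
        self.inventory.deduct(order.items)
        receipt = self._charge(order.total)   # payment stays in this class
        self.notifier.payment_succeeded(order, receipt)
        return receipt

    def _charge(self, amount):
        # Placeholder for the real gateway call.
        return {"amount": amount, "status": "captured"}
```

Each extracted service now changes for exactly one reason, which is the point of the Divergent Change remedy.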
That's brooks-lint — an open-source plugin for Claude Code, Codex CLI, and Gemini CLI.
⭐ github.com/hyhmrright/brooks-lint · MIT · v1.0.0
## Why I Built It
ESLint flags unused variables. Complexity checkers count branches. SonarQube tracks duplication percentages. All useful. None of them answer the question a senior engineer actually asks:
"What architectural principle is this violating, and what happens if we ignore it?"
I've spent years reading the classics — Fowler, Martin, Evans, Brooks, Ousterhout, McConnell. Each book has a chapter that makes you think "I've seen this exact failure before." But the insight stays locked in the book.
brooks-lint is an attempt to make that insight executable.
Every finding follows the same shape:
Symptom → Source → Consequence → Remedy
- Symptom is what the linter sees
- Source is the book + chapter it comes from
- Consequence is what breaks in six months if you ignore it
- Remedy is a concrete refactor
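One way to picture that shape as data (the field names here are my own, not the plugin's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    symptom: str      # what the linter sees
    source: str       # book + chapter it comes from
    consequence: str  # what breaks in six months if you ignore it
    remedy: str       # a concrete refactor

# The PaymentProcessor finding from the intro, expressed in this shape.
finding = Finding(
    symptom="Seven-service constructor signals a God Class in formation",
    source="Fowler, Refactoring (Divergent Change); Martin, Clean Architecture (SRP)",
    consequence="This class will change for at least four independent reasons",
    remedy="Extract FraudCheckService, InventoryDeductionService, PaymentNotifier",
)
```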
## The Six Decay Risks
After synthesizing twelve books, six patterns kept appearing as root causes of software decay:
| Code | Risk | Core Insight |
|---|---|---|
| R1 | Cognitive Overload | Mental load exceeds working memory → mistakes and avoidance |
| R2 | Change Propagation | One change forces unrelated changes elsewhere |
| R3 | Knowledge Duplication | Same fact in two places → they diverge |
| R4 | Responsibility Rot | One module does too many things |
| R5 | Dependency Disorder | Modules depend on modules that depend on modules… |
| R6 | Domain Model Distortion | Code doesn't speak the business's language |
There are also six test-space variants (T1–T6) covering test brittleness, mock abuse, coverage theater, and more.
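R3 (Knowledge Duplication) in miniature, as a contrived Python sketch of my own: the same business fact encoded in two places, guaranteed to diverge the first time only one of them is updated.

```python
# The fact "orders over $100 ship free" lives in two places.

def shipping_cost(total):
    return 0 if total > 100 else 7.99          # place one

def checkout_banner(total):
    if total > 100:                            # place two: a silent duplicate
        return "You qualify for free shipping!"
    return f"Spend ${100 - total:.2f} more for free shipping"

# The remedy: one source of truth that every call site consults.
FREE_SHIPPING_THRESHOLD = 100

def qualifies_for_free_shipping(total):
    return total > FREE_SHIPPING_THRESHOLD
```

Raise the threshold to $120 in `shipping_cost` only, and the banner starts promising free shipping the checkout no longer grants.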
## The Twelve Books
| Book | Author | Risks |
|---|---|---|
| The Mythical Man-Month | Frederick Brooks | R2, R4, R5 |
| Code Complete | Steve McConnell | R1, R4 |
| Refactoring | Martin Fowler | R1, R2, R3, R4, R6 |
| Clean Architecture | Robert C. Martin | R2, R5 |
| The Pragmatic Programmer | Hunt & Thomas | R2, R3, R4, R5, T2, T3 |
| Domain-Driven Design | Eric Evans | R1, R3, R6 |
| A Philosophy of Software Design | Ousterhout | R1, R4 |
| Software Engineering at Google | Winters et al. | R2, R5 |
| xUnit Test Patterns | Meszaros | T1, T2, T4 |
| The Art of Unit Testing | Osherove | T1, T2, T3 |
| Working Effectively with Legacy Code | Feathers | T3, T4, T5 |
| Unit Testing: Principles, Practices, Patterns | Khorikov | T1, T2, T6 |
The books agree more than they disagree. Fowler's Divergent Change smell, Martin's Single Responsibility Principle, and Evans's Bounded Context are all describing the same underlying failure from different angles.
## Five Review Modes
- `/brooks-review` → PR code review (diff-focused)
- `/brooks-audit` → Architecture audit
- `/brooks-debt` → Tech debt assessment
- `/brooks-test` → Test quality review
- `/brooks-health` → Codebase health dashboard with score
Works on Claude Code, Codex CLI, and Gemini CLI.
## Install in 60 Seconds

**Claude Code:**

```
/plugin install brooks-lint@brooks-lint-marketplace/brooks-audit
```

**Codex CLI:**

```
codex plugin install hyhmrright/brooks-lint
```

**Gemini CLI:**

```
gemini extension install hyhmrright/brooks-lint
```
Source: [github.com/hyhmrright/brooks-lint](https://github.com/hyhmrright/brooks-lint)
---
## The Hard Part: False Positives
A 60-line function in a data-migration script isn't the same as a 60-line function in a payment handler. The tool needed to know **when not to flag something** — which meant reading the exception clauses in each book more carefully than expected.
The benchmark suite now has **49 scenarios**, including explicit false-positive cases that must not be flagged. That's probably the most useful artifact in the repo — it forces the skill to be calibrated, not just pattern-matching.
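A false-positive scenario in such a suite might look something like this (the structure and field names are my guess at the idea, not the repo's actual benchmark format):

```python
# Hypothetical benchmark entry: a long linear function that must NOT be
# flagged, because a one-off data-migration script falls under the
# exception clauses the books carve out for straight-line, run-once code.
scenario = {
    "id": "fp-001",
    "file": "migrations/backfill_users.py",
    "context": "data-migration script, linear, run once",
    "expected_findings": [],   # the calibration target: zero findings
}

def evaluate(findings, scenario):
    """Pass only if the tool stayed silent where it should have."""
    false_positives = [f for f in findings if f not in scenario["expected_findings"]]
    return len(false_positives) == 0
```

Scenarios like this punish over-eager pattern matching: a reviewer that flags every 60-line function fails the suite even though it "catches" more.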
---
## I'd Love Your Help
v1.0 is out, but a linter grounded in books is only as good as the communities that stress-test it.
**Three specific ways you can help:**
1. **Try it on a real repo** and open an issue if a finding feels wrong — false positives are the highest-priority bug class
2. **Propose a 13th book** — if there's a classic that covers a failure mode R1–R6 misses, tell me which chapter
3. **Share a code smell** you see constantly in the comments — I'll run brooks-lint on a representative example and post the raw output
⭐ [github.com/hyhmrright/brooks-lint](https://github.com/hyhmrright/brooks-lint)
**Which of the six risks (R1–R6) do you hit most often?** Drop a comment — happy to dig into specific examples.
---
## Also From Me
If brooks-lint's *why-this-matters* framing resonated, I also built **logic-lens** — same philosophy but focused on execution-time behavioral bugs (callee contract mismatches, control flow escapes, boundary blindspots) rather than architectural decay:
[Why AI Code Review Misses Logic Bugs — And How Structured Execution Tracing Fixes It](https://dev.to/hyhmrright/why-ai-code-review-misses-logic-bugs-and-how-structured-execution-tracing-fixes-it-3n0p)
The two tools cover different failure modes and work well together: **logic-lens** catches runtime behavioral bugs via execution tracing; **brooks-lint** diagnoses architectural decay against 12 classic books.