DEV Community

hyhmrright

I used an AI code review tool to review its own code — then fixed everything in v0.8.5

A story about dogfooding.

While wrapping up brooks-lint v0.8.5, I had a thought: this tool claims to review code quality using the lens of 12 classic software engineering books. Could it hold up under its own scrutiny?

So I ran all four review skills against the validator script in the repo itself.


What is brooks-lint?

It's a plugin for Claude Code, Codex CLI, and Gemini CLI that surfaces decay risks in your code — grounded in books like The Mythical Man-Month, Refactoring, Clean Architecture, and nine others.

Every finding follows a fixed structure: Symptom → Source → Consequence → Remedy
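For illustration, a finding in that shape might read something like this (made up for this post, not actual brooks-lint output):

```text
Symptom:     Book count hardcoded in three separate files
Source:      Refactoring (Fowler), Shotgun Surgery
Consequence: Adding a book means a coordinated multi-file edit
Remedy:      Derive the count from one canonical book list
```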

Four modes: PR Review, Architecture Audit, Tech Debt Assessment, Test Quality Review.


The self-review process

I ran:

  • /brooks-audit (Architecture Audit)
  • /brooks-debt (Tech Debt Assessment)
  • /brooks-test (Test Quality Review)
  • /brooks-review (PR Code Review)

Target: scripts/validate-repo.mjs — the repo's own consistency validation script.


What it found

🔴 R2 Shotgun Surgery · High

The book count (12) was hardcoded in three places: package.json, validate-repo.mjs, and the docs. Adding a new source book required changing four or five files — classic Shotgun Surgery (Fowler, Refactoring).

🟡 R1 Cognitive Overload · Medium

A 250-line flat script with all validation logic mixed together and no separation of concerns: the Divergent Change smell from Fowler's Refactoring.

🟡 R2 Magic Numbers · Medium

Risk counts (6 production risks, 6 test risks) written as literals in two scripts with no named constants.

🟡 T6 Architecture Mismatch · Medium

parseFrontmatterBooks() was embedded in validate-repo.mjs, making it impossible for a test file to import it without triggering the full validation as a side effect. Testability sacrificed at the altar of coupling.

🔵 T5 Coverage Illusion · Low

The skills content checks had no automated verification — purely manual inspection.


What was fixed

Single source of truth: Book list moved into a YAML frontmatter books: key in skills/_shared/source-coverage.md. validate-repo.mjs now derives the count dynamically. Adding a new book = changing one file.

Shared module: New scripts/frontmatter.mjs exports parseFrontmatterBooks() so any file can import it without side effects.
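As a minimal sketch of what that module might look like — the function name comes from the post, but the frontmatter shape and parsing details here are my assumptions, not the repo's actual code:

```javascript
// scripts/frontmatter.mjs (hypothetical sketch, not the repo's real code)
// Assumes source-coverage.md opens with YAML frontmatter like:
// ---
// books:
//   - The Mythical Man-Month
//   - Refactoring
// ---
export function parseFrontmatterBooks(markdown) {
  const fm = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!fm) return [];
  const block = fm[1].match(/(?:^|\n)books:\n((?:[ \t]+- .*(?:\n|$))+)/);
  if (!block) return [];
  // One book title per "- " list item.
  return block[1]
    .trim()
    .split("\n")
    .map((line) => line.trim().replace(/^- /, ""));
}
```

Because it is a plain export with no top-level side effects, both validate-repo.mjs and a test file can import it safely.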

Named constants: PRODUCTION_RISK_COUNT = 6 and TEST_RISK_COUNT = 6 — no more magic numbers.

Named functions: The flat script became 12 named check functions (checkVersionConsistency, checkSkillsContent, checkEvalSuite, etc.). The call site now reads like a checklist.
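That checklist shape might look roughly like this (the check names come from the post; the runner and the stub bodies are my sketch):

```javascript
// Hypothetical stubs; the real checks live in scripts/validate-repo.mjs.
function checkVersionConsistency() {} // would throw on version mismatch
function checkSkillsContent() {} // would throw on missing sections
function checkEvalSuite() {
  throw new Error("eval suite out of date"); // simulated failure for the demo
}

// Each check either returns or throws; the runner collects every failure
// instead of stopping at the first one.
function runChecks(checks) {
  const failures = [];
  for (const check of checks) {
    try {
      check();
    } catch (err) {
      failures.push(`${check.name}: ${err.message}`);
    }
  }
  return failures;
}

// The call site reads like a checklist: one named function per concern.
const failures = runChecks([
  checkVersionConsistency,
  checkSkillsContent,
  checkEvalSuite,
]);
```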

Real unit tests: New scripts/validate-repo.test.mjs with 10 tests for parseFrontmatterBooks using Node.js built-in assert. Run with npm test.

Skills content CI: The validator now asserts that every SKILL.md has ## Setup and ## Process sections, and every mode guide references the Iron Law.
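A section check like that can be a few lines. This is a sketch; the actual validator's heading list and error reporting are assumptions:

```javascript
// Hypothetical sketch of the skills-content check.
const REQUIRED_SECTIONS = ["## Setup", "## Process"];

// Returns the required headings a SKILL.md is missing, if any.
function missingSections(markdown) {
  const lines = markdown.split("\n").map((line) => line.trim());
  return REQUIRED_SECTIONS.filter((heading) => !lines.includes(heading));
}
```

The validator can then fail the run whenever missingSections returns a non-empty list for any SKILL.md.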


The interesting part

Every finding was real and actionable. No hallucinations — because every brooks-lint finding cites a specific book and principle, you can actually evaluate whether you agree. You're not blindly trusting an AI; you're checking its reasoning.

That's the design goal: traceable, defensible findings, not vibes-based suggestions.


Install or upgrade:

```shell
/plugin marketplace add hyhmrright/brooks-lint
/plugin install brooks-lint@brooks-lint-marketplace
```

Works on Claude Code, Codex CLI, and Gemini CLI. MIT license.

GitHub: hyhmrright/brooks-lint
