AI writes the code. Who reviews it?
In my day job I work on firmware for critical systems: the kind of code
where a buffer overflow isn't just a crash, it's a potentially mis-billed
customer or a compliance violation. Code review matters. A lot.
When AI coding assistants went mainstream, our team started shipping code
faster than ever. But reviews slowed down. Everyone was asking Copilot
and Claude to "review this function," but every developer wrote a
different prompt. One cared about security, another about performance,
a third about style. The reviews were inconsistent, and our "team
standards" only existed in people's heads.
So I built Revvy, a VS Code extension that does code review based on
rules you define in YAML, not generic AI prompts. This post walks through
why I designed it this way, how it works, and what I learned building it.
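To make that concrete, here is a minimal sketch of what a YAML rules file for this kind of tool could look like. The file path, field names, and rule IDs below are illustrative assumptions, not Revvy's actual schema; check the repo for the real format.

```yaml
# .revvy/rules.yaml — hypothetical example, not the extension's documented schema
rules:
  - id: SEC-001            # assumed rule-ID convention
    severity: error
    description: Flag buffer writes without an explicit length check
    suggest: Validate input length before copying; prefer bounds-checked APIs
  - id: STYLE-012
    severity: warning
    description: Public functions must have a doc comment
    suggest: Add a brief comment describing parameters and return value
```

The point of a file like this is that the team's standards live in version control next to the code, so every developer's review runs against the same checklist instead of whatever prompt they happened to type.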
What's in the box
- Inline review comments — shown on the exact line, with severity, rule ID, and a suggested fix
- Works with what you have — GitHub Copilot (no extra key), OpenAI, or Anthropic Claude, with automatic fallback
- Multi-repo PR/MR review — paste multiple GitHub PR and GitLab MR links, get one unified review across all of them (via MCP)
- Jira ticket validation — checks your code against the ticket's requirements so scope drift gets flagged before review
- Integration test generation — functional + regression test cases based on the flagged issues
- Copy-as-Markdown — export all review comments as a ready-to-paste prompt for any AI to apply the fixes in one shot
Try it
- Install: search "Revvy" in VS Code Extensions, or
- Source + issues: https://github.com/YonK0/revvy
- MIT licensed, no telemetry, no cloud backend
