DEV Community

Spencer Marx


The tool that stops 10x more AI slop than anything else my team has tried. Open source, FREE, and growing 🚀

AI slop doesn't crash your app. It passes your tests, your linters, your type checks. It looks like code a competent person wrote. Then three weeks later you're staring at a service full of abstractions that exist for no reason, functions that do the same thing behind slightly different signatures, and variable names that technically make sense but communicate nothing to the next person reading them.

That's the version of slop nobody talks about. The kind that compounds.

How this started

I've been writing software for a long time. IC, staff, principal, EM, director, now CTO. When AI coding assistants became part of our daily workflow I started doing something that felt almost paranoid at first: after an agent finished implementing something, I'd spin up separate Claude Code agents to review the output, each with a different focus area. One looking at architecture, one at security, one at quality.

It worked. Way better than single-pass review. But it was completely ad-hoc. I'd manually prompt each reviewer, mentally track who said what, try to reconcile conflicting findings in my head.

So I started structuring it. I gave each reviewer a defined role and explicit focus areas, and I ran them in parallel instead of sequentially. Then I added a step that changed everything: after the individual reviews, I had the reviewers talk to each other. They'd challenge findings, connect issues one reviewer found with something another reviewer flagged in a different file, push back on false positives, and reach consensus.
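The shape of that workflow (not OCR's actual implementation) can be sketched in a few lines of Python. The reviewer functions below are stubs standing in for real LLM agent calls, and the "deliberation" phase is reduced to its simplest mechanical core: deduplicating findings that multiple reviewers raised and keeping the strongest version of each.

```python
# Conceptual sketch of the workflow described above: specialized reviewers
# run in parallel, then their findings are reconciled into one consensus
# list. A real system would feed the findings back to the agents and let
# them argue; here reconciliation is just dedup-by-(file, issue).
from concurrent.futures import ThreadPoolExecutor

# Stub reviewers -- stand-ins for agent calls, each with a different focus.
def architect_review(diff):
    return [{"file": "service.py", "issue": "needless abstraction", "severity": 2}]

def security_review(diff):
    return [{"file": "auth.py", "issue": "token logged in plaintext", "severity": 3}]

def quality_review(diff):
    # Intentional overlap with the architect: both flag the same issue.
    return [{"file": "service.py", "issue": "needless abstraction", "severity": 1}]

REVIEWERS = [architect_review, security_review, quality_review]

def review(diff):
    # Phase 1: independent reviews, run in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda r: r(diff), REVIEWERS))

    # Phase 2: reconcile. Findings raised by more than one reviewer collapse
    # into a single entry, keeping the highest severity anyone assigned.
    consensus = {}
    for findings in results:
        for f in findings:
            key = (f["file"], f["issue"])
            if key not in consensus or f["severity"] > consensus[key]["severity"]:
                consensus[key] = f
    return sorted(consensus.values(), key=lambda f: -f["severity"])

print(review("...diff text..."))
```

The point of the sketch is the two-phase structure: parallelism buys breadth, and the reconciliation step is what turns five overlapping opinions into one.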

Over a few weeks this turned into custom slash commands. Over a few months it became a real system. And when it came time to standardize the workflow across our engineering team and stop being dependent on one specific CLI tool, Open Code Review was born.


What it actually does

Under the hood, OCR is really just a set of bootstrapped slash commands, markdown reference files, and orchestration rules that allow your native agentic CLI to perform structured multi-agent review. You configure a team of specialized reviewer agents (architect, security, quality, QA, whatever custom reviewers make sense for your codebase) and they review the PR independently, in parallel, with intentional overlap.
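To make "a team of specialized reviewer agents with intentional overlap" concrete, here is a purely hypothetical team definition — illustrative data, not OCR's actual configuration format. The roles and focus areas are assumptions for the example.

```python
# Hypothetical reviewer-team definition (NOT OCR's real config format).
# Each reviewer gets a role and explicit focus areas; overlap is deliberate,
# so reviewers have common ground to challenge each other on later.
REVIEWER_TEAM = [
    {"role": "architect", "focus": ["layering", "abstractions", "naming"]},
    {"role": "security",  "focus": ["authn/authz", "injection", "secrets"]},
    {"role": "quality",   "focus": ["naming", "duplication", "tests"]},
    {"role": "qa",        "focus": ["edge cases", "regressions"]},
]

# The architect and quality reviewers intentionally share "naming" --
# two different perspectives on the same surface is where debate starts.
shared = set(REVIEWER_TEAM[0]["focus"]) & set(REVIEWER_TEAM[2]["focus"])
print(shared)
```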

Then they deliberate.

Customize your Default Reviewer Team

The deliberation is a structured discourse where they challenge each other's findings, connect related issues across files, and reach actual consensus. One synthesized review comes out the other end. Not five separate opinions, but a team opinion that's been pressure-tested before you ever see it.

There's a local dashboard on top of all this that makes the whole process visible and easy to follow in real time. When it's done you post the review straight to your GitHub PR.

A Focused Open Code Review Session

That deliberation step is the thing. I've seen other multi-agent approaches where each reviewer just dumps findings independently. That's parallel linting. The structured debate between reviewers with genuinely different concerns is where the signal lives.

Try it

Every feature in OCR exists because I hit a wall without it while doing this manually. It grew out of months of ad-hoc multi-agent reviews, and I automated away every piece of friction incrementally.

It's also agentic-CLI agnostic. It started in Claude Code but works standalone with Codex, Gemini CLI, Cursor, Windsurf, Copilot, whatever you're using. That was a hard requirement when we open-sourced it because locking a review tool to one assistant's ecosystem felt wrong.

```shell
npm install -g @open-code-review/cli
cd your-project
ocr init
```

Two minutes to get started. Free. Open source. No SaaS. No per-seat pricing. Reviewer teams are fully customizable or just use the defaults.

➡️ GitHub: github.com/spencermarx/open-code-review

If OCR helps your workflow, a GitHub star helps other engineers dealing with the same slop problem find it. And if you run into rough edges or have ideas for how it should work differently, open an issue. The feedback loop from actual users has shaped this tool more than anything else 💪

If you've found other code review approaches that actually work well with AI-generated code, I'd love to hear about those too.

Cheers and Happy Reviewing 🚀
