Every AI code review starts the same way.
The bot opens your PR. It scans the diff. It flags a missing try/catch, suggests a more descriptive variable name, and notes that you could memoize that function for performance.
All technically correct. None of it useful.
Because it doesn't know that `fetchUser` is an intentional naming convention your team enforces. That error handling is delegated to a global boundary. That performance isn't the concern here — correctness is. The bot doesn't know your project. It never did.
This isn't a model problem. It's a context problem.
## The fix: context-aware review
That's what pi-reviewer is built around — a GitHub Action and pi TUI extension that brings your project conventions into every review.
Before the agent sees a single line of diff, it reads:
- `AGENTS.md` or `CLAUDE.md` — your general project conventions: naming rules, architecture decisions, patterns to follow
- `REVIEW.md` — review-specific rules: what to always flag, what to explicitly skip
Markdown links in those files are followed recursively. If your `AGENTS.md` links to `docs/api-conventions.md`, that file gets inlined too. The agent sees the full picture, not just the summary.
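The recursive inlining step can be sketched roughly like this (a simplified illustration, not pi-reviewer's actual implementation; the regex-based link extraction and the cycle guard are my assumptions about how such a resolver could work):

```typescript
import { readFileSync } from "fs";
import { dirname, resolve } from "path";

// Inline every relative markdown link, depth-first, skipping files
// we've already seen so circular links can't recurse forever.
// A simplified sketch of the idea — not pi-reviewer's actual code.
function inlineLinks(file: string, seen = new Set<string>()): string {
  const path = resolve(file);
  if (seen.has(path)) return ""; // already inlined; avoid cycles
  seen.add(path);

  let text = readFileSync(path, "utf8");

  // Match [label](relative/path.md), ignoring absolute http(s) links.
  const linkPattern = /\[[^\]]*\]\((?!https?:)([^)#]+\.md)\)/g;
  for (const match of text.matchAll(linkPattern)) {
    const linked = resolve(dirname(path), match[1]);
    text += "\n\n" + inlineLinks(linked, seen); // append linked doc's content
  }
  return text;
}
```

The point of the depth-first walk is that a convention referenced two links away still ends up in the agent's context, not just the files you named directly.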
```markdown
# Review Guidelines

## Always flag

- `fetch` calls missing `res.ok` check before `.json()`
- API endpoints not versioned under `/api/v1/`
- Functions named `getData`, `doStuff`, or other generic names

## Skip

- Formatting-only changes
- Changes inside `pi-review.md`
```
That's a REVIEW.md. The agent now knows what your team cares about — not what a generic model thinks good code looks like.
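The `res.ok` rule above targets a real footgun: `fetch` resolves successfully on 404s and 500s, so `.json()` will happily parse an error body. A sketch of the pattern the rule enforces (the `fetchUser` name, `User` shape, and `/api/v1/users` path are illustrative, not from the project):

```typescript
// fetch only rejects on network failure, not on HTTP error status.
// Without a res.ok check, error responses flow silently into .json().
// Names and paths here are illustrative.
interface User {
  id: string;
  name: string;
}

async function fetchUser(id: string): Promise<User> {
  const res = await fetch(`/api/v1/users/${id}`); // versioned endpoint
  if (!res.ok) {
    throw new Error(`GET /api/v1/users/${id} failed: ${res.status}`);
  }
  return res.json() as Promise<User>;
}
```

A generic reviewer has no way to know your team mandates this shape; with the rule in REVIEW.md, it checks every diff against it.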
## What a context-aware review looks like
Here's what changed after adding project context.
**Before:** the agent flagged a missing type annotation on an internal helper. Suggested renaming a variable. Noted a `console.log` left in.

**After:** it caught an unversioned API endpoint added in the same PR. Flagged a `fetch` call missing the `res.ok` check — exactly the rule in REVIEW.md. Skipped the formatting-only change in the generated file, as instructed.
Same model. Same diff. Completely different review.
## Severity you control
Not every finding deserves equal weight. pi-reviewer lets you filter by severity — so you can focus on what matters.
```yaml
- uses: zeflq/pi-reviewer@main
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    pi-api-key: ${{ secrets.PI_API_KEY }}
    min-severity: warn
```
Set `min-severity: warn` and the agent skips INFO-level suggestions entirely — both in what it generates and in what gets posted to the PR. You can also trigger a manual review from the GitHub Actions UI and choose the severity level on the fly.
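To make the manual trigger available, the workflow needs a `workflow_dispatch` event. A sketch of the wiring (the input name `min_severity` and the choice options are assumptions based on the tiers described here; check pi-reviewer's README for the exact input contract):

```yaml
on:
  pull_request:
  workflow_dispatch:
    inputs:
      min_severity:
        description: "Minimum severity to report for this run"
        type: choice
        options: [info, warn, critical]
        default: warn
```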
Three tiers: 🔴 CRITICAL for bugs and security issues, 🟡 WARN for logic and type errors, 🔵 INFO for style and suggestions.
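Filtering by minimum severity is conceptually just an ordered comparison over the three tiers. A minimal sketch (the `Finding` shape and tier names mirror the tiers above; this is not pi-reviewer's actual code):

```typescript
type Severity = "info" | "warn" | "critical";

interface Finding {
  severity: Severity;
  message: string;
}

// Ordered ranks: anything below the configured minimum is dropped
// before findings are posted to the PR. Illustrative sketch only.
const RANK: Record<Severity, number> = { info: 0, warn: 1, critical: 2 };

function filterBySeverity(findings: Finding[], min: Severity): Finding[] {
  return findings.filter((f) => RANK[f.severity] >= RANK[min]);
}
```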
## Model-agnostic, built on pi mono
pi-reviewer runs on pi — a terminal-based coding agent that sits on top of the pi mono platform. One PI_API_KEY works across all supported models and providers. You pick the model, pi routes the request.
That means you're not locked into a single provider. Swap models without touching your workflow. The review logic stays the same.
It also works over SSH. If your project lives on a remote machine, `--ssh` mode lets the agent fetch the diff and read your conventions directly on the remote — no local copy needed.
## How it compares to Claude Code PR Review
Anthropic recently shipped Code Review — a managed PR review service built into Claude Code. It reads CLAUDE.md and REVIEW.md, runs multiple specialized agents against your full codebase in parallel, and posts inline findings with severity tags. It's genuinely impressive.
But it comes with constraints that may not fit every team.
It's a managed service — runs on Anthropic's infrastructure, requires a GitHub App installation, available on Teams and Enterprise plans only. Reviews average $15–25 each. It's Claude-only, and you don't control where it runs.
pi-reviewer runs in your own CI, costs what your token usage costs, works with any model through pi mono, and needs nothing more than a secret and a workflow file. No GitHub App. No admin approval flow.
And if you want to review locally before you push — without opening a PR at all — the pi TUI extension gives you /review in your terminal.
Both tools read your CLAUDE.md and REVIEW.md. The difference is where they run, what they cost, and how much control you keep.
## Set it up once, forget about it
```bash
npx github:zeflq/pi-reviewer init
```
That generates a workflow file. Add your PI_API_KEY secret. Every PR from that point on gets a review that knows your project.
The context files — `AGENTS.md`, `REVIEW.md` — live in your repo. Version-controlled, team-editable, evolve with the project. The better you document your conventions, the better the reviews get.
## The shift
The insight isn't that AI can review code. It's that AI review without project context is just another linter with better prose.
The review that matters is the one that knows why your codebase looks the way it does — and checks the diff against that, not against some generic idea of good software.
That's the layer that's been missing.
Context is everything. Diff without it is just noise.