We just launched git-lrc on Product Hunt 🚀
👉 https://www.producthunt.com/products/git-lrc
If you’re committing AI-generated code daily, this is for you.
The Moment We Realized Something Was Off
In our team, AI tools like Copilot and Cursor clearly increased velocity. Features moved faster. Refactors felt cheaper. Boilerplate disappeared.
But careful inspection of that code quietly declined.
AI would generate large diffs. They looked reasonable. They compiled. Tests passed. So they landed.
Only later would we discover subtle issues:
- A validation check removed.
- A constraint relaxed.
- An edge case dropped.
- An expensive cloud call introduced.
- Sensitive material leaked into logs.
Nothing dramatic. Just small, silent shifts.
And those are the ones that cost hours in production debugging.
We had given ourselves a race car.
We forgot the brakes.
Why We Didn’t Build Another Dashboard
The obvious solution would’ve been another SaaS review tool or another CI gate.
I didn’t want that.
Responsibility in software engineering has a natural anchor point: git commit.
Every editor, every IDE, every AI agent eventually hits Git.
Committing is mandatory. It’s the moment a developer says:
“I stand behind this change.”
So we built git-lrc to live exactly there.
It hooks into git commit and reviews every staged diff before it lands.
When you commit:
- A GitHub-style diff opens in your browser.
- Inline AI comments appear at the exact lines that matter.
- Issues are tagged with severity.
- A high-level summary explains what changed.
- You can copy flagged issues back into your AI agent.
- Lines added/removed per file are shown for quick scope awareness.
No dashboards. No external process.
Just a structural nudge at the right moment.
Review, Vouch, or Skip — Your Choice
git-lrc is engineer-centric by design.
You can:
- Review — run AI analysis on the diff.
- Vouch — skip AI and explicitly take responsibility.
- Skip — commit without review.
Every commit records what happened directly in git log, for example:
LiveReview Pre-Commit Check: ran (iter:3, coverage:85%)
- `iter` shows how many review cycles you ran.
- `coverage` shows how much of the final diff was AI-reviewed.
Your team can see exactly which commits were reviewed, vouched, or skipped — without any external reporting tool.
Review becomes part of authorship, not an afterthought.
Designed for AI Workflows
A typical cycle looks like this:
- Generate code with your AI agent.
- Stage it with `git add .`
- Run `git lrc review`; AI flags issues inline.
- Copy the flagged issues back to your agent.
- Fix.
- Review again.
- Run `git lrc review --vouch`.
- Commit.
Each review is tracked as an iteration.
You move fast — but deliberately.
60 Seconds to Set Up. Completely Free.
Install:
curl -fsSL https://hexmos.com/lrc-install.sh | sudo bash
Then:
git lrc setup
You bring your own Gemini API key (free tier).
There’s no billing layer. Unlimited reviews.
Only the staged diff is analyzed.
No full repository upload.
Diffs are not stored after review.
The mission is simple:
Make AI code review engineer-centric, free, and accessible to developers everywhere.
The more developers review AI-generated code early, the fewer subtle bugs make it to production.
AI-assisted coding is becoming the default.
The real question isn’t whether we use AI.
It’s whether we stay responsible while using it.
If you care about shipping fast without silently degrading quality, I’d appreciate your support.
👉 Upvote git-lrc on Product Hunt:
https://www.producthunt.com/products/git-lrc
Top comments
The diff-scoped review model is smarter than full-file analysis for real-time use. The false positive rate is the critical metric here — if the reviewer flags too many low-confidence issues, developers start treating it as noise and the whole system loses value. Curious how you're tuning sensitivity versus risk tolerance for the kinds of issues that matter most.
We will bring in an upvote and downvote system so that reviews become customized to each dev, team, and repo.
Congrats on the launch!
It sounds like a really interesting initiative, especially the idea of reviewing every diff before it lands.