I got tired of waiting for PR reviews. My team's across three timezones, and sometimes a simple "is this logic right?" question sits for 12 hours.
So I built an automated pre-commit code review using Ollama and git hooks. It runs entirely local—no API keys, no usage limits, no sending proprietary code to external servers.
Here's the setup that's been running on my machine for two months.
## Why Local LLMs for Code Review?
Cloud APIs are great until:
- You're working with sensitive code
- You hit rate limits at 2 AM debugging
- Your company's security policy says no external AI
- You don't want to pay per token for every commit
Running local LLMs for coding tasks solves all of this. The quality isn't GPT-4-level, but for catching obvious bugs and suggesting improvements, it's surprisingly good.
## What You'll Need

- **Ollama** - Dead simple local LLM runner
- **A decent GPU** - 8GB VRAM minimum, 16GB recommended
- **Git** - Obviously
- **10 minutes** - That's genuinely it
## Step 1: Install Ollama

```bash
# Linux/WSL
curl -fsSL https://ollama.com/install.sh | sh

# macOS
brew install ollama

# Start the service
ollama serve
```
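The hook below assumes the Ollama daemon is up. A small guard gives a clearer error than a hung `ollama run` — this is a sketch, and `check_ollama` is my own helper name (11434 is Ollama's default port; `/api/tags` is its model-listing endpoint):

```shell
# Probe the local Ollama API before attempting a review
check_ollama() {
    if curl -fsS --max-time 2 "http://localhost:${1:-11434}/api/tags" >/dev/null 2>&1; then
        echo "ollama is running"
    else
        echo "ollama is not running"
    fi
}

check_ollama
```

Drop the function into the review script and bail out early with a hint to run `ollama serve` when it reports not running.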
Pull a coding-focused model. I've tested several; here's what works:
```bash
# Best balance of speed and quality
ollama pull deepseek-coder:6.7b

# If you have 16GB+ VRAM
ollama pull codellama:13b
```
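You can also bake the review persona into a custom model with a Modelfile so the prompt stays short. This is a hedged sketch: the `code-reviewer` name, the temperature, and the system prompt are my own choices, not anything standard:

```shell
# Write a Modelfile that layers a reviewer system prompt on the base model
MODELFILE=$(mktemp)
cat > "$MODELFILE" << 'EOF'
FROM deepseek-coder:6.7b
PARAMETER temperature 0.2
SYSTEM You are a terse code reviewer. Flag bugs, security issues, and missing error handling.
EOF

# Build it (guarded so this is a no-op where ollama isn't installed)
command -v ollama >/dev/null && ollama create code-reviewer -f "$MODELFILE" \
    || echo "ollama not found; skipping create"
```

Then point the hook at it with `AI_REVIEW_MODEL=code-reviewer`.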
## Step 2: Create the Review Script

Save this as `~/.local/bin/ai-review`:
```bash
#!/bin/bash
set -e

MODEL="${AI_REVIEW_MODEL:-deepseek-coder:6.7b}"
DIFF=$(git diff --cached --diff-filter=ACMR)

if [ -z "$DIFF" ]; then
    echo "No staged changes to review"
    exit 0
fi

PROMPT="Review this code diff. Be concise. Flag:
1. Obvious bugs or logic errors
2. Security issues (SQL injection, XSS, hardcoded secrets)
3. Performance problems
4. Missing error handling
If the code looks fine, just say 'LGTM'.

Diff:
$DIFF"

echo "🔍 Running local code review..."
echo ""
ollama run "$MODEL" "$PROMPT" 2>/dev/null

echo ""
echo "---"
echo "Review complete. Commit? [y/N]"

# Git runs hooks without a terminal on stdin, so reattach it for 'read'
exec < /dev/tty
read -r response
if [[ "$response" =~ ^[Yy]$ ]]; then
    exit 0
else
    exit 1
fi
```
Make it executable:

```bash
chmod +x ~/.local/bin/ai-review
```
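One failure mode worth guarding against: a huge diff silently overflowing the model's context window. A sketch of a guard you could add before building the prompt — the 12000-character budget is an arbitrary assumption, so tune it to your model's context length:

```shell
# Trim an oversized diff and mark the cut, so the model sees a bounded input
truncate_diff() {
    local diff="$1" max="${2:-12000}"
    if [ "${#diff}" -gt "$max" ]; then
        printf '%s\n[diff truncated at %s characters]\n' "${diff:0:$max}" "$max"
    else
        printf '%s\n' "$diff"
    fi
}

# In the review script: DIFF="$(truncate_diff "$DIFF")" before building PROMPT
```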
## Step 3: Set Up the Git Hook

Create a pre-commit hook in your repo:

```bash
# In your project directory
cat > .git/hooks/pre-commit << 'EOF'
#!/bin/bash
~/.local/bin/ai-review
EOF
chmod +x .git/hooks/pre-commit
```
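Because the hook waits for a keyboard confirmation, it will hang or fail for scripted commits and CI. A small guard I'd add at the top of the script — `NO_AI_REVIEW` is my own convention, not a standard variable:

```shell
# Return success only when an interactive review makes sense
should_review() {
    # Skip in CI pipelines and when explicitly disabled
    [ -z "$CI" ] && [ -z "$NO_AI_REVIEW" ]
}

# In the hook: should_review || exit 0
```

Then `NO_AI_REVIEW=1 git commit ...` disables the review without touching `--no-verify`.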
Want this globally? Use git templates (note: templates only apply to repos you `git init` or `git clone` afterward; re-run `git init` in an existing repo to pick up the hook):

```bash
mkdir -p ~/.git-templates/hooks
cp ~/.local/bin/ai-review ~/.git-templates/hooks/pre-commit
git config --global init.templateDir ~/.git-templates
```
## Step 4: Test It

Stage some code and commit:

```bash
git add suspicious-code.py
git commit -m "add feature"
```
You'll see something like:

```
🔍 Running local code review...

Issues found:

1. **SQL Injection** (line 23): User input passed directly to query.
   Use parameterized queries instead.
2. **Missing null check** (line 45): `user.profile` accessed without
   verifying user exists.
3. **Hardcoded credential** (line 12): API key in source code.
   Move to environment variable.

---
Review complete. Commit? [y/N]
```
## Making It Actually Useful

The basic setup works, but here's how I've tuned mine:
### Skip Reviews for Trivial Commits

```bash
# Add to the script, after getting DIFF
# '^+[^+]' counts added lines without matching the '+++' file header
LINES_CHANGED=$(echo "$DIFF" | grep -c '^+[^+]' || true)
if [ "$LINES_CHANGED" -lt 5 ]; then
    echo "Small change, skipping review"
    exit 0
fi
```
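A quick sanity check for the line-count logic: `^+[^+]` matches added code lines but not the `+++ b/file` header, which a bare `^+` would also count:

```shell
# A minimal diff: one context line, one added line
SAMPLE_DIFF='--- a/f.py
+++ b/f.py
@@ -1 +1,2 @@
 x = 1
+y = 2'

ADDED=$(printf '%s\n' "$SAMPLE_DIFF" | grep -c '^+[^+]' || true)
NAIVE=$(printf '%s\n' "$SAMPLE_DIFF" | grep -c '^+' || true)
echo "added=$ADDED naive=$NAIVE"
```

The naive count comes out one higher per changed file, so small commits would look bigger than they are.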
### Focus on Specific File Types

```bash
# Only review Python, JavaScript, and TypeScript
DIFF=$(git diff --cached --diff-filter=ACMR -- '*.py' '*.js' '*.ts')
```
### Bypass When Needed

```bash
# Skip the hook for quick fixes
git commit --no-verify -m "typo fix"
```
### Log Reviews for Later

```bash
# Add to the script before building the prompt
REVIEW_LOG=~/.local/share/ai-reviews/$(date +%Y-%m-%d).log
mkdir -p "$(dirname "$REVIEW_LOG")"
echo "=== $(date) ===" >> "$REVIEW_LOG"
echo "$DIFF" >> "$REVIEW_LOG"
```
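Those logs accumulate one file per day, so a periodic prune keeps them bounded — a sketch, with 30 days as an arbitrary cutoff:

```shell
# Delete .log files older than 30 days under the given directory
prune_logs() {
    find "$1" -name '*.log' -type f -mtime +30 -delete
}

# e.g. prune_logs ~/.local/share/ai-reviews
```

Running it at the top of the review script is simpler than a cron job and costs almost nothing per commit.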
## Performance Notes

On my RTX 3080:

- `deepseek-coder:6.7b` - ~3 seconds for typical diffs
- `codellama:13b` - ~8 seconds, slightly better catches
- `codellama:34b` - ~25 seconds, overkill for pre-commit
The 6.7B model catches 80% of what the larger models find. For pre-commit automation, speed matters more than catching edge cases.
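These numbers are hardware-specific, so measure on your own machine. A rough wall-clock wrapper (second-level resolution, which is plenty for this):

```shell
# Print the wall-clock seconds a command takes to finish
time_review() {
    local start end
    start=$(date +%s)
    "$@" >/dev/null 2>&1
    end=$(date +%s)
    echo "$((end - start))s"
}

# e.g. time_review ollama run deepseek-coder:6.7b "Review: print('hi')"
```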
## What It Won't Catch
Be realistic. Local LLMs miss:
- Complex architectural issues
- Business logic errors (it doesn't know your domain)
- Subtle race conditions
- Whether your code actually solves the right problem
This isn't a replacement for human review. It's a first pass that catches the embarrassing stuff before your teammates see it.
## The Actual Value

Two months in, here's what I've noticed:

- **Fewer "oops" commits** - It catches the dumb mistakes I make at midnight
- **Faster PR reviews** - Human reviewers focus on architecture, not typos
- **Better habits** - Knowing there's a check makes me write cleaner first drafts
The whole setup took 10 minutes. The ROI has been significant.
More at dev.to/cumulus