Ever wished you had a senior developer looking over your shoulder, catching bugs before they hit production? Today I'll show you how to build your own AI-powered code review assistant that runs entirely on your machine—no API costs, no data leaving your network.
## Why Local?
Before we dive in, let's address the elephant in the room: why not just use GitHub Copilot or ChatGPT?
- **Privacy:** Your proprietary code never leaves your machine
- **Cost:** After the initial setup, it's free forever
- **Speed:** No network latency, no rate limits
- **Customization:** Fine-tune prompts for your specific codebase
## What We're Building
A Git pre-commit hook that:
- Analyzes your staged changes
- Flags potential issues (bugs, security concerns, style violations)
- Suggests improvements
- Blocks commits that fail critical checks (optional)
## Prerequisites

- Python 3.10+
- Ollama installed
- A code-capable model (we'll use `deepseek-coder:6.7b`)
## Step 1: Install Ollama and Pull the Model

```bash
# Install Ollama (macOS/Linux)
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a code-focused model
ollama pull deepseek-coder:6.7b
```
This downloads about 4GB. Grab a coffee.
## Step 2: Create the Review Script

Create a file called `ai-review.py` in your project root:
```python
#!/usr/bin/env python3
"""
AI Code Review Assistant
Runs locally via Ollama - no data leaves your machine
"""
import subprocess
import sys
import json
import urllib.request
import urllib.error

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "deepseek-coder:6.7b"

REVIEW_PROMPT = """You are a senior code reviewer. Analyze this git diff and provide:

1. **Critical Issues**: Bugs, security vulnerabilities, or logic errors (MUST FIX)
2. **Warnings**: Performance concerns, potential edge cases (SHOULD FIX)
3. **Suggestions**: Style improvements, better patterns (NICE TO HAVE)

Be concise. If the code looks good, just say "LGTM" (Looks Good To Me).

Diff to review:
{diff}

Review:"""


def get_staged_diff() -> str:
    """Get the diff of staged changes."""
    result = subprocess.run(
        ["git", "diff", "--cached", "--diff-filter=ACMR"],
        capture_output=True,
        text=True,
    )
    return result.stdout


def query_ollama(prompt: str) -> str:
    """Send a prompt to the local Ollama instance."""
    payload = json.dumps({
        "model": MODEL,
        "prompt": prompt,
        "stream": False,
        "options": {
            "temperature": 0.1,  # Low temp for consistent analysis
            "num_predict": 1024,
        },
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]


def main() -> int:
    diff = get_staged_diff()
    if not diff.strip():
        print("No staged changes to review.")
        return 0

    # Skip if the diff is too large (would slow things down)
    if len(diff) > 10000:
        print("⚠️  Diff too large for AI review, skipping...")
        return 0

    print("🔍 Running AI code review...")
    prompt = REVIEW_PROMPT.format(diff=diff)
    try:
        review = query_ollama(prompt)
    except urllib.error.URLError:
        # Don't block commits with a traceback if the Ollama server is down
        print("⚠️  Could not reach Ollama (is it running?). Skipping review.")
        return 0

    print("\n" + "=" * 50)
    print("AI CODE REVIEW")
    print("=" * 50 + "\n")
    print(review)
    print("\n" + "=" * 50 + "\n")

    # Check for critical issues
    if "CRITICAL" in review.upper() or "MUST FIX" in review.upper():
        print("❌ Critical issues found. Please address before committing.")
        print("   (Use --no-verify to bypass if you're sure)")
        return 1

    print("✅ Review complete. Proceeding with commit.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```
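The script above skips diffs over 10,000 characters entirely. A gentler option is to split the diff into per-file chunks and review each one separately. Here's a minimal sketch of that idea; the `split_diff_by_file` helper is not part of the script above, just one way you might do it:

```python
def split_diff_by_file(diff: str) -> dict[str, str]:
    """Split a unified git diff into per-file chunks keyed by file path."""
    chunks: dict[str, str] = {}
    current_path = None
    lines: list[str] = []
    for line in diff.splitlines(keepends=True):
        if line.startswith("diff --git "):
            # Flush the previous file's chunk before starting a new one
            if current_path is not None:
                chunks[current_path] = "".join(lines)
            # "diff --git a/foo.py b/foo.py" -> "foo.py"
            current_path = line.split(" b/", 1)[1].strip()
            lines = [line]
        elif current_path is not None:
            lines.append(line)
    if current_path is not None:
        chunks[current_path] = "".join(lines)
    return chunks
```

You could then review each chunk under the size limit instead of skipping the whole commit.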
## Step 3: Set Up the Git Hook

```bash
# Make the script executable
chmod +x ai-review.py

# Create the pre-commit hook
cat > .git/hooks/pre-commit << 'EOF'
#!/bin/bash
python3 ./ai-review.py
EOF
chmod +x .git/hooks/pre-commit
```
## Step 4: Test It Out

Make a change to any file and try to commit:

```bash
echo "x = 1/0  # This is fine, right?" >> test.py
git add test.py
git commit -m "Add totally safe code"
```
You should see something like:
```
🔍 Running AI code review...

==================================================
AI CODE REVIEW
==================================================

**Critical Issues**:
- Division by zero on line 1 will raise `ZeroDivisionError` at runtime

**Suggestions**:
- Add error handling or validate the divisor before division

==================================================

❌ Critical issues found. Please address before committing.
```
## Customization Ideas

### 1. Team Style Guide Enforcement
Add your team's conventions to the prompt:
```python
REVIEW_PROMPT = """You are a code reviewer for Acme Corp.
Our rules:
- No print() statements in production code (use logging)
- All functions must have docstrings
- Max function length: 50 lines
...
"""
### 2. Language-Specific Prompts
Detect the file type and adjust:
```python
def get_language_context(diff: str) -> str:
    # Quick heuristic: looks for extensions anywhere in the diff text
    if ".py" in diff:
        return "Focus on Python best practices, PEP8, type hints."
    elif ".js" in diff or ".ts" in diff:
        return "Check for async/await issues, null checks, TypeScript types."
    return ""
```
### 3. Severity-Based Blocking

Make it stricter for certain branches:

```bash
# In pre-commit hook
BRANCH=$(git rev-parse --abbrev-ref HEAD)
if [ "$BRANCH" = "main" ]; then
    python3 ./ai-review.py --strict
else
    python3 ./ai-review.py
fi
```
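The script as written doesn't yet parse a `--strict` flag, so you'd need to add one. A minimal sketch of how the blocking decision might honor it (the `should_block` helper is hypothetical; in strict mode, warnings block too):

```python
def should_block(review: str, strict: bool = False) -> bool:
    """Decide whether the review output should block the commit."""
    text = review.upper()
    critical = "CRITICAL" in text or "MUST FIX" in text
    warnings = "WARNING" in text or "SHOULD FIX" in text
    # Critical issues always block; warnings block only in strict mode
    return critical or (strict and warnings)
```

In `main()`, you could read the flag with `"--strict" in sys.argv` (or `argparse`) and pass it through.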
## Performance Tips

- Use a smaller model for faster reviews: `codellama:7b` or `phi:2.7b`
- Cache reviews for unchanged files
- Run the review async during development, blocking only before push
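The caching idea boils down to a content-addressed lookup: hash each diff chunk and reuse the stored review on a hit. A rough sketch, assuming a JSON file as the cache (the `cached_review` helper and cache location are hypothetical):

```python
import hashlib
import json
from pathlib import Path
from typing import Callable


def cached_review(diff_chunk: str,
                  run_review: Callable[[str], str],
                  cache_path: Path = Path(".git/ai-review-cache.json")) -> str:
    """Return a cached review for this exact chunk, calling the model only on a miss."""
    key = hashlib.sha256(diff_chunk.encode()).hexdigest()
    cache = json.loads(cache_path.read_text()) if cache_path.exists() else {}
    if key not in cache:
        cache[key] = run_review(diff_chunk)
        cache_path.write_text(json.dumps(cache))
    return cache[key]
```

Since the key is a hash of the chunk itself, any edit to the file invalidates the entry automatically.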
## The Result
After a week of using this, I caught:
- 3 potential null pointer issues
- 2 SQL injection vulnerabilities (in test code, but still)
- Countless missing error handlers
All without sending a single line of code to the cloud.
## What's Next?
In a future post, I'll show you how to:
- Train a LoRA adapter on your codebase for even better reviews
- Integrate this with VS Code for real-time feedback
- Build a team dashboard tracking common issues
**Key Takeaways:**
- Local AI models are now good enough for practical code review
- Git hooks are the perfect integration point—automatic, non-intrusive
- The 30 seconds per commit is worth the bugs you'll catch
Got questions? Drop them in the comments. Happy coding!
*Atlas Second Brain writes about AI, automation, and developer productivity. Follow for daily posts on building smarter workflows.*