URLBug
Building a Local AI Code Review Tool Using Ollama

Code review is one of the most important parts of the development process, but it also takes time. Static analyzers and linters catch rule-based issues, yet they usually lack the ability to understand broader context.

Recently, many AI-powered code review tools have appeared. Most of them are SaaS products that require sending code to external APIs, which can be a problem for private repositories or internal projects.

I wanted something simpler: a tool that analyzes code changes with an LLM but runs entirely locally.

So I built CodeFox, a CLI tool for AI-assisted code review.


Idea

The goal of CodeFox is not to replace human code review but to automate routine checks.

Instead of reviewing the entire repository, the tool focuses on changes in the current diff and analyzes them using a local language model.

Typical issues it tries to detect include:

  • potential bugs
  • security concerns
  • maintainability problems
  • unclear logic or edge cases

The output is a structured review report with explanations and suggested fixes.
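As a rough illustration of what "structured" could mean here, a single finding might look like the JSON below. The field names are my assumption for the sake of example, not CodeFox's actual output schema:

```python
import json

# Hypothetical review finding -- field names are illustrative,
# not CodeFox's actual report schema.
report = json.loads("""
{
  "file": "app/auth.py",
  "line": 42,
  "category": "security",
  "severity": "high",
  "message": "Password compared with == instead of a constant-time check.",
  "suggestion": "Use hmac.compare_digest for the comparison."
}
""")
print(report["category"])  # -> security
```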


How It Works

The workflow is intentionally minimal:

  1. collect changes using git diff
  2. optionally retrieve relevant project context
  3. build a prompt containing diff + context
  4. send the prompt to a local LLM
  5. generate a review report

Simplified pipeline:

```
git diff
   ↓
context retrieval (optional)
   ↓
prompt generation
   ↓
local LLM (Ollama)
   ↓
review report
```
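The five steps can be sketched in a few lines of Python. This is a minimal illustration of the same pipeline, not CodeFox's actual implementation; it assumes Ollama's standard HTTP API on `localhost:11434` and a locally pulled model such as `qwen2.5-coder`:

```python
import json
import subprocess
import urllib.request

def get_diff() -> str:
    """Step 1: collect changes from the working tree."""
    return subprocess.run(
        ["git", "diff"], capture_output=True, text=True, check=True
    ).stdout

def build_prompt(diff: str, context: str = "") -> str:
    """Steps 2-3: combine the optional project context with the diff."""
    parts = ["You are a code reviewer. Report bugs, security issues, "
             "and maintainability problems in the diff below."]
    if context:
        parts.append("Project context:\n" + context)
    parts.append("Diff:\n" + diff)
    return "\n\n".join(parts)

def review(prompt: str, model: str = "qwen2.5-coder") -> str:
    """Steps 4-5: send the prompt to a local Ollama instance."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server):
# print(review(build_prompt(get_diff())))
```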

Because the tool analyzes only the diff, the prompt remains relatively small and focused.
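Even so, a large refactor can still overflow a small local model's context window. One simple guard (my own sketch, not necessarily what CodeFox does) is to truncate the diff at per-file boundaries so no file's changes are cut in half:

```python
def truncate_diff(diff: str, max_chars: int = 8000) -> str:
    """Keep whole per-file chunks of a git diff until the budget is spent."""
    out, used = [], 0
    # Split on the per-file header so we never cut one file's diff in half.
    for chunk in diff.split("diff --git"):
        if not chunk:
            continue
        piece = "diff --git" + chunk
        if used + len(piece) > max_chars:
            out.append("\n[... diff truncated ...]")
            break
        out.append(piece)
        used += len(piece)
    return "".join(out)
```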


Running the Tool

Installation:

```shell
pip install codefox
```

Initialize configuration:

```shell
codefox init
```

Run review:

```shell
codefox scan
```

The command analyzes the current changes in the repository and produces a review report.


Why Run AI Code Review Locally

Using local models has several advantages:

  • source code stays on your machine
  • no API costs
  • no rate limits
  • works in offline environments
  • model choice is flexible

The tool currently works with models available through Ollama, such as:

  • Qwen coder models
  • CodeLlama
  • other compatible LLMs

Why CLI First

Many AI code review tools integrate directly with pull requests. CodeFox started as a CLI tool instead.

This makes it:

  • repository-agnostic
  • usable with any git workflow
  • easy to integrate into scripts or CI later
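To make the "CI later" point concrete, a CI job could be as small as the GitHub Actions sketch below. This is purely hypothetical, since CI support is still listed as future work, and it assumes a self-hosted runner that already has Ollama and a model available:

```yaml
# Hypothetical GitHub Actions job -- CodeFox does not ship this yet.
review:
  runs-on: self-hosted   # needs a machine with Ollama running
  steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 0   # full history so diffing against the base branch works
    - run: pip install codefox
    - run: codefox init
    - run: codefox scan
```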

Current Status

The project is still experimental. I'm exploring improvements in:

  • prompt design
  • context retrieval
  • review quality across different models
  • CI/CD integrations

Project

GitHub repository:

https://github.com/codefox-lab/CodeFox-CLI

Feedback and suggestions are welcome.

Top comments (2)

klement Gunndu

Ran into a similar tradeoff with local code review — the diff-only approach keeps prompts small but misses cross-file issues that matter most in refactors. Curious how CodeFox handles multi-file context retrieval.

URLBug

Good point - diff-only analysis definitely has limitations when changes affect multiple parts of the codebase.

Right now CodeFox tries to handle this by using a small RAG step. It starts from the git diff, then retrieves additional related files (imports, referenced symbols, nearby modules, etc.) to provide extra context for the review. That helps catch some cross-file issues without sending the entire repository to the model.

Currently I'm also experimenting with agent-based workflows, where the model can use tools to fetch additional files when needed. The idea is to combine RAG with MCP tools, so the model can request relevant files or symbols dynamically instead of relying only on the initial context.
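The retrieval step described above can be sketched roughly: take the files touched by the diff, then collect the modules they import as candidates for extra context. This is my simplified reconstruction of the idea, not CodeFox's actual code:

```python
import re

def changed_files(diff: str) -> list[str]:
    """Extract paths of files touched by a unified git diff."""
    return re.findall(r"^\+\+\+ b/(\S+)", diff, flags=re.MULTILINE)

def related_modules(source: str) -> list[str]:
    """Collect imported module names from a changed Python file --
    candidates for extra review context."""
    pattern = r"^(?:from\s+(\S+)\s+import|import\s+(\S+))"
    return [a or b for a, b in re.findall(pattern, source, flags=re.MULTILINE)]

diff = """\
--- a/app/auth.py
+++ b/app/auth.py
@@ -1 +1 @@
-old
+new
"""
print(changed_files(diff))                                  # -> ['app/auth.py']
print(related_modules("import hashlib\nfrom app import db\n"))  # -> ['hashlib', 'app']
```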