Hey everyone!
A few months ago I started building a small side project to make code reviews a little less painful. Today I'm excited to share two big updates:
✅ ThinkReview is now open source
✅ ThinkReview now supports Ollama (run local LLMs for private code reviews!)
If you haven't heard of ThinkReview before, here's the quick intro.
What is ThinkReview?
ThinkReview is a browser extension that helps you review code directly inside your GitLab/GitHub/Bitbucket merge requests.
You can literally chat with your MR, ask questions about the diff, explore potential bugs, and generate review comments.
The key difference from typical AI review bots (CodeRabbit, CodeAnt, etc.) is:
❌ It does not spam your PR with automatic comments
✅ It gives you a private AI assistant inside the MR UI
You stay fully in control of the final review.
Think of it as "AI-assisted thinking," not automated reviewing.
This is especially useful for devs who still prefer reviewing code in the browser, not inside an IDE or via CI bots.
Update #1: ThinkReview is now Open Source
You can explore the code, file issues, request features, or contribute.
GitHub Repo:
https://github.com/Thinkode/thinkreview-browser-extension
Making the project open source was the #1 request from early users, especially those in companies with strict audit/security requirements.
Update #2: Ollama Support (Run Local LLMs)
As of version 1.4.0, ThinkReview can now connect to Ollama, letting you run:
- Qwen Coder
- Llama 3
- DeepSeek
- Codestral
- Any model supported by Ollama
Why this is awesome:
- 100% local: your code never leaves your machine
- Free
- Works great with self-hosted GitLab
- No API keys required
For privacy-focused teams (or anyone who prefers local inference), this is a huge upgrade.
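To make "local inference" concrete, here is a minimal sketch of how any tool can talk to a locally running Ollama server. The endpoint and payload shape follow Ollama's standard `/api/generate` HTTP API; the model name, prompt wording, and helper functions are illustrative assumptions, not ThinkReview's actual internals.

```python
import json
import urllib.request

# Ollama's default local endpoint (no API key needed)
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_review_request(diff: str, model: str = "qwen2.5-coder") -> dict:
    """Build a request payload asking a local model to review a diff.

    The model name is an assumption; use whatever you've pulled with
    `ollama pull <model>`.
    """
    prompt = (
        "You are a careful code reviewer. Review the following diff and "
        "point out potential bugs, style issues, and missing tests:\n\n" + diff
    )
    # stream=False returns a single JSON object instead of streamed chunks
    return {"model": model, "prompt": prompt, "stream": False}


def review_diff(diff: str, model: str = "qwen2.5-coder") -> str:
    """Send the diff to a local Ollama server and return the model's reply."""
    payload = json.dumps(build_review_request(diff, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires `ollama serve` running and the model pulled locally
    print(review_diff("--- a/foo.py\n+++ b/foo.py\n@@ -1 +1 @@\n-x = 1\n+x = 2\n"))
```

Because everything goes through `localhost:11434`, the diff never leaves your machine, which is the whole point of the local-inference setup.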
Installation
ThinkReview works on all Chromium browsers.
Chrome Web Store:
https://chromewebstore.google.com/detail/thinkreview-ai-code-revie/bpgkhgbchmlmpjjpmlaiejhnnbkdjdjn
Join the Discussion
If you have ideas, want to contribute, or need help setting up local models, feel free to reach out:
- GitHub Discussions
- https://thinkreview.dev/contact
I'd love to hear feedback from this community, especially from developers experimenting with LLM-assisted workflows.
Thanks for reading!
– Jay

