
Tran Long An


I got tired of AI Reviewers hallucinating, so I built an Autonomous Agent for GitLab instead.

We've all been there. You push a hefty 300-line Merge Request, and within 3 seconds an "AI Code Reviewer" bot leaves 12 comments on it.

You read them. Two comments complain about a "missing import" that is clearly handled globally by your framework. One suggests a library you explicitly removed last sprint. The rest are annoying formatting nits. You quietly click "Resolve Thread" 12 times and sigh.

Standard AI reviewers pipe your .patch file into an LLM and pray for the best. They lack context.

That’s exactly why I built AI Review Agent: a truly autonomous, context-aware code review agent built natively for GitLab in Go.

🧠 The Problem with "Dumb" AI Reviewers

When a human senior developer reviews a PR, they don’t just look at the highlighted diff. If you modify a function signature, they Cmd+Click to see where else it's used. If you introduce a generic struct, they read the interface definition in another file. They think before they speak.

Most open-source AI reviewers can't do this. They suffer from:

  1. Zero Codebase Context: "Why did you use mylog.Info() instead of fmt.Println()?"
  2. Context Window Explosions: Bombarding the LLM with a 150-file MR diff until it forgets your original system prompt.
  3. One-Way Communication: They dump a review and vanish. You can't ask them to elaborate.

🛠️ How AI Review Agent Fixes This

I wanted to build an agent that behaves like an actual colleague. Here is how its pipeline works differently:

1. Agentic Tool Use (The Secret Sauce)

The bot doesn't just statically read the diff. It's equipped with tools like read_file, search_code, and multi_diff.
If it sees you calling a mysterious CacheManager.Get(), it will pause, use search_code to find the CacheManager implementation in the codebase, read it, and then decide if your code is buggy. No more hallucinated assertions.

2. The Interactive Reply Loop 💬

Most bots drop a comment and disappear. AI Review Agent stays in the conversation.
If the AI leaves a comment on your code, you can literally @reply to it on the GitLab thread.
"Actually, I did it this way because of a race condition in the upstream service."
A dedicated Replier Agent wakes up, reads the entire thread history, analyzes the surrounding code context again, and continues the technical debate. It will either apologize and agree with you, or push back if it finds a genuine flaw in your logic.

3. It Actually Learns From Your Team 📈

Every codebase has its own unwritten rules. AI Review Agent is designed to get smarter over time.
It features a background Cron job (Feedback Consolidator) that periodically scans historical human replies and resolved AI comments across your GitLab projects. It extracts "lessons learned" and builds a cached "Repository Best Practices" rulebook.
If your team agreed on a specific logging format last month, the agent remembers it and enforces it on all future PRs.

4. Seamless Webhook & Background Worker Server ⚙️

It’s not just a script you run manually. The AI Review Agent operates as a high-performance Webhook Server.

  • You configure it on your GitLab project once.
  • Every time a developer pushes code, the webhook triggers the agent.
  • Reviews are pushed asynchronously into an intelligent Queue / Worker Pool system with retry logic. So even if the OpenAI API hiccups, your code review is never lost.

5. Interactive CLI (Dry Run Mode)

Are you scared of installing a bot that might spam your entire team on GitLab?
You can run AI Review Agent locally against a live MR via the CLI:

```shell
./cli review --project-id 123 --mr-id 45 --model claude-3-7-sonnet-20250219
```

It prints the AI's suggestions directly in your terminal, and lets you interactively type 1, 3, 5 to decide exactly which comments are worth pushing to the live GitLab MR.

🏗️ Built with Go (Clean Architecture)

Under the hood, it’s built entirely in Go 1.25. It uses standard Clean Architecture abstractions, which makes it easy to extend. It supports graceful degradation and multi-LLM routing (OpenAI, Anthropic, Google Gemini), and relies on a local SQLite or Postgres database to track its asynchronous review jobs and feedback metrics.

🚀 Try It Out

If your team uses GitLab and you're looking for a smarter, less noisy AI reviewer—or if you're just interested in Go-based AI Agents—I'd love for you to check it out.

🔗 GitHub Repository: antlss/gitlab-review-agent

I'm actively looking for feedback, feature requests, and early contributors! Let me know in the comments: What is the most annoying comment an AI reviewer has ever left on your PR?
