How We Built an AI Code Reviewer with NeuroLink and Bitbucket
At Juspay, we process thousands of pull requests across 100+ repositories every month. Code review bottlenecks were slowing our release velocity, and we needed a solution that could:
- Understand our domain-specific patterns and conventions
- Integrate seamlessly with Bitbucket and Jira
- Learn from past reviews to improve over time
- Run entirely within our infrastructure for security
Enter Yama — our AI-native code review tool built on NeuroLink, the universal AI SDK for TypeScript. This is the story of how we built it.
The Architecture Decision
We evaluated several approaches:
- Off-the-shelf AI code review tools: Great for generic checks, but couldn't understand our Haskell payment systems or custom conventions
- Direct LLM API integration: Would require building provider abstraction, memory management, and tool integration from scratch
- NeuroLink with MCP: Best of both worlds — provider flexibility + standardized tool integration
We chose NeuroLink because it gave us:
- 13 AI providers under one API (we use Claude for reasoning, Gemini for cost-effective checks)
- MCP (Model Context Protocol) for Bitbucket/Jira integration
- Conversation memory for learning reviewer preferences
- Streaming responses for real-time progress updates
Core Architecture
```
┌─────────────────┐      ┌──────────────┐      ┌─────────────────┐
│  Bitbucket PR   │─────▶│   Yama API   │─────▶│    NeuroLink    │
│    Webhook      │      │  (Node.js)   │      │       SDK       │
└─────────────────┘      └──────────────┘      └────────┬────────┘
                                 ┌──────────────────────┤
                                 ▼                      ▼
                        ┌─────────────────┐    ┌─────────────────┐
                        │   Jira Issues   │    │   MCP Servers   │
                        │    (context)    │    │   - Bitbucket   │
                        │                 │    │   - Jira        │
                        └─────────────────┘    └─────────────────┘
```
Building the Review Pipeline
1. Setting Up NeuroLink with MCP Integration
First, we initialize NeuroLink with our MCP servers for Bitbucket and Jira:
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    redisConfig: {
      host: process.env.REDIS_HOST,
      port: 6379,
      ttl: 86400 * 7, // Keep PR context for a week
    },
  },
});

// Connect to Bitbucket MCP server
await neurolink.addExternalMCPServer("bitbucket", {
  transport: "stdio",
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-bitbucket"],
  env: {
    BITBUCKET_TOKEN: process.env.BITBUCKET_TOKEN,
    BITBUCKET_WORKSPACE: "juspay",
  },
});

// Connect to Jira for ticket context
await neurolink.addExternalMCPServer("jira", {
  transport: "stdio",
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-jira"],
  env: {
    JIRA_TOKEN: process.env.JIRA_TOKEN,
    JIRA_HOST: "https://juspay.atlassian.net",
  },
});
```
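Both MCP servers fail with opaque transport errors when a credential is unset, so it is worth failing fast at startup. The helper below is our own sketch (not part of NeuroLink or the MCP servers):

```typescript
// Hypothetical startup guard: verify required environment variables exist
// before wiring up MCP servers, and return their values in order.
function assertEnv(
  keys: string[],
  env: Record<string, string | undefined>
): string[] {
  const missing = keys.filter((k) => !env[k] || env[k]!.trim() === "");
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return keys.map((k) => env[k]!);
}
```

Called as `assertEnv(["BITBUCKET_TOKEN", "JIRA_TOKEN"], process.env)` before the `addExternalMCPServer` calls, it turns a confusing runtime failure into a clear configuration error.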
2. Fetching PR Context
When a webhook fires, we gather all relevant context:
```typescript
interface PRContext {
  prId: string;
  repoSlug: string;
  branch: string;
  author: string;
  jiraTicket?: string;
}

async function gatherPRContext(
  neurolink: NeuroLink,
  ctx: PRContext
): Promise<string> {
  // Let the AI use MCP tools to fetch PR data
  const result = await neurolink.generate({
    input: {
      text: `Fetch the diff, files changed, and description for PR ${ctx.prId}
in repo ${ctx.repoSlug}.${
        ctx.jiraTicket ? ` Also fetch related Jira ticket ${ctx.jiraTicket}.` : ""
      }`,
    },
    provider: "anthropic",
    model: "claude-4-sonnet",
    enableOrchestration: true, // Let AI decide which tools to call
  });
  return result.content;
}
```
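For completeness, here is one way the incoming webhook payload could be mapped into `PRContext`. The payload shape below is a simplified sketch, not Bitbucket's full webhook schema, and parsing the Jira key out of the branch name assumes a `PROJ-123` naming convention:

```typescript
// PRContext as defined earlier (repeated so this snippet is self-contained).
interface PRContext {
  prId: string;
  repoSlug: string;
  branch: string;
  author: string;
  jiraTicket?: string;
}

// Simplified, illustrative subset of a Bitbucket PR webhook payload.
interface PRWebhookPayload {
  pullrequest: {
    id: number;
    source: { branch: { name: string } };
    author: { display_name: string };
  };
  repository: { slug: string };
}

// Derive PRContext; the Jira key is extracted from the branch name using the
// common PROJ-123 convention (an assumption about branch naming discipline).
function toPRContext(payload: PRWebhookPayload): PRContext {
  const branch = payload.pullrequest.source.branch.name;
  const match = branch.match(/([A-Z][A-Z0-9]+-\d+)/);
  return {
    prId: String(payload.pullrequest.id),
    repoSlug: payload.repository.slug,
    branch,
    author: payload.pullrequest.author.display_name,
    jiraTicket: match ? match[1] : undefined,
  };
}
```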
3. The Multi-Stage Review Engine
Yama performs reviews in three stages, each with different models for cost optimization:
```typescript
import { z } from "zod";

async function performCodeReview(
  neurolink: NeuroLink,
  prContext: string,
  files: string[]
): Promise<ReviewComment[]> {
  const comments: ReviewComment[] = [];

  // Stage 1: Security scan (fast, cheap model)
  const securityResult = await neurolink.generate({
    input: {
      text: `Analyze this PR for security issues:
${prContext}

Check for:
- Hardcoded secrets or credentials
- SQL injection vulnerabilities
- Unsafe file operations
- Authentication bypasses`,
    },
    provider: "google-ai",
    model: "gemini-2.5-flash", // Fast and cost-effective
    schema: z.object({
      issues: z.array(
        z.object({
          severity: z.enum(["critical", "high", "medium", "low"]),
          file: z.string(),
          line: z.number(),
          description: z.string(),
          suggestion: z.string(),
        })
      ),
    }),
    output: { format: "json" },
  });
  comments.push(...parseSecurityIssues(securityResult));

  // Stage 2: Architecture review (reasoning model)
  const archResult = await neurolink.stream({
    input: {
      text: `Review this PR for architectural concerns:
${prContext}

Consider our conventions:
- Haskell services should use EulerHS patterns
- Database queries must use Beam ORM
- API responses follow Juspay standard format`,
      files: files.filter((f) => f.endsWith(".hs") || f.endsWith(".ts")),
    },
    provider: "anthropic",
    model: "claude-4-sonnet",
    thinkingConfig: {
      thinkingLevel: "medium", // Enable extended reasoning
    },
  });

  // Stream architecture review in real-time
  for await (const chunk of archResult.stream) {
    if ("content" in chunk) {
      process.stdout.write(chunk.content);
    }
  }

  // Stage 3: Style and conventions (cached model)
  const styleResult = await neurolink.generate({
    input: {
      text: `Check style compliance. Be concise.`,
      files: files,
    },
    provider: "google-ai",
    model: "gemini-2.5-flash",
    rag: {
      files: ["./docs/coding-standards.md", "./docs/style-guide.md"],
      strategy: "markdown",
      topK: 3,
    },
  });
  // Style findings are parsed and merged into `comments` the same way as the
  // security results (parsing omitted for brevity).

  return comments;
}
```
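The pipeline above leans on a `parseSecurityIssues` helper we didn't show. One plausible shape, assuming the structured output has already been parsed into the Zod schema's type (exactly how depends on the provider's return format), is:

```typescript
// Types mirroring the Stage 1 Zod schema and the ReviewComment shape used
// throughout the article (repeated so this snippet is self-contained).
type Severity = "critical" | "high" | "medium" | "low";

interface SecurityIssue {
  severity: Severity;
  file: string;
  line: number;
  description: string;
  suggestion: string;
}

interface ReviewComment {
  file: string;
  line: number;
  description: string;
  suggestion?: string;
}

// One possible parseSecurityIssues: map structured model output to review
// comments, prefixing the severity so it is visible in the posted comment.
function parseSecurityIssues(result: { issues: SecurityIssue[] }): ReviewComment[] {
  return result.issues.map((issue) => ({
    file: issue.file,
    line: issue.line,
    description: `[security:${issue.severity}] ${issue.description}`,
    suggestion: issue.suggestion,
  }));
}
```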
4. Posting Review Comments
Using the Bitbucket MCP tool to post comments:
```typescript
async function postReviewComments(
  neurolink: NeuroLink,
  prId: string,
  comments: ReviewComment[]
) {
  for (const comment of comments) {
    await neurolink.generate({
      input: {
        text: `Post this review comment to PR ${prId}:
File: ${comment.file}
Line: ${comment.line}
Comment: ${comment.description}
${comment.suggestion ? `Suggestion: ${comment.suggestion}` : ""}`,
      },
      provider: "anthropic",
      // MCP tool will be automatically invoked
    });
  }
}
```
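Posting every raw finding can flood a PR. A pattern worth considering (not part of Yama as shown) is deduplicating to one comment per `file:line` and posting the most severe first. Severity here is read from a hypothetical `[security:<severity>]` prefix in the comment text, an assumed convention:

```typescript
interface ReviewComment {
  file: string;
  line: number;
  description: string;
  suggestion?: string;
}

// Lower rank = more severe; comments without a severity prefix sort last.
const SEVERITY_RANK: Record<string, number> = {
  critical: 0,
  high: 1,
  medium: 2,
  low: 3,
};

function rank(c: ReviewComment): number {
  const m = c.description.match(/\[security:(\w+)\]/);
  return m && m[1] in SEVERITY_RANK ? SEVERITY_RANK[m[1]] : 4;
}

// Keep the most severe comment per file:line, then sort most-severe-first.
function prioritize(comments: ReviewComment[]): ReviewComment[] {
  const best = new Map<string, ReviewComment>();
  for (const c of comments) {
    const key = `${c.file}:${c.line}`;
    const existing = best.get(key);
    if (!existing || rank(c) < rank(existing)) best.set(key, c);
  }
  return [...best.values()].sort((a, b) => rank(a) - rank(b));
}
```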
Learning from Feedback
Yama improves over time by learning from developer feedback. When a reviewer dismisses or modifies a Yama comment, we capture that signal:
```typescript
async function learnFromFeedback(
  neurolink: NeuroLink,
  originalComment: ReviewComment,
  reviewerAction: "accepted" | "modified" | "dismissed",
  reviewerNote?: string
) {
  // Store feedback in Redis memory for future context
  await neurolink.generate({
    input: {
      text: `Learning from review feedback:
Original: ${originalComment.description}
Action: ${reviewerAction}
Note: ${reviewerNote || "None"}

Adjust future recommendations accordingly.`,
    },
    provider: "anthropic",
    model: "claude-4-haiku",
  });
}
```
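Beyond feeding the signal back to the model, the same feedback can drive a simple statistical gate. The sketch below is hypothetical (an in-memory tally; in production this state would live in Redis alongside the conversation memory): track how reviewers respond per comment category and stop emitting categories that are mostly dismissed.

```typescript
type ReviewerAction = "accepted" | "modified" | "dismissed";

// Hypothetical feedback tally: counts reviewer actions per comment category
// and suppresses categories that are dismissed more often than not.
class FeedbackTracker {
  private counts = new Map<string, { total: number; dismissed: number }>();

  record(category: string, action: ReviewerAction): void {
    const entry = this.counts.get(category) ?? { total: 0, dismissed: 0 };
    entry.total += 1;
    if (action === "dismissed") entry.dismissed += 1;
    this.counts.set(category, entry);
  }

  // Suppress only after enough samples, when >60% of comments were dismissed.
  shouldSuppress(category: string, minSamples = 5): boolean {
    const entry = this.counts.get(category);
    if (!entry || entry.total < minSamples) return false;
    return entry.dismissed / entry.total > 0.6;
  }
}
```

The `minSamples` floor keeps one grumpy reviewer from silencing a whole category; the 60% threshold is a tunable assumption, not a measured value.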
Results & Lessons Learned
After 6 months in production:
- 70% reduction in trivial review comments (style, formatting)
- 40% faster PR turnaround time
- Zero security issues shipped to production (all were caught during review)
- $0.12 average cost per PR review (using cost-optimized model routing)
Key Lessons
- Multi-model strategy works: using cheaper models for simple checks and expensive ones for complex reasoning cut costs by 80%
- MCP is a game-changer: tool integration that "just works" across providers saved us weeks of integration work
- Memory matters: per-PR conversation context dramatically improved review quality over stateless approaches
- Streaming UX: real-time progress updates made developers trust the system more
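As a concrete sketch of the multi-model routing, the stage-to-model mapping from the pipeline can be centralized in one function. Provider and model names follow this article's configuration; the function itself is illustrative, not a NeuroLink API:

```typescript
type Stage = "security" | "architecture" | "style";

interface ModelRoute {
  provider: string;
  model: string;
}

// Route each review stage to a model: a reasoning model for architecture,
// a fast, cheap model for the mechanical security and style passes.
function routeModel(stage: Stage): ModelRoute {
  switch (stage) {
    case "architecture":
      return { provider: "anthropic", model: "claude-4-sonnet" };
    case "security":
    case "style":
      return { provider: "google-ai", model: "gemini-2.5-flash" };
  }
}
```

Centralizing the table makes the cost/quality trade-off auditable in one place and trivial to adjust when a cheaper model becomes good enough for a stage.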
The Code
Yama is now part of our internal tooling suite. Here's the complete minimal setup if you want to build something similar:
```typescript
import { NeuroLink } from "@juspay/neurolink";
import { z } from "zod";

// Initialize
const yama = new NeuroLink({
  conversationMemory: { enabled: true },
});

// Add your MCP servers
await yama.addExternalMCPServer("bitbucket", {
  transport: "stdio",
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-bitbucket"],
  env: { BITBUCKET_TOKEN: process.env.BITBUCKET_TOKEN },
});

// Review webhook handler
export async function handlePRWebhook(payload: PRWebhook) {
  const context = await gatherPRContext(yama, payload);
  const comments = await performCodeReview(yama, context, payload.files);
  await postReviewComments(yama, payload.prId, comments);
}
```
Conclusion
Building Yama with NeuroLink let us focus on the review logic instead of AI infrastructure. The combination of provider flexibility, MCP tool integration, and conversation memory made it possible to ship a production-grade code review system in weeks, not months.
If you're building AI-powered developer tools, NeuroLink's unified API and MCP ecosystem will save you significant engineering time — it certainly did for us.
NeuroLink — The Universal AI SDK for TypeScript
- GitHub: github.com/juspay/neurolink
- Install: `npm install @juspay/neurolink`
- Docs: docs.neurolink.ink
- Blog: blog.neurolink.ink — 150+ technical articles