Paul Robertson
AI Code Review Assistant: Build Your Own GitHub Bot with OpenAI in 20 Minutes

This article contains affiliate links. I may earn a commission at no extra cost to you.


Code reviews are essential but time-consuming. What if you could have an AI assistant that provides instant, intelligent feedback on every pull request? In this tutorial, we'll build a GitHub bot that automatically reviews code changes using OpenAI's API.

Our bot will analyze pull requests, identify potential issues, suggest improvements, and provide constructive feedback—all within minutes of a PR being opened.

What We're Building

By the end of this tutorial, you'll have:

  • A GitHub webhook that triggers on pull request events
  • An AI-powered code analysis system using OpenAI's API
  • Automated deployment via GitHub Actions
  • Production-ready error handling and rate limiting

Prerequisites

  • Basic knowledge of Node.js and GitHub APIs
  • A GitHub account with repository admin access
  • An OpenAI API key
  • Familiarity with GitHub Actions

Step 1: Setting Up the Project Structure

First, let's create our project structure:

mkdir ai-code-reviewer
cd ai-code-reviewer
npm init -y
npm install express @octokit/rest openai dotenv

Create the following files:

ai-code-reviewer/
├── src/
│   ├── index.js
│   ├── reviewEngine.js
│   └── utils.js
├── .github/
│   └── workflows/
│       └── deploy.yml
├── package.json
└── .env.example

Step 2: GitHub Webhook Integration

Let's start with the webhook handler in src/index.js:

const express = require('express');
const { Octokit } = require('@octokit/rest');
const { reviewPullRequest } = require('./reviewEngine');
require('dotenv').config();

const app = express();
app.use(express.json());

const octokit = new Octokit({
  auth: process.env.GITHUB_TOKEN,
});

app.post('/webhook', async (req, res) => {
  const { action, pull_request, repository } = req.body;

  // Only process opened or synchronized PRs
  if (!['opened', 'synchronize'].includes(action) || !pull_request) {
    return res.status(200).send('Event ignored');
  }

  try {
    console.log(`Processing PR #${pull_request.number} in ${repository.full_name}`);

    // Get the diff for the pull request
    const diff = await octokit.rest.pulls.get({
      owner: repository.owner.login,
      repo: repository.name,
      pull_number: pull_request.number,
      mediaType: {
        format: 'diff'
      }
    });

    // Analyze the code changes
    const review = await reviewPullRequest(diff.data, pull_request);

    // Post the review as a comment
    await octokit.rest.issues.createComment({
      owner: repository.owner.login,
      repo: repository.name,
      issue_number: pull_request.number,
      body: review
    });

    res.status(200).send('Review posted successfully');
  } catch (error) {
    console.error('Error processing webhook:', error);
    res.status(500).send('Internal server error');
  }
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`AI Code Reviewer listening on port ${PORT}`);
});

Step 3: Building the AI Review Engine

Now let's create the core AI logic in src/reviewEngine.js:

const OpenAI = require('openai');
const { parseCodeChanges, formatReviewResponse } = require('./utils');

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const REVIEW_PROMPT = `
You are an experienced software engineer conducting a code review. Analyze the following code changes and provide constructive feedback.

Focus on:
- Code quality and best practices
- Potential bugs or security issues
- Performance considerations
- Maintainability and readability
- Adherence to common patterns

Provide specific, actionable feedback. If the code looks good, acknowledge that too.

Code changes:
`;

async function reviewPullRequest(diffData, pullRequest) {
  try {
    // Parse the diff to extract meaningful changes
    const codeChanges = parseCodeChanges(diffData);

    // Skip if the diff is too large (rough character budget to stay
    // under the model's token limit)
    if (codeChanges.length > 8000) {
      return formatReviewResponse({
        summary: "⚠️ This PR is too large for automated review. Consider breaking it into smaller changes.",
        details: []
      });
    }

    const response = await openai.chat.completions.create({
      model: "gpt-4",
      messages: [
        {
          role: "system",
          content: "You are a helpful code reviewer. Provide constructive, specific feedback."
        },
        {
          role: "user",
          content: REVIEW_PROMPT + codeChanges
        }
      ],
      max_tokens: 1000,
      temperature: 0.3,
    });

    const aiReview = response.choices[0].message.content;

    return formatReviewResponse({
      summary: `🤖 **AI Code Review for PR #${pullRequest.number}**\n\n`,
      details: aiReview,
      footer: "\n\n---\n*This review was generated by AI. Please use your judgment and seek human review for critical changes.*"
    });

  } catch (error) {
    console.error('Error in AI review:', error);

    // openai v4 errors carry the HTTP status; 429 means we were rate-limited
    if (error.status === 429) {
      return "⏳ AI reviewer is currently rate-limited. Please try again later.";
    }

    return "❌ Unable to generate AI review at this time. Please proceed with manual review.";
  }
}

module.exports = { reviewPullRequest };
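The 8,000-character cutoff above is a stand-in for a real token budget. If you'd rather reason in tokens, a common rule of thumb is roughly four characters per token for English-like text. This is only a heuristic, not the model's actual tokenizer, and `estimateTokens` is a helper name of my own:

```javascript
// Rough token estimate: ~4 characters per token for English-ish text.
// A heuristic only — use it to decide whether a diff is worth sending,
// not as an exact count.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// e.g. gate the review call on an estimated budget:
// if (estimateTokens(codeChanges) > 2000) { /* skip or chunk the diff */ }
```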

Step 4: Utility Functions

Create helper functions in src/utils.js:

function parseCodeChanges(diffData) {
  // Keep only the lines that look like code changes; drop most diff metadata
  const lines = diffData.split('\n');
  const relevantLines = lines.filter(line => {
    // Filter out metadata and focus on actual code changes
    return line.startsWith('+') || line.startsWith('-') || 
           line.startsWith('@@') || line.includes('function') || 
           line.includes('class') || line.includes('import');
  });

  return relevantLines.slice(0, 200).join('\n'); // Limit size
}

function formatReviewResponse({ summary, details, footer = '' }) {
  return `${summary}${details}${footer}`;
}

module.exports = {
  parseCodeChanges,
  formatReviewResponse
};
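To see what the filter actually keeps, here it is run against a toy diff (the function is repeated so the snippet runs on its own). One quirk worth knowing: the `---`/`+++` file headers survive the filter because they start with `-` and `+` — the model copes with that fine, but it does spend a few tokens:

```javascript
// Same filter as parseCodeChanges in src/utils.js, repeated for a
// self-contained demo.
function parseCodeChanges(diffData) {
  const lines = diffData.split('\n');
  const relevantLines = lines.filter(line =>
    line.startsWith('+') || line.startsWith('-') ||
    line.startsWith('@@') || line.includes('function') ||
    line.includes('class') || line.includes('import'));
  return relevantLines.slice(0, 200).join('\n');
}

const diff = [
  'diff --git a/app.js b/app.js',
  '--- a/app.js',
  '+++ b/app.js',
  '@@ -1,1 +1,1 @@',
  '-const x = 1;',
  '+const x = 2;'
].join('\n');

// The "diff --git" line is dropped; the other five lines are kept,
// including both file headers.
```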

Step 5: Production Deployment with GitHub Actions

Create .github/workflows/deploy.yml:

name: Deploy AI Code Reviewer

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v3

    - name: Setup Node.js
      uses: actions/setup-node@v3
      with:
        node-version: '18'
        cache: 'npm'

    - name: Install dependencies
      run: npm ci

    - name: Run tests
      run: npm test --if-present  # skips cleanly until a test script exists

    - name: Deploy to production
      if: github.ref == 'refs/heads/main'
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      run: |
        # Add your deployment commands here
        # For example, deploy to Heroku, Railway, or your preferred platform
        echo "Deploying to production..."

Step 6: Configuring the GitHub Webhook

  1. Go to your repository settings
  2. Navigate to "Webhooks" and click "Add webhook"
  3. Set the payload URL to your deployed app's /webhook endpoint
  4. Select "application/json" as the content type
  5. Set a webhook secret so your app can verify that payloads really come from GitHub
  6. Choose "Let me select individual events" and check:
    • Pull requests
  7. Ensure the webhook is active

Step 7: Environment Configuration

Create a .env file with your secrets:

GITHUB_TOKEN=your_github_personal_access_token
OPENAI_API_KEY=your_openai_api_key
PORT=3000

Important: Add .env to your .gitignore and use GitHub Secrets for production deployment.
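It's also worth failing fast at startup if a secret is missing, rather than discovering it on the first webhook. A small sketch (`missingEnvVars` is a helper name of my own; the variable names match the .env example above):

```javascript
// Return the names of any required environment variables that are unset.
function missingEnvVars(env, required = ['GITHUB_TOKEN', 'OPENAI_API_KEY']) {
  return required.filter((name) => !env[name]);
}

// At startup, before app.listen():
// const missing = missingEnvVars(process.env);
// if (missing.length > 0) {
//   console.error(`Missing environment variables: ${missing.join(', ')}`);
//   process.exit(1);
// }
```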

Handling Production Scenarios

Rate Limiting

Implement exponential backoff for OpenAI API calls:

async function callOpenAIWithRetry(requestFn, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await requestFn();
    } catch (error) {
      if (error.status === 429 && i < maxRetries - 1) {
        const delay = Math.pow(2, i) * 1000; // Exponential backoff
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
}
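The delay schedule above doubles each attempt: 1s, 2s, 4s, and so on. For a long retry chain you'll also want a cap so the bot never sleeps for minutes. Here's that schedule factored out (`backoffDelays` and the cap are my additions, not part of the handler above):

```javascript
// Compute the exponential-backoff delay schedule: 2^i * baseMs, capped.
function backoffDelays(maxRetries, baseMs = 1000, capMs = 30000) {
  return Array.from({ length: maxRetries }, (_, i) =>
    Math.min(Math.pow(2, i) * baseMs, capMs));
}

// backoffDelays(3) → delays of 1s, 2s, 4s; later attempts plateau at capMs
```

A common refinement is to add random jitter to each delay so that many retries triggered at once don't all hit the API again at the same instant.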

Customizing Review Criteria

You can customize the AI's focus by modifying the prompt:

const CUSTOM_PROMPTS = {
  security: "Focus primarily on security vulnerabilities and potential exploits.",
  performance: "Emphasize performance optimizations and efficiency concerns.",
  style: "Focus on code style, formatting, and adherence to team conventions."
};

// Use based on repository labels or configuration
const selectedPrompt = CUSTOM_PROMPTS[reviewType] || REVIEW_PROMPT;
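One way to wire this up is to drive the prompt choice from the PR's labels, which arrive in the webhook payload. A sketch, assuming your label names match the `CUSTOM_PROMPTS` keys (`selectPrompt` is a helper name of my own):

```javascript
const CUSTOM_PROMPTS = {
  security: "Focus primarily on security vulnerabilities and potential exploits.",
  performance: "Emphasize performance optimizations and efficiency concerns.",
  style: "Focus on code style, formatting, and adherence to team conventions."
};

// Pick a prompt from a PR's label names, falling back to a default.
// Labels come from pull_request.labels.map(l => l.name) in the payload.
function selectPrompt(labels, fallback = 'default review prompt') {
  const match = labels
    .map((l) => l.toLowerCase())
    .find((l) => l in CUSTOM_PROMPTS);
  return match ? CUSTOM_PROMPTS[match] : fallback;
}
```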

Testing Your Bot

  1. Deploy your application to a platform like Railway, Heroku, or Vercel
  2. Configure the webhook URL to point to your deployed endpoint
  3. Create a test pull request in your repository
  4. Watch as your AI bot automatically provides a review!

Conclusion

You now have a fully functional AI code review assistant that can:

  • Automatically trigger on pull requests
  • Provide intelligent, contextual feedback
  • Handle production scenarios gracefully
  • Scale with your team's needs

This bot won't replace human reviewers, but it's excellent for catching common issues, ensuring consistency, and providing immediate feedback. Consider it a first line of defense that helps your team focus on higher-level architectural and business logic concerns.

The real power comes from customization—adjust the prompts, add repository-specific rules, and fine-tune the feedback style to match your team's preferences. Happy coding!


Want to extend this further? Consider adding support for inline comments, integration with code quality tools, or custom review templates based on file types.



Top comments (1)

Matthew Hou

Building your own code review bot is a great way to understand both the capabilities and the failure modes of LLM-based review. One thing I've found: the reviews get dramatically better if you include codebase-specific conventions in the system prompt. Generic LLMs will flag things that your team intentionally does differently from 'standard' practice, which creates noise and reviewer fatigue. The fix is a few paragraphs covering your error handling patterns, naming conventions, and acceptable codebase-specific patterns. It's basically the same as onboarding a human reviewer — you have to tell them how your team specifically does things, not just expect them to infer it.