Rahul Singh

Posted on • Originally published at aicodereview.cc

CodeRabbit Custom Rules: Review Instructions Guide

Every team writes code differently. Your naming conventions, architectural patterns, testing standards, and security requirements are specific to your project - and a code review tool that cannot be taught those conventions is only half useful. CodeRabbit solves this with its .coderabbit.yaml configuration file, which lets you write custom review instructions in plain English and apply them automatically to every pull request.

The real power of CodeRabbit custom rules is not just in setting global preferences. It is in the ability to apply different review standards to different parts of your codebase, configure the tone and depth of feedback, control which files get reviewed and which get skipped, and encode your team's hard-won knowledge about what "good code" looks like in your specific project. This guide walks through all of it - from basic configuration to production-ready examples for React frontends, Python APIs, and microservice architectures.


Understanding the .coderabbit.yaml file

The .coderabbit.yaml file lives in the root of your repository. When CodeRabbit processes a pull request, it reads this file first and uses its contents to shape how the review runs. If no configuration file exists, CodeRabbit uses sensible defaults - but those defaults are generic. Adding a configuration file is how you make CodeRabbit work like a team member who actually understands your project's conventions.

Here is the minimal structure of a .coderabbit.yaml file:

language: "en-US"
tone_instructions: "Be direct. Explain why, not just what."

reviews:
  profile: "chill"
  high_level_summary: true
  instructions: |
    - Your global review rules go here

The file supports several top-level sections that control different aspects of the review. The most important ones are tone_instructions for controlling how CodeRabbit communicates, reviews.instructions for global review rules, reviews.path_instructions for directory-specific rules, reviews.path_filters for excluding files, and reviews.tools for enabling or disabling specific linters.

Every instruction you write in this file is interpreted by CodeRabbit's AI model. You are not writing regex patterns or rule IDs - you are writing plain English descriptions of what you want the reviewer to look for. This makes the configuration accessible to every developer on the team, not just those who know the syntax of a specific linting tool.

Global review instructions

The reviews.instructions field is where you define rules that apply to every file in every pull request. These are your team's universal standards - the things that should always be true regardless of which part of the codebase is being modified.

reviews:
  instructions: |
    - All public functions must have docstrings or JSDoc comments
    - Flag any function exceeding 50 lines as a refactoring candidate
    - Flag TODO comments that do not include a ticket number or author
    - Never approve hardcoded API keys, passwords, or secret values
    - Suggest early returns to reduce nesting depth
    - Flag catch blocks that swallow exceptions without logging

Each instruction is a sentence that tells CodeRabbit what to look for or what to flag. The AI interprets these contextually - it does not just pattern-match on keywords. When you write "Flag any function exceeding 50 lines," CodeRabbit counts the logical lines of the function and applies judgment about whether the length is justified by the function's complexity. When you write "Suggest early returns to reduce nesting depth," it identifies nested conditional blocks that could be simplified and generates a concrete suggestion.

A few guidelines for writing effective global instructions:

Be specific about what you want flagged. "Write clean code" is too vague to be actionable. "Flag functions with more than three levels of nesting" gives the AI a clear criterion to apply.

Include the why when it matters. Writing "Flag subprocess calls with shell=True because it creates command injection risk when user input is involved" helps CodeRabbit generate more informative review comments than just "Flag subprocess calls with shell=True."

Keep the list focused. Twenty global instructions will produce noisy reviews. Six to ten well-chosen rules that reflect your team's most common code quality issues will produce reviews that developers actually read.

Use negative instructions sparingly. "Never approve code that..." is useful for hard requirements like security rules. For softer preferences, "Suggest..." or "Flag..." gives the reviewer flexibility to apply judgment.
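
Putting the four guidelines together, a global instruction block might look like the following sketch - the rules here are illustrative examples, not a recommended set:

```yaml
reviews:
  instructions: |
    - Flag functions with more than three levels of nesting - deep
      nesting hides control flow and makes unit testing harder
    - Flag subprocess calls with shell=True because they create
      command injection risk when user input is involved
    - Never approve hardcoded API keys, passwords, or secret values
    - Suggest extracting logic that is duplicated in three or more places
```

Each rule has a concrete criterion, the first two include the why, the list stays short, and "Never" is reserved for the one hard security requirement.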

Path-based review rules

Path-based rules are the feature that makes CodeRabbit's configuration genuinely powerful. Different parts of a codebase have different quality standards. Test code has different conventions than production code. API routes need different review criteria than utility functions. Frontend components need different checks than backend services.

The reviews.path_instructions field lets you define all of this:

reviews:
  path_instructions:
    - path: "src/components/**"
      instructions: |
        - Flag components longer than 200 lines as candidates for splitting
        - Ensure useEffect hooks have proper dependency arrays
        - Flag direct DOM manipulation - use React state and refs instead
        - Check that event handlers are not recreated on every render
        - Flag inline styles - use CSS modules or styled-components

    - path: "src/api/**"
      instructions: |
        - Every endpoint must validate input before processing
        - Flag any database query that does not use parameterized values
        - Ensure all endpoints return proper HTTP status codes
        - Check that error responses include meaningful error messages
        - Flag endpoints that return raw database objects without serialization

    - path: "tests/**"
      instructions: |
        - Relaxed style rules - focus on test quality not formatting
        - Flag tests that do not assert anything meaningful
        - Flag tests with hardcoded sleep calls - suggest polling or mocking
        - Ensure each test function tests exactly one behavior
        - Flag tests that depend on execution order

Each path instruction block takes a glob pattern and a set of instructions. CodeRabbit matches the files changed in a pull request against these patterns and applies the corresponding instructions only to files that match. A file in src/components/Button.tsx receives the component-specific instructions. A file in tests/api/test_users.py receives the test-specific instructions. Files that match multiple patterns receive all applicable instructions.

The glob patterns follow standard syntax. ** matches any number of directories. * matches any filename. You can use file extensions to target specific languages - **/*.py for Python files, **/*.tsx for React TypeScript components.
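To make the matching behavior concrete, here is a hypothetical pair of overlapping patterns - the paths and rules are placeholders:

```yaml
reviews:
  path_instructions:
    # Matches src/components/Button.tsx, src/components/forms/Input.tsx, etc.
    - path: "src/components/**"
      instructions: |
        - Component-specific rules apply here

    # Matches every Python file anywhere in the repository
    - path: "**/*.py"
      instructions: |
        - Python-specific rules apply here

# A file such as src/components/helpers.py matches BOTH patterns
# and receives both sets of instructions during review.
```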

Controlling review scope with path filters

Not every file in your repository benefits from AI code review. Auto-generated migration files, lock files, build artifacts, and vendored dependencies produce noise when reviewed. The path_filters field lets you exclude them:

reviews:
  path_filters:
    - "!**/migrations/**"
    - "!**/generated/**"
    - "!package-lock.json"
    - "!yarn.lock"
    - "!pnpm-lock.yaml"
    - "!**/*.min.js"
    - "!**/*.min.css"
    - "!vendor/**"
    - "!dist/**"
    - "!build/**"

The exclamation mark prefix means "exclude this pattern." Any file matching an excluded pattern is skipped entirely - CodeRabbit will not generate comments, summaries, or analysis for it. This keeps the review focused on code that your team actually wrote and maintains.

You can also use positive patterns to explicitly include only certain paths:

reviews:
  path_filters:
    - "src/**"
    - "tests/**"
    - "scripts/**"

When only positive patterns are present, CodeRabbit reviews only files that match at least one pattern. Everything else is excluded. This is useful for monorepos where you want CodeRabbit active on specific packages but not the entire repository.

Tone and communication style

How CodeRabbit communicates matters as much as what it catches. A review comment that sounds like a lecture will be ignored. A comment that sounds like a helpful colleague will be acted on. The tone_instructions field controls this:

tone_instructions: "Be direct and concise. Skip pleasantries. Explain the reasoning behind each suggestion in one sentence. Use code examples when suggesting alternatives."

You can also control the overall review intensity with the reviews.profile setting:

  • "chill" - Fewer comments, focused on substantive issues. Style nitpicks are suppressed unless they are genuinely important. This is the right choice for experienced teams that want high-signal feedback.
  • "assertive" - More comments, covering everything from logic errors to minor style inconsistencies. Useful for teams with junior developers who benefit from detailed guidance, or for codebases going through a quality improvement initiative.

Additional formatting options:

reviews:
  profile: "chill"
  high_level_summary: true
  poem: false
  collapse_walkthrough: false
  request_changes_workflow: false

Setting high_level_summary: true adds a summary comment at the top of each review that describes the overall intent of the pull request. This is useful for reviewers who want a quick understanding before diving into line-by-line comments. Setting poem: false disables the playful poem that CodeRabbit sometimes adds to reviews - appropriate for most professional environments. Setting request_changes_workflow: false means CodeRabbit posts comments without blocking the PR, leaving the decision to approve or request changes to human reviewers.

Configuring linters and tools

CodeRabbit's Pro plan includes 40+ built-in linters that run alongside the AI review. You can enable or disable specific linters in your configuration:

reviews:
  tools:
    eslint:
      enabled: true
    pylint:
      enabled: true
    ruff:
      enabled: true
    biome:
      enabled: false
    markdownlint:
      enabled: true
    shellcheck:
      enabled: true
    hadolint:
      enabled: true

The linter results are incorporated into CodeRabbit's AI analysis. When Pylint flags a violation and the AI also identifies the same issue as problematic in context, CodeRabbit combines them into a single, more informative comment rather than posting duplicate findings.

For teams that already run linters in CI, enabling the same linters in CodeRabbit is mildly redundant but not wasteful. CodeRabbit's linter runs happen in parallel with the AI analysis and add less than a second to review time. The benefit is that developers see linter violations alongside contextual AI comments in a single review, rather than switching between CI logs and PR comments.

Practical example: React frontend project

Here is a complete .coderabbit.yaml for a React TypeScript frontend project:

language: "en-US"
tone_instructions: "Be concise. Use code examples in suggestions. Focus on React best practices and performance."

reviews:
  profile: "chill"
  high_level_summary: true
  poem: false
  request_changes_workflow: false

  tools:
    eslint:
      enabled: true
    biome:
      enabled: false

  path_filters:
    - "!dist/**"
    - "!build/**"
    - "!node_modules/**"
    - "!package-lock.json"
    - "!**/*.generated.ts"
    - "!**/*.d.ts"

  instructions: |
    - Flag components that mix data fetching and presentation logic
    - Suggest extracting custom hooks when useState/useEffect patterns repeat
    - Flag missing error boundaries around components that fetch data
    - Flag console.log statements that should not ship to production
    - Ensure accessibility - flag interactive elements missing aria labels

  path_instructions:
    - path: "src/components/**"
      instructions: |
        - Flag components longer than 200 lines
        - Ensure useEffect hooks list all dependencies
        - Flag direct DOM manipulation via document.querySelector
        - Check that event handlers use useCallback when passed as props
        - Flag components that accept more than 5 props as candidates for refactoring
        - Ensure form components have proper label associations

    - path: "src/hooks/**"
      instructions: |
        - Custom hooks must start with 'use' prefix
        - Flag hooks that do not clean up subscriptions or timers in useEffect return
        - Ensure hooks return stable references where appropriate via useMemo

    - path: "src/api/**"
      instructions: |
        - Ensure all API calls include error handling
        - Flag hardcoded API URLs - use environment variables
        - Check that request and response types are defined with TypeScript interfaces

    - path: "src/__tests__/**"
      instructions: |
        - Use React Testing Library patterns - flag enzyme usage
        - Prefer userEvent over fireEvent for interaction tests
        - Flag tests that query by class name or tag - use accessible roles and labels
        - Ensure async operations use waitFor or findBy queries

This configuration produces reviews that understand React conventions. When a developer opens a PR that adds a 250-line component with three useEffect hooks and inline styles, CodeRabbit will suggest splitting the component, flag missing dependency arrays, and recommend CSS modules - all in the context of what the component is actually doing.

Practical example: Python API project

Here is a configuration for a FastAPI or Django REST API project:

language: "en-US"
tone_instructions: "Be direct. Reference PEP 8 and Python best practices by name when relevant."

reviews:
  profile: "chill"
  high_level_summary: true
  poem: false

  tools:
    pylint:
      enabled: true
    ruff:
      enabled: true

  path_filters:
    - "!**/migrations/**"
    - "!alembic/**"
    - "!setup.py"
    - "!**/__pycache__/**"
    - "!*.egg-info/**"

  instructions: |
    - Enforce type annotations on all public functions
    - Flag mutable default arguments in function signatures
    - Flag bare except clauses - require specific exception types
    - Flag SQL queries built with string formatting or f-strings
    - Ensure all environment variable access has a fallback or validation

  path_instructions:
    - path: "app/api/**"
      instructions: |
        - Every endpoint must validate input with Pydantic models
        - Flag endpoints missing response_model declarations
        - Ensure async endpoints do not call synchronous database operations
        - Check that authentication dependencies are applied consistently
        - Flag endpoints that return raw database models without serialization

    - path: "app/models/**"
      instructions: |
        - Ensure database models include __repr__ methods
        - Flag nullable fields that lack a clear justification
        - Check that foreign key relationships include appropriate cascade rules
        - Ensure indexes are defined for frequently queried fields

    - path: "app/services/**"
      instructions: |
        - Service functions should not access request objects directly
        - Flag functions that combine business logic with database queries
        - Ensure all external API calls include timeout parameters
        - Check that retry logic uses exponential backoff

    - path: "tests/**"
      instructions: |
        - Use pytest fixtures over manual setup
        - Flag tests that hit real external services - use mocks
        - Ensure each test function tests one behavior
        - Flag assertions on implementation details rather than behavior

This setup catches Python-specific anti-patterns at the API level while keeping test reviews focused on test quality rather than style. The path-based rules mean that a change to a database model gets different review criteria than a change to an API endpoint, even though both are Python files.

Practical example: microservices architecture

Microservice repositories - especially monorepos containing multiple services - benefit heavily from path-based configuration because each service may use a different language or framework:

language: "en-US"
tone_instructions: "Be concise. Focus on service boundary concerns, API contracts, and deployment safety."

reviews:
  profile: "chill"
  high_level_summary: true
  poem: false

  path_filters:
    - "!**/node_modules/**"
    - "!**/__pycache__/**"
    - "!**/vendor/**"
    - "!**/*.lock"
    - "!**/dist/**"
    - "!infrastructure/terraform/.terraform/**"

  instructions: |
    - Flag any service that directly imports from another service's internal modules
    - Ensure all inter-service communication goes through defined API contracts
    - Flag hardcoded service URLs - use service discovery or environment variables
    - Check that all configuration values come from environment variables or config files

  path_instructions:
    - path: "services/auth/**"
      instructions: |
        - Apply strictest security review to all changes
        - Flag any change to token generation or validation logic for careful review
        - Ensure password hashing uses bcrypt or argon2
        - Check that rate limiting is applied to login endpoints
        - Flag any logging that might include sensitive user data

    - path: "services/*/api/**"
      instructions: |
        - Ensure API versioning is maintained
        - Flag breaking changes to response schemas
        - Check that all endpoints include request validation
        - Ensure error responses follow the shared error format

    - path: "infrastructure/**"
      instructions: |
        - Flag security group rules that allow 0.0.0.0/0 access
        - Check that all resources include proper tagging
        - Ensure database instances are not publicly accessible
        - Flag any reduction in replica count or resource limits

    - path: "services/*/Dockerfile"
      instructions: |
        - Ensure multi-stage builds are used to minimize image size
        - Flag use of latest tag for base images - pin specific versions
        - Check that non-root user is configured
        - Ensure health check instructions are defined

The monorepo configuration demonstrates how path instructions can enforce architectural boundaries. The rule about cross-service imports catches a common microservice anti-pattern where developers accidentally create tight coupling between services by importing directly from another service's internal code rather than going through API contracts.

Advanced configuration patterns

Language-specific instructions

If your repository contains code in multiple languages, you can use file extension patterns in path_instructions to apply language-specific rules:

reviews:
  path_instructions:
    - path: "**/*.py"
      instructions: |
        - Enforce PEP 8 naming conventions
        - Require type annotations on public functions
        - Flag use of assert in production code

    - path: "**/*.ts"
      instructions: |
        - Flag use of 'any' type
        - Ensure interfaces are preferred over type aliases for object shapes
        - Flag non-null assertions (!) without a justifying comment

    - path: "**/*.go"
      instructions: |
        - Ensure errors are checked, not ignored with _
        - Flag goroutines launched without a way to signal shutdown
        - Check that defer is used for resource cleanup

Review scope control

You can control what CodeRabbit summarizes and how it structures its output:

reviews:
  collapse_walkthrough: true
  review_status: true
  auto_review:
    enabled: true
    drafts: false
    base_branches:
      - main
      - develop

Setting drafts: false prevents CodeRabbit from reviewing draft pull requests. This is useful when developers use draft PRs for work in progress and do not want review noise until the PR is ready. The base_branches option limits reviews to PRs targeting specific branches - handy for repositories that use feature branches heavily and only want reviews on PRs going to main or develop.

Combining global and path rules

Global instructions and path instructions are additive. A Python file in app/api/users.py receives the global instructions plus any path instructions matching app/api/** and **/*.py. This layering lets you set baseline standards globally and add specificity for particular directories or file types.

The order of evaluation is: global instructions first, then path instructions in the order they appear in the configuration file. If a global instruction says "Flag functions over 50 lines" and a path instruction for test files says "Relaxed style rules - focus on test quality not formatting," the AI interprets both in context and generally prioritizes the more specific path instruction for files in that directory.
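
The layering can be visualized as a single annotated configuration - the paths and rules below are hypothetical:

```yaml
reviews:
  # Layer 1: baseline rules for every file in every PR
  instructions: |
    - Flag functions longer than 50 lines

  path_instructions:
    # Layer 2: everything under app/api/
    - path: "app/api/**"
      instructions: |
        - Every endpoint must validate input before processing

    # Layer 3: every Python file
    - path: "**/*.py"
      instructions: |
        - Require type annotations on public functions

# A change to app/api/users.py is reviewed against all three layers;
# a change to scripts/build.sh sees only layer 1.
```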

Common configuration mistakes

Too many global instructions. More than 10 to 12 global instructions produces diminishing returns. The AI review becomes noisy, and developers start ignoring comments. Keep global rules to your highest-priority standards and use path instructions for everything else.

Vague instructions. "Write good tests" tells CodeRabbit nothing it can act on. "Ensure each test function contains at least one assertion and tests exactly one behavior" is specific enough to produce useful feedback.

Not excluding generated files. Migration files, lock files, compiled assets, and auto-generated code produce irrelevant review comments. Always add them to path_filters.

Using "assertive" profile prematurely. Start with "chill" and increase the review intensity only after your team is comfortable reading and acting on CodeRabbit's feedback. An assertive review on a codebase with no existing conventions produces an overwhelming volume of comments.

Forgetting to version control the configuration. The .coderabbit.yaml file should be committed to your repository and reviewed through the same PR process as code changes. This ensures the entire team agrees on review standards and changes to the configuration are tracked.

Comparing CodeRabbit's configuration to alternatives

CodeRabbit's plain-English instruction system is unusual among AI code review tools. Most competitors use one of two approaches: either they offer no customization beyond enabling or disabling built-in rules, or they require configuration in a tool-specific DSL that has its own learning curve.

CodeAnt AI offers custom rules through its platform but focuses more on centralized policy management across repositories rather than per-repo YAML configuration. CodeAnt AI's approach is better for organizations that want consistent rules enforced across 50+ repositories from a central dashboard. CodeRabbit's approach is better for teams that want each repository to own its review configuration, version-controlled alongside the code.

Tools like Sourcery and DeepSource provide rule configuration through their own rule systems - more precise than plain English for specific patterns but harder to extend for team-specific conventions. You cannot tell Sourcery "ensure all API endpoints validate input before processing" in plain English. You can with CodeRabbit.

The trade-off is precision. A Sourcery rule that flags a specific anti-pattern will catch it every time with zero false positives. A CodeRabbit plain-English instruction catches most instances but might miss edge cases or occasionally flag false positives because interpretation is probabilistic rather than deterministic. For most teams, the flexibility of plain-English rules outweighs the occasional imprecision.

Getting started with custom rules

If you are new to CodeRabbit configuration, here is a practical approach to building your .coderabbit.yaml over time:

  1. Start with the basics. Add a .coderabbit.yaml with profile: "chill", high_level_summary: true, and poem: false. Exclude auto-generated files with path_filters. Commit it and open a few PRs to see what the default review looks like.

  2. Add three to five global instructions. Look at the last ten code review comments your team wrote manually. What patterns do you flag most often? Encode those as instructions. Common starting points are function length limits, missing documentation, hardcoded secrets, and bare exception handling.

  3. Add path instructions for your highest-traffic directories. If most of your pull requests touch src/components/ and src/api/, start there. Write three to five instructions for each path that reflect the conventions specific to that part of the codebase.

  4. Tune the tone. After a week of reviews, ask your team whether the comments are helpful. Adjust tone_instructions based on feedback. Some teams prefer explanations with every suggestion. Others want just the suggestion with no elaboration.

  5. Iterate monthly. Review your .coderabbit.yaml once a month. Remove instructions that produce too many false positives. Add new ones when you notice recurring issues in manual code reviews. The configuration should evolve with your codebase.

For teams evaluating whether CodeRabbit is the right tool, the CodeRabbit review covers pricing, features, and real-world performance in detail. And if you are interested in how CodeRabbit handles specific languages, the CodeRabbit for Python guide shows what language-specific review looks like in practice.


Frequently Asked Questions

What is a .coderabbit.yaml file?

A .coderabbit.yaml file is a configuration file placed in the root of your repository that controls how CodeRabbit reviews your pull requests. It lets you define custom review instructions in plain English, set path-based rules for different parts of your codebase, configure review tone and scope, enable or disable specific linters, and exclude files or directories from review. The file is version-controlled alongside your code so the entire team shares the same review configuration.

How do I write custom review instructions for CodeRabbit?

Add a reviews.instructions key to your .coderabbit.yaml file and write your rules in plain English. For example, you can write instructions like 'Flag any function longer than 50 lines' or 'Ensure all API endpoints validate input before processing.' CodeRabbit's AI interprets these instructions and applies them to every pull request. You can also use path_instructions to apply different rules to different directories or file types.

Can CodeRabbit apply different rules to different files or folders?

Yes. The path_instructions feature in .coderabbit.yaml lets you define different review instructions for specific file paths using glob patterns. For example, you can set strict type-checking rules for source code in src/** while applying relaxed style rules and test-specific quality checks to files in tests/**. Each path instruction block includes a glob pattern and a set of plain-English review rules.

How do I configure CodeRabbit's review tone?

Use the tone_instructions field in your .coderabbit.yaml file to set the tone of CodeRabbit's review comments. You can write any natural language instruction such as 'Be direct and concise' or 'Explain the reasoning behind each suggestion.' Additionally, the reviews.profile setting accepts values like 'chill' for fewer comments focused on substantive issues, or 'assertive' for exhaustive feedback on every finding including style.

What files should I exclude from CodeRabbit review?

Use the path_filters field in .coderabbit.yaml to exclude files from review. Common exclusions include auto-generated files like database migrations, lock files such as package-lock.json or yarn.lock, build output directories, vendor or third-party code, and configuration files that rarely change. Use the exclamation mark prefix to negate a pattern, for example '!migrations/**' excludes all migration files.

Does CodeRabbit support custom rules for React projects?

Yes. You can write path_instructions targeting React component files with rules like 'Flag components longer than 200 lines as candidates for splitting,' 'Ensure useEffect hooks have proper dependency arrays,' 'Flag direct DOM manipulation in React components,' and 'Require error boundaries around async data-fetching components.' CodeRabbit understands React patterns and applies these instructions contextually during review.

