Anthropic Launches Multi-Agent Code Review, and Early Users Are Doubling Pull Request Throughput
On March 9, 2026, Anthropic announced a major new product: multi-agent AI code review powered by Claude. According to early users, the system handles the initial review pass fast enough that some teams have doubled their pull request throughput.
How It Works
The system uses multiple Claude agents that collaborate on reviewing code. It integrates directly with GitHub, and unlike typical linters or static analysis tools, it focuses on logic errors: the kind of mistakes that slip through automated checks and cost hours of debugging.
Key details:
- Available as a research preview for Claude for Teams and Claude for Enterprise
- Integrates with GitHub pull request workflows
- Multi-agent architecture: several Claude instances review different aspects simultaneously
- Focus: logic errors, not just syntax
- Anthropic is running at a $2.5 billion revenue run-rate as of early 2026
- Enterprise subscriptions grew 4x since January 2026
- Major users include Uber, Salesforce, and Accenture
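Anthropic has not published implementation details, but the multi-agent pattern described above — several agents reviewing different aspects of a change in parallel, with their findings merged into one review — can be sketched as follows. Everything here (the aspect list, the `review_aspect` stub, the toy heuristic) is illustrative, not Anthropic's actual system; a real deployment would send the diff to a Claude instance with an aspect-specific prompt instead of the stand-in function.

```python
# Hypothetical sketch of a multi-agent review fan-out. Each "agent" focuses
# on one aspect of the diff; results are gathered into a single review.
from concurrent.futures import ThreadPoolExecutor

ASPECTS = ["logic errors", "security", "API misuse", "test coverage"]

def review_aspect(aspect: str, diff: str) -> dict:
    """Stand-in for one reviewing agent. A real system would call a model
    with an aspect-specific prompt; here a toy heuristic keeps it runnable."""
    findings = []
    if aspect == "logic errors" and "if x = " in diff:
        findings.append("assignment used where a comparison was likely intended")
    return {"aspect": aspect, "findings": findings}

def multi_agent_review(diff: str) -> list[dict]:
    # Fan out one agent per aspect concurrently, then collect all findings.
    with ThreadPoolExecutor(max_workers=len(ASPECTS)) as pool:
        return list(pool.map(lambda a: review_aspect(a, diff), ASPECTS))

reviews = multi_agent_review("if x = compute_total():")
```

The design choice worth noting is the fan-out/fan-in shape: because each aspect is reviewed independently, agents can run simultaneously, and the orchestrator only has to merge structured findings at the end.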
Why This Matters
Code review is one of the biggest bottlenecks in software teams. Senior engineers commonly spend 20-30% of their time reviewing others' code, time that could otherwise go to building. If AI can handle the initial pass (and especially catch logic bugs, not just style issues), engineering teams can ship faster with the same headcount.
The Business Angle
This fits into a broader pattern: agents that do real work, not just assist. Whether it is AI agents handling customer service for Israeli businesses or Anthropic's Claude agents reviewing code, the shift from AI as chatbot to AI as autonomous worker is accelerating.
For businesses curious how AI agents can save time in their own operations, AI Buddy builds and deploys custom AI agents for Israeli companies.
Originally published in Hebrew on VibeTech.