I posted a question on the Cursor forum asking teams what breaks first when you go from solo to team usage. Three different teams replied with detailed workflows. Here's what surprised me.
Nobody uses .cursorrules
One team uses agents.md with a custom "devprompts" directory. Another tracks agents, hooks, rules, and skills in Git on a monorepo. The third just uses Cursor rules on a team plan but called the AI "a wild horse."
To be clear, they all use Cursor's rules features. But nobody's using the old .cursorrules file format. They've moved to newer approaches like .mdc files, agents.md, and custom directory structures.
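For context, the newer .mdc format is a Markdown file with YAML frontmatter that lives under .cursor/rules/. A minimal sketch of what one might look like (the description, glob, and rule text here are illustrative, not from any of the teams):

```markdown
---
description: TypeScript error-handling conventions
globs: ["src/**/*.ts"]
alwaysApply: false
---

- Wrap external API calls in try/catch and surface typed errors.
- Never swallow exceptions silently; log through the shared logger.
```

Because the frontmatter scopes the rule to matching files, it only enters context when relevant, which is part of why teams have moved away from the single monolithic .cursorrules file.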
Review is the actual bottleneck
The monorepo team was blunt about it: once your AI can generate code fast, the constraint shifts to review. You need strong reviewers, and people who were good at writing code now need to become good at reviewing AI code. That's a different skill.
They use CodeRabbit and Cursor BugBot for automated review, but they still need human eyes. AI-generated code needs to be reviewed, full stop.
Token management is a real problem
One team has someone at 98% of their monthly token allowance with two days left in the cycle. Their workaround: plan first in auto mode, review the plan, then execute. Most of the team stays around 80% that way.
Another team runs out of Cursor credits entirely and falls back to Google AI Pro with Antigravity and Gemini CLI.
Multi-tool teams are the norm
Every team uses more than just Cursor:
- Cursor + Jules + Antigravity + Claude Code (trialing)
- Cursor + Google AI Pro + Antigravity + Gemini CLI
- Cursor + CodeRabbit + BugBot
And the configs don't port between tools. The team trialing Claude Code was frustrated it doesn't support agents.md.
What this means
If you're building for Cursor teams, the problems aren't "which rules to write." The problems are:
- Making sure the AI actually follows whatever rules you set up
- Reviewing AI output at scale without burning out your senior devs
- Managing token budgets across a team
- Dealing with config fragmentation across multiple AI tools
I'm working on the first one with cursor-doctor. The others are still wide open.
Based on conversations with teams on the Cursor forum.
Check your setup: npx cursor-doctor scan finds broken rules, conflicts, and token waste. Free on npm.
More from this series: 77 free .mdc rules · cursor-doctor (catches broken rules before they waste tokens) · All articles
I made a free Cursor Safety Checklist: a pre-flight checklist for AI-assisted coding sessions, based on actual experiments.