I keep seeing people online worry about how many `.cursorrules` they can have before Cursor starts ignoring them. "Don't use too many rules," "keep it under 10," that kind of thing. But where does this advice come from, and is there any truth to it?
So I tested it. I made 50 rules, loaded them all at once, and ran the same refactoring task 18 times across different rule counts.
## The setup
I created rules for things that are easy to verify: `always-semicolons`, `no-console-log`, `interface-prefix`, `early-return`; rules where you can look at the output and immediately tell whether they were followed. I started at 1 rule and scaled up: 1, 5, 10, 20, 30, 50. Every rule had `alwaysApply: true` and proper frontmatter. Same task, 3 runs at each level.
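For reference, each rule lived in its own file with frontmatter along these lines. The file contents here are illustrative, not one of my exact rules, but the frontmatter fields (`description`, `globs`, `alwaysApply`) match Cursor's documented rule format:

```markdown
---
description: Disallow console.log in committed code
globs: "**/*.ts"
alwaysApply: true
---

Never leave `console.log` calls in generated code. Use the project
logger instead, or remove the statement entirely.
```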
I expected compliance to start falling off somewhere around 15-20 rules; that's what the online advice implies. Context windows have limits, models forget stuff, right?
## What actually happened
| Rules | Run 1 | Run 2 | Run 3 |
|---|---|---|---|
| 1-50 | 100% | 100% | 100% |
100% compliance across all 18 runs. At every level. Including all 50 rules at once!
I honestly thought I'd messed up the test. I checked the output files manually, rule by rule, and every single one was addressed. Some rules got marked "N/A" when they didn't apply to the test file (like `cors-explicit` when there's no API endpoint), but the model explicitly acknowledged them instead of silently skipping them.
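Checking rule by rule doesn't scale past a few runs, so the mechanical rules are worth scripting. A minimal sketch of the idea; these checkers are simplified stand-ins, not my actual verification harness:

```python
# Simplified compliance checkers for two mechanical rules.
# Each returns True if the rule is satisfied in the given source text.
def no_console_log(source: str) -> bool:
    return "console.log" not in source

def always_semicolons(source: str) -> bool:
    # Naive check: every non-empty line that isn't a comment or a
    # brace/block line should end with a terminator.
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith(("//", "/*", "*")):
            continue
        if stripped.endswith(("{", "}", ";", ",")):
            continue
        return False
    return True

CHECKS = {
    "no-console-log": no_console_log,
    "always-semicolons": always_semicolons,
}

def compliance(source: str) -> dict[str, bool]:
    """Map each rule name to whether the source complies with it."""
    return {rule: check(source) for rule, check in CHECKS.items()}
```

A checker like this only works for rules with a mechanical definition; the fuzzier rules (naming intent, architectural preferences) still need eyeballs.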
## You don't need to keep it under 10
At least for the current version of Cursor with Auto mode. I can't speak for older versions or other tools, but right now, 50 rules with `alwaysApply: true` and proper frontmatter all fire correctly.
## Why I think people are still saying to keep it low
The people warning about "too many rules" are probably running into a different problem: bad frontmatter, missing `alwaysApply`, vague rules that the model interprets differently than intended. Those are real issues that look like "the model forgot my rule" but are actually structural failures.
I've seen this pattern a lot. Someone has 15 rules, 3 of them aren't firing, and they assume it's a quantity problem when it's actually a formatting problem. The rule count isn't the bottleneck. The rule quality is.
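The formatting problem is the easy one to rule out, because the frontmatter check can be automated. A rough sketch, assuming rules are files with YAML-style frontmatter as shown earlier; the parsing is deliberately naive and the problem messages are my own wording:

```python
from pathlib import Path

def check_rule_file(path: Path) -> list[str]:
    """Return a list of structural problems found in one rule file."""
    problems = []
    text = path.read_text()
    if not text.startswith("---"):
        problems.append("missing frontmatter block")
        return problems
    # Frontmatter is everything between the first two '---' markers.
    parts = text.split("---", 2)
    if len(parts) < 3:
        problems.append("frontmatter never closed")
        return problems
    frontmatter = parts[1]
    if "alwaysApply: true" not in frontmatter:
        problems.append("alwaysApply is not set to true")
    if not parts[2].strip():
        problems.append("rule body is empty")
    return problems
```

Run something like this over your rules directory before blaming the rule count; in my experience it surfaces exactly the "3 of 15 aren't firing" cases.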
## But that was a toy project
The test above used a single file with a straightforward refactoring task. Real projects are different. You've got thousands of lines across multiple files, complex prompts, and the model has to juggle your project context alongside all those rules.
So I ran the same 50 rules against a real codebase: cursor-lint itself (4 files, ~900 lines). Instead of a simple single-file refactor, I asked Cursor to do a multi-file architectural change.
| Test | Run 1 | Run 2 | Run 3 |
|---|---|---|---|
| Single file (50 rules) | 50/50 | 50/50 | 50/50 |
| Real project (50 rules) | 48/50 | 49/50 | 48/50 |
96-98% compliance. Not 100%. One or two rules got silently dropped each run, and here's the interesting part: it wasn't the same rules every time. Different ones fell off in different runs. No pattern I could find.
My read on this: with a toy file, the model has plenty of context budget for rules. In a real project, rules are competing with your actual code for attention. Most of them still fire. But if you're relying on every single rule hitting every single time, you might get surprised.
## The actual takeaway
50 rules works. Even in a real project, you're getting 96%+ compliance. The "keep it under 10" advice is still wrong. But "load 50 rules and forget about it" isn't quite right either. If you have rules that absolutely must fire every time, keep them specific, keep the frontmatter clean, and spot-check occasionally.
🔧 Want to check your own setup? Run `npx cursor-doctor scan` to find broken frontmatter, conflicts, and token waste in your Cursor rules. Free, zero dependencies.