Most .cursorrules files are wishful thinking. "Write clean code." "Follow best practices." "Be consistent."
I tested a bunch of these through Cursor CLI with before/after comparisons. The vague ones? Zero measurable difference. Cursor ignored them completely.
But specific rules changed the output every time.
Here's the framework I use to write rules that actually work.
Why Most Rules Fail
Take this rule:
Write clean, maintainable code.
I tested it. Same prompt, with and without the rule. Identical output both times. Cursor already tries to write decent code. Telling it to "write clean code" is like telling a chef to "make it taste good."
The problem isn't Cursor. The problem is the rule gives Cursor nothing to act on. What does "clean" mean? Shorter functions? More comments? Fewer dependencies? Cursor can't verify its own compliance because there's nothing concrete to check.
The Three Criteria
After testing dozens of rules, I found that the ones that work share three properties:
1. Specific
The rule describes an exact pattern, not a general direction.
Bad:
Handle errors properly.
Good:
Every async function must return a Result type:
{ ok: true, data: T } | { ok: false, error: string }.
Never throw exceptions from async functions.
The first rule is a suggestion. The second is a specification. Cursor can look at its output and verify: "Did I return a Result type? Yes or no."
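Here's a minimal sketch of what that rule produces in practice. The Result alias matches the rule; the User shape and fetch URL are illustrative, not from the tested output:

```typescript
// Result: success carries data, failure carries a message. No throwing.
type Result<T> = { ok: true; data: T } | { ok: false; error: string };

interface User {
  id: string;
  name: string;
}

async function fetchUser(id: string): Promise<Result<User>> {
  try {
    const res = await fetch(`/api/users/${id}`);
    if (!res.ok) return { ok: false, error: `HTTP ${res.status}` };
    return { ok: true, data: (await res.json()) as User };
  } catch (e) {
    // Never rethrow: callers only ever see a Result.
    return { ok: false, error: e instanceof Error ? e.message : String(e) };
  }
}
```

Callers then branch on ok instead of wrapping every call in try/catch, which is exactly the kind of thing you can spot at a glance.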
2. Verifiable
You should be able to check compliance in a 2-second code review. If you can't tell whether the rule was followed by glancing at the output, Cursor can't either.
Bad:
Write performant code.
Good:
Every React component must have a Props interface
defined above the component. No inline type annotations
in the function signature.
You can see a Props interface in the output. You can't see "performant."
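For example, a hypothetical component that satisfies the rule:

```tsx
// Named Props interface above the component, no inline type in the signature.
interface GreetingProps {
  name: string;
  excited?: boolean;
}

export function Greeting({ name, excited }: GreetingProps) {
  return <p>Hello, {name}{excited ? "!" : "."}</p>;
}
```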
3. Additive
The rule asks for something Cursor wouldn't do by default. This is where people waste the most rules.
I tested a rule that said "prefer Server Actions over API routes for Next.js mutations." Both the control (no rule) and the rule version used Server Actions. Cursor already defaults to this. The rule was dead weight.
Before adding a rule, test the prompt without it. If Cursor already does what you want, skip the rule and save the context window for something that matters.
Building Rules: From Vague to Specific
Let me walk through three examples.
Example 1: Type Safety
Most people start here:
Use TypeScript and proper types.
This does almost nothing. Cursor already generates TypeScript when your project uses it. Try this instead:
Use branded types for ID fields to prevent mixing up
different entity IDs. For example, use
type UserId = string & { readonly __brand: 'UserId' }
instead of plain string for user IDs.
I tested this one. Without the rule, getUser(userId) and getOrder(orderId) both accepted plain string. With the rule, Cursor created separate branded types, and TypeScript caught the mix-up at compile time.
The difference: the first rule describes a goal ("proper types"). The second describes a pattern Cursor can copy.
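Sketched out, with hypothetical functions (the brand fields are what do the work):

```typescript
type UserId = string & { readonly __brand: 'UserId' };
type OrderId = string & { readonly __brand: 'OrderId' };

// Cast once, at the boundary where raw strings enter the system.
const asUserId = (raw: string) => raw as UserId;
const asOrderId = (raw: string) => raw as OrderId;

declare function getUser(id: UserId): void;
declare function getOrder(id: OrderId): void;

getUser(asUserId('u_123'));     // OK
// getUser(asOrderId('o_456')); // compile error: OrderId is not assignable to UserId
```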
Example 2: File Creation
Keep files organized.
Versus:
When creating a page.tsx file in the Next.js app directory,
ALWAYS create a corresponding error.tsx file in the same
directory.
Without the specific rule, Cursor creates page.tsx and moves on. With it, you get both files every time. Cursor even noted it was "satisfying the workspace rule."
File creation rules are some of the most reliable because they're binary. Either the file exists or it doesn't. No interpretation needed.
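For reference, the shape of the generated file follows Next.js's error-file convention. This is a generic sketch, not Cursor's exact output:

```tsx
// app/some-route/error.tsx (hypothetical path)
// Next.js requires error files to be Client Components.
'use client';

export default function Error({
  error,
  reset,
}: {
  error: Error & { digest?: string };
  reset: () => void;
}) {
  return (
    <div>
      <p>Something went wrong: {error.message}</p>
      {/* reset() re-attempts rendering the route segment */}
      <button onClick={() => reset()}>Try again</button>
    </div>
  );
}
```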
These rule patterns come from the free Starter Pack. Drop it into your project; it takes 30 seconds.
Example 3: Immutability
Write safe code.
Versus:
All interface properties must be readonly by default.
Use ReadonlyArray<T> instead of T[]. Only remove readonly
when mutation is explicitly needed.
Tested this one too. The control had zero readonly keywords anywhere. The rule version added readonly to every property and used ReadonlyArray for all array types. Clear, verifiable difference.
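On a hypothetical interface, the rule version looks like this:

```typescript
interface Invoice {
  readonly id: string;
  readonly lineItems: ReadonlyArray<string>; // rather than string[]
  readonly total: number;
}

declare const invoice: Invoice;
// invoice.total = 0;             // error: 'total' is a read-only property
// invoice.lineItems.push('fee'); // error: 'push' does not exist on ReadonlyArray
```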
How to Test Your Own Rules
You don't need a fancy setup. Here's the process:
- Pick a prompt you use often ("create a user registration form", "build a REST endpoint", etc.)
- Run it in Cursor without any rules. Save the output.
- Add your rule to .cursorrules.
- Run the exact same prompt. Save the output.
- Diff them.
If the output changed in the way you expected, the rule works. If it's identical or changed in a way you didn't intend, iterate or scrap it.
What to look for in the diff:
- New patterns that weren't there before (types, error handling, file creation)
- Structural changes (different imports, different function signatures)
- Comments referencing the rule (Cursor sometimes explains why it's following a rule)
When to scrap a rule:
- Output is identical with and without it
- Cursor already does what the rule asks
- The rule is so long or complex that Cursor seems to ignore parts of it
Common Mistakes
Rules that are too long. Your .cursorrules file competes with your actual code for context window space. If you stuff 500 lines of rules in there, Cursor may start ignoring the ones near the bottom. Keep it focused. 20-30 targeted rules beat 100 vague ones.
Rules that contradict each other. "Always use try/catch for error handling" plus "return Result types instead of throwing" will confuse Cursor. Pick one approach and commit.
Rules that describe Cursor's defaults. Test before you add. Server Components, basic TypeScript, async/await patterns... Cursor already does these. Don't waste rules on things that are already happening.
The Quick Version
Before you add a rule, ask three questions:
- Is it specific enough to verify in a 2-second code review?
- Does it ask for something Cursor wouldn't do on its own?
- Can I test it with a before/after comparison?
If yes to all three, add it. If not, rewrite it until it passes.
The rules I tested for this article (and the previous one) come from these packs. The starter pack is free if you want tested rules to build from.
Or grab individual rules from the free collection on GitHub — 33 files across languages, frameworks, and practices.
Top comments (1)
The "specific, verifiable, additive" framework is exactly right. I've been using a similar approach with CLAUDE.md files (Claude Code's equivalent) and the pattern holds — vague instructions get ignored, concrete patterns get followed.
One thing I'd add: rules about what NOT to do tend to be even more effective than positive rules. Something like "Never use any as a TypeScript type. If you need a flexible type, use unknown and narrow it" works better than "use proper types" because the AI can check for the specific anti-pattern.
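For illustration, the unknown-and-narrow pattern looks something like this (the helper is hypothetical):

```typescript
// Accept unknown at the boundary, then narrow before using it.
function toErrorMessage(value: unknown): string {
  if (value instanceof Error) return value.message; // narrowed to Error
  if (typeof value === 'string') return value;      // narrowed to string
  return JSON.stringify(value);                     // fallback for anything else
}
```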
We've been building tools for freelancers at Cobalt Studio and the .cursorrules / CLAUDE.md approach has been a huge productivity multiplier for solo devs who need consistent output without a team to enforce standards. Having a well-crafted rules file is basically a free senior engineer reviewing every generation.