Olivia Craft
Where to Get 50 Production-Tested Cursor Rules (And Why Writing Them From Scratch Is a Trap)

I spent three months writing cursor rules from scratch. Hundreds of iterations. Dozens of rewrites. Rules that worked on Monday broke on Wednesday. Rules that fixed one problem introduced two more.

Then I stopped and asked a simple question: why am I doing this?


The real cost of writing cursor rules from scratch

Let me be honest about what "writing your own cursor rules" actually looks like.

You start with something reasonable:

```
# TypeScript Rules
- Use strict types
- Prefer interfaces over types
```

Two lines. Feels clean. Then you notice Cursor generating any everywhere. So you add:

```
- Never use any
```

Now Cursor refuses to use any even in generic constraints where it's the correct choice. So you add an exception:

```
- Never use any except in generic type constraints
```
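That exception exists for a reason. Here's a minimal sketch of the one place `any` genuinely is the correct choice — a generic constraint over function types (the `memoize` helper is illustrative, not from the article):

```typescript
// `(...args: any[]) => any` is the idiomatic constraint for "any function".
// A stricter `(...args: unknown[]) => unknown` would reject most real
// functions, because parameter types are checked contravariantly.
type AnyFunction = (...args: any[]) => any;

function memoize<T extends AnyFunction>(fn: T): T {
  const cache = new Map<string, ReturnType<T>>();
  return ((...args: Parameters<T>) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key)!;
  }) as T;
}

const slowSquare = (n: number) => n * n;
const fastSquare = memoize(slowSquare); // still typed (n: number) => number
```

Note that `any` appears only inside the constraint; callers of `memoize` keep full type safety.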

Then you notice it's over-annotating return types on simple functions. So you add another rule. Then another exception. Then a rule to handle the exception.

Three weeks in, you have 40 lines of rules that contradict each other.

This isn't a skill issue. It's a structural problem. Writing good cursor rules requires understanding how the model interprets instructions — which patterns it follows literally, which it treats as suggestions, and which it ignores entirely. You don't learn this from documentation. You learn it from hundreds of hours of trial and error.

Here's what that trial-and-error cycle actually costs:

  • Time: 2-4 hours per rule to get right. Not to write — to test, iterate, and validate across different codebases and scenarios.
  • Edge cases you miss: Rules that work for greenfield code break on legacy codebases. Rules for React don't account for Next.js app router differences. Rules for Python miss Django-specific patterns vs FastAPI patterns.
  • Interaction effects: Rule A works perfectly alone. Rule B works perfectly alone. Together, they produce garbage. You only discover this after deploying both.
  • Model updates: Cursor updates the underlying model, and suddenly your carefully tuned rules behave differently. Rules that relied on specific model behaviors need re-tuning.

I tracked my time. Over three months, I spent ~120 hours writing, testing, and fixing cursor rules. That's three full work weeks. On rules.


What makes cursor rules actually work

After all that time, I learned something important: the difference between rules that work and rules that don't isn't cleverness. It's structure.

Constraints beat preferences

This doesn't work:

```
- Prefer named exports
```

"Prefer" is a suggestion. The model treats it as optional. Sometimes it follows it, sometimes it doesn't. You can't predict when.

This works:

```
- Always use named exports for utility functions
- Always use default exports for React page components
- Never use barrel files (index.ts re-exports)
```
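A quick sketch of what the named-export constraint buys you in a utilities module (the function names here are illustrative):

```typescript
// Named exports: every importer must use these exact names, so a rename
// is a compile error everywhere rather than a silent drift of aliases
// (a default export would let each caller invent its own name).
export function formatDate(d: Date): string {
  return d.toISOString().slice(0, 10);
}

export function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "");
}
```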

Constraints are binary. The model either follows them or violates them. When it violates them, you notice immediately. When it follows them, you get consistent output.

Every rule that uses "prefer," "try to," or "when possible" is a rule that will be ignored 30-40% of the time. I tested this across 200+ generations. Vague language produces vague compliance.

Specificity beats generality

This doesn't work:

```
- Write clean code
- Follow best practices
- Use proper error handling
```

These rules do literally nothing. The model already "tries" to write clean code. You're adding noise.

This works:

```
- Use Result<T, AppError> for all fallible operations
- Define error variants with thiserror, not String
- Never catch errors silently — log at warn level minimum
- Return early on error — no nested if/else chains
```

Specific rules produce specific behavior. Every effective cursor rule I've seen answers a concrete question: what exact pattern should the model use, in what exact context, and what should it never do instead?
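The rules above are Rust-flavored (thiserror, Result). For TypeScript codebases, the same constraint translates to a discriminated-union Result type — a minimal sketch, with an assumed AppError shape:

```typescript
// Assumed error shape for this sketch; a real project would define its own variants.
type AppError = { kind: "parse" | "io"; message: string };

// Discriminated union: callers must check `ok` before touching `value`.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

function parsePort(raw: string): Result<number, AppError> {
  const n = Number(raw);
  // Return early on error — no nested if/else chains.
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    return { ok: false, error: { kind: "parse", message: `invalid port: ${raw}` } };
  }
  return { ok: true, value: n };
}

const r = parsePort("8080");
if (r.ok) {
  // TypeScript narrows the union here: r.value is a number.
  const port: number = r.value;
}
```

The point isn't this exact type — it's that "use proper error handling" gives the model nothing to follow, while "return Result<T, AppError> and return early" is checkable on every generation.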

Negative constraints prevent the most bugs

The highest-value rules aren't "do this." They're "never do that."

```
- Never use .unwrap() in production code paths
- Never use any as a type annotation
- Never import from barrel files
- Never write CSS outside of module files
- Never use console.log — use the project logger
```

Why? Because the model's default behaviors are where the bugs live. Without constraints, Cursor will .unwrap() in Rust, use any in TypeScript, console.log in production code, and scatter inline styles across your React components.

Negative constraints are guardrails. They don't tell the model what to build — they tell it what cliff edges to avoid.
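To make the "never console.log" and "never catch silently" guardrails concrete, here's a hypothetical sketch of the kind of project logger those rules assume — the names, API, and warn threshold are all assumptions, not a real library:

```typescript
type Level = "debug" | "info" | "warn" | "error";

const ORDER: Record<Level, number> = { debug: 0, info: 1, warn: 2, error: 3 };
const MIN_LEVEL: Level = "warn"; // assumed project default

// Returns the formatted line, or null when the message is below the threshold.
function formatLog(level: Level, message: string): string | null {
  if (ORDER[level] < ORDER[MIN_LEVEL]) return null;
  return `[${level.toUpperCase()}] ${message}`;
}

// "Never catch errors silently": failures are logged at warn level minimum.
function tryParseConfig(raw: string): unknown | null {
  try {
    return JSON.parse(raw);
  } catch {
    const line = formatLog("warn", "failed to parse config");
    if (line !== null) console.error(line);
    return null;
  }
}
```

A rule like "Never use console.log — use the project logger" only works if the project actually has one entry point like this for the model to reach for.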

Organization determines consistency

Rules dumped in a flat file get inconsistent treatment. Rules organized by priority and context get reliable behavior.

```
# === CRITICAL — Never violate ===
- Never use any
- Never use .unwrap() in production

# === FRAMEWORK — React ===
- Use default exports for page components
- Co-locate component + styles + tests

# === STYLE — Apply when no conflict ===
- Prefer early returns
- Use descriptive variable names
```

This isn't just for human readability. The model processes structured rules more reliably than unstructured ones. Priority sections reduce ambiguity when rules overlap.


Why writing them from scratch is a trap

Here's the trap: writing cursor rules from scratch feels productive. You write a rule, test it, see improvement, and feel like you're making progress.

But you're solving a problem that's already been solved.

The patterns I described above — constraints over preferences, specificity over generality, negative guardrails, organized priority sections — these aren't original insights. They're the result of extensive testing across real codebases.

And the specific rules? The exact wording that makes Cursor consistently avoid .unwrap() in Rust, or properly handle error boundaries in React, or structure Django views vs FastAPI endpoints? That's hundreds of hours of iteration baked into specific phrasings.

You can do that iteration yourself. But you're trading engineering time for rule-writing time. And unless rule-writing is your job, that's a bad trade.

What I actually use now

After burning those three months, I switched to using a pre-built set of production-tested rules. 50+ rules covering TypeScript, React, Next.js, Python, Rust, Go — organized by language, framework, and priority.

The difference was immediate:

  • No more rule conflicts. The rules are designed to work together, not in isolation.
  • Edge cases are handled. Framework-specific scoping means React rules don't break Next.js patterns, and Python rules account for both Django and FastAPI.
  • Negative constraints are already in place. The guardrails against common AI mistakes are built in from day one.
  • Updates track model behavior. When Cursor's underlying model changes, the rules get updated to match.

I dropped them into my .cursorrules file and immediately got more consistent code generation than three months of hand-tuning had produced.

The bottom line

Writing cursor rules from scratch is like writing your own ESLint config from scratch. You can do it. Some people enjoy it. But for most developers, starting from a tested, maintained set and customizing from there saves weeks of trial and error.

If you want a shortcut past the iteration cycle, check out the Cursor Rules Pack at oliviacraft.lat. It's 50+ production-tested rules organized the way I described above — constraints, not preferences; specific, not generic; with negative guardrails and priority sections built in.

Your time is better spent writing code than writing rules about code.
