# Cursor rules vs skills — what's the actual difference?



Cursor has two ways to give the agent persistent instructions: rules and skills. The docs explain what each one is, but they don't really explain when you'd pick one over the other. I had about 20 .mdc rules in a project and wasn't sure if I was supposed to migrate, so I ran some tests.

Here's what I found, and what I wish someone had told me before I started.

## Rules: the short version

Rules live in `.cursor/rules/` as `.mdc` files. They have YAML frontmatter at the top:

```
---
description: Use early returns when refactoring
alwaysApply: true
---
When refactoring code, always use early returns (guard clauses)
instead of nested if/else blocks.
```
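For anyone unfamiliar with the pattern the rule asks for, here's a quick sketch in Python (the `discount` functions are hypothetical, just to show the before/after shape):

```python
def discount_nested(user):
    # Before: nested if/else blocks the rule tells the agent to avoid
    if user is not None:
        if user.get("active"):
            if user.get("plan") == "pro":
                return 0.2
            else:
                return 0.1
        else:
            return 0.0
    else:
        return 0.0


def discount_guarded(user):
    # After: guard clauses exit early, keeping the happy path flat
    if user is None:
        return 0.0
    if not user.get("active"):
        return 0.0
    if user.get("plan") == "pro":
        return 0.2
    return 0.1


print(discount_guarded({"active": True, "plan": "pro"}))  # 0.2
```

Both versions behave identically; the guarded one is what the agent should produce when the instruction is followed.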

The frontmatter gives you control over when the rule loads:

- `alwaysApply: true` — loaded into every prompt, regardless of what you're working on
- `globs: "*.tsx"` — only loads when you're working on matching files
- `description` — helps the agent decide whether the rule is relevant
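As a sketch, a glob-scoped rule might look like this (the filename and the rule text are hypothetical, just to show the shape):

```
---
description: Component conventions for React files
globs: "*.tsx"
---
Prefer function components with typed props. Keep components
small and extract reusable logic into hooks.
```

With `globs` set, the rule only enters the prompt when the agent touches a matching file, so unrelated work doesn't pay the context cost.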

Without alwaysApply or globs, rules aren't consistently picked up. In a previous test I ran, the agent didn't follow rules that were missing both. It can still find them through description matching or manual invocation, but don't count on it.

## Skills: the short version

Skills live in `.cursor/skills/<name>/SKILL.md`. No frontmatter, no YAML, just markdown:

```
When refactoring code, always use early returns (guard clauses)
instead of nested if/else blocks.
```

Each skill gets its own subdirectory. That's it. Simpler setup, less configuration. But also less control over when and where the skill loads.
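For comparison, the entire setup fits in two shell commands (the `early-returns` folder name is just my choice):

```shell
# A skill is nothing more than a folder and a SKILL.md inside it
mkdir -p .cursor/skills/early-returns

cat > .cursor/skills/early-returns/SKILL.md <<'EOF'
When refactoring code, always use early returns (guard clauses)
instead of nested if/else blocks.
EOF
```

No frontmatter to get wrong, which is exactly the trade-off: nothing to configure, nothing to configure with.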

## So what's actually different?

I put the same instruction in both formats and ran them through 15 tests on a real codebase (cursor-lint, ~900 lines across 4 files). Three runs per test, using the Cursor agent CLI.

**When the task matches the instruction, they're identical.** Rules followed 3/3, skills followed 3/3. Same output quality. The agent cited the rule by filename and the skill as a "system rule," but the generated code was the same.

**Neither one fires on unrelated tasks.** I kept the refactor instruction loaded but asked the agent to add a `--verbose` flag instead. Both the rule and the skill stayed in context, but the model ignored them: 3/3 runs just added the flag without touching the code structure.

This actually corrects something I said in an earlier article. I claimed `alwaysApply: true` meant the rule would fire on every task, even unrelated ones. That's not quite right. It means the rule is *loaded* into every prompt, but the model is smart enough to skip it when the task doesn't match. Loaded into context isn't the same as applied to output.

**When they conflict, rules win.** I set up a direct contradiction: the rule said "use early returns," the skill said "use nested if/else." The rule won 3/3. The agent cited the `.mdc` file in its reasoning and completely ignored the skill.

## When I'd use which

**Rules if you want control.** The frontmatter lets you scope rules to specific file types, decide when they load, and give the agent context about what the rule is for. If you have a team or a big project with different conventions for different parts of the codebase, this matters.

**Skills if you want simplicity.** No frontmatter to mess up, no `alwaysApply` footgun, just write your instruction in markdown and drop it in a folder. If you have a small project or you're just starting with Cursor customization, skills are less to think about.

**Don't mix them for the same instruction.** If you have a rule and a skill that say different things, the rule wins. If they say the same thing, you're just wasting context window space.

I'm sticking with rules because I already have 20+ of them and the frontmatter is useful. But if I were starting fresh today, I'd probably try skills first and only switch to rules if I needed the scoping.


📋 I put together a free Cursor Safety Checklist based on stuff I've run into while testing all of this. Pre-flight checks for AI-assisted coding sessions, basically.

Grab it here if you want →
