DEV Community

Ned C

What I Actually Put in My Project Context Files

Every AI coding tool has its own context file. Cursor has .cursorrules and .mdc files. Claude Code has CLAUDE.md. Copilot has .github/copilot-instructions.md. They all do roughly the same thing: tell the AI how you want it to write code.

Most guides stop at the format. "Here's the YAML frontmatter, here are the fields, good luck." That's not the hard part. The hard part is figuring out which rules the AI actually follows and which ones it quietly ignores.

I've spent the last few weeks testing this with Cursor's .mdc format specifically, running the same rules through multiple models and tracking compliance. Some of what I found was obvious. Some of it wasn't.

The quick landscape

If you're using multiple tools, here's where things stand:

| Tool | File | Format | Agent support |
| --- | --- | --- | --- |
| Cursor | `.cursorrules` | Plain text | Yes (legacy) |
| Cursor | `.cursor/rules/*.mdc` | YAML frontmatter + markdown | Yes (recommended) |
| Claude Code | `CLAUDE.md` | Markdown | Yes |
| GitHub Copilot | `.github/copilot-instructions.md` | Markdown | Yes |

The big catch: none of these are portable. Your .cursor/rules/ folder does nothing in Claude Code. Your CLAUDE.md does nothing in Cursor. If you switch tools or use more than one, you're maintaining separate files with overlapping content.

What actually affects compliance

I tested a bunch of variables. Here's what moved the needle and what didn't.

alwaysApply is not optional

The .mdc format has an `alwaysApply` field in the YAML frontmatter. I tested it with a simple marker rule (add a specific comment to every new file).

- With `alwaysApply: true`: 3 out of 3 runs followed the rule
- Without it: the rule was ignored (and 2 of 3 runs failed to produce files at all)

Follow-up with 10 sessions, all with `alwaysApply: true`: 10 out of 10 compliance.

If you're writing .mdc files and wondering why your rules get ignored, check this first. It's the most common silent failure I've seen.
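For reference, the marker rule in these tests looked roughly like this (the description text and the comment string are placeholders, not the exact ones I used):

```markdown
---
description: Marker rule for compliance testing
alwaysApply: true
---

- Add the comment `// mdc-rule-applied` at the top of every new file you create.
```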

Two things silently break your rules

I tested 7 different ways an .mdc file can be malformed:

| What's wrong | Does it still work? |
| --- | --- |
| Missing `alwaysApply` field | Yes (defaults work) |
| `alwaysApply: false` | Yes (agent mode ignores the `false`) |
| Malformed YAML (no closing `---`) | No. Silent failure. |
| Wrong glob pattern (`*.xyz`) | Yes (`alwaysApply` overrides) |
| Empty `description` | Yes |
| `.cursorrules` + `.mdc` conflict | Yes (`.mdc` wins) |
| No frontmatter at all | No. Silent failure. |

Two out of seven break silently: malformed YAML and missing frontmatter. No warning, no error, the rule just doesn't load. This is why I built cursor-doctor. It catches structural issues before they turn into "why is the AI ignoring me" debugging sessions.
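The two silent failures are easy to check for yourself. This is a sketch of those two checks, not cursor-doctor's actual implementation:

```typescript
// Detect the two silent .mdc failures: no frontmatter at all,
// and an opening --- with no closing ---.
export function checkMdc(content: string): string[] {
  const issues: string[] = [];
  if (!content.startsWith("---")) {
    issues.push("no frontmatter: rule will not load");
    return issues;
  }
  // Skip the opening delimiter, then look for a closing --- on its own line
  const rest = content.slice(3);
  if (!/\n---\s*(\n|$)/.test(rest)) {
    issues.push("unclosed YAML frontmatter: rule will not load");
  }
  return issues;
}
```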

Both formats work, but .mdc wins conflicts

Early on I thought .cursorrules was completely ignored in agent mode. Turned out my test workspace had .mdc files in the same project, and .mdc takes precedence on conflicts. In a clean directory with only .cursorrules, it loads fine.

If you have both: .mdc rules override .cursorrules when they conflict. If they don't conflict, both load.

Specificity matters more than length

I had a .cursorrules file that was 121KB. Still worked. The rules that got ignored weren't too long. They were too vague.

"Use proper error handling" gets interpreted however the model feels like. "Wrap async route handlers in try/catch, log with logger.error(err), return { error: 'Internal server error' } with status 500" gets followed.
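Written out in code, the specific version of that rule pins down one concrete pattern the model can copy (the `logger` and wrapper here are illustrative, not from a real project):

```typescript
// Illustrative Express-style wrapper matching the specific rule:
// try/catch around async handlers, logger.error on failure, generic 500.
interface Response {
  status(code: number): Response;
  json(body: unknown): void;
}

type Handler = (req: unknown, res: Response) => Promise<void>;

const logger = { error: (err: unknown) => console.error(err) };

export function wrapAsync(handler: Handler): Handler {
  return async (req, res) => {
    try {
      await handler(req, res);
    } catch (err) {
      logger.error(err);
      res.status(500).json({ error: "Internal server error" });
    }
  };
}
```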

The model doesn't matter (for compliance)

I ran the same .mdc rules through Sonnet 4.5, Gemini 3 Flash, and GPT-5.1 Codex Mini. All three hit 9 out of 9 compliance on the same rule set. The models differ on speed, reasoning, and code quality, but rule compliance was identical.

If your rules aren't being followed, it's probably not the model. It's the rules.

Positive vs negative framing: no difference

"ALWAYS add a comment" vs "NEVER create a file without a comment." Both worked the same. I expected negative framing to perform worse (it's harder to follow a "don't" than a "do"), but the compliance was identical across runs.

One exception: exclusive framing like "ONLY create files when explicitly asked" forced compliance better than "don't create unnecessary files," but it also broke code in 2 out of 3 runs because the model refused to create files it actually needed. There's a tradeoff.

What I actually put in mine

Here's how I structure .cursor/rules/ in a typical Next.js project:

Always-on rules (alwaysApply: true, no glob)

These apply to every file the agent touches:

```markdown
---
description: Core conventions for all project files
alwaysApply: true
---

- TypeScript strict mode. No `as any`, no `@ts-ignore` unless commented with reason.
- Imports: group by external → internal → types. One blank line between groups.
- Error handling: wrap async in try/catch. Log with logger, don't swallow errors.
- Never remove existing comments unless explicitly asked.
```

That last one matters. I tested what happens when you tell Cursor to "clean up this file" without a comment preservation rule. 0 out of 17 comments survived. With the rule, 10 out of 17 made it. Not perfect, but way better than zero.

Glob-scoped rules (alwaysApply: true, with glob)

These only apply to matching files:

```markdown
---
description: React component conventions
alwaysApply: true
globs: ["*.tsx"]
---

- Functional components only. No class components.
- Props as `type`, not `interface` (unless extending).
- Branded types for IDs: `UserId`, `PostId`, not bare `string`.
```

I tested glob scoping specifically: a branded-type rule on *.tsx didn't bleed into .py files. The scoping works.
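If branded types are new to you, this is the pattern that rule is asking for. It's one common way to do it in TypeScript; the `Brand` helper name is my own, not from the rule file:

```typescript
// A UserId is a plain string at runtime, but the compiler
// refuses to mix it with a PostId or a bare string.
type Brand<T, B extends string> = T & { readonly __brand: B };

type UserId = Brand<string, "UserId">;
type PostId = Brand<string, "PostId">;

const asUserId = (id: string): UserId => id as UserId;
const asPostId = (id: string): PostId => id as PostId;

export function getUser(id: UserId): string {
  return `user:${id}`;
}

const uid = asUserId("u_123");
getUser(uid); // fine
// getUser(asPostId("p_456")); // compile error: PostId is not a UserId
```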

Agent-requested rules (alwaysApply: false, specific description)

These only load when the agent decides the description matches the current task:

```markdown
---
description: Use when modifying auth flows, token handling, or RLS policies
alwaysApply: false
globs: ["**/auth/**", "**/middleware/**"]
---

- All auth endpoints require token validation middleware.
- RLS policies must be tested with at least 2 role types.
- Never store tokens in localStorage. HttpOnly cookies only.
```

The description field matters here. I haven't run a controlled test on vague vs specific descriptions yet, but anecdotally, the more specific the trigger language, the more reliably the agent pulls in the right file. "auth rules" gives it less to work with than "use when modifying auth flows, token handling, or RLS policies."
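For context on that last auth rule: HttpOnly means the browser's JavaScript can't read the cookie at all, which is the whole point. A sketch of what "HttpOnly cookies only" looks like in practice (cookie name and attributes are illustrative):

```typescript
// Build a Set-Cookie header value for a session token.
// HttpOnly keeps it invisible to document.cookie (and to XSS).
export function sessionCookie(token: string, maxAgeSeconds: number): string {
  return [
    `session=${encodeURIComponent(token)}`,
    "HttpOnly",          // not readable from client-side JS
    "Secure",            // only sent over HTTPS
    "SameSite=Strict",   // not sent on cross-site requests
    "Path=/",
    `Max-Age=${maxAgeSeconds}`,
  ].join("; ");
}
```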

The stuff nobody tells you

Conflicting rules are predictable, not great. When two .mdc files say opposite things, the agent detects the contradiction and asks for clarification instead of picking one. Better than silently choosing wrong, but it breaks your flow.

"Clean up" is dangerous. Any prompt with "clean up," "refactor," or "simplify" will aggressively remove code the model considers unnecessary. Comments, verbose error messages, debug logging. If you care about preserving those, you need an explicit rule.

New files drift to generic patterns. Without rules, a new utility file gets generic Error classes and generic patterns. With a rule specifying your custom error types, it follows them. But only if the rule is loaded (see: alwaysApply).
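Concretely, "custom error types" means something like this. The class names here are hypothetical, standing in for whatever your project actually defines:

```typescript
// Hypothetical app-specific errors a rule would point the model at,
// instead of the generic `Error` it defaults to in new files.
export class AppError extends Error {
  readonly code: string;
  readonly statusCode: number;

  constructor(message: string, code: string, statusCode = 500) {
    super(message);
    this.name = "AppError";
    this.code = code;
    this.statusCode = statusCode;
  }
}

export class NotFoundError extends AppError {
  constructor(resource: string) {
    super(`${resource} not found`, "NOT_FOUND", 404);
  }
}
```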

Validating your setup

I got tired of debugging silent rule failures, so I built cursor-doctor. It's a CLI that checks your .mdc files for the structural issues that cause silent failures: missing frontmatter, malformed YAML, missing alwaysApply, vague descriptions.

```shell
npx cursor-doctor
```

It won't catch everything (can't tell you if your rule content is good), but it catches the stuff that makes rules not load at all.

The portability problem

The biggest gap right now is that none of this transfers between tools. I keep my core rules in a plain markdown file and manually adapt them for each tool's format. It's annoying but it's the only way to use more than one AI coding tool without maintaining completely separate rule sets.

If you're only using Cursor, .mdc with proper frontmatter is the way to go. If you're bouncing between tools, keep a canonical version somewhere and copy it over. Not elegant, but it works.
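The copy-over step is mechanical enough to script. A sketch of the fan-out, assuming one canonical rules file (the `.mdc` frontmatter values here are placeholders you'd tune per project):

```typescript
// Fan one canonical rules body out to each tool's expected location.
// Target paths are the defaults each tool looks for.
const TARGETS = [
  ".cursor/rules/core.mdc",
  "CLAUDE.md",
  ".github/copilot-instructions.md",
];

export function fanOut(canonical: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const path of TARGETS) {
    // .mdc needs frontmatter; the markdown targets take the body as-is
    out[path] = path.endsWith(".mdc")
      ? `---\ndescription: Core conventions\nalwaysApply: true\n---\n\n${canonical}`
      : canonical;
  }
  return out;
}
```

From there it's one `fs.writeFileSync` per entry whenever the canonical file changes.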


This is part of my cursorrules-that-work series where I test what actually works instead of guessing. Previous articles cover model comparison and .cursorrules vs .mdc.


More from this series: 77 free .mdc rules · cursor-doctor (catches broken rules before they waste tokens) · All articles

Want someone to review your full setup? I do $50 async audits — rules, project structure, model settings. Written report in 48 hours.


📋 I made a free Cursor Safety Checklist — a pre-flight checklist for AI-assisted coding sessions, based on actual experiments.

Get it free →
