You set up .cursorrules. You specified TypeScript strict mode. The AI still generates `any` types.
Configuration isn't enforcement.
This is Part 2 of my .cursorrules guide. The first article covered setup; this one covers what to do when the rules don't work.
Why AI Ignores Your Rules
Three main reasons:
1. Rule Conflicts
Your .cursorrules says one thing. Your codebase does another.
```
# .cursorrules
Use named exports only. No default exports.
```
Meanwhile, in your codebase:

```tsx
// src/components/Button.tsx (already in codebase)
export default function Button() { ... }
```
AI sees the existing pattern. It follows the codebase, not your rules.
Fix: Audit your codebase. Either update the rules to match reality, or refactor the code to match the rules. Pick one.
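For the audit half, even a crude script that lists every file still using default exports tells you how far apart the rules and the codebase are. A minimal sketch, assuming a `src/` tree of `.ts`/`.tsx` files (the file name `audit-exports.ts` is made up):

```ts
// audit-exports.ts -- hypothetical audit: list files that still use default exports
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

// Recursively collect .ts/.tsx files under a directory
function walk(dir: string): string[] {
  return readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) return walk(path);
    return /\.tsx?$/.test(entry.name) ? [path] : [];
  });
}

for (const file of walk("src")) {
  if (readFileSync(file, "utf8").includes("export default")) {
    console.log(file); // this file contradicts the named-exports rule
  }
}
```

Every path it prints is a conflict the AI will keep copying.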
2. Vague Instructions
```
# Bad
Write clean, maintainable code following best practices.
```
AI interprets "best practices" based on its training data. That includes Stack Overflow answers from 2018.
```
# Good
- Functions under 20 lines
- No more than 3 parameters per function
- Return early, avoid nested conditionals
- Extract magic numbers to named constants
```
Specific rules get specific results.
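For contrast, here is a hypothetical function that satisfies all four rules: short, two parameters, early return, and the retry limit extracted to a constant.

```ts
const MAX_RETRIES = 3; // magic number extracted to a named constant

async function fetchWithRetry(url: string, attempt = 0): Promise<Response> {
  if (attempt >= MAX_RETRIES) {
    // Return early instead of nesting the happy path in an else branch
    throw new Error(`Gave up on ${url} after ${MAX_RETRIES} attempts`);
  }
  const response = await fetch(url).catch(() => null);
  if (response?.ok) return response;
  return fetchWithRetry(url, attempt + 1);
}
```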
3. Context Window Limits
Long conversations push .cursorrules out of context. AI "forgets" your rules mid-session.
Signs this is happening:
- First responses follow rules, later ones don't
- AI starts mixing patterns (hooks + class components)
- Import styles change throughout conversation
Fix: Reference rules explicitly in long sessions.
```
Following our .cursorrules: add user authentication to the dashboard.
```
Or start fresh conversations for new features.
Debugging Checklist
When AI generates non-compliant code:
| Check | Action |
|---|---|
| File location | Is .cursorrules in project root? |
| File name | Exact spelling? (.cursorrules not cursorrules.txt) |
| Syntax | No parsing errors in the file? |
| Conflicts | Does existing code contradict rules? |
| Specificity | Are rules concrete or vague? |
| Context | Long conversation? Try new chat. |
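The first two rows are scriptable. A quick sanity check, as a sketch (the file name `check-rules.ts` and the messages are mine):

```ts
// check-rules.ts -- hypothetical preflight for the first two checklist rows
import { existsSync, readdirSync } from "node:fs";
import { join } from "node:path";

const root = process.cwd();
if (existsSync(join(root, ".cursorrules"))) {
  console.log("OK: .cursorrules found in project root");
} else {
  // Catch near-miss names like cursorrules.txt or .cursorrules.md
  const nearMisses = readdirSync(root).filter((f) =>
    f.toLowerCase().includes("cursorrules")
  );
  console.error("Missing .cursorrules in project root.");
  if (nearMisses.length > 0) {
    console.error("Did you mean to rename:", nearMisses.join(", "));
  }
  process.exit(1);
}
```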
The Enforcement Pattern
Rules alone don't work. You need enforcement at multiple levels.
Level 1: .cursorrules (Suggestion)
```
## TypeScript
- No 'any' type
- Strict mode enabled
```
AI should follow this. Sometimes doesn't.
Level 2: ESLint/Prettier (Automated)
```json
// .eslintrc.json
{
  "rules": {
    "@typescript-eslint/no-explicit-any": "error",
    "@typescript-eslint/explicit-function-return-type": "error"
  }
}
```
AI generates bad code → linter catches it → you see the error.
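For instance, the `no-explicit-any` rule turns the most common drift into a hard error. The usual compliant rewrite is `unknown` plus narrowing (the `parse` function here is hypothetical):

```ts
// Flagged by @typescript-eslint/no-explicit-any:
// function parse(input: any): string { return String(input); }

// Compliant: accept `unknown` and narrow before using it.
function parse(input: unknown): string {
  if (typeof input === "string") return input;
  throw new TypeError("parse() expected a string");
}
```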
Level 3: Pre-commit Hooks (Gate)
```bash
# .husky/pre-commit
npm run lint
npm run type-check
```
Bad code can't enter the codebase.
Level 4: CI Pipeline (Final Check)
```yaml
# .github/workflows/ci.yml
- name: Type Check
  run: npx tsc --noEmit
- name: Lint
  run: npm run lint
```
Nothing merges without passing checks.
Key insight: .cursorrules is documentation for AI. Tooling is enforcement. Use both.
Measuring Effectiveness
How do you know if .cursorrules actually helps?
Manual Tracking
For one week, track every AI suggestion you accept or reject:
| Date | Task | Accepted | Rejected | Reason |
|------|------|----------|----------|--------|
| 12/09 | Add form | ✓ | | |
| 12/09 | API route | | ✓ | Used fetch instead of ky |
| 12/10 | Hook | ✓ | | |
Calculate acceptance rate. Below 70%? Your rules need work.
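If you keep the log as a CSV instead of a table, the math is a few lines. A sketch, assuming a hypothetical `ai-log.csv` with columns `date,task,accepted,reason`:

```ts
// acceptance-rate.ts -- hypothetical: compute acceptance rate from a CSV log
// Expected rows: date,task,accepted,reason  (accepted is "true" or "false")
import { readFileSync } from "node:fs";

const rows = readFileSync("ai-log.csv", "utf8").trim().split("\n").slice(1); // skip header
const accepted = rows.filter((row) => row.split(",")[2] === "true").length;
const rate = (accepted / rows.length) * 100;

console.log(`Acceptance rate: ${rate.toFixed(1)}% (${accepted}/${rows.length})`);
if (rate < 70) console.log("Below 70%: the rules need work.");
```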
Automated Tracking
Count lint errors in AI-generated code before and after adopting .cursorrules:
```bash
# Before committing AI code
npm run lint 2>&1 | grep "error" | wc -l
```
Track this over time. Trend should go down.
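You can automate the logging as well. This sketch runs the linter, counts `error` matches the same way the grep above does, and appends a timestamped row (the file names `track-lint.ts` and `lint-trend.csv` are made up):

```ts
// track-lint.ts -- hypothetical: log today's lint error count to a CSV
import { execSync } from "node:child_process";
import { appendFileSync } from "node:fs";

let report = "";
try {
  report = execSync("npm run lint", { encoding: "utf8", stdio: "pipe" });
} catch (err) {
  // ESLint exits non-zero when errors exist; the report is still attached
  const e = err as { stdout?: string; stderr?: string };
  report = `${e.stdout ?? ""}${e.stderr ?? ""}`;
}

const errors = (report.match(/error/g) ?? []).length;
appendFileSync("lint-trend.csv", `${new Date().toISOString()},${errors}\n`);
console.log(`Logged ${errors} lint errors`);
```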
Advanced Patterns
Conditional Rules
Different rules for different parts of the codebase:
```
## API Routes (src/app/api/)
- Validate all inputs with zod
- Return { data, error } shape
- Log to external service

## UI Components (src/components/)
- No data fetching
- Props interface required
- Storybook story required

## Hooks (src/hooks/)
- Must start with 'use'
- Return tuple or object, not primitives
- Include JSDoc with example
```
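Here's what a route that satisfies all three API rules might look like, as a sketch assuming the Next.js App Router and zod; the path, schema, and where you'd log are hypothetical:

```ts
// src/app/api/users/route.ts (hypothetical)
import { z } from "zod";
import { NextResponse } from "next/server";

// Rule 1: validate all inputs with zod
const CreateUser = z.object({
  name: z.string().min(1),
  email: z.string().email(),
});

export async function POST(request: Request) {
  const parsed = CreateUser.safeParse(await request.json());
  if (!parsed.success) {
    // Rule 2: always the { data, error } shape
    return NextResponse.json(
      { data: null, error: parsed.error.flatten() },
      { status: 400 }
    );
  }
  // Rule 3 would log to your external service here
  return NextResponse.json({ data: parsed.data, error: null }, { status: 201 });
}
```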
Negative Examples
Show what you don't want:
BAD: God component

```tsx
function Dashboard() {
  const [users, setUsers] = useState([]);
  const [posts, setPosts] = useState([]);
  const [comments, setComments] = useState([]);
  // ... 300 more lines
}
```

GOOD: Composed from smaller components

```tsx
function Dashboard() {
  return (
    <DashboardLayout>
      <UserList />
      <PostFeed />
      <CommentSidebar />
    </DashboardLayout>
  );
}
```
Negative examples are often more effective than positive rules.
Version Pinning
AI training data is frozen. Your stack isn't.
```
## Framework Versions (as of December 2025)
- Next.js 15.1 (NOT 14.x patterns)
- React 19 with use() hook
- TypeScript 5.7

## Breaking Changes to Note
- Next.js 15: async request APIs (cookies, headers)
- React 19: ref as prop, no forwardRef needed
```
Update this section when you upgrade dependencies.
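That last React 19 note is exactly the kind of thing AI trained on React 18 gets wrong. A minimal sketch of the new pattern (the `TextInput` component is hypothetical):

```tsx
import type { Ref, ComponentPropsWithoutRef } from "react";

// React 19: ref is an ordinary prop on function components,
// so no forwardRef() wrapper is needed.
type TextInputProps = ComponentPropsWithoutRef<"input"> & {
  ref?: Ref<HTMLInputElement>;
};

function TextInput({ ref, ...props }: TextInputProps) {
  return <input ref={ref} {...props} />;
}

// Usage: <TextInput ref={inputRef} placeholder="Email" />
```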
When to Skip .cursorrules
Not every project needs this.
Skip if:
- Solo project, no team
- Prototype/throwaway code
- Learning new technology (rules limit exploration)
- Very small codebase (<10 files)
Use if:
- Team project
- Production code
- Consistent patterns matter
- Onboarding new developers (human or AI)
Template: Debugging Section
Add this to your .cursorrules:
```
## If AI Ignores These Rules
1. Reference this file explicitly: "Following .cursorrules, ..."
2. Start new conversation for complex features
3. Check that existing code matches these patterns
4. File issues at [your-repo]/issues if patterns need updating

Last verified: 2025-12-14
```
This reminds both AI and humans that rules exist and need maintenance.
Summary
.cursorrules is a starting point, not a solution.
Effective AI code generation requires:
- Clear rules (specific, not vague)
- Consistent codebase (rules match reality)
- Automated enforcement (linting, type checking)
- Regular maintenance (update when stack changes)
The goal isn't perfect AI output. It's reducing the edit distance between what AI generates and what you actually need.
Part 1: .cursorrules: Stop AI From Breaking Your Codebase
About me: Technical Documentation Specialist. I help developers create README files, API docs, and technical guides.