In the era of "Agentic IDEs" like Cursor, Windsurf, and Antigravity, the role of the developer is shifting from writing code to architecting context. Most developers treat these tools like smarter autocomplete, dropping a prompt and hoping for the best. This approach fails because code generation is only as good as the constraints you place upon it.
This article shows you how to move from generic prompting to building a Project-Aware Agent using structured rule files. We synthesize lessons from open-source projects like Coolify and Supabase to provide you with a universal framework for defining your own agentic constraints.
The Core Philosophy: Don't Just Guide, Constrain
The biggest mistake developers make with AI context is assuming "more is better." They dump their entire documentation into the chat.
This is wrong.
AI models suffer from "context pollution." If you feed an agent React documentation, generic TypeScript guides, and your entire legacy codebase, it gets confused. It might suggest a React 16 pattern for a React 18 project, or use a deprecated library function because it saw it in an old file.
Effective agentic rules follow two key philosophies:
1. Context-Aware Loading (The "Sniper" Approach)
Don't load every rule for every file. Use glob matching to target specific instructions.
Good example:
- Database rules only load when touching `schema.prisma` or `backend/db/*.ts`.
- Frontend component rules only activate for `src/components/**/*.tsx`.
Bad example:
Loading your entire "Best Practices" document into every chat session, regardless of whether you are writing SQL or CSS.
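As a sketch, a glob-scoped rule for the database example above could look like this, using the frontmatter convention discussed later in this article (the paths and rule text are hypothetical):

```markdown
---
description: "Database access rules"
globs: ["schema.prisma", "backend/db/*.ts"]
alwaysApply: false
---

- All database access goes through the ORM client; never hand-write SQL in application code.
- Every schema change needs a corresponding migration file.
```

Because `alwaysApply` is false and the globs are narrow, these instructions never pollute a CSS or frontend session.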
2. The "Negation" Strategy (Anti-Patterns)
It is often more effective to tell an AI what NOT to do than what to do. Explicitly banning patterns reduces the search space for the model, forcing it toward the correct solution.
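A minimal anti-patterns rule built on this strategy might read as follows, assuming a TypeScript project (the specific bans are illustrative, not universal advice):

```markdown
## Forbidden patterns

- NEVER use `any`; use `unknown` plus a type guard instead.
- NEVER catch an error and swallow it silently; rethrow or log with context.
- NEVER use `setTimeout` to "wait" for async state; await the promise.
```

Each ban closes off a whole branch of plausible-but-wrong completions, which is exactly how negation shrinks the model's search space.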
Key Strategies From Coolify and Supabase
Analyzing frameworks like Coolify and Supabase reveals a standardized approach to agentic configuration.
1. Centralized Documentation Architecture
Instead of scattering instructions in prompts, maintain a single source of truth.
- Coolify uses a `.ai/` directory with topic-specific markdown files (e.g., `testing.md`, `laravel.md`).
- Supabase uses `.mdc` files with structured frontmatter metadata.
2. Context-Aware Rule Application
Supabase excels at this. They use frontmatter to define exactly when a rule applies:
```yaml
---
description: "Standards for writing database functions"
globs: "supabase/migrations/*.sql"
alwaysApply: false
---
```
This ensures the agent only "thinks" about database security when it's actually writing SQL.
3. Explicit Anti-Pattern Prevention
Supabase's rules often include sections that visually distinguish critical constraints:
```
🚨 CRITICAL INSTRUCTIONS
❌ NEVER generate this code pattern...
✅ CORRECT implementation...
```
This visual distinction helps the model prioritize safety constraints over creative generation.
Case Study: From Generic To Intelligent
How does this evolution look in practice? Let's trace the journey of a hypothetical project, "Project X".
Phase 1: The Generic Agent (The "Junior")
- Setup: No custom rules.
- Behavior: The agent writes standard, syntactically correct code.
- Problem: It uses `any` types, imports generic libraries (like `lodash`) that you don't use, and writes tests that don't match your mocking strategy. You have to rewrite 30% of its output.
Phase 2: The Rules-Based Agent (The "Mid-Level")
- Setup: Added `.cursor/rules/typescript.md` and `.cursor/rules/testing.md`.
- Behavior: The agent now respects `strict: true` in TypeScript and uses your preferred testing library.
- Problem: It still makes architectural mistakes. It might import a server-side module into a client component or create circular dependencies.
Phase 3: The Project-Aware Agent (The "Senior")
- Setup: Added rules for Architecture, Workflow, and Tech Stack Constraints.
- Tech Stack: "NEVER use OpenAI SDK < 4.0.0".
- Architecture: "Modules in `shared/` can NEVER import from `features/`."
- Workflow: "When adding a new API provider, you MUST follow these 5 steps..."
- Behavior: The agent now acts as a guardian of your codebase. It refuses to write code that violates architectural boundaries and self-corrects if it accidentally uses a forbidden pattern.
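Collected into a single rule file, the Phase 3 constraints for the hypothetical "Project X" might read like this (the module names and steps are invented for illustration):

```markdown
## Tech stack constraints
- NEVER use OpenAI SDK < 4.0.0.

## Architecture boundaries
- Modules in `shared/` can NEVER import from `features/`.
- Client components can NEVER import server-only modules.

## Workflow: adding a new API provider
1. Define its config type in `shared/types/`.
2. Implement the provider behind the existing provider interface.
3. Register it in the provider registry.
4. Add integration tests.
5. Update the provider docs.
```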
Practical Checklist: Building Your Rules
Ready to verify if your agentic setup is mature? Use this checklist to build your own "Team Brain."
1. The Directory Structure
Create a dedicated folder (e.g., .cursor/rules or .agent/rules) to house your brain.
```
.agent/rules/
├── stack-versions.md   # "We use Next.js 14, Tailwind 3.4..."
├── architecture.md     # "Frontend cannot talk to DB directly..."
├── anti-patterns.md    # "Never use 'any', never use 'console.log'..."
└── workflows/
    └── add-feature.md  # "Step 1: Create type, Step 2: Create component..."
```
2. The Tech Stack Authority
Create a file that lists your exact versions.
Why it matters: This prevents "hallucinated upgrades" where the AI uses features from a version you don't have.
Content Example: "We use Node 18 (ESM). We use React Query v5 (not v4)."
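A `stack-versions.md` along these lines keeps the agent pinned to reality (the versions below are examples, not recommendations):

```markdown
# Tech Stack (source of truth)

- Node 18, ESM only (`"type": "module"`)
- Next.js 14 (App Router; do NOT use `pages/`)
- React Query v5 (NOT v4 — `useQuery` takes a single options object)
- Tailwind 3.4
```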
3. The "Gotchas" File
Every project has weird bugs or constraints. Document these as explicitly as possible.
Format: "⚠️ WARNING: When using `DatePicker`, ALWAYS provide the `timeZone` prop."
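A gotchas file can stay short and blunt; each entry names the trap and the fix. The `DatePicker` entry mirrors the example above, and the rest are hypothetical:

```markdown
# Gotchas

- ⚠️ `DatePicker` ALWAYS needs an explicit `timeZone` prop; the default silently uses the server's zone.
- ⚠️ Our API client already retries on 5xx; do NOT wrap it in another retry loop.
- ⚠️ The staging database is shared; NEVER run destructive migrations against it.
```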
4. Workflow Automation
Don't just define code styles; define processes.
Example: "To add a new endpoint: 1. Define Zod schema. 2. Create handler. 3. Register in router. 4. Write integration test."
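As a sketch, the endpoint workflow above could live in a file like `workflows/add-endpoint.md` (file name and step details are hypothetical):

```markdown
# Workflow: add a new endpoint

1. Define the request/response Zod schemas in `shared/schemas/`.
2. Create the handler next to its siblings in `api/handlers/`.
3. Register the route in `api/router.ts`.
4. Write an integration test that exercises the route through the router.

Do not skip or reorder steps: the router registration depends on the schema existing.
```

Turning a process into a numbered checklist means the agent can follow it step by step instead of improvising the order.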
Conclusion: Start With One Rule
The difference between a frustrating AI experience and a productive one often comes down to clear documentation. By treating agentic rules as a versioned, structured, and maintained part of your codebase, you transform your IDE into a project-aware assistant that strictly follows your team's standards.
Don't try to document everything at once. Start small. Add one rule today that stops the AI from making that one mistake it makes every single time, and you will see immediate improvements in your workflow.