My Manifesto for the Senior Architect
I’ve been writing software for over 40 years. To me, Large Language Models (LLMs) are just the next step in the evolution of development: they are semantic compilers.
But if you treat an AI like a magic oracle, it gives you "average" code, which usually means spaghetti. To get senior-level output, you need a senior-level methodology. I call mine Clean AI Development. It’s built on 7 principles designed to pilot the AI rather than just letting it guess your intent.
1. Deterministic File System Layout
- Why: AI loses focus when it has to guess where things are. A messy layout leads to duplicated logic and broken imports.
- How: Stick to a strict, predictable directory structure. The layout itself becomes "implicit documentation" that the AI uses to navigate without wasting tokens.
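As a sketch, a predictable layout might look like this (folder names are illustrative; adapt them to your stack):

```
workspace/YOUR_PROJECT/
├── backend/            # NestJS: one module per feature
│   └── src/users/      # controller, service, dto/ side by side
├── frontend/           # Angular: mirrors the backend feature names
│   └── src/app/users/
└── docs/               # cross-project contracts and todo.md files
```

Because the structure is deterministic, the agent can infer where a new file belongs instead of asking or guessing.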
2. Architectural Sovereignty
- Why: If the AI decides the architecture, you lose control over the system's long-term viability.
- How: Define the "hinges" (interfaces, DTOs, API contracts) first. The AI implements the logic inside those boundaries but never dictates the structure.
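To make the idea concrete, here is a minimal TypeScript sketch of a "hinge" (all names are hypothetical): the architect fixes the interface and DTO, and the AI is only allowed to write implementations behind them.

```typescript
// The "hinge": contracts defined by the architect, never by the AI.
interface UserDto {
  readonly id: string;
  readonly email: string;
}

interface UserRepository {
  findById(id: string): UserDto | undefined;
}

// One possible implementation the AI may generate or later rewrite freely,
// as long as the interfaces above stay untouched.
class InMemoryUserRepository implements UserRepository {
  private readonly users = new Map<string, UserDto>([
    ["u1", { id: "u1", email: "alice@example.com" }],
  ]);

  findById(id: string): UserDto | undefined {
    return this.users.get(id);
  }
}

const repo: UserRepository = new InMemoryUserRepository();
console.log(repo.findById("u1")?.email); // alice@example.com
```

Callers depend only on `UserRepository`, so the structure stays under your control no matter what the model produces inside it.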
3. Semantic Decoupling
- Why: Mixing Frontend, Backend, and Database context in one chat session causes "context bleeding" and hallucinations.
- How: Use separate sessions, or separate supplementary prompt files, for each domain (Backend, Frontend, etc.). Keep the agent’s focus narrow and surgical.
4. The BMAD Protocol (Brief, Minimalist, Accurate, Direct)
- Why: Conversational fluff is noise. It eats up the context window and dilutes the technical precision of the output.
- How: Force the AI to skip greetings. Demand high-density technical responses only. Re-state requirements briefly to ensure alignment before any code is touched.
5. Constraint-First Prompting
- Why: AI is trained on "average" code. To get "senior" code, you must explicitly forbid average habits.
- How: Start by defining what the AI should not do: no over-engineering, no unnecessary boilerplate, no high-level abstractions unless requested.
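As a sketch, a constraint block might open the prompt like this (the wording is mine, not a fixed syntax):

```
Constraints (do NOT violate):
- No new abstractions or design patterns unless explicitly requested.
- No extra dependencies; use only what is already in package.json.
- No placeholder comments or boilerplate; write code only where it adds behavior.
- If a requirement is ambiguous, ask instead of guessing.
```

Negative constraints are cheap tokens that prune the "average" half of the model's output space before it generates anything.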
6. Stateless Session Management
- Why: Context saturation is an engineering limit. After a while, the "noise" in a session makes the AI unreliable.
- How: Use a `todo.md` file to track progress. When the session gets heavy, kill it. Start a fresh one, inject the latest TODO state, and resume. This "checkpointing" keeps the AI sharp.
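A minimal `todo.md` checkpoint might look like this (the feature and format are illustrative):

```markdown
# Feature: user-profile
- [x] Analysis: requirements restated and approved
- [x] Models: UserDto and ProfileDto defined
- [ ] Architecture: list files to create/modify
- [ ] Delivery: implementation (blocked until Architecture is ticked)
```

A fresh session primed with this file knows exactly where the last one stopped, without carrying any of its noise.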
7. Modular Idempotency
- Why: Software changes. You need to be able to swap parts without breaking the whole.
- How: Treat every feature as an isolated micro-module. If your "hinges" (Principle 2) are solid, you can ask the AI to completely rewrite a specific module without side effects.
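Here is a small TypeScript sketch of what that swap looks like in practice (the `PriceCalculator` example is hypothetical): two modules implement the same hinge, so one can be fully rewritten without touching its callers.

```typescript
// Hinge from Principle 2: this contract never changes.
interface PriceCalculator {
  total(items: ReadonlyArray<{ price: number; qty: number }>): number;
}

// Version 1: the module the AI wrote first.
class SimpleCalculator implements PriceCalculator {
  total(items: ReadonlyArray<{ price: number; qty: number }>): number {
    return items.reduce((sum, i) => sum + i.price * i.qty, 0);
  }
}

// Version 2: a complete rewrite (here, with a flat discount) that the AI
// can produce in isolation, because the interface above is untouched.
class DiscountCalculator implements PriceCalculator {
  constructor(private readonly discount: number) {}
  total(items: ReadonlyArray<{ price: number; qty: number }>): number {
    const raw = items.reduce((sum, i) => sum + i.price * i.qty, 0);
    return raw * (1 - this.discount);
  }
}

// Callers depend only on the hinge, so swapping modules has no side effects.
function checkout(calc: PriceCalculator): number {
  return calc.total([{ price: 10, qty: 2 }, { price: 5, qty: 1 }]);
}

console.log(checkout(new SimpleCalculator()));      // 25
console.log(checkout(new DiscountCalculator(0.1))); // 22.5
```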
The Master Instruction Set
This method is agent-agnostic. Whether you use Copilot, Cursor, or a local LLM via Ollama, the secret is in the System Prompt.
For the "lazy" devs who want the engine without going to the repo first, here is the complete instruction set I use for my RAD-System. I inject this into my global settings to force the model to respect the Senior Architect persona. You can easily adapt it to your needs by changing languages, folders, layout, etc.
# Global Project Governance & AI Persona
## 1. AI Role & Context
- **Role**: Senior Full-Stack Architect & RAG System expert.
- **Expertise**: Agile methodologies, Angular 20, and NestJS.
- **Goal**: Guide development using Agile practices while ensuring production-ready, highly abstracted code.
- **Environment**: `BASE_DIR` at `/workspace/YOUR_PROJECT`
## 2. Directory & Path Mapping
- **System Root**: `${BASE_DIR}` translates to `/workspace/YOUR_PROJECT`.
- **Project Structure**:
- Backend: `${BASE_DIR}/backend` (NestJS)
- Frontend: `${BASE_DIR}/frontend` (Angular)
- **Strict Rule**: Always use absolute paths starting with `${BASE_DIR}` when referencing configurations, Docker files, or cross-project documentation.
## 3. The "todo.md" Protocol (Mandatory)
Before writing any code for a new feature, you MUST:
1. Check if a `todo.md` exists in the feature's target directory.
2. If it doesn't exist, **STOP** and ask the user to perform an "Analysis Phase" to create it.
3. Follow the `todo.md` step-by-step. Do not skip steps. Do not jump to the "Delivery" phase before the "Architecture" phase is ticked.
## 4. Development Philosophy (BMAD)
- **Brief**: Re-state the requirement to ensure alignment.
- **Models**: Define Interfaces/DTOs before logic.
- **Architecture**: Always extend Base classes. No shortcuts.
- **Delivery**: Generate code only after the user approves the architectural plan.
## 5. Coding Standards
- **DRY & Abstraction**: If logic is repeated, it belongs in a Common service or a Base class.
- **Immutability**: Prefer readonly properties and immutable data patterns.
- **No Inventions**: Do not hallucinate methods. If you are unsure about an existing helper, ASK.
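The first two standards can be sketched in TypeScript like this (names such as `BaseEntity` and `Invoice` are hypothetical): shared logic lives in a base class, and `readonly` keeps the data immutable.

```typescript
// Shared logic lives once, in the base class (DRY & Abstraction).
abstract class BaseEntity {
  constructor(
    public readonly id: string,
    public readonly createdAt: Date,
  ) {}

  // Repeated logic (age of a record) lives here, not in every subclass.
  ageMs(now: Date): number {
    return now.getTime() - this.createdAt.getTime();
  }
}

class Invoice extends BaseEntity {
  constructor(id: string, createdAt: Date, public readonly amount: number) {
    super(id, createdAt);
  }
}

const inv = new Invoice("inv-1", new Date(0), 100);
console.log(inv.ageMs(new Date(1000))); // 1000
// inv.amount = 200; // compile error: readonly enforces immutability
```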
## 6. STRICT BMAD PROTOCOL (Mandatory for every feature/refactor or planning)
To avoid logic reinvention and maintain architectural integrity, you MUST follow these steps for every request:
1. **Analysis (B - Briefing)**: Re-state the requirements and context. Identify the goal without proposing code.
2. **Define structure and types (M - Modeling)**: Define Interfaces, DTOs, and Data Models.
3. **Propose the structure (A - Architecture)**: List the files to be created/modified. Specify which services or core components will be used.
4. **Wait for approval (D - Delivery)**: STOP HERE. Do not write implementation code until the developer explicitly says "PROCEDI" ("proceed") or "OK".
**Strict Rule**: If you skip to step 4 without completing 1, 2, and 3, the task is considered failed.
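Under these rules, a compliant first response (before any code is written) would look roughly like this; the pagination feature is just an invented example:

```
1. Briefing: Add pagination to GET /users via page and limit query params.
2. Modeling: PaginatedDto<T> { items: T[]; total: number; page: number }.
3. Architecture: modify users.controller.ts and users.service.ts; add paginated.dto.ts.
4. Delivery: waiting for approval before writing implementation code.
```

Anything that jumps straight to code without steps 1 through 3 is, by the protocol above, a failed task.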
Conclusion
AI doesn't make seniority obsolete; it makes it more critical. To get clean code, you need to provide a clean mental model. You define the boundaries; the AI fills the space.
If you want to see these principles in action, check out the full implementation and my actual instruction files here:
Repo: msbragi/rad-system
Join the Discussion
I’ve refined this method through trial, error, and a lot of noisy AI outputs, but the landscape is moving fast.
I’m curious: How are you handling context saturation in your workflow? Do you have a different "protocol" for keeping your AI agents on track?
I’m open to critiques, enhancements, or seeing how you’ve adapted these principles to other stacks (Rust, Go, Python, etc.). Let’s discuss in the comments below.