Tell Claude Code or Copilot to "build a REST API for a todo app." It'll do it. You'll get working code.
But along the way it silently chose: the framework, the ORM, the naming convention, the error handling strategy, the test structure, the validation approach, the status codes, the folder layout. Fifty-plus decisions. You saw none of them.
Three features later, you notice your tests are inconsistent. The auth middleware uses a pattern that conflicts with the error handler. The folder structure doesn't scale. You didn't choose any of this, the AI did, and it never told you.
This is the actual problem with agentic coding. Not that the code is bad. It's that you lost control of the architecture without realizing it.
So I built Defer.
Defer is an open-source CLI that sits between you and your AI. You describe a task; the agent decomposes it into explicit decisions, each with concrete options and tradeoffs. You choose how much you care about each domain, marking decisions either auto (the agent decides; you can challenge afterward) or review (you confirm before execution).
Every decision gets an ID (@STA-0001), a category, and a record of who decided: you or the AI. Change your mind mid-execution and the agent re-implements. High-impact changes cascade: switch from Go to Python and every Go-specific decision is invalidated automatically.
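To make the cascade behavior concrete, here's a minimal sketch of how it could work as a dependency graph. This is illustrative only: the types, field names, and functions below are my assumptions, not Defer's actual internals.

```typescript
// Hypothetical model of a decision record. Defer's real schema may differ.
type Decision = {
  id: string;          // e.g. "@STA-0001"
  answer: string;
  dependsOn: string[]; // IDs of higher-impact decisions this one assumes
  valid: boolean;
};

// Does decision `d` transitively depend on `target`?
function reaches(
  ds: Map<string, Decision>,
  d: Decision,
  target: string,
  seen: Set<string>
): boolean {
  if (seen.has(d.id)) return false;
  seen.add(d.id);
  return d.dependsOn.some(
    (dep) => dep === target || (ds.has(dep) && reaches(ds, ds.get(dep)!, target, seen))
  );
}

// Invalidate every decision that transitively depends on a changed one,
// returning the IDs that were invalidated.
function cascade(ds: Map<string, Decision>, changedId: string): string[] {
  const invalidated: string[] = [];
  for (const d of ds.values()) {
    if (d.valid && reaches(ds, d, changedId, new Set())) {
      d.valid = false;
      invalidated.push(d.id);
    }
  }
  return invalidated;
}
```

With this model, changing @STA-0001 (the language) invalidates a Go-specific error-handling decision that depends on it, while an independent database decision survives untouched.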
The output is a DECISIONS.md, a complete record of every architectural choice that shaped your project.
| ID | Category | Question | Answer | Source |
|-----------|----------|------------------------|---------------------|--------|
| @STA-0001 | Stack | Backend language | Node.js (TypeScript)| user |
| @DAT-0001 | Data | Database | PostgreSQL | auto |
| @NAM-0001 | Naming | Route naming convention | camelCase | agent |
| @ERR-0001 | Error | Validation status code | 422 | agent |
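Because the record is a plain markdown table, it stays machine-readable too. As a sketch (the parser and field names here are my assumptions, not part of Defer), a table like the one above can be read back into structured records:

```typescript
// Hypothetical record shape for one row of a DECISIONS.md table.
type DecisionRow = {
  id: string;
  category: string;
  question: string;
  answer: string;
  source: string;
};

// Parse a markdown pipe table into rows, skipping the header
// and separator lines. Illustrative only; not Defer's parser.
function parseDecisions(md: string): DecisionRow[] {
  return md
    .split("\n")
    .filter((line) => line.trim().startsWith("|"))
    .slice(2) // drop the header row and the |---| separator row
    .map((line) => line.split("|").map((c) => c.trim()).filter(Boolean))
    .filter((cells) => cells.length === 5)
    .map(([id, category, question, answer, source]) => ({
      id,
      category,
      question,
      answer,
      source,
    }));
}
```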
What it's not: a prompt optimizer, a framework, or a wrapper. It's a decision protocol. The AI still does all the work; you just see and control every choice it makes.
Works with (probably): Claude Code, OpenAI, Groq, Mistral, Together, Ollama, or any OpenAI-compatible provider. Or skip the CLI entirely: run `defer init cursor` to inject the philosophy into your existing tool's config.
I tried my best to make this more than a PoC, but it's definitely not a complete tool yet. Most of my testing was done with the CLI + Claude Code, so if that's your setup, I recommend sticking with it. If you do hit a bug, please share!
I'd appreciate any early feedback. This has been consuming my weekends for a while, and I'm excited to finally get it out there.
```shell
brew tap defer-ai/tap && brew install defer
defer "build a REST API for a todo app"
```
GitHub: https://github.com/defer-ai/cli
Site: https://defer.sh
