While testing different AI coding tools, I kept running into the same two problems.
The first was cost. Using the same model to plan, implement, review, test, and document does not make much sense, because not every stage requires the same level of reasoning.
The second problem mattered more: when a single agent does everything inside one long conversation, the context gets polluted. The agent starts mixing decisions, forgetting constraints, and losing precision as it moves from one phase to the next.
To address that, I built agentflow, a CLI for setting up a multi-agent workflow where each stage has a clear responsibility:
- planning
- approval
- implementation
- review
- testing
- documentation
The goal is not just to split tasks; it is also to give each phase a clean context window and to use the right model for the right kind of work.
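To make the idea concrete, here is a minimal sketch of what a staged pipeline like this can look like. Everything in it is hypothetical: the `ModelTier`, `Stage`, and `callModel` names are mine for illustration, not agentflow's actual API. The point is that each stage is assigned its own model tier and sees only the previous stage's output, never the full conversation history.

```typescript
// Hypothetical sketch — illustrative names, not agentflow's API.

type ModelTier = "light" | "standard" | "heavy";

interface Stage {
  name: string;
  model: ModelTier; // cheaper tiers for mechanical work, heavier ones for reasoning
  prompt: (input: string) => string;
}

// Stand-in for a real model call (an Anthropic or OpenAI client would go here).
async function callModel(model: ModelTier, prompt: string): Promise<string> {
  return `[${model}] ${prompt}`;
}

const stages: Stage[] = [
  { name: "planning",       model: "heavy",    prompt: (t) => `Plan this task: ${t}` },
  // "approval" is a human gate in the real workflow; skipped in this sketch.
  { name: "implementation", model: "standard", prompt: (t) => `Implement this plan: ${t}` },
  { name: "review",         model: "heavy",    prompt: (t) => `Review this change: ${t}` },
  { name: "testing",        model: "standard", prompt: (t) => `Write tests for: ${t}` },
  { name: "documentation",  model: "light",    prompt: (t) => `Document: ${t}` },
];

async function runWorkflow(task: string): Promise<string> {
  let artifact = task;
  for (const stage of stages) {
    // Each stage starts from a fresh context: it receives only the previous
    // stage's output, not the accumulated conversation history.
    artifact = await callModel(stage.model, stage.prompt(artifact));
    console.log(`${stage.name} done (${stage.model})`);
  }
  return artifact;
}

runWorkflow("add pagination to the /users endpoint").catch(console.error);
```

Passing only the artifact forward is what keeps each context clean, and assigning a tier per stage is what lets cheaper models handle the stages that do not need deep reasoning.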
What stood out during testing was that the real value came not only from the potential cost savings but from the consistency of the process. As the workflow matured, test coverage improved, manual intervention dropped, and issues started getting caught earlier in the pipeline.
What interests me most about this approach is not “making AI code on its own.” It is designing a better process around it.
If you want the full write-up:
Full post (in Spanish): https://ricardolara.dev/es/blog/inteligencia-artificial-multiagente/
npm package: https://www.npmjs.com/package/@riclara/agentflow
If you work with Claude Code, Codex, or similar tooling, I would love to hear your feedback.