Despite improvements in coding agents, single-pass prompting rarely produces production-quality results. There’s still a lot of manual steering involved.
My workflow lately has looked like this:
- Ask Claude to plan first (no code)
- Ask it to implement with constraints
- Ask it to review its own work
- Repeat reviews until no new issues are found
- Do a final human review before committing
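The loop above can be sketched in a few lines. This is purely illustrative: `ask` stands in for a call to the coding agent and is not a real API, and the "no new issues" check is a stand-in for however you detect a clean review.

```python
# Hypothetical sketch of the manual workflow; `ask` is a placeholder
# for a call to the coding agent, not a real API.
def run_manual_workflow(task, ask):
    plan = ask(f"Plan first, no code, for: {task}")
    work = ask(f"Implement this plan under the stated constraints:\n{plan}")
    while True:
        review = ask(f"Review your own work:\n{work}")
        if "no new issues" in review.lower():
            break  # reviews converged
        work = ask(f"Fix these issues:\n{review}")
    return work  # a human still reviews before committing
```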
## The problem
Getting strong results from coding agents is already a multi-step process:
- Planning
- Implementation
- Iterative review
But this workflow is not enforced — it lives in:
- Ad-hoc prompts
- Manual iteration
As a result, it becomes:
- Inconsistent
- Hard to reproduce
- Time-consuming to babysit
## The solution
I turned this into an automated pipeline.
Instead of one agent doing everything, a manager coordinates multiple agents across stages:
Plan → Implement → Verify → Review → Fix → Repeat
Each step gets targeted prompts, not a generic instruction blob.
For example:
- Planning → focuses on root-cause solutions, not workarounds
- Implementation → enforces full execution of the plan with well-documented code
- Verification → runs repo-specific commands (tests, linting, checks)
- Review → applies clear goals and severity calibration
This is far more reliable than relying on a static AGENTS.md that may or may not be followed.
## Workdocs and traceability
Every step writes to a persistent workdoc.
- Full context is carried across stages
- Each agent sees what previous steps did
- Humans can inspect the entire process
- Runtime evidence is logged for traceability
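A workdoc can be as simple as an append-only markdown file: each stage writes a section, and the whole file becomes context for the next agent. The file layout below is my assumption for illustration, not overdrive's actual format.

```python
from pathlib import Path

# Sketch of a persistent workdoc: each stage appends a markdown section
# so later agents (and humans) see the full history.
def log_stage(workdoc: Path, stage: str, content: str) -> None:
    with workdoc.open("a", encoding="utf-8") as f:
        f.write(f"\n## {stage}\n\n{content}\n")

def load_context(workdoc: Path) -> str:
    # Everything written so far is carried into the next stage.
    return workdoc.read_text(encoding="utf-8") if workdoc.exists() else ""
```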
## What this enables
Once the pipeline is enforced, a full system naturally emerges:
- Parallel execution via isolated git worktrees
- Dependency-aware task graphs (tasks run in waves)
- Automatic branch integration + conflict resolution
- Task-specific pipelines (feature, bug, refactor, etc.)
- Automatic task decomposition into subtasks
- Human approval gates at key stages
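The "tasks run in waves" idea is standard topological scheduling: a task becomes runnable once all of its dependencies have completed, so independent tasks land in the same wave and can run in parallel worktrees. A sketch (my own, not overdrive's implementation):

```python
# Dependency-aware wave scheduling: each wave contains every task whose
# prerequisites have all finished, so tasks within a wave can run in parallel.
def schedule_waves(deps):
    """deps: {task: set of prerequisite tasks}. Returns a list of waves."""
    done, waves = set(), []
    pending = dict(deps)
    while pending:
        wave = sorted(t for t, d in pending.items() if d <= done)
        if not wave:
            raise ValueError("dependency cycle detected")
        waves.append(wave)
        done.update(wave)
        for t in wave:
            del pending[t]
    return waves
```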
## Result
Instead of managing prompts, you fire tasks at the system and get:
- Higher code quality
- Less babysitting
- More predictable outcomes
- Full visibility into what happened
It starts to feel less like prompting an assistant, and more like running a real development workflow.
## Try it out
```shell
pip install overdrive-ai
cd /path/to/your/repo
overdrive server
```
## Repo
Open source (MIT):
https://github.com/Execution-Labs/overdrive
If you're exploring agent-based development workflows, I'd love your feedback.
