If you're already used to writing code with BMAD, the real time sink isn't "writing" — it's "coordination."
An Epic has 5, 10, or 20 Stories. Each one goes through spec creation, implementation, test automation, code review, and retrospective. What's exhausting isn't any single step — it's constantly monitoring the process, switching sessions, handling failures, and deciding what to do next.
Story Automator aims to eliminate this "human orchestration" layer.
Last night I ran through the full /bmad-story-automator workflow. Below is a walkthrough of the experience, with screenshots taken along the way.
It doesn't solve "writing code" — it solves "babysitting the process"
One-sentence summary of Story Automator:
You tell it which Stories to process and what execution strategy to use, and it automatically runs the pipeline: spec creation → implementation → test automation → code review → retrospective — only interrupting you when a genuine human decision is needed.
This is different from running a single Skill command. It's more like a build cycle orchestrator:
Initialization
→ Read Epic / Sprint status
→ Select Story scope
→ Assess complexity
→ Choose Agent strategy
→ Execute create / dev / automate / review / retro
→ Escalate to human on failures or conflicts
By design, it automates "coordination work" rather than replacing any specific development step.
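The coordination loop above can be sketched roughly as follows. Everything here is a hypothetical stand-in, not BMAD's actual API: the `Story` class, `run_phase`, and the retry/escalation policy are my guesses at the shape of the thing, under the assumption that each phase either succeeds or eventually escalates to a human.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the coordination loop; BMAD's real
# implementation and names will differ.
PHASES = ["create", "dev", "automate", "review"]

@dataclass
class Story:
    key: str
    status: str = "pending"   # pending -> in_progress -> done / escalated
    failures: list = field(default_factory=list)

def run_phase(story, phase):
    """Stand-in for dispatching one phase to an Agent session."""
    return True  # pretend the phase succeeded

def orchestrate(stories, max_retries=1):
    """Run every phase for every Story; collect Stories that need a human."""
    escalated = []
    for story in stories:
        story.status = "in_progress"
        for phase in PHASES:
            # Retry once before giving up on a phase.
            ok = any(run_phase(story, phase) for _ in range(1 + max_retries))
            if not ok:
                # A real run would pause here and ask the human what to do.
                story.failures.append(phase)
                story.status = "escalated"
                escalated.append(story)
                break
        else:
            story.status = "done"
    return escalated
```

The point of the sketch is the shape, not the details: the orchestrator owns the phase sequence and the escalation decision, so no human has to watch each transition.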
Upgrade BMAD to the latest version (v6.6)
To try Story Automator, you first need to upgrade BMAD to 6.6. During installation, make sure to check the BMad Automator (Experimental) module.
Otherwise, the Automator won't be available after installing BMAD.
First run: installing the Stop Hook to prevent mid-workflow interruptions
The first time you run /bmad-story-automator, it doesn't rush to start executing. Instead, it performs initialization checks.
As shown below, it loads the configuration, then attempts to read the current orchestration state. If the state directory doesn't exist yet, that's expected for a first run. It then automatically installs a Stop Hook into .claude/settings.json.
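Conceptually, installing a hook like this is a merge into the project's .claude/settings.json. The sketch below is a minimal illustration of that idea; the hook payload shown is a simplified assumption, and the command name is a made-up placeholder, not what BMAD actually writes.

```python
import json
from pathlib import Path

def install_stop_hook(project_dir, command):
    """Merge a Stop hook into .claude/settings.json without clobbering
    existing settings. The hook schema here is a simplified assumption,
    not BMAD's exact payload."""
    settings_path = Path(project_dir) / ".claude" / "settings.json"
    settings_path.parent.mkdir(parents=True, exist_ok=True)

    settings = {}
    if settings_path.exists():
        settings = json.loads(settings_path.read_text())

    stop_hooks = settings.setdefault("hooks", {}).setdefault("Stop", [])
    entry = {"hooks": [{"type": "command", "command": command}]}
    if entry not in stop_hooks:  # keep the install idempotent
        stop_hooks.append(entry)

    settings_path.write_text(json.dumps(settings, indent=2))
    return settings
```

The idempotency check matters: the automator re-runs initialization on every start, so a naive append would accumulate duplicate hooks.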
Step 1: Select Stories — don't blindly run all pending items
Before entering the orchestration phase, Story Automator reads the Epic and sprint status, then lets you decide the scope.
The screenshot below shows a typical scenario: across Epics 5, 6, and 7, some Stories are completed while others are still pending. The tool displays these states directly and asks which Stories you want to process.
Step 2: Complexity isn't guesswork — it scores first, then assigns Agents
This is one of the most interesting parts of Story Automator.
Instead of dumping all Stories to the same Agent, it first generates a Story complexity matrix. The screenshot shows:
- Story 5.3 "Server Commands & API Authentication" → 4 pts / Medium
- Story 6.1 "Expose Axion via SDK AgentMCPServer" → 2 pts / Low
- Story 6.2 "axion mcp Command & External Agent Integration Verification" → 2 pts / Low
- Story 7.1 "User Takeover Mechanism Based on SDK Pause Protocol" → 2 pts / Low
- Story 7.2 "--fast Mode" → 5 pts / Medium
More importantly, it doesn't just give scores — it shows reasoning:
- authorization / permissions
- real-time communication
- high number of acceptance criteria
- large Story text
This means complexity assessment isn't a black box. You see not just the "conclusion" but "why it considers this Story harder." This directly influences the subsequent Agent selection strategy.
In other words, what Story Automator is really doing:
First turning Stories into "schedulable objects," then deciding who executes them.
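A scoring step like this could be as simple as summing weighted signals. The sketch below is reverse-engineered from the reasons the tool surfaces; the signal names, weights, and tier thresholds are my illustrative guesses, not BMAD's actual rules.

```python
# Hypothetical weights for the signals the tool reports. The numbers
# are chosen only so the example scores resemble the screenshot
# (e.g. auth + real-time -> 4 pts / Medium).
SIGNALS = {
    "authorization": 2,
    "realtime": 2,
    "many_acceptance_criteria": 1,
    "large_story_text": 1,
}

def score_story(flags):
    """Sum the weights of the signals present in a Story."""
    return sum(SIGNALS[f] for f in flags)

def tier(points):
    """Map a point total to a complexity tier (thresholds are guesses)."""
    if points <= 3:
        return "Low"
    if points <= 6:
        return "Medium"
    return "High"
```

Because the score is a sum of named signals, the "why" falls out for free: the flags that contributed are exactly the reasoning the tool can show you.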
Step 3: You can inject custom instructions, but it doesn't force you to overthink
Next, it asks whether you have custom instructions.
For example:
- Run tests after every modification
- Prioritize a specific Story
- Watch out for database migrations
I really like this design because it strikes a nice balance between "fully automatic" and "controllable":
- If you have no special requirements, just select none
- If you have iteration-specific preferences, you can inject them temporarily
In other words, it treats "human experience" as an optional input rather than forcing you to configure a pile of parameters every time.
Step 4: Execution settings determine how aggressively it runs
Next comes execution strategy configuration. The screenshot shows two core questions:
- Whether to skip the automate step (test automation)
- The maximum number of parallel sessions
The defaults are:
- Don't skip automate
- Maximum 1 parallel session
For most real projects, test automation is the last thing you should skip in the delivery loop; and keeping parallelism at 1 avoids multiple sessions stepping on each other when modifying the same codebase. In other words:
It defaults to "controllable completion" rather than "parallel sprint to the limit."
Step 5: Different complexity levels automatically map to different Agents
With the complexity matrix in hand, Story Automator can recommend Agent configurations.
The recommended configuration:
- Low: create / dev / auto / review all use Claude
- Medium: create / dev / auto / review all use Codex, with Claude as fallback
- High: also primarily Codex
- Retro: Claude for the retrospective phase
From this configuration, it's clear that we're no longer at the "call a model" level — it's doing model orchestration.
It also provides strategy options:
- Suggested: Use the recommended complexity-tiered configuration
- Uniform: Use the same Agent for all Stories
Different teams can use it differently:
- Play it safe, follow recommendations
- Keep behavior consistent, use uniform Agents
And from the configuration summary screenshot later, you can see that this demo ended up saving as all-claude. This perfectly illustrates: recommendations are recommendations, not mandates. You can let the system intelligently assign by complexity, or manually unify to the same Agent type for stability or consistency.
This step makes the system more like a "dispatcher" than a simple command wrapper.
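The dispatch table implied by the recommendation above can be sketched as a tier-by-phase mapping. The structure mirrors the screenshot, but the lookup function and its parameters are my hypothetical framing of how Suggested vs. Uniform might work, not the tool's code.

```python
# Illustrative mapping of complexity tier -> Agent per phase, mirroring
# the recommended configuration described in the walkthrough. Fallback
# handling is a simplified assumption.
RECOMMENDED = {
    "Low":    {"create": "claude", "dev": "claude", "automate": "claude", "review": "claude"},
    "Medium": {"create": "codex",  "dev": "codex",  "automate": "codex",  "review": "codex"},
    "High":   {"create": "codex",  "dev": "codex",  "automate": "codex",  "review": "codex"},
}
FALLBACK = "claude"      # used when the primary Agent fails on Medium/High
RETRO_AGENT = "claude"   # retrospective phase always goes to Claude

def pick_agent(tier, phase, strategy="suggested", uniform_agent="claude"):
    """'suggested' follows the tiered table; 'uniform' uses one Agent
    for everything (the all-claude case from this demo)."""
    if strategy == "uniform":
        return uniform_agent
    return RECOMMENDED[tier][phase]
```

Seen this way, the Suggested/Uniform choice is just a one-line branch in front of a routing table, which is why overriding the recommendation is cheap.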
Step 6: Configuration is explicitly saved for easy recovery and review
Once you confirm, Story Automator saves the run configuration. The screenshot shows a configuration summary named all-claude:
The summary covers several things:
- Which Epic
- Story scope
- Whether there are custom instructions
- Which Agent is used for create / dev / auto / review / retro respectively
- Whether test automation is skipped
This kind of "summary page" looks ordinary, but it's the foundation for an orchestrator to be recoverable, auditable, and reviewable.
Because once automation spans multiple Stories, sessions, and phases, these questions inevitably arise:
- What if it stops midway?
- What configuration did I actually choose this time?
- Why did this batch of Stories use that Agent combination?
With explicitly saved configuration, subsequent recovery or post-hoc analysis won't be a guessing game.
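Persisting a named run configuration is the mechanism that makes recovery and post-hoc analysis possible. A minimal sketch, assuming a JSON file per named configuration; the field names and on-disk layout are illustrative, not BMAD's actual format.

```python
import json
from pathlib import Path

# Sketch of saving a named run configuration (like "all-claude") so an
# interrupted run can be resumed and a finished run can be audited.
def save_run_config(state_dir, name, config):
    path = Path(state_dir) / f"{name}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(config, indent=2))
    return path

def load_run_config(state_dir, name):
    """Reload a saved configuration, e.g. after a crash or overnight run."""
    return json.loads((Path(state_dir) / f"{name}.json").read_text())
```

With this in place, "what did I actually choose this time?" becomes a file read instead of a memory exercise.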
Real-world experience takeaways
Before going to bed last night, I handed all 5 Stories to Story Automator. When I checked the results in the morning, it had run for 5.5 hours.
Honestly, it was slower than if I had run them manually. At my usual pace, I could have finished these 5 Stories in about 3 hours. As for why it was so much slower, I haven't carefully analyzed the implementation details yet — but it's still an experimental release, so getting it to work at all is what matters most.
For now, the best use case is: fire it off before bed.
Another thing I think it does well is that after each Epic completes, it automatically runs a retrospective and appends useful information to project-context.md.
So my current take is simple:
Running things manually during the day is still faster.
But handing a batch of Stories to it for an overnight run — that's a scenario where it really shines.
If you're using BMAD to develop projects, you should definitely give Story Automator a try. It might be the tool that reduces your repetitive coordination time from hours to minutes.