Originally published on 2026-04-11
Original article (Japanese): "Confluxをリリース: 仕様駆動でAI開発を並列に進めるオーケストレータ" (Conflux Released: An Orchestrator for Spec-Driven, Parallel AI Development)
Conflux is now released. It is a tool designed to move an entire AI coding workflow forward — implementation, acceptance, archiving, and beyond — with spec-driven development as the foundation.
Tools like Claude Code, Codex, and OpenCode have made “writing code” itself much easier. But in real development, the harder problems are different: how to keep the specification in front, how to run multiple changes safely in parallel, and where to place acceptance judgment.
Conflux was built to fill that gap. It is not about one-off code generation. It is an orchestration layer for steadily growing a substantial finished product by stacking changes over time.
This is what Conflux looks like. As a TUI (Text User Interface), it lets you inspect the progress of each change and the overall flow from the terminal.
What problem was I trying to solve?
When you introduce AI agents into a development workflow, things look very fast at first. But once the task becomes even slightly larger, the same issues keep appearing.
- The spec is vague while implementation moves ahead
- Changes collide with each other
- It becomes unclear what is actually finished
- The implementation role and the acceptance role get mixed together
- The flow stops unless a human keeps watching it
In other words, what I needed was not just “a smarter single agent,” but an operating model that keeps multiple changes moving.
Conflux organizes that around a few principles:
- Put the spec first
- Split work into independent change units
- Use git worktree for safe parallel progress
- Separate the implementation role from the acceptance role
- Keep the flow moving even when no human is actively watching
Conflux in one sentence
The README calls it a "spec-driven parallel coding orchestrator for AI agents." Put differently: an orchestrator for spec-driven AI development, with parallel execution and role separation built in.
One important point is that Conflux itself is not tied to a single "best" model. It is designed around swappability. Different models are good at different things: some are fast but rough, some are slower but better at review, some work better through CLI tools, and some are better at long-form evaluation. In practice, separating roles is often more stable than asking one model to do everything.
How does the workflow look?
The basic idea is simple.
1. Define the spec and the intent of the change
2. Split work into change units
3. Let Conflux assign each change to an independent worktree
4. Move implementation forward
5. Run acceptance judgment
6. Archive successful changes and carry them to final merge
What matters here is that the first step is not just note-taking. Conflux is not designed around "implement first, explain later." It assumes that the spec and the intent of the change come first, and that implementation flows from them.
Conflux keeps looping through the latter half of this flow — implementation → acceptance → archive/merge — in a repeatable way. The multi-Ralph-loop diagram makes this easier to picture.
What this diagram shows is not a straight line where implementation happens once and ends. It is a continuing development loop: implementation is judged, failed work goes into another iteration, and accepted work is archived so the product keeps moving forward.
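The loop above can be sketched in a few lines of shell. Note that `implement_change`, `accept_change`, and `archive_change` here are stand-in stubs to make the loop shape visible, not real cflx subcommands:

```shell
# Stand-in functions for the three phases; these are NOT real cflx
# subcommands, only stubs that record which phase ran for which change.
log=""
implement_change() { log="$log implement:$1"; }
accept_change()    { log="$log accept:$1"; }
archive_change()   { log="$log archive:$1"; }

for change in add-feature-x fix-bug-y; do
  # Each change iterates until acceptance passes...
  until implement_change "$change" && accept_change "$change"; do
    log="$log retry:$change"
  done
  # ...and only accepted work is archived toward the final merge.
  archive_change "$change"
done
echo "$log"
```

The key property is that a rejected change loops back into another iteration on its own, rather than halting the whole flow and waiting for a human.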
The advantage of this flow is that you do not have to stuff everything into one huge prompt. Each change can stay small in context, which also makes it easier to control what you pass into an LLM (Large Language Model).
The smallest way to try it
Here is the minimum setup for trying Conflux locally. I assume you have Rust and cargo, plus at least one AI coding agent CLI installed.
There are only three steps:
- Install Conflux
- Initialize the config file
- Start running it
```shell
# Install Conflux
cargo install cflx

# Initialize the config file
cflx init

# Launch the TUI
cflx
```
That is enough for a first check. If cflx launches, the basic setup is done.
If you want to try headless execution, use the following commands.
```shell
# Headless execution
cflx run

# Run only a specific change
cflx run --change add-feature-x
```
The simplest checks are:
- Does `cflx` launch the TUI?
- Does `cflx init` create a config template?
- Does `cflx run` start the workflow?
What matters most in this first release
At this stage, I cared most about three things.
1. Treating the whole flow, not just one-off generation
There are already many options if all you want is “something that can write code.” But in practice, what matters is everything around that.
- Define the spec
- Split work into change units
- Run work in parallel
- Judge acceptance
- Carry it through to merge
If a system does not cover that entire sequence, manual operation quickly creeps back in. Conflux is aimed at that whole flow from the start.
2. Building around parallel execution
Even running one change at a time with AI agents is useful. But as the number of changes increases, waiting time becomes obvious.
That is why Conflux uses git worktree to give each change its own independent work area. This makes it possible to move multiple changes forward in parallel with more safety.
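The mechanism itself is plain git. A minimal sketch on a throwaway repository (the paths and branch names here are made up for illustration):

```shell
set -e
base=$(mktemp -d)

# A throwaway repository with one commit, so worktrees have a HEAD to branch from.
git init -q "$base/repo"
cd "$base/repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One independent working directory per change, each on its own branch.
# An agent working in change-a can never step on the files in change-b.
git worktree add -q "$base/change-a" -b change-a
git worktree add -q "$base/change-b" -b change-b

git worktree list
```

In Conflux's case the worktree management is automated; the point is simply that each change unit gets its own checkout and branch instead of sharing one working directory.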
But the point is not merely that it can run in parallel. Parallel execution itself is no longer unusual; there are already many systems that run multiple agents or multiple tasks at once.
What is still rare is the next part: treating acceptance, archiving, and final merge for those parallel changes as one continuous development flow.
Of course, parallelization does not automatically make everything faster. Strongly dependent changes still need ordering, and acceptance quality still matters. But at the very least, this is much easier to reason about than mixing everything into one worktree — and Conflux tries to cover the downstream flow as well.
3. Not locking into a single vendor
This was very intentional. AI tools are moving so fast that tightly coupling your workflow to one vendor or product tends to shorten its useful life.
Conflux treats agents as swappable components. For example, you can use Claude Code for implementation, and another model for review or acceptance.
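As a sketch of what that separation could look like in configuration. This is a hypothetical shape, not the actual cflx config schema; the agent names are examples only:

```toml
# Hypothetical configuration sketch; the real cflx schema may differ.
# The point is that each role binds independently to a swappable agent CLI.

[agents.implement]
command = "claude"   # e.g. Claude Code for fast implementation

[agents.accept]
command = "codex"    # a different model/CLI for review and acceptance
```

Because roles bind independently, swapping out the acceptance model does not disturb the implementation setup, and vice versa.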
Who is this for?
Right now, Conflux is especially a good fit for people who:
- want to work in a spec-driven way
- want AI agents to handle implementation while humans stay focused on spec and final judgment
- want to run multiple changes at once
- want to grow a real product over time instead of doing one-off generation
On the other hand, if you only want to generate one file quickly or just want lightweight code completion, Conflux is probably too much. In that case, a standalone agent CLI is likely the better fit.
Where to start
If you want to explore it, I recommend this order:
- Read README.ja.md for the overall picture
- Follow QUICKSTART.ja.md for initial setup
The best first test is simply to run cflx init and cflx, and see whether the feeling of “put the spec first, split changes, run them in parallel, and accumulate them with acceptance” matches your own development style.
Closing thoughts
This is the first release of Conflux.
What I wanted to build was not just another wrapper around AI code generation. It is an operational foundation that starts from the spec, runs multiple changes in parallel, judges them, and keeps moving toward a substantial finished product.
It is still early, but I think it is already becoming an interesting base for anyone who wants to bring spec-driven development and AI coding agents into a practical workflow. OpenSpec is only one way of implementing that today, and it may be replaced in the future by another representation or another spec layer. Even so, the core idea of putting the spec first should remain.
If that sounds interesting, try it locally.