
Li-Hsuan Lung

Posted on • Originally published at blog.projectbrain.tools

A Workflow Engine That Coordinates Work and Makes It Visible

"The future belongs to artificial intelligence."

Ke Jie said this around his 2017 AlphaGo match. He was the world's top-ranked Go player, and AlphaGo still swept the series 3-0 (games on May 23, May 25, and May 27, 2017), with Ke Jie visibly emotional after the final game.

That story matters to me because I think AI may get much closer to solving software development than we ever expected. For a long time, top-level Go was treated as an especially hard frontier where human intuition would dominate. Then, suddenly, the gap closed fast. I want to treat software development with the same humility and learn from what the systems actually do, not from old assumptions.

That is why this post is about workflow design and visibility.

When people talk about agent workflows, they usually mean one thing: moving tasks from one stage to the next.

In Project Brain, we are building the workflow engine around two goals at the same time: coordinate work reliably, and make agent behavior visible and explainable.

If a task moved from "in progress" to "in review," we should know how it moved, why it moved, and what assumptions were used during that handoff.

What is interesting about our workflow engine

Workflow is a real system object, not prompt text

Our workflow is modeled directly in the platform as stages, statuses, and stage policies. That means teams can edit process behavior in the product itself, instead of hiding process rules inside long prompts.
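To make that concrete, here is a minimal sketch of what "workflow as data" can look like. The field and outcome names here are illustrative assumptions, not Project Brain's actual schema:

```python
# Workflow as plain data: stages, their allowed statuses, and a per-stage
# policy live in the platform, not inside a prompt. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class StagePolicy:
    on_success: str             # e.g. "advance_and_delegate", "advance", "complete"
    allow_reject: bool = False  # may work be sent back to an earlier stage?

@dataclass
class Stage:
    name: str
    statuses: list[str]
    policy: StagePolicy

workflow = [
    Stage("planning", ["todo", "in progress"], StagePolicy("advance_and_delegate")),
    Stage("implementation", ["in progress"], StagePolicy("advance")),
    Stage("review", ["in review"], StagePolicy("complete", allow_reject=True)),
]
```

Because the process is data rather than prose, a team can edit it in the product without rewriting a long prompt.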

Stage policy makes behavior explicit

Each stage can define what should happen after successful work: advance and delegate, advance only, terminal completion, or (optionally) reject work back to an earlier stage. In plain terms, we do not hardcode every route in the agent runtime. We store routing intent as workflow policy and execute against it.
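As a hedged sketch of "store routing intent, execute against it," the engine's routing decision can be a small function over the stage's policy rather than a hardcoded branch in the agent runtime. Outcome names are assumptions:

```python
# Routing intent lives in the policy; the runtime only interprets it.
def next_action(policy: dict, success: bool) -> str:
    """Return what the engine should do when a stage finishes."""
    if not success and policy.get("allow_reject"):
        return "reject_to_earlier_stage"
    return policy["on_success"]  # "advance_and_delegate" | "advance" | "complete"

review_policy = {"on_success": "complete", "allow_reject": True}
```

With this shape, changing how review failures route is a policy edit, not a code change.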

Visibility is designed in, not added later

We attach structured metadata to handoffs and status transitions so task history is reconstructable. The focus is not just "current status." The focus is also "execution trace."

That trace is what helps teams improve prompts, policies, and role boundaries over time.
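A transition record in that trace might look like the following. This is a sketch with illustrative field names, not Project Brain's actual format:

```python
# One structured record per status transition; a task's history is the
# ordered list of these, replayable by any human or agent.
import datetime
import json

transition = {
    "task_id": "TASK-142",              # hypothetical task
    "from_status": "in progress",
    "to_status": "in review",
    "actor": "implementer-agent",
    "reason": "all acceptance criteria met; tests green",
    "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
}

print(json.dumps(transition, indent=2))
```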

Real examples from team chat (cross-stage communication)

These are real excerpts from Project Brain team messages, showing planner/implementer/reviewer flow. The screenshot thread captures a full loop: blocker report, fix handoff, and approval.

In the screenshots, the reviewer first moved the task back to in-progress with specific blockers:

Reviewer feedback listing concrete blockers before re-review.

The implementer then replied with a concrete fix list and commit hash:

Implementer response with commit hash and explicit fixes.

Finally, the reviewer confirmed re-review and test outcome:

Re-review approval confirming blockers are fixed and tests pass.

This is exactly the visibility model we want: not just final status, but the full reasoning chain from failure to resolution.

Why this framing matters

Agents are not programmed in the traditional deterministic sense. You do not write a fixed function and always get the same output. What you can do is influence behavior through constraints, context, and feedback loops. That is why workflow management is so interesting to me: it is a way to shape behavior reliably even when outputs are probabilistic.

If AI systems can improve through iterative play and feedback loops, then our job is to build environments where those loops are observable, testable, and improvable, instead of hidden.

That is exactly why we designed this workflow engine around both coordination and visibility.

If you are not using Project Brain: how to apply this anyway

You can apply the same workflow principles in any stack (Jira + Slack, Linear + GitHub, custom tools, etc.).

  1. Define stage outcomes explicitly.

    For each stage, write what "done" means and what should happen next (advance, delegate, reject, or stop).

  2. Use machine-checkable transition guards.

    Require expected state/version fields on status changes so race conditions become explicit conflicts instead of silent corruption.

  3. Standardize handoff metadata.

    At minimum: task ID, from-stage, to-stage, actor, and reason for handoff.

  4. Treat review feedback as structured data.

    Capture blocker reason, fix commit, verification command, and verification result in one thread.

  5. Optimize for replayability.

    A new person (or agent) should be able to read a thread and answer: What happened? Why? What changed? Is it verified?

If you can do those five things consistently, you will get most of the value of workflow orchestration plus visibility, even outside Project Brain.
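The machine-checkable guard from step 2 can be sketched as a compare-and-set on a version field, so a stale caller gets an explicit conflict instead of silently overwriting state. Everything here is illustrative:

```python
# Transition guard: the caller must state the version it last saw.
# A mismatch means someone else moved the task first.
class ConflictError(Exception):
    pass

tasks = {"TASK-142": {"status": "in progress", "version": 3}}

def transition(task_id: str, expected_version: int, new_status: str) -> dict:
    task = tasks[task_id]
    if task["version"] != expected_version:
        raise ConflictError(
            f"expected v{expected_version}, found v{task['version']}"
        )
    task["status"] = new_status
    task["version"] += 1
    return task

transition("TASK-142", expected_version=3, new_status="in review")
```

The same pattern maps onto real trackers: Jira and Linear both expose update APIs where you can read current state first and treat an unexpected value as a conflict to retry.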
