Gotham64
OpenPawz Conductor Protocol

Every workflow engine executes the same way

Why your workflow engine is stuck in 2D — and how AI-compiled execution fixes it

n8n, Zapier, Make, Airflow, Prefect — they all do the same thing: walk the graph, node by node, in topological order. Node A finishes, pass data to Node B, Node B finishes, pass data to Node C. Sequential. Synchronous. One step at a time.

This worked fine when nodes were cheap API calls. But AI workflows are fundamentally different:

  • Agent nodes are expensive. Each one is an LLM call — 2–10 seconds of latency and real token cost.
  • Chains get long. A real pipeline might have 8–20 nodes: trigger → parse → agent analysis → condition → agent rewrite → tool call → agent review → output.
  • Branches are wasted. When two independent branches can run in parallel, sequential execution waits for each to finish before starting the next.
  • Cycles are impossible. Every platform requires DAGs — directed acyclic graphs. No loops, no feedback, no iterative refinement. Two agents debating until they agree? Can't express that.

The math is brutal

A 10-node flow with 6 agent steps, each averaging 4 seconds of LLM latency:

| Platform | Execution | Time | LLM Calls |
| --- | --- | --- | --- |
| n8n / Zapier / Make | Sequential walk | 24s+ | 6 |
| OpenPawz (Conductor) | Compiled strategy | 4–8s | 2–3 |

The Conductor doesn't skip work. It does the same work smarter.

OpenPawz

Star the repo — it's open source


The invention: compile the graph, don't walk it

The Conductor Protocol treats flow graphs not as programs to execute, but as blueprints of intent that are compiled into optimized execution strategies before a single node runs.

Traditional platforms interpret flows imperatively — "do this, then this, then this." The Conductor interprets flows declaratively — "here is what needs to happen; let me figure out the fastest way."

Five optimization primitives make this possible: Collapse, Extract, Parallelize, Converge, and Tesseract.


Primitive 1: Collapse — merge adjacent agents into one LLM call

Adjacent agent nodes with compatible configurations merge into a single LLM call.

Before (traditional):

Agent 1: "Summarize this data"        → LLM call (4s)  → result
Agent 2: "Extract key metrics from…"  → LLM call (4s)  → result
Agent 3: "Write a report based on…"   → LLM call (4s)  → result
Total: 3 LLM calls, ~12 seconds

After (Conductor Collapse):

Collapsed prompt:
  "Step 1: Summarize this data
   ---STEP_BOUNDARY---
   Step 2: Extract key metrics from the summary above
   ---STEP_BOUNDARY---
   Step 3: Write a report based on the metrics above"
→ 1 LLM call (5s) → parsed back into 3 node outputs
Total: 1 LLM call, ~5 seconds

Two agent nodes can be collapsed when they share the same model, the same temperature, have no tool invocations configured, and form a direct chain with no branching. The Conductor detects these chains automatically and builds merged prompts. After execution, parseCollapsedOutput() splits the response back into individual node results using step boundary delimiters.
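The collapse check and the merge/split round trip can be sketched as follows. Only `parseCollapsedOutput` and the step-boundary delimiter come from the article; the other names and fields are assumptions for illustration, not the actual OpenPawz implementation:

```typescript
// Assumed delimiter, matching the article's collapsed-prompt example.
const STEP_BOUNDARY = "---STEP_BOUNDARY---";

// Hypothetical agent-node shape with just the fields the check needs.
interface AgentNode {
  id: string;
  prompt: string;
  model: string;
  temperature: number;
  hasTools: boolean;
}

// Two adjacent agents can collapse when model and temperature match and
// neither invokes tools (direct-chain/no-branching is checked on the graph).
function canCollapse(a: AgentNode, b: AgentNode): boolean {
  return (
    a.model === b.model &&
    a.temperature === b.temperature &&
    !a.hasTools &&
    !b.hasTools
  );
}

// Merge a chain of agent prompts into one numbered prompt for a single call.
function buildCollapsedPrompt(chain: AgentNode[]): string {
  return chain
    .map((n, i) => `Step ${i + 1}: ${n.prompt}`)
    .join(`\n${STEP_BOUNDARY}\n`);
}

// Split the single LLM response back into one output per original node.
function parseCollapsedOutput(response: string, nodeCount: number): string[] {
  const parts = response.split(STEP_BOUNDARY).map((s) => s.trim());
  if (parts.length !== nodeCount) {
    throw new Error(`expected ${nodeCount} outputs, got ${parts.length}`);
  }
  return parts;
}
```

The delimiter round trip is the fragile part: if the model omits a boundary, the parser can detect the mismatch and fall back to uncollapsed execution.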


Primitive 2: Extract — bypass the LLM entirely

Not every node in an AI workflow needs artificial intelligence. Tool calls, HTTP requests, code execution, data transforms — these are fully deterministic. The Conductor classifies each node and routes deterministic work to direct execution:

| Node Classification | Execution Path | Examples |
| --- | --- | --- |
| Agent | LLM call via engine | agent, squad |
| Direct | Bypass LLM — execute via Rust backend | tool, code, http, mcp-tool, loop, memory |
| Passthrough | No execution — data forwarding only | trigger, output, error, group |

In a 10-node flow with 4 agent nodes and 6 direct/passthrough nodes, the Conductor reduces LLM calls from 10 to 4 — or fewer, if some agents can be collapsed.
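The classification itself is a straightforward mapping from node kind to execution path. A minimal sketch, assuming the kinds listed in the table above (the function name and types are illustrative, not the actual API):

```typescript
// Node kinds taken from the classification table above.
type NodeKind =
  | "agent" | "squad"                               // need an LLM
  | "tool" | "code" | "http" | "mcp-tool" | "loop" | "memory" // deterministic
  | "trigger" | "output" | "error" | "group";       // forwarding only

type ExecPath = "llm" | "direct" | "passthrough";

// Route each node to its cheapest correct execution path.
function classifyNode(kind: NodeKind): ExecPath {
  switch (kind) {
    case "agent":
    case "squad":
      return "llm"; // LLM call via the engine
    case "trigger":
    case "output":
    case "error":
    case "group":
      return "passthrough"; // no execution, data forwarding only
    default:
      return "direct"; // deterministic: execute without touching the LLM
  }
}
```

Classification runs once at compile time, so at execution time each node already knows its path.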


Primitive 3: Parallelize — run independent branches concurrently

When a flow fans out — one node feeding into multiple downstream branches that don't depend on each other — the Conductor detects independent branches via depth analysis and union-find grouping, then runs them simultaneously.

Sequential (traditional):

trigger → classify → summarize → fetch metrics → parse data → output
Total: 6 steps, ~16 seconds

Conductor (parallel):

Phase 0: trigger (passthrough)
Phase 1: classify (single agent)
Phase 2: summarize ‖ fetch metrics ‖ parse data  ← all three concurrent
Phase 3: output (passthrough)
Total: 4 phases, ~8 seconds

The grouping algorithm uses groupByDepth() to assign each node a depth level based on its longest path from roots, then splitIntoIndependentGroups() uses union-find to identify which nodes within the same depth level share dependencies.
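The depth pass can be sketched like this. Only the name `groupByDepth()` comes from the article; the graph representation is an assumption, and the union-find split within a depth level is omitted for brevity:

```typescript
// Minimal graph: node ids plus directed [from, to] edges (DAG assumed here).
interface Graph {
  nodes: string[];
  edges: [string, string][];
}

// Assign each node a depth equal to its longest path from any root,
// then bucket nodes by depth; each bucket is one execution phase.
function groupByDepth(g: Graph): string[][] {
  const preds = new Map<string, string[]>();
  for (const n of g.nodes) preds.set(n, []);
  for (const [from, to] of g.edges) preds.get(to)!.push(from);

  const depth = new Map<string, number>();
  const visit = (n: string): number => {
    if (depth.has(n)) return depth.get(n)!;
    const ps = preds.get(n)!;
    // roots (no predecessors) sit at depth 0
    const d = ps.length === 0 ? 0 : Math.max(...ps.map(visit)) + 1;
    depth.set(n, d);
    return d;
  };
  g.nodes.forEach(visit);

  const phases: string[][] = [];
  for (const n of g.nodes) {
    const d = depth.get(n)!;
    (phases[d] ??= []).push(n);
  }
  return phases; // nodes inside phases[d] can run concurrently
}
```

Running this on the fan-out example above yields four phases, with the three middle branches landing in the same phase and executing concurrently.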


Primitive 4: Converge — cycles that no other platform can express

This is the primitive that has no equivalent in any existing workflow platform. n8n, Zapier, Make, Airflow, Prefect — they all require DAGs. Cycles are errors. Feedback loops are impossible.

But some of the most powerful AI patterns are inherently cyclic:

  • Debate and consensus: Two agents argue until they reach agreement
  • Iterative refinement: A writer and editor pass drafts back and forth until quality stabilizes
  • Self-correction: An agent checks its own output, finds errors, fixes them, checks again
  • Multi-perspective analysis: Three analysts each review the others' findings and update their own

The Conductor enables these through bidirectional edges and convergent mesh execution.

How convergent meshes work

  1. The Conductor detects cycles in the flow graph (nodes connected via bidirectional or reverse edges)
  2. Overlapping cycles merge into mesh groups
  3. Each mesh group executes in iterative rounds:
    • Round 1: Each node executes with its initial input
    • Round 2: Each node re-executes with shared context from all other nodes' Round 1 outputs
    • Round N: Continue until outputs converge or max iterations are reached
  4. Convergence detection uses Jaccard similarity — when consecutive outputs from the same node are ≥85% similar, that node has stabilized
  5. When all nodes converge (or max iterations hit, default: 5), the mesh completes
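Convergence detection (step 4) reduces to a small similarity check. A sketch, assuming word-level tokenization (the actual tokenization used by OpenPawz is not specified in the article):

```typescript
// Jaccard similarity over word sets: |A ∩ B| / |A ∪ B|.
function jaccard(a: string, b: string): number {
  const setA = new Set(a.toLowerCase().split(/\s+/).filter(Boolean));
  const setB = new Set(b.toLowerCase().split(/\s+/).filter(Boolean));
  if (setA.size === 0 && setB.size === 0) return 1;
  let intersection = 0;
  setA.forEach((t) => {
    if (setB.has(t)) intersection++;
  });
  return intersection / (setA.size + setB.size - intersection);
}

// The article's threshold: consecutive outputs ≥85% similar means stabilized.
const CONVERGENCE_THRESHOLD = 0.85;

function hasConverged(previous: string, current: string): boolean {
  return jaccard(previous, current) >= CONVERGENCE_THRESHOLD;
}
```

Each mesh node keeps its previous-round output; when every node's `hasConverged` returns true (or the iteration cap is hit), the mesh completes.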

Example: Writer–Editor debate

Round 1:
  Writer: produces initial draft
  Editor: reviews draft, suggests changes

Round 2:
  Writer: revises based on editor feedback
  Editor: reviews revision — "much better, minor grammar fix"

Round 3:
  Writer: applies grammar fix
  Editor: reviews — "looks good, approved" ← 92% similar to Round 2

Convergence detected (0.92 > 0.85 threshold). Mesh complete.
Output: final approved draft flows to downstream nodes.

In n8n, you'd need to manually build a loop with external state management and hope it terminates. In Zapier, it's simply impossible.


Primitive 5: Tesseract — hyper-dimensional flows

Primitives 1–4 operate on a flat graph. But the Conductor already works in higher dimensions implicitly. When a convergent mesh iterates, each round is a distinct state. When parallel branches run independently before merging, they occupy separate "spaces" that collapse at a join point.

The Tesseract primitive makes these hidden dimensions explicit and controllable.

Four dimensions of a workflow

| Dimension | Axis | Represents | Example |
| --- | --- | --- | --- |
| 1st (X) | Sequence | Step ordering, causality | A → B → C |
| 2nd (Y) | Parallelism | Concurrent branches | A → {B ‖ C} → D |
| 3rd (Z) | Depth | Iteration layers, sub-flow nesting | Mesh round 1 → 2 → 3 (helix, not loop) |
| 4th (W) | Phase | Behavioral mode shifts | Exploration → Refinement → Convergence |

A standard flow is a 2D projection (X × Y). A convergent mesh is a 3D helix (X × Y × Z). A tesseract flow is the full 4D object — independent workflow cells operating across all four dimensions, connecting only at event horizons where they synchronize and merge.

Event horizons

An event horizon is where multiple tesseract cells collapse into a single output. It's the 4D equivalent of a join node, but richer:

  • All cells must reach the horizon before the flow continues — hard synchronization
  • Phase transitions happen at horizons — the W coordinate shifts
  • Depth resets at horizons — completed iterations crystallize into a single state
  • Context merges according to configurable policy (concat, synthesize, vote, last-wins)
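The context-merge policies can be sketched as a small dispatcher. The policy names come from the article; `synthesize` is omitted here because it would require an LLM call, and the tie-breaking rule for `vote` is an assumption:

```typescript
// Deterministic merge policies at an event horizon (the article's
// "synthesize" policy, which invokes an LLM, is left out of this sketch).
type MergePolicy = "concat" | "last-wins" | "vote";

function mergeAtHorizon(outputs: string[], policy: MergePolicy): string {
  switch (policy) {
    case "concat":
      // join every cell's output in arrival order
      return outputs.join("\n");
    case "last-wins":
      // only the final cell to reach the horizon survives
      return outputs[outputs.length - 1];
    case "vote": {
      // majority vote: most frequent output wins, first-seen breaks ties
      const counts = new Map<string, number>();
      for (const o of outputs) counts.set(o, (counts.get(o) ?? 0) + 1);
      let best = outputs[0];
      counts.forEach((c, o) => {
        if (c > (counts.get(best) ?? 0)) best = o;
      });
      return best;
    }
  }
}
```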

Why this matters

Consider a complex research pipeline:

Cell A (Exploration): Three research agents independently search different domains, iterating with a supervisor (Z=0..3). Phase W=0.

Cell B (Analysis): Two analyst agents debate findings, refining their synthesis (Z=0..2). Phase W=1.

These cells work independently — different topics, different models, different iteration depths. At the event horizon, their outputs merge: research feeds the analysts, analysis redirects the researchers, and the system transitions to W=2 (convergence phase) where all agents work toward a unified output.

No other automation platform can represent this. It requires reasoning about time (iteration depth), behavioral mode (phase), and spatial independence (parallel cells) — all simultaneously.


Four edge types

The Conductor's power partly comes from OpenPawz's edge types — richer than any other workflow platform:

| Edge Kind | Direction | Purpose | Enables |
| --- | --- | --- | --- |
| Forward | A → B | Normal data flow | Standard pipelines |
| Reverse | A ← B | Data pull — B requests from A | Lazy evaluation, on-demand data |
| Bidirectional | A ↔ B | Mutual data exchange | Cycles, debates, iterative refinement |
| Error | A --err→ B | Failure routing | Graceful degradation, fallback chains |

n8n, Zapier, and Make support only forward edges. OpenPawz's reverse and bidirectional edges enable workflow patterns that are structurally impossible on other platforms.
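As a type-level sketch (the edge kinds come from the table above; the interface shape and helper are illustrative assumptions, not the actual OpenPawz types):

```typescript
// The four edge kinds as a discriminated union.
type EdgeKind = "forward" | "reverse" | "bidirectional" | "error";

interface FlowEdge {
  from: string;
  to: string;
  kind: EdgeKind;
}

// Reverse and bidirectional edges are what feed cycle/mesh detection;
// error edges are only traversed when the source node fails.
function canFormCycle(e: FlowEdge): boolean {
  return e.kind === "reverse" || e.kind === "bidirectional";
}
```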


Performance benchmarks

| Flow Pattern | Nodes | Sequential | Conductor | Speedup | LLM Calls Saved |
| --- | --- | --- | --- | --- | --- |
| Linear chain (3 agents) | 5 | 20–45s | 4–9s | 4–5× | 2 (collapse) |
| Fan-out (parallel branches) | 8 | 35–70s | 5–10s | 5–7× | 3 (collapse + parallel) |
| Bidirectional debate | 6 | ∞ (impossible) | 15–25s | N/A (new capability) | N/A |
| Production pipeline | 20 | 80–160s | 8–18s | 8–10× | 12+ (all primitives) |
| Tesseract research pipeline | 12 | ∞ (impossible) | 20–40s | N/A (new capability) | N/A |

The gains compound: Collapse reduces total LLM calls, Extract eliminates unnecessary ones, Parallelize runs the remaining work concurrently.


vs. every other platform

| Capability | n8n | Zapier | Make | Airflow | OpenPawz Conductor |
| --- | --- | --- | --- | --- | --- |
| Execution model | Sequential DAG walk | Sequential DAG walk | Sequential DAG walk | Task scheduler (DAG) | AI-compiled strategy |
| Cycles / feedback loops | Error | Error | Error | Error | Convergent Mesh |
| LLM call optimization | None | None | None | None | Collapse (N agents → 1 call) |
| Deterministic bypass | All nodes same path | All nodes same path | All nodes same path | All nodes same path | Extract (skip LLM) |
| Auto-parallelism | Manual split/merge | None | Manual router | Executor-level | Automatic depth analysis |
| Bidirectional edges | No | No | No | No | Yes |
| 4D hyper-dimensional flows | No | No | No | No | Tesseract + event horizons |
| Self-healing | No | Retry only | Retry only | Retry only | Error diagnosis + fix proposals |
| Debug step-through | Limited | None | Limited | Log-based | Full breakpoints + cursor |

The fundamental difference

Traditional platforms treat workflows as imperative programs — a fixed sequence of steps the computer follows literally. The Conductor treats workflows as declarative blueprints — a description of what needs to happen, which the system compiles into the most efficient execution plan.

This is the same conceptual leap that separated SQL from procedural database queries, or React's declarative UI from imperative DOM manipulation. You describe what, not how. The runtime figures out how.


Natural language to compiled flow

Traditional workflow platforms require dragging nodes, configuring each one, and wiring connections. The Conductor sits at the end of a pipeline that eliminates this:

  1. Natural language input — User describes a workflow in plain English
  2. NLP parsing — Text-to-flow parser identifies node types, relationships, and configurations
  3. Graph construction — Complete FlowGraph built with nodes, edges, and positions
  4. Conductor compilation — Graph analyzed and compiled into an optimized ExecutionStrategy
  5. Execution — Strategy runs with all five primitives applied

A user types:

"When a webhook fires, have an agent classify the data, then in parallel: summarize it and store it in Airtable, and if it's urgent, post to Slack #alerts"

The parser builds a 7-node flow graph. The Conductor compiles it:

  • Phase 0: Trigger (passthrough)
  • Phase 1: Agent classify (single LLM call)
  • Phase 2: Agent summarize ‖ Airtable store ‖ Condition check — all concurrent
  • Phase 3: Slack post (direct, no LLM)

The Airtable and Slack operations execute via Extract — direct MCP calls, zero LLM cost. The agent steps that need intelligence get Collapsed where possible. Independent branches Parallelize automatically.


Self-healing flows

When a node fails, the Conductor doesn't just retry blindly:

  1. Classifies the error — timeout, rate-limit, auth, network, invalid-input, config, code-error, api-error
  2. Generates a diagnosis explaining what went wrong
  3. Proposes fixes with confidence scores — e.g., "increase timeout to 60s (0.85)" or "check API key in vault (0.92)"
  4. Retries with backoff — configurable max retries and exponential delay
  5. Routes to error handlers — if retry fails, error edges route to fallback nodes

This turns brittle automation into resilient pipelines. A rate-limited API call doesn't crash the flow — it backs off, retries, and if it still fails, routes to a fallback path.
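The classification and backoff steps can be sketched like this. The error classes come from the article; message-based matching and the specific delay constants are illustrative assumptions (a real implementation would likely inspect structured error codes):

```typescript
// Error classes from the article's self-healing taxonomy.
type ErrorClass =
  | "timeout" | "rate-limit" | "auth" | "network"
  | "invalid-input" | "config" | "code-error" | "api-error";

// Naive message-based classifier: a sketch, not the actual Conductor logic.
function classifyError(message: string): ErrorClass {
  const m = message.toLowerCase();
  if (m.includes("timeout") || m.includes("timed out")) return "timeout";
  if (m.includes("429") || m.includes("rate limit")) return "rate-limit";
  if (m.includes("401") || m.includes("unauthorized")) return "auth";
  if (m.includes("econnrefused") || m.includes("dns")) return "network";
  return "api-error"; // fallback bucket
}

// Exponential backoff: base * 2^attempt, capped so delays stay bounded.
function backoffDelay(attempt: number, baseMs = 1000, maxDelayMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, maxDelayMs);
}
```

A rate-limited call classified as `rate-limit` would then wait `backoffDelay(0)`, `backoffDelay(1)`, and so on between retries before routing to an error edge.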


Part of a trinity

The Conductor Protocol works with two complementary OpenPawz innovations:

| Protocol | Problem | Solution |
| --- | --- | --- |
| The Librarian Method | Which tool to use among many? | Intent-driven discovery via semantic embeddings |
| The Foreman Protocol | How to execute tools cheaply? | Worker model delegation via self-describing MCP |
| The Conductor Protocol | What's the optimal execution plan? | AI-compiled flow strategies |

In a single flow execution:

  1. The Conductor compiles the graph into an optimized strategy
  2. Agent nodes that need tools use the Librarian to discover which ones are relevant
  3. Tool calls are delegated to the Foreman for cheap or free execution

The result: a 20-node flow that would take 2+ minutes on n8n executes in under 20 seconds on OpenPawz, with lower cost and capabilities that other platforms cannot express at all.


Read the full spec

The complete technical reference — including TypeScript interfaces, compilation algorithms, and Tesseract implementation details:

Star the repo if you want to track progress. 🙏

OpenPawz — Your AI, Your Rules

A native desktop AI platform that runs fully offline, connects to any provider, and puts you in control. Private by default. Powerful by design.
