## I built a TypeScript multi-agent orchestrator that prevents race conditions in AI swarms
When you run multiple AI agents in parallel — a researcher, an analyst, a reporter, all writing to the same shared state — you hit a silent failure mode that most frameworks don't tell you about:
Last-write-wins.
Agent A writes its result. Agent B writes its result a millisecond later and overwrites A's output. No error is thrown. No log entry. Your system just produced corrupted state and kept running.
This is called split-brain state. It causes double-spends, contradictory decisions, and corrupted context — silently.
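The failure mode is easy to reproduce with plain shared state. This is a minimal sketch (ordinary TypeScript, not Network-AI code) of two async "agents" racing on one key:

```typescript
// Two async "agents" write to the same key of a shared object.
// The later write silently clobbers the earlier one — no error, no warning.
const state: Record<string, string> = {};

async function agent(name: string, delayMs: number): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, delayMs));
  state['result'] = `${name}'s output`; // last-write-wins
}

// B finishes last, so A's result is silently lost.
const race = Promise.all([agent('A', 5), agent('B', 10)]);
race.then(() => console.log(state['result'])); // "B's output"
```

Nothing in this snippet fails — which is exactly the problem.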
I built Network-AI to fix this.
## What is Network-AI?
Network-AI is a TypeScript/Node.js orchestration layer that sits on top of whatever AI frameworks you're already using — LangChain, AutoGen, CrewAI, OpenAI Assistants, or your own custom agents.
It gives you:
- **Atomic shared blackboard** — propose → validate → commit with a filesystem mutex. No two agents can write to the same key simultaneously.
- **AuthGuardian** — permission-scoped tokens. An agent can only perform operations it has been explicitly granted.
- **FederatedBudget** — hard per-agent token ceilings. Stops runaway costs before they happen.
- **HMAC-signed audit log** — every write, permission grant, and FSM transition is logged and tamper-evident.
- **13 adapters** — plug in any framework without glue code.
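The filesystem-mutex idea behind the blackboard can be sketched in a few lines. This is conceptual only, not the library's actual implementation: it relies on the fact that `mkdir` is atomic, so only one process can create the lock directory at a time.

```typescript
import { mkdirSync, rmdirSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

// Conceptual filesystem mutex: mkdir either succeeds atomically or
// fails with EEXIST if another agent already holds the lock.
const lockDir = join(tmpdir(), 'blackboard-key.lock');

function tryAcquire(): boolean {
  try {
    mkdirSync(lockDir);
    return true;
  } catch {
    return false; // someone else holds the lock
  }
}

function release(): void {
  rmdirSync(lockDir);
}

if (tryAcquire()) {
  try {
    // ... perform the guarded write here
  } finally {
    release(); // always release, even if the write throws
  }
}
```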
## The core pattern: propose → validate → commit
Here's the simplest possible example — three agents sharing state safely:
```typescript
import { createSwarmOrchestrator, CustomAdapter, SharedBlackboard } from 'network-ai';

const blackboard = new SharedBlackboard(process.cwd());
blackboard.registerAgent('researcher', 'tok-researcher', ['task:', 'research:']);
blackboard.registerAgent('analyst', 'tok-analyst', ['task:', 'research:', 'analysis:']);
blackboard.registerAgent('reporter', 'tok-reporter', ['task:', 'analysis:', 'report:']);

const adapter = new CustomAdapter();

adapter.registerHandler('researcher', async (payload) => {
  // ... do research work, producing `findings`
  const findings = { summary: '...' };
  await blackboard.write('research:findings', findings, 'tok-researcher');
  return { status: 'done' };
});

adapter.registerHandler('analyst', async (payload) => {
  const findings = await blackboard.read('research:findings', 'tok-analyst');
  // ... analyze findings, producing `result`
  const result = { insight: '...' };
  await blackboard.write('analysis:result', result, 'tok-analyst');
  return { status: 'done' };
});

const swarm = createSwarmOrchestrator({ adapters: [adapter], blackboard });
await swarm.dispatch('researcher', { task: 'AI trends 2026' });
```
No race conditions. Each `write()` goes through propose → validate → commit. If two agents hit the same key at the same time, one waits — the other doesn't silently overwrite.
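Stripped of the library machinery, the three phases look roughly like this (an illustrative sketch, not Network-AI's real `write()` path — the prefix table below is made up for the example):

```typescript
// A proposal carries the intended write plus the token of the proposing agent.
type Proposal = { key: string; value: unknown; token: string };

const committed = new Map<string, unknown>();

// Hypothetical grant table: which key prefixes each token may write.
const allowedPrefixes: Record<string, string[]> = {
  'tok-researcher': ['task:', 'research:'],
};

function propose(key: string, value: unknown, token: string): Proposal {
  return { key, value, token };
}

function validate(p: Proposal): void {
  const prefixes = allowedPrefixes[p.token] ?? [];
  if (!prefixes.some((prefix) => p.key.startsWith(prefix))) {
    throw new Error(`agent ${p.token} may not write ${p.key}`);
  }
}

function commit(p: Proposal): void {
  committed.set(p.key, p.value); // only reached after validation passes
}

const p = propose('research:findings', { trend: 'agents' }, 'tok-researcher');
validate(p); // throws if the key prefix is outside the agent's grant
commit(p);
```

The point of the split is that validation happens before any state mutates, so a rejected write leaves the blackboard untouched.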
## 13 adapters — mix any framework in one swarm
The adapter system means you can combine frameworks freely:
```typescript
import {
  LangChainAdapter,
  AutoGenAdapter,
  CrewAIAdapter,
  A2AAdapter,
} from 'network-ai';

// LangChain handles research (uses your existing Runnable)
const langchain = new LangChainAdapter();
langchain.registerRunnable('researcher', myLangChainChain);

// AutoGen handles multi-step reasoning
const autogen = new AutoGenAdapter({ endpoint: 'http://localhost:8080' });

// A2A connects to a remote agent via the Google Agent-to-Agent protocol
const a2a = new A2AAdapter();
await a2a.registerRemoteAgent('remote-summarizer', 'https://my-agent.example.com');

const swarm = createSwarmOrchestrator({
  adapters: [langchain, autogen, a2a],
  blackboard,
});
```
No vendor lock-in. Swap adapters without changing your orchestration logic.
## Built-in security — not bolted on
Every operation goes through AuthGuardian:
```typescript
import { AuthGuardian } from 'network-ai';

const guardian = new AuthGuardian();
const token = guardian.issueToken('analyst-agent', ['read:research', 'write:analysis']);

// Later — the agent must present a valid token to perform sensitive operations
guardian.requirePermission(token, 'write:analysis'); // throws if not granted
```
Tokens are scoped, expirable, and revocable. The audit log records every grant and denial with an HMAC signature.
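Tamper evidence via HMAC works like this in principle (a conceptual sketch using Node's built-in `crypto` — Network-AI's actual log format may differ):

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// A secret key known only to the auditor. Altering a logged entry
// invalidates its signature, making tampering detectable.
const auditKey = 'keep-this-secret';

function signEntry(entry: object): string {
  return createHmac('sha256', auditKey)
    .update(JSON.stringify(entry))
    .digest('hex');
}

function verifyEntry(entry: object, signature: string): boolean {
  const expected = signEntry(entry);
  // Constant-time comparison; both are 64-char hex SHA-256 digests.
  return timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}

const entry = { ts: 1700000000, agent: 'analyst', op: 'write:analysis' };
const sig = signEntry(entry);

verifyEntry(entry, sig);                           // true — entry is intact
verifyEntry({ ...entry, agent: 'attacker' }, sig); // false — tampering detected
```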
## Real-time streaming
v4.1.0 added streaming adapters:
```typescript
import { LangChainStreamingAdapter } from 'network-ai';

const adapter = new LangChainStreamingAdapter();
adapter.registerRunnable('writer', myStreamingRunnable, { streaming: true });

for await (const chunk of swarm.stream('writer', { prompt: 'explain quantum computing' })) {
  process.stdout.write(chunk.text);
}
```
## Try it in 3 seconds — no API key
```bash
npx ts-node examples/08-control-plane-stress-demo.ts
```
This runs priority preemption, AuthGuardian permission gating, FSM governance, and compliance monitoring against a live swarm — entirely local, no external services.
## Use it three ways
```bash
# 1. As a library
npm install network-ai

# 2. As an MCP server (works with Claude Desktop)
npx network-ai-server --port 3001

# 3. As an OpenClaw skill
clawhub install network-ai
```
## What's next
- v4.2.0 will add broader streaming adapter coverage
If it saves you from a race condition, a ⭐ on GitHub helps others find it.
Supported by the Kilo Code OSS Sponsorship Program.