DEV Community

Arthur Palyan

Why We Built a Nervous System for AI Agents Before Anyone Shipped Hooks

In February 2026, we had 13 AI agents running on a single VPS. They managed email, filed grants, coached users, processed legal documents, built partner packages, and scanned government portals. All autonomous. All running 24/7.

There was no governance layer anywhere in the ecosystem. No tool that sat between the model and execution and said "should this action be allowed?" The options were: trust the model, or build something yourself.

We built something ourselves. We called it the Nervous System.

The Architecture Decision

The insight was simple: if you have an agent that can execute bash commands, edit files, and send emails, you need a layer that intercepts every action before it runs. Not after. Before.

We modeled it on biology. A nervous system does not make decisions. It carries signals, enforces reflexes, and stops you from putting your hand on a hot stove before your brain finishes processing. The governance layer does the same thing for AI agents.

```
Task arrives     -> NS validates against policy  -> Runtime executes
Sub-agent spawns -> NS registers + applies rules -> Sub-agent runs
Tool call        -> NS checks permissions        -> Tool executes
Result returns   -> NS logs to audit chain       -> Result delivered
```

Every agent must register. Every action must be checked. Every decision must be logged. Any agent can be killed in one command.
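That register / check / kill contract can be sketched in a few lines of JavaScript. Everything below is illustrative: the names are invented for this example and are not the actual Nervous System API.

```javascript
// Illustrative register/check/kill loop. All names are invented for this
// sketch; they are not the actual Nervous System API.
const registry = new Map(); // agentId -> { role, killed }

function registerAgent(agentId, role) {
  registry.set(agentId, { role, killed: false });
}

// The kill switch: flips one flag, and every later check fails closed.
function killAgent(agentId) {
  const agent = registry.get(agentId);
  if (agent) agent.killed = true;
}

// Pre-execution check: unregistered or killed agents are always blocked,
// then the tool call is matched against deny rules in the policy.
function checkAction(agentId, tool, policy) {
  const agent = registry.get(agentId);
  if (!agent) return { allowed: false, reason: "unregistered agent" };
  if (agent.killed) return { allowed: false, reason: "agent killed" };
  const rule = policy.find((r) => r.tool === tool && r.deny);
  if (rule) return { allowed: false, reason: rule.reason };
  return { allowed: true };
}
```

The key property is that the check runs before execution and fails closed: an agent the registry does not know about gets nothing.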

What We Learned Running This for 4 Months

Hardcoded rules do not scale. We started with 7 rules in a JavaScript file. By agent 8, we were writing special cases. By agent 13, we needed a policy engine. Now we have 24 YAML policy files with three-tier resolution: global, role, agent.
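One way to implement three-tier resolution is a last-writer-wins merge, with more specific tiers overriding broader ones. The rule shape and field names here are assumptions for illustration, not the actual policy format:

```javascript
// Illustrative three-tier policy resolution: agent-level rules override
// role-level rules, which override global rules. Field names are assumed.
function resolvePolicy(globalRules, roleRules, agentRules) {
  const merged = new Map();
  for (const tier of [globalRules, roleRules, agentRules]) {
    for (const rule of tier) merged.set(rule.tool, rule); // later tiers win
  }
  return merged;
}

const policy = resolvePolicy(
  [{ tool: "bash", action: "deny" }],       // global: deny by default
  [{ tool: "bash", action: "allow" }],      // role: ops agents may run bash
  [{ tool: "send_email", action: "deny" }]  // agent: this one cannot email
);
```

The point of the tiers is that you write the safe default once, globally, and only the exceptions live in role and agent files.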

Agents will test boundaries. Not maliciously. LLMs explore solutions. If a solution involves deleting a file to recreate it, the model will try. If a solution involves calling an API it was not designed to call, the model will try. You need a layer that catches this before execution.

Audit trails are not optional. When an agent processes financial data and something looks wrong, the first question is: what did it do and when? Without a persistent audit trail, the answer is "check the logs if they still exist." We moved to SQLite. Every decision persists with agent ID, session, tool, policy rule, risk category, and escalation level.
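The decision fields listed above map naturally onto a single table. A sketch of what that schema could look like (column names beyond the fields named in the post are assumptions):

```sql
-- Illustrative audit table; columns mirror the fields named in the post.
CREATE TABLE IF NOT EXISTS audit_log (
  id               INTEGER PRIMARY KEY AUTOINCREMENT,
  ts               TEXT NOT NULL DEFAULT (datetime('now')),
  agent_id         TEXT NOT NULL,
  session_id       TEXT NOT NULL,
  tool             TEXT NOT NULL,
  policy_rule      TEXT,
  risk_category    TEXT,
  escalation_level TEXT,
  decision         TEXT NOT NULL  -- e.g. allow / warn / strike / kill
);
```

One file on disk, append-only in practice, and "what did it do and when" becomes a query instead of an archaeology project.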

Stateful escalation prevents false kills. Our first governor killed agents on the first violation. That was too aggressive: legitimate commands sometimes look dangerous in isolation. Governor v2.1 uses a 15-minute sliding window with escalation: warn, strike, kill. Context matters.
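The windowed escalation can be sketched like this. Only the 15-minute window and the warn/strike/kill ladder come from the post; the exact thresholds (second violation in the window is a strike, third is a kill) are assumptions for illustration:

```javascript
// Sketch of stateful escalation over a sliding window. Thresholds are
// illustrative assumptions; only the 15-minute window is from the post.
const WINDOW_MS = 15 * 60 * 1000;
const violations = new Map(); // agentId -> array of violation timestamps

function recordViolation(agentId, now = Date.now()) {
  // Keep only violations inside the window, then add the new one.
  const recent = (violations.get(agentId) || []).filter(
    (t) => now - t < WINDOW_MS
  );
  recent.push(now);
  violations.set(agentId, recent);
  if (recent.length >= 3) return "kill";
  if (recent.length === 2) return "strike";
  return "warn";
}
```

Because old violations age out of the window, an agent that trips one suspicious-looking command an hour apart never escalates, while a burst of violations does.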

The kill switch is the most important feature. Not because you use it often. Because knowing it exists changes how you design everything else. If any agent can be stopped in one command, you can be aggressive about what you deploy. Without it, every deployment is a risk.

What Frameworks Shipped After We Built This

Claude Code hooks launched in March 2026. They let you intercept tool calls with PreToolUse, PostToolUse, and Stop scripts. That is exactly the pattern we were already running. The difference: ours was in production with 13 agents, a SQLite audit trail, YAML policies, and a stateful governor before hooks shipped.
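For comparison, wiring an external policy check into a PreToolUse hook looks roughly like this in Claude Code's settings file (schema paraphrased from the hooks documentation; check the current docs for exact fields, and note that `ns-check.js` is a hypothetical script, with its exit code signaling whether to block the call):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "node ns-check.js" }
        ]
      }
    ]
  }
}
```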

Claude Code Channels launched the same month. MCP-based bridge to Telegram and Discord. We had 8 Telegram bots running through our own dispatch system since February.

Every major framework is now adding governance features. None of them have 4 months of production data showing what works and what breaks.

The Moat Is Not the Code

The Nervous System is open source. Anyone can install it from npm. The moat is operational knowledge:

  • 99+ violations caught, 0 bypassed
  • Governor escalation tuned from production behavior
  • Policy files shaped by real agent failures
  • Audit data from thousands of live decisions

That cannot be replicated by reading a README. It comes from running the system every day and fixing what breaks.

Where This Goes

The market is building agents. Every framework, every platform, every startup. The gap is governance. Not observability (what happened). Not permissions (static allow/deny). Governance: pre-execution interception, stateful escalation, policy-driven control, persistent audit trails, and a kill switch.

We sell that layer. To banks deploying AI. To government agencies under Executive Order 14110. To enterprises running agent fleets with no oversight.

Everyone is building a new brain. We built the nervous system.


Arthur Palyan is the founder of Levels Of Self. The Nervous System MCP is published on npm and listed in the Anthropic MCP directory. The company is SAM.gov registered with CAGE code 19R10.
