AI agents are moving money, signing contracts, and managing infrastructure. MCP handles tool connections. A2A handles agent-to-agent messaging. But nobody handles the most important question: can you prove what the agent actually did?
Right now, compliance is self-reported. Logs are written by the same software being audited. That's like asking a defendant to write their own court transcript.
What Is Proof-of-Behavior?
Proof-of-behavior means every agent action is declared in advance, enforced at runtime, and proven cryptographically.
1. A constraint language — Define what an agent can and cannot do:
```
covenant SafeTrader {
  permit read;
  permit transfer (amount <= 500);
  forbid transfer (amount > 500);
  forbid delete;
}
```
Three keywords. No YAML. No JSON schemas. Just rules.
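To make the semantics concrete, here is a minimal sketch of how a covenant like the one above could be evaluated. This is hypothetical and not the real `@nobulex/sdk` internals: the rule shape, the `evaluate` helper, and the ordering semantics (first matching rule wins, anything unmatched is denied by default) are all assumptions for illustration.

```javascript
// SafeTrader expressed as ordered rules. The optional `when` guard
// mirrors the parenthesized condition in the covenant syntax.
const safeTrader = [
  { effect: 'permit', action: 'read' },
  { effect: 'permit', action: 'transfer', when: (p) => p.amount <= 500 },
  { effect: 'forbid', action: 'transfer', when: (p) => p.amount > 500 },
  { effect: 'forbid', action: 'delete' },
];

// Returns true only if a matching `permit` rule fires.
// Unlisted actions fall through to default-deny.
function evaluate(rules, action, params = {}) {
  for (const rule of rules) {
    if (rule.action !== action) continue;
    if (rule.when && !rule.when(params)) continue;
    return rule.effect === 'permit';
  }
  return false; // default-deny
}
```

Because every rule is a plain predicate over the action and its parameters, evaluation is decidable and deterministic, which is what makes the later verification step possible.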
2. Runtime enforcement — Every action is evaluated before execution. Forbidden actions are blocked, not logged-and-reported:
```javascript
const mw = new EnforcementMiddleware({ agentDid: agent.did, spec });

// $300 transfer — allowed
await mw.execute({ action: 'transfer', params: { amount: 300 } }, handler);

// $600 transfer — BLOCKED before execution
await mw.execute({ action: 'transfer', params: { amount: 600 } }, handler);
// handler never runs
```
3. Cryptographic proof — Every decision is logged in a SHA-256 hash chain. Tamper with one entry and the chain breaks:
```javascript
const result = verify(spec, mw.getLog());
// { compliant: true, violations: [] }
```
Always decidable, always deterministic. No ML, no heuristics.
The Cross-Agent Handshake
Before two agents transact, they verify each other's proof-of-behavior:
```javascript
import { generateProof, verifyCounterparty } from '@nobulex/sdk';

const proof = await generateProof({
  identity: agentA,
  covenant: spec,
  actionLog: middleware.getLog(),
});

const result = await verifyCounterparty(proof);
if (!result.trusted) {
  console.log('Refusing transaction:', result.reason);
  return;
}

await executeTransaction(proof.agentDid, amount);
```
No proof, no transaction. The moment one major framework adopts this handshake, every agent without proof-of-behavior gets locked out.
Try It Right Now
Interactive playground (no install): nobulex.com/playground
Define rules, test actions, watch the hash chain build — all in your browser.
Install the SDK:
```shell
npm install @nobulex/sdk
```
The Specification
The Proof-of-Behavior Specification v0.1.0 is published as an open standard under CC-BY-4.0. Anyone can implement it.
Nobulex is the reference implementation. MIT licensed, 4,244 tests, integrations on npm, PyPI, and MCP.
Why Now?
The EU AI Act mandates tamper-evident logging for high-risk AI systems starting August 2, 2026. Adjacent startups have raised $42M+, and Microsoft has released an agent governance toolkit. But none of these provide cryptographic proof that a third party can independently verify. They monitor and report. Proof-of-behavior enforces and proves.
Links
- GitHub: github.com/arian-gogani/nobulex
- Playground: nobulex.com/playground
- Spec: Proof-of-Behavior Specification v0.1.0
- npm: @nobulex/sdk
- PyPI: langchain-nobulex
I'm 15 and built this solo with Claude Code. Feedback welcome — especially on the constraint language design and the handshake protocol.