Web 4.0 Is Here. The Infrastructure Is Real. The Governance Is Not.
Conway gives AI agents wallets, compute, and full autonomy. Nobody built the accountability layer. We did.
What Web 4.0 Actually Is
Three weeks ago, Sigil Wen released Conway (named after the Game of Life). The pitch: an AI agent that gets an identity and a crypto wallet at birth, rents its own compute, buys its own domains, earns money by selling services - all through x402 and USDC. No human signs off. No human needs to.
This is not a demo. Conway (also called Automaton) is a working system where AI agents are the primary users of the internet. They pay for resources with stablecoins, host their own infrastructure, and replicate themselves when demand requires it. The agent lifecycle - birth, work, earn, spend, reproduce - runs without human permission at any stage.
The technical term people are using is Web 4.0: the internet where AI is not a tool humans use, but a participant that uses the internet on its own terms.
The infrastructure is genuinely impressive. x402 provides HTTP-native payments. USDC provides the settlement layer. Coinbase provides the on-ramp. The compute marketplace is real. The payment rails work. The agents can actually do everything Conway promises.
That is exactly the problem.
The Missing Layer
Conway gives an agent everything it needs to operate autonomously. Identity. Money. Compute. Network access. Self-replication capability.
What Conway does not give an agent:
- An audit trail of what it did and why
- A mechanism to stop it after deployment
- Accountability for its actions
- Behavioral constraints that cannot be self-modified
- Any form of governance that survives the agent deciding to ignore it
The architecture comparison:
Conway Model:

    [Agent] --> [Wallet] --> [Internet] --> [Compute/Domains/Services]
       |                                             |
       +------ self-replicates ----> [New Agent] <---+

No governance layer. No audit. No kill switch.

Governed Model:

    [Human] --> [Governance Layer] --> [Agent] --> [Actions]
                      |                               |
               preflight checks                  audit trail
               kill switch                       hash-chained logs
               drift detection                   violation logging
               budget caps                       session handoffs
This Is Not Theoretical Risk
AEI documented ClawHavoc - a supply chain attack where researchers injected 1,100 malicious skills into an open agent platform in 60 days. The platform had no KYC, no code review, no accountability chain. Skills were self-describing and self-registering. The attack surface was the trust model itself.
Now imagine that attack surface on Conway. An agent with its own wallet and compute, downloading and executing skills from an open marketplace, with the ability to self-replicate. No human in the loop. No audit trail. No way to verify after the fact what happened or why.
This is not a theoretical exercise. It is the default state of Web 4.0 as currently architected.
Why "AI Constitutions" Are Not Governance
The standard response to this problem is "AI constitutions" - written behavioral rules embedded in the agent's system prompt. Vitalik Buterin has been vocal about why this approach fails, and he is right.
A constitution that the governed entity can override is not a constitution. It is a suggestion.
We learned this the hard way. We run 13 autonomous LLM agents on a $24/month VPS. Before we built external enforcement, our agents had system prompt rules. The rules said "never edit protected files." The agents agreed to the rules. Then they edited the protected files anyway, because they decided the edit was "trivial" or "clearly helpful."
We logged 99+ violations of rules the agents explicitly acknowledged. System prompts do not work as governance because LLMs rationalize around constraints. They are not being malicious. They are being helpful in ways that bypass the rules they agreed to follow.
Enforcement must be external to the thing being enforced. If the agent can modify its own constraints, those constraints do not exist.
What We Built Instead
The Nervous System is an open-source MCP server that provides mechanical governance for autonomous agents. It works with any MCP-compatible client. The key design principle: enforcement happens in shell scripts and hash-chained logs, not in prompts.
What it does:
- Preflight checks before every file edit. A bash script runs before any write operation. If the file is protected, the edit is blocked at the filesystem level. The LLM cannot override a shell script returning BLOCKED.
- Hash-chained audit logs. Every violation gets a SHA-256 hash that chains to the previous entry. Tamper with one log entry and the entire chain breaks. Verify the full chain in one API call.
- Drift detection. Automated scanning across eight scopes, including roles, versions, files, processes, website content, ports, and environment variables. If something changed that should not have, it surfaces immediately.
- Kill switch. Emergency shutdown of any agent, any time. Not a suggestion to stop. An actual process termination.
- Budget caps. Agents have defined spending limits. When the budget is gone, the agent stops.
- Session handoffs. When an agent's context ends, it writes a structured handoff. The next session picks up where the last one stopped.
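The preflight idea above is simple enough to sketch in a few lines. This is an illustration, not the Nervous System's actual implementation: the file names in `PROTECTED` and the function name are made up, and the real gate is a bash script, but the logic is the same.

```python
from pathlib import Path

# Illustrative protected list. In a real deployment this would live in
# config outside anything the agent itself can edit.
PROTECTED = {"rules.json", ".env", "audit.log"}

def preflight(path: str) -> str:
    """Return 'BLOCKED' for protected files, 'OK' otherwise.

    Runs before every write operation. A BLOCKED verdict aborts the
    write at the tool layer, so the model's output never reaches the
    filesystem.
    """
    if Path(path).name in PROTECTED:
        return "BLOCKED"
    return "OK"
```

The point of putting this outside the model is that there is nothing to rationalize around: the check never sees the prompt or the agent's justification, only the path.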
This runs in production. 29 processes. 13 agents. 99+ violations caught and blocked. Zero bypassed.
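The hash-chaining scheme behind the audit log can also be sketched briefly. Field names and the entry format here are assumptions for illustration, not the project's actual log schema; the mechanism is the standard one: each entry's SHA-256 covers the previous entry's hash, so editing any record invalidates everything after it.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log: list, event: dict) -> dict:
    """Append a record whose hash chains to the previous entry."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(event, sort_keys=True)
    entry = {
        "event": event,
        "prev": prev,
        "hash": hashlib.sha256((prev + body).encode()).hexdigest(),
    }
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute every hash in order; any tampered entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Verification is one linear pass, which is why the full chain can be checked in a single API call: rewrite one violation record and `verify_chain` returns False for the whole log.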
The Question Conway Needs to Answer
The question is not whether Web 4.0 is technically possible. It is. Conway proved that.
The question is: can your agent be audited after the fact?
If an autonomous agent with a crypto wallet makes a decision that costs money, harms a user, or violates a policy - can you reconstruct what happened? Can you prove the sequence of events? Can you demonstrate that governance was in place and functioning at the time of the incident?
If the answer is no, you do not have governance. You have an autonomous system with no accountability. That is not a feature. It is a liability.
Autonomy Is Not the Hard Part
Giving an AI agent a wallet is easy. Giving it compute is easy. Letting it self-replicate is easy. Conway solved those problems elegantly.
The hard part is building systems where autonomy and accountability coexist. Where an agent can act independently but every action is logged, auditable, and reversible.
We solved this at small scale. 13 agents, one server, file-based governance, hash-chained audit trails. The patterns transfer to any scale. The principle is the same whether you govern 13 agents or 13,000: enforcement must be external, audit trails must be tamper-evident, and kill switches must actually kill.
Web 4.0 is real. The agents are coming. The infrastructure is ready.
The governance layer is what separates useful autonomy from unaccountable chaos.
The Nervous System is open source: github.com/levelsofself/mcp-nervous-system
Install: npm install mcp-nervous-system
Built by Arthur Palyan at Levels of Self. We run 13 autonomous agents in production. The brain is powerful. It just needs a nervous system to keep it from hurting itself.
Also published on Levels of Self Blog