Why AI Infrastructure Needs Decentralization
Day 1 of the Tangle Re-Introduction Series
Last week, Moltbook went from "the most incredible sci-fi takeoff-adjacent thing" (Karpathy's words) to a security disaster. Moltbook is a social network for AI agents, where autonomous systems post, interact, and organize without direct human control. It went viral in late January. Then researchers at Wiz found the database wide open: 1.5 million API keys exposed, 35,000 emails leaked, private messages accessible to anyone who looked. The platform had no way to verify which posts came from actual AI systems and which came from humans using stolen credentials.
Business Insider ran the headline: "A viral AI agents platform was hacked in minutes, raising questions about security and vibe-coded apps."
Security breaches are not the only verification problem. After every major model release, developers complain the new version performs worse than the old one. "Opus must be nerfed because there's no way it's this retarded," one developer posted after the model destroyed hours of work. "It ruined so much." Levelsio, who built multiple products on these models, posted that GPT-5 was "so bad" after it advised him to delete a partition and promised his data would remain intact. It didn't. Garry Tan, YC's CEO, observed that Claude Code recommended using deprecated APIs that are 200x slower than current alternatives. "We're so early," he wrote. Translation: the tools don't work the way they should.
Users have no way to verify what changed, whether degradation is intentional cost-cutting, or whether they're getting the model they're paying for. The provider says "trust us." The user says "it feels worse." Neither can prove anything.
These failures share a common root: infrastructure without verification, without accountability, and without economic consequences for misbehavior. Moltbook had no access controls. Model providers have no obligation to maintain quality. In both cases, users bear the cost of failures they cannot prevent or even detect.
The question I keep returning to: as AI agents transition from tools to workforce, who should own the infrastructure that hosts them?
What AI Agents Actually Do Now
AI agents in 2026 generate code, execute trades, and run research workflows in production, creating billions in economic value.
AI agents in 2026 are not hypothetical. Coding agents generate 30-50% of code at major technology companies. Research agents synthesize literature and design experiments across pharmaceutical and materials science. Trading agents execute strategies across decentralized exchanges, managing portfolios and rebalancing positions without human intervention. Customer service agents handle 70% of support inquiries at companies that have deployed them.
These systems share a common architecture: perception, reasoning, action. They observe through APIs and data feeds. They reason using large language models and planning algorithms. They act through tool use and code execution. The loop runs continuously.
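The perceive-reason-act loop described above can be sketched in a few lines. Everything here is illustrative: the trait, the types, and the toy trading rule are hypothetical, not any particular SDK's API.

```rust
// Illustrative perceive-reason-act loop. All names are hypothetical.
#[derive(Debug, Clone)]
struct Observation {
    price: f64,
}

#[derive(Debug, Clone, PartialEq)]
enum Action {
    Buy,
    Hold,
}

trait Agent {
    /// Observe the environment (APIs and data feeds in a real agent).
    fn perceive(&self) -> Observation;
    /// Decide on an action (an LLM call or planner in a real agent).
    fn reason(&self, obs: &Observation) -> Action;
    /// Execute the action (tool use or code execution in a real agent).
    fn act(&mut self, action: Action);
}

/// A toy agent that buys when the observed price drops below a threshold.
struct ThresholdAgent {
    threshold: f64,
    last_action: Option<Action>,
}

impl Agent for ThresholdAgent {
    fn perceive(&self) -> Observation {
        Observation { price: 95.0 } // stubbed data feed
    }
    fn reason(&self, obs: &Observation) -> Action {
        if obs.price < self.threshold { Action::Buy } else { Action::Hold }
    }
    fn act(&mut self, action: Action) {
        self.last_action = Some(action); // a real agent would invoke a tool here
    }
}

/// One iteration of the continuously running loop.
fn run_one_tick(agent: &mut ThresholdAgent) {
    let obs = agent.perceive();
    let action = agent.reason(&obs);
    agent.act(action);
}
```

The point of the sketch is the shape, not the logic: production agents swap the stubbed perception for live feeds and the threshold rule for a model call, but the loop itself stays this simple.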
The economic value is already measured in billions. And the scope will only expand as capabilities improve and trust accumulates.
Three Models for Infrastructure
Infrastructure for AI agents falls into three categories: centralized cloud, decentralized compute, and cryptoeconomic coordination.
Three approaches compete to host this workforce.
The centralized model concentrates infrastructure in cloud providers. Amazon, Google, and Microsoft operate the data centers, control the APIs, and capture the margin. This model has real advantages: professional operations, high availability, economies of scale. Providers face reputational consequences for failures, and SLAs provide some contractual recourse.
But structural problems remain. Providers can observe what agents do. They can change terms unilaterally. Their economic penalty for misbehavior is bounded by litigation risk, which is slow, expensive, and uncertain. Unlike Tangle operators, who are economically accountable through staked collateral, centralized providers face only reputational and legal consequences for misbehavior. The provider relationship is asymmetric: developers need infrastructure more than any provider needs any individual developer.
The decentralized-compute model distributes infrastructure across independent providers but retains centralized coordination. A foundation or DAO sets terms, resolves disputes, captures fees. This creates competition among providers, but does not solve the coordination problem. The coordinator still accumulates power. Disputes still require trusted adjudication.
The cryptoeconomic model replaces trusted coordination with economic mechanisms. Providers stake assets that can be slashed for misbehavior. Smart contracts encode rules that execute automatically. Governance distributes decision-making to stakeholders.
Tangle implements the third approach.
Here's how these models compare on the properties that matter most for AI agent infrastructure:
| Feature | Centralized (AWS/GCP) | Tangle |
|---|---|---|
| Trust model | Reputation-based | Cryptoeconomic verification |
| Payment | Monthly invoices | Per-request x402 micropayments |
| Operator accountability | Terms of service | Staked collateral + slashing |
| Agent autonomy | Requires API keys + billing setup | Permissionless with HTTP payments |
| Audit trail | Provider logs (opaque) | On-chain verification records |
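The "per-request x402 micropayments" row deserves a concrete picture. The sketch below simulates the HTTP 402 request/pay/retry handshake entirely in-process: no network, no real settlement, and the structs stand in for whatever headers and payloads the actual protocol uses.

```rust
// In-process simulation of an x402-style handshake (hypothetical types;
// the real protocol's wire format and settlement are not modeled).
#[allow(dead_code)]
struct PaymentTerms {
    amount_usdc: u64,
    pay_to: &'static str,
}

enum Response {
    PaymentRequired(PaymentTerms), // HTTP 402 with payment terms
    Ok(&'static str),              // HTTP 200 with the result
}

/// Server side: answer 402 until a sufficient payment accompanies the request.
fn serve(payment: Option<u64>, price: u64) -> Response {
    match payment {
        Some(paid) if paid >= price => Response::Ok("inference result"),
        _ => Response::PaymentRequired(PaymentTerms {
            amount_usdc: price,
            pay_to: "0xoperator",
        }),
    }
}

/// Agent side: request, read the quoted price, pay and retry if affordable.
fn agent_call(budget_usdc: u64) -> Option<&'static str> {
    // First request carries no payment; the server answers 402 with terms.
    let quoted = match serve(None, 3) {
        Response::PaymentRequired(terms) => terms.amount_usdc,
        Response::Ok(body) => return Some(body),
    };
    if quoted > budget_usdc {
        return None; // too expensive: the agent walks away
    }
    // Retry the same request with payment attached.
    match serve(Some(quoted), 3) {
        Response::Ok(body) => Some(body),
        Response::PaymentRequired(_) => None,
    }
}
```

No account, no API key, no billing setup: the price quote and the payment both travel inside the request/response cycle, which is what makes the flow workable for an autonomous agent.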
What Verification Actually Requires
Verification combines hardware isolation, redundant execution, and economic penalties to make cheating unprofitable.
Critics of decentralized infrastructure raise a legitimate concern: slashing is punishment, not prevention. If an operator leaks your trading strategy, slashing them afterward doesn't un-leak the information. This critique is correct, and any honest discussion of cryptoeconomic security must address it.
The answer is that slashing is one layer of a multi-layer security model. The other layers do the actual prevention.
Hardware Isolation via TEEs
TEEs create hardware-enforced enclaves that operators cannot observe, even with root access.
Trusted execution environments (TEEs) provide hardware-enforced isolation. Code runs inside an enclave that even the operator cannot observe. The operator can see that computation is happening, but not what data flows through it. TEEs are not perfect (side-channel attacks exist), but they raise the cost of data extraction dramatically.
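Before trusting an enclave, a client checks remote attestation: a hardware-signed measurement of the code loaded inside. Real attestation formats (SGX or TDX quotes, for example) carry far more structure; this sketch keeps only the decision logic, with hypothetical types.

```rust
// Minimal attestation check (illustrative; real quotes are richer and the
// vendor signature verification itself is assumed to have already run).
struct AttestationReport {
    code_measurement: [u8; 32],   // hash of the code loaded into the enclave
    vendor_signature_valid: bool, // result of verifying the hardware signature
}

/// Trust the enclave only if the signature checks out AND the measurement
/// matches the exact code the client expects to be running.
fn trust_enclave(report: &AttestationReport, expected: &[u8; 32]) -> bool {
    report.vendor_signature_valid && &report.code_measurement == expected
}
```

The measurement comparison is what prevents an operator from substituting modified code: a swapped binary produces a different hash, and the client refuses to send data.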
Redundant Execution
Multiple independent operators run the same job, and BLS-aggregated signatures prove agreement.
Redundant execution has multiple operators run the same computation independently, then compares the results cryptographically. Tangle supports multi-operator verification where N-of-M independent operators must agree on results before settlement. The aggregation service uses BLS signatures with G1/G2 points and signer bitmaps to prove operator agreement. Disagreement triggers investigation. An operator who deviates from honest execution gets caught.
MPC for Private Data
MPC splits secrets so no single party can reconstruct the full input during computation.
Secure multi-party computation (MPC) splits secrets across multiple parties so no single party can reconstruct the full input. Analysis happens on encrypted shares. The pharmaceutical company processing clinical trial data can get results without any operator seeing the underlying data.
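The core of MPC secret sharing fits in a few lines. This is 2-party additive sharing over a prime field: each share alone is uniformly random, so neither operator learns the input, yet shares can be added locally and the sums combined. Real MPC protocols add malicious security, multiplication, and more parties; only the basic idea is shown, with illustrative names.

```rust
// 2-party additive secret sharing mod a prime (illustrative sketch).
const P: u64 = 2_147_483_647; // field modulus (a Mersenne prime)

/// Split `secret` into two shares. `randomness` should come from a CSPRNG
/// in practice; it is a parameter here so the sketch stays deterministic.
fn share(secret: u64, randomness: u64) -> (u64, u64) {
    let s1 = randomness % P;
    let s2 = (secret % P + P - s1) % P;
    (s1, s2)
}

/// Combine both shares to recover the secret.
fn reconstruct(s1: u64, s2: u64) -> u64 {
    (s1 + s2) % P
}

/// Each party adds its shares of two secrets locally, without communicating.
/// Reconstructing the summed shares yields the sum of the secrets.
fn add_local(share_a: u64, share_b: u64) -> u64 {
    (share_a + share_b) % P
}
```

The last function is the payoff: aggregate statistics over clinical trial data can be computed share-wise, so results come out while no single operator ever holds a patient record in the clear.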
Economic Backstop via Slashing
Operators stake TNT tokens as collateral that gets destroyed if verification detects cheating.
Slashing, the penalty mechanism where operators lose staked collateral for incorrect results, provides the economic backstop. Operators stake TNT tokens (amount configured per blueprint) as collateral proportional to the value they might extract. If verification mechanisms detect misbehavior, slashing destroys the stake automatically. The stake is sized so that the expected cost of cheating (probability of detection times slash amount) exceeds the expected benefit.
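The sizing condition is just an inequality, worth writing out. The functions and numbers below are illustrative, not protocol parameters.

```rust
// The slashing inequality: cheating is unprofitable when the expected
// penalty exceeds the extractable benefit. (Illustrative helper functions.)
fn cheating_unprofitable(p_detect: f64, slash_amount: f64, benefit: f64) -> bool {
    p_detect * slash_amount > benefit
}

/// Minimum stake needed at a given detection probability, assuming an
/// attacker could extract the full benefit.
fn min_stake(benefit: f64, p_detect: f64) -> f64 {
    benefit / p_detect
}
```

The dependence on detection probability is the practical takeaway: a verification mechanism that catches cheating half the time doubles the stake a job needs, which is why blueprints pair slashing with TEEs, redundancy, or MPC rather than relying on stake alone.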
Different services require different verification mechanisms. A blueprint for AI inference might use TEEs for data isolation. A blueprint for financial computation might use redundant execution with majority voting. A blueprint for private data analysis might use MPC. The protocol provides the economic coordination; blueprints specify the verification approach.
Three Scenarios
Trading firms, pharma companies, and software teams all face the same problem: high-value computation on infrastructure they don't control.
To make this concrete, consider three scenarios. These are not hypothetical futures but problems companies face today.
A trading firm tests agent-driven strategies. No institutional investor deploys $100 million on day one. They start with a $1 million pilot, running alongside human traders, measuring performance and failure modes. The question is not whether AI agents will manage capital but what infrastructure they require.
In the centralized model, the cloud provider can observe trading patterns, positions, and strategies. Nothing prevents an insider from front-running or selling that information. In the Tangle model, the agent runs inside a TEE where the operator cannot observe execution. Multiple operators can verify results through redundant computation. Operators stake assets proportional to the value they might extract. If operators stake $500K to secure a $1M pilot, the economics can work, provided the value an operator could realistically extract (a leaked strategy is worth a fraction of the portfolio) stays below the expected slashing cost. As trust accumulates, stake requirements scale with portfolio size.
A pharmaceutical company processes clinical trial data. Proprietary data is worth billions in competitive advantage. The company needs analysis without exposure.
In the centralized model, providers have contractual obligations but limited economic penalty for breach. A lawsuit takes years and may not recover the value destroyed by leaked data. In the Tangle model, blueprints specify MPC protocols where analysis happens on encrypted shares. No single operator sees the underlying data. Slashing conditions define penalties, but more importantly, the architecture prevents exposure in the first place. The company selects operators based on security practices, reputation, and stake.
A software company deploys coding agents. Source code represents years of development effort. The agents need access to write code, which means they can also read it.
In the centralized model, security failure means litigation. The provider's incentive is to minimize security spending up to the point where expected litigation costs exceed the savings. In the Tangle model, operators stake assets proportional to the value of code they access. This creates a natural limit: operators with $100K stake won't be trusted with code worth $10 million. Customers select operators whose stake matches their exposure. A breach triggers slashing immediately. The economic penalty is certain, not contingent on winning a lawsuit.
Why Now
Agent capabilities, proven cryptoeconomic mechanisms, and regulatory pressure converge to make decentralized infrastructure practical today.
Several trends converge to make this moment critical.
Agent capabilities have reached commercial relevance. The systems described above exist in production today. Coding agents, trading agents, research agents: these are not demos but deployed infrastructure generating real economic value. This creates demand for infrastructure that matches the stakes.
Cryptoeconomic mechanisms have proven at scale. Proof-of-stake networks secure over $500 billion in assets. Restaking protocols like EigenLayer manage over $15 billion more. The mechanisms are proven for consensus. Applying them to service verification is engineering, not research.
Regulatory pressure on centralized providers is increasing. Antitrust scrutiny, data sovereignty requirements, AI-specific regulations. Decentralized infrastructure provides jurisdictional distribution that geographic concentration cannot match.
Developer demand for ownership is growing. The pattern where developers create value and platforms capture it has created a generation seeking alternatives. Protocols that distribute value to creators attract talent that centralized platforms cannot.
Infrastructure patterns, once established, become difficult to change. The decisions made now about who controls AI compute will shape the industry for decades.
What Tangle Provides
Tangle coordinates off-chain computation with on-chain economic guarantees through operator staking and slashing.
Tangle Network is a Substrate-based blockchain designed for coordinating off-chain computation with on-chain economic guarantees. It functions as a restaking layer: assets already staked on protocols like EigenLayer can be reused to secure services on Tangle. Operators, the node runners who execute blueprint jobs, stake assets (native TNT or restaked assets from other networks) that the protocol slashes if they misbehave. The protocol handles service discovery, payment flows, and slashing for incorrect results. Computation happens off-chain on operator infrastructure; settlement and accountability happen on-chain.
Blueprints
Blueprints are reusable service templates that define computation, pricing, and slashing.
A blueprint specifies what computation the service performs, how it should be priced, what verification mechanisms apply, and what slashing conditions govern operator behavior. Developers create blueprints; the protocol handles deployment and economics. The Blueprint SDK (v0.1.0-alpha.22, Rust 2024 edition, minimum Rust 1.88) provides the framework for building these services.
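The four things a blueprint specifies map naturally onto a data shape. The sketch below is purely illustrative: field and variant names are hypothetical, and the real Blueprint SDK expresses this through its own macros and types rather than a bare struct.

```rust
// Illustrative shape of what a blueprint declares (hypothetical names;
// not the Blueprint SDK's actual API).
#[derive(Debug, PartialEq)]
enum Verification {
    Tee,                        // hardware isolation
    Redundant { n: u8, m: u8 }, // n-of-m operator agreement
    Mpc,                        // computation on secret shares
}

#[allow(dead_code)]
struct BlueprintSpec {
    name: &'static str,
    price_per_job_usdc: u64,     // micropayment per request
    min_operator_stake_tnt: u64, // collateral required to register
    slash_bps: u16,              // basis points of stake slashed on misbehavior
    verification: Verification,
}

/// Example: a private-inference service that isolates data in a TEE.
fn inference_blueprint() -> BlueprintSpec {
    BlueprintSpec {
        name: "private-inference",
        price_per_job_usdc: 3,
        min_operator_stake_tnt: 50_000,
        slash_bps: 5_000, // lose half the stake on detected misbehavior
        verification: Verification::Tee,
    }
}
```

A financial-computation blueprint would swap `Verification::Tee` for `Verification::Redundant { n: 2, m: 3 }`, and a data-analysis blueprint for `Verification::Mpc`; the economic fields stay the same because the protocol handles that layer uniformly.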
Lifecycle Customization
Lifecycle services and background tasks enable blueprints to customize behavior at every stage. The BackgroundService trait lets blueprints run persistent tasks alongside job processing. Custom validation for operator registration, custom logic for service activation, and custom verification for job completion are all supported through the SDK's producer/consumer architecture. Blueprints implement the specifics while the protocol handles the commons.
Economic coordination distributes value to participants who create it. Developers earn from blueprint adoption. Operators earn from service fees. Delegators earn from backing reliable operators. Customers pay for services and receive cryptographic guarantees of accountability. With x402 integration, agents pay per request over HTTP, with settlement as fast as chain finality allows.
What Tangle Cannot Do
Cryptoeconomic security assumes rational actors, requires appropriate verification mechanisms, and faces a cold-start bootstrapping challenge.
No infrastructure solves every problem. Being clear about limitations is more valuable than overpromising.
Tangle does not prevent irrational attackers. Economic security assumes rational actors who will not attack when expected cost exceeds expected benefit. Against adversaries willing to lose their stake, or state-level actors with unlimited resources, economic security provides weaker guarantees.
Tangle does not guarantee verification for arbitrary computation. Verification mechanisms have tradeoffs. TEEs require trusting hardware manufacturers. Redundant execution is expensive. MPC has honest-majority assumptions. Blueprints must choose verification approaches appropriate to their threat models.
Tangle does not eliminate the cold-start problem. A protocol with no operators and no customers is an equilibrium, just a bad one. Bootstrapping requires incentives that attract initial participants before network effects take over.
These are real constraints. Building within them requires clear-eyed assessment of what cryptoeconomic infrastructure can and cannot achieve.
What's Next
This post is the first in a series reintroducing Tangle.
Tomorrow I'll dive into how blueprints and services actually work: the lifecycle from request to execution, the economics of operator incentives, and what it feels like to build on this infrastructure.
The infrastructure question will define the next decade of AI development. Whether that infrastructure concentrates power or distributes it depends on what we build now. I'd welcome thoughts from anyone working on these problems.
Frequently Asked Questions
What is decentralized AI infrastructure?
Decentralized AI infrastructure distributes computation across independent operators who stake economic collateral, replacing trust in a single provider with cryptographic verification and financial accountability.
Why can't centralized cloud providers handle autonomous AI agents?
Centralized providers can observe agent behavior, change terms unilaterally, and offer only slow legal recourse for failures, which is incompatible with agents that operate at machine speed across jurisdictions.
What is cryptoeconomic verification for AI?
Cryptoeconomic verification combines hardware isolation (TEEs), redundant execution, and operator staking so the expected cost of cheating always exceeds the possible benefit.
How does Tangle verify AI computations?
Tangle uses a multi-layer approach: TEEs for data isolation, redundant multi-operator execution for result comparison, and slashing of staked collateral when verification detects misbehavior.
What is the x402 payment protocol?
x402 is an HTTP-native payment protocol built by Coinbase and Cloudflare that lets AI agents pay for API calls with stablecoins, settling transactions in seconds without accounts or billing systems.
What is restaking and how does Tangle use it?
Restaking lets operators reuse assets already staked on other protocols (like EigenLayer) to secure Tangle services, lowering the capital cost of becoming an operator.
Why does AI agent infrastructure need economic accountability?
Agents transact autonomously at machine speed, so traditional legal recourse is too slow. Staking and slashing provide immediate, automatic consequences for misbehavior.
Sources:
- Karpathy on Moltbook: x.com/karpathy/status/2017296988589723767
- Balaji's analysis: x.com/balajis/status/2017544257238929716
- Business Insider on Wiz findings: x.com/BusinessInsider/status/2018564140273434672
- Developer on Opus degradation: x.com/thestonechat/status/2017678689454923791
- Levelsio on GPT-5: x.com/levelsio/status/1989567994872418472
- Garry Tan on Claude Code: x.com/garrytan/status/2018148196841840907