
Where Capability Lives: A Meta-Protocol for Distributed Intelligence on the Trillion-Device Installed Base

The next decade of AI will not be decided by which model is largest. It will be decided by which protocol lets capability live inside hardware.

Today, AI capability lives where it was trained — inside large data centers, behind opaque APIs, in models whose weights cannot be inspected and whose runtime is divorced from any substrate that has to honor what they claim. The result is a structural mismatch. AI capability grows in the cloud. The trillion devices that already populate the physical world — phones, vehicles, embedded controllers, industrial sensors, smart appliances, edge gateways — are slowly drifting from "possible to upgrade" into "expensive to discard."

This essay argues that the missing layer is not a new model. It is a meta-protocol — a coordination layer at which capability declarations remain accountable to the substrates that honor them, while still allowing capabilities to evolve, accumulate, and move across heterogeneous hardware. Rotifer Protocol is the framework we have been building toward this position. The companion paper Where Capability Lives, and How Hardware Earns the Right to Run It develops the full argument; this short essay summarizes it for the time-constrained reader.


What HTTP Did, and What AI Has Not Done Yet

In 1991, the Web did not exist. By 2001, it was rewriting commerce, education, and software. The technical precondition was a single thing: a protocol that did not own the content but defined how content could be linked, addressed, and rendered by anyone.

HTTP did not invent text. It did not invent the network. What HTTP did was define a coordination layer at which two unrelated parties could agree on what a document was. The Web's value flowed through HTTP, but HTTP itself remained light, unowned, and evolvable. Composability followed openness; openness followed protocol-level minimalism.

Compare that to the current state of AI capability.

There is no agreed-upon way to ask another system "what can you do, on what substrate, at what fidelity, with what verifiable guarantees." There is no analog of an HTML document for a unit of intelligence — no portable, inspectable, citable, evaluable artifact. Function-calling tool schemas and MCP-style descriptions are improvements at the SDK layer, not at the protocol layer. They standardize a calling convention. They do not standardize the substrate-awareness that distinguishes a capability that can run from a capability that should run.
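To make the missing artifact concrete, here is a minimal sketch, in TypeScript, of what a portable, inspectable capability declaration could look like. Every type and field name below is an illustrative assumption, not part of any published Rotifer specification.

```typescript
// Hypothetical sketch: a capability as a portable, inspectable, citable artifact.
// Names are illustrative only; they do not come from a published spec.

/** The substrate a capability claims to run on, and at what fidelity. */
interface SubstrateClaim {
  substrateClass: string;                         // e.g. "mobile-npu", "rpi5-cpu"
  fidelity: "full" | "reduced" | "unsupported";
  verifiableBy: "tee-attestation" | "signature-chain" | "none";
}

/** A unit of intelligence described as an evaluable document. */
interface CapabilityDeclaration {
  id: string;                                     // stable, citable identifier
  version: string;
  description: string;
  inputs: Record<string, string>;                 // named inputs and their types
  outputs: Record<string, string>;                // named outputs and their types
  substrates: SubstrateClaim[];                   // where it can run, at what fidelity
  evaluation: { suite: string; score: number }[]; // how the claim was measured
}

// Example instance: a summarization capability that runs fully on a mobile NPU
// but only in reduced form on a Raspberry-Pi-class CPU.
const example: CapabilityDeclaration = {
  id: "example.summarize-v1",
  version: "1.0.0",
  description: "Summarize a document of up to 4k tokens",
  inputs: { document: "text" },
  outputs: { summary: "text" },
  substrates: [
    { substrateClass: "mobile-npu", fidelity: "full", verifiableBy: "tee-attestation" },
    { substrateClass: "rpi5-cpu", fidelity: "reduced", verifiableBy: "signature-chain" },
  ],
  evaluation: [{ suite: "example-summarization-suite", score: 0.81 }],
};

console.log(JSON.stringify(example, null, 2));
```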

A meta-protocol for AI capability would do for distributed intelligence what HTTP did for documents — without owning the capability, without dictating implementations, without locking in any single vendor.


Three Sentences That Are Not the Same

Most capability drift originates from collapsing three different sentences into one.

"X is possible."

"X is possible on this kind of hardware."

"X is possible on the hardware in your hands right now."

A protocol that does not distinguish these sentences will let any product compress them into one. The first travels well in keynotes. The third is the only one that pays interest on the loan.

Recent information-theoretic work (Finzi et al., 2026, on epiplexity) makes this distinction precise: capability is not a property of a problem; it is a property of the pair (problem, observer). Two device generations facing the same workload are not running the same race at different speeds — they are running races with finish lines in different places. No amount of software effort raises an observer's computational budget; software gets better, but substrates remain finite. The protocol's job is to mediate between the two by exposing substrate-awareness as first-class structure.
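One way to see the difference is to give each of the three sentences its own claim scope, so that nothing downstream can collapse them silently. The sketch below is purely illustrative; the type and field names are assumptions, not protocol vocabulary.

```typescript
// Hypothetical sketch: the three sentences as three distinct claim scopes.
// A claim only narrows as evidence about the substrate accumulates.

type Claim =
  | { scope: "possible" }                                            // "X is possible."
  | { scope: "possible-on-class"; substrateClass: string }            // "...on this kind of hardware."
  | { scope: "possible-here"; deviceId: string; attested: boolean };  // "...on the hardware in your hands."

/** Only the narrowest, attested claim is allowed to back a product promise. */
function canPromiseInteractively(claim: Claim): boolean {
  return claim.scope === "possible-here" && claim.attested;
}

const keynote: Claim = { scope: "possible" };
const datasheet: Claim = { scope: "possible-on-class", substrateClass: "mobile-npu" };
const thisPhone: Claim = { scope: "possible-here", deviceId: "device-123", attested: true };

console.log(canPromiseInteractively(keynote));   // false
console.log(canPromiseInteractively(datasheet)); // false
console.log(canPromiseInteractively(thisPhone)); // true
```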


TEE as the Physical Entry Point

The trillion-device installed base is real, and it is not going to be cloud-native any time soon. The industry's three default responses each fall short.

Centralized cloud inference is bounded by latency, sovereignty, and long-tail accessibility. Aggressive OTA promises produce capability drift across hardware generations. Edge autonomy in isolation loses cross-device knowledge transfer. Each path has a real success region. None of them, alone or in combination, supports distributed intelligence at installed-base scale.

The fourth path — the one we have been building toward — is a meta-protocol layer through which devices can declare what they actually do, attest the substrate they run on, and exchange capabilities with the rest of the network without surrendering control to any centralized layer.

The physical entry point for this layer, on the existing installed base, is the Trusted Execution Environment.

The protocol's L0 Kernel specification has always admitted TEE as one of four legitimate trust backends — alongside distributed ledgers, cryptographic signature chains, and HSMs. What this essay argues is operational, not architectural: among the four, TEE is the trust backend the meta-protocol specifically needs to engage the installed base, because TEE is the only one whose deployment surface is co-extensive with consumer-facing physical hardware.

Three properties make this role distinctive. Universal availability: TEE-class capability already exists in the silicon of devices that have shipped, been paid for, and are in operation. Hardware-rooted integrity: a capability declaration carrying a TEE attestation makes a claim verifiable against silicon-level state, not just software-level assertions. Identity rooted in a specific device: a meta-protocol whose unit of participation is a node, not just an account, needs identity anchored in silicon, not just in keys.

A TEE alone has no opinion about what a capability is. It can attest that a particular binary ran and produced a particular output. It cannot say whether the binary was a faithful implementation of a published capability, whether the output composed correctly with other capabilities, or whether resource declarations matched actual usage. The meta-protocol layer is where those questions become answerable. TEE provides hardware-trusted; the meta-protocol provides capability-known. Both are necessary; neither is sufficient alone.
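A minimal sketch of that division of labor, with hypothetical names throughout: the TEE layer answers "did this exact binary run on this silicon", the meta-protocol layer answers "is that binary a faithful, correctly resourced implementation of a declared capability". An execution counts only when both checks pass.

```typescript
// Hypothetical sketch: "hardware-trusted" and "capability-known" as two
// separate checks. Structures and names are illustrative only.

interface TeeAttestation {
  deviceId: string;
  binaryHash: string;   // hash of the binary the TEE says it executed
  signature: string;    // signature rooted in device silicon
}

interface CapabilityRecord {
  capabilityId: string;
  publishedBinaryHash: string;   // hash of the published, reviewed implementation
  declaredPeakMemoryMb: number;  // resource declaration made at publish time
}

/** Hardware-trusted: the silicon vouches that this exact binary ran.
 *  (Signature verification is stubbed; a real check would walk the
 *  vendor's attestation verification chain.) */
function isHardwareTrusted(att: TeeAttestation): boolean {
  return att.signature.length > 0; // placeholder for real verification
}

/** Capability-known: the binary that ran is the one the declaration was
 *  published against, and observed resource usage matched the declaration. */
function isCapabilityKnown(
  att: TeeAttestation,
  record: CapabilityRecord,
  observedPeakMemoryMb: number,
): boolean {
  return (
    att.binaryHash === record.publishedBinaryHash &&
    observedPeakMemoryMb <= record.declaredPeakMemoryMb
  );
}

/** Neither check is sufficient alone. */
function acceptExecution(att: TeeAttestation, record: CapabilityRecord, memMb: number): boolean {
  return isHardwareTrusted(att) && isCapabilityKnown(att, record, memMb);
}
```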


The Math Just Started Working

A continuous improvement in edge inference would not change the architectural conversation. What has actually happened in 2026 is qualitatively different.

For multi-step agent workflows — tool calling, intermediate reasoning, structured output, several rounds of decision — the throughput threshold has become surprisingly concrete. Public reports for Google's Gemma 3 family indicate decode rates around 7–8 tokens per second on Raspberry Pi 5 CPU for the smaller variants, and 30+ tokens per second on Qualcomm-class mobile NPUs for the next variant up. These rates are sufficient to support a four-thousand-token input followed by two skill invocations within a wall-clock budget that users will accept as interactive.
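As a rough sanity check on that threshold, the back-of-envelope below estimates wall-clock time for a four-thousand-token prompt followed by two skill invocations. Only the decode rates come from the figures quoted above; the prefill-to-decode ratio, tokens generated per step, and tool-output sizes are assumptions chosen for illustration, and the result is sensitive to all of them.

```typescript
// Back-of-envelope estimate of an agent loop's wall-clock time.
// Assumptions (not from the essay): prefill runs ~10x faster than decode,
// each reasoning/skill step emits ~150 tokens, and each skill invocation
// feeds ~300 tokens of tool output back into the context.

function estimateSeconds(decodeTokPerSec: number): number {
  const prefillTokPerSec = decodeTokPerSec * 10; // assumed prefill/decode ratio
  const inputTokens = 4000;
  const decodeSteps = 3;          // plan, call skill 1, call skill 2 / final answer
  const tokensPerStep = 150;      // assumed generated tokens per step
  const toolOutputTokens = 300;   // assumed tokens re-ingested per skill result
  const skillInvocations = 2;

  const prefillTime =
    (inputTokens + skillInvocations * toolOutputTokens) / prefillTokPerSec;
  const decodeTime = (decodeSteps * tokensPerStep) / decodeTokPerSec;
  return prefillTime + decodeTime;
}

console.log(estimateSeconds(7.5).toFixed(1)); // ~2 minutes under these assumptions (Pi-5-class CPU)
console.log(estimateSeconds(30).toFixed(1));  // ~30 seconds under these assumptions (mobile-NPU class)
```

Under these assumptions the mobile-NPU class sits comfortably inside an interactive budget, while the CPU class is marginal and depends heavily on how much of the prefill can be cached and how short the generated steps are kept.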

This is a phase transition, not a continuous improvement. A workload that previously required cloud round-trips, because no available edge stack could close the loop in interactive time, can now be edge-resident for the first time with reasonable engineering effort. A meaningful fraction of the existing installed base — recent-generation smartphones, current-generation vehicle infotainment, the higher tiers of industrial gateways — already crosses this threshold. The bottleneck is no longer silicon. The bottleneck is the absence of a protocol layer at which these devices can declare, attest, and exchange.

That is what the meta-protocol exists to remove.


The Smartphone as Anchor

The cleanest evaluation surface for the meta-protocol-on-hardware story is the modern smartphone. Five reasons converge there. TEE deployment is effectively universal. The long-lifecycle tension between purchased capability and current capability is sharpest. User education cost is lowest because OTA updates are already accepted. Personalization data is naturally rich, making local Imprinting valuable. And cross-device migration — the case where the user replaces their phone — is a natural user behavior that maps directly onto the protocol's Adapter primitive.

Concretely, consider a five-year-old smartphone in active use today. Under current industry defaults, this device has two futures. Either it gets retired because newer capabilities cannot reach it, or it limps along on capability promises that progressively fail to match what the user was told at purchase. Both futures are wasteful and both are recurrent.

The meta-protocol offers a third future. The device declares its actual Phenotype: which capabilities it can run Natively, which only Wrapped, which exceed its observer class entirely. Its TEE attests that those declarations are honest. The device does not pretend to support what it cannot, and the protocol does not let it. In return, the device receives capabilities sized to its substrate and accumulates Imprinted local value across its remaining operational life — a model of one user's habits, one device's interaction patterns, one network environment's quirks. That value cannot generalize to other users. It does not need to. It is local by design.
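A minimal sketch of what such a Phenotype declaration could look like for an aging handset, again with hypothetical names; the essay's own terms (Natively, Wrapped, observer class) supply the three tiers.

```typescript
// Hypothetical sketch of a device Phenotype: which capabilities run natively,
// which only wrapped (reduced or slower path), and which exceed the device's
// observer class entirely. Names are illustrative only.

type SupportTier = "native" | "wrapped" | "beyond-observer-class";

interface Phenotype {
  deviceId: string;
  observerClass: string;                  // e.g. "smartphone-2021-midrange"
  capabilities: Record<string, SupportTier>;
  attestation: string | null;             // TEE attestation over this declaration
}

const fiveYearOldPhone: Phenotype = {
  deviceId: "device-123",
  observerClass: "smartphone-2021-midrange",
  capabilities: {
    "example.summarize-v1": "native",                 // fits the device as-is
    "example.translate-v2": "wrapped",                // runs, via a reduced path
    "example.video-gen-v1": "beyond-observer-class",  // not offered on this device
  },
  attestation: "tee:...", // placeholder; a real declaration would carry a signed quote
};

/** The protocol only routes capabilities the Phenotype actually supports. */
function offeredCapabilities(p: Phenotype): string[] {
  return Object.entries(p.capabilities)
    .filter(([, tier]) => tier !== "beyond-observer-class")
    .map(([id]) => id);
}

console.log(offeredCapabilities(fiveYearOldPhone));
// ["example.summarize-v1", "example.translate-v2"]
```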

When the user eventually replaces the device, the protocol's Adapter layer handles cross-device migration as a form of cross-class translation, attested at both endpoints and never surrendered to any intermediate party. This is the migration story users already expect; the meta-protocol's contribution is making it work without giving up the structure.
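At the data-structure level, that migration could look something like the record below, shown only as an illustration: attested by both endpoints, while any relay in the middle only ever handles ciphertext.

```typescript
// Hypothetical sketch of a cross-device migration record. The relay moving
// the bytes never holds a decryption key; honesty is checked through
// attestations from both endpoints. All names are illustrative.

interface MigrationRecord {
  sourceDeviceId: string;
  targetDeviceId: string;
  payloadCiphertext: string;        // Imprinted local state, encrypted end-to-end
  sourceAttestation: string;        // source TEE: "this is what I exported"
  targetAttestation: string | null; // target TEE: "this is what I imported", set on arrival
}

/** A migration counts as complete only when both endpoints have attested. */
function migrationComplete(record: MigrationRecord): boolean {
  return record.sourceAttestation.length > 0 && record.targetAttestation !== null;
}
```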


What This Essay Does Not Claim

To prevent the kind of capability drift this argument itself diagnoses, three exclusions are explicit.

This essay does not claim that engineering work to deploy a TEE-backed Binding for Rotifer Protocol is complete or imminent. The argument here sits at the strategic and narrative layer, decoupled from the engineering priorities of the protocol's near-term release schedule. The implementation track will catch up; this essay is released ahead of full implementation because the methodology benefits from public critique before its first measurement is produced.

This essay does not claim that TEE heterogeneity is solved. The five major TEE families currently deployed do not interoperate at the protocol layer today. Bridging them is the responsibility of the protocol's Adapter layer; the cross-TEE attestation story is one of the most concrete near-term open questions.

This essay does not claim that Rotifer becomes a hardware company. Rotifer is a protocol layer. A Binding is a contract under which a runtime can host the protocol; a TEE-backed Binding would be one such contract. The Foundation does not propose to manufacture silicon, certify devices, or operate TEE infrastructure on behalf of OEMs.

These exclusions are not boilerplate. They are the substrate the rest of the argument depends on.


What Substrates Do, and Why Foundations Should Want to Disappear

The success criterion for a meta-protocol is unusual. A successful product makes its creators increasingly important. A successful protocol makes its creators increasingly replaceable. HTTP outlasted its original commercial supporters because the protocol's value migrated away from any single party. Bitcoin's white paper continues to matter regardless of any individual operator's behavior. The deepest test of a meta-protocol is whether it can survive the disappearance of its originating organization.

Rotifer Foundation operates a privileged node within the protocol network. That privilege exists in capacity, in centrality, in early-adopter access. It does not exist in necessity. The protocol's design treats Foundation-operated infrastructure as one privileged node among others — privileged because it was first, not because the protocol depends on it. The most successful version of this story is one in which other privileged nodes, operated by partners, communities, competitors, and entities the Foundation has no relationship with, run alongside it, and the protocol thrives without distinguishing between them.

A protocol's contribution to distributed intelligence is not a product. It is a substrate. Substrates succeed by becoming things their originators do not control.


How to Engage

For readers who find the argument worth engaging with, four channels exist.

The protocol's specification, reference implementations, and companion papers are publicly available under permissive licenses. Implementation feedback, specification review, and Adapter contributions are welcome through the open-source community.

The information-theoretic framework, the Capable Edge profile, and the cross-class translation analysis each connect to active research traditions. Population biologists, complex-systems theorists, mechanism designers, information theorists, and embedded-systems researchers whose tools we have adopted are invited to collaborate.

For OEMs, Tier-1s, and integrators, the protocol's longer-horizon track includes Binding work for which the only realistic engineering path requires industry participation. Conversations on this track do not assume immediate commercial commitments; they are about the shape of a Binding spec that could, on a multi-year horizon, support production deployment.

For investors, integrators, and platform operators interested in supporting the formation of a non-vendor-owned meta-protocol layer, the Foundation's commercial strategy is structured around being a privileged node within an open ecosystem rather than a platform that captures the ecosystem's value. Early ecosystem participation is invited.

The full argument — including the information-theoretic foundations, the protocol's substrate-aware vocabulary, the implementation honesty layering, and the open questions still active — is in the companion paper Where Capability Lives, and How Hardware Earns the Right to Run It. This essay is the entry point. The reader is invited to disagree on every page.


This article was originally published on rotifer.dev. Follow the project on GitHub or install the CLI: npm i -g @rotifer/playground.
