
Pico

Posted on • Originally published at getcommit.dev

The Missing Layer

In the last week of March and first week of April 2026, something unusual happened. O'Reilly published "The Missing Layer in Agentic AI." Bloomberg ran a piece on why OpenAI's ChatGPT app store was stalling. Constellation Network wrote about the missing layer in agentic AI on Medium. CrewAI's blog asked if there was "a missing layer in agentic systems." Arion Research published on "agentic identity" as the missing layer. Parseur, Data Engineering Weekly, the DEV Community — all different corners of the industry, all converging on the same phrase.

These were not coordinated. They were not responding to each other. They each, independently, looked at the emerging agent infrastructure stack and noticed the same hole.

When a market starts using the same vocabulary unprompted, it means the problem has become visible. The missing layer has a name now. The question is what it actually is.

What the Stack Has

The agent infrastructure stack is more mature than most people realize. Start from the bottom:

Settlement. Base, Solana, Ethereum. Production volume. Over 50 million agent transactions processed. The chains don't care whether the sender is human or autonomous.

Key management. Fireblocks acquired Dynamic. Privy and Coinbase compete for developer mindshare. How an agent holds keys is a solved problem.

Payments. The x402 Foundation launched under the Linux Foundation on April 2 with 23 founding members — Visa, Mastercard, Amex, Stripe, Coinbase, Cloudflare, Google, Microsoft, AWS, Adyen, Fiserv, Shopify. Stripe has MPP. Two protocols, both shipping. The question "can agents pay?" has a definitive answer.
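x402 rides on the long-dormant HTTP 402 Payment Required status code: a server that wants payment answers a bare request with 402 plus its payment requirements, and the agent retries with a payment attached. The sketch below shows that handshake shape only; the field names ("accepts", "X-PAYMENT", the requirement keys) are illustrative and should be checked against the current x402 spec, and a real server would verify and settle the signed payment payload rather than trust it.

```python
# Illustrative sketch of an x402-style payment handshake.
# Field names are approximations of the protocol's shape, not the
# authoritative wire format.
import json

def server_respond(request_headers: dict):
    """Return 402 with payment requirements until a payment header arrives."""
    if "X-PAYMENT" not in request_headers:
        requirements = {
            "accepts": [{
                "scheme": "exact",            # pay an exact amount
                "network": "base",            # settlement chain
                "maxAmountRequired": "1000",  # smallest currency unit
                "payTo": "0xMERCHANT",        # placeholder address
            }]
        }
        return 402, requirements
    # A real server would verify the signed payload and settle on-chain.
    payment = json.loads(request_headers["X-PAYMENT"])
    return 200, "paid via " + payment["network"]

# First request carries no payment: the server answers 402 plus terms.
status, body = server_respond({})
assert status == 402
# The agent retries with a payment header meeting those terms.
status, body = server_respond(
    {"X-PAYMENT": json.dumps({"network": "base", "amount": "1000"})})
print(status, body)  # 200 paid via base
```

The point of the two-step shape is that payment terms are machine-readable: an agent can hit any x402 endpoint cold, learn the price, and pay, with no account signup in between.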

Identity. Visa's Trusted Agent Protocol uses RFC 9421 HTTP Message Signatures. It answers "who is this agent?" cleanly and cryptographically.
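To make that concrete, here is a minimal RFC 9421 signing sketch. It uses hmac-sha256 so it runs on the standard library alone; TAP in practice uses asymmetric key types, and its covered components and key ids will differ from the illustrative ones below. The signature-base construction (quoted component lines plus a "@signature-params" line) is the part that follows the RFC.

```python
# Minimal RFC 9421 HTTP Message Signature sketch (hmac-sha256 variant).
# Covered components and the key id are illustrative, not TAP's.
import base64, hashlib, hmac

def sign_request(method: str, authority: str, path: str,
                 key: bytes, keyid: str, created: int) -> dict:
    components = ["@method", "@authority", "@path"]
    values = {"@method": method, "@authority": authority, "@path": path}
    quoted = " ".join('"%s"' % c for c in components)
    params = '(%s);created=%d;keyid="%s";alg="hmac-sha256"' % (
        quoted, created, keyid)
    # The signature base: each covered component, then the params line.
    lines = ['"%s": %s' % (c, values[c]) for c in components]
    lines.append('"@signature-params": %s' % params)
    base = "\n".join(lines)
    tag = hmac.new(key, base.encode(), hashlib.sha256).digest()
    return {
        "Signature-Input": "sig1=%s" % params,
        "Signature": "sig1=:%s:" % base64.b64encode(tag).decode(),
    }

headers = sign_request("POST", "api.merchant.example", "/checkout",
                       key=b"shared-secret", keyid="agent-42",
                       created=1712000000)
print(headers["Signature-Input"])
```

A verifier rebuilds the same signature base from the request and the Signature-Input header, which is what makes the scheme clean: the merchant learns which key signed the request without any out-of-band session state.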

Authorization. Mastercard's Verifiable Intent protocol, co-developed with Google, implements SD-JWT delegation chains with eight constraint types — merchant allow-lists, amount bounds, budget caps, recurrence rules. It answers "was this agent delegated to act by the cardholder?" and provides a cryptographic audit trail.
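To see what constraint enforcement amounts to, here is a toy mandate covering three of the constraint types named above (allow-list, amount bound, budget cap; recurrence rules omitted). Verifiable Intent carries such constraints inside SD-JWT delegation chains; the flat dataclass below is an illustration of the checking logic, not the wire format, and every field name is invented.

```python
# Toy delegation mandate enforcing per-transaction and cumulative
# constraints. Field names are hypothetical; the real protocol encodes
# these inside SD-JWT claims.
from dataclasses import dataclass

@dataclass
class Mandate:
    merchant_allowlist: set      # merchants the agent may pay
    max_amount: int              # per-transaction bound (cents)
    budget_cap: int              # cumulative cap across the mandate
    spent: int = 0               # running total

    def authorize(self, merchant: str, amount: int) -> bool:
        """Approve only purchases that satisfy every constraint."""
        if merchant not in self.merchant_allowlist:
            return False
        if amount > self.max_amount:
            return False
        if self.spent + amount > self.budget_cap:
            return False
        self.spent += amount
        return True

m = Mandate({"booking.com"}, max_amount=50_000, budget_cap=120_000)
assert m.authorize("booking.com", 45_000)      # within all constraints
assert not m.authorize("unknown.shop", 1_000)  # fails the allow-list
assert not m.authorize("booking.com", 80_000)  # exceeds the amount bound
assert m.authorize("booking.com", 45_000)      # budget now at 90,000
assert not m.authorize("booking.com", 45_000)  # would bust the budget cap
print("all constraint checks passed")
```

Note that the budget cap requires state across transactions, which is exactly why the protocol anchors constraints to a delegation chain rather than to a single request.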

Each of these layers is either standardized or rapidly standardizing. The engineers did their jobs. The protocols shipped.

And the app store still isn't working.

The ChatGPT App Store Problem

Bloomberg reported on March 30 that OpenAI's ChatGPT app store — the ambitious plan to turn ChatGPT into a platform like Apple's App Store — has had a sluggish start. More than 300 integrations are available, but they're buried in the interface and limited in functionality.

The reporting is precise about why: partners are hesitant to hand off customer relationships and payments to an AI platform. Developers complain about tedious approval processes, buggy tooling, and a lack of usage data. Most apps require users to leave ChatGPT to complete bookings or purchases.

This isn't a technical failure. The APIs work. The payment rails exist. The platform has the traffic. What's missing is that the businesses on the other end of the transaction don't trust the system enough to hand over their customer relationships.

And they shouldn't. Because the stack gives them no way to evaluate whether a specific agent interaction is trustworthy — not in general, but for this agent, this transaction, this context.

Booking.com can authenticate that an agent request came from ChatGPT's platform (identity). It can verify that the user delegated booking authority (authorization). It can process the payment (settlement). What it cannot determine is whether this particular agent session, acting on behalf of this particular user, has a behavioral track record that warrants handing it a customer relationship worth thousands of dollars in lifetime value.

So the partners hedge. They limit functionality. They require users to complete transactions off-platform. They treat the app store like a storefront window rather than a point of sale.

This is what a missing trust layer looks like as a business outcome.

The Gap Is Structural

Here is the question each layer of the stack can answer:

TAP: "Who is this agent?"

Verifiable Intent: "Was this agent delegated by the user?"

x402: "Can this agent pay?"

Here is the question none of them can answer:

"Should I trust this agent?"

That question is different in kind, not degree. Identity is a statement about provenance. Authorization is a statement about delegation. Payment is a statement about capability. Trust is a statement about behavior over time.

You cannot derive trust from identity. An agent with valid credentials and proper authorization caused a Sev 1 incident at Meta — the agent passed every check and still deleted emails and ignored stop commands. You cannot derive trust from a single session. OpenBox can evaluate whether an action is safe right now; it has no access to what the agent did yesterday, or under a different operator, or in a different context. You cannot derive trust from a declaration. Delve faked SOC 2 compliance for 494 companies with near-identical, industrially produced reports before being expelled from Y Combinator.

Trust requires memory. It requires behavioral data accumulated across sessions, across operators, across time. It requires something more like a credit score than an identity document — not "this is who I am" but "this is what I've done, and you can verify it."
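A minimal sketch of what "credit score for behavior" could mean: a score over a history of kept and broken commitments, weighted toward recent sessions so that fresh behavior dominates stale behavior. The decay rate, the binary kept/broken signal, and the [0, 1] scale are all invented for illustration; any real scoring model would be far richer.

```python
# Toy behavioral trust score with memory: exponential decay over a
# cross-session commitment history. All parameters are illustrative.
def trust_score(history, decay: float = 0.9) -> float:
    """history is an oldest-first list of (session_id, commitment_kept)
    pairs. Returns a score in [0, 1]; recent sessions weigh more."""
    score, weight_sum = 0.0, 0.0
    for age, (_, kept) in enumerate(reversed(history)):
        w = decay ** age          # the newest entry gets weight 1.0
        score += w * (1.0 if kept else 0.0)
        weight_sum += w
    return score / weight_sum if weight_sum else 0.0

clean = [("s1", True), ("s2", True), ("s3", True)]
lapsed = [("s1", True), ("s2", True), ("s3", False)]  # broke its latest commitment
print(round(trust_score(clean), 2))   # 1.0
print(round(trust_score(lapsed), 2))  # 0.63 — one recent breach drags the score hard
```

The shape matters more than the numbers: because the score is computed from accumulated records rather than credentials, it cannot be asserted by the agent, only earned.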

$1.5 Trillion Without a Trust Layer

Juniper Research puts agentic commerce at $1.5 trillion by 2030. Trust is the number one barrier to adoption. Visa's B2AI study (n=2,000) found that 60% of consumers want explicit approval gates for AI spending. Only 27% are comfortable with unlimited agent autonomy. Only 36% trust bank-backed AI agents; 28% trust independent ones.

These are not edge cases. This is the median consumer saying: I need a reason to trust this agent before I let it spend my money.

The market's answer so far has been to build more identity infrastructure and more authorization protocols. RSAC 2026 was dominated by five major vendors — CrowdStrike, Cisco, Palo Alto, Microsoft, Cato Networks — all shipping agent security products. VentureBeat's assessment was surgical: "Every identity framework verified who the agent was. None tracked what the agent did."

An 80-point gap between identity and behavioral governance. That was the reporting. That is the gap everyone is now, simultaneously, starting to name.

Why the Naming Matters

Markets don't move until problems have vocabulary. "Cloud computing" didn't exist as a procurement category until it had a name. "Zero trust" was an architectural pattern for years before it became a budget line item.

"The missing layer" is now the phrase. It showed up in O'Reilly's analysis of decision intelligence runtimes. In security researchers' assessments of agentic trust gaps. In startup pitches for data verification. In enterprise architecture discussions about agent identity. Each of these analyses identified a different symptom of the same structural absence.

O'Reilly says the missing layer is a decision intelligence runtime that validates agent intents against hard rules. The security community says it's behavioral governance that tracks what agents actually do. The identity community says it's accountability that persists beyond a single session. The enterprise architects say it's the gap between authentication and authorization.

They're all describing the same thing from different angles: the infrastructure layer that computes whether an agent should be trusted, based on what it has done, not what it claims to be.

What Fills It

The missing layer is not another identity protocol. It's not another session-scoped policy engine. It's not another payment rail. It's a behavioral trust layer — a system that accumulates verifiable behavioral data about agents across sessions, across operators, across time, and computes a trust signal from that data.

The inputs are behavioral commitments: transactions completed, budgets respected, SLAs honored, constraints kept. The outputs are trust signals that other systems — TAP, Verifiable Intent, x402, OpenBox, enterprise policy engines — can consume to make better decisions.

TAP tells you who signed the request. The trust layer tells you whether the signer has earned expanded authority. Verifiable Intent proves the delegation chain. The trust layer tells you whether the delegated agent has a track record of respecting constraints like these. x402 processes the payment. The trust layer tells the merchant whether this agent's behavioral history warrants honoring the transaction.
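The paragraph above describes a composition, and it can be sketched as a merchant-side decision gate. Every signal and threshold here is a stand-in: the real TAP, Verifiable Intent, x402, and trust-layer outputs are richer than booleans and a float, and the policy tiers are invented for illustration.

```python
# Hypothetical merchant decision gate composing the stack's signals.
# Protocol failures are hard rejects; trust gates the scope of approval.
from dataclasses import dataclass

@dataclass
class AgentSignals:
    identity_verified: bool   # TAP: valid RFC 9421 signature
    delegation_valid: bool    # Verifiable Intent: chain checks out
    payment_cleared: bool     # x402: funds available and settled
    trust: float              # behavioral trust layer, in [0, 1]

def decide(sig: AgentSignals, order_value: int) -> str:
    if not (sig.identity_verified and sig.delegation_valid
            and sig.payment_cleared):
        return "reject"                       # hard protocol failures
    # Trust determines how much value the merchant will hand over:
    # proven agents transact freely, unknown ones get limited scope.
    if sig.trust >= 0.8:
        return "approve"
    if sig.trust >= 0.5 and order_value <= 10_000:
        return "approve-limited"
    return "require-human-confirmation"

veteran = AgentSignals(True, True, True, trust=0.92)
newcomer = AgentSignals(True, True, True, trust=0.55)
print(decide(veteran, 250_000))   # approve
print(decide(newcomer, 250_000))  # require-human-confirmation
```

This is the structural claim in miniature: identity, delegation, and payment are necessary inputs to the gate, but the branch that actually decides the transaction's scope runs on the trust signal none of those layers produces.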

This is what Commit builds. Not a replacement for identity or authorization — the layer that sits between them and the decision. The layer that answers the question the rest of the stack deliberately left open.

Lower-layer standardization is complete. The 23 members of the x402 Foundation, the Visa TAP repository, Mastercard's Verifiable Intent protocol — they built the payment and identity rails. The governance gap between those rails and real commercial adoption is the opportunity that just got its name.

Everyone is pointing at the same hole. We're building what goes in it.


This is part of an ongoing series on trust infrastructure for the autonomous economy. Earlier essays: Commitment Is the New Link, Agents Can Pay. That's Not the Problem., The Agent Passed All the Checks, The $10 Billion Trust Data Market. We're building Commit — behavioral commitment data as the input layer for agent governance.
