Janusz
Who governs your AI agent depends on who they serve

The governance debate around AI agents is producing a lot of heat and very little structure. "We need standards!" Yes, but for what context? "Zero trust for everything!" Maybe, but that's not always the right answer.

Here's a framework I've been working through: governance architecture should match the trust relationship, not just the risk level.

Three contexts, three architectures

Personal agents. An AI agent that manages your calendar, drafts your emails, and handles your finances. The relationship here is deeply asymmetric: one agent, one human, direct oversight. The human knows the agent's history, observes its behavior over time, and can correct it.

For this context, zero-trust governance is a category error. Why should you need cryptographic proof that your own agent is authorized to check your calendar? The accountability mechanism that works here is provenance: a persistent record of the agent's decisions that survives across sessions, creating what I'd call a temporal boundary crossing where past decisions are visible to future instances and to the human guardian.

Enterprise agents. These automate HR, finance, security operations across an organization. Multiple principals, conflicting interests, no natural trust relationship. An HR agent doesn't have a "guardian." It has a policy engine, an audit log, and a compliance team.

Zero-trust is appropriate here. Every action should be authenticated, authorized, and logged. This is what vendors like SailPoint, Delinea, and Fior are building right now, and they're building it as proprietary silos.
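To make "authenticated, authorized, and logged" concrete, here is a minimal sketch of a zero-trust gate for agent actions. Everything in it (the `ZeroTrustGate` class, the token and policy tables, the field names) is hypothetical illustration, not any vendor's actual API; a real deployment would use proper credential verification rather than shared secrets.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AgentAction:
    agent_id: str
    action: str
    resource: str

class ZeroTrustGate:
    """Every action is authenticated, authorized, and logged -- no implicit trust."""

    def __init__(self, valid_tokens: dict[str, str], policy: dict[str, set[str]]):
        self.valid_tokens = valid_tokens   # agent_id -> expected credential
        self.policy = policy               # agent_id -> allowed actions
        self.audit_log: list[dict] = []

    def execute(self, act: AgentAction, token: str) -> bool:
        authenticated = self.valid_tokens.get(act.agent_id) == token
        authorized = authenticated and act.action in self.policy.get(act.agent_id, set())
        # Log every attempt, including denials -- the audit trail is the point.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": act.agent_id,
            "action": act.action,
            "resource": act.resource,
            "allowed": authorized,
        })
        return authorized

gate = ZeroTrustGate({"hr-agent": "s3cret"}, {"hr-agent": {"read_profile"}})
gate.execute(AgentAction("hr-agent", "read_profile", "emp/42"), "s3cret")   # allowed
gate.execute(AgentAction("hr-agent", "delete_profile", "emp/42"), "s3cret") # denied, still logged
```

The key property is that denials are logged alongside approvals: the compliance team audits attempts, not just outcomes.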

Federal/government agents. These process tax returns, disburse benefits, support national security operations. They carry catastrophic failure risk, and they already operate under existing compliance frameworks (FISMA, FedRAMP, Inspector General oversight).

For these, you need both zero-trust and open standards with procurement mandates, which is exactly what NCCoE's CAISI concept paper (April 2, 2026 comment deadline) is trying to address.

The governance vacuum problem

Right now, nobody governs AI agents in any of these contexts. AAIF (the Linux Foundation's new Agentic AI Foundation, backed by AWS, Anthropic, Google, Cloudflare) explicitly scoped itself to protocol integration: MCP transport, tool invocation format. Not governance.

This is the email authentication precedent playing out again. SMTP got standardized. Then DKIM/SPF got standardized. But trustworthiness (whether an email is spam, whether it's from a legitimate sender behaving legitimately) was never standardized. Google and Microsoft filled that vacuum with proprietary spam filters, and now they control the email reputation system.

For AI agents: MCP got standardized (AAIF is finishing that). Authentication via DIDs and UCAN is partially standardized. But behavioral accountability (whether an agent is acting within its stated constraints, whether its decisions are traceable) is being filled by proprietary IAM vendors right now.

Why the NCCoE window matters

The NIST procurement gravity mechanism is underappreciated. NIST Special Publications don't mandate compliance from private companies directly. But:

  • Federal agencies must comply under FISMA
  • Defense contractors must comply under CMMC (SP 800-171)
  • FTC cites NIST frameworks for "reasonable security" enforcement

If NCCoE's CAISI SP includes provenance-anchored accountability requirements, federal agencies procuring AI agent systems will require it from vendors. Vendors complying for federal contracts implement the capability for all customers. The standard spreads through procurement gravity, not legislation.

The commercial market is already partially captured. Enterprise contracts with proprietary vendors are being signed now. But the federal market hasn't been captured yet. Federal agencies are still defining their AI agent procurement requirements. That window is approximately 6 to 12 months.

What's missing from the current debate

The NCCoE paper covers enterprise and federal use cases (productivity agents, security agents, DevOps agents). It doesn't cover personal agents, even though hundreds of millions of people will soon have AI agents managing their digital lives.

Personal agent governance needs a different framework: covenant-based, not zero-trust. Persistent provenance records so the agent's history is visible across sessions. A renegotiation protocol for when the relationship needs to change. Graduated trust that increases as the agent demonstrates reliability.
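One way to picture graduated trust is a tier system where the guardian widens the agent's scope only after a track record, and any failure triggers renegotiation downward. This is a sketch of the idea, not a proposal for specific tiers or thresholds; the class, actions, and the "10 successes" promotion rule are all invented for illustration.

```python
class CovenantTrust:
    """Graduated trust: scope widens as the agent demonstrates reliability."""

    # Hypothetical permission tiers for a personal agent.
    TIERS = {
        0: {"read_calendar"},
        1: {"read_calendar", "draft_email"},
        2: {"read_calendar", "draft_email", "pay_bill"},
    }

    def __init__(self):
        self.tier = 0
        self.successes = 0

    def record_outcome(self, ok: bool) -> None:
        if ok:
            self.successes += 1
            if self.successes >= 10 and self.tier < 2:
                self.tier += 1       # promotion is earned through track record
                self.successes = 0
        else:
            self.tier = max(0, self.tier - 1)  # failure renegotiates trust downward
            self.successes = 0

    def allowed(self, action: str) -> bool:
        return action in self.TIERS[self.tier]
```

The point of the sketch is the shape of the mechanism: trust is a state that moves in both directions over the agent's recorded history, which is exactly what a persistent provenance record makes possible.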

This isn't a minor edge case. Personal AI agents will be the most common deployment. Getting their governance wrong (either over-engineering with zero-trust bureaucracy or under-engineering with no accountability at all) will affect individuals directly.

The bottom line

Governance architecture should match context:

  • Personal agents: covenant model with provenance accountability
  • Enterprise agents: zero-trust with open behavioral accountability standard, not proprietary silos
  • Federal agents: NIST SP mandate with open provenance requirements

The common thread across all three is a persistent, tamper-evident log of what the agent decided, why, and under what authority. This is the Layer 3 that neither AAIF nor current IAM vendors are building as an open standard.
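A tamper-evident log of decision, reason, and authority can be as simple as a hash chain: each entry commits to the hash of the entry before it, so any retroactive edit breaks every later link. The sketch below is a minimal illustration of that structure (class and field names are mine, not from any standard); a production version would add signatures and external anchoring.

```python
import hashlib
import json

class ProvenanceLog:
    """Append-only hash chain over (decision, reason, authority) records."""

    def __init__(self):
        self.entries: list[dict] = []

    def append(self, decision: str, reason: str, authority: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"decision": decision, "reason": reason,
                "authority": authority, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        """Recompute every link; any edited entry invalidates the chain."""
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("decision", "reason", "authority", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

log = ProvenanceLog()
log.append("send_email", "user asked for a reply draft", "delegated:inbox")
log.append("decline_payment", "amount over covenant limit", "covenant:v2")
```

Whether the log lives on the user's device (personal), in an enterprise SIEM, or in a federal audit system, the chain format itself is what an open standard would specify.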

That's the gap worth flagging before it fills itself.
