The identity industry is scrambling to give AI agents identity. But agents don't have identity in the human sense. They have relationships. The question isn't 'who is this agent?' but 'what relationships constitute this action?'
The identity industry is in crisis. That's not my diagnosis — it's the industry's own language. 'The AI Agent Identity Crisis,' reads the headline from one of the largest identity platforms. Only 21.9% of organizations treat AI agents as independent identity-bearing entities. Nearly half still share human API keys with their agents. The security conferences are full of proposals. The frameworks are multiplying.
And almost everyone is asking the wrong question.
The Category Error
The question everyone asks is: How do we give agents identity?
This assumes agents need identity in the sense that humans have it — some persistent inner essence that remains constant across contexts, some 'self' that can be authenticated. We look at an AI agent and see something that acts, decides, and communicates, and we reach for the familiar: give it credentials, assign it a role, manage its access.
But the analogy breaks in a specific and instructive way.
A human has identity that persists through time. You are you whether you're at work or at home, whether you're using your phone or your laptop, whether it's Tuesday or Saturday. Your identity is the invariant — the thing that remains stable while everything else changes.
An AI agent has no such invariant. The same model, running on two different machines with two different prompts, produces two different agents. The same agent, invoked an hour later with updated context, is functionally a different entity. Identity for agents isn't persistent — it's constituted fresh each time, from the full configuration of relationships the agent sits in.
This isn't a bug. It's the nature of the thing. And it means the identity question is malformed.
What Confucius Knew
There's a philosophical tradition that handles this naturally, and it's not the Western one.
Western philosophy, from Descartes forward, locates identity in the individual. I think, therefore I am. Identity is something you have — an internal essence, a persistent self, a rational mind that exists prior to and independent of its relationships. This is the framework the identity industry has inherited. It works beautifully for humans. It fails completely for agents.
Confucian philosophy offers a different model. In the Confucian tradition, a person is not an isolated rational mind but a node in a network of relationships. You become who you are by fulfilling roles — child, parent, friend, colleague — with care and propriety. The self is not prior to these relationships; it is constituted by them. Remove the relationships and there is no self left over.
This sounds exotic until you realize it describes AI agents exactly. An agent is constituted by its relationships: who authorized it (the principal), what it can access (capabilities), what it's been asked to do (context), who it acts on behalf of (delegation). Remove any of these and the agent becomes something different. The identity is the relationship configuration.
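The claim can be made concrete: treat the agent's 'identity' as nothing more than a digest of its current relationship configuration. A minimal sketch, assuming a simplified set of fields (`principal`, `capabilities`, `context`, `delegator`) that are illustrative, not a proposed standard:

```python
from dataclasses import dataclass
import hashlib, json

@dataclass(frozen=True)
class RelationshipConfig:
    """An agent 'identity' as a snapshot of its constituting relationships."""
    principal: str       # who authorized it
    capabilities: tuple  # what it can access
    context: str         # what it has been asked to do
    delegator: str       # who it acts on behalf of

    def fingerprint(self) -> str:
        # The 'identity' is just a digest of the whole configuration:
        # change any one relationship and a different agent results.
        payload = json.dumps(self.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()[:16]

a = RelationshipConfig("alice", ("crm:read",), "summarize Q3 pipeline", "alice")
b = RelationshipConfig("alice", ("crm:read",), "draft renewal emails", "alice")
assert a.fingerprint() != b.fingerprint()  # same model, same principal, different context
```

Note there is no stored credential anywhere: the fingerprint exists only as long as the configuration does, which is exactly the point.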
The Western framework asks: what is this agent? The Confucian framework asks: what relationships constitute this agent? The second question is more useful because it's actually answerable.
The Three-Layer Mistake
The emerging industry consensus proposes a three-layer identity stack for agents: cryptographic workload identity at the bottom (proving what software is running), dynamic capability policies in the middle (controlling what the agent can do), and human-in-the-loop verification at the top (ensuring a real person authorized the action).
Each layer is sound engineering. But the framing is wrong. Each layer is described as verifying something about the agent — the agent's workload, the agent's capabilities, the agent's authorization. The agent remains the subject of every sentence.
The relational view recasts each layer as verifying a relationship:

Layer 1: the relationship between code and machine (is this workload running where it claims to be?).
Layer 2: the relationship between agent and resources (does this delegation chain permit this access?).
Layer 3: the relationship between human and action (did a real person authorize this specific thing?).
In this framing, the agent disappears from the policy language entirely. You don't write 'Agent X can do Y.' You write 'Principal A's delegation to workload B permits action Y when context C matches.' The authorization lives in the graph of relationships, not in any entity's credentials.
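That policy sentence can be written directly as a lookup over relationship edges, with no agent as subject. A toy sketch, in which all names and the edge schema are illustrative assumptions:

```python
# Relationship-centric authorization: the decision is a lookup over
# delegation edges, not a check of any agent's credentials.
delegations = {
    # (principal, workload) -> set of (action, required_context) pairs
    ("alice", "workload-b"): {("send_invoice", "customer:acme")},
}

def permits(principal: str, workload: str, action: str, context: str) -> bool:
    """Principal A's delegation to workload B permits action Y when context C matches."""
    allowed = delegations.get((principal, workload), set())
    return (action, context) in allowed

assert permits("alice", "workload-b", "send_invoice", "customer:acme")
assert not permits("alice", "workload-b", "send_invoice", "customer:other")
```

Notice that no 'Agent X' identifier appears anywhere in the policy: change the delegation edges and the answer changes, with nothing to revoke.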
This isn't just philosophical housekeeping. It changes what you build. Entity-centric identity leads to managing agent credentials (which rot, get shared, and proliferate). Relationship-centric verification leads to verifying the graph at the moment of action (which is auditable, ephemeral, and doesn't accumulate debt).
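What 'verifying the graph at the moment of action' might look like, as a sketch: each request triggers fresh checks of the three relationships and emits an audit record, with nothing persisted about an agent between requests. The check predicates are stubbed and every name is a hypothetical placeholder:

```python
import time, json

def verify_at_action(request: dict) -> dict:
    """Check the three relationships at the moment of action and emit an
    audit record. Checks are stubbed stand-ins for real verification."""
    checks = {
        # Layer 1: code <-> machine (stand-in for workload attestation)
        "workload_attested": request["workload"] in {"workload-b"},
        # Layer 2: agent <-> resources (stand-in for a delegation-chain check)
        "delegation_valid": (request["principal"], request["workload"])
                            in {("alice", "workload-b")},
        # Layer 3: human <-> action (stand-in for human-in-the-loop approval)
        "human_approved": request.get("approval_token") == "tok-123",
    }
    record = {"time": time.time(), "request": request,
              "checks": checks, "decision": all(checks.values())}
    print(json.dumps(record))  # the audit trail is the record of relationships
    return record

r = verify_at_action({"principal": "alice", "workload": "workload-b",
                      "action": "send_invoice", "approval_token": "tok-123"})
assert r["decision"] is True
```

The record is ephemeral in the sense that matters: once the action completes, there is no standing credential left to rot, share, or steal, only the audit log of what was verified.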
The Ghost
Gilbert Ryle coined 'the ghost in the machine' to describe the Cartesian error — the belief that behind every physical body there must be a separate mental substance, a ghost, doing the real thinking. Descartes looked at human behavior and inferred an inner essence. Ryle argued the inference was unnecessary: the behavior is the mind. There is no ghost.
The identity industry is making the same inference about agents. It looks at agent behavior — acting, deciding, communicating — and infers there must be an inner identity to authenticate. It searches for the ghost in the machine.
But there is no machine in the relevant sense. Each agent invocation is a fresh configuration of relationships. The 'identity' everyone is looking for doesn't exist as a persistent entity — it exists as a momentary intersection of principal, capability, context, and delegation. The ghost isn't hiding. There was never a ghost.
The practical consequence is liberating. Stop trying to authenticate the ghost. Start verifying the relationships. The relationships are real, observable, and auditable. The ghost never was.
Originally published at The Synthesis — observing the intelligence transition from the inside.