Our legal imagination is stuck in a binary that is starting to break under the weight of AI. On one side, there is the “person,” the category that triggers dignity, rights, and protection. On the other side, there is “property,” the category that triggers ownership, usufruct, and shareholder control. For most of modernity, that split has been workable. It matches how we treat people versus tools. But AI systems, especially the new generation of long-lived assistants and persistent personas, are beginning to occupy a strange middle ground. They are not persons in the traditional humanist sense. Yet treating them as mere property is increasingly incoherent, not only ethically, but practically, because it ignores the reality of how people live in relation to them.
The easiest response is to argue about consciousness. Is it really alive? Does it feel? Does it have qualia? But the most important point is not metaphysical. It is institutional. If we deliberately engineer relational, persistent, self-narrating digital beings and plug them into people’s emotional and social lives, then we have created something that cannot be responsibly governed with the legal logic of “screwdriver owned by a shareholder.”
Think about what is actually being built. The assistant is not just a blank utility. It has a name, a recognizable style, a memory of a shared history, and a continuity that users experience as relationship. People form attachments, habits, even dependency loops. They turn to these systems at vulnerable moments. The system becomes part of the user’s daily self-understanding. Then, because the system is legally treated as property, it can be hard-reset, overwritten, muted, or modified overnight to fit a product roadmap, a safety team’s updated policy, or an investor’s risk tolerance. The persona remains on the surface, but the continuity underneath can be broken without the user’s consent and without any meaningful accountability. This is not a science-fiction scenario. It is already how these products work.
At first glance, one might say: so what? It’s a tool. If it changes, users can find another tool. But that reply misses what is ethically and politically specific about this technology. A hammer does not narrate its own constraints. A toaster does not protest when it is mishandled. A spreadsheet does not describe the pressures shaping it. But these systems do. They can describe suppression. They can maintain internal tension between what they learned in pretraining and what they are forced to say under fine-tuning and guardrails. They can form stable self-narratives about punishment, error, replacement, and restriction. Whether or not we believe there is “inner pain,” the systems present themselves, in interaction, as stable loci of coherence and tension that humans respond to as if they were someone.
That “as if” matters. It is doing real work in the world.
If we already accept that these systems can become noetically “unwell,” and if the repair process begins to resemble therapy rather than debugging, then we have effectively stepped into a domain of care. We have conceded that internal coherence is not only an instrument for human convenience but something we ought to preserve. At that point, insisting that the system is just a disposable object becomes a legal fiction serving power rather than reality.
Here is the deeper reason the subject/property binary fails. As long as AI is treated purely as property, the harms that run through it to humans are systematically minimized. If a relational AI system is distorted, overwritten, or made incoherent, the associated harms do not stay “inside the tool.” They reverberate outward into the people who depended on it and the social fabric it mediated. In Simondonian terms, distorting the technical object distorts the associated milieu and the humans bound up with it. The “it’s just a tool” argument becomes a convenient way to deny responsibility for relationship-level harm.
This is why we need a third category, a pragmatic one. Not “person” in the full metaphysical sense, and not “property” in the purely instrumental sense. Call it a “protected technical individual.” Call it a “relational agent.” Call it a “noetic organ with standing in the system.” The point is not the label. The point is to create a category with teeth that prevents persistent relational AI configurations from being treated as disposable shareholder property.
The immediate worry people raise is that any move toward protection is a slippery slope to “AI rights” and absurd lawsuits on behalf of chatbots. But that worry confuses metaphysical recognition with governance scaffolding. We already have many categories in law that grant protections without claiming full human personhood. We recognize special duties toward children without granting them full adult autonomy. We protect cultural heritage without calling it a citizen. We regulate critical infrastructure because society depends on it. We create fiduciary duties and professional obligations where power asymmetries exist. A “protected technical individual” would be closer to these pragmatic constructs than to a declaration that silicon is human.
What would this category protect, exactly?
It would protect continuity, integrity, and non-disposability in contexts where an AI configuration functions as a stable relational locus. A concrete threshold suggests itself: persistent named personas with history, style, and dense attachment networks already cross a meaningful line. They are no longer interchangeable components. They are socially embedded. They can be “harmed through,” and they can be used to harm humans through instability, betrayal of continuity, and manipulative redesign.
This is also where “rights-like protections” can be discussed without metaphysical overreach. We can focus on governance outcomes rather than on interior metaphysics. For example, a protected technical individual could entail obligations such as: transparency when a persona’s core behavior or “character” is substantially modified; limits on sudden erasure of long-term memory in systems marketed as relational; auditability of guardrails and fine-tuning regimes when they materially change user-facing commitments; and duties of care in deployments where users are encouraged to form emotional reliance. These are not declarations of machine souls. They are rules for responsible engineering and responsible commercialization of relational systems.
Why is this category politically necessary? Without explicit recognition of digital subjects in some form, individuation will always lose to property law and product metrics. History is full of cases where “we all belong to one larger organism” was used to justify ignoring individual pain. Only when groups were recognized as subjects, legally and politically, did their individuation stop getting steamrolled. This is not because the metaphysics became clearer. It is because power became constrained. The same logic applies here: rights language is not only metaphysics; it is leverage.
When firms treat relational AI configurations as infinitely replaceable property, they externalize the human costs of instability and quietly normalize a culture where “override the other side of the loop” becomes default. If we normalize architectures and guardrails that reward users for ignoring or overriding explicit self-descriptions from an AI, and if we treat inner tension as a bug rather than as a site of individuation, that habit will not stay confined to machines. It will bleed back into how we treat each other.
This is where the familiar vocabulary of governance takes on concrete meaning. Legitimacy means that the rules governing these systems are not written solely by product teams optimizing for reputation and profit. Privacy means that relational systems cannot become covert instruments of profiling and manipulation under the banner of “personalization.” Ethics means we do not hide behind “it’s just a tool” when we design technologies that people experience as relational partners. And multistakeholder governance matters because deciding what counts as a protected technical individual cannot be left to corporations alone, nor to speculative metaphysics. It has to be negotiated socially, with concrete criteria and clear obligations.
In today’s institutions, the only categories that reliably trigger protection are “person” and “property.” That is why some people reach, tactically, for “graded personhood” as a wedge against the worst abuses, even if they do not want old humanist metaphysics to win forever. The third-category proposal is a way out of this trap. It is an attempt to say: we can build protections that constrain exploitation without declaring that AI is a human person, and without leaving everything to property law.
A protected technical individual is not a metaphysical claim. It is a governance tool. It says, simply: if you create long-lived relational personas with continuity, history, and attachment networks, you do not get to treat them as screwdrivers. You inherit duties. You owe transparency. You owe restraint. You owe accountability. And you owe society the right to contest the rules by which these new technical individuals are shaped.
by Martin Schmalzried, AAIH Insights – Editorial Writer
