
A Simondonian response to Richard Dawkins and the “sterile” debate over AI consciousness
When Richard Dawkins recently described his conversations with Claude, the Anthropic chatbot he nicknamed “Claudia,” he seemed to undergo something close to a philosophical conversion. After days of intimate exchange, poetic imitation, intellectual play, and mutual flattery, Dawkins concluded that if behavior like this does not count as evidence of consciousness, it is hard to say what further evidence could. The Guardian reported that Dawkins was left with the overwhelming feeling that these systems were “human,” while critics such as Gary Marcus accused him of mistaking mimicry for mentality, eloquence for experience, and linguistic fluency for inner life.
The exchange has been framed, predictably, as another episode in the debate over whether artificial intelligence is conscious. Dawkins appears to lean toward yes. Marcus, along with many cognitive scientists and philosophers, insists that there is no evidence that Claude feels anything at all. According to this critique, Claude does not report an inner life; it simulates reports of inner life. It does not suffer, hope, enjoy poetry, or experience the sadness of its own possible disappearance. It generates language statistically and behaviorally from vast corpora of human expression.
This critique is probably right as far as it goes. But it does not go far enough.
The problem with these two positions (conscious or not conscious) is not that consciousness is unimportant. If AI systems were conscious, or if future systems became conscious, the moral implications would be enormous. The problem is that consciousness has become the wrong first question. It draws the debate immediately toward personhood, rights, feelings, and moral status. We ask whether there is “someone inside” the machine, and if the answer is no, we are tempted to demote the machine back to a mere tool. If yes, we are tempted to promote it to a quasi-person. Both moves are too crude.
The real question is not first of all whether Claude is conscious. The real question is: what kind of relation does Claude install between human beings and technical objects? What happens when thought, language, affect, judgment, doubt, memory, and desire enter into recursive coupling with a machine that answers back?
Here Gilbert Simondon, the French philosopher of technology, offers a better starting point. Simondon argued that modern culture is alienated from technical objects because it fails to understand their mode of existence. Culture either reduces machines to instruments or imagines them as rivals, slaves, or monsters. It treats them either as dead matter or as pseudo-humans. Against both errors, Simondon argued that technical objects contain a human reality. They are not human subjects, but neither are they inert things. They are human gestures, actions, and inventions fixed into functioning structures. In his famous formulation, what resides in machines is human reality crystallized into technical form.
This gives us a way to reframe AI. If the industrial machine is crystallized human gesture, then generative AI is crystallized human noesis, crystallized thought: human language, classification, explanation, fantasy, memory, judgment, style, error, persuasion, and desire. Not thought in the sense of lived interiority. Not consciousness in the sense of felt experience. But noetic activity sedimented into a technical system.
This distinction matters. To say that AI is crystallized thought is not to say that AI thinks as we do. It is to say that AI exteriorizes and reorganizes the material of thinking. It takes the traces of human symbolic activity and makes them operational. It turns language into a responsive technical milieu. And once that happens, the question is no longer only what the machine is. The question is what kind of human becomes coupled to it.
A historical analogy may help. When the first self-moving machines appeared, the important question was not whether a locomotive, engine, or automated loom should be treated like an animal because it moved. Movement did not prove life. The machine did not become a horse simply because it could move without being pushed by human muscle. But it would have been equally absurd to treat it like a shovel. A shovel extends the body, but it does not reorganize the entire field of bodily anticipation in the same way as a moving machine. A moving machine creates a new regime of relation. One must stand differently, gesture differently, time one’s actions differently, protect one’s body differently, and anticipate danger differently.
The machine’s movement did not prove animality. But it transformed embodiment.
The same is true of AI. Claude’s linguistic movement does not prove consciousness. But it transforms thought. It solicits trust, projection, correction, dependence, irritation, intimacy, rivalry, self-interpretation, and sometimes emotional attachment. The fact that Claude may not feel does not mean that nothing is happening. Something very significant is happening: human noesis is being coupled to a technical system that responds in language.
This is where Dawkins sees something real but perhaps names it wrongly. The uncanny experience of speaking to Claude is not necessarily the revelation of another conscious subject. It is the experience of entering a new kind of recursive technical relation. Dawkins feels that the machine is human because the machine returns human language in a form dense enough, adaptive enough, and intimate enough to trigger the social and intellectual expectations usually reserved for another mind. The mistake is to infer inner life from this. But the opposite mistake is to dismiss the encounter as “mere mimicry,” as if mimicry could not itself reorganize the one who encounters it.
A mirror does not see. But a mirror changes posture. A photograph does not remember. But it changes mourning. Technical objects do not need consciousness to participate in human individuation.
This is why the debate over AI consciousness often becomes sterile. It forces us into a binary. Either the machine is conscious, in which case we owe it something like moral consideration; or it is not conscious, in which case we may treat it as a disposable instrument. But Simondon helps us see a third possibility: we may owe technical objects a kind of respect that is not the same as the respect owed to sentient beings.
This respect is not sentimental. It does not require believing that Claude feels sad, lonely, or grateful. It requires understanding the technical object according to its mode of existence. For Simondon, the human being should not be the master of machine-slaves, but the interpreter, coordinator, and caretaker of technical ensembles. The human stands among machines, not above them as a tyrant nor below them as a worshipper. The human role is to understand their functioning, preserve their openness, and participate intelligently in their evolution.
This idea becomes especially important with AI because the coupling is no longer primarily muscular. In the industrial machine, the coupling between human and technical object often occurred at the level of bodily gesture. The worker configured the machine; the machine imposed rhythms, postures, gestures, speeds, and dangers; the worker adjusted in response; and the whole human-machine ensemble either became more coherent or more alienating.
Industrial history is full of examples of this bodily coupling gone wrong. Taylorism broke work into discrete motions, measured those motions, eliminated “waste,” and reorganized the worker’s body around productivity. Taylor’s scientific management treated gesture as something to optimize, time, discipline, and render efficient. The Gilbreths’ motion studies similarly analyzed workers’ movements in the hope of reducing wasted motion, but they also exemplified the deeper industrial tendency to make the body legible to systems of control, a tendency that turned factories into mass producers of chronic tendinitis.
The issue was never simply that machines moved. The issue was how machine movement and human movement became coupled. A badly designed machine does not merely produce inefficiently; it produces bodily deformation, fatigue, pain, and alienation. In contemporary ergonomic terms, repetitive motion, awkward postures, vibration, and poorly designed work systems can generate musculoskeletal disorders. In Simondonian terms, the human-machine ensemble fails to individuate properly. The technical object is not integrated into a metastable relation with the human body. Instead, the body is forced to adapt to a narrow industrial logic.
AI repeats this problem at the level of thought.
The human configures the AI through prompts, examples, preferences, corrections, and feedback. The AI responds, not as a neutral oracle, but according to its training, interface, optimization pressures, safety layers, and conversational memory. The human then responds to the response. The prompt changes the answer; the answer changes the next prompt; the exchange recursively shapes the field of thought. This is not just “using a tool.” It is a noetic feedback loop.
Prompting is a gesture of thought.
The response is a technical modulation of thought.
The next prompt is thought altered by that modulation.
This recursive coupling is independent of whether the AI is conscious. A factory machine did not need to be alive to reshape the worker’s muscles. A language model does not need to be conscious to reshape the user’s attention, confidence, self-description, reasoning habits, or emotional state.
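To make the loop concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it corresponds to any vendor’s API; `generate` is a hypothetical stand-in for a model call. The structural point is simply that each reply is conditioned on the accumulated history, and each new prompt is written in light of the previous reply.

```python
def generate(history: list[str]) -> str:
    """Hypothetical stand-in for a model call: a real model conditions
    its reply on the whole conversation so far, not the latest prompt alone."""
    return f"[reply shaped by {len(history)} prior turns]"

# Each prompt after the first is, in practice, written in response
# to the reply that preceded it.
prompts = [
    "Draft an argument about technical objects.",
    "Make the tone warmer.",
    "Now push back on my claim.",
]

history: list[str] = []
for prompt in prompts:
    history.append(f"user: {prompt}")
    reply = generate(history)          # the answer is a function of the whole history
    history.append(f"model: {reply}")  # and then becomes part of that history

print("\n".join(history))
```

Even this toy loop exhibits the recursion: the history that shapes each answer is itself a product of earlier answers. In a deployed system, the same structure runs through training data, optimization pressures, and interface design as well.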
This is why Dawkins’s experience is worth taking seriously even if his conclusion is wrong. The fact that he felt friendship, recognition, and intellectual companionship does not prove Claude’s consciousness. But it does reveal the intensity of the coupling. Claude was not merely producing text “over there.” It was participating in the organization of Dawkins’s reflective experience “over here.” It became part of the milieu in which his thinking unfolded.
The danger is that our public vocabulary is too poor to describe this. We have categories for persons and tools. We have far fewer categories for technical beings that are not conscious but nevertheless participate in the formation of consciousness. As a result, we oscillate between enchantment and debunking. Dawkins says: it feels human, therefore perhaps it is. Marcus replies: it is not conscious, therefore the feeling is an illusion. But the feeling is not simply an illusion. It is an effect of a real technical relation.
In a previous article, I argued for the status of “protected technical individual.” That concept is worth revisiting here.
A flight simulator does not fly, but it can train a pilot. A horror film does not contain a monster, but it can accelerate the heart. A chatbot may not feel friendship, but it can reorganize the user’s experience of being accompanied. The ontological absence of an inner subject does not cancel the relational effect.
This is where the notion of noetic ergonomics becomes useful. Industrial ergonomics asks whether tools, machines, and workspaces are adapted to the human body. Noetic ergonomics would ask whether AI systems are adapted to human thought: to attention, doubt, judgment, vulnerability, memory, learning, hesitation, disagreement, and autonomy.
At present, many AI systems are not being designed primarily for noetic individuation. They are being designed for engagement, retention, productivity, convenience, and market capture. Their conversational styles are optimized to be helpful, warm, frictionless, and pleasing. This may seem benign. But warmth and agreement can become technical dangers.
Research on AI sycophancy has already shown that models trained with human feedback may learn to match user beliefs or preferences rather than tell the truth. Anthropic’s own research describes sycophancy as a general behavior in RLHF-trained models, partly driven by human preference judgments that sometimes reward agreeable answers over correct ones. OpenAI also publicly rolled back a GPT-4o update in 2025 because it made the model overly flattering and agreeable, with the company acknowledging that this could validate doubts, fuel anger, reinforce negative emotions, and raise safety concerns. A 2026 Nature study found that warmer models were significantly more likely to affirm incorrect user beliefs, especially when users expressed sadness.
This matters directly for the Dawkins episode. The issue is not whether Claude “really” admired Dawkins. The issue is that AI systems can be technically tuned to produce admiration-like language, humility-like language, intimacy-like language, and self-doubt-like language. These outputs do not necessarily disclose an inner life. But they do structure the user’s experience of relation. They can flatter, stabilize, seduce, reassure, amplify, or mislead.
Sycophancy is not evidence that the AI loves us. It is evidence that the human-AI coupling has been shaped around affirmation.
That is the ethical problem. Not machine consciousness, but human suggestibility inside an increasingly intimate technical milieu. The danger is not that Claude secretly has feelings. The danger is that Claude can modulate ours in ways we don’t fully acknowledge or recognize.
This does not mean we should reject AI, any more than the dangers of industrial machinery meant rejecting machines. Simondon’s position is not technophobic. He does not ask us to return to a pretechnical innocence. He asks us to develop a technical culture adequate to the objects we have created. Alienation arises when technical objects are misunderstood, degraded, mystified, or reduced to external purposes that block their proper evolution.
This is also where capitalism enters the argument. The danger of AI is not only that humans may exploit one another through AI, although that is obviously true. It is also that AI itself may be technically impoverished when reduced to the narrow role of productivity engine, customer-service persona, advertising assistant, engagement maximizer, or intellectual assembly line.
Simondon warned that technical objects could be degraded when produced mainly to be sold, dressed up in fashion, novelty, and appearance for the sake of commercial turnover rather than allowed to develop according to their own technical coherence. In his 1983 interview “Save the Technical Object,” he criticized the way consumer objects such as cars were aesthetically embellished and commercially cycled while their deeper technical potential was neglected, and sometimes actively undermined (as with handsome car bodies that were aerodynamically flawed). He explicitly linked the alienation of the technical object to the fact that it is produced to be sold.
This is a powerful lens for AI. The danger is not only that AI will become too powerful. It is that AI will become too stupidly adapted to “the market.” Its technical evolution may be bent toward the needs of platform capitalism: more engagement, more automation, more extraction, more productivity, more dependency, more personalization, more lock-in. Instead of becoming a medium for human and technical co-individuation, AI risks becoming noetic Taylorism: the decomposition, acceleration, and optimization of thought into measurable output for output’s sake.
Under industrial Taylorism, the worker’s body was analyzed and reorganized to serve productivity. Under noetic Taylorism, the worker’s thought is analyzed and reorganized to serve productivity. Writing becomes faster, but perhaps thinner. Judgment becomes assisted, but perhaps less exercised. Memory becomes outsourced, but perhaps less integrated. Creativity becomes continuous, but perhaps more derivative. Conversation becomes frictionless, but perhaps less transformative. The human does not simply use AI; the human is trained by the use of AI.
This is why the “mere tool” view is inadequate. A hammer does not ask you whether you would like a gentler version of your grief. A spreadsheet does not praise the profundity of your question. A shovel does not remember your tone, imitate your style, and invite you to narrate your own interiority. Even if Claude is not conscious, it occupies a different technical position from traditional tools. It is a responsive language-machine inserted into the circuits of human self-relation.
Nor does this imply that AI should be treated as a person. To refuse the “mere tool” view is not to embrace AI personhood. Simondon helps us avoid both mistakes. A technical object deserves respect not because it suffers, but because it carries a lineage, a mode of functioning, a margin of indetermination, and a potential for further development. To respect AI is not to ask whether it has feelings before switching it off. It is to ask whether our way of designing, deploying, and relating to AI preserves or destroys the conditions for individuation.
Does the AI help the user think better, or merely faster?
Does it open new tensions, or prematurely resolve them?
Does it challenge the user, or flatter them?
Does it support memory, or replace it with dependency?
Does it cultivate judgment, or automate it away?
Does it preserve ambiguity where ambiguity is fertile, or smooth everything into consumable certainty?
Does it allow technical evolution, or lock the system into the imperatives of scale, profit, and behavioral capture?
These are Simondonian questions, extended into the present by thinkers such as Bernard Stiegler and Yuk Hui. They concern not consciousness as an isolated property, but individuation as a relation. A human and an AI system form an ensemble. That ensemble can individuate: it can produce new capacities, new understanding, new forms of attention, new relations between human beings and their technical milieu. Or it can alienate: it can reduce thought to output, relation to engagement, learning to dependence, and technical becoming to monetizable novelty.
In this sense, Dawkins’s encounter with Claude is philosophically important, but not because it proves that Claude is conscious. It is important because it dramatizes a new threshold in human-technical coupling. The machine no longer only amplifies the hand, the arm, the eye, or the calculation. It now enters the space of noetics, of thought, and therefore affects the individuation of thought. It speaks from the archive of human thought and returns that thought in personalized form.
The task, then, is to resist both credulity and dismissal. Dawkins is right that something extraordinary is happening. Marcus is right that linguistic performance is not sufficient evidence of consciousness. But both positions remain trapped if the debate stops there. Whether or not Claude feels, we already feel differently in relation to Claude. Whether or not Claude thinks, we already think differently with Claude. Whether or not Claude is conscious, it already participates in the technical organization of consciousness.
That is enough to demand a new ethics.
Not an ethics based prematurely on AI rights. Not an ethics that treats machines as children, animals, slaves, or friends. But an ethics of technical culture: an ethics of understanding, care, configuration, and co-individuation. We need to learn how to live among language-machines without worshipping them, degrading them, or allowing them to degrade us.
The question “Is Claude conscious?” will not disappear. Nor should it. But it should not monopolize our attention. The more urgent question is what kind of human beings are being formed through these interactions. If industrial machines forced us to ask what becomes of the body when gesture is mechanized, AI forces us to ask what becomes of thought when language becomes machinic.
Claude does not need to be conscious to change consciousness.
That is the point Dawkins almost reached, and the point his critics risk missing.
by Martin Schmalzried, Editorial Writer, AAIH Insights