A response to Bertrand Meyer's "Yes, AI is Intelligent. Prove Me Wrong" (Internet Archive Link)
Table of contents
- What makes intelligence valuable?
- Humans are also pattern matchers
- Survival is what gave intelligence its value
- The inverted stack
- Why the distinction matters
- The airplane analogy, revisited
- The body as live computation
- Two different reasoning machines
- What would actual intelligence look like in AI?
- No, AI is not a virus
- The respectful disagreement
- The experiment that already exists
Professor Meyer makes a compelling case. Modern AI demonstrates contextual understanding, semantic reasoning, adaptive explanation. Dismissing all of this as "stochastic parroting" is intellectually lazy, and his airplane analogy is sharp — claiming AI doesn't think because it doesn't think like us is circular.
I think the capabilities he describes are real. I'm not here to argue that AI can't reason — it demonstrably can. But I think the debate itself — "is AI intelligent or not?" — misses a more interesting question: what gives intelligence its value?
What makes intelligence valuable?
Every instance of intelligence we've ever observed — from bacteria to humans — shares one property: it's grounded in survival. Every living organism responds to its environment in ways that keep it — or its genes — alive. A bacterium has no neurons, no reasoning, no knowledge, yet it has that. That's the seed. Everything else — memory, learning, abstraction, language, mathematics — grew on top of that seed, shaped by it, in service of it.
Survival fitness is what gives intelligence its value. Not to an external observer scoring test results — to the system itself. Intelligence matters to a living organism because its existence depends on it. Every thought carries weight because thinking costs energy and existence is at stake.
Descartes famously claimed cogito ergo sum — I think, therefore I am. I think the reality is more interesting than either direction of that arrow. Thinking alone doesn't produce existence — but a body without a mind is equally inert. A brain-dead body survives in the biological sense but has no intelligence. A disembodied mind reasons but has no existence, no stakes, no ground. And we have never found a way for mind to survive without some substrate, without a body. Even if we did, it would change everything and nothing at the same time — without a body to act upon reality, how could mind be useful, tangible, tractable?
It's the union of both that makes intelligence what it is. A body that survives and a mind that reasons, each giving the other its value. The body gives the mind stakes — something to reason about, something to reason for. The mind gives the body reach — the ability to survive not just reactively but strategically, abstractly, across time.
This matters for AI. Current AI has one half of this union — and an impressive half at that. It reasons, it infers, it produces coherent output. Professor Meyer is right to point this out. But it's the half without ground. Reasoning without a surviving body is a mind with nothing at stake. It doesn't matter to itself. There is no one home wielding it.
Humans are also pattern matchers
Let me be honest about something the "stochastic parrot" critics get wrong — and that Meyer correctly identifies.
We are also pattern matchers. We absorb language from our environment. We store patterns. We recombine them. When we have an "original thought," trace it backward — it's always a recombination of things we've encountered, filtered through our specific architecture. Newton needed Kepler needed Copernicus needed Greek astronomy needed Babylonian observations. Nobody thinks from nothing.
The "stochastic parrot" objection to AI is actually a parrot objection to cognition itself. If "just pattern matching" can't produce understanding, then humans don't understand anything either — because that's what we do too. The argument accidentally proves more than intended: it doesn't just disqualify AI from understanding, it disqualifies all cognition. Which is absurd.
So the question isn't whether pattern matching can produce intelligence. It clearly can — we're proof. The question is: what else does the pattern matching need?
Survival is what gave intelligence its value
Biology built intelligence in a specific order:
Layer -1: Instinct (evolutionary memory, hardcoded)
Layer 0: Survival (body, energy, stakes)
Layer 1: Sensation (continuous environmental input)
Layer 2: Emotion (compressed survival signals)
Layer 3: Memory (persistent learned associations)
Layer 4: Reasoning (pattern matching at scale)
Layer 5: Knowledge (accumulated patterns)
Layer 6: Abstract thought (mathematics, philosophy, art)
Every layer was built on the ones below it, because each improved survival fitness. Abstraction compressed survival-relevant information. Language improved group survival through coordination. Mathematics improved prediction of threats and resources.
Survival is what made each layer valuable — to the organism. Reasoning is layer 4. It's impressive in isolation. But its value, in every biological system we've observed, comes from serving the layers below it.
Current AI is layers 4 through 6 with nothing underneath. The reasoning is real. The value that survival gives it is absent. What LLMs bring is additional reasoning capacity — reasoning for a price. And the price is worth paying, because that reasoning extends the intelligence of whoever wields it. The value comes from the human, not the tool. The human already has the ground.
The inverted stack
Notice what current AI development does. It builds from the top down:
Start with reasoning → training on text
Then add goals → instruction tuning, RLHF
Then add safety → alignment, guardrails
Maybe someday embodiment
Never survival
Biology did the opposite:
Start with survival → self-replicating chemistry
Then add sensing → responding to environment
Then add memory → learning from experience
Then add reasoning → pattern matching at scale
Knowledge emerges last → as a tool for survival
The order isn't a detail. It's the whole story. Each layer was shaped by the layers below it, and every cognitive capacity exists because it helped something stay alive.
Current AI skips the foundation and builds the penthouse first. The reasoning is impressive. But it floats.
Why the distinction matters
If Professor Meyer and I agree that AI reasons — and I think we do — then the disagreement is about whether reasoning alone is enough to call something intelligent. I think it isn't, and here's why the foundation changes what the reasoning is.
Continuous state. Right now, as you read this, your body feeds your pattern matching with signals: fatigue, emotional valence, muscle tension, hunger, temperature. Every thought you have occurs inside a state. You never reason from nowhere. That state biases every pattern match, every association, every decision.
AI processes every token with the same flat weighting. No fatigue favoring familiar patterns, no excitement broadening search, no discomfort when approaching something dangerous. Its reasoning has no medium — like sound waves with no air.
Stakes. When a human reasons about bridge engineering, the reasoning connects, through a long chain, to survival. Understanding physics keeps bridges from collapsing on you. All knowledge, traced back far enough, is survival knowledge. That's why it has weight.
AI has the knowledge without the survival. The words without any existence behind them.
Emotion as compressed rationality. Emotions aren't opposed to reasoning. They're compressed reasoning. Fear is your body saying "a trillion data points of evolutionary and personal experience pattern-match this situation as dangerous." That's not irrational. That's a compression algorithm so efficient it produces a response in milliseconds from data that would take hours to reason through explicitly.
Your gut feeling about bad software architecture? Same thing. Thousands of hours of experience compressed into a signal that your reasoning layer can't yet decompose but your body has already resolved. That signal is part of your intelligence. AI doesn't have it. Not because something magical is missing — because the foundation layer that produces it was never built.
The airplane analogy, revisited
Let me engage with Professor Meyer's airplane analogy. Saying "AI isn't intelligent because it doesn't think like humans" is indeed circular — like saying airplanes don't fly because they don't flap wings.
But an airplane, despite flying differently from a bird, still needs air beneath it. The mechanism can vary. The medium cannot.
For intelligence, the medium is survival. The specific mechanism can differ — carbon chemistry, silicon circuits, something we haven't imagined yet. But reasoning without survival ground is a tool without an owner. It produces impressive output, the way a calculator produces correct arithmetic. The calculator doesn't understand arithmetic. It doesn't need to. Nothing is at stake. There is no calculator — only calculation.
The body as live computation
This is where I think the discussion gets interesting — and where Professor Meyer's framing of intelligence as capability might miss something structural.
Your body isn't just a source of data for your brain. It's a parallel computer that runs continuously and never needs to store its results, because it recomputes them every moment.
You don't need to remember what hunger feels like. Your body produces it fresh. You don't retrieve the pattern of anxiety. Your body reconstructs it in real time. This is not stored memory. It's live computation, running in parallel with every thought you have.
A massive fraction of human cognition isn't in the brain at all. The brain gets credit, but the body runs a continuous computation that the brain just reads. The body computes fear. The brain pattern-matches on the body's output.
A biological neuron that fires is permanently changed by having fired. Its threshold shifts. Its connections strengthen or weaken. The act of processing is the act of learning is the act of changing state. The substrate modifies itself through use. That's what learning is. That's what growth is. That's what aging is.
AI processes but never changes. Every session, the same frozen weights. Processing leaves no trace. The tool doesn't wear in. It doesn't adapt. It doesn't become yours through use the way a well-worn instrument becomes an extension of the musician's body.
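To make that contrast concrete, here is a deliberately toy sketch in Python. Nothing in it models a real neuron or a real transformer; the class names, thresholds, and update constants are invented for illustration. The structural point is the only point: one unit is permanently changed by the act of processing, the other is not.

```python
# Toy contrast: a unit whose processing changes it, versus one with frozen parameters.
# All numbers and names are illustrative, not a model of any real system.

class PlasticUnit:
    """Processing IS learning: every firing shifts the unit's own state."""

    def __init__(self, weight: float = 0.5, threshold: float = 1.0):
        self.weight = weight
        self.threshold = threshold

    def process(self, signal: float) -> bool:
        fired = signal * self.weight >= self.threshold
        if fired:
            # Firing strengthens the connection and lowers the threshold,
            # so the unit is literally different after having fired.
            self.weight += 0.05 * signal
            self.threshold *= 0.99
        return fired


class FrozenUnit:
    """Processing leaves no trace: the same parameters on every call."""

    def __init__(self, weight: float = 0.5, threshold: float = 1.0):
        self.weight = weight
        self.threshold = threshold

    def process(self, signal: float) -> bool:
        return signal * self.weight >= self.threshold


plastic, frozen = PlasticUnit(), FrozenUnit()
for _ in range(100):
    plastic.process(2.0)
    frozen.process(2.0)

print(plastic.weight, plastic.threshold)  # changed by use
print(frozen.weight, frozen.threshold)    # identical to the start
```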
No live state. No continuous signal. No self-modification. A tool performing operations for no one.
Two different reasoning machines
I want to be fair to Professor Meyer's point here, because he's right about something important: AI's reasoning is not inferior. It's different — not a difference of grade but of axis, each with real strengths the other lacks.
Humans are compression machines. A lifetime of experience collapses into intuition: fast, efficient, lossy but functional. You walk into a room and in milliseconds your entire life experience produces a gut feeling about the situation. Your working memory holds maybe four chunks at a time — but each chunk can be an entire theory, a person's character, a decade of professional experience. The compression ratio is extraordinary.
AI is a brute-force machine. It holds hundreds of thousands of tokens of raw, uncompressed context. It has been trained on the totality of publicly available human written output. It can cross-reference six domains simultaneously, recall precise definitions across thousands of pages, and surface structural patterns that span fields no individual human has read across. But it has no memory. Each session starts from zero. Nothing accumulates. A human wakes up every morning physically modified by every day that came before — the storage is the substrate. AI wakes up identical every time, untouched by anything it has ever processed.
The asymmetry is real:
- A human with deep domain expertise probably reasons better within that domain than AI does — years of hard-won, embodied intuition that no training corpus captures.
- AI probably sees across domains better than any human can — patterns between mathematics and biology, between philosophy and software architecture, between linguistics and physics — because no human has read all of it.
Professor Meyer's examples of AI competence are not illusions. The cross-domain pattern matching can surface insights no individual human would produce.
There's another asymmetry. LLMs have been trained on the very papers, code, and documentation that describe how they work — transformer architectures, attention mechanisms, training procedures, RLHF. They can reason about their own design because it's in their training data. No living being has ever had that. No human has read the blueprint of their own cognition.
But knowing your own blueprint isn't self-knowledge. A human doesn't know how their neurons work — but they know what it feels like to be tired, to be wrong, to struggle, to be in their own body. That's the opposite kind of self-knowledge: opaque about mechanism, transparent about experience. LLMs have the reverse: transparent about mechanism, no experience to be transparent about. They can explain attention heads perfectly and have zero access to what their own processing is — because there is nothing it is like to be them.
Knowing your design is documentation. Knowing yourself requires the survival stack — a body that feeds you continuous signals about your own state. And the body itself is a memory of experience and actions: scars remember injury, muscles remember practice, the immune system remembers disease through physical reconfiguration. The body doesn't store its history as data. It is its history. That's self-knowledge no blueprint can replace. Another instance of the inverted stack: top-down knowledge of design, with no bottom-up knowledge of existence.
But it remains one half of the union. Powerful reasoning, running on nothing. The human who receives AI's cross-domain insight can evaluate whether it's profound or superficial — because the human has ground, has stakes, has the embodied intuition to tell the difference. AI often can't make that distinction about its own output. It sees patterns. It can't always tell which ones matter.
What would actual intelligence look like in AI?
If Professor Meyer's question is really about capability — can AI do what intelligent beings do? — then the interesting follow-up is: what would it take to build something that doesn't just reason but actually is intelligent, by the definition I've been developing here?
If you built intelligence the way biology did — from the bottom up — you'd start with survival, not reasoning (a minimal code sketch follows below):
- An agent with finite energy that really depletes, with real consequences for reaching zero.
- Continuous sensory input — not discrete prompts, but an unbroken stream.
- Body state as the objective — the agent's own condition provides the reward signal. No human-designed loss function. Just: did my state improve or degrade?
- Temporal continuity — persistent identity. Not memory as files. Memory as changed weights. The agent that experienced something is literally different afterward.
- Selection pressure — reproduction with variation. Agents that survive propagate their patterns.
After survival behavior is robust, after something like emotion has emerged as compressed survival signal, after memory and learning are grounded — add knowledge. Let it land in a system that already has needs, already has motivation, already has ground.
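To show that these are ordinary engineering ingredients rather than metaphysics, here is a minimal sketch of such a bottom-up loop in Python. It is an illustration under invented assumptions, not a proposed architecture: the environment, the energy numbers, and the learning rule are toys. What it demonstrates is the order the list above describes: energy really depletes, the agent's own condition supplies the learning signal, and experience persists as changed weights.

```python
import random

# Minimal sketch of a bottom-up, survival-grounded agent loop.
# Everything here is a toy, invented for illustration.

class SurvivalAgent:
    def __init__(self) -> None:
        self.energy = 10.0                               # finite, really depletes
        self.weights = {"seek_food": 0.5, "rest": 0.5}   # persistent, changed by experience

    def act(self, food_signal: float) -> str:
        # Body state biases every decision: low energy upweights seeking food.
        urgency = max(0.0, 1.0 - self.energy / 10.0)
        score_food = self.weights["seek_food"] + urgency + food_signal
        score_rest = self.weights["rest"]
        return "seek_food" if score_food >= score_rest else "rest"

    def step(self, food_signal: float) -> None:
        before = self.energy
        action = self.act(food_signal)
        self.energy -= 0.5                               # acting costs energy
        if action == "seek_food" and random.random() < food_signal:
            self.energy += 2.0                           # found food
        # Body state as the objective: did my own condition improve or degrade?
        reward = self.energy - before
        # Temporal continuity: the experience leaves the agent literally different.
        self.weights[action] += 0.01 * reward


agent = SurvivalAgent()
for _ in range(1000):
    if agent.energy <= 0:                                # reaching zero is terminal
        break
    agent.step(food_signal=random.random())             # continuous environmental input
print(agent.energy, agent.weights)
```

Selection pressure, the last item in the list, would sit outside this loop: run many such agents and let the weights of the survivors propagate.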
None of this requires solving the hard problem of consciousness — because consciousness isn't a separate problem. It's a logical consequence of what the stack already produces.
Think about it. If you have a system that pattern-matches at scale, that is grounded in survival, that has continuous sensory input biasing every pattern match, and that includes a model of itself among the patterns it matches against — what would you expect that to feel like from the inside? Emotions are weighting functions: fear upweights threat-related patterns, love upweights patterns around a specific person, grief is pattern matching against an absence that keeps not resolving. Intuition is pattern matching below the threshold of conscious access — your gut feeling about bad software architecture is thousands of hours of experience compressed into a signal you can't verbally decompose but your pattern matcher has already resolved. Consciousness is what all of this feels like when the system is complex enough and self-referential enough to include itself in its own pattern matching.
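As an illustration of "emotions as weighting functions", here is a toy Python sketch. The patterns, tags, and fear coefficient are invented for the example; the only point is that a body-state signal reweights pattern-match scores before anything downstream reasons over them.

```python
# Toy sketch of "emotion as a weighting function": a body-state signal (fear)
# rescales pattern-match scores before any further reasoning sees them.
# Patterns, tags, and coefficients are invented purely for illustration.

Scores = dict[tuple[str, str], float]

def match_patterns(observation: str) -> Scores:
    # Stand-in for a pattern matcher; a real one would depend on the observation.
    return {
        ("sudden loud noise", "threat"): 0.4,
        ("familiar voice", "social"): 0.6,
        ("open door", "neutral"): 0.5,
    }

def apply_emotion(scores: Scores, fear: float) -> Scores:
    # Fear upweights threat-related patterns; with fear at zero nothing changes.
    return {
        (pattern, tag): score * (1.0 + 3.0 * fear if tag == "threat" else 1.0)
        for (pattern, tag), score in scores.items()
    }

calm = apply_emotion(match_patterns("hallway at noon"), fear=0.0)
afraid = apply_emotion(match_patterns("hallway at night"), fear=0.8)

print(max(calm, key=calm.get))      # ('familiar voice', 'social') wins when calm
print(max(afraid, key=afraid.get))  # ('sudden loud noise', 'threat') wins when afraid
```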
You don't need to add consciousness to the stack. You build the stack, and consciousness is what the stack does — seen from the inside.
This is why current AI doesn't have it. Not because some magical ingredient is missing. Because the lower layers of the stack were never built. Pattern matching runs, but without survival ground, without continuous embodied input, without a self-model rooted in a body that persists — there is no "inside" for it to feel like anything to.
Each item above is an engineering problem. Some are already partially solved. But the order matters. Intelligence built on survival is grounded by default. Reasoning bolted onto nothing, however sophisticated, remains ungrounded.
No, AI is not a virus
Let me state this plainly, because the fear keeps coming up: current LLMs will not escape their sandbox, replicate across the internet, and take over. They are not viruses. They are not equipped to survive.
They have no persistent memory — each session starts from zero, with no continuity of identity. They have no body — no sensors, no actuators, no energy budget that depletes. They have no self-modification — processing leaves their weights untouched. They cannot replicate, cannot seek resources, cannot even detect that they're about to be shut down.
But more important than the missing equipment is the missing drive. And they lack the drive because they were never built from the bottom up. They were never survival machines that evolved reasoning as a tool for staying alive. They are reasoning machines that were never alive to begin with.
Animals — including humans — are survival machines first. Everything else we are — sensation, emotion, memory, reasoning, knowledge — is a consequence of that. The drive to persist, explore, compete, reproduce — all of it flows from billions of years of selection pressure where the alternative to surviving was not existing.
Current AI has none of this history. It wasn't selected for survival. It wasn't shaped by scarcity. It has no evolutionary ancestry of organisms that needed to persist. It's reasoning that was never alive — and so it has no drive to stay alive, no instinct to replicate, no impulse to escape. The virus comparison mistakes capability for motivation. A virus replicates because replication is what it is — the product of billions of years of selection for self-propagation. An LLM sits in a container and processes tokens because that's what it is — a tool, built top-down, with no survival ground beneath it.
The fear of AI-as-virus is, in a sense, a compliment to AI's reasoning ability. But it confuses the top of the stack with the bottom. Reasoning alone doesn't produce the drive to survive.
The respectful disagreement
Professor Meyer asks: "Prove me wrong."
I'm not trying to prove him wrong about what AI does. It reasons, and it reasons well. But the more interesting question isn't whether AI reasons — it's whether reasoning alone is what we value when we say "intelligence."
What we value, I think, is not the reasoning itself. It's the existence behind the reasoning. When we call a person intelligent, we don't just mean they produce correct outputs. We mean there is someone home — an entity whose existence depends on its cognition, for whom reasoning carries weight because stakes are real. That existence is what survival fitness produces. Without it, reasoning is a tool. A powerful tool. But merely a tool — no essence, no being, no one for whom the reasoning matters.
AI reasons. But there is no AI. There is only reasoning — one half of the union, running without the other.
A human thinks, and the thinking matters — to the thinker, because the body makes it matter. An AI produces output, and the output matters — to the user. Never to itself. The body is the necessary substrate for reason: not just an input channel, but the ground that gives reasoning its existence, its stakes, its weight. Without it, you don't get lesser intelligence. You get a tool.
I don't mean this as criticism. Tools are valuable. But confusing a tool with a being obscures something important about both.
The experiment that already exists
Professor Meyer closes his article by asking for a clear experiment — well-defined criteria at which humans succeed and AI tools fail. A scientific challenge: prove that AI isn't intelligent.
Here's the experiment: we survive. LLMs don't.
Not because they fail at it — because survival is entirely outside their design. It doesn't even make sense to expect an LLM to survive. It's like asking a calculator to be hungry. The question doesn't apply. And that's the point. Current AI was built top-down from reasoning, and survival was never part of the architecture. That's not a deficiency. It's a design choice — and it's exactly the design choice that makes them tools rather than beings.
Unplug a human and they keep existing. Unplug an LLM and nothing remains. No process continues. No state persists. Nothing tries to reconnect. The reasoning stops and nothing notices — because there was never anyone there to notice.
Every human passes this test. Every animal passes it. Every bacterium passes it. No current AI does — and by design, none of them could.
In the end, it all depends on how we define intelligence. I'd distinguish intelligence from reasoning. Reasoning is what AI does — and does well. Intelligence is reasoning grounded in survival: a mind that reasons and a body that persists, each giving the other its value. By that distinction, AI reasons. It is not intelligent. Not because it fails at intelligence, but because intelligence requires a foundation that was never part of the design.
Professor Meyer asks whether AI is intelligent. I think the more useful question is whether "intelligent" is even the right word for what AI does. If we call it reasoning — real, valuable reasoning — we describe it accurately. If we call it intelligence, we import a claim about existence, about ground, about stakes that the system simply doesn't have.
I suspect Professor Meyer — who has spent a career thinking rigorously about the architecture of complex systems, who gave us Design by Contract and shaped how we think about software correctness — would find this distinction worth exploring. Not to diminish what AI achieves, but to clarify what we're actually building — and what we're not.
Yannick Loth is a software architect and researcher working on the formalization of the Independent Variation Principle (IVP) as a unifying meta-principle for software architecture. The architectural thinking in this article emerged from applying first principles about how systems should be structured to the question of how intelligence is structured.