
Khiari Hamdane

We don't need to copy the human brain — we need to learn from it

In the two previous articles, we identified two problems. The first: LLMs don't always know when they're wrong — they generate, invent, fill in the gaps, sometimes without the slightest warning signal. The second: in the physical world, this behavior becomes dangerous. A system that improvises in an unknown situation doesn't produce a bad answer — it produces an unexpected movement in an environment where humans may be present.

The question that follows: where do we go from here? And does the answer lie somewhere in the direction of the human brain?

Why reproducing the brain is unthinkable

We don't fully understand how the human brain works, and that is precisely why reproducing it is out of reach. But calling it unthinkable doesn't mean it's useless to draw inspiration from it. The question isn't "how do we copy the brain" but "what specific mechanisms are we missing today, and does the human brain give us any leads to build them?"

What we can extract from it

Three mechanisms seem particularly important to me, and all three are absent from current LLMs in their native form.

The first is metacognition — the ability to know what you don't know. A human who doesn't know the answer to a question can recognize that and stop. An LLM will produce an answer regardless, even without solid ground to stand on.

The second is real-time self-correction. The human brain adjusts continuously — while we speak, while we act. Current LLM architectures work in one direction: generate first, verify later if at all.

The third is active doubt. Faced with an unknown situation, a human slows down, questions, asks for clarification. They don't keep acting with the same momentum. This is precisely the property that current autonomous systems lack.
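To make the first and third mechanisms concrete, here is a minimal sketch of the decision logic. Everything in it is a hypothetical stand-in: the thresholds are invented, and the `confidence` field assumes a calibrated uncertainty estimate, which is exactly what current LLMs lack.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- stand-ins for whatever calibrated
# uncertainty estimate a real system would compute.
CONFIDENCE_FLOOR = 0.6   # below this: refuse to act at all
DOUBT_BAND = 0.8         # below this: ask for clarification first

@dataclass
class Proposal:
    action: str
    confidence: float  # stand-in for a real uncertainty estimate

def decide(p: Proposal) -> str:
    # Metacognition: know what you don't know, and stop.
    if p.confidence < CONFIDENCE_FLOOR:
        return "ABSTAIN: outside known domain"
    # Active doubt: in the uncertain band, slow down and ask
    # for clarification instead of acting with the same momentum.
    if p.confidence < DOUBT_BAND:
        return f"CLARIFY: need more information before '{p.action}'"
    return f"ACT: {p.action}"

print(decide(Proposal("move arm left", 0.95)))  # ACT: move arm left
print(decide(Proposal("move arm left", 0.70)))  # CLARIFY: ...
print(decide(Proposal("move arm left", 0.30)))  # ABSTAIN: outside known domain
```

The hard part isn't this control flow — it's producing a confidence value that actually means something, which is where the research effort lies.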

What we're looking for isn't machines that think like humans. It's machines that know how to doubt like humans — without the emotions, without the fatigue, without the biases.

Where research stands

These questions aren't new. Researchers have been working for years on what are called neuro-inspired architectures — systems that attempt to go beyond pure statistical generation and integrate mechanisms closer to reasoning. The idea isn't new, but it remains largely open.

Some approaches try to ground models in verifiable sources to limit invention. Others explore systems where planning and verification are separated — the model proposes, another mechanism checks. None of them fully solve the problem. But all point in the same direction: making models bigger isn't enough. We need to change what they do with uncertainty.
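A hedged sketch of the propose-then-check idea, assuming a toy `knowledge_base` as the trusted source. Every name here is a hypothetical stand-in, not a real retrieval or verification pipeline:

```python
# Hypothetical trusted source the verifier can check against.
knowledge_base = {"capital_of_france": "Paris"}

def propose(question: str) -> str:
    # Stand-in for a generative model: it always produces *some*
    # answer, plausible or not -- exactly the behavior described above.
    return knowledge_base.get(question, "a plausible-sounding guess")

def verify(question: str, answer: str) -> bool:
    # Separate mechanism: accept only what can be grounded in the source.
    return knowledge_base.get(question) == answer

def answer(question: str) -> str:
    candidate = propose(question)
    if verify(question, candidate):
        return candidate
    return "UNVERIFIED: declining to answer"

print(answer("capital_of_france"))  # Paris
print(answer("capital_of_spain"))   # UNVERIFIED: declining to answer
```

The point of the separation is that the generator's fluency never gets to vouch for itself: acceptance depends on a mechanism it doesn't control.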

What this would change in practice

In robotics first — an autonomous agent capable of detecting that it's outside its domain and stopping rather than improvising would fundamentally change the reliability of physical systems.

In medicine, a diagnostic support system that signals its level of certainty — and refuses to conclude when data is insufficient — is infinitely more useful than a system that always produces a confident answer.
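As an illustration only — the scoring rule and evidence threshold below are invented for the sketch, not a real diagnostic method — such a system might behave like this:

```python
def assess(observations: list[str], min_evidence: int = 3) -> dict:
    # Refuse to conclude when data is insufficient, rather than guessing.
    if len(observations) < min_evidence:
        return {"conclusion": None, "status": "insufficient data"}
    # Invented scoring rule: confidence grows with corroborating
    # evidence, capped at 1.0. A real system would need a calibrated
    # model here, not a line count.
    confidence = min(1.0, len(observations) / 10)
    return {"conclusion": "candidate diagnosis",
            "confidence": confidence,
            "status": "ok"}

print(assess(["fever"]))
# {'conclusion': None, 'status': 'insufficient data'}
print(assess(["fever", "cough", "fatigue", "rash"]))
# {'conclusion': 'candidate diagnosis', 'confidence': 0.4, 'status': 'ok'}
```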

In critical infrastructure — power grids, water management, transport — agents capable of flagging an anomaly they can't interpret, rather than continuing to operate normally, could prevent cascading failures.

In education, a pedagogical agent that adapts its explanations based on the learner's progress — and recognizes when it has reached its own limits — is much closer to what a good teacher actually does.

In the energy sector, systems capable of distinguishing a known situation from an unfamiliar one — and treating them differently — could transform grid management at a time when networks are becoming increasingly complex with the integration of renewable energy.

We're not trying to build an artificial brain. What we're looking for is more precise and more humble than that: systems capable of doubting, self-correcting, and recognizing the limits of what they know.

The human brain isn't a model to copy. It's a source of inspiration to build something different — more reliable, more honest about its own limits, and for that reason, genuinely useful in the real world.

That leap isn't just a matter of computing power. It's a matter of design.

In the next article, we'll explore concretely how an AI capable of correcting itself and doubting could become a 24/7 researcher: in medicine, helping to discover new drugs; in energy, analyzing the grid and identifying new alternatives.
