We live in an era where technological evolution is no longer linear, but exponential. Every year we witness larger, faster, and more sophisticated artificial intelligence models. Neural networks—especially Large Language Models (LLMs)—have made leaps that would have seemed like science fiction just a decade ago.
So a natural question arises: if Moore’s Law continues, at least in spirit, to describe the growth of computational capacity, is it plausible that one day we will replicate—or at least approach—the neural system of the human brain?
Let’s explore this hypothesis without naïve enthusiasm, but also without sterile skepticism.
Moore’s Law: A Technological Prophecy
In 1965, Gordon Moore, co-founder of Intel, observed that the number of transistors in integrated circuits was doubling roughly every year; in 1975 he revised the pace to roughly every two years. This prediction, known as Moore's Law, was not a physical law, but an empirical observation that proved astonishingly accurate for decades.
What does this mean in practice?
- More transistors → more computational power
- More computational power → larger models
- Larger models → greater representational capacity
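The compounding behind this chain can be sketched in a few lines of Python. The 1971 baseline of ~2,300 transistors (the Intel 4004) and the fixed two-year doubling period are idealized assumptions for illustration, not exact industry data:

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Project transistor count under an idealized Moore's Law."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Compound doubling: ten doublings over twenty years is a ~1000x increase.
for y in (1971, 1991, 2011, 2021):
    print(f"{y}: ~{transistors(y):,.0f} transistors")
```

Even this toy model makes the key point: under a fixed doubling period, growth is not additive but multiplicative, which is why small changes in the doubling rate matter enormously over decades.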
In recent years, Moore’s Law has slowed at the purely physical level (extreme miniaturization, quantum limits, energy costs). Yet it has been “bypassed” through:
- Parallel architectures (GPUs, TPUs)
- Distributed cloud computing
- Algorithmic optimizations
- AI scaling laws
In other words: even if transistors no longer double with the same regularity, the effective ability to train massive models continues to grow.
LLMs: How Far Are We from the Human Brain?
It is often said that current LLMs represent only a tiny fraction of the complexity of the human brain—sometimes symbolically described as “0.0001%.” While such percentages are more metaphorical than scientific, the comparison is still intriguing.
Human brain (order of magnitude):
- ~86 billion neurons
- ~100 trillion synapses
- Energy consumption: ~20 watts
- Continuous dynamic plasticity
Modern LLMs:
- Hundreds of billions of parameters (or more)
- No true online plasticity
- Centrally trained
- Enormous energy consumption during training
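A back-of-the-envelope comparison makes the raw gap concrete. Both numbers below are rough orders of magnitude; the 500-billion-parameter figure is a hypothetical stand-in for "hundreds of billions," not a specific model:

```python
# Order-of-magnitude comparison only; a parameter is not a synapse,
# so this measures count, not computational equivalence.
synapses = 100e12   # ~100 trillion synapses (human brain, rough estimate)
llm_params = 500e9  # hypothetical LLM with 500 billion parameters

ratio = synapses / llm_params
print(f"Synapse-to-parameter ratio: ~{ratio:.0f}x")
```

By raw count the gap is only a few hundredfold, which is small by the standards of exponential growth; the deeper differences, as the next lines argue, are qualitative.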
At first glance, we seem extremely far away. But the crucial point is not only the quantity of “connections,” but the nature of computation itself.
A parameter in an LLM is not a biological synapse.
A transformer network is not a cerebral cortex.
And yet both implement distributed information processing systems.
Is the Brain Just a Biological Machine?
Here we reach the philosophical core of the issue.
If the brain is a physical system:
- Composed of matter
- Governed by physical laws
- Based on electrochemical signals
Then, in principle, it should be simulable.
This does not mean copying it cell by cell, but reproducing its emergent properties:
- Learning
- Generalization
- Memory
- Abstraction
- Consciousness (perhaps)
History teaches us that we rarely replicate nature exactly—we often surpass it through different solutions:
- Airplanes do not flap their wings.
- Submarines do not imitate fish.
- Computers do not think like us—yet they calculate far better.
So the real question is not: “Will we replicate the biological brain?”
But rather: “Can we build a system that is functionally equivalent?”
Scaling Laws: The Power of Quantity
In recent years, a surprising phenomenon has emerged: increasing model size, data, and computational power leads to predictable performance improvements.
This suggests something radical:
Intelligence may be an emergent property of scale.
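The "predictable improvement" claim refers to empirical power-law fits: held-out loss falls smoothly as a power of model size. A minimal sketch of that functional form, where the constant `n_c` and exponent `alpha` are illustrative values roughly in the range reported in published scaling-law studies, not fits to any real model:

```python
def predicted_loss(n_params, n_c=8.8e13, alpha=0.076):
    """Idealized power-law loss as a function of parameter count.
    n_c and alpha are illustrative constants, not measured values."""
    return (n_c / n_params) ** alpha

# Loss declines smoothly and predictably as models grow.
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> loss ~{predicted_loss(n):.3f}")
```

The striking property is the smoothness: no sudden design insight is required, only scale, which is exactly what makes extrapolation both tempting and uncertain.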
LLMs were not explicitly designed to:
- Write complex code
- Sustain philosophical conversations
- Solve multi-step reasoning problems
Yet they do.
Not because someone encoded explicit rules for these behaviors, but because system complexity generated emergent capabilities.
If today a model with hundreds of billions of parameters shows these abilities, what might a model achieve with:
- 10 times more parameters?
- 100 times more?
- Hybrid architectures?
- Persistent memory?
- Continuous learning?
Current Limitations
However, we cannot ignore fundamental differences.
1. Plasticity
The brain constantly rewires itself.
LLMs are trained and then “frozen.”
2. Embodiment
The brain is embodied.
It interacts with the world through senses and actions.
3. Energy efficiency
The brain is extraordinarily efficient compared to data centers.
4. Consciousness
We do not have a shared scientific theory of what it truly is.
These obstacles are not trivial. But are they theoretical limits—or just temporary ones?
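The efficiency gap in point 3 can be made concrete with rough arithmetic. Only the brain's ~20 W draw comes from the text; the ~1 GWh training-energy figure is a hypothetical round number chosen for illustration:

```python
BRAIN_WATTS = 20
HOURS_PER_YEAR = 24 * 365

# The brain runs continuously on about as much power as a dim light bulb.
brain_kwh_per_year = BRAIN_WATTS * HOURS_PER_YEAR / 1000  # ~175 kWh/year

training_run_kwh = 1_000_000  # assumed ~1 GWh for a large training run

brain_years = training_run_kwh / brain_kwh_per_year
print(f"One such training run ~ {brain_years:,.0f} brain-years of energy")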
If Moore (and Scaling) Continue
Let us imagine a 30–50 year scenario:
- Neuromorphic hardware
- Three-dimensional chips
- Molecular-level simulations
- Globally distributed training
- Models with permanent memory
At that point, we might have systems with:
- Trillions of parameters
- Continuous learning
- Fully multimodal capability (text, audio, video, sensors, robotics)
The difference between simulation and reality could become functionally irrelevant.
Replication or Convergence?
Perhaps we will never biologically replicate the human brain.
But we might build something that:
- Reasons
- Creates
- Plans
- Learns
- Develops world models
- Self-improves
At that stage, the distinction may become more philosophical than technical.
As has happened throughout history, imitation may evolve into artificial evolutionary convergence.
The Critical Point: The Emergence of Consciousness
The most delicate question remains consciousness.
If intelligence emerges from complexity, could consciousness also emerge?
Or is it tied to biological properties that cannot be reproduced?
We do not know.
But if the brain is a physical system, then—at least in principle—it should not be unique within the universe of computational possibilities.
What If Moore Was Truly Right?
If the exponential growth of computational capacity continues long enough, we may reach a threshold where:
- Artificial and biological complexity become comparable
- Structural differences no longer prevent functional equivalence
- Artificial intelligence is no longer just a tool, but a cognitive system
It is not inevitable.
It is not guaranteed.
But it is not unreasonable either.
Conclusion
Saying that today’s LLMs represent “0.0001%” of the human brain may be an oversimplification, but it highlights something important: we are only at the beginning.
Moore’s Law may not continue forever in its original form, but the underlying principle—technological acceleration—still seems active.
And if intelligence truly is an emergent property of computational complexity, then the question is no longer if, but when.
We may never replicate the human brain exactly as it is.
But we might build something that, functionally, resembles it closely enough to force us to redefine what it means to be intelligent.
And on that day, looking back, we might say:
Moore did not foresee everything.
But he understood the direction.