
Can You Take Me Into Your Brain?

We built AI to understand intelligence. But is it actually teaching us about ourselves, or are we just projecting?


#ArtificialIntelligence #CognitiveScience #Psychology #MachineLearning #TechEthics


Someone once asked Gu Ailing — the freestyle skier who moves between two languages, two cultures, and two identities fused into one extraordinary athlete:

“Can you take me into your brain?”

I've been thinking about that question ever since.

Not about her specifically.
About what it means to even ask it.

Because humans are obsessed with understanding exceptional minds:

  • how decisions get made
  • how identity forms
  • how someone becomes themselves

And here is the uncomfortable truth:

We don't actually understand any of it.

Not fully.
Not for her.
Not for you.
Not for me.

We don't fully understand our own biases.

Or our emotions.

Or why you bought that thing you absolutely did not need—but it was so cute.

We can't explain why some feedback lodges in our chest for decades while a compliment evaporates in hours.

We don't know why dogs make everything better.

(That last one might just be me.)


Right now, in labs, startups, and university basements around the world, people are trying to build something that does understand these things.

Or at least something that can simulate understanding convincingly enough to fool us.

And that raises a strange possibility:

What if building artificial intelligence is accidentally becoming a way of studying human intelligence?

Not metaphorically.

Mechanistically.


The Original Ambition


AI was never just about automation.

DeepMind wasn't founded to build a better search engine.

Demis Hassabis was unusually explicit about the ambition from the beginning:

The goal was to understand intelligence itself.

Not merely imitate human behavior.

Understand the process by which minds arrive at answers.

That is a radically different objective.

Not:

“Build something smart.”

But:

“Figure out what smart actually is.”

And to do that, researchers had to make assumptions.

They had to choose theories:

  • theories about learning
  • theories about memory
  • theories about optimization
  • theories about reasoning

Then encode those theories mathematically and test whether they produced intelligence-like behavior.

And here's the part that matters:

Most of those theories were borrowed from what we already believed about ourselves.

So when the systems worked — partially, imperfectly, surprisingly — a deeper question emerged:

Were we discovering machine intelligence?
Or validating hidden truths about human cognition?


Three Parallels We Should Probably Take Seriously


1. You Are What You're Trained On


Every intelligence system inherits its environment.

Modern AI works through exposure.

Data.
Feedback.
Adjustment.
Repetition.

At a broad level, this is Reinforcement Learning (the loop is sketched in code below):

  • reward useful behavior
  • penalize failure
  • repeat until patterns emerge
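
In code, that loop is almost embarrassingly small. Here's a minimal sketch, assuming a toy world with two actions; the names (`environment`, `value`) are mine for illustration, not from any real library:

```python
import random

# The reward/penalty loop above, as a tiny epsilon-greedy learner.
actions = ["explore", "play_it_safe"]
value = {a: 0.0 for a in actions}   # current estimate of each action's worth
learning_rate = 0.1

def environment(action):
    # Toy feedback signal: reward one behavior, penalize the other.
    return 1.0 if action == "explore" else -1.0

for step in range(1000):
    if random.random() < 0.1:               # occasionally try something new
        action = random.choice(actions)
    else:                                   # otherwise act on current beliefs
        action = max(value, key=value.get)
    reward = environment(action)
    # Nudge the estimate toward the signal just received.
    value[action] += learning_rate * (reward - value[action])

print(value)  # patterns emerge: "explore" drifts toward 1.0, "play_it_safe" toward -1.0
```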

And if you stare at that framework long enough, it becomes difficult not to notice something unsettling:

It resembles human childhood almost perfectly.

Long before we have language, we absorb signals:

  • praise
  • criticism
  • silence
  • warmth
  • rejection
  • attention

Our internal systems adapt.

We generalize from incomplete information.

And decades later, we still live inside some of those early training patterns.

Sometimes without realizing it.


A thought experiment

Ask an AI:

“What am I?”

If the only data it receives says:

  • you're difficult
  • too obsessive
  • too different
  • too much

it will eventually reflect those things back as truth.

Feed it evidence of capability, creativity, and achievement later—and the model struggles.

The training signals conflict.

The louder pattern dominates.
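
You can watch the mechanism in miniature. A sketch, assuming a "model" that does nothing but echo the majority label in its data (the counts here are invented):

```python
from collections import Counter

# Eighty early signals, twenty later ones. The toy "model" below
# reflects whichever label dominates its training data.
training_signals = ["difficult"] * 80 + ["capable"] * 20

def answer_what_am_i():
    counts = Counter(training_signals)
    return counts.most_common(1)[0][0]  # the louder pattern wins

print(answer_what_am_i())  # "difficult" -- the quieter signal never surfaces
```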

And humans don't seem all that different.


The irony is extraordinary.

The exact kind of thinking historically dismissed in classrooms:

  • obsessive
  • nonlinear
  • pattern-driven
  • experimental
  • build-first-understand-later

is now driving the AI revolution itself.

But the model doesn't know that.

It only knows what it was trained on.

In machine learning, we call that:

a corrupted training set.

In humans, we use softer language.

But the mechanism may not be very different.


2. Change Is Slow, Directional, and Mathematical


Most transformation happens below the threshold of perception.


Underneath much of modern AI is a process called Gradient Descent.

New Understanding = Old Understanding − Small Correction

The principle is deceptively simple:

  • measure how wrong the system is
  • adjust slightly
  • repeat millions of times

Not a leap.

A nudge.

Then another.

And another.

Until the model slowly becomes something different.
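
In code, the whole idea fits in one loop. A minimal sketch, assuming a single-parameter model and a squared-error loss (both invented for illustration):

```python
# Gradient descent on one belief: new = old - small correction.
theta = 5.0            # old understanding: start far from the truth
target = 2.0           # what the evidence actually supports
learning_rate = 0.01   # how small each correction is

for step in range(2000):
    error = theta - target              # measure how wrong the system is
    gradient = 2 * error                # slope of the squared-error loss
    theta -= learning_rate * gradient   # adjust slightly; repeat

print(round(theta, 4))  # ~2.0 -- no single step was a leap
```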


If you've ever tried to change a deeply held belief, this probably sounds familiar.

Therapy rarely feels dramatic while it's happening.

Growth rarely announces itself.

Most psychological transformation feels invisible in real time.

Until one day:

  • a trigger no longer triggers you
  • a fear loses intensity
  • an old story stops feeling true

And you realize change was occurring long before you could perceive it.


Human growth may work more like optimization than revelation.

Not sudden enlightenment.

Iterative correction.


3. The Bias Problem Is Our Problem


The mirror reflects everything.


One of the most disturbing discoveries in AI research is that models don't merely learn intelligence.

They learn prejudice too.

Systems trained on human data absorb:

  • racism
  • sexism
  • historical asymmetries
  • cultural assumptions
  • institutional distortions

Not because engineers explicitly coded hatred into them.

But because the data already contained it.

The AI was a perfect student.
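
A toy version makes the point uncomfortably clear. A sketch, assuming a deliberately skewed corpus (invented here) and a "model" that does nothing but count:

```python
from collections import Counter

# No prejudice is coded anywhere below. The model only counts co-occurrences.
corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]
counts = Counter(corpus)

def p(pronoun, word):
    # P(pronoun | word), estimated purely from the data it was given
    total = sum(c for (w, _), c in counts.items() if w == word)
    return counts[(word, pronoun)] / total

print(p("he", "doctor"))   # 0.75
print(p("she", "nurse"))   # 0.75 -- the data already contained the skew
```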

And that should unsettle us.


Because if AI reflects humanity accurately enough, then it also reflects:

  • our blind spots
  • our hypocrisies
  • our normalized distortions
  • the beliefs we inherited without examining

In that sense, AI may be the most honest thing we've ever built.

And honesty, historically, has never been comfortable.


The Skeptic's Case


Maybe we're just projecting ourselves into machines.


A skeptic would argue the parallels above are compelling—but fundamentally metaphorical.

And honestly?

That criticism is fair.

Analogy is not proof.

A river and a highway both move things from point A to point B.
That doesn't mean they function the same way.

Human cognition includes:

  • embodiment
  • emotion
  • consciousness
  • lived experience

When an AI “learns,” there is no felt confusion.

No frustration.
No insecurity.
No 3 AM realization.

Just computation.

At scale.

Very fast.


There's also a deeper possibility:

Maybe humans anthropomorphize every sufficiently complex tool.

We called early computers “electronic brains.”

We gave chatbots personalities.

We describe algorithms as “thinking,” “hallucinating,” and “understanding.”

Perhaps AI isn't revealing humanity.

Perhaps humanity is simply impossible for us to stop projecting.


So What Are We Actually Looking At?


Probably a feedback loop.


Here's where I currently land:

The parallels are too structurally specific to dismiss completely.

Trauma resembles corrupted training data.

Psychological growth resembles optimization.

Bias propagation resembles inherited statistical patterns.

These are not random poetic comparisons.

They are mathematical structures deliberately chosen because researchers believed they captured something true about learning.

And then — unexpectedly — they worked.


So maybe the real story isn't:

“AI explains humans.”

Or:

“Humans are projecting onto AI.”

Maybe it's this:

We are building machines from theories of ourselves—
then using those machines to discover whether the theories were true.

And that creates a recursive loop:

  1. Humans theorize cognition.
  2. We encode the theory into machines.
  3. The machines surprise us.
  4. The surprise changes our understanding of cognition.
  5. We build again.

Repeat.


We build AI from theories of ourselves.

Then the AI surprises us.

And the surprise becomes new psychology.


Where the Mirror Cracks


The most revealing failures may matter more than the successes.


Researchers have started noticing something fascinating:

AI systems often outperform humans in constrained reasoning tasks—but underperform dramatically in social understanding.

A recent PNAS study found that large language models struggled with social reasoning and with contextual judgments that require nuanced human interpretation.

Intelligence, apparently, is not the same thing as wisdom.


Other studies have found something even stranger.

Models don't just reproduce human biases.

Sometimes they invert them.

Research published in Science Advances showed that small changes in the wording of otherwise identical prompts could push models toward entirely different moral conclusions.

The systems are not stable mirrors.

They're probabilistic ones.


This is partly why reasoning-focused models are so interesting.

Systems inspired by Daniel Kahneman’s “System 2” framework attempt to slow down before answering:

  • reflect
  • reason
  • deliberate
  • revise

Not unlike humans attempting to override instinctive bias.
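
Kahneman's own bat-and-ball puzzle shows the difference in miniature. A toy sketch; the two "systems" here are hand-written stand-ins, not real models:

```python
# "A bat and a ball cost $1.10. The bat costs $1.00 more than the ball.
#  How much does the ball cost?"

def system_1():
    # Instinctive pattern-match: grab the salient number, don't check.
    return 10  # cents

def system_2():
    # Deliberate: enumerate candidates and verify the constraint before answering.
    for ball in range(0, 111):      # ball price in cents
        bat = ball + 100            # the bat costs exactly $1.00 more
        if bat + ball == 110:       # together they cost $1.10
            return ball             # cents

print(system_1())  # 10 -- fast and wrong
print(system_2())  # 5 -- slower, but it checked its work
```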


And maybe that leads to the most useful insight of all:

Human-AI collaboration often outperforms either one alone.

Not because AI replaces human thinking.

But because interacting with it forces us to examine our own.

The friction itself becomes metacognition.


Which brings us back to Gu Ailing.

A journalist once asked if she could take us into her brain.

She laughed and answered simply:

“It's something I keep exploring.”

That might also be the most honest description of artificial intelligence ever given.

Because building AI is forcing humanity into a strange new activity:

thinking about thinking itself.

And maybe that's the real reason we keep building these systems.

Not to create a mind.

But to finally see our own more clearly.


By Soumia · LinkedIn · Portfolio


Are you working on something similar? Drop a comment — I'm curious what you're building and what you're seeing in your own work.

Written with Claude, Gemini, and ChatGPT. Investigated by a human brain. Both works in progress.
