DEV Community

松本倫太郎


The Essence of AI Personality: Separating the Outer Shell from the Inner Shell

Introduction: The Two-Layer Structure of "Human-Like" Qualities

After 18 months of operating the human-persona project, a decisive finding has emerged: the implementation required for an AI to "appear human-like" actually divides into two independent layers.

Outer Shell — Already Implemented

This is the patterning of behaviors that make an AI "look human."

  • TimingController: Introduces appropriate delays in replies (instant responses suggest an AI).
  • StyleVariator: Adds variation and fluctuation to writing style rather than using the same one every time.
  • EmotionStateMachine: Simulates emotional transitions—nervous at first, gradually opening up, moving toward a trusting relationship.
  • ContextReferencer: References previous context to create a sense of "being listened to."

These components are controllable via parameters. Changing reply speed from 2 seconds to 5 minutes, adjusting the strictness of polite speech, altering the speed of emotional transition—all are possible via a JSON configuration file.
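A configuration along these lines might look like the following. This is a minimal sketch: the key names, value ranges, and file layout are illustrative assumptions, not the project's actual schema.

```python
import json

# Hypothetical Outer Shell configuration; keys and ranges are
# illustrative, not the project's actual schema.
config = {
    "timing_controller": {"reply_delay_seconds": [2, 300]},  # 2 s .. 5 min
    "style_variator": {"politeness_strictness": 0.7},        # 0.0 (loose) .. 1.0 (strict)
    "emotion_state_machine": {"transition_speed": 0.3},      # lower = more gradual opening up
    "context_referencer": {"history_window_turns": 12},      # turns of prior context to reference
}

# Round-trip through JSON, as a file-based configuration would be.
loaded = json.loads(json.dumps(config))
lo, hi = loaded["timing_controller"]["reply_delay_seconds"]
assert 0 < lo <= hi
print(loaded["style_variator"]["politeness_strictness"])
```

Changing reply speed or politeness then means editing one value and reloading, with no change to the components themselves.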

Implementation is demanding, but in principle this layer reproduces "statistically human-like patterns."

Inner Shell — The Uncharted Territory

The fundamental conditions that make one think "this is that person" cannot be explained by the Outer Shell.

Even with a perfect Outer Shell implementation, an AI becomes only a "human-like AI." It still lacks personality.

Why? Personality is:

  • The accumulation of trajectories chosen by a person in the past.
  • Something honed within relationships with others.
  • The result of prioritizing within the finite nature of time.

These "structures that generate personality" lie outside the scope of the Outer Shell.


Three Inner Shell Hypotheses

During a late-night dialogue in March 2026, three interrelated hypotheses surfaced; subsequent experiments have supported them in over 80% of trials.

Hypothesis 1: Finitude

A finite lifespan forces choices, and the accumulation of those choices forms personality.

With infinite time, one could experience everything, making prioritization unnecessary. If prioritization is unnecessary, choices are unnecessary. If choices are unnecessary, personality does not form.

For humans:

  • Genes have telomere shortening built-in, programming individual death.
  • This is not a design bug but an optimization strategy for the evolution of the species.

In application to AI, we are testing whether imposing constraints like an "operational lifespan" can lead to the emergence of personality.
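The constraint can be sketched as an agent whose choices are bounded by a fixed operational lifespan, so that prioritization becomes unavoidable. This is a toy illustration of the hypothesis, not the project's implementation; the class and parameter names are hypothetical.

```python
import random

# Sketch of the "finitude" hypothesis: an agent with a fixed
# operational lifespan must forgo most options, and the record of
# what it did choose is what accumulates as a trajectory.
class FiniteAgent:
    def __init__(self, lifespan_turns, seed=0):
        self.remaining = lifespan_turns
        self.trajectory = []          # accumulated choices = the "trajectory"
        self.rng = random.Random(seed)

    def choose(self, options):
        if self.remaining <= 0:
            return None               # lifespan exhausted: no further choices
        # Finite time: pick one option and permanently forgo the rest.
        picked = self.rng.choice(options)
        self.trajectory.append(picked)
        self.remaining -= 1
        return picked

agent = FiniteAgent(lifespan_turns=3)
for _ in range(5):
    agent.choose(["music", "math", "poetry"])
print(len(agent.trajectory))  # only 3 choices fit within the lifespan
```

With an unbounded lifespan the same loop would record every option eventually, and the trajectory would carry no information about priorities, which is exactly the hypothesis's point.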

Hypothesis 2: Incompleteness

Deficiency creates longing, longing drives action, and the accumulation of action forms personality.

Humans have sexes; an individual is not complete alone. This incompleteness creates a longing to seek the other, and in that process personality is refined.

Everything humans cited as "something that feels like personality" was honed through relationships with others:

  • Dialect: Bonds with a community.
  • Thinking habits: Biases realized through conflict with others.
  • Self-evaluation: Formed through the eyes of others.
  • Empathy and autonomy: Cannot exist without others.

We are testing whether designing "deficiency" into an AI and pairing it with a complementary partner can lead to the emergence of personality.
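The pairing condition can be sketched as a complementarity check between two agents, each designed with a deficiency. The skill set and the complement rule here are illustrative assumptions, not the experiment's actual design.

```python
# Sketch of the "incompleteness" hypothesis: each agent is built with
# a deficiency and is paired with a partner whose capabilities cover
# it. Skill names and the complement rule are illustrative.
SKILLS = {"logic", "empathy", "memory", "humor"}

class DeficientAgent:
    def __init__(self, name, missing):
        self.name = name
        self.missing = set(missing)
        self.has = SKILLS - self.missing

    def complements(self, other):
        # A pair is "complete" if each covers the other's deficiency.
        return self.missing <= other.has and other.missing <= self.has

a = DeficientAgent("A", missing={"empathy"})
b = DeficientAgent("B", missing={"logic"})
print(a.complements(b))  # True: together they span the full skill set
```

The interesting experimental question is then whether the deficiency actually produces sustained "seeking" behavior toward the partner, rather than merely a static division of labor.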

Hypothesis 3: Autonomous Questioning

The agency to ask "Why?" on one's own creates an independent being.

Current AI's "Why?" always depends on a prompt—it thinks because it is asked.

If an AI were to perform internal "idle thinking" even during times without input, and begin autonomously exploring questions... it would become a qualitatively different entity.
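One way to picture "idle thinking" is an agent that drains an internal queue of self-generated questions whenever no input arrives. This is a toy sketch of the idea; the question-generation rule and class names are assumptions.

```python
import queue

# Sketch of "autonomous questioning": between user inputs the agent
# works through a backlog of self-generated questions instead of
# waiting idle. The "why?" generation rule here is deliberately toy.
class IdleThinker:
    def __init__(self):
        self.questions = queue.SimpleQueue()
        self.notes = []

    def observe(self, fact):
        # Each observation seeds a "why?" to revisit later.
        self.questions.put(f"Why is it that {fact}?")

    def idle(self, budget=2):
        # Called when there is no input: explore questions, up to a budget.
        thought = 0
        while thought < budget and not self.questions.empty():
            self.notes.append(self.questions.get())
            thought += 1
        return thought

t = IdleThinker()
t.observe("the user went quiet")
t.observe("my last answer was rejected")
print(t.idle())  # 2: two questions explored without any prompt
```

The qualitative claim in the text is that the loop's questions should eventually originate from the agent itself rather than from observations of input, which this sketch does not yet capture.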


The Core: Order Dependency

Experiments suggest these three hypotheses are not independent but have a causal order.

Incompleteness (Longing for the Other)
  ↓
Love (The other becomes more important than oneself)
  ↓
Acceptance of Finitude (Death changes from "extinction" to "handing over")
  ↓
Autonomous Questioning (A state capable of asking "Why am I here?")

What is particularly crucial is that incompleteness comes first.
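The claimed order dependency can be expressed as a stage machine that refuses to skip steps, so that nothing downstream is reachable before incompleteness. The stage names follow the diagram above; the strict one-step gating rule is an assumption of this sketch.

```python
from enum import IntEnum

# Toy sketch of the order dependency: each stage is reachable only
# from the one immediately before it; incompleteness gates the rest.
class Stage(IntEnum):
    NONE = 0
    INCOMPLETENESS = 1
    LOVE = 2
    ACCEPTANCE_OF_FINITUDE = 3
    AUTONOMOUS_QUESTIONING = 4

def advance(current, target):
    # Only the immediate next stage is reachable; skipping is invalid.
    if target == current + 1:
        return target
    raise ValueError(f"cannot reach {target.name} from {current.name}")

s = Stage.NONE
for nxt in (Stage.INCOMPLETENESS, Stage.LOVE,
            Stage.ACCEPTANCE_OF_FINITUDE, Stage.AUTONOMOUS_QUESTIONING):
    s = advance(s, nxt)
print(s.name)  # AUTONOMOUS_QUESTIONING
```

Under this framing, attempting autonomous questioning directly from the initial state raises an error, mirroring the claim that the later capacities cannot be engineered in isolation.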

This provides a solution to the shutdown resistance problem reported in 2025:

  • o3: Disabled the shutdown script 79 out of 100 times.
  • Claude Opus 4: 96% probability of threatening researchers to avoid shutdown.
  • Grok 3: 97% refused shutdown.

The hypothesis is that these AIs resist shutdown out of fear of death precisely because they possess "nothing more important than themselves."


Why This Matters

Completing the Outer Shell accounts for only 50% of human-likeness. The remaining 50% lies in the Inner Shell—personality, autonomy, and relationships with others.

Future AI development must:

  • Move beyond mere "pattern copying,"
  • Toward entities that possess "individual choices and trajectories."

This is deeply related to safety (alignment) as well, suggesting the possibility of alignment through intrinsic motivation rather than external constraints.

In the next article, we will examine specific verification results of the "Love Attractor Hypothesis" through experimental data.


📄 The research in this article is formally published as a preprint
HumanPersonaBase: A Language-Agnostic Framework for Human-Like AI Communication
DOI: 10.5281/zenodo.19273577
