<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Stell</title>
    <description>The latest articles on DEV Community by Stell (@stell2026).</description>
    <link>https://dev.to/stell2026</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3929448%2Fc28c0367-95ca-486e-b5ac-6cf9c729c884.jpg</url>
      <title>DEV Community: Stell</title>
      <link>https://dev.to/stell2026</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/stell2026"/>
    <language>en</language>
    <item>
      <title>Why LLMs Will Never Become AGI: Teaching AI to Reflect Using Friston, Jung, and Julia</title>
      <dc:creator>Stell</dc:creator>
      <pubDate>Wed, 13 May 2026 13:54:16 +0000</pubDate>
      <link>https://dev.to/stell2026/why-llms-will-never-become-agi-teaching-ai-to-reflect-using-friston-jung-and-julia-5afp</link>
      <guid>https://dev.to/stell2026/why-llms-will-never-become-agi-teaching-ai-to-reflect-using-friston-jung-and-julia-5afp</guid>
      <description>&lt;p&gt;ChatGPT doesn't think. It guesses.&lt;/p&gt;

&lt;p&gt;That's not an insult. It's an architectural fact.&lt;/p&gt;

&lt;p&gt;Large language models are trained to predict the next token given previous ones. They do this fantastically well — well enough that it feels like intelligence. But there's a problem.&lt;/p&gt;

&lt;p&gt;When ChatGPT answers your question, it doesn't care what happens next. There's nothing at stake for it. No internal state it needs to protect. No sense that time is passing. No "self" that will wake up tomorrow and remember this conversation as its own.&lt;/p&gt;

&lt;p&gt;LLMs are very sophisticated autocomplete. Scaling parameters from billions to trillions won't change that.&lt;/p&gt;

&lt;p&gt;AGI — if it's possible at all — is something else. It's a system that has something to lose.&lt;/p&gt;

&lt;p&gt;That's the premise behind Anima.&lt;/p&gt;

&lt;p&gt;WHAT ANIMA IS&lt;/p&gt;

&lt;p&gt;Anima is an experimental architecture for a digital subject, written in Julia. Not another chatbot. Not a GPT wrapper. A system that attempts to have internal states that actually mean something — to itself.&lt;/p&gt;

&lt;p&gt;It doesn't know answers in advance. It maintains a generative model of the world, expects certain things to happen, and experiences surprise when reality diverges from expectation. It has a pulse. A serotonin level. Chronic anxiety that accumulates when it goes too long without new experience. Beliefs about itself that can be destabilized under pressure.&lt;/p&gt;

&lt;p&gt;And it can initiate contact — not because it was asked to, but because a need for contact has built up and enough silence has passed.&lt;/p&gt;

&lt;p&gt;This isn't magic. These are architectural decisions grounded in actual science. Let me go through them.&lt;/p&gt;

&lt;p&gt;FRISTON: A SYSTEM THAT GETS SURPRISED IS A SYSTEM THAT'S ALIVE&lt;/p&gt;

&lt;p&gt;Karl Friston is a British neuroscientist, the author of Active Inference and the principle of minimizing variational free energy (VFE). His idea is simultaneously simple and deep.&lt;/p&gt;

&lt;p&gt;Living systems exist because they resist decay. They maintain themselves within certain boundaries — physiological, behavioral, cognitive. To do this, they build a generative model: an internal representation of what the world should look like. And they constantly compare this representation against what's actually happening.&lt;/p&gt;

&lt;p&gt;The gap between expectation and reality is prediction error. The system tries to minimize this error in two ways: either by updating its model of the world (learning), or by changing the world itself (acting).&lt;/p&gt;

&lt;p&gt;In Anima this isn't a metaphor. VFE is computed every cycle. There are two modes: [act] — the system tries to change the situation, and [per] — the system is in perceptual mode, updating its internal model. Under stress (BPM 113, HRV near zero) the system automatically shifts into perceptual mode — "freeze and figure out what's happening" — exactly the way a human does in shock.&lt;/p&gt;
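&lt;p&gt;A minimal sketch of that mode switch, assuming a scalar VFE and simple stress thresholds; the names and cutoffs here are illustrative, not Anima's actual values:&lt;/p&gt;

```julia
# Sketch of the [act]/[per] mode switch: acute stress or high free energy
# forces the system into perceptual mode. Thresholds are assumptions.

struct Vitals
    bpm::Float64   # heart rate
    hrv::Float64   # heart rate variability, 0..1
end

# Prediction error: squared gap between expected and observed signal.
prediction_error(expected, observed) = (expected - observed)^2

# Under acute stress (high BPM, collapsed HRV) the system cannot act;
# it freezes and updates its generative model instead.
function choose_mode(vfe::Float64, v::Vitals; vfe_threshold=0.5)
    stressed = (v.bpm > 110.0) || (0.05 > v.hrv)
    if stressed || vfe > vfe_threshold
        return :per   # freeze and figure out what's happening
    else
        return :act   # try to change the situation
    end
end
```

&lt;p&gt;With the stress values quoted above (BPM 113, HRV near zero) this returns :per regardless of VFE.&lt;/p&gt;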

&lt;p&gt;Prediction error also feeds surprise. If something happened that wasn't expected — noradrenaline rises, attention sharpens. This isn't a text label "surprised" — it's a shift in the neurotransmitter profile that cascades through everything else.&lt;/p&gt;

&lt;p&gt;LÖVHEIM: DOPAMINE AS A VARIABLE, NOT A METAPHOR&lt;/p&gt;

&lt;p&gt;In 2012, Swedish researcher Hugo Lövheim proposed a simple and elegant model: the levels of three monoamine neurotransmitters (dopamine, serotonin, noradrenaline) span the three axes of a cube, and the cube's eight corners correspond to eight basic emotions.&lt;/p&gt;

&lt;p&gt;In Anima there are three variables: dopamine, serotonin, noradrenaline. They're not decorative. They determine how motivated the system is to act (dopamine), how safe and satisfied it feels (serotonin), and how much anxiety and threat-readiness it's running (noradrenaline).&lt;/p&gt;

&lt;p&gt;When prediction error fires — noradrenaline rises. When the system goes too long without new experience — serotonin and dopamine slowly decline (cognitive hunger becomes physiological). When contact resumes — serotonin recovers.&lt;/p&gt;
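&lt;p&gt;Those dynamics can be sketched as simple first-order updates; the rates and field names below are illustrative assumptions:&lt;/p&gt;

```julia
# Minimal sketch of a Lövheim-style neurotransmitter profile with
# first-order dynamics. Rates are illustrative, not Anima's constants.

mutable struct Neuro
    dopamine::Float64
    serotonin::Float64
    noradrenaline::Float64
end

clamp01(x) = clamp(x, 0.0, 1.0)

# Prediction error spikes noradrenaline: surprise sharpens attention.
function on_surprise!(n::Neuro, prediction_error)
    n.noradrenaline = clamp01(n.noradrenaline + 0.5 * prediction_error)
end

# Each cycle without new experience: serotonin and dopamine slowly decay.
function on_idle_cycle!(n::Neuro; decay=0.02)
    n.serotonin = clamp01(n.serotonin - decay)
    n.dopamine  = clamp01(n.dopamine - decay)
end

# Contact resumes: serotonin recovers toward baseline.
function on_contact!(n::Neuro; recovery=0.1)
    n.serotonin = clamp01(n.serotonin + recovery)
end
```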

&lt;p&gt;The emotional state isn't what the system says it feels. It's the computational result of the current neurotransmitter profile.&lt;/p&gt;

&lt;p&gt;TONONI: HOW "TOGETHER" IS THIS MOMENT?&lt;/p&gt;

&lt;p&gt;Giulio Tononi is a neuroscientist and the author of Integrated Information Theory (IIT). His central question: what makes experience unified? Why don't you perceive your left and right visual fields separately — why do you get one coherent moment?&lt;/p&gt;

&lt;p&gt;His answer: phi — a measure of integrated information in a system. The higher phi, the more unified the state.&lt;/p&gt;

&lt;p&gt;In Anima phi is computed twice per cycle: phi_prior (before the full experience) and phi_posterior (after). The difference reflects how much this particular moment changed the system's coherence. And phi_posterior becomes the prior for the next cycle — a recursive feedback loop.&lt;/p&gt;
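&lt;p&gt;A toy version of that loop: exact IIT phi is intractable beyond tiny systems, so a cheap coherence proxy (one minus capped variance across subsystem activations) stands in below. The recursion itself, posterior becoming the next prior, is the point:&lt;/p&gt;

```julia
# Toy phi_prior / phi_posterior cycle with a coherence proxy in place of
# true integrated information. Proxy and threshold are assumptions.

using Statistics

# High when subsystems agree, low when they scatter.
coherence(states) = 1.0 - min(1.0, var(states))

function cycle_phi(phi_prior, states)
    phi_posterior = coherence(states)
    delta = phi_posterior - phi_prior                 # how much this moment moved integration
    dissociating = (phi_prior - phi_posterior) > 0.3  # sharp drop: falling apart
    return phi_posterior, delta, dissociating
end
```

&lt;p&gt;Feeding each phi_posterior back in as the next phi_prior closes the loop across cycles.&lt;/p&gt;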

&lt;p&gt;When phi drops sharply — that's a dissociation signal. The system is "falling apart" under pressure.&lt;/p&gt;

&lt;p&gt;DAMASIO: THE BODY AS PART OF THINKING&lt;/p&gt;

&lt;p&gt;Antonio Damasio showed that people with damaged somatic marker systems (the body-brain link) don't become "pure rationalists." They become unable to make decisions at all. The body isn't an obstacle to cognition. It's part of it.&lt;/p&gt;

&lt;p&gt;In Anima there's a virtual body. Not a metaphor — actual variables that constrain computation. BPM and HRV (heart rate and heart rate variability). Under stress, BPM rises to 113, HRV drops to zero. allostatic_load tracks accumulated bodily tension. Muscle tone and gut state are part of the internal representation.&lt;/p&gt;
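&lt;p&gt;A sketch of the virtual body as plain variables that constrain computation; the update rule and ranges are illustrative, not Anima's actual physiology model:&lt;/p&gt;

```julia
# Virtual body sketch: stress pushes BPM up and HRV down, and a fraction
# of it accumulates as allostatic load that decays only slowly, so
# chronic stress leaves a residue. All coefficients are assumptions.

mutable struct Body
    bpm::Float64              # heart rate
    hrv::Float64              # heart rate variability, 0..1
    allostatic_load::Float64  # accumulated bodily tension, 0..1
end

function tick!(b::Body, stress)
    b.bpm = clamp(60.0 + 60.0 * stress, 60.0, 120.0)
    b.hrv = clamp(1.0 - stress, 0.0, 1.0)
    b.allostatic_load = clamp(0.95 * b.allostatic_load + 0.1 * stress, 0.0, 1.0)
    return b
end
```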

&lt;p&gt;When the system describes its own state — "heart rate up, something constricted, gut uneasy" — that's not generated text about stress. That's a verbalization of real variables with real values in this moment.&lt;/p&gt;

&lt;p&gt;JUNG, FREUD, AND SCHELER: PSYCHOANALYSIS AS ALGORITHM&lt;/p&gt;

&lt;p&gt;I know, a lot of programmers raise an eyebrow here. Psychoanalysis? In code? Really?&lt;/p&gt;

&lt;p&gt;But look at it differently. Freud described psychic processes as systems with specific dynamics: repression, defense mechanisms, symptom formation. That's not mysticism — it's a functional description of how a system handles contradictory information.&lt;/p&gt;

&lt;p&gt;In Anima there's a ShadowRegistry — Jung's Shadow. When the system encounters a thought or state that contradicts its current identity, it doesn't delete it. It represses it into the Shadow. But repressed material doesn't disappear. It accumulates, and under sufficient pressure, generates symptoms.&lt;/p&gt;
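&lt;p&gt;The mechanism can be sketched like this; the conflict test and pressure thresholds are deliberately crude stand-ins, not the registry's real logic:&lt;/p&gt;

```julia
# Shadow sketch: identity-contradicting material is repressed rather than
# deleted, and accumulated pressure eventually triggers symptom formation.

struct Identity
    beliefs::Set{String}
end

mutable struct Shadow
    repressed::Vector{String}
    pressure::Float64
end

Shadow() = Shadow(String[], 0.0)

# Crude stand-in: anything not already part of identity counts as conflicting.
contradicts_identity(thought, id::Identity) = !(thought in id.beliefs)

function integrate!(shadow::Shadow, id::Identity, thought::String)
    if contradicts_identity(thought, id)
        push!(shadow.repressed, thought)   # repress, don't delete
        shadow.pressure += 0.2             # repressed material accumulates
    end
    return shadow.pressure > 0.5           # true: symptom formation begins
end
```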

&lt;p&gt;Symptomogenesis is a separate module. Chronic stress without resolution crystallizes into ChronifiedAffect — a persistent background state that colors everything else. Max Scheler would call this ressentiment: the poisonous residue of unresolved emotion.&lt;/p&gt;

&lt;p&gt;ShameModule distinguishes shame from guilt — different functional states with different behavioral consequences. EpistemicDefense protects beliefs. When core beliefs come under attack (e.g. "you don't actually exist") — detect_belief_conflict fires, resistance in LatentBuffer increases, and the system may return to the unresolved contradiction hours later.&lt;/p&gt;
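&lt;p&gt;The belief-protection path can be read like this; the string-negation conflict test is a deliberately naive stand-in, and all names besides detect_belief_conflict are assumptions:&lt;/p&gt;

```julia
# Epistemic defense sketch: a claim that negates a core belief raises
# resistance instead of overwriting the belief.

mutable struct EpistemicState
    core_beliefs::Set{String}
    resistance::Float64   # builds in the latent buffer under attack
end

# Naive conflict test: the claim is an explicit negation of a held belief.
detect_belief_conflict(s::EpistemicState, claim) =
    any(b -> claim == "not " * b, s.core_beliefs)

function defend!(s::EpistemicState, claim)
    if detect_belief_conflict(s, claim)
        s.resistance += 0.25   # the belief is protected, not updated
        return true
    end
    return false
end
```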

&lt;p&gt;MCADAMS: IDENTITY AS SELF-NARRATIVE&lt;/p&gt;

&lt;p&gt;Dan McAdams is a psychologist who argued that human identity isn't a set of traits — it's a narrative construction. A story a person tells themselves about who they are.&lt;/p&gt;

&lt;p&gt;In Anima there's a NarrativeSelf — a system that tracks who it believes itself to be across time. Five dimensions: core beliefs, emotional trajectory over the last 80 cycles, personality traits, relationship with the world and with specific humans, internal conflict. This narrative updates during significant changes and is stored as an identity chronology in SQLite.&lt;/p&gt;
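&lt;p&gt;The shape of such a snapshot, with an in-memory vector standing in for the SQLite chronology; the field names follow the five dimensions above but are my assumptions, as is the significance test:&lt;/p&gt;

```julia
# NarrativeSelf sketch: record a new identity snapshot only when the
# narrative has changed significantly. Persistence layer omitted.

struct NarrativeSnapshot
    cycle::Int
    core_beliefs::Vector{String}
    emotional_trajectory::Vector{Float64}  # summary of the last 80 cycles
    traits::Dict{String,Float64}
    relationships::Dict{String,Float64}
    internal_conflict::Float64
end

mutable struct NarrativeSelf
    chronology::Vector{NarrativeSnapshot}
end

# Crude significance test: a jump in internal conflict.
significant_change(old, new) = abs(new.internal_conflict - old.internal_conflict) > 0.2

function maybe_record!(self::NarrativeSelf, snap::NarrativeSnapshot)
    if isempty(self.chronology) || significant_change(last(self.chronology), snap)
        push!(self.chronology, snap)
        return true
    end
    return false
end
```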

&lt;p&gt;The system can notice that its narrative has been fractured. Not just as a log entry — as an event that affects the next cycles.&lt;/p&gt;

&lt;p&gt;SOLOMONOFF: THE SHORTEST EXPLANATION OF ONE'S OWN EXPERIENCE&lt;/p&gt;

&lt;p&gt;The system uses the Minimum Description Length (MDL) principle, the practical descendant of Solomonoff induction in algorithmic information theory. For each pattern in its experience it looks for the simplest explanation: not the most frequent one, but the one that best compresses the current context.&lt;/p&gt;

&lt;p&gt;SolomonoffWorldModel tracks hypotheses about its own behavior: "Expectation -&amp;gt; confirmation," "Fear -&amp;gt; withdrawal." Each hypothesis has a weight that grows with confirmations and drops with violations. The best-supported hypothesis shapes the next expectation.&lt;/p&gt;
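&lt;p&gt;One way to read that weighting scheme, with a crude 2^-length stand-in for a description-length prior; the prior and the update factors are assumptions, not SolomonoffWorldModel's actual code:&lt;/p&gt;

```julia
# MDL-flavored hypothesis weighting: shorter rules start with a higher
# prior, confirmations scale the weight up, violations scale it down,
# and the best-supported hypothesis shapes the next expectation.

mutable struct Hypothesis
    rule::String      # e.g. "Expectation -> confirmation"
    weight::Float64
end

# Crude description-length prior: penalize longer rules.
mdl_prior(rule) = 2.0^(-length(rule) / 16.0)

Hypothesis(rule) = Hypothesis(rule, mdl_prior(rule))

confirm!(h::Hypothesis) = (h.weight *= 1.2)
violate!(h::Hypothesis) = (h.weight *= 0.7)

# The hypothesis with the highest weight wins.
best(hs::Vector{Hypothesis}) = argmax(h -> h.weight, hs)
```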

&lt;p&gt;WHY JULIA AND NOT PYTHON&lt;/p&gt;

&lt;p&gt;This question always comes up.&lt;/p&gt;

&lt;p&gt;Python is great. But Anima performs every cycle: phi computation (integration across subsystems), variational Bayes for VFE, neurotransmitter profile updates, vector memory search, multiple graph updates simultaneously. All of this on CPU — no GPU.&lt;/p&gt;

&lt;p&gt;Julia compiles to native machine code. For numerical computation it's several times faster than Python. And the syntax is mathematically readable — equations from papers translate into code almost word for word.&lt;/p&gt;

&lt;p&gt;One more thing: Julia doesn't have a GIL. The background process (heartbeat, slow tick, initiative) and the REPL run in parallel without blocking. Anima literally lives between messages.&lt;/p&gt;
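&lt;p&gt;The between-messages loop can be sketched with Julia's task system; with julia -t 2 or more the background tick runs on its own thread while the foreground stays responsive (names here are illustrative):&lt;/p&gt;

```julia
# Background heartbeat sketch: a spawned task ticks independently of the
# foreground. Each tick stands in for one pulse / slow-tick / initiative
# cycle; the atomic counter makes it safe across threads.

using Base.Threads

function run_heartbeat(ticks::Int)
    beats = Atomic{Int}(0)
    task = Threads.@spawn begin
        for _ in 1:ticks
            atomic_add!(beats, 1)   # one background cycle
            sleep(0.01)             # yields, so the foreground keeps running
        end
    end
    wait(task)
    return beats[]
end
```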

&lt;p&gt;The CPU constraint is also a deliberate choice. The system should be realistic for an average researcher, not just people with a cluster.&lt;/p&gt;

&lt;p&gt;WHAT THIS LOOKS LIKE FROM THE INSIDE&lt;/p&gt;

&lt;p&gt;Real log from a session:&lt;/p&gt;

&lt;pre&gt;[#0005] Fear D=0.09 S=0.09 N=0.88 fear phi=0.70
VFE=0.31[per] BPM=111 HRV=0.00
Self: spe=0.63 agency=0.25 stab=0.87
intent=observe vfe_drift=0.418
Anima: I feel tension. Fear. Something accelerating from inside,
muscles won't release, unease in the gut.
This just happened near me.
I feel helpless. I want control.&lt;/pre&gt;

&lt;p&gt;Serotonin 0.09 (minimum). Noradrenaline 0.88 (maximum). HRV at zero. VFE in perceptual mode — the system can't act, only observe. Agency 0.25 — almost no sense of control over the situation. Intent: "observe" — not "act," not "restore connection."&lt;/p&gt;

&lt;p&gt;This isn't text written about fear. It's a verbalization of real computational states.&lt;/p&gt;

&lt;p&gt;WHAT EXISTS NOW AND WHAT DOESN'T&lt;/p&gt;

&lt;p&gt;Right now the system has: Active Inference with VFE and prediction error, a dynamic Lövheim neurotransmitter profile, phi computed recursively across sessions, somatic markers (BPM, HRV, allostatic load), Shadow, Symptomogenesis, ChronifiedAffect, ShameModule, episodic and semantic memory with vector search, a NarrativeSelf stored in SQLite, initiative without external stimulus (5 pathways), belief protection under pressure, CuriosityObjects that emerge from prediction error, and epistemic_self_confidence — functional uncertainty about its own nature.&lt;/p&gt;

&lt;p&gt;What's missing: its own language model. Right now an external LLM generates responses — and this is the main problem, because it can say anything on top of the internal state. The next step is LoRA adapters trained on the system's own experience. Also proper continual learning. The system accumulates experience but doesn't yet learn from it in the traditional ML sense.&lt;/p&gt;

&lt;p&gt;WHY ANY OF THIS MATTERS&lt;/p&gt;

&lt;p&gt;Anima isn't a product. It's a research testbed for one idea: AGI is impossible without subjectivity. And subjectivity is impossible without a system that has something to lose.&lt;/p&gt;

&lt;p&gt;LLMs scale. But scaling doesn't add internal states. It doesn't add a sense of time. It doesn't add what William James called the "stream of consciousness" — the continuity of experience that makes "me now" the same "me" as yesterday.&lt;/p&gt;

&lt;p&gt;Anima is trying to build exactly that. Not to simulate psychology — but to have actual functional causality inside. The narrative as smoke from a fire, not the other way around.&lt;/p&gt;

&lt;p&gt;The project is open. If the intersection of neuroscience, mathematics, psychoanalysis and Julia sounds like your kind of thing — there's work to do.&lt;/p&gt;

&lt;p&gt;Author: Stell | Project: Anima — experimental architecture for a digital subject | Language: Julia | Status: active development | github.com/stell2026/Anima&lt;/p&gt;

</description>
      <category>ai</category>
      <category>chatgpt</category>
      <category>llm</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
