Series: The Learn Arc — 50 posts teaching Active Inference through a live BEAM-native workbench. ← Part 10: Model-Based Data Analysis. This is Part 11.
The hero line
Where the theory goes — and where it bends.
Chapter 10 is the book's closer. It doesn't add new math. It does three things: stretch the theory across problems Chapters 1–9 didn't cover, stress-test it by exposing where it creaks, and sketch what comes next.
If you've been following along, this is the chapter where it all adds up — and where you learn what "adding up" actually commits you to.
The stretch
The book's closing move is to argue that the single variational objective from Chapter 2 — minimize free energy — unifies:
- Perception (Chapter 2): minimize F with respect to Q.
- Action (Chapter 3): minimize expected F with respect to the policy.
- Learning (Chapters 4, 7): minimize F with respect to the parameters of A, B, C, D.
- Development (Chapter 10): minimize F with respect to the architecture itself — a slower gradient on what generative model to have.
- Evolution (Chapter 10): the slowest gradient, over the space of possible agents.
Five gradients on one functional. Different timescales: perception in milliseconds, action in seconds, learning in hours, development in years, evolution in generations. One rule.
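The first three gradients can be made concrete in a few lines. This is a minimal sketch in Python with illustrative numbers (the Workbench itself is Elixir), assuming a two-state, two-outcome discrete model: the same free energy functional, minimized once over the beliefs Q (perception) and once over the likelihood parameters A (learning, via Dirichlet counts). Nothing here is from the book's own code; it is just the standard discrete-state form.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def free_energy(Q, A, D, o):
    """Variational free energy for one observation:
    F = E_Q[ln Q(s) - ln P(o|s) - ln P(s)]."""
    return float(np.sum(Q * (np.log(Q) - np.log(A[o]) - np.log(D))))

# Toy model (illustrative numbers, not from the book).
A = np.array([[0.9, 0.2],   # P(o=0 | s)
              [0.1, 0.8]])  # P(o=1 | s)
D = np.array([0.5, 0.5])    # prior over hidden states
o = 1                       # observed outcome

# Perception: the exact minimizer of F over Q is the posterior.
Q = softmax(np.log(A[o]) + np.log(D))

# Learning: minimizing the same functional over A's parameters
# reduces to incrementing Dirichlet counts for (o, inferred s).
a_counts = np.ones_like(A)   # flat Dirichlet prior
a_counts[o] += Q             # accumulate evidence from this trial
A_learned = a_counts / a_counts.sum(axis=0)
```

Action (minimizing expected F over policies) follows the same pattern one level up; the point of the sketch is only that the objective never changes, just the variable being optimized.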
Whether you buy this argument is your call. What's worth noting is that the Workbench implements the first three (perception, action, learning) and leaves the development + evolution story honestly open. No gesturing at future work disguised as present capability.
The bends
Chapter 10 is admirably honest about where the theory gets wobbly.
1. Consciousness. The theory is silent. It doesn't predict when a generative model becomes phenomenally aware, and it can't, without more machinery. The authors acknowledge this and suggest global workspace + Active Inference may yield a more complete picture.
2. Multimodal composition. When an agent has vision + audio + touch + proprioception, how are the A matrices factored? The book gestures at factorization but doesn't solve it cleanly. The Workbench's multimodal-* cookbook recipes (cookbook/multimodal-cue-combination, cookbook/multimodal-conflicting-cues, cookbook/multimodal-two-channel-frog) explore the pragmatic answers.
3. Active Inference vs RL at scale. The book's examples are small enough that EFE is tractable. Real domains (games, robotics, language) are not. How does Active Inference scale? An open question, and the honest answer in Chapter 10 is: nobody knows yet.
4. Development. The "optimize the architecture" gradient is elegant but underdetermined. Which architectural moves improve free energy? The book waves at meta-learning; a real mechanism is future work.
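The factorization question in point 2 can be made concrete. The pragmatic answer the multimodal-* recipes rely on is conditional independence: one A matrix per modality, all conditioned on the same hidden state, so per-modality log-likelihoods simply add during inference. A minimal Python sketch with made-up numbers (the recipes themselves are Elixir):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# One likelihood matrix per modality, each conditioned on the same
# hidden state: P(o_vis, o_aud | s) = P(o_vis | s) * P(o_aud | s).
A_vision = np.array([[0.8, 0.3],
                     [0.2, 0.7]])
A_audio  = np.array([[0.6, 0.1],
                     [0.4, 0.9]])
D = np.array([0.5, 0.5])   # prior over hidden states

o_vis, o_aud = 1, 1        # one outcome per channel

# Under the factorization, the log-likelihoods add in the posterior.
Q = softmax(np.log(A_vision[o_vis]) + np.log(A_audio[o_aud]) + np.log(D))
```

What this does not settle, and what the book leaves open, is when that independence assumption is licensed; the conflicting-cues recipe is exactly a probe of where it breaks.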
The studio of unsolved problems
One thing the Workbench is uniquely suited for: experiments on the open problems. Every cookbook recipe is a ~200-line Jido agent spec. Every running agent is tracked via Mnesia and auditable via Glass.
From Studio's New run page (/studio/new) you can:
- Attach an existing tracked agent to a new world — test whether a model trained on task A generalizes to task B (the out-of-distribution question).
- Instantiate from spec — run multiple instances of the same architecture with different priors and compare their F trajectories (a mini model-comparison experiment).
- Build from recipe — a cookbook-originated agent is lifecycle-tracked, so you can resume training, rollback, archive, restore.
Combined with the Glass per-equation provenance, every one of these becomes a reproducible experiment. The Workbench is a studio for the open problems. Not a demo rig.
The feature honesty
Chapter 10 models the disposition the Workbench tries to embody. Every feature has a state: works, partial, scaffold. The /guide/features page doesn't hide behind marketing copy.
This is how you build a tool for a theory that is simultaneously powerful and incomplete. You say what's solid. You say what's open. You let people who want to experiment pick up the open parts and push them forward.
The three sessions
Chapter 10 has three sessions under /learn/chapter/10:
- Perception, action, learning under one objective — the stretch argument, summarized with cross-refs to Chapters 2, 3, 4, 7.
- Open problems — consciousness, multimodal composition, scale, development.
- Paths forward — where Active Inference connects to adjacent fields (RL, predictive processing, computational psychiatry) and what the next decade of work looks like.
What the first 11 posts gave you
The theory spine is complete:
- Parts 1–5: Preface + Chapters 1–4. The map, the agent loop, the low road, the high road, the generative model.
- Parts 6–8: Chapters 5–7. Neuromodulation, design recipes, the full POMDP + Dirichlet + hierarchy stack.
- Parts 9–10: Chapters 8–9. Continuous time, and fitting models to data.
- Part 11 (this): Chapter 10. Where the theory goes, where it bends.
You can now read any paper in the Active Inference literature and track what the authors are doing. You can open any cookbook recipe and know what the math is saying. You can drop agents on the Builder canvas and trust the validation. You can open Studio and run tracked experiments.
This is the base. Next comes the depth.
The next 39 posts
Parts 12–49 go back to the beginning and deep-dive one 8–15 minute workshop session at a time. 38 sessions × 1 post each + 1 finale (Part 50) = the remaining 39.
The chapter-overview posts (1–11) paid for this. You'll have all the vocabulary, all the equations, all the surfaces. Each session post will drop into a specific workshop — the belief-update equation walked line by line, the softmax temperature tuned in three regimes, the Dirichlet prior's strength explored across five orders of magnitude — and tie it back to exactly one cookbook recipe you can run while reading.
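As a taste of the session-post format, the softmax temperature exercise mentioned above fits in a few lines. This is an illustrative Python sketch, not the Workbench's Elixir code: gamma is the standard precision (inverse temperature) on expected free energy G, and the three regimes are the ones the session will tune.

```python
import numpy as np

def policy_probs(G, gamma):
    """P(pi) = softmax(-gamma * G): gamma is the precision
    (inverse temperature) over expected free energy G."""
    x = -gamma * G
    e = np.exp(x - x.max())
    return e / e.sum()

G = np.array([1.0, 1.2, 3.0])  # illustrative EFE per policy (lower = better)

p_explore  = policy_probs(G, 0.1)   # near-uniform: exploration
p_balanced = policy_probs(G, 1.0)   # graded preference
p_greedy   = policy_probs(G, 10.0)  # near-deterministic: exploitation
```

Same G, three very different agents; the session post will walk the regimes against a cookbook recipe you can run while reading.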
Run it yourself
- /learn/chapter/10 — three sessions.
- /guide/features — honest status for every feature.
- /studio/new — start an experiment.
- /cookbook — 50 recipes across every chapter's territory.
- /learn — the whole curriculum index.
The mental move
Ten chapters ago, Active Inference was "a theory in a book." Ten chapters on, it is a computational substrate you can prototype, fit, and extend. The theory didn't get simpler — but the tooling did. That's the entire premise of the Workbench, and Chapter 10 is where it comes due.
Next
Part 12: Chapter 1, §1 — What Active Inference claims. Back to the beginning, in depth. We'll open the first session of the curriculum and walk through its four-tier narration, its attributed book excerpt, its quiz, and its linked lab. The depth arc starts here.
⭐ Repo: github.com/TMDLRG/TheORCHESTRATEActiveInferenceWorkbench · MIT license
📖 Active Inference, Parr, Pezzulo, Friston — MIT Press 2022, CC BY-NC-ND: mitpress.mit.edu/9780262045353/active-inference
← Part 10: Model-Based Data Analysis · Part 11: Unified Theory (this post) · Part 12: Ch 1 §1 Deep-dive → coming soon