DEV Community

ORCHESTRATE

Active Inference, The Learn Arc — Part 14: Session §1.3 — Why One Theory, and What the Book Covers


Series: The Learn Arc — 50 posts teaching Active Inference through a live BEAM-native workbench. ← Part 13: Session 1.2. This is Part 14.

The session

Chapter 1, §3. Session title: Why one theory — and what this book covers. Route: /learn/session/1/s3_why_one_theory.

The third and final session of Chapter 1 is a scene-setter. Session 1.1 stated the claim. Session 1.2 let you watch the claim happen. Session 1.3 asks why anyone would bother defending this claim for 230 more pages, and then lays out what the next nine chapters actually do.

By the end of this session you know the shape of the mountain you're about to climb.

Three reasons "unified theory" is worth defending

Chapter 1 closes with three reasons to take the unified-theory framing seriously.

1. Parsimony. One functional F (Chapter 2) generates perception, action, and learning by varying what you minimize over. If three separate modules could do the same work, those modules are extra structure you would have to justify. The theory claims you don't need them.
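As a preview of that functional (standard notation from the wider Active Inference literature; the book derives its exact form in Chapter 2), F is variational free energy:

```latex
F[q] \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
      \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right]}_{\text{inference error}}
      \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
```

Perception minimizes F over the approximate posterior q; action changes the observations o; learning minimizes it over model parameters. One quantity, three readings.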

2. Predictiveness. The theory makes specific quantitative predictions — not "the brain does something useful," but "the policy-selection precision is modulated by dopamine with these specific effects on behavior." Chapter 5 turns those predictions into the ACh/NA/DA/5-HT table; Chapter 9 turns them into fits to human behavioral data.

3. Reach. Other theories explain perception (predictive coding) or action (reinforcement learning) or learning (Bayesian inference). Active Inference claims them all, plus development and evolution (Chapter 10). The reach is the reason the book exists.

You don't have to accept any of these now. You do have to understand that the book's job is to defend them, and Session 1.3 explains how the next nine chapters attack the job.

The map

The session text renders the book's structure:

Theory (Chapters 2–5) — the machinery.

  • Ch 2 (Low Road): variational free energy from Bayes.
  • Ch 3 (High Road): expected free energy = risk + ambiguity.
  • Ch 4 (Generative models): A, B, C, D.
  • Ch 5 (Message passing): the cortex as a factor graph.
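The Ch 3 bullet's "risk + ambiguity" split has a standard written form (notation assumed from the wider literature; the Chapter 3 derivation supplies the details):

```latex
G(\pi) \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(o \mid \pi)\,\|\,p(o)\,\right]}_{\text{risk}}
       \;+\; \underbrace{\mathbb{E}_{q(s \mid \pi)}\!\left[\mathcal{H}\!\left[\,p(o \mid s)\,\right]\right]}_{\text{ambiguity}}
```

Risk scores how far a policy's predicted outcomes sit from the preferred ones; ambiguity scores how uninformative the expected observations are about the hidden states.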

Practice (Chapters 6–8) — build and run.

  • Ch 6 (Recipe): the six-question design template.
  • Ch 7 (Discrete time): POMDPs, Dirichlet learning, hierarchy.
  • Ch 8 (Continuous time): generalised coordinates, action on sensors.

Application (Chapters 9–10) — reach and limits.

  • Ch 9 (Model-based data analysis): fit + compare models.
  • Ch 10 (Unified theory): the stretch and the bends.

If you've been reading Parts 1–11, you've already done a chapter-level sweep of every one of these. Session 1.3 is where a first-time reader gets the map.

What Session 1.3 does differently

The first two sessions of Chapter 1 are about the claim. Session 1.3 is about the book. That matters because a lot of readers hit Chapter 2, see KL divergences, and bounce. Session 1.3 exists so you know — before you hit the math — what the math is for.

The session's quiz doesn't test you on math. It tests you on structure:

Q: The Parr/Pezzulo/Friston book organises its argument into three bands. Which is the correct order, bottom-up?

  • ☐ Application → Theory → Practice
  • ☐ Theory → Practice → Application ✓
  • ☐ Practice → Theory → Application
  • ☐ Theory → Application → Practice

Why: Chapters 2–5 develop the machinery; Chapters 6–8 build and run; Chapters 9–10 apply to data and test the unified-theory claim. You want the machinery before the practice, and the practice before the stretch.

Why this maps to the Workbench's architecture

The book's three bands map to the Workbench's three surfaces:

  • Theory band → the equation registry at /equations. Every theoretical claim has an equation record with verification status.
  • Practice band → the cookbook at /cookbook and the Builder at /builder/new. 50 recipes, a canvas to compose new ones.
  • Application band → Studio at /studio and Glass at /glass. Tracked long-running agents and per-signal provenance for auditing results.

When you sit in Session 1.3, those three Workbench surfaces map 1-to-1 to the three chapter bands. The tool's information architecture is the book's information architecture.

The concepts this session surfaces

Four concepts in the glossary strip, picked to foreshadow the next chapter:

  • variational free energy — the quantity Chapter 2 derives.
  • expected free energy — the quantity Chapter 3 derives.
  • POMDP — the structure Chapter 7 ships.
  • predictive coding — the motif Chapter 5 names.

Click any chip, get the path-tiered glossary entry. Your first encounter with each term happens in Session 1.3, the deep dives happen later.

The linked Workbench stops

Session 1.3 has cross-references to two Workbench surfaces (not labs — this session is orientation-heavy):

  • /equations with the by_family: :vfe filter applied — Chapter 2's equations.
  • /models — the model-family taxonomy, a parallel structure to the book's three bands.

Clicking either opens a tab where you can browse while the session stays open. You come back to the session, take the quiz, move on.

What to carry forward

When you close Session 1.3 and open Chapter 2, carry this: the book is defending a specific claim, and every subsequent chapter is one of the three tools (develop, build, or apply) you need to evaluate the claim. If Chapter 2's algebra feels dense, remember that by Chapter 7 you'll be watching that algebra drive a real Jido agent through a maze. The math is earning its keep.

Run it yourself

The mental move

Most books hit you with the first hard chapter before they've told you what they're doing. Session 1.3 is the book telling you what it's doing. Don't skip it. It's the difference between reading Chapter 2 fogged and reading it oriented.

Next

Part 15: Session §2.1 — Inference as Bayes' rule. The first session of Chapter 2. We return to the most familiar identity in statistics — Bayes' rule — and set up the hole (intractable evidence) that free energy will fill.
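For orientation ahead of Part 15, here is the identity and its hole in the standard discrete-state form:

```latex
p(s \mid o) \;=\; \frac{p(o \mid s)\, p(s)}{p(o)},
\qquad
p(o) \;=\; \sum_{s} p(o \mid s)\, p(s)
```

The sum in the denominator runs over every hidden state; for realistically large state spaces that sum is intractable, and that is the gap variational free energy is built to fill.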


⭐ Repo: github.com/TMDLRG/TheORCHESTRATEActiveInferenceWorkbench · MIT license

📖 Active Inference, Parr, Pezzulo, Friston — MIT Press 2022, CC BY-NC-ND: mitpress.mit.edu/9780262045353/active-inference

Part 13: Session 1.2 · Part 14: Session 1.3 (this post) · Part 15: Session 2.1 → coming soon
