ORCHESTRATE

Active Inference, The Learn Arc — Part 6: Chapter 5 — The Cortex as a Factor Graph, Neuromodulators as Precision Knobs

Chapter 5 — Message Passing and Neurobiology in the Workbench

Series: The Learn Arc — 50 posts teaching Active Inference through a live BEAM-native workbench. ← Part 5: Generative Models. This is Part 6.

The hero line

The Workbench's canonical metadata renders Chapter 5 as:

The cortex as a factor graph — and the neuromodulators as precision knobs.

This is the chapter that converts the abstract math of Chapters 1–4 into claims about actual neural circuits. No hand-waving about neurons-encode-probabilities; actual correspondence between terms in Eq. 4.13 and anatomical features of cortex.

Beat 1: The cortex is a factor graph

Every belief update in Chapter 4 — Eq. 4.13's sum of three log-terms — is a message passing computation. Observations arrive; the likelihood term sends a message; forward and backward temporal messages arrive; the posterior updates.

Chapter 5 makes one claim: the cortex is a literal, physical implementation of this message passing. Every cortical column is a factor node. Every white-matter fibre is a message. Every ascending/descending pathway carries one of the three terms in Eq. 4.13.
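To make the three-term update concrete, here is a minimal NumPy sketch of a discrete belief update built from three log-messages (likelihood, forward, backward). The matrices and numbers are invented toy values, not an example from the book, and this is plain Python rather than the Workbench's BEAM-native code:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical 2-state, 2-observation model (toy values).
A = np.array([[0.9, 0.1],   # likelihood: P(o | s)
              [0.1, 0.9]])
B = np.array([[0.8, 0.2],   # transition: P(s_t | s_{t-1})
              [0.2, 0.8]])

o = np.array([1.0, 0.0])        # observation at time t (one-hot)
q_prev = np.array([0.5, 0.5])   # posterior over s_{t-1}
q_next = np.array([0.5, 0.5])   # posterior over s_{t+1}

eps = 1e-16
log_likelihood = np.log(A.T @ o + eps)      # message from the observation
log_forward    = np.log(B @ q_prev + eps)   # message from the past
log_backward   = np.log(B.T @ q_next + eps) # message from the future

q = softmax(log_likelihood + log_forward + log_backward)
```

Each of the three log-terms is exactly one message arriving at one factor node; the posterior is their normalised sum.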

Beat 2: Predictive coding falls out

Take Eq. 4.19 (Chapter 4's continuous-time form) and expand the Gaussian approximation. The residual between predicted observation and actual observation — the prediction error — is what gets propagated up the hierarchy. One level's residual becomes the next level's observation. The brain's hierarchy of cortical areas is a gradient-descent stack on the same free-energy functional.

You can watch it run:

Recipe — predictive-coding-two-level-pass

/cookbook/predictive-coding-two-level-pass runs a two-level hierarchical agent. The top level infers a slow latent from the bottom level's residuals. The bottom level's predictions get corrected by the top level's prior. You watch the residual at each level decay as the two levels agree.
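The recipe's core loop can be sketched in a few lines: two scalar levels, each doing gradient descent on its own prediction error. This is a hypothetical linear toy with invented parameters, not the recipe's actual implementation:

```python
# Minimal two-level linear predictive-coding sketch (toy parameters).
# Each level descends the gradient of the summed squared prediction errors.
def two_level_pass(obs, mu1=0.0, mu2=0.0, W=1.0, lr=0.1, steps=50):
    residuals = []
    for _ in range(steps):
        eps1 = obs - mu1        # bottom-level error: data vs prediction
        eps2 = mu1 - W * mu2    # top-level error: mu1 vs top-down prediction
        mu1 += lr * (eps1 - eps2)   # mu1 is pulled by both errors
        mu2 += lr * (W * eps2)      # mu2 descends on its own error
        residuals.append(abs(eps1) + abs(eps2))
    return mu1, mu2, residuals

mu1, mu2, res = two_level_pass(obs=2.0)
# The total residual decays as the two levels come to agree.
```

The residual one level fails to explain is exactly what the next level sees, which is the "one level's residual becomes the next level's observation" claim in miniature.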

Beat 3: Precision = confidence = gain

In Eq. 4.13, each of the three log-terms is weighted by a precision — the inverse variance of its Gaussian. Chapter 5 maps these precisions onto neuromodulators:

| Neuromodulator | Function in the brain | In Eq. 4.13 |
| --- | --- | --- |
| Acetylcholine (ACh) | Sensory gain in cortex | Precision on the observation (likelihood) term |
| Noradrenaline (NA) | Arousal, novelty | Precision on the temporal transition terms |
| Dopamine (DA) | Policy selection, salience | Precision on the policy softmax (Eq. 4.14) |
| Serotonin (5-HT) | Patience, long-horizon valuation | Precision on the preference (C matrix) term |

Every correspondence is testable. You can simulate any of them by turning a single precision parameter up or down:

Recipe — precision gates error

/cookbook/predictive-coding-precision-gates-error — same world, same agent, two precision settings. One agent trusts its sensors too much (ignores its own good model). One trusts its model too much (fails to update on surprising observations). Both pathological, both diagnosable.
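The two pathologies are easy to reproduce in a toy precision-weighted Gaussian update, where the sensory precision plays the ACh role. All numbers here are invented for illustration; this is a sketch of the idea, not the recipe's code:

```python
def update(belief, obs, pi_sensory, pi_prior):
    # Precision-weighted Gaussian belief update (Kalman-style sketch):
    # the posterior mean is the precision-weighted average of prior and data.
    return (pi_prior * belief + pi_sensory * obs) / (pi_prior + pi_sensory)

obs_stream = [1.0, 1.0, 1.0, 5.0]   # a surprising final observation

sensor_trusting = 0.0   # pathologically high sensory precision
model_trusting = 0.0    # pathologically high prior precision
for o in obs_stream:
    sensor_trusting = update(sensor_trusting, o, pi_sensory=10.0, pi_prior=1.0)
    model_trusting  = update(model_trusting,  o, pi_sensory=0.1,  pi_prior=10.0)
# sensor_trusting lurches toward 5.0; model_trusting barely moves.
```

One agent is whipsawed by every observation; the other fails to update on the surprising one. Both failure modes are visible in a single scalar knob.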

Beat 4: This is the map, not the territory

Chapter 5 is careful: cortex-as-factor-graph is a claim, not a theorem. It's the tidiest correspondence anyone has proposed between variational inference and biological neurons, but it's not the only fit.

What the Workbench can do — and a blog post cannot — is let you break the correspondence and see what happens. Turn up NA precision in a Dirichlet learner and watch it over-weight recent transitions. Drop DA precision and watch policy selection go chaotic. The predictive-coding-* recipes let you run exactly these experiments.

The four sessions

Chapter 5 has four 8–15 min sessions under /learn/chapter/5:

  1. Factor graphs — the cortical column as a factor node; messages as synaptic fibres.
  2. Predictive coding — Eq. 4.19 expanded; residuals climb, predictions descend.
  3. Neuromodulation — the ACh/NA/DA/5-HT ↔ precision table above, with worked examples.
  4. Brain map — the book's Figure 5.5 annotated; which cortical area plays which factor role.

Each session carries a path-specific narration (kid uses "radio station volume knobs" for precision; equation path writes out the Kalman-filter correspondence; derivation gives the Laplace expansion).

Why this matters going forward

Chapter 5 is the bridge chapter. Chapters 6–10 go back to engineering — how to build agents — but every engineer building an Active Inference agent from now on has a second audience: neuroscientists checking whether the design matches the brain.

The Workbench embraces that. The Glass Engine tags every emitted message with the equation that produced it, so a neuroscientist reading an agent's trace can map each signal onto a putative cortical pathway. We'll use that heavily in Chapter 9 — Part 10 of this series.
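To illustrate what equation-tagged tracing buys you, here is a hypothetical sketch of such a message record. The real Glass Engine is BEAM-native; every name and field here is invented for illustration:

```python
from dataclasses import dataclass, field
import time

@dataclass
class TracedMessage:
    source: str        # emitting factor node, e.g. "likelihood"
    target: str        # receiving node, e.g. "posterior.s_t"
    equation: str      # which equation produced this message
    payload: list      # the message contents
    timestamp: float = field(default_factory=time.time)

msg = TracedMessage(
    source="likelihood",
    target="posterior.s_t",
    equation="Eq. 4.13, term 1 (log-likelihood)",
    payload=[0.9, 0.1],
)
# A trace of such records lets a reader map each signal onto a
# putative pathway: likelihood messages ascend, prior messages descend.
```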

Run it yourself

Both recipes from this post are in the cookbook: /cookbook/predictive-coding-two-level-pass and /cookbook/predictive-coding-precision-gates-error.

The mental move

The previous four chapters taught you the math. Chapter 5 is where that math starts predicting things about brains. If your agent's behavior under a given precision setting matches the behavior of a human with the corresponding neuromodulator disrupted, the theory is doing something right. That's an audacious claim. Chapter 5 is where the book commits to it.

Next

Part 7: Chapter 6 — A Recipe for Designing Active Inference Models. Pick hidden states, pick observations, pick actions, fill A/B/C/D, validate, run, inspect. The Workbench's recipe card format is derived directly from this chapter's structure.
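The pick-states, pick-observations, pick-actions, fill-A/B/C/D loop can be previewed as a matrix skeleton. Sizes and values here are a hypothetical toy, not a model from the book:

```python
import numpy as np

# Hedged sketch of the design recipe in matrix form (toy dimensions).
n_states, n_obs, n_actions = 2, 2, 2

A = np.array([[0.9, 0.1],        # A: P(o | s) — observation likelihood
              [0.1, 0.9]])
B = np.stack([np.eye(n_states)   # B: P(s' | s, a) — one transition per action
              for _ in range(n_actions)])
C = np.array([1.0, 0.0])         # C: log-preferences over observations
D = np.array([0.5, 0.5])         # D: prior over initial states

# Validate: every conditional distribution must normalise.
assert np.allclose(A.sum(axis=0), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)
assert np.allclose(D.sum(), 1.0)
```

The validate step is where most design mistakes surface: a column of A or B that doesn't sum to one means a likelihood or transition that leaks probability.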


⭐ Repo: github.com/TMDLRG/TheORCHESTRATEActiveInferenceWorkbench · MIT license

📖 Active Inference, Parr, Pezzulo, Friston — MIT Press 2022, CC BY-NC-ND: mitpress.mit.edu/9780262045353/active-inference

Part 5: Generative Models · Part 6: Message Passing (this post) · Part 7: A Recipe for Designing → coming soon
