Series: The Learn Arc — 50 posts teaching Active Inference through a live BEAM-native workbench. ← Part 27: Session 4.5. This is Part 28.
The session
Chapter 5, §1. Session title: Factor graphs. Route: /learn/session/5/s1_factor_graphs.
Chapter 5 is the book's neuroscience chapter. Session 5.1 makes its opening claim: the cortex is a factor graph that physically implements the message-passing computations from Chapter 4. Every cortical column is a factor node. Every white-matter tract is an edge that carries a message.
What a factor graph is
A factor graph is a bipartite graph with two kinds of node:
- Variable nodes — one per random variable in your model (e.g., s_τ).
- Factor nodes — one per factor in your joint distribution (e.g., P(o_τ | s_τ), P(s_{τ+1} | s_τ, π)).
Edges connect each factor to the variables it involves. To compute a marginal (like Q(s_τ)), you pass messages along the edges until they converge.
Eq. 4.13 is literally a message-passing computation on such a graph. The three log-terms you've been summing — observation likelihood, forward transition, backward transition — are the three messages arriving at state node s_τ.
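A minimal NumPy sketch of that three-message sum, under stated assumptions: the state space is two-dimensional and the message values are made up for illustration (this follows the book's Eq. 4.13 shape, not the Workbench's actual API).

```python
import numpy as np

def softmax(x):
    # Normalise a vector of log-values into a probability distribution.
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical 2-state example: the three log-messages arriving at s_tau.
log_likelihood = np.log(np.array([0.7, 0.3]))  # from factor P(o_tau | s_tau)
log_forward    = np.log(np.array([0.6, 0.4]))  # from factor P(s_tau | s_{tau-1}, pi)
log_backward   = np.log(np.array([0.5, 0.5]))  # from factor P(s_{tau+1} | s_tau, pi)

# Eq. 4.13 shape: sum the log-messages at the variable node, then normalise.
q_s = softmax(log_likelihood + log_forward + log_backward)
print(q_s)  # posterior belief Q(s_tau); components sum to 1
```

Because the backward message is flat here, the posterior is driven by the likelihood and forward messages — exactly the "evidence plus prior" combination the factor-graph picture promises.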
The cortical claim
Here's where Chapter 5 gets audacious: the six-layer cortical column is the biological implementation of a factor node.
- Upper layers (II/III) — carry messages up the hierarchy (posterior estimates).
- Lower layers (V/VI) — carry messages down (priors).
- Layer IV — receives bottom-up evidence (likelihood).
- White-matter tracts — the edges connecting columns across the cortex.
Every cortical area is a factor in the brain's generative model. Every signal between areas is a message. The whole cortex is one enormous factor graph doing variational inference on the latent causes of sensory input.
Why the claim is testable
Most "brain theories" are unfalsifiable gestures. Chapter 5's claim is different: it makes specific predictions about which cortical layer should respond to which message, which white-matter tract should carry which signal, which lesion should cause which computation to fail.
Example: the claim predicts that bottom-up prediction-error signals (the mismatch between observation and top-down prediction) should be carried by superficial pyramidal cells projecting out of layer II/III. The empirical literature, over 30 years, has accumulated substantial evidence that this is what happens.
Whether the theory is right about all of it is open. But it makes the right shape of claim — specific enough to refute, empirical enough to test.
How the Workbench embodies this
Every signal in the Workbench carries provenance tags including equation_id. When you look at /glass/agent/<id> you're looking at a live factor-graph trace — the messages flowing between the agent's own internal "factor nodes," labeled by which equation emitted them.
The correspondence:
- eq_4_13_state_belief_update → the message arriving at a state variable node.
- eq_4_14_policy_posterior → the message arriving at the policy variable node.
- eq_7_10_dirichlet_a → the message updating the A factor node.
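A hypothetical sketch of what a provenance-tagged message could look like, modelled on the equation_id tags described above (the field names here are assumptions for illustration, not the Workbench's actual structs):

```python
from dataclasses import dataclass, field
import time

@dataclass
class Message:
    equation_id: str    # which equation emitted this message, e.g. eq_4_13_state_belief_update
    target_node: str    # the variable or factor node the message arrives at
    payload: list       # the distribution being passed along the edge
    emitted_at: float = field(default_factory=time.time)  # trace timestamp

# One message as it might appear in a factor-graph trace.
msg = Message(
    equation_id="eq_4_13_state_belief_update",
    target_node="s_tau",
    payload=[0.78, 0.22],
)
print(msg.equation_id, msg.target_node, msg.payload)
```

Tagging each message with the equation that emitted it is what turns a raw event stream into a readable factor-graph trace: filter by equation_id and you see one edge of the graph at a time.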
Watching an agent run in Glass is watching a factor graph do its thing. The Workbench's architecture isn't inspired by the theory; it's a direct implementation of it.
The concepts this session surfaces
- Factor graph — bipartite graph with variable nodes and factor nodes.
- Message — a distribution passed along an edge.
- Cortical column — the factor-node candidate.
- Ascending/descending projections — message directions.
The quiz
Q: In the factor-graph picture of cortex, an ascending projection from a lower area to a higher area carries:
- ☐ A prior from the higher area's generative model.
- ☐ A prediction-error or posterior estimate from the lower area. ✓
- ☐ A motor command.
- ☐ A reward signal.
Why: Ascending projections (bottom-up) carry evidence from the lower level — typically posterior estimates or prediction-errors — which the higher level then uses to update its own variable nodes. Descending projections carry priors from higher areas down. This is the core directional claim Chapter 5 makes.
Run it yourself
- /learn/session/5/s1_factor_graphs — session page.
- /cookbook/predictive-coding-two-level-pass — two-level hierarchy in action.
- /cookbook/predictive-coding-convergence-diagnostics — when has message passing actually converged?
- /glass — live factor-graph trace.
The mental move
Before Session 5.1, Chapter 4's message-passing was abstract. After 5.1, it has a biological home. You don't have to buy the cortical mapping — but you should hold it alongside the math, because the next three sessions (5.2, 5.3, 5.4) build on it to deliver the neuromodulator story.
Next
Part 29: Session 5.2 — Predictive coding. The most famous consequence of the factor-graph picture: prediction-error signals that ascend, predictions that descend, gradient descent in generalised coordinates one hierarchy level at a time. The clearest connection between Active Inference and neural computation.
⭐ Repo: github.com/TMDLRG/TheORCHESTRATEActiveInferenceWorkbench · MIT license
📖 Active Inference, Parr, Pezzulo, Friston — MIT Press 2022, CC BY-NC-ND: mitpress.mit.edu/9780262045353/active-inference
