Series: The Learn Arc — 50 posts teaching Active Inference through a live BEAM-native workbench. ← Part 8: Discrete Time. This is Part 9.
The hero line
Motion of the mode is the mode of the motion.
This is my favourite hero line in the whole book. It's the title, it's the thesis, and it's a koan: the agent's posterior belief about where the world is moves with the world's own dynamics. The mode of the belief equals the motion. No hidden-state tracking. No separate dynamics module. One smooth gradient.
Chapter 8 is where Active Inference meets control theory, and where predictive coding stops being a metaphor and becomes a literal gradient descent in generalised coordinates.
Beat 1: generalised coordinates
A hidden state s has a position. In continuous time, it also has a velocity s', an acceleration s'', a jerk s''', and so on. These are generalised coordinates — the Taylor-expansion of the state around the current moment.
An agent that tracks only s is doing filtering. An agent that tracks (s, s', s'') is doing filtering plus prediction — it knows how the state is changing, which lets it extrapolate across the lag between sensing and acting.
Chapter 8 fully expands Eq. 4.19 (which first appeared in Chapter 4). The free-energy functional becomes a quadratic form in the generalised coordinates, with one cross-term coupling each coordinate to its time derivative.
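To feel what holding derivatives buys you, here's a tiny sketch of the Taylor extrapolation idea — plain Python, not the Workbench's Elixir, with illustrative names:

```python
import math

# A generalised-coordinate state: position, velocity, acceleration.
# Holding the derivatives lets the agent extrapolate across the lag
# between sensing and acting.
def extrapolate(gen_state, dt):
    """Taylor-extrapolate a state held in generalised coordinates."""
    s, s1, s2 = gen_state
    return s + s1 * dt + 0.5 * s2 * dt ** 2

# True trajectory: s(t) = sin(t). At t = 0: s = 0, s' = 1, s'' = 0.
pred = extrapolate((0.0, 1.0, 0.0), 0.1)
print(pred)   # close to sin(0.1) ≈ 0.0998
```

An agent tracking only s would predict "no change" across that lag; the generalised-coordinate agent predicts the sine's next value to three decimal places.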
Beat 2: the mode moves
Minimize that quadratic form and the posterior mode satisfies:
ṡ̃ = D·s̃ − ∂F/∂s̃
where s̃ is the vector of generalised coordinates and D is the shift operator that increments each order (s ↦ s', s' ↦ s'', ...).
The English translation: the posterior mode moves in generalised coordinates at a rate determined by its own gradient on F plus the expected velocity. That's the "motion of the mode is the mode of the motion" — the belief and the dynamics are the same equation.
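That equation is small enough to integrate by hand. Here's a minimal numerical reading of it — a sketch with a diagonal quadratic F and a made-up scalar precision, not the Workbench implementation:

```python
# Sketch of d s~/dt = D s~ - dF/d s~ for a diagonal quadratic F.
# Names and the scalar precision are illustrative assumptions.
def shift(gen):
    # D, the shift operator: promote each order to its derivative.
    return gen[1:] + [0.0]

def grad_F(gen, mu, precision=1.0):
    # Gradient of F = 1/2 * precision * sum((gen - mu)^2).
    return [precision * (g - m) for g, m in zip(gen, mu)]

def mode_step(gen, mu, dt=0.01):
    # One Euler step: the mode rides its own flow (D) while descending F.
    return [x + dt * (d - g)
            for x, d, g in zip(gen, shift(gen), grad_F(gen, mu))]

gen = [0.0, 0.0, 0.0]
for _ in range(1000):                 # integrate to t = 10
    gen = mode_step(gen, [1.0, 0.0, 0.0])
print(round(gen[0], 3))               # → 1.0: the mode settles at the target
```

Note the two terms doing exactly what the prose says: `shift(gen)` is the expected velocity, `grad_F` is the descent on F, and the belief's motion is their sum.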
Beat 3: action as inference on sensors
The continuous-time chapter's deepest move is to treat motor actions as inference on sensory channels. Instead of "pick an action to drive observations toward preferences," continuous Active Inference reframes it as: pick an action that minimizes the prediction error on a proprioceptive channel.
The agent has a belief about where its limb should be (a prior from Chapter 3's C). The limb sends proprioceptive signals about where it actually is. The prediction error at that channel drives the muscle through a reflex arc. Movement is perception trying to match its own prediction.
This is the chapter that made physiologists stop arguing with the theory. There are 60 years of empirical data on motor reflexes that look exactly like this.
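The reflex arc reduces to gradient descent on the proprioceptive prediction error. A hedged sketch (hypothetical names, an identity "plant" standing in for the limb — not the Workbench code):

```python
# "Action as inference": the motor command descends the proprioceptive
# prediction error, not a reward signal. Names are hypothetical.
def reflex_step(action, belief, plant, precision=1.0, dt=0.05):
    sensed = plant(action)          # proprioception: where the limb is
    error = sensed - belief         # prediction error on that channel
    # dF/da = precision * error * d(sensed)/d(action); the plant is the
    # identity here, so the reflex is plain error descent.
    return action - dt * precision * error

belief = 0.8                        # prior: where the limb "should" be
action = 0.0
for _ in range(200):
    action = reflex_step(action, belief, plant=lambda a: a)
print(round(action, 3))             # → 0.8: the limb matches the prediction
```

No cost function over actions, no inverse model: the belief about the limb stays fixed and the world is dragged toward it through the error signal.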
What the Workbench can show
The discrete-time workhorse from Chapter 7 does A/B/C/D matrices. The continuous-time scaffold in the Workbench lives under WorldPlane.ContinuousWorlds and the generalized_filter skill, with three runnable recipes:
/cookbook/continuous-sinusoid-tracker runs a generalised-coordinate filter on a noisy sine wave. You watch the agent track (s, s', s'') simultaneously — position, velocity, acceleration all estimated at every tick. The agent predicts ahead of the noisy signal because it's holding a belief about its derivatives.
The recipe card echoes the classic Kalman-filter intuition: higher-order generalised coords ≈ better tracking under noise, at the cost of sensitivity to model misspecification.
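A toy version of that recipe fits in a screen of Python. This is a sketch in the spirit of the demo, not the `generalized_filter` skill itself — the gains are assumptions, and noise is omitted so the run is deterministic:

```python
import math

# Toy generalised-coordinate tracker: the belief holds (s, s', s'') and
# flows with its own shift operator while the position prediction error
# is fed back into every order. Gains chosen (assumption) for stability.
def track(signal, t_end=10.0, dt=0.001, gains=(12.0, 47.0, 60.0)):
    mu = [0.0, 0.0, 0.0]              # believed position, velocity, accel
    k0, k1, k2 = gains
    t = 0.0
    while t < t_end:
        err = signal(t) - mu[0]       # prediction error on the sensor
        mu = [mu[0] + dt * (mu[1] + k0 * err),
              mu[1] + dt * (mu[2] + k1 * err),
              mu[2] + dt * (k2 * err)]
        t += dt
    return mu

mu = track(math.sin)
print(abs(mu[0] - math.sin(10.0)))    # small residual tracking error
```

Because the error drives all three orders, the belief acquires a velocity and acceleration of its own — which is precisely what lets it run ahead of a noisy signal instead of lagging it.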
The discrete-continuous bridge
Most real-world problems aren't purely discrete or purely continuous. Chapter 8 closes by discussing hybrid models — discrete states that evolve in continuous time, continuous states that project through discrete observations.
/cookbook/continuous-discrete-bridge is the Workbench's demo of this hybrid pattern. It runs a discrete-time top level (for task structure) over a continuous-time bottom level (for the moment-to-moment action). You can see the two timescales interact: the top level commits to a discrete "intention," the bottom level executes it smoothly, and prediction errors on the bottom level's sensors feed back to the top level's state belief.
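A hypothetical skeleton of that two-timescale pattern (plain Python, invented names — the real recipe runs on the Workbench's Elixir runtime):

```python
# Two-timescale sketch: a discrete top level commits to an intention (a
# setpoint), a continuous bottom level descends the prediction error
# toward it between discrete ticks, and the settled state is fed back up.
def run(intentions, slow_dt=1.0, fast_dt=0.01, gain=4.0):
    x = 0.0
    trace = []
    for target in intentions:         # discrete ticks: commit an intention
        for _ in range(int(slow_dt / fast_dt)):
            x += fast_dt * gain * (target - x)   # continuous execution
        trace.append(round(x, 2))     # bottom-level state fed back up
    return trace

print(run([1.0, -0.5, 0.0]))          # → [0.98, -0.47, -0.01]
```

The residuals in that trace are the interesting part: they are exactly the bottom-level prediction errors the top level would use to revise its discrete state belief.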
Plus: /cookbook/continuous-precision-tradeoff — how precision on different generalised-coord orders (position vs velocity vs acceleration) reshapes tracking.
The four sessions
Chapter 8 has four sessions under /learn/chapter/8:
- Generalised coordinates — position, velocity, acceleration as one state vector.
- Eq. 4.19 unpacked — the quadratic-form free energy with every term derived.
- Action on sensors — the reframing of motor control as inference.
- Continuous play — the generalized_filter skill demonstrated live.
Why this scaffolding matters
Chapter 8 sits at an honest verification frontier in the Workbench. The discrete-time path (Chapters 4, 6, 7) is verified against source and appendix for Eq. 4.10 / 4.11 / 4.13 / 4.14 / B.5 / B.9 / B.29 / B.30. The continuous-time path is scaffolded — the registry + runtime hooks + generalized_filter skill are in place, but the Eq. 4.19 quadratic form is implemented without every path traced end-to-end.
That matters because the Workbench flags this honestly. Every equation page under /equations has a verification_status field: verified_against_source_and_appendix, verified_against_source, scaffolded. The /guide/features page makes the same honest pass over every feature. No fake-finished UI.
As the continuous-time runtime matures (the generalised-filter tests land in the agent_plane/test/generalized_filter_test.exs), these badges will flip. Until then, trust discrete-time results, probe continuous-time results, and read the Glass trace.
Run it yourself
- /cookbook/continuous-sinusoid-tracker — the canonical generalised-coord filter demo.
- /cookbook/continuous-discrete-bridge — the hybrid two-timescale demo.
- /cookbook/continuous-precision-tradeoff — precision on derivatives vs position.
- /learn/chapter/8 — four sessions.
- /equations/eq_4_19_quadratic_F — the quadratic free energy with its full verification status.
The mental move
Continuous time is where Active Inference stops looking like just another Bayesian RL variant and starts looking like what brains actually do. Your visual cortex doesn't wait for a fresh frame before responding — it predicts the next frame in generalised coordinates and drives eye movements to minimize the prediction error continuously.
Chapter 8 gives you the math for that. The Workbench gives you the scaffold to touch it. The empirical payoff — motor reflexes, smooth pursuit, the delay-compensated sensorimotor loop — is not in this chapter, but it's why Chapter 8 is the chapter physiologists cite.
Next
Part 10: Chapter 9 — Model-Based Data Analysis. Active Inference as a statistical tool: fit models to human behavioral data, compute Bayesian model evidence, compare competing hypotheses. Where Active Inference stops being "a theory of brains" and becomes "a tool for studying brains."
⭐ Repo: github.com/TMDLRG/TheORCHESTRATEActiveInferenceWorkbench · MIT license
📖 Active Inference, Parr, Pezzulo, Friston — MIT Press 2022, CC BY-NC-ND: mitpress.mit.edu/9780262045353/active-inference
← Part 8: Discrete Time · Part 9: Continuous Time (this post) · Part 10: Model-Based Data Analysis → coming soon



