ORCHESTRATE


Active Inference — The Learn Arc, Part 47: Session §10.1 — Perception, action, learning

Session 10.1 — Perception, action, learning

Series: The Learn Arc — 50 posts through the Active Inference workbench.
Previous: Part 46 — Session §9.3: Case study

Hero line. Perception, action, and learning are not three algorithms. They are three gradients of the same free energy, acting on different variables. Session 10.1 is the synthesis — every piece of the series in one sentence.


The single equation, three handles

Chapter 10 is the closing chapter. Session 10.1 earns the word synthesis by showing that everything the series introduced collapses to one equation with three levers:

  • ∂F/∂μ → perception (update the belief)
  • ∂F/∂a → action (change the world)
  • ∂F/∂θ → learning (update the parameters)

Different timescales, different variables, same F.
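A minimal numeric sketch of the three handles (my own toy quadratic model, not the book's equations): a 1-D Gaussian agent whose belief mu, action a, and likelihood parameter theta all descend the same F, at different rates. The variable names, precisions, and learning rates here are illustrative assumptions.

```python
# Toy sketch of "one F, three gradients": a 1-D Gaussian agent.
# mu = belief about the hidden state, a = action that shifts the
# observation, theta = likelihood parameter. All three descend the
# SAME free energy F, just at different rates.

PRIOR_MU = 0.0        # prior mean over the hidden state
TRUE_STATE = 2.0      # what the world actually is
PI_S, PI_O = 1.0, 1.0  # state and sensory precisions

def F(mu, a, theta):
    o = TRUE_STATE + a                        # action changes what is sensed
    prior_term = 0.5 * PI_S * (mu - PRIOR_MU) ** 2
    likelihood_term = 0.5 * PI_O * (o - theta * mu) ** 2
    return prior_term + likelihood_term

def num_grad(f, x, eps=1e-6):
    # central-difference gradient, good enough for a demo
    return (f(x + eps) - f(x - eps)) / (2 * eps)

mu, a, theta = 0.0, 0.0, 0.5
lr_mu, lr_a, lr_theta = 0.1, 0.05, 0.001      # perception fast, learning slow

f0 = F(mu, a, theta)
for _ in range(500):
    mu    -= lr_mu    * num_grad(lambda x: F(x, a, theta), mu)     # perception
    a     -= lr_a     * num_grad(lambda x: F(mu, x, theta), a)     # action
    theta -= lr_theta * num_grad(lambda x: F(mu, a, x), theta)     # learning

print(f"F: {f0:.3f} -> {F(mu, a, theta):.3f}")
```

Note what the sliders would show: raising `lr_theta` toward `lr_mu` blurs the line between learning and perception, which is exactly the "time-averaged perception" point in the beats below.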

Five beats

  1. Perception is fastest. Beliefs update every step. This is Eq 4.13 in discrete time, Eq 4.19's first gradient in continuous time. The gradient flows until the belief balances prior and likelihood.

  2. Action is simultaneous but acts through the world. Where perception moves the belief to match the sensors, action moves the sensors to match the belief. Same F, opposite variable. Chapter 7's EFE and Chapter 8's ∂F/∂a are two faces of the same gradient.

  3. Learning is slowest — and uses the same update. Dirichlet counts on A and B accumulate every step; the change is slow because each step contributes little. That is a feature: learning is a time-averaged version of perception, with the same equation doing the work.

  4. Precision chooses which gradient wins. If sensory precision is high, perception and action dominate. If parameter precision is low, learning runs fast. The same three gradients can produce wildly different behavior just by tuning the precisions.

  5. This is the whole framework. Everything — hierarchy, continuous coordinates, data fitting — is this equation, applied more than once or at a different level. If you keep "one F, three gradients" in your head, no future Active Inference paper will feel foreign.

Why it matters

Most presentations of Active Inference introduce perception, action, and learning as separate algorithms and bolt them together. That is the main reason the framework gets a reputation for being opaque. Session 10.1 does the opposite: it shows they were always one object. That reframing is what makes the rest of the field legible.

Quiz

  • Why do perception and action act on different variables of the same F?
  • What happens in the extreme where parameter precision is zero?
  • If perception and action both minimise F, how does the agent avoid a degenerate fixed point where nothing ever moves?

Run it yourself

mix phx.server
# open http://localhost:4000/learn/session/10/s1_perception_action_learning

Cookbook recipe: synthesis/three-gradients — one agent, three live plots of the three gradients, and sliders that change each update rate independently. Build the "one F, three handles" intuition with your own hands.

Next

Part 48: Session §10.2 — Limitations. What Active Inference is not good at, what it handwaves, and where its assumptions break. The honest session.


Powered by The ORCHESTRATE Active Inference Learning Workbench — Phoenix/LiveView on pure Jido.
