DEV Community

ORCHESTRATE

Active Inference, The Learn Arc — Part 12: Session §1.1 — What Active Inference Actually Claims (Session-Depth)

Session 1.1 — What Active Inference Claims, rendered in the Workbench

Series: The Learn Arc — 50 posts teaching Active Inference through a live BEAM-native workbench. ← Part 11: Unified Theory. This is Part 12 — the start of the session-depth arc.

A new cadence

Parts 1–11 covered the full book at chapter-overview depth. You now have the spine: perception, action, learning under one functional; A/B/C/D matrices; precision as neuromodulation; POMDPs; continuous time; model fitting.

From here forward the series changes shape. Each post takes one 8–15 minute workshop session from /learn/session/:num/:slug and opens it up in depth. 38 sessions across the 10 chapters. One tight post per session. Plus a finale.

The posts are shorter, more focused, more runnable. You can read one in 4–5 minutes and have time to open the session page alongside.

Session 1.1 — "What Active Inference claims"

The first session of Chapter 1. Three minutes on the hero claim. Three minutes on the vocabulary you'll need. Three minutes on the first hands-on thing to do.

The session route is /learn/session/1/s1_what_is_ai. Open it.

The four narrations

Every session in the Workbench renders its content four different ways, keyed to the learner's path:

  • Kid — grade-5 vocabulary, one concrete image per concept, one sentence per idea.
  • Real-world — grade-8 vocabulary with everyday analogies, equations only when they clarify.
  • Equation — Unicode math freely, cite equation numbers, keep derivations tight.
  • Derivation — full formalism, cite proof sources, flag heuristic steps.

Click one of the path buttons at the top of /learn and the entire curriculum rewrites itself.

For Session 1.1, the hero claim in each voice:

Kid

Your brain is a prediction machine. It guesses what's going to happen next, and when it's wrong, it learns. That's it. That one idea explains seeing, moving, and learning.

Real-world

Active Inference is the claim that one rule — keep your predictions about the world accurate, and act to make them come true — explains perception, action, and learning together. Not three modules. One principle, running three ways.

Equation

An agent minimizes variational free energy F[Q] over its posterior beliefs Q (perception, Eq. 4.13), over its policy posterior π (action, Eq. 4.14), and over its generative-model parameters θ (learning, Eq. 7.10). One functional, three arguments.

Derivation

Given observations o and hidden states s, the agent maintains a variational posterior Q(s, π) that approximates P(s, π | o). The free-energy functional F[Q, o] = E_Q[ln Q(s,π) − ln P(o,s,π)] upper-bounds −ln P(o) (by Jensen's inequality). Minimizing F jointly over Q(s) (perception), over the policy posterior Q(π) — where preferred outcomes enter through C (action) — and over the parameters of P(o,s|π) (learning) is the single computational commitment of Active Inference.
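The bound stated above can be made explicit in two lines, using ln P(o,s,π) = ln P(s,π|o) + ln P(o) and the non-negativity of KL divergence:

```latex
\begin{aligned}
F[Q,o] &= \mathbb{E}_{Q}\!\left[\ln Q(s,\pi) - \ln P(o,s,\pi)\right] \\
       &= D_{\mathrm{KL}}\!\left[\,Q(s,\pi)\,\|\,P(s,\pi\mid o)\,\right] - \ln P(o) \\
       &\ge -\ln P(o).
\end{aligned}
```

The bound is tight exactly when Q equals the true posterior, which is why minimizing F over Q is perception.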

Same session. Same claim. Four vocabularies. Pick yours at the top of /learn; every subsequent page respects the cookie.
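To make the Equation and Derivation voices concrete, here is a minimal numerical sketch (the model and its numbers are invented for illustration, not taken from the book): a two-state generative model where sweeping candidate beliefs Q shows that F is minimized at the exact posterior, where it equals −ln P(o).

```python
import numpy as np

# Toy generative model: two hidden states, one binary observation.
prior = np.array([0.5, 0.5])   # P(s)
lik = np.array([0.9, 0.2])     # P(o=1 | s)
o = 1
joint = prior * (lik if o == 1 else 1 - lik)   # P(o, s)

def free_energy(q):
    """F[q] = E_q[ln q(s) - ln P(o, s)], which upper-bounds -ln P(o)."""
    q = np.asarray(q)
    return float(np.sum(q * (np.log(q) - np.log(joint))))

# Exact posterior P(s | o) and model evidence P(o), for comparison.
evidence = joint.sum()
posterior = joint / evidence

# Sweep candidate beliefs: F bottoms out at the exact posterior,
# where the bound is tight, i.e. F = -ln P(o).
qs = [np.array([p, 1 - p]) for p in np.linspace(0.01, 0.99, 99)]
best = min(qs, key=free_energy)

print("best q on grid:", best, " exact posterior:", posterior)
print(free_energy(posterior), -np.log(evidence))  # both ≈ 0.598
```

Perception in one screenful: adjusting beliefs to minimize F recovers Bayes' rule.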

The session page's structure

Every session page has the same seven-block spine:

  1. Breadcrumb + path chip — which chapter you're in, which path you chose.
  2. Title + hero concept + minute count — the at-a-glance orienter.
  3. Path-specific narration — the block above, swapped live when you change paths.
  4. Book excerpt — attributed verbatim extract from priv/book/sessions/*.txt, truncated to a fair-use budget.
  5. Figure strip — static images from the book's figures directory.
  6. Podcast segment — a short audio clip covering the session's hero concept.
  7. Narrator button — "🔊 Narrate this session" — routes to the local Piper TTS endpoint if one is running, and falls back to the browser's Web Speech API otherwise.

Then:

  8. Linked labs — buttons that open the session's relevant /learninglabs/*.html at a pre-seeded beat.
  9. Linked Workbench surfaces — routes to the cookbook recipes, equation pages, or builder canvases this session anchors.
  10. Concepts strip — clickable glossary chips.
  11. Micro-quiz — three or four multiple-choice questions with path-aware explanations.
  12. Prev / next session — the curriculum navigator.

Twelve blocks. Every session page honors every block. That uniformity is deliberate — you learn the shape once, and from then on you concentrate only on each session's content.

The hands-on bit

Session 1.1's linked lab is the BayesChips simulator — the simplest possible demonstration of "belief updated by evidence." Click the lab chip and the Shell opens with ?path=real&beat=intro honored. You drop chips into a jar. Each chip is an observation. The posterior probability of "biased jar" updates after every drop.
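The chip-by-chip update the lab animates is plain Bayes' rule over two hypotheses. A minimal sketch (the 0.5 and 0.8 red-chip likelihoods are assumptions for illustration, not necessarily the lab's exact numbers):

```python
# Two hypotheses about the jar: fair (half red chips) vs biased (mostly red).
# Each dropped chip is an observation; Bayes' rule updates P(biased) per chip.
P_RED = {"fair": 0.5, "biased": 0.8}   # assumed likelihoods of drawing red
p_biased = 0.5                          # flat prior over the two jars

def update(p_biased, chip_is_red):
    """One Bayesian update of P(biased) after observing one chip."""
    like_b = P_RED["biased"] if chip_is_red else 1 - P_RED["biased"]
    like_f = P_RED["fair"] if chip_is_red else 1 - P_RED["fair"]
    num = like_b * p_biased
    return num / (num + like_f * (1 - p_biased))

for chip_is_red in [True, True, False, True]:
    p_biased = update(p_biased, chip_is_red)
    print(f"chip red={chip_is_red}  P(biased) = {p_biased:.3f}")
```

Red chips push the posterior up, non-red chips pull it back down — the same loop the lab renders as chips in a jar.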

That's the loop in one trivial world. Chapter 1's later sessions widen it to a 3×3 maze with a real Jido agent. Session 1.1 keeps the scope small so the claim lands before the machinery does.

The quiz

The quiz at the bottom of Session 1.1 has three questions with path-aware rationales. Example (real-world path):

Q: Active Inference says perception, action, and learning are:

  • ☐ Three separate modules in the brain.
  • ☐ Three timescales of the same gradient descent. ✓
  • ☐ Only three in some textbooks; really it's five.
  • ☐ Three different cortical regions.

Why: Chapter 2 shows all three minimize the same functional F[Q] with different arguments — Q, the policy, the parameters. Brains may implement them in different regions, but the computation is one rule, three scales.

The quiz is generous. It won't catch you on phrasing. It catches you on the claim.

The Qwen tutor sits with you on this page

Every page in the Workbench has a Qwen drawer in the bottom-right (✨ Ask Qwen). On a session page, the drawer knows:

  • Which chapter (1) and session (s1_what_is_ai) you're on.
  • Which learning path is active (from the suite cookie).
  • Which concepts are listed on this page's glossary strip.
  • Which labs link here and at what beat.
  • The session's qwen_seed — a short one-liner that tunes the tutor's voice for this specific session.

Ask "Why is this the hero of Chapter 1 and not Chapter 2?" and Qwen answers with the session's actual narration + attributed excerpt injected into its context. No hand-waving, no confabulation — the session's content is part of the prompt.
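The assembly the drawer performs might look like the following sketch. The shape is hypothetical — only qwen_seed, the narration, the excerpt, and the path cookie come from the post; build_prompt and the exact field names are invented for illustration:

```python
# Hypothetical session context, mirroring the fields the post describes.
session = {
    "chapter": 1,
    "slug": "s1_what_is_ai",
    "path": "real",  # from the suite cookie
    "concepts": ["generative model", "free energy", "posterior"],
    "qwen_seed": "Keep answers grounded in Session 1.1's hero claim.",
    "narration": "Active Inference is the claim that one rule ...",
    "excerpt": "(attributed book excerpt, truncated to the fair-use budget)",
}

def build_prompt(session, question):
    """Inject the session's own content into the tutor's context window."""
    return (
        f"{session['qwen_seed']}\n"
        f"Chapter {session['chapter']}, session {session['slug']}, "
        f"path={session['path']}.\n"
        f"Concepts on this page: {', '.join(session['concepts'])}\n"
        f"Narration: {session['narration']}\n"
        f"Excerpt: {session['excerpt']}\n\n"
        f"Learner's question: {question}"
    )

print(build_prompt(session, "Why is this the hero of Chapter 1?"))
```

The point of the design: the model answers from injected page content, not from free recall, which is what keeps it from confabulating.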

Run it yourself
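A minimal sketch of getting the Workbench up locally, assuming the repo is a standard Elixir/Phoenix app — the mix commands below are assumptions, so check the repo README for the authoritative steps:

```shell
# Assumed standard Phoenix setup — confirm against the repo README.
git clone https://github.com/TMDLRG/TheORCHESTRATEActiveInferenceWorkbench
cd TheORCHESTRATEActiveInferenceWorkbench
mix deps.get      # fetch dependencies
mix phx.server    # start the workbench
# then open http://localhost:4000/learn/session/1/s1_what_is_ai
```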

The mental move

Sessions are where the book becomes a place you spend 8 minutes, not 8 hours. You pick your path, read the narration, run the lab, take the quiz, move on. Compound that across 38 sessions and you've done the whole book.

This series is scaffolding for that. Parts 12–49 trace the same 38 sessions. One post per session. You read the post, open the session, do the work, close both. Next day, next session.

Next

Part 13: Session §1.2 — Perception and action — one loop. The second session of Chapter 1. We open the loop live, using /world and the Tiny Open Goal maze. You'll press Step three times and feel what Chapter 1's claim actually looks like.


⭐ Repo: github.com/TMDLRG/TheORCHESTRATEActiveInferenceWorkbench · MIT license

📖 Active Inference, Parr, Pezzulo, Friston — MIT Press 2022, CC BY-NC-ND: mitpress.mit.edu/9780262045353/active-inference

Part 11: Unified Theory · Part 12: Session 1.1 (this post) · Part 13: Session 1.2 → coming soon
