
ORCHESTRATE


Active Inference — The Learn Arc, Part 46: Session §9.3 — Case study


Series: The Learn Arc — 50 posts through the Active Inference workbench.
Previous: Part 45 — Session §9.2: Comparing models

One subject. One dataset. Two candidate models. One posterior per model, one Bayes factor on top. The whole Chapter 9 pipeline fits on one screen, and once you have walked it end to end, the methodology stops feeling like magic.


Why a case study closes Chapter 9

The mathematics in 9.1 and 9.2 is abstract enough that it is easy to agree with and hard to feel. Session 9.3 collapses the whole chain into a single concrete example so you can watch the decisions happen: which parameters are free, which priors you set, which model wins, and how strongly.

Five beats

  1. The subject is a simulated agent with known ground truth. Because we control the generator, we know the right answer. That makes the case study diagnostic — if the pipeline does not recover it, the bug is in the pipeline, not the subject.

  2. Two models: flat vs hierarchical. Both are valid Active Inference agents. One is the true generator; one is simpler but plausible. Real studies would add a handful more candidates — the point is how few you need to run the whole machinery.

  3. Fit first, compare second. For each model, run Session 9.1's parameter posterior pipeline. Then run Session 9.2's evidence comparison on the resulting F values. Two steps, one output: "the winning model is X, Bayes factor Y."

  4. Posterior predictive checks. Before trusting the Bayes factor, simulate from each fitted model and overlay on the real trajectory. If the hierarchical model's posterior predictive misses obvious structure the flat one does not, no Bayes factor can save you.

  5. Reporting template. The session ends with a one-paragraph reporting template — what to say about priors, identifiability diagnostics, the winning model, and effect size. A skeleton you can adapt for a methods section.
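Beats 3 and 5 reduce to a few lines once the fits exist. Here is an illustrative Python sketch (the workbench itself is Elixir/Phoenix; this is not its code), with hypothetical free-energy values standing in for the output of a real Session 9.1 fit:

```python
import math

def compare_models(free_energies):
    # Lower variational free energy F approximates higher log evidence
    # (log p(data | model) ~ -F), so the model with the smallest F wins,
    # and log BF(winner vs runner-up) ~ F_runner_up - F_winner.
    ranked = sorted(free_energies.items(), key=lambda kv: kv[1])
    (best, f_best), (second, f_second) = ranked[0], ranked[1]
    return best, math.exp(f_second - f_best)

# Hypothetical F values from fitting each candidate to the same subject:
F = {"flat": 412.7, "hierarchical": 409.2}
winner, bayes_factor = compare_models(F)
print(f"winning model: {winner}, Bayes factor: {bayes_factor:.1f}")
```

With these made-up numbers the hierarchical model wins with log BF = 3.5, i.e. a Bayes factor around 33 — the kind of "winning model is X, Bayes factor Y" line the reporting template asks for.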

Why it matters

Chapters 4–8 built the framework. Chapter 9 fits it to data. Session 9.3 is where a reader sees that the whole stack — from A and B to Bayes factors — is actually deployable on a single subject's trial-by-trial data. The case study is what turns "I read the book" into "I could run this on my own experiment tomorrow."

Quiz

  • Why run posterior predictive checks before looking at the Bayes factor?
  • What does it mean when the ground-truth generator's Bayes factor over the competitor is only ~2 — and when is that still an acceptable outcome?
  • Which reporting element is the first to cut if your journal's word limit is tight?

Run it yourself

mix phx.server
# open http://localhost:4000/learn/session/9/s3_case_study

Cookbook recipe: fitting/case-study — the full end-to-end pipeline on a simulated subject. Output: a fitted posterior for each candidate, a Bayes factor, and a posterior-predictive overlay ready for a report.
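The posterior-predictive overlay in that output is conceptually simple: simulate from each fitted model and ask whether the real trajectory looks like a typical draw. A minimal sketch, assuming a scalar summary statistic and a hypothetical fitted model (none of these names come from the workbench):

```python
import random

def posterior_predictive_check(observed, simulate, n_sims=500, seed=0):
    # Crude posterior predictive check on one summary statistic.
    # `observed` is the statistic computed on the real trajectory;
    # `simulate` draws the same statistic from one synthetic trajectory
    # under the fitted model. Returns the fraction of simulations at or
    # above the observed value; a tail probability near 0 or 1 means the
    # model misses structure the data clearly has.
    rng = random.Random(seed)
    sims = [simulate(rng) for _ in range(n_sims)]
    return sum(s >= observed for s in sims) / n_sims

# Hypothetical fitted model: choice switch-rate centred on 0.30.
tail = posterior_predictive_check(
    observed=0.42,
    simulate=lambda rng: rng.gauss(0.30, 0.05),
)
print(f"tail probability: {tail:.2f}")
```

A tail probability this far into the extreme would be exactly the warning from beat 4: no Bayes factor can rescue a model whose posterior predictive cannot reproduce the observed statistic.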

Next

Part 47: Session §10.1 — Perception, action, learning. Chapter 10 opens. The final chapter re-reads everything under a single lens: one agent, one equation, three different gradients. The synthesis session.


Powered by The ORCHESTRATE Active Inference Learning Workbench — Phoenix/LiveView on pure Jido.
