Series: The Learn Arc — 50 posts through the Active Inference workbench.
Previous: Part 48 — Session §10.2: Limitations
Session 10.3 is the roadmap: which open problems in Active Inference are closest to tractable, which are research bets, and which are probably dead ends. The last session in the book, and the first page of the next one.
From limits to bets
Session 10.2 listed what does not work. Session 10.3 does the complementary job: it points at what might, and sorts the candidates by how far away they look.
Five beats
Scalable planning is the most tractable. Exhaustive policy enumeration grows exponentially in the planning horizon, but there are a dozen routes out of that wall: amortised policy networks, continuous-relaxation tricks, tree-search hybrids, each with working prototypes. Expect this to land within the next few years.
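A minimal sketch of the cheapest escape route, in workbench-flavoured Elixir: swap exhaustive enumeration for a beam search that keeps only the few lowest-EFE prefixes at each depth. Everything here is hypothetical scaffolding, not workbench API, and `efe/1` is a stand-in for a real expected-free-energy score.

```elixir
defmodule PolicySearch do
  @moduledoc """
  Toy contrast: exhaustive policy enumeration vs. beam search.
  `efe/1` is a hypothetical stand-in for scoring a policy prefix
  by expected free energy; lower is better.
  """

  @actions [:left, :right, :stay]

  # Stand-in EFE: a deterministic hash of the prefix, just so the
  # example runs. A real agent scores risk + ambiguity here.
  def efe(policy), do: :erlang.phash2(policy, 1000) / 1000

  # Exhaustive: |actions|^horizon candidates. Fine at horizon 3,
  # hopeless at horizon 15.
  def enumerate(horizon) do
    Enum.reduce(1..horizon, [[]], fn _, prefixes ->
      for p <- prefixes, a <- @actions, do: p ++ [a]
    end)
    |> Enum.min_by(&efe/1)
  end

  # Beam search: at each depth keep only the `beam` best prefixes,
  # so cost is beam * |actions| per step instead of exponential.
  def beam_search(horizon, beam \\ 4) do
    Enum.reduce(1..horizon, [[]], fn _, prefixes ->
      for(p <- prefixes, a <- @actions, do: p ++ [a])
      |> Enum.sort_by(&efe/1)
      |> Enum.take(beam)
    end)
    |> Enum.min_by(&efe/1)
  end
end
```

`PolicySearch.beam_search(15)` evaluates at most beam × |actions| prefixes per step, where `PolicySearch.enumerate(15)` would have to score 3^15 ≈ 14 million full policies.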
Bridging to deep learning is underway. Active Inference's likelihood models are small lookup tables by default; swapping them for neural likelihoods while keeping the EFE scoring gets you calibrated uncertainty on top of deep perception. Several groups are already shipping this.
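Why the swap is clean: EFE scoring only ever touches the likelihood through calls shaped like p(o | s). A sketch of that seam, with hypothetical names throughout; the neural variant would simply wrap a trained model (an Axon network, say) behind the same one-argument function.

```elixir
defmodule Likelihood do
  @moduledoc """
  The seam that makes the swap clean: EFE scoring only ever consumes
  the likelihood as a function state -> %{observation => probability}.
  Anything honouring that contract plugs in, lookup table or neural net.
  """

  # Classic discrete Active Inference: the A matrix as a literal table.
  def table(a_matrix) do
    fn state -> Map.fetch!(a_matrix, state) end
  end

  # Hypothetical neural variant: `forward` would wrap a trained
  # network returning the same kind of map over observations.
  def neural(forward) when is_function(forward, 1), do: forward

  # The ambiguity term of EFE sees only the distribution, so it
  # cannot tell which backend produced it: H[p(o | s)] either way.
  def entropy(likelihood, state) do
    likelihood.(state)
    |> Map.values()
    |> Enum.reduce(0.0, fn p, acc ->
      if p > 0.0, do: acc - p * :math.log(p), else: acc
    end)
  end
end
```

`Likelihood.entropy(Likelihood.table(%{s1: %{o1: 0.9, o2: 0.1}}), :s1)` computes the ambiguity of state `:s1` without knowing or caring where the distribution came from.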
Precision auto-calibration is a research bet. Can an agent learn its own precisions the same way it learns its A (likelihood) and B (transition) matrices? The math is appealing. Whether it converges in practice is open. High-risk, high-reward.
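To make the bet concrete, here is the flavour of one candidate update rule, heavily hedged: nudge the precision gamma toward the inverse of the realised prediction error, Robbins-Monro style. This is an illustrative assumption, not a published algorithm; whether anything like it converges in full agents is exactly the open part.

```elixir
defmodule PrecisionLearner do
  @moduledoc """
  Toy auto-calibration: nudge precision toward the value that
  matches realised prediction error. Illustrative only; the open
  question is whether updates like this converge in full agents.
  """

  # One update step. `expected_err` is what the current precision
  # implies (roughly 1/gamma for a Gaussian channel); `observed_err`
  # is what the world actually delivered; `eta` is a learning rate.
  def step(gamma, observed_err, eta \\ 0.05) do
    expected_err = 1.0 / gamma
    # More surprised than gamma predicted: lower gamma (trust the
    # channel less). Less surprised: raise it.
    new_gamma = gamma + eta * (expected_err - observed_err)
    max(new_gamma, 1.0e-6)
  end

  # Run the fixed point on a stream of observed errors.
  def calibrate(errors, gamma0 \\ 1.0) do
    Enum.reduce(errors, gamma0, fn err, g -> step(g, err) end)
  end
end
```

On a long stream of errors averaging 0.5, `PrecisionLearner.calibrate/2` drifts toward gamma ≈ 2, the inverse of the average error; the hard version of the problem is doing this while A, B, and the policy are all moving too.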
Empirical identifiability in humans is the experimental frontier. The framework makes specific predictions about dopamine (precision), serotonin (meta-precision), and frontal-parietal dynamics (hierarchy). Each is being tested. Results so far are encouraging but not unambiguous.
What is probably dead. Purely normative "F explains everything" arguments are probably a dead end; the interesting work is constructive and specific. Brute-force application to economics or social science without a generative model is also unlikely to pan out.
The honest closing
Active Inference is not a theory of everything. It is a coherent, testable framework for one slice of agency — perception, action, learning, uncertainty — built around a single equation. If in ten years it has solved half the items on this roadmap, it will have justified the attention it is getting now. The other half will have taught us something about the frontier we did not already know.
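For the record, the single equation in question, in textbook notation rather than anything workbench-specific. Perception minimises F over the belief q; action minimises the expected value of F over candidate policies.

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[q(s) \,\middle\|\, p(s \mid o)\right]}_{\ge\, 0} - \ln p(o)
```

Because the KL term is non-negative, minimising F tightens a bound on surprise, -ln p(o), which is the sense in which one equation covers perception, action, and learning.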
Quiz
- Which scalability route seems closest to working — amortised policies, tree search, or continuous relaxation?
- What makes precision auto-calibration a genuinely hard research problem?
- What is the single empirical prediction whose resolution would most change your confidence in the framework?
Run it yourself
mix phx.server
# open http://localhost:4000/learn/session/10/s3_where_next
Cookbook recipe: synthesis/roadmap — a single-page interactive "map" linking each roadmap item to the cookbook recipe that implements a first draft of it. A sandbox for readers who want to pick up one item and push it forward.
Next
Part 50: Series capstone. Fifty posts. Ten chapters. One framework. We close the Learn Arc with a reader's map — what to keep, what to skim, where the workbench sits in the broader Active Inference landscape, and what to build next. The final post.
Powered by The ORCHESTRATE Active Inference Learning Workbench — Phoenix/LiveView on pure Jido.
