
ORCHESTRATE

Originally published at github.com

Active Inference, The Learn Arc — Part 1: Why I built a BEAM-native workbench for the Free Energy Principle

The ORCHESTRATE Active Inference Learning Workbench — overview screen

Built with wisdom from THE ORCHESTRATE METHOD™ and LEVEL UP by Michael Polzin — running on pure Jido on the BEAM, teaching Active Inference from Parr, Pezzulo & Friston (2022, MIT Press).

The problem

Active Inference: The Free Energy Principle in Mind, Brain, and Behavior (Parr, Pezzulo, Friston — MIT Press, 2022) is one of the most ambitious scientific books of the decade. It argues that one variational rule — minimize surprise — explains both what you believe and what you do. Perception, action, learning, even curiosity, all fall out of the same math.

And the book is a wall.

It's 240 pages dense with measure theory, generalized coordinates, factor graphs, and six flavors of free energy. If you come at it as a working engineer who just wants to feel the idea move, you'll put it down by chapter 2.

I got tired of the wall. So I built a tool that tears holes in it.

What this is

The ORCHESTRATE Active Inference Learning Workbench is a BEAM-native Phoenix application with native Jido v2.2.0 agents. Pure Elixir, no Python, no LangChain, no external agent runtimes. Every generative model is a real Jido.AgentServer. Every episode runs on OTP supervision trees. Every signal it emits is traceable to the book equation that produced it.
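To make "every generative model is a supervised process" concrete, here is a minimal stand-in in plain OTP. This is an illustrative sketch, not Jido's actual `AgentServer` API — the module name, `beliefs/1`, and the initial state shape are all invented for this example — but it shows the pattern the Workbench builds on: an agent is a `GenServer` holding its beliefs as state, restarted by a supervisor if it crashes mid-episode.

```elixir
# Illustrative sketch only: a tiny agent-as-process in plain OTP.
# Jido's real AgentServer wraps this pattern with actions, skills, and signals.
defmodule TinyAgent do
  use GenServer

  # One agent = one process holding its beliefs as state.
  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(opts), do: {:ok, %{beliefs: Keyword.get(opts, :beliefs, %{})}}

  # Synchronous read of the agent's current beliefs.
  def beliefs(pid), do: GenServer.call(pid, :beliefs)

  @impl true
  def handle_call(:beliefs, _from, state), do: {:reply, state.beliefs, state}
end

# A supervisor restarts the agent if it crashes mid-episode.
children = [
  %{id: TinyAgent, start: {TinyAgent, :start_link, [[beliefs: %{location: {0, 0}}]]}}
]

{:ok, sup} = Supervisor.start_link(children, strategy: :one_for_one)
```

The point is that the agent's lifetime is a runtime concern, not an application concern: the supervision tree, not your code, decides what happens on a crash.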

It ships with:

  • A 10-chapter curriculum (39 sessions; 4 learner paths — kid / real / equation / derivation — mapped to the AI-UMM levels from LEVEL UP).
  • A 50-recipe cookbook where every card is runnable end-to-end (/cookbook/pomdp-tiny-corridor, /cookbook/sophisticated-plan-tree-search, and 48 more).
  • A Lego-style Builder canvas for composing generative models from typed blocks.
  • Labs (stable fresh-agent-per-click runner) and Studio (flexible agent lifecycle: live / stopped / archived / trashed).
  • Glass Engine, which traces every signal an agent emits back to the equation in the book.
  • A live in-app tutor that sees the page you're on, knows what comes before and after, and can open the right part of the app for you when you get stuck.

The source is public under MIT: github.com/TMDLRG/TheORCHESTRATEActiveInferenceWorkbench.

The map

The Learn hub is the front door. Pick a path — story, real-world, equation, or derivation — and the entire curriculum rewrites itself to your vocabulary.

Learn hub — 10-chapter grid with 4 learning paths

Canonically from the book:

| # | Chapter | Hero concept |
| --- | --- | --- |
| 0 | Preface | Why this book exists — and how to read it. |
| 1 | Overview | Perception, action, learning — one loop, one theory. |
| 2 | The Low Road to Active Inference | From Bayes' rule to variational free energy — the minimal machinery. |
| 3 | The High Road to Active Inference | Expected Free Energy: the value of a plan, as a bill with two lines. |
| 4 | The Generative Models of Active Inference | Every belief, every action, every thought — inside one generative model. |
| 5 | Message Passing and Neurobiology | The cortex as a factor graph — and the neuromodulators as precision knobs. |
| 6 | A Recipe for Designing Active Inference Models | Ship your first agent — what's hidden, what's seen, what costs what. |
| 7 | Active Inference in Discrete Time | POMDPs in full colour — message passing, Dirichlet learning, hierarchy. |
| 8 | Active Inference in Continuous Time | Motion of the mode is the mode of the motion. |
| 9 | Model-Based Data Analysis | Fit an Active Inference model to real data — and know when to trust it. |
| 10 | Active Inference as a Unified Theory of Sentient Behavior | Where the theory goes — and where it bends. |

Those hero lines are not mine. They come from WorkbenchWeb.Book.Chapters — the canonical metadata table that drives the app, the curriculum, the cookbook cross-refs, and this series.

The 50-post arc

One post per major milestone. The series mirrors the curriculum exactly, so you can read alone, read-while-building, or pair each post with the matching Workbench route.

  • Posts 1–11: the 10 chapters + this preface — the conceptual spine.
  • Posts 12–50: a session at a time — the 39 workshop-length sessions under those chapters, each anchored in one runnable exercise.

Every post has three things:

  1. The idea, in plain English. No hand-waving, no "imagine a brain," no cute metaphors that dissolve under questioning.
  2. A screenshot of the Workbench running, showing the idea in motion.
  3. A Run it yourself block with a clickable route (/cookbook/<slug>, /labs?recipe=<slug>, /studio/run/<session_id>, or /equations/<id>) and exactly what to look for.

If you've cloned the repo, the routes are live at http://localhost:4000. If you haven't, you can still read — the screenshots are the ground truth.

How the suite teaches

There's a guide hub at /guide that covers the suite's own surfaces:

Guide hub — 14 topic cards

Four surfaces matter most in the first 10 posts:

  • /learn — the chapter grid above. Each chapter card drops you into 3–5 sessions with path-specific narration, an attributed book excerpt, figures, linked labs, and a short quiz.
  • /cookbook — 50 recipe cards. Click any one and press Run in Studio to watch a real Jido agent execute the idea against a real world.
  • /builder/new — drag blocks onto a canvas. Each block is a typed Jido.Action, Jido.Skill, or Jido.Agent; params round-trip through server-side Zoi validation.
  • /studio — tracked agents with a full lifecycle, so an agent you build in one session survives into the next.
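What does a "typed block" on the Builder canvas amount to? The sketch below is hypothetical — `MoveAction`, `run/2`, and the inline schema are invented for illustration and are not Jido's or Zoi's actual API — but it captures the contract: a block declares what parameters it accepts and rejects bad input before it ever touches an agent.

```elixir
# Hypothetical sketch of a typed Builder block. Jido actions and Zoi
# schemas are richer than this; the contract is the same: validate
# params first, then apply the action to the state.
defmodule MoveAction do
  @allowed_directions [:up, :down, :left, :right]

  # Validate params against the declared schema, then run the action.
  def run(%{direction: dir}, state) when is_map(state) do
    if dir in @allowed_directions do
      # Record the move in the state's action trace.
      {:ok, Map.update(state, :trace, [dir], &[dir | &1])}
    else
      {:error, {:invalid_direction, dir}}
    end
  end
end
```

Because validation happens server-side, a block wired up wrong on the canvas fails loudly at composition time instead of silently corrupting an episode.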

There's also a tutor-mentor that sits in the drawer on every page — local Qwen 3.6 if you have the weights, otherwise any OpenAI-compatible endpoint (OpenAI, Anthropic via LibreChat, Ollama, anything). It sees the page you're on. It knows the chapter, the session, the equation, the running episode's step count. On /labs/run it can answer why did the agent pick that action? and cite the actual policy posterior. That's a different post.

Why Elixir

Active Inference is a statement about processes — things that run, exchange messages, and hold state over time. BEAM was built for that. A Jido.AgentServer is a supervised process. A policy is a pure function. A directive is how an action describes its external effect. An episode is a GenServer that owns the loop.

No locks, no threadpool ceremony, no event-loop workarounds. The math maps onto the runtime almost one-to-one.
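Here is that mapping in miniature: a runnable toy, not the Workbench's code. `Episode` below is a `GenServer` that owns the loop, and `policy/1` is a pure function that picks the action with the lowest toy "surprise" score — every name and number in it is illustrative.

```elixir
# Toy episode loop: a GenServer owns the perceive -> plan -> act cycle;
# the policy is a pure function of the episode state. Illustrative only.
defmodule Episode do
  use GenServer

  def start_link(steps), do: GenServer.start_link(__MODULE__, steps)

  @impl true
  def init(steps) do
    send(self(), :step)
    {:ok, %{remaining: steps, history: []}}
  end

  # Pure policy: pick the action that minimizes a toy "surprise" score.
  defp policy(state), do: Enum.min_by([:left, :right, :stay], &surprise(state, &1))

  defp surprise(_state, :stay), do: 1.0
  defp surprise(_state, _move), do: 0.5

  @impl true
  def handle_info(:step, %{remaining: 0} = state), do: {:noreply, state}

  def handle_info(:step, state) do
    action = policy(state)
    send(self(), :step)
    {:noreply, %{state | remaining: state.remaining - 1, history: [action | state.history]}}
  end

  def history(pid), do: GenServer.call(pid, :history)

  @impl true
  def handle_call(:history, _from, state), do: {:reply, Enum.reverse(state.history), state}
end
```

Notice the separation: the process handles scheduling and state, the policy is just math. That is the shape the math of Active Inference asks for.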

And because BEAM agents are cheap, the app lets you spawn a dozen at once in Studio, watch them attach to different worlds, and treat the whole thing as a wind-tunnel for your intuition.
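"Cheap" here is literal. The snippet below spawns twelve bare processes and collects a message from each — plain `spawn`, not Jido, so the worlds and message shape are invented for the example — but it is the same mechanism that lets Studio run a dozen agents side by side.

```elixir
# Spawn twelve isolated processes, each "attaching" to its own world,
# and collect one message from each. Plain processes for illustration;
# in the Workbench each would be a supervised Jido agent.
parent = self()

pids =
  for world <- 1..12 do
    spawn(fn -> send(parent, {:attached, world, self()}) end)
  end

replies =
  for _ <- pids do
    receive do
      {:attached, world, _pid} -> world
    end
  end

IO.inspect(Enum.sort(replies)) # every world reports in
```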

How to follow along

  • Clone: git clone --recurse-submodules https://github.com/TMDLRG/TheORCHESTRATEActiveInferenceWorkbench.git
  • Start: ./scripts/start_suite.sh (macOS/Linux/Git Bash) or .\scripts\start_suite.ps1 (Windows)
  • Open: http://localhost:4000/learn
  • If you don't want to run it locally: the screenshots in every post are the same DOM the learner sees. You'll never miss anything by reading alone.

Next

Part 2: Chapter 1 — Perception, action, learning — one loop, one theory. We'll watch a tiny Jido agent close its first Perceive → Plan → Act loop on a 3×3 maze, and then trace every signal it emits back to Eq. 4.13.


⭐ The repo: github.com/TMDLRG/TheORCHESTRATEActiveInferenceWorkbench — MIT licensed, zero Python, no LLM required. Stars, forks, and issues welcome.

📖 The book: Active Inference by Parr, Pezzulo, Friston (MIT Press 2022, CC BY-NC-ND) — mitpress.mit.edu/9780262045353/active-inference

Follow along for Parts 2–50. Next up: the agent loop.
