Ask Claude to teach you a non-trivial concept and watch the shape of the reply. It almost always descends: definition, syntax, an example, maybe a second example, a closing sentence that is either a summary or a handoff. The descent is fine. The problem is there is no return.
The opening hook — the analogy or scenario the reader would have used to latch onto the concept — is never revisited. Syntax gets retained. Mechanism does not. A week later the reader can name the API but cannot explain why it works, because the concept was never re-grounded into the surface form they started in.
I teach myself with a fixed contract that forces the return. Five nodes. Every non-trivial concept travels them in order. If the teaching exchange does not close back on the opening analogy, the loop is incomplete and the concept has not landed — regardless of how detailed the middle was.
This is the rule I run in claude-code-agent-skills-framework under .claude/rules/concentric-loop.md. It carries a WHY tag and a retire-when clause so it decays cleanly when Claude's default teaching shape improves enough to make the rule unnecessary.
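The post does not reproduce the rule file itself, but a rule with a WHY tag and a retire-when clause might look roughly like this — a hypothetical sketch of the shape, not the actual contents of `concentric-loop.md`:

```markdown
# concentric-loop

WHY: Claude's default teaching shape descends (definition → syntax →
example) and never returns to the opening analogy, so mechanism is
never re-grounded in the surface form the reader started in.

RULE: Every non-trivial concept travels analogy → code → system
intermediaries → hardware/math → back to the SAME analogy. The
concept is not landed until the return fires and a transfer test
in a new surface form passes.

RETIRE-WHEN: Claude's default teaching shape closes the loop
unprompted in a clear majority of exchanges.
```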
## The five nodes
The loop travels analogy → code → system intermediaries → hardware/math → analogy. The final node is not a new analogy. It is the same analogy, now meaning more because the reader has traveled the physics underneath it.
| Node | What it is |
|---|---|
| 1. Analogy (OPEN) | A lived-experience scenario the reader already reasons fluently inside. Cooking, driving, a physical object in the room. |
| 2. Code | The first machine-legible layer. Python, TypeScript, shell. I/O contract named before any line is written. |
| 3. System intermediaries | OS syscalls, networking, file descriptors, memory management. What the kernel is doing when the code runs. |
| 4. Hardware / math | CPU/GPU, IEEE 754, linear algebra, probability, gradients. The physics of correctness. |
| 5. Analogy (RETURN) | The same analogy as node 1, reinterpreted through the descent. |
The descent is node 2 → 3 → 4. The return is not a victory lap. It is the gate that decides whether the reader has upgraded their mental model or only added facts to it.
## Practitioner anchors per node
Each node is pinned to a named methodology from a working 2025-2026 practitioner. If a teaching exchange cannot be traced back to at least one of these anchors for each node it traverses, the descent is ungrounded, and I pause and re-anchor.
Node 1 (analogy). No practitioner here — the anchor must come from the reader's own life, not a canon text. Borrowing an analogy from a book is how analogy inflation starts.
Node 2 (code / I-O framing) — Chip Huyen. Every code node opens with an explicit input/output specification: what goes in, what comes out, what data is available at what latency tier (online / nearline / offline). The latency-tier framing originates with Netflix (Amatriain and Basilico, 2013) and Huyen mainstreamed it in Designing Machine Learning Systems. The node sounds like: "Before we write the function, state the contract. What is the input type, the output type, and what data are you assuming is already computed vs. computed on demand?"
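A minimal sketch of what "state the contract before the function" can look like in Python. The names (`RankRequest`, `rank`, the score table) are hypothetical illustrations, not anything from Huyen's book — the point is that input type, output type, and data-availability assumptions are written down before the body:

```python
from dataclasses import dataclass

@dataclass
class RankRequest:
    user_id: str
    candidate_ids: list[str]   # produced by an offline batch job, available at request time

@dataclass
class RankResponse:
    ranked_ids: list[str]      # best-first

def rank(req: RankRequest, scores: dict[str, float]) -> RankResponse:
    """Contract, stated first:
    in  — RankRequest, plus a nearline score table (precomputed, may be stale)
    out — the same candidate_ids reordered by score, descending
    Nothing is computed on demand here; everything is assumed precomputed.
    """
    ordered = sorted(req.candidate_ids, key=lambda c: scores.get(c, 0.0), reverse=True)
    return RankResponse(ranked_ids=ordered)
```

The types do the teaching: a reader can answer "what goes in, what comes out, what is already computed" before reading a single line of the body.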
Node 3 (system intermediaries / empirical baseline) — Eugene Yan + Hamel Husain. Before any ML or agent component is introduced, Yan's "start with the problem, not the technology" asks: what regex, SQL, or rule-based filter already gets 50-70%? Before building on top of an LLM, Husain labels 20-100 real outputs by hand. The manual trace becomes the eval harness. The node sounds like: "Before we pick the model, show me the 30-line regex that would already solve half of this. Before we tune the prompt, label 25 outputs and tell me which ones are wrong and how."
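The "show me the regex that gets half of this" move can be made concrete. A hypothetical example — the task (flagging messages that contain an email address) and the hand-labeled cases are illustrative, but the pattern is the one Yan and Husain describe: baseline first, measured against labels you wrote yourself:

```python
import re

# Baseline before any model: one regex for "does this message contain an email?"
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def baseline_flag(text: str) -> bool:
    return bool(EMAIL.search(text))

# Hand-labeled examples, Husain-style: (text, contains_email).
# In practice this list would be 20-100 real outputs, labeled by hand.
labeled = [
    ("reach me at ana@example.com", True),
    ("ping the team channel instead", False),
    ("cc bob.smith+ops@corp.io on the thread", True),
    ("no address here", False),
]

accuracy = sum(baseline_flag(t) == y for t, y in labeled) / len(labeled)
```

Whatever the baseline scores becomes the number any model has to beat — and the labeled list becomes the eval harness.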
Node 4 (build-to-understand) — Jeremy Howard + Sebastian Raschka. Howard (fast.ai Part 1) runs top-down: get a working artifact end-to-end first, then spiral into mechanism. Raschka (Build a Large Language Model from Scratch) runs bottom-up: raw tensors, manual attention, everything by hand. The two balance each other. Howard for momentum ("I built a thing"), Raschka for mechanism ("I built the thing's guts"). The node sounds like: "Get it working end-to-end on 10 rows first — we will earn the right to optimize. Then we peel the library open and rebuild the core in 40 lines."
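The Raschka half of this node — rebuild the core by hand — can be felt in a few dozen lines. A sketch of scaled dot-product attention on plain Python lists, no tensor library, in the spirit of (but not copied from) Build a Large Language Model from Scratch:

```python
import math

def softmax(xs: list[float]) -> list[float]:
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q: list[list[float]], K: list[list[float]], V: list[list[float]]) -> list[list[float]]:
    """Scaled dot-product attention, one query row at a time."""
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        # Output is the weight-blended value vectors.
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out
```

When every key is identical, the weights are uniform and the output is just the mean of the value vectors — the kind of degenerate case that is invisible behind a library call and obvious in the hand-built version.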
Node 5 (atomic derivation / return) — Andrej Karpathy + Julia Evans. Karpathy: shrink the concept until it fits in your head. micrograd is 100 lines of autograd; nanoGPT is 300 lines of training. The return to the analogy lands when the reader has seen the guts small enough to hold. Evans is the OS descent safety net: when the abstraction leaks, drop to strace / tcpdump / perf / /proc. The node sounds like: "Now that you have traced the whole thing, reduce it to the 40-line version you can explain on a whiteboard. If the abstraction leaks, we run strace and watch the syscalls."
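What "small enough to hold" means is easiest to show. A minimal autograd scalar in the spirit of Karpathy's micrograd — a sketch, not the real library, supporting only `+` and `*` — that still demonstrates the whole mechanism of backpropagation:

```python
class Value:
    """A scalar that remembers how it was computed, so gradients can flow back."""
    def __init__(self, data: float, _children: tuple = ()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():          # d(a+b)/da = 1, d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():          # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topological order, then chain rule from the output back to the leaves.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()
```

This is the whiteboard version: every gradient a training run ever computes is this loop, scaled up.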
## Five failure modes the loop has to catch
The shape of the loop is not the hard part. The failure modes are.
1. Patronizing return. The closing analogy is simpler than the opening one. This signals the teacher condescended during the descent — softened the mechanism — and the reader gets a dumbed-down echo of where they started. The return must land at equal or higher sophistication, enriched by the descent.
2. Analogy leakage. The reader starts reasoning about the analogy instead of the system. "Functions are like machines that take input and return output" breaks at closures, first-class functions, recursion. If the analogy never breaks, the reader is reasoning about the analogy, not the mechanism. Gentner's structure-mapping says the places the analogy does NOT map are where the deepest meaning lives — surface at least one.
3. Descent-avoidance. The return becomes permission to skip math or OS. "We closed the loop on that one, moving on." The loop is not complete until nodes 3 AND 4 are traversed. A loop that skips the descent is a shape, not a proof.
4. Premature return. Returning before mechanism-level understanding lands. The Bransford transfer test below is the gate.
5. Analogy inflation. Multiple analogies accumulate. The reader remembers the metaphors and forgets the mechanisms. One analogy per concept. The return reuses the opening analogy. Do not replace.
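Failure mode 2 can be felt directly in code. The "functions are machines" analogy predicts stateless input-to-output behavior — and a closure breaks it, which is exactly the break worth surfacing:

```python
def make_counter():
    """Returns a function that remembers state between calls —
    something no 'machine that takes input and returns output' does."""
    count = 0
    def step() -> int:
        nonlocal count   # the inner function captures and mutates its enclosing scope
        count += 1
        return count
    return step
```

A reader still inside the machine analogy expects `step()` to return the same thing every call. The moment it does not, they are forced off the analogy and onto the mechanism (captured environments), which is the point.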
## The Bransford transfer test (loop-completion gate)
Bransford and Schwartz, "Rethinking Transfer" (1999), is the empirical spine. Transfer requires re-encountering the concept in a different surface form. Descent-only teaching produces what Whitehead (1929) called "inert knowledge" — the reader cannot apply it outside the original context.
After the return, pose a novel problem in a new surface form. Different domain, different vocabulary, different practitioner's framing. If the reader solves it, the loop worked. If the reader can only reproduce the original analogy, the loop collapsed into memorization and needs another descent with a different analogy or a deeper cut.
This test also works as an eval for Claude's output. Swap the system prompt or the input schema. Check whether the agent's correctness transfers or was memorized against the original framing. The three failure signals: (1) the agent can only reproduce the original analogy, (2) it solves the old problem but not the transfer, (3) it solves the transfer only after a hint.
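A hypothetical sketch of what that eval looks like as a harness — the names and case structure are illustrative, but the mechanic matches the text: score the same system under test on the original surface form and on a transfer set, and compare:

```python
from typing import Callable

def transfer_eval(solve: Callable, cases_original: list[tuple], cases_transfer: list[tuple]) -> dict:
    """Run the system under test against both surface forms.
    Each case is (input, expected). A large gap between the two
    accuracies is the memorization signal described above."""
    def accuracy(cases: list[tuple]) -> float:
        return sum(solve(x) == y for x, y in cases) / len(cases)
    return {
        "original": accuracy(cases_original),
        "transfer": accuracy(cases_transfer),
    }
```

If `original` is high and `transfer` is low, the correctness was memorized against the original framing — failure signal (2) from the list above.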
## Whole-loop practitioners worth studying
These four execute the entire loop in a single artifact — open, descent, return — and are worth reading to feel the shape end to end.
- Julia Evans — zines and blog (jvns.ca). Scenario opens, syscall/kernel middle, practical close.
- Jeremy Howard — fast.ai Part 1. Whole game first, then mechanism, then whole game with new eyes.
- Andrej Karpathy — Zero to Hero. Opens with analogy ("a neural net is just a function"), descends through autograd/backprop, returns to "now you can train GPT."
- Bret Victor — "Ladder of Abstraction." The loop is the argument.
Pick any post by any of the four. Mark where the open is, where the descent hits its deepest point, and where the return lands. You will start seeing the shape everywhere it exists and everywhere it is missing. The second list is longer than the first.
That is the whole practice. Contract the shape into a rule file. Apply it to every concept you explain and every concept Claude explains to you. Refuse to call the concept landed until the return fires.
Aman Bhandari. Operator of an AI-engineering research lab running Claude Opus as the coaching partner, plus a QA-automation surface shipping against a real sprint workload. Public artifacts: claude-code-agent-skills-framework and claude-code-mcp-qa-automation. github.com/aman-bhandari.