A chat log alone is a diary you cannot query. Atomic notes alone are a corpus you cannot narrate. Writing both, every working session, in real time, is what makes past work both retrievable and tellable without turning Claude into your personal journal service.
This is the pattern I run against every working session in the lab. Two artifacts, always. The narrative log feeds a future chronicle. The atomic wiki feeds retrieval — via an Obsidian graph for the human, and via a RAG corpus for the system. Same exchange, two audiences, two compounding artifacts.
Reference surfaces: claude-code-agent-skills-framework has the rule; claude-code-mcp-qa-automation demonstrates the same discipline applied to sprint data.
## Artifact 1 — The session log (narrative, chronological)
Lives at knowledge/sessions/YYYY-MM-DD.md. One file per session. Chronological, story-shaped, written in real time as the session unfolds.
The schema per exchange:
```markdown
## Exchange N — <concept name>

**I asked:** <verbatim or close — the actual words, confusion, analogy>
**What I already understood:** <the correct parts>
**What was missing:** <the gap the exchange addressed>
**The teaching exchange:** <the analogy used, the mechanism explained, the real-world context>
**Analogy-failure moment:** <where the opening analogy did not map to the technical reality — the sentence or mismatch that forced the mental model to upgrade>
**The breakthrough:** <the moment it clicked, in my own words>
**My return line:** <my own words at the close where the upgraded mental model landed — verbatim>
**Concept linked:** [[wiki-page]]
```
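Real-time capture gets cheaper when appending an entry is one function call. A minimal Python sketch: the `knowledge/sessions/YYYY-MM-DD.md` layout comes from the text, but `log_exchange` and its signature are invented for illustration, not the author's actual tooling.

```python
from datetime import date
from pathlib import Path

SESSIONS_DIR = Path("knowledge/sessions")  # assumed repo layout

def log_exchange(n, concept, beats, sessions_dir=SESSIONS_DIR):
    """Append one exchange entry to today's session log, creating it if needed.

    `beats` maps schema field names (e.g. "I asked") to their text,
    in the order they should appear.
    """
    sessions_dir = Path(sessions_dir)
    sessions_dir.mkdir(parents=True, exist_ok=True)
    path = sessions_dir / f"{date.today().isoformat()}.md"
    lines = [f"\n## Exchange {n} — {concept}\n"]
    for field, text in beats.items():
        lines.append(f"**{field}:** {text}\n")
    with path.open("a", encoding="utf-8") as f:  # append, never overwrite
        f.writelines(lines)
    return path
```

Because dicts preserve insertion order, the caller controls the beat order; a stricter version could assert that the analogy-failure and return-line beats are present before writing.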
Two beats in that schema are load-bearing and cannot be fabricated after the fact: the analogy-failure moment and the return line. These are the concentric-loop proof-of-completion signals. Without both, the loop has collapsed into mere documentation and the concept did not actually land.
The session log is not a transcript. It is a curated narrative of the teaching moves that worked, with the specific points where the mechanism upgraded the learner's mental model. A future narrator — whether me writing a chronicle, a chronicler agent reading the session logs, or a reader retracing how the learning happened — uses this file as the primary source.
## Artifact 2 — The atomic wiki (retrievable, cross-linked)
Lives at knowledge/wiki/<category>/<concept>.md. One file per atomic concept. Each file cross-links to related concepts using [[wiki-link]] syntax.
The shape per wiki page:
```markdown
# <concept-name>

## What it is
<one-paragraph definition in my own words>

## Mechanism
<what actually happens, at the layer the concept lives at>

## Related
- [[parent-concept]]
- [[sibling-concept-1]]
- [[sibling-concept-2]]

## First encountered
Session YYYY-MM-DD, Exchange N.

## Whiteboard-test status
Not yet tested / Passed on <date> / Fragile (retry scheduled)
```
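The [[wiki-link]] cross-links are what both the Obsidian graph and a retrieval layer traverse. One way to make that graph explicit is to extract it with a regex pass over the wiki directory. A speculative sketch; the regex and the directory layout are assumptions, not the author's tooling:

```python
import re
from pathlib import Path

# Matches Obsidian-style [[wiki-link]] targets.
LINK = re.compile(r"\[\[([^\]|]+)")

def link_graph(wiki_dir):
    """Map each wiki page (by filename stem) to the set of pages it links to."""
    graph = {}
    for page in Path(wiki_dir).rglob("*.md"):
        text = page.read_text(encoding="utf-8")
        graph[page.stem] = set(LINK.findall(text))
    return graph
```

Orphan pages (no inbound links) fall out of this for free: any stem that never appears in another page's value set is unreachable from the graph.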
Plus three index-level files:
- knowledge/wiki/index.md — updated on every page addition, with the new entry alphabetized.
- knowledge/wiki/log.md — append-only change log for every wiki edit.
- knowledge/wiki/<category>/ — directory structure by domain.
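Keeping index.md alphabetized and log.md append-only is mechanical enough to script. A hedged Python sketch, assuming the layout above; `register_page` and the exact log-entry format are invented for illustration:

```python
from datetime import datetime
from pathlib import Path

def register_page(wiki_dir, page_name, reason):
    """Add a page to index.md (re-alphabetized) and record why in log.md."""
    wiki = Path(wiki_dir)
    index = wiki / "index.md"
    # Collect existing bullet entries, add the new one, dedupe, and sort.
    entries = []
    if index.exists():
        entries = [l for l in index.read_text(encoding="utf-8").splitlines()
                   if l.startswith("- ")]
    entries.append(f"- [[{page_name}]]")
    index.write_text("# Index\n\n" + "\n".join(sorted(set(entries))) + "\n",
                     encoding="utf-8")
    # Append-only change log: timestamp, file, reason. Never rewritten.
    with (wiki / "log.md").open("a", encoding="utf-8") as log:
        log.write(f"- {datetime.now().isoformat(timespec='seconds')}"
                  f" — {page_name}: {reason}\n")
```

Note the asymmetry by design: index.md is rewritten in full on each call, while log.md is only ever opened in append mode.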
The wiki is built incrementally, one atomic concept per entry, cross-linked as it grows. Over months it becomes a second brain — readable by hand (as an Obsidian graph) and queryable by retrieval (as a RAG corpus against my own thinking).
## Why the split matters
Single-artifact patterns fail, and they fail in opposite directions.
**Session-log-only.** You have a chronological story. You cannot answer "what do I know about dict internals?" without scrolling through six months of session logs. The log is narrative but not retrievable.
**Wiki-only.** You have an atomic knowledge graph. You cannot answer "how did I come to understand this?" because the wiki page strips out the teaching exchange that produced the understanding. The wiki is retrievable but not narrative.
Both artifacts are the same exchange read two different ways. Writing both in real time costs 20% more than writing one, and buys you both retrieval and narrative — the expensive-to-reconstruct artifact is not the one you write, it is the one you skip.
## Why you cannot batch this
The analogy-failure moment and the return line are ephemeral. By the end of the session they have faded from working memory. By the next day they are reconstructed — which means they are lies. Accurate capture requires writing the exchange within minutes of it happening, while the specific words the learner used are still present.
Batching session capture to "I'll write it up on the weekend" produces sanitized summaries with none of the narrative texture a chronicler needs. The breakthrough beat is where a reader feels the concept land. That beat is in the exact words of the learner at the exact moment, not in a weekend summary written after four other sessions have blurred the original.
Real-time, or not at all.
## The log.md append-only change log
Every wiki edit gets an entry in knowledge/wiki/log.md. Timestamp, file edited, summary of the change, reason. Append-only.
This seems redundant (git already records edits), but it is not. Git records what changed; the log records why, in the maintainer's own words, at the time of the change. A git blame six months from now will show you the diff. The log will tell you why the diff was worth making.
For a corpus that compounds, the why is what keeps edits principled. Without it, the wiki drifts into a collection of edits made because some session surfaced something new — without any record of which edits were critical versus incidental. The log separates the two.
## The RAG corpus payoff
The wiki is written by me and readable by me, but it is also readable by a RAG system querying my own thinking. That second reader matters. Over months, the wiki becomes a retrieval corpus whose answers are grounded in the specific understanding of a specific learner — me — not in an average of the internet's understanding of the same concept.
A RAG query like "what did I conclude about backprop and why?" returns wiki pages I wrote, with cross-links to the sessions that produced them, and with my own wording of the mechanism. This is different from a generic search over Wikipedia or a generic LLM answer. It is grounded in one specific person's trajectory.
Building that corpus requires the atomic-wiki discipline. A session-log-only pattern does not produce it.
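To make the retrieval step concrete, here is an embedding-free stand-in: rank wiki pages by word overlap with the query. A real RAG stack would use embeddings and chunking; this sketch only illustrates why the corpus has to be atomic pages rather than one long session log.

```python
import re
from pathlib import Path

def retrieve(wiki_dir, query, k=3):
    """Return up to k wiki page names, ranked by word overlap with the query.

    A toy scorer standing in for an embedding-based retriever.
    """
    q = set(re.findall(r"\w+", query.lower()))
    scored = []
    for page in Path(wiki_dir).rglob("*.md"):
        words = set(re.findall(r"\w+", page.read_text(encoding="utf-8").lower()))
        score = len(q & words)
        if score:  # drop pages with no overlap at all
            scored.append((score, page.stem))
    return [name for _, name in sorted(scored, reverse=True)[:k]]
```

Because each page is one atomic concept, the hit is the answer's unit; against a session-log-only layout, the same query would return a whole day's narrative and leave the extraction to the reader.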
## The chronicle payoff
The session log is the raw material for a longer-form narrative — the chronicle of how the learning happened. A chronicler agent (or a human writing a blog series, or a learner retracing their own path a year later) reads the session logs as the source text.
The chronicle cannot be written without the narrative texture the session log preserves. The wiki gives you the facts. The session log gives you the story. Stories are what readers engage with; facts are what they refer back to. Both audiences need their own artifact.
## What this does NOT require
- Not every conversation. Trivial questions, casual checks, throwaway experiments do not earn the dual artifact. The rule fires on teaching exchanges that moved a concept from "not understood" to "understood" — the ones that matter.
- Not a perfect schema. The format above is what I use; you can adapt it. The two invariants are: (a) chronological narrative file with the analogy-failure and return-line beats preserved, and (b) atomic concept file that cross-links to related concepts.
- Not a substitute for code. Session logs and wiki pages are not where production work ships. They are where understanding is preserved so production work can build on it.
## The move
If you are serious about compounding your own learning: pick one working session this week. Write both artifacts from it. One narrative file, one wiki page per concept that landed. Cross-link the wiki pages to each other as you write them.
The discipline takes twenty extra minutes per session. The payoff compounds across every future session that retrieves from the resulting corpus.
Aman Bhandari. Operator of an AI-engineering research lab running Claude Opus as the coaching partner, plus a QA-automation surface shipping against a real sprint workload. Public artifacts: claude-code-agent-skills-framework and claude-code-mcp-qa-automation. github.com/aman-bhandari.