<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: tasuku fujioka</title>
    <description>The latest articles on DEV Community by tasuku fujioka (@tasuku9).</description>
    <link>https://dev.to/tasuku9</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3891542%2F4bf60be5-cf0e-4322-ad53-7a2fb299657e.jpeg</url>
      <title>DEV Community: tasuku fujioka</title>
      <link>https://dev.to/tasuku9</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tasuku9"/>
    <language>en</language>
    <item>
      <title>Git tracks what changed. It doesn't track why. So I built project-memory.</title>
      <dc:creator>tasuku fujioka</dc:creator>
      <pubDate>Wed, 22 Apr 2026 02:05:01 +0000</pubDate>
      <link>https://dev.to/tasuku9/git-tracks-what-changed-it-doesnt-track-why-4aac</link>
      <guid>https://dev.to/tasuku9/git-tracks-what-changed-it-doesnt-track-why-4aac</guid>
      <description>&lt;h2&gt;
  
  
  In 3 lines
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I built a skill that solves a real problem in long-running AI-assisted work: decisions, hypotheses, experiment results, and half-formed ideas keep disappearing.&lt;/li&gt;
&lt;li&gt;It has a &lt;strong&gt;promotion rule&lt;/strong&gt; so hypotheses do not silently become facts, and a &lt;strong&gt;priority rule&lt;/strong&gt; for resolving contradictions between files.&lt;/li&gt;
&lt;li&gt;It works not only for software projects, but also for &lt;strong&gt;research, exploratory engineering, and paper writing&lt;/strong&gt;, without depending on a specific model vendor.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Have you run into this?
&lt;/h2&gt;

&lt;p&gt;When you work with AI agents over a long period of time, the problem is not just that chats disappear.&lt;/p&gt;

&lt;p&gt;Even when the chat still exists, these kinds of things get lost:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;why you chose option A over option B&lt;/li&gt;
&lt;li&gt;what you already tried and why it failed&lt;/li&gt;
&lt;li&gt;a promising idea you mentioned once and never found again&lt;/li&gt;
&lt;li&gt;which hypotheses are still untested&lt;/li&gt;
&lt;li&gt;which assumptions are actually verified&lt;/li&gt;
&lt;li&gt;how last week's experiment connects to today's decision&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Typical examples look like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“We compared A and B and chose A.” → A month later: &lt;em&gt;Why did we choose A again?&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;“We tried X and it didn't work.” → Two weeks later: you accidentally try X again.&lt;/li&gt;
&lt;li&gt;“Maybe Y is worth exploring too.” → It disappears into the conversation.&lt;/li&gt;
&lt;li&gt;Research notes, experiment notes, and the actual paper draft stop lining up.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Git tracks &lt;strong&gt;what changed&lt;/strong&gt;.&lt;br&gt;
It does not reliably track &lt;strong&gt;why you changed it&lt;/strong&gt;, &lt;strong&gt;what you ruled out&lt;/strong&gt;, or &lt;strong&gt;what is still only a hypothesis&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And now that AI often writes commits too, even commit messages do not always reflect the human reasoning behind the work.&lt;/p&gt;


&lt;h2&gt;
  
  
  What I built
&lt;/h2&gt;

&lt;p&gt;I built an Agent Skill called &lt;code&gt;project-memory&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It manages project and research knowledge through &lt;strong&gt;role-separated markdown files&lt;/strong&gt; inside the repository.&lt;br&gt;
The AI agent updates them during work sessions. Humans mostly just read them when needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;README.md            ← entry point
CURRENT_STATE.md     ← things safe to treat as current working truth
ROADMAP.md           ← future plan
DECISION_LOG.md      ← decisions and why they were made
RESEARCH_LOG.md      ← experiments, investigation, observations
HYPOTHESIS_LAB.md    ← unverified ideas and hypotheses
HUMAN_BRIEF.md       ← the summary a human should read first
RECOVERY_NOTES.md    ← restart checkpoint after interruption
CONTEXT_MANIFEST.md  ← what to read, and in what order
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For research-heavy work, I also use files like these:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LITERATURE_NOTES.md  ← literature and reading notes
FIGURES_LOG.md       ← figures, diagrams, and output tracking
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No database.&lt;br&gt;
No vector store.&lt;br&gt;
No vendor lock-in.&lt;br&gt;
Just markdown files.&lt;/p&gt;

&lt;p&gt;But the important part is that the information is separated by &lt;strong&gt;state and responsibility&lt;/strong&gt;.&lt;/p&gt;
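&lt;p&gt;As a rough illustration of that separation (the repo ships its own &lt;code&gt;scripts/init_memory_workspace.py&lt;/code&gt;; the file names below match the article, but everything else here is a hypothetical sketch, not the script's actual code), bootstrapping the skeleton is just creating role-named files:&lt;/p&gt;

```python
from pathlib import Path

# Hypothetical sketch only: each file gets a role, nothing more.
MEMORY_FILES = {
    "CURRENT_STATE.md": "# Current state\n",
    "ROADMAP.md": "# Roadmap\n",
    "DECISION_LOG.md": "# Decision log\n",
    "RESEARCH_LOG.md": "# Research log\n",
    "HYPOTHESIS_LAB.md": "# Hypothesis lab\n\n## Raw sparks\n\n## Working hypotheses\n",
    "HUMAN_BRIEF.md": "# Human brief\n",
    "RECOVERY_NOTES.md": "# Recovery notes\n",
    "CONTEXT_MANIFEST.md": "# Context manifest\n",
}

def init_memory(root):
    """Create any missing memory files and return the directory listing."""
    root = Path(root)
    root.mkdir(parents=True, exist_ok=True)
    for name, header in MEMORY_FILES.items():
        path = root / name
        if not path.exists():  # never clobber existing notes
            path.write_text(header, encoding="utf-8")
    return sorted(p.name for p in root.iterdir())
```

&lt;p&gt;Plain files and a naming convention are the whole mechanism; that is what keeps it vendor-neutral.&lt;/p&gt;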


&lt;h2&gt;
  
  
  Why this is useful
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1. It keeps not only the decision, but the reason behind it
&lt;/h3&gt;

&lt;p&gt;The most useful file is probably &lt;code&gt;DECISION_LOG.md&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## DEC-014 — Authentication strategy&lt;/span&gt;

Date: 2026-04-15
Status: adopted

&lt;span class="gu"&gt;### Context&lt;/span&gt;
We needed to decide how to implement user authentication.

&lt;span class="gu"&gt;### Alternatives considered&lt;/span&gt;
&lt;span class="p"&gt;1.&lt;/span&gt; Session-based auth — simple, but harder to scale
&lt;span class="p"&gt;2.&lt;/span&gt; JWT — stateless, but token revocation becomes complicated
&lt;span class="p"&gt;3.&lt;/span&gt; OAuth2 + PKCE — standard, and easier to integrate with external IdPs

&lt;span class="gu"&gt;### Decision&lt;/span&gt;
Adopt OAuth2 + PKCE.

&lt;span class="gu"&gt;### Rationale&lt;/span&gt;
We expect to support Google and GitHub login later.
Starting with OAuth2 now reduces migration cost later.
We also rejected plain JWT because token revocation management would add operational complexity.

&lt;span class="gu"&gt;### Risks&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; The first implementation of the OAuth2 flow is more complex
&lt;span class="p"&gt;-&lt;/span&gt; Login depends on IdP availability

&lt;span class="gu"&gt;### Revisit when&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Running our own IdP exceeds X hours per month
&lt;span class="p"&gt;-&lt;/span&gt; User count exceeds 10,000 and rate limits start becoming a real problem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The benefit is not just that it says &lt;strong&gt;what we chose&lt;/strong&gt;.&lt;br&gt;
It also records:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what we rejected&lt;/li&gt;
&lt;li&gt;why we rejected it&lt;/li&gt;
&lt;li&gt;what risks remain&lt;/li&gt;
&lt;li&gt;when we should revisit the choice&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So one month later, “Why did we go with OAuth2 again?” is no longer a mystery buried in old chats.&lt;/p&gt;


&lt;h3&gt;
  
  
  2. Half-formed ideas do not disappear
&lt;/h3&gt;

&lt;p&gt;Some of the most important thoughts show up as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“maybe this is the cause”&lt;/li&gt;
&lt;li&gt;“this might be worth testing”&lt;/li&gt;
&lt;li&gt;“this is messy, but I don't want to lose it”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those ideas usually vanish first.&lt;/p&gt;

&lt;p&gt;So &lt;code&gt;HYPOTHESIS_LAB.md&lt;/code&gt; is split into two layers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Raw sparks&lt;/span&gt;

Low-commitment captures. Vague or incomplete ideas go here first.
&lt;span class="p"&gt;
-&lt;/span&gt; HYP-031: The latency problem might be DNS resolution, not N+1 queries
&lt;span class="p"&gt;-&lt;/span&gt; HYP-032: User drop-off might be caused by onboarding screen 3, not pricing

&lt;span class="gu"&gt;## Working hypotheses&lt;/span&gt;

Ideas that keep coming back, or already have a clear next step.

&lt;span class="gu"&gt;### HYP-018 — Cache invalidation timing is causing the performance regression&lt;/span&gt;

Status: testing
Originated: 2026-04-10
Evidence: RSC-039
Next step: change TTL from 30s to 120s and measure for one week
Revisit: after measurement results arrive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The important part is this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;the AI captures these automatically.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It does not stop and ask, “Do you want me to log that?”&lt;br&gt;
If the user says something like “maybe this is relevant,” it can go straight into &lt;code&gt;Raw sparks&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can clean up later.&lt;br&gt;
But once a useful idea is lost in a conversation, it usually stays lost.&lt;/p&gt;
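&lt;p&gt;A minimal sketch of that capture step (illustrative only; the &lt;code&gt;capture_spark&lt;/code&gt; helper and the zero-padded id scheme are my assumptions, not the skill's actual implementation):&lt;/p&gt;

```python
import re
from pathlib import Path

def capture_spark(lab_path, idea):
    """Append a low-commitment idea under Raw sparks with the next HYP id."""
    path = Path(lab_path)
    text = path.read_text(encoding="utf-8") if path.exists() else "## Raw sparks\n"
    # Find the highest existing HYP-NNN id anywhere in the file.
    ids = [int(m) for m in re.findall(r"HYP-(\d+)", text)]
    next_id = max(ids, default=0) + 1
    entry = f"- HYP-{next_id:03d}: {idea}\n"
    path.write_text(text.rstrip("\n") + "\n" + entry, encoding="utf-8")
    return f"HYP-{next_id:03d}"
```

&lt;p&gt;The point is the low bar: one append, no questions asked, cleanup deferred.&lt;/p&gt;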


&lt;h3&gt;
  
  
  3. Hypotheses do not silently turn into facts
&lt;/h3&gt;

&lt;p&gt;This is probably the core design choice.&lt;/p&gt;

&lt;p&gt;A very common failure mode in long-running projects is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Someone writes “I think X might be true.”&lt;br&gt;&lt;br&gt;
A few weeks later, everyone is acting as if X has already been confirmed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To prevent that, &lt;code&gt;project-memory&lt;/code&gt; has a &lt;strong&gt;promotion rule&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To move something from &lt;code&gt;HYPOTHESIS_LAB.md&lt;/code&gt; into &lt;code&gt;CURRENT_STATE.md&lt;/code&gt;, at least one of the following must exist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;evidence in &lt;code&gt;RESEARCH_LOG.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;an explicit operating decision in &lt;code&gt;DECISION_LOG.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;a clear user decision&lt;/li&gt;
&lt;li&gt;support from an external source&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the promoted item must also satisfy all of these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it is clear enough to be used as a stable working assumption&lt;/li&gt;
&lt;li&gt;the source file is traceable&lt;/li&gt;
&lt;li&gt;it is not dominated by unresolved objections&lt;/li&gt;
&lt;li&gt;it has a &lt;code&gt;revisit_when&lt;/code&gt; condition&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the rule is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;capture broadly&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;promote narrowly&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is how you avoid losing ideas without polluting your working truth.&lt;/p&gt;
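&lt;p&gt;The promotion rule can be sketched as a checklist (the field names here are illustrative, not the skill's real schema): at least one source of support, plus all of the stability conditions:&lt;/p&gt;

```python
def can_promote(hyp):
    """Return True if a hypothesis may move into CURRENT_STATE.md."""
    # At least one of these must exist.
    support = any([
        hyp.get("evidence_ids"),         # entries in RESEARCH_LOG.md
        hyp.get("decision_id"),          # an entry in DECISION_LOG.md
        hyp.get("user_decision"),        # an explicit user call
        hyp.get("external_source"),      # support from outside
    ])
    # All of these must hold.
    stable = all([
        hyp.get("stated_clearly"),       # usable as a working assumption
        hyp.get("source_file"),          # traceable origin
        not hyp.get("open_objections"),  # no unresolved objections dominate
        hyp.get("revisit_when"),         # a revisit condition exists
    ])
    return support and stable
```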


&lt;h3&gt;
  
  
  4. Contradictions between files are not silently merged away
&lt;/h3&gt;

&lt;p&gt;In long-running work, contradictions happen.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;CURRENT_STATE.md&lt;/code&gt; says “we are using A”&lt;/li&gt;
&lt;li&gt;but the latest &lt;code&gt;DECISION_LOG.md&lt;/code&gt; says “we switched to B”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If an AI agent quietly merges that into a vague compromise, you end up with state nobody can trust.&lt;/p&gt;

&lt;p&gt;So I define an explicit priority order:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;latest entry in &lt;code&gt;CURRENT_STATE.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;latest entry in &lt;code&gt;DECISION_LOG.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;latest entry in &lt;code&gt;RESEARCH_LOG.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;RECOVERY_NOTES.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;HUMAN_BRIEF.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ROADMAP.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;HYPOTHESIS_LAB.md&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When the agent detects a conflict, it should not silently “fix” it.&lt;br&gt;
It should &lt;strong&gt;report the contradiction and propose a patch&lt;/strong&gt;.&lt;/p&gt;
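&lt;p&gt;A hedged sketch of that resolution step (the claim format and the &lt;code&gt;resolve&lt;/code&gt; helper are my assumptions; the file priority order is the skill's):&lt;/p&gt;

```python
PRIORITY = [
    "CURRENT_STATE.md",
    "DECISION_LOG.md",
    "RESEARCH_LOG.md",
    "RECOVERY_NOTES.md",
    "HUMAN_BRIEF.md",
    "ROADMAP.md",
    "HYPOTHESIS_LAB.md",
]

def resolve(claims):
    """claims: list of {"source": file, "value": text} about one topic.
    Returns the winning value plus a conflict report, never a silent merge."""
    ranked = sorted(claims, key=lambda c: PRIORITY.index(c["source"]))
    winner = ranked[0]
    conflict = any(c["value"] != winner["value"] for c in ranked[1:])
    report = None
    if conflict:
        report = (f"CONFLICT: {winner['source']} says {winner['value']!r}; "
                  f"lower-priority files disagree. Propose a patch, do not merge.")
    return winner["value"], report
```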


&lt;h3&gt;
  
  
  5. Parallel work becomes visible
&lt;/h3&gt;

&lt;p&gt;In real projects and research, there is rarely only one thread.&lt;/p&gt;

&lt;p&gt;You may be doing several of these at once:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;implementation&lt;/li&gt;
&lt;li&gt;experiments&lt;/li&gt;
&lt;li&gt;literature review&lt;/li&gt;
&lt;li&gt;writing the paper itself&lt;/li&gt;
&lt;li&gt;preparing figures&lt;/li&gt;
&lt;li&gt;submission work&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A pure chronological log mixes these together and becomes hard to scan.&lt;/p&gt;

&lt;p&gt;So &lt;code&gt;HUMAN_BRIEF.md&lt;/code&gt; includes a &lt;code&gt;Tracked threads&lt;/code&gt; table.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Tracked threads&lt;/span&gt;

| Thread | Status | Next action / blocker | Source |
| --- | --- | --- | --- |
| Authentication migration | active | Waiting for OAuth2 test results | RSC-042 |
| Performance audit | paused | Waiting for staging deployment | BLK-003 |
| Billing model design | active | Comparing 3 pricing options | HYP-028 |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file is &lt;strong&gt;not&lt;/strong&gt; updated on every small change.&lt;br&gt;
It only changes when the human-facing picture changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;current goal&lt;/li&gt;
&lt;li&gt;major blocker&lt;/li&gt;
&lt;li&gt;key risk&lt;/li&gt;
&lt;li&gt;next decision&lt;/li&gt;
&lt;li&gt;tracked threads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That keeps it short and actually readable.&lt;/p&gt;
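&lt;p&gt;One way an agent could keep that table consistent (purely illustrative; the skill does not prescribe this helper) is to regenerate it from structured thread records rather than hand-editing markdown:&lt;/p&gt;

```python
def render_threads(threads):
    """Render the Tracked threads section from a list of thread records."""
    header = "| Thread | Status | Next action / blocker | Source |"
    sep = "| --- | --- | --- | --- |"
    rows = [
        f"| {t['name']} | {t['status']} | {t['next']} | {t['source']} |"
        for t in threads
    ]
    return "\n".join(["## Tracked threads", "", header, sep] + rows)
```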


&lt;h2&gt;
  
  
  It works for research and paper writing too
&lt;/h2&gt;

&lt;p&gt;This system is not only for software projects.&lt;br&gt;
It works surprisingly well for &lt;strong&gt;research, writing, and exploratory analysis&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Common problems in paper-writing workflows
&lt;/h3&gt;

&lt;p&gt;When you work on research with AI, these things happen a lot:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;literature notes become scattered&lt;/li&gt;
&lt;li&gt;hypotheses and sourced claims get mixed together&lt;/li&gt;
&lt;li&gt;figures drift away from the argument they were meant to support&lt;/li&gt;
&lt;li&gt;abandoned directions get rediscovered and repeated&lt;/li&gt;
&lt;li&gt;you forget which source supported which claim&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  How the research profile handles it
&lt;/h3&gt;

&lt;p&gt;In research-heavy work, I split things like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LITERATURE_NOTES.md  ← key points, objections, citation candidates
RESEARCH_LOG.md      ← comparisons, tests, investigation, analysis
HYPOTHESIS_LAB.md    ← ideas not ready for the paper yet
DECISION_LOG.md      ← decisions about structure, method, scope
CURRENT_STATE.md     ← assumptions safe to use in the current draft
FIGURES_LOG.md       ← figure ideas, adoption reasons, revision history
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  A concrete example
&lt;/h3&gt;

&lt;p&gt;Suppose you are writing a paper in comparative mythology.&lt;/p&gt;

&lt;p&gt;A workflow might look like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After reading a source, write into &lt;code&gt;LITERATURE_NOTES.md&lt;/code&gt;:
what it claims, where it might be useful, and where it is weak&lt;/li&gt;
&lt;li&gt;While thinking, you say:
“maybe this myth motif spread through a different transmission route”
→ that goes into &lt;code&gt;HYPOTHESIS_LAB.md&lt;/code&gt; under &lt;code&gt;Raw sparks&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;When you actually build comparison tables or check primary sources, that goes into &lt;code&gt;RESEARCH_LOG.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;When you decide, “we will use this comparison axis in section 3,” that goes into &lt;code&gt;DECISION_LOG.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Only the things safe to treat as current draft assumptions go into &lt;code&gt;CURRENT_STATE.md&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That way, &lt;strong&gt;hypothesis&lt;/strong&gt;, &lt;strong&gt;investigation&lt;/strong&gt;, and &lt;strong&gt;adopted working truth&lt;/strong&gt; do not get mixed together.&lt;/p&gt;

&lt;p&gt;And that matters, because papers are not just the final claims.&lt;br&gt;
A lot of value is in the path that led you there.&lt;br&gt;
If that path disappears, you often end up rethinking the same ground from scratch.&lt;/p&gt;
&lt;h3&gt;
  
  
  Figure logs help more than expected
&lt;/h3&gt;

&lt;p&gt;Figures drift easily in research workflows.&lt;br&gt;
A chart gets made, then later you wonder:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Why did we make this figure in the first place?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;FIGURES_LOG.md&lt;/code&gt; helps by recording:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what the figure is for&lt;/li&gt;
&lt;li&gt;which section it supports&lt;/li&gt;
&lt;li&gt;which version was adopted&lt;/li&gt;
&lt;li&gt;which versions were rejected&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That reduces a surprising amount of confusion later.&lt;/p&gt;

&lt;p&gt;So in practice, &lt;code&gt;project-memory&lt;/code&gt; is less about “developer notes” and more about &lt;strong&gt;state management for long-running thinking work with AI&lt;/strong&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  A failure mode this avoids
&lt;/h2&gt;

&lt;p&gt;I have seen this break things many times:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;someone says “this is probably the cause” in conversation&lt;/li&gt;
&lt;li&gt;it gets copied into a README or note&lt;/li&gt;
&lt;li&gt;a few weeks later it is treated as established fact&lt;/li&gt;
&lt;li&gt;implementation or writing proceeds based on it&lt;/li&gt;
&lt;li&gt;nobody can trace where it came from&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once that happens, the cleanup cost is high.&lt;/p&gt;

&lt;p&gt;That is why &lt;code&gt;project-memory&lt;/code&gt; keeps this distinction strict:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ideas can be captured broadly in hypotheses,&lt;br&gt;&lt;br&gt;
but &lt;code&gt;CURRENT_STATE.md&lt;/code&gt; must stay strict.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That separation matters a lot.&lt;/p&gt;


&lt;h2&gt;
  
  
  How to use it
&lt;/h2&gt;
&lt;h3&gt;
  
  
  New project
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/tasuku-9/project-memory-skill
python scripts/init_memory_workspace.py /path/to/project &lt;span class="nt"&gt;--profile&lt;/span&gt; research
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then tell your AI agent something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Read SKILL.md and start in Init mode.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It can then ask about the goal, current state, immediate next target, and known constraints, and write the initial files.&lt;/p&gt;

&lt;h3&gt;
  
  
  Existing project
&lt;/h3&gt;

&lt;p&gt;If the repository already has code and documents, use an adoption flow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Adopt project-memory into this project.
Classify information from the README, existing docs, and git history,
and route it into the correct files.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AI can then separate things like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;README stays the entry point&lt;/li&gt;
&lt;li&gt;decisions go into &lt;code&gt;DECISION_LOG.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;experiments go into &lt;code&gt;RESEARCH_LOG.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;hypotheses go into &lt;code&gt;HYPOTHESIS_LAB.md&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Being able to turn an overloaded README back into a real entry point is one of the nicest side effects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resume a session
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resume this project.
Read CONTEXT_MANIFEST.md and RECOVERY_NOTES.md first,
and tell me what I should do next.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Switch models
&lt;/h3&gt;

&lt;p&gt;You can move from Claude to GPT to Gemini more easily if the repo itself holds the durable memory.&lt;/p&gt;

&lt;p&gt;The point is that the &lt;strong&gt;repository markdown is the canonical memory&lt;/strong&gt;, not the chat thread.&lt;/p&gt;




&lt;h2&gt;
  
  
  The three profiles
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;profile&lt;/th&gt;
&lt;th&gt;use case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;light&lt;/td&gt;
&lt;td&gt;small projects, personal notes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;standard&lt;/td&gt;
&lt;td&gt;normal long-running projects&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;research&lt;/td&gt;
&lt;td&gt;research, experiments, exploratory engineering, paper workflows&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If full structure feels heavy, start with &lt;code&gt;light&lt;/code&gt; and move up only when needed.&lt;/p&gt;




&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why not just put everything in the README?
&lt;/h3&gt;

&lt;p&gt;Because READMEs bloat fast.&lt;/p&gt;

&lt;p&gt;They start absorbing all of this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;setup instructions&lt;/li&gt;
&lt;li&gt;current status&lt;/li&gt;
&lt;li&gt;decision history&lt;/li&gt;
&lt;li&gt;hypotheses&lt;/li&gt;
&lt;li&gt;experiment notes&lt;/li&gt;
&lt;li&gt;recovery steps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once all of that lives in one file, people stop maintaining it, and then they stop trusting it.&lt;/p&gt;

&lt;p&gt;The whole point of &lt;code&gt;project-memory&lt;/code&gt; is to separate files by responsibility.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can smaller models handle this?
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;light&lt;/code&gt; profile probably can.&lt;/p&gt;

&lt;p&gt;But &lt;code&gt;standard&lt;/code&gt; and especially &lt;code&gt;research&lt;/code&gt; depend on classification quality and on following the promotion rule consistently, so stronger models will be more reliable.&lt;/p&gt;

&lt;p&gt;That said, if you are already doing this kind of long-running AI-assisted work, you are probably using a stronger model anyway.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is it really safe to let AI write the memory?
&lt;/h3&gt;

&lt;p&gt;It is safer if the AI is not writing into one giant undifferentiated note.&lt;/p&gt;

&lt;p&gt;The rules are what make it work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ideas go broadly into &lt;code&gt;HYPOTHESIS_LAB.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;experiments and observations go into &lt;code&gt;RESEARCH_LOG.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;decisions go into &lt;code&gt;DECISION_LOG.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;CURRENT_STATE.md&lt;/code&gt; has strict promotion requirements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the agent is not just “writing stuff down.”&lt;br&gt;
It is routing information by state.&lt;/p&gt;

&lt;h3&gt;
  
  
  Are there already similar skills?
&lt;/h3&gt;

&lt;p&gt;There are similar ideas in public memory and project-notes systems.&lt;/p&gt;

&lt;p&gt;But among the examples I found, few combine all of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;repository markdown as canonical memory&lt;/li&gt;
&lt;li&gt;promotion rules for hypotheses&lt;/li&gt;
&lt;li&gt;contradiction detection between files&lt;/li&gt;
&lt;li&gt;visibility for parallel threads&lt;/li&gt;
&lt;li&gt;an adoption flow for existing projects&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So this is my attempt to generalize a workflow I had been developing for myself.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final thought
&lt;/h2&gt;

&lt;p&gt;What I want from this skill is actually pretty simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;do not lose ideas&lt;/li&gt;
&lt;li&gt;do not let hypotheses silently become facts&lt;/li&gt;
&lt;li&gt;keep the reason behind decisions&lt;/li&gt;
&lt;li&gt;connect experiments to judgments&lt;/li&gt;
&lt;li&gt;make context portable across models&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you work with AI for long stretches, I think this matters more than how long any single chat can get.&lt;/p&gt;

&lt;p&gt;What matters is &lt;strong&gt;what gets preserved, where, and under what rules&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And if that part is handled well, you stop having to re-derive the same reasoning every time you come back to the work.&lt;/p&gt;




&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;code&gt;tasuku-9/project-memory-skill&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;PR to &lt;code&gt;anthropics/skills&lt;/code&gt;: &lt;code&gt;#1001&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;License: MIT&lt;/li&gt;
&lt;li&gt;Tools: Claude Code, Codex CLI, Gemini CLI, Cursor&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you try it and find something useful, or something broken, an issue would be appreciated.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>claude</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
