<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: tasuku fujioka</title>
    <description>The latest articles on DEV Community by tasuku fujioka (@tasuku9).</description>
    <link>https://dev.to/tasuku9</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3891542%2F4bf60be5-cf0e-4322-ad53-7a2fb299657e.jpeg</url>
      <title>DEV Community: tasuku fujioka</title>
      <link>https://dev.to/tasuku9</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tasuku9"/>
    <language>en</language>
    <item>
      <title>Git tracks what changed. It doesn't track why</title>
      <dc:creator>tasuku fujioka</dc:creator>
      <pubDate>Wed, 22 Apr 2026 02:05:01 +0000</pubDate>
      <link>https://dev.to/tasuku9/git-tracks-what-changed-it-doesnt-track-why-4aac</link>
      <guid>https://dev.to/tasuku9/git-tracks-what-changed-it-doesnt-track-why-4aac</guid>
      <description>&lt;p&gt;Last month I lost a three-week conversation with Claude. API timeout, session gone. All the context — the hypotheses we'd tested, the decisions we'd made, the dead ends we'd explored — vanished.&lt;/p&gt;

&lt;p&gt;I spent two hours trying to reconstruct it from memory. I got maybe 60% back.&lt;/p&gt;

&lt;p&gt;That was the last time.&lt;/p&gt;

&lt;h2&gt;The problem nobody names&lt;/h2&gt;

&lt;p&gt;If you use AI agents for anything longer than a single session, you've hit this wall:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chat disappears. You start over.&lt;/li&gt;
&lt;li&gt;You switch models. The new one knows nothing.&lt;/li&gt;
&lt;li&gt;A teammate needs context. There's no doc to hand them.&lt;/li&gt;
&lt;li&gt;Three months later, you can't remember why you chose approach A over B.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Git doesn't solve this. Git tracks &lt;em&gt;what changed&lt;/em&gt; in code. It doesn't track &lt;em&gt;why you made that decision&lt;/em&gt;, &lt;em&gt;what you tried and ruled out&lt;/em&gt;, or &lt;em&gt;which ideas are still unverified hunches vs. confirmed facts&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;And as AI agents write more and more of the commits, even the commit messages are losing human intent.&lt;/p&gt;

&lt;h2&gt;What I built&lt;/h2&gt;

&lt;p&gt;I created an Agent Skill called &lt;a href="https://github.com/tasuku-9/project-memory-skill" rel="noopener noreferrer"&gt;project-memory&lt;/a&gt; that turns a project folder into a durable memory system. The AI agent maintains it during the session. The human reads it when needed.&lt;/p&gt;

&lt;p&gt;It separates project knowledge into canonical markdown files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CURRENT_STATE.md&lt;/strong&gt; — what is true right now&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DECISION_LOG.md&lt;/strong&gt; — what was decided and why&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RESEARCH_LOG.md&lt;/strong&gt; — what was tested or observed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HYPOTHESIS_LAB.md&lt;/strong&gt; — what is still unverified&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ROADMAP.md&lt;/strong&gt; — what is planned&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HUMAN_BRIEF.md&lt;/strong&gt; — what a human should read first&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RECOVERY_NOTES.md&lt;/strong&gt; — how to resume after interruption&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No database. No embeddings. No vendor lock-in. Just markdown files in your repo.&lt;/p&gt;

&lt;h2&gt;Why not just use a README?&lt;/h2&gt;

&lt;p&gt;Because READMEs become dumps. Setup instructions, current state, decisions, hypotheses, and recovery notes all collapse into one overloaded file. Nobody maintains it, and nobody trusts it.&lt;/p&gt;

&lt;p&gt;project-memory keeps each concern in its own file with clear ownership rules. The AI agent knows exactly where to write each type of information.&lt;/p&gt;

&lt;h2&gt;Two features I couldn't find anywhere else&lt;/h2&gt;

&lt;p&gt;I looked through the Agent Skills ecosystem — awesome-agent-skills, LobeHub, skills.sh — and couldn't find anything that solves these two problems:&lt;/p&gt;

&lt;h3&gt;1. Promotion rules&lt;/h3&gt;

&lt;p&gt;A hypothesis in &lt;code&gt;HYPOTHESIS_LAB.md&lt;/code&gt; &lt;strong&gt;cannot&lt;/strong&gt; be promoted to &lt;code&gt;CURRENT_STATE.md&lt;/code&gt; without:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Evidence recorded in &lt;code&gt;RESEARCH_LOG.md&lt;/code&gt;, or&lt;/li&gt;
&lt;li&gt;An explicit decision recorded in &lt;code&gt;DECISION_LOG.md&lt;/code&gt;, or&lt;/li&gt;
&lt;li&gt;A clearly stated user decision&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This sounds obvious, but it's the thing that breaks most project documentation. Someone writes "we should probably use OAuth2" in a note, and six weeks later everyone treats it as a decided fact. Nobody remembers it was just a hunch.&lt;/p&gt;

&lt;p&gt;project-memory makes the status transitions explicit. Hypotheses stay hypotheses until proven.&lt;/p&gt;
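&lt;p&gt;The rule amounts to a simple gate. Here is a minimal sketch of that gate; the field names (&lt;code&gt;evidence_ids&lt;/code&gt;, &lt;code&gt;decision_ids&lt;/code&gt;, &lt;code&gt;user_confirmed&lt;/code&gt;) are illustrative assumptions, not the skill's actual API:&lt;/p&gt;

```python
# Sketch of the promotion rule: a hypothesis may move to CURRENT_STATE.md
# only when at least one supporting source exists. Names and fields are
# illustrative, not the skill's real implementation.

def can_promote(hypothesis: dict) -> bool:
    """True if the hypothesis is backed by evidence, a logged decision,
    or an explicit user decision."""
    return bool(
        hypothesis.get("evidence_ids")       # entries in RESEARCH_LOG.md
        or hypothesis.get("decision_ids")    # entries in DECISION_LOG.md
        or hypothesis.get("user_confirmed")  # clearly stated user decision
    )

hunch = {"id": "HYP-007", "claim": "We should probably use OAuth2"}
assert not can_promote(hunch)        # stays in HYPOTHESIS_LAB.md

hunch["evidence_ids"] = ["RSC-042"]  # evidence recorded -> promotable
assert can_promote(hunch)
```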

&lt;h3&gt;2. Conflict resolution&lt;/h3&gt;

&lt;p&gt;When files disagree — and they will, over months of work — the skill defines a priority order:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;CURRENT_STATE.md&lt;/code&gt; (latest entry)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DECISION_LOG.md&lt;/code&gt; (latest dated entry)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RESEARCH_LOG.md&lt;/code&gt; (latest dated entry)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;RECOVERY_NOTES.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;HUMAN_BRIEF.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ROADMAP.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;HYPOTHESIS_LAB.md&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The agent reports the conflict and proposes a patch. No silent merging.&lt;/p&gt;
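&lt;p&gt;The priority order boils down to a first-match lookup. The file order below mirrors the list above; the resolution logic itself is an illustrative sketch, not the skill's actual code:&lt;/p&gt;

```python
# Sketch of the conflict-resolution order: when files disagree, the
# highest-priority file wins and the rest are reported as a conflict.
PRIORITY = [
    "CURRENT_STATE.md",
    "DECISION_LOG.md",
    "RESEARCH_LOG.md",
    "RECOVERY_NOTES.md",
    "HUMAN_BRIEF.md",
    "ROADMAP.md",
    "HYPOTHESIS_LAB.md",
]

def resolve(claims: dict) -> tuple:
    """Given conflicting claims keyed by file name, return the winning
    (file, claim) pair; everything else would be surfaced to the user."""
    for name in PRIORITY:
        if name in claims:
            return name, claims[name]
    raise ValueError("no claim from a known memory file")

winner = resolve({
    "ROADMAP.md": "auth migration planned for Q3",
    "DECISION_LOG.md": "auth migration moved up to Q2",
})
assert winner[0] == "DECISION_LOG.md"  # decisions outrank the roadmap
```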

&lt;h2&gt;Model-independent by design&lt;/h2&gt;

&lt;p&gt;This is the part I'm most proud of. The memory lives in plain markdown — not in Claude's memory feature, not in ChatGPT's custom instructions, not in any tool's hidden state.&lt;/p&gt;

&lt;p&gt;I've tested this across multiple models: hand over the folder, and the new model picks up where the last one left off. Context recovery takes seconds.&lt;/p&gt;

&lt;p&gt;This matters more than it sounds. Today you're using Claude. Tomorrow you might switch to Gemini or GPT for a specific task. Your project memory shouldn't be locked inside any single vendor.&lt;/p&gt;

&lt;h2&gt;Three profiles for different scales&lt;/h2&gt;

&lt;p&gt;Not every project needs 11 files. The skill comes with three profiles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Light&lt;/strong&gt; — for small projects. Just &lt;code&gt;CURRENT_STATE.md&lt;/code&gt;, &lt;code&gt;RECOVERY_NOTES.md&lt;/code&gt;, and a logbook.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standard&lt;/strong&gt; — for general long-running work. Adds decision log, hypothesis lab, and human brief.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Research&lt;/strong&gt; — for experiments and investigation. Adds &lt;code&gt;RESEARCH_LOG.md&lt;/code&gt; with evidence and confidence fields.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python scripts/init_memory_workspace.py /path/to/project &lt;span class="nt"&gt;--profile&lt;/span&gt; standard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;How the AI agent uses it&lt;/h2&gt;

&lt;p&gt;You don't maintain these files yourself. The skill instructs the agent to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Classify new information by type (fact, decision, hypothesis, evidence)&lt;/li&gt;
&lt;li&gt;Route it to the correct file&lt;/li&gt;
&lt;li&gt;Update &lt;code&gt;RECOVERY_NOTES.md&lt;/code&gt; at session end&lt;/li&gt;
&lt;li&gt;Review &lt;code&gt;HUMAN_BRIEF.md&lt;/code&gt; when the big picture changes&lt;/li&gt;
&lt;li&gt;Never promote a hypothesis without evidence&lt;/li&gt;
&lt;/ol&gt;
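&lt;p&gt;Steps 1 and 2 can be sketched roughly like this. The keyword heuristic and the mapping are illustrative assumptions — the skill leaves classification to the agent's judgment, not to keywords:&lt;/p&gt;

```python
# Rough sketch of classify-then-route. A real agent classifies from
# context; the keyword rules here only illustrate the shape of the step.
ROUTES = {
    "fact": "CURRENT_STATE.md",
    "decision": "DECISION_LOG.md",
    "hypothesis": "HYPOTHESIS_LAB.md",
    "evidence": "RESEARCH_LOG.md",
}

def classify(note: str) -> str:
    """Crude keyword heuristic standing in for the agent's judgment."""
    lowered = note.lower()
    if "decided" in lowered or "we will" in lowered:
        return "decision"
    if "maybe" in lowered or "probably" in lowered:
        return "hypothesis"
    if "measured" in lowered or "observed" in lowered:
        return "evidence"
    return "fact"

def route(note: str) -> str:
    """Return the memory file a note of this type belongs in."""
    return ROUTES[classify(note)]

assert route("We decided to use OAuth2") == "DECISION_LOG.md"
assert route("Caching would probably help") == "HYPOTHESIS_LAB.md"
```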

&lt;p&gt;Example prompt:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resume this project. Read CONTEXT_MANIFEST.md and 
RECOVERY_NOTES.md first, then tell me what to do next.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent reads the files in the defined order and comes back with: current goal, last completed step, active blockers, and recommended next action.&lt;/p&gt;

&lt;h2&gt;The parallel threads problem&lt;/h2&gt;

&lt;p&gt;One thing that drove me crazy: when you have multiple workstreams running in parallel, time-series logs become unreadable. Thread A and Thread B interleave in the log, and a human scanning the file can't tell what's happening in either one.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;HUMAN_BRIEF.md&lt;/code&gt; solves this with a tracked threads table:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Thread&lt;/th&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;th&gt;Next action / blocker&lt;/th&gt;
&lt;th&gt;Source&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Auth migration&lt;/td&gt;
&lt;td&gt;active&lt;/td&gt;
&lt;td&gt;Waiting on OAuth2 test results&lt;/td&gt;
&lt;td&gt;RSC-042&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Performance audit&lt;/td&gt;
&lt;td&gt;paused&lt;/td&gt;
&lt;td&gt;Blocked by staging deploy&lt;/td&gt;
&lt;td&gt;BLK-003&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Updated only when the human-facing picture changes. Not on every small commit.&lt;/p&gt;

&lt;h2&gt;Try it&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/tasuku-9/project-memory-skill
python scripts/init_memory_workspace.py ./my-project &lt;span class="nt"&gt;--profile&lt;/span&gt; standard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then tell your AI agent to read the &lt;code&gt;SKILL.md&lt;/code&gt; and start maintaining the docs.&lt;/p&gt;

&lt;p&gt;Works with Claude Code, Codex CLI, Gemini CLI, Cursor, and any agent that supports the Agent Skills standard.&lt;/p&gt;

&lt;h2&gt;What's next&lt;/h2&gt;

&lt;p&gt;I submitted this as &lt;a href="https://github.com/anthropics/skills/pull/1001" rel="noopener noreferrer"&gt;PR #1001&lt;/a&gt; to Anthropic's official skills repository. Whether it gets merged or not, the skill is free and open source under MIT.&lt;/p&gt;

&lt;p&gt;If you've ever lost a chat and wished you hadn't, give it a try. And if you find a better way to handle any of this, PRs are welcome.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/tasuku-9/project-memory-skill" rel="noopener noreferrer"&gt;tasuku-9/project-memory-skill&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;License&lt;/strong&gt;: MIT&lt;br&gt;
&lt;strong&gt;Compatibility&lt;/strong&gt;: Claude Code, Codex CLI, Gemini CLI, Cursor&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agentskills</category>
      <category>opensource</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
