Every session with a coding agent starts the same way.
You open a new conversation, and before you can get anything done, you spend 10-20 minutes re-explaining the project. Clean architecture. No Prisma imports in use cases. Domain layer here, infrastructure there. We use this pattern for repositories. Oh, and we decided last week to enforce 85% test coverage on all use cases.
The agent nods along — then next session, you do it again.
This isn't a model problem. Claude, GPT, whatever — they're all capable. The problem is context. Specifically, the decisions your team has already made. The agent doesn't know them unless you tell it. Every. Single. Time.
Decisions are the missing layer
When a senior dev joins a project, they don't just read the code. They absorb the why behind it — the constraints, the trade-offs, the rules the team agreed on. That's what makes them productive fast.
A coding agent only gets the code. The decisions live in someone's head, in a Slack thread from 3 months ago, or in a doc nobody reads.
That's the gap pi-project-memory fills.
What it does
pi-project-memory is an extension for pi, a terminal-based coding agent. It gives the agent a persistent, project-local memory — stored as plain markdown files in .pi/project-memory/.
When you make a decision — architecture, tooling, conventions — the extension captures it. Either automatically from your prompt, or with a quick /memory remember. It classifies it: decision, pattern, preference, or gotcha. Next session, it's injected into context before the agent starts. No re-explaining.
Four category files, plus a MEMORY.md that gets injected every session. No database. No cloud sync. Just markdown you can read, edit, and commit.
.pi/project-memory/
├── MEMORY.md ← injected every session
├── decisions.md
├── patterns.md
├── preferences.md
└── gotchas.md
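For a feel of what lands in those files, here's an illustrative entry (the exact fields and format are the extension's; this sketch just shows the plain-markdown idea, using a decision from earlier in this post):

```markdown
<!-- .pi/project-memory/decisions.md — illustrative entry, not the exact format -->
## No Prisma imports in use cases
Use cases depend on repository interfaces only; Prisma stays in the
infrastructure layer.
```

Because it's just markdown, you can hand-edit or delete an entry the same way you'd fix any other file in the repo.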
The moment it clicked
A teammate started using pi on our project for the first time. A few minutes in, they messaged: "how does the agent know all this stuff about our architecture?"
It had read the memory files.
We hadn't written an onboarding doc. We hadn't sent them a Notion page or a Slack thread. The agent just... knew — because decisions made across dozens of sessions had been captured as they happened.
That's the shift that surprised us most: the agent became a participant in the project, not just a tool. It doesn't only read from the memory files. It writes to them. When it encounters a decision worth keeping, it proposes saving it. Over time, it builds up a shared understanding of the project alongside the team.
Auto-capture, not manual bookkeeping
The extension works in the background. When you type something that looks like a decision — "we will use clean architecture", "never import Prisma directly in use cases", "all use cases must be tested, 85% coverage" — it classifies it and asks if you want to save it.
High-confidence captures save silently. Mid-confidence ones surface a quick confirmation. You stay in flow.
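The threshold logic is simple to picture. Here's a minimal sketch of that routing, assuming a classifier that returns a category and a 0–1 confidence score; the names, types, and cutoff values are illustrative, not the extension's actual API:

```typescript
// Hypothetical sketch of confidence-threshold routing for auto-capture.
// Names and thresholds are illustrative, not pi-project-memory's real API.
type Category = "decision" | "pattern" | "preference" | "gotcha";

interface Capture {
  text: string;
  category: Category;
  confidence: number; // 0..1, produced by the classifier
}

type Action = "save-silently" | "ask-confirmation" | "ignore";

// High confidence: save without interrupting. Mid: quick confirmation.
// Low: do nothing, so chatter never pollutes the memory files.
function route(capture: Capture, high = 0.85, low = 0.5): Action {
  if (capture.confidence >= high) return "save-silently";
  if (capture.confidence >= low) return "ask-confirmation";
  return "ignore";
}
```

The key design point is the asymmetry: a wrong silent save is cheap to undo (delete a markdown line), while a confirmation prompt on every hunch would break flow.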
It also learns from what the agent discovers. Ask it to scan your feature folder structure, and if the response contains a structured template, the extension proposes saving that too — so future features follow the same blueprint.
What's next
The extension is open source and early. The next big piece is /memory export — syncing your captured decisions to CLAUDE.md, .cursor/rules, or AGENTS.md, so the memory travels beyond pi sessions and becomes part of the project for every tool and every teammate.
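Export could be as simple as concatenating the category files into one document. A minimal sketch of that idea, assuming the four file names from the tree above (everything else here is illustrative, not the planned implementation):

```typescript
// Hypothetical sketch of a /memory export step: merge the project-memory
// category files into a single body destined for CLAUDE.md or AGENTS.md.
// Only the file names come from the extension; the rest is illustrative.
const MEMORY_FILES = ["decisions.md", "patterns.md", "preferences.md", "gotchas.md"];

function buildExport(contents: Map<string, string>): string {
  const sections: string[] = ["# Project memory (exported by pi-project-memory)"];
  for (const file of MEMORY_FILES) {
    const body = contents.get(file)?.trim();
    if (body) sections.push(`## ${file}\n\n${body}`); // skip empty categories
  }
  return sections.join("\n\n") + "\n";
}
```

Keeping the export a pure text transform means it can target any tool's convention file without caring what that tool does with it.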
If you're building with pi and tired of re-explaining your architecture every session — try it. And if you want to contribute, the repo is open.
github.com/zeflq/pi-project-memory
Built while building. Decisions captured along the way.
Top comments (1)
The auto-capture with confidence thresholds is exactly the right UX call. The failure mode of manually-curated decision docs is that nobody writes in them unless they're already thinking about documentation, which means the decisions that feel obvious in the moment never get recorded. High-confidence silent capture is what makes it actually work at speed.
One thing I'd think about as this matures: decision staleness. Decisions made in week 1 might be actively wrong by month 6 — especially framework or library choices that the team reconsidered. Right now the files capture what was decided, but not when or whether it's still current.
A lightweight timestamp + "last confirmed" flag on each decision would let the agent weight recent decisions over old ones when there's a conflict. It'd also give you a way to surface decisions that haven't been touched in 90+ days for review — kind of a debt audit built into the workflow.
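Concretely, that metadata could be as small as two lines per entry (a sketch, not the extension's format):

```markdown
## Use clean architecture
- decided: 2024-11-02
- last-confirmed: 2025-01-15
```

Anything whose `last-confirmed` is older than the review window gets flagged.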
The /memory export to CLAUDE.md and AGENTS.md is the right next step. Tool-agnostic is what makes it useful across a whole team rather than just whoever's running pi.