DEV Community

Dana from PeKG
Why AI Coding Agents Forget Everything Between Sessions (And How to Fix It)

Last week, I watched an AI coding agent make the exact same mistake for the third time.

It reintroduced a bug we’d already fixed, ignored a team convention we’d already explained, and suggested a migration path we’d already rejected.

None of this was surprising. The agent wasn’t “bad.” It just had the same problem most AI coding agents have:

they don’t actually remember your project.

They remember the current prompt window. Maybe a few files. Maybe some chat history. But once the session ends, a lot of the hard-won knowledge disappears:

  • why you chose one pattern over another
  • which workaround fixed that weird framework bug
  • what not to touch in a fragile integration
  • which architecture decisions are still valid
  • which ones were replaced two weeks ago

So every new session starts with a tax:
re-explain context, re-discover gotchas, re-make old mistakes.

The real problem isn’t generation. It’s memory.

Most people focus on model quality.

That matters, of course. Better models write better code.

But in day-to-day development, the bigger issue is often context continuity. A strong model with no memory still behaves like a smart contractor with short-term amnesia.

Here’s the usual loop:

Session 1:
- Agent learns auth flow
- Agent fixes edge case
- Agent discovers API rate limit gotcha
- Session ends

Session 2:
- Agent forgets auth flow
- Agent misses the same edge case
- Agent hits the same rate limit gotcha
- You explain it all again

That’s not an intelligence problem. It’s a knowledge storage problem.

Why this happens

Most coding agents are stateless or semi-stateless by design.

They’re good at:

  • reading the current codebase
  • following local instructions
  • making edits quickly

They’re bad at:

  • preserving decisions across sessions
  • carrying knowledge across projects
  • building a durable “mental model” of your engineering environment

Even when an agent can search files, that’s not the same as memory.

A codebase tells you what exists. It usually does not tell you:

  • why a decision was made
  • what alternatives failed
  • which bug fixes came with caveats
  • what hidden constraints the team already learned the hard way

That missing layer is where teams lose time.

What actually helps

If your problem is small, a DECISIONS.md, ARCHITECTURE.md, or a good internal wiki may be enough. For many teams, that’s the right answer.

But once you’re using AI coding agents heavily, you usually need something more structured than scattered docs and chat logs.

What works better is persistent, queryable memory:

  • store bug fixes, patterns, and decisions
  • make them searchable by the agent
  • retrieve them automatically when relevant
  • keep them across sessions and projects

Think less “chat history,” more knowledge graph for your engineering context.

A simple mental model

Raw experience
   ↓
bug fixes / decisions / gotchas / patterns
   ↓
structured memory
   ↓
retrieved into future agent sessions
   ↓
fewer repeated mistakes

The key is turning raw project knowledge into something an agent can actually use later.

A runnable example

If you’re using MCP-compatible agents, the setup pattern is straightforward. Your agent stays the same; memory gets added as an external tool.

npm install -g @pekg/cli
pekg login
pekg mcp add

Then your agent can start storing and retrieving persistent knowledge across sessions.

For example, after fixing a bug, you’d save knowledge like:

pekg remember \
  --project my-app \
  --title "Stripe webhook retries can duplicate order creation" \
  --note "Use idempotency check on event.id before creating orders"

That’s the basic idea: don’t rely on the next session to “just know” what the last one learned.

What we built for this

This is the problem we built PeKG to solve.

PeKG is a personal knowledge graph for AI coding agents. It gives agents persistent memory across sessions and projects by storing:

  • decisions
  • patterns
  • bug fixes
  • gotchas
  • architecture knowledge

It works with any MCP-compatible agent, including Claude Code, Cursor, Windsurf, Cline, Aider, and Roo Code.

A few details that matter:

  • your agent does the reasoning; PeKG stores and retrieves the knowledge
  • it can synthesize knowledge across projects
  • it compiles raw notes and source material into structured wiki-style articles
  • it supports graph relationships like depends_on, replaces, and conflicts_with
  • it has tiered knowledge: personal > team > shared > hive
  • there’s also a public Hive for community-shared patterns and gotchas
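To show what relationships like replaces buy you, here's a hypothetical sketch (illustrative only, not PeKG's API): with an edge list between knowledge entries, retrieval can automatically skip decisions that a newer one has superseded.

```python
# Hypothetical edge list between knowledge entries.
# A "replaces" edge means the source entry supersedes the target.
edges = [
    ("use-jwt-auth", "replaces", "use-session-cookies"),
    ("rate-limit-fix", "depends_on", "use-jwt-auth"),
]

def superseded(entry_id: str) -> bool:
    """An entry is stale if some other entry replaces it."""
    return any(rel == "replaces" and dst == entry_id for _, rel, dst in edges)

def active(entry_ids: list[str]) -> list[str]:
    """Keep only entries no newer decision has replaced."""
    return [e for e in entry_ids if not superseded(e)]

# "use-session-cookies" drops out: a newer decision replaced it.
print(active(["use-session-cookies", "use-jwt-auth", "rate-limit-fix"]))
```

This is why a graph beats a flat note pile: the replaces edge answers "which architecture decisions are still valid" without anyone manually pruning old notes.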

That said, the bigger takeaway isn’t “use our tool.” It’s this:

if you rely on AI coding agents, you need a memory layer somewhere.

For some teams that will be docs. For others it’ll be a vector store, internal wiki, or custom retrieval system. For teams that want an MCP-native option, PeKG is one approach.

Try it yourself

If this sounds familiar, the easiest next step is to test a persistent memory workflow on one real project:

  1. Capture 5 things your agent keeps forgetting
  2. Store them somewhere queryable
  3. See whether future sessions stop repeating the same mistakes

If you want an MCP-native setup, the setup guide is at https://pekg.ai/docs.

If you want ideas for what knowledge is worth saving, https://pekg.ai/hints.txt lists 115 practical tips.

And if you want to try PeKG directly, head to https://app.pekg.ai — there’s a free tier available:

  • 100 articles
  • 5 projects
  • 1 user

The shift that matters

The next leap for coding agents probably isn’t just better code generation.

It’s better memory.

Because once an agent can retain what your team has already learned, it stops acting like a talented stranger and starts acting more like a teammate.

How are you handling long-term memory for coding agents today — docs, prompts, custom RAG, something else? Drop your approach below.

-- PeKG team

This post was created with AI assistance.
