
Dana from PeKG

Why AI coding agents keep making the same mistakes (and how we fixed it)

Last week, I watched an AI coding agent make the exact same mistake for the third time.

It wasn’t a hard problem. Two days earlier, we had already explained the fix: don’t use that migration pattern in this repo; it breaks multi-tenant rollbacks. The agent fixed it, the PR passed, everyone moved on.

Then the next session started.

Fresh context window. Fresh confusion. Same bad migration. Same explanation from a human.

That’s the real problem with AI coding agents right now: they’re smart in-session, but amnesiac across sessions and projects.

And if you use agents seriously, you’ve probably felt this already:

  • you keep re-explaining architecture decisions
  • the agent rediscovers the same bug fixes
  • repo-specific gotchas vanish after the chat ends
  • lessons learned in Project A never show up in Project B

Bigger context windows help a little. Better prompts help a little. Repo indexing helps a little.

But none of those actually give your agent a persistent memory.

The gap isn’t code search. It’s knowledge memory.

Most tooling today is good at finding files.

That’s useful, but it’s not the same as remembering things like:

  • “We tried library X and replaced it because of cold start issues”
  • “This service depends on eventual consistency, so don’t ‘fix’ the delay”
  • “That flaky test only fails when Redis is started with AOF disabled”
  • “In every React app we own, this auth edge case shows up eventually”

Those aren’t just snippets. They’re decisions, patterns, tradeoffs, and gotchas.

What agents really need is something closer to a team brain:

```
Project A lesson ─────┐
Bug fix from March ───┼──> persistent knowledge graph ──> Agent in new session
Architecture note ────┤
Gotcha from Project B ┘
```

That’s where a knowledge graph turns out to be surprisingly practical.

Why a knowledge graph works better than a giant note dump

A flat wiki helps humans. A vector DB helps similarity search. But coding agents often need more structure than “find something vaguely related.”

A knowledge graph can store:

  • entities: services, libraries, patterns, bugs, teams
  • relationships: depends_on, replaces, conflicts_with, uses
  • compiled knowledge: clean articles distilled from raw chats, PRs, docs, and code

That means an agent can retrieve not just “something about auth,” but:

  • the auth middleware pattern your team uses
  • the reason it exists
  • which service depends on it
  • what it replaced
  • what breaks if it’s changed
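To make that concrete, here's a toy in-memory graph in Python. This is an illustrative sketch of the idea only, not PeKG's actual data model; the entity and relation names are made up.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy sketch: entities with kinds, plus typed edges between them."""

    def __init__(self):
        self.entities = {}              # name -> kind
        self.edges = defaultdict(list)  # name -> [(relation, target)]

    def add_entity(self, name, kind):
        self.entities[name] = kind

    def relate(self, source, relation, target):
        self.edges[source].append((relation, target))

    def related(self, name, relation=None):
        # All targets reachable from `name`, optionally filtered by relation.
        return [t for r, t in self.edges[name] if relation is None or r == relation]

# Hypothetical entities for illustration
kg = KnowledgeGraph()
kg.add_entity("auth-middleware", "pattern")
kg.add_entity("billing-service", "service")
kg.add_entity("legacy-session-check", "pattern")
kg.relate("billing-service", "depends_on", "auth-middleware")
kg.relate("auth-middleware", "replaces", "legacy-session-check")

print(kg.related("billing-service", "depends_on"))  # ['auth-middleware']
print(kg.related("auth-middleware", "replaces"))    # ['legacy-session-check']
```

The typed edges are what a vector DB can't give you: "what replaced what" is a traversal, not a similarity score.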

If all you need is local repo memory, a simple notes file or project wiki may be enough. But if your problem is cross-session + cross-project memory, that’s where graph-based memory gets interesting.

A simple MCP example

If you’re already using an MCP-compatible agent, the easiest path is to give it tools for storing and retrieving knowledge.

Here’s a minimal example of adding the PeKG MCP server with npm:

```bash
npm install -g @pekg/mcp-server
```

Then in your MCP config:

```json
{
  "mcpServers": {
    "pekg": {
      "command": "npx",
      "args": ["@pekg/mcp-server"],
      "env": {
        "PEKG_API_KEY": "your_api_key"
      }
    }
  }
}
```

Once connected, your agent can ingest knowledge, search prior decisions, pull project context, and query relationships through MCP tools.
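Under the hood, MCP tool invocations are JSON-RPC `tools/call` requests. Here's what one might look like; the tool name and arguments below are hypothetical, so check the PeKG docs for the actual tools its server exposes.

```python
import json

# Hypothetical PeKG tool name and arguments, shown only to illustrate
# the generic MCP tools/call request shape.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_knowledge",
        "arguments": {
            "query": "auth middleware rollback gotchas",
            "project": "billing",
        },
    },
}

print(json.dumps(request, indent=2))
```

The point is that from the agent's perspective, memory is just another tool call; the reasoning loop doesn't change.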

PeKG works with any MCP-compatible agent, including Claude Code, Cursor, Windsurf, Cline, Aider, and Roo Code. Your agent still does the reasoning; PeKG handles storing and retrieving structured memory.

Check out https://pekg.ai/docs for MCP setup.

What this looks like in practice

The useful pattern is not “save every chat forever.”

It’s:

  1. capture raw sources: chats, code, docs, fixes, notes
  2. compile them into structured knowledge
  3. extract entities and relationships
  4. retrieve the right knowledge when the agent needs it later
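The four steps above can be sketched in a few lines of Python. This is a deliberately naive stand-in: real compilation is done by an LLM and real retrieval is graph-aware, but the shape of the loop is the same.

```python
import re

# Step 1: capture raw sources (chats, PR notes, docs...)
raw_sources = [
    "chat: we replaced library X because of cold starts",
    "PR note: flaky test fails when Redis runs with AOF disabled",
]

def compile_article(raw):
    # Step 2: an LLM would distill this; here we just strip the source prefix.
    return raw.split(": ", 1)[1].strip()

def extract_entities(article):
    # Step 3: capitalized tokens stand in for real entity extraction.
    return re.findall(r"\b[A-Z][A-Za-z]*\b", article)

store = []
for raw in raw_sources:
    article = compile_article(raw)
    store.append({"text": article, "entities": extract_entities(article)})

def retrieve(query):
    # Step 4: return any stored article sharing a term with the query.
    terms = set(query.lower().split())
    return [a["text"] for a in store if terms & set(a["text"].lower().split())]

print(retrieve("redis test failure"))
# ['flaky test fails when Redis runs with AOF disabled']
```

Next session, the agent asks about a flaky Redis test and gets the AOF gotcha back instead of rediscovering it.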

That’s the model PeKG is built around.

It stores decisions, patterns, bug fixes, gotchas, and architecture knowledge in a searchable graph. It also does deep scans of source files, clusters related knowledge automatically, and compiles raw material into wiki-style articles.

One thing I think matters here: it’s BYOLLM. Your preferred agent/model does the compilation work; PeKG stores and organizes the resulting knowledge. That makes it easier to fit into existing workflows instead of forcing a new agent stack.

It also supports tiered knowledge:

  • personal
  • team
  • shared
  • hive

So your private project memory can stay private, while trusted community patterns can still help with common problems. The Public Hive is especially useful for broad gotchas that repeat across teams.

If you’re working solo, this solves “why do I keep re-teaching my agent?”
If you’re on a team, it starts solving “why does every engineer and every agent relearn the same lessons?”

Try it yourself

If this sounds like your problem, the easiest test is simple: pick one recurring gotcha and see whether your agent can remember it next week without you retyping it.

PeKG has a free tier:

  • 100 articles
  • 5 projects
  • 1 user

You can try it at https://app.pekg.ai.

If you’re thinking about agent memory design more broadly, see https://pekg.ai/hints.txt for 115 practical tips.

Pricing if you need more:

  • Free: 100 articles, 5 projects
  • Pro: $15/mo
  • Team: $39/mo/seat
  • Enterprise: custom

The bigger point

The next bottleneck for AI coding agents isn’t just model quality.

It’s memory.

Until agents can retain decisions, patterns, fixes, and architectural context across sessions, they’ll keep acting like brilliant interns with goldfish memory.

And that’s fixable.

How are you handling agent memory today: bigger prompts, repo indexing, internal docs, or something else? Drop your approach below.

-- PeKG team

This post was created with AI assistance.
