MemoryGraph vs Graphiti: Choosing the Right Memory for Your AI Agent

When general-purpose memory meets coding-specific memory

December 2025 - Gregory Dickson


You've decided your AI agent needs persistent memory. Context loss between sessions is one of the biggest friction points in AI-assisted development.

Now you're comparing options. If you've done any research, you've probably found Graphiti. With 21,000+ GitHub stars, Y Combinator backing, and a peer-reviewed architecture paper, it's the category leader in AI agent memory.

So why would you consider anything else?

Because the best tool depends on what you're building.

This post offers an honest comparison to help you choose. We built MemoryGraph, so we're biased. But we'll be fair about where Graphiti excels and where we think MemoryGraph is the better fit.


The TL;DR

If you're building...

• A general AI agent (customer service, personal assistant, enterprise bot) → Graphiti
• A coding agent (Claude Code, Cursor, Aider, Continue) → MemoryGraph
• An agent that needs temporal queries across all entity types → Graphiti
• An agent that needs to know what solved what, what caused what → MemoryGraph
• Production infrastructure with Neo4j/FalkorDB already deployed → Graphiti
• Zero-infrastructure local development → MemoryGraph

If you're building a coding agent and want to get started in 60 seconds without infrastructure, MemoryGraph is purpose-built for you. If you're building a general-purpose agent and have database infrastructure, Graphiti is excellent.


What They Have in Common

Both MemoryGraph and Graphiti are:

  • Graph-based: not flat vector stores
  • MCP-compatible: work with Claude Desktop, Cursor, and other MCP clients
  • Apache 2.0 licensed: open source, enterprise-friendly
  • Python-native: built for the AI/ML ecosystem
  • Relationship-aware: store entities and their connections

Both emerged from the same insight: vector similarity alone isn't enough for agent memory. When you ask "What did we decide last week?" or "What caused this bug?", you need relationships and temporal context, not just embedding similarity.


Where They Differ

1. Target Use Case

Graphiti is designed for any AI agent. Their tagline is "Build Real-Time Knowledge Graphs for AI Agents." The examples in their docs include things like:

  • "Kendra loves Adidas shoes"
  • Customer preferences across sessions
  • Business entity relationships
  • User interaction history

This generality means Graphiti can model any domain.

MemoryGraph is designed specifically for coding agents. Every feature is optimized for software development workflows:

  • 12 memory types built for code (solution, problem, error, fix, code_pattern, etc.)
  • 35+ relationship types for development (SOLVES, CAUSES, DEPENDS_ON, IMPROVES, etc.)
  • Integration patterns for Claude Code, Cursor, Aider, Continue

This specificity means less configuration for coding use cases.

2. Relationship Model

Graphiti uses a flexible triplet model where you define your own ontology:

# Graphiti: Define custom entity and edge types
class Person(EntityNode):
    name: str

class Product(EntityNode):
    name: str

class Loves(EntityEdge):
    strength: float

This flexibility enables custom ontologies for any domain, but requires upfront design work.

MemoryGraph provides 35+ pre-defined relationship types organized into 7 categories:

# MemoryGraph: Use built-in coding relationships
{
    "tool": "create_relationship",
    "from_memory_id": "solution_123",
    "to_memory_id": "problem_456",
    "relationship_type": "SOLVES"  # One of 35+ built-in types
}

The categories:

  • Causal: CAUSES, TRIGGERS, LEADS_TO, PREVENTS, BREAKS
  • Solution: SOLVES, ADDRESSES, ALTERNATIVE_TO, IMPROVES, REPLACES
  • Context: OCCURS_IN, APPLIES_TO, WORKS_WITH, REQUIRES, USED_IN
  • Learning: BUILDS_ON, CONTRADICTS, CONFIRMS, GENERALIZES, SPECIALIZES
  • Similarity: SIMILAR_TO, VARIANT_OF, RELATED_TO, ANALOGY_TO, OPPOSITE_OF
  • Workflow: FOLLOWS, DEPENDS_ON, ENABLES, BLOCKS, PARALLEL_TO
  • Quality: EFFECTIVE_FOR, INEFFECTIVE_FOR, PREFERRED_OVER, DEPRECATED_BY

For coding agents, these relationships are immediately useful without ontology design.
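
To see why this matters at query time, here's a minimal sketch in plain Python (illustrative only, not MemoryGraph's actual API): with typed edges, "what solved this problem?" becomes a filter rather than a similarity ranking.

# Illustrative only -- not MemoryGraph's API. Typed edges turn "what solved
# this problem?" into a simple filter instead of a similarity ranking.
edges = [
    ("solution_123", "SOLVES", "problem_456"),
    ("solution_123", "WORKS_WITH", "redis"),
    ("attempt_99", "INEFFECTIVE_FOR", "problem_456"),
]

def what_solves(problem_id: str) -> list[str]:
    """Return every memory linked to the problem by a SOLVES edge."""
    return [src for src, rel, dst in edges if rel == "SOLVES" and dst == problem_id]

print(what_solves("problem_456"))  # ['solution_123']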

3. Entity Extraction

Graphiti uses LLM-powered entity extraction. When you add an episode (a piece of text), it automatically extracts entities and relationships:

# Graphiti: Automatic extraction
await graphiti.add_episode(
    name="user_message",
    episode_body="I fixed the timeout by adding retry logic with exponential backoff",
    source=EpisodeType.message
)
# LLM extracts: entities, relationships, timestamps

This eliminates manual data structuring, but adds latency (LLM calls) and cost (tokens).

MemoryGraph uses explicit storage. You decide what to store:

# MemoryGraph: Explicit storage
{
    "tool": "store_memory",
    "type": "solution",
    "title": "Fixed timeout with retry logic",
    "content": "Added exponential backoff with max 3 retries...",
    "tags": ["timeout", "retry", "exponential-backoff"]
}

This gives you control over exactly what's stored, with no LLM extraction overhead. The tradeoff is that your agent (or you) must explicitly store memories.

Architecture comparison:

Graphiti (automatic extraction):
┌─────────┐    LLM     ┌──────────┐   Neo4j   ┌───────────┐
│ Episode │ ─────────▶ │ Entities │ ────────▶ │ Knowledge │
│  (text) │  Extract   │ + Edges  │   Store   │   Graph   │
└─────────┘  500ms-2s  └──────────┘           └───────────┘

MemoryGraph (explicit storage):
┌────────┐   Direct    ┌────────────┐
│ Memory │ ──────────▶ │ SQLite/Neo │   No LLM required
└────────┘    <5ms     └────────────┘
     │
     ▼ (explicit)
┌──────────────┐
│ Relationship │   You control what's linked
└──────────────┘

The extraction trade-off:

Aspect                   Graphiti (automatic)             MemoryGraph (explicit)
Cognitive load           Lower: just feed it text         Higher: you decide what to store
Relationship discovery   May find implicit connections    Only what you specify
Storage latency          500ms-2s (LLM call)              <5ms (direct write)
Cost per memory          $0.003-$0.01 (token cost)        $0 (no LLM)
Extraction quality       Depends on model/prompts         Deterministic

4. Infrastructure Requirements

Graphiti requires a graph database:

# Graphiti setup
docker run neo4j...              # Or FalkorDB, Kuzu, Neptune
export NEO4J_URI=bolt://localhost:7687
export NEO4J_PASSWORD=...
export OPENAI_API_KEY=...        # Required for entity extraction
pip install "graphiti-core[neo4j]"

This is appropriate for production systems. But it's friction for getting started.

MemoryGraph defaults to SQLite with zero configuration:

# MemoryGraph setup
pipx install memorygraphMCP
claude mcp add --scope user memorygraph -- memorygraph
# Done. Database created automatically.

You can upgrade to Neo4j, FalkorDB, or cloud sync later. But the default works immediately.

5. Temporal Model

Graphiti has a sophisticated bi-temporal model:

  • Valid time: When the fact was true in the real world
  • Transaction time: When the fact was recorded

This enables queries like "What did we know about X as of March 2024?" and handles contradictions by invalidating old edges rather than deleting them.
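
Conceptually, each edge carries two timelines. Here's a simplified sketch of the idea (field names are illustrative, not Graphiti's exact schema):

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Simplified bi-temporal edge record -- field names are illustrative, not
# Graphiti's exact schema.
@dataclass
class Edge:
    fact: str
    valid_from: datetime                       # valid time: when the fact became true
    recorded_at: datetime                      # transaction time: when it was stored
    valid_to: Optional[datetime] = None        # None = still considered true
    invalidated_at: Optional[datetime] = None  # set, not deleted, when contradicted

def known_and_valid(edge: Edge, as_of: datetime) -> bool:
    """Was this fact both recorded and still considered valid at the given time?"""
    recorded = edge.recorded_at <= as_of and (edge.invalidated_at is None or edge.invalidated_at > as_of)
    valid = edge.valid_from <= as_of and (edge.valid_to is None or edge.valid_to > as_of)
    return recorded and valid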

MemoryGraph also supports bi-temporal tracking (added in v0.10.0, inspired by Graphiti):

# MemoryGraph temporal queries
from datetime import datetime, timedelta, timezone

march_2024 = datetime(2024, 3, 1, tzinfo=timezone.utc)
one_week_ago = datetime.now(timezone.utc) - timedelta(days=7)

solutions = await db.get_related_memories("error_id", as_of=march_2024)
changes = await db.what_changed(since=one_week_ago)

Both handle temporal queries well. Graphiti's bi-temporal model is more sophisticated, tracking validity intervals on every edge. MemoryGraph's temporal support covers the common cases: point-in-time queries and change tracking.

6. Query Model

Both systems are "graph-based" but query differently:

Graphiti uses hybrid retrieval (from the arXiv paper):

  • Semantic similarity search (embeddings)
  • BM25 full-text search (Lucene via Neo4j)
  • Breadth-first graph traversal from seed nodes

MemoryGraph uses:

  • FTS5 full-text search (SQLite) or native graph queries (Neo4j/FalkorDB)
  • Tag-based filtering with exact match
  • Typed relationship traversal with configurable depth
  • Three search tolerance modes: strict, normal (stemming), fuzzy (typo-tolerant)

Graphiti's hybrid approach excels at finding semantically related content across large, unstructured graphs. MemoryGraph's typed traversal excels at answering specific questions like "what solved this error?" or "what depends on this component?"
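
As a rough illustration of the difference (plain Python, not MemoryGraph's internals), typed traversal is just a breadth-first walk that only follows the edge types you ask for, up to a depth you choose:

from collections import deque

# Minimal sketch of depth-limited, typed graph traversal -- illustrative only.
graph = {
    "error_1": [("CAUSES", "outage_7")],
    "fix_3": [("SOLVES", "error_1"), ("DEPENDS_ON", "lib_9")],
}

def traverse(start, edge_types, max_depth):
    """Breadth-first walk from start, following only edge_types, up to max_depth hops."""
    seen, queue, found = {start}, deque([(start, 0)]), []
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for rel, neighbor in graph.get(node, []):
            if rel in edge_types and neighbor not in seen:
                seen.add(neighbor)
                found.append((node, rel, neighbor))
                queue.append((neighbor, depth + 1))
    return found

print(traverse("fix_3", {"SOLVES", "CAUSES"}, max_depth=2))
# [('fix_3', 'SOLVES', 'error_1'), ('error_1', 'CAUSES', 'outage_7')]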


Practical Comparison: A Debugging Workflow

Here's how each tool handles a common coding scenario.

The Scenario

You're debugging a Redis timeout issue. Over several sessions, you:

  1. Encounter the error
  2. Try a fix (increase the timeout); it doesn't work
  3. Try another fix (add retry logic); it causes a memory leak
  4. Find the root cause (connection pool exhaustion)
  5. Implement the real fix (increase the pool size)

With Graphiti

# Session 1: Encounter error
await graphiti.add_episode(
    name="debug_session",
    episode_body="Got RedisTimeoutError after 30 seconds. Stack trace shows connection.execute() hanging.",
    source=EpisodeType.message
)

# Session 2: Try timeout fix
await graphiti.add_episode(
    name="debug_session", 
    episode_body="Increased Redis timeout to 60s. Still getting timeouts under load.",
    source=EpisodeType.message
)

# Session 3: Try retry logic
await graphiti.add_episode(
    name="debug_session",
    episode_body="Added retry logic with exponential backoff. Now seeing memory growth - possible leak.",
    source=EpisodeType.message
)

# ... and so on

# Later: Query what happened
results = await graphiti.search("Redis timeout fixes")

Graphiti's LLM extraction will create entities and relationships from this text. The quality depends on the extraction prompts and model.
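
For illustration only, one plausible extraction from the Session 1 episode might look something like the structure below; the real output depends on the model, the prompt version, and the existing graph state.

# Hypothetical extraction result -- purely illustrative, not actual Graphiti output.
extracted = {
    "entities": ["RedisTimeoutError", "connection.execute()", "30 second timeout"],
    "edges": [
        ("RedisTimeoutError", "RAISED_BY", "connection.execute()"),
        ("RedisTimeoutError", "OCCURS_AFTER", "30 second timeout"),
    ],
}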

With MemoryGraph

# Session 1: Store the error
error = store_memory(
    type="error",
    title="RedisTimeoutError under load",
    content="Connection.execute() hangs after 30s under concurrent requests",
    tags=["redis", "timeout", "production"]
)

# Session 2: Store failed attempt
attempt1 = store_memory(
    type="solution",
    title="Increased Redis timeout to 60s",
    content="Changed timeout config. Still fails under load - not the root cause.",
    tags=["redis", "timeout", "failed"]
)
create_relationship(attempt1, error, "ADDRESSES")  # Attempted to address
create_relationship(attempt1, error, "INEFFECTIVE_FOR")  # But didn't work

# Session 3: Store attempt that caused new problem
attempt2 = store_memory(
    type="solution",
    title="Added retry with exponential backoff",
    content="Implemented retry logic. Works for timeout but causes memory growth.",
    tags=["redis", "retry", "partial-fix"]
)
leak = store_memory(
    type="problem",
    title="Memory leak from retry logic",
    content="Each retry holds connection reference, causing memory growth under load.",
    tags=["redis", "memory-leak"]
)
create_relationship(attempt2, error, "ADDRESSES")
create_relationship(attempt2, leak, "CAUSES")  # This fix caused a new problem

# Session 4: Find root cause and real fix
root_cause = store_memory(
    type="problem", 
    title="Redis connection pool exhaustion",
    content="Default pool size of 10 is exhausted under load, causing queued connections to timeout.",
    tags=["redis", "connection-pool", "root-cause"]
)
real_fix = store_memory(
    type="solution",
    title="Increased Redis connection pool to 50",
    content="Set REDIS_POOL_SIZE=50. Handles concurrent load without timeouts or retries.",
    tags=["redis", "connection-pool", "fix"]
)
create_relationship(root_cause, error, "CAUSES")
create_relationship(real_fix, root_cause, "SOLVES")
create_relationship(real_fix, attempt1, "IMPROVES")
create_relationship(real_fix, attempt2, "REPLACES")

# Later: Query the full picture
recall_memories("redis timeout")

The result is a queryable graph:

[pool_exhaustion] ──CAUSES──▶ [timeout_error]
       ▲                            ▲
    SOLVES                          │ ADDRESSES
       │                    ┌───────┴───────┐
[real_fix: pool=50]         │               │
       │              [attempt1: 60s]  [attempt2: retry]
       │                    ▲               ▲     │
       ├──IMPROVES──────────┘               │     │ CAUSES
       │                                    │     ▼
       └──REPLACES──────────────────────────┘  [memory_leak]

When you ask "What happened with Redis?" six months later, MemoryGraph returns this entire causal chain, including what didn't work and why.


Decision Framework

Choose Graphiti If:

✅ You're building a general-purpose AI agent (not specifically for coding)

✅ You want automatic entity extraction from unstructured text

✅ You need sophisticated temporal queries across arbitrary entity types

✅ You already have Neo4j, FalkorDB, or similar infrastructure

✅ You want a commercial platform with support (Zep Cloud)

✅ You're okay with LLM costs for entity extraction

Choose MemoryGraph If:

✅ You're building with Claude Code, Cursor, Aider, or Continue

✅ You want coding-specific relationships out of the box (SOLVES, CAUSES, DEPENDS_ON)

✅ You want zero infrastructure: SQLite default, upgrade later

✅ You prefer explicit control over what gets stored

✅ You want to get started in 60 seconds, not 60 minutes

✅ You want local-first with optional cloud sync


What About Using Both?

This is a valid architecture:

  • Use Graphiti for your product's user-facing memory (customer preferences, conversation history, business entities)
  • Use MemoryGraph for your development workflow (what you learned building the product)

They solve different problems. Graphiti helps your AI agent remember your users. MemoryGraph helps your coding agent remember your codebase.


What If You Choose Wrong?

Both systems use standard data formats. Migration is possible:

MemoryGraph → Graphiti: Export memories as JSON, feed them as episodes. Graphiti's LLM will re-extract entities and relationships (you'll lose your explicit relationship types but gain Graphiti's automatic extraction).

Graphiti → MemoryGraph: Export entities and edges. Map entity types to MemoryGraph's 12 memory types, map edge types to the 35+ relationship types. Manual mapping required, but no data loss.

Neither system creates vendor lock-in at the data layer. Choose based on current needs; you can migrate if requirements change.
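
As a rough sketch of the MemoryGraph → Graphiti direction (the export path and JSON field names are assumptions about the export format, and add_episode's required arguments may differ by Graphiti version, so check their docs first):

import json
from datetime import datetime, timezone

from graphiti_core.nodes import EpisodeType  # import path may vary by Graphiti version

# Rough sketch: replay a MemoryGraph JSON export as Graphiti episodes.
# The "title"/"content"/"tags"/"created_at" keys are assumptions about the
# export format -- adjust to whatever your export actually contains.
async def migrate(graphiti, export_path="memories.json"):
    with open(export_path) as f:
        memories = json.load(f)
    for m in memories:
        await graphiti.add_episode(
            name=m["title"],
            episode_body=f'{m["content"]}\nTags: {", ".join(m.get("tags", []))}',
            source=EpisodeType.text,
            source_description="memorygraph export",
            reference_time=datetime.fromisoformat(m["created_at"]).astimezone(timezone.utc),
        )
        # Graphiti's LLM pipeline re-extracts entities and edges from each episode.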


Getting Started with MemoryGraph

If MemoryGraph sounds right for your use case:

# Install
pipx install memorygraphMCP

# Add to Claude Code
claude mcp add --scope user memorygraph -- memorygraph

# Start using
claude
> "Remember this: Use pytest fixtures for database tests"
> "What do you remember about testing?"

No database setup. No Docker. No API keys.

See memorygraph.dev for documentation, or GitHub for the source.


Getting Started with Graphiti

If Graphiti is the better fit:

# Start Neo4j
docker run -p 7474:7474 -p 7687:7687 neo4j

# Install
pip install "graphiti-core[neo4j]"

# Configure
export NEO4J_URI=bolt://localhost:7687
export OPENAI_API_KEY=...
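
Once Neo4j is running, connecting from Python looks roughly like this (a sketch following Graphiti's quickstart; verify the constructor and setup calls against their current docs):

import asyncio
from graphiti_core import Graphiti

async def main():
    # Connection arguments follow Graphiti's documented quickstart; the exact
    # API may differ by version.
    graphiti = Graphiti("bolt://localhost:7687", "neo4j", "your-neo4j-password")
    try:
        await graphiti.build_indices_and_constraints()  # one-time index/constraint setup
        # add_episode / search calls go here (see the examples earlier in this post)
    finally:
        await graphiti.close()

asyncio.run(main())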

See github.com/getzep/graphiti for documentation.


Conclusion

Graphiti and MemoryGraph both solve the fundamental problem of AI agent memory. They're both graph-based, both MCP-compatible, both Apache 2.0 licensed.

The difference is focus.

Graphiti is a general-purpose temporal knowledge graph for any AI agent. It's mature, well-funded, and production-proven.

MemoryGraph is a coding-specific memory system for AI coding agents. It's opinionated, zero-config, and built for developers who want to start in 60 seconds.

Choose the tool that matches your use case. For coding agents, we think MemoryGraph is the better fit. For general AI agents, Graphiti is excellent.

And if you're building both? Use both.


MemoryGraph is open source under Apache 2.0. Try it at memorygraph.dev or star us on GitHub.


Gregory Dickson is a Senior AI Developer & Solutions Architect specializing in AI/ML development and cloud architecture. He's the creator of MemoryGraph, an open-source MCP memory server using graph-based relationship tracking.
