DEV Community

Anton Illarionov


Why Your LLM App Needs a World Model (Not Just RAG)


You've built an LLM app. You're using RAG. It works for Q&A. But something is missing.

Here's what's missing: a world model.

What RAG Does

RAG retrieves documents that match a query. Great for:

```python
# "What does the documentation say about X?"
results = vector_db.similarity_search("how to install", k=5)
context = "\n".join(r.content for r in results)
response = llm.chat(f"Context: {context}\n\nQuestion: how to install?")
```

What RAG Can't Do

RAG can't answer:

  • "Has this exact action already been taken?" (deduplication)
  • "Who authorized this instruction?" (authority chain)
  • "What was true two weeks ago in our project?" (temporal history)
  • "Is this entity actually in our system?" (referential integrity)

These require structured memory with explicit semantics.
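Deduplication is the clearest case: "has this exact action been taken?" needs an exact-match ledger, not a similarity score above some threshold. A minimal sketch of that idea (the `ActionLog` class and its hashing scheme are illustrative, not part of any specific library):

```python
import hashlib
import json

class ActionLog:
    """Structured memory: records exactly which actions have been taken."""

    def __init__(self):
        self._seen = set()

    def _key(self, action: dict) -> str:
        # Canonical JSON so the same action always hashes identically,
        # regardless of key order
        return hashlib.sha256(
            json.dumps(action, sort_keys=True).encode()
        ).hexdigest()

    def already_taken(self, action: dict) -> bool:
        return self._key(action) in self._seen

    def record(self, action: dict) -> None:
        self._seen.add(self._key(action))

log = ActionLog()
action = {"type": "send_invoice", "customer": "acme", "amount": 1200}
assert not log.already_taken(action)
log.record(action)
assert log.already_taken(action)  # exact answer, no similarity threshold
```

The answer is a hard yes/no, which is exactly what vector retrieval can't give you: two near-identical invoices embed almost identically, but only one of them has actually been sent.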

The World Model Pattern

A world model is a typed, persistent graph of entities and relationships:

```python
# Entities have explicit types and properties
nodes = [
    {"id": "project-1", "type": "Strategy", "title": "Launch ODEI"},
    {"id": "task-1", "type": "Task", "title": "Deploy API", "status": "done"},
    {"id": "task-2", "type": "Task", "title": "Write docs", "status": "open"},
    {"id": "task-3", "type": "Task", "title": "Announce launch", "status": "open"},
]

# Relationships have explicit semantics
edges = [
    {"from": "project-1", "to": "task-1", "type": "CONTAINS"},
    {"from": "project-1", "to": "task-2", "type": "CONTAINS"},
    {"from": "task-1", "to": "project-1", "type": "ADVANCES"},
    {"from": "task-2", "to": "task-3", "type": "BLOCKS"},
]
```

This enables queries that vector similarity can't express:

```cypher
// What tasks are blocking our launch?
MATCH (p:Strategy {title: "Launch ODEI"})-[:CONTAINS]->(t:Task)-[:BLOCKS]->(blocked)
RETURN t.title, blocked.title
```
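If the graph lives in application memory rather than a graph database, the same traversal is a few lines of Python. A self-contained sketch (the sample nodes and the `blocking_tasks` helper are illustrative):

```python
nodes = {
    "project-1": {"type": "Strategy", "title": "Launch ODEI"},
    "task-2": {"type": "Task", "title": "Write docs"},
    "task-3": {"type": "Task", "title": "Announce launch"},
}
edges = [
    {"from": "project-1", "to": "task-2", "type": "CONTAINS"},
    {"from": "task-2", "to": "task-3", "type": "BLOCKS"},
]

def blocking_tasks(strategy_id: str):
    """Tasks contained in a strategy that block some other node."""
    contained = {e["to"] for e in edges
                 if e["from"] == strategy_id and e["type"] == "CONTAINS"}
    return [(nodes[e["from"]]["title"], nodes[e["to"]]["title"])
            for e in edges
            if e["from"] in contained and e["type"] == "BLOCKS"]

print(blocking_tasks("project-1"))  # [('Write docs', 'Announce launch')]
```

Either way, the query works because types and edge semantics are explicit. An embedding of "Write docs" tells you nothing about what it blocks.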

World Model as a Service

ODEI provides a hosted world model:

  • 91 nodes, 6 semantic domains
  • 7-layer constitutional validation before writes
  • REST API + MCP server

```python
import requests

# Query the world model
context = requests.post(
    "https://api.odei.ai/api/v2/world-model/query",
    json={"queryType": "search", "searchTerm": "current active projects"},
).json()

# Validate before acting
verdict = requests.post(
    "https://api.odei.ai/api/v2/guardrail/check",
    json={"action": "deploy to production", "severity": "high"},
).json()["verdict"]
```

When to Use Each

Use RAG when: you're searching unstructured text, finding similar documents, or doing Q&A over a corpus.

Use a world model when: you need persistent state across sessions, causal queries, constitutional validation, or multi-agent context sharing.

Use both: RAG for document retrieval, the world model for agent state and governance.
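In practice the two compose in a single prompt: retrieve passages with RAG, fetch structured state from the world model, and hand the LLM both. A sketch of the composition (the `vector_db`, `world_model`, and `llm` objects are placeholders for whatever stack you use):

```python
def answer_with_both(question: str, vector_db, world_model, llm) -> str:
    # Unstructured context: similar passages from the document store
    passages = vector_db.similarity_search(question, k=5)
    doc_context = "\n".join(p.content for p in passages)

    # Structured context: current entities and relationships
    state = world_model.query({"queryType": "search", "searchTerm": question})

    prompt = (
        f"Documents:\n{doc_context}\n\n"
        f"Current state:\n{state}\n\n"
        f"Question: {question}"
    )
    return llm.chat(prompt)
```

The documents answer "what is known"; the world model answers "what is true right now". Keeping the two sections separate in the prompt lets the model cite each appropriately.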


ODEI World Model API: https://api.odei.ai | Research: https://github.com/odei-ai/research
