Standard AI models are great at vibes, but terrible at truth. You can tell an agent that the sky is toxic and the main character is a debt-ridden deck-runner — but three sessions later, that context has drifted. The agent starts hallucinating a blue sky and a rich hero.
This happens because most memory systems treat “The Plot” the same as “The Last Chat Message.” Everything lands in a single flat context bucket, and the most recent tokens always win.
VEKTOR solves this with Narrative Partitioning — organizing your agent’s history into four logical layers using the MAGMA graph and metadata tags. Each layer has different retrieval rules, different persistence guarantees, and a different role in your agent’s cognition.
Layer 1: World
This is your baseline: facts that should never be forgotten or pruned. The axioms of your universe — the laws of physics, the political factions, the state of the sky.
Store these with importance: 1.0 and layer: "world". High-importance nodes are protected from the REM consolidation cycle — they persist as Ground Truth indefinitely.
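A minimal sketch of what seeding the world layer looks like. The in-memory client below is a stand-in for a VEKTOR client exposing a `memory.remember(text, metadata)` call, as the article's later example uses; it is not the real library.

```javascript
// Stand-in memory client -- illustrative only, not the VEKTOR API.
const memory = {
  store: [],
  async remember(text, meta) {
    this.store.push({ text, ...meta });
  },
};

// World-layer axioms: importance 1.0 marks them as Ground Truth,
// exempt from REM-cycle pruning.
// (Called without await for brevity; the push happens synchronously.)
memory.remember("The sky is toxic; unfiltered surface air is lethal.", {
  layer: "world",
  importance: 1.0,
});
memory.remember("Three rival Syndicates control the city.", {
  layer: "world",
  importance: 1.0,
});
```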
Layer 2: Characters
Character arcs change. A hero becomes a villain. A debt gets paid. A betrayal rewrites everything that came before. Standard RAG retrieval surfaces all of this as an undifferentiated pile of facts — leaving your agent confused about why Sarah is acting the way she is today.
The MAGMA causal graph fixes this. Every character action creates an edge to their motivation. When the agent recalls a character, it doesn’t just find their description — it traverses the graph to understand causality.
Use type: "causal" for character actions. When you retrieve, the graph returns why things happened, not just what happened.
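The causal-edge idea can be sketched like this. The tiny in-memory graph, the node ids, and the `why()` helper are all illustrative stand-ins, not MAGMA's actual API; they only show how an action-to-motivation edge makes recall explain causality.

```javascript
// Stand-in graph structure -- not the real MAGMA API.
const graph = { nodes: new Map(), edges: [] };

function addNode(id, text) {
  graph.nodes.set(id, { id, text });
}
function addEdge(from, to, type) {
  graph.edges.push({ from, to, type });
}

addNode("sarah-betrayal", "Sarah leaks the heist plan to the Syndicate.");
addNode("sarah-debt", "Sarah owes the Syndicate 2 million credits.");
// type: "causal" links the action to its motivation.
addEdge("sarah-betrayal", "sarah-debt", "causal");

// Traversal: recall an action *with* its motivation chain.
function why(actionId) {
  return graph.edges
    .filter((e) => e.from === actionId && e.type === "causal")
    .map((e) => graph.nodes.get(e.to).text);
}

console.log(why("sarah-betrayal"));
// → ["Sarah owes the Syndicate 2 million credits."]
```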
Layer 3: Style
Cyberpunk isn’t just a setting — it’s a linguistic style. Rain-slicked chrome. Electrical hums. The smell of ozone and fried noodles. Without consistent style retrieval, your agent generates tonally inconsistent prose that breaks immersion across sessions.
Tag aesthetic observations with layer: "style" and filter exclusively on those nodes when generating descriptions. This prevents plot context from contaminating tone — your agent keeps a persistent voice, even months into a project, without knowing the wrong things.
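A minimal sketch of a layer filter in action. The node array and the `recall()` helper are stand-ins for VEKTOR's retrieval, shown only to demonstrate that plot facts never reach the prose generator.

```javascript
// Stand-in node store -- illustrative only.
const nodes = [
  { text: "Rain-slicked chrome under sodium light.", layer: "style" },
  { text: "The smell of ozone and fried noodles.", layer: "style" },
  { text: "Sarah paid off her debt in Act 2.", layer: "characters" },
];

function recall(query, { layer }) {
  // Real retrieval would also rank by semantic similarity to `query`;
  // this sketch demonstrates only the layer filter.
  return nodes.filter((n) => n.layer === layer);
}

// The prose generator sees style nodes and nothing else.
const styleOnly = recall("describe the street", { layer: "style" });
```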
Layer 4: Meta
The author’s intent: instructions you’re giving the agent about where the story should go next, separate from what any character knows. This layer is what separates a story assistant from a story collaborator.
Use source: "author" metadata to flag these. Your agent can then reason differently when drawing on meta-commentary versus in-world character knowledge.
```javascript
// Author intent - out-of-world direction
await memory.remember(
  "Story needs to move toward Sarah discovering the Syndicate plan in Act 3. Plant foreshadowing.",
  {
    tags: ["director", "plot-direction"],
    layer: "meta",
    source: "author",
    importance: 0.7,
  }
);
```
The Code: Putting It Together
Layer-filtered retrieval in practice
With all four layers populated, retrieval becomes surgical. You pull exactly the context each moment requires — no noise, no drift, no hallucinated blue sky.
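A sketch of what that surgical assembly can look like. The node array and `recall()` helper are stand-ins, not the real VEKTOR retrieval API; the point is that each generation step pulls only the layers it needs.

```javascript
// Stand-in store with one node per layer -- illustrative only.
const nodes = [
  { text: "The sky is toxic.", layer: "world", importance: 1.0 },
  { text: "Sarah betrayed the crew to clear her debt.", layer: "characters" },
  { text: "Rain-slicked chrome, humming neon.", layer: "style" },
  { text: "Plant Act 3 foreshadowing of the Syndicate plan.", layer: "meta", source: "author" },
];

function recall(layer) {
  return nodes.filter((n) => n.layer === layer).map((n) => n.text);
}

// Scene description: world facts plus aesthetic voice, no plot noise.
const proseContext = [...recall("world"), ...recall("style")];

// Plot planning: character causality plus author direction.
const plotContext = [...recall("characters"), ...recall("meta")];
```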
The REM Cycle: Why It Matters for Fiction
Turning creative chaos into narrative truth
The most powerful part of VEKTOR for creative work isn’t the retrieval — it’s what happens while you’re away from the keyboard.
If you and the agent spent three hours arguing about a plot point, standard RAG retrieves all those conflicting fragments and confuses your agent next session. The REM cycle synthesizes that argument into a single Truth Node.
REM Consolidation: A Three-Hour Plot Argument
The raw debate is archived — not deleted, but deprioritized. Your agent wakes up with a clear, sharp understanding of the new plot direction, not a confused jumble of half-formed ideas.
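The shape of that consolidation can be sketched as follows. The synthesis is hard-coded here as a placeholder where VEKTOR's REM cycle would do the actual summarization; the structure (one Truth Node out, raw fragments archived and deprioritized) is the point.

```javascript
// Raw session fragments from a three-hour plot argument.
const sessionFragments = [
  { text: "Maybe Sarah betrays the crew in Act 2?", importance: 0.3 },
  { text: "No — Act 3, after the heist.", importance: 0.3 },
  { text: "Final call: betrayal lands in Act 3.", importance: 0.3 },
];

function consolidate(fragments) {
  // Placeholder synthesis; the real REM cycle would generate this.
  const truth = {
    text: "Sarah's betrayal is settled: Act 3, after the heist.",
    importance: 0.9,
    type: "truth",
  };
  // Archive the debate: deprioritized, not deleted.
  const archived = fragments.map((f) => ({
    ...f,
    archived: true,
    importance: 0.1,
  }));
  return { truth, archived };
}

const { truth, archived } = consolidate(sessionFragments);
```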
The Sovereign Narrative Graph
Stop fighting your agent’s memory. Stop dumping 50 pages of world-building into a context window that only half-reads it. Build a living, layered memory that your agent actually understands.
Layer 1 — World: importance: 1.0, never pruned, your immutable axioms
Layer 2 — Characters: causal graph edges, traversable motivation chains
Layer 3 — Style: filtered on generation, persistent aesthetic voice
Layer 4 — Meta: author intent, separated from in-world knowledge
REM Cycle: session noise consolidated into truth nodes overnight
One file. One history. A world that never forgets.