DEV Community

WAzion

How I Applied Cognitive Psychology to Give AI Agents Real Memory — NEXO Brain v2.6

Latest: v2.6.0 — Personal Scripts Registry, Plugin Marketplace, Managed Evolution. Scripts become first-class citizens with 9 MCP tools. Claude Code plugin for Anthropic marketplace. Managed evolution modifies core behavior modules. nexo chat command. Orchestrator decoupled. 144+ MCP tools. Release notes

v2.5.0 — Runtime CLI, Doctor, Skills v2. nexo CLI, unified doctor, executable skills, personal scripts framework. Release notes

v2.4.0 — Skills, Cron Scheduler, Security. Skill auto-creation, cron tracking, credential redaction, 5-phase audit. Release notes

v2.0.0 — Unified Architecture. Code and data fully separated. 15 autonomous processes auto-installed. Auto-update on startup. Auto-diary. Lambda decay fix (was 24x too aggressive). 12 rounds of external audit. Release notes →

Update (March 29, 2026): NEXO Brain v1.5.0 — Modular Architecture + HNSW + Claim Graph + D+ Inbox.

  • Modular Architecture — db.py split into db/ package (11 modules), cognitive.py into cognitive/ package (6 modules). Full backwards compatibility via barrel exports.
  • KG-Influenced Search — memories with more Knowledge Graph connections rank higher (logarithmic boost, capped at 0.08). Relevant context surfaces without manual tagging.
  • HNSW Vector Indexing — optional hnswlib for approximate nearest neighbor search, auto-activates above 10K memories. Sub-millisecond retrieval at scale.
  • Claim Graph — atomic claims with provenance, contradiction detection, and verification status. Memory is now factual, not just associative.
  • Inter-Terminal Auto-Inbox (D+) — PostToolUse hook checks inbox automatically. 2s debounce, zero token cost when empty. Multiple Claude instances now coordinate without polling.
  • Test Suite — 24 pytest tests covering migrations, CRUD, similarity, Knowledge Graph, and temporal boost. First time NEXO ships with automated coverage.
  • Migration #13 — claude_session_id column in sessions table, auto-applied on startup.

Install: npx nexo-brain@1.5.0 | GitHub Release

Update (March 29, 2026): NEXO Brain v1.4.1 — Multi-AI Code Review. Three bugs discovered when GPT-5.4 Codex and Gemini 2.5 reviewed the full codebase alongside Claude: smart startup queried wrong table (session_diaries vs session_diary), quarantine rejected confirmations with cosine >0.8 (now checks for semantic opposition markers), Knowledge Graph crashed on missing datetime.timezone import. Plus: memory sanitization against prompt injection attacks. GitHub Release

Update (March 29, 2026): NEXO Brain v1.4.0 — The Brain Dreams. All 9 nightly scripts migrated from Python word-overlap heuristics to Claude CLI (Opus) wrapper pattern. Scripts collect data in Python, then let Claude make intelligent decisions — semantic dedup, root cause analysis, prioritized synthesis. Python wrapper collects → claude -p prompt --model opus decides → wrapper executes. GitHub Release

Update (March 28, 2026): NEXO Brain v1.3.0 — Evolution System. NEXO can now evolve its own configuration autonomously through a structured proposal → review → implement cycle. Dual-mode: auto for low-risk changes, review for human approval. Nightly auto-dedup learnings + STM test memory purge. License changed to AGPL-3.0 (v1.2.3). GitHub Release

Update (March 27, 2026): NEXO Brain v1.2.2 — Fix phantom farewell messages. Stop hook no longer generates goodbye messages when session ends. Evolution system re-enabled after false positive detection. GitHub Release

Update (March 27, 2026): NEXO Brain v1.2.1 — Context Continuity. PreCompact/PostCompact hooks now preserve full session state across context compaction — no memory loss mid-session. Multi-terminal support: multiple Claude instances share state via MCP. 115+ tools across 19 categories. GitHub Release

Update (March 27, 2026): NEXO Brain v1.0.0 — Cognitive Cortex. The agent now reasons before acting. Architectural inhibitory control validates goals, plans, and evidence before allowing tool execution. Plus: 30 Core Rules as DNA, Smart Startup, Context Packets, Auto-Prime. GitHub Release

Update (March 25, 2026): NEXO Brain v0.7.0 — Learned Weights + Somatic Markers. Signal weights now learn from real feedback via Ridge regression (2-week shadow mode, weight momentum, auto-rollback). Somatic markers track pain memory per file/area — guard warns on high-risk files. Adaptive Personality v2 with 6 signals and emergency bypass. 4 new tools: nexo_adaptive_weights, nexo_somatic_check, nexo_somatic_stats, nexo_adaptive_override. 109+ MCP tools across 18 categories. Designed via 3-round AI debate (GPT-5.4 + Gemini 3.1 Pro + Claude Opus 4.6). GitHub | npm

Update (March 24, 2026): NEXO Brain v0.6.0 — Full Orchestration System

Memory alone doesn't make a co-operator. v0.6.0 ships the complete behavioral loop:

  • 5 Automated Hooks — SessionStart (full briefing), Stop (mandatory post-mortem with self-critique), PreCompact (context preservation before compression), PostToolUse (sensory register capture), Caffeinate (keep Mac awake)
  • Reflection Engine — processes session buffer after 3+ sessions, extracts patterns, updates user model. No LLM needed.
  • 23 Non-Negotiable Principles (Operational Codex) — hardwired behavioral foundation: never promise without scheduling, verify before claiming done, audit before delivering
  • Personality Calibration — 5 configurable axes during install: autonomy, communication, honesty, proactivity, error handling
  • Auto-Migration — existing v0.5.0 users run npx nexo-brain and get seamlessly upgraded. Data untouched.
  • LoCoMo F1 0.588 — outperforms GPT-4 by 55%, runs on CPU

Install: npx nexo-brain | GitHub Release v0.6.0

Update (March 23, 2026): NEXO Brain v0.3.1 Released

Video overview: Watch on YouTube

Since publishing this article, we've shipped 13 new cognitive features inspired by analysis of 14 competing memory systems. Highlights:

  • Prediction Error Gating — only novel information is stored (inspired by Vestige)
  • Security Pipeline — 4-layer defense against memory poisoning (inspired by ShieldCortex)
  • Quarantine Queue — facts must earn trust before becoming knowledge (inspired by Bicameral)
  • Memory Dreaming — discovers hidden connections during sleep cycle
  • HyDE Query Expansion — hypothetical document embeddings for richer retrieval
  • Spreading Activation — graph-based co-activation reinforcement

109+ MCP tools total. Full changelog: GitHub Release v0.3.1

Update (March 2026): v0.3.6 Released

Thread-safe SQLite with serialized writes, stress-tested at 54/54 tests (100%). Install: npx nexo-brain


How I Applied Cognitive Psychology to Give AI Agents Real Memory

Every time you close a Claude Code session, everything disappears. The assistant that just helped you debug a tricky production issue doesn't remember any of it tomorrow. It will make the same mistakes you corrected last week. It starts cold every single time.

I spent six months building a fix. The result is NEXO Brain — an open-source MCP server that gives AI agents a memory system modeled directly on how human memory actually works, using the Atkinson-Shiffrin model from cognitive psychology (1968).

This article is a technical deep-dive into how that works, why the psychological model matters, and how to install it yourself.


v0.8.0 Update (March 2026): Knowledge Graph with 988 bi-temporal nodes and D3 visualization, Web Dashboard (6 pages at localhost:6174), Cross-Platform support (Linux + Windows), Smart dedup with event-sourced edges, 4 new KG tools. 109+ MCP tools across 19 categories. Release notes

The Fundamental Problem with AI Memory Today

Current approaches to AI memory fall into two categories:

  1. Inject everything into the context window — expensive, hits limits fast, and older information gets less attention as context grows
  2. Store and retrieve by keyword — misses the point entirely; human memory doesn't work by keyword matching

Neither approach handles the most important aspects of memory:

  • Forgetting (critical for not drowning in noise)
  • Reinforcement (important things get stronger, unused things fade)
  • Associative retrieval (finding relevant memories by meaning, not words)
  • Metacognition (knowing what you know and checking it before acting)

The Atkinson-Shiffrin Model Applied to AI

The Atkinson-Shiffrin model (1968) describes human memory as a multi-store system with distinct stages and processes. Here's how I mapped each stage to a practical AI implementation:

What you say and do
    │
    ├─→ Sensory Register (raw capture, 48h)
    │       │
    │       └─→ Attention filter: "Is this worth remembering?"
    │               │
    │               ↓
    ├─→ Short-Term Memory (7-day half-life)
    │       │
    │       ├─→ Used often? → Consolidate to Long-Term Memory
    │       └─→ Not accessed? → Gradually forgotten
    │
    └─→ Long-Term Memory (60-day half-life)
            │
            ├─→ Active: instantly searchable by meaning
            ├─→ Dormant: faded but recoverable
            └─→ Near-duplicates auto-merged to prevent clutter

This isn't a metaphor. The system literally implements each of these stages with distinct storage, decay rates, and transition logic.

Stage 1: Sensory Register (48-hour raw capture)

Every interaction creates raw memories in the Sensory Register — high-volume, short-lived (48h TTL). Most of it gets discarded. Only what passes the attention filter moves forward.

The attention filter uses a simple but effective heuristic: does this change future behavior? A preference stated explicitly, a mistake made and corrected, a decision with trade-offs — these pass. Generic conversation doesn't.

def should_consolidate_to_stm(memory: dict) -> bool:
    """Attention filter: does this memory warrant STM storage?"""
    signals = [
        memory.get("was_corrected", False),       # User corrected the AI
        memory.get("is_preference", False),        # User stated a preference
        memory.get("has_trade_off", False),        # Decision had alternatives
        memory.get("was_referenced_again", False), # Came up twice in session
    ]
    return sum(signals) >= 1

Stage 2: Short-Term Memory (7-day half-life)

STM is the working layer — recent, fast-access, vector-indexed. Memories here have a 7-day half-life using the Ebbinghaus forgetting curve:

strength(t) = initial_strength × e^(-decay_rate × t)

Where decay_rate = ln(2) / half_life_days. A memory accessed yesterday is strong. Not accessed for a week? It starts fading. Access it again and the clock resets with a higher baseline — this is rehearsal-based reinforcement.

The half-life isn't arbitrary. It reflects the empirical observation that information needs to be revisited within about a week to be remembered reliably. If you haven't needed something in 7 days, there's a good chance you won't need it at all.
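The decay formula can be sketched in a few lines. These helper names are illustrative, not NEXO's actual API:

```python
import math

def decay_rate(half_life_days: float) -> float:
    """Exponential decay constant: ln(2) / half-life."""
    return math.log(2) / half_life_days

def strength_after(initial: float, days: float, half_life_days: float = 7.0) -> float:
    """Ebbinghaus curve: strength(t) = initial * e^(-decay_rate * t)."""
    return initial * math.exp(-decay_rate(half_life_days) * days)
```

After exactly one half-life the strength halves: `strength_after(1.0, 7.0)` returns 0.5.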

Stage 3: Long-Term Memory (60-day half-life)

Memories promoted from STM enter LTM with a 60-day half-life. These are the persistent patterns — coding conventions you always use, recurring mistakes, established preferences, architectural decisions.

LTM memories go through a nightly consolidation process (runs at 03:00):

  1. Decay — strength scores updated using Ebbinghaus curves
  2. Consolidation — high-strength STM memories promoted to LTM
  3. Merge — near-duplicate memories fused (cosine similarity > 0.92)
  4. Pruning — memories below minimum strength threshold archived

This runs as a macOS LaunchAgent while you sleep, which is not incidental — it's a direct parallel to how sleep consolidates human memories.
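A minimal sketch of one nightly pass over steps 1, 2, and 4. The promote and prune thresholds here are assumptions, and the merge step is elided:

```python
import math

STM_HALF_LIFE, LTM_HALF_LIFE = 7.0, 60.0
PROMOTE_THRESHOLD, PRUNE_THRESHOLD = 0.7, 0.05  # illustrative values

def nightly_consolidation(memories: list, days_elapsed: float = 1.0) -> list:
    """One pass of the 03:00 job: decay, promote, prune."""
    surviving = []
    for m in memories:
        # 1. Decay: apply the Ebbinghaus curve for the memory's store
        half_life = LTM_HALF_LIFE if m["store"] == "ltm" else STM_HALF_LIFE
        m["strength"] *= math.exp(-math.log(2) / half_life * days_elapsed)
        # 2. Consolidation: strong STM memories are promoted to LTM
        if m["store"] == "stm" and m["strength"] >= PROMOTE_THRESHOLD:
            m["store"] = "ltm"
        # 4. Pruning: memories below the floor are archived (dropped here)
        if m["strength"] >= PRUNE_THRESHOLD:
            surviving.append(m)
    return surviving
```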


Semantic Search: Finding by Meaning, Not Words

The search layer uses fastembed with the BAAI/bge-small-en-v1.5 model (384 dimensions) for vector embeddings. Memories are indexed by their semantic content, not their text.

Why this matters in practice:

  • Search for "deploy problems" → finds a memory about "SSH timeout on production server"
  • Search for "user prefers dark theme" → finds a memory about "always use dark backgrounds in UI"
  • Search for "database migration" → finds memories about Prisma schema changes, even if they never used the word "migration"

The retrieval pipeline looks like this:

def retrieve_memories(query: str, n: int = 10) -> list[dict]:
    """RAG retrieval across all memory stores."""
    query_vector = embed(query)

    # Search across STM + LTM with decay-weighted scores
    candidates = []
    for memory in get_all_active_memories():
        similarity = cosine_similarity(query_vector, memory["vector"])
        decay_weight = memory["strength"]  # Ebbinghaus-adjusted

        # Boost recently-accessed memories
        recency_boost = 1.0 + (0.1 * max(0, 7 - days_since_access(memory)))

        score = similarity * decay_weight * recency_boost
        candidates.append((score, memory))

    # Return top-n, update access timestamps (reinforcement)
    # Sort by score only; comparing dicts on score ties would raise TypeError
    results = sorted(candidates, key=lambda c: c[0], reverse=True)[:n]
    for _, memory in results:
        reinforce(memory)  # Accessing a memory strengthens it

    return [m for _, m in results]

The key insight: accessing a memory strengthens it. This is the computational equivalent of rehearsal — memories you keep using become more durable.
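The `reinforce` call in the pipeline above isn't shown. A plausible sketch, where the boost value and field names are assumptions:

```python
def reinforce(memory: dict, boost: float = 0.15, cap: float = 1.0) -> None:
    """Rehearsal: retrieval raises the baseline and resets the decay clock."""
    memory["strength"] = min(cap, memory["strength"] + boost)
    memory["access_count"] = memory.get("access_count", 0) + 1
    memory["days_since_access"] = 0  # decay is measured from the last access
```

Capping at 1.0 keeps frequently used memories durable without letting them dominate every retrieval score.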


Metacognition: Checking Your Own Memory Before Acting

This is the feature I'm most proud of. Before every code change, NEXO calls nexo_guard_check:

# Every edit to production code triggers this
result = await nexo_guard_check(
    files=["src/api/payments.php"],
    area="payments"
)

# Result example:
{
    "blocking_rules": [],
    "learnings": [
        {
            "content": "Stripe webhook verification must happen before any DB writes. Learned 2025-11-03.",
            "strength": 0.87,
            "times_referenced": 4
        }
    ],
    "schemas": {
        "payments": "id, user_id, amount_cents, status, stripe_event_id, created_at"
    }
}

If there are relevant learnings, they're surfaced before the AI touches the file — not after you've already broken production.

The guard is especially powerful for preventing repeated errors: mistakes you've corrected once have a learning attached. When the AI is about to make the same mistake again, the guard catches it. If the same error appears 3+ times despite the guard, it becomes a blocking rule — the AI literally cannot proceed without explicitly acknowledging it.
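The escalation from learning to blocking rule could look like this sketch. The field names and threshold handling are illustrative, not the actual `log_repetition` tool's schema:

```python
BLOCKING_THRESHOLD = 3  # repeats before a learning hardens into a blocking rule

def log_repetition(learning: dict) -> dict:
    """Record another occurrence of a known error; escalate at 3+ repeats."""
    learning["repeat_count"] = learning.get("repeat_count", 0) + 1
    if learning["repeat_count"] >= BLOCKING_THRESHOLD:
        # The guard now refuses to proceed until explicitly acknowledged
        learning["blocking"] = True
    return learning
```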


Cognitive Dissonance Detection

This one surprised me with how useful it turned out to be in practice.

When you give an instruction that contradicts an established memory, the system doesn't silently obey or silently resist. It verbalizes the conflict:

"My memory says you always use Tailwind for styling (established 2025-10-12, referenced 8 times), but you're asking for inline styles. Is this a permanent change, a one-time exception, or was the old memory wrong?"

Implemented via cosine similarity against LTM memories:

def detect_dissonance(new_instruction: str, threshold: float = 0.75) -> list[dict]:
    """Find memories that contradict the new instruction."""
    instruction_vector = embed(new_instruction)
    contradictions = []

    for memory in get_ltm_memories():
        similarity = cosine_similarity(instruction_vector, memory["vector"])

        # High similarity + opposite polarity signals = dissonance
        if similarity > threshold and has_opposing_polarity(new_instruction, memory["content"]):
            contradictions.append({
                "memory": memory,
                "similarity": similarity,
                "established_on": memory["created_at"],
                "times_referenced": memory["access_count"]
            })

    return sorted(contradictions, key=lambda x: x["times_referenced"], reverse=True)

Three resolution paths:

  • Paradigm shift — old memory was wrong, update permanently
  • Exception — follow the new instruction once, keep the old memory
  • Override — Francisco knows what he's doing, do it now and log for tonight's review
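The three paths can be sketched as a single dispatcher. Function and field names here are illustrative:

```python
def resolve_dissonance(memory: dict, resolution: str, new_content: str) -> dict:
    """Apply one of the three resolution paths to a contradicted memory."""
    if resolution == "paradigm_shift":
        memory["content"] = new_content  # old memory was wrong: rewrite it
        memory["strength"] = 1.0
    elif resolution == "exception":
        # Keep the old memory; record the one-time deviation alongside it
        memory.setdefault("exceptions", []).append(new_content)
    elif resolution == "override":
        # Obey now; queue the conflict for tonight's consolidation review
        memory.setdefault("override_log", []).append(new_content)
    else:
        raise ValueError(f"unknown resolution: {resolution}")
    return memory
```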

Sibling Memories: Context-Dependent Knowledge

Some memories look identical but apply to different contexts. "How to deploy" for a Node.js project is different from a PHP project. Naively merging these creates hallucinations.

The sibling detection algorithm looks for discriminating entities — context markers (OS, language, framework, project name) that differ between similar memories:

def detect_siblings(memory_a: dict, memory_b: dict) -> bool:
    """Two memories are siblings if similar content but different discriminating context."""
    content_similarity = cosine_similarity(memory_a["vector"], memory_b["vector"])

    if content_similarity < 0.85:
        return False  # Not similar enough to be siblings

    # Extract entities from both
    entities_a = extract_entities(memory_a["content"])  # {os: "macOS", lang: "Python"}
    entities_b = extract_entities(memory_b["content"])  # {os: "Linux", lang: "Python"}

    # Find discriminating differences across keys from either memory
    keys = entities_a.keys() | entities_b.keys()
    discriminators = {k for k in keys if entities_a.get(k) != entities_b.get(k)}

    return len(discriminators) > 0

Instead of merging, siblings are linked. When one is retrieved, the other is mentioned: "Applying the macOS deploy procedure. Note: there's a sibling memory for Linux that uses a different port."


The Trust Score: A Mirror, Not a Gate

NEXO maintains a trust score (0-100) that evolves based on alignment events:

Event | Score change
You thank NEXO or explicitly praise | +3
You delegate without micromanaging | +2
NEXO catches an error via guard/siblings | +3
You correct NEXO | -3
NEXO repeats an error it had a learning for | -7
Memory corrected 3+ times in 7 days | -10 (automated)

The score doesn't control what NEXO can do — you're always in control. It calibrates internal rigor: at score <40, the guard runs more checks and uses a lower similarity threshold. At score >80, it reduces redundant verifications because alignment is high.

It's a mirror that helps the AI calibrate how careful to be, based on demonstrated reliability.
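A sketch of how the score could map to guard rigor, using the two cutoffs mentioned above. The similarity values are assumptions:

```python
def guard_config(trust_score: int) -> dict:
    """Map trust score to internal rigor; thresholds are illustrative."""
    if trust_score < 40:
        # Low alignment: surface more memories, run extra checks
        return {"similarity_threshold": 0.65, "extra_checks": True}
    if trust_score > 80:
        # High alignment: skip redundant verifications
        return {"similarity_threshold": 0.80, "extra_checks": False}
    return {"similarity_threshold": 0.75, "extra_checks": False}
```

A lower similarity threshold means more borderline memories surface during guard checks, which is exactly the "more careful" behavior at low trust.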


Installation

npx nexo-brain

The installer handles everything — Python dependencies, MCP server setup, Claude Code configuration, and the LaunchAgents for automated cognitive processes:

  How should I call myself? (default: NEXO) > Atlas

  Can I explore your workspace to learn about your projects? (y/n) > y

  Keep Mac awake so cognitive processes run on schedule? (y/n) > y

  Installing cognitive engine...
  Setting up home directory...
  Scanning workspace...
    - 3 git repositories found
    - Node.js project detected
  Configuring Claude Code MCP...
  Setting up automated processes...
    5 automated processes configured.

  ╔══════════════════════════════════════╗
  ║  Atlas is ready. Type 'atlas'.      ║
  ╚══════════════════════════════════════╝

Then just type your agent's name to start a session:

atlas

No need to run claude manually. The agent greets you immediately, adapted to the time of day, resuming from the mental state of the last session.

What Gets Installed

Component | What | Where
Cognitive engine | fastembed, numpy, vector search | pip packages
MCP server | 105+ tools across 17 categories | ~/.nexo/
Plugins | Guard, episodic memory, cognitive, entities, preferences | ~/.nexo/plugins/
Hooks | Session capture, stop detection | ~/.nexo/hooks/
LaunchAgents | Decay, consolidation, audit, postmortem | ~/Library/LaunchAgents/

Requirements: macOS (Linux support planned), Node.js 18+. Python 3, Homebrew, and Claude Code are installed automatically if missing.


The 105+ MCP Tools

NEXO exposes memory operations as MCP tools that Claude can call:

Category | Tools | Purpose
Cognitive (8) | retrieve, stats, inspect, metrics, dissonance, resolve, sentiment, trust | The brain — memory, RAG, trust, mood
Guard (3) | check, stats, log_repetition | Metacognitive error prevention
Episodic (10) | change_log, decision_log, diary_write/read, recall | What happened and why
Sessions (3) | startup, heartbeat, status | Session lifecycle
Learnings (5) | add, search, update, delete, list | Error patterns and rules
Credentials (5) | create, get, update, delete, list | Secure local storage
Reminders (5) | list, create, update, complete, delete | Tasks and deadlines

The agent calls these tools automatically during the session. You don't need to think about it.


What This Looks Like in Practice

After a few weeks of use, the difference is qualitative. The agent:

  • Opens with "Resuming — we were mid-deploy on the payment module, the Stripe webhook issue was unresolved" instead of waiting for you to re-explain
  • Catches the same database migration pattern it broke last month before touching the file
  • Notices you've been terse for the last hour and switches to ultra-concise mode without being asked
  • Flags when you're about to do something that contradicts a decision you made three weeks ago

The memory isn't perfect — it forgets things, makes consolidation errors, occasionally retrieves something irrelevant. That's by design. Perfect recall isn't the goal. Useful recall is.

v0.3.3 — Incremental Diary Drafts + Auto-Close (March 2026)

Session diaries now write themselves. Three-layer system ensures context is never lost:

  • Draft accumulation: Every 5 heartbeats, NEXO builds a diary draft
  • Auto-close: Orphan sessions get their drafts promoted to real diaries
  • Source tracking: Each diary entry is tagged as claude, auto-close, or hook-fallback

v0.3.4 — Formal Migration System (March 2026)

NEXO now tracks schema migrations formally. A schema_migrations table records which migrations have been applied. Users upgrading via npm or git get automatic database migrations on server start — no manual steps needed.
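The mechanism is the standard tracked-migrations pattern. A minimal sketch against SQLite, where the migration contents and the single-column `schema_migrations` shape are assumptions:

```python
import sqlite3

def apply_migrations(conn: sqlite3.Connection, migrations: list) -> int:
    """Apply any migrations not yet recorded in schema_migrations."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (id INTEGER PRIMARY KEY)"
    )
    done = {row[0] for row in conn.execute("SELECT id FROM schema_migrations")}
    applied = 0
    for mig_id, sql in migrations:
        if mig_id in done:
            continue  # already applied on a previous start
        conn.execute(sql)
        conn.execute("INSERT INTO schema_migrations (id) VALUES (?)", (mig_id,))
        applied += 1
    conn.commit()
    return applied
```

Running it on every server start is idempotent: already-recorded migrations are skipped, so upgrades via npm or git need no manual steps.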


Links

  • GitHub: github.com/wazionapps/nexo
  • npm: npx nexo-brain
  • Architecture spec: See docs/specs/ in the repo for the full cognitive architecture document
  • License: AGPL-3.0

If you're building on top of this or have questions about the memory architecture, open an issue. The sibling memory detection and the dissonance resolution algorithm in particular could use more real-world testing.

Update (March 23, 2026): v0.3.5 — Trust Auto-Detection + context_hint Fix

Trust Event Auto-Detection

nexo_heartbeat now automatically detects trust events from user text — no manual trust calls needed:

  • explicit_thanks — "gracias", "buen trabajo", "thanks", "great job"
  • correction — "ya te dije", "otra vez", "that's wrong"
  • repeated_error — "otra vez lo mismo", "same mistake again"
  • delegation — "encárgate", "hazlo tú", "handle it"

Each detection auto-adjusts the trust score and reports inline:

TRUST AUTO: 72 → 75 (+3) [explicit_thanks] auto-detected: gracias
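The detection layer amounts to phrase matching over the user's text. A sketch using the phrases and deltas listed above; the matching logic itself is an assumption:

```python
TRUST_PATTERNS = {
    "explicit_thanks": (["gracias", "buen trabajo", "thanks", "great job"], 3),
    "correction": (["ya te dije", "otra vez", "that's wrong"], -3),
    "repeated_error": (["otra vez lo mismo", "same mistake again"], -7),
    "delegation": (["encárgate", "hazlo tú", "handle it"], 2),
}

def detect_trust_events(text: str) -> list:
    """Return (event, delta, matched_phrase) for every pattern the text hits."""
    lowered = text.lower()
    hits = []
    for event, (phrases, delta) in TRUST_PATTERNS.items():
        for phrase in phrases:
            if phrase in lowered:
                hits.append((event, delta, phrase))
                break  # one hit per event type is enough
    return hits
```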

Configurable Trust Deltas

Override any trust delta via the trust_event_config table — no code changes needed.

BUGFIX: context_hint parameter

The context_hint parameter was missing from the MCP tool definition in nexo_heartbeat. Now properly exposed.


Update: LoCoMo Benchmark Results (v0.5.0)

We benchmarked NEXO Brain on LoCoMo (ACL 2024) — a peer-reviewed long-term conversation memory benchmark.

Result: F1 0.588 — outperforming GPT-4 (0.379) by 55%, running entirely on CPU.

System | F1 | Hardware
NEXO Brain v0.5.0 | 0.588 | CPU only
GPT-4 (128K context) | 0.379 | GPU cloud
Gemini Pro 1.0 | 0.313 | GPU cloud
LLaMA-3 70B | 0.295 | A100 GPU

The cognitive architecture (Atkinson-Shiffrin model with adaptive decay, dream cycles, and prediction error gating) combined with 768-dim embeddings, hybrid search, and cross-encoder reranking achieves state-of-the-art results on consumer hardware.

Full benchmark details →
