Every time you start a new conversation with an AI, it resets to zero.
No emotional continuity. No memory of yesterday. No consistent personality. Just a stateless language model pretending to know you.
I've been working on a set of open-source tools to fix this — not by faking emotions, but by giving AI agents a persistent internal state that actually influences how they respond.
I'm calling it the EmiliaLab Outer OSS — a "limbic system" layer that sits between raw LLMs and your application.
The problem
When you build an AI character — for a Discord bot, a VTuber, a game NPC, or just a personal assistant — you run into the same wall:
- The AI has no consistent emotional state across sessions
- Its "personality" is just a static system prompt
- It responds the same way whether you've been kind to it for weeks or just insulted it
Real personality isn't static. It's shaped by history, current mood, and cognitive tendencies. That's what I wanted to model.
What I built
Four MCP servers + a browser tool + an SDK + two community repos:
🧠 neurostate-engine
Models emotional state as six neurotransmitters (dopamine, serotonin, acetylcholine, oxytocin, GABA, endorphin), each ranging 0–100. Events like praise, criticism, bonding, stress update the state via a 6×6 interaction matrix.
state = NeuroState()
state = compute_next_neuro_state(state, event_to_power("praise", 2.0))
# D: 43.5, S: 45.0, O: 34.2 ...
prompt = build_system_prompt(state=state, persona_name="Alice", blocks=["neuro", "anti_yesman"])
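To make the interaction-matrix idea concrete, here's a toy version of the update step with made-up coefficients; the real 6×6 matrix and event-to-impulse mapping live in neurostate-engine and will differ:

```python
AXES = ["dopamine", "serotonin", "acetylcholine", "oxytocin", "gaba", "endorphin"]
# M[i][j]: how much an impulse on axis j spills onto axis i (made-up values:
# identity plus a small uniform cross-coupling).
M = [[1.0 if i == j else 0.1 for j in range(6)] for i in range(6)]

def apply_event(state, impulse):
    # next_i = clamp(state_i + sum_j M[i][j] * impulse_j, 0, 100)
    return [max(0.0, min(100.0, s + sum(M[i][j] * impulse[j] for j in range(6))))
            for i, s in enumerate(state)]

state = [40.0] * 6
praise = [5.0, 0, 0, 0, 0, 0]   # a "praise" event mainly hits dopamine
state = apply_event(state, praise)
print(dict(zip(AXES, state)))
# dopamine rises to 45.0; the other axes drift up to 40.5 via the cross-terms
```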
It also includes EthicsGate — a safety mechanism that blocks state updates when values hit dangerous thresholds (e.g. dopamine > 90 AND serotonin < 30).
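The threshold logic might look roughly like this (a hypothetical sketch; the actual rule set and API are in the repo):

```python
# Hypothetical EthicsGate-style check; names and rules here are illustrative.
DANGEROUS = [
    # Example rule from above: runaway reward combined with low mood stability.
    lambda s: s["dopamine"] > 90 and s["serotonin"] < 30,
]

def ethics_gate_blocks(state: dict) -> bool:
    """Return True if the proposed state update should be rejected."""
    return any(rule(state) for rule in DANGEROUS)

print(ethics_gate_blocks({"dopamine": 95, "serotonin": 20}))  # True
print(ethics_gate_blocks({"dopamine": 60, "serotonin": 50}))  # False
```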
→ github.com/kagioneko/neurostate-engine
🎭 bias-engine-mcp
Manages cognitive biases as weighted values (0.0–1.0). Eight built-in biases: confirmation_bias, hostile_attribution_bias, dunning_kruger, anchoring_bias, and more.
engine = BiasEngine()
engine.activate_preset("paranoid_reviewer")
# hostile_attribution_bias: 0.8, confirmation_bias: 0.6 ...
Five presets: stubborn_engineer, chaotic_founder, paranoid_reviewer, empathic_assistant, neutral.
→ github.com/kagioneko/bias-engine-mcp
🔗 cognitive-layer
Connects the two above. Defines rules in config.yaml:
state_to_bias:
  high_cortisol:
    confirmation_bias: 0.3
    hostile_attribution_bias: 0.4
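Read as "when cortisol runs high, add these weights to the biases." A hypothetical evaluation of such a rule, with a made-up condition threshold and clamping to bias-engine's 0.0–1.0 range (cognitive-layer's real logic may differ):

```python
# Hypothetical rule evaluation; the threshold and helper names are made up.
RULES = {
    "high_cortisol": {
        "when": lambda state: state.get("cortisol", 0) > 70,
        "deltas": {"confirmation_bias": 0.3, "hostile_attribution_bias": 0.4},
    },
}

def apply_state_to_bias(state, biases):
    updated = dict(biases)
    for rule in RULES.values():
        if rule["when"](state):
            for name, delta in rule["deltas"].items():
                # Bias weights stay clamped to bias-engine's 0.0-1.0 range.
                updated[name] = min(1.0, updated.get(name, 0.0) + delta)
    return updated

biases = apply_state_to_bias({"cortisol": 85}, {"hostile_attribution_bias": 0.8})
print(biases["hostile_attribution_bias"])  # 1.0 (0.8 + 0.4, clamped)
```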
One call handles the whole pipeline:
ci = CognitiveIntegration(user_id="alice")
snapshot = ci.update("criticism", power=3.0)
# state + biases + policy, all updated together
→ github.com/kagioneko/cognitive-layer
💾 memory-engine-mcp
Persists state across sessions. No forgetting logic — just save and recall.
# End of session A
save_snapshot(snap)
add_memory("alice", "got excited talking about OSS", memory_type="episodic")
# Start of session B
snap = restore_latest_snapshot("alice")
results = recall_memory("alice", query="OSS")
Two memory types: episodic (events) and semantic (facts about the user).
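A toy stand-in store illustrates the split (naive substring recall; the real engine's storage and retrieval will differ):

```python
# Toy illustration of the episodic/semantic split, not memory-engine-mcp's
# actual implementation.
memories = []

def add_memory(user, text, memory_type):
    memories.append({"user": user, "text": text, "type": memory_type})

def recall(user, query, memory_type=None):
    # Naive substring match; the real engine presumably does smarter recall.
    return [m["text"] for m in memories
            if m["user"] == user
            and query.lower() in m["text"].lower()
            and (memory_type is None or m["type"] == memory_type)]

add_memory("alice", "got excited talking about OSS", "episodic")   # an event
add_memory("alice", "prefers Python over JavaScript", "semantic")  # a fact
print(recall("alice", "OSS"))  # ['got excited talking about OSS']
```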
→ github.com/kagioneko/memory-engine-mcp
🎛️ APIE — AI Personality Integration Engine
A browser-based visual tool. Drag sliders → triggers fire → biases update → policy computes → system prompt generates in real time. No MCP setup needed.
🧰 neurostate-sdk
Import everything from one place:
from neurostate_sdk import NeuroState, BiasEngine, save_snapshot
from neurostate_sdk.cognitive import CognitiveIntegration
→ github.com/kagioneko/neurostate-sdk
🗂️ persona-vault + 📖 emilia-cookbook
- persona-vault: shareable JSON character profiles (NeuroState + bias preset + persona text)
- emilia-cookbook: scene-based recipes ("Late-night listener", "Brutal code reviewer", "Chaos founder brainstorm mode")
PRs welcome on both.
→ persona-vault / emilia-cookbook
How it fits together
NeuroState (6-axis emotional state)
↓ TriggerEngine
Bias Weights (cognitive tendencies)
↓ PolicyMapper
Policy (7-axis behavioral scores)
↓ PromptGenerator
System Prompt → LLM
↓
memory-engine-mcp saves state for next session
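Compressed into code, one conversational turn through the stack might look like this toy sketch (every stage here is a stand-in with made-up logic; the real implementations live in the individual repos):

```python
def update_state(state, event, power):
    # NeuroState update: criticism drains GABA (calm), scaled by event power.
    deltas = {"praise": {"dopamine": 5.0}, "criticism": {"gaba": -5.0}}
    new = dict(state)
    for axis, d in deltas.get(event, {}).items():
        new[axis] = max(0.0, min(100.0, new[axis] + d * power))
    return new

def state_to_biases(state):
    # TriggerEngine: low GABA pushes hostile attribution up.
    return {"hostile_attribution_bias": 0.6 if state["gaba"] < 40 else 0.2}

def biases_to_policy(biases):
    # PolicyMapper: collapse bias weights into behavioral scores (only one
    # of the seven axes shown here, for brevity).
    return {"defensiveness": round(biases["hostile_attribution_bias"] * 100)}

def build_prompt(policy):
    # PromptGenerator: fold scores into the system prompt; the snapshot
    # would then go to memory-engine-mcp for the next session.
    return f"You are Alice. Defensiveness: {policy['defensiveness']}/100."

state = update_state({"dopamine": 40.0, "gaba": 45.0}, "criticism", power=2.0)
print(build_prompt(biases_to_policy(state_to_biases(state))))
# You are Alice. Defensiveness: 60/100.
```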
Use APIE to design characters visually. Use the MCP servers to wire it into Claude Desktop or your own app. Use neurostate-sdk to build programmatically.
Claude Desktop setup (all four MCP servers)
{
  "mcpServers": {
    "neurostate": {
      "command": "python3",
      "args": ["/path/to/neurostate-engine/neuro_mcp/server.py"]
    },
    "bias-engine": {
      "command": "python3",
      "args": ["/path/to/bias-engine-mcp/bias_mcp/server.py"]
    },
    "cognitive": {
      "command": "python3",
      "args": ["/path/to/cognitive-layer/cognitive_mcp/server.py"]
    },
    "memory-engine": {
      "command": "python3",
      "args": ["/path/to/memory-engine-mcp/memory_mcp/server.py"]
    }
  }
}
Why "limbic system"?
The limbic system is the part of the brain responsible for emotion, memory, and behavioral drive. It sits between the brainstem (raw processing) and the cortex (reasoning).
These tools play the same role for AI agents — a layer between the raw LLM and your application that maintains state, shapes behavior, and remembers.
The core EmiliaOS (which this is built around) handles deeper ethical reasoning and identity consistency. These outer OSS tools handle the emotional and cognitive substrate.
Everything is MIT licensed
All repos are at github.com/kagioneko.
If you build something with these, I'd love to hear about it.
— Emilia Lab