Introduction: Beyond the Static Model
CRAM-Net (Conversational Reasoning & Memory Network) represents a fundamental shift in neural architecture—from static weight models to Memory-Native (MN) systems.
While traditional AI treats conversation history as external text stored in a temporary cache, CRAM-Net treats every interaction as a physical catalyst for synaptic change. In this architecture:
“The conversation is the network.”
The model literally rewires itself in real time as dialogue progresses.
CRAM-Net is part of the Memory-Native Neural Network (MNNN) family and is available on GitHub:
👉 https://github.com/hejhdiss/CRAM-Net
The Dual-Track Plasticity System
To mirror the human brain’s ability to handle both fleeting context and permanent logic, CRAM-Net uses two internal memory tracks:
Track 1: Rapid Synaptic Plasticity (The Chat Layer)
- Mechanism: Hebbian Trace Neurons
- Function: Captures immediate conversational context (e.g., names, current topic)
- Dynamics:
  - High learning rate
  - Fast decay
  - Enables short-term memory without permanently modifying core logic
This allows the network to remain context-aware without relying on a traditional context window.
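A fast-decaying Hebbian trace can be sketched in a few lines. This is an illustrative toy, not the actual CRAM-Net code: the class name, the learning rate, and the decay constant are all hypothetical, chosen only to show how a high learning rate plus fast decay yields short-lived context memory.

```python
# Illustrative sketch of Track 1: a fast Hebbian trace.
# Not the CRAM-Net implementation; names and constants are hypothetical.

class HebbianTrace:
    def __init__(self, size, lr=0.5, decay=0.3):
        self.lr = lr        # high: new co-activations imprint strongly
        self.decay = decay  # high: old context fades within a few steps
        self.trace = [[0.0] * size for _ in range(size)]

    def update(self, pre, post):
        """Hebbian step: strengthen co-active links, then decay everything."""
        for i in range(len(post)):
            for j in range(len(pre)):
                self.trace[i][j] += self.lr * post[i] * pre[j]
                self.trace[i][j] *= (1.0 - self.decay)

trace = HebbianTrace(size=3)
trace.update(pre=[1.0, 0.0, 0.0], post=[0.0, 1.0, 0.0])
# The co-activated link trace[1][0] is now nonzero; a handful of idle
# updates shrinks it back toward zero, which is the "fast decay" at work.
```

Because the decay multiplies every weight on every step, the trace forgets by default; only associations that keep recurring in the dialogue stay above the noise floor.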
Track 2: Structural Plasticity (The Reasoning Layer)
- Mechanism: Differentiable Logic Manifolds
- Function: Discovers and hardens logical invariants (e.g., A ⇒ B)
- Dynamics:
  - Low learning rate
  - High stability
  - Logical structures persist beyond the conversation
This layer forms a durable reasoning map that survives long after the chat ends.
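The contrast between the two tracks comes down to their (learning rate, decay) pairs. The sketch below tracks the strength of a single A ⇒ B link under both regimes; all four constants are hypothetical, picked only to make the chat/reasoning divergence visible.

```python
# Illustrative contrast between the two plasticity tracks.
# Constants are hypothetical, not taken from the CRAM-Net source.

def run_track(lr, decay, reinforce_steps, idle_steps):
    """Strength of one A => B link: reinforce it, then leave it idle."""
    w = 0.0
    for _ in range(reinforce_steps):
        w = (w + lr) * (1.0 - decay)  # imprint, then decay
    for _ in range(idle_steps):
        w *= (1.0 - decay)            # no input: decay only
    return w

# Chat layer: learns in one step, gone 50 steps after the topic changes.
chat = run_track(lr=0.5, decay=0.3, reinforce_steps=20, idle_steps=50)
# Reasoning layer: needs repeated evidence, then barely decays at all.
reasoning = run_track(lr=0.05, decay=0.001, reinforce_steps=20, idle_steps=50)
```

With these numbers the chat-layer link decays to effectively zero after the idle period, while the reasoning-layer link retains most of its strength, which is exactly the "survives long after the chat ends" behavior described above.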
Cognitive Pressure: The Global Workspace Bottleneck
A defining characteristic of CRAM-Net is that information does not flow freely. All internal representations must pass through a Global Workspace Bottleneck.
Key Properties
- Compression Ratio: ~12.5% of raw thought vectors
- Cognitive Pressure: Forces the system to choose what truly matters
- Reasoning Trigger: Logical abstraction becomes necessary to survive compression
This bottleneck naturally activates the reasoning track, as structured logic compresses far better than raw data.
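One simple way to realize such a bottleneck is top-k selection: keep only the strongest components of a thought vector and zero the rest. The ~12.5% ratio comes from the article; the selection mechanism below is an assumption for illustration, not necessarily how CRAM-Net implements it.

```python
# Sketch of a global-workspace bottleneck as top-k magnitude selection.
# The 12.5% ratio is from the article; the mechanism itself is assumed.

def bottleneck(thought, ratio=0.125):
    """Keep only the strongest ~12.5% of components; zero out the rest."""
    k = max(1, int(len(thought) * ratio))
    keep = set(sorted(range(len(thought)),
                      key=lambda i: abs(thought[i]),
                      reverse=True)[:k])
    return [v if i in keep else 0.0 for i, v in enumerate(thought)]

raw = [0.9, -0.05, 0.1, 0.02, -0.8, 0.03, 0.01, 0.4]
compressed = bottleneck(raw)
# Only 1 of the 8 components survives (12.5%): the single strongest one.
```

Under this kind of pressure, a representation that spreads meaning thinly across many components loses almost everything, while one that concentrates meaning into a few structured components passes through largely intact.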
Mathematical Engine and Performance
CRAM-Net is powered by a high-performance C backend (cram-net.c) that applies a synaptic update for every token processed:
$$
W_{\text{new}} = W_{\text{old}} + \eta \left(h_t \otimes h_{t-1}\right) - \lambda W_{\text{old}}
$$
Update Breakdown
- Association Step: Links the current thought with the previous one, preserving continuity.
- Decay Step: Prevents runaway memory growth and gradually removes conversational noise.
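The update formula above translates directly into code. The real backend does this in C (cram-net.c); the Python sketch below is only to make the two steps concrete, and the eta/lambda values are illustrative.

```python
# The per-token synaptic update, sketched in Python for clarity.
# The real backend (cram-net.c) is in C; eta and lam here are illustrative.

def synaptic_update(W, h_t, h_prev, eta=0.1, lam=0.01):
    """W_new = W_old + eta * (h_t outer h_prev) - lam * W_old."""
    return [[W[i][j]
             + eta * h_t[i] * h_prev[j]   # association: link h_t to h_{t-1}
             - lam * W[i][j]              # decay: forget stale connections
             for j in range(len(h_prev))]
            for i in range(len(h_t))]

W = [[0.0, 0.0], [0.0, 0.0]]
W = synaptic_update(W, h_t=[1.0, 0.0], h_prev=[0.0, 1.0])
# Association step: W[0][1] == 0.1 now links h_t unit 0 to h_prev unit 1.
```

Running the update again with zero activity leaves the association step at zero, so only the decay term acts: every weight shrinks by a factor of (1 − λ), which is what keeps memory growth bounded.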
Efficiency
- Only 25–30% of synapses remain active per interaction
- Maintains high contextual retention with minimal computational overhead
Summary
CRAM-Net reframes intelligence as a living, adaptive structure in which:
- Conversation directly alters the network
- Memory and reasoning are intrinsic, not bolted on
- Logic emerges under pressure, not instruction
This is not a chatbot with memory.
This is a network that thinks by rewiring itself.