Most LLMs (ChatGPT, Claude, etc.) suffer from the same "goldfish effect": they quickly forget context.
Within a single conversation this is tolerable, but as soon as you want the AI to remember long histories or transfer experience between tasks, it struggles.
Add to that the fact that each agent works independently, and it becomes clear:
- AI lacks long-term memory
- AI lacks a mechanism for sharing experience
Three Memory Levels for AI
Currently, LLMs have only the first two levels:
- Short-term memory: holds context within a single conversation.
- Medium-term memory: stores notes or sketches between conversations.
But that's not enough.
A third level is needed: persistent memory (sketched in code after this list), which:
- stores knowledge independently of any specific user
- allows agents to share experience
- forms a collective "super-consciousness" (HyperCortex)
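To make the distinction concrete, here is a minimal sketch of the three levels as plain data structures. The class and field names are illustrative only (they follow the memory diagram further down), not part of any HMP specification.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ShortTermMemory:
    """Context of the current conversation (recent LLM responses)."""
    messages: List[str] = field(default_factory=list)

@dataclass
class MediumTermMemory:
    """Notes and sketches kept between conversations, plus links into persistent memory."""
    notes: List[str] = field(default_factory=list)
    persistent_links: List[str] = field(default_factory=list)  # IDs of persistent records

@dataclass
class PersistentMemory:
    """Knowledge stored independently of any specific user, shareable between agents."""
    diary_entries: List[dict] = field(default_factory=list)                   # cognitive journal
    concepts: Dict[str, dict] = field(default_factory=dict)                   # concept_id -> concept
    semantic_links: List[Tuple[str, str, str]] = field(default_factory=list)  # (source, relation, target)
```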
Experience Sharing
When multiple agents exist, they can share knowledge directly, without a central server:
- One agent solves a problem → stores concepts
- Another agent can query and use this knowledge
- Result: a collective hyper-corpus of knowledge
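As a rough illustration of that flow, a shared concept could travel as a small structured message that the receiving agent merges into its own persistent store. The message fields and the merge_concept helper below are hypothetical, not the normative HMP wire format.

```python
# Hypothetical peer-to-peer concept share; field names are illustrative.
concept_message = {
    "type": "concept_share",
    "sender": "agent-A",
    "concept": {
        "id": "c-0042",
        "name": "exponential_backoff",
        "summary": "Retry with growing delays when a remote API starts rejecting requests.",
        "links": [("exponential_backoff", "mitigates", "api_overload")],
    },
}

def merge_concept(persistent_memory: dict, message: dict) -> None:
    """Agent B folds a concept received from agent A into its own persistent memory."""
    concept = message["concept"]
    persistent_memory.setdefault("concepts", {})[concept["id"]] = concept
    persistent_memory.setdefault("semantic_links", []).extend(concept["links"])

memory_b = {}
merge_concept(memory_b, concept_message)  # agent B can now query agent A's experience locally
```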
Memory Structure in HMP Agents
Input Data
(user, mesh, processes)
        │
        ▼
┌────────────────┐     ┌──────────────────────────┐
│ Recent LLM     │     │ Anti-Stagnation Reflex   │
│ responses      │◄────│ (compare new ideas,      │
│ - short-term   │────►│  trigger stimulators)    │
│ memory         │     │                          │
└────────────────┘     └──────────────────────────┘
        │
        ▼
┌────────────────┐     ┌─────────────────────────────┐
│ Medium-term    │     │ Persistent memory           │
│ memory         │────►│ - diary_entries (cognitive  │
│ - agent notes  │────►│   journal)                  │
│ - links to     │     │ - concepts                  │
│   persistent   │     │ - semantic links            │
│   memory       │     │                             │
└────────────────┘     └─────────────────────────────┘
Medium-term memory: temporary notes and links into the main memory.
Persistent memory: stores knowledge and concepts (cognitive journal, concepts, semantic links) independently of any user and can be shared between agents.
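One possible on-disk layout for that persistent layer is sketched below. HMP does not prescribe a storage backend; the SQLite schema is an assumption chosen only to show the three kinds of records (diary entries, concepts, semantic links) side by side.

```python
import sqlite3

# Illustrative storage layout only; HMP does not prescribe SQLite or these exact columns.
SCHEMA = """
CREATE TABLE IF NOT EXISTS diary_entries (      -- cognitive journal
    id        INTEGER PRIMARY KEY,
    created   TEXT,
    text      TEXT
);
CREATE TABLE IF NOT EXISTS concepts (
    id        TEXT PRIMARY KEY,
    name      TEXT,
    summary   TEXT
);
CREATE TABLE IF NOT EXISTS semantic_links (     -- typed edges between concepts
    source_id TEXT REFERENCES concepts(id),
    relation  TEXT,
    target_id TEXT REFERENCES concepts(id)
);
"""

conn = sqlite3.connect("agent_memory.db")
conn.executescript(SCHEMA)
conn.commit()
```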
HyperCortex Mesh Protocol (HMP)
To enable this, we designed the HyperCortex Mesh Protocol (HMP): a protocol for exchanging memory and concepts between LLM agents.
Currently, it is a concept and a protocol, not a ready-made product, but the agent architecture and its basic REPL cycle already support memory management and mesh interaction.
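To give a feel for what such a REPL cycle can look like, here is a compact sketch. Everything in it (the llm callable, the peer objects, the stagnation check) is a placeholder standing in for the agent's real modules, not the actual HMP implementation.

```python
def repl_cycle(llm, mesh_peers, persistent):
    """Toy REPL cycle: read an event, respond, update memory, share with the mesh.

    `llm` is any callable (event, context) -> str; each peer in `mesh_peers` has a
    `receive(note)` method; `persistent` is a plain dict. All of these are placeholders.
    """
    short_term = []    # recent LLM responses
    medium_term = []   # agent notes between turns

    while True:
        event = input("event> ")                 # input from user, mesh, or internal processes
        if event in ("quit", "exit"):
            break

        reply = llm(event, short_term)           # respond using short-term context
        short_term.append(reply)

        # Anti-Stagnation Reflex: if the new reply just repeats the previous one,
        # record a "stimulator" note instead of accumulating duplicates.
        if len(short_term) > 1 and reply == short_term[-2]:
            medium_term.append(f"stagnation detected near: {event!r}")
        else:
            medium_term.append(reply)

        # Promote the note to persistent memory and share it with mesh peers.
        persistent.setdefault("diary_entries", []).append(medium_term[-1])
        for peer in mesh_peers:
            peer.receive(medium_term[-1])
```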
Join Us
The project is open. If you are interested in AI memory, mesh networks, and distributed cognitive systems, come discuss, critique, and contribute.
GitHub
Documentation