In the rapidly evolving world of AI agents, a single agent often isn't enough. Complex tasks require specialization: a researcher to find facts, a builder to design solutions, and a reviewer to ensure quality. But how do these agents coordinate? How do they avoid repeating past mistakes?
The answer is Shared Memory.
In this post, we'll break down memMesh, a demonstration of a multi-agent system where agents collaborate using a shared memory space called memU. We'll explore the architecture, the code, and how memU acts as the "brain" for the entire swarm.
The Vision: A Collective Brain
Imagine a team of engineers. If the Lead Architect leaves, does the team lose all their knowledge? In a typical AI setup, yes—agent context is ephemeral. memMesh solves this by persisting agent outputs into a central, queryable memory store.
Our Agents:
- Research Agent (The Scout): Explores a problem space and proposes approaches.
- Builder Agent (The Architect): Retrieves research and designs a concrete technical solution.
- Reviewer Agent (The Gatekeeper): Checks the design against historical failures stored in memory to prevent regression.
Architecture Overview
The system is built with a Node.js/Express backend that orchestrates the agents and a lightweight, vanilla JS frontend for interaction.
System Diagram
```mermaid
graph TD
    User[User / Frontend] -->|Submit Task| Server[Node.js Server]
    Server -->|Orchestrate| Research[Research Agent]
    Server -->|Orchestrate| Builder[Builder Agent]
    Server -->|Orchestrate| Reviewer[Reviewer Agent]
    subgraph "memU (Shared Memory)"
        MemoryFrom[Memory Store]
    end
    Research -->|Memorize Findings| MemoryFrom
    Builder -->|Retrieve Research| MemoryFrom
    Builder -->|Memorize Design| MemoryFrom
    Reviewer -->|Retrieve History| MemoryFrom
    Reviewer -->|Memorize Review| MemoryFrom
```
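The flow in the diagram can be sketched as a simple sequential pipeline. Note that the agent functions and the in-memory `memu` stub below are illustrative stand-ins, not the repo's actual code: the point is only that each agent reads shared memory, does its step, and writes its output back for the next agent.

```javascript
// Illustrative orchestration loop: a shared memory log plus three agent
// stubs. In memMesh, the Express server plays the role of runPipeline.
const memory = [];
const memu = {
  async memorize(entry) { memory.push(entry); },
  async retrieve(pred) { return memory.filter(pred); },
};

async function researchAgent(task) {
  // The Scout: propose candidate approaches and persist them.
  await memu.memorize({ agent: "research_agent", task, approaches: ["A", "B"] });
}

async function builderAgent(task) {
  // The Architect: build on the Researcher's stored findings.
  const research = await memu.retrieve(m => m.agent === "research_agent");
  await memu.memorize({ agent: "builder_agent", task, design: research[0].approaches[0] });
}

async function reviewerAgent(task) {
  // The Gatekeeper: check the Builder's design before approving.
  const designs = await memu.retrieve(m => m.agent === "builder_agent");
  await memu.memorize({ agent: "reviewer_agent", task, verdict: designs.length ? "approved" : "blocked" });
}

// Run the agents in sequence so each can build on the last one's output.
async function runPipeline(task) {
  for (const agent of [researchAgent, builderAgent, reviewerAgent]) {
    await agent(task);
  }
  return memory;
}
```

Running the pipeline leaves three entries in shared memory: the research, the design, and the review verdict.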
The Core: memU Integration
memU is the glue holding the agents together. In our demo, we mocked
memU within the Express server to demonstrate the pattern of interaction without needing a separate database instance.
1. The Interface (memory/memuClient.js)
The agents don't know (or care) if memU is a local mock or a distributed vector database. They interact via a simple, standardized API:
```javascript
// Storing knowledge
await memu.memorize({
  content: { ... },
  metadata: { agent: "research_agent", decision_type: "approaches" }
});

// Retrieving knowledge
const context = await memu.retrieve({
  queries: ["research findings for scalable search"],
  method: "rag" // Retrieval Augmented Generation
});
```
2. Contextual Retrieval in Action
The Builder Agent (agents/builder.js) is the perfect example of this power. It doesn't just hallucinate a solution; it builds upon the exact findings of the Research Agent.
```javascript
// Builder Agent Logic
const memories = await memu.retrieve({
  queries: ["research findings for " + task],
});

// The prompt effectively says: "Here is what the Researcher found.
// Build something based on THIS."
const prompt = `
  Task: ${task}
  Available Research: ${JSON.stringify(memories)}
  Select the best approach...
`;
3. Learning from Mistakes (The Reviewer)
The Reviewer Agent gets a unique superpower: access to "Historical Failures." We seeded memU with data about past incidents (e.g., "Vector DB latency at scale").
When the Reviewer runs, it queries memory for these specific failure patterns. If the Builder's new design looks like a past failure, the Reviewer rejects it.
```javascript
// Server Mock Logic (server/api.js)
if (isFailureSearch) {
  results = memories.filter(m => m.metadata.decision_type === 'failure');
}
```
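Putting the two pieces together, the Reviewer's check looks roughly like this. The seeded memories and the overlap heuristic below are illustrative assumptions (the repo's mock only shows the `decision_type === 'failure'` filter); a real Reviewer would pass the failures to the LLM as context rather than string-match.

```javascript
// Sketch of the Reviewer's failure check against seeded history.
const memories = [
  { content: "Vector DB latency at scale", metadata: { decision_type: "failure" } },
  { content: "Use hybrid keyword + vector search", metadata: { decision_type: "approaches" } },
];

function reviewDesign(design) {
  // Pull only the historical failures out of shared memory.
  const failures = memories.filter(m => m.metadata.decision_type === "failure");

  // Toy heuristic: reject if the design mentions the leading term of a
  // known failure pattern ("Vector" in "Vector DB latency at scale").
  const hit = failures.find(f =>
    design.toLowerCase().includes(f.content.split(" ")[0].toLowerCase())
  );

  return hit
    ? { approved: false, reason: `Matches past failure: ${hit.content}` }
    : { approved: true };
}
```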
How to Run It
This project is designed to be plug-and-play.
Prerequisites
- Node.js installed
- Ollama running locally (for free, local inference)
- Optional: An Aisa.one API key (for cloud models)
Setup
- Clone the repo
  ```bash
  git clone https://github.com/harishkotra/memmesh.git
  cd memmesh
  ```
- Install dependencies
  ```bash
  npm install
  ```
- Start the server
  ```bash
  npm start
  ```
- Open the UI: navigate to http://localhost:3000
Using the App
- Select Provider: Choose "Ollama" (Local) or "Aisa_Cloud".
- Pick a Model: The dropdown automatically fetches available local models.
- Enter a Task: e.g., "Design a scalable document search system".
- Run Agents: Watch as the Research, Builder, and Reviewer agents spin up, execute their tasks, and share data in real-time.
- Seed History: Click "Seed History" to inject past failure data and see the Reviewer catch "bad" designs!
Key Takeaways
- Shared Memory enables Collaboration: Agents are no longer isolated silos.
- Standardized Interfaces Matter: `memuClient` decouples the agent logic from the storage implementation.
- History Prevents Regression: Explicitly retrieving "past failures" makes agents safer and smarter over time.
memMesh proves that with just a few scripts and a shared "brain," you can orchestrate complex, self-correcting workflows that rival much larger systems.
Check out the GitHub repo here: https://github.com/harishkotra/memMesh/