Harish Kotra (he/him)

Building "memMesh": A Multi-Agent System with Shared Memory (memU)

In the rapidly evolving world of AI agents, a single agent often isn't enough. Complex tasks require specialization: a researcher to find facts, a builder to design solutions, and a reviewer to ensure quality. But how do these agents coordinate? How do they avoid repeating past mistakes?

The answer is Shared Memory.

In this post, we'll break down memMesh, a demonstration of a multi-agent system where agents collaborate using a shared memory space called memU. We'll explore the architecture, the code, and how memU acts as the "brain" for the entire swarm.

Demo example

The Vision: A Collective Brain

Imagine a team of engineers. If the Lead Architect leaves, does the team lose all their knowledge? In a typical AI setup, yes: agent context is ephemeral. memMesh solves this by persisting agent outputs into a central, queryable memory store.

Our Agents:

  1. Research Agent (The Scout): Explores a problem space and proposes approaches.
  2. Builder Agent (The Architect): Retrieves research and designs a concrete technical solution.
  3. Reviewer Agent (The Gatekeeper): Checks the design against historical failures stored in memory to prevent regression.

Architecture Overview

The system is built with a Node.js/Express backend that orchestrates the agents and a lightweight, vanilla JS frontend for interaction.
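At its core, the orchestration is a sequential pipeline: research feeds the builder, and the builder's design feeds the reviewer. A minimal sketch of that flow, with hypothetical `runResearch`/`runBuilder`/`runReviewer` stand-ins for the LLM-backed agents (in memMesh, each of these also reads from and writes to memU):

```javascript
// Hypothetical agent runners; stand-ins for the LLM-backed agents in memMesh.
async function runResearch(task) {
  return { approaches: [`keyword index for ${task}`, `vector search for ${task}`] };
}
async function runBuilder(task, research) {
  // Picks an approach from the research and turns it into a design.
  return { design: `Use ${research.approaches[0]} with caching` };
}
async function runReviewer(task, design) {
  return { verdict: design.design ? "approved" : "rejected" };
}

// The server's task endpoint boils down to this sequential pipeline.
async function orchestrate(task) {
  const research = await runResearch(task);
  const design = await runBuilder(task, research);
  const review = await runReviewer(task, design);
  return { research, design, review };
}
```

Running the agents sequentially (rather than in parallel) is what lets each stage build on the previous one's output.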

System Diagram

graph TD
    User[User / Frontend] -->|Submit Task| Server[Node.js Server]
    Server -->|Orchestrate| Research[Research Agent]
    Server -->|Orchestrate| Builder[Builder Agent]
    Server -->|Orchestrate| Reviewer[Reviewer Agent]

    subgraph "memU (Shared Memory)"
        MemoryStore[Memory Store]
    end

    Research -->|Memorize Findings| MemoryStore
    Builder -->|Retrieve Research| MemoryStore
    Builder -->|Memorize Design| MemoryStore
    Reviewer -->|Retrieve History| MemoryStore
    Reviewer -->|Memorize Review| MemoryStore


The Core: memU Integration

memU is the glue holding the agents together. In our demo, we mocked
memU within the Express server to demonstrate the pattern of interaction without needing a separate database instance.

1. The Interface (memory/memuClient.js)

The agents don't know (or care) if memU is a local mock or a distributed vector database. They interact via a simple, standardized API:

// Storing knowledge
await memu.memorize({
    content: { ... },
    metadata: { agent: "research_agent", decision_type: "approaches" }
});

// Retrieving knowledge
const context = await memu.retrieve({
    queries: ["research findings for scalable search"],
    method: "rag" // Retrieval Augmented Generation
});
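Because the interface is so small, a mock fits in a few lines. Here is a minimal in-memory sketch; everything beyond the `memorize`/`retrieve` names is an assumption for illustration, not the real memU client:

```javascript
// In-memory stand-in for memU (hypothetical; a real deployment would talk
// to a memU server or vector store, and retrieve would use embeddings).
function createMemuMock() {
  const store = [];
  return {
    // Append an entry with its content and metadata.
    async memorize({ content, metadata }) {
      store.push({ content, metadata, storedAt: Date.now() });
    },
    // Naive keyword match over the serialized entry; "rag" here is
    // just a label, not a real retrieval-augmented pipeline.
    async retrieve({ queries, method = "rag" }) {
      const terms = queries.join(" ").toLowerCase().split(/\s+/).filter(Boolean);
      return store.filter((entry) =>
        terms.some((t) => JSON.stringify(entry).toLowerCase().includes(t))
      );
    },
  };
}
```

Swapping this mock for a real memU client later means changing one file, not every agent.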

2. Contextual Retrieval in Action

The Builder Agent (agents/builder.js) is the perfect example of this power. It doesn't just hallucinate a solution; it builds upon the exact findings of the Research Agent.

// Builder Agent Logic
const memories = await memu.retrieve({
    queries: ["research findings for " + task],
});

// The prompt effectively says: "Here is what the Researcher found. Build something based on THIS."
const prompt = `
    Task: ${task}
    Available Research: ${JSON.stringify(memories)}
    Select the best approach...
`;
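After selecting an approach, the Builder writes its design back to memory so the Reviewer can retrieve it. A hedged sketch of that retrieve-then-memorize round trip; the helper name, the `llm` callback, and the `decision_type: "design"` tag are illustrative assumptions mirroring the metadata pattern shown earlier:

```javascript
// Sketch of the Builder's full loop: pull research, generate a design,
// persist the design. `memu` and `llm` are injected for testability.
async function buildAndMemorize(memu, task, llm) {
  const memories = await memu.retrieve({
    queries: ["research findings for " + task],
  });

  // Ask the model for a design grounded in the retrieved research.
  const design = await llm(`
    Task: ${task}
    Available Research: ${JSON.stringify(memories)}
    Select the best approach and produce a concrete design.
  `);

  // Persist the design so the Reviewer can pull it in the next stage.
  await memu.memorize({
    content: { design },
    metadata: { agent: "builder_agent", decision_type: "design" },
  });

  return design;
}
```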

3. Learning from Mistakes (The Reviewer)

The Reviewer Agent gets a unique superpower: access to "Historical Failures." We seeded memU with data about past incidents (e.g., "Vector DB latency at scale").

When the Reviewer runs, it queries memory for these specific failure patterns. If the Builder's new design looks like a past failure, the Reviewer rejects it.

// Server Mock Logic (server/api.js)
if (isFailureSearch) {
    results = memories.filter(m => m.metadata.decision_type === 'failure');
}
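On the agent side, the Reviewer can use those filtered failure records to veto a design. A minimal sketch: the keyword-overlap check below is a simple stand-in for whatever similarity judgment the real Reviewer prompt delegates to the LLM, and the `keywords`/`summary` fields are assumed shapes for the seeded failure records:

```javascript
// Reject a design if it overlaps with a known past failure.
// Hypothetical helper; the real Reviewer makes this call via an LLM prompt.
function reviewAgainstHistory(designText, failures) {
  const text = designText.toLowerCase();
  for (const failure of failures) {
    const hit = failure.keywords.find((k) => text.includes(k));
    if (hit) {
      return {
        approved: false,
        reason: `matches past failure "${failure.summary}" (keyword: ${hit})`,
      };
    }
  }
  return { approved: true, reason: "no known failure patterns detected" };
}
```

The important design choice is that failures are retrieved explicitly, not left to the model's training data, so the swarm's safety net grows with its own history.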

How to Run It

This project is designed to be plug-and-play.

Prerequisites

  • Node.js installed
  • Ollama running locally (for free, local inference)
  • Optional: An Aisa.one API key (for cloud models)

Setup

  1. Clone the repo
git clone https://github.com/harishkotra/memmesh.git
cd memmesh
  2. Install dependencies
npm install
  3. Start the server
npm start
  4. Open the UI: Navigate to http://localhost:3000

Using the App

  1. Select Provider: Choose "Ollama" (Local) or "Aisa_Cloud".
  2. Pick a Model: The dropdown automatically fetches available local models.
  3. Enter a Task: e.g., "Design a scalable document search system".
  4. Run Agents: Watch as the Research, Builder, and Reviewer agents spin up, execute their tasks, and share data in real-time.
  5. Seed History: Click "Seed History" to inject past failure data and see the Reviewer catch "bad" designs!

Key Takeaways

  1. Shared Memory enables Collaboration: Agents are no longer isolated silos.
  2. Standardized Interfaces Matter: memuClient decouples the agent logic from the storage implementation.
  3. History Prevents Regression: Explicitly retrieving "past failures" makes agents safer and smarter over time.

memMesh proves that with just a few scripts and a shared "brain," you can orchestrate complex, self-correcting workflows that rival much larger systems.

Check out the GitHub repo here: https://github.com/harishkotra/memMesh/
