Want your AI agents to remember past conversations and learn from user interactions? Here's how I integrated @mem0ai into my AI agent system to give them persistent, intelligent memory.
The result? Agents that truly understand context and personalize experiences!
The Problem: Most AI agents start fresh every conversation. They can't remember user preferences, past decisions, or successful strategies.
The Solution: Mem0's AI-powered memory layer means agents that learn, adapt, and provide personalized experiences across sessions.
Why Mem0? According to their research paper:
- 26% higher accuracy than OpenAI Memory
- 91% lower latency
- 90% token savings
Plus full TypeScript support and enterprise-grade features.
Setup is surprisingly simple. First, install the package:
npm install mem0ai
# or for Python
pip install mem0ai
Get your API key from the Mem0 Platform and you're ready to go.
The magic starts with configuration. Here's my production setup using Supabase + OpenAI:
const memoryConfig = {
  embedder: {
    provider: 'openai',
    config: {
      model: 'text-embedding-3-small',
    }
  },
  historyStore: {
    provider: 'supabase',
    config: {
      tableName: 'memory_history'
    }
  }
}
Creating a MemoryService class for seamless integration:
export class MemoryService {
  private memory: Memory;
  async addMemories(messages, userId, metadata) {
    return await this.memory.add(messages, {
      userId,
      metadata: { timestamp: new Date().toISOString(), ...metadata }
    });
  }
}
Clean, modular, production-ready.
Function calling integration makes memory operations automatic:
{
  name: "addMemory",
  description: "Store conversation memories",
  parameters: {
    type: "object",
    properties: {
      messages: { type: "array" },
      metadata: { type: "object" }
    }
  }
}
The AI decides when to save memories naturally.
The AI agent conversation flow with memory:
- User: "I'm vegetarian and allergic to nuts"
- AI calls addMemory() to store the preferences
- Next session, the user asks: "What should I eat?"
- AI calls searchMemories("food preferences") and gets personalized recommendations
Context persistence = magic!
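To make the two-session flow above concrete, here is a minimal sketch that swaps Mem0 for a hypothetical in-memory store with naive keyword matching. The real memoryService calls are async, hit the Mem0 API, and use semantic search; this only illustrates the persistence idea.

```typescript
// FakeMemoryStore is an illustration only; it stands in for Mem0.
type MemoryRecord = { id: number; userId: string; text: string };

class FakeMemoryStore {
  private records: MemoryRecord[] = [];
  private nextId = 1;

  // Stands in for memoryService.addMemories()
  add(userId: string, text: string): MemoryRecord {
    const record = { id: this.nextId++, userId, text };
    this.records.push(record);
    return record;
  }

  // Stands in for memoryService.searchMemories(); keyword match
  // here, semantic search in the real thing.
  search(userId: string, query: string): MemoryRecord[] {
    const terms = query.toLowerCase().split(/\s+/);
    return this.records.filter(
      (r) =>
        r.userId === userId &&
        terms.some((t) => r.text.toLowerCase().includes(t))
    );
  }
}

// Session 1: the agent stores the stated preference
const store = new FakeMemoryStore();
store.add("user-1", "User is vegetarian and allergic to nuts");

// Session 2: "What should I eat?" triggers a memory search
const hits = store.search("user-1", "vegetarian food preferences");
```

The same shape carries over to Mem0: add on one turn, search on a later one, with the user ID keeping each person's memories separate.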
Memory search is incredibly powerful:
// Search relevant memories
await memoryService.searchMemories(query, userId, limit)
// Get contextual memories
await memoryService.getRelevantMemories(
  currentMessage, 
  userId, 
  conversationHistory
)
Semantic search finds exactly what's needed.
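Once the search comes back, the hits have to reach the model somehow. A sketch of how I fold retrieved memories into the system prompt before the next model call; the { id, memory } shape mirrors what Mem0 returns, but treat it as an assumption:

```typescript
// Assumed hit shape; check your SDK version's actual return type.
type MemoryHit = { id: string; memory: string };

// Prepend retrieved memories to the system prompt as a numbered list
function buildSystemPrompt(basePrompt: string, memories: MemoryHit[]): string {
  if (memories.length === 0) return basePrompt;
  const memoryBlock = memories
    .map((m, i) => `${i + 1}. ${m.memory}`)
    .join("\n");
  return `${basePrompt}\n\nRelevant user memories:\n${memoryBlock}`;
}

const prompt = buildSystemPrompt("You are a helpful assistant.", [
  { id: "m1", memory: "User is vegetarian" },
  { id: "m2", memory: "User is allergic to nuts" },
]);
```

Keeping the injection in one helper makes it easy to cap the number of memories and keep token usage predictable.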
Real-world memory operations I implemented:
case "searchMemories":
  const res = await memoryService.searchMemories(
    args.query, userId, args.limit || 5
  );
  // memory.search may return { results: [...] }, so normalize before counting
  const memories = Array.isArray(res) ? res : res.results ?? [];
  return JSON.stringify({ memories, count: memories.length });
case "addMemory":
  await memoryService.addMemories(
    args.messages, userId, args.metadata
  );
  return JSON.stringify({ success: true });
Advanced features that make the difference:
- User & Agent Memory: separate memory spaces for users and AI agents
- Session Memory: short-term context for conversations
- Metadata Filtering: precise memory retrieval with structured data
- Memory History: track how memories evolve over time
- Batch Operations: update/delete multiple memories efficiently
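Metadata filtering is the feature I lean on most. Mem0 lets you attach metadata when adding memories and filter on it at retrieval time; here's a local sketch of the filtering step so the idea is concrete (the shapes are assumptions, not Mem0's exact types):

```typescript
// Assumed memory shape for illustration
type StoredMemory = {
  id: string;
  memory: string;
  metadata?: Record<string, unknown>;
};

// Keep only memories whose metadata matches every filter key/value
function filterByMetadata(
  memories: StoredMemory[],
  filters: Record<string, unknown>
): StoredMemory[] {
  return memories.filter((m) =>
    Object.entries(filters).every(([k, v]) => m.metadata?.[k] === v)
  );
}

const all: StoredMemory[] = [
  { id: "1", memory: "Prefers morning workouts", metadata: { category: "preferences" } },
  { id: "2", memory: "Finished JS basics", metadata: { category: "learning" } },
];
const prefs = filterByMetadata(all, { category: "preferences" });
```

In production the filter runs server-side in Mem0; the payoff is the same: a `category` or `session` tag narrows retrieval without changing the search query.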
Error handling for production reliability:
try {
  const result = await memoryService.addMemories(messages, userId);
  return { success: true, memoryId: result };
} catch (error) {
  console.error('Memory operation failed:', error);
  return { error: "Failed to add memory", success: false };
}
Always degrade gracefully.
A real example from my Wei AI agent:
User: "I want to build a morning routine"
Agent: Stores preference + suggests habits
User (next week): "How's my routine going?"
Agent: Retrieves memory + provides personalized progress report with specific habits mentioned before
Continuity = engagement!
Performance benefits I'm seeing:
- Sub-100ms memory operations
- Intelligent memory consolidation
- 90% reduction in context tokens
- Highly relevant memory retrieval
- Works seamlessly with streaming
- Scales with user growth
The complete integration took about 3 hours:
- 45 min: Mem0 service setup
- 60 min: function calling integration
- 45 min: error handling & testing
- 30 min: production optimizations
Most time was spent on polish, not core functionality.
Key takeaways:
- Memory transforms AI agents from tools to companions
- Mem0's API is developer-friendly and performant
- Function calling makes memory operations seamless
- Context persistence dramatically improves UX
- Memory is essential for any serious AI application
Try building with memory!
Technical Implementation Deep Dive
Complete Memory Service Implementation
import { Memory } from 'mem0ai/oss';
const memoryConfig = {
  version: 'v1.1',
  embedder: {
    provider: 'openai',
    config: {
      apiKey: process.env.OPENAI_API_KEY,
      model: 'text-embedding-3-small',
    },
  },
  vectorStore: {
    provider: 'memory',
    config: {
      collectionName: 'memories',
      dimension: 1536,
    },
  },
  llm: {
    provider: 'openai',
    config: {
      apiKey: process.env.OPENAI_API_KEY,
      model: 'gpt-4o-mini',
      temperature: 0.2,
      maxTokens: 1500,
    },
  },
  historyStore: {
    provider: 'supabase',
    config: {
      supabaseUrl: process.env.NEXT_PUBLIC_SUPABASE_URL,
      supabaseKey: process.env.SUPABASE_SERVICE_ROLE,
      tableName: 'memory_history',
    },
  },
};
export class MemoryService {
  private memory: Memory;
  constructor() {
    this.memory = new Memory(memoryConfig);
  }
  async addMemories(messages, userId, metadata) {
    try {
      const result = await this.memory.add(messages, {
        userId,
        metadata: {
          timestamp: new Date().toISOString(),
          ...metadata,
        },
      });
      return result;
    } catch (error) {
      console.error('Error adding memories:', error);
      throw error;
    }
  }
  async searchMemories(query, userId, limit = 5) {
    try {
      const result = await this.memory.search(query, { userId, limit });
      return result;
    } catch (error) {
      console.error('Error searching memories:', error);
      throw error;
    }
  }
  async getRelevantMemories(currentMessage, userId, conversationHistory = [], limit = 3) {
    try {
      // Search for memories related to current message
      const relevantMemories = await this.searchMemories(currentMessage, userId, limit);
      // Also search based on conversation context
      if (conversationHistory.length > 0) {
        const recentMessages = conversationHistory.slice(-3);
        const contextQuery = recentMessages.map(msg => msg.content).join(' ');
        const contextMemories = await this.searchMemories(contextQuery, userId, limit);
        // Combine and deduplicate
        const allMemories = [...relevantMemories.results, ...contextMemories.results];
        const uniqueMemories = allMemories.filter(
          (memory, index, self) => index === self.findIndex(m => m.id === memory.id)
        );
        return uniqueMemories.slice(0, limit);
      }
      return relevantMemories.results;
    } catch (error) {
      console.error('Error getting relevant memories:', error);
      return [];
    }
  }
}
API Route Integration
// Memory function definitions for AI agents
const memoryFunctions = [
  {
    name: "searchMemories",
    description: "Search through user memories for relevant information",
    parameters: {
      type: "object",
      properties: {
        query: { type: "string", description: "Search query" },
        limit: { type: "number", description: "Max results (default: 5)" }
      },
      required: ["query"]
    }
  },
  {
    name: "addMemory",
    description: "Store new memories from conversation",
    parameters: {
      type: "object",
      properties: {
        messages: {
          type: "array",
          items: {
            type: "object",
            properties: {
              role: { type: "string" },
              content: { type: "string" }
            }
          }
        },
        metadata: { type: "object" }
      },
      required: ["messages"]
    }
  },
  {
    name: "getRelevantMemories",
    description: "Get contextually relevant memories",
    parameters: {
      type: "object",
      properties: {
        currentMessage: { type: "string" },
        conversationHistory: { type: "array" },
        limit: { type: "number" }
      },
      required: ["currentMessage"]
    }
  }
];
// Function call handler
async function handleMemoryFunction(name, args, userId) {
  switch (name) {
    case "searchMemories":
      try {
        const res = await memoryService.searchMemories(
          args.query, userId, args.limit || 5
        );
        // memory.search may return { results: [...] }, so normalize defensively
        const memories = Array.isArray(res) ? res : res.results ?? [];
        return JSON.stringify({
          memories,
          count: memories.length
        });
      } catch (error) {
        return JSON.stringify({
          error: "Failed to search memories",
          memories: []
        });
      }
    case "addMemory":
      try {
        const result = await memoryService.addMemories(
          args.messages, userId, args.metadata
        );
        return JSON.stringify({
          success: true,
          message: "Memory added successfully",
          memoryId: result
        });
      } catch (error) {
        return JSON.stringify({
          error: "Failed to add memory",
          success: false
        });
      }
    case "getRelevantMemories":
      try {
        const relevantMemories = await memoryService.getRelevantMemories(
          args.currentMessage,
          userId,
          args.conversationHistory || [],
          args.limit || 3
        );
        return JSON.stringify({
          memories: relevantMemories,
          count: Array.isArray(relevantMemories) ? relevantMemories.length : 0
        });
      } catch (error) {
        return JSON.stringify({
          error: "Failed to get relevant memories",
          memories: []
        });
      }
  }
}
Supabase Database Schema
-- Memory history table for Mem0
CREATE TABLE memory_history (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  user_id UUID REFERENCES auth.users(id),
  memory_id TEXT NOT NULL,
  operation TEXT NOT NULL,
  data JSONB,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
  updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
-- Enable RLS
ALTER TABLE memory_history ENABLE ROW LEVEL SECURITY;
-- RLS Policies
CREATE POLICY "Users can view own memory history" ON memory_history
  FOR SELECT USING (auth.uid() = user_id);
CREATE POLICY "Users can insert own memory history" ON memory_history
  FOR INSERT WITH CHECK (auth.uid() = user_id);
-- Indexes for performance
CREATE INDEX memory_history_user_id_idx ON memory_history(user_id);
CREATE INDEX memory_history_memory_id_idx ON memory_history(memory_id);
CREATE INDEX memory_history_created_at_idx ON memory_history(created_at);
Production Use Cases
1. Personal AI Assistant
// Store user preferences
messages = [
  { role: "user", content: "I prefer morning workouts and vegetarian meals" }
];
await memoryService.addMemories(messages, userId, { category: "preferences" });
// Later retrieval
const preferences = await memoryService.searchMemories("user preferences", userId);
2. Learning Agent
// Track user progress
messages = [
  { role: "user", content: "I completed the JavaScript basics course" },
  { role: "assistant", content: "Great! Ready for React next?" }
];
await memoryService.addMemories(messages, userId, { category: "learning" });
3. Customer Support Agent
// Remember past issues
const relevantIssues = await memoryService.searchMemories(
  "billing problems", userId, 3
);
// Provide context-aware support
Performance Optimizations
Memory Consolidation
- Automatically merge similar memories
- Remove outdated information
- Maintain memory relevance scores
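Mem0 handles consolidation with an LLM, but the core "merge similar memories" step can be sketched locally with token-overlap (Jaccard) similarity; the threshold and similarity measure here are illustrative choices, not Mem0's:

```typescript
// Jaccard similarity of the word sets of two memory strings
function jaccard(a: string, b: string): number {
  const ta = new Set(a.toLowerCase().split(/\s+/));
  const tb = new Set(b.toLowerCase().split(/\s+/));
  const inter = [...ta].filter((t) => tb.has(t)).length;
  const union = new Set([...ta, ...tb]).size;
  return union === 0 ? 0 : inter / union;
}

// Drop any memory that is near-duplicate of one we already kept
function consolidate(memories: string[], threshold = 0.6): string[] {
  const kept: string[] = [];
  for (const m of memories) {
    if (!kept.some((k) => jaccard(k, m) >= threshold)) kept.push(m);
  }
  return kept;
}

const merged = consolidate([
  "User prefers morning workouts",
  "User prefers morning workouts daily",
  "User is vegetarian",
]);
```

An LLM-based consolidator can also rewrite the surviving memory to include details from the duplicates, which simple dedup can't do.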
 
Caching Strategy
// Cache frequently accessed memories
const memoryCache = new Map();
async function getCachedMemories(userId, query) {
  const cacheKey = `${userId}:${query}`;
  if (memoryCache.has(cacheKey)) {
    return memoryCache.get(cacheKey);
  }
  const memories = await memoryService.searchMemories(query, userId);
  memoryCache.set(cacheKey, memories);
  return memories;
}
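One caveat with the Map cache above: entries never expire, so a user who adds a memory can keep getting stale search results. A small TTL wrapper (a sketch, with an injectable clock so it's testable without waiting) keeps results fresh without a cache library:

```typescript
// Minimal TTL cache; the clock is injectable for testing
class TtlCache<V> {
  private entries = new Map<string, { value: V; storedAt: number }>();
  constructor(
    private ttlMs: number,
    private now: () => number = () => Date.now()
  ) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (this.now() - entry.storedAt > this.ttlMs) {
      this.entries.delete(key); // expired: force a refetch
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.entries.set(key, { value, storedAt: this.now() });
  }

  // Call after addMemories()/updates so reads don't serve stale data
  invalidate(userPrefix: string): void {
    for (const key of this.entries.keys()) {
      if (key.startsWith(userPrefix)) this.entries.delete(key);
    }
  }
}

let t = 0;
const cache = new TtlCache<string[]>(60_000, () => t);
cache.set("user-1:prefs", ["vegetarian"]);
const fresh = cache.get("user-1:prefs");
t = 61_000;
const stale = cache.get("user-1:prefs");
```

Invalidating by `${userId}:` prefix on every write is the simplest correctness guarantee; a short TTL then bounds staleness for anything you miss.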
Batch Operations
// Batch update memories for efficiency
const updates = [
  { memory_id: "123", text: "Updated preference" },
  { memory_id: "456", text: "New information" }
];
await memoryService.batchUpdate(updates);
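Note that batchUpdate() here is a helper on my MemoryService, not a Mem0 SDK call. A sketch of how such a helper could chunk per-memory updates, with the single-update function injected so the pattern is shown without touching the API:

```typescript
type MemoryUpdate = { memory_id: string; text: string };

// Apply updates in bounded-concurrency chunks to avoid hammering the API.
// updateOne is whatever performs a single memory update in your service.
async function batchUpdate(
  updates: MemoryUpdate[],
  updateOne: (id: string, text: string) => Promise<void>,
  concurrency = 5
): Promise<void> {
  for (let i = 0; i < updates.length; i += concurrency) {
    const chunk = updates.slice(i, i + concurrency);
    await Promise.all(chunk.map((u) => updateOne(u.memory_id, u.text)));
  }
}
```

The chunk size trades throughput against rate limits; per-item error collection (instead of failing the whole Promise.all) is a natural next step.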
This article demonstrates a production-ready Mem0 integration in an AI agent system. The code examples come from the actual Wei AI Agent project and show real-world implementation patterns.