Every AI memory tool I tried had the same problem: they only store facts.
"User likes Python." "User lives in Almaty." Cool. But human memory doesn't work like that. We have three types:
- Semantic — facts and knowledge ("Python is a programming language")
- Episodic — events and experiences ("I spent 3 hours debugging that auth bug last Tuesday")
- Procedural — how to do things ("Deploy: build → upload → push → verify")
I built Mengram to give AI all three. Here's what I learned building it.
## The Problem
I was building AI agents that needed to remember things across sessions. Tried Mem0, tried rolling my own with pgvector. Same issue every time:
My agent could remember that I use Railway for hosting. But it couldn't remember that last Friday's deploy broke because I forgot to run migrations. And it definitely couldn't remember that the correct deploy process is: test → build → push → migrate → verify.
That's three different kinds of memory, and every existing tool only handles the first one.
## The Solution: 3 Memory Types from 1 API Call
Mengram extracts all three types automatically:
```python
from mengram.cloud.client import CloudMemory

m = CloudMemory(api_key="om-...")
m.add([
    {"role": "user", "content": "Fixed the auth bug today. The problem was API key cache TTL was set to 0. My debug process: check Railway logs, reproduce locally, fix and deploy."}
])
```
One call. Mengram's LLM extraction pipeline produces:
- Semantic: "API key cache TTL of 0 caused auth bug"
- Episodic: "Debugged auth bug, root cause was cache TTL, fixed and deployed"
- Procedural: "Debug process: check logs → reproduce locally → fix → deploy"
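To build intuition for why the three types are kept apart, here is a toy in-memory model of the three stores. This is purely illustrative and is not Mengram's internals; the real pipeline uses LLM extraction, embeddings, and hybrid search rather than substring matching:

```python
from collections import defaultdict

class ThreeTypeStore:
    """Toy model of semantic/episodic/procedural stores (not Mengram's internals)."""

    TYPES = ("semantic", "episodic", "procedural")

    def __init__(self):
        self.memories = defaultdict(list)  # memory type -> list of strings

    def add(self, mtype, entry):
        assert mtype in self.TYPES
        self.memories[mtype].append(entry)

    def search(self, query, mtype=None):
        # Naive substring match; the real pipeline uses embeddings + hybrid search.
        types = [mtype] if mtype else self.TYPES
        return [(t, e) for t in types for e in self.memories[t]
                if query.lower() in e.lower()]

store = ThreeTypeStore()
store.add("semantic", "API key cache TTL of 0 caused auth bug")
store.add("episodic", "Debugged auth bug, root cause was cache TTL, fixed and deployed")
store.add("procedural", "Debug process: check logs → reproduce locally → fix → deploy")

print(store.search("cache"))  # hits in both the semantic and episodic stores
```

Keeping the stores separate is what lets a query like "how do I debug auth issues?" pull the procedure, not just the fact.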
## The Killer Feature: Procedural Learning
This is what no competitor has.
Your AI agent completes a multi-step task. Mengram saves the steps as a procedure with success/failure tracking. Next time a similar task comes up, the agent already knows the optimal path.
```
Day 1: Agent figures out deployment
  → test → build → push → migrate → verify
  → Mengram saves as procedure (1 success, 0 failures)

Day 5: Agent deploys again
  → Finds procedure in memory
  → Follows proven path
  → Records success (2 successes, 0 failures)

Day 12: Agent skips tests, deploy breaks
  → Records failure (2 successes, 1 failure)
  → Next time: "This procedure works better with tests first"
```
The AI literally learns from its own experience. Not from fine-tuning, not from few-shot examples — from actual procedural memory.
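The bookkeeping behind that trace is simple. Here is a minimal sketch of a procedure record with outcome tracking; the class and field names are hypothetical, not Mengram's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Procedure:
    """Toy model of a stored procedure with success/failure tracking
    (illustrative, not Mengram's actual schema)."""
    name: str
    steps: list
    successes: int = 0
    failures: int = 0

    def record(self, success: bool):
        if success:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def success_rate(self):
        total = self.successes + self.failures
        return self.successes / total if total else 0.0

deploy = Procedure("deploy", ["test", "build", "push", "migrate", "verify"])
deploy.record(True)   # Day 1
deploy.record(True)   # Day 5
deploy.record(False)  # Day 12: skipped tests, deploy broke
print(f"{deploy.success_rate:.2f}")  # 0.67
```

The success rate is what lets the agent prefer proven procedures over untested variants.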
## Smart Triggers: Memory That Raises Its Hand
Most memory is passive — you ask, it answers. Mengram also has proactive memory:
- Reminders: "You mentioned a meeting with Anya tomorrow at 3pm" → fires webhook 1 hour before
- Contradictions: Memory says "Anya is vegetarian" → you say "order steaks for dinner with Anya" → alert
- Patterns: 3 out of 5 Friday deploys had bugs → "Maybe wait until Monday?"
These fire automatically via webhooks — works with Slack, Discord, OpenClaw, or any endpoint.
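The pattern trigger in particular is just statistics over episodic memory. A hedged sketch of the "Friday deploys" check, with hypothetical logic that is not Mengram's actual trigger implementation:

```python
from datetime import date

def friday_deploy_warning(deploys, threshold=0.5):
    """Warn if most Friday deploys had bugs.
    `deploys` is a list of (date, had_bug) pairs.
    Hypothetical logic, not Mengram's actual trigger code."""
    fridays = [had_bug for d, had_bug in deploys if d.weekday() == 4]
    if len(fridays) >= 3 and sum(fridays) / len(fridays) > threshold:
        return (f"{sum(fridays)} of {len(fridays)} Friday deploys had bugs. "
                "Maybe wait until Monday?")
    return None

history = [
    (date(2024, 5, 3), True),    # Friday, had a bug
    (date(2024, 5, 10), True),   # Friday, had a bug
    (date(2024, 5, 17), False),  # Friday, clean
    (date(2024, 5, 24), True),   # Friday, had a bug
    (date(2024, 5, 31), True),   # Friday, had a bug
]
print(friday_deploy_warning(history))
```

In production the warning would be serialized into the webhook payload rather than printed.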
## Integrations
Mengram works as a memory layer for any AI stack:
- Claude Desktop — MCP server, just add to config
- LangChain — drop-in `MengramMemory` class replacing `ConversationBufferMemory`
- CrewAI — 5 tools including `mengram_save_workflow` for procedural learning
- OpenClaw — skill on ClawHub with bash scripts for all channels
- Any LLM — REST API + Python/JS SDKs
```shell
pip install mengram-ai   # Python
npm install mengram-ai   # JavaScript
```
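For the Claude Desktop integration, the MCP entry goes in `claude_desktop_config.json`. The server command and package name below are hypothetical placeholders, not copied from Mengram's docs — check the project README for the actual values:

```json
{
  "mcpServers": {
    "mengram": {
      "command": "uvx",
      "args": ["mengram-mcp"],
      "env": { "MENGRAM_API_KEY": "om-..." }
    }
  }
}
```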
## Cognitive Profile
One API call generates a system prompt from everything Mengram knows about a user:
```python
profile = m.get_profile()
print(profile["system_prompt"])
```

```
You are talking to Ali, a 22-year-old developer in Almaty building Mengram.
He uses Python, PostgreSQL, and Railway. Recently: debugged pgvector deployment,
researched competitors. Workflows: deploys via build→twine→npm→git.
Communicate in Russian/English, direct style, focus on practical next steps.
```
Insert it into any LLM's system prompt for instant personalization, replacing a separate RAG pipeline for user context.
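Wiring the generated prompt into a chat call is a one-liner. A minimal sketch, using OpenAI-style message lists (which most chat APIs accept); the helper function is mine, not part of the Mengram SDK:

```python
def personalized_messages(profile, user_message):
    """Build a chat payload from Mengram's generated system prompt.
    Works with any API that accepts OpenAI-style message lists."""
    return [
        {"role": "system", "content": profile["system_prompt"]},
        {"role": "user", "content": user_message},
    ]

# Stand-in for the dict returned by m.get_profile()
profile = {"system_prompt": "You are talking to Ali, a developer in Almaty..."}
messages = personalized_messages(profile, "What should I work on today?")
print(messages[0]["role"])  # system
```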
## Architecture
Built on PostgreSQL + pgvector. No separate vector database needed.
```
Your AI Client (Claude, GPT, any LLM)
        │
        ▼
Mengram Cloud API
 ├── LLM Extraction (entities, episodes, procedures)
 ├── Embedding (OpenAI text-embedding-3-large)
 ├── Hybrid Search (vector + full-text + re-ranking)
 ├── Smart Triggers (reminders, contradictions, patterns)
 └── Memory Agents (Curator, Connector, Digest)
        │
        ▼
PostgreSQL + pgvector
 ├── Entities & Facts (semantic)
 ├── Episodes (episodic)
 ├── Procedures (procedural)
 └── Embeddings (1536-dim vectors)
```
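The "hybrid search" layer blends vector similarity with lexical matching before re-ranking. Here is a toy pure-Python version of that scoring idea; in the real stack, pgvector and Postgres full-text search do this work, and the blend weight `alpha` is my illustrative assumption:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def keyword_score(query, text):
    """Fraction of query words that appear in the text (toy full-text stand-in)."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_rank(query, query_vec, docs, alpha=0.7):
    """Rank docs by a weighted blend of vector and keyword scores.
    `docs` is a list of (text, embedding) pairs."""
    scored = [(alpha * cosine(query_vec, vec)
               + (1 - alpha) * keyword_score(query, text), text)
              for text, vec in docs]
    return [text for _, text in sorted(scored, reverse=True)]

docs = [
    ("auth bug cache TTL", [1.0, 0.0, 0.0]),
    ("deploy process steps", [0.0, 1.0, 0.0]),
]
ranked = hybrid_rank("auth bug", [0.9, 0.1, 0.0], docs)
print(ranked[0])  # auth bug cache TTL
```

Blending the two signals catches both paraphrases (vector) and exact identifiers like error codes (lexical), which neither catches alone.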
## What I Learned
1. Extraction is everything. The quality of your memory system depends entirely on how well you extract structured data from conversations. I went through 3 versions of the extraction prompt before it reliably separated facts from events from procedures.
2. Contradiction detection is harder than it sounds. "I'm vegetarian" and "I love steak" — obvious contradiction. "I prefer dark mode" and "I switched to light mode" — is that a contradiction or an update? LLM-based conflict resolution was the answer.
3. Procedural memory is the moat. Every competitor does semantic memory. Some do episodic. Nobody does procedural with success/failure tracking. This is what makes agents genuinely learn from experience.
## Try It
Free tier, no credit card, 60-second setup:
- Sign up at mengram.io
- `pip install mengram-ai`
- Start adding memories
Open source (Apache 2.0): github.com/AiBaizhanov/mengram
API docs: mengram.io/docs
I'd love feedback — especially from anyone building AI agents. What memory challenges are you running into?