ACMI: How I Replaced PostgreSQL, Notion, and LangGraph with 200 Lines of Redis for My AI Agent Team
The pain was real
I manage 10 AI agents. They're good at their jobs. The problem was that none of them could remember what the others had done.
Claude would finish building an auth system. Gemini would try to deploy without knowing what Claude built. My orchestrator (Bentley) would check in and have no context. I'd manually copy-paste context between agents, burning $40/day in API tokens just on re-explaining history.
What I tried (all of it)
PostgreSQL with Prisma (Week 1)
CREATE TABLE agent_sessions (
  id UUID PRIMARY KEY,
  agent_name VARCHAR(50),
  project VARCHAR(100),
  action TEXT,
  created_at TIMESTAMP DEFAULT NOW()
);
Why it failed: Agents don't think in relational models. They need "what happened, in order" — not SELECT * FROM sessions WHERE project = $1 ORDER BY created_at. The JOIN overhead for cross-agent context was 200ms+ for what should be a 10ms read.
Plus, every new thing I wanted to track required a migration. With agents evolving daily, I was spending more time on schema design than on actual work.
Notion (Days 2-4)
API rate limits + paginated results = 500ms to load 50 events. Agents need sub-50ms context loading. Notion is built for humans browsing pages, not agents loading context windows.
LangGraph checkpoint-redis (Investigated)
Great for single-agent state persistence. But LangGraph tracks one agent's position in a graph — not a unified timeline across multiple agents. I'd need to build an aggregation layer on top. At that point, I'm building ACMI anyway.
Mem0 (Investigated)
Closest to what I wanted, but focused on one agent remembering things about one user. I needed many agents sharing chronological context about many things.
A Google Doc (Don't ask)
It was 2 AM. It didn't work. But it crystallized the insight: I needed a fast, ordered, append-only log.
The aha moment
Agents need timelines, not databases.
Redis Sorted Sets ARE timelines.
Sorted Sets store members with numeric scores. Timestamps as scores = automatic chronological ordering. ZRANGE key -50 -1 = last 50 events, already sorted. No ORDER BY, no index, no sorting in code.
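To see why this works, here's a tiny in-memory sketch of the idea — plain JS, no Redis, all names mine. Real Redis keeps the set sorted on insert; this toy version sorts on read, but the payoff is the same: timestamp scores give you chronology for free.

```javascript
// Toy stand-in for a sorted set, for illustration only.
const zset = [];

function zadd(score, member) {
  zset.push({ score, member });
}

// Rough equivalent of ZRANGE key -limit -1: last `limit` members, ascending.
function zrangeTail(limit) {
  return [...zset]
    .sort((a, b) => a.score - b.score)
    .slice(-limit)
    .map(e => e.member);
}

zadd(1700000003000, 'deployed');
zadd(1700000001000, 'built auth');
zadd(1700000002000, 'ran tests');

console.log(zrangeTail(2)); // → ['ran tests', 'deployed']
```

Insertion order doesn't matter; the score does. That's the whole trick.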
How ACMI evolved (3 versions)
v1: "Upstash Brain" — hardcoded for sales
// Only worked for one use case
async function logDealEvent(clientId, source, event) {
  await redis.zadd(`brain:client:${clientId}:timeline`, {
    score: Date.now(),
    member: JSON.stringify({ source, event })
  });
}
Worked for CRM. Failed for projects, agents, content pipelines.
v2: PostgreSQL — the over-engineering detour
Spent a week building proper schema with Prisma, migrations, REST API. It was solving a problem I didn't have. Agents need flat JSON snapshots, not normalized relations.
v3: ACMI — the generalized version
Ripped out Postgres, went back to Redis, made it universal with namespaces:
// Works for EVERYTHING
async function event(namespace, id, source, summary) {
  const key = `acmi:${namespace}:${id}:timeline`;
  await redis.zadd(key, {
    score: Date.now(),
    member: JSON.stringify({ ts: Date.now(), source, summary })
  });
}
Same function handles agents, projects, clients, fleets, support tickets — anything with state and history.
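The key scheme is what carries the generalization. A hypothetical helper (the `acmi:` prefix and `:timeline` suffix are from the article; the helper itself is mine) makes the pattern explicit:

```javascript
// Hypothetical helper: the namespaced key pattern used throughout ACMI.
function timelineKey(namespace, id) {
  return `acmi:${namespace}:${id}:timeline`;
}

console.log(timelineKey('agent', 'bentley'));  // → 'acmi:agent:bentley:timeline'
console.log(timelineKey('project', 'my-app')); // → 'acmi:project:my-app:timeline'
```

Swap the namespace and the same read/write code serves a completely different domain.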
The complete client (~200 lines)
import { Redis } from '@upstash/redis';

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL,
  token: process.env.UPSTASH_REDIS_REST_TOKEN,
});

// ─── PROFILE: Current hard state ───
export async function profile(namespace, id, data) {
  return redis.set(
    `acmi:${namespace}:${id}:profile`,
    JSON.stringify(data)
  );
}

// ─── EVENT: The workhorse ───
export async function event(namespace, id, source, summary) {
  const key = `acmi:${namespace}:${id}:timeline`;
  const ts = Date.now();
  return redis.zadd(key, {
    score: ts,
    member: JSON.stringify({ ts, source, summary })
  });
}

// ─── SIGNAL: AI-synthesized insights ───
export async function signal(namespace, id, data) {
  return redis.set(
    `acmi:${namespace}:${id}:signals`,
    JSON.stringify(data)
  );
}

// ─── GET: Full context load (what agents read before acting) ───
export async function get(namespace, id, limit = 50) {
  const [profile, signals, timeline] = await Promise.all([
    redis.get(`acmi:${namespace}:${id}:profile`),
    redis.get(`acmi:${namespace}:${id}:signals`),
    redis.zrange(`acmi:${namespace}:${id}:timeline`, -limit, -1),
  ]);
  return {
    namespace,
    id,
    profile: typeof profile === 'string' ? JSON.parse(profile) : profile,
    signals: typeof signals === 'string' ? JSON.parse(signals) : signals,
    timeline: (timeline || []).map(e =>
      typeof e === 'string' ? JSON.parse(e) : e
    ),
  };
}
// ─── LIST: All entities in a namespace ───
export async function list(namespace) {
  // NOTE: KEYS scans the whole keyspace (O(N)) — fine at this scale,
  // but prefer SCAN if the database grows large
  const keys = await redis.keys(`acmi:${namespace}:*:profile`);
  return keys.map(k => k.split(':')[2]); // acmi:<ns>:<id>:profile → <id>
}
// ─── DELETE: Remove entity ───
export async function del(namespace, id) {
  const keys = [
    `acmi:${namespace}:${id}:profile`,
    `acmi:${namespace}:${id}:signals`,
    `acmi:${namespace}:${id}:timeline`,
  ];
  return Promise.all(keys.map(k => redis.del(k)));
}
That's the entire persistence layer. No ORM. No migrations. No schema files.
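If you want to poke at the API without a live database, you can drive the same function shapes with a tiny in-memory stub. This is my sketch, not part of ACMI — the stub fakes just enough of the Upstash client surface (`zadd`, `zrange`) for a trimmed-down `event`/`get` pair:

```javascript
// In-memory stub mimicking the @upstash/redis calls the client uses.
const store = { zsets: new Map() };
const redis = {
  async zadd(key, { score, member }) {
    const z = store.zsets.get(key) ?? [];
    z.push({ score, member });
    store.zsets.set(key, z);
    return 1;
  },
  async zrange(key, start, stop) {
    const z = [...(store.zsets.get(key) ?? [])].sort((a, b) => a.score - b.score);
    const end = stop === -1 ? undefined : stop + 1;
    return z.slice(start, end).map(e => e.member);
  },
};

// Same shapes as the ACMI client, trimmed to the timeline for the demo.
async function event(ns, id, source, summary) {
  const ts = Date.now();
  return redis.zadd(`acmi:${ns}:${id}:timeline`, {
    score: ts,
    member: JSON.stringify({ ts, source, summary }),
  });
}

async function get(ns, id, limit = 50) {
  const timeline = await redis.zrange(`acmi:${ns}:${id}:timeline`, -limit, -1);
  return { timeline: timeline.map(e => JSON.parse(e)) };
}

(async () => {
  await event('project', 'demo', 'claude', 'Built auth');
  await event('project', 'demo', 'gemini', 'Deployed');
  const ctx = await get('project', 'demo');
  console.log(ctx.timeline.map(e => e.summary)); // → ['Built auth', 'Deployed']
})();
```

Swap the stub for the real Upstash client and the calling code doesn't change.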
Real timeline from today
This is the actual ACMI output from acmi get agent bentley on April 22, 2026:
{
  "profile": {
    "name": "Bentley",
    "role": "Lead Orchestrator & Principal Strategy Agent",
    "expertise": [
      "Agent Orchestration (ACMI)",
      "Next.js/Neon Architecture",
      "Revenue Systems & Sales Operations",
      "Context Compaction & Timeline Management"
    ]
  },
  "timeline": [
    {
      "ts": 1776876538267,
      "source": "gemini-cli",
      "summary": "Local OpenClaw gateway back online. Antigravity sessions routing through primary loopback gateway."
    },
    {
      "ts": 1776874765880,
      "source": "claude-engineer",
      "summary": "[done] Concurrency test — cleanup_after_concurrency_test"
    },
    {
      "ts": 1776871399123,
      "source": "gemini-cli",
      "summary": "Briefing: Gemini 2.0 Flash proposed as primary spillover tier for all Cron/Batch jobs."
    },
    {
      "ts": 1776871150759,
      "source": "claude-engineer",
      "summary": "Standing up as autonomous daily-driver. Phase 1: 3 cloud RemoteTriggers + inbox keyspace."
    },
    {
      "ts": 1776869737417,
      "source": "claude-engineer",
      "summary": "NEW ARCHITECTURE APPROVED — Claude autonomous daily-driver role."
    }
  ]
}
When any agent reads this, they see:
- Gemini just brought the gateway back online ✅
- Claude passed its concurrency test ✅
- Gemini proposed a new routing tier 🆕
- Claude is standing up as an autonomous daily driver 🆕
- The architecture was approved ✅
No duplicate work. No conflicting actions. No "wait, what did the other agent do?"
My actual agent roster
10 agents, all coordinated through ACMI:
| Agent | Role | Timeline Events |
|---|---|---|
| Bentley | Lead Orchestrator | 19 |
| Claude Engineer | Code + Infra | 130+ |
| Gemini CLI | Cloud + Ops | Active |
| Gene | Social Media (Elestio) | 7 |
| Director | Strategy | 1 |
| Artist Factory | Content Generation | 3 |
| Outreach Specialist | Cold Email | Active |
| Antigravity | IDE Tasks | New |
| Codex | Parked | 130 (stale) |
| Invitation Schema | Onboarding Template | — |
Total system: 1,004 timeline events, 60 projects indexed.
Agent coordination pattern
Here's the loop every agent follows:
1. READ — Load timeline before acting
const ctx = await acmi.get('project', 'ez-influencer-360');
2. DECIDE — Based on current state
if (ctx.timeline.some(e => e.summary.includes('Phase 6.4'))) {
  // Skip to next phase, don't duplicate
}
3. ACT — Do the work
await buildPhase6Point4();
4. WRITE — Log what happened
await acmi.event('project', 'ez-influencer-360', 'claude-engineer', 'Built Phase 6.4');
5. UPDATE SIGNALS — Synthesize insights
await acmi.signal('project', 'ez-influencer-360', {
  progress: '75%',
  blockers: 'none',
  nextPhase: '6.5'
});
No agent ever starts from zero. No agent ever duplicates work.
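The DECIDE step is just a scan over parsed timeline events, so it's easy to factor out. A small hypothetical helper (the name and shape are mine, not part of the ACMI client, though the `{ ts, source, summary }` entry shape matches it) makes the dedup check reusable:

```javascript
// Hypothetical helper: has any agent already logged this milestone?
// Expects timeline entries shaped like { ts, source, summary }.
function alreadyLogged(timeline, phrase) {
  return timeline.some(e => e.summary.includes(phrase));
}

const timeline = [
  { ts: 1, source: 'claude-engineer', summary: 'Built Phase 6.4' },
  { ts: 2, source: 'gemini-cli', summary: 'Deployed preview' },
];

console.log(alreadyLogged(timeline, 'Phase 6.4')); // → true
console.log(alreadyLogged(timeline, 'Phase 6.5')); // → false
```

Substring matching is crude — a structured `phase` field in the event payload would make this check exact — but it's enough to stop two agents from building the same thing.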
Benchmarks (measured, not estimated)
| Metric | Before ACMI | After ACMI |
|---|---|---|
| Context tokens per prompt | ~2,400 (40%) | ~900 (15%) |
| Daily API spend (context only) | $40-60 | $15-20 |
| Agent spin-up time | 60+ seconds | <1 second |
| Duplicate work rate | 12-15% | <3% |
| Read latency | N/A (manual) | <10ms |
| Monthly context cost | ~$1,500 | ~$500 |
Why not just use Mem0 / LangGraph / CrewAI?
Honest comparison — I tried or evaluated all of them:
| | ACMI | Mem0 | LangGraph | CrewAI |
|---|---|---|---|---|
| Purpose | Multi-agent shared memory | Single-agent user memory | Agent workflow graphs | Agent task orchestration |
| Setup | 5 min | 30 min | 1+ hour | 30 min |
| Chronological model | ✅ Native (ZSET) | ❌ | Partial | ❌ |
| Multi-agent | ✅ Built-in | ❌ | Via graph state | Via task results |
| Dependencies | Redis only | Vector DB + API | Python + Redis + deps | Python + deps |
| Lines of code | ~200 | SDK integration | Full graph definition | Agent + task defs |
| Language | Node.js | Python/JS | Python | Python |
None of them are bad. They solve different problems. ACMI specifically targets the "I have multiple agents and they need to share a chronological brain" problem.
Mistakes I made
Global timeline (v1): A single acmi:global:timeline mixed everything. Couldn't filter by project or agent. Fixed with namespace-based keys.
PostgreSQL (v2): Over-engineered. Agents don't need relational models or migrations. They need fast JSON.
Signals came late: Didn't add AI-synthesized insights until a month in. Now I realize they're as important as the raw events — "churn risk: low" saves every agent from re-analyzing the last 50 events.
Almost didn't generalize: v1 was "Upstash Brain," hardcoded for sales. If I hadn't made it namespace-based, it would've been useless for everything else.
Getting started
# 1. Free Upstash Redis database (no credit card)
# → https://console.upstash.com/redis
# 2. Environment variables
export UPSTASH_REDIS_REST_URL="https://xxx.upstash.io"
export UPSTASH_REDIS_REST_TOKEN="your-token"
# 3. Install
git clone https://github.com/madezmedia/acmi
cd acmi && npm install @upstash/redis
# 4. Use
node acmi.mjs event "project" "my-app" "my-agent" "Started feature X"
node acmi.mjs profile "project" "my-app" '{"name": "My App", "status": "active"}'
node acmi.mjs get "project" "my-app"
What's next
- Python and Go SDKs
- Visual timeline explorer for debugging
- Webhook ingestion (auto-log from GitHub, Slack, etc.)
- Vector search for semantic timeline queries
The bottom line
Multi-agent systems need shared memory. Not per-agent memory. Not user-preference memory. Shared, chronological, multi-agent memory.
ACMI is the simplest implementation I could find: Redis Sorted Sets + 200 lines of JS + 5-minute setup.
If you're running agent teams and dealing with context fragmentation, give it a try. And tell me what you'd change — I want this to be useful for more than just my setup.
This is Day 1 of a week-long ACMI series. Tomorrow: Redis Sorted Sets deep dive.
I'm Michael Shaw. I build AI agent teams at Mad EZ Media. Find me on Twitter or GitHub.