How many AI conversations did you have this week? 10? 50? 100?
How many can you find right now?
That's the problem. AI coding tools generate enormous amounts of knowledge — architecture decisions, debugging sessions, implementation discussions — and all of it vanishes when you close the terminal.
I built a system that captures every AI conversation automatically. It works with 25+ tools. The entire architecture is a hook, a CLI, and a parser pipeline.
Here's how it works.
## The Problem: Knowledge That Disappears
Every AI coding tool stores conversations differently:
- Claude Code writes JSONL to `~/.claude/projects/`
- Codex writes JSONL to `~/.codex/sessions/`
- Cursor stores data in SQLite
- ChatGPT is accessible only via export
- Copilot Chat logs to VS Code output channels
Some tools give you hooks. Some give you files. Some give you nothing.
I needed one system that could ingest all of them, normalize the data, and make it searchable. Not a viewer for each tool's format — a unified knowledge base.
## Architecture Overview

```
AI Tool (session ends)
  ↓ hook / file watcher / manual push
@nokos/cli (local)
  ↓ reads session file, gzip compresses
Nokos API (cloud)
  ↓ tool-specific parser extracts messages
  ↓ AI generates title + metadata + summary
  ↓ vector embedding for semantic search
PostgreSQL (storage)
  ↓ full-text search + vector search
Nokos UI (web/mobile)
```
Fully automatic for tools that support hooks. One command for everything else.
## Capturing Sessions: Three Patterns

### Pattern 1: Session Hooks (Claude Code, Codex)

Claude Code has a `SessionEnd` hook. When a conversation ends, it fires automatically:
```json
{
  "hooks": {
    "SessionEnd": [{
      "command": "nokos push --from-hook",
      "timeout": 30000
    }]
  }
}
```
The hook pipes session metadata to stdin. The CLI reads the transcript file, compresses it, and sends it to the API. The user does nothing.
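A minimal sketch of the hook-side push, assuming the stdin JSON carries a session ID and a transcript path (the field names and payload shape here are illustrative, not Nokos's actual schema):

```typescript
import { gzipSync } from "node:zlib";

// Shape of the JSON the hook pipes to stdin (illustrative field names).
interface HookEvent {
  session_id: string;
  transcript_path: string;
}

// Compress the transcript and wrap it in the payload the CLI would POST.
function buildPushPayload(event: HookEvent, transcript: string) {
  const gz = gzipSync(Buffer.from(transcript, "utf8"));
  return {
    sessionId: event.session_id,
    source: event.transcript_path,
    encoding: "gzip" as const,
    body: gz.toString("base64"),
  };
}
```

The real CLI reads the transcript from the path on disk and sends the payload to the API; this sketch stops at building it.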
We also hook into `PreCompact`: when Claude Code compacts a long conversation, we capture the intermediate state before it's compressed. This means you don't lose progress during long sessions.
Codex has a similar model: `agent-turn-complete` notifications with the conversation payload. Same CLI, different input format.
As a safety net, `nokos setup` also configures a cron job that runs `nokos watch` every 30 minutes. It detects changed session files and pushes them automatically, catching anything the hooks might miss.
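The watcher's core check is simple. A sketch, assuming the CLI records the timestamp of its last successful push (the names here are mine, not the CLI's internals):

```typescript
interface SessionFile {
  path: string;
  mtimeMs: number; // file modification time, as reported by fs.stat
}

// Return the session files modified since the last successful push,
// i.e. the ones the watcher would re-send.
function changedSince(files: SessionFile[], lastPushMs: number): SessionFile[] {
  return files.filter((f) => f.mtimeMs > lastPushMs);
}
```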
### Pattern 2: Extension APIs (Roo Code, Cline)
For VS Code-based tools, we built an extension that listens for task completion events and file changes, then sends conversations to the API automatically.
### Pattern 3: Manual Push

For tools without hooks:

```shell
nokos push session.jsonl --tool cursor
```
One command. The CLI auto-detects the tool from the file path when possible.
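Auto-detection can be as simple as matching the storage paths listed earlier. A sketch (the mapping and return values are illustrative):

```typescript
// Guess the source tool from where the session file lives.
// Paths mirror the storage locations listed at the top of the article.
function detectTool(filePath: string): string | null {
  if (filePath.includes(".claude/projects/")) return "claude-code";
  if (filePath.includes(".codex/sessions/")) return "codex";
  if (filePath.includes(".aider.chat.history")) return "aider";
  return null; // unknown: fall back to the --tool flag
}
```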
## The Parser Pipeline
This is where it gets interesting. Every AI tool has a different conversation format. Claude Code uses JSONL with content blocks. Codex uses OpenAI-style messages. Copilot Chat uses request/response pairs. The Anthropic format is a JSON array, not JSONL.
We have 7 dedicated parsers and a generic fallback:
| Parser | Tools | Key difference |
|---|---|---|
| Claude Code | Claude Code | `userType: "human"`, content blocks |
| Codex | Codex CLI | `role` field, OpenAI format |
| Anthropic | Roo Code, Cline | JSON array of `MessageParam[]` |
| Copilot Chat | Copilot Chat | request/response pairs |
| Cursor | Cursor | Workspace JSON with threads |
| Aider | Aider | Markdown chat history |
| Gemini CLI | Gemini CLI | Gemini-specific content parts |
| Generic | Everything else | Best-effort: JSON, JSONL, plain text |
Every parser produces the same output: a normalized array of `{role, content}` messages plus token stats and file change lists.
The real challenge is format detection — each tool's JSONL looks similar but has different field names, nesting structures, and content representations. Each parser checks for its tool's fingerprint before parsing. The generic fallback handles anything we haven't seen before, and it covers more cases than you'd expect.
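A sketch of the fingerprint idea for two of the JSONL formats (the checks are simplified; real parsers inspect more than one field):

```typescript
// Does a parsed JSONL line look like Claude Code's format?
function looksLikeClaudeCode(line: unknown): boolean {
  return typeof line === "object" && line !== null && "userType" in line;
}

// Does it look like Codex's OpenAI-style format?
function looksLikeCodex(line: unknown): boolean {
  return typeof line === "object" && line !== null && "role" in line;
}

// Try each fingerprint in order; fall back to the generic parser.
function detectFormat(jsonl: string): "claude-code" | "codex" | "generic" {
  for (const raw of jsonl.split("\n")) {
    if (!raw.trim()) continue;
    let line: unknown;
    try {
      line = JSON.parse(raw);
    } catch {
      return "generic"; // not JSONL at all
    }
    if (looksLikeClaudeCode(line)) return "claude-code";
    if (looksLikeCodex(line)) return "codex";
  }
  return "generic";
}
```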
## AI Summary + Semantic Search
Raw session data is noisy — hundreds of JSONL lines with tool calls, file edits, and internal reasoning. Nobody wants to read that.
The API generates a structured summary (title, category, tags, sentiment) and a vector embedding for each session. This means you can search by meaning, not just keywords. "That session where I fixed the auth bug" matches even if the word "auth" never appears in the transcript.
Sessions and handwritten memos live in the same search index. Ask "What did I work on last week?" and you get both. This also powers Personal AI (RAG) — when you chat with Nokos, it retrieves relevant memos and sessions as context.
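On Postgres, this kind of hybrid retrieval can combine pgvector's distance operator with full-text search. A sketch of the query shape (the table and column names are assumptions, not Nokos's schema):

```typescript
// $1 = query embedding, $2 = query text. `<=>` is pgvector's cosine
// distance operator; ts_doc is an assumed tsvector column.
const HYBRID_SEARCH_SQL = `
  SELECT id, title, 1 - (embedding <=> $1::vector) AS similarity
  FROM sessions
  WHERE ts_doc @@ plainto_tsquery('english', $2)
     OR (embedding <=> $1::vector) < 0.5
  ORDER BY similarity DESC
  LIMIT 10
`;
```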
## The Compaction Problem
Here's a production quirk nobody warns you about: Claude Code fires "Compacting conversation" multiple times per session.
Every compaction triggers the SessionEnd hook. A single coding session can generate 3-5 hook events. Without deduplication, you'd store 5 copies of the same conversation.
Our solution: daily session IDs.
```typescript
const today = new Date().toISOString().slice(0, 10); // "2026-03-22"
const sessionId = `${baseSessionId}_${today}`;
```
Each push upserts by session ID. Multiple pushes in one day update the same record. The next day starts fresh. Simple, and it solved the problem completely.
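The upsert side can be sketched in SQL, assuming a unique constraint on the daily session ID (the schema here is illustrative):

```typescript
// Repeated pushes on the same day hit ON CONFLICT and update one row;
// a new day produces a new session_id and a fresh insert.
const UPSERT_SESSION_SQL = `
  INSERT INTO sessions (session_id, transcript, updated_at)
  VALUES ($1, $2, now())
  ON CONFLICT (session_id)
  DO UPDATE SET transcript = EXCLUDED.transcript, updated_at = now()
`;
```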
## MCP: Closing the Loop
The capture pipeline gets data into Nokos. MCP (Model Context Protocol) lets AI tools search from Nokos.
```
You (in Claude Code): "What was the approach I used for auth last month?"
  ↓ MCP tool call: search_nokos({ query: "auth approach" })
  ↓ Returns relevant memos and sessions
Claude Code: "Based on your session from Feb 15, you implemented..."
```
Three MCP tools: `search_nokos`, `get_nokos`, `save_session`. Available via local MCP (stdio, for Claude Code/Codex) and remote MCP (HTTP, for claude.ai/ChatGPT).
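Conceptually, the MCP server is a dispatch table from tool names to handlers. A toy sketch with stubbed handlers (the real server uses an MCP SDK and calls the Nokos API):

```typescript
type ToolHandler = (args: Record<string, string>) => string;

// The three Nokos tools, stubbed out. Real handlers hit the API.
const tools: Record<string, ToolHandler> = {
  search_nokos: ({ query }) => `results for: ${query}`,
  get_nokos: ({ id }) => `memo ${id}`,
  save_session: () => "saved",
};

// Route an incoming MCP tool call to its handler.
function callTool(name: string, args: Record<string, string>): string {
  const handler = tools[name];
  if (!handler) throw new Error(`unknown tool: ${name}`);
  return handler(args);
}
```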
The loop closes: AI tools generate knowledge → Nokos captures it → AI tools retrieve it. Your AI remembers what you've built.
## Try It

Set up auto-capture in 2 minutes:

```shell
npm install -g @nokos/cli
nokos login
nokos setup   # configures the Claude Code SessionEnd hook
```
That's it. Every Claude Code session is now automatically captured, summarized, and searchable.
nokos.ai — free plan available to try it out.
This is article 4 in my series about building a SaaS with AI. Article 1: Zero Lines of Code. Article 2: AI Cost Split. Article 3: PostgreSQL RLS.
Launching on Product Hunt March 31st — follow for updates!