DEV Community

Jeff Reese

Posted on • Originally published at purecontext.dev

Your Conversations Are Not Gone

I had a conversation with Claude last week that I did not want to lose.

We were planning a major overhaul to how my AI collaborator handles session continuity — memory, checkpoints, state transfer between sessions. It was a 45-message back-and-forth where we brainstormed, debated trade-offs, rejected approaches, and landed on an architecture. The kind of conversation where the reasoning matters as much as the result.

Then I cleared the context and moved on. The plan was captured in a handoff document, but when the next session tried to implement it, things went sideways. The handoff had the decisions but not the reasoning. It had the "what" but not the "why not." The next version of Claude could not make the same judgment calls because it did not have the same context.

This morning I learned something: those conversations are not actually gone.

Claude Code saves everything

Every Claude Code session is automatically saved as a JSONL transcript file on your machine. No configuration required, no extra cost, no additional API usage. It just happens.

The files live at ~/.claude/projects/<project-hash>/<session-id>.jsonl and contain every message, tool call, and result from the session. They stick around for a retention period — 30 days by default — before Claude Code cleans them up.
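You can see what is sitting on your own disk with a few lines of Python. This is a quick sketch, not part of any official tooling — it just globs the directory layout described above (adjust the path if your installation differs):

```python
from pathlib import Path

# Default transcript location described above; adjust if yours differs.
TRANSCRIPTS = Path.home() / ".claude" / "projects"

def list_sessions(root: Path) -> list[tuple[Path, int]]:
    """Return (path, size in bytes) for every JSONL transcript, newest first."""
    files = sorted(root.glob("*/*.jsonl"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    return [(p, p.stat().st_size) for p in files]

# Print sizes in KB next to session IDs.
for path, size in list_sessions(TRANSCRIPTS):
    print(f"{size / 1024:8.1f} KB  {path.stem}")
```

Run it once and you may be as surprised as I was by how much is already there.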

I had 150 session transcripts sitting on disk and did not know it.

The raw files are not useful on their own

A 2MB JSONL file full of tool calls, system messages, and metadata is not something you want to read. The lifecycle planning session I was looking for was 1.9MB of JSONL, but the actual conversation — just the messages between me and Claude — was about 148KB. Still a lot, but manageable.

The useful part is the human-readable conversation stripped of everything else. User messages and assistant text, in order, with the tool machinery removed.
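The extraction itself is mostly filtering. Here is a minimal sketch of that step. The field names ("type", "message", "content") match what I observed in my own transcripts — treat them as assumptions, since the format is undocumented and could change:

```python
import json
from typing import Iterable

def extract_conversation(jsonl_lines: Iterable[str]) -> str:
    """Reduce a Claude Code JSONL transcript to readable markdown.

    Keeps only user and assistant text; drops tool calls, tool results,
    and metadata. Field names are assumptions based on observed files.
    """
    parts = []
    for line in jsonl_lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        if record.get("type") not in ("user", "assistant"):
            continue
        content = record.get("message", {}).get("content")
        # Content is either a plain string or a list of typed blocks.
        if isinstance(content, str):
            text = content
        elif isinstance(content, list):
            text = "\n".join(
                block.get("text", "")
                for block in content
                if isinstance(block, dict) and block.get("type") == "text"
            )
        else:
            continue
        if text.strip():
            speaker = "**Me:**" if record["type"] == "user" else "**Claude:**"
            parts.append(f"{speaker}\n\n{text.strip()}")
    return "\n\n---\n\n".join(parts)
```

Feed it the lines of a session file and you get back markdown with alternating speakers — the 148KB conversation without the 1.9MB of machinery around it.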

A skill to find and save them

I built a Claude Code skill called /load-transcript that does two things:

  1. Search — find sessions by keyword, date, or session ID. It scans the JSONL files and shows matching sessions with hit counts.
  2. Save — extract the conversation from a session and save it as a clean markdown file in a transcripts/ directory. Date-stamped, descriptively named, permanently searchable.
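The search step is simple enough to sketch. This is a rough approximation of what the skill does, not the skill itself, and it assumes the (undocumented) record shape I saw in my own transcripts — a top-level `type` field and a nested `message` object:

```python
import json
from pathlib import Path

def search_sessions(root: Path, keyword: str) -> list[tuple[Path, int]]:
    """Return (transcript path, hit count) for sessions whose user or
    assistant messages mention `keyword`, most hits first."""
    keyword = keyword.lower()
    results = []
    for path in root.glob("*/*.jsonl"):
        hits = 0
        for line in path.read_text(encoding="utf-8", errors="replace").splitlines():
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue
            # Only count hits in actual conversation, not tool noise.
            if record.get("type") in ("user", "assistant"):
                if keyword in json.dumps(record.get("message", {})).lower():
                    hits += 1
        if hits:
            results.append((path, hits))
    return sorted(results, key=lambda r: r[1], reverse=True)
```

Hit counts turn out to matter: a session that mentions "lifecycle" forty times is almost certainly the planning session you are looking for, while one that mentions it twice is probably noise.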

The idea is not to save every session. Most are routine. The idea is that when you have a conversation worth keeping — a planning session, a deep architectural debate, a brainstorm that produced something good — you can archive it before the retention window closes.

Why this matters

The gap between "planning" and "executing" is one of the biggest friction points in working with AI. You have a great collaborative session where you hash out an approach. Then you move to implementation, often in a new session with fresh context. The plan document captures the output, but the conversation that produced it contained something the document does not: the rejected alternatives, the trade-offs you considered, the moments where you changed your mind and why.

When implementation hits an ambiguous decision point, the plan says "do X." The conversation would have said "we considered Y and Z, rejected Y because of this constraint, and chose X because it handles this edge case better." That context is the difference between an implementer who can make good judgment calls and one who follows instructions blindly.

Session transcripts are not a perfect solution to this. Loading a 45-message conversation into a new session is a lot of context. But having it available — searchable, readable, referenceable — is better than having it silently expire after 30 days.

The broader pattern

This is part of something I keep coming back to: the biggest gap in working with AI is not capability. It is state. Context is the bottleneck. The model is smart enough. The question is whether it has what it needs to make the right call.

Every tool I build for my workflow — memory systems, continuity rules, checkpoint mechanisms, and now transcript archiving — is an attempt to solve the same problem from a different angle. How do you give an AI collaborator the context it needs, when it needs it, without overwhelming it?

I do not have a complete answer yet. But I know that letting good conversations disappear is not it.
