DEV.to shipped Agent Sessions last week — a beta feature that lets you upload Claude Code session files directly to your DEV profile, curate what to include, and embed specific exchanges in your posts. Upload a .jsonl or .json session file, pick the moments worth keeping, save it. The parser currently supports Claude Code, Gemini CLI, Codex, Pi, and GitHub Copilot CLI.
I was in the announcement thread.
Pascal flagged that the useful unit for embedding in technical writing is a specific exchange, not the full session. You want the moment the agent made the wrong call, or the prompt that finally produced the right output. Uploading a full session to access two minutes of it is friction that'll stop most writers from using it at all.
The DEV team shipped a refactor the same day. Client-side curation, 10MB limit gone. That's a fast turnaround.
So I'll say this clearly: the feature works. The local parsing decision is the right call architecturally — nothing hits their servers before you've seen it. The sessions sitting in ~/.claude/projects/ right now are invisible to everyone, including future you. DEV.to made them visible.
That's the first 10% of the problem.
What's Actually in a Session File
A Claude Code .jsonl session file isn't clean. It's a raw record of everything: every prompt, every response, every tool call, every error, every retry, every tangent that went nowhere.
A 4-hour session produces tens of thousands of lines. Most of it is noise. The three decisions that mattered — the architectural choice you made in hour two, the wrong assumption you caught before it shipped, the pattern you'd use again — are buried in the middle of a conversation about a linter warning.
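To make the noise concrete: here's a minimal sketch of tallying what a session file actually contains. The exact schema of Claude Code's .jsonl entries isn't documented here, so the `"type"` field is an assumption — adjust it to whatever your files hold.

```python
import json
from collections import Counter
from pathlib import Path

def summarize_session(path):
    """Tally entry types in a .jsonl session file, one JSON object per line.

    Assumes each entry has a "type" field (user turn, assistant turn,
    tool call, etc.) -- an assumption about the schema, not a spec.
    """
    counts = Counter()
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue  # skip blank lines rather than crash on them
        entry = json.loads(line)
        counts[entry.get("type", "unknown")] += 1
    return counts
```

Run that against a real 4-hour session and the ratio of tool calls and retries to actual decisions tells you everything about why manual curation doesn't scale.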
Manual curation finds those moments if you remember where they are. It doesn't find them if you don't. And it definitely doesn't find them across 40 sessions from the last two months.
This is the problem DEV.to's feature doesn't touch: extraction.
Not capture. Not curation. The automated process of reading a session and asking — what here is worth keeping? What decision was made, what reasoning supported it, what would someone need to know to not repeat this mistake?
That's not a UI problem. That's an evaluation problem. It's solvable — but not with a manual upload flow.
The Scrapbook vs The Commons
DEV.to built a scrapbook. Upload a session. Pick the good parts. Post it.
That's individually useful. A developer who publishes their sessions regularly builds a public record of how they think. That has real value — for their own portfolio, for their team, for the developers who find it later.
But it doesn't compound.
The knowledge in that session stays attached to that post, on DEV.to's platform, searchable through DEV.to's search, accessible through DEV.to's interface. It doesn't talk to anything else. It can't be queried by an AI tool working on a related problem. It can't be federated to a developer who runs their own instance. It can't surface automatically when a similar problem comes up in a different session six months later.
The session is uploaded. The knowledge is published. Nothing learns from it.
A knowledge commons does something different. It extracts the signal from the noise automatically, scores it, indexes it semantically, and makes it retrievable — by humans, by AI tools, by other instances in a federated network. The knowledge compounds. Sessions from six months ago surface when they're relevant today. A decision made by a developer in one context becomes findable by a developer in a different context who's facing the same constraint.
That's what's missing. Not the upload. The evaluation layer that turns uploaded sessions into searchable, reusable, federated knowledge.
The Manual Curation Ceiling
DEV.to's flow puts curation on you. You decide what to include. You decide what to cut. You write the context that makes it useful for other readers.
That's fine for one session. It doesn't scale to forty.
The developers who would benefit most from session preservation — the ones building seriously, shipping regularly, running multiple Claude Code sessions a day — are exactly the developers who don't have time to manually curate every session before publishing. They'll upload three sessions in the first week because it's novel. Then they'll stop. The sessions keep accumulating in ~/.claude/projects/. The feature stops getting used.
This is the friction problem that killed a dozen "public learning" features before it. The intent is genuine. The execution cost is too high to sustain.
The alternative is automatic extraction that runs in the background — sessions flow in, evaluation runs, high-signal insights surface for review, low-signal noise is filtered out before it ever reaches you. You review what the evaluator flagged, not the raw session. The curation burden drops from hours to minutes.
That's a pipeline, not a UI.
Agent Sessions is the UI.
Who Owns the Knowledge
There's a structural question underneath the feature that DEV.to doesn't answer: who owns the knowledge you publish?
When you upload a session to DEV.to and publish it, the content lives on DEV.to's platform. That's fine — DEV.to has been a good steward of developer content. But the knowledge is centralized. DEV.to controls the search. DEV.to controls the API. DEV.to controls what gets surfaced and how.
We've seen this pattern before. Stack Overflow was a good steward too, until the economics changed. Reddit. Twitter. The platforms that host the knowledge and the platforms that profit from it are the same platform, and that alignment doesn't hold forever.
A federated model works differently. Your knowledge lives on your instance. You control the search. You decide what gets federated to other instances and what stays private. The network is the value, not the platform. No single company owns the graph.
ActivityPub makes this possible today. The same protocol that lets Mastodon instances talk to each other can federate developer knowledge across independent instances. Your session insights are yours. They travel to other instances when you choose. They're retrievable by any tool that speaks the protocol.
DEV.to can't build this. It's structurally incompatible with a platform business model. A platform needs the content centralized to monetize the audience. Federation distributes the content, which distributes the audience, which distributes the revenue.
That's not a criticism — it's a constraint. DEV.to is doing what a platform can do. Federation is what a commons can do.
What the Other 90% Looks Like
I've been building Foundation for the past several months. It's not a concept.
A browser extension captures conversations automatically from Claude.ai — no manual upload, no file hunting. Conversations flow into a three-table schema in D1 (chats, messages, chunks) and get chunked at the passage level for semantic search in Vectorize. When a session ends, the pipeline runs an evaluation pass using Workers AI (Llama 3.3 70B): three binary signals — is there a concrete technique being applied (usage), is it confirmed to work (validation), is it actionable rather than vague (specificity). Score ≥ 0.67 auto-promotes to knowledge memory. Score 0.33–0.67 surfaces for human review. Score < 0.33 gets discarded.
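The routing logic is small enough to show in full. This sketch treats the post's 0.67 and 0.33 cutoffs as rounded 2/3 and 1/3 — that is, the score is the mean of the three binary signals, so two signals promote and one signal goes to review. That interpretation is mine, not a quote from Foundation's code.

```python
def route_insight(usage: bool, validation: bool, specificity: bool) -> str:
    """Route an extracted insight by the mean of three binary signals.

    Thresholds 0.67 and 0.33 are treated as rounded 2/3 and 1/3,
    so the comparison is exact: 3/3 or 2/3 promotes, 1/3 goes to
    human review, 0/3 is discarded.
    """
    score = (usage + validation + specificity) / 3  # bools sum as 0/1
    if score >= 2 / 3:
        return "promote"
    if score >= 1 / 3:
        return "review"
    return "discard"
```

The point of binary signals over a 1–10 rubric: an LLM evaluator answering three yes/no questions is far more consistent than one assigning a scalar, and the routing stays auditable.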
The knowledge that survives is indexed semantically and exposed via an MCP server: list_chats, get_chat, extract_insights, score_insights. Any AI tool that speaks MCP can query it. The session ended an hour ago. The insight from it is already retrievable in the next session.
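For a feel of that tool surface, here's a toy stand-in. The tool names match the ones listed above; the storage, the signatures, and the `"insight:"` tagging convention are invented for illustration and are not Foundation's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeStore:
    """Toy stand-in for an MCP-style tool surface over stored sessions.

    chats maps a chat id to its list of messages. The real system
    backs this with D1 and Vectorize; a dict is enough to show shape.
    """
    chats: dict = field(default_factory=dict)

    def list_chats(self) -> list:
        return sorted(self.chats)

    def get_chat(self, chat_id: str) -> list:
        return self.chats[chat_id]

    def extract_insights(self, chat_id: str) -> list:
        # The real pipeline runs an LLM evaluation pass; here a
        # message tagged "insight:" stands in for an extracted insight.
        return [m for m in self.chats[chat_id] if m.startswith("insight:")]
```

`score_insights` would sit on top of this, applying the threshold routing to whatever `extract_insights` returns. The shape is what matters: any MCP client gets the same four verbs, regardless of what's behind them.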
The federation layer is ActivityPub. The same protocol Mastodon runs on. Your insights live on your instance. They travel to other instances when you choose. A developer running their own Foundation can federate with yours without either of you going through a centralized platform.
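What federating an insight looks like on the wire, roughly: a standard ActivityStreams `Create` activity wrapping a `Note`. The structure follows the ActivityPub spec; every URL, ID, and the insight text below are invented placeholders, not real Foundation instances.

```python
# Sketch of an ActivityPub payload one instance might send another.
# Shape per the ActivityStreams vocabulary; all URLs are placeholders.
insight_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://alice.example/actors/alice",
    "to": ["https://bob.example/actors/bob"],
    "object": {
        "type": "Note",
        "id": "https://alice.example/insights/42",
        "attributedTo": "https://alice.example/actors/alice",
        "content": "Chunk at the passage level before embedding; "
                   "whole-message chunks dilute retrieval.",
    },
}
```

Because it's the same vocabulary Mastodon speaks, any conformant server can receive, store, and re-surface that insight without knowing anything about Foundation specifically.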
That's the full picture. Not a wireframe — running code, at github.com/dannwaneri/chat-knowledge.
The problem is now visible enough for a major platform to build around it. Developers are generating valuable knowledge in their sessions and losing it. That's real, and DEV.to naming it publicly matters.
But capture without evaluation is a scrapbook. A scrapbook without federation is a silo. A silo on someone else's platform is borrowed infrastructure.
The commons version of this is harder to build. That's why I've been building it.
Top comments (3)
So you think that dev.to's feature is a "nice idea" but it's not gonna be practically usable?
Not exactly - the feature works, and the local parsing decision is the right architectural call. What I'm skeptical about is the manual curation ceiling. The developers who'd benefit most are running multiple Claude Code sessions a day. They'll upload three sessions in the first week because it's novel. Then stop. The sessions keep accumulating in ~/.claude/projects/.
The missing piece isn't the UI. It's the evaluation layer that runs before you ever see the session — automatic extraction, scoring, surface the high-signal moments for review. DEV.to built the right first 10%. The rest is a pipeline problem, not a design problem.
But it's not only a curation problem - as you say:
"They'll upload three sessions in the first week because it's novel. Then stop"
It takes effort to do, and without tangible benefits, people won't do it ...
Curation (the other 90%) is then the next hurdle.
I see it a bit as a "cool feature" which they whipped up in an afternoon, but which they didn't really think through enough.