DEV Community

Beyond the Scrapbook: Building a Developer Knowledge Commons

Daniel Nwaneri on March 16, 2026

DEV.to shipped Agent Sessions last week — a beta feature that lets you upload Claude Code session files directly to your DEV profile, curate what t...
Pascal CESCATO

Funny thing — I’m actually the “Pascal” referenced here 😄

I really find this tool interesting! For me, its value isn’t building a knowledge base, but being able to extract and embed specific moments from a session (Copilot CLI, Claude, etc.) directly into an article — showing the process in context.

Not everything valuable requires a complex study beforehand; sometimes it’s quite the opposite.

Big thanks to @jonmarkgo for putting this out there and for taking user feedback into account live — it really shows how the tool can evolve with real use.

Daniel Nwaneri

Pascal! The real one. Glad you found it.
Your use case reframes something I hadn't fully separated: preserving knowledge for future sessions versus documenting process for current readers. Those look similar from the outside, since both involve capturing what happened, but the curation logic is completely different. For a knowledge base you're filtering for reusability. For an article embed you're filtering for legibility. What makes a moment worth storing isn't the same as what makes a moment worth showing.

The "not everything valuable requires complex study beforehand" point cuts at something real. Some of the most useful things I've captured from sessions are small . A decision that got made quickly, a correction mid-task. Not worth a paragraph on their own. Exactly right as evidence inside an argument.

Pascal CESCATO

That’s exactly it — “worth showing” vs “worth storing” is a really good way to put it 👍

leob • Edited

So you think that dev.to's feature is a "nice idea" but it's not gonna be practically usable?

Daniel Nwaneri

Not exactly - the feature works, and the local parsing decision is the right architectural call. What I'm skeptical about is the manual curation ceiling. The developers who'd benefit most are running multiple Claude Code sessions a day. They'll upload three sessions in the first week because it's novel. Then stop. The sessions keep accumulating in ~/.claude/projects/.
The missing piece isn't the UI. It's the evaluation layer that runs before you ever see the session — automatic extraction, scoring, surfacing the high-signal moments for review. DEV.to built the right first 10%. The rest is a pipeline problem, not a design problem.

leob

But not only a curation problem - as you say:

"They'll upload three sessions in the first week because it's novel. Then stop"

It takes effort to do, and without tangible benefits, people won't do it ...

Curation (the other 90%) is then the next hurdle.

I see it a bit as a "cool feature" which they whipped up in an afternoon, but which they didn't really think through enough.

Daniel Nwaneri

The effort problem and the curation problem are the same problem. People don't stop uploading because curation is hard. They stop because the return isn't visible. You put in the work, you get a post that maybe three people read. The feedback loop is too weak to sustain the habit.
That's why the pipeline has to run before you see the session, not after. If the evaluator automatically surfaces two insights worth keeping from a four-hour session, the effort cost drops to minutes and the return is immediate - something retrievable in your next session, not a post you hope someone finds.
"Whipped up in an afternoon" is probably fair. The local parsing was thoughtful. Everything upstream of it wasn't.

Jon Gottfried

I actually think we're solving for different problems here, and I really appreciated your post's thoughtful deep dive into the larger problem space, since there are a lot of opportunities there. Chat-knowledge is also a very clever tool.

Ultimately, the goal of DEV's Agent Sessions was not explicitly to provide a new layer of knowledge to agents.

The goal was to enrich the content on DEV that already showcases people's work with coding agents. The most common way that people were showcasing these interactions was with screenshots, and this was meant to be a solution for that specifically, especially as our volume of content around AI workflows has grown drastically.

I agree with you that the curation is a bit unwieldy, especially with very long sessions or a lot of parallel sessions. I have some ideas around better recommendations of what to include in your curated sessions as part of the upload workflow as well.

I also do think there are opportunities in the future to make sessions or even content on DEV a part of your workflow with coding agents, via some kind of agent-friendly skill or tool (MCP/CLI/etc), which could help with some of the knowledge commons issues you're describing.

Daniel Nwaneri

That's a fair correction and worth making clearly. I was reading Agent Sessions through the lens of the knowledge commons problem which was my frame, not yours. Screenshots as the dominant format for showcasing agent workflows is a real problem and a more immediate one than what I was writing about.

The MCP/CLI direction is exactly where I'd hope this goes. The gap between "showcasing work" and "work feeding back into future sessions" is smaller than it looks architecturally. The capture layer is mostly already there. What's missing is the curation signal that decides what's worth injecting into future context versus what's noise. That's the problem Foundation's evaluator is trying to solve at the session level.

If DEV ever wants to experiment with an agent-friendly knowledge layer, I'd be genuinely interested in that conversation.

golden Star

This is a really good distinction between capture, curation, and evaluation.
Most tools stop at capture, some reach curation, but almost none solve extraction at scale. And without extraction, session logs are just archives, not knowledge.

The manual curation ceiling you describe feels very real. The people who generate the most useful sessions are exactly the ones least likely to spend time cleaning them up for publishing. If the pipeline depends on discipline, it won’t last. If it depends on automation with human review, it might.

The federation point is also important. We already saw what happens when the knowledge graph lives inside a single platform. Even good platforms eventually optimize for growth, not for preservation. A commons needs portability by design, not as an afterthought.

What’s interesting to me is that AI sessions are probably the richest raw knowledge source developers have ever produced — full reasoning paths, failed attempts, trade-offs, corrections — but without an evaluation layer they’re almost unusable.

If we get the pipeline right (capture → evaluate → index → federate), this could become something bigger than Stack Overflow ever was.
If we don’t, we’ll just end up with millions of private conversations that never compound into shared understanding.
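To make that flow concrete, here is a toy sketch of the four stages as composable functions. The event shape and the `kind` label are invented for illustration; a real pipeline would score content rather than trust a pre-assigned tag:

```python
import json

def capture(raw_jsonl: str) -> list[dict]:
    """Capture: parse session events without losing anything."""
    return [json.loads(line) for line in raw_jsonl.splitlines() if line.strip()]

def evaluate(events: list[dict]) -> list[dict]:
    """Evaluate: keep only events worth preserving (placeholder filter)."""
    return [e for e in events if e.get("kind") in ("decision", "correction")]

def index(insights: list[dict]) -> dict[str, list[str]]:
    """Index: group kept insights so they are retrievable later."""
    grouped: dict[str, list[str]] = {}
    for e in insights:
        grouped.setdefault(e["kind"], []).append(e["text"])
    return grouped

def federate(indexed: dict[str, list[str]]) -> str:
    """Federate: export in a portable format any platform can ingest."""
    return json.dumps(indexed, sort_keys=True)
```

The point of the sketch is the ordering: evaluation sits between capture and indexing, so archives never reach the index unfiltered, and the export format belongs to no single platform.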

Hyperliquid

Great post