dengkui yang

A Research Workflow That Starts With Sources, Not Prompts

#ai

How private AI notebooks turn scattered files, links, notes, and local models into a reusable thinking loop.

Based on public materials from opennotebook.shop and the open-source open-notebook repository, reviewed on April 30, 2026.

Many AI note-taking tools begin with the same interface: a blank prompt box.

That is convenient, but it quietly puts the wrong thing at the center. Real research does not start with a prompt. It starts with a pile of material: papers, links, meeting notes, transcripts, PDFs, half-formed thoughts, and questions that become clearer only after you spend time with the sources.

This makes Open Notebook useful to examine as a workflow idea. The opennotebook.shop page presents a simple flow: add files, links, and notes; ask questions; save cited answers; then turn the notebook into audio-style briefings with local or cloud models. The open-source project adds the deeper architecture: self-hosting, multiple model providers, full-text and vector search, context-aware chat, AI-assisted notes, podcasts, REST API access, and local model options such as Ollama.

The useful question is not whether an AI notebook can answer a question.

The useful question is whether it can help a person keep thinking after the answer.


The Scenario: Turning Raw Material Into a Briefing

Imagine a small team preparing for a product strategy review.

They have customer interview notes, a few internal memos, a competitor page, a product analytics export, and a recording transcript from last week's meeting. None of these is enough on its own. Together, they contain a direction, but only if someone can collect them, ask better questions, preserve evidence, and turn the result into something reusable.

The common AI shortcut is to paste everything into a chatbot and ask for a summary.

That works once. Then the problems begin:

  • Which source supported the summary?
  • Which parts came from private notes versus public links?
  • What should be sent to a cloud model, and what should stay local?
  • Where does the useful answer go after the chat ends?
  • Can the team turn the result into a note, a briefing, or a follow-up research plan?

This is where the notebook metaphor becomes more than UI. A notebook is not just where answers appear. It is where the research state accumulates.


Start by Protecting the Difference Between Sources and Notes

A good research workflow begins by refusing to collapse everything into "content."

Sources and notes are not the same thing.

Sources are evidence. They are the imported material: files, links, transcripts, videos, audio, pasted text. They should remain stable and referenceable because they are the ground from which later claims are made.

Notes are thinking. They are summaries, extracted insights, saved answers, manual observations, and decisions made after interacting with sources. Notes should be editable because understanding changes.

Open Notebook's own mental model follows this split: notebooks contain sources and notes. Sources are processed, indexed, and searchable. Notes are the evolving layer of insight.

This distinction matters more than it first appears. If a system lets generated summaries blur into source material, the notebook becomes harder to trust. If a system preserves source identity, then every later output can be inspected:

  • This answer came from these sources.
  • This note was created from that interaction.
  • This briefing reused these materials.

In ontology terms, a source and a note exist differently inside the workspace because they support different interactions. A source supports verification. A note supports adaptation. Confusing the two weakens both.
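A minimal sketch of that split in code. The field names are mine, not Open Notebook's actual schema; the point is that sources are frozen evidence with stable identity, while notes are editable and carry a citation trail back to sources:

```python
from dataclasses import dataclass, field

# Sources are evidence: frozen so later steps cannot silently edit them.
@dataclass(frozen=True)
class Source:
    id: str
    title: str
    content: str  # imported text, transcript, or extracted PDF text

# Notes are thinking: mutable, and each records which sources back it.
@dataclass
class Note:
    id: str
    text: str
    source_ids: list[str] = field(default_factory=list)  # citation trail

src = Source(id="s1", title="Interview A", content="...")
note = Note(id="n1", text="Pain point: onboarding friction", source_ids=[src.id])

# A note supports adaptation: understanding changes, so its text may change.
note.text += " (confirmed in second interview)"
# A source supports verification: src.content = "edited" would raise
# FrozenInstanceError, keeping the evidence stable and referenceable.
```

The `frozen=True` flag is what encodes "sources should remain stable" as a property of the object rather than a convention the team has to remember.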

Figure: a private notebook is a transformation loop. Sources remain evidence; notes, answers, and audio become reusable layers of understanding.


The First Real Control Is Context

Once sources are collected, the next decision is not "which prompt should I write?"

It is:

What should the model be allowed to see?

This is the most underrated part of AI notebook design. Context is not just a technical limit. It is a privacy, cost, and reasoning boundary.

Open Notebook's docs describe context levels such as full content, summary only, or not in context. That seems like a small control, but it changes the whole workflow.

For the strategy review scenario:

  • Public competitor pages can go into full context.
  • Customer interviews might be summarized before model use.
  • Sensitive internal notes might stay out of cloud context entirely.
  • A local model can be used for a first pass on private material.

This is where a notebook becomes a cognitive tool rather than a chatbot. It gives the researcher a way to decide what participates in the current act of reasoning.

The practical ontology idea is simple: boundaries are part of the object. A source shared in full is not operationally the same as a source represented only by a summary. A private note excluded from context is not participating in the same interaction network as a public article. Good AI tooling should let users express those differences.
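One way to picture those context levels is a small policy function, mirroring the "full content / summary only / not in context" levels the docs describe. The dictionary layout and key names here are illustrative, not Open Notebook's API:

```python
# Each source carries its own context policy; the assembler honors it.
def build_context(sources: list[dict]) -> list[str]:
    """Collect only what each source's policy allows the model to see."""
    chunks = []
    for s in sources:
        level = s.get("context", "excluded")  # default to the safest option
        if level == "full":
            chunks.append(s["content"])
        elif level == "summary":
            chunks.append(s["summary"])
        # "excluded" sources contribute nothing to the prompt
    return chunks

sources = [
    {"title": "Competitor page", "context": "full", "content": "Pricing: ..."},
    {"title": "Customer interview", "context": "summary",
     "content": "...", "summary": "Three churn drivers."},
    {"title": "Internal memo", "context": "excluded", "content": "..."},
]
print(build_context(sources))  # → ['Pricing: ...', 'Three churn drivers.']
```

Defaulting missing policies to `"excluded"` is a deliberate choice: a source should have to opt in to being seen by a model, not opt out.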


Chat Is for Exploration, Ask Is for Discovery, Transformations Are for Reuse

One reason prompt-first tools feel shallow is that every task becomes the same interaction.

Research has more shapes than that.

Sometimes the team wants a conversation:

"Compare these two customer interviews. What tension do you see?"

That is Chat. The user chooses the context, asks follow-up questions, and steers the reasoning.

Sometimes the team wants discovery:

"Across all sources, what are the strongest arguments for delaying the launch?"

That is Ask. Retrieval matters because the user may not know where the relevant evidence lives.

Sometimes the team wants repeatability:

"For each interview, extract pain points, buying triggers, objections, and quoted phrases."

That is a transformation. The goal is not a conversation but a consistent note structure that can be compared later.

Open Notebook separates these modes, and that separation is healthy. Chat, Ask, and Transformations are not three labels for the same thing. They are three ways of working with knowledge:

  • Chat keeps the thinking fluid.
  • Ask finds relevant material across the notebook.
  • Transformations turn raw sources into structured notes.

A good workflow uses all three. The team might transform every interview into a consistent format, use Ask to find cross-source patterns, and then use Chat to reason through tradeoffs before saving the final answer as a note.

That is much closer to how thinking actually happens.
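The repeatable transformation step, in particular, can be sketched as a fixed template applied per source, so every output has the same shape and can be compared later. The template text and the stub model call are illustrative stand-ins, not Open Notebook's implementation:

```python
# A transformation is one fixed instruction applied to every source,
# so the resulting notes are structurally comparable.
TEMPLATE = (
    "For the interview below, extract pain points, buying triggers, "
    "objections, and quoted phrases.\n\n{content}"
)

def transform(sources: list[dict], run_model) -> list[dict]:
    """Apply the same instruction to each source; return one note per source."""
    return [
        {"source": s["title"], "note": run_model(TEMPLATE.format(content=s["content"]))}
        for s in sources
    ]

# Stub model so the loop is runnable without any provider configured.
fake_model = lambda prompt: f"[structured extraction of {len(prompt)} chars]"
notes = transform([{"title": "Interview A", "content": "..."}], fake_model)
```

Contrast this with Chat, where the prompt changes every turn: here the prompt is constant and only the source varies, which is exactly what makes the results reusable.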


The Notebook Should Remember the Work

The moment an AI answer becomes useful, it should not disappear into chat history.

It should become part of the notebook.

This is where saved answers and AI-assisted notes matter. A research workspace needs a way to turn a transient interaction into durable knowledge. Otherwise, the team repeats the same questions and loses the path from evidence to decision.

In a good notebook workflow:

  • raw sources stay available for verification
  • generated answers can be saved as notes
  • manual notes can correct or extend AI output
  • notes can become searchable material for later work
  • citations keep processed claims connected to evidence

This is not merely organization. It is internal adjustment.

In existence-oriented language, a system survives and develops by acting outward and adjusting inward. For a research notebook, outward action means importing sources, asking questions, generating answers, and producing briefings. Inward adjustment means saving notes, changing context, revising interpretation, and keeping the workspace ready for the next question.

That is the difference between using AI once and building understanding over time.
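The save step above can be sketched as a small function that promotes a transient answer into a durable, citable note. The record structure is hypothetical; Open Notebook exposes comparable operations through its UI and REST API:

```python
# Turn a transient chat answer into a durable note that stays linked to evidence.
def save_answer_as_note(notebook: dict, answer: str, cited_source_ids: list[str]) -> dict:
    note = {
        "id": f"n{len(notebook['notes']) + 1}",
        "text": answer,                   # editable later: notes are thinking
        "citations": cited_source_ids,    # keeps the claim connected to evidence
        "origin": "ai",                   # distinguishes saved answers from manual notes
    }
    notebook["notes"].append(note)
    return note

notebook = {"sources": ["s1", "s2"], "notes": []}
saved = save_answer_as_note(
    notebook, "Delaying the launch reduces churn risk.", ["s1"]
)
# The note is now searchable material for later work, and still points at s1.
```

The `origin` field is the small detail that prevents generated summaries from blurring into manual observations, the failure mode described earlier.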


Audio Briefings Are Not a Gimmick if the Source Trail Survives

Podcast-style output can look like a flashy feature, but in a research workflow it solves a real problem.

Not every stakeholder will read the full notebook. Not every teammate has time to inspect every source. Sometimes the useful output is a short audio-style briefing that turns a messy pile of material into a listenable synthesis.

Open Notebook's open-source materials describe podcast generation as a higher-level transformation: sources and notes become an outline, dialogue, text-to-speech output, and finally an audio file. The important part is not just the audio. It is the path from evidence to briefing.
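That pipeline (outline, dialogue, text-to-speech, audio file) can be sketched as a chain of plain functions. Each stage here is a stand-in for the real implementation, with a fake TTS engine so the chain is runnable:

```python
# Briefing pipeline sketch: sources + notes → outline → dialogue → audio bytes.
def make_outline(sources: list[dict], notes: list[dict]) -> list[str]:
    return [s["title"] for s in sources] + [n["text"] for n in notes]

def make_dialogue(outline: list[str]) -> list[dict]:
    # Alternate two speakers over the outline points.
    return [
        {"speaker": "host" if i % 2 == 0 else "guest", "line": point}
        for i, point in enumerate(outline)
    ]

def to_audio(dialogue: list[dict], tts) -> bytes:
    return b"".join(tts(turn["line"]) for turn in dialogue)

fake_tts = lambda text: text.encode()  # stand-in for a real TTS engine
outline = make_outline([{"title": "Competitor page"}], [{"text": "Churn drivers"}])
audio = to_audio(make_dialogue(outline), fake_tts)
```

Because each stage consumes the previous stage's output, every line of the final audio can in principle be traced back through the dialogue and outline to a source or note, which is the "source trail" the section title refers to.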

If the briefing is detached from the notebook, it becomes just another generated artifact. If it remains connected to sources and notes, it becomes a new consumption layer for the same research state.

That matters because knowledge work is not only about producing text. It is about changing form without losing traceability:

  • source to answer
  • answer to note
  • note to briefing
  • briefing back to follow-up questions

Better-designed AI notebook systems should not only generate outputs. They should preserve the continuity between outputs.


Why Local and Self-Hosted Options Change the Workflow

Model choice changes behavior.

If a team has to send everything to a single cloud model, it may over-share or avoid using AI for sensitive work. If the same notebook can use local models for privacy-sensitive passes and cloud models for less sensitive synthesis, the workflow becomes more flexible.

Open Notebook's support for multiple providers and local options such as Ollama is valuable for this reason. It lets model selection become part of the work, not a hidden infrastructure detail.

For the strategy review example, the team might use:

  • a local model to summarize sensitive notes
  • a cloud model to polish a non-sensitive stakeholder briefing
  • embeddings and search to find relevant source chunks
  • a self-hosted deployment to keep the notebook near private data
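That routing decision can be sketched as a small policy function. The provider names are placeholders of my own; the point is only that model selection becomes an explicit, inspectable part of the workflow rather than a hidden infrastructure detail:

```python
# Hypothetical router: sensitive material stays local, the rest may go to cloud.
def pick_model(task: dict) -> str:
    if task.get("sensitive"):
        return "local/ollama"      # private material never leaves the machine
    if task.get("kind") == "embedding":
        return "local/embeddings"  # cheap and frequent, so keep it local
    return "cloud/default"         # polish and synthesis on non-sensitive text

print(pick_model({"sensitive": True, "kind": "summary"}))  # → local/ollama
print(pick_model({"kind": "embedding"}))                   # → local/embeddings
print(pick_model({"kind": "briefing"}))                    # → cloud/default
```

Checking `sensitive` before anything else encodes the privacy boundary as the first rule, not an afterthought.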

Self-hosting is not free. It brings setup, updates, credentials, backups, and security responsibility. But it also gives a team more control over where research lives and how models interact with it.

The point is not that every team must self-host.

The point is that serious knowledge workflows need visible tradeoffs.


What This Style of Notebook Is Really For

Open Notebook is best understood as a tool for people who do not only want answers.

They want a controlled path from source material to reusable understanding.

That makes it relevant for:

  • researchers collecting papers, transcripts, and notes
  • product teams preparing decisions from mixed evidence
  • students building long-term understanding instead of one-off summaries
  • consultants turning interviews and documents into client-ready briefings
  • teams that need local or self-hosted workflows for sensitive context

The pattern is broader than any one product:

Start with sources. Decide context. Ask and explore. Save notes. Transform knowledge into the format the next person can use.

That is the workflow an AI notebook should support.


Final Takeaway

A useful AI notebook is not simply the one that produces the smoothest answer to the first question.

It is the one that helps a person keep control of the research process after that answer appears.

Open Notebook points toward that direction: sources remain evidence, notes become evolving understanding, context control defines what the model can touch, and outputs can become briefings without losing their relationship to the notebook.

That is why the product is more interesting as a cognitive workflow than as a chat interface.

It starts with sources, not prompts.

And when it works, it helps research become something you can return to, revise, and reuse.

References

Originally published at https://medium.com/@li3169086779/a-research-workflow-that-starts-with-sources-not-prompts-134f86e53e5a
