
Michel Osswald for Kontext

Originally published at kontext.security

Stop losing your research in chat logs 🧠


I kept having the same dumb experience.

I'd spend an evening going deep on something — twenty tabs, a couple of PDFs, screenshots, random notes, and a long back-and-forth with a model. It would finally click. I'd have a decent mental model, a rough thesis, a few "oh, that's how this actually works" moments.

Two days later I'd need it again.

And it was gone.

Pieces were buried somewhere in chat history. One answer lived in a markdown file called `notes-final-v3.md`. Another was in a clipped article I never opened again. Search would bring something back, but never the shape of what I actually understood at the time.

At some point I realized I was doing unpaid archaeology on my own work.

I didn't want "better retrieval." I wanted a system that actually accumulates understanding over time. Something closer to a maintained wiki than a stack of disposable chats and notes.

So I built oamc: a local-first research workspace that turns raw source material into a markdown wiki you can query, browse, and keep editing — instead of losing everything to chat logs.

The thing I was trying to avoid

Most AI "research" workflows still look like this:

  1. Paste a bunch of material into a model.
  2. Ask a question.
  3. Get a decent answer.
  4. Lose it somewhere.
  5. Repeat next week.

That loop is fine for disposable tasks. It's terrible for anything that's supposed to compound: a market you're tracking, a product thesis, a technical area you're learning, or an ongoing project where the real asset is the growing body of context behind each answer.

I didn't want another app that gave me smart search over a mess.

I wanted the mess to become less messy.

The basic idea

oamc uses a strict `raw/` → `wiki/` pipeline.

You drop sources into `raw/`. The system ingests them and builds a maintained markdown wiki in `wiki/`, split into a few page types:

  • source pages — one per ingested thing
  • entity pages — people, organizations, tools, projects
  • concept pages — ideas, methods, patterns
  • synthesis pages — durable answers, comparisons, analyses

Then, instead of asking future questions against your scattered notes, you ask them against the wiki.
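The first hop of that pipeline, raw file in, source page out, can be sketched in a few lines. This is my own illustration of the shape of the idea; the helper names, page layout, and `wiki/sources/` path are assumptions, not oamc's actual internals:

```python
from pathlib import Path
import re


def slugify(title: str) -> str:
    """Lowercase, dash-separated filename stem for a wiki page."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


def ingest_source(raw_file: Path, wiki_dir: Path) -> Path:
    """Turn one raw/ file into a markdown source page under wiki/sources/."""
    text = raw_file.read_text(encoding="utf-8")
    page = wiki_dir / "sources" / f"{slugify(raw_file.stem)}.md"
    page.parent.mkdir(parents=True, exist_ok=True)
    page.write_text(
        f"# Source: {raw_file.stem}\n\n"
        f"> Ingested from `{raw_file.name}`\n\n"
        f"{text}\n",
        encoding="utf-8",
    )
    return page
```

The real system adds model-generated summaries and entity/concept extraction on top, but the contract is the same: every raw file leaves a durable, readable markdown trace.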

That turns out to feel very different.

You're not hoping the model can reconstruct context from whatever you happened to paste in today. You're working against a knowledge layer that already has some shape to it. Basically a wiki that refuses to forget the good stuff.

What it actually ships

The goal was for this not to feel like another CLI you have to remember to run. Three surfaces do most of the work.

A macOS menubar app. Install once, it stays resident, runs the watcher, and launches the dashboard. Daily use is a click, not a command.

*oamc menubar — status, last activity, Open Dashboard, Open Vault in Obsidian, Process Inbox*

A local dashboard. Search, browse, and ask one bounded research question at a time against the wiki. The default outcome is a saved synthesis page, not a disposable answer.

*oamc local dashboard — "Research against the wiki, not against scattered notes."*

Obsidian, for reading and editing the maintained markdown pages.

Under the hood it's a Python app with a CLI (`llm-wiki`) that handles `ingest`, `query`, `lint`, `status`, `doctor`, `watch`, and `process`. Install the menubar runtime and you barely touch it — the watcher processes what you clip in, the dashboard is where you ask questions, and Obsidian is where you read.

The wiki is just files on disk. Plain markdown. Obsidian-friendly layout. I can read them, edit them, move them around, and keep using them even if I rip out the app later.

I've built enough tools that trapped my own data. Not interested in doing that again.

The daily flow

One-time setup:

```shell
uv sync
cp .env.example .env
export OPENAI_API_KEY=...
uv run llm-wiki init
uv run llm-wiki install-menubar
```

That last command installs oamc.app, drops it in the menubar, starts the watcher, and opens the dashboard. After that, no terminal.

The actual daily loop is:

  1. Clip a source into `raw/inbox/` — paste, drag, drop.
  2. Forget about it. The menubar watcher ingests it, generates source/entity/concept pages, and files them into the wiki.
  3. Open the dashboard (menubar → Open dashboard), search or ask a question. If the answer is worth keeping, save it as a synthesis page.
  4. Browse the wiki in Obsidian when you want to read, link things together, or clean something up.

That's it. The CLI is there if you like terminals, but it's not the point.

The only "discipline" is asking one bounded question at a time. That's what pushes the system toward specific syntheses instead of vague everything-bags.

Why a wiki, not just retrieval

This is the part I keep coming back to.

Most note systems are good at storage. Most AI systems are good at answering. Very few are good at the middle layer — the thing that sits between what you've collected and what you want to know.

That middle layer is the whole point.

If a source matters, it leaves a trace in a source page. If a tool or person keeps showing up, it gets an entity page. If a recurring idea reappears, it gets a concept page. If you ask a good question and get a useful answer, that answer becomes a synthesis page instead of disappearing into a sidebar.

That's the difference between "I asked a model something useful once" and "I'm slowly building a body of knowledge I can return to."

Why local-first

I wanted this to feel boring in the right ways.

Your live research corpus stays local. Generated wiki pages stay local. Inbox files stay local. The main reading surface is markdown.

That makes the whole thing easier to trust.

You don't have to wonder where your notes ended up. You don't have to treat your own thinking like SaaS exhaust. And you don't need a database just to start using the thing.

I know there are more ambitious knowledge systems out there. Some of them are genuinely cool. I just wanted one that fit into my normal workflow and didn't require me to buy into a whole worldview first.

The part that surprised me

I originally built oamc for research. What surprised me is how useful it became for editorial and strategy work.

Once you have a maintained wiki, you can ingest more than source material. You can ingest playbooks, analytics notes, topic ideas, screenshots, examples of good posts, rough audience notes. Then you can query that body for things like:

  • what keeps coming up across these sources
  • what's still unclear
  • what changed since last week
  • what the next piece should probably be about

Still the thinking layer, not the distribution layer. But for briefs, synthesis, and "what do we actually know right now?" questions, it's been much better than bouncing between chats and folders.
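The first of those questions, "what keeps coming up," can even be approximated without a model at all, by counting wikilinks across the pages. This is a rough stand-in I wrote for illustration; oamc's actual query path goes through the model:

```python
import re
from collections import Counter
from pathlib import Path

# Obsidian-style [[Page]] links; stop at ']', '|' (alias), or '#' (heading anchor)
WIKILINK = re.compile(r"\[\[([^\]|#]+)")


def recurring_pages(wiki_dir: Path, top_n: int = 10) -> list[tuple[str, int]]:
    """Count how often each page is linked to across the whole wiki."""
    counts: Counter[str] = Counter()
    for md in wiki_dir.rglob("*.md"):
        counts.update(WIKILINK.findall(md.read_text(encoding="utf-8")))
    return counts.most_common(top_n)
```

Because the wiki is plain markdown on disk, this kind of cheap, model-free analysis stays possible alongside the smarter queries.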

What it is not

I'll be honest about this part, because it would be easy to make oamc sound smarter than it is.

It's not a CMS. It doesn't publish to dev.to. It doesn't schedule social posts. It doesn't magically solve research quality if the input is junk.

It's also not trying to be a giant autonomous knowledge machine. The write path is narrow on purpose. The structure is opinionated on purpose. I'd rather have a smaller system that stays legible than a giant one that turns into sludge after two weeks.

The current shape

The repo is open source and set up for local use. It includes the CLI, the dashboard, the menubar runtime, tests, docs, and a release flow. Current release is v0.4.1.

If you want to try it:

github.com/michiosw/oamc — a star helps if this resonates.

Daily workflow on macOS:

```shell
uv run llm-wiki install-menubar
```

That installs oamc.app, keeps the watcher and dashboard running, and makes it feel like part of the machine instead of one more repo you have to remember to start.

Why I think this direction matters

I don't think the future of research work is "ask bigger models and hope for the best."

The more interesting direction, to me, is giving ourselves better intermediate memory. Not raw retrieval. Not permanent chat logs. Not another pile of notes. A maintained layer that can survive across weeks and actually improve as you use it.

That's what I wanted oamc to be.

It's still early, but it already feels better than what I had before. Which was the bar I cared about first.

If you're building something similar — or you've found a better way to make AI-assisted research compound instead of reset every session — I'd genuinely like to see it.
