DEV Community

Promise for Prodevel


I got tired of re-explaining my projects to Claude — so I built context-window

Every time I open a new Claude conversation, I find myself re-telling it the same things. Who I am. What stack this project uses. And, more importantly for business context, the same business rules. The two odd conventions we picked up six months ago. That one third-party API quirk that bit us last quarter.

It's not Claude's fault. It can't remember things across sessions — that's how the model works. But it means I'm doing the same warm-up exercise every single time. Multiply that by however many projects I touch in a week and it adds up to hours of wasted effort.

So I built a small tool to fix it for myself, and I've open-sourced it: context-window.

The idea

What if your context was a first-class entity, not something you re-type into a chat?

  • You write a context once — your coding conventions, your business background, notes on a tricky API. It lives in a local library with a name, a description, a category, tags, a priority, and a token count.
  • You attach it to as many projects as you like. The same "Rust Conventions" context can be referenced by every Rust repo on your machine.
  • Each project owns a tiny manifest file — just metadata, not the full bodies — that any LLM client can read at session start.
  • The LLM decides which contexts are relevant, asks for the bodies it actually needs, and gets to work already up to speed.

That's the whole shape of it. Two entities (Context and Project), one manifest, one MCP server, no cloud.
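To make the shape concrete, here is a hypothetical sketch of the two entities in TypeScript, with field names inferred from the manifest shown below — the priority levels other than `important` and the projection helper are my assumptions, not the actual source:

```typescript
// Hypothetical shapes, inferred from the manifest format (not the real source).
type Priority = "critical" | "important" | "normal"; // levels are assumed

interface Context {
  id: string;
  name: string;
  description: string;
  category: string;
  tags: string[];
  priority: Priority;
  tokenCount: number;
  content: string; // the body lives only in the library, never in manifests
}

interface Project {
  name: string;
  contextIds: string[]; // attachments reference contexts by id
}

// A manifest entry is a projection of a Context minus its body.
function toManifestEntry(c: Context) {
  const { content, tags, ...meta } = c;
  return meta;
}

const entry = toManifestEntry({
  id: "abc",
  name: "Rust Conventions",
  description: "How we write Rust in this codebase",
  category: "conventions",
  tags: ["rust", "backend"],
  priority: "important",
  tokenCount: 1240,
  content: "...",
});
console.log("content" in entry); // false: bodies never reach the manifest
```

Because projects only hold ids, the same Context can be attached everywhere without duplicating its body.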

What it looks like in practice

Create a context once:

context new \
  -n "Rust Conventions" \
  -d "How we write Rust in this codebase" \
  --content-file ./RUST_STYLE.md \
  --category conventions \
  --tags rust,backend \
  --priority important

Then, from any project on your machine:

cd /path/to/your/project
context project init -n my-project
context project attach <context-id>

That writes a .context/manifest.json in the project root. It looks like this:

{
  "project_name": "my-project",
  "mcp_server": { "transport": "stdio", "command": "context-mcp" },
  "contexts": [
    {
      "context_id": "…",
      "name": "Rust Conventions",
      "description": "How we write Rust in this codebase",
      "category": "conventions",
      "priority": "important",
      "token_count": 1240
    }
  ],
  "total_token_count": 1240,
  "instructions": "Call get_project_summary first. Use get_context to fetch a body. ..."
}

Note the deliberate absence of content. The manifest is a catalogue — names, descriptions, categories, priorities, token counts. The LLM reads this on session start, decides what's relevant, and fetches bodies on demand via the MCP server. Cheap.
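Here is a rough sketch of what that session-start flow could look like from a client's side — the ids, the second context, and the `pickRelevant` helper are illustrative, not part of the real tool:

```typescript
// Assumed client-side logic: read the catalogue, pick relevant entries by
// category, then fetch only those bodies via the MCP server.
interface ManifestEntry {
  context_id: string;
  name: string;
  category: string;
  priority: string;
  token_count: number;
}
interface Manifest {
  project_name: string;
  contexts: ManifestEntry[];
  total_token_count: number;
}

function pickRelevant(m: Manifest, categories: string[]): ManifestEntry[] {
  return m.contexts.filter((c) => categories.includes(c.category));
}

const manifest: Manifest = {
  project_name: "my-project",
  contexts: [
    { context_id: "ctx-1", name: "Rust Conventions", category: "conventions", priority: "important", token_count: 1240 },
    { context_id: "ctx-2", name: "Billing rules", category: "business", priority: "important", token_count: 800 },
  ],
  total_token_count: 2040,
};

// A Rust refactor session only needs the conventions entry; the client would
// then call get_context for just that id.
const names = pickRelevant(manifest, ["conventions"]).map((c) => c.name);
console.log(names); // [ 'Rust Conventions' ]
```

The point is that only ~1,240 tokens of body ever get fetched, not the whole library.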

Wiring Claude up to it

context-window ships with a stdio MCP server (context-mcp). One command registers it globally with Claude Code:

claude mcp add context-window --scope user context-mcp

No --project-id is needed. The server walks up from the current working directory looking for the nearest .context/manifest.json and uses that project automatically. Move between repos, and the active project follows you.
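That walk-up lookup is a familiar pattern (it's how git finds `.git`). A minimal sketch of the behaviour as described — this is my reconstruction, not the server's actual code:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Starting from a directory, check each ancestor for .context/manifest.json.
function findManifest(startDir: string): string | null {
  let dir = path.resolve(startDir);
  for (;;) {
    const candidate = path.join(dir, ".context", "manifest.json");
    if (fs.existsSync(candidate)) return candidate;
    const parent = path.dirname(dir);
    if (parent === dir) return null; // reached the filesystem root
    dir = parent;
  }
}

// Demo: a manifest two levels above the working directory is still found.
const tmp = fs.mkdtempSync(path.join(os.tmpdir(), "cw-"));
fs.mkdirSync(path.join(tmp, ".context"));
fs.writeFileSync(path.join(tmp, ".context", "manifest.json"), "{}");
const nested = path.join(tmp, "a", "b");
fs.mkdirSync(nested, { recursive: true });
console.log(findManifest(nested) !== null); // true
```

Because the lookup is per-call, switching repos in your terminal is enough to switch projects.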

There are 16 MCP tools, but in practice you only think about a handful:

  • get_project_summary — orient yourself; returns the manifest
  • get_context — fetch the body of one context
  • search_contexts — FTS5 search across the library
  • get_context_budget — when you're tight on tokens, ask for the highest-priority subset that fits a given budget
  • create_context / attach_context — capture and link new knowledge mid-conversation

The last point is the most underrated. When the user explains something the next session will need to know — a constraint, a decision, a person's preference — the model can call create_context(..., created_by: "llm:claude-opus-4-7"), and that knowledge becomes permanent and reusable. The conversation isn't wasted the moment you close the tab.

Something else I ended up liking: usage analytics

I quickly realised I wanted to know if Claude was actually using the contexts I was carefully curating. So every MCP, CLI, and web call records an event with:

  • the tool name
  • duration, outcome, error code
  • the active project and (when applicable) the context id touched
  • the calling client's name and version: claude-code, chrome, cli, firefox, etc.

That last one matters. The MCP server captures the clientInfo block from the initialize handshake, so the dashboard tells you exactly which client (and which version) is calling each tool. If Claude Code is doing the work, you'll see claude-code in the breakdown. If you use the web UI yourself, you'll see your browser. If you forget you wired up a different LLM, you'll see it sitting there in the bar chart.

context-window dashboard

The dashboard is server-rendered HTML with SVG charts. No client-side framework, no build step beyond tsc. There's a sparkline for activity over time, four bar charts (top tools, top contexts, top projects, top clients), and a row of summary stat cards. It runs at http://127.0.0.1:5173/dashboard after context web.

A few interesting design choices

The manifest is derived state. If you update a context that's attached to three projects, all three manifests rewrite themselves automatically. Same for archive, delete, attach, detach, reorder. There's exactly one source of truth — the SQLite DB — and the on-disk manifest is always a fresh projection of it. No staleness, no sync command needed.

Inactive contexts are hidden, not surgically removed. Archive a context and it stays attached to whatever projects already had it, but it's filtered out of get_project_summary, get_context_budget, and search_contexts. So an LLM never sees stale knowledge as "available", but the link is preserved if you decide to unarchive later.

Token counts are pre-computed on write. Every time a context is created or updated, we tokenise it with gpt-tokenizer and store the count. That means get_context_budget can do its work entirely as a SQL query — no expensive tokenisation per request.
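The "priority-then-position fit" mentioned later is simple enough to sketch in a few lines — this is my reading of the described behaviour (names and priority encoding are illustrative), not the actual SQL:

```typescript
// Assumed greedy fit: order by priority, break ties by attachment position,
// then take everything that still fits under the token budget.
interface Entry { name: string; priority: number; position: number; tokenCount: number }

function fitBudget(entries: Entry[], budget: number): Entry[] {
  const ordered = [...entries].sort(
    (a, b) => a.priority - b.priority || a.position - b.position,
  );
  const picked: Entry[] = [];
  let used = 0;
  for (const e of ordered) {
    if (used + e.tokenCount <= budget) {
      picked.push(e);
      used += e.tokenCount;
    }
  }
  return picked;
}

const picked = fitBudget(
  [
    { name: "conventions", priority: 0, position: 0, tokenCount: 1240 },
    { name: "api-quirks", priority: 1, position: 1, tokenCount: 900 },
    { name: "biz-rules", priority: 0, position: 1, tokenCount: 600 },
  ],
  2000,
);
console.log(picked.map((e) => e.name)); // [ 'conventions', 'biz-rules' ]
```

Because the counts are already in the database, the real version can express the same ordering and running sum as a single query.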

Storage is transport-agnostic. The whole codebase is split into core/ (business logic), storage/ (the Repository interface + a SQLite implementation), and three transports (mcp/, cli/, web/). If someone wanted to add a Postgres backend or an HTTP MCP transport for a cloud mode, they'd swap the storage + add a transport. The core services don't move.
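A hypothetical slice of what that Repository seam might look like — the method names and the in-memory implementation are mine, chosen to also show the archive-is-hidden-not-detached behaviour from above; the shipped backend is SQLite:

```typescript
// Illustrative interface; the real Repository surface is surely larger.
interface ContextRecord { id: string; name: string; tokenCount: number; active: boolean }

interface Repository {
  getContext(id: string): ContextRecord | undefined;
  listForProject(projectName: string): ContextRecord[];
  saveContext(c: ContextRecord): void;
}

// An in-memory implementation is enough for tests or a demo.
class MemoryRepository implements Repository {
  private contexts = new Map<string, ContextRecord>();
  private attachments = new Map<string, string[]>();
  getContext(id: string) { return this.contexts.get(id); }
  listForProject(project: string) {
    return (this.attachments.get(project) ?? [])
      .map((id) => this.contexts.get(id)!)
      .filter((c) => c.active); // archived contexts are hidden, not detached
  }
  saveContext(c: ContextRecord) { this.contexts.set(c.id, c); }
  attach(project: string, id: string) {
    const ids = this.attachments.get(project) ?? [];
    this.attachments.set(project, [...ids, id]);
  }
}

const repo = new MemoryRepository();
repo.saveContext({ id: "c1", name: "Rust Conventions", tokenCount: 1240, active: true });
repo.attach("my-project", "c1");
console.log(repo.listForProject("my-project").length); // 1
repo.saveContext({ id: "c1", name: "Rust Conventions", tokenCount: 1240, active: false }); // archive
console.log(repo.listForProject("my-project").length); // 0: hidden, but still attached
```

Swapping in a Postgres-backed class that satisfies the same interface would leave core/ untouched, which is the whole point of the split.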

Try it

git clone https://github.com/prodevuk/context-window.git
cd context-window
npm install
npm run build
npm install -g .
context project init       # in your project
context web                # http://127.0.0.1:5173
claude mcp add context-window --scope user context-mcp

That's the whole getting-started. It's TypeScript, Node ≥ 20, SQLite. No accounts, no API keys, no telemetry, no network calls (unless you point an MCP client at it).

The repo has a 10-test unit suite plus end-to-end smoke tests against both the stdio MCP server and the web UI.

Where to take it from here

A few things on my list:

  • Better budget heuristics. Right now get_context_budget is a strict priority-then-position fit. It could be a lot smarter — maybe weight by recent usage from the analytics it already collects.
  • A proper "what should I capture next?" prompt for the LLM, surfaced as an MCP tool.
  • Per-project token windows that match the actual model's window, not a fixed default.
  • Encrypted-at-rest contexts for the ones marked visibility: private.

If you've ever felt the same pain — "I'm pasting the same paragraph again, aren't I?" — I'd love to hear what you'd want from a tool like this.

The repo is at github.com/prodevuk/context-window. Issues, ideas, and PRs are all very welcome. MIT licence.

Thanks for reading.
