A year ago, the consensus was that AI coding assistants needed a cloud documentation service to stay current. Context7 and Deepcon were the obvious choices. We disagreed, and we shipped a bet: a local SQLite file plus MCP is enough.
Today that bet ships as v1.
@neuledge/context v1.0.0 is the stable release of an open-source, local-first documentation server for AI coding assistants. No API keys. No rate limits. No network calls at query time. One install, one MCP entry, and your AI sees real, version-pinned docs in under 10ms.
This post is the short version of what we learned getting here, what's in v1, and where we're headed.
The bet, in one paragraph
Library documentation isn't streaming data. It changes per release, not per minute. SQLite handles that workload at memory speed. MCP makes the result addressable to any AI client — Claude Code, Cursor, Copilot, Windsurf, anything else that speaks the protocol. Wrap that into one CLI and you don't need a SaaS for documentation. You need a binary and a registry.
That's the whole product. Everything in v1 is a consequence of that decision.
How we got here — the 0.x story
Each minor release in 0.x answered a specific "but what about…" question. v1 is the moment that list got short enough to call the API stable.
0.1 — context add <git-repo>. The original premise: clone a docs repo, parse the Markdown, build a .db file, expose it over MCP. It worked, but every developer built every package from source the first time they used it. (Getting started walkthrough →)
0.3 — A community registry. We built api.context.neuledge.com and pre-built ~150 packages across npm, pip, and maven. context install npm/next 15 pulls a verified, current .db instead of building one. (Why we built a registry →)
0.3 — Multi-format parsing. Markdown wasn't enough. Python ecosystems live in reStructuredText. Java lives in AsciiDoc. The same release added both, which is how Django, Flask, FastAPI, and Spring Boot showed up in the registry. (Beyond Markdown →)
0.4 — HTTP server mode. context serve --http turns one machine into a team-shared MCP server. One install, every developer's editor connects to it.
0.5–0.6 — HTML parsing and Windows compatibility. Turndown for HTML pages, sql.js as a WebAssembly fallback when better-sqlite3 won't compile. Less glamorous than features, more important for adoption.
0.7–0.8 — llms.txt with link following. context add https://react-aria.adobe.com works on any site that publishes an llms.txt. Most tools stop at the index file. We follow the links and store the actual docs.
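To make the link-following concrete, here is the shape of a typical llms.txt (a hand-written illustration, not any real site's file). Most tools would stop at this index; Context fetches each linked page and stores the contents:

```markdown
# Example UI Library

> Accessible React components for design systems.

## Docs

- [Getting started](https://docs.example.com/getting-started.md): install and first render
- [Hooks reference](https://docs.example.com/hooks.md): the full hook API
```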
0.9 — Anything with a URL. When llms.txt isn't there, fall back to fetching the page directly with browser-like headers and HTML cleanup via defuddle. Plus context auth add lets you index docs behind a login — your own Substack, your own Medium, your own paid newsletters — without sending credentials anywhere except the source site.
The pattern: every minor version closed off a "you can't use this for X" objection. v1 says we're done closing the obvious ones.
Why this matters vs the cloud alternatives
The headline difference between Context and the cloud documentation services isn't speed (though sub-10ms is hard to beat) — it's the absence of a vendor in the loop. With a local .db file:
- No rate limits. Context7's free tier dropped to 1,000 requests/month earlier this year. That's a couple of long debugging sessions. v1 has no concept of a request quota.
- No outages on someone else's status page. Your AI's documentation lookup works on a plane, in a SCIF, on flaky hotel Wi-Fi.
- No privacy tax. Your queries don't leave your machine. The model sees the docs; nobody else sees what you asked.
- No paywall on your own subscriptions. With context auth add, the docs you already pay for (newsletters, paid blogs, gated developer portals) become accessible to your AI without re-routing credentials through a third party. A sketch of that flow follows this list.
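As a sketch of that flow, assuming a gated Substack: the URL is made up, and the exact arguments context auth add accepts may differ from what's shown here, so treat this as illustrative rather than reference:

```bash
# Register credentials for a gated site; they stay on your machine
# (hypothetical invocation; check the CLI's help output for the real syntax)
context auth add https://yourname.substack.com

# Then index it like any other URL
context add https://yourname.substack.com
```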
The cloud services optimize for "we keep the docs current so you don't have to." v1 optimizes for "the docs are a file, and files are a solved problem."
What v1 actually means
v1.0.0 is not a feature dump. It's a stability commitment on top of everything 0.x shipped:
- Semver from here. The CLI surface, the MCP tool definitions, and the .db schema are now covered by semantic versioning. Breaking changes ship as v2.
- Three sources, one tool. Git repositories, the community registry, and any URL with or without llms.txt. If documentation exists somewhere on the public internet, context add ingests it.
- Cross-ecosystem coverage. JavaScript, Python, Java, plus any HTML/Markdown/RST/AsciiDoc source. Not the universe, but enough of it that "my stack isn't supported" is a rare answer.
- A team story. HTTP server mode, session management, and structured logging mean one developer can run a Context server for the whole team instead of each engineer maintaining their own. A minimal sketch of that setup follows this list.
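A minimal sketch of that setup, assuming a shared host named docs-server.internal. Only context serve --http itself appears above, so take the port flag, the URL path, and the client config fields as assumptions:

```bash
# On one shared machine: expose the local documentation packages over HTTP
# (--port is an assumed flag; the default transport and path may differ)
context serve --http --port 8080
```

Each developer's editor then points its MCP config at that machine instead of spawning a local process, along these lines (field names vary by client):

```json
{
  "mcpServers": {
    "context": {
      "url": "http://docs-server.internal:8080/mcp"
    }
  }
}
```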
If you've been waiting for v1 before evaluating, this is your signal. The rough edges that justified "we'll wait for stable" have been sanded down.
What it looks like
Two commands and a config block:
```bash
# Install
npm install -g @neuledge/context

# Pull a documentation package from the registry
context install npm/next 15
```
Wire it into your MCP client (Claude Code shown — see /integrations for Cursor, Copilot, Windsurf, and Claude Desktop):
```json
{
  "mcpServers": {
    "context": {
      "command": "context",
      "args": ["serve"]
    }
  }
}
```
That's it. Your AI coding assistant can now answer "how do I do X in Next.js 15" from a local SQLite file in under 10 milliseconds, with zero network calls, zero rate limits, and the actual API surface of the version you pinned — not whatever the model trained on a year ago.
For libraries that aren't in the registry yet:
```bash
context add https://docs.example.com    # auto-discovers llms.txt or falls back to page fetch
context add github.com/owner/docs-repo  # straight from git
```
What's next
A few directions we're investing in, deliberately vague on dates:
- More registry coverage. Especially Python, Java, and Go ecosystems where the registry is thinner than npm.
- Better discovery. Right now you have to know the package name. We want the AI to be able to answer "what docs do you have for $thing" before it answers the question itself.
- Tighter editor integrations. The MCP transport layer is solved; the per-editor UX still has rough edges.
If you have an opinion about which of those matters most, the issue tracker is github.com/neuledge/context/issues.
Try v1
```bash
npm install -g @neuledge/context
```
Read the docs, browse the integrations, or star the repo if v1 saves you a Context7 bill.
A year ago we bet that local SQLite + MCP was enough for documentation. v1 is what "enough" looks like.
New here? Start with why local-first documentation matters for AI coding, or jump straight to the step-by-step setup guide.