DEV Community

Ladislav Sopko

I plugged my Claude Code into 881 indexed libraries. Here's what changed.

There's a class of bug that the current generation of AI coding assistants reliably produces, and it's worth naming because the fix is real and available now.

The bug: your assistant calls a library method that doesn't exist.

Not a typo. Not a logic error. A confident, well-formatted, syntactically perfect call to a method that has been deprecated, renamed, or never existed in the version you're using. The compiler tells you. You go back, ask the assistant to fix it, and 40% of the time it suggests a different method that also doesn't exist. By the time you've corrected it manually, you've burned 15 minutes and 30,000 tokens.

This isn't because the model is bad. It's because the model's training data is a snapshot, and library code is a stream. The data set knew StackExchange.Redis 2.5; you're on 2.8. The data set knew Tokio when its runtime API had a different shape. The model is doing exactly what it was trained to do — reproduce patterns it saw — and the patterns have moved.

The grep workaround makes it worse. When the assistant fetches your repo and runs textual searches over node_modules or vendor directories, it's still pattern-matching on substrings. It finds "Connect" in 200 places and has no way to tell which is the actual method definition vs. a comment vs. a similarly-named function in an unrelated package.

What's missing is the same thing your IDE has: a semantic index. A structured map that knows ConnectionMultiplexer.Connect is one specific method, returns these types, is called from these 47 places, was added in version X. SCIP — the format Sourcegraph open-sourced — is exactly that. It's how your "Find references" key works.
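To make the contrast concrete, here's a toy sketch in plain Python. The file contents and index layout are invented for illustration — real SCIP indexes are far richer (occurrences, symbol roles, documentation) — but the asymmetry is the point: substring search can't tell a definition from a comment, while even a minimal symbol table resolves one symbol to one definition.

```python
# Toy illustration: substring search vs. a symbol index.
# Files and index layout are hypothetical, not SCIP's actual format.

files = {
    "redis/ConnectionMultiplexer.cs": "public static ConnectionMultiplexer Connect(string config) { }",
    "redis/README.md": "// Call Connect() after configuring the client.",
    "other/Pool.cs": "void ConnectPool() { }",
}

# Grep-style retrieval: every substring hit looks the same.
grep_hits = [path for path, text in files.items() if "Connect" in text]
print(grep_hits)  # all three files match: definition, comment, unrelated function

# Index-style retrieval: one symbol resolves to one typed definition.
index = {
    "ConnectionMultiplexer.Connect": {
        "definition": "redis/ConnectionMultiplexer.cs",
        "returns": "ConnectionMultiplexer",
    }
}
print(index["ConnectionMultiplexer.Connect"]["definition"])
```

The assistant consuming the second result gets one small, unambiguous answer instead of three candidate files to read through.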

What I did

I connected my Claude Code (and a parallel Cursor session for cross-checking) to a public MCP server that hosts SCIP indexes for 881 popular open-source libraries: mcp.example4.ai. Free, zero install, one URL in the MCP config.

Then I re-ran the kinds of questions that usually go sideways:

  1. "Use StackExchange.Redis to connect to a sentinel cluster."
  2. "Show me how tokio::select! actually expands at runtime."
  3. "Find every place in Spring Boot where WebSecurityConfigurerAdapter is used so I know what to migrate to."
  4. "What does useFormStatus actually do under the hood in React 19?"

In every case, the assistant didn't answer from training-data memory first. It made tool calls — xmp4_search, xmp4_source, xmp4_usages, xmp4_callers, xmp4_hierarchy — and pulled back real, current, indexed source. Then it answered.

The token economics

The team behind it published a reproducible benchmark on their protocol repo. Same questions answered with grep-based retrieval vs. SCIP-based retrieval — between 70 and 93 percent fewer tokens for equivalent answers, depending on the kind of query (outline lookups land near the top of that range, deep impact-analysis queries near the floor). I re-ran a slimmed-down version on my side: same direction, same magnitude. The savings come from the index returning just the relevant symbols instead of the assistant having to triangulate through dumps of file content.

The downstream effect is the part nobody talks about: when you save tokens on retrieval, the conversation stays coherent for longer. Long sessions on hard refactors stop drifting because the context window isn't half-full of grep results.

Where it works and where it doesn't

It works on the libraries that are indexed. 881 today, including most of the ones I touch in real work — React, Vue, Svelte, Django, FastAPI, Flask, Spring Boot, Tokio, Axum, StackExchange.Redis, gRPC, Tailwind, Vercel AI SDK. 11 languages.

It does not work on:

  • Your private repo (you'd need to run a private indexer).
  • Libraries not yet in the index (you can request them publicly; requests are triaged weekly).
  • Niche language ecosystems they don't yet cover.

How to try it

In your MCP client config (Claude Code, Cursor, Claude Desktop, Continue, Cline — all support MCP), add:

{
  "mcpServers": {
    "example4": {
      "url": "https://mcp.example4.ai/mcp",
      "transport": "http"
    }
  }
}

Restart your client. Ask it a library question. Watch the tool calls.

My take

This is not a "10x your productivity" claim. It is a "stop your AI from making this specific class of mistake" claim. That alone is worth the two-minute config change for me. The token savings are a bonus.

Spec + numbers (open source): github.com/LadislavSopko/lsai-protocol
Public endpoint: mcp.example4.ai
