I watched Claude burn through half its context window just to find where a function is defined.
Ask any AI coding assistant "Where is PlayerHealth defined?" and here's what happens:
- It runs grep "PlayerHealth" → 200 matches across 40 files
- It reads File1.cs, File2.cs, File3.cs...
- 2000+ tokens gone, 5+ tool calls — for a simple lookup
Do that 10 times and your context window is toast. Not from coding — from navigation.
grep is the wrong tool for AI
Think about it: grep searches text. Search for log and you'll match catalog, logarithm, blog, every comment mentioning
"log", and every string containing it. The AI has to read through all that noise to find the actual log function.
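The substring problem is easy to demonstrate with a few lines of plain TypeScript (a toy comparison, not AiDex code):

```typescript
// A text search matches any substring, so "log" hits unrelated symbols.
const symbols = ["log", "catalog", "logarithm", "blog", "login"];

// grep-style: substring match over raw text
const textMatches = symbols.filter((s) => s.includes("log"));
// all five match, four of them are noise

// index-style: exact lookup against extracted identifiers
const exactMatches = symbols.filter((s) => s === "log");
// only the one you asked for

console.log(textMatches.length, exactMatches.length); // 5 1
```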
What if the AI could just query an index instead?
AiDex: one query, exact answer
I built https://github.com/CSCSoftware/AiDex — an MCP server that pre-indexes your codebase using Tree-sitter and
gives AI assistants instant access.
Before (grep):
grep "PlayerHealth" → 200 matches, AI reads 5 files
→ 2000+ tokens, 5 tool calls, 10+ seconds
After (AiDex):
aidex_query({ term: "PlayerHealth" })
→ Engine.cs:45, Player.cs:23, UI.cs:156
→ ~50 tokens, 1 tool call, 3ms
Roughly 40x less context. One call instead of five.
Tree-sitter parses your code into an AST — it knows what's a function, what's a class, what's a variable. AiDex
indexes only identifiers. So log finds only log, not catalog.
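A minimal sketch of that identifier-only idea, using a made-up node shape rather than Tree-sitter's real API:

```typescript
// Hypothetical, simplified AST node. Real Tree-sitter nodes look
// different; this only illustrates the extraction principle.
interface Node {
  type: string;      // e.g. "function", "class", "comment", "string"
  name?: string;     // identifier, if this node declares one
  children?: Node[];
}

// Walk the tree and keep only declared identifiers, skipping comments
// and string literals entirely. That is why a query for "log" can
// never match a comment that merely mentions "log".
function extractIdentifiers(node: Node, out: string[] = []): string[] {
  if (node.name && node.type !== "comment" && node.type !== "string") {
    out.push(node.name);
  }
  for (const child of node.children ?? []) extractIdentifiers(child, out);
  return out;
}

const tree: Node = {
  type: "file",
  children: [
    { type: "function", name: "log" },
    { type: "comment" },  // "// log everything", ignored
    { type: "string" },   // "catalog", ignored
    { type: "class", name: "Logger" },
  ],
};

console.log(extractIdentifiers(tree)); // [ 'log', 'Logger' ]
```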
What you get
Instant code search — find any function, class, or variable:
aidex_query({ term: "render", mode: "starts_with" })
→ renderFrame (engine.ts:45), renderUI (app.ts:120)
Method signatures — see all methods without reading the file:
aidex_signature({ file: "src/engine.ts" })
→ class Engine { ... }
→ function renderFrame(delta: number): void
Time-based filtering — "what changed in the last 2 hours?":
aidex_query({ term: "render", modified_since: "2h" })
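One plausible way such a filter maps onto an index, sketched in TypeScript (the duration parsing and entry shape below are my assumptions, not AiDex's actual internals):

```typescript
// Convert a shorthand like "2h" or "30m" into a cutoff timestamp.
function parseSince(spec: string, now = Date.now()): number {
  const match = /^(\d+)([mhd])$/.exec(spec);
  if (!match) throw new Error(`bad duration: ${spec}`);
  const unitMs = { m: 60_000, h: 3_600_000, d: 86_400_000 }[
    match[2] as "m" | "h" | "d"
  ];
  return now - Number(match[1]) * unitMs;
}

// Assumed index-entry shape: identifier, file, and recorded file mtime.
interface Entry { name: string; file: string; mtime: number; }

// Filter indexed identifiers by name AND by when their file changed.
function queryModifiedSince(
  entries: Entry[], term: string, spec: string, now = Date.now(),
): Entry[] {
  const cutoff = parseSince(spec, now);
  return entries.filter((e) => e.name === term && e.mtime >= cutoff);
}
```

With `now` fixed at the 10-hour mark, `queryModifiedSince(entries, "render", "2h", now)` keeps only entries whose file mtime falls within the last two hours.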
Cross-project search, session notes that persist between chats, a task backlog for tracking TODOs and bugs, and an
interactive browser viewer at localhost:3333.
🆕 New since v1.11: Global Search, Zero-Config, Screenshot Optimization
Global Search (v1.11) — Search across ALL your projects at once:
aidex_global_query({ term: "TransparentWindow", mode: "contains" })
→ Found in: LibWebAppGpu (3 hits), DebugViewer (1 hit)
Perfect for "Have I ever written X?" — one call searches 150+ projects.
Zero-Config Setup (v1.12) — Just install, everything auto-configures:
npm install -g aidex-mcp
That's it. No more manual setup. Auto-detects Claude Code, Claude Desktop, Cursor, Windsurf, Gemini CLI, and VS Code Copilot, and registers AiDex with all of them.
Screenshot Optimization (v1.13) — Screenshots 95% smaller for LLM context:
aidex_screenshot({ scale: 0.5, colors: 2 })
→ 108 KB → 5 KB (95% saved!)
Most screenshots in AI context are for reading text — error messages, logs, UI labels. You don't need 16 million colors for that. New scale and colors parameters reduce file size dramatically while keeping text readable. The AI starts with aggressive settings (B&W, half size), retries with more colors if unreadable, and remembers what works per app.
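That retry ladder might look roughly like this. A sketch under stated assumptions: `capture` and `isReadable` are hypothetical stand-ins for the real screenshot call and readability check, not AiDex functions.

```typescript
interface ShotSettings { scale: number; colors: number; }

// Try cheap settings first; only escalate if the text is unreadable.
function captureReadable(
  capture: (s: ShotSettings) => Uint8Array,
  isReadable: (img: Uint8Array) => boolean,
): { image: Uint8Array; used: ShotSettings } {
  const ladder: ShotSettings[] = [
    { scale: 0.5, colors: 2 },   // B&W, half size (smallest)
    { scale: 0.5, colors: 16 },  // a few grays for anti-aliased text
    { scale: 1.0, colors: 256 }, // larger, but still far below full color
  ];
  for (const settings of ladder) {
    const image = capture(settings);
    if (isReadable(image)) return { image, used: settings };
  }
  // Last resort: full-fidelity capture.
  const full = { scale: 1.0, colors: 16_777_216 };
  return { image: capture(full), used: full };
}
```

Remembering the settings that worked per app (as described above) would simply mean caching `used` keyed by application name.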
| | Raw Screenshot | Optimized (scale=0.5, colors=2) |
|---|---|---|
| File size | ~100-500 KB | ~5-15 KB |
| Tokens consumed | ~5,000-25,000 | ~250-750 |
| Text readable? | Yes | Yes |
Performance
| Project | Files | Index time | Query time |
|---|---|---|---|
| Small | ~20 | <1s | 1-5ms |
| Medium | ~100 | <1s | 1-5ms |
| Large | ~500+ | ~2s | 1-10ms |
Single SQLite file. No cloud, no telemetry. Everything runs locally.
11 languages: C#, TypeScript, JavaScript, Rust, Python, C, C++, Java, Go, PHP, Ruby
Setup in 10 seconds
npm install -g aidex-mcp
That's it. Auto-setup detects your AI tools and registers AiDex with them. Works with Claude Code, Claude Desktop, Cursor,
Windsurf, Gemini CLI, VS Code Copilot, and anything else that speaks MCP.
How it works
- Tree-sitter parses each file into an AST
- Extractor walks the AST, collects identifiers + method signatures
- SQLite stores everything (WAL mode, fast reads)
- MCP server exposes 27 tools via stdio transport
- Incremental updates — only changed files get re-indexed
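Step 5, the incremental update, boils down to comparing current file mtimes against the mtimes recorded at last index time. A simplified sketch (AiDex keeps these in SQLite; a Map stands in here):

```typescript
// Decide which files need re-indexing: anything new on disk, or
// anything whose mtime is newer than what the index last recorded.
function filesToReindex(
  current: Map<string, number>, // path -> mtime on disk
  indexed: Map<string, number>, // path -> mtime when last indexed
): string[] {
  const stale: string[] = [];
  for (const [path, mtime] of current) {
    const seen = indexed.get(path);
    if (seen === undefined || mtime > seen) stale.push(path);
  }
  return stale;
}
```

Unchanged files are skipped entirely, which is why re-indexing after a small edit stays fast even on large projects.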
The bottom line
AI assistants are getting better at writing code but still waste most of their context on finding code. Persistent
indexing fixes that. The AI gets instant, precise answers and spends its tokens on what matters — actually building
things.
I've been using it daily for months. The difference is immediately noticeable, especially on longer sessions.
Give it a try and let me know what you think — what's your biggest pain point with AI context usage?
GitHub: https://github.com/CSCSoftware/AiDex
npm: npm install -g aidex-mcp
MCP Registry: io.github.CSCSoftware/aidex
Open source, MIT licensed. 27 tools, 11 languages. Contributions welcome.
Updated March 2026: Added Global Search (v1.11), Zero-Config Setup (v1.12), and Screenshot Optimization (v1.13).