DEV Community

Linghua Jin
I Built a Code AST MCP That Saves 70% Tokens and Went Viral (54K+ Views)

Last week, I open-sourced a lightweight Code MCP server that uses AST (Abstract Syntax Tree) parsing to give coding agents semantic understanding of your codebase. It went viral on X with 54K+ views.

The Problem: Coding Agents Are Burning Tokens

If you've used Claude Code, Codex, Cursor, or any AI coding agent on a real codebase, you've probably noticed: they dump entire files into context just to understand your code structure. That's expensive, slow, and wasteful.

The agent doesn't need your whole file. It needs to know what functions exist, what classes are defined, and how they relate to each other.

The Solution: AST-Based Semantic Code Search

I built cocoindex-code - a lightweight embedded MCP (Model Context Protocol) server that:

  • Parses your code into ASTs using tree-sitter, extracting meaningful chunks (functions, classes, methods)
  • Creates semantic embeddings of those chunks
  • Lets your coding agent search by meaning, not just text matching
  • Only re-indexes changed files - built on a Rust-based incremental indexing engine

The result? 70% token savings and noticeably faster coding agent responses.
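The exact savings depend on your repo's file sizes and how scattered the relevant code is. A rough back-of-envelope way to estimate the effect for your own codebase, using the common ~4-characters-per-token heuristic (the file and chunk sizes below are hypothetical, not measurements from cocoindex-code):

```python
# Illustrative estimate of chunk-level retrieval savings.
# The ~4 chars/token heuristic and the file sizes are rough assumptions.

def approx_tokens(chars: int) -> int:
    """Rough token count using the common ~4 characters/token heuristic."""
    return chars // 4

# Hypothetical scenario: the agent needs one 40-line function out of
# three 500-line files it would otherwise read in full.
full_files_chars = 3 * 500 * 60   # 3 files x 500 lines x ~60 chars/line
one_chunk_chars = 40 * 60         # the single relevant function

baseline = approx_tokens(full_files_chars)
with_search = approx_tokens(one_chunk_chars)
savings = 1 - with_search / baseline

print(f"baseline: ~{baseline} tokens, with chunk search: ~{with_search} tokens")
print(f"savings: ~{savings:.0%}")
```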

1-Minute Setup - No Config Needed

For Claude Code:

pipx install cocoindex-code
claude mcp add cocoindex-code -- cocoindex-code

For Codex:

codex mcp add cocoindex-code -- cocoindex-code

That's it. No database, no API keys, no config files. It just works.
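If you prefer editing config by hand, the Codex registration boils down to an entry in `~/.codex/config.toml` roughly like the following (a sketch based on Codex's documented `mcp_servers` config format - the exact schema may vary by version):

```toml
# ~/.codex/config.toml - sketch of the entry `codex mcp add` creates.
[mcp_servers.cocoindex-code]
command = "cocoindex-code"
args = []
```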

How It Works Under the Hood

  1. Tree-sitter parsing breaks your code into semantic chunks (functions, classes, etc.) across 20+ languages
  2. Local embedding model (SentenceTransformers) creates vector representations - completely free, no API key needed
  3. SQLite + vector search stores everything locally and portably
  4. Incremental indexing via CocoIndex (Rust engine) means only changed files get re-processed
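The chunk → embed → search pipeline above can be sketched in a few lines. The real tool uses tree-sitter for parsing and SentenceTransformers for embeddings; in this toy, Python's stdlib `ast` module and a bag-of-words vector stand in for them so the shape of the pipeline is runnable with zero dependencies:

```python
# Toy sketch of the chunk -> embed -> search pipeline. Stand-ins:
# stdlib `ast` instead of tree-sitter, bag-of-words instead of a
# SentenceTransformers embedding model.
import ast
import math
import re
from collections import Counter

SOURCE = '''
def parse_invoice(path):
    "Read an invoice file and return its line items."
    return open(path).read().splitlines()

class UserStore:
    def add_user(self, name):
        "Insert a new user record."
        self.users.append(name)
'''

def chunk(source: str):
    """Extract semantic chunks (functions/classes) from a Python file."""
    tree = ast.parse(source)
    return [
        (node.name, ast.get_source_segment(source, node))
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.ClassDef))
    ]

def embed(text: str) -> Counter:
    """Toy 'embedding': a term-frequency bag-of-words vector."""
    return Counter(re.findall(r"[a-z0-9_]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Index every chunk once; a real index also stores file paths and lines.
index = [(name, text, embed(text)) for name, text in chunk(SOURCE)]

def search(query: str) -> str:
    """Return the name of the chunk closest to the query by meaning."""
    q = embed(query)
    return max(index, key=lambda item: cosine(q, item[2]))[0]

print(search("read an invoice file"))  # prints "parse_invoice"
```

The payoff is visible even in the toy: a natural-language query lands on the right function without any exact string match against the code.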

When your agent needs to find code, it calls the search MCP tool with a natural language query and gets back exactly the relevant code chunks with file paths and line numbers.
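A search result handed back to the agent might look something like this - the field names here are illustrative, not the tool's actual schema:

```python
# Hypothetical shape of one semantic-search match returned to the agent.
# Field names are illustrative; check cocoindex-code's docs for the real schema.
result = {
    "query": "where do we validate invoice line items?",
    "matches": [
        {
            "file": "billing/invoice.py",       # hypothetical path
            "symbol": "validate_line_items",    # hypothetical function
            "kind": "function",
            "start_line": 42,
            "end_line": 71,
            "score": 0.87,
        }
    ],
}

# The agent pulls only these ~30 lines into context, not the whole file.
top = result["matches"][0]
print(f'{top["file"]}:{top["start_line"]}-{top["end_line"]} ({top["symbol"]})')
```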

Why It Went Viral

I think it resonated with people for a few reasons:

  1. Real pain point - everyone using coding agents feels the token burn
  2. Zero friction - one pipx install and one MCP add command
  3. No vendor lock-in - works with Claude, Codex, Cursor, or any MCP-compatible agent
  4. Open source (Apache 2.0) - you can inspect every line of code
  5. No API keys required - the default embedding model runs locally for free

Supported Languages

Python, JavaScript/TypeScript, Rust, Go, Java, C/C++, C#, Ruby, Kotlin, Swift, SQL, Shell, and more. It uses tree-sitter grammars, so adding new languages is straightforward.

What's Next

We're actively working on:

  • Better embedding models optimized for code (try nomic-ai/CodeRankEmbed with a GPU)
  • Enterprise features for large codebases and shared indexing across teams
  • More MCP tools beyond search

The repo is at github.com/cocoindex-io/cocoindex-code - 520+ stars and growing fast.

Built with CocoIndex, our open-source Rust-based data indexing framework.

Would love to hear your experience if you try it out. Drop a comment or open an issue on GitHub!
