Łukasz Holc
I Built a Tool That Feeds AI Assistants Real Big-O Analysis Instead of Letting Them Guess

The Problem

Frontier AI models can analyze time complexity. But they do it by reading your code, reasoning through it, and burning tokens in the process. Sometimes they get it right. Sometimes they confidently tell you a function is O(n) when it's actually O(n²) because there's a .contains() hiding inside a loop.

What if we skip all that and just feed them the answer through static analysis?

The Solution

I built Time Complexity MCP — an MCP server that parses your code into ASTs using tree-sitter, walks the syntax tree to detect complexity patterns, and reports per-function Big-O with line-level annotations.

It works as a tool that Claude Code or GitHub Copilot can call on demand — no tokens spent on analysis, just structured results.
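For illustration, a per-function result might look something like this. The field names here are my own guess at a plausible shape, not the server's actual schema:

```json
{
  "file": "src/scan.ts",
  "functions": [
    {
      "name": "collectFiles",
      "complexity": "O(n^2)",
      "lines": [
        { "line": 42, "note": "O(n) .indexOf() inside O(n) for loop" }
      ]
    }
  ]
}
```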

GitHub: Luzgan/time-complexity-mcp

What It Detects

  • Loop nesting — for, while, do-while with depth tracking. Constant-bound loops (e.g., for i in range(10)) are recognized as O(1)
  • Recursion — linear recursion (O(n)) vs branching recursion like fibonacci (O(2^n))
  • Known stdlib methods — .sort() as O(n log n), .filter()/.map() as O(n), .push()/.pop() as O(1). Each language has its own pattern set
  • Combined complexity — a .indexOf() inside a for loop correctly reports O(n²), not O(n)

Supports JavaScript, TypeScript, Python, Java, Kotlin, and Dart.
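The per-language pattern set can be pictured as a lookup table from method names to known costs. This is a toy sketch of the idea, not the tool's actual internals; names like `JS_METHOD_COSTS` are mine:

```typescript
// Hypothetical per-language cost table: known stdlib methods -> Big-O.
type BigO = "O(1)" | "O(n)" | "O(n log n)";

const JS_METHOD_COSTS: Record<string, BigO> = {
  sort: "O(n log n)",
  filter: "O(n)",
  map: "O(n)",
  indexOf: "O(n)",
  includes: "O(n)",
  push: "O(1)",
  pop: "O(1)",
};

// Unknown calls fall back to O(1), a common conservative default.
function methodCost(name: string): BigO {
  return JS_METHOD_COSTS[name] ?? "O(1)";
}
```

A separate table per language lets, say, Python's `list.sort` and Java's `Collections.sort` carry their own entries without entangling the analyzers.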

How It Works

Each language has an analyzer class that implements 9 template methods from a shared BaseAnalyzer:

  1. Parse the source file into an AST via tree-sitter
  2. Extract all function nodes
  3. For each function, walk the tree to find:
    • Loops (and their nesting depth)
    • Recursive calls (and whether they branch)
    • Known stdlib calls (e.g., .sort(), .indexOf())
  4. Combine the results: an O(n) method inside an O(n) loop = O(n²)
  5. Return structured results with per-line annotations

No code is ever executed — it's pure structural analysis.
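The combine step (step 4) can be sketched on a toy tree: track how many loops enclose each node, add the cost of any known call found there, and keep the worst total. This is my own minimal illustration of the idea, not the tool's real AST or API:

```typescript
// Toy tree: a node is either a loop or a call with a known cost.
// Complexity is tracked as the exponent of n: 0 -> O(1), 1 -> O(n), 2 -> O(n^2), ...
interface TreeNode {
  kind: "loop" | "call";
  cost?: number; // exponent contributed by a known call, e.g. indexOf -> 1
  children: TreeNode[];
}

// Walk the tree, carrying the current loop-nesting depth; return the worst exponent.
function analyze(node: TreeNode, depth = 0): number {
  let worst = node.kind === "loop" ? depth + 1 : depth + (node.cost ?? 0);
  const childDepth = node.kind === "loop" ? depth + 1 : depth;
  for (const child of node.children) {
    worst = Math.max(worst, analyze(child, childDepth));
  }
  return worst;
}

const fmt = (e: number): string =>
  e === 0 ? "O(1)" : e === 1 ? "O(n)" : `O(n^${e})`;
```

With this sketch, an O(n) call like `.indexOf()` (cost 1) inside a single loop comes out as exponent 2, i.e. O(n²), matching the combining rule above.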

The Fun Part: Self-Analysis

I pointed the tool at its own codebase. 27 files, 150 functions.

The breakdown:

| Complexity | Count |
| --- | --- |
| O(1) | 102 |
| O(n) | 40 |
| O(n log n) | 1 |
| O(n²) | 4 |
| O(n³) | 2 |
| O(2^n) | 1 |

It found real issues:

The directory scanner was O(n³) — because .indexOf() was being called inside a .sort() comparator, which was called after a nested file → function iteration loop. A formatting utility was O(n²) for the same .indexOf() reason.

I fixed those based on its own report. Replaced .indexOf() lookups with a Map for O(1) access. The tool ate its own dog food.
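The fix pattern looks roughly like this: precompute positions into a Map once, so the comparator does an O(1) lookup instead of an O(n) `.indexOf()` scan per comparison. A minimal sketch, with my own illustrative names (the actual code in the repo differs):

```typescript
// Sort `items` by their position in `order`, unknown items last.
// Before the fix: (a, b) => order.indexOf(a) - order.indexOf(b)
// scans `order` on every comparison -> O(n) work inside the comparator.
// After: one O(n) pass builds the Map, then each lookup is O(1).
function sortByPreferredOrder(items: string[], order: string[]): string[] {
  const rank = new Map<string, number>(order.map((name, i) => [name, i]));
  return [...items].sort(
    (a, b) =>
      (rank.get(a) ?? order.length) - (rank.get(b) ?? order.length)
  );
}
```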

Setup

Download a prebuilt release for your platform from GitHub Releases, extract it, and add to your MCP config:

{
  "mcpServers": {
    "time-complexity": {
      "command": "node",
      "args": ["/path/to/time-complexity-mcp/dist/index.js"]
    }
  }
}

Restart Claude Code and the tools are available. No npm install, no C++ compiler — just Node.js 18+.

Or install from source:

git clone https://github.com/Luzgan/time-complexity-mcp.git
cd time-complexity-mcp
npm install && npm run build

What's Next

Open to feedback and language requests. The architecture makes it straightforward to add new languages — each one is ~3 files implementing the template methods for that language's AST structure.

Repo: github.com/Luzgan/time-complexity-mcp
