
Andrew Kumanyaev

I built a code intelligence engine because nothing else worked for my cross-repo setup

tl;dr: Gortex is a tool I wrote over the past few weeks to fix my own frustration with token-heavy AI coding and research sessions on complex, multi-repo projects. I built it because the existing alternatives didn't work for my case. Gortex builds a knowledge graph from your code and exposes it through MCP. I attached a screenshot of the UI, but I rarely use it; MCP is enough for me. Link at the bottom.

--

A few weeks ago, I was neck-deep in an investigation involving multiple repositories, tangled call chains, and some genuinely surprising dependencies. The kind of work where you end up with 15 browser tabs open, and you're mentally maintaining a graph of which function calls what across which repo.

I was using an AI assistant throughout, and it was helpful — but the constant context management was friction. I'd feed in a file, get a useful response, then the context would fill up, and I'd have to start curating what to keep. For straightforward tasks, this is fine. For deep investigation across multiple codebases, it's genuinely painful.

I tried a few existing tools for this: context managers, code indexers, and graph-based retrievers. Some were genuinely good. A couple were close enough that I seriously considered contributing, but the conceptual issues I ran into were deep enough that a large PR felt like a long argument with an uncertain outcome.

Here's what the process looked like for me:

  • Download a repository.
  • Install (initialise) the tool inside it, index it, and debug whatever breaks.
  • Switch to another repository and repeat the previous steps.
  • Jump from repository to repository, tuning the prompt and hunting for the answer.

It was a tedious loop, regularly interrupted by usage limits.

So I started from scratch, focused on the specific problem I had, and it slowly grew from there.

What it actually does

Gortex builds an in-memory knowledge graph from your source code. Not just file contents — actual structure: files, symbols, imports, call chains, type relationships, cross-repo dependencies. It keeps this graph in memory, snapshots it to disk between sessions, and restores incrementally on startup.
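To make that concrete, here is a minimal sketch of the kind of in-memory code graph described above. The type and field names (Node, Edge, Calls, and so on) are my own illustrative assumptions, not Gortex's actual internals:

```go
package main

import "fmt"

// Illustrative node and edge kinds for a code graph: files and symbols
// as nodes; imports, calls, and definitions as typed edges.
type NodeKind int

const (
	FileNode NodeKind = iota
	SymbolNode
)

type EdgeKind int

const (
	Imports EdgeKind = iota
	Calls
	Defines
)

type Node struct {
	ID   string
	Kind NodeKind
}

type Edge struct {
	From, To string
	Kind     EdgeKind
}

type Graph struct {
	Nodes map[string]Node
	Out   map[string][]Edge // adjacency list keyed by source node ID
}

func NewGraph() *Graph {
	return &Graph{Nodes: map[string]Node{}, Out: map[string][]Edge{}}
}

func (g *Graph) AddNode(n Node) { g.Nodes[n.ID] = n }
func (g *Graph) AddEdge(e Edge) { g.Out[e.From] = append(g.Out[e.From], e) }

// Callees walks one hop of the call graph from a symbol.
func (g *Graph) Callees(id string) []string {
	var out []string
	for _, e := range g.Out[id] {
		if e.Kind == Calls {
			out = append(out, e.To)
		}
	}
	return out
}

func main() {
	g := NewGraph()
	g.AddNode(Node{ID: "repoA/pkg.Handler", Kind: SymbolNode})
	g.AddNode(Node{ID: "repoB/db.Query", Kind: SymbolNode})
	g.AddEdge(Edge{From: "repoA/pkg.Handler", To: "repoB/db.Query", Kind: Calls})
	fmt.Println(g.Callees("repoA/pkg.Handler")) // [repoB/db.Query]
}
```

The point of keeping structure rather than file contents is that questions like "who calls this?" become graph traversals instead of text scans.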

To empower the coding agent, Gortex exposes 47 MCP tools. Instead of reading 5–10 files to trace a call chain, the agent calls smart_context once and gets a graph-derived answer focused on exactly what it asked about. It can search for symbols, words, or anything related to the problem and get clear entry points, and it can retrieve a concrete function body instead of scanning huge files. In practice, this has reduced token usage in my sessions by around 94%. Here's what the built-in telemetry reports:

```
$ gortex savings
Gortex Token Savings
====================

Store:          ~/Library/Caches/gortex/savings.json
Tracking since: 2026-04-12 10:51
Last updated:   2026-04-19 14:17

Calls counted:   618
Tokens returned: 228,102
Tokens saved:    3,051,768
Efficiency:      14.4x

Cost avoided (tokens saved × input-price, USD):
  claude-haiku-4.5     $3.0518
  claude-opus-4        $45.7765
  claude-sonnet-4      $9.1553
  gpt-4o               $7.6294
  gpt-4o-mini          $0.4578
```

Worth saying: the savings were a nice bonus, and I rarely run into the usage-limit paywall anymore.

The parts I'm most glad I built

Cross-repo workspaces: you define a workspace config pointing at multiple repos, and symbols resolve across all of them. This was the main thing missing from the tools I tried. You can track contracts between repositories or debug complex cross-repository cases with clear process flows.

Cross-service contract capture
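As a sketch of what cross-repo symbol resolution means in practice, here is a toy lookup across several per-repo symbol indexes. The Workspace and Resolve names are hypothetical, not Gortex's actual API:

```go
package main

import "fmt"

// RepoIndex is a toy stand-in for one repository's symbol index.
type RepoIndex struct {
	Name    string
	Symbols map[string]string // symbol name -> file path within the repo
}

// Workspace groups several repo indexes so lookups span all of them.
type Workspace struct {
	Repos []RepoIndex
}

// Resolve checks every repo in the workspace, so a call chain that
// crosses a repository boundary still lands on a definition.
func (w Workspace) Resolve(symbol string) (repo, path string, ok bool) {
	for _, r := range w.Repos {
		if p, found := r.Symbols[symbol]; found {
			return r.Name, p, true
		}
	}
	return "", "", false
}

func main() {
	ws := Workspace{Repos: []RepoIndex{
		{Name: "api", Symbols: map[string]string{"CreateOrder": "handlers/order.go"}},
		{Name: "billing", Symbols: map[string]string{"ChargeCard": "charge.go"}},
	}}
	repo, path, _ := ws.Resolve("ChargeCard")
	fmt.Println(repo, path) // billing charge.go
}
```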

Confidence-tiered call graph edges: every edge in the call graph has a confidence level — whether it was resolved by LSP, inferred from AST analysis, or matched from text patterns. When you ask for the blast radius before a refactor, you want to know how much to trust each edge.
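A rough sketch of what confidence tiers enable. The tier names follow the post (text-matched, AST-inferred, LSP-resolved); the types and the BlastRadius helper are illustrative assumptions:

```go
package main

import "fmt"

// Confidence orders edge-resolution tiers from weakest to strongest.
type Confidence int

const (
	TextMatch   Confidence = iota // weakest: matched from text patterns
	ASTInferred                   // inferred from AST analysis
	LSPResolved                   // strongest: resolved by the language server
)

// CallEdge is one call-graph edge with a trust level attached.
type CallEdge struct {
	Caller, Callee string
	Conf           Confidence
}

// BlastRadius returns callees reachable at or above a minimum confidence,
// so a pre-refactor report can be filtered to only the edges you trust.
func BlastRadius(edges []CallEdge, caller string, min Confidence) []string {
	var out []string
	for _, e := range edges {
		if e.Caller == caller && e.Conf >= min {
			out = append(out, e.Callee)
		}
	}
	return out
}

func main() {
	edges := []CallEdge{
		{Caller: "ParseConfig", Callee: "ReadFile", Conf: LSPResolved},
		{Caller: "ParseConfig", Callee: "maybeLog", Conf: TextMatch},
	}
	fmt.Println(BlastRadius(edges, "ParseConfig", ASTInferred)) // [ReadFile]
}
```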

Community-scoped skills: Gortex runs Louvain clustering on the graph and auto-generates SKILL.md files for each detected functional community. When an agent starts working in a part of the codebase, it gets context-scoped to that community — entry points, key files, cross-community connections.

Hybrid semantic search: BM25 + vector search with a bundled pure-Go ONNX runtime. No Python, no model server to run separately.
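Gortex's exact fusion strategy isn't described here, so as an illustration this sketch merges a BM25 ranking and a vector ranking with reciprocal rank fusion (RRF), a common scale-free way to combine the two:

```go
package main

import (
	"fmt"
	"sort"
)

// rrf merges several rankings with reciprocal rank fusion:
// each document scores sum(1 / (k + rank)) across the rankings,
// where rank is 1-based. k (often 60) damps the top-rank bonus.
func rrf(rankings [][]string, k float64) []string {
	score := map[string]float64{}
	for _, ranking := range rankings {
		for rank, doc := range ranking {
			score[doc] += 1.0 / (k + float64(rank+1))
		}
	}
	docs := make([]string, 0, len(score))
	for d := range score {
		docs = append(docs, d)
	}
	sort.Slice(docs, func(i, j int) bool {
		if score[docs[i]] != score[docs[j]] {
			return score[docs[i]] > score[docs[j]]
		}
		return docs[i] < docs[j] // deterministic tie-break
	})
	return docs
}

func main() {
	bm25 := []string{"auth.go", "token.go", "main.go"}
	vector := []string{"token.go", "session.go", "auth.go"}
	fmt.Println(rrf([][]string{bm25, vector}, 60))
	// [token.go auth.go session.go main.go]
}
```

RRF only needs ranks, not raw scores, which sidesteps the problem that BM25 and cosine-similarity scores live on different scales.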

For the full feature list, see the README; copy-pasting it here wouldn't add much.

Performance

Indexing VS Code's codebase (~10.7K files) takes about a minute on Apple Silicon. The Linux kernel (70K files, 1.69M nodes) takes ~3 minutes. After the first index, restoring from a snapshot is much faster.

State of the project

Functional, and I use it daily. Reached the point where I'm not embarrassed to share it. But there's a long roadmap, plenty of rough edges, and things I'd like to improve that I haven't had time for yet.

Written in Go. Source available under PolyForm Small Business — free for individuals, open source projects, small businesses (under 50 employees / $500K revenue), education, and government.

Built around my specific cases, so it probably handles some things well and misses others entirely. If you've been in similar situations, I'd genuinely like to hear what your setup looks like and where this would or wouldn't fit.

GitHub: https://github.com/zzet/gortex

If it's useful to you, sharing it or leaving a star helps more than you'd think for a solo project — it's how things get found, and honestly, it's the main way I know whether this is worth continuing to invest in.

p.s. If you'd like to learn more about how such solutions work, let me know. I'm happy to share more insights.
