The Problem
If you use Claude Code, Cursor, or GitHub Copilot on a large codebase, you've probably noticed something annoying: they hallucinate.
When you ask an agent to fix a bug in a large monorepo, it blindly stuffs the context window with as many files as it can fit (often 128k tokens or more) and crops off the rest. The bulk of what the model is looking at is pure noise. It gets confused, invents APIs that don't exist, and burns through your API budget.
I got tired of this, so I built Entroly.
What is Entroly?
It's a local proxy that acts as an Epistemic Firewall for your AI agents.
When you close your laptop for the night, Entroly's local background daemon wakes up. It crawls your repository, maps out its architecture, and pre-computes answers to likely questions.
When you wake up and open Cursor the next morning, the agent responds in 0.1 seconds because Entroly already "dreamt" about your codebase all night, cached the symbolic graph, and optimized your context window.
How the Magic Works:
Entroly listens on localhost:9377 and intercepts the traffic between your editor and the LLM.
Instead of passing raw text, it uses a high-performance Rust backend to do the heavy lifting:
The PRISM Optimizer: It tracks a 4x4 covariance matrix over four scoring dimensions (recency, frequency, SimHash-based semantic similarity, and Shannon entropy) and filters context noise out mathematically before the LLM ever sees it.
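To make the four dimensions concrete, here is a rough sketch of what scoring a candidate context chunk could look like. Everything below is my illustration, not Entroly's actual code: the function names are made up, the weights are fixed (the real PRISM optimizer tracks a covariance matrix instead), and the Rust internals surely differ.

```python
import hashlib
import math
from collections import Counter

def simhash64(text):
    """64-bit SimHash fingerprint over whitespace tokens."""
    v = [0] * 64
    for tok in text.split():
        h = int.from_bytes(hashlib.blake2b(tok.encode(), digest_size=8).digest(), "big")
        for i in range(64):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(64) if v[i] > 0)

def hamming_similarity(a, b):
    """1.0 when fingerprints match, 0.0 when all 64 bits differ."""
    return 1.0 - bin(a ^ b).count("1") / 64

def shannon_entropy(text):
    """Character-level entropy in bits; repetitive boilerplate scores low."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values()) if n else 0.0

def score_chunk(chunk, query, recency, frequency, weights=(0.2, 0.2, 0.4, 0.2)):
    """Combine the four dimensions into one score (simplified to a
    weighted sum; recency and frequency are assumed pre-normalised)."""
    sem = hamming_similarity(simhash64(chunk), simhash64(query))
    ent = shannon_entropy(chunk) / 8.0  # squash to roughly [0, 1]
    dims = (recency, frequency, sem, ent)
    return sum(w * d for w, d in zip(weights, dims))
```

The point of a covariance matrix over fixed weights is that correlated dimensions (a chunk that is both recent and frequent) stop being double-counted.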
0/1 Knapsack Context Selection: It uses dynamic programming to pack the most critical, high-signal information into the smallest possible token footprint.
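The selection step is the textbook 0/1 knapsack: each snippet has a token cost and a relevance score, and you maximise total score under a token budget. A minimal sketch (the function name and interface are illustrative, not Entroly's API):

```python
def pack_context(snippets, budget):
    """0/1 knapsack over (token_cost, relevance_score) pairs.
    Maximises total score subject to a token budget via DP,
    then backtracks to return the indices of chosen snippets."""
    n = len(snippets)
    dp = [0.0] * (budget + 1)          # dp[j] = best score within budget j
    keep = [[False] * (budget + 1) for _ in range(n)]
    for i, (cost, score) in enumerate(snippets):
        # iterate budgets high-to-low so each snippet is used at most once
        for j in range(budget, cost - 1, -1):
            if dp[j - cost] + score > dp[j]:
                dp[j] = dp[j - cost] + score
                keep[i][j] = True
    # walk the keep table backwards to recover which snippets were taken
    chosen, j = [], budget
    for i in range(n - 1, -1, -1):
        if keep[i][j]:
            chosen.append(i)
            j -= snippets[i][0]
    return list(reversed(chosen))

# (cost_in_tokens, relevance) pairs with a 5-token budget:
# snippets 0 and 2 together (cost 5, score 7.0) beat snippet 1 alone (score 5.0)
print(pack_context([(3, 4.0), (4, 5.0), (2, 3.0)], 5))  # → [0, 2]
```

In practice the budget would be the context window minus the prompt, and costs would come from the model's tokenizer.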
The Live Dashboard: Entroly comes with a live localhost intelligence dashboard so you can watch it shred tokens and track your cost savings in real time.
The Result:
Because the LLM is fed only dense, high-signal code snippets, hallucinations drop to near zero.
As a bonus, because you send thousands fewer tokens on every request, your API costs drop by up to 90%, and the LLM responds significantly faster.
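A back-of-envelope check on that number, using hypothetical per-million-token prices (plug in your model's real rates):

```python
def request_cost(tokens_in, tokens_out, price_in_per_mtok, price_out_per_mtok):
    """Dollar cost of one request at per-million-token prices."""
    return tokens_in / 1e6 * price_in_per_mtok + tokens_out / 1e6 * price_out_per_mtok

# Assumed prices: $3 per 1M input tokens, $15 per 1M output tokens.
before = request_cost(120_000, 1_000, 3.0, 15.0)  # raw 120k-token context
after = request_cost(12_000, 1_000, 3.0, 15.0)    # context pruned 10x to 12k
savings = 1 - after / before                       # roughly 86% cheaper
```

Input tokens dominate the bill on large-context requests, so a 10x context reduction gets you most of the way to the 90% figure on its own.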
How to use it today
It's completely open-source. You don't have to change your coding habits or learn a new UI.
Install it via pip:
```bash
pip install entroly
entroly start
```
Point your AI tool's API base URL to http://localhost:9377/v1.
Open http://localhost:9378 in your browser to watch the live dashboard.
I'd love for you to try breaking it on massive codebases. Let me know what you think in the comments!
GitHub Repo: https://github.com/juyterman1000/entroly/