DEV Community

Jaswanth
My AI agent kept blind-reading my files. So I built a local CLI to give it "eyes" (Looking for beta testers! 🐛)

Hey DEV community 👋,

As a solo dev, I’ve been relying heavily on AI coding agents lately (Claude Code, Cursor, etc.). They're amazing, but my token bills have been getting ridiculous, and the agent's context window inevitably turns into a clogged mess after a few turns.

Digging into it, I realized the biggest culprit is file reads.
When an agent wants to explore a codebase, it usually reads full files blindly. It doesn't actually "know" the structure or the blast radius of what it's touching until it reads 500 lines of irrelevant code.

I looked at existing context tools like Graphify and RTK, but they either rely on post-hoc compression (compressing the output after the agent has already wasted the read) or they use an external LLM to do the reasoning. I wanted something deterministic, entirely local, and pre-hoc.

So, for the past few weeks, I’ve been building unerr.

[Chart: token savings component analysis]

What it actually does

It's a local intelligence CLI that sits between your AI agent and your repo via the Model Context Protocol (MCP).

Instead of letting the agent do a raw, naive file read, unerr intercepts it. When you run it, it indexes your repo in a few seconds, building an AST graph using Tree-sitter and an embedded local database (CozoDB).

When your agent needs to understand a file, unerr serves it the exact structural entities, callers, and dependencies—completely bypassing the need to send your code to another LLM to figure out the graph. It literally just gives the agent "eyes."
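To make the idea concrete, here is a minimal sketch of what "serving structural entities instead of raw file contents" means. This is not unerr's actual code (unerr uses Tree-sitter and CozoDB under the hood); it's a hypothetical illustration using Python's stdlib `ast` module, with made-up names, showing how an indexer can hand an agent a compact outline of definitions and the calls each one makes rather than the full 500-line file:

```python
import ast

# Hypothetical example source an agent might otherwise read in full.
SOURCE = """
def load_config(path):
    return parse(open(path).read())

def parse(text):
    return text.splitlines()
"""

def outline(source: str) -> list[dict]:
    """Return a compact structural outline: top-level functions,
    their line numbers, and the names they call.

    Illustrative only -- unerr builds a richer cross-file graph
    with Tree-sitter, not this stdlib sketch.
    """
    tree = ast.parse(source)
    entities = []
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            # Collect direct call targets that are plain names
            # (method calls like text.splitlines() are skipped here).
            calls = sorted({
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            })
            entities.append({"name": node.name, "line": node.lineno, "calls": calls})
    return entities

print(outline(SOURCE))
# The agent receives a few dozen tokens of structure instead of the whole file.
```

A real graph index would also invert this (callers, not just callees) and span files, which is what makes "blast radius" queries cheap.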

Why I'm posting here (I need your help)

Because I built this solo, I am currently terrified of the classic "Works on My Machine" curse.

Since unerr deals with local databases, AST parsers, and specific agent environments, I am almost certain there are edge cases I've missed on different operating systems and Node setups.

Before I push this out to the wider world, I’d love for some of you to try breaking it. I'm looking for brutally honest feedback on a few things:

  1. The Cold Install: Does npm install -g @unerr-ai/unerr actually work cleanly on your OS/environment?
  2. The File Reads: Is the pre-hoc file read optimization actually grabbing the right context for your specific codebase?
  3. Expectation Mismatches: Does the agent actually feel "smarter" and faster, or does the tool get in the way?

You can find it on npm (https://www.npmjs.com/package/@unerr-ai/unerr) and install it with:

npm install -g @unerr-ai/unerr


(No cloud, no accounts, no API keys needed; the intelligence runs entirely locally.)

If you hit a bug, get a weird error log, or if the agent just ignores it, please drop it in the comments. I will be glued to my keyboard squashing bugs all weekend!

Thanks for reading!

[Charts: token savings observed, reasoning quality observed, memory]

Top comments (1)

Vamsee Koneru

This is exactly the kind of tooling AI coding agents need. Blind file reads are such a hidden tax — on tokens, speed, and reasoning quality.

The local + deterministic + pre-hoc approach makes a lot of sense. Excited to test unerr on a real repo and see how much smarter the agent feels.