Developer A builds the dashboard. Queries the `users` table.
Developer B builds settings. Queries the `users` table.
Developer C builds checkout. Queries the `users` table.
Each developer knows their part. Nobody knows all three hit the same table on every page load.
Three files. Three developers. Three code reviews. One table getting hammered, and nobody can see it.
This isn't a skill problem. These are good developers writing good code. Each file passes code review. Each endpoint returns 200.
The problem isn't in any file. It's in the space between files. The connections that exist at runtime but are invisible in code.
The gap that compounds every sprint
| Time | Endpoints | Developers | System understanding |
|---|---|---|---|
| Month 1 | 10 | 1 | Full |
| Month 3 | 30 | 2 | ~80% |
| Month 6 | 60 | 4 | ~50% |
| Month 12 | 120+ (AI-generated) | 4+ | Maybe 30% |
Every sprint adds endpoints. No sprint adds understanding.
Architecture diagrams go stale the week they're drawn. Knowledge-sharing meetings cover what people remember, not what they've forgotten. The gap between what the codebase does and what the team knows grows silently, sprint after sprint. Then something breaks and everyone discovers the complexity that was always there.
This isn't a failure of process. No amount of documentation fixes it. The codebase grows structurally. Understanding doesn't.
What lives in the gaps
These are patterns that exist in real codebases right now. Not bugs. Not failing tests. Just waste and risk that nobody knows about.
The same query, four times
A `SELECT * FROM users WHERE id = $1` runs on every page load. The nav bar fetches it. The dashboard fetches it. The notification badge fetches it. The activity feed fetches it.
Four endpoints. Four round trips. Same row every time.
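The pattern can be sketched in a few lines. This is illustrative, not real app code: `db` is a fake client that just counts calls, and the four handler names are hypothetical stand-ins for the endpoints described above.

```javascript
// Illustrative sketch: `db` stands in for any client (pg, Prisma, mysql2)
// and simply records every query it receives.
const db = {
  calls: [],
  async query(sql, params) {
    this.calls.push(sql);
    return { id: params[0], name: "Ada" }; // canned row
  },
};

// Each handler is written by a different developer and looks
// perfectly reasonable in isolation.
const navBar = (id) => db.query("SELECT * FROM users WHERE id = $1", [id]);
const dashboard = (id) => db.query("SELECT * FROM users WHERE id = $1", [id]);
const notificationBadge = (id) => db.query("SELECT * FROM users WHERE id = $1", [id]);
const activityFeed = (id) => db.query("SELECT * FROM users WHERE id = $1", [id]);

// One page load hits all four endpoints.
async function pageLoad(id) {
  await Promise.all([navBar(id), dashboard(id), notificationBadge(id), activityFeed(id)]);
  return db.calls.length; // 4 round trips for one identical row
}

pageLoad(42).then((n) => console.log(`${n} identical queries`)); // → "4 identical queries"
```

No single file contains the problem; it only exists when all four run against the same request.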
Two teams, same external API, no idea
The pricing page fetches exchange rates from a third-party service. So does checkout. Built independently, months apart, by different people. Separate error handling. Separate caching. Double the rate limit exposure.
The silent regression
An endpoint that responded in 120ms three sprints ago now takes 900ms. A refactor added an eager-load. Tests still pass. Response is still correct. Nobody tracks endpoint performance across sprints. They track whether the code works, not how it performs.
The accidental fan-out
A single page load triggers three component mounts. Each one fetches `/api/user/me` on its own. Three identical queries. Three identical serializations. Three times the load. For one page view.
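A minimal sketch of that fan-out. The component names are hypothetical, and `fetchUser` stands in for a real `fetch("/api/user/me")`:

```javascript
// Illustrative sketch, not real app code: three components mount on the
// same page and each independently fetches the current user.
const requests = [];
const fetchUser = () => {
  requests.push("/api/user/me"); // stand-in for fetch("/api/user/me")
  return Promise.resolve({ id: 1, name: "Ada" });
};

// Each component is correct in isolation; none knows about the others.
const navBar = () => fetchUser();
const notificationBadge = () => fetchUser();
const activityFeed = () => fetchUser();

// One page view mounts all three: three identical requests, with three
// identical queries and serializations behind them on the server.
Promise.all([navBar(), notificationBadge(), activityFeed()]).then(() =>
  console.log(`${requests.length}x /api/user/me for one page view`) // → 3x
);
```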
None of these show up in error logs. None fail tests. None get caught in code review. All of them are real.
Code review sees files. Nobody sees the system.
Pull up any PR on your project right now.
The reviewer reads the diff. Variable names look good. Logic is correct. Tests pass. Approved.
But the reviewer doesn't know that the query in this PR already runs from three other routes. They don't know the external API call duplicates one in another service. They can't know. The PR shows one file. It doesn't show the rest of the system.
Code review is file-shaped. Systems are graph-shaped.
That's the fundamental limit of file-based review in a system-level world. The tool shows exactly what changed. It can't show what that change means in the context of everything else that's already running.
AI makes the gap wider, not narrower
This is the part nobody talks about.
AI coding tools write correct code fast. Ask your AI assistant to build a settings page. It generates clean endpoints, proper validation, solid error handling. The code works. It passes review.
But it wrote each endpoint in isolation. It didn't check whether those endpoints duplicate queries that already run from three other routes. It can't. It has file context. It doesn't have runtime context.
AI reads your codebase. It doesn't see your system.
The faster AI writes code, the faster the codebase outgrows the team's understanding. Every generated endpoint is another node in a system graph nobody is looking at. Code quality goes up. System awareness goes down.
We're accelerating the exact problem that was already compounding. More code, faster, with less understanding of how it all connects.
Your codebase already knows all of this
Here's the thing. Your application already has this information. Every request that hits your server triggers a chain of queries, fetches, logs, and responses. The runtime knows which tables get hit together, which services depend on each other, which endpoints share data sources.
It just doesn't tell anyone.
The knowledge exists. It lives in the execution path of every request. But it disappears the moment the response is sent. No record. No correlation. No visibility.
What if it didn't disappear?
Seeing the system
We built Brakit to close this gap. One import. Zero config.
Start your app the way you always do. Open `/__brakit`.
See what a single request actually does
Not just the handler. Everything behind it. Every query, every outgoing fetch, every log, all on one timeline:
47 queries for one checkout. Invisible in the code. Obvious in the timeline.
See patterns across your whole application
The same query appearing from four different routes. The same external API called by two unrelated endpoints. The same table hit on every page load. Not found by searching. Found by watching the system run.
See the graph
Every endpoint, every table, every external service, and how they actually connect at runtime:
Not a diagram someone drew. A live map built from real requests. Your actual architecture, as it exists right now.
Automatic detection
Brakit doesn't just show you data. It analyzes patterns across requests:
| What it catches | How |
|---|---|
| N+1 queries | Same query shape fired 5+ times in a single request |
| Duplicate calls | Same endpoint hit multiple times per page load |
| Security findings | Passwords in responses, tokens in URLs, stack traces leaking, PII exposure |
| Performance regression | p95 latency tracked across sessions, regressions flagged automatically |
These aren't manual investigations. They surface on their own, from live traffic, as you develop.
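The N+1 detection row above can be sketched as follows. This is an illustration of the general technique, not Brakit's actual implementation: normalize each query into a "shape" by stripping literals, then flag any shape fired 5+ times within one request.

```javascript
// Illustrative sketch of N+1 detection, not Brakit's real code.
// Replace placeholders, numbers, and string literals so identical
// queries group together regardless of their parameters.
function queryShape(sql) {
  return sql.replace(/\$\d+|\d+|'[^']*'/g, "?").trim();
}

// Count query shapes within a single request; flag heavy repeats.
function findNPlusOne(queries, threshold = 5) {
  const counts = new Map();
  for (const sql of queries) {
    const shape = queryShape(sql);
    counts.set(shape, (counts.get(shape) ?? 0) + 1);
  }
  return [...counts]
    .filter(([, n]) => n >= threshold)
    .map(([shape, count]) => ({ shape, count }));
}

// One request that loaded a list, then fetched each item separately:
const seen = [
  "SELECT * FROM posts WHERE author_id = $1",
  ...Array.from({ length: 6 }, (_, i) => `SELECT * FROM comments WHERE post_id = ${i}`),
];
console.log(findNPlusOne(seen)); // flags the comments query (count 6)
```

The same grouping idea, keyed by URL instead of query shape, covers duplicate-call detection.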
How it works
Brakit runs inside your Node.js process. It hooks into the HTTP layer, `fetch`, `console`, and your database client (Prisma, `pg`, or `mysql2`) automatically.
Every incoming request gets a unique ID. Every query, fetch, and log that happens during that request is correlated back to it through async context. That's how a single request timeline can show you 47 queries you didn't know existed. That's how the graph knows which endpoints hit which tables.
Brakit never runs in production. It checks `NODE_ENV`, detects CI environments, and disables itself. It never throws errors into your application. Every hook is wrapped in safety layers. If anything goes wrong, it fails silently and your app continues untouched.
It works with Express, Fastify, Koa, or raw `http.createServer`. No agents to install. No environment variables. No dashboard to configure. One import.
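The environment guard and fail-silent wrapping can be sketched like this. These are assumptions about the general approach, not Brakit's actual source:

```javascript
// Sketch of the safety posture described above, not Brakit's real code.
// Guard: only activate in local development.
function shouldEnable(env = process.env) {
  if (env.NODE_ENV === "production") return false;
  if (env.CI) return false; // most CI providers set CI=true
  return true;
}

// Fail-silent: instrumentation errors must never reach the app.
const safe = (fn) => (...args) => {
  try {
    return fn(...args);
  } catch {
    return undefined; // swallow — the hook fails, the app continues
  }
};

console.log(shouldEnable({ NODE_ENV: "production" })); // → false
console.log(shouldEnable({ CI: "true" }));             // → false
console.log(shouldEnable({}));                         // → true

const buggyHook = () => { throw new Error("instrumentation bug"); };
safe(buggyHook)(); // no crash
```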
One more thing
Brakit exposes its findings through MCP, the Model Context Protocol. That means AI assistants like Claude and Cursor can query your running application's runtime data directly.
Your AI assistant can already read your files. Now it can see your system. It can look at open issues, inspect endpoint performance, verify whether a fix actually worked. All from live telemetry, not static code.
This is the piece that starts to close the AI gap. Give the AI the same system-level context that developers are missing, and it stops generating endpoints in isolation.
The real problem
Your codebase isn't broken. Your understanding of it is incomplete. And it gets more incomplete every sprint.
The fix isn't writing better code. Your code is fine. The fix is seeing the code you already have as a connected system. The queries, the dependencies, the patterns, the waste. All of it visible, all of it correlated, all of it automatic.
Your codebase has always known more than your team. Brakit just makes that knowledge visible.
It's open source, runs locally, and your data never leaves your machine.

