I've been frustrated by the same problem for years.
You join a new project. There's no architecture documentation. Or worse — there is documentation, but it describes a system that no longer exists. Someone drew a beautiful diagram in Q1. By Q3, three refactors later, it's a historical artifact. Everyone still references it. Everyone knows it's wrong. Nobody updates it.
That's not laziness. That's just how software works.
I spent the last year building BlueLens to fix this. Not as a documentation tool — as a synchronization engine. Here's what I learned.
The insight that changed everything
The standard mental model for architecture diagrams is: write code first, document second. The diagram is a derivative artifact. It describes what the code does.
That model is broken by design. A derivative artifact will always lag behind its source. The moment you stop updating it — which is always, because you're shipping — it starts lying.
The insight behind BlueLens: flip the relationship. The diagram shouldn't document the code. The diagram should be the source of truth. You design the system visually. AI writes the implementation. BlueLens keeps both honest.
In the AI coding era, this isn't just a philosophy — it's becoming the natural workflow. You describe systems. AI generates code. The diagram is the language.
What BlueLens does differently
Most architecture tools are static. You open them, draw something, close them. The diagram lives in a separate universe from your codebase.
BlueLens has three things I haven't seen combined elsewhere:
1. CodeGraph — automatic architecture maps from any repo
Point BlueLens at a local repo and it generates an interactive architecture map automatically. No manual diagramming. It builds a hierarchical model:
- D0 — System: the top-level view, major modules
- D1 — Module: functional groupings within the system
- D2 — File: individual files and their relationships
- D3 — Symbol: functions, classes, exports
The grouping uses a two-agent LLM pipeline — one agent proposes the structure, another validates it — with a heuristic fallback when AI isn't available. I ran it on the BlueLens repo itself: 413 nodes, 946 relations, generated in seconds.
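To make the fallback concrete, here's a minimal sketch of what a directory-based heuristic grouping might look like. All names here (`GraphNode`, `groupByTopDir`) are illustrative, not BlueLens's actual API — the real pipeline is LLM-driven with this style of heuristic only as a backstop:

```typescript
// Sketch of a directory-based heuristic fallback for D1 grouping.
interface GraphNode {
  id: string;
  filePath: string; // e.g. "src/auth/token.ts"
}

// Group file nodes into modules by their top-level source directory.
function groupByTopDir(nodes: GraphNode[]): Map<string, GraphNode[]> {
  const modules = new Map<string, GraphNode[]>();
  for (const node of nodes) {
    // "src/auth/token.ts" -> "auth"; shallow paths fall into "(root)"
    const parts = node.filePath.split("/");
    const key = parts.length > 2 ? parts[1] : "(root)";
    const bucket = modules.get(key) ?? [];
    bucket.push(node);
    modules.set(key, bucket);
  }
  return modules;
}
```

This kind of grouping is cheap and deterministic, which is exactly why it makes a reasonable fallback — and why it's flatter and less useful than what the LLM pipeline produces.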
Every node in the CodeGraph links directly to its source code. Click a node, read the code. No searching.
2. Drill-down navigation across multiple abstraction levels
Most diagram tools give you one flat view. BlueLens lets you link any diagram node to a sub-diagram. Click to go deeper, breadcrumb to come back.
In practice: you have a system-level diagram (D0). You click on the "Auth" module. It opens the module-level diagram (D1) for Auth. You click on a specific service. It opens a file-level diagram (D2). You click on a function node. It opens the source code.
Four clicks from "I have no idea how auth works" to "I'm reading the exact function that handles token refresh." No grep. No file searching. No mental mapping.
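The navigation model behind this is simple: each diagram node can carry a link to a sub-diagram, and a breadcrumb trail records the path down. A minimal sketch (the `Diagram` shape and `DrillDown` class are my illustration, not BlueLens's actual data model):

```typescript
// Illustrative sketch of drill-down navigation with a breadcrumb trail.
interface Diagram {
  id: string;
  level: "D0" | "D1" | "D2" | "D3";
  // nodeId -> id of the sub-diagram that node drills into (if any)
  links: Record<string, string>;
}

class DrillDown {
  private trail: string[] = [];
  constructor(private diagrams: Map<string, Diagram>, rootId: string) {
    this.trail.push(rootId);
  }
  get current(): Diagram {
    return this.diagrams.get(this.trail[this.trail.length - 1])!;
  }
  // Clicking a node opens its linked sub-diagram, if one exists.
  open(nodeId: string): boolean {
    const target = this.current.links[nodeId];
    if (!target) return false;
    this.trail.push(target);
    return true;
  }
  // The breadcrumb takes you back up one level.
  back(): void {
    if (this.trail.length > 1) this.trail.pop();
  }
  get breadcrumb(): string[] {
    return [...this.trail];
  }
}
```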
3. An AI agent that actually traverses your workspace
This is the part I'm most proud of technically.
Most "AI diagram tools" inject static context into a prompt — the current diagram as text, maybe some metadata. BlueLens gives the agent real tools:
- list_diagrams()
- get_diagram(id)
- list_node_links(node_id)
- get_graph_nodes(graph_id)
- get_node_source(node_id)
The agent runs a real tool-use loop. When you ask "explain the authentication flow," it doesn't answer from static context. It navigates your workspace autonomously — traversing sub-diagrams, reading node links, querying the code graph — before constructing its answer.
The difference in answer quality is significant. Static context injection gives you a summary of what you already have open. Tool-use traversal gives you answers about your entire system, including parts you weren't looking at.
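The loop itself is the standard tool-use pattern: call the model, execute whatever tool it asks for, feed the result back, repeat until it produces an answer. A minimal sketch with the model call stubbed out — the tool names match the list above, but everything else (`ModelTurn`, `runAgent`, the step limit) is my illustration:

```typescript
// Minimal sketch of an agent tool-use loop.
type ToolCall = { name: string; args: Record<string, string> };
type ModelTurn = { toolCall?: ToolCall; answer?: string };
type Tool = (args: Record<string, string>) => string;

// Keep calling tools until the model emits a final answer (or we hit a cap).
function runAgent(
  model: (history: string[]) => ModelTurn,
  tools: Record<string, Tool>,
  question: string,
  maxSteps = 8,
): string {
  const history = [question];
  for (let step = 0; step < maxSteps; step++) {
    const turn = model(history);
    if (turn.answer !== undefined) return turn.answer;
    if (!turn.toolCall) break;
    const tool = tools[turn.toolCall.name];
    const result = tool ? tool(turn.toolCall.args) : "unknown tool";
    // Tool results become context for the next model call.
    history.push(`${turn.toolCall.name} -> ${result}`);
  }
  return "step limit reached";
}
```

The step cap matters in practice: an agent traversing a large workspace needs a bound so a confused model can't loop forever.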
The technical decisions worth explaining
Why client-side only
BlueLens is a React app with no backend. All data lives in localStorage and IndexedDB. Your code never leaves your machine.
This was a deliberate trade-off. The alternative — uploading your codebase to a server for analysis — is faster to build but creates a trust problem. Most developers are understandably uncomfortable sending proprietary code to a third-party service.
Client-side only eliminates that concern entirely. The downside: no real-time collaboration, no cloud storage, no cross-device sync. That's what BlueLens Cloud (coming soon) will solve, with explicit user consent for what gets stored where.
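For a sense of what "no backend" means in code, here's a sketch of a tiny typed persistence layer over localStorage, with an in-memory fallback for environments that lack it. The names (`saveWorkspace`, `loadWorkspace`) are illustrative; BlueLens also uses IndexedDB for larger data, which this sketch omits:

```typescript
// Sketch of client-side persistence: localStorage when available,
// an in-memory Map otherwise (e.g. when running outside a browser).
interface KV {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const memory = new Map<string, string>();
const g = globalThis as { localStorage?: KV };
const store: KV = g.localStorage ?? {
  getItem: (k) => memory.get(k) ?? null,
  setItem: (k, v) => void memory.set(k, v),
};

function saveWorkspace<T>(id: string, data: T): void {
  store.setItem(`workspace:${id}`, JSON.stringify(data));
}

function loadWorkspace<T>(id: string): T | null {
  const raw = store.getItem(`workspace:${id}`);
  return raw === null ? null : (JSON.parse(raw) as T);
}
```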
Why Chromium only
The File System Access API — which lets the app read your local repo without uploading anything — is only available in Chromium-based browsers (Chrome, Edge, Brave). Firefox doesn't support it yet.
This is a real constraint. I accepted it because the alternative (asking users to zip and upload their repo) felt like the wrong trade-off for a privacy-first tool. When Firefox ships the API, BlueLens will support it.
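Detecting support is straightforward: `showDirectoryPicker` is the entry point the File System Access API exposes for reading a local folder, and it only exists in Chromium-based browsers today. A sketch (the function names around it are mine, not BlueLens's actual code):

```typescript
// Feature detection for the File System Access API.
function supportsLocalRepoAccess(w: object = globalThis): boolean {
  return typeof (w as { showDirectoryPicker?: unknown }).showDirectoryPicker === "function";
}

// Illustrative usage: prompt the user to pick their repo folder.
// Nothing is uploaded; the handle stays in the page.
async function openRepo(): Promise<unknown> {
  if (!supportsLocalRepoAccess()) {
    console.warn("This browser can't read local folders; try Chrome, Edge, or Brave.");
    return null;
  }
  return (globalThis as { showDirectoryPicker?: () => Promise<unknown> }).showDirectoryPicker!();
}
```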
Why Mermaid.js
I could have built a custom graph renderer. Mermaid is the standard — it's what GitHub renders natively, it's what most developers already know, and it's battle-tested at scale. Building something custom would have taken months and produced something worse.
The trade-off: Mermaid has layout limitations. Complex graphs with many nodes can be hard to read. The CodeGraph force-directed visualizer (which is custom) handles this for the automatic generation case. The Mermaid editor handles the manual diagramming case.
API key encryption
AI features require an API key (Gemini, OpenAI, or Anthropic). Keys are encrypted with AES-GCM and stored in IndexedDB. They're never transmitted to BlueLens servers — because there are no BlueLens servers. They're decrypted in-memory when needed and re-encrypted immediately after.
What I got wrong (and what I'd do differently)
The heuristic fallback is too aggressive. When the LLM grouping fails or times out, the heuristic takes over and produces a much flatter, less useful CodeGraph. I should have built a better degradation curve — partial LLM results are better than full heuristic results.
I underestimated the Chromium constraint. A meaningful percentage of developers use Firefox as their primary browser. I knew this going in but didn't fully appreciate how often "works in Chromium" translates to "not for me" in practice.
The onboarding is too abrupt. The app assumes you know what a workspace is, what a diagram is, what CodeGraph is. First-time users need a guided path. This is the next thing I'm fixing.
Where it stands
The self-hosted version is complete and MIT licensed. You can clone it, run it locally, point it at any repo, and use all features.
A hosted cloud version is in progress — real-time collaboration, cloud storage, GitHub sync. No setup required. If you want early access: bluelens.dev.
I'd genuinely appreciate feedback on:
- The D0→D3 hierarchical model — does this map to how you think about system decomposition?
- The "diagram as source of truth" thesis — realistic for teams, or too prescriptive?
- The Chromium constraint — dealbreaker, or acceptable trade-off?
GitHub (MIT): https://github.com/Nathanf22/BlueLens
Try it (no install): https://app.bluelens.dev
Built with React 19, TypeScript strict, Vite, Mermaid.js v11, Monaco Editor.