If you’re hacking in Cursor or VS Code, hit “Fix with AI,” and watch it rewrite half your frontend, the problem isn’t the model.
The problem is that it doesn’t see what the user sees. This post is about giving the model real eyes, not vibes.
React/Vue devs know this pain: you think your <input> is valid, but the runtime DOM has disabled, loading, and pointer-events:none on it.
The code looks fine, the UI doesn’t, and the model can’t guess why.
🧠 The missing layer
LLMs work with text, but debugging UIs requires reality: the live DOM after render.
That’s why I built E2LLM, a browser extension that captures runtime DOM snapshots (HTML, CSS, validation, computed styles, visibility) and turns them into structured JSON for LLMs.
It’s like giving the model real eyes instead of source-code assumptions.
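For intuition, here is a rough TypeScript sketch of how a snapshot like the one in the example below could be assembled from standard DOM APIs. It is illustrative only, not E2LLM’s actual capture code; the function name and field choices are my own.

// Illustrative sketch only, not the extension's real capture code.
function snapshotElement(selector: string) {
  const el = document.querySelector(selector);
  if (!(el instanceof HTMLInputElement)) return null; // keep the sketch to <input> elements

  const style = getComputedStyle(el);
  const form = el.form;

  return {
    selector,
    disabled: el.disabled,
    computed: {
      opacity: style.opacity,
      "pointer-events": style.pointerEvents,
      visibility: style.visibility,
    },
    validation: form && {
      formValid: form.checkValidity(),
      // controls that fail built-in constraint validation
      invalidFields: Array.from(form.elements)
        .filter((f): f is HTMLInputElement => f instanceof HTMLInputElement && !f.checkValidity())
        .map((f) => f.name || f.id),
    },
  };
}

// The structured JSON the model actually gets to reason over:
console.log(JSON.stringify(snapshotElement("#submit"), null, 2));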
🧩 Example
<input type="text" disabled class="loading" style="pointer-events:none;">
Snapshot JSON:
{
  "selector": "#submit",
  "disabled": true,
  "computed": { "opacity": "0.5", "pointer-events": "none" },
  "validation": { "formValid": false, "invalidFields": ["email"] }
}
With that context, the LLM finds the real cause: “disabled due to invalid email + loading state still active”, and suggests a 3-line patch (sketched below), not a full rewrite.
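To make that concrete, here is the kind of fix the model can now point at. The component and state names below are assumptions for illustration, not something read from the snapshot; the actual patch touches only the commented lines, the rest is just enough context for the sketch to run.

import { FormEvent, useState } from "react";

// Hypothetical React form behind the snapshot above (all names assumed).
function SubscribeForm() {
  const [email, setEmail] = useState("");
  const [loading, setLoading] = useState(false);
  const formValid = /\S+@\S+\.\S+/.test(email);

  async function onSubmit(e: FormEvent) {
    e.preventDefault();
    setLoading(true);
    try {
      await fetch("/api/subscribe", { method: "POST", body: email });
    } finally {
      setLoading(false); // fix: the loading flag was never being reset
    }
  }

  return (
    <form onSubmit={onSubmit}>
      <input type="email" value={email} onChange={(e) => setEmail(e.target.value)} />
      {/* fix: derive disabled from real validity + loading state instead of hard-coding it */}
      <input type="submit" id="submit" value="Send" disabled={loading || !formValid} />
    </form>
  );
}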
⚙️ TL;DR
LLMs aren’t stupid — they’re blind.
If you want useful debugging, show them what the user sees.
Chrome + Firefox: https://insitu.im/e2llm/
Let me know if you want to test it on your own project — I’m collecting wild DOM cases this week.