hoosber neo

DOMPrompter: The hardest part of AI UI coding is pointing at the right element

When people talk about AI coding, they usually talk about generating whole components, pages, or features.

But in real front-end work, the painful part is often much smaller.

It is the last 10%:

  • move this button a little to the right
  • increase the gap between these two cards
  • make this tag less cramped
  • tighten the spacing between a heading and the supporting copy

That sounds simple, but the communication overhead is brutal. You know what you want to change. The AI does not know exactly which element you mean, what context matters, or what "a little" means in this layout.

Why screenshots and vague prompts still fail

Screenshots help, but they are still missing structure.

A screenshot does not tell the model:

  • which DOM node you are referring to
  • what the surrounding hierarchy looks like
  • whether the spacing problem is margin, padding, gap, or alignment
  • how the target element relates to its siblings and container
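To make that concrete, here is a minimal sketch of the kind of structured context a screenshot cannot convey. Everything here (`ElementContext`, `describeSpacing`, the field names) is illustrative for this post, not a real DOMPrompter API:

```typescript
// Sketch of the element context a screenshot cannot convey.
// All names here are illustrative, not part of any real tool's API.

interface ElementContext {
  selector: string;            // which DOM node is meant
  parent: string;              // surrounding hierarchy
  siblings: string[];          // how it relates to its neighbors
  computed: {                  // where the spacing actually comes from
    marginTop: number;
    paddingTop: number;
    parentGap: number;
  };
}

// Decide whether a vertical-spacing issue is margin, padding, or gap.
function describeSpacing(ctx: ElementContext): string {
  const { marginTop, paddingTop, parentGap } = ctx.computed;
  if (parentGap > 0) return `spacing controlled by parent gap (${parentGap}px)`;
  if (marginTop >= paddingTop) return `spacing controlled by margin-top (${marginTop}px)`;
  return `spacing controlled by padding-top (${paddingTop}px)`;
}

const tag: ElementContext = {
  selector: "span.badge",
  parent: "div.card-header",
  siblings: ["h3.title"],
  computed: { marginTop: 4, paddingTop: 0, parentGap: 0 },
};

console.log(describeSpacing(tag)); // "spacing controlled by margin-top (4px)"
```

With this much context attached, "make this tag less cramped" stops being ambiguous: the model knows which property actually produces the spacing it is being asked to change.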

Natural language alone is even worse.

"Move the tag down a bit" quickly turns into a guessing game. And once the AI guesses wrong, you burn both time and tokens correcting the correction.

The part I wanted to improve

So I built DOMPrompter.

It is a macOS tool specifically for interface micro-tuning in DOM-based UIs.

Instead of trying to describe the target from memory, I can:

  1. click the exact element I want to change
  2. inspect the DOM context around it
  3. see the tag, spacing, position, and hierarchy information that actually matters
  4. write the specific change I want
  5. generate a more structured prompt for Cursor, Claude Code, Copilot, or any other AI coding tool
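Steps 3 through 5 can be sketched as a small prompt builder. This is an assumption-laden illustration of the idea, not DOMPrompter's actual implementation; `InspectedElement` and `buildPrompt` are names invented for this example:

```typescript
// Hypothetical sketch: turn inspected element context plus the requested
// change into a structured prompt for an AI coding tool.

interface InspectedElement {
  selector: string;
  tag: string;
  box: { top: number; left: number; width: number; height: number };
  hierarchy: string[]; // ancestor chain, outermost first
}

function buildPrompt(el: InspectedElement, change: string): string {
  return [
    `Target element: <${el.tag}> matching \`${el.selector}\``,
    `Hierarchy: ${el.hierarchy.join(" > ")} > ${el.selector}`,
    `Current box: ${el.box.width}x${el.box.height} at (${el.box.left}, ${el.box.top})`,
    `Requested change: ${change}`,
    `Only edit the CSS that affects this element; do not restructure the markup.`,
  ].join("\n");
}

const prompt = buildPrompt(
  {
    selector: "button.cta",
    tag: "button",
    box: { top: 120, left: 32, width: 140, height: 40 },
    hierarchy: ["main", "section.hero", "div.actions"],
  },
  "move it 8px to the right without affecting its siblings"
);

console.log(prompt);
```

The point of the structure is that "a little to the right" arrives at the model alongside the exact node, its ancestors, and its current geometry, so the edit lands on the first try more often.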

That shift matters because the workflow stops being "describe and hope" and becomes much closer to "point, explain, generate."

Not just for browser pages

One thing I wanted to get right is that this workflow should not be limited to ordinary web pages running in a browser.

If the interface is DOM-based, the same micro-tuning problem shows up in:

  • localhost front-end projects
  • static HTML prototypes
  • desktop apps built with web technology

So the product is a macOS app, but the adjustment workflow is useful anywhere the UI itself is DOM-driven.

Why I still think this is worth building

This week I noticed that tools like Codex and Claude Code are also adding pieces of this experience.

That makes sense. The problem is real.

But my own feeling is that UI micro-adjustments still deserve a more complete workflow than "the model kind of knows what you clicked."

What helps in practice is not just selecting an element. It is carrying over the context around that element:

  • tags
  • gap
  • spacing
  • position
  • hierarchy

Then turning that into a prompt that is tailored to the exact edit you want.
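The hierarchy part of that list is worth spelling out. One way to carry it over is to serialize the selected node's ancestor chain into a single breadcrumb line. The `NodeInfo` shape below is an assumption made for this sketch, not a real API:

```typescript
// Illustrative sketch: serialize the ancestor chain of a selected node
// into the kind of hierarchy line a prompt can carry.

interface NodeInfo {
  tag: string;
  className?: string;
  parent?: NodeInfo;
}

function hierarchyPath(node: NodeInfo): string {
  const parts: string[] = [];
  // Walk upward, prepending each ancestor so the outermost comes first.
  for (let cur: NodeInfo | undefined = node; cur; cur = cur.parent) {
    parts.unshift(cur.className ? `${cur.tag}.${cur.className}` : cur.tag);
  }
  return parts.join(" > ");
}

const badge: NodeInfo = {
  tag: "span",
  className: "badge",
  parent: {
    tag: "header",
    className: "card-header",
    parent: { tag: "article", className: "card" },
  },
};

console.log(hierarchyPath(badge)); // "article.card > header.card-header > span.badge"
```

A line like that, pasted into the prompt next to the spacing and position values, is what turns "the tag" into an unambiguous target.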

That is the core idea behind DOMPrompter.

The principle

The principle I keep coming back to is simple:

click the element, stop guessing

If AI is going to be part of front-end iteration, the handoff between "what I see" and "what the model edits" has to get more precise.


If you use AI for front-end work, I would genuinely love to know: how are you handling the last-mile UI tweaks today?
