Igor Halilovic
"Do You Even Know What Red Is?" (And How Claude Saw My Plugin Before I Did)

"Do you even know what red is?!"

That’s what I wanted to scream, three messages deep into a color-mismatch argument. We've all been there: paragraphs of text describing a vision, the AI understands maybe half, and then the endless iterations - each round getting us only 10% closer.

It’s like describing a painting over the phone.

So, I decided to stop talking and start showing.

I built an MCP server that finally gives Claude eyes. It now screenshots my app and sees exactly what I see. Instead of us guessing colors in the dark, it generates design options I can actually look at.

The best part? It now validates its own code before I even have to check it.


The "Telephone Game" Problem

It’s the same old story. You need to tweak a UI, so you open a conversation with your AI assistant and start typing.

"The button should be more prominent. The spacing feels off. Can you make the header less crowded?"

The AI makes changes. They’re... okay, but not quite right. So you type more. "No, I meant the vertical spacing, not horizontal. And the button color should match the accent, not the primary."

The friction is exhausting. You’re spending more time describing the fix than it would take to just do it yourself. This is exactly where that "phone call" breaks down - you're trying to explain a painting to someone who is effectively blind.

My previous TabbySpaces UI was what I call "one-shot AI slop" - generated quickly, shipped fast, and it looked exactly like that. When I decided to redesign it properly, I had one goal: find a way to do this without losing my mind.

Before: Simple vertical card list. Functional, but basic.


Giving Claude Eyes and Hands

The answer was obvious: if describing visuals is the bottleneck, I had to remove the need for descriptions.

I built an MCP server called Tabby MCP. It gives Claude direct access to my Tabby terminal through the Chrome DevTools Protocol (CDP). In simple terms, I gave it a way to look at the screen and interact with the code directly. Since Tabby runs on Electron/Chromium, CDP was the obvious choice.

| Tool | What it does |
| --- | --- |
| `screenshot` | Visual snapshot - whole window or a specific element |
| `query` | DOM inspection - finding selectors and classes |
| `execute_js` | Run JavaScript directly in Tabby's Electron context |
| `list_targets` | List available tabs for targeting |

Now Claude has eyes (screenshot) and hands (query/execute_js). It can see what I'm working on, inspect the structure, test interactions, and validate its own changes.

The difference in iteration speed is immediate: one screenshot settles what used to take a whole round of back-and-forth.

"I didn’t reinvent the wheel," I told a friend while explaining this. "I just connected existing Lego blocks."

Building this took about 30 minutes. Between solid documentation and Claude’s native ability to build MCP servers, you can skip the "interview phase" and go straight to work.

The hard part wasn't technical - it was realizing this was even possible.

If you’re working with an Electron app, a CDP-compatible browser, or anything with an automation API, you can do the same thing.


From Concept to Implementation in 30 Minutes

Instead of wrestling with the code immediately, the process for the TabbySpaces v0.2.0 redesign felt more like a fast-paced brainstorming session:

  1. The Visual Check: Claude takes a screenshot via MCP. It finally sees what I'm seeing.
  2. The "Wild Ideas" Phase: Claude generates 10 standalone HTML variants. Since these aren't touching the production code, I can afford to be experimental.
  3. The Human Filter: I look at the options and pick the best bits. "Combine the tabs from #3 with the spacing from #7."
  4. Fine-tuning: Another 10 variants, but this time focusing on the "vibe" (e.g., "Tight & Sharp" vs. "Soft & Modern").
  5. Implementation & Auto-Correction: Once I'm happy, Claude writes the real code and immediately uses the MCP to validate it.

Key insight: Because the mockups are standalone HTML, I can explore 10 wild ideas without risk. If they all suck, I just delete them and try again.

And then comes the best part: Claude catches its own CSS bugs before I even see them.

"Opaaa!" (That's Serbian slang for “There it is!” or “Bingo!”). I was right. It works.


The Numbers

| Phase | Sessions | Total Time | Output |
| --- | --- | --- | --- |
| MCP Implementation | 1 | ~30 min | CDP bridge for Claude |
| Mockups & Design | 4 | ~1.5 h | 20 unique variants |
| Final Implementation | 2 | ~30 min | Production-ready code |
| Visual Validation | 6 | ~2 h | Automated design check |

The visual testing was the longest part of the process. But here is the kicker: Claude was the one doing the looking, not me. While Claude was busy comparing pixels and validating CSS, I could actually take a breather and chat with my wife. Its feedback loop told me exactly when I needed to intervene and when I could just sit back.

After: Horizontal tab bar, inline pane editor, organized sections.


The Universal Pattern: Stop Describing Red

This isn't just about Tabby terminal or MCP specifically. It’s a pattern that any developer can use to escape "LLM design phobia":

  1. Give AI eyes - Provide visual context (screenshots, recordings).
  2. Work in isolation - Use mockups or sandboxes where the AI can play without breaking production code.
  3. Keep human in the loop - Let the AI generate options, but you make the final call.
  4. Enable self-validation - Let AI verify its own work before it even reaches you.

Wrap-up

TabbySpaces v0.2.0 shipped with a complete UI redesign done entirely through this workflow. It's not just a theory; it's the backbone of how I build now.

If you’re interested in this kind of stuff, I’ve started documenting my experiments, MCP tools, and other side projects over at Hanuya.net. It's basically my digital vault - feel free to drop by and check it out.

If you’ve ever given up on AI-assisted UI work because explaining was harder than doing, this is your exit ramp.

Build the bridge. Give AI eyes. Stop describing red.


Update: Someone Else Used the Workflow

While I was still editing this post, a friend (@mozulator) started using TabbySpaces and had ideas. "Why don't you add drag-to-resize? What about percentage labels? The empty state needs work."

My answer: "The whole repo is set up for Claude. Just tell it what you want."

So he did. He pointed Claude at the codebase, used the same tabby-mcp setup, and shipped a PR with drag-to-resize handles that snap to 10%, live dimension labels, and a redesigned empty state - all visual features you can't validate without seeing them. The CLAUDE.md explained the workflow, the MCP tools were already there, and Claude handled the rest.

I didn't walk him through anything. The setup was the walkthrough.

