hassantayyab

Posted on • Originally published at hassantayyab.com

How I Get Better UI from Claude: Research First, Build Second

Most AI-generated UIs look the same. You know the vibe — generic gradients, overused shadows, that "I was clearly made by AI" aesthetic you see in every vibe-coded app.

I found a simple workflow that fixes this. It takes an extra 5 minutes but the output looks like it came from an actual designer.


The Problem with "Build Me a Component"

When you ask Claude to build a UI component directly, it pulls from its training data. That means you get a mix of patterns it's seen — some good, some dated, some just weird.

I was building a typing indicator for my Angular AI UI component library. You know, those animated dots that show when the AI is thinking.

My first instinct was to just ask Claude to build it.

The result? Functional, but forgettable. Three bouncing dots that looked like every tutorial example from 2019.
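To make that concrete, here's roughly what that baseline looks like (a hypothetical reconstruction, not the exact output Claude gave me): three dots, a staggered bounce keyframe, and nothing else.

```typescript
import { Component } from '@angular/core';

// Hypothetical reconstruction of the "generic" first attempt:
// three dots with a staggered bounce and no other detail.
@Component({
  selector: 'app-typing-indicator',
  standalone: true,
  template: `
    <div class="dots" aria-label="Assistant is typing">
      <span class="dot"></span>
      <span class="dot"></span>
      <span class="dot"></span>
    </div>
  `,
  styles: [`
    .dots { display: flex; gap: 4px; }
    .dot {
      width: 8px;
      height: 8px;
      border-radius: 50%;
      background: #999;
      animation: bounce 1.2s infinite ease-in-out;
    }
    .dot:nth-child(2) { animation-delay: 0.2s; }
    .dot:nth-child(3) { animation-delay: 0.4s; }
    @keyframes bounce {
      0%, 80%, 100% { transform: translateY(0); }
      40% { transform: translateY(-6px); }
    }
  `],
})
export class TypingIndicatorComponent {}
```

It works, but there's nothing in it that says "AI chat" rather than "any loading spinner."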


The Fix: Make Claude a Design Researcher First

Instead of jumping straight to code, I added one step.

I asked Claude Code (using Opus 4.5 in research mode) to research the problem first:

Research how typing indicators are used in modern web apps, 
especially AI chats like ChatGPT, Claude, and Gemini. 
Look at design patterns, animations, and how they signal 
different states.

Giving it specific examples of well-designed apps makes a huge difference. Claude now has a reference point for what "good" looks like in this specific context.

The research came back with insights I wouldn't have thought to look for — how ChatGPT uses a pulsing effect versus Claude's shimmer animation, how Gemini handles the transition between thinking and responding, and which accessibility considerations matter for motion-sensitive users.


Then Feed the Research Back

Here's the key part.

I take the entire research document and paste it into a new conversation. Then I ask Claude to build the component with this context.

Based on this research, build me a typing indicator component 
for Angular that follows modern AI chat patterns. Make it 
polished and professional.

The difference is night and day.

Instead of generic bouncing dots, I got a component with:

  • Subtle animation timing that felt natural
  • Smooth state transitions
  • Design choices that actually made sense for an AI chat context
  • Details I wouldn't have specified myself

It looked like something that belongs in a production app, not a weekend hackathon.
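Here's a sketch of the direction the research pushed the component toward. This is an illustration, not the actual code from my library: a shimmer-style label instead of stock bouncing dots, a state input for the thinking-to-responding transition, gentler timing, and a reduced-motion fallback (the accessibility point the research surfaced).

```typescript
import { Component, Input } from '@angular/core';

// Sketch of the research-informed version (illustrative, not the library code):
// shimmer label, thinking/responding states, slower timing, reduced-motion fallback.
@Component({
  selector: 'app-ai-typing-indicator',
  standalone: true,
  template: `
    <div class="indicator" role="status" [attr.aria-label]="label">
      <span class="shimmer">{{ label }}</span>
    </div>
  `,
  styles: [`
    .indicator { display: inline-flex; align-items: center; font-size: 0.875rem; }
    .shimmer {
      background: linear-gradient(90deg, #9ca3af 25%, #374151 50%, #9ca3af 75%);
      background-size: 200% 100%;
      -webkit-background-clip: text;
      background-clip: text;
      color: transparent;
      /* Slower, gentler timing than the stock bounce */
      animation: shimmer 2.4s ease-in-out infinite;
    }
    @keyframes shimmer {
      0% { background-position: 200% 0; }
      100% { background-position: -200% 0; }
    }
    /* Respect motion sensitivity: fall back to a static label */
    @media (prefers-reduced-motion: reduce) {
      .shimmer { animation: none; color: #6b7280; }
    }
  `],
})
export class AiTypingIndicatorComponent {
  /** 'thinking' while the model works, 'responding' once tokens start streaming. */
  @Input() state: 'thinking' | 'responding' = 'thinking';

  get label(): string {
    return this.state === 'thinking' ? 'Thinking…' : 'Responding…';
  }
}
```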


Why This Works

Claude is good at following patterns. The problem is which patterns it follows.

When you ask it to build something from scratch, it averages across everything it knows. You get median design quality — safe, boring, forgettable.

When you give it focused research on how the best apps solve this exact problem, it has a much tighter reference point. It's not guessing anymore. It's applying specific patterns from apps you actually want to emulate.

Think of it like this: asking Claude to "design a button" vs asking it to "design a button like Stripe's dashboard" gives you completely different results. Research mode just automates finding those references.


My Workflow Now

For any UI component that needs to look good:

Step 1: Ask Claude (Opus 4.5, research mode) to research the problem

Research how [component] is implemented in modern apps like 
[2-3 specific well-designed examples]. Focus on design patterns, 
animations, and UX details.

Step 2: Review the research (sometimes it surfaces things I didn't know to ask for)

Step 3: Paste the full research document into a new conversation

Step 4: Ask Claude to build with that context

The extra 5 minutes of research saves hours of tweaking generic output into something that actually looks professional.


When I Use This

I don't do this for every component. Simple stuff like form fields or basic layouts — Claude handles those fine without research.

But for anything that needs to feel polished:

  • Complex interactive components
  • Animations and micro-interactions
  • Components where "feel" matters (chat UIs, dashboards, onboarding flows)
  • Anything users will see repeatedly

That's when research-first pays off.


The Bigger Point

AI-assisted development isn't just about speed. It's about using AI at the right step.

Most people use Claude as a code generator. But it's also a researcher, a design consultant, and a pattern library — if you prompt it that way.

The typing indicator I built is now one of the best-looking components in my library. Not because I'm a designer, but because I let Claude do the design research before writing a single line of code.

Try it on your next UI component. Research first, build second.



Top comments (1)

Bhavin Sheth

This is such a good framing — treating Claude as a design researcher instead of jumping straight to “build me a component” explains why so many AI UIs feel the same.

I’ve noticed the exact issue you describe: generic outputs aren’t wrong, they’re just averages. Giving the model a tighter reference set changes the result completely.

The research → fresh prompt handoff is a great idea too. Do you ever keep a reusable “research prompt” template for different component types, or do you rewrite it each time based on context?