Table of Contents
- Why this matters
- 1. Keep scope brutally clear
- 2. Give your model a stronger quality bar
- 3. Make it critique itself before and after
- 4. Separate build passes from polish passes
- 5. Make it work against a reference standard
- The playbook
- The biggest unlock
- My role in all this
- My strongest recommendation
- Final thought
Why this matters
I'm interested to hear how other vibe-coders are getting the best out of whatever model they're using. For me, working with Claude, I've been reading everything I can find to improve my already pretty good outcomes and build more robust software and applications.
I’ve found that the difference between average AI-assisted coding and genuinely impressive output often has less to do with the model itself and more to do with how you direct it. In other words, the biggest improvement usually comes from becoming a better editor, product owner, and critic of the work.
This article is basically my current thinking on how to get better results. Not just more code, but better code. Better UX. Better structure. Better judgment. Less fluff. Less fake completeness. Less "looks great in a screenshot, falls over in real life."
1. Keep scope brutally clear
The first big improvement for me was realising that the model does best when the task is:
- narrow
- concrete
- sequenced
- judged against a clear standard
What hurts quality is asking for too much at once:
- coding
- product decisions
- architecture
- UX redesign
- future planning
all bundled into one giant prompt.
The best pattern I’ve found is:
- one task
- one success definition
- one summary at the end
The bundled version sounds ambitious, but it usually produces mush. The model starts solving five different problems badly instead of one problem well.
2. Give your model a stronger quality bar
Another thing that helped me a lot was to stop only telling the model what to do and start telling it what good looks like.
Don’t settle for technically correct if what you actually want is product-quality work.
For UI work, I’ve also found it helps to be even more direct: spell out the bar explicitly, with something like "this should feel like a screen a designer signed off on, not a default scaffold."
That little shift makes a big difference. Otherwise the model often gives you something that is fine in a technical sense but generic in every other sense.
3. Make it critique itself before and after
This is one of the best tricks I’ve found.
Before it changes anything, get it to identify weak spots. After it changes things, get it to critique the result honestly.
That helps push it out of “task completed” mode and into “quality review” mode.
That has been incredibly useful for me because the model will often otherwise sound pleased with itself far too early.
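The before-and-after pair I use looks roughly like this (the wording is my own, so adapt it to taste):

```
Before you change anything: list the weakest parts of this code and UI,
ranked by how much they would bother a real user.

[model makes the changes]

Now critique the result honestly, as a sceptical senior reviewer would.
What still feels weak, generic, or unfinished?
```

The second half is the important part: asking for the critique after the change is what stops the premature victory lap.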
4. Separate build passes from polish passes
One thing I’ve had to learn is not to expect the first pass to also be the best pass.
The model can build quickly, but quality usually comes from doing the work in layers.
My preferred sequence is:
- build it
- verify it
- refine UX
- clean code
- document next gaps
That feels much closer to how good teams actually work.
That distinction between build pass and polish pass has improved my results a lot.
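In practice I run these stages as separate prompts rather than one. A rough sketch of the staging (my own wording, not a canonical recipe):

```
Pass 1 (build):    Implement X. Do nothing else.
Pass 2 (verify):   Check what you built against the success definition.
                   Report failures honestly.
Pass 3 (UX):       Improve the flow and copy without adding scope.
Pass 4 (clean):    Simplify. Remove anything the task does not need.
Pass 5 (document): List the gaps that remain for next time.
```

Each pass gets its own summary, which also makes the summaries much easier to review.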
5. Make it work against a reference standard
The model does better when it has something to aim at.
If you already know the kind of experience you want, say so clearly and repeatedly. Don’t assume the model will infer your taste.
For me, this often means defining a few non-negotiables around familiarity, clarity, calmness, cognitive load, and usability.
This helps stop the model wandering off into novelty for novelty’s sake.
The playbook
Here’s the practical version of how I now try to work.
Review every summary properly
When the model says it has completed something, I ask myself:
- Did it solve the real user problem?
- Did it stay in scope?
- Did it introduce unnecessary complexity?
- Does it feel generic?
- Would a busy user understand this quickly?
- Does it still feel like the kind of product I’m trying to make?
If the answer is “technically yes, emotionally meh,” that usually means it needs a refinement pass.
Ask for rationale on UI tasks
Not chain of thought. Just design rationale.
That helps surface whether the model is making deliberate choices or just decorating things.
Keep a quality debt list
This has been useful too.
I like having the model maintain a small file of things that are still weak, awkward, or unfinished, such as:
- awkward labels
- generic empty states
- spacing that still feels off
- interactions that are too hidden
- things that need real user testing
That stops “good enough” from becoming invisible.
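As a sketch, such a file can be as simple as this (the file name and every entry here are just illustrative):

```
# QUALITY_DEBT.md
- "Manage sources" label still reads awkwardly
- Projects list empty state is generic boilerplate
- Card spacing on the dashboard feels off at narrow widths
- Bulk actions are hidden behind an overflow menu
- Onboarding flow needs real user testing
```

The point is not the format. It is that the model keeps appending to it, so weak spots stay visible across sessions.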
Make it do a cleanup pass
Before a task is truly finished, I often want one more sweep for consistency and simplification.
Force human-grade restraint
This one matters more than people think.
The model loves adding layers, helpers, hooks, abstractions, and future-proofing when they are not actually needed.
A single instruction along the lines of "do not add anything this task does not strictly require" can save a surprising amount of nonsense.
For UI/UX specifically
If I want the best UI/UX possible, I try to push the model toward these principles:
- 10-second scanability
- recognition over recall
- fewer competing focal points
- obvious primary actions
- visible active state
- trustworthy summary-to-detail flows
- calm density, not empty Dribbble fluff
- tables and controls that are genuinely usable
Refine the current UI. Keep the core mental model familiar.
Do not add scope.
Do not turn this into a flashy SaaS dashboard.
Explain what changed, why it is better, and what still feels weak.
That prompt has been a good one for me because it pushes the model toward clarity instead of visual showing off.
The biggest unlock
One of the best prompts I’ve used is asking the model what a great human would still dislike about the work.
That often gets you a better result than simply saying “make it better.”
My role in all this
The way I think about my job now is this:
My role is to stop the model becoming:
- too broad
- too clever
- too generic
- too pleased with itself
and keep pushing it toward being:
- clearer
- tighter
- calmer
- more honest
- more product-quality
That has made a massive difference.
My strongest recommendation
If I had to pick one thing that consistently improves output, it would be adding an excellence pass after each meaningful chunk of work.
That has probably been the single best quality multiplier for me.
Final thought
I’d genuinely love to hear how other people are getting the best out of their models.
What are you doing that consistently improves outcomes?
Are you using:
- tighter prompting
- staged passes
- self-critique
- design standards
- test-first workflows
- something else entirely?
Because the more I do this, the more I think the real skill in vibe-coding is not just getting the model to produce code.
It’s getting it to produce work you’d actually be proud to ship.